[Numpy-discussion] cross product of two 3xn arrays
Ian Harrison harrison.ian at gmail.com
Wed Feb 15 17:21:17 CST 2006
I have two groups of 3x1 arrays that are arranged into two larger 3xn
arrays. Each of the 3x1 sub-arrays represents a vector in 3D space. In
Matlab, I'd use the function cross() to calculate the cross product of
the corresponding 'vectors' from each array. In other words:
if ai and bi are 3x1 column vectors:
A = [ a1 a2 a3 ]
B = [ b1 b2 b3 ]
C = A x B = [ (a1 x b1) (a2 x b2) (a3 x b3) ]
Could someone suggest a clean way to do this? I suppose I could write
a for loop to cycle through each pair of vectors and send them to
numpy's cross(), but since I'm new to python/scipy/numpy, I'm guessing
that there's probably a better method that I'm overlooking.
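A minimal sketch of one direct approach, assuming a current NumPy where cross() accepts an axis argument (the sample arrays here are illustrative):

import numpy as np

# Columns of A and B hold the 3D vectors a1..an and b1..bn.
A = np.random.rand(3, 4)
B = np.random.rand(3, 4)

# Cross products of corresponding columns; C is also 3xn.
# axis=0 tells cross() that the 3-vectors lie along the first axis.
C = np.cross(A, B, axis=0)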
Neurophysicists to everyone: “There is an optimal brain frequency”
We may be familiar with the concept of electrical/chemical signals relating to neural communication. Now imagine that every synapse branching out from every neuron is, like an antenna, tuned to a different signal frequency, with a specific optimal point, and that this optimum frequency depends on the location of the synapse on the neuron. The farther away the synapse is from the neuron's cell body, the higher the optimum frequency was found to be. And it seems the more rhythmically synced the frequencies were, the stronger the connection for memory and learning synapses.
The researchers found that not only does each synapse have a preferred frequency for achieving optimal learning, but for the best effect, the frequency needs to be perfectly rhythmic — timed at
exact intervals. Even at the optimal frequency, if the rhythm was thrown off, synaptic learning was substantially diminished.
Their research also showed that once a synapse learns, its optimal frequency changes. In other words, if the optimal frequency for a naïve synapse — one that has not learned anything yet — was, say, 30 spikes per second, after learning, that very same synapse would learn optimally at a lower frequency, say 24 spikes per second. Thus, learning itself changes the optimal frequency for a synapse.
As well as possibly strengthening and enhancing learning and memory, learning-induced re-tuning and de-tuning could have "important implications for treating disorders related to forgetting, such as PTSD".
Life is just a frequency…
The image shows a neuron with a tree trunk-like dendrite. Each triangular shape touching the dendrite represents a synapse, where inputs from other neurons, called spikes, arrive (the squiggly
shapes). Synapses that are further away on the dendritic tree from the cell body require a higher spike frequency (spikes that come closer together in time) and spikes that arrive with perfect
timing to generate maximal learning.
A characterization of the Gamma distribution
, 1995
"... The two-parameter Poisson-Dirichlet distribution, denoted pd(ff; `), is a distribution on the set of decreasing positive sequences with sum 1. The usual Poisson-Dirichlet distribution with a
single parameter `, introduced by Kingman, is pd(0; `). Known properties of pd(0; `), including the Markov ..."
Cited by 221 (37 self)
The two-parameter Poisson-Dirichlet distribution, denoted PD(α, θ), is a distribution on the set of decreasing positive sequences with sum 1. The usual Poisson-Dirichlet distribution with a single parameter θ, introduced by Kingman, is PD(0, θ). Known properties of PD(0, θ), including the Markov chain description due to Vershik-Shmidt-Ignatov, are generalized to the two-parameter case. The size-biased random permutation of PD(α, θ) is a simple residual allocation model proposed by Engen in the context of species diversity, and rediscovered by Perman and the authors in the study of excursions of Brownian motion and Bessel processes. For 0 < α < 1, PD(α, 0) is the asymptotic distribution of ranked lengths of excursions of a Markov chain away from a state whose recurrence time distribution is in the domain of attraction of a stable law of index α. Formulae in this case trace back to work of Darling, Lamperti and Wendel in the 1950's and 60's. The distribution of ranked lengths of e...
- In Proc. of the 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003 , 2003
"... Abstract — In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (independent
Component Analysis) problems, we discuss in more detail uniqueness issues, presenting some new results. A f ..."
Cited by 30 (2 self)
Abstract — In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (independent
Component Analysis) problems, we discuss in more detail uniqueness issues, presenting some new results. A fundamental difficulty in the nonlinear BSS problem and even more so in the nonlinear ICA
problem is that they are nonunique without extra constraints, which are often implemented by using a suitable regularization. Post-nonlinear mixtures are an important special case, where a
nonlinearity is applied to linear mixtures. For such mixtures, the ambiguities are essentially the same as for the linear ICA or BSS problems. In the later part of this paper, various separation
techniques proposed for post-nonlinear mixtures and general nonlinear mixtures are reviewed.
"... The first part of this paper is concerned by the history of source separation. It include our comments and those of a few other researchers on the development of this new research field. The
second part is focused on recent developments of the separation in nonlinear mixtures. ..."
Cited by 15 (4 self)
The first part of this paper is concerned with the history of source separation. It includes our comments and those of a few other researchers on the development of this new research field. The second part is focused on recent developments in separation for nonlinear mixtures.
, 2000
"... We study fundamental properties of the gamma process and their relation to various topics such as Poisson–Dirichlet measures and stable processes. We prove the quasi-invariance of the gamma
process with respect to a large group of linear transformations. We also show that it is a renormalized limit ..."
Cited by 7 (1 self)
We study fundamental properties of the gamma process and their relation to various topics such as Poisson–Dirichlet measures and stable processes. We prove the quasi-invariance of the gamma process
with respect to a large group of linear transformations. We also show that it is a renormalized limit of the stable processes and has an equivalent sigma-finite measure (quasi-Lebesgue) with
important invariance properties. New properties of the gamma process can be applied to the Poisson-Dirichlet measures. We also emphasize the deep similarity between the gamma process and the Brownian
motion. The connection of the above topics makes more transparent some old and new facts about stable and gamma processes, and the Poisson-Dirichlet measures.
- In Proc. IEEE Workshop on Statistical and Signal Processing , 2005
"... We investigate a multiple hypothesis test designed for detecting signals embedded in noisy observations of a sensor array. The global level of the multiple test is controlled by the false
discovery rate (FDR) criterion recently suggested by Benjamini and Hochberg instead of the classical familywise ..."
Cited by 6 (4 self)
We investigate a multiple hypothesis test designed for detecting signals embedded in noisy observations of a sensor array. The global level of the multiple test is controlled by the false discovery rate (FDR) criterion recently suggested by Benjamini and Hochberg instead of the classical familywise error rate (FWE) criterion. In the previous study [3], the suggested procedure showed promising results on simulated data. Here we carefully examine the independence condition required by the Benjamini-Hochberg procedure to ensure the control of FDR. Based on the properties of the beta distribution, we prove the applicability of the Benjamini-Hochberg procedure to the underlying test. Further simulation results show that the false alarm rate is less than 0.02 for a choice of FDR as high as 0.1, implying the reliability of the test has not been affected by the increase in power.
, 708
"... Abstract: An important line of research is the investigation of the laws of random variables known as Dirichlet means as discussed in Cifarelli and Regazzini (7). However there is not much
information on inter-relationships between different Dirichlet means. Here we introduce two distributional oper ..."
Cited by 3 (1 self)
Abstract: An important line of research is the investigation of the laws of random variables known as Dirichlet means, as discussed in Cifarelli and Regazzini (7). However there is not much information on inter-relationships between different Dirichlet means. Here we introduce two distributional operations: multiplying a mean functional by an independent beta random variable, and an operation involving an exponential change of measure. These operations identify relationships between different means and their densities. This allows one to use the often considerable analytic work done for one Dirichlet mean to obtain results for an entire family of otherwise seemingly unrelated Dirichlet means. Additionally, it allows one to obtain explicit densities for the related class of random variables that have generalized gamma convolution distributions, and the finite-dimensional distributions of their associated Lévy processes. This has implications in, for instance, the explicit description of Bayesian nonparametric prior and posterior models, and more generally in a variety of applications in probability and statistics involving Lévy processes.
- Proceedings of the 1999 Command and Control Research and Technology Symposium , 1999
"... The principles underlying this paper can be applied not only to C2I systems, but also to many other complex structures, such as those involving medical, fault or configuration diagnoses and
analyses. In short, a new relatively simple look-up table of formulas is presented which can be used as a prac ..."
Cited by 2 (0 self)
The principles underlying this paper can be applied not only to C2I systems, but also to many other complex structures, such as those involving medical, fault or configuration diagnoses and analyses. In short, a new, relatively simple look-up table of formulas is presented which can be used as a practical aid in C2 decision-making nodes for deducing conditional or unconditional conclusions from (conditional or unconditional) premises in probability form. This results from a recent breakthrough yielding a new Cognitive Probability Logic -- or Logic of Averages. This logic is actually a natural weighting modification of Adams' well-known High Probability Logic. Consequently, a number of long-standing conflicts between ordinary probability logic and "commonsense" reasoning are resolved for the first time, including the well-known transitivity-syllogism problem. These results are based upon completely rigorous universal second-order probability principles, together with use of the newly emerging field of product space conditional event algebra. Surprisingly, both disciplines are actually technically entirely within the purview of classical logic and basic probability theory. Applications to linguistic-based information can also be obtained by use of these techniques, together with one-point random set coverage representations of fuzzy logic and an extension of product space conditional event algebra, dubbed relational event algebra.
"... We consider the problem of simulating X conditional on the value of X +Y, when X and Y are independent positive random variables. We propose approximate methods for sampling (X|X +Y) by
approximating the fraction (X/z|X + Y = z) with a beta random variable. We discuss applications to Lévy processes ..."
Add to MetaCart
We consider the problem of simulating X conditional on the value of X + Y, when X and Y are independent positive random variables. We propose approximate methods for sampling (X | X + Y) by approximating the fraction (X/z | X + Y = z) with a beta random variable. We discuss applications to Lévy processes and infinitely divisible distributions, and we report numerical tests for Poisson processes, tempered stable processes, and the Heston stochastic volatility model.
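As a concrete instance of the beta fraction above (a sketch of ours, not taken from the paper): when X ~ Gamma(a, c) and Y ~ Gamma(b, c) are independent with a common rate, (X/z | X + Y = z) is exactly Beta(a, b); the independence of the ratio X/(X+Y) from the sum X+Y is precisely Lukacs' characterization of the gamma distribution that titles this page.

import numpy as np

rng = np.random.default_rng(0)
a, b, z = 2.0, 3.0, 5.0            # illustrative shapes and conditioning value

# Exact sampler of (X | X + Y = z) in the gamma case:
# X/z given X + Y = z is Beta(a, b), independent of the common rate.
x = z * rng.beta(a, b, size=100_000)
print(x.mean(), z * a / (a + b))   # sample mean vs. exact conditional mean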
Chern studied at Nankai University in Tientsin, China, then undertook graduate studies at Tsing Hua University, Peking. He was the only graduate student in mathematics to enter the university in 1930
but during his four years there he not only studied widely in projective differential geometry but he also began to publish his own papers on the topic.
He received a scholarship in 1934 to study in the United States, but he made a special request that he be allowed to go to the University of Hamburg. His reason was that he had met Blaschke when he
visited Peking in 1932 and found his mathematics attractive. After working under Blaschke for only a little over a year Chern received his D.Sc. from Hamburg in 1936.
At this stage Chern was forced to choose between two attractive options, namely to stay in Hamburg and work on algebra under Artin or to go to Paris and study under Cartan. Although Chern knew Artin
well and would have liked to have worked with him, the desire to continue work on differential geometry was the deciding factor and he went to Paris. His time in Paris was a very productive one and
he learnt to approach mathematics, as Cartan did, see [2]:-
... from evidence and the phenomena which arise from special cases rather than from a general and abstract viewpoint.
In 1937 Chern left Paris to become professor of mathematics at Tsing Hua University. However the Chinese-Japanese war began while he was on the journey and the university moved twice to avoid the war. He worked at what was then named Southwest Associated University from 1938 until 1943, then spent 1943-1945 at Princeton, where he impressed both Weyl and Veblen. He became friendly with Lefschetz, who persuaded him to become an editor of the Annals of Mathematics.
At the end of World War II, Chern returned to China to the Institute of Mathematics at the Academia Sinica in Nanking. However this time a civil war in China began to make life difficult and he was
pleased to accept an invitation in 1948 from Weyl and Veblen to return to Princeton.
From 1949 Chern worked in the USA accepting the chair of geometry at the University of Chicago after first making a short visit to Princeton. He remained at Chicago until 1960 when he went to the
University of California, Berkeley.
He was awarded the National Medal of Science in 1975 and the Wolf Prize in 1983/84. In 1985 he was elected a Fellow of the Royal Society of London and the following year he was made an honorary
member of the London Mathematical Society.
His area of research was differential geometry where he studied the (now named) Chern characteristic classes in fibre spaces. These are important not only in mathematics but also in mathematical
physics. He worked on characteristic classes during his 1943-45 visit to Princeton and, also at this time, he gave a now famous proof of the Gauss-Bonnet formula.
His work is summed up as follows:-
When Chern was working on differential geometry in the 1940s, this area of mathematics was at a low point. Global differential geometry was only beginning, even Morse theory was understood and
used by a very small number of people. Today, differential geometry is a major subject in mathematics and a large share of the credit for this transformation goes to Professor Chern.
In 1979 a Chern Symposium held in his honour offered him this tribute in song:-
Hail to Chern! Mathematics Greatest!
He made Gauss-Bonnet a household word,
Intrinsic proofs he found,
Throughout the World his truths abound,
Chern classes he gave us,
and Secondary Invariants,
Fibre Bundles and Sheaves,
Distributions and Foliated Leaves!
All Hail All Hail to CHERN.
Article by: J J O'Connor and E F Robertson
The Principia: Mathematical Principles of Natural Philosophy
Lowest new price: $13.15
Lowest used price: $11.87
List price: $19.99
Author: Isaac Newton
Brand: Snowball Publishing
This book is a complete volume of Newton's mathematical principles relating to natural philosophy and his system of the world. Newton, one of the most brilliant scientists and thinkers of all time,
presents his theories, formulas and thoughts. Included are chapters relative to the motion of bodies; motion of bodies in resisting mediums; and system of the world in mathematical treatment; a
section on axioms or laws of motion, and definitions.
• ISBN13: 9781607962403
What Is Relativity?: An Intuitive Introduction to Einstein's Ideas, and Why They Matter
Lowest new price: $20.87
Lowest used price: $24.83
List price: $25.95
Author: Jeffrey Bennett
It is commonly assumed that if the Sun suddenly turned into a black hole, it would suck Earth and the rest of the planets into oblivion. Yet, as prominent author and astrophysicist Jeffrey Bennett
points out, black holes don't suck. With that simple idea in mind, Bennett begins an entertaining introduction to Einstein's theories of relativity, describing the amazing phenomena readers would
actually experience if they took a trip to a black hole.
The theory of relativity also reveals the speed of light as the cosmic speed limit, the mind-bending ideas of time dilation and curvature of spacetime, and what may be the most famous equation in
history: E = mc2. Indeed, the theory of relativity shapes much of our modern understanding of the universe. It is not "just a theory" -- every major prediction of relativity has been tested to
exquisite precision, and its practical applications include the Global Positioning System (GPS). Amply illustrated and written in clear, accessible prose, Bennett's book proves anyone can grasp the
basics of Einstein's ideas. His intuitive, nonmathematical approach gives a wide audience its first real taste of how relativity works and why it is so important to science and the way we view
ourselves as human beings.
Magnificent Principia: Exploring Isaac Newton's Masterpiece
Lowest new price: $12.98
Lowest used price: $12.46
List price: $26.00
Author: Colin Pask
Nobel laureate Steven Weinberg has written that "all that has happened since 1687 is a gloss on the Principia." Now you too can appreciate the significance of this stellar work, regarded by many as
the greatest scientific contribution of all time. Despite its dazzling reputation, Isaac Newton's Philosophiae Naturalis Principia Mathematica, or simply the Principia, remains a mystery for many
people. Few of even the most intellectually curious readers, including professional scientists and mathematicians, have actually looked in the Principia or appreciate its contents. Mathematician Pask
seeks to remedy this deficit in this accessible guided tour through Newton's masterpiece.
Using the final edition of the Principia, Pask clearly demonstrates how it sets out Newton's (and now our) approach to science; how the framework of classical mechanics is established; how
terrestrial phenomena like the tides and projectile motion are explained; and how we can understand the dynamics of the solar system and the paths of comets. He also includes scene-setting chapters
about Newton himself and scientific developments in his time, as well as chapters about the reception and influence of the Principia up to the present day.
Gravity: An Introduction to Einstein's General Relativity
Lowest new price: $75.17
Lowest used price: $68.49
List price: $100.60
Author: James B. Hartle
The aim of this groundbreaking new book is to bring general relativity into the undergraduate curriculum and make this fundamental theory accessible to all physics majors. Using a "physics first"
approach to the subject, renowned relativist James B. Hartle provides a fluent and accessible introduction that uses a minimum of new mathematics and is illustrated with a wealth of exciting
applications. The emphasis is on the exciting phenomena of gravitational physics and the growing connection between theory and observation. The Global Positioning System, black holes, X-ray sources,
pulsars, quasars, gravitational waves, the Big Bang, and the large scale structure of the universe are used to illustrate the widespread role of how general relativity describes a wealth of everyday
and exotic phenomena.
Gravity Is a Mystery (Let's-Read-and-Find-Out Science 2)
Lowest new price: $1.74
Lowest used price: $1.39
List price: $5.99
Author: Franklyn M. Branley
Brand: Collins
What goes up must come down.
Everybody knows that. But what is it that pulls everything from rocks to rockets toward the center of the earth? It's gravity. Nobody can say exactly what it is, but gravity is there, pulling on
everything, all the time. With the help of an adventurous scientist and his fun-loving dog, you can read and find out about this mysterious force.
Gravitation (Physics Series)
Lowest new price: $148.87
Lowest used price: $44.99
Author: Charles W. Misner
Brand: W. H. Freeman
This landmark text offers a rigorous full-year graduate level course on gravitation physics, teaching students to:
• Grasp the laws of physics in flat spacetime
• Predict orders of magnitude
• Calculate using the principal tools of modern geometry
• Predict all levels of precision
• Understand Einstein's geometric framework for physics
• Explore applications, including pulsars and neutron stars, cosmology, the Schwarzschild geometry and gravitational collapse, and gravitational waves
• Probe experimental tests of Einstein's theory
• Tackle advanced topics such as superspace and quantum geometrodynamics
The book offers a unique, alternating two-track pathway through the subject:
• In many chapters, material focusing on basic physical ideas is designated as Track 1. These sections together make an appropriate one-term advanced/graduate level course (mathematical prerequisites: vector analysis and simple partial-differential equations). The book is printed to make it easy for readers to identify these sections.
• The remaining Track 2 material provides a wealth of advanced topics instructors can draw from to flesh out a two-term course, with Track 1 sections serving as prerequisites.
Relativity, Gravitation and Cosmology: A Basic Introduction (Oxford Master Series in Physics)
Lowest new price: $39.56
Lowest used price: $40.82
List price: $49.95
Author: Ta-Pei Cheng
Einstein's general theory of relativity is introduced in this advanced undergraduate and beginning graduate level textbook. Topics include special relativity, in the formalism of Minkowski's
four-dimensional space-time, the principle of equivalence, Riemannian geometry and tensor analysis, Einstein field equation, as well as many modern cosmological subjects, from primordial inflation
and cosmic microwave anisotropy to the dark energy that propels an accelerating universe. The author presents the subject with an emphasis on physical examples and simple applications without the
full tensor apparatus. The reader first learns how to describe curved spacetime. At this mathematically more accessible level, the reader can already study the many interesting phenomena such as
gravitational lensing, precession of Mercury's perihelion, black holes, and cosmology. The full tensor formulation is presented later, when the Einstein equation is solved for a few symmetric cases.
Many modern topics in cosmology are discussed in this book: from inflation, cosmic microwave anisotropy to the "dark energy" that propels an accelerating universe. Mathematical accessibility,
together with the various pedagogical devices (e.g., worked-out solutions of chapter-end problems), make it practical for interested readers to use the book to study general relativity and cosmology
on their own.
The Manga Guide to Relativity
Lowest new price: $10.61
Lowest used price: $10.25
List price: $19.95
Author: Hideo Nitta
Everything's gone screwy at Tagai Academy. When the headmaster forces Minagi's entire class to study Einstein's theory of relativity over summer school, Minagi volunteers to go in their place.
There's just one problem: He's never even heard of relativity before! Luckily, Minagi has the plucky Miss Uraga to teach him.
Follow along with The Manga Guide to Relativity as Minagi learns about the non-intuitive laws that shape our universe. Before you know it, you'll master difficult concepts like inertial frames of
reference, unified spacetime, and the equivalence principle. You'll see how relativity affects modern astronomy and discover why GPS systems and other everyday technologies depend on Einstein's
extraordinary discovery.
The Manga Guide to Relativity also teaches you how to:
• Understand and use E = mc^2, the world's most famous equation
• Calculate the effects of time dilation using the Pythagorean theorem
• Understand classic thought experiments like the Twin Paradox, and see why length contracts and mass increases at relativistic speeds
• Grasp the underpinnings of Einstein's special and general theories of relativity
If the idea of bending space and time really warps your brain, let The Manga Guide to Relativity straighten things out.
A Brief History of String Theory: From Dual Models to M-Theory (The Frontiers Collection)
Lowest new price: $42.55
Lowest used price: $45.70
List price: $49.99
Author: Dean Rickles
During its forty-year lifespan, string theory has always had the power to divide, being called both a 'theory of everything' and a 'theory of nothing'. Critics have even questioned whether it
qualifies as a scientific theory at all. This book adopts an objective stance, standing back from the question of the truth or falsity of string theory and instead focusing on how it came to be and
how it came to occupy its present position in physics. An unexpectedly rich history is revealed, with deep connections to our most well-established physical theories. Fully self-contained and written
in a lively fashion, the book will appeal to a wide variety of readers from novice to specialist.
Three Roads To Quantum Gravity (Science Masters)
Lowest new price: $4.28
Lowest used price: $0.01
List price: $16.99
Author: Lee Smolin
The Holy Grail of modern physics is a theory of the universe that unites two seemingly opposing pillars of modern science: Einstein's theory of general relativity, which deals with large-scale phenomena (planets, solar systems and galaxies), and quantum theory, which deals with phenomena on the smallest scales.
It's difficult, writes Lee Smolin in this lucid overview of modern physics, to talk meaningfully about the big questions of space and time, given the limitations of our technology and perceptions.
It's more difficult still given some of the contradictions and inconsistencies that obtain between quantum theory, which "was invented to explain why atoms are stable and do not instantly fall apart"
but has little to say about space and time, and general relatively theory, which has everything to say about the big picture but tends to collapse when describing the behavior of atoms and their even
smaller constituents. Whence the hero of Smolin's tale, the as-yet-incomplete quantum theory of gravity, which seeks to unify relativity and quantum theory--and, in the bargain, to move toward a
"grand theory of everything." Smolin ably explains concepts that underlie quantum gravity, such as background independence, the superposition principle, and the notion of causal structure, and he
traces the development of allied theories that have shaped modern physics and led to this new view of the universe.
Although he allows that "it has not been possible to test any of our new theories of quantum gravity experimentally," Smolin predicts that a solid framework will be established by 2015 at the
outside. If he's correct, the years in between promise to be an exciting time for students of the physical sciences, and Smolin's book makes an engaging introduction to some of the big questions
they'll be asking. --Gregory McNamee
Identification of partially coated anisotropic buried objects using electromagnetic Cauchy data.
(English) Zbl 1134.78008
Summary: We consider the three dimensional electromagnetic inverse scattering problem of determining information about a target buried in a known inhomogeneous medium from a knowledge of the electric
and magnetic fields corresponding to time harmonic electric dipoles as incident fields. The scattering object is assumed to be an anisotropic dielectric that is (possibly) partially coated by a thin
layer of highly conducting material. The data is measured at a given surface containing the object in its interior. Our concern is to determine the shape of this scattering object and some
information on the surface conductivity of the coating without any knowledge of the index of refraction of the inhomogeneity. No a priori assumption is made on the extent of the coating, i.e., the
object can be fully coated, partially coated or not coated at all. Our method, introduced in [F. Cakoni, M. Fares and H. Haddar, Inverse Probl. 22, No. 3, 845–867 (2006; Zbl 1099.35167)] and [D. Colton and H. Haddar, Inverse Probl. 21, No. 1, 383–398 (2005; Zbl 1086.35129)], is based on the linear sampling method and the reciprocity gap functional for reconstructing the shape of the scattering object. The algorithm consists in solving a set of linear integral equations
of the first kind for several sampling points and three linearly independent polarizations. The solution of these integral equations is also used to determine the surface conductivity.
78A46 Inverse scattering problems
78A45 Diffraction, scattering (optics)
Algorithm Tutorials
Geometry Concepts: Basic Concepts
By lbackstrom
TopCoder Member
Vector Addition
Dot Product
Cross Product
Line-Point Distance
Polygon Area
Many TopCoders seem to be mortally afraid of geometry problems. I think it's safe to say that the majority of them would be in favor of a ban on TopCoder geometry problems. However, geometry is a
very important part of most graphics programs, especially computer games, and geometry problems are here to stay. In this article, I'll try to take a bit of the edge off of them, and introduce some
concepts that should make geometry problems a little less frightening.
Vectors are the basis of a lot of methods for solving geometry problems. Formally, a vector is defined by a direction and a magnitude. In the case of two-dimension geometry, a vector can be
represented as pair of numbers, x and y, which gives both a direction and a magnitude. For example, the line segment from (1,3) to (5,1) can be represented by the vector (4,-2). It's important to
understand, however, that the vector defines only the direction and magnitude of the segment in this case, and does not define the starting or ending locations of the vector.
Vector Addition
There are a number of mathematical operations that can be performed on vectors. The simplest of these is addition: you can add two vectors together and the result is a new vector. If you have two
vectors (x[1], y[1]) and (x[2], y[2]), then the sum of the two vectors is simply (x[1]+x[2], y[1]+y[2]). The image below shows the sum of four vectors. Note that it doesn't matter which order you add
them up in - just like regular addition. Throughout these articles, we will use plus and minus signs to denote vector addition and subtraction, where each is simply the piecewise addition or
subtraction of the components of the vector.
Dot Product
The addition of vectors is relatively intuitive; a couple of less obvious vector operations are dot and cross products. The dot product of two vectors is simply the sum of the products of the
corresponding elements. For example, the dot product of (x[1], y[1]) and (x[2], y[2]) is x[1]*x[2] + y[1]*y[2]. Note that this is not a vector, but is simply a single number (called a scalar). The
reason this is useful is that the dot product satisfies A ⋅ B = |A||B|Cos(θ), where θ is the angle between A and B. |A| is called the norm of the vector, and in a 2-D geometry problem is simply the length
of the vector, sqrt(x^2+y^2). Therefore, we can calculate Cos(θ) = (A ⋅ B)/(|A||B|). By using the acos function, we can then find θ. It is useful to recall that Cos(90) = 0 and Cos(0) = 1, as this
tells you that a dot product of 0 indicates two perpendicular lines, and that the dot product is greatest when the lines are parallel. A final note about dot products is that they are not limited to
2-D geometry. We can take dot products of vectors with any number of elements, and the above equality still holds.
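As a small illustration of the angle computation just described, here is a sketch in the same style as the later snippets (the function name is ours, not from the original tutorial):

//Angle in radians between 2-D vectors A and B, from Cos(θ) = (A ⋅ B)/(|A||B|)
double angle(int[] A, int[] B){
    double dot = A[0]*B[0] + A[1]*B[1];
    double normA = Math.sqrt(A[0]*A[0] + A[1]*A[1]);
    double normB = Math.sqrt(B[0]*B[0] + B[1]*B[1]);
    return Math.acos(dot / (normA * normB));
}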
Cross Product
An even more useful operation is the cross product. The cross product of two 2-D vectors is x[1]*y[2] - y[1]*x[2]. Technically, the cross product is actually a vector, and has the magnitude given
above, and is directed in the +z direction. Since we're only working with 2-D geometry for now, we'll ignore this fact, and use it like a scalar. Similar to the dot product, A x B = |A||B|Sin(θ).
However, θ has a slightly different meaning in this case: |θ| is the angle between the two vectors, but θ is negative or positive based on the right-hand rule. In 2-D geometry this means that if A is
less than 180 degrees clockwise from B, the value is positive. Another useful fact related to the cross product is that the absolute value of |A||B|Sin(θ) is equal to the area of the parallelogram
with two of its sides formed by A and B. Furthermore, the triangle formed by A, B and the red line in the diagram has half of the area of the parallelogram, so we can calculate its area from the
cross product also.
Line-Point Distance
Finding the distance from a point to a line is something that comes up often in geometry problems. Lets say that you are given 3 points, A, B, and C, and you want to find the distance from the point
C to the line defined by A and B (recall that a line extends infinitely in either direction). The first step is to find the two vectors from A to B (AB) and from A to C (AC). Now, take the cross
product AB x AC, and divide by |AB|. This gives you the distance (denoted by the red line) as (AB x AC)/|AB|. The reason this works comes from some basic high school level geometry. The area of a
triangle is found as base*height/2. Now, the area of the triangle formed by A, B and C is given by (AB x AC)/2. The base of the triangle is formed by AB, and the height of the triangle is the
distance from the line to C. Therefore, what we have done is to find twice the area of the triangle using the cross product, and then divided by the length of the base. As always with cross products,
the value may be negative, in which case the distance is the absolute value. Things get a little bit trickier if we want to find the distance from a line segment to a point. In this case, the nearest
point might be one of the endpoints of the segment, rather than the closest point on the line. In the diagram above, for example, the closest point to C on the line defined by A and B is not on the
segment AB, so the point closest to C is B. While there are a few different ways to check for this special case, one way is to apply the dot product. First, check to see if the nearest point on the
line AB is beyond B (as in the example above) by taking AB ⋅ BC. If this value is greater than 0, it means that the angle between AB and BC is between -90 and 90, exclusive, and therefore the nearest
point on the segment AB will be B. Similarly, if BA ⋅ AC is greater than 0, the nearest point is A. If both dot products are negative, then the nearest point to C is somewhere along the segment.
//Compute the dot product AB ⋅ BC
int dot(int[] A, int[] B, int[] C){
    int[] AB = new int[2];
    int[] BC = new int[2];
    AB[0] = B[0]-A[0];
    AB[1] = B[1]-A[1];
    BC[0] = C[0]-B[0];
    BC[1] = C[1]-B[1];
    int dot = AB[0] * BC[0] + AB[1] * BC[1];
    return dot;
}
//Compute the cross product AB x AC
int cross(int[] A, int[] B, int[] C){
    int[] AB = new int[2];
    int[] AC = new int[2];
    AB[0] = B[0]-A[0];
    AB[1] = B[1]-A[1];
    AC[0] = C[0]-A[0];
    AC[1] = C[1]-A[1];
    int cross = AB[0] * AC[1] - AB[1] * AC[0];
    return cross;
}
//Compute the distance from A to B
double distance(int[] A, int[] B){
    int d1 = A[0] - B[0];
    int d2 = A[1] - B[1];
    return Math.sqrt(d1*d1 + d2*d2);
}
//Compute the distance from AB to C
//if isSegment is true, AB is a segment, not a line.
double linePointDist(int[] A, int[] B, int[] C, boolean isSegment){
    double dist = cross(A,B,C) / distance(A,B);
    if(isSegment){
        int dot1 = dot(A,B,C);
        if(dot1 > 0)return distance(B,C);
        int dot2 = dot(B,A,C);
        if(dot2 > 0)return distance(A,C);
    }
    return Math.abs(dist);
}
That probably seems like a lot of code, but let's see the same thing with a point class and some operator overloading in C++ or C#. The * operator is the dot product, while ^ is the cross product, and + and - do what you would expect.
//Compute the distance from AB to C
//if isSegment is true, AB is a segment, not a line.
double linePointDist(point A, point B, point C, bool isSegment){
    double dist = ((B-A)^(C-A)) / sqrt((B-A)*(B-A));
    if(isSegment){
        int dot1 = (C-B)*(B-A);
        if(dot1 > 0)return sqrt((B-C)*(B-C));
        int dot2 = (C-A)*(A-B);
        if(dot2 > 0)return sqrt((A-C)*(A-C));
    }
    return fabs(dist);
}
Operator overloading is beyond the scope of this article, but I suggest that you look up how to do it if you are a C# or C++ coder, and write your own 2-D point class with some handy operator
overloading. It will make a lot of geometry problems a lot simpler.
Polygon Area
Another common task is to find the area of a polygon, given the points around its perimeter. Consider the non-convex polygon below, with 5 points. To find its area we are going to start by
triangulating it. That is, we are going to divide it up into a number of triangles. In this polygon, the triangles are ABC, ACD, and ADE. But wait, you protest, not all of those triangles are part of
the polygon! We are going to take advantage of the signed area given by the cross product, which will make everything work out nicely. First, we'll take the cross product of AB x AC to find the area
of ABC. This will give us a negative value, because of the way in which A, B and C are oriented. However, we're still going to add this to our sum, as a negative number. Similarly, we will take the
cross product AC x AD to find the area of triangle ACD, and we will again get a negative number. Finally, we will take the cross product AD x AE and since these three points are oriented in the
opposite direction, we will get a positive number. Adding these three numbers (two negatives and a positive) we will end up with a negative number, so will take the absolute value, and that will be
area of the polygon. The reason this works is that the positive and negative number cancel each other out by exactly the right amount. The area of ABC and ACD ended up contributing positively to the
final area, while the area of ADE contributed negatively. Looking at the original polygon, it is obvious that the area of the polygon is the area of ABCD (which is the same as ABC + ACD) minus the
area of ADE. One final note, if the total area we end up with is negative, it means that the points in the polygon were given to us in clockwise order. Now, just to make this a little more concrete,
let's write a little bit of code to find the area of a polygon, given the coordinates as a 2-D array, p.
double polygonArea(int[][] p){
    int area = 0;
    int N = p.length;
    //We will triangulate the polygon
    //into triangles with points p[0],p[i],p[i+1]
    for(int i = 1; i+1 < N; i++){
        int x1 = p[i][0] - p[0][0];
        int y1 = p[i][1] - p[0][1];
        int x2 = p[i+1][0] - p[0][0];
        int y2 = p[i+1][1] - p[0][1];
        int cross = x1*y2 - x2*y1;
        area += cross;
    }
    return Math.abs(area/2.0);
}
Notice that if the coordinates are all integers, then the final area of the polygon is one half of an integer.
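As a quick sanity check (a hypothetical input, not from the original article): for the counterclockwise unit square p = {{0,0},{1,0},{1,1},{0,1}}, the loop accumulates cross products 1 + 1 = 2, so the function returns |2|/2 = 1.0, as expected.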
Convolution is the most important and fundamental concept in signal processing and analysis. By using convolution, we can construct the output of a system for any arbitrary input signal, if we know the impulse response of the system.
How is it possible that knowing only the impulse response of a system is enough to determine the output for any given input signal? We will find out the meaning of convolution.
Related Topics: Window Filters
Download: conv1d.zip, conv2d.zip
First, let's see the mathematical definition of convolution in the discrete time domain. Later we will walk through what this equation tells us.
(We will discuss the discrete time domain only.)
y[n] = x[n] * h[n] = ∑k x[k]·h[n-k]
where x[n] is the input signal, h[n] is the impulse response, and y[n] is the output. * denotes convolution. Notice that we multiply the terms of x[k] by the terms of a time-shifted h[n] and add them up.
The keystone of understanding convolution is laid behind impulse response and impulse decomposition.
Impulse Function Decomposition
In order to understand the meaning of convolution, we are going to start from the concept of signal decomposition. The input signal is decomposed into simple additive components, and the system response to the input signal results from adding the responses of these components passed through the system.
In general, a signal can be decomposed as a weighted sum of basis signals. For example, in Fourier series, any periodic signal (even a rectangular pulse signal) can be represented by a sum of sine and cosine functions. But here, we use impulse (delta) functions for the basis signals, instead of sine and cosine.
Examine in the following example how a signal is decomposed into a set of impulse (delta) functions. Since the impulse function δ[n] is 1 at n=0 and zero at n ≠ 0, the component at n=0 can be written as x[0]·δ[n] = 2·δ[n]. And x[1] becomes 3·δ[n-1], because δ[n-1] is 1 at n=1 and zero elsewhere. In the same way, we can write the component at n=2 by shifting δ[n] by 2: x[2] = 1·δ[n-2]. Therefore, the signal x[n] can be represented by adding these 3 shifted and scaled impulse functions.
In general, a signal can be written as a sum of scaled and shifted delta functions;
x[n] = ∑k x[k]·δ[n-k]
Impulse Response
Impulse response is the output of a system resulting from an impulse function as input, and it is denoted h[n].
If the system is time-invariant, the response to a time-shifted impulse function is also shifted by the same amount of time.
For example, the impulse response of δ[n-1] is h[n-1]. If we know the impulse response h[n], then we can immediately get the impulse response h[n-1] by shifting h[n] by +1. Consequently, h[n-2]
results from shifting h[n] by +2.
If the system is linear (specifically, the scaling rule), a scaled input signal causes an identical scaling in the output signal.
For instance, the impulse response of 3·δ[n] is just 3·h[n].
If the input has 3 components, for example a·δ[n] + b·δ[n-1] + c·δ[n-2], then the output is simply a·h[n] + b·h[n-1] + c·h[n-2]. This is called the additive property of linear systems; thus, it is valid only for a linear system.
Back to the Definition
By combining the properties of impulse response and impulse decomposition, we can finally construct the equation of convolution. In a linear and time-invariant system, the response resulting from several inputs can be computed as the sum of the responses of each input acting alone.
For example, if the input signal is x[n] = 2·δ[n] + 3·δ[n-1] + 1·δ[n-2], then the output is simply y[n] = 2·h[n] + 3·h[n-1] + 1·h[n-2].
Therefore, we now clearly see that y[n] = ∑k x[k]·h[n-k] whenever the system is linear and time-invariant.
To summarize, a signal is decomposed into a set of impulses and the output signal can be computed by adding the scaled and shifted impulse responses.
Furthermore, there is an important fact under convolution; the only thing we need to know about the system's characteristics is the impulse response of the system, h[n]. If we know a system's impulse
response, then we can easily find out how the system reacts for any input signal.
Convolution in 1D
Let's start with an example of convolution of a 1-dimensional signal, then find out how to implement it as a computer algorithm.
x[n] = { 3, 4, 5 }
h[n] = { 2, 1 }
x[n] has non-zero values only at n=0,1,2, and the impulse response h[n] is non-zero at n=0,1. All other values, which are not listed, are zero.
One thing to note before we move on: try to figure out for yourself how this system behaves by looking only at the impulse response of the system. When an impulse enters the system, the output looks like amplification plus an echo: at time 0 the intensity is doubled (amplified), and then it gradually decreases as time passes.
From the equation of convolution, the output signal y[n] will be
y[n] = x[n] * h[n] = ∑k x[k]·h[n-k]
Let's compute manually each value of y[0], y[1], y[2], y[3], ...
y[0] = x[0]·h[0] = 3·2 = 6
y[1] = x[0]·h[1] + x[1]·h[0] = 3·1 + 4·2 = 11
y[2] = x[1]·h[1] + x[2]·h[0] = 4·1 + 5·2 = 14
y[3] = x[2]·h[1] = 5·1 = 5
Notice that y[0] has only one component, x[0]h[0], and the others are omitted, because they are all zero at k ≠ 0. In other words, x[1]h[-1] = x[2]h[-2] = 0. The above equations also omit terms that are obviously zero.
If n is larger than or equal to 4, y[n] will be zero. So we stop computing y[n] here, and adding these 4 signals (y[0], y[1], y[2], y[3]) produces the output signal y[n] = {6, 11, 14, 5}.
Let's look more closely at each output. In order to see a pattern clearly, the order of addition is swapped: the last term comes first and the first term goes last. And all zero terms are omitted;
y[0] = x[0]·h[0]
y[1] = x[1]·h[0] + x[0]·h[1]
y[2] = x[2]·h[0] + x[1]·h[1]
y[3] = x[3]·h[0] + x[2]·h[1]
Can you see the pattern? For example, y[2] is calculated from 2 input samples, x[2] and x[1], and 2 impulse response samples, h[0] and h[1]. The input sample index starts at 2, which is the same as the output sample point, and decreases; the impulse response index starts at 0 and increases.
Based on the pattern that we found, we can write an equation for any sample of the output;
y[i] = x[i]·h[0] + x[i-1]·h[1] + ... + x[i-(k-1)]·h[k-1] = ∑j x[i-j]·h[j]
where i is any sample point and k is the number of samples in the impulse response.
For instance, if an impulse response has 4 samples, the sample of output signal at 9 is;
y[9] = x[9]·h[0] + x[8]·h[1] + x[7]·h[2] + x[6]·h[3]
Notice that the first sample, y[0], has only one term. Based on the pattern that we found, y[0] would be calculated as:
y[0] = x[0]·h[0] + x[-1]·h[1]. Because x[-1] is not defined, we simply pad it with zero.
C++ Implementation for Convolution 1D
Implementing the convolution algorithm is quite simple. The code snippet follows;
for(int i = 0; i < sampleCount; ++i)
{
    y[i] = 0;                          // set to zero before sum
    for(int j = 0; j < kernelCount; ++j)
        y[i] += x[i - j] * h[j];       // convolve: multiply and accumulate
}
However, there are several things to consider in the implementation.
Watch out for the range of the input signal. You may go out of bounds of the input signal, for example at x[-1], x[-2], and so on. You can pad zeros for those undefined samples, or simply skip the convolution at the boundary. The results at both the beginning and end edges cannot be accurate anyway.
Second, you need to round the output values if the output data type is integer and the impulse response is floating point. Also, the output value may exceed the maximum or minimum value of the data type.
If you have an unsigned 8-bit integer data type for the output signal, the range of the output should be between 0 and 255. You must check that the value is not less than the minimum and not greater than the maximum value.
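For instance, a minimal sketch of such rounding and clamping for an unsigned 8-bit output (the helper name is ours, not from the downloadable code):

#include <math.h>

// Round a floating-point convolution result to the nearest integer and
// clamp it into the valid range of an unsigned 8-bit sample.
unsigned char clampToU8(float value)
{
    int v = (int)floorf(value + 0.5f);  // round to nearest
    if(v < 0)   v = 0;                  // clamp to minimum
    if(v > 255) v = 255;                // clamp to maximum
    return (unsigned char)v;
}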
Download the 1D convolution routine and test program.
Convolution in 2D
2D convolution is just an extension of the previous 1D convolution, convolving in both the horizontal and vertical directions in the 2-dimensional spatial domain. Convolution is frequently used for image processing, such as smoothing, sharpening, and edge detection of images.
The impulse (delta) function also exists in 2D space: δ[m, n] is 1 where both m and n are zero, and zero at m,n ≠ 0. The impulse response in 2D is usually called "kernel" or "filter" in image processing.
The second image is the 2D matrix representation of the impulse function; the shaded center point is the origin, where m=n=0.
Once again, a signal can be decomposed into a sum of scaled and shifted impulse (delta) functions;
x[m, n] = ∑i ∑j x[i, j]·δ[m-i, n-j]
For example, the component at (0, 0) is x[0, 0]·δ[m, n], the component at (1, 2) is x[1, 2]·δ[m-1, n-2], and so on. Note that the matrices are referenced here as [column, row], not [row, column]. M is the horizontal (column) direction and N is the vertical (row) direction.
And the output of a linear and time-invariant system can be written as the convolution of the input signal x[m, n] and the impulse response h[m, n];
y[m, n] = x[m, n] * h[m, n] = ∑i ∑j x[i, j]·h[m-i, n-j]
Notice that the kernel (impulse response) in 2D is center-originated in most cases, which means the center point of the kernel is h[0, 0]. For example, if the kernel size is 5, then the array indices of the 5 elements will be -2, -1, 0, 1, and 2. The origin is located at the middle of the kernel.
Examine an example to clarify how to convolve in 2D space.
Let's say that the size of the impulse response (kernel) is 3x3, and its values are a, b, c, d, ...
Notice that the origin (0,0) is located at the center of the kernel, so its indices run from -1 to 1 in each direction.
Let's pick the simplest sample and compute the convolution; for instance, the output at (1, 1) will be
y[1, 1] = x[0, 0]·h[1, 1] + x[1, 0]·h[0, 1] + x[2, 0]·h[-1, 1] + x[0, 1]·h[1, 0] + x[1, 1]·h[0, 0] + x[2, 1]·h[-1, 0] + x[0, 2]·h[1, -1] + x[1, 2]·h[0, -1] + x[2, 2]·h[-1, -1]
It results in a sum of 9 elements of scaled and shifted impulse responses. The following image shows the graphical representation of 2D convolution.
Notice that the kernel matrix is flipped in both the horizontal and vertical directions before multiplying the overlapped input data, because x[0,0] is multiplied by the last sample of the impulse response, h[1,1], and x[2,2] is multiplied by the first sample, h[-1,-1].
Let's exercise a little more with another 2D convolution example. Suppose we have a 3x3 kernel and a 3x3 input matrix.
The complete solution for this example is given in Example of 2D Convolution.
By the way, the kernel in this example is called a Sobel filter, which is used to detect horizontal edge lines in an image. See more details in the window filters.
Separable Convolution 2D
In 2D convolution with an M×N kernel, M×N multiplications are required for each sample. For example, if the kernel size is 3x3, then 9 multiplications and accumulations are necessary for each sample. Thus, 2D convolution is very expensive in multiply-and-accumulate operations.
However, if the kernel is separable, then the computation can be reduced to M + N multiplications.
A matrix is separable if it can be decomposed into the product of an (M×1) and a (1×N) matrix. For example (an illustrative case, using the Sobel kernel mentioned above):
[ 1  2  1 ]   [ 1 ]
[ 0  0  0 ] = [ 0 ] × [ 1  2  1 ]
[-1 -2 -1 ]   [-1 ]
And convolution with this separable kernel is equivalent to convolving with each factor in turn;
y[m, n] = x[m, n] * h[m, n] = (x[m, n] * h1[m]) * h2[n], where h[m, n] = h1[m]·h2[n]
As a result, in order to reduce the computation, we perform 1D convolution twice instead of one 2D convolution: convolve the input with the M×1 kernel in the vertical direction, then convolve the result of the previous pass with the 1×N kernel in the horizontal direction. The vertical 1D convolution requires M multiplications per sample and the horizontal convolution needs N multiplications, altogether M + N products per sample.
However, separable 2D convolution requires additional storage (a buffer) to keep intermediate computations. That is, if you do the vertical 1D convolution first, you must preserve the results in a temporary buffer in order to use them for the subsequent horizontal convolution.
Notice that convolution is associative; the result is the same even if the order of convolution is changed. So you may convolve in the horizontal direction first, then in the vertical direction later.
Gaussian smoothing filters are well-known separable kernels. For example, the 3x3 Gaussian filter is;
        [ 1  2  1 ]          [ 1 ]
(1/16)· [ 2  4  2 ] = (1/4)· [ 2 ] × (1/4)·[ 1  2  1 ]
        [ 1  2  1 ]          [ 1 ]
C++ Algorithm for Convolution 2D
We need 4 nested loops for 2D convolution instead of 2 loops in 1D convolution.
// find center position of kernel (half of kernel size)
kCenterX = kCols / 2;
kCenterY = kRows / 2;

for(i=0; i < rows; ++i)                  // rows
{
    for(j=0; j < cols; ++j)              // columns
    {
        sum = 0;                         // init to 0 before sum
        for(m=0; m < kRows; ++m)         // kernel rows
        {
            mm = kRows - 1 - m;          // row index of flipped kernel
            for(n=0; n < kCols; ++n)     // kernel columns
            {
                nn = kCols - 1 - n;      // column index of flipped kernel

                // index of input signal, used for checking boundary
                ii = i + (m - kCenterY);
                jj = j + (n - kCenterX);

                // ignore input samples which are out of bound
                if( ii >= 0 && ii < rows && jj >= 0 && jj < cols )
                    sum += in[ii][jj] * kernel[mm][nn];
            }
        }
        out[i][j] = sum;                 // store the accumulated result
    }
}
The above snippet is the simplest and easiest way to understand how convolution works in 2D, but it may be the slowest implementation.
Take a look at a real example: convolution of a 256x256 image with a 5x5 Gaussian filter.
The source image is an uncompressed raw, 8-bit (unsigned char) grayscale image. And again, the Gaussian kernel is separable.
On my system (AMD 64 3200+, 2GHz), normal convolution took about 10.3 ms and separable convolution took only 3.2 ms. You can see how much faster separable convolution is compared to normal convolution.
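For reference, a minimal sketch of the separable two-pass approach (float buffers and names are ours, not from the downloadable source; boundary samples are simply skipped, as in the snippet above):

// Separable 2D convolution: convolve columns with kernelV (M taps),
// then rows with kernelH (N taps), using tmp[] for the intermediate image.
void convolveSeparable(const float* in, float* tmp, float* out,
                       int rows, int cols,
                       const float* kernelV, int M,
                       const float* kernelH, int N)
{
    int cy = M / 2, cx = N / 2;                      // kernel centers

    // vertical pass: M multiplications per sample
    for(int i = 0; i < rows; ++i)
        for(int j = 0; j < cols; ++j)
        {
            float sum = 0;
            for(int m = 0; m < M; ++m)
            {
                int ii = i + (cy - m);               // flipped kernel index
                if(ii >= 0 && ii < rows)
                    sum += in[ii*cols + j] * kernelV[m];
            }
            tmp[i*cols + j] = sum;
        }

    // horizontal pass: N multiplications per sample
    for(int i = 0; i < rows; ++i)
        for(int j = 0; j < cols; ++j)
        {
            float sum = 0;
            for(int n = 0; n < N; ++n)
            {
                int jj = j + (cx - n);               // flipped kernel index
                if(jj >= 0 && jj < cols)
                    sum += tmp[i*cols + jj] * kernelH[n];
            }
            out[i*cols + j] = sum;
        }
}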
Download 2D convolution application and source code here: conv2d.zip
The program uses OpenGL to render images on the screen.
Clifford Johnson Tribute Post
Chad Orzel on May 14, 2007
Here’s a picture of some pretty flowers:
These are from the ornamental cherry tree in our front yard. Like all the other similar trees in the neighborhood, it's absolutely exploded over the past week.
Also, I rode my bike a bunch this weekend:
Saturday, I rode down to Lock 8, stopping for a few minutes to talk to another faculty member who was out with a group of students and community volunteers cleaning up one of the preserved locks of
the original Erie Canal.
Total Distance: 17.41 miles
Average Speed: 14.53 mph
Maximum Speed: 24.62 mph
Sunday, I went in the other direction on the bike path, and a little farther, stopping just short of the Twin Bridges:
Total Distance: 20.41 miles
Average Speed: 14.35 mph
Maximum Speed: 32.01 mph
The maximum speed is a little higher, because there weren't any inferior dogs in my way on the big downhill this week. The average speed is lower, because I did a little bit of poking around on side paths.
Year to date statistics:
Total Distance: 81.1 miles
Maximum Speed: 32.0 mph
And here’s a picture of the cherry tree in context:
1. #1 mollishka May 14, 2007
Very pretty! I want to sneeze just looking at it.
2. #2 did May 14, 2007
If you want to get truly obsessive, check out http://www.routeslip.com/. The elevation profiles are particularly entertaining.
3. #3 Chad Orzel May 14, 2007
Hey, that’s a neat toy.
And, in fact, Sunday’s route is in there, more or less. I turned around at around the six-mile mark of that route.
| {"url":"http://scienceblogs.com/principles/2007/05/14/clifford-johnson-tribute-post-2/","timestamp":"2014-04-25T00:07:40Z","content_type":null,"content_length":"74173","record_id":"<urn:uuid:6d2c9ff9-f745-48f7-a6ae-8557022cfa48>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: New Spectra of Strongly Minimal Theories in Finite Languages
Uri Andrews
We describe strongly minimal theories Tn with finite languages such that in the chain of
countable models of Tn, only the first n models have recursive presentations. Also, we
describe a strongly minimal theory with a finite language such that every non-saturated
model has a recursive presentation.
1. Introduction
Given an ℵ1-categorical, non-ℵ0-categorical theory T in a countable language, the Baldwin-Lachlan theorem [2] says that the countable models of T form an (ω+1)-chain: M0 ≼ M1 ≼ ... ≼ Mω. We define the spectrum of recursive models of T to be SRM(T) = {i | Mi has a recursive presentation}. The spectrum problem asks "Which subsets of ω+1 can occur as spectra of ℵ1-categorical theories?", and of particular interest is which subsets of ω+1 can occur as spectra of strongly minimal theories.
There have been various contributions to the spectrum problem over the years. Many
have been of the form "S is a possible spectrum achieved with a strongly minimal (or simply ℵ1-categorical) theory". In this paper, the goal is to achieve many of the same
spectra while using a theory in a finite language. This goal has its roots in Herwig,
Lempp, Ziegler [3], where it is shown that {0} is a possible spectrum using only a finite
language. In [1], we show that {} is a possible spectrum using only a finite language. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/594/1397329.html","timestamp":"2014-04-19T08:05:53Z","content_type":null,"content_length":"8496","record_id":"<urn:uuid:758b4926-fa79-4850-a621-24f7244ea50e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solve the following system of equations by using the substitution method. x - y = 3 2x + 2y = 2
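A worked solution for reference (added; not from the original thread): from the first equation, x = y + 3. Substituting into the second equation gives 2(y + 3) + 2y = 2, so 4y + 6 = 2, hence y = -1 and x = 2. Check: 2 - (-1) = 3 and 2(2) + 2(-1) = 2.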
| {"url":"http://openstudy.com/updates/4ff1db9ae4b03c0c488a9135","timestamp":"2014-04-21T15:37:43Z","content_type":null,"content_length":"84672","record_id":"<urn:uuid:655e9b05-f5bf-4547-afc6-293f09ce8faf>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving trig equations
March 4th 2010, 11:02 AM #1
Ok well, I need a lot of help with this. I have so many problems to do and I do not even know where to start.
I just want to get help with a few and then I'll see if I can figure out how to do them on my own.
Ok so here is the first type of problem I have.
Solve each trig equation.
First off I'm not entirely sure what I'm trying to find in the following. Even then I don't really know where to start.
1. sin x = -sqrt2/2
2. Cos 2x=-sqrt2/2
3. tan .5x= sqrt3
4. sqrt3 sec x +2 =0
I think if i know how to do those I will be able to figure out the rest of those type of problems.
Here is the other type of problem I have to do.
Solve each trig equation.
1. 4 sin^2 x - 3 = 0
For this one, would I just add 3 then divide by 4, then take the inverse sine of the square root of 3/4? The answer would be 1.047. If that is right then that would be one of the solutions, but I'm not sure how to find the other one.
2. cos x +2= 3cos x
3. 2 sin ^2 x-3=0
A) For the first lot you can take the inverse function.
For $\sin(x) = -\frac{\sqrt2}{2}$
$x = \arcsin \left(-\frac{\sqrt2}{2}\right)$
You should also know your special triangles
B) Divide by 4. Then use the difference of two squares
$\left(\sin (x) - \frac{\sqrt3}{2}\right) \left(\sin (x) + \frac{\sqrt3}{2}\right)=0$
Which will give two solutions
Well, I'm still confused. In my class we have never used arcsin so I don't even know what that is or how to use it.
And for the second part I'm not exactly sure what you did.
arcsin basically means $\sin^{-1}\theta$, which is an inverse function
$\cos^{-1}\left(-\frac{\sqrt{2}}{2}\right) = 2x$
$\frac{3\pi}{4} = 2x$
$\frac{3\pi}{8} = x$
thanks for the help guys.
I did some of the problems and I wanted to see if i got the right answer for them.
$\sin x=-\frac{\sqrt{2}}{2}$
$x= \frac{5\pi}{4}$ or $\frac{7\pi}{4}$
$\cos 2x= -\frac{\sqrt{2}}{2}$
$x= \frac{3\pi}{8}$ or $\frac{5\pi}{8}$
$\sin(x+\frac{\pi}{4}) = -\frac{\sqrt{2}}{2}$
$x= \frac{6\pi}{4}$ or $\frac{4\pi}{4}$
$\tan 0.5x=\sqrt{3}$
$x= \frac{\pi}{1.5}$
$x= \frac{5\pi}{3}$ or $\frac{4\pi}{3}$
$4\tan x-2=2$
$x= \frac{\pi}{4}$
I wasn't able to get these 2:
$\sqrt{3}\sec x+2=0$
Also, how do I show that an equation has an infinite amount of solutions?
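(Added for reference; not part of the original thread.) The remaining equation works the same way: $\sqrt{3}\sec x + 2 = 0 \Rightarrow \sec x = -\frac{2}{\sqrt{3}} \Rightarrow \cos x = -\frac{\sqrt{3}}{2}$, so $x = \frac{5\pi}{6}$ or $x = \frac{7\pi}{6}$ on $[0, 2\pi)$. As for infinitely many solutions: sine and cosine are periodic with period $2\pi$, so whenever $x_0$ solves one of these equations, $x_0 + 2\pi n$ also solves it for every integer $n$; listing the solutions in $[0, 2\pi)$ together with "$+ 2\pi n$" exhibits the infinite family.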
| {"url":"http://mathhelpforum.com/trigonometry/132040-solving-trig-equations.html","timestamp":"2014-04-17T23:34:56Z","content_type":null,"content_length":"50382","record_id":"<urn:uuid:1e031e7b-5b58-4842-a68a-fab9028cdd50>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Homework Help - eNotes.com - All Questions
| {"url":"http://www.enotes.com/homework-help/topic/math?pg=6&filters=All","timestamp":"2014-04-18T10:58:40Z","content_type":null,"content_length":"92066","record_id":"<urn:uuid:ff696151-2147-427e-bceb-319bbf573d7c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
With antenna simulation software we can estimate the possible performance of antennas down to fractions of a dB - but only under the assumption of perfectly flat terrain. While still looking for such a QTH I got interested in the influence of real-world terrain on our antennas. It seems to be a terrain with not too much guru wisdom but nevertheless a lot of dB to gain or lose.
One of the most helpful tools is the HFTA software by N6BV, supplemented by the ARRL Antenna Book, analyzing the effects of antenna height, terrain contour and ground properties for a given band. It helps to visualize the basic effects of slopes, which can roughly be described as: the steeper a slope, the more (virtual) height it adds to an antenna - thus resulting in more low-angle radiation. The manual (.pdf) can be found on the ARRL site or googled, depending on your map software.
The graph indicates the calculated gain in dBi at take-off angles between 1 and 34 degrees, showing strong lobes as well as nulls at some take-off angles. On this graph we have a dipole at 25 ft/8m.
The green line is for flat terrain in a specific direction. The blue line is the southeast direction on one of my favourite hills along a smooth slope and the red line down a rather steep slope: more
than 10 dB advantage for the hilltop location against flat terrain at the important very low angle.
But: the height advantage can't be generalized, because from some height on the lower lobe decreases and additional, higher-angle lobes appear. So here we go with the difference between the steep hillslope (red) and a 90 ft (30 m) high cliff (blue).
As instructive as the HFTA graphs look, there is one real-world thing to consider: its calculations are based on straight two-dimensional lines. But the radiated wave and the surface of a slope are three-dimensional - plane, concave or convex. In his antenna book Les Moxon, G6XN, describes interesting experiences: "...very good results can be expected, particularly if the reflecting area is
bowl-shaped so that it acts like a concave mirror to focus the signal in the desired direction. Some relatively poor results (including total failures) have been attributed to convex ground which
disperses the wave..." (HF Antennas for all Locations, 2nd ed., p.167).
HFTA can also calculate the effect of the distance between the antenna and the edge of the slope. But it has one restriction: it can only calculate antennas with horizontal polarization. For
verticals there is very little information available about the influence of terrain.
Some of the experiences are summarized in this .rtf-file. It seems that verticals don't like steep hills and seem to deteriorate on cliffs. Still open is the question of the slope angle at which verticals lose the important ground under their feet. Moxon (quoted above) argues that from slope angles steeper than 3-5 degrees the horizontal antennas win. ON4UN shows a graph in the fourth edition of his book stating still a 2 dB advantage for a vertical on an 8-degree downslope over a vertical on a plain at a 10-degree elevation angle - and an even bigger advantage of up to 7 dB at a 5-degree take-off angle (Low-Band DXing, pg. 9-5).
He also shows a graph indicating that an 80m dipole at 30m/100ft takes more advantage of an 8-degree slope than a vertical on the same slope (page 5-6).
Besides the software ON4UN uses, there is "Terrain Analyzer" (TA), developed by K6STI, capable of calculating terrain influence on antenna behaviour. But this program is rarely available since K6STI is no longer active in ham software. At least I got help from Peter, DJ2ZS (SK), who tried some calculations with TA. There is one important remark by K6STI, who stated in the manual that the absolute gain figures are not "valid" for long verticals close to the ground. The software uses a single point source as a simulation of the antenna. It should work with antennas that are small compared to their height, i.e. a 20m-quad on a regular tower. But that shouldn't keep us from comparing the sheer terrain influence with one given "single-point" antenna.
Here are figures for a 7 MHz-vertical close to the ground. On flat terrain it shows about 1 dB more than the same vertical calculated with an EZNEC-software. This may be due to the
single-point-restriction. But now to the figures:
Green = smooth slope (five degr.)
Blue = initial steep slope with 13 degrees (8 degr. average)
Red = Cliff (100ft/30m)
More to follow, especially comparisons for one terrain with differing frequencies and terrains with "optimized" straight slopes and defined angles.
Those having access to scientific libraries may look for an article mentioned by K6STI. It should be about aircraft-measured comparisons between verticals and dipoles in hilly terrain:
Breakall, J. K.; Young, J. S.; Hagn, G. H.; Adler, R. W.; Faust, D. L.; Werner, D. H.: Modeling and measurement of HF antenna skywave radiation patterns in irregular terrain. IEEE Transactions on Antennas and Propagation (ISSN 0018-926X), vol. 42, no. 7, p. 936-945, July 1994.
If you want to read more about the basics of reflections in the foreground of an antenna and the influence of antenna-height and terrain I recommend an online-article by Palle, OZ1RH.
Any comments, corrections, experiences and additions are welcome and will be added here from time to time. | {"url":"http://www.dl8mbs.de/40993/44860.html","timestamp":"2014-04-20T08:14:54Z","content_type":null,"content_length":"21080","record_id":"<urn:uuid:2b981b91-298c-42aa-bd3a-d42a316582f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contributions of Tidal Poisson Terms in the Theory of the Nutation of a Nonrigid Earth
Dehant, V. and Folgueira, Marta and Rambaux, N. and Lambert, S.B. (2009) Contributions of Tidal Poisson Terms in the Theory of the Nutation of a Nonrigid Earth. In Observing our changing earth.
Springer, Berlín, pp. 455-462. ISBN 978-3-540-85425-8
Official URL: http://www.springerlink.com/content/q527541907427303/
The tidal potential generated by bodies in the solar system contains Poisson terms, i.e., periodic terms with linearly time-dependent amplitudes. The influence of these terms in the Earth's rotation,
although expected to be small, could be of interest in the present context of high accuracy modelling. We have studied their contribution in the rotation of a non rigid Earth with elastic mantle and
liquid core. Starting from the Liouville equations, we computed analytically the contribution in the wobble and showed that the presently-used transfer function must be supplemented by additional
terms to be used in convolution with the amplitude of the Poisson terms of the potential and inversely proportional to (σ − σ_n)², where σ is the forcing frequency and σ_n are the eigenfrequencies associated with the retrograde free core nutation and the Chandler wobble. These results have been detailed in a paper that we published in Astron. Astrophys. in 2007. In the present
paper, we further examine the contribution from the core on the wobble and the nutation. In particular, we examine the contribution on extreme cases such as for wobble frequencies near the Free Core
Nutation Frequency (FCN) or for long period nutations. In addition to the analytical computation, we used a time-domain approach through a numerical model to examine the core and mantle motions and
discuss the analytical results.
Item Type: Book Section
Additional Information: General Assembly of the International Association of Geodesy / 24th General Assembly of the International Union of Geodesy and Geophysics
Uncontrolled Keywords: Precession - Nutation - Poisson terms
Subjects: Sciences > Physics > Astronomy
ID Code: 15265
References: Bois, E., 2000. Connaissance de la libration lunaire à l'ère de la télémétrie laser-Lune, C. R. Acad. Sci. Paris, t. 1, Série IV, 809–823.
Bois, E., and D. Vokrouhlický, 1995. Relativistic spin effects in the Earth-Moon system, A&A 300, 559.
Bois, E., and N. Rambaux, 2007. On the oscillations in Mercury's obliquity, Icarus, in press.
Dehant, V., J. Hinderer, H. Legros, and M. Lefftz, 1993. Analytical approach to the computation of the Earth, the outer core and the inner core rotational motions, Phys. Earth Planet. Inter. 76, 259–282.
Dehant, V., M. Feissel-Vernier, O. de Viron, C. Ma, M. Yseboodt, and C. Bizouard, 2003. Remaining error sources in the nutation at the submilliarcsecond level, J. Geophys. Res. 108(B5), 2275, DOI: 10.1029/2002JB001763.
Ferrándiz, J.M., J.F. Navarro, A. Escapa, and J. Getino, 2004. Precession of the nonrigid Earth: effect of the fluid outer core, Astron. J. 128, 1407–1411.
Folgueira, M., V. Dehant, S.B. Lambert, and N. Rambaux, 2007. Impact of tidal Poisson terms to non-rigid Earth rotation, A&A 469(3), 1197–1202, DOI: 10.1051/0004-6361:20066822.
Greff-Lefftz, M., H. Legros, and V. Dehant, 2002. Effect of inner core viscosity on gravity changes and spatial nutations induced by luni-solar tides, Phys. Earth Planet. Inter. 129(1–2), 31–41.
Hinderer, J., H. Legros, and M. Amalvict, 1987. Tidal motions within the earth's fluid core: resonance process and possible variations, Phys. Earth Planet. Inter. 49(3–4).
Mathews, P. M., T. A. Herring, and B. A. Buffett, 2002. Modeling of nutation and precession: new nutation series for nonrigid Earth and insights into the Earth's interior, J. Geophys. Res. 107(B4), DOI: 10.1029/2001JB000390.
McCarthy, D.D., and G. Petit (Eds.), 2004. IERS Conventions 2003. IERS Technical Note 32, Frankfurt am Main: Verlag des Bundesamts für Kartographie und Geodäsie.
Moritz, H., and I.I. Mueller, 1987. Earth Rotation: Theory and Observation. The Ungar Publishing Company, New York.
Poincaré, H., 1910. Sur la précession des corps déformables. Bulletin Astronomique, Série I, 27, 321–356.
Rambaux, N., T. Van Hoolst, V. Dehant, and E. Bois, 2007. Inertial core-mantle coupling and libration of Mercury, A&A 468(2), 711–719.
Roosbeek, F., 1998. Analytical developments of rigid Mars nutation and tide generating potential series, Celest. Mech. Dynamical Astron. 75, 287–300.
Roosbeek, F., and V. Dehant, 1998. RDAN97: An analytical development of rigid Earth nutation series using the torque approach, Celest. Mech. Dynamical Astron. 70, 215–253.
Sasao, T., S. Okubo, and M. Saito, 1980. A simple theory on dynamical effects of stratified fluid core upon nutational motion of the Earth, Proc. IAU Symposium 78, 'Nutation and the Earth's Rotation', Dordrecht, Holland, Boston, D. Reidel Pub. Co., 165–183.
Wahr, J.M., 1981. The forced nutations of an elliptical, rotating, elastic and oceanless earth, Geophys. J. R. Astron. Soc. 64, 705–727.
Deposited On: 18 May 2012 09:10
Last Modified: 06 Feb 2014 10:20
| {"url":"http://eprints.ucm.es/15265/","timestamp":"2014-04-19T02:43:48Z","content_type":null,"content_length":"39769","record_id":"<urn:uuid:1c150df0-0c39-45ba-a2c4-c4398f0093f6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
Density of numbers having large prime divisors (formalizing heuristic probability argument)
I want to prove that the set of natural numbers n having a prime divisor greater than $\sqrt{n}$ has positive density.
I have a heuristic argument that this density should be $\log 2$, which is approximately 0.7, but I am not sure how this could be converted to a formal argument.
For any x, the probability that x is prime is approximately $1/\log x$ (by the prime number theorem). Further, the probability that n is a multiple of x is approximately $1/x$. These are
"independent" so the probability that n is a multiple of x and x is prime is approximately $1/x\log x$.
We know that n can have at most one prime divisor greater than $\sqrt{n}$, so the probability that n has a prime divisor greater than $\sqrt{n}$ can be approximated by the integral:
$$\int_{\sqrt{n}}^n \frac{dx}{x \log x} = [\log (\log x)]_{\sqrt{n}}^n = \log 2$$
Can this be made precise in terms of densities? How would the error terms be handled? Has this or a similar result already been proved?
ADDED LATER: The proofs below resolve this question, and they also seem to show that the density of numbers n with a prime divisor greater than $n^\alpha$ is $-\log \alpha$ for $1 > \alpha \ge 1/2$.
My question: is the result also valid for $0 < \alpha < 1/2$? For such $\alpha$, we could have more than one prime divisor, so the simple counting above doesn't work; we need to use a sieve that
subtracts the multiple contributions occurring from numbers that have more than one such prime divisor. My guess would be that asymptotically, this wouldn't matter, but I'm not sure how to formally
show this.
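(Added note, not from the original thread: for a quick numerical sanity check, the short program below sieves the largest prime factor of every n up to N and counts how often it exceeds sqrt(n); the fraction should come out near log 2 ≈ 0.693.)

#include <cstdio>
#include <vector>

int main() {
    const int N = 1000000;
    std::vector<int> lpf(N + 1, 0);   // lpf[n] = largest prime factor of n
    for (int p = 2; p <= N; ++p)
        if (lpf[p] == 0)              // no prime factor recorded yet, so p is prime
            for (int m = p; m <= N; m += p)
                lpf[m] = p;           // larger primes overwrite smaller ones later
    long long count = 0;
    for (int n = 2; n <= N; ++n)
        if ((long long)lpf[n] * lpf[n] > n)   // largest prime factor exceeds sqrt(n)
            ++count;
    std::printf("density estimate: %.4f (log 2 = 0.6931)\n", (double)count / N);
    return 0;
}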
nt.number-theory pr.probability prime-numbers
6 Answers
This is actually fairly straightforward, and reduces to the fact that
$\sum_{p \le x} \frac{1}{p} = \log \log x + C + o(1)$
for some constant $C$. To see how to apply this to the original problem, let $p$ denote the largest prime divisor of $n$, and write $n = pz$. Then $p \ge \sqrt{n}$ if and only if $z \le p$. Thus the number of such $n \le x$ is
$\sum_{p \le x} \sum_{z \le \min(p,x/p)} 1$, which breaks up into a main term
$\sum_{p \ge \sqrt{x}} \lfloor \frac xp \rfloor$ plus a smaller term $\sum_{p \le \sqrt{x}} p \le \sqrt{x} \pi(\sqrt x) = o(x)$, which is therefore negligible compared with the first
term. The main term is taken care of by the equation I cited at the beginning, using the fact that $\pi(n) = o(n)$ and finally that
$\log \log x - \log \log \sqrt x = \log 2$.
I don't quite understand. What would be the precise statement in density terms? What density are we using? I'd appreciate a little more detail, thanks. – Vipul Naik Feb 8 '10 at 17:31
1 You're correct in that you're implicitly using the prime number theorem which says that the density of primes is $\le x$ is $1/\log x$. However, you don't need anything that strong,
as the formula for the sum of the reciprocals of the primes is much easier to prove than the PNT. – Victor Miller Feb 8 '10 at 17:34
1 The density of integer $n \le x$ whose largest prime factor is $> \sqrt{n}$ is $\sum_{p > \sqrt{x}} \lfloor \frac xp \rfloor/x$, so that up to lower order terms you can "cancel the
$x$". – Victor Miller Feb 8 '10 at 17:36
Victor Miller is right. The weakest bounds you can get away with is his statement about sum 1/p and the bound pi(n) = o(n). You need the latter in order to switch \lfloor n/p \rfloor
with n/p + O(1). – David Speyer Feb 8 '10 at 17:37
1 @DS: Thanks. I was concerned that I might be viewed as too cavalier in slightly changing the problem. However the differences between allowing the larges prime factor to be bigger
than $\sqrt{x}$ and $\sqrt{n}$ is actually negligible (i.e. only affects the remainder). – Victor Miller Feb 8 '10 at 17:44
If I recall correctly, this was an exercise in Mathematics for the Analysis of Algorithms. I don't have access to a library right now, so I can't check that.
In any case, here is a proof. Fix a positive integer $N$. We will be counting the number of $n$ in $\{ 1,2, \ldots, N \}$ such that the largest prime divisor of $n$ is $\geq \sqrt{n}$.
We can break this count into two parts: (1) those $n$ which are divisible by $p$ where $p \leq \sqrt{N}$ and $n \leq p^2$, and (2) those $n$ which are divisible by $p > \sqrt{N}$.
Case (1) is easier. We are looking at $\sum_{p \leq \sqrt{N}} p = \int_{t=0}^{\sqrt{N}} t \, d\pi(t)$. (This is a Riemann-Stieltjes integral.) Integrating by parts, this is $\int_{0}^{\sqrt{N}} \pi(u) \, du + O(\sqrt{N} \, \pi(\sqrt{N}))$. Since $\pi(u) = O(u/\log u)$ as $u \to \infty$, this integral is $O\left( \int^{\sqrt{N}} \pi(u) \, du \right) = O\left(\sqrt{N} \frac{\sqrt{N}}{\log N} \right) = O(N/\log N)$, and the second term is also $O(N/\log N)$. So case 1 contributes density zero.
Case (2) is the same idea — integrate by parts and use the prime number theorem — but the details are messier because we need a better bound.
We are trying to compute $$\sum_{\sqrt{N} \leq p \leq N} \lfloor \frac{N}{p} \rfloor = \int_{\sqrt{N}}^N \lfloor \frac{N}{t} \rfloor \, d\pi(t) = \int_{\sqrt{N}}^N \left( \frac{N}{t} + O(1) \right) d\pi(t).$$
The error term is $O(\pi(N)) = O(N/\log N)$ so, again, it doesn't affect the density. Integrating the main term by parts, we have $$\int_{\sqrt{N}}^N \left( \frac{\partial}{\partial t} \frac{N}{t} \right) \pi(t) \, dt + O(N/\log N),$$ where the error term is $\left( \frac{N}{t} \pi(t) \right)\Big|^N_{\sqrt{N}}$.
Now, $\pi(t) = Li(t) + O(t/(\log t)^K)$ for any $K$, by the prime number theorem, where $Li(t) = \int^t du/\log u$. So the main term is $$\int_{\sqrt{N}}^N \left( \frac{\partial}{\partial t} \frac{N}{t} \right) Li(t) \, dt + O\left( \int^N \frac{N}{t^2} \frac{t}{(\log t)^K} \, dt \right).$$ The error term is $O\left( N/(\log N)^{K-1} \right)$.
In the main term, integrate by parts, "reversing" our previous integration by parts. We get $$\int_{\sqrt{N}}^N \frac{N}{t} \frac{dt}{\log t} + O(N/\log N).$$
Focusing on the main term once more, we have $$N \int_{\sqrt{N}}^N \frac{dt}{t \log t} = N \log 2.$$
Putting all of our error terms together, the number of integers with large prime factors is $$N \log 2 + O(N/\log N).$$
In summary, integration by parts, the Prime Number Theorem, and aggressive pruning of error terms.
As I recall, the follow-up exercise in Mathematics of the Analysis of Algorithms is to obtain a formula of the form $$N \log 2 + c N/\log N + O(N/(\log N)^2).$$ That's a hard exercise! If
you want to learn a lot about asymptotic technique, I recommend it.
1 Now that I see Victor Miller's answer, I see that I could have taken a bit of a shorter route, by quoting the asymptotic for sum 1/p rather than rederiving it and using weaker bounds in
various places. But I'm going to leave my answer as is, to show how to brute force your way through this sort of thing. – David Speyer Feb 8 '10 at 17:42
Thanks! This is very useful. – Vipul Naik Feb 8 '10 at 17:48
Is there a typo in your second para where you say " ... such that the largest prime divisor of $n$ is $\leq n$"? – Vipul Naik Feb 8 '10 at 17:52
Yup, that's a typo. Fixed now, thanks. – David Speyer Feb 8 '10 at 18:37
1 This is Problem 2 in "Final Exam I" in Appendix D of my (third, 1990) edition. – Michael Lugo Feb 8 '10 at 20:25
As you suspected, when the largest prime factor of $n$ is smaller than $\sqrt{n}$ then things get more complicated. However, there has been a lot done about this. One of the more interesting treatments is a paper by Donnelly and Grimmett.
This also leads to the study of "smooth" numbers: an integer is y-smooth, if all of its prime factors are $\le y$. A good survey of that is in http://www.dms.umontreal.ca/~andrew/PDF/
The expected number of prime divisors greater than $n^{\alpha}$ is, in fact, $-\log \alpha$. For $\alpha \ge 1/2$ this reduces to the probability of having one large divisor, since $n$
can't have two divisors greater than $n^{1/2}$. For $1/3 < \alpha < 1/2$ the situation is more complicated, since one has to consider the probability that there are two "large" (larger
than $n^\alpha$) divisors; for $1/4 < \alpha < 1/3$ there could even be three large divisors, and so on.
There's an analogy between the cycle structure of permutations and the prime factorizations of integers; in particular your claim that the expected number of prime divisors of $n$ greater than $n^{\alpha}$ is $-\log \alpha$ is equivalent to the claim that the expected number of cycles of a permutation on $n$ elements which are longer than $\alpha n$ is $-\log \alpha$. See Andrew Granville's preprint The anatomy of integers and permutations.
Keeping this in mind, I've written up some part of the analogous sieving argument for permutations in a preprint, The number of cycles of specified normalized length in permutations
(arXiv:0909.2909). I'm not sure if it's been written down for prime factorizations -- the literature is a bit of a blur in my mind -- but there's a good chance it has been.
Your "added later" question can be answered by reading these two Wikipedia articles. The integers you are interested in are commonly called "smooth numbers." Actually you're counting
up vote 1 down the non-smooth integers but that is a trivial change.
add comment
I found (via Wikipedia) this paper by V. Ramaswami that addresses the question. It seems that the function isn't quite log; rather it is the Dickman-de Bruijn function, but there's still a positive density result.
Not the answer you're looking for? Browse other questions tagged nt.number-theory pr.probability prime-numbers or ask your own question. | {"url":"http://mathoverflow.net/questions/14664/density-of-numbers-having-large-prime-divisors-formalizing-heuristic-probabilit?sort=oldest","timestamp":"2014-04-19T02:36:46Z","content_type":null,"content_length":"88101","record_id":"<urn:uuid:2b95de43-9a35-4fe0-888f-7ab0ff8daf03>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Isotopic Study of Northeast Everglades National Park and Adjacent Urban Areas
Chapter II: Technical Results
II.4 Box Models
Two box models were developed for this study and are referred to herein as the "simple" and "complex" box models.
II.4.a Simple Box Model
A simple box model was developed in order to determine the percent contribution of Everglades water to the West Wellfield. This model is based upon the assumption that two isotopically different
waters are being drawn to and mixed at the pumping well site (Figure 23). These waters include Everglades type water (water west of L-31N) and urban type water (water east of Levee 31N). An isotopic
balance is therefore represented by the following two equations:
x + y = 100% (equation 4)
(δ^18O[x])x + (δ^18O[y])y = (δ^18O[p])100 (equation 5)
where x is the percentage of Everglades water at the pumping well, y is the percentage of urban water at the pumping well, δ^18O[x] is the δ^18O value of Everglades water (taken as the δ^18O value at G618), δ^18O[y] is the δ^18O value of urban water (taken as the δ^18O value at G3555), and δ^18O[p] is the δ^18O value of water at pumping well 29/30. Substituting y = 100 - x from equation 4 into equation 5 yields the closed-form solution x = 100(δ^18O[p] - δ^18O[y]) / (δ^18O[x] - δ^18O[y]).
This model was evaluated using different sets of input data. These sets included the overall average of all samples, the 1998 yearly data average, and the 1996/1997 combined data average. Also used as input for model runs were the averages of
"Summer" months (considered to be May through October), "Winter" months (November through April), "Dry" months (those having less than four inches of rainfall during the thirty days prior to
sampling), and "Wet" months (those having more than four inches of rainfall during the thirty days prior to sampling). Rainfall measurements were those collected at S338. Results of the model for
these data sets are provided in Table 3.
                      % Everglades Water    % Urban Water
Overall Average             68.9                 31.1
1998 Average                65.7                 34.3
1996-1997 Average           72.0                 28.8
"Summer" Months             59.6                 40.4
"Winter" Months             86.4                 13.6
"Dry" Months                73.8                 26.2
"Wet" Months                66.4                 33.6
Table 3: Simple Box Model Results
This model, using data for the entire study period, shows that 69% of the water being pumped from the well is indicative of Everglades water while only 31% is indicative of urban water. This supports
the hypothesis that Everglades type water is reaching the pumping well and may be the major contributing source. Furthermore, for all conditions, over 50% of the water at the pumping well is
Everglades type. The simple model results also show that during "dry" conditions, when a smaller quantity of recharge is available, a greater demand is placed upon the contribution from Everglades
groundwater. This causes the percent composition of Everglades water in the pumping well to increase. This observation also holds true when comparing summer and winter months. Summer months in
general correlate with the wet season in South Florida during which rainfall recharges groundwater more consistently than during winter months. Consequently, an increase in the quantity of Everglades
water reaching the pumping well is observed during the drier winter conditions. The difference between the "1998 Average" model results and those of the "1996-1997 Average" is also likely the result
of rainfall differences. On average, there was less rainfall during 1996 and 1997 (50.5 inches) than in 1998 (52.5 inches) in the study area. Accordingly, the percentage of Everglades water returned
by the model is higher during the drier 1996-1997 years.
While this simple box model is useful for assessing general trends, certain conceptual problems are inherent as a result of the simplicity of this type of model. These include the lack of
compensation for the direct isotopic influence of rainfall and inflow from water conservation areas at gate S333 on the system as well as the influence of any mixing across geologic layers in the
rock mining lakes and evaporation of water at the lake surface. There is no simple way to correct these problems within the framework of the simple model. While introducing only rainfall to the model
would result in a higher Everglades influence (as additional heavy Everglades water would be needed to balance the light rain input in the isotope balance), introducing only isotopically heavy lake
water as an inflow would cause an increase in the observed urban influence. In order to address some of these problems, a more complex box model was developed. Results of the complex model are
provided in the next section.
II.4.b Complex Box Model
For the complex box model, a two-mile by four-mile rectangular area within the focus area (down to the Biscayne aquifer) was selected and broken into five boxes which represent the Everglades area,
canal, lakes, deep groundwater, and urban areas (Figures 24 and 25). A water balance and an isotopic balance were then established for each box in order to compute water flows between each of the
boxes. Specifications for these boxes are provided in Table 4. Those variables which were measured versus those computed through the complex model described in this section are summarized in Table 5.
Figure 25: Control Volumes Used for Complex Model
Values of δ[G618], δ[G3660], δ[G3575], δ[G3551], δ[G3662], δ[Well 29/30], and δ[G3555] utilized for the complex model were the average values measured at the corresponding well locations (given by
the subscripts). Isotopic values for the rainfall were also measured directly at sampling stations located next to well G618 (δ[Rain G618]) and at the West Wellfield (δ[RainWW]). Values of δ[E1], δ
[E2], and δ[E3] for evaporated water were calculated using the method developed by Gonfiantini (1986). The computation was a function of the δ values for rainfall and surface water corresponding to a
particular site. Details concerning this computation are provided by Wilcox, 2000, and Herrera, 2000. The value of δ[L] utilized is the average of the δ^18O values for RL1 and RL3. Herrera, 2000,
showed that values of δ[L] for RL1 and RL3 were similar to one another and the values did not vary considerably with depth within each lake. Please refer to Herrera, 2000, or Solo-Gabriele and
Herrera, 2000, for more details concerning δ[L] values for the lakes. The rainfall depths, R1, R2, R3, and R5, were obtained from station S336. Values of ET1, ET2, and ET3 were obtained from the
Tamiami Trail weather station located roughly 15 miles west of the study site. P was obtained from chart records from each well. Charts were provided by Miami Dade Water and Sewer Department. The
value used for the model was 4.53 x 10^8 cubic feet per year (9.3 mgd) which was found to be representative of the pumping well data evaluated. A1, A2, A3, and A5, correspond to the surface area of
the Everglades, canal, lakes, and urban control volumes. The Everglades control volume corresponds to a surface area of 2 miles by 2 miles (A1). The canal is 2 miles by 0.02 miles in area (A2). The
urban side (A5) is assumed to represent an area of 2 miles by 1.76 miles. The value of A3, which corresponds to the lakes, was determined by summing the surface area of the two rock-mining lakes
included within this study (1.24 x 10^7 sq ft, Herrera 2000). Conceptually, the model accounts for the lakes as a thin strip which is 0.22 miles long and two miles wide. While the lakes actual shapes
are in fact very different, for the purposes of the model flow balances, only the surface area is important.
Box 1 (Everglades)
  Inputs:  Everglades water, including inflow from S333 (E) [δ value: G618];
           Rainfall (R1) over A1, a 2.00 mile by 2.00 mile area [δ value: Rain G618]
  Outputs: Evapo-transpiration (ET1) over A1, a 2.00 mile by 2.00 mile area [δ value: E1, from Rain G618, S3575, S3577 & S3578];
           Shallow Groundwater (X) [δ value: G3575];
           Deep Groundwater (Y) [δ value: G3660]

Box 2 (Canal)
  Inputs:  Shallow Groundwater (X) [δ value: G3575];
           Rainfall (R2) over A2, a 2.00 mile by 0.02 mile area [δ value: Rain WW]
  Outputs: Evapo-transpiration (ET2) over A2, a 2.00 mile by 0.02 mile area [δ value: E2, from Rain WW, 2M3, 3M4 & 4M5];
           Shallow Groundwater (Z) [δ value: G3551]

Box 3 (Lakes)
  Inputs:  Shallow Groundwater (Z) [δ value: G3551];
           Rainfall (R3) over A3, a 2.00 mile by 0.22 mile area [δ value: Rain WW];
           Seepage (S) [δ value: δ[L] from RL1 & RL3]
  Outputs: Evapo-transpiration (ET3) over A3, a 2.00 mile by 0.22 mile area [δ value: E3, from Rain WW, RL1 & RL3];
           Shallow Groundwater (L) [δ value: δ[L] from RL1 & RL3]

Box 4 (Deep Groundwater)
  Inputs:  Deep Groundwater (Y) [δ value: G3660]
  Outputs: Deep Groundwater (D) [δ value: G3662];
           Seepage to the lakes (S) [δ value: δ[L] from RL1 & RL3]

Box 5 (Urban)
  Inputs:  Shallow Groundwater (L) [δ value: δ[L] from RL1 & RL3];
           Urban Water (U) [δ value: G3555];
           Deep Groundwater (D) [δ value: G3662];
           Rainfall (R5) over A5, a 2.00 mile by 1.76 mile area [δ value: Rain WW]
  Outputs: Pumping Well (P) [δ value: Well 29/30]

Table 4: Complex Box Model Parameters
Box   Measured Variables                                                Calculated Variables
1     R1, ET1, A1, δ[G618], δ[Rain G618], δ[E1], δ[G3575], δ[G3660]     E, X, Y
2     R2, ET2, A2, δ[G3575], δ[Rain WW], δ[E2], δ[G3551]                X, Z
3     R3, ET3, A3, δ[G3551], δ[Rain WW], δ[E3], δ[L]                    Z, L, D
4     δ[G3660], δ[L], δ[G3662]                                          Y, S, D
5     R5, A5, P, δ[L], δ[G3662], δ[Rain WW], δ[Well 29/30], δ[G3555]    L, D, U
Table 5: List of Measured and Calculated Parameters in Complex Box Model
The model incorporated a seepage term from deep groundwater into the lake control volume. This seepage term, while drawn as an input through the bottom of the lake in the figure, in fact incorporates
both movement through the bottom of the lakes (vertical flow) and any inflow through the side (primarily horizontal flow) of the lake between the bottom of the canal and the base of the lakes
(between 30 and 40 feet). The model does not distinguish between horizontal and vertical flow across the boundary between box 3 and box 4. Canal seepage, on the other hand, is considered to be only
through the sides of the canal. This arrangement is considered to physically describe the system given that hydraulic gradients are very flat in the area of the canal resulting in horizontal flow
lines. Furthermore, this conceptualization is consistent with the existing MODBRANCH model of the study site (Nemeth et al. 2000), which uses a relationship that simulates canal seepage through the sides of the canal rather than the bottom.
The unknown flow values were calculated in the model by simultaneously solving a series of mass balance equations. The equations assume steady state conditions and include both volumetric and
isotopic balances. Equations were developed for six control volumes (Figure 25). Details of these computations are provided in Wilcox 2000. An example of the equations utilized is provided for box 1:
Volumetric Water Balance:
E + R1*A1 - ET1*A1 - X - Y = 0 (equation 6)
Isotopic Balance:
E*δ[G618] + R1*A1*δ[Rain G618] - ET1*A1*δ[E1] - X*δ[G3575] - Y*δ[G3660] = 0 (equation 7)
For these equations, all variables are defined in Table 4. All flows are measured in cubic feet per year (cfy), all areas are in square feet (sq. ft) and rainfall/evapo-transpiration values are
measured in feet per year (ft/yr).
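As a purely illustrative sketch (not part of the original report), the way a volumetric/isotopic pair of balances like equations 6 and 7 pins down two unknown flows can be seen in a few lines of code. All numbers below are hypothetical placeholders, not measured values from the study:

// Given the net water gain W and net isotopic load B on a control volume,
// solve  X + Y = W  and  X*dX + Y*dY = B  for the two outflows X and Y.
#include <cstdio>

int main() {
    double W  = 5.0e8;   // hypothetical net inflow, e.g. E + R1*A1 - ET1*A1 (cfy)
    double B  = -1.5e9;  // hypothetical isotopic load on the same volume (cfy * per mil)
    double dX = -2.5;    // d18O of the shallow-groundwater outflow (e.g., G3575)
    double dY = -3.5;    // d18O of the deep-groundwater outflow (e.g., G3660)

    // Substitute X = W - Y into the isotope balance and solve for Y.
    double Y = (B - W * dX) / (dY - dX);
    double X = W - Y;
    std::printf("X = %.3e cfy, Y = %.3e cfy\n", X, Y);
    return 0;
}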
Figure 26: Complex Model Results Using Isotopic Data from 1998
Results of the complex box model for the 1998 and the overall average data sets (Figures 26 and 27) indicate that water leaving the Everglades and seeping under the Levee 31N preferentially moves
through the deep groundwater layer. This is observed from the flow ratio of over ten to one in the deep groundwater as compared to shallow groundwater. Deep groundwater travels east until moving into
the vicinity of the rock mining lakes. As the lakes cut through the deeper semi-confining layer, the model indicates that nearly sixty percent of the deep groundwater flow travels into the lake.
Water from both the lake and deep groundwater migrate eastward into control volume number five, the urban box. Here the model flow terms indicate that the pumping wells draw water from surrounding
urban shallow groundwater, the lakes, and deep groundwater. Furthermore, it is important to note that the results of the complex box model are consistent with those from the numerical model
(MODBRANCH) developed by Nemeth et al. 2000 and later modified by Herrera 2000 to incorporate lakes. A detailed comparison between the results of the complex model and those of the numerical model
are provided by Wilcox 2000. Wilcox, 2000, reports that the results are within the same order of magnitude and within only a 30 to 35% difference between the MODBRANCH and complex models.
Figure 27: Complex Model Results Using Entire Isotopic Data Set
The complex model is in many ways an improvement over the simple model. It incorporates rainfall and evapo-transpiration data. In addition, it accounts for the presence of both deep groundwater flow
and the rock mining lakes. Another positive aspect of the complex box model is that it utilizes data from several of the isotope monitoring stations rather than only two as in the simple box model.
Despite all of the positive aspects of the complex box model, it has its limitations. The complex box model does not fully account for north/south water migration or surficial Everglades flow. In
addition, some of the sites used in the complex box model were not monitored until the start of 1998 or later. As a result, at sites such as G3660 too few data points were available to accurately
perform additional model runs such as those done in the simple box model (section II.4.a) that assess the impact of seasonal variations on the system. It is also important to note that the areal size
of the complex model was chosen so as to incorporate the rock mining lakes, the West Wellfield and Everglades isotope monitoring stations. As such, redefining the boundaries of the model could result
in different model output. | {"url":"http://sflwww.er.usgs.gov/publications/reports/isotopic_ever/boxmodels.html","timestamp":"2014-04-20T13:35:12Z","content_type":null,"content_length":"53058","record_id":"<urn:uuid:121ef1b5-9a2c-4e64-923d-7cd4411b3def>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adaptive Harvest Management
2006 Hunting Season
U.S. Fish & Wildlife Service
The process of setting waterfowl hunting regulations is conducted annually in the United States (Blohm 1989).
This process involves a number of meetings where the status of waterfowl is reviewed by the agencies responsible
for setting hunting regulations. In addition, the U.S. Fish and Wildlife Service (USFWS) publishes proposed
regulations in the Federal Register to allow public comment. This document is part of a series of reports intended
to support development of harvest regulations for the 2006 hunting season. Specifically, this report is intended to
provide waterfowl managers and the public with information about the use of adaptive harvest management
(AHM) for setting duck-hunting regulations in the United States. This report provides the most current data,
analyses, and decision-making protocols. However, adaptive management is a dynamic process and some
information presented in this report will differ from that in previous reports.
A working group comprised of representatives from the USFWS, the Canadian Wildlife Service (CWS), and the
four Flyway Councils (Appendix A) was established in 1992 to review the scientific basis for managing
waterfowl harvests. The working group, supported by technical experts from the waterfowl management and
research community, subsequently proposed a framework for adaptive harvest management, which was first
implemented in 1995. The USFWS expresses its gratitude to the AHM Working Group and to the many other
individuals, organizations, and agencies that have contributed to the development and implementation of AHM.
This report was prepared by the USFWS Division of Migratory Bird Management. F. A. Johnson and G. S.
Boomer were the principal authors. Individuals that provided essential information or otherwise assisted with
report preparation were D. Case (D.J. Case & Assoc.), M. Conroy (U.S. Geological Survey [USGS]), E. Cooch
(Cornell University), P. Garrettson (USFWS), W. Harvey (Maryland Dept. of Natural Resources), R. Raftovich
(USFWS), E. Reed (Canadian Wildlife Service), K. Richkus (USFWS), J. Royle (USGS), M. Runge (USGS), J.
Serie, (USFWS), S. Sheaffer (Cornell University), and K. Wilkins (USFWS). Comments regarding this document
should be sent to the Chief, Division of Migratory Bird Management - USFWS, 4401 North Fairfax Drive, MS
MSP-4107, Arlington, VA 22203.
Citation: U.S. Fish and Wildlife Service. 2006. Adaptive Harvest Management: 2006 Hunting Season. U.S. Dept. Interior,
Washington, D.C. 45pp.
Contents
Executive Summary
Background
Mallard Stocks and Flyway Management
Mallard Population Dynamics
Harvest-Management Objectives
Regulatory Alternatives
Optimal Regulatory Strategies
Application of AHM Concepts to Species of Concern
Literature Cited
Appendix A: AHM Working Group
Appendix B: Modeling Mallard Harvest Rates
This report and others regarding Adaptive Harvest Management are available online at
Executive Summary
In 1995 the U.S. Fish and Wildlife Service (USFWS) implemented the Adaptive Harvest Management (AHM)
program for setting duck hunting regulations in the United States. The AHM approach provides a framework for
making objective decisions in the face of incomplete knowledge concerning waterfowl population dynamics and
regulatory impacts.
The original AHM protocol was based solely on the dynamics of mid-continent mallards, but efforts are being
made to account for mallards breeding eastward and westward of the mid-continent region. The challenge for
managers is to vary hunting regulations among Flyways in a manner that recognizes each Flyway’s unique
breeding-ground derivation of mallards. For the 2006 hunting season, the USFWS will continue to consider a
regulatory choice for the Atlantic Flyway that depends exclusively on the status of eastern mallards. The
prescribed regulatory choice for the Mississippi, Central, and Pacific Flyways continues to depend exclusively on
the status of mid-continent mallards. Investigations of the dynamics of western mallards (and their potential
effect on regulations in the West) are continuing and the USFWS is not yet prepared to recommend an AHM
protocol for this mallard stock.
The mallard population models that are the basis for prescribing hunting regulations were revised extensively in
2002. These revised models account for an apparent positive bias in estimates of survival and reproductive rates,
and also allow for alternative hypotheses concerning the effects of harvest and the environment in regulating
population size. Model-specific weights reflect the relative confidence in alternative hypotheses, and are updated
annually using comparisons of predicted and observed population sizes. For mid-continent mallards, current
model weights favor the weakly density-dependent reproductive hypothesis (91%). Evidence for the additive-mortality
hypothesis remains equivocal (58%). For eastern mallards, current model weights favor the strongly
density-dependent reproductive hypothesis (65%). By consensus, hunting mortality is assumed to be additive in
eastern mallards.
For the 2006 hunting season, the USFWS is continuing to consider the same regulatory alternatives as last year.
The nature of the restrictive, moderate, and liberal alternatives has remained essentially unchanged since 1997,
except that extended framework dates have been offered in the moderate and liberal alternatives since 2002.
Also, at the request of the Flyway Councils in 2003 the USFWS agreed to exclude closed duck-hunting seasons
from the AHM protocol when the breeding-population size of mid-continent mallards is >5.5 million (traditional
survey area plus the Great Lakes region).
Harvest rates associated with each of the regulatory alternatives are predicted using Bayesian statistical methods.
Essentially, the idea is to use historical information to develop initial harvest-rate predictions, to make regulatory
decisions based on those predictions, and then to observe realized harvest rates. Those observed harvest rates, in
turn, are used to update the predictions. Using this approach, predictions of harvest rates of mallards under the
regulatory alternatives have been updated based on band-reporting rate studies conducted since 1998. Estimated
harvest rates from the 2002-2005 liberal hunting seasons have averaged 0.12 (SD = 0.01) and 0.14 (SD = 0.01)
for adult male mid-continent and eastern mallards, respectively. The estimated marginal effect of framework-date
extensions has been an increase in harvest rate of 0.012 (SD = 0.008) and 0.006 (SD = 0.010) for mid-continent
and eastern mallards, respectively.
Optimal regulatory strategies for the 2006 hunting season were calculated using: (1) harvest-management
objectives specific to each mallard stock; (2) the 2006 regulatory alternatives; and (3) current population models
and associated weights for mid-continent and eastern mallards. Based on this year’s survey results of 7.86 million
mid-continent mallards (traditional survey area plus MN, WI, and MI), 4.45 million ponds in Prairie Canada, and
899 thousand eastern mallards, the optimal regulatory choice for all four Flyways is the liberal alternative.
Background
The annual process of setting duck-hunting regulations in the United States is based on a system of resource
monitoring, data analyses, and rule-making (Blohm 1989). Each year, monitoring activities such as aerial surveys
and hunter questionnaires provide information on population size, habitat conditions, and harvest levels. Data
collected from this monitoring program are analyzed each year, and proposals for duck-hunting regulations are
developed by the Flyway Councils, States, and USFWS. After extensive public review, the USFWS announces
regulatory guidelines within which States can set their hunting seasons.
In 1995, the USFWS adopted the concept of adaptive resource management (Walters 1986) for regulating duck
harvests in the United States. This approach explicitly recognizes that the consequences of hunting regulations
cannot be predicted with certainty, and provides a framework for making objective decisions in the face of that
uncertainty (Williams and Johnson 1995). Inherent in the adaptive approach is an awareness that management
performance can be maximized only if regulatory effects can be predicted reliably. Thus, adaptive management
relies on an iterative cycle of monitoring, assessment, and decision-making to clarify the relationships among
hunting regulations, harvests, and waterfowl abundance.
In regulating waterfowl harvests, managers face four fundamental sources of uncertainty (Nichols et al. 1995,
Johnson et al. 1996, Williams et al. 1996):
(1) environmental variation - the temporal and spatial variation in weather conditions and other key features
of waterfowl habitat; an example is the annual change in the number of ponds in the Prairie Pothole
Region, where water conditions influence duck reproductive success;
(2) partial controllability - the ability of managers to control harvest only within limits; the harvest resulting
from a particular set of hunting regulations cannot be predicted with certainty because of variation in
weather conditions, timing of migration, hunter effort, and other factors;
(3) partial observability - the ability to estimate key population attributes (e.g., population size, reproductive
rate, harvest) only within the precision afforded by extant monitoring programs; and
(4) structural uncertainty - an incomplete understanding of biological processes; a familiar example is the
long-standing debate about whether harvest is additive to other sources of mortality or whether
populations compensate for hunting losses through reduced natural mortality. Structural uncertainty
increases contentiousness in the decision-making process and decreases the extent to which managers can
meet long-term conservation goals.
AHM was developed as a systematic process for dealing objectively with these uncertainties. The key
components of AHM include (Johnson et al. 1993, Williams and Johnson 1995):
(1) a limited number of regulatory alternatives, which describe Flyway-specific season lengths, bag limits,
and framework dates;
(2) a set of population models describing various hypotheses about the effects of harvest and environmental
factors on waterfowl abundance;
(3) a measure of reliability (probability or "weight") for each population model; and
(4) a mathematical description of the objective(s) of harvest management (i.e., an "objective function"), by
which alternative regulatory strategies can be compared.
These components are used in a stochastic optimization procedure to derive a regulatory strategy. A regulatory
strategy specifies the optimal regulatory choice, with respect to the stated management objectives, for each
possible combination of breeding population size, environmental conditions, and model weights (Johnson et al.
1997). The setting of annual hunting regulations then involves an iterative process:
(1) each year, an optimal regulatory choice is identified based on resource and environmental conditions, and
on current model weights;
(2) after the regulatory decision is made, model-specific predictions for subsequent breeding population size
are determined;
(3) when monitoring data become available, model weights are increased to the extent that observations of
population size agree with predictions, and decreased to the extent that they disagree; and
(4) the new model weights are used to start another iteration of the process.
By iteratively updating model weights and optimizing regulatory choices, the process should eventually identify
which model is the best overall predictor of changes in population abundance. The process is optimal in the sense
that it provides the regulatory choice each year necessary to maximize management performance. It is adaptive in
the sense that the harvest strategy “evolves” to account for new knowledge generated by a comparison of
predicted and observed population sizes.
Since its inception AHM has focused on the population dynamics and harvest potential of mallards, especially
those breeding in mid-continent North America. Mallards constitute a large portion of the total U.S. duck harvest,
and traditionally have been a reliable indicator of the status of many other species. As management capabilities
have grown, there has been increasing interest in the ecology and management of breeding mallards that occur
outside the mid-continent region. Geographic differences in the reproduction, mortality, and migrations of
mallard stocks suggest that there may be corresponding differences in optimal levels of sport harvest. The ability
to regulate harvests of mallards originating from various breeding areas is complicated, however, by the fact that a
large degree of mixing occurs during the hunting season. The challenge for managers, then, is to vary hunting
regulations among Flyways in a manner that recognizes each Flyway’s unique breeding-ground derivation of
mallards. Of course, no Flyway receives mallards exclusively from one breeding area, and so Flyway-specific
harvest strategies ideally must account for multiple breeding stocks that are exposed to a common harvest.
The optimization procedures used in AHM can account for breeding populations of mallards beyond the mid-continent
region, and for the manner in which these ducks distribute themselves among the Flyways during the
hunting season. An optimal approach would allow for Flyway-specific regulatory strategies, which in a sense
represent for each Flyway an average of the optimal harvest strategies for each contributing breeding stock,
weighted by the relative size of each stock in the fall flight. This joint optimization of multiple mallard stocks
requires: (1) models of population dynamics for all recognized stocks of mallards; (2) an objective function that
accounts for harvest-management goals for all mallard stocks in the aggregate; and (3) decision rules allowing
Flyway-specific regulatory choices.
Joint optimization of multiple stocks presents many challenges in terms of population modeling, parameter
estimation, and computation of regulatory strategies. These challenges cannot always be overcome due to
limitations in monitoring and assessment programs and in access to sufficient computing resources. In some
cases, it may be possible to impose constraints or assumptions that simplify the problem. Although sub-optimal
by design, these constrained regulatory strategies may perform nearly as well as those that are optimal,
particularly in cases where breeding stocks differ little in their ability to support harvest, where Flyways do not
receive significant numbers of birds from more than one breeding stock, or where management outcomes are
highly uncertain.
Currently, two stocks of mallards are officially recognized for the purposes of AHM (Fig. 1). We continue to use
a constrained approach to the optimization of these stocks’ harvest, whereby the Atlantic Flyway regulatory
strategy is based exclusively on the status of eastern mallards, and the regulatory strategy for the remaining
Flyways is based exclusively on the status of mid-continent mallards. This approach has been determined to
perform nearly as well as a joint-optimization approach because mixing of the two stocks during the hunting
season is limited.
Fig 1. Survey areas currently assigned to the mid-continent and eastern stocks of mallards for the purposes
of AHM. Delineation of the western-mallard stock is pending further review and development of population
models and monitoring programs.
Mid-continent Mallards
Population size.--For the purposes of AHM, mid-continent mallards currently are defined as those breeding in
federal survey strata 1-18, 20-50, and 75-77 (i.e., the “traditional” survey area), and in Minnesota, Wisconsin, and
Michigan. Estimates of the abundance of this mid-continent population are available only since 1992 (Table 1,
Fig. 2).
Population models.--In 2002 we extensively revised the set of alternative models describing the population
dynamics of mid-continent mallards (Runge et al. 2002, USFWS 2002). Collectively, the models express
uncertainty (or disagreement) about whether harvest is an additive or compensatory form of mortality (Burnham
et al. 1984), and whether the reproductive process is weakly or strongly density-dependent (i.e., the degree to
which reproductive rates decline with increasing population size).
Table 1. Estimates (N) and standard errors (SE) of mallards (in millions) in spring in the traditional survey area
(strata 1-18, 20-50, and 75-77) and the states of Minnesota, Wisconsin, and Michigan.
Traditional surveys State surveys Total
Year N SE N SE N SE
1992 5.9761 0.2410 0.9946 0.1597 6.9706 0.2891
1993 5.7083 0.2089 0.9347 0.1457 6.6430 0.2547
1994 6.9801 0.2828 1.1505 0.1163 8.1306 0.3058
1995 8.2694 0.2875 1.1214 0.1965 9.3908 0.3482
1996 7.9413 0.2629 1.0251 0.1443 8.9664 0.2999
1997 9.9397 0.3085 1.0777 0.1445 11.0174 0.3407
1998 9.6404 0.3016 1.1224 0.1792 10.7628 0.3508
1999 10.8057 0.3445 1.0591 0.2122 11.8648 0.4046
2000 9.4702 0.2902 1.2350 0.1761 10.7052 0.3395
2001 7.9040 0.2269 0.8622 0.1086 8.7662 0.2516
2002 7.5037 0.2465 1.0820 0.1152 8.5857 0.2721
2003 7.9497 0.2673 0.8360 0.0734 8.7857 0.2772
2004 7.4253 0.2820 0.9333 0.0748 8.3586 0.2917
2005 6.7553 0.2808 0.7862 0.0650 7.5415 0.2883
2006 7.2765 0.2237 0.5881 0.4645 7.8646 0.2284
Fig. 2. Population estimates of mid-continent mallards in the traditional survey area (TSA)
and the Great Lakes region. Error bars represent one standard error.
All population models for mid-continent mallards share a common “balance equation” to predict changes in
breeding-population size as a function of annual survival and reproductive rates:
N_{t+1} = N_t \left[ m\,S_{t,AM} + (1 - m)\left( S_{t,AF} + R_t \left( S_{t,JF} + S_{t,JM}\,\phi_F^{sum}/\phi_M^{sum} \right) \right) \right]

where:
N = breeding population size,
m = proportion of males in the breeding population,
S_{AM}, S_{AF}, S_{JF}, and S_{JM} = survival rates of adult males, adult females, young females, and young males, respectively,
R = reproductive rate, defined as the fall age ratio of females,
\phi_F^{sum}/\phi_M^{sum} = the ratio of female (F) to male (M) summer survival, and
t = year.
We assumed that m and \phi_F^{sum}/\phi_M^{sum} are fixed and known. We also assumed, based in part on information
provided by Blohm et al. (1987), that the ratio of female to male summer survival was equivalent to the ratio of
annual survival rates in the absence of harvest. Based on this assumption, we estimated
\phi_F^{sum}/\phi_M^{sum} = 0.897. To estimate m we
expressed the balance equation in matrix form:
\begin{bmatrix} N_{AM} \\ N_{AF} \end{bmatrix}_{t+1} = \begin{bmatrix} S_{t,AM} & R_t\,S_{t,JM}\,\phi_F^{sum}/\phi_M^{sum} \\ 0 & S_{t,AF} + R_t\,S_{t,JF} \end{bmatrix} \begin{bmatrix} N_{AM} \\ N_{AF} \end{bmatrix}_t
and substituted the constant ratio of summer survival and means of estimated survival and reproductive rates. The
right eigenvector of the transition matrix is the stable sex structure that the breeding population eventually would
attain with these constant demographic rates. This eigenvector yielded an estimate of m = 0.5246.
Using estimates of annual survival and reproductive rates, the balance equation for mid-continent mallards over-predicted
observed population sizes by 10.8% on average. The source of the bias is unknown, so we modified the
balance equation to eliminate the bias by adjusting both survival and reproductive rates:
N_{t+1} = \gamma_S N_t \left[ m\,S_{t,AM} + (1 - m)\left( S_{t,AF} + \gamma_R R_t \left( S_{t,JF} + S_{t,JM}\,\phi_F^{sum}/\phi_M^{sum} \right) \right) \right]
where γ denotes the bias-correction factors for survival (S) and reproduction (R). We used a least squares
approach to estimate γS = 0.9479 and γR = 0.8620.
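For concreteness, the following sketch (ours, not the operational AHM code) evaluates one annual transition of the
bias-corrected balance equation using the constants reported above; the survival and reproductive rates passed in
are placeholder values that would come from the sub-models described below.

    # Minimal sketch of the bias-corrected balance equation for mid-continent
    # mallards; constants are from the text, input rates are placeholders.
    M = 0.5246        # proportion of males in the breeding population
    PHI = 0.897       # ratio of female to male summer survival
    GAMMA_S = 0.9479  # bias-correction factor for survival
    GAMMA_R = 0.8620  # bias-correction factor for reproduction

    def predict_bpop(n_t, s_am, s_af, s_jf, s_jm, r_t):
        """Predict next year's breeding-population size (millions)."""
        young = r_t * (s_jf + s_jm * PHI)  # surviving young per adult female
        return GAMMA_S * n_t * (M * s_am + (1.0 - M) * (s_af + GAMMA_R * young))

    # Illustrative call with made-up survival and reproductive rates:
    print(predict_bpop(7.86, s_am=0.70, s_af=0.60, s_jf=0.65, s_jm=0.72, r_t=0.9))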
Survival process.–We considered two alternative hypotheses for the relationship between annual survival and
harvest rates. For both models, we assumed that survival in the absence of harvest was the same for adults and
young of the same sex. In the model where harvest mortality is additive to natural mortality:
S_{t,sex,age}^{A} = s_{0,sex} \left( 1 - K_{t,sex,age} \right)
and in the model where changes in natural mortality compensate for harvest losses (up to some threshold):
S_{t,sex,age}^{C} = \begin{cases} s_{0,sex} & \text{if } K_{t,sex,age} \le 1 - s_{0,sex} \\ 1 - K_{t,sex,age} & \text{if } K_{t,sex,age} > 1 - s_{0,sex} \end{cases}
where s0 = survival in the absence of harvest under the additive (A) or compensatory (C) model, and K = harvest
rate adjusted for crippling loss (20%, Anderson and Burnham 1976). We averaged estimates of s0 across banding
reference areas by weighting by breeding-population size. For the additive model, s0 = 0.7896 and 0.6886 for
males and females, respectively. For the compensatory model, s0 = 0.6467 and 0.5965 for males and females,
respectively. These estimates may seem counterintuitive because survival in the absence of harvest should be the
same for both models. However, estimating a common (but still sex-specific) s0 for both models leads to
alternative models that do not fit available band-recovery data equally well. More importantly, it suggests that the
greatest uncertainty about survival rates is when harvest rate is within the realm of experience. By allowing s0 to
differ between additive and compensatory models, we acknowledge that the greatest uncertainty about survival
rate is its value in the absence of harvest (i.e., where we have no experience).
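In code, the two hypotheses differ only at the compensation threshold. This sketch (ours) uses the sex-specific s0
estimates given above and assumes K is supplied already adjusted for crippling loss.

    # Sketch of the additive (A) and compensatory (C) survival sub-models,
    # using the sex-specific s0 estimates reported in the text.
    S0_A = {"male": 0.7896, "female": 0.6886}
    S0_C = {"male": 0.6467, "female": 0.5965}

    def survival_additive(k, sex):
        # Harvest mortality adds directly to natural mortality.
        return S0_A[sex] * (1.0 - k)

    def survival_compensatory(k, sex):
        # Natural mortality compensates for harvest up to the threshold 1 - s0.
        s0 = S0_C[sex]
        return s0 if k <= 1.0 - s0 else 1.0 - k

    # Example at an illustrative kill rate of 0.15:
    print(survival_additive(0.15, "male"), survival_compensatory(0.15, "male"))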
Reproductive process.–Annual reproductive rates were estimated from age ratios in the harvest of females,
corrected using a constant estimate of differential vulnerability. Predictor variables were the number of ponds in
May in Prairie Canada (P, in millions) and the size of the breeding population (N, in millions). We estimated the
best-fitting linear model, and then calculated the 80% confidence ellipsoid for all model parameters. We chose
the two points on this ellipsoid with the largest and smallest values for the effect of breeding-population size, and
generated a weakly density-dependent model:
R_t = 0.7166 + 0.1083\,P_t - 0.0373\,N_t

and a strongly density-dependent model:

R_t = 1.1390 + 0.1376\,P_t - 0.1131\,N_t
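Transcribed directly into code (a sketch of ours), the two hypotheses differ mainly in how sharply R declines with N:

    # Weakly and strongly density-dependent reproductive sub-models
    # (P = May ponds in Prairie Canada, N = breeding population; millions).
    def r_weak(p, n):
        return 0.7166 + 0.1083 * p - 0.0373 * n

    def r_strong(p, n):
        return 1.1390 + 0.1376 * p - 0.1131 * n

    # Illustrative inputs only:
    print(r_weak(3.5, 8.0), r_strong(3.5, 8.0))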
Pond dynamics.–We modeled annual variation in Canadian pond numbers as a first-order autoregressive process.
The estimated model was:
P_{t+1} = 2.2127 + 0.3420\,P_t + \varepsilon_t

where ponds are in millions and \varepsilon_t is normally distributed with mean = 0 and variance = 1.2567.
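A short simulation (ours) shows how the fitted AR(1) process generates pond trajectories; the starting value is
illustrative.

    import numpy as np

    # Sketch: simulate May pond counts in Prairie Canada from the AR(1) model.
    rng = np.random.default_rng(1)

    def simulate_ponds(p0, years):
        ponds = [p0]
        for _ in range(years):
            eps = rng.normal(0.0, np.sqrt(1.2567))
            ponds.append(2.2127 + 0.3420 * ponds[-1] + eps)
        return ponds

    print(simulate_ponds(4.45, 10))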
Variance of prediction errors.–Using the balance equation and sub-models described above, predictions of
breeding-population size in year t+1 depend only on specification of population size, pond numbers, and harvest
rate in year t. For the period in which comparisons were possible, we compared these predictions with observed
population sizes.
We estimated the prediction-error variance by setting:
e_t = \ln\left(N_t^{obs}\right) - \ln\left(N_t^{pre}\right),

then assuming

e_t \sim N(0, \sigma^2),

and estimating

\hat{\sigma}^2 = \frac{1}{n} \sum_t \left[ \ln\left(N_t^{obs}\right) - \ln\left(N_t^{pre}\right) \right]^2
where obs and pre are observed and predicted population sizes (in millions), respectively, and n = the number of
years being compared. We were concerned about a variance estimate that was too small, either by chance or
because the number of years in which comparisons were possible was small. Therefore, we calculated the upper
80% confidence limit for σ2 based on a Chi-squared distribution for each combination of the alternative survival
and reproductive sub-models, and then averaged them. The final estimate of σ2 was 0.0243, equivalent to a
coefficient of variation of about 17%.
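The estimator and the chi-squared inflation can be written compactly. This sketch (ours) assumes, as in the text,
that the log-scale errors have mean zero, so that n σ̂²/σ² can be treated as chi-squared with n degrees of freedom;
the example data are invented.

    import numpy as np
    from scipy.stats import chi2

    def prediction_error_variance(obs, pre, upper_cl=0.80):
        """Estimate sigma^2 from log-scale errors and its upper confidence limit."""
        e = np.log(obs) - np.log(pre)
        n = len(e)
        sigma2_hat = np.sum(e**2) / n
        # Upper limit of a one-sided CL for sigma^2 via the chi-squared distribution.
        return sigma2_hat, n * sigma2_hat / chi2.ppf(1.0 - upper_cl, df=n)

    obs = np.array([8.0, 9.4, 10.8, 9.5])   # invented observed sizes (millions)
    pre = np.array([8.5, 9.0, 10.2, 10.1])  # invented predicted sizes (millions)
    print(prediction_error_variance(obs, pre))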
Model implications.--The set of alternative population models suggests that carrying capacity (average population
size in the absence of harvest) for an average number of Canadian ponds is somewhere between about 6 and 16
million mallards. The population model with additive hunting mortality and weakly density-dependent
recruitment (SaRw) leads to the most conservative harvest strategy, whereas the model with compensatory
hunting mortality and strongly density-dependent recruitment (ScRs) leads to the most liberal strategy. The other
two models (SaRs and ScRw) lead to strategies that are intermediate between these extremes. Under the models
with compensatory hunting mortality (ScRs and ScRw), the optimal strategy is to have a liberal regulation
regardless of population size or number of ponds because at harvest rates achieved under the liberal alternative,
harvest has no effect on population size. Under the strongly density-dependent model (ScRs), the density-dependence
regulates the population and keeps it within narrow bounds. Under the weakly density-dependent
model (ScRw), the density-dependence does not exert as strong a regulatory effect, and the population size
fluctuates more.
Model weights.--Model weights are calculated as Bayesian probabilities, reflecting the relative ability of the
individual alternative models to predict observed changes in population size. The Bayesian probability for each
model is a function of the model’s previous (or prior) weight and the likelihood of the observed population size
under that model. We used Bayes’ theorem to calculate model weights from a comparison of predicted and
observed population sizes for the years 1996-2004, starting with equal model weights in 1995. For the purposes
of updating, we predicted breeding-population size in the traditional survey area in year t + 1, from breeding-population
size, Canadian ponds, and harvest rates in year t.
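In outline, the annual reweighting is a standard Bayes step: each model's prior weight is multiplied by the
likelihood of the observed population size under that model's prediction, and the products are renormalized. A
minimal sketch (ours), assuming normal likelihoods on the log scale with the prediction-error variance estimated
above; the predictions and observation are illustrative:

    import numpy as np
    from scipy.stats import norm

    def update_weights(weights, predictions, observed, sigma):
        """One Bayesian update of model weights (population sizes in millions)."""
        like = norm.pdf(np.log(observed), loc=np.log(predictions), scale=sigma)
        posterior = weights * like
        return posterior / posterior.sum()

    w = np.array([0.25, 0.25, 0.25, 0.25])  # equal priors, as in 1995
    pred = np.array([7.2, 7.9, 8.4, 9.0])   # model-specific predictions (illustrative)
    print(update_weights(w, pred, observed=7.86, sigma=np.sqrt(0.0243)))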
Model weights changed little until all models under-predicted the change in population size from 1998 to 1999,
perhaps indicating there is a significant factor affecting population dynamics that is absent from all four models
(Fig. 3). Throughout the period of updating model weights, there has been no clear preference for either the
additive (58%) or compensatory (42%) mortality models. For most of the time frame, model weights have
strongly favored the weakly density-dependent (91%) reproductive model over the strongly density-dependent
(9%) one. The reader is cautioned, however, that models can sometimes make reliable predictions of population
size for reasons having little to do with the biological hypotheses expressed therein (Johnson et al. 2002b).
Inclusion of mallards in the Great Lakes region.--Model development originally did not include mallards
breeding in the states of Wisconsin, Minnesota, and Michigan, primarily because full data sets were not available
from these areas to permit the necessary analysis. However, mallards in the Great Lakes region have been
included in the mid-continent mallard AHM protocol since 1997 by assuming that population dynamics for these
mallards are similar to those in the traditional survey area. Based on that assumption, predictions of breeding
population size are scaled to reflect inclusion of mallards in the Great Lakes region. From 1992 through 2006,
when population estimates were available for all three states, the average proportion of the total mid-continent
mallard population that was in the Great Lakes region was 0.1117 (SD = 0.0200). We assumed a normal
distribution with these parameter values to make the conversion between the traditional survey area and total
breeding-population size.
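A sketch (ours) of the scaling step; note that the direction of the conversion, from a traditional-survey-area (TSA)
prediction up to the mid-continent total, is our reading of the text.

    import numpy as np

    # Sketch: scale a TSA prediction to a total mid-continent prediction using
    # the Great Lakes proportion q ~ Normal(0.1117, 0.0200) from the text.
    rng = np.random.default_rng(2)

    def total_from_tsa(n_tsa):
        q = rng.normal(0.1117, 0.0200)  # Great Lakes share of the total
        return n_tsa / (1.0 - q)

    print(total_from_tsa(7.28))  # illustrative TSA estimate, in millions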
Fig 3. Weights for models of mid-continent mallards (ScRs = compensatory mortality and strongly density-dependent
reproduction, ScRw = compensatory mortality and weakly density-dependent reproduction, SaRs =
additive mortality and strongly density-dependent reproduction, and SaRw = additive mortality and weakly
density-dependent reproduction). Model weights were assumed to be equal in 1995.
Eastern Mallards
Population size.--For purposes of AHM, eastern mallards are defined as those breeding in southern Ontario and
Quebec (federal survey strata 51-54 and 56) and in the northeastern U.S. (state plot surveys; Heusmann and Sauer
2000) (see Fig. 1). Estimates of population size have varied from 856 thousand to 1.1 million since 1990, with
the majority of the population accounted for in the northeastern U.S. (Table 3, Fig. 4). The reader is cautioned that
these estimates differ from those reported in the USFWS annual waterfowl trend and status reports, which include
composite estimates based on more fixed-wing strata in eastern Canada and helicopter surveys conducted by the
Canadian Wildlife Service (CWS).
Population models.–We also revised the population models for eastern mallards in 2002 (Johnson et al. 2002a,
USFWS 2002). The current set of six models: (1) relies solely on federal and state waterfowl surveys (rather than
the Breeding Bird Survey) to estimate abundance; (2) allows for the possibility of a positive bias in estimates of
survival or reproductive rates; (3) incorporates competing hypotheses of strongly and weakly density-dependent
reproduction; and (4) assumes that hunting mortality is additive to other sources of mortality.
Table 3. Estimates (N) and associated standard errors (SE) of mallards (in thousands) in spring in the
northeastern U.S. (state plot surveys) and eastern Canada (federal survey strata 51-54 and 56).
State surveys Federal surveys Total
Year N SE N SE N SE
1990 665.1 78.3 190.7 47.2 855.8 91.4
1991 779.2 88.3 152.8 33.7 932.0 94.5
1992 562.2 47.9 320.3 53.0 882.5 71.5
1993 683.1 49.7 292.1 48.2 975.2 69.3
1994 853.1 62.7 219.5 28.2 1072.5 68.7
1995 862.8 70.2 184.4 40.0 1047.2 80.9
1996 848.4 61.1 283.1 55.7 1131.5 82.6
1997 795.1 49.6 212.1 39.6 1007.2 63.4
1998 775.1 49.7 263.8 67.2 1038.9 83.6
1999 879.7 60.2 212.5 36.9 1092.2 70.6
2000 757.8 48.5 132.3 26.4 890.0 55.2
2001 807.5 51.4 200.2 35.6 1007.7 62.5
2002 834.1 56.2 171.3 30.0 1005.4 63.8
2003 731.8 47.0 308.3 55.4 1040.1 72.6
2004 809.1 51.8 301.5 53.3 1110.7 74.3
2005 753.6 53.6 293.4 53.1 1047.0 75.5
2006 725.2 47.9 174.0 28.4 899.2 55.7
As with mid-continent mallards, all population models for eastern mallards share a common balance equation to
predict changes in breeding-population size as a function of annual survival and reproductive rates:
N_{t+1} = N_t \left[ p\,S_{t,am} + (1 - p)\,S_{t,af} + p\,(A_{m,t}/d)\,S_{t,ym} + p\,(A_{m,t}/d)\,S_{t,yf}\,\psi \right]

where:
N = breeding-population size,
p = proportion of males in the breeding population,
S_{am}, S_{af}, S_{ym}, and S_{yf} = survival rates of adult males, adult females, young males, and young females, respectively,
A_m = ratio of young males to adult males in the harvest,
d = ratio of young male to adult male direct recovery rates,
\psi = the ratio of male to female summer survival, and
t = year.
Fig. 4. Population estimates of eastern mallards in the northeastern U.S. (NE state
survey) and in federal surveys in southern Ontario and Quebec. Error bars represent
one standard error.
In this balance equation, we assume that p, d, and ψ are fixed and known. The parameter ψ is necessary to
account for the difference in anniversary date between the breeding-population survey (May) and the survival and
reproductive rate estimates (August). This model also assumes that the sex ratio of fledged young is 1:1; hence
A_m/d appears twice in the balance equation. We estimated d = 1.043 as the median ratio of young:adult male
band-recovery rates in those states from which wing receipts were obtained. We estimated ψ = 1.216 by
regressing through the origin estimates of male survival against female survival in the absence of harvest,
assuming that differences in natural mortality between males and females occur principally in summer. To
estimate p, we used a population projection matrix of the form:
\begin{bmatrix} M \\ F \end{bmatrix}_{t+1} = \begin{bmatrix} S_{am} + (A_m/d)\,S_{ym} & 0 \\ (A_m/d)\,S_{yf}\,\psi & S_{af} \end{bmatrix} \begin{bmatrix} M \\ F \end{bmatrix}_t
where M and F are the relative number of males and females in the breeding populations, respectively. To
parameterize the projection matrix we used average annual survival rate and age ratio estimates, and the estimates
of d and ψ provided above. The right eigenvector of the projection matrix is the stable proportion of males and
females the breeding population eventually would attain in the face of constant demographic rates. This
eigenvector yielded an estimate of p = 0.544.
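The calculation of p is an eigenvector problem that is easy to reproduce; this sketch (ours) uses placeholder
demographic rates rather than the report's averages, together with the ψ given above.

    import numpy as np

    # Sketch: stable sex structure (and hence p) from the projection matrix.
    s_am, s_af, s_ym, s_yf = 0.65, 0.55, 0.60, 0.52  # placeholder survival rates
    a_over_d = 0.9                                   # placeholder A_m / d
    psi = 1.216                                      # from the text

    A = np.array([[s_am + a_over_d * s_ym, 0.0],
                  [a_over_d * s_yf * psi,  s_af]])
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    print(v[0] / v.sum())  # stable proportion of males, p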
We also attempted to determine whether estimates of survival and reproductive rates were unbiased. We relied on
the balance equation provided above, except that we included additional parameters to correct for any bias that
might exist. Because we were unsure of the source(s) of potential bias, we alternatively assumed that any bias
resided solely in survival rates:
N_{t+1} = N_t\,\Omega \left[ p\,S_{t,am} + (1 - p)\,S_{t,af} + p\,(A_{m,t}/d)\,S_{t,ym} + p\,(A_{m,t}/d)\,S_{t,yf}\,\psi \right]
(where Ω is the bias-correction factor for survival rates), or solely in reproductive rates:
N_{t+1} = N_t \left[ p\,S_{t,am} + (1 - p)\,S_{t,af} + p\,\alpha\,(A_{m,t}/d)\,S_{t,ym} + p\,\alpha\,(A_{m,t}/d)\,S_{t,yf}\,\psi \right]
(where α is the bias-correction factor for reproductive rates). We estimated Ω and α by determining the values of
these parameters that minimized the sum of squared differences between observed and predicted population sizes.
Based on this analysis, Ω = 0.836 and α = 0.701, suggesting a positive bias in survival or reproductive rates.
However, because of the limited number of years available for comparing observed and predicted population
sizes, we also retained the balance equation that assumes estimates of survival and reproductive rates are
unbiased.
Survival process.–For purposes of AHM, annual survival rates must be predicted based on the specification of
regulation-specific harvest rates (and perhaps on other uncontrolled factors). Annual survival for each age (i) and
sex (j) class under a given regulatory alternative is:
S_{t,i,j} = \theta_j \left( 1 - \frac{h_{t,am}\,v_{i,j}}{1 - c} \right)

where:
S = annual survival,
\theta_j = mean survival from natural causes,
h_{am} = harvest rate of adult males,
v = harvest vulnerability relative to adult males, and
c = rate of crippling (unretrieved harvest).
This model assumes that annual variation in survival is due solely to variation in harvest rates, that relative
harvest vulnerability of the different age-sex classes is fixed and known, and that survival from natural causes is
fixed at its sample mean. We estimated \theta_j = 0.7307 and 0.5950 for males and females, respectively.
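A sketch (ours) of this predictor. The vulnerability constants are those reported later for eastern mallards (1.153,
1.331, and 1.509 for adult females, young males, and young females, relative to adult males); the crippling rate is
assumed here to be the 20% used elsewhere in the report.

    # Sketch of the eastern-mallard survival predictor.
    THETA = {"male": 0.7307, "female": 0.5950}  # natural survival, from the text
    V = {("adult", "male"): 1.000, ("adult", "female"): 1.153,
         ("young", "male"): 1.331, ("young", "female"): 1.509}
    C = 0.20  # crippling rate (assumed; not stated for eastern mallards)

    def survival(h_am, age, sex):
        # Annual survival given the adult-male harvest rate h_am.
        return THETA[sex] * (1.0 - h_am * V[(age, sex)] / (1.0 - C))

    print(survival(0.15, "adult", "male"))  # illustrative harvest rate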
Reproductive process.–As with survival, annual reproductive rates must be predicted in advance of setting
regulations. We relied on the apparent relationship between breeding-population size and reproductive rates:
R_t = a \cdot \exp(b \cdot N_t)

where R_t is the reproductive rate (i.e., A_{m,t}/d), N_t is breeding-population size in millions, and a and b are model
parameters. The least-squares parameter estimates were a = 2.508 and b = -0.875. Because of both the
importance and uncertainty of the relationship between population size and reproduction, we specified two
alternative models in which the slope (b) was fixed at the least-squares estimate ± one standard error, and in
which the intercepts (a) were subsequently re-estimated. This provided alternative hypotheses of strongly
density-dependent (a = 4.154, b = -1.377) and weakly density-dependent reproduction (a = 1.518, b = -0.373).
Variance of prediction errors.--Using the balance equations and sub-models provided above, predictions of
breeding-population size in year t+1 depend only on the specification of a regulatory alternative and on an
estimate of population size in year t. For the period in which comparisons were possible (1991-96), we were
interested in how well these predictions corresponded with observed population sizes. In making these
comparisons, we were primarily concerned with how well the bias-corrected balance equations and reproductive
and survival sub-models performed. Therefore, we relied on estimates of harvest rates rather than regulations as
model inputs.
We estimated the prediction-error variance by setting:
e_t = \ln\left(N_t^{obs}\right) - \ln\left(N_t^{pre}\right),

then assuming

e_t \sim N(0, \sigma^2),

and estimating

\hat{\sigma}^2 = \frac{1}{n} \sum_t \left[ \ln\left(N_t^{obs}\right) - \ln\left(N_t^{pre}\right) \right]^2
where obs and pre are observed and predicted population sizes (in millions), respectively, and n = 6.
Variance estimates were similar regardless of whether we assumed that the bias was in reproductive rates or in
survival, or whether we assumed that reproduction was strongly or weakly density-dependent. Thus, we averaged
variance estimates to provide a final estimate of σ2 = 0.006, which is equivalent to a coefficient of variation (CV)
of 8.0%. We were concerned, however, about the small number of years available for estimating this variance.
Therefore, we estimated an 80% confidence interval for σ2 based on a Chi-squared distribution and used the upper
limit for σ2 = 0.018 (i.e., CV = 14.5%) to express the additional uncertainty about the magnitude of prediction
errors attributable to potentially important environmental effects not expressed by the models.
Model implications.--Model-specific regulatory strategies based on the hypothesis of weakly density-dependent
reproduction are considerably more conservative than those based on the hypothesis of strongly density-dependent
reproduction. The three models with weakly density-dependent reproduction suggest a carrying
capacity (i.e., average population size in the absence of harvest) >2.0 million mallards, and prescribe extremely
restrictive regulations for population size <1.0 million. The three models with strongly density-dependent
reproduction suggest a carrying capacity of about 1.5 million mallards, and prescribe liberal regulations for
population sizes >300 thousand. Optimal regulatory strategies are relatively insensitive to whether models
include a bias correction or not. All model-specific regulatory strategies are “knife-edged,” meaning that large
differences in the optimal regulatory choice can be precipitated by only small changes in breeding-population
size. This result is at least partially due to the small differences in predicted harvest rates among the current
regulatory alternatives (see the section on Regulatory Alternatives later in this report).
Model weights.—We used Bayes’ theorem to calculate model weights from a comparison of predicted and
observed population sizes for the years 1996-2006. We calculated weights for the alternative models based on an
assumption of equal model weights in 1996 (the last year data were used to develop most model components) and
on estimates of year-specific harvest rates (Appendix B). There is no single model that is clearly favored over the
others at the end of the time frame, although collectively the models with strongly density-dependent reproduction
(65%) are better predictors of changes in population size than those with weak density dependence (35%) (Fig. 5).
In addition, there is substantial evidence of bias in extant estimates of survival and/or reproductive rates (99%).
Fig. 5. Weights for models of eastern mallards (Rw0 = weak density-dependent reproduction and no model
bias, Rs0 = strong density-dependent reproduction and no model bias, RwS = weak density-dependent reproduction
and biased survival rates, RsS = strong density-dependent reproduction and biased survival rates, RwR = weak
density-dependent reproduction and biased reproductive rates, and RsR = strong density-dependent
reproduction and biased reproductive rates). Model weights were assumed to be equal in 1996.
Western Mallards
Substantial numbers of mallards occur in the states of the Pacific Flyway (including Alaska), British Columbia,
and the Yukon Territory during the breeding season. The distribution of these mallards during fall and winter is
centered in the Pacific Flyway (Munro and Kimball 1982). Unfortunately, data-collection programs for
understanding and monitoring the dynamics of this mallard stock are highly fragmented in both time and space.
This makes it difficult to aggregate monitoring instruments in a way that can be used to reliably model this stock’s
dynamics and, thus, to establish criteria for regulatory decision-making under AHM. Another complicating factor
is that federal survey strata 1-12 in Alaska and the Yukon are within the current geographic bounds of mid-continent
mallards. The AHM Working Group is continuing its investigations of western mallards and while it is
not prepared to recommend an AHM protocol at this time, progress is being made on a number of issues:
Breeding populations surveys – The development of AHM for western mallards continues to present technical
challenges that make implementation much more difficult than with either mid-continent or eastern mallards. In
particular, we remain concerned about our ability to reliably determine changes in the population size of western
mallards based on a collection of surveys conducted independently by Pacific Flyway States and the Province of
British Columbia. These surveys tend to vary in design and intensity, and in some cases lack measures of
precision (i.e., sampling error). For example, methods for estimating mallard abundance in British Columbia are
still in the development and evaluation phase, and there are as yet unanswered questions about how mallard
abundance will be determined there on an operational basis. Helicopters are currently being evaluated for use in
surveys that eventually could cover the majority of key waterfowl habitats in British Columbia.
During the last year we reviewed extant surveys to determine their adequacy for supporting a western-mallard
AHM protocol. We were principally interested in whether the surveys: (a) estimate total birds (rather than
breeding pairs); (b) have a sound sampling design (and SEs available); (c) consider imperfect detection of birds;
and (d) require data augmentation (i.e., filling missing years). Based on these criteria, Alaska, California, and
Oregon were selected for modeling purposes. These three states likely harbor about 75% of the western-mallard
breeding population (Fig. 6). Nonetheless, this geographic focus is temporary until such time that surveys in
other areas can be brought up to similar standards and an adequate record of population estimates is available for
analysis.
Fig. 6. Status of surveys in the range of western mallards. States with solid shading represent those that
currently are being used to model western-mallard population dynamics.
Population modeling – For modeling purposes we were hesitant to pool Alaska mallards with those in California
and Oregon because of differing population trajectories (Fig. 7), and because we believed it likely that different
environmental driving variables were at play during the breeding season in northern and southern latitudes.
Mallards banded in Alaska and in California/Oregon also had different recovery distributions, suggesting that
these two groups of mallards may be subject to differences in mortality rates during the non-breeding season. The
separation of western mallards into northern and southern groups is for exploratory purposes. It remains unclear
whether the dynamics of these two groups are different enough to be meaningful in a harvest-management
context, especially given that many of these birds are subject to a common hunting season.
Fig. 7. Population estimates of two groups of western mallards. Surveys were not conducted in Oregon in
1992, 1993, and 2001 so we imputed estimates based on the correlation between estimates from Oregon
and California. Error bars represent one standard error.
We used a discrete logistic model to characterize population dynamics because it requires a minimum of data to
parameterize. The traditional approach of constructing a population balance equation (i.e., that used for mid-continent
and eastern mallards) was deemed impractical because of the paucity of banding data to estimate
survival rates and because of the difficulty of estimating reproductive rates from a collection of wings aggregated
from several mallard stocks. The logistic model took the form:
N_{t+1} = N_t + N_t\,r \left( 1 - \frac{N_t}{K} \right) - \alpha_t N_t + \varepsilon_t, \qquad \alpha_t = \left( h_{t,AM}\,(1 + c) \right) d, \quad \varepsilon_t \sim N(0, \sigma^2)

and where: N = breeding-population size, r = the intrinsic rate of population growth, K = carrying capacity, h_{AM} =
adult-male harvest rate, c = crippling loss, d = a scaling factor, and σ² = process error.
As model inputs, we used breeding population estimates (derived from breeding surveys) and harvest rates of
adult males (derived from band-recovery data corrected for reporting rates). We assumed a 20% crippling loss
and scaled adult-male harvest rates (d) to represent a harvest-rate for the population as a whole. The magnitude of
d depends on the differential vulnerability of the age-sex cohorts, as well as their relative abundance in the
population. Based on values from better understood mallard stocks we specified d = 1.4, but also explicitly
allowed for considerable variation in this parameter throughout the analyses.
The model we used assumes that both the Alaska and California-Oregon breeding stocks are closed. While this is
a tenuous assumption, breeding-ground fidelity is difficult to investigate using dead recoveries of banded birds,
even where large samples are available from all stocks of interest (in this case including mid-continent mallards).
Low fidelity could lead to poor model fit; however, this lack of fit should be absorbed in the process-error term.
The logistic model also assumes density-dependent growth, but does not specify whether it occurs in the mortality
process, the reproductive process, or both. However, some assumptions about the seasonality of events had to be
made in order to reconcile the different timing of population surveys and preseason banding.
We used a “state-space model” that allows the partitioning of observation error, which is specified by the
sampling error of population estimates, and process error, which specifies the discrepancy between predicted and
observed population sizes (Meyer and Millar 1999). Estimates of model parameters were calculated via Markov
Chain Monte Carlo simulations using the WinBugs public-domain software. We specified uninformative or
vague prior distributions for all model parameters. Once parameter estimates were available, we derived optimal
harvest strategies with stochastic dynamic programming and simulated their use via the public-domain software
ASDP (Lubow 1995). We assumed perfect control over harvest rates, but explicitly considered estimation error
in key model parameters.
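To illustrate the model's behavior (a sketch of ours, not the assessment code), one can project a stock forward
under a fixed harvest rate using the point estimates in Table 4; the process-error variance below is a placeholder,
and the kill-rate scaling follows the assumptions stated above.

    import numpy as np

    # Sketch: project the California/Oregon stock with the discrete logistic
    # model, using point estimates from Tables 4 and 5.
    rng = np.random.default_rng(3)
    r, K = 0.46, 0.88  # intrinsic growth rate; carrying capacity (millions)
    d, c = 1.4, 0.20   # harvest-rate scaling; crippling loss
    sigma = 0.05       # process-error SD (placeholder value)

    def step(n, h_am):
        alpha = h_am * (1.0 + c) * d  # population-level kill rate
        growth = n + n * r * (1.0 - n / K) - alpha * n
        return max(growth + rng.normal(0.0, sigma), 0.0)

    n = 0.45
    for _ in range(10):
        n = step(n, h_am=0.11)  # mean adult-male harvest rate, Table 5
    print(n)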
We examined a number of models in which K was either constant or allowed to vary over time. We present here
only the results for the model with constant K. This model provides a reasonable fit to the data from both areas
(Fig. 8) and suggests a similar K (Table 4). The intrinsic rate of growth r appears to be higher in
California/Oregon and thus these mallards should be able to support slightly more harvest pressure over the long
term than those originating from Alaska. However, we note there is a high degree of uncertainty associated with
all parameter estimates.
Table 4. Estimates of carrying capacity K and the intrinsic rate of growth r from the logistic model for two
groups of western mallards. (CI = Bayesian credibility intervals).
Stock K 95% CI (K) r 95% CI (r)
AK 0.95 0.68 – 1.09 0.36 0.14 – 0.64
CA/OR 0.88 0.60 – 1.09 0.46 0.20 – 0.94
Fig. 8. Mallard breeding population size in Alaska (top) and California/Oregon (bottom) as estimated
from surveys (obs pop) and those predicted from a logistic model that assumes a constant carrying
capacity K (M0).
Harvest rates – We estimated harvest rates of adult male mallards in Alaska and California/Oregon directly from
recoveries of reward bands placed on mallards prior to the hunting seasons in 2002-2005 (Table 5). Generally,
these rates are similar to those for mid-continent mallards.
Table 5. Harvest rates (h, and standard errors, se) of adult-male mallards
banded in Alaska and California/Oregon as based on reward banding. (There
was an insufficient number of reward bandings in Alaska in 2005).
AK CA/OR
h se h se
2002 0.1121 0.0306 0.1049 0.0109
2003 0.1000 0.0391 0.0970 0.0124
2004 0.0968 0.0379 0.1239 0.0175
2005 0.1086 0.0098
mean 0.1030 0.0057 0.1099 0.0082
Ultimately, the ability to predict stock-specific harvest rates as a function of flyway-specific regulations involves:
(a) accounting for movements of breeding stocks to various harvest areas (flyways); (b) estimating harvest rates
on mallards wintering in the various flyways; and (c) correlating these harvest rates with flyway framework
regulations. This work is currently underway and may be completed prior to the 2007 regulations cycle.
Harvest Management Objectives

The basic harvest-management objective for mid-continent mallards is to maximize cumulative harvest over the
long term, which inherently requires perpetuation of a viable population. Moreover, this objective is constrained
to avoid regulations that could be expected to result in a subsequent population size below the goal of the North
American Waterfowl Management Plan (NAWMP) (Fig. 9). According to this constraint, the value of harvest
decreases proportionally as the difference between the goal and expected population size increases. This balance
of harvest and population objectives results in a regulatory strategy that is more conservative than that for
maximizing long-term harvest, but more liberal than a strategy to attain the NAWMP goal (regardless of effects on
hunting opportunity). The current objective uses a population goal of 8.8 million mallards, which is based on 8.2
million mallards in the traditional survey area (from the 1998 update of the NAWMP) and a goal of 0.6 million for
the combined states of Minnesota, Wisconsin, and Michigan.

For eastern mallards, there is no NAWMP goal or other established target for desired population size. Accordingly,
the management objective for eastern mallards is simply to maximize long-term cumulative (i.e., sustainable)
harvest.
Fig. 9. The relative value of mid-continent mallard harvest,
expressed as a function of breeding-population size expected in
the subsequent year.
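One common way to encode such a constraint is to devalue expected harvest in proportion to any expected shortfall
below the goal. The sketch below (ours) assumes a simple proportional devaluation consistent with Fig. 9; the exact
functional form used in the optimization is not reproduced here.

    # Sketch of a harvest-utility function that devalues harvest when the
    # population expected next year falls below the NAWMP-based goal.
    GOAL = 8.8  # million mid-continent mallards

    def harvest_value(expected_harvest, expected_bpop):
        # Full value at or above the goal; proportional devaluation below it.
        return expected_harvest * min(1.0, expected_bpop / GOAL)

    print(harvest_value(1.2, 7.4), harvest_value(1.2, 9.0))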
Regulatory Alternatives

Evolution of Alternatives
When AHM was first implemented in 1995, three regulatory alternatives characterized as liberal, moderate, and
restrictive were defined based on regulations used during 1979-84, 1985-87, and 1988-93, respectively. These
regulatory alternatives also were considered for the 1996 hunting season. In 1997, the regulatory alternatives
were modified to include: (1) the addition of a very-restrictive alternative; (2) additional days and a higher duck
bag limit in the moderate and liberal alternatives; and (3) an increase in the bag limit of hen mallards in the
moderate and liberal alternatives. In 2002 the USFWS further modified the moderate and liberal alternatives to
include extensions of approximately one week in both the opening and closing framework dates.
In 2003 the very-restrictive alternative was eliminated at the request of the Flyway Councils. Expected harvest
rates under the very-restrictive alternative did not differ significantly from those under the restrictive alternative,
and the very-restrictive alternative was expected to be prescribed for <5% of all hunting seasons. Also, at the
request of the Flyway Councils the USFWS agreed to exclude closed duck-hunting seasons from the AHM
protocol when the breeding-population size of mid-continent mallards is >5.5 million (traditional survey area plus
the Great Lakes region). Based on our assessment, closed hunting seasons do not appear to be necessary from the
perspective of sustainable harvesting when the mid-continent mallard population exceeds this level. The impact
of maintaining open seasons above this level also appears to be negligible for other mid-continent duck species
(scaup, gadwall, wigeon, green-winged teal, blue-winged teal, shoveler, pintail, redhead, and canvasbacks), as
based on population models developed by Johnson (2003). However, complete or partial season-closures for
particular species or populations could still be deemed necessary in some situations regardless of the status of
mid-continent mallards. Details of the regulatory alternatives for each Flyway are provided in Table 6.
Regulation-Specific Harvest Rates
Initially, harvest rates of mallards associated with each of the open-season regulatory alternatives were predicted
using harvest-rate estimates from 1979-84, which were adjusted to reflect current hunter numbers and
contemporary specifications of season lengths and bag limits. In the case of closed seasons in the U.S., we
assumed rates of harvest would be similar to those observed in Canada during 1988-93, which was a period of
restrictive regulations both in Canada and the U.S. All harvest-rate predictions were based only in part on band-recovery
data, and relied heavily on models of hunting effort and success derived from hunter surveys (USFWS
2002: Appendix C). As such, these predictions had large sampling variances and their accuracy was uncertain.
In 2002 we began relying on Bayesian statistical methods for improving regulation-specific predictions of harvest
rates, including predictions of the effects of framework-date extensions. Essentially, the idea is to use existing
(prior) information to develop initial harvest-rate predictions (as above), to make regulatory decisions based on
those predictions, and then to observe realized harvest rates. Those observed harvest rates, in turn, are treated as
new sources of information for calculating updated (posterior) predictions. Bayesian methods are attractive
because they provide a quantitative and formal, yet intuitive, approach to adaptive management.
Table 6. Regulatory alternatives for the 2006 duck-hunting season.
Regulation Atlantica Mississippi Centralb Pacificc
Shooting hours one-half hour before sunrise to sunset
Framework dates
Restrictive Oct 1 - Jan 20 Saturday nearest Oct 1 to the Sunday nearest Jan 20
Moderate and Liberal Saturday nearest September 24 to the last Sunday in January
Season length (days)
Restrictive 30 30 39 60
Moderate 45 45 60 86
Liberal 60 60 74 107
Bag limit (total / mallard / female mallard)
Restrictive 3 / 3 / 1 3 / 2 / 1 3 / 3 / 1 4 / 3 / 1
Moderate 6 / 4 / 2 6 / 4 / 1 6 / 5 / 1 7 / 5 / 2
Liberal 6 / 4 / 2 6 / 4 / 2 6 / 5 / 2 7 / 7 / 2
a The states of Maine, Massachusetts, Connecticut, Pennsylvania, New Jersey, Maryland, Delaware, West
Virginia, Virginia, and North Carolina are permitted to exclude Sundays, which are closed to hunting, from
their total allotment of season days.
b The High Plains Mallard Management Unit is allowed 8, 12, and 23 extra days in the restrictive, moderate,
and liberal alternatives, respectively.
c The Columbia Basin Mallard Management Unit is allowed seven extra days in the restrictive and moderate
alternatives.
For mid-continent mallards, we have empirical estimates of harvest rate from the recent period of liberal hunting
regulations (1998-2005). The Bayesian methods thus allow us to combine these estimates with our prior
predictions to provide updated estimates of harvest rates expected under the liberal regulatory alternative.
Moreover, in the absence of experience (so far) with the restrictive and moderate regulatory alternatives, we
reasoned that our initial predictions of harvest rates associated with those alternatives should be re-scaled based
on a comparison of predicted and observed harvest rates under the liberal regulatory alternative. In other words,
if observed harvest rates under the liberal alternative were 10% less than predicted, then we might also expect that
the mean harvest rate under the moderate alternative would be 10% less than predicted. The appropriate scaling
factors currently are based exclusively on prior beliefs about differences in mean harvest rate among regulatory
alternatives, but they will be updated once we have experience with something other than the liberal alternative.
A detailed description of the analytical framework for modeling mallard harvest rates is provided in Appendix B.
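Reduced to means, the rescaling amounts to multiplying the prior mean for each untried alternative by the ratio of
observed to predicted harvest rate under the liberal alternative. This sketch (ours) uses illustrative prior means;
the full analysis is the Bayesian treatment of Appendix B.

    # Sketch of the proportional rescaling of mean harvest-rate predictions for
    # alternatives not yet experienced (means only; the real analysis is Bayesian).
    prior_means = {"restrictive": 0.066, "moderate": 0.125, "liberal": 0.143}  # illustrative

    def rescale(priors, observed_liberal):
        ratio = observed_liberal / priors["liberal"]
        return {alt: m * ratio for alt, m in priors.items()}

    print(rescale(prior_means, observed_liberal=0.127))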
Our models of regulation-specific harvest rates also allow for the marginal effect of framework-date extensions in
the moderate and liberal alternatives. A previous analysis by the USFWS (2001) suggested that implementation
of framework-date extensions might be expected to increase the harvest rate of mid-continent mallards by about
15%, or in absolute terms by about 0.02 (SD = 0.01). Based on the observed harvest rate during the 2002-2005
hunting seasons, the updated (posterior) estimate of the marginal change in harvest rate attributable to the
framework-date extension is 0.012 (SD = 0.008). Therefore, the estimated effect of the framework-date extension
has been to increase harvest rate of mid-continent mallards by about 10% over what would otherwise be expected
in the liberal alternative. However, the reader is strongly cautioned that reliable inference about the marginal
effect of framework-date extensions ultimately depends on a rigorous experimental design (including controls and
random application of treatments).
Current predictions of harvest rates of adult-male mid-continent mallards associated with each of the regulatory
alternatives are provided in Table 7 and Fig. 9. Predictions of harvest rates for the other age-sex cohorts are based
on the historical ratios of cohort-specific harvest rates to adult-male rates (Runge et al. 2002). These ratios are
considered fixed at their long-term averages and are 1.5407, 0.7191, and 1.1175 for young males, adult females,
and young females, respectively. We continued to make the simplifying assumption that the harvest rates of mid-continent
mallards depend solely on the regulatory choice in the western three Flyways. This appears to be a
reasonable assumption given the small proportion of mid-continent mallards wintering in the Atlantic Flyway
(Munro and Kimball 1982), and harvest-rate predictions that suggest a minimal effect of Atlantic Flyway
regulations (USFWS 2000). Under this assumption, the optimal regulatory strategy for the western three Flyways
can be derived by ignoring the harvest regulations imposed in the Atlantic Flyway.
Table 7. Predictions of harvest rates of adult-male mid-continent mallards expected with
application of the 2006 regulatory alternatives in the three western Flyways.
Regulatory alternative Mean SD
Closed (U.S.) 0.0088 0.0019
Restrictive 0.0583 0.0128
Moderate 0.1107 0.0216
Liberal 0.1269 0.0213
Fig. 9. Probability distributions of harvest rates of adult male mid-continent mallards expected
with application of the 2006 regulatory alternatives in the three western Flyways.
Until last year, predictions of harvest rates for eastern mallards depended exclusively on historical (prior)
information because more contemporary estimates of harvest rate were unavailable. However, we can now update
the predictions of eastern-mallard harvest rates in the same fashion as that for mid-continent mallards based on
reward banding conducted in eastern Canada and the northeastern U.S. (Appendix B). Like mid-continent
mallards, harvest rates of age and sex cohorts other than adult male mallards are based on constant rates of
differential vulnerability as derived from band-recovery data. For eastern mallards, these constants are 1.153,
1.331, and 1.509 for adult females, young males, and young females, respectively (Johnson et al. 2002a).
Regulation-specific predictions of harvest rates of adult-male eastern mallards are provided in Table 8 and Fig. 10.
In contrast to mid-continent mallards, framework-date extensions were expected to increase the harvest rate of
eastern mallards by only about 5% (USFWS 2001), or in absolute terms by about 0.01 (SD = 0.01). Based on the
observed harvest rate during the 2002-2005 hunting seasons, the updated (posterior) estimate of the marginal
change in harvest rate attributable to the framework-date extension is 0.006 (SD = 0.010). Therefore, the
estimated effect of the framework-date extension has been to increase harvest rate of eastern mallards by about
4% over what would otherwise be expected in the liberal alternative.
Table 8. Predictions of harvest rates of adult-male eastern mallards expected with
application of the 2006 regulatory alternatives in the Atlantic Flyway.
Regulatory alternative Mean SD
Closed (U.S.) 0.0797 0.0230
Restrictive 0.1209 0.0392
Moderate 0.1417 0.0473
Liberal 0.1636 0.0460
Optimal Regulatory Strategies

We calculated optimal regulatory strategies using stochastic dynamic programming (Lubow 1995, Johnson and
Williams 1999). For the three western Flyways, we based this optimization on: (1) the 2006 regulatory
alternatives, including the closed-season constraint; (2) current population models and associated weights for mid-continent
mallards; and (3) the dual objectives of maximizing long-term cumulative harvest and achieving a
population goal of 8.8 million mid-continent mallards. The resulting regulatory strategy (Table 9) is similar to
that used last year.
Assuming that regulatory choices adhered to this strategy (and that current model weights accurately reflect
population dynamics), breeding-population size would be expected to average 7.37 million (SD = 1.77). Note
that prescriptions for closed seasons in this strategy represent resource conditions that are insufficient to support
one of the current regulatory alternatives, given current harvest-management objectives and constraints.
However, closed seasons under all of these conditions are not necessarily required for long-term resource
protection, and simply reflect the NAWMP population goal and the nature of the current regulatory alternatives.
Based on an observed population size of 7.86 million mid-continent mallards (traditional surveys plus MN, MI,
and WI) and 4.45 million ponds in Prairie Canada, the optimal regulatory choice for the Pacific, Central, and
Mississippi Flyways in 2006 is the liberal alternative.
Fig. 10. Probability distributions of harvest rates of adult male eastern mallards expected with
application of the 2006 regulatory alternatives in the Atlantic Flyway.
Table 9. Optimal regulatory strategya for the three western Flyways for the 2006 hunting season. This strategy is based on
current regulatory alternatives (including the closed-season constraint), on current mid-continent mallard models and weights,
and on the dual objectives of maximizing long-term cumulative harvest and achieving a population goal of 8.8 million mallards.
The shaded cell indicates the regulatory prescription for 2006.
BPOPb \ Pondsc: 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0
≤5.25 C C C C C C C C C C
5.50-6.25 R R R R R R R R R R
6.50 R R R R R R R R M M
6.75 R R R R R R R M M L
7.00 R R R R M M M L L L
7.25 R R R M M L L L L L
7.50 R R M M L L L L L L
7.75 M L L L L L L L L L
8.00 M L L L L L L L L L
≥8.25 L L L L L L L L L L
a C = closed season, R = restrictive, M = moderate, L = liberal.
b Mallard breeding population size (in millions) in the traditional survey area (survey strata 1-18, 20-50, 75-77) and Michigan,
Minnesota, and Wisconsin.
c Ponds (in millions) in Prairie Canada in May.
We calculated an optimal regulatory strategy for the Atlantic Flyway based on: (1) the 2006 regulatory
alternatives; (2) current population models and associated weights for eastern mallards; and (3) an objective to
maximize long-term cumulative harvest. The resulting strategy suggests liberal regulations for all population sizes
of record, and is characterized by a lack of intermediate regulations (Table 10). The strategy exhibits this
behavior in part because of the small differences in harvest rate among regulatory alternatives (Fig. 10).
Table 10. Optimal regulatory strategya for the Atlantic Flyway for
the 2006 hunting season. This strategy is based on current
regulatory alternatives, on current eastern mallard models and
weights, and on an objective to maximize long-term cumulative
harvest. The shaded cell indicates the regulatory prescription for 2006.
Mallardsb Regulation
<225 C
225 R
>225 L
a C = closed season, R = restrictive, M = moderate, and L = liberal.
b Estimated number of mallards in eastern Canada (survey strata
51-54, 56) and the northeastern U.S. (state plot surveys), in thousands.
We simulated the use of the regulatory strategy in Table 10 to determine expected performance characteristics.
Assuming that harvest management adhered to this strategy (and that current model weights accurately reflect
population dynamics), the annual breeding-population size would be expected to average 872 thousand (SD = 16
thousand). Based on a breeding population size of 899 thousand mallards, the optimal regulatory choice for the
Atlantic Flyway in 2006 is the liberal alternative.
Application of AHM Concepts to Species of Concern
The USFWS is striving to apply the principles and tools of AHM to improve decision-making for several species
of special concern. We here report on four such efforts in which progress has been made since last year. This
work is being conducted as a joint effort with USGS and we particularly appreciate the technical assistance
provided by M. C. Runge (Patuxent Wildlife Research Center) and M. J. Conroy (Georgia Cooperative Fish and
Wildlife Research Unit).
Northern Pintails

Pursuant to requests from the Service Regulations Committee and the Pacific Flyway Council, we reviewed
available information about northern pintail (Anas acuta) demography, population dynamics, and harvest
(http://www.fws.gov/migratorybirds/reports/ahm05/NOPI%202005%20Report%202.pdf). Based on this review,
several technical improvements in our ability to model pintail harvest dynamics have been adopted. In addition,
we undertook an effort to evaluate pintail harvest potential based on these model improvements and to explore the
impacts of these improvements on past and future pintail harvest management policy.
Breeding population survey corrections.--There is general agreement among waterfowl biologists that the May
breeding population survey undercounts pintails in dry years when pintails tend to settle farther north on the
breeding grounds. We developed a method to correct the observed breeding population estimates for this bias.
The effect of this correction is to remove some of the apparent sharp drops in pintail numbers during dry years.
Further, this correction suggests that in recent years, there were 30-60% more pintails in the breeding population
than the May surveys indicated.
Updated recruitment, harvest, and population models.--We developed improved methods to predict recruitment
and harvest, and included these components in an updated population model for use in the pintail harvest strategy.
The recruitment model uses latitude of the pintail population and the corrected breeding population size estimates
as predictors. The harvest models identify a “season-within-a-season” effect in the Central and Mississippi
flyways. The new population model predicts population change better than the previous model.
Pintail harvest potential.--Using the new pintail population model, we were able to analyze the harvest potential
of the pintail population. There is evidence that the pintail population is settling, on average, about 2.4° of latitude
farther north now than it did prior to 1975, possibly as a result of large-scale changes in habitat. This more
northern distribution has resulted in lower reproduction, a 30-45% decrease in carrying capacity, and a 40-65%
decrease in sustainable harvest potential.
Incorporating the technical improvements described above into the population model, we can calculate and depict
the current harvest strategy (Fig. 11). The season would be closed when the observed BPOP is less than ~1
million (which is roughly equivalent to a corrected fall flight of 2 million), restrictive when the observed BPOP is
less than 2.5 million with a high latitude of the BPOP, and liberal otherwise (this graph assumes the AHM
package is liberal; the restrictive section of the graph implies a season-within-a-season). More than one bird in
the bag is allowed when the population growth is expected to be greater than 6%. The corresponding state-dependent
harvest policies when the AHM package is moderate or restrictive are shown in Fig. 12. When the
AHM package is moderate, a restrictive season-within-a-season is possible. When the AHM package is
restrictive, the pintail season is either also restrictive, or else closed.
[Figure 11 plots average latitude of the BPOP against pintail BPOP (millions), with regions labeled Closed, Liberal (1-bird), and Liberal (3-birds).]
Fig. 11. Pintail harvest strategy for a liberal duck season, based on the overflight
bias correction, new recruitment model, and updated harvest models. For a given
value of the observed (not corrected) pintail BPOP and average latitude of the BPOP,
the resulting regulatory decision is shown. Note that this graph assumes the AHM
package is “liberal”, thus the “restrictive” regulation implies a season-within-a-season.
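The decision rule just described can be summarized programmatically. The sketch below is illustrative only: the BPOP cutoffs are the ones quoted in the text, while the latitude cutoff is a hypothetical placeholder, since the actual boundary between restrictive and liberal in Fig. 11 varies with population size.

```python
def pintail_regulation(observed_bpop_millions, mean_latitude,
                       high_latitude_cutoff=56.0):
    """Illustrative pintail decision rule under a liberal AHM package.

    high_latitude_cutoff is a made-up placeholder; in Fig. 11 the
    restrictive/liberal boundary depends on both state variables rather
    than on a single latitude value.
    """
    if observed_bpop_millions < 1.0:   # roughly a corrected fall flight of 2 M
        return "closed"
    if observed_bpop_millions < 2.5 and mean_latitude > high_latitude_cutoff:
        return "restrictive"           # i.e., a season-within-a-season
    return "liberal"
```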
Black Ducks
We continued to examine the harvest potential of black ducks using models of population dynamics described by
Conroy et al. (2002). These models incorporate the most controversial hypotheses about reproductive and
survival processes in black ducks, and also allow for the possibility that extant estimates of reproductive and
survival rates are positively biased. Using empirically based model weights (from 1962-93) in conjunction with
deterministic dynamic programming, we can derive combinations of equilibrium population size and harvest for a
range of adult harvest rates. Because of evidence that the reproductive rate of black ducks declines with
increasing numbers of mallards, the carrying capacity and harvestable surplus of black ducks are smaller with
higher numbers of sympatric mallards.
There also appears to be a temporal decline in the reproductive rate of black ducks that cannot be explained by
changes in black duck and mallard abundance. The cause is unknown but may be related to declines in the
quantity and/or quality of breeding habitat, wintering habitat, or both. Whatever the cause, the management
implications are profound, suggesting that carrying capacity and maximum sustainable harvest of black ducks
have decreased by 35% and 60%, respectively, in the past two decades (Fig. 13).
Since 1983, the U.S. Fish and Wildlife Service has been operating under guidance provided in an Environmental
Assessment that specified states harvesting significant numbers of black ducks achieve at least a 25% reduction in
harvest from 1977- 81 levels. Although this level of harvest reduction has been achieved, black duck harvest rates
appear to have increased with the return of 50-60 day duck hunting seasons associated with implementation of
AHM (Fig. 14). Whether these harvest rates are appropriate given current population status is unclear; based on
black duck counts in the midwinter survey, current rates might be at or above maximum sustainable levels.
However, a recent publication by Link et al. (2006), which compared the U.S. midwinter survey with the
Christmas Bird Count, suggests that a larger portion of the black duck population may now be wintering in
Canada than in the past. If this is the case, then regulatory prescriptions based on the U.S. midwinter survey
could be overly conservative. Therefore, the USFWS, USGS, the Atlantic and Mississippi Flyways, and CWS are
aggressively pursuing efforts to develop an adaptive framework based on breeding-population surveys and
internationally agreed-upon management objectives.
[Figure 12 consists of two panels plotting average latitude of the BPOP against pintail BPOP (millions), with regions labeled Moderate (1-bird) and Moderate (3-birds) in the left panel and Restrictive (1-bird) and Restrictive (3-birds) in the right panel.]
Fig. 12. State-dependent pintail harvest strategy, given a moderate AHM alternative (left) and a restrictive AHM
alternative (right). For details see Fig. 11.
Fig. 13. Collapsing yield curves of black ducks as a result of declining productivity. Yield curves were
based on population models provided by Conroy et al. (2002). For each period, we used the median
year to represent black duck productivity and fixed the number of mallards at their average midwinter
count. The diagonal lines intersect the indicated adult harvest rate on each yield curve.
Fig. 14. Estimates of harvest rates of adult-male black ducks based on recoveries of standard
bands, adjusted for preliminary estimates of band-reporting rates (P. Garrettson, unpubl. data).
[Figure 13 axes: black duck MWI (k) vs. harvest (k), with diagonals at harvest rates 0.15 and 0.10; Figure 14 y-axis: harvest rate.]
Scaup

We continued to evaluate the harvest potential of the continental scaup (greater, Aythya marila, and lesser, Aythya affinis) population using a discrete, logistic population model and available monitoring information on scaup
population and harvest dynamics (http://www.fws.gov/migratorybirds/reports/ahm05/scaupharvestpotential.pdf).
We used a fully Bayesian approach to estimate scaup population parameters and to characterize the uncertainty
related to scaup harvest potential (Table 11).
We plotted mean scaup equilibrium population sizes and corresponding sustainable harvests (Fig. 15) along with
95% credibility intervals (gray shading). This yield curve, in combination with observed harvests and breeding
population sizes from 1994–2005, suggests that current harvests may be at maximum sustainable levels.
Table 11. Estimates of model and management parameters (posterior
means and 95% credibility intervals) derived from fitting a logistic
population model to continental scaup populations using a Bayesian
hierarchical approach. (r = intrinsic rate of growth, K = carrying
capacity, MSY = maximum sustainable yield, hMSY = harvest rate at
MSY, and BPOPMSY = breeding population size at MSY)
Parameter mean 2.50% 97.50%
r 0.1763 0.0978 0.2974
K 8.6259 6.4830 11.450
MSY 0.3694 0.2271 0.5377
hMSY 0.0881 0.0489 0.1487
BPOPMSY 4.3129 3.2420 5.7230
Fig. 15. Equilibrium population sizes and harvests (shaded area represents 95%
credibility interval) estimated for continental scaup populations from a logistic
model using a Bayesian hierarchical approach. The years represent recent
combinations of observed population size and harvest.
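For a discrete logistic model these equilibrium quantities follow directly from r and K, which offers a rough consistency check on Table 11. The sketch below assumes the standard form N(t+1) = N(t) + r·N(t)·(1 − N(t)/K) − H(t); plugging in the posterior means reproduces hMSY = r/2 and BPOPMSY = K/2 exactly, while MSY differs slightly (rK/4 ≈ 0.380 vs. the posterior mean 0.369) because the mean of a product is not the product of means.

```python
# Equilibrium yield for a discrete logistic model, using the posterior
# means from Table 11 (populations in millions of birds).
r, K = 0.1763, 8.6259

def equilibrium_yield(n):
    """Sustainable harvest at equilibrium population size n."""
    return r * n * (1.0 - n / K)

bpop_msy = K / 2.0                  # breeding population at MSY
msy = equilibrium_yield(bpop_msy)   # equals r * K / 4
h_msy = msy / bpop_msy              # harvest rate at MSY, equals r / 2

print(f"BPOP_MSY={bpop_msy:.3f}M  MSY={msy:.3f}M  h_MSY={h_msy:.4f}")
```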
Atlantic Population of Canada Geese
For the purposes of this AHM application, Atlantic Population Canada Geese (APCG) are defined as those geese
breeding on the Ungava Peninsula. By this delineation, we assume that geese in the Atlantic population outside
this area are either few in number, similar in population dynamics to the Ungava birds, or both.
To account for heterogeneity among individuals, we developed a base model consisting of a truncated time-invariant
age-based projection model to describe the dynamics of APCG,

n(t+1) = A n(t),
where n(t) is a vector of the abundances of the ages in the population at time t, and A is the population projection
matrix, whose ijth entry aij gives the contribution of an individual in stage j to stage i over 1 time step. The
projection interval (from t to t+1) is one year, with the census being taken in mid-June (i.e., this model has a pre-breeding
census). In the life cycle diagram reflecting the transition sequence (diagram not reproduced here), node 1 refers to one-year-old birds, node 2 refers to two-year-old birds, node B refers to adult breeders, and node NB refers to adult non-breeders. One immediate extension of the base model is to remove the assumption of
time-invariance, and express the parameters as time-dependent quantities:
Pt = proportion of adult birds in population in year t which breed;
Rt = basic breeding productivity in year t (per capita);
St(0) = annual survival rate of young from fledging in year t to the census point the next year;
St(1) = annual survival rate of one-year-old birds in year t; etc.
For APCG, only N(B), R and z are observable annually, where N(B) is the number of breeding adults, R is the per
capita reproductive rate (ratio of fledged young to breeding adults), and z is an extrinsic variable (a function of
timing of snow melt on the breeding grounds).
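To make the projection concrete, the sketch below advances the four-stage model one time step. The transition structure (breeders produce young; survivors of nodes 2, B, and NB pool into the adult classes and split according to P) is an assumption inferred from the node descriptions above, as is the single adult survival rate sa; the report's actual matrix A may differ in detail.

```python
import numpy as np

def apcg_step(n, P, R, s0, s1, sa):
    """One pre-breeding-census step of the 4-stage model.

    n = [N1, N2, B, NB]; P, R, s0, s1 as defined in the text; sa is an
    assumed common annual survival rate for birds aged two and older.
    """
    N1, N2, B, NB = n
    adults = sa * (N2 + B + NB)     # birds entering the adult classes
    return np.array([
        R * s0 * B,                 # young fledged by breeders, surviving year 1
        s1 * N1,                    # one-year-olds becoming two-year-olds
        P * adults,                 # adult breeders
        (1.0 - P) * adults,         # adult non-breeders
    ])

n = np.array([20.0, 15.0, 150.0, 40.0])   # illustrative values (thousands)
print(apcg_step(n, P=0.8, R=1.6, s0=0.5, s1=0.8, sa=0.85))
```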
Note that at the time of the management decision in the United States (July), estimates for only the breeding
population size and the environmental variable(s) are available; the age-ratio isn’t estimated until later in the
summer. Thus, in year t, the observable state variables are Nt(B), zt, and Rt−1.
There are several other state variables of interest, however, namely, N(1), N(2), and N(NB). Because annual harvest
decisions need to be made based on the total population size (Ntot), which is the sum of contributions from various
non-breeding age classes as well as the number of breeding individuals, abundance of non-breeding individuals
(N(NB), N(1), and N(2)) needs to be derived using population-reconstruction techniques. In most cases, population
reconstruction involves estimating the most likely population projection matrix, given a time series of population
vectors (where number of individuals in each age class at each time is known). However, in our case, only
estimates of N(B), R and z are available (not the complete population vector); in effect, we must estimate some of
the population abundance values given the other parameters in the model. Recent extensions of Bayesian
statistical methods to population reconstruction may provide an adequate solution (Fig. 16).
Fig. 16. Observed breeding population size of Atlantic Population Canada Geese (estimates and 95% confidence bounds) compared with the population trajectory based on our population model and population reconstruction.
Management of the APCG has, in recent years, been focused on achieving the minimum population needed to
sustain some level of sport harvest. However, there is growing concern over the potential problems caused by
overabundant goose species, and management objectives for goose species are increasingly considering
population control as an important objective.
Specification of an explicit, mathematical objective function for APCG will require careful deliberation among
the appropriate stakeholders. Since formal AHM is an exercise in optimization, the objective often not only
drives the outcome, but also strongly influences the development of the other components of the decision
framework (e.g., the decision variables, the projection model, etc.). As a starting point for our work in developing
an AHM application for APCG, and as a starting point for discussions about the management objectives for this
resource, we developed a candidate objective function. We propose that the management objective needs to
reflect the simultaneous problem of maximizing opportunity for harvest, while minimizing the risk that the
population will become either too large (i.e., beyond human tolerance in terms of impacts on habitat or other
species), or too small (i.e., requiring season closure for political reasons).
We believe that the critical components governing the dynamics of APCG, unlike those governing ducks, are
generally density-independent over the range of population sizes that likely characterize management objectives;
as such, harvest represents an imposed regulatory mechanism on the dynamics of the population. This requires
specification of a desired range for the population size. Let NMTP represent the maximum tolerable population size
that stakeholders would accept, given the potential for negative impacts of overabundant APCG on stakeholder
interests. Let NMin be the minimum tolerable population size, below which season closure is the only politically
viable management option. The management objective is to maintain the population in the range between the
maximum and minimum values, while simultaneously maximizing opportunity for sport harvest.
There is another implicit dynamic that may interact with this objective: there may be a limit to the amount of
harvest that could be induced with traditional harvest regulations. Let NMCP represent the maximum controllable
population level that could be regulated by harvest (a function of a finite number of goose hunters or hunting
effort; this is currently an unknown quantity for APCG). We think it’s most likely that NMTP < NMCP, although this
assumption won’t affect the development of any other aspect of the AHM protocol. NMCP might strongly affect
the optimal policy, however, as the policy should avoid letting the population reach an uncontrollable level,
especially if that level is also intolerable. Thus, the objective should implicitly minimize the risk of losing the
ability to control the population. Note that NMCP should be calculated from biological considerations in
conjunction with information about the limits to harvest. NMTP, however, is a purely sociological constraint.
We think this objective will hold the population as close to the maximum tolerable population size as possible
(thus, allowing the greatest harvest), while guarding against the risk of the population getting out of control.
Mathematically, these objectives can be expressed as
max Σt u(Nt) Ht  (sum over t from 0 to ∞),
that is, maximizing the long-term cumulative harvest utility, where the value (utility) of harvest is decremented
relative to the bounds of the constraint (i.e., the maximum and minimum bounds). One possible form of the
utility function u is a ‘square-wave’, where the utility of the harvest is 0 when the population size is above NMTP or below NMin. The Atlantic Flyway Canada Goose Committee (AFCGC) has proposed values for
NMTP and NMin (Fig. 17).
Fig. 17. Proposed utility of APCG harvest as a function of breeding-population size.
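A minimal sketch of this square-wave utility and the objective it induces is given below; NMin and NMTP are left as arguments because the AFCGC's proposed numeric values appear only in Fig. 17.

```python
def harvest_utility(n, h, n_min, n_mtp):
    """Square-wave utility: harvest has full value while the population
    is inside the tolerable range [n_min, n_mtp], and none outside it."""
    return h if n_min <= n <= n_mtp else 0.0

def cumulative_utility(pop_traj, harvest_traj, n_min, n_mtp):
    """Finite-horizon stand-in for the infinite sum in the text."""
    return sum(harvest_utility(n, h, n_min, n_mtp)
               for n, h in zip(pop_traj, harvest_traj))
```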
The AFCGC would also like to consider a management objective to avoid overly restrictive regulations when
populations are close to goal, as well as abrupt changes in regulations with relatively small changes in population
size or spring weather conditions (i.e., a “knife-edged” strategy). One way this may be accomplished is by adding
a cost that increases with increasing harvest rates to the objective function. The addition of this cost is sufficient
to induce more intermediate harvest rates in the optimal harvest strategy (Fig. 18).
Development of a preliminary AHM framework for APCG is nearing completion and, pending sufficient review
and evaluation, may be ready for implementation in 2007.
Fig. 18. Optimal harvest rates for APCG at a stable stage distribution where there is
no cost (above) and a relatively high cost (below) for a knife-edged harvest strategy.
Literature Cited

Anderson, D. R., and K. P. Burnham. 1976. Population ecology of the mallard. VI. The effect of exploitation on
survival. U.S. Fish and Wildlife Service Resource Publication No. 128. 66pp.
Blohm, R. J. 1989. Introduction to harvest - understanding surveys and season setting. Proceedings of the
International Waterfowl Symposium 6:118-133.
Blohm, R. J., R. E. Reynolds, J. P. Bladen, J. D. Nichols, J. E. Hines, K. P. Pollock, and R. T. Eberhardt. 1987.
Mallard mortality rates on key breeding and wintering areas. Transactions of the North American
Wildlife and Natural Resources Conference 52:246-263.
Burnham, K. P., G. C. White, and D. R. Anderson. 1984. Estimating the effect of hunting on annual survival rates
of adult mallards. Journal of Wildlife Management 48:350-361.
Conroy, M. J., M. W. Miller, and J. E. Hines. 2002. Identification and synthetic modeling of factors affecting
American black duck populations. Wildlife Monographs 150. 64pp.
Heusmann, H. W., and J. R. Sauer. 2000. The northeastern states’ waterfowl breeding population survey. Wildlife
Society Bulletin 28:355-364.
Johnson, F. A. 2003. Population dynamics of ducks other than mallards in mid-continent North America. Draft.
Fish and Wildlife Service, U.S. Dept. Interior, Washington, D.C. 15pp.
Johnson, F. A., J. A. Dubovsky, M. C. Runge, and D. R. Eggeman. 2002a. A revised protocol for the adaptive
harvest management of eastern mallards. Fish and Wildlife Service, U.S. Dept. Interior, Washington,
D.C. 13pp. [online] URL: http://migratorybirds.fws.gov/reports/ahm02/emal-ahm-2002.pdf.
Johnson, F. A., W. L. Kendall, and J. A. Dubovsky. 2002b. Conditions and limitations on learning in the
adaptive management of mallard harvests. Wildlife Society Bulletin 30:176-185.
Johnson, F. A., C. T. Moore, W. L. Kendall, J. A. Dubovsky, D. F. Caithamer, J. R. Kelley, Jr., and B. K.
Williams. 1997. Uncertainty and the management of mallard harvests. Journal of Wildlife Management
Johnson, F. A., and B. K. Williams. 1999. Protocol and practice in the adaptive management of waterfowl
harvests. Conservation Ecology 3(1): 8. [online] URL: http://www.consecol.org/vol3/iss1/art8.
Johnson, F. A., B. K. Williams, J. D. Nichols, J. E. Hines, W. L. Kendall, G. W. Smith, and D. F. Caithamer.
1993. Developing an adaptive management strategy for harvesting waterfowl in North America.
Transactions of the North American Wildlife and Natural Resources Conference 58:565-583.
Johnson, F. A., B. K. Williams, and P. R. Schmidt. 1996. Adaptive decision-making in waterfowl harvest and
habitat management. Proceedings of the International Waterfowl Symposium 7:26-33.
Link, W. A., J. R. Sauer, and D. K. Niven. 2006. A hierarchical model for regional analysis of population change
using Christmas bird count data, with application to the American black duck. The Condor 108:13-24.
Lubow, B. C. 1995. SDP: Generalized software for solving stochastic dynamic optimization problems. Wildlife
Society Bulletin 23:738-742.
Meyer, R., and R. B. Millar. 1999. BUGS in Bayesian stock assessments. Canadian Journal of Fisheries and
Aquatic Sciences 56:1078-1086.
Munro, R. E., and C. F. Kimball. 1982. Population ecology of the mallard. VII. Distribution and derivation of
the harvest. U.S. Fish and Wildlife Service Resource Publication 147. 127pp.
Nichols, J. D., F. A. Johnson, and B. K. Williams. 1995. Managing North American waterfowl in the face of
uncertainty. Annual Review of Ecology and Systematics 26:177-199.
Runge, M. C., F. A. Johnson, J. A. Dubovsky, W. L. Kendall, J. Lawrence, and J. Gammonley. 2002. A revised
protocol for the adaptive harvest management of mid-continent mallards. Fish and Wildlife Service, U.S.
Dept. Interior, Washington, D.C. 28pp. [online] URL:
U.S. Fish and Wildlife Service. 2000. Adaptive harvest management: 2000 duck hunting season. U.S. Dept.
Interior, Washington, D.C. 43pp. [online] URL:
U.S. Fish and Wildlife Service. 2001. Framework-date extensions for duck hunting in the United States:
projected impacts & coping with uncertainty, U.S. Dept. Interior, Washington, D.C. 8pp. [online] URL:
U.S. Fish and Wildlife Service. 2002. Adaptive harvest management: 2002 duck hunting season. U.S. Dept.
Interior, Washington, D.C. 34pp. [online] URL:
Walters, C. J. 1986. Adaptive management of renewable resources. MacMillan Publ. Co., New York, N.Y.
Williams, B. K., and F. A. Johnson. 1995. Adaptive management and the regulation of waterfowl harvests.
Wildlife Society Bulletin 23:430-436.
Williams, B. K., F. A. Johnson, and K. Wilkins. 1996. Uncertainty and the adaptive management of waterfowl
harvests. Journal of Wildlife Management 60:223-232.
APPENDIX A: AHM Working Group
(Note: This list includes only permanent members of the AHM Working Group. Not listed here are numerous
persons from federal and state agencies that assist the Working Group on an ad-hoc basis.)
Fred Johnson
U.S. Fish & Wildlife Service
McCarty C 420, University of Florida
P.O. Box 110339
Gainesville, FL 32611
phone: 352-392-3052
fax: 352-392-8555
e-mail: fred_a_johnson@fws.gov
USFWS representatives:
Bob Blohm (Region 9)
U.S. Fish and Wildlife Service
4401 N Fairfax Drive
MS MSP-4107
Arlington, VA 22203
phone: 703-358-1966
fax: 703-358-2272
e-mail: robert_blohm@fws.gov
Brad Bortner (Region 1)
U.S. Fish and Wildlife Service
911 NE 11th Ave.
Portland, OR 97232-4181
phone: 503-231-6164
fax: 503-231-2364
e-mail: brad_bortner@fws.gov
David Viker (Region 4)
U.S. Fish and Wildlife Service
1875 Century Blvd., Suite 345
Atlanta, GA 30345
phone: 404-679-7188
fax: 404-679-7285
e-mail: david_viker@fws.gov
Dave Case (contractor)
D.J. Case & Associates
607 Lincolnway West
Mishawaka, IN 46544
phone: 574-258-0100
fax: 574-258-0189
e-mail: dave@djcase.com
John Cornely (Region 6)
U.S. Fish and Wildlife Service
P.O. Box 25486, DFC
Denver, CO 80225
phone: 303-236-8155 (ext 259)
fax: 303-236-8680
e-mail: john_cornely@fws.gov
Ken Gamble (Region 9)
U.S. Fish and Wildlife Service
101 Park DeVille Drive, Suite B
Columbia, MO 65203
phone: 573-234-1473
fax: 573-234-1475
e-mail: ken_gamble@fws.gov
Diane Pence (Region 5)
U.S. Fish and Wildlife Service
300 Westgate Center Drive
Hadley, MA 01035-9589
phone: 413-253-8577
fax: 413-253-8424
e-mail: diane_pence@fws.gov
Jeff Haskins (Region 2)
U.S. Fish and Wildlife Service
P.O. Box 1306
Albuquerque, NM 87103
phone: 505-248-6827 (ext 30)
fax: 505-248-7885
e-mail: jeff_haskins@fws.gov
Bob Leedy (Region 7)
U.S. Fish and Wildlife Service
1011 East Tudor Road
Anchorage, AK 99503-6119
phone: 907-786-3446
fax: 907-786-3641
e-mail: robert_leedy@fws.gov
Jerry Serie (Region 9)
U.S. Fish and Wildlife Service
11510 American Holly Drive
Laurel, MD 20708
phone: 301-497-5851
fax: 301-497-5885
e-mail: jerry_serie@fws.gov
Dave Sharp (Region 9)
U.S. Fish and Wildlife Service
P.O. Box 25486, DFC
Denver, CO 80225-0486
phone: 303-275-2386
fax: 303-275-2384
e-mail: dave_sharp@fws.gov
Bob Trost (Region 9)
U.S. Fish and Wildlife Service
911 NE 11th Ave.
Portland, OR 97232-4181
phone: 503-231-6162
fax: 503-231-6228
e-mail: robert_trost@fws.gov
Sean Kelly (Region 3)
U.S. Fish and Wildlife Service
1 Federal Drive
Ft. Snelling, MN 55111-4056
phone: 612-713-5470
fax: 612-713-5393
e-mail: sean_kelly@fws.gov
Canadian Wildlife Service representatives:
Dale Caswell
Canadian Wildlife Service
123 Main St. Suite 150
Winnipeg, Manitoba, Canada R3C 4W2
phone: 204-983-5260
fax: 204-983-5248
e-mail: dale.caswell@ec.gc.ca
Eric Reed
Canadian Wildlife Service
351 St. Joseph Boulevard
Hull, QC K1A 0H3, Canada
phone: 819-953-0294
fax: 819-953-6283
e-mail: eric.reed@ec.gc.ca
Flyway Council representatives:
Scott Baker (Mississippi Flyway)
Mississippi Dept. of Wildlife, Fisheries, and Parks
P.O. Box 378
Redwood, MS 39156
phone: 601-661-0294
fax: 601-364-2209
e-mail: mahannah1@aol.com
Diane Eggeman (Atlantic Flyway)
Florida Fish and Wildlife Conservation Commission
8932 Apalachee Pkwy.
Tallahassee, FL 32311
phone: 850-488-5878
fax: 850-488-5884
e-mail: diane.eggeman@fwc.state.fl.us
Jim Gammonley (Central Flyway)
Colorado Division of Wildlife
317 West Prospect
Fort Collins, CO 80526
phone: 970-472-4379
fax: 970-472-4457
e-mail: jim.gammonley@state.co.us
Mike Johnson (Central Flyway)
North Dakota Game and Fish Department
100 North Bismarck Expressway
Bismarck, ND 58501-5095
phone: 701-328-6319
fax: 701-328-6352
e-mail: mjohnson@state.nd.us
Don Kraege (Pacific Flyway)
Washington Dept. of Fish and Wildlife
600 Capital Way North
Olympia, WA 98501-1091
phone: 360-902-2509
fax: 360-902-2162
e-mail: kraegdkk@dfw.wa.gov
Bryan Swift (Atlantic Flyway)
Dept. Environmental Conservation
625 Broadway
Albany, NY 12233-4754
phone: 518-402-8866
fax: 518-402-9027 or 402-8925
e-mail: blswift@gw.dec.state.ny.us
Dan Yparraguirre (Pacific Flyway)
California Dept. of Fish and Game
1812 Ninth Street
Sacramento, CA 95814
phone: 916-445-3685
e-mail: dyparraguirre@dfg.ca.gov
Guy Zenner (Mississippi Flyway)
Iowa Dept. of Natural Resources
1203 North Shore Drive
Clear Lake, IA 50428
phone: 515-357-3517, ext. 23
fax: 515-357-5523
e-mail: gzenner@netins.net
APPENDIX B: Modeling Mallard Harvest Rates
We modeled harvest rates of mid-continent mallards within a Bayesian hierarchical framework. We developed a
set of models to predict harvest rates under each regulatory alternative as a function of the harvest rates observed
under the liberal alternative, using historical information relating harvest rates to various regulatory alternatives.
We modeled the probability of regulation-specific harvest rates (h) based on normal distributions with the
following parameterizations:
p(hC) ~ N(μC, νC)
p(hR) ~ N(γR μL, νR)
p(hM) ~ N(γM (μL + δf), νM)
p(hL) ~ N(μL + δf, νL)
For the restrictive and moderate alternatives we introduced the parameter γ to represent the relative difference
between the harvest rate observed under the liberal alternative and the moderate or restrictive alternatives. Based
on this parameterization, we are making use of the information that has been gained (under the liberal alternative)
and are modeling harvest rates for the restrictive and moderate alternatives as a function of the mean harvest rate
observed under the liberal alternative. For the harvest-rate distributions assumed under the restrictive and
moderate regulatory packages, we specified that γR and γM are equal to the prior estimates of the predicted mean
harvest rates under the restrictive and moderate alternatives divided by the prior estimates of the predicted mean
harvest rates observed under the liberal alternative. Thus, these parameters act to scale the mean of the restrictive
and moderate distributions in relation to the mean harvest rate observed under the liberal regulatory alternative.
We also considered the marginal effect of framework-date extensions under the moderate and liberal alternatives
by including the parameter δf.
In order to update the probability distributions of harvest rates realized under each regulatory alternative, we first
needed to specify a prior probability distribution for each of the model parameters. These distributions represent
prior beliefs regarding the relationship between each regulatory alternative and the expected harvest rates. We
used a normal distribution to represent the mean and a scaled inverse-chi-square distribution to represent the
variance of the normal distribution of the likelihood. For the mean (μ) of each harvest-rate distribution associated
with each regulatory alternative, we use the predicted mean harvest rates provided in USFWS (2000a:13-14),
assuming uniformity of regulatory prescriptions across flyways. We set prior values of each standard deviation
(ν) equal to 20% of the mean (CV = 0.2) based on an analysis by Johnson et al. (1997). We then specified the
following prior distributions and parameter values under each regulatory package:
Closed (in U.S. only):
p(μC) ~ N(·, ·)
p(νC) ~ Scaled Inv-χ²(·, ·)
These closed-season parameter values are based on observed harvest rates in Canada during the 1988-93 seasons,
which was a period of restrictive regulations in both Canada and the United States.
For the restrictive and moderate alternatives, we specified that the standard error of the normal distribution of the
scaling parameter is based on a coefficient of variation for the mean equal to 0.3. The scale parameter of the
inverse-chi-square distribution was set equal to the standard deviation of the harvest rate mean under the
restrictive and moderate regulation alternatives (i.e., CV = 0.2).
Restrictive:
p(γR) ~ N(·, ·)
p(νR) ~ Scaled Inv-χ²(·, ·)

Moderate:
p(γM) ~ N(·, ·)
p(νM) ~ Scaled Inv-χ²(·, ·)

Liberal:
p(μL) ~ N(·, ·)
p(νL) ~ Scaled Inv-χ²(·, ·)
The prior distribution for the marginal effect of the framework-date extension was specified as:
p(δf) ~ N(0.02, 0.01²)
The prior distributions were multiplied by the likelihood functions based on the last seven years of data under
liberal regulations, and the resulting posterior distributions were evaluated with Markov Chain Monte Carlo
simulation. Posterior estimates of model parameters and of annual harvest rates are provided in the following table:
Parameter Estimate SD Parameter Estimate SD
μC 0.0088 0.0020 h1998 0.1093 0.0114
νC 0.0019 0.0005 h1999 0.1000 0.0077
γR 0.5105 0.0593 h2000 0.1261 0.0101
νR 0.0128 0.0032 h2001 0.1071 0.0111
γM 0.8501 0.1113 h2002 0.1132 0.0059
νM 0.0216 0.0054 h2003 0.1131 0.0084
μL 0.1154 0.0071 h2004 0.1245 0.0108
νL 0.0213 0.0042 h2005 0.1174 0.0082
δf 0.0115 0.0082
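As an illustration of how the fitted model translates into regulation-specific predictions, the sketch below draws harvest rates from the four normal distributions using the posterior means tabulated above. Treating those means as fixed ignores the parameter uncertainty that the full posterior carries, so this is a simplification of what the MCMC output provides.

```python
import random

# Posterior means for mid-continent mallards (from the table above).
mu_C, nu_C = 0.0088, 0.0019
gamma_R, nu_R = 0.5105, 0.0128
gamma_M, nu_M = 0.8501, 0.0216
mu_L, nu_L = 0.1154, 0.0213
delta_f = 0.0115   # marginal effect of the framework-date extension

def draw_harvest_rate(alternative, framework_extension=True):
    """One simulated harvest rate under a regulatory alternative."""
    d = delta_f if framework_extension else 0.0
    mean, sd = {
        "closed": (mu_C, nu_C),
        "restrictive": (gamma_R * mu_L, nu_R),
        "moderate": (gamma_M * (mu_L + d), nu_M),
        "liberal": (mu_L + d, nu_L),
    }[alternative]
    return random.gauss(mean, sd)

print(draw_harvest_rate("liberal"))
```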
We modeled harvest rates of eastern mallards using the same parameterizations as those for mid-continent
mallards:

p(hC) ~ N(μC, νC)
p(hR) ~ N(γR μL, νR)
p(hM) ~ N(γM (μL + δf), νM)
p(hL) ~ N(μL + δf, νL)
We set prior values of each standard deviation (ν) equal to 30% of the mean (CV = 0.3) to account for additional
variation due to changes in regulations in the other Flyways and their unpredictable effects on the harvest rates of
eastern mallards. We then specified the following prior distributions and parameter values, beginning with the
liberal regulatory alternative:

Liberal:
p(μL) ~ N(·, ·)
p(νL) ~ Scaled Inv-χ²(·, ·)

Restrictive:
p(γR) ~ N(·, ·)
p(νR) ~ Scaled Inv-χ²(·, ·)

Moderate:
p(γM) ~ N(·, ·)
p(νM) ~ Scaled Inv-χ²(·, ·)
Closed (in U.S. only):
p(μC) ~ N(·, ·)
p(νC) ~ Scaled Inv-χ²(·, ·)
A previous analysis suggested that the effect of the framework-date extension on eastern mallards would be of
lower magnitude and more variable than on mid-continent mallards (USFWS 2000). Therefore, we specified the
following prior distribution for the marginal effect of the framework-date extension for eastern mallards as:
p(δf) ~ N(0.01, 0.01²)
The prior distributions were multiplied by the likelihood functions based on the last four years of data under
liberal regulations, and the resulting posterior distributions were evaluated with Markov Chain Monte Carlo
simulation. Posterior estimates of model parameters and of annual harvest rates are provided in the following table:
Parameter Estimate SD Parameter Estimate SD
μC 0.0797 0.0251 h2002 0.1630 0.0129
νC 0.0230 0.0055 h2003 0.1466 0.0105
γR 0.7695 0.1175 h2004 0.1373 0.0115
νR 0.0392 0.0098 h2005 0.1282 0.0118
γM 0.9074 0.1139
νM 0.0473 0.0119
μL 0.1576 0.0169
νL 0.0460 0.0106
δf 0.0060 0.0098 | {"url":"http://digitalmedia.fws.gov/cdm/singleitem/collection/document/id/1392/rec/28","timestamp":"2014-04-17T04:49:41Z","content_type":null,"content_length":"274179","record_id":"<urn:uuid:0132f7e1-653c-4de4-a236-21bb58efc4ea>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
A. Effects of inter-ribbon LIPC on ZGNRs
1. Atomic structure and charge density
2. Spin density
3. Energy band structure
B. Effects of inter-ribbon LIPC on AGNRs
1. Atomic structure and charge density
2. Energy band structure | {"url":"http://scitation.aip.org/content/aip/journal/jap/111/4/10.1063/1.3686673","timestamp":"2014-04-16T05:42:23Z","content_type":null,"content_length":"81565","record_id":"<urn:uuid:4c346116-508a-4989-8f49-4b6151224fb4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
Name

wcstod, wcstof, wcstold, wstod, watof - convert wide character string to floating-point number
Synopsis

#include <wchar.h>
double wcstod(const wchar_t *restrict nptr,
wchar_t **restrict endptr);
float wcstof(const wchar_t *restrict nptr,
wchar_t **restrict endptr);
long double wcstold(const wchar_t *restrict nptr,
wchar_t **restrict endptr);
double wstod(const wchar_t *nptr, wchar_t **endptr);
double watof(wchar_t *nptr);
Description

The wcstod(), wcstof(), and wcstold() functions convert the initial portion of the wide-character string pointed to by nptr to double, float, and long double representation, respectively. They
first decompose the input wide-character string into three parts:
1. An initial, possibly empty, sequence of white-space wide-character codes (as specified by iswspace(3C))
2. A subject sequence interpreted as a floating-point constant or representing infinity or NaN
3. A final wide-character string of one or more unrecognized wide-character codes, including the terminating null wide-character code of the input wide-character string.
Then they attempt to convert the subject sequence to a floating-point number, and return the result.
The expected form of the subject sequence is an optional plus or minus sign, then one of the following:
□ A non-empty sequence of decimal digits optionally containing a radix character, then an optional exponent part
□ A 0x or 0X, then a non-empty sequence of hexadecimal digits optionally containing a radix character, then an optional binary exponent part
□ One of INF or INFINITY, or any other wide string equivalent except for case
In default mode for wcstod(), only decimal, INF/INFINITY, and NAN/NAN(n-char-sequence) forms are recognized. In C99/SUSv3 mode, hexadecimal strings are also recognized.
In default mode for wcstod(), the n-char-sequence in the NAN(n-char-sequence) form can contain any character except ')' (right parenthesis) or '\0' (null). In C99/SUSv3 mode, the n-char-sequence
can contain only upper and lower case letters, digits, and '_' (underscore).
The wcstof() and wcstold() functions always function in C99/SUSv3-conformant mode.
The subject sequence is defined as the longest initial subsequence of the input wide string, starting with the first non-white-space wide character, that is of the expected form. The subject
sequence contains no wide characters if the input wide string is not of the expected form.
If the subject sequence has the expected form for a floating-point number, the sequence of wide characters starting with the first digit or the radix character (whichever occurs first) is
interpreted as a floating constant according to the rules of the C language, except that the radix character is used in place of a period, and that if neither an exponent part nor a radix
character appears in a decimal floating-point number, or if a binary exponent part does not appear in a hexadecimal floating-point number, an exponent part of the appropriate type with value zero
is assumed to follow the last digit in the string. If the subject sequence begins with a minus sign, the sequence is interpreted as negated. A wide-character sequence INF or INFINITY is
interpreted as an infinity. A wide-character sequence NAN or NAN(n-wchar-sequence[opt]) is interpreted as a quiet NaN. A pointer to the final wide string is stored in the object pointed to by
endptr, provided that endptr is not a null pointer.
If the subject sequence has either the decimal or hexadecimal form, the value resulting from the conversion is rounded correctly according to the prevailing floating point rounding direction
mode. The conversion also raises floating point inexact, underflow, or overflow exceptions as appropriate.
The radix character is defined in the program's locale (category LC_NUMERIC). In the POSIX locale, or in a locale where the radix character is not defined, the radix character defaults to a
period ('.').
If the subject sequence is empty or does not have the expected form, no conversion is performed; the value of nptr is stored in the object pointed to by endptr, provided that endptr is not a null
The wcstod() function does not change the setting of errno if successful.
The wstod() function is identical to wcstod().
A call to watof(nptr) is equivalent to wstod(nptr, (wchar_t **)NULL).
Return Values
Upon successful completion, these functions return the converted value. If no conversion could be performed, 0 is returned.
If the correct value is outside the range of representable values, ±HUGE_VAL, ±HUGE_VALF, or ±HUGE_VALL is returned (according to the sign of the value), a floating point overflow exception is
raised, and errno is set to ERANGE.
If the correct value would cause an underflow, the correctly rounded result (which may be normal, subnormal, or zero) is returned, a floating point underflow exception is raised, and errno is set
to ERANGE.
Errors

The wcstod() and wstod() functions will fail if:

ERANGE
    The value to be returned would cause overflow or underflow.

The wcstod() and wstod() functions may fail if:

EINVAL
    No conversion could be performed.
Usage

Because 0 is returned on error and is also a valid return on success, an application wishing to check for error situations should set errno to 0, call wcstod(), wcstof(), wcstold(), or wstod(), and then check errno; if it is non-zero, assume an error has occurred.
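For example, the following sketch (not part of the standard interface description) distinguishes the three possible outcomes:

    #include <wchar.h>
    #include <stdio.h>
    #include <errno.h>

    int main(void)
    {
            const wchar_t *input = L"  3.14159 tail";
            wchar_t *end;
            double value;

            errno = 0;                /* clear errno before the call */
            value = wcstod(input, &end);

            if (end == input) {
                    (void) fwprintf(stderr, L"no conversion performed\n");
            } else if (errno == ERANGE) {
                    (void) fwprintf(stderr, L"value out of range\n");
            } else {
                    (void) wprintf(L"value = %f, rest = \"%ls\"\n",
                        value, end);
            }
            return (0);
    }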
Attributes

See attributes(5) for descriptions of the following attributes:
│ ATTRIBUTE TYPE │ ATTRIBUTE VALUE │
│ Interface Stability │ wcstod(), wcstof(), and wcstold() are Standard. │
│ MT-Level │ MT-Safe │
See Also
SunOS 5.11 Last Revised 31 Mar 2003
Algorithms for Shortest Paths and d-cycle Problems
Abstract: Let G be a weighted graph with n vertices and m edges. We address the d-cycle problem, i.e., the problem of finding a subgraph of minimum weight with given cyclomatic number d. Hartvigsen
[1] presented an algorithm with running time O(n^2m) and O(n^{2d-1}m^2) for the cyclomatic numbers d=1 and d\ge 2, respectively. Using a (d+1)-shortest-paths algorithm, we develop a new more
efficient algorithm for the d-cycle problem with running time O(n^{2d-1}+n^2m+n^3\log n).
[1] D. Hartvigsen, Minimum path bases. Journal of Algorithms, 15 (1993) 125-142.
@article{BespamyatnikhKelarev2003
, author = "S. Bespamyatnikh and A. Kelarev"
, title = "Algorithms for Shortest Paths and {$d$}-cycle Problems"
, journal = "Journal of Discrete Algorithms"
, volume = 1
, pages = "1--9"
, year = 2003
}
The inner automorphism 3-group of a strict 2-group
David Michael Roberts and Urs Schreiber
Any group $G$ gives rise to a 2-group of inner automorphisms, $\mathrm{INN}(G)$. It is an old result by Segal that the nerve of this is the universal $G$-bundle. We discuss that, similarly, for every
2-group $G_{(2)}$ there is a 3-group $\mathrm{INN}(G_{(2)})$ and a slightly smaller 3-group $\mathrm{INN}_0(G_{(2)})$ of inner automorphisms. We describe these for $G_{(2)}$ any strict 2-group,
discuss how $\mathrm{INN}_0(G_{(2)})$ can be understood as arising from the mapping cone of the identity on $G_{(2)}$ and show that its underlying 2-groupoid structure fits into a short exact
sequence $$ \xymatrix{ G_{(2)} \ar[r] & \mathrm{INN}_0(G_{(2)}) \ar[r] & \mathbf{B} G_{(2)} } \,. $$ As a consequence, $\mathrm{INN}_0(G_{(2)})$ encodes the properties of the universal $G_{(2)}$
Journal of Homotopy and Related Structures, Vol. 3(2008), No. 1, pp. 193-244 | {"url":"http://www.emis.de/journals/JHRS/volumes/2008/n1a6/abstract.htm","timestamp":"2014-04-19T17:11:06Z","content_type":null,"content_length":"2267","record_id":"<urn:uuid:c178bb6a-59d3-456c-a80b-bcc9f6669b9c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00400-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implement Map-reduce version of stochastic SVD
• Type:
• Status: Closed
• Priority:
• Resolution: Duplicate
• Affects Version/s: None
See attached pdf for outline of proposed method.
All comments are welcome.
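For readers without the attached pdf: the core idea is the usual randomized projection followed by QR and a small SVD. Below is a minimal single-node, in-core sketch of that pipeline for reference; it is not the proposed implementation (which is map-reduce, with blockwise/hierarchical QR over a distributed row matrix), and the Mahout math class names used (DenseMatrix, QRDecomposition, SingularValueDecomposition) are assumed and may differ by version.

```java
import java.util.Random;
import org.apache.mahout.math.DenseMatrix;
import org.apache.mahout.math.Matrix;
import org.apache.mahout.math.QRDecomposition;
import org.apache.mahout.math.SingularValueDecomposition;

/** Minimal in-core sketch of stochastic SVD: A ~= (Q*Ub) * S * V'. */
public class SsvdSketch {

  public static void main(String[] args) {
    int m = 200, n = 100, k = 10, p = 5;   // rows, cols, rank, oversampling
    Random rnd = new Random(42L);
    Matrix a = gaussian(m, n, rnd);        // stand-in for the real input

    // 1. Random projection: Y = A * Omega, with Omega n x (k+p) Gaussian.
    Matrix y = a.times(gaussian(n, k + p, rnd));

    // 2. Orthonormalize: Y = Q*R (blockwise/hierarchical QR in the MR version).
    Matrix q = new QRDecomposition(y).getQ();

    // 3. Small matrix B = Q' * A, only (k+p) x n.
    Matrix b = q.transpose().times(a);

    // 4. In-core SVD of B' (tall n x (k+p); JAMA-style SVD ports want
    //    rows >= cols). If B' = V * S * Ub', then B = Ub * S * V'.
    SingularValueDecomposition svd = new SingularValueDecomposition(b.transpose());
    Matrix u = q.times(svd.getV());        // left singular vectors of A
    Matrix v = svd.getU();                 // right singular vectors of A

    System.out.println("sigma_0 ~= " + svd.getSingularValues()[0]
        + "; U: " + m + "x" + (k + p) + ", V: " + n + "x" + (k + p));
  }

  private static Matrix gaussian(int rows, int cols, Random rnd) {
    Matrix x = new DenseMatrix(rows, cols);
    for (int i = 0; i < rows; i++) {
      for (int j = 0; j < cols; j++) {
        x.set(i, j, rnd.nextGaussian());
      }
    }
    return x;
  }
}
```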
MAHOUT-309 Implement Stochastic Decomposition
is related to
MAHOUT-623 Bug/improvement: add "overwrite" option to Stochastic SVD command line and API
relates to
MAHOUT-593 Backport of Stochastic SVD patch (Mahout-376) to hadoop 0.20 to ensure compatibility with current Mahout dependencies.
dlyubimov2 added a comment -
That's fine. We'll have to go back to a slight permutation of this patch with 0.21 api, but it's fine. I'll re-create it when it's time.
Sean Owen added a comment -
Am I correct that this is, for our purposes, subsumed into MAHOUT-593? It's the same patch, just ported to work with Mahout now. I presume that going forward we'll want to iterate on that version rather than entertain this version any more.
Reopen if I'm wrong.
Hudson added a comment -
Integrated in Mahout-Quality #658 (See https://hudson.apache.org/hudson/job/Mahout-Quality/658/) MAHOUT-593: initial addition of Stochastic SVD method (related issue is MAHOUT-376)
dlyubimov2 added a comment -
yes. current patch references CDH. It's either 0.21 or CDH but since we use CDH in production, i retained CDH reference. We will have to change it to a concrete 0.21 reference. My best hope is that there's going to be another release of 0.21 or later which Hadoop group would more or less assert as a stable, at which point Mahout could switch dependencies. It looks like it will indeed require some revamping as there's a number of tests that is currently failing with that switch. Unless it is CDH2b2.
Ted Dunning added a comment -
> It's just it grabs CDH3 from cloudera's repo.
Meaning the pom references CDH?
dlyubimov2 added a comment -
Actually, the last time i checked, and unless it got out of sync since then, it would compile and although there are couple of tests that would not pass with CDH3b3 (but all would pass with CDH3b2). It's just it grabs CDH3 from cloudera's repo. So integration attempt might be dangerous.
Ted Dunning added a comment -
> PS. I guess i did click on 'patch available', didn't I? I did not mean to – this patch changes the whole dependency thing for mahout. please roll back the status. My apologies.
Don't worry about it. Until it is committed, it isn't real. And as soon as somebody applies the patch and compiles, they will see failures and thus shouldn't commit it.
dlyubimov2 added a comment - edited
This is essentially 'patch available' except it requires 0.21 or CDH3 to compile (new API which is incomplete in 0.20). Hence i will not be changing the status here – i guess we can use this when Mahout switches to new api.
I will open another issue for backport of this to 0.20.
PS. I guess i did click on 'patch available', didn't I? I did not mean to – this patch changes the whole dependency thing for mahout. please roll back the status. My apologies.
dlyubimov2 added a comment -
• Optimized B x B' job. Now BBt step runs in 38 seconds in distributed mode on a single node (including the job setup which is about 20 seconds) (Reuters dataset with bigrams).
• fixes R computation bug introduced in recent optimizations
• Verified on Reuters dataset with bigrams. Ran in almost exactly 10 minutes on a single node per command line report (300 singular values) with both U and V jobs. V computation is now the longest (which is expected as likely # of stems >> # of documents). Not sure if there's much space for improvement there, but i will take a look.
• Verified with multiple mappers, multiple q blocks again with full rank decomposition to confirm singular value matches to Colt stock solver's and orthogonality of U and V.
dlyubimov2 added a comment -
Apparently, i lost several commits when setting code with github. Some CLI patches were lost. So, repaired.
just ran thru Reuters dataset example with no problems in hadoop mode on a single node (with another performance branch, sibling of this one). The longest job seems to be B*Bt job at the moment, will have to look at it to see if there's a space for efficiency improvement.
But the branch with VectorWritable preprocessing is especially usable as it saved on GC iterations a lot and runs much faster
dlyubimov2 added a comment -
• couple commits for U, V jobs went missing. restored.
• added couple options for computing U*pow(Sigma,0.5), V*pow(Sigma,0.5) outputs instead of U,V (makes "user"-"item" similarity computations easier)
dlyubimov2 added a comment -
a code drop from the head of my stable branch. Various number of computational improvements. Cloudera repo integration for CDH dependency. Automatic detection of the label writable type in the input and propagating that to the U output. This is not the most efficient code i have but that's the only one that contains 0 Mahout code hacks.
dlyubimov2 added a comment -
Working notes update:
• added more or less formal description of hierarchical QR technique.
dlyubimov2 added a comment -
Sorry for iterating too often, but this was a small but important fix for a showstopper.
• added orthonormality assertions in local tests for V, U (they pass with epsilon 1e-10 or better).
• small fixes to U, V jobs.
Now should be suitable for LSI type of work with documents having 10k-30k lemmas average.
BTW when inserting dependencies for CDH3b3, additional jackson jar is required in order for local hadoop test to work.
in CDH3b2 no such change is required.
Not sure about 0.21, but proper care should be taken as usual to integrate hadoop client's transitive dependencies into mahout dependencies.
dlyubimov2 added a comment - edited
Patch update.
• finalized mahout CLI integration & tested.
Tested on 10k x 10k dense matrix in distributed hadoop mode (compressed source sequence file 743mb) on my 3 yo dual core. It is indeed, as i expected, quite cpu-bound but good news is that it is so well parallelizable with most load on map-only jobs that it should be no problem to redistribute and front end doesn't require any cpu capacity at all. Square symmetric matrix of 200x200 sizes computes instantaneously.
The command line i used was:
bin/mahout ssvd -i /mahout/ssvdtest/A -o /mahout/ssvd-out/1 -k 100 -p 100 -r 200 -t 2
I also was testing this with CDH3b3 setup.
dlyubimov2 added a comment -
update to working notes
• changes in latest patch brought to sync with the doc (Q-Job mapper, no combiner)
• added issues section including my thoughts on limitations of this approach and possible attack angles to alleviate them.
dlyubimov2 added a comment -
Actually i think the biggest issue here is not scale for memory but what i call 'supersplits'.
if we have a row-wise matrix format, and by virtue of SSVD algorithm we have to consider no less than 500x500 blocks, then even with terasort 40tb 2000 node cluster block size parameter setting (128Mb) we are constrained to approx. ~30-50k densely wide matrices (even then the expectation is that half of the mapper's data would have to be downloaded from some other node). Which kind of defeats one of the main pitches of MR, code-data collocation. so in case of 1 mln densely wide matrices, and big cluster, we'd be downloading like 99% of data from somewhere else. But we already paid in IO bandwidth when we created input matrix file in the first place, so why should it be a giant inefficient model of a supercomputer in a cloud? Custom batching approach would be way more efficient.
I kind of dubbed the problem above as a 'supersplits problem' in my head.
-------
I believe i am largely done with this mahout issue as far as method and code are concerned. We, of course, need to test it on something sizable. Benchmarks thus are a pending matter, and i expect they will be net io-bound but they would be reasonably scaled for memory per discussion (less the issue of deficient prebuffering in VectorWritable on wide matrices) but additional remedies are clear if needed. There might be some minor tweaks for outputting U,V required. Maybe add one or two more map-only passes over Q data to get additional scale for m. Maybe backport for hadoop 0.20 if mahout decides to release this code.
Next problem i am going to ponder as a side project is devising an SSVD MR method on a block-wise serialized matrices. I think i can devise an SSVD method that can efficiently address "supersplits" problem (with more shuffle and sort I/O though but it would be much more mr-like). Since I think Mahout supports neither block wise formatted matrices, nor, respectively, any BLAS ops for such inputs, an alternative approach to matrix (de-)serialization would have to be created. Conceivable scenario would be to reprocess mahout's row-wise matrices into such SSVD block-wise input at additional expense, but single-purposed data perhaps may well just vectorise block-wise directly.
dlyubimov2 added a comment - edited
Patch update
• eliminates use of combiners. Combiner's code moved back to mapper of the same task. This makes first stage of hierarchical R merges more efficient and consistent with MR spec.
Retested with 100k rows matrix. I will be updating working notes to reflect the change shortly.
dlyubimov2 added a comment -
> You don't get a choice. The frame will run the combiner as many times as it feels it wants to. It will run in the mapper or reducer or both or neither.
Ted, thank you for pointing it. As i mentioned couple of comments above, this is very well understood, was a concern from the beginning, never came up as a problem in tests, but I will do a small patch per my comment above that will make processing complexity even faster albeit code might become a little uglier. I am very well aware mapper can spill some records past combiner and send them to sort in the reducer per spec.
In fact, i had version that does just that on another branch. i just need to yank it and bring in sync with this branch.
Thank you.
> This also raises the question of whether your combiner can be applied multiple times. I suspect yes. You will know better than I.
Yes, that's the optional hierarchy increment that I was talking about, which we can additionally implement if a billion for m and unbounded n is not good enough.
I think I wrote poorly here.
You don't get a choice. The framework will run the combiner as many times as it feels it wants to. It will run in the mapper or reducer or both or neither.
Your combiner has to be ready to run any number of times on the output of the mapper, or the output of other combiners, or a combination of the same. This isn't optional in Hadoop. It may not have happened in the test runs you have done, but that doesn't imply anything about whether it will happen at another time.
Ted Dunning added a comment -
That means that we are limited to cases with a few hundred million non-zero elements and are effectively unlimited on the number of potential columns of A.
I believe in the case of SSVD this statement is only partially valid.
It all depends on what you are spec'd to. Say we are spec'd to 1G plus java/MR overhead; then a few hundred million non-zero elements would take a few hundred megabytes multiplied by 8, which is already all, or more than all, of what I have. In my spec (a million non-zeros) it's only 8 MB, which seems OK. The SSVD assumption doesn't include any significant memory allocation for rows of A, and most importantly, it doesn't have to, I think. The philosophy here is that A is a file: read it with a buffer to optimize I/O, but my stream buffer doesn't have to be forced to be 1G on me.
dlyubimov2 added a comment -
Actually, I am seriously considering reworking the combiner approach into a two-pass-in-the-mapper approach.
This should be OK because the side file is going to be n/(k+p) times smaller than the original split. Most likely the IO cache will not even let us wait on IO in some cases; but if there is disk IO, it would run at 100% sequential speed.
And that should be even more efficient than asking the combiner to sort what is already sorted (which I mostly did out of aesthetics, as a combiner looks much more fanciful than explicit sequence file read/writes).
dlyubimov2 added a comment - - edited
Sounds to me like the reducer could replicate the combiner and thus implement the second step of your hierarchy which would avoid the second MR pass. You could have a single reducer which
receives all combiner outputs and thus merge everything.
First of all, the second-level hierarchical merge is done in the Bt job mappers, so moving this code into a reducer wouldn't actually win anything in terms of actual IO or the number of MR passes.
Secondly, I don't believe a single reducer pass is efficient; all parallelization is gone, isn't it? But the problem is not even that, but the limitation on how many Rs you can preload for the merges into the same process. Rs are preloaded as side info (and we relinquish one R with every Q we process in the combiner, so initially it loads all of them but then throws them away one by one). So we can't merge them all in one pass, but we can divide them into subsequences. If they are merged by subsequences, the subsequences must be completed, and the order is important and should be exactly the same for all Q blocks. The catch is that each Q update has to complete the merges of the remainder of the R sequence. Such a subsequence merge is described in the computeQHatSequence algorithm in my notes.
Finally, there's no need to sort. When we do the R merge, as shown in the notes, we have to revisit Q blocks (and Rs) exactly in the same order we just produced them in. The problem is that when we revisit a Q block, we have to have the tail of all subsequent R blocks handy. So even if we sent them to reducers (which was my initial plan), we'd have to sort them by the task id they came from and then by their order inside the task that produced them. And then we might need to duplicate the R traffic to every reducer process. A huge waste, again.
Since you can't guarantee that the combiner does any work, this is best practice anyway. The specification is that the combiner will run zero or more times.
That actually is absolutely valid, and I was very concerned about it in the beginning. I expected it, but it did not turn up in tests so far, so that issue has been slowly glimmering in my subconscious ever since.
If that actually is a problem, then yes, I might be forced to extend the Q pass to include a reducer, which IMO would be a humongous exercise in wasted IO bandwidth. There's no need for that; there's only a need for a local merge of R sequences and a second pass over the Q block sequence you just generated, in the same order you generated it. Even a combiner is an overshoot for that; if I were implementing it in a regular parallel batching way, there would be no need to reorder Q blocks at all.
I also considered that if I am forced to address this, I can still do it in the mapper only, without even a combiner, by pushing Q blocks to local FS side files and doing a second pass over them; that'd be much more efficient than pushing them to another networked process. Not to mention all the unneeded sorting in between, and all the arm-twisting I was struggling with to send them in same-task chunks, in the same order. There's still a rudimentary remainder of that attempt in the code (the key that carries a task id and order id for each Q block, which are never subsequently used anywhere). But saving a local side file in the mapper and doing a second pass over it is still way more attractive than the reducer approach if I need to fix this.
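A minimal sketch of that side-file pattern (a sketch only, not the patch code: applyPendingMerges() is a hypothetical placeholder for the Givens updates, and the temp location stands in for the real task-local directory; error handling omitted):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class QSideFile {
  // Pass 1: spill Q rows to a local sequence file as they are produced.
  // Pass 2: re-read them strictly in write order, apply the accumulated
  // R-merge (Givens) updates, and only then emit.
  static void twoPassOverQ(Configuration conf, Iterable<Vector> qRows) throws IOException {
    FileSystem localFs = FileSystem.getLocal(conf);
    Path side = new Path(System.getProperty("java.io.tmpdir"), "q-side.seq");
    SequenceFile.Writer w =
        SequenceFile.createWriter(localFs, conf, side, IntWritable.class, VectorWritable.class);
    int row = 0;
    for (Vector q : qRows) {
      w.append(new IntWritable(row++), new VectorWritable(q)); // sequential local write
    }
    w.close();

    SequenceFile.Reader r = new SequenceFile.Reader(localFs, side, conf);
    IntWritable key = new IntWritable();
    VectorWritable value = new VectorWritable();
    while (r.next(key, value)) {       // 100% sequential re-read, same order
      applyPendingMerges(value.get()); // hypothetical: replay queued Givens updates
      // ... emit(key, value) to the job output
    }
    r.close();
    localFs.delete(side, false);
  }

  static void applyPendingMerges(Vector qRow) {
    // placeholder for the hypothetical R-merge update logic
  }
}

This avoids the shuffle entirely: the re-read hits the local disk (or the OS page cache) at sequential speed, which is the point being made above.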
This also raises the question of whether your combiner can be applied multiple times. I suspect yes. You will know better than I.
Yes, that's the optional hierarchy increment that I was talking about, which we can additionally implement if a billion for m and unbounded n is not good enough.
dlyubimov2 added a comment - - edited
I think you are misunderstanding it a little. The actual implementation is not that naive; let me clarify.
I was hoping I misunderstood it.
And there's no reducer (i.e. no sizable shuffle and sort) here. At the end of this operation we have a bunch of Rs corresponding to the number of splits, and a bunch of intermediate Q blocks, still the same size, corresponding to the number of Q blocks.
Now we can repeat this process hierarchically with additional map-only passes over the Q blocks until only one R block is left. With 1G memory, as I said, my estimate is that we can merge up to 1000 Rs per combiner with one MR pass (less the extra overhead for a single Q block and other java things). (In reality this implementation has 2 levels in the hierarchy, which seems to point to over 1 bln rows, or about 1 mln Q blocks of some relatively moderate height r >> k+p; but like I said, with just one more map-only pass one can increase the scale of m to single trillions.) This hierarchical merging is exactly what I meant by 'making MR work harder' for us.
Sounds to me like the reducer could replicate the combiner and thus implement the second step of your hierarchy, which would avoid the second MR pass. You could have a single reducer which receives all combiner outputs and thus merge everything. Since you can't guarantee that the combiner does any work, this is best practice anyway. The specification is that the combiner will run zero or more times.
This also raises the question of whether your combiner can be applied multiple times. I suspect yes. You will know better than I.
Ted Dunning added a comment -
My real worry with your approach is that the average number of elements per row of A is likely to be comparable to p+k. This means that Y = A \Omega will be about as large as A. Processing that
sequentially is a non-starter and the computation of Q without block QR means that Y is processed sequentially. On the other hand, if we block decompose Y, we want blocks that fit into memory
because that block size lives on in B and all subsequent steps. Thus, streaming QR is a non-issue in a blocked implementation. The blocked implementation gives a natural parallel implementation.
I think you are misunderstanding it a little. The actual implementation is not that naive; let me clarify.
First, there is blocking. Moreover, it's hierarchical blocking.
The way it works, you specify a block height, which is more than k+p but ideally less than what an MR split would host (you can specify more, but then you may produce some network traffic to move non-collocated parts of the split). Blocks are processed completely in parallel; hence the initial degree of parallelism is m/r, where r is the average block height. They can be (and are) processed independently among the splits. The 'thin streaming QR' runs inside the blocks, not on the whole Y.
Secondly, the Y matrix, or even its blocks, is never formed. What is formed is a shifting intermediate Q buffer of size (k+p)xr and an intermediate upper-triangular R of size (k+p)x(k+p). Since they are triangular, there's a rudimentary implementation of the Matrix interface called UpperTriangular, so as not to waste space on the lower triangle while still allowing random access.
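A generic sketch of such packed triangular storage (hypothetical names, not necessarily the patch's UpperTriangular class): an n x n upper-triangular matrix needs only n(n+1)/2 doubles, and element (i,j) is still a closed-form offset away:

// Packed row-major storage of the upper triangle of an n x n matrix.
// Row i starts at offset i*n - i*(i-1)/2; element (i,j), i <= j, is at
// that offset plus (j - i). Anything below the diagonal is implicitly 0.
class PackedUpperTriangular {
  private final double[] data;
  private final int n;

  PackedUpperTriangular(int n) {
    this.n = n;
    this.data = new double[n * (n + 1) / 2];
  }

  double get(int i, int j) {
    return j < i ? 0.0 : data[i * n - i * (i - 1) / 2 + (j - i)];
  }

  void set(int i, int j, double v) {
    data[i * n - i * (i - 1) / 2 + (j - i)] = v; // caller ensures i <= j
  }
}

For n = k+p = 500 that is 125,250 doubles, i.e. roughly 1 MB per R, which matches the per-R size estimates further down.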
Thirdly, the hierarchy. When we form Q blocks, we have to update them with the Givens operations resulting from merging R matrices. This is done in the combiner, and it comes very naturally to it. If there are, say, z blocks in a mapper, then Q1 goes through the updates resulting from z merges of R, Q2 goes through the updates resulting from z-1 merges, and so on. Nothing is concatenated (or unblocked) there except the R sequence (but that is still a sequence, i.e. a sequentially accessed thing), for which I already provided memory estimates. Most importantly, it does not depend on the block height, so you can shrink the R sequence length if you have taller Q blocks; but taller Q blocks also take more memory to process at a time. There's a sweet spot to hit here with the parameters defining block height and split size, so that it maximizes the throughput. For k+p=500 I don't see any memory concerns there in a single combiner run.
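For reference, the rotation primitive behind those updates (a textbook sketch, not the patch's exact code): one Givens rotation zeroes one element by mixing two rows, and both the merged R and the affected Q blocks receive the same sequence of rotations:

// Rotate rows a and b so that b[col] becomes exactly zero.
// Math.hypot computes sqrt(x*x + y*y) without intermediate overflow/underflow,
// which is part of why Givens rotations are numerically well behaved.
static void givens(double[] a, double[] b, int col) {
  double x = a[col];
  double y = b[col];
  if (y == 0.0) {
    return; // already zero, nothing to do
  }
  double r = Math.hypot(x, y);
  double c = x / r;
  double s = y / r;
  for (int j = 0; j < a.length; j++) {
    double aj = a[j];
    double bj = b[j];
    a[j] = c * aj + s * bj;
    b[j] = -s * aj + c * bj; // at j == col this evaluates to 0
  }
}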
And there's no reducer (i.e. no sizable shuffle and sort) here. At the end of this operation we have a bunch of Rs corresponding to the number of splits, and a bunch of intermediate Q blocks, still the same size, corresponding to the number of Q blocks.
Now we can repeat this process hierarchically with additional map-only passes over the Q blocks until only one R block is left. With 1G memory, as I said, my estimate is that we can merge up to 1000 Rs per combiner with one MR pass (less the extra overhead for a single Q block and other java things). (In reality this implementation has 2 levels in the hierarchy, which seems to point to over 1 bln rows, or about 1 mln Q blocks of some relatively moderate height r >> k+p; but like I said, with just one more map-only pass one can increase the scale of m to single trillions.) This hierarchical merging is exactly what I meant by 'making MR work harder' for us.
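In symbols, one reading of that estimate: if a single merge pass can fold f R's (f of about 1000 at 1G of memory) and Q blocks are r rows high, then L levels of map-only merging reach roughly

\[ m \approx r \cdot f^{L}, \]

so with r on the order of 1000 and the current two-level hierarchy (L = 2) that is about 10^9 rows, and every additional pass multiplies the bound by another factor of f.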
There is a poor illustration of this hierarchical process in the doc that makes it perhaps clearer than words.
Also let me point out that the processes involved in R merging are map-only, which means that if we play the splitting game right in MR, there would practically be no network IO, per MR theory. This is very important, IMO, at such scale. The only IO that occurs is to 'slurp' R sequences from HDFS before the next stage of the hierarchical R merge. For k+p=500, the size of one R, dense and uncompressed, is approximately 1 MB, so for a sequence of a thousand Rs the size of such slurp IO would be about 1G; that is less than what I move today in a single Pig step over 200k proto-packed log records in production, and that finishes in a minute.
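A quick sanity check of that per-R size (assuming doubles and the packed upper-triangular layout sketched earlier):

\[ \operatorname{size}(R) = \frac{(k+p)(k+p+1)}{2} \times 8 \ \text{bytes} = \frac{500 \cdot 501}{2} \times 8 \approx 1.0\ \text{MB}, \]

so a sequence of a thousand Rs is indeed about 1 GB of slurp IO.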
Bottom line, let's benchmark it, so we don't have to guess, especially if we can do A vector streaming. I am personally having trouble with the logistics of this so far, as I mentioned before; I will get to benchmarking it sooner or later. The important thing for me at this point was to make sure it does the correct computation (which it does) and to make an educated guess about the scale: a billion by a million without vector streaming support, or a billion by a gazillion with it, with the potential to extend the m scale a thousandfold with each additional map-only pass over the Q data (which is (k+p)xm and thus again unbounded in n).
dlyubimov2 added a comment - - edited
> I think that my suggested approach handles this already.
> The block decomposition of Q via the blockwise QR decomposition implies a breakdown of B into column-wise blocks which
> can each be handled separately. The results are then combined using concatenation.
Ted, yes, I understand that part, but I think we are talking about different things. What I am talking about is the formation of Y rows, well before orthonormalization is even a concern.
What I mean is that right now VectorWritable loads the entire thing into memory; hence the bound for the width of A (i.e. we can't load an A row that is longer than some memory chunk we can afford for it).
I understand that now.
The current limitation is that the sparse representation of a row has to fit into memory. That means that we are limited to cases with a few hundred million non-zero elements and are effectively
unlimited on the number of potential columns of A.
The only other place that the total number of elements in a row comes into play is in B. Using the block form of Q, however, we never
have to store an entire row of B, just manageable chunks.
My real worry with your approach is that the average number of elements per row of A is likely to be comparable to p+k. This means that Y = A \Omega will be about as large as A. Processing that
sequentially is a non-starter and the computation of Q without block QR means that Y is processed sequentially. On the other hand, if we block decompose Y, we want blocks that fit into memory because
that block size lives on in B and all subsequent steps. Thus, streaming QR is a non-issue in a blocked implementation. The blocked implementation gives a natural parallel implementation.
Ted Dunning added a comment -
I think that my suggested approach handles this already.
The block decomposition of Q via the blockwise QR decomposition implies a breakdown of B into column-wise blocks which can each be handled separately. The results are then combined using concatenation.
Ted, yes, I understand that part, but I think we are talking about different things. What I am talking about is the formation of Y rows, well before orthonormalization is even a concern.
What I mean is that right now VectorWritable loads the entire thing into memory; hence the bound for the width of A (i.e. we can't load an A row that is longer than some memory chunk we can afford for it).
However, what an A row participates in, in each case, is a bunch of (namely, k+p) dot products. To produce those, it is sufficient to examine the A row sequentially (i.e. streamingly) in one pass while keeping only k+p values in memory as dot-product accumulators.
Hence, if we equipped VectorWritable with a push-parser-like element handler (the notorious DocumentHandler from SAXParser immediately pops into mind), we would never have to examine more than one element of an A row at a time, and hence we would no longer be memory-bound in n (the width of A). That handler would form Y rows directly during the sequential examination of A rows. Identical considerations are in effect when forming the Qt*A partial products (I already checked this).
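A minimal sketch of what such a push-style contract might look like (all names here are hypothetical; this is the proposal being described, not any existing VectorWritable API): the reader walks the serialized vector once, firing one callback per element, and the Y-row consumer holds only the k+p accumulators:

// Hypothetical push-parser contract: one callback per serialized element,
// so no in-memory Vector is ever materialized.
interface VectorElementHandler {
  void onElement(int index, double value);
}

// Hypothetical on-the-fly Omega source (e.g. re-generated from a seed),
// so the n x (k+p) projection is never materialized either.
interface OmegaSource {
  double get(int row, int column);
}

// Forms one row of Y = A * Omega during a single streaming scan of an A row,
// keeping only the k+p dot-product accumulators in memory.
class YRowAccumulator implements VectorElementHandler {
  private final double[] yRow;
  private final OmegaSource omega;

  YRowAccumulator(int kp, OmegaSource omega) {
    this.yRow = new double[kp];
    this.omega = omega;
  }

  @Override
  public void onElement(int index, double value) {
    for (int j = 0; j < yRow.length; j++) {
      yRow[j] += value * omega.get(index, j); // advance all k+p dots at once
    }
  }

  double[] yRow() {
    return yRow;
  }
}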
I already thought about this approach a little (and I believe Jake Mannix also posted something very similar recently, to the effect of sequential vector examination backed by a streaming read). Since it touches VectorWritable internals, I think I would need to make a change proposal for it and, if it seems reasonable, handle it in another issue. I will do so, but I need to check a couple of things in Hadoop first, to see whether it is feasible within the current MR framework and doesn't blow off all the benefits of code-data collocation.
If that proposal is implemented, and the MR considerations are tackled, we will have an SSVD that scales to about a billion rows for 500 singular values and 1G of mapper memory vertically (m), and to a gazillion in n (width).
Theoretically. How about that.
dlyubimov2 added a comment -
We could scale n even further by splitting the vector into slices, as said before, but not before we solve the problem of code-data collocation in 'supersplits' for wide matrices. If we don't do that, it will cause a lot of IO in the mappers and kind of defeats the purpose of MR, IMO.
I think that my suggested approach handles this already.
The block decomposition of Q via the blockwise QR decomposition implies a breakdown of B into column-wise blocks which can each be handled separately. The results are then combined using concatenation.
Ted Dunning added a comment -
small update.
• added special treatment for SequentialAccessSparseVector during dot-product computation, like it is done everywhere else (roughly the pattern in the sketch below).
I guess that is about all I can do at this point for scaling n efficiently.
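The sketch referenced in the bullet above (a generic pattern, not the actual patch diff; iterateNonZero() is the sparse-iteration call in the Mahout API of this era):

import java.util.Iterator;
import org.apache.mahout.math.Vector;

// Dot product of a sparse (e.g. sequential-access) vector with a dense array:
// O(nnz) via one sequential scan of the nonzeros, instead of n random probes
// against the sparse structure.
static double dot(Vector sparseRow, double[] dense) {
  double sum = 0.0;
  for (Iterator<Vector.Element> it = sparseRow.iterateNonZero(); it.hasNext();) {
    Vector.Element e = it.next();
    sum += e.get() * dense[e.index()];
  }
  return sum;
}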
We could scale n even further by splitting the vector into slices, as said before, but not before we solve the problem of code-data collocation in 'supersplits' for wide matrices. If we don't do that, it will cause a lot of IO in the mappers and kind of defeats the purpose of MR, IMO.
dlyubimov2 added a comment -
Although, can one really implement a streaming read of a value for a mapper?
Of course, we can split a vector into several slices shoved into sequential records. That would require some work to tweak SequenceFile's record reader logic so it doesn't stop in the middle of a vector (and, respectively, skips records to the next vector at the beginning of a split), but the possibility definitely exists.
Not sure whether Mahout already has something like that; I need to look closer. But it should be possible to develop something like a DistributedSlicedRowMatrix.
dlyubimov2 added a comment -
Also, if we consider wide matrices instead of tall matrices, then the maximum number of mappers might be reduced, which would affect parallelism on big clusters.
Another consideration for extremely wide matrices, I guess, is that an A block in this case will certainly overshoot several standard DFS blocks, so it may only be efficient if we force data collocation for a big number of blocks (or just increase the number of blocks). I am not sure Hadoop is quite there yet to tweak that on an individual-file basis.
dlyubimov2 added a comment -
Although, can one really implement a streaming read of a value for a mapper? I am not sure; I guess I need to look at the implementation of the sparse sequential vector. It would seem to me, though, that SequenceFile.next() requires deserialization of the whole record, doesn't it? So reading a mapper value in a streaming fashion should not be possible, not without some sort of hack, right?
dlyubimov2 added a comment -
Doesn't the streaming QR decomposition require that we look at each row of Y one at a time in a streaming fashion? That is, isn't that a completely sequential algorithm?
Even if it is dense, one such vector would take 8MB of memory at a time; but sparse sequential vectors should be OK too (it will probably require a little tweak during the Y computations to scan the row once sequentially instead of k+p times, as I think is done now under the assumption that access can be random).
Oh. I guess you hinted at the possibility that if we use sparse sequential vectors for A rows, then we are memory-unbounded in n! So who cares about m then! And then we can have a billion by a billion even with this implementation. Wow. That's an extremely powerful suggestion. But that definitely requires code review and performance tests. And we go over A only one time, so there's no need to revisit the sparse vectors. I'll take a look at it to see if I can engineer a solution. If it is possible at all, it should be extremely simple.
dlyubimov2 added a comment - - edited
There is a catch, though. With this implementation, m is memory-bound; not in the mapper, though, but in the combiners and the mapper of the next step.
But my reasoning was: with 1G memory and k+p=500, there seems to exist a rather wide spectrum of admissible combinations of r (block height) and minSplitSize (essentially governing the number of mappers needed) that would cover a billion rows, and the sweet spot of this combination seems to exceed 1 bln rows.
In addition, there are 2 remedies to consider. The first is a rather straightforward application of compression to the R sequence.
The second remedy results from the fact that our QR merge process is hierarchical. Right now it's a two-level hierarchy, i.e. if the processes at each stage merge 1000 Q blocks, then at the next level we can merge another 1000 Q blocks, so the total number of Q blocks is approx. 1 mln (for 1G memory, some 600-800k blocks is more plausible). Assuming Q blocks are k+p rows high, that gives us approximately 300 mln rows for m. But the trick is that if we have 1G of memory in the mapper, then the Q blocks don't have to be 500 rows high; they can easily be 200k rows high. That immediately puts us, conservatively, in the range of 10 bln rows or so without any additional remedies, which I think is about the scale of Google's document index in 2005, if we wanted to do LSI on it, assuming there are 1 mln lemmas in English (which there aren't).
But if we add another map-only pass over the blocked Q data, then we can have 1 bln blocks with all the considerations above, and that should put us in the range of 30 trillion rows of A. This number grows exponentially with every added MR pass, which is why I am saying m is virtually unbounded.
Adding these remedies seems pretty straightforward, but for a first stab at the problem my estimates for the m bound seem adequate. With these kinds of numbers, this may easily become a technology in search of a problem. We may add some analysis of the optimality of the combination of block size and minSplitSize. My initial thought is that finding the maximum here is pretty straightforward; it seems to be a task of finding the maximum of a second-degree polynomial function.
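Concretely, if the throughput model over the tunable parameter is quadratic, say f(r) = a r^2 + b r + c with a < 0, the optimum is simply the vertex:

\[ r^{*} = -\frac{b}{2a}. \]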
It's more likely that much more memory would go into the precision effort rather than into maximizing the m bound, though. E.g., instead of covering scale, the resources may go into increasing precision and oversampling, in which case additional map-only passes over Q will be tremendously useful (imagine: this could do k+p=5000 with just one additional map-only pass over the Q data). If this is the case, then the next low-hanging-fruit step is to add a map-only hierarchical merge of Rs onto the Q blocks.
However, it's a stochastic algorithm, and as such it is probably not good enough for processes that would require such precision (certainly not for math work on rocket boosters, I think). That said, k+p=5000 probably doesn't make sense. I think applications divide sharply into 2 categories, where precision requirements are either much higher than that or much lower; I can't think of much in between.
Apologies for the multiple edits, mostly typos and corrections to numbers. These numbers are still pretty offhand, an exercise in mental arithmetic; actual mileage will probably vary by as much as +-40% because of unaccounted overhead. I tried to be conservative, though.
dlyubimov2 added a comment - - edited
Yes, it is 100% streaming in terms of A and Y rows. The assumption is that we are OK loading one A row into memory at a time, and we optimize for tall matrices (such as a billion by a million). So from a memory point of view we are bounded in n but not in m. Even if it is dense, one such vector would take 8MB of memory at a time; but sparse sequential vectors should be OK too (it will probably require a little tweak during the Y computations to scan the row once sequentially instead of k+p times, as I think is done now under the assumption that access can be random).
For memory, the concern is the random-access Q blocks, which can be no less than (k+p) by (k+p) (that is, for the case of k+p=500, that gets to be 2 MB). But this is all as far as memory is concerned (well, actually 2 times that, plus there's a Y lookahead buffer to make sure we can safely form the next block, plus there's a packed R; so for k+p=500 it looks like the minimum memory requirement is roughly in the area of 7-8 MB, which is well below anything).
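Spelling that arithmetic out (doubles assumed):

\[ (k+p)^2 \times 8\ \text{bytes} = 500^2 \times 8 \approx 2\ \text{MB} \]

per dense (k+p) x (k+p) Q block; two of those, plus the Y lookahead buffer and a ~1 MB packed R, lands in the stated 7-8 MB range.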
The Y lookahead buffer is not a requirement of the algorithm itself; it's MR-specific, to make sure we have at least k+p rows to process in the split before we start the next block. I thought about it, but not much; at a 2 MB minimum dense memory requirement it did not strike me as a big issue.
CPU may be more of a problem; it is quadratic, as in any QR algorithm known to me. But I am actually not sure whether a Givens series would produce more crunching than, e.g., Householder's. Givens is certainly as numerically stable as Householder's and better than Gram-Schmidt. In my tests on a 100k-tall matrix the orthonormality residuals seem to hold at about 10e-13, and surprisingly I did not notice any degradation at all compared to smaller sizes. (Actually, I happened to read about LAPACK methods that prefer Givens for the possibility of reordering and thus easier parallelization.)
Anyway, speaking of numerical stability, whatever degradation occurs, I think it would be dwarfed by the stochastic inaccuracy, which grows quite significantly in my low-rank tests. Perhaps for k+p=500 it should degrade much less than for 20-30.
But I guess we need to test at scale to see the limitations.
dlyubimov2 added a comment - - edited
Doesn't the streaming QR decomposition require that we look at each row of Y one at a time in a streaming fashion? That is, isn't that a completely sequential algorithm?
Ted Dunning added a comment -
Ted, sorry, I kind of polluted your issue here. Thank you for your encouragement and help. I probably should've opened another issue once it was clear it had diverged far enough, instead of continuing to put stuff here.
You didn't pollute anything at all. You contributed a bunch of code here.
So far, the only thing that I worry about is the mixture of array and matrices, but even that might be fine.
I was dubious of the scalability of the streaming QR, but if that works then we should be good to go with very little more work.
Do you have any idea if this will work for large sparse matrices as opposed to the 100K x 100 matrix you are using?
Ted Dunning added a comment -
Final trunk patch for CDH3 or the 0.21 API.
This includes code cleanup, javadoc updates, and a mahout CLI class (not tested, though).
All existing tests and this test are passing. I tested a 100Kx100 matrix in local mode only; the S values coincide to 1e-10 or better.
Changes to dependencies I had to make:
• hadoop 0.21 or cdh3, to support multiple outputs
• local MR mode has a dependency on commons-httpclient, so I included it for test scope only in order for the test to work
• changed the apache-math dependency from 1.2 to 2.1; actually the mahout math module seems to depend on 2.1 too, and it is not clear why it was not transitive for this one
• commons-math 1.2 seemed to have depended on commons-cli and 2.1 doesn't have it transitively anymore, but one of the classes in core required it, so I added commons-cli in order to fix the build.
Ted, sorry, I kind of polluted your issue here. Thank you for your encouragement and help. I probably should've opened another issue once it was clear it had diverged far enough, instead of continuing to put stuff here.
This should be compatible with DistributedRowMatrix. I have not done a real distributed test yet, as I don't have a suitable data set, but perhaps somebody in the user community with an interest in the method could do it faster than I'll get to it. I will do tests at moderate scale at some point, but I don't want to do it on my company's cluster yet, and I don't exactly own a good one myself.
I did have a rather mixed use of mahout vector math and plain dense arrays, partly because I did not quite have enough time to study all the capabilities of the math module, and partly because I wanted explicit access to memory for control over its more efficient reuse in mass iterations. This may or may not need to be rectified over time, but it seems to work pretty well as is.
The patch is a git patch (so one needs to use patch -p1 instead of -p0). I know the standard is to use svn patches... but I already used git for pulling the trunk (it so happens I prefer git in general too, so I can have my own commit tree and branching for this work).
If there's enough interest from the project in this contribution, I will support it, and if requested I can port it to 0.20, if that's the target platform for 0.5, as well as do other Mahout-specific architectural tweaks. Please kindly let me know.
Thank you.
dlyubimov2 added a comment - - edited
patch m3:
• updated U, V jobs to produce m x k and n x k geometries, respectively. Notes updated to reflect that.
• added minSplitSize setting to enable larger n and (k+p).
• added apache license statements and minor code cleanup.
dlyubimov2 added a comment
Added U and V computations. Labels of A rows are propagated to U rows as keys. Bumped up the test to A of dimensions 100,000x100 (so it produces 3 A splits now). Actually I just realized that technically U and V should be k columns wide, and I am producing k+p. OK, who cares; the distinction between k and p becomes really nominal now.
I'd appreciate it if somebody reviewed the V computation (p. 5.5.2 in the working notes), just in case; I already forget the derivation of the V computation.
Another uncertainty I have is that I am not sure how to best construct tests for the U and V outputs, but I am not sure I care that much, since the computation logic is really trivial (compared to everything else). Existing tests verify singular values against an independent in-memory SVD and assert orthogonality of the Q output only.
Stuff that is left is really minor and Mahout-specific or engineering:
– figure out how to integrate with the Mahout CLI
– add the minSplitSize parameter to the CLI, and others (k, p, computeU, computeV, ...)
– do we want to backport it to Apache 0.20? Multiple outputs are crippled in 0.20, and I don't know how one can live without multiple outputs in a job like this.
dlyubimov2 added a comment (edited)
Working notes for the process I used. I think these are pretty much final. I guess it turned out to be a little voluminous, but it mentions pretty much all the essential details I may want to put down for my future reference. If one reads it, he/she may start directly with p. 5 of the MapReduce implementation and then refer to the algorithms in previous sections and the discussion that led to their formation as needed.
dlyubimov2 added a comment
git patch m1: WIP, but an important milestone: prototype & MR implementation at the level of computing the full Q and singular values. As I mentioned, it needs CDH3b2 (or b3).
The local MR test runs the MR solver in local mode for a moderately low-rank random 80,000x100 matrix (r=251, k+p=100). Test output I get on my laptop (first are the singular values for the 100x100 BBt matrix using the commons-math eigensolver; second is the output of SVs produced by the Colt SVD of the same 80,000x100 source matrix):
--SSVD solver singular values:
svs: 4220.258342 4215.924299 4213.352353 4210.786495 4203.422385 4201.047189 4194.987920 4193.434856 4187.610381 4185.546818 4179.867986 4176.056232 4172.784145 4169.039073 4168.384457 4164.293827
4162.647531 4160.483398 4157.878385 4154.713189 4152.172788 4149.823917 4146.500139 4144.565227 4142.625983 4141.291209 4138.105799 4135.564939 4134.772833 4129.223450 4129.101594 4126.679080
4124.385614 4121.791730 4119.645948 4115.975993 4112.947092 4109.586452 4107.985419 4104.871381 4102.438854 4099.762117 4098.968505 4095.720204 4091.114871 4090.190141 (...omitted) 3950.897035
--Colt SVD solver singular values:
svs: 4220.258342 4215.924299 4213.352353 4210.786495 4203.422385 4201.047189 4194.987920 4193.434856 4187.610381 4185.546818 4179.867986 4176.056232 4172.784145 4169.039073 4168.384457 4164.293827
4162.647531 4160.483398 4157.878385 4154.713189 4152.172788 4149.823917 4146.500139 4144.565227 4142.625983 4141.291209 4138.105799 4135.564939 4134.772833 4129.223450 4129.101594 4126.679080
4124.385614 4121.791730 4119.645948 4115.975993 4112.947092 4109.586452 4107.985419 4104.871381 4102.438854 4099.762117 4098.968505 4095.720204 4091.114871 4090.190141 (....omited) 3950.897035
I will be updating my notes with a couple of optimizations I applied in this code that are not yet mentioned.
dlyubimov2 added a comment (edited)
The QR step is now fully verified with MapReduce emulation. Updated the QR step document with fixes that resulted from verification. I will be putting all that into a complete MapReduce implementation, hopefully within a couple of weeks now. The initial patch will depend on CDH3b3 Hadoop; if we need a backport to the legacy API, that will take some more time, on a best-effort basis on my part.
dlyubimov2 added a comment
The QR step doc is ready for review.
Especially sections 3.2 and on, which I still haven't worked up in the actual code. These reflect my original ideas for dealing with blockwise QR in a MapReduce setting. I am probably retracing some block computations already found elsewhere (such as in LAPACK), but I think it may actually work out OK.
dlyubimov2 added a comment
I am currently working to drive the working prototype toward a version with a standalone thin QR step, which would be memory-independent of the number of rows in A and would result in a BBt eigensolution of only (k+p)x(k+p) dimensionality, the rest being driven by MapReduce. I've got a single-stream version working seemingly well; here's the update on this WIP. Although in the end I am not sure it would offer any real-life improvement, as it would seem to require a second pass over A: it is not possible to finish the Q^t x B computation in a single step with this approach.
Still, after 2 passes we should be done with the eigenvalues (and perhaps the (k+p)x(k+p) dimensionality of the eigensolver input would allow us to increase the oversampling p somewhat, hence precision). Hard to see from here yet, though. Additional (optional) MR steps would only be needed if U or V or both are desired.
dlyubimov2 added a comment
These are really the same.
Updated version.
I think that this is actually a feasible algorithm.
Ted Dunning added a comment
I think that the outline of the algorithm is now in place. As it currently stands, it should scale to
about a thousand cores (maybe more) and should allow very small memory footprint machines
to participate.
That level should allow us to decompose data in the scale of 1-10 billion non-zeros, at a guess.
Ted Dunning added a comment
I just attached an update to the original outline document I posted. The gist of it is that the Q_i need to be arranged in block-diagonal form in order to form a basis of A\Omega. When that is done, my experiments show complete agreement with the original algorithm.
Here is R code that demonstrates decomposition without blocking and a 2-way block decomposition:
# SVD decompose a matrix, extracting the first k singular values/vectors
# using k+p random projection
svd.rp = function(A, k=10, p=5) {
  # the projection matrix must have as many rows as A has columns
  n = ncol(A)
  y = A %*% matrix(rnorm(n * (k+p)), nrow=n)
  q = qr.Q(qr(y))
  b = t(q) %*% A
  svd = svd(b)
  list(u=q%*%svd$u, d=svd$d, v=svd$v)
}
# block-wise SVD decompose a matrix, extracting the first k singular values/vectors
# using k+p random projection
svd.rpx = function(A, k=10, p=5) {
  n = nrow(A)
  # block sizes
  n1 = floor(n/2)
  n2 = n - n1
  # the projection matrix must have as many rows as A has columns
  r = matrix(rnorm(ncol(A) * (k+p)), nrow=ncol(A))
  A1 = A[1:n1,]
  A2 = A[(n1+1):n,]
  # block-wise multiplication and basis
  y1 = A1 %*% r
  q1 = qr.Q(qr(y1))
  y2 = A2 %*% r
  q2 = qr.Q(qr(y2))
  # construction of the full block-diagonal q (not really necessary)
  z1 = diag(0, nrow=nrow(q1), ncol=(k+p))
  z2 = diag(0, nrow=nrow(q2), ncol=(k+p))
  q = rbind(cbind(q1, z1), cbind(z2, q2))
  b = t(q) %*% A
  # we can compute b without forming the block diagonal Q
  bx = rbind(t(q1)%*%A1, t(q2)%*%A2)
  # now the decomposition continues
  svd = svd(bx)
  # return all the pieces for checking
  list(u=q%*%svd$u, d=svd$d, v=svd$v, q1=q1, q2=q2, q=q, b=b, bx=bx)
}
Note that this code has a fair bit of fat in it for debugging or illustrative purposes.
Ted Dunning added a comment
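For anyone following along in Python rather than R, here is a rough NumPy translation of the unblocked svd.rp above. This is an illustrative sketch added for clarity; it is not part of the patch or of the original comment, and the names are placeholders:

import numpy as np

def svd_rp(A, k=10, p=5, seed=None):
    # Randomized SVD via a k+p random projection (unblocked sketch).
    rng = np.random.default_rng(seed)
    m, n = A.shape
    omega = rng.standard_normal((n, k + p))   # random projection matrix
    y = A @ omega                             # samples the range of A
    q, _ = np.linalg.qr(y)                    # orthonormal basis for Y
    b = q.T @ A                               # small (k+p) x n matrix
    u_b, s, vt = np.linalg.svd(b, full_matrices=False)
    return (q @ u_b)[:, :k], s[:k], vt[:k].T  # U (m x k), sigma, V (n x k)

Truncating to the first k columns drops the p oversampling columns; as noted elsewhere in this thread, the distinction between k and k+p is largely nominal.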
Updated outline document to show corrected form of block-wise extraction of Q
Ted Dunning added a comment
My quick scan of the literature seems to indicate that the best promise for full-scale QR over MapReduce is row-wise Givens, with the top k+p rows of the blocks being processed in combiners and reducers for the remainder of the first step of the Givens QR. This can be done in the first step of the job, but I have yet to figure out how to combine and reduce the 'rho' intermediate results into the final Q. [Golub, Van Loan, 3rd ed.], section 5.2.3.
dlyubimov2 added a comment
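As background for the row-wise Givens idea above, here is a minimal NumPy sketch of a single Givens rotation used to zero one matrix entry. This is an editorial illustration only, not code from the patch, and it does not cover the compact 'rho' storage scheme from Golub & Van Loan:

import numpy as np

def givens(a, b):
    # Return (c, s) such that [[c, s], [-s, c]] applied to (a, b) gives (r, 0).
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def zero_entry(R, i, j, col):
    # Rotate rows i and j of R in place so that R[j, col] becomes 0.
    c, s = givens(R[i, col], R[j, col])
    R[[i, j]] = np.array([[c, s], [-s, c]]) @ R[[i, j]]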
Attaching my first-iteration implementation doc.
dlyubimov2 added a comment
So I got to singular values now. I ran a unit test with k+p=n. When I parameterize the algorithm so that only one Q-block is produced, the eigenvalues match the stock result to at least 1e-5, which is expected under the circumstances. However, as soon as I increase the number of Q-blocks above 1, the singular values go astray by as much as 10%. Not good. In both cases, the entire Q passes the orthonormality test. I guess it means that, as I thought before, doing block orthonormalization this way does result in a subspace different from the original span of Y. I need to research doing orthonormalization with blocks. I think that's the only showstopper left. It may result in a rewrite that splits the one job producing both Q and Bt into several, though.
dlyubimov2 added a comment (edited)
Yes, I mean rank(Y-block) < (k+p) sometimes.
OK. I don't know how often matrix A may be too sparse. Just in case, I gave it a thought, and here's what I think may help to account for this.
It would seem that we can address it by keeping a vector L of dimension k+p, where L[i] = the number of blocks of Q where rank(Q-block) > i.
If B' is compiled in the same pass as B' = sum[ Q^t_(i*) A_(i*) ], then it just means that for the actual B we need to correct the rows of B as B_(i*) = (1/L[i]) * B'_(i*). Of course we don't actually have to correct them, but rather just keep in mind that B is defined not only by the data but also by this scaling vector L, so subsequent steps may just account for it.
Of course, as an intermediate validation step, we check whether any of the L[i] is 0; if it is, it pretty much means that rank(A) < k+p, and we can't have a good SVD anyway, so we will probably raise an exception in this case and ask to consider reducing the oversampling or k. Or perhaps it is a bad case for distributed computation anyway.
Right now I am just sending partial L vectors as a q row with index -1 and summing them up in the combiner and reducer.
dlyubimov2 added a comment
I think that what you are saying is that some of the A_i blocks that make up A may be rank deficient.
That is definitely a risk if the number of rows in A_i is small compared to the size of Q_i. If the number of rows of A_i is much larger than the size of Q_i, then that is very, very unlikely.
Regardless, we will still have Q*Q = I and Q_i will still span the projection of A_i even if A_i and thus A_i \Omega is rank deficient. Thus Q will still span A \Omega. This gives us Q Q* A \approx A
as desired.
So, my feeling is that what you say is correct, but it isn't a problem.
Ted Dunning added a comment
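A quick numerical illustration of the Q*Q = I point above (an editorial addition, not from the original thread): Householder-based QR, which is what NumPy uses, returns orthonormal columns even when the input block is rank-deficient.

import numpy as np

rng = np.random.default_rng(0)
# A deliberately rank-deficient Y block: 8 rows, 4 columns, rank 2.
y = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 4))
q, r = np.linalg.qr(y)
print(np.allclose(q.T @ q, np.eye(4)))  # True: the columns are still orthonormal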
Another detail in the Q computation:
For now we assume that Q = (1/L)Q', where Q' is the output of block orthonormalization and L is the number of blocks we ended up with.
However, in the case when A is rather sparse, some Y blocks may not end up spanning R^(k+p), in which case orthogonalization would not produce normal columns. It doesn't mean that we can't orthonormalize the entire Y, though. It just means that L is not one value for the entire Q but rather is individual for every column vector of Q, and our initial assumption Q = (1/L)Q' doesn't always hold.
Which is fine per se, but then it means that there are perhaps quite frequent exceptions to the assumption B = (1/L) Q'^transpose * A. It would be easy to compute B^transpose in the same map-reduce pass as Q using multiple outputs, one for Q and one for B-transpose (which is what I am doing right now). But unless there's a workaround for the problem above, this B^transpose is incorrect for some sparse cases.
Ted, any thoughts? Thank you.
dlyubimov2 added a comment
Actually, I reviewed the getSplits() code for sequence files. It seems to honor the minSplitSize property of FileInputFormat. What's more, it makes sure that the last block is no less than 1.1 times the min split size. So that should work nicely: if we get an insufficient number of rows in mappers, I guess we can just increase minSplitSize. Unless the input is partitioned so that min split size > min file size in the input.
dlyubimov2 added a comment
I have a couple of doubts. I do amended Gram-Schmidt for the blocks of Y to produce blocks of Q, but while Q would end up orthonormal, I am not sure that Q and Y would end up spanning the same space. Although the fact that Y is a random product means Q may also be a more or less random basis, so maybe it doesn't matter so much that span(Q) = exactly span(Y).
Since Q is orthonormal and since QR = Y, Q is exactly a basis of Y. The only issue is that R isn't really right triangular. That doesn't matter here.
A second concern is still the situation when the last split produced by MR doesn't have the minimally sufficient k+p records of A for producing an orthogonal Q. The ideal outcome is then just to add it to another split, but I can't figure out an easy enough way to do that within the MR framework (especially if the input is serialized using a compressed sequence file).
If necessary, the input format can just note when the last block is small compared to previous blocks and round up on the next-to-last block.
Ted Dunning added a comment
I have a couple of doubts. I do amended Gram-Schmidt for the blocks of Y to produce blocks of Q, but while Q would end up orthonormal, I am not sure that Q and Y would end up spanning the same space. Although the fact that Y is a random product means Q may also be a more or less random basis, so maybe it doesn't matter so much that span(Q) = exactly span(Y).
A second concern is still the situation when the last split produced by MR doesn't have the minimally sufficient k+p records of A for producing an orthogonal Q. The ideal outcome is then just to add it to another split, but I can't figure out an easy enough way to do that within the MR framework (especially if the input is serialized using a compressed sequence file). One way is to do custom split indexing based on the number of records encountered (similar to what that LZO MR project does), but it sounds too complicated to me. Another way is just to do a pre-pass over A and pre-partition it so that this condition is satisfied, and then have a custom split so that there's 1 mapper per partition. But that's still one additional preprocessing step which we'd make just for the sake of a fraction of A. Ideas are welcome here.
dlyubimov2 added a comment
I am chipping away at the mapper implementation on weekends. So far I got to Q orthogonalization, and should be able to produce BB^t at the next one.
I'll post a patch when there's something working, but it is slow progress due to my workload. My employer at the moment is not eager to advance tasks that may depend on this, so I have to do it on my own time.
dlyubimov2 added a comment
Postponing since it is still too raw.
More minor comments:
□ Don't forget a copyright header for new files
□ murmurInt should inline those constants or declare them as constants
□ More VirtualRandomMatrix fields ought to be final, conceptually
□ VirtualRandomVector doesn't need "size"
□ "Utils" classes ought to have a private constructor, IMHO
Sean Owen added a comment
Here is a work-in-progress patch that illustrates how I plan to do the stochastic multiplication.
For moderate sized problems, this will be the major step required since all of the dense intermediate products will fit in memory. For larger problems, additional tricks will be necessary.
Ted Dunning added a comment
Per Ted's request, I am attaching a conspectus of our previous discussion of Ted's suggested mods to Tropp's stochastic SVD. It doesn't include Q orthonormalization.
Dmitriy Lyubimov added a comment
Intrinsic ultracontractivity, conditional lifetimes and conditional gauge for symmetric stable processes on rough domains
Results 1 - 10 of 12
- Ann. Probab , 2002
"... General gauge and conditional gauge theorems are established for a large class of (not necessarily symmetric) strong Markov processes, including Brownian motions with singular drifts and
symmetric stable processes. Furthermore, new classes of functions are introduced under which the general gauge an ..."
Cited by 21 (13 self)
Add to MetaCart
General gauge and conditional gauge theorems are established for a large class of (not necessarily symmetric) strong Markov processes, including Brownian motions with singular drifts and symmetric
stable processes. Furthermore, new classes of functions are introduced under which the general gauge and conditional gauge theorems hold. These classes are larger than the classical Kato class when
the process is Brownian motion in a bounded C 1,1 domain. 1. Introduction. Given a strong Markov process X and a potential q, the conditional expectation u(x, y) of the Feynman–Kac transform of X by
q is called the conditional gauge function. (The precise definition will be given later.) The function u is important in studying the potential theory of the Schrödinger-type operator L + q, as it is
the ratio of the Green’s function of L + q and that
- TOHOKU MATH. J. , 2008
"... We extend the concept of intrinsic ultracontractivity to non-symmetric semigroups and prove the intrinsic ultracontractivity of the Dirichlet semigroups of non-symmetric second order elliptic
operators in bounded Lipschitz domains. ..."
Cited by 21 (18 self)
Add to MetaCart
We extend the concept of intrinsic ultracontractivity to non-symmetric semigroups and prove the intrinsic ultracontractivity of the Dirichlet semigroups of non-symmetric second order elliptic
operators in bounded Lipschitz domains.
- ZBL 1112.47034 MR 2231884 , 2006
"... Let Xt be the relativistic α-stable process in Rd, α ∈ (0, 2), d>α, with infinitesimal generator H (α) 0 = −((− ∆ +m2/α) α/2 − m). We study intrinsic ultracontractivity (IU) for the Feynman-Kac
semigroup Tt for this process with generator H (α) 0 − V, V ≥ 0, V locally bounded. We prove that if lim ..."
Cited by 21 (3 self)
Add to MetaCart
Let $X_t$ be the relativistic $\alpha$-stable process in $\mathbb{R}^d$, $\alpha \in (0, 2)$, $d > \alpha$, with infinitesimal generator $H_0^{(\alpha)} = -((-\Delta + m^{2/\alpha})^{\alpha/2} - m)$. We study intrinsic ultracontractivity (IU) for the Feynman-Kac semigroup $T_t$ for this process with generator $H_0^{(\alpha)} - V$, $V \ge 0$, $V$ locally bounded. We prove that if $\lim_{|x|\to\infty} V(x) = \infty$, then for every $t > 0$ the operator $T_t$ is compact. We consider the class $\mathcal{V}$ of potentials $V$ such that $V \ge 0$, $\lim_{|x|\to\infty} V(x) = \infty$, and $V$ is comparable to a function which is radial, radially nondecreasing, and comparable on unit balls. For $V$ in the class $\mathcal{V}$ we show that the semigroup $T_t$ is IU if and only if $\lim_{|x|\to\infty} V(x)/|x| = \infty$. If this condition is satisfied, we also obtain sharp estimates of the first eigenfunction $\varphi_1$ for $T_t$. In particular, when $V(x) = |x|^\beta$, $\beta > 0$, the semigroup $T_t$ is IU if and only if $\beta > 1$. For $\beta > 1$ the first eigenfunction $\varphi_1(x)$ is comparable to $\exp(-m^{1/\alpha}|x|)\,(|x|+1)^{(-d-\alpha-2\beta-1)/2}$.
- Probab. Theory Relat. Fields
"... We present several constructions of a \censored stable process" in an open set D R n , i.e., a symmetric stable process which is not allowed to jump outside D. ..."
Cited by 19 (11 self)
Add to MetaCart
We present several constructions of a "censored stable process" in an open set $D \subset \mathbb{R}^n$, i.e., a symmetric stable process which is not allowed to jump outside $D$.
- PROBAB. THEORY RELAT. FIELDS , 2003
"... ..."
"... Let X t be a Cauchy process in R , d 1. We investigate some of the fine spectral theoretic properties of the semigroup of this process killed upon leaving a domain D. We establish a connection
between the semigroup of this process and a mixed boundary value problem for the Laplacian in one dimen ..."
Cited by 15 (9 self)
Add to MetaCart
Let $X_t$ be a Cauchy process in $\mathbb{R}^d$, $d \ge 1$. We investigate some of the fine spectral theoretic properties of the semigroup of this process killed upon leaving a domain $D$. We establish a connection between the semigroup of this process and a mixed boundary value problem for the Laplacian in one dimension higher, known as the "Mixed Steklov Problem." Using this we derive a variational characterization for the eigenvalues of the Cauchy process in $D$. This characterization leads to many detailed properties of the eigenvalues and eigenfunctions for the Cauchy process inspired by those for Brownian motion. Our results are new even in the simplest geometric setting of the interval $(-1, 1)$, where we obtain more precise information on the size of the second and third eigenvalues and on the geometry of their corresponding eigenfunctions. Such results, although trivial for the Laplacian, take considerable work to prove for the Cauchy process and remain open for general symmetric $\alpha$-stable processes. Along the way we present other general properties of the eigenfunctions, such as real analyticity, which, even though well known in the case of the Laplacian, are rarely available for more general symmetric $\alpha$-stable processes.
, 1998
"... Martin boundaries and integral representations of positive functions which are harmonic in a bounded domain D with respect to Brownian motion are well understood. Unlike the Brownian case, there
are two different kinds of harmonicity with respect to a discontinuous symmetric stable process. One kind ..."
Cited by 4 (4 self)
Add to MetaCart
Martin boundaries and integral representations of positive functions which are harmonic in a bounded domain $D$ with respect to Brownian motion are well understood. Unlike the Brownian case, there are two different kinds of harmonicity with respect to a discontinuous symmetric stable process. One kind are functions harmonic in $D$ with respect to the whole process $X$, and the other are functions harmonic in $D$ with respect to the process $X^D$ killed upon leaving $D$. In this paper we show that for bounded Lipschitz domains, the Martin boundary with respect to the killed stable process $X^D$ can be identified with the Euclidean boundary. We further give integral representations for both kinds of positive harmonic functions. Also given are the conditional gauge theorem conditioned according to Martin kernels and the limiting behaviors of the $h$-conditional stable process, where $h$ is a positive harmonic function of $X^D$. In the case when $D$ is a bounded $C^{1,1}$ domain, sharp estimate on the
- J. Funct. Anal , 2006
"... A connection between the semigroup of the Cauchy process killed upon exiting a domain D and a mixed boundary value problem for the Laplacian in one dimension higher known as the mixed Steklov
problem, was established in [6]. From this, a variational characterization for the eigenvalues λn, n ≥ 1, of ..."
Cited by 4 (2 self)
Add to MetaCart
A connection between the semigroup of the Cauchy process killed upon exiting a domain $D$ and a mixed boundary value problem for the Laplacian in one dimension higher, known as the mixed Steklov problem, was established in [6]. From this, a variational characterization for the eigenvalues $\lambda_n$, $n \ge 1$, of the Cauchy process in $D$ was obtained. In this paper we obtain a variational characterization of the difference between $\lambda_n$ and $\lambda_1$. We study bounded convex domains which are symmetric with respect to one of the coordinate axes and obtain lower bound estimates for $\lambda^* - \lambda_1$, where $\lambda^*$ is the eigenvalue corresponding to the "first" antisymmetric eigenfunction for $D$. The proof is based on a variational characterization of $\lambda^* - \lambda_1$ and on a weighted Poincaré-type inequality. The Poincaré inequality is valid for all symmetric $\alpha$-stable processes, $0 < \alpha \le 2$, and any other process obtained from Brownian motion by subordination. We also prove upper bound estimates for the spectral gap $\lambda_2 - \lambda_1$ in bounded convex domains.
, 904
"... Abstract. We study the Feynman-Kac semigroup generated by the Schrödinger operator based on the fractional Laplacian −(−∆) α/2 −q in R d, for q ≥ 0, α ∈ (0, 2). We obtain sharp estimates of the
first eigenfunction ϕ1 of the Schrödinger operator and conditions equivalent to intrinsic ultracontractivi ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. We study the Feynman-Kac semigroup generated by the Schrödinger operator based on the fractional Laplacian $-(-\Delta)^{\alpha/2} - q$ in $\mathbb{R}^d$, for $q \ge 0$, $\alpha \in (0, 2)$. We obtain sharp estimates of the first eigenfunction $\varphi_1$ of the Schrödinger operator and conditions equivalent to intrinsic ultracontractivity of the Feynman-Kac semigroup. For potentials $q$ such that $\lim_{|x|\to\infty} q(x) = \infty$ and $q$ comparable on unit balls, we obtain that $\varphi_1(x)$ is comparable to $(|x|+1)^{-d-\alpha}(q(x)+1)^{-1}$, and intrinsic ultracontractivity holds iff $\lim_{|x|\to\infty} q(x)/\log|x| = \infty$. Proofs are based on uniform estimates of $q$-harmonic functions.
Connection between bi-Hamiltonian systems and complete integrability
As I understand it, the lack of any indication of how to obtain first integrals in Arnol'd-Liouville theory is one reason why we are interested in bi-Hamiltonian systems.
Two Poisson brackets $\{ \cdot,\cdot \} _{1} , \{ \cdot , \cdot \} _{2}$ on a manifold $M$ are compatible if their arbitrary linear combination $\lambda \{ \cdot , \cdot \} _1+\mu\{\cdot,\cdot\} _2$ is also a Poisson bracket. A bi-Hamiltonian system is one which admits Hamiltonian formulations with respect to two compatible Poisson brackets. It automatically possesses a number of integrals in involution.
The definition of complete integrability (à la Liouville-Arnol'd) is:
Hamiltonian flows and Poisson maps on a $2n$-dimensional symplectic manifold $\left(M,\{ \cdot, \cdot \}_M\right)$ with $n$ (smooth real valued) functions $F _1,F _2,\dots,F _n$ such that: (i) they
are functionally independent (i.e. the gradients $\nabla F _k$ are linearly independent everywhere on $M$) and (ii) these functions are in involution (i.e. $\{F _k,F _j\}=0$) are called completely
Now, I would like to understand the connections between these two notions, and because I haven't studied the theory, any answer would be helpful. I find reading papers on these subjects too technical
at the moment. Specific questions I have in mind are:
Does a completely integrable system always admit a bi-Hamiltonian structure? Is every bi-Hamiltonian system completely integrable? If not, what are examples (or places to find examples) of systems that possess one property but not the other?
I apologize for any stupid mistakes I might have made above. Feel free to edit (tagging included).
integrable-systems examples mp.mathematical-physics
1 Answer
Your understanding is essentially correct. There are three basic (and closely related) approaches to constructing the integrals of motion required for complete integrability: through separation of variables, through the Lax representation, and through the bi-Hamiltonian representation. The relationship among them is not yet fully understood. See, however, this paper by M. Blaszak, which, in essence, states that any Hamiltonian system that admits separation of variables is (or, rather, can be extended to) bi-Hamiltonian, and this survey paper by G. Falqui and M. Pedroni on separation of variables for bi-Hamiltonian systems. As for the relationship between the Lax representation and the bi-Hamiltonian property, see this paper by F. Magri and Y. Kosmann-Schwarzbach and references therein. Now to your questions.
First of all, the bi-Hamiltonian property as you state it, without further restrictions, does not necessarily lead to integrability, and the claim that a bi-Hamiltonian system automatically possesses some integrals of motion does not hold in full generality, as far as I know. I can't think of a specific example right now, but, roughly speaking, if both your Poisson structures are too degenerate (their rank is too low), the recursion can break down and you will not get enough integrals of motion. An example of this in the infinite-dimensional case can be found in the paper "Is a bi-Hamiltonian system necessarily integrable?" by B.A. Kupershmidt. However, if you put in some additional nondegeneracy assumptions, the answer is yes; this dates back to Magri, Morosi, Gelfand and Dorfman. It is nicely summarized, e.g., in Theorem 1.1 of this paper by R.G. Smirnov. The idea behind this is that the integrals of motion are provided by the traces of powers of the ratio of your Poisson structures.
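To make that last remark concrete (this displayed formula is an editorial addition, not part of the original answer): when $P_1$ is invertible, one forms the recursion operator $N = P_2 P_1^{-1}$, and the standard candidates for the commuting integrals are
$$ I_k = \frac{1}{k}\,\operatorname{tr}\left(N^k\right), \qquad k = 1, 2, \dots; $$
the compatibility of the two brackets is what forces the $I_k$ to be in involution with respect to both of them.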
As for the second question, not every Liouville-integrable system is bi-Hamiltonian, at least if you impose some fairly reasonable technical assumptions; see the paper "Completely integrable bi-Hamiltonian systems" by R.L. Fernandes; cf. also the above paper of Smirnov.
thanks for the reference – Jay Mar 16 '13 at 5:51
Adding 3 Fractions Calculator for Like or Different Denominators
The Adding 3 Fractions Calculator on this page will add three fractions with like or unlike denominators.
Special thanks goes out to Denise Thomas for suggesting this calculator.
This free online fraction addition and subtraction calculator will add three fractions together (optional subtraction), regardless of whether the three fractions have the same or different denominators, and give the result in proper and simplest form.
Plus, unlike other online 3 fraction calculators, this calculator will show its work and give a detailed step-by-step explanation as to how it arrived at the answer.
Note that if you only have two fractions to add or subtract, please visit the 2 fraction add subtract calculator.
Also, be sure to check out our other online fraction calculators for multiplying, dividing, and comparing.
How Do You Add 3 Fractions?
The answer to that question depends on whether you are adding fractions with like denominators or adding fractions with unlike denominators.
Adding 3 Fractions with Like Denominators
Adding 3 fractions with like denominators is easy. All you do is add the numerators together and keep the denominator the same, like this:
Adding with Like Denominators
1 + 3 + 5 = 9
Adding 3 Fractions with Unlike Denominators
Adding 3 fractions with unlike denominators requires a little more work (unless you use the calculator on this page), because in order to add the 3 fractions you must first turn their unlike
denominators into like denominators. You do that by finding the lowest common multiple (LCM) of the denominators.
To illustrate how you use LCM to turn unlike denominators into like denominators, let's suppose you want to add the fractions 1/2, 2/3, and 3/4.
The first step is to find the lowest number that 2, 3, and 4 will each divide into evenly (the LCM). According to my calculations, the LCM of the three denominators is 12.
Once we have found the LCM for the three denominators, the next step is to multiply the top and bottom of each fraction by the number of times each fraction's denominator goes into the LCM (remember,
as long as you multiply the top and bottom of a fraction by the same number, the fraction's decimal value does not change, as in 1/2 = 2/4 = 4/8 and so on -- because all of the latter divide out to
Since 2 goes into 12 a total of 6 times, you would multiply the top and bottom of 1/2 by 6, which results in 6/12.
Next, since 3 goes into 12 a total of 4 times, you would multiply the top and bottom of 2/3 by 4, which results in 8/12.
Then, since 4 goes into 12 a total of 3 times, you would multiply the top and bottom of 3/4 by 3, which results in 9/12.
Finally, since all three denominators are now the same, you simply add the numerators (6 + 8 + 9) while keeping the denominator (12) the same -- giving you a result of 23/12. But since 23/12 is an
improper fraction, you would convert it to the mixed number 1 and 11/12.
Here is how our example of adding 3 fractions with unlike denominators might appear on paper:
(1 x 6)/(2 x 6) + (2 x 4)/(3 x 4) + (3 x 3)/(4 x 3) = 6/12 + 8/12 + 9/12 = 23/12
The steps to subtract 3 fractions with like or different denominators are the same as the steps to add fractions: you just subtract the numerators while keeping the denominators the same.
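If you would like to double-check any of these results programmatically, Python's built-in fractions module does the common-denominator bookkeeping automatically. This snippet is an illustrative aside, not part of the calculator itself:

from fractions import Fraction

# The worked example above: 1/2 + 2/3 + 3/4
total = Fraction(1, 2) + Fraction(2, 3) + Fraction(3, 4)
print(total)  # 23/12, already reduced to simplest form

# Convert the improper fraction to a mixed number
whole, rest = divmod(total.numerator, total.denominator)
print(whole, Fraction(rest, total.denominator))  # 1 11/12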
With that, let's use the Subtracting or Adding 3 Fractions Calculator to add three fractions together, or to subtract the second and/or third fraction.
Subtracting Adding 3 Fractions Calculator
Instructions: Enter the numerator (top) and the denominator (bottom) of the 1st fraction.
Next, select + (plus) or - (minus) from the add subtract fractions drop down menu.
Enter the numerator and the denominator of the 2nd fraction.
Next, select + (plus) or - (minus) from the 2nd add subtract fractions drop down menu.
Finally, enter the numerator and denominator of the 3rd fraction, and then click the "Add Subtract 3 Fractions" button.
Mouse over the blue question marks for a further explanation of each entry field. More in-depth explanations can be found in the glossary of terms located beneath the Adding 3 Fractions Calculator.
Adding 3 Fractions Calculator Glossary of Terms
Least Common Multiple (LCM): The process of finding the smallest number that the denominators of two or more fractions will all divide into evenly. This process is necessary for adding or subtracting
fractions with different denominators.
Numerators: Enter the top numbers for each of the 3 fractions you are adding and/or subtracting. If you would like one or more of the fractions to be a whole number, simply enter the desired whole
number in the numerator field and enter a 1 in the denominator field (4/1 = 4). If you want to add or subtract a mixed number, simply convert the mixed number to an improper fraction (1 1/3 = 4/3)
before entering.
Denominators: Enter the bottom numbers for each of the 3 fractions you are adding or subtracting. Note that the Adding 3 Fractions Calculator requires the denominators to be greater than zero.
Answer: This is the result of addition and/or subtraction of the three fractions. After calculating fractions, the Adding 3 Fractions Calculator will show its work and give a detailed explanation of
each step it took to arrive at the answer.
What is Energy
The greatest gift of nature to our universe is energy. It is a boon to mankind, and it exists in different forms such as solar, wind, hydro, earth and sky; in Hindu philosophy these are considered as the five great elements. The greatest and most tapped form is the energy from the sun.
But then, why is there a hue and cry for energy, and why is there an energy crisis? It is mainly for two reasons. The first is that energy, once converted to do work, cannot be cycled back. The second is that we still lag behind in tapping all the available energy resources, though considerable advancement has been made. To achieve this, we must clearly understand: what is energy?
Let us approach the concept of energy from our daily experience. A man can walk a certain distance under normal conditions. The same person may not be able to do the same if he is sick. What makes the difference between the two situations? That is 'energy'. When in normal health, he has the 'energy' to walk, and in the second case he does not have that 'energy'. Thus,
Energy is basically defined as the ability to do some act or work.
This definition of energy applies not only to a man but extends to all kinds of living things, non-living things and even natural objects. For example, a horse has the energy to drag a cart, a piece of coal has the energy to produce heat, and a storm has the energy even to uproot a tree.
Since energy is described as an ability to do work, the accepted general unit of energy is the 'joule'. However, the units of energy vary depending on the context.
It may be realized that energy is a hidden ability which varies from case to case. Further, energy does not pertain only to a particular type of act: a man's walk corresponds to one type of energy and the burning of coal corresponds to another. Therefore, energy can be of various different types. Let us take a closer look in the next section.
As mentioned earlier, energy is a hidden ability which varies from case to case. The ability is 'hidden' or stored in many ways and means, giving rise to different types of energy.
An energy which is stored by virtue of position is called 'Potential Energy'. Suppose an object of mass 'm' is carried to a height 'h'; the work done to do that is m $\times$ g $\times$ h. This is stored in the object as potential energy at that height.
An energy which is acquired when an object is in motion is called 'Kinetic Energy'. For the linear motion of an object of mass 'm' moving at a velocity 'v', the kinetic energy is $\frac{1}{2}$mv^2.
The energy stored or released under a thermal change is called 'Thermal Energy'.
The most common type of energy that we come across in our daily life is 'Electrical Energy', in which electricity is stored and used for various purposes.
The energy stored or released under a chemical change is called 'Chemical Energy'.
The energy associated with the atomic structure of a material is called 'Atomic Energy'.
Like this, we can list various types of energy depending on the context.
An interesting feature of energy is the conservation of energy. That is, in a closed system, the total energy remains the same. In other words, energy cannot be created or destroyed. It can only be transferred from one form to another.
The change in energy of an object due to a transformation is equal to the work done on the object or by the object for that transformation.
For example, when an object is at a height, potential energy is stored by virtue of that height. When the same object is dropped, the height decreases. But because of the reduction in height, the potential energy is not destroyed; it is only transformed into kinetic energy, as is visible from the object's velocity during the fall.
Conservation of energy helps us a lot in energy solutions. We are able to predict the results whenever energy is transformed into another form.
Energy resources mean the sources which have the energy. In a true sense, every object in the world possesses some energy. But in a practical sense, we use 'energy resources' for the agencies which have substantial energy that can be transformed for useful purposes.
If you trace the energy of any agency back, you will find the sun is the source of all energy; hence let us cite that as the first example. Solar energy in the form of solar heat is widely used in various types of solar heaters. Water and wind are among the important natural agencies considered as energy resources. Water can be stored at high altitudes behind dams, and its potential energy can be converted into kinetic energy by letting the water flow through penstocks. Turbines coupled to electrical generators are fitted below the penstocks to convert the kinetic energy to electrical energy.
Almost the same principle is used in windmills, which use wind forces to generate electrical power. The energy resources of the earth are items like coal, petroleum oils and other mineral products. These products have high calorific values and release high thermal energy, which in turn is transformed for useful purposes.
In the context of this article, energy solutions mean energy equations that arise from the conservation of energy. We will explain such energy solutions with examples.
Solved Examples
Question 1:
An object of 5 kg is placed at a height of 10 meters. If the object is dropped freely, what is the velocity of the object when it hits the ground?
First let us calculate the potential energy P of the object when it was at a height of 10 meter.
It is calculated as,
P = 5 $\times$ 9.8 $\times$ 10 = 490 kg m^2/s^2 = 490 joules
Let ‘v’ meter per second be the velocity of the object when it hits the ground. The kinetic energy K acquired by the object at this point is given by
K = $\frac{1}{2}$(5)(v^2) kgm/s^2
When the object hits the ground, the entire potential energy has become 0 because the height has become 0. But as per conservation of energy, it must all have become the kinetic energy acquired. Therefore, K = P, and hence,
$\frac{1}{2}$(5)(v^2) = 490
From the above equation ‘v’ can be solved as 14. That is, the object hits the ground with a velocity of 14 meters per second.
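As a quick numeric check of this example (a minimal sketch in Python; g is taken as 9.8 m/s^2 as above):

    # Free fall of Example 1
    m, g, h = 5, 9.8, 10
    P = m * g * h               # potential energy in joules
    v = (2 * P / m) ** 0.5      # from (1/2) m v^2 = P
    print(P, v)                 # 490.0 J and 14.0 m/s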
Question 2:
100 kg of water is heated on a 3 kW electric heater for 1 hour. If the initial temperature of the water was 20°C, what would be its final temperature?
First let us calculate the electrical energy E consumed by the heater. It is given by,
E = 3 kW $\times$ 1 hr = 3 kWh
= $\frac{3}{2.78\times10^{-7}}$ joules $\approx$ 1.08 $\times$ 10^7 joules
= 1.08 $\times$ 10^7 $\times$ 2.39 $\times$ 10^{-4} kilocalories
$\approx$ 2580 kilocalories
The thermal energy H acquired by the water, in kilocalories, is given by,
H = M $\times$ s $\times$ T,
where 'M' is the mass in kg, 's' is the specific heat (1 kcal/(kg °C) for water), and 'T' is the temperature rise in °C. Thus,
H = 100 $\times$ 1 $\times$ T = 100T kilocalories.
As per conservation of energy, H = E and therefore,
100T = 2580, which gives the temperature rise as 25.8°C
Therefore the final temperature of the water would be (20 + 25.8)°C = 45.8°C
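As a quick numeric check of this example (a minimal sketch in Python, using the unit conversions above):

    # Electric heater of Example 2
    E_joules = 3 * 3.6e6            # 3 kWh in joules (1 kWh = 3.6e6 J)
    E_kcal = E_joules / 4186.0      # 1 kcal is about 4186 J
    T_rise = E_kcal / (100 * 1.0)   # H = M * s * T with M = 100 kg, s = 1
    print(E_kcal, 20 + T_rise)      # about 2580 kcal, final near 45.8 °C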
MATH 4513
THIS IS A WAC COURSE!!
Instructor: Scott Sykes
Office: MP 314
Office Hours: Monday 11:00-2:00, 3:30-5:30
Wednesday 11:00-2:00
Friday 11:00-12:00
or by appointment
Office Phone: 836-4346
Email: ssykes@westga.edu
Prerequisite: MATH 2853 and 3003
Course Description: The first course in a comprehensive, theoretically-oriented, two-course sequence in linear algebra. Topics include abstract vector spaces, subspaces, linear transformations,
determinants, and elementary canonical forms.
Learning Outcomes:
At the end of this course, the student will be able to:
TEXT: Elementary Linear Algebra, Kolman-Hill
TESTS: There will be exams on Monday, February 24 and Monday, April 21. Each will count 100 points towards your final grade.
FINAL: The final is Monday, May 5 at 5:30-7:30. It counts 200 points towards your final grade and will be comprehensive.
HOMEWORK: Homework will be assigned each week and collected the following Monday. Each assignment is worth 15 points. You will lose 3 points for each day it is late. You may count a maximum of
200 points towards your final grade.
WRITTEN ASSIGNMENT (4513 Students Only) This is a WAC course and you will have to do a 5 – 7 page written paper on some application of linear algebra. The deadlines are as follows:
Topic: Monday, February 3 (10 points)
Rough Draft #1 : Monday, March 10 (15 points)
Rough Draft # 2 : Monday, April 7 (15 points)
Final paper: Monday, April 28 (60 points)
No two students can do a paper on the same topic. It will be determined who can do a particular topic based on when I get the topic from you. Rough drafts will be read for mathematics quality only
although excessive grammatical or spelling errors will lower your grade. All other mistakes should be corrected before the final paper is submitted. Each day late will lose you 5 points.
GRADES: Your grade will be determined based on the following formula
TESTS 200 points
FINAL 200 points
HOMEWORK 200 points
PAPER/PROJECT 100 points
TOTAL 700 points
POINTS GRADE
630-700 A
560-629 B
490-559 C
420-489 D
0-419 F
If you ever have any questions or suggestions, feel free to come by my office at any time. I will definitely be there during my office hours, so you can just stop by. You can also stop by or call to see if I am there at other times.
[Numpy-discussion] untenable matrix behavior in SVN
Christopher Barker Chris.Barker@noaa....
Mon Apr 28 14:02:57 CDT 2008
Gael Varoquaux wrote:
> On Fri, Apr 25, 2008 at 01:40:29PM -0400, Alan G Isaac wrote:
>> In contrast, there *is* universal agreement that
>> x[0][0]==x[0,0] is desirable. Or so I've understood the
>> discussion.
> I don't know why people are indexing matrices with A[x][y], but they
> shouldn't.
I think there has been a misunderstanding here. I don't think anyone is
suggesting that if a coder wants an element of a matrix, that s/he
should write that:
element = M[i][j]
Rather, that one might want to extract a row from a Matrix, and then
index THAT object with a scalar:
row = M[i]
now do something with each element in that row: something = row[j]
This is a rational, normal thing to do, and I think the desire for this
is the core of this entire discussion, coming from the fact that in the
current version of matrix both of M[i] and M[i,:] yield a matrix, which
is 2-d, and cannot be indexed with a scalar. This is odd, as it is
pretty natural to expect a single row (or column) to behave like a 1-d array.
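For concreteness, a minimal sketch of the behavior being discussed
(numpy's matrix class; exact output may differ between versions):

    import numpy as np

    M = np.matrix([[1, 2], [3, 4]])
    row = M[0]            # still a 1x2 matrix, i.e. 2-d
    print(row.shape)      # (1, 2)
    print(row[0, 1])      # you are forced back to two indices

    a = np.asarray(M)     # plain ndarray for comparison
    print(a[0][1])        # a[0] is 1-d, so scalar indexing just works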
Alan G Isaac wrote:
> I believe that this conflicts with submatrix extraction.
> If we want to keep submatrix extraction (as we should),
why? with 2-d arrays, you get a subarray with:
A[i:j,:] and a 1-d array with A[i,:] -- why does that need to be
different for Matrices?
> 3. If ``v`` is a "RowVector" what behavior do we get?
> Suppose the idea is that this will rescue
> ``x[0][0]==x[0,0]`` **and** ``x[0]=x[0,:]``.
> But then we must give up that ``x[0,:]`` is a submatrix.
Correct, but why should it be -- we can get a submatrix with slicing, as
above. Indeed, I was about to post this comment the other day, as
someone was concerned that there needs to be a distinction between a
ColumnVector and a Matrix that happens to have a second dimension of one.
I think that the Vector proposal satisfies that:
if you have code like:
SubMatrix = M[i:j, k]
Then you will always get a SubMatrix, even if j == i+1. If you index
like so:
Vector = M[i,k]
you will always get a vector (a 1-d object).
> Must ``v`` deviate from submatrix behavior in an important way?
> Yes: ``v[0][0]`` is an IndexError.
Correct, but if you're writing the code that generated that vector,
you'd know it was a 1-d object.
> Since submatrix extraction is fundamental, I think it is
> a *very* bad idea to give it up.
I agree -- but we're not giving it up, what we are doing is making a
distinction between extracting a single row or column, and extracting a
submatrix (that may or may not be a single row or column) -- just like
that distinction is make for regular old ndarrays.
> RowVector proposal we must give up ``x[0]==x[0,:]``.
> But this is just what we give up with
> the much simpler proposal that ``v`` be a 1d array.
no, we also give up that v acts like a row or column (particularly
column) vector in computation (*, **)
We still need real linear algebra computation examples....
Alan G Isaac wrote:
> I weight the future more heavily. We are approaching a last
> chance to do things better, and we should seize it.
Yes, but it seems that while a consensus will not be reached in time for
1.1, there is one that a change will probably occur in the next version,
so there is a lot to be said for waiting until a proposal is settled on,
then make the whole change at once.
> The right questions looking forward:
> - what behavior allows the most generic code?
> - what behavior breaks fewest expectations?
> - what behavior is most useful?
Yes, but I'm going to try to put down what I think are the key, very
simple, questions:
1) Do we want to be able to extract a single row or column from a
Matrix, and have it be indexable like a 1-d object?
2) Do we want to be able to do that without having to explicitly call
the Array attribute: M.A[i]?
3) Do we want to have those 1-d objects act like Row or Column vectors
in linear algebra operations (and broadcasting)?
4) Do we want to require slice notation to get a submatrix that happens
to have a single row or column?
If we want (1) and (2), then we need to make a change. If we want (3)
and (4), then something like the Row/ColumnVector proposal is needed.
I really think it's that simple.
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/033281.html","timestamp":"2014-04-16T07:59:47Z","content_type":null,"content_length":"7455","record_id":"<urn:uuid:959f9780-b39d-4a1a-81d2-87c3cc6c40db>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00206-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question about the additive group of a finitely generated integral domain
Let $R$ be an integral domain of characteristic 0 finitely generated as a ring over $\mathbb{Z}$. Can the quotient group $(R,+)/(\mathbb{Z},+)$ contain a divisible element? By a "divisible element" I
mean an element $e\ne 0$ such that for every positive integer $n$ there is an element $f$ such that $e=nf$.
As Darij points out, another way to ask the question is this: Suppose $e\in R$ has the property that for all positive integers $n$, $e$ is congruent to an integer mod $nR$. Must $e$ be an integer?
Note: I previously posted this to Math StackExchange here: http://math.stackexchange.com/questions/71031/a-question-about-the-additive-group-of-a-finitely-generated-integral-domain
TO SUMMARIZE: Qing Liu showed that in fact any non-integer rational in $R$ determines a divisible element of $(R,+)/(\mathbb{Z},+)$, and Wilberd van der Kallen showed that all divisible elements
arise in this way. I wish I could accept both answers.
ac.commutative-algebra abelian-groups
Maybe a more attractive way to state the question: Can there be an element of $R$ which is equivalent to an integer in $R$ modulo $nR$ for every integer $n$, but not an integer in $R$ itself? –
darij grinberg Oct 10 '11 at 3:42
Darij, That is more attractive. Maybe that explains the no-response on MSE. On the other hand, I'm thinking that maybe there's some fact I'm not seeing about abelian groups that nails the question.
– SJR Oct 10 '11 at 3:55
I'd rather expect either a counterexample, or a proof using the Noetherianness of $R$. Possibly things like residual finiteness would come into play. – darij grinberg Oct 10 '11 at 4:04
4 Answers
The answer is no in general ($e$ needs not to be in $\mathbb Z$), but one can show that $e$ is divisible in $R/\mathbb Z$ if and only if $e\in \mathbb Q\cap R$.
First let $R=\mathbb Z[1/p]$ for some prime number $p$. Then I claim that $1/p$ is divisible in $R/\mathbb Z$. Indeed for any $n\ge 1$, write $n=p^rm$ with $m$ prime to $p$. Let $a,b
\in \mathbb Z$ such that $am+bp=1$. Then $$\frac{1}{p}=b+ \frac{am}{p}=b+n\frac{a}{p^{r+1}}\in \mathbb Z + nR.$$
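(For a concrete instance, take $p=2$ and $n=12=2^2\cdot 3$, so $r=2$ and $m=3$; Bézout gives $1\cdot 3+(-1)\cdot 2=1$, and indeed $\frac{1}{2}=-1+12\cdot\frac{1}{8}\in \mathbb Z + 12R$.)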
For general $R$, denote by $D$ the elements $e\in R$ which are divisible in $R/\mathbb Z$. One can check directly that $D$ is a subring of $R$. Let us show $\mathbb Q\cap R\subseteq
D$. If $e=k/q\in \mathbb Q\cap R$ with coprime $k, q$, then again using Bézout, $1/q\in R$. Then it is enough to show that $1/p\in D$ for all prime divisors $p$ of $q$. But this is
done just above.
The converse is proved in Wilberd's answer ($e\in \mathbb Z[1/a]$).
Final remark: $\mathbb Q\cap R=\mathbb Z$ if and only if $\mathrm{Spec}(R)\to \mathrm{Spec}(\mathbb Z)$ is surjective. This is because the fiber of this morphism above $p$ is the
spectrum of $R/pR$, and this spectrum is empty if and only if $1/p\in R$.
Thanks for putting me straight. – Wilberd van der Kallen Oct 10 '11 at 11:32
1 But you did the hardest part :) – Qing Liu Oct 10 '11 at 12:01
I didn't check in detail, but it seems to me that your result is even true if $\mathbb{Z}$ is replaced by any PID (and, of course, $\mathbb{Q}$ by the quotient field of the PID).
– Ralph Oct 10 '11 at 12:40
As Qing Liu explains there may be such nontrivial $e$.
Suppose there was such an $e$. By Grothendieck's Generic Freeness Theorem, [Theorem 14.4 in David Eisenbud, Commutative algebra with a view toward algebraic geometry, Graduate Texts in
Mathematics, Springer-Verlag, New York, 1995] there is $0\neq a\in \Bbb Z$ so that $A[1/a]$ is a free $\Bbb Z[1/a]$-module. Choose a basis and write $1$ in terms of that basis. We see $1$
lies in a direct summand spanned by finitely many basis vectors. By the structure theorem of finitely generated modules over a PID we see that in fact $Q=A[1/a]/\Bbb Z[1/a]$ is a free $\Bbb Z[1/a]$-module plus a finite group. So it does not contain any nontrivial divisible element. But the image of $e$ in $Q$ is divisible. That means that $e\in \Bbb Z[1/a]$.
So far so good. The next line is wrong, as explained by Qing Liu.
But $\Bbb Z[1/a]/\Bbb Z$ does not contain any divisible element.
Short remark: In my version of Eisenbud, Grothendieck's Generic Finiteness Theorem is Theorem 14.4. Anyway, I think that's the crucial point to reduce to the case of a finitely generated
module over a PID (and thanks for letting me know Grothendieck's theorem - I didn't know that so far). – Ralph Oct 10 '11 at 10:42
@Ralph 14.4 it is. – Wilberd van der Kallen Oct 10 '11 at 11:05
You mean Generic <b>freeness</b>. – Qing Liu Oct 10 '11 at 12:02
I don't understand why some people downvote this answer. It is helpful. – Qing Liu Oct 10 '11 at 12:05
@Qing Liu Freeness it is. – Wilberd van der Kallen Oct 10 '11 at 12:28
Edit: I misread the question and thought $R$ should be finitely generated as $\mathbb{Z}$-module (instead of as $\mathbb{Z}$-algebra). The proof below requires $R$ to be a finitely
generated as $\mathbb{Z}$-module.
$R/\mathbb{Z}$ can't contain other divisible elements than $0$.
This can be seen as follows: By assumption $(R,+)$ is a finitely generated abelian group and since $R$ is integral, the group $(R,+)$ is torsion free. Thus $R$ is a finitely generated free
$\mathbb{Z}$-module with $\mathbb{Z} \cdot 1_R$ as rank one sub-module. By elementary divisors there are $e_1,...,e_m \in R$ and $l \in \mathbb{Z}$ such that $R = \oplus_{i=1}^m \mathbb{Z} e_i$ and $1_R = le_1$.
Let $x= \sum_i x_ie_i \in R$ such that $\bar{x} \in R/\mathbb{Z}$ is divisible. Thus, for $n \in \mathbb{Z}$ there is $y = \sum_i y_ie_i \in R$ with $$\mathbb{Z}\cdot 1_R = \mathbb{Z}le_1 \
in x-ny = (x_1 -ny_1)e_1 + ... + (x_m-ny_m)e_m.\hspace{20pt}(\ast)$$ In particular, $x_i-ny_i=0$, i.e. $n | x_i$ for $i >1$ and all $n$. Hence $x_i=0$ for $i>1$. Now choose $n:= l$.
Comparing the first component in $(\ast)$ shows $l | x_1$, say $x_1= kl$. Therefore $x=k(le_1) = k \cdot 1_R$ and $\bar{x} = 0$ in $R/\mathbb{Z}$. q.e.d.
Note that $\Bbb Z [x]$ is finitely generated as an algebra, not as an abelian group. – Wilberd van der Kallen Oct 10 '11 at 9:51
Thanks for pointing out. – Ralph Oct 10 '11 at 10:32
Let $e$ be an element of $\langle R,+\rangle / \langle \mathbb{Z},+\rangle$ such that $e\neq 0$.
Since the representatives of $e$ only differ in the constant term, let $m_1,...,m_n$ be non-negative integers which are not all zero such that the $(a_1)^{m_1}\cdot ... \cdot (a_n)^
{m_n}$ coefficient of $e$'s representives is non-zero.
Since that coefficient is not divisible by itself plus one, $e$ is not divisible by that coefficient plus one.
Therefore the quotient group cannot contain a divisible element.
I don't know what the answer would be if you let $\; R = \mathbb{Z}[a_1,...,a_n]/I \;$ instead of $\; R = \mathbb{Z}[a_1,...,a_n] \;$.
Ricky, Why should the representatives of $e$ differ only in the constant term? Are you assuming that the $a_i$ are algebraically independent? I am not making that assumption. Or am
I missing something obvious?? – SJR Oct 10 '11 at 3:45
The algebraic independence of the $a_i$ is basically part of the definition of $\mathbb{Z}[a_1,...,a_n]$, since the $a_i$ were not given as elements of some ring that $\mathbb{Z}$
is a subring of. $\;$ – Ricky Demer Oct 10 '11 at 3:51
Ricky, The $a_i$ are given as elements of $R$! But anyway, I will edit my question to clarify this. Sorry for the confusion. – SJR Oct 10 '11 at 3:57
No, the $a_i$ are used to define $R$, not given as elements of it. – Ricky Demer Oct 10 '11 at 4:03
Re: 4x4 matrices using Cramer's Rule
Uinseann wrote:
Anyone got any ideas on how to solve 4x4 matrices by using Cramer's
Rule. I've looked in a multitude of math books and surfed the web for
hours to no avail. I can do a 3x3 no problem but unfortunately I dont
seem to be able to see how to do a 4X4. Any advice that anyone might
be able to offer regarding the problem below would be greatly
[ 13  10   0   0 ] [i1]   [ 6]
[-10  13   0  -3 ] [i2] = [10]
[  0   0  18  -3 ] [i3]   [ 0]
[  0  -3  -3   6 ] [i4]   [ 5]
Many thanks,
Do you know how to find the determinant of 4x4 matrices (using
"Laplace's cofactor expansion") ?
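In case a worked sketch helps, here is Cramer's rule applied to the
4x4 system quoted above (a minimal Python sketch; the determinants
are computed with numpy rather than by hand cofactor expansion, and
the variable names are just illustrative):

    import numpy as np

    A = np.array([[ 13, 10,  0,  0],
                  [-10, 13,  0, -3],
                  [  0,  0, 18, -3],
                  [  0, -3, -3,  6]], dtype=float)
    b = np.array([6, 10, 0, 5], dtype=float)

    detA = np.linalg.det(A)      # must be nonzero for a unique solution
    x = np.empty(4)
    for i in range(4):
        Ai = A.copy()
        Ai[:, i] = b             # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / detA

    print(x)                     # the currents i1..i4
    print(np.linalg.solve(A, b)) # sanity check against a direct solve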
2C Combinations
Suppose that Alice, Betty, Cindy, and Dianne have their own private outdoors club, and that they need to select a president, a secretary, and a treasurer. This problem is a permutation one; they have
to fill a president slot, then a secretary slot, and finally a treasurer slot. As no woman would wish to hold two of these jobs, the three choices must be different. They have 4 choices for
president, then 3 for secretary, and 2 for treasurer, for a total of
P(4,3) = 4 · 3 · 2 = 24
ways of selecting officers.
To demonstrate a point, we list all 24 possibilities:
ABC ABD ACD BCD
ACB ADB ADC BDC
BAC BAD CAD CBD
BCA BDA CDA CDB
CAB DAB DAC DBC
CBA DBA DCA DCB
In each group of three, the first letter refers to the President, the second to the secretary, and the third to the treasurer. Note that in the first column we have listed all the permutations with
A, B, and C as officers. In the second column A, B, and D are officers, in the third A, C, and D are officers, and in the fourth column B, C, and D are officers.
Next we consider a somewhat modified problem. We suppose that the four women will choose not officers but only an executive committee, consisting of three members of equal status. The difference here
is that the order of the choosing makes no difference - for example, the listings ABC and ACB give exactly the same committee of three women. In fact, the six listings in the first column above refer
to the same committee consisting of Alice, Betty, and Cindy. Likewise the six listings in the second column refer to only one committee, as do the six listings in the third column and the six
listings in the fourth column. Altogether there are only four possible committees of three people, which we may list as
We say that there are 4 combinations of 4 objects taken 3 at a time, and write
C(4,3) = 4 .
Observe that the number of ways of selecting 3 officers is 6 times as large as the number of ways of selecting a committee of 3. The factor 6 is just 3!, the number of permutations of 3 objects. For
the listing ABC in the committee list, there are 6 listings in the first column of the officer list, corresponding to the number of ways of permuting the letters A, B, C. Likewise, for each of the
other listings in the committee list there correspond six listings in the officer list. The relation between C(4,3) and P(4,3) then is
C(4,3) = P(4,3)/3! = 24/6 = 4 .
Now we discuss combinations in general. We have N objects, and we wish to choose M objects from these N objects without regard to order. We will not order the M objects - we will just choose them and
give them all equal status, without distinguishing one from another. Any single way of doing this is called a combination. The number of ways of doing this is denoted with the notation C(N,M). We say
that C(N,M) is the number of combinations of N objects taken M at a time. We calculate C(N,M) with the formula
C(N,M) = P(N,M)/M! = N!/(M!(N−M)!) .
This relation holds because, as the preceding example shows, when choosing M objects from N objects there are M! as many permutations as there are combinations. In a permutation the objects are
ordered, but in a combination they are not. The number of ways of ordering M objects is M!, so you get M! times as many permutations as you get combinations. Each combination corresponds to one whole
column in the permutation list, whereas each column in the permutation list has M! entries.
Of course, now that we have the formula we do not need to write down any lists to compute C(N,M). Here are a few examples:
C(6,2) = 6!/(2!4!) = 15 ,  C(7,3) = 7!/(3!4!) = 35 ,  C(9,1) = 9 ,  C(5,5) = 1 .
The last two examples illustrate the general formulas
C(N,1) = N , C(N,N) = 1 .
These formulas are easy to remember if you just recall what they mean. The expression C(N,1) denotes the number of ways of choosing 1 object from N objects - obviously N ways because you have N
choices. On the other hand, C(N,N) is the number of ways of choosing N objects from N objects without ordering them - since you have to choose all the objects there is only one way. (But if you order
the objects after choosing them, the number of ways becomes P(N,N) = N!)
Newcomers to counting techniques very often have trouble deciding whether a problem involves permutations or combinations. Remember, if you order or distinguish the choices it is a permutation, but
if you treat all choices equally without ordering them or distinguishing between them, it is a combination.
example 1
Six young women have applied for a part-time job at Miguel's Restaurant. How many ways can Miguel choose
(a) a cook, a dishwasher, and a cashier?
(b) three waitresses?
In part (a) the three choices will be ordered, so we are dealing with permutations; the number of ways is
P(6,3) = 6 · 5 · 4 = 120.
In part (b) the three choices have equal status as waitresses, with no distinction between them, so we are counting combinations; the number of ways now becomes
C(6,3) = P(6,3)/3! = 120/6 = 20 .
example 2
A basketball team has 10 players. How many ways can the coach select
(a) 5 players to start the game?
(b) 2 starting guards?
(c) a starting center?
(d) a starting center and a backup center?
In parts (a), (b), and (c) the choices are not being ordered, so we are dealing with combinations; the three respective answers are
C(10,5) = 252 ,  C(10,2) = 45 ,  C(10,1) = 10 .
In (d) the two choices are distinguished from one another, so we are counting permutations; the number of ways is
P(10,2) = 10 · 9 = 90 .
example 3
From a standard deck of 52 playing cards, how many ways can you choose
(a) a poker hand of 5 cards?
(b) 5 clubs?
(c) 2 aces?
(d) 4 queens?
These questions all involve combinations, as order in a hand of cards makes no difference. In (a) we select 5 cards from 52; the number of combinations is
C(52,5) = 2,598,960 .
In (b) we restrict the choices to clubs, limiting the number of available cards to 13; the number of ways to choose 5 of 13 clubs from the deck is
C(13,5) = 1287 .
In (c) we select 2 aces from 4 available aces, and in (d) we select 4 queens from only 4 queens; the respective answers are
C(4,2) = 6 and C(4,4) = 1 .
example 4
A business meeting has 9 participants. If everyone at the meeting shakes hands with each other person once, how many handshakes are there?
You have to look at this question the right way. When two people shake hands, you can think of them as forming a temporary “handshaking committee”. The total number of handshakes will be the same as
the number of ways of forming a committee of 2 people from 9 people. As the 2 choices are not ordered, we are counting combinations; thus the number of handshakes is
C(9,2) = 36 .
example 5
A set has 5 elements. How many subsets of the set are there consisting of
(a) 2 elements
(a) 3 elements?
To be specific, let us suppose the set is
S = {a,b,c,d,e} .
In (a) we must count the number of ways of selecting 2 elements from 5 elements to form a subset. As the order of listing of elements in a subset does not matter - for example, {a,b} = {b,a} - we are
counting the number of combinations of 5 elements taken 2 at a time; we get
C(5,2) = 10 .
In (b) we select 3 objects from 5, so the calculation changes to
C(5,3) = 10 .
An alert student might notice that the two answers in the preceding example are the same - is that only a coincidence? Think of it this way - when we select 2 elements for a subset from a set of 5
elements, we are at the same time selecting 3 elements not to be in that subset. Thus the number of ways of selecting a subset of 2 elements is the same as the number of ways of selecting 3 elements
for another subset - the chosen subset's complement.
More generally, if you begin with N objects, then the number of ways of choosing M of these objects is the same as the number of ways of choosing N-M of them; that is,
C(N,M) = C(N,N−M) .
For instance, suppose we want to count the number of possible committees of 6 people chosen from a club of 8 people. Instead of calculating C(8,6), it is easier to calculate C(8,2), the number of
ways of leaving 2 people off the committee. We get
C(8,2) = (8 · 7)/2 = 28 .
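For readers with a computer handy, the worked examples above can be checked directly (a minimal sketch in Python 3.8+, whose math module provides comb and perm):

    from math import comb, perm

    print(perm(6, 3))   # 120 ways to pick a cook, dishwasher, cashier
    print(comb(6, 3))   # 20 ways to pick three waitresses
    print(comb(52, 5))  # 2598960 five-card poker hands
    print(comb(9, 2))   # 36 handshakes among 9 people
    print(comb(8, 6) == comb(8, 2))  # True: C(N,M) = C(N,N-M)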
1. A set has 20 elements. How many 2-element subsets does the set have? How many 18-element subsets does it have?
2. A club has 100 members. How many ways can the club form a committee of
(a) 3 members? (b) 97 members? (c) 100 members? (d) 1 member?
3. A Waianae farmer has 6 horses. How many ways can he choose 3 horses to march in the Kamehameha Day parade?
4. There are 11 cats living on Mr. Yau's block. If every cat gets into a fight with each other cat once during the night, how many catfights are there?
5. A singing class has 8 students. How many ways can the teacher choose
1. one student to sing a solo?
2. two students to sing a duet?
3. three students to sing a trio?
4. one student to sing, another to hold the music, and a third to play the piano?
6. The Wahine cross country team has 4 freshmen, 2 sophomores, 4 juniors, and 3 seniors. How many ways can the coach choose
1. 6 runners to race on the mainland?
2. 3 freshmen to enter a freshmen meet?
3. 4 runners and line them up for a publicity picture?
4. 1 runner from each class to visit a high school?
5. 2 senior co-captains?
6. 1 senior co-captain and 1 junior co-captain?
7. A history test has 7 questions, and you must answer 4 out of 7.
1. How many ways can you choose which 4 questions to answer?
2. How many ways can you choose which 3 questions not to answer?
3. How many ways can you choose a question to answer first, a question to answer second, a question to answer third, and a question to answer fourth?
8. From a jury pool of 17 citizens, how many ways can 12 jurors be chosen?
9. From a deck of 52 cards, how many ways can you choose
1. 3 face cards?
2. 2 black cards?
3. a rummy hand of 6 cards?
4. 3 jacks?
5. 5 aces?
10. Mr. Dela Cruz, madly in love with his wife, has 15 nice photos of her. How many ways can he choose
1. 3 photos to place on his office desk?
2. one photo to carry in his wallet, another to place on his desk, and a third to hang from the rear view mirror in his car?
11. In the Miss Chinatown Hawaii pageant, there are 12 semifinalists.
1. From the 12 semifinalists, how many ways can 5 finalists be chosen?
2. From the 5 finalists, how many ways can a winner, a first runner-up, and a second runner-up be chosen?
3. How many ways can the winner and two runner-ups be lined up for a picture?
4. How many ways can the winner and two runner-ups be lined up for a picture, with the winner in the middle?
5. From the 5 finalists, how many ways can a winner, a first runner-up, and a second runner-up be chosen and then lined up for a picture with the winner in the middle?
12. How many ways can Janet select 3 of her 8 business suits to pack for a trip?
13. In a race of 13 horses, how many ways can you bet one horse to win, a second horse to place, and a third to show?
14. At a dinner party of 6 people, someone has just proposed a toast. If everyone clinks everyone else's wine glass and if you listen carefully, how many clinks will you hear?
15. Sylvia and Raymond will be getting married next June. How many ways can they choose
1. 2 of Sylvia's 5 sisters to serve as bridesmaids?
2. one of Raymond's 6 brothers to serve as best man, and a second to serve as head usher?
3. 4 of their 11 siblings to serve refreshments at the reception?
4. 3 of Sylvia's sisters or 3 of Raymond's brothers to sing the wedding song as a trio?
5. one sister to sing solo, or 2 brothers to sing as a duet, or 3 sisters to sing as a trio, or 4 brothers to sing as a quartet? | {"url":"http://www.math.hawaii.edu/~hile/math100/combc.htm","timestamp":"2014-04-18T13:09:10Z","content_type":null,"content_length":"18662","record_id":"<urn:uuid:e20be572-bfb4-44f3-b184-29949d4773d6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Murrieta Precalculus Tutors
...For that reason, I will provide homework and create progress reports to show the student's areas of mastery and improvement. I will not bill for a session where the student or parent is not
satisfied with my tutoring. I will often ask for ways that I can improve in my teaching methods.
31 Subjects: including precalculus, chemistry, statistics, geometry
...I worked in a number of academic study programs doing tutoring in a number of subjects. However a majority of my experience in developing study skills came with the Wayne State University
Upward Bound Program. In that program I taught math and science and worked as a study center supervisor for the Wayne State University Upward Bound Program, from 1981-1994.
69 Subjects: including precalculus, reading, English, physics
...One year I worked for Perris High School in its after school program. Another year I worked for the private company Friendly Community Outreach Center (FCOC). My main goals as a tutor are to
help students increase their grade, to establish study techniques, and to understand basic concepts that ...
7 Subjects: including precalculus, Spanish, geometry, algebra 2
...You learn why it is important to be able to complete algebra/math problems. I tutor at the Tutoring Club and also privately tutor students. I have participated in the Financial Literacy
Campaign as well.
11 Subjects: including precalculus, calculus, geometry, algebra 1
I have been teaching Physics for the past 5 years and I truly love it. Before that, I tutored it for 4 years. In college I chose Physics as my major because I enjoyed it and it made sense to me.
8 Subjects: including precalculus, calculus, physics, geometry | {"url":"http://www.algebrahelp.com/Murrieta_precalculus_tutors.jsp","timestamp":"2014-04-18T11:56:00Z","content_type":null,"content_length":"24913","record_id":"<urn:uuid:cd7f310b-10f3-4134-acb5-1d749c5d3040>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computing Dictionary
inner product definition
In linear algebra, any linear map from a vector space to its dual defines a product on the vector space: for u, v in V and linear g: V -> V' we have gu in V' so (gu): V -> scalars, whence (gu)(v) is a scalar, known as the inner product of u and v under g. If the
value of this scalar is unchanged under interchange of u and v (i.e. (gu)(v) = (gv)(u)), we say the inner product, g, is symmetric. Attention is seldom paid to any other kind of inner product.
An inner product, g: V -> V', is said to be positive definite iff, for all non-zero v in V, (gv)v > 0; likewise negative definite iff all such (gv)v < 0; positive semi-definite or non-negative
definite iff all such (gv)v >= 0; negative semi-definite or non-positive definite iff all such (gv)v <= 0. Outside relativity, attention is seldom paid to any but positive definite inner products.
Where only one inner product enters into discussion, it is generally elided in favour of some piece of syntactic sugar, like a big dot between the two vectors, and practitioners don't take much
effort to distinguish between vectors and their duals. | {"url":"http://dictionary.reference.com/browse/inner+product","timestamp":"2014-04-16T04:46:20Z","content_type":null,"content_length":"90257","record_id":"<urn:uuid:c19a61ab-be13-47b7-93b5-4f0169cc4897>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
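As an illustration, in finite dimensions such a map g can be represented by a matrix G, so that (gu)(v) = u^T G v; symmetry of the inner product corresponds to G being symmetric, and positive definiteness to all eigenvalues of G being positive (a minimal sketch with an assumed example matrix):

    import numpy as np

    G = np.array([[2.0, 1.0],
                  [1.0, 3.0]])   # symmetric, so the product is symmetric

    u = np.array([1.0, -2.0])
    v = np.array([0.5, 4.0])

    print(u @ G @ v, v @ G @ u)               # equal: -20.0 and -20.0
    print(np.all(np.linalg.eigvalsh(G) > 0))  # True: positive definite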
Increasing/Decreasing Functions
The derivative of a function may be used to determine whether the function is increasing or decreasing on any intervals in its domain. If f′(x) > 0 at each point in an interval I, then the function is said to be increasing on I. If f′(x) < 0 at each point in an interval I, then the function is said to be decreasing on I. Because the derivative is zero or does not exist only at critical points of the function, it must be positive or negative at all other points where the function exists.
In determining intervals where a function is increasing or decreasing, you first find domain values where all critical points will occur; then, test all intervals in the domain of the function to the
left and to the right of these values to determine if the derivative is positive or negative. If f′(x) > 0, then f is increasing on the interval, and if f′(x) < 0, then f is decreasing on the
interval. This and other information may be used to show a reasonably accurate sketch of the graph of the function.
Example 1: For f(x) = x^4 − 8x^2, determine all intervals where f is increasing or decreasing.
The domain of f(x) is all real numbers, and its critical points occur at x = −2, 0, and 2. Testing all intervals to the left and right of these values for f′(x) = 4x^3 − 16x, you find that f′(x) < 0 on (−∞, −2), f′(x) > 0 on (−2, 0), f′(x) < 0 on (0, 2), and f′(x) > 0 on (2, +∞);
hence, f is increasing on (−2,0) and (2,+ ∞) and decreasing on (−∞, −2) and (0,2).
Example 2: For f(x) = sin x + cos x on [0,2π], determine all intervals where f is increasing or decreasing.
The domain of f(x) is restricted to the closed interval [0,2π], and its critical points occur at π/4 and 5π/4. Testing all intervals to the left and right of these values for f′(x) = cos x − sin x,
you find that f′(x) > 0 on (0, π/4), f′(x) < 0 on (π/4, 5π/4), and f′(x) > 0 on (5π/4, 2π);
hence, f is increasing on [0, π/4] and (5π/4, 2π) and decreasing on (π/4, 5π/4).
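For Example 1, the critical points and interval signs can also be checked symbolically (a minimal sketch with sympy):

    import sympy as sp

    x = sp.symbols('x')
    f = x**4 - 8*x**2
    fp = sp.diff(f, x)                 # 4*x**3 - 16*x

    print(sp.solve(fp, x))             # critical points: [-2, 0, 2]
    for t in (-3, -1, 1, 3):           # one test point per interval
        print(t, fp.subs(x, t) > 0)    # True where f is increasing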
STP1550: A New Formulation of the Unified Monte Carlo Approach (UMC-B) and Cross-Section Evaluation for the Dosimetry Reaction ^55Mn(n,γ)^56Mn
Capote, Roberto
NAPC-Nuclear Data Section, International Atomic Energy Agency, Vienna,
Smith, Donald L.
Argonne National Laboratory, Coronado, CA
Trkov, Andrej
Jozef Stefan Institute, Ljubljana,
Meghzifene, Mehdi
NAPC-Nuclear Data Section, International Atomic Energy Agency, Vienna,
Pages: 18 Published: Aug 2012
Two relatively new approaches to neutron cross section data evaluation are described. They are known collectively as Unified Monte Carlo (versions UMC-G and UMC-B). Comparisons are made between these
two methods, as well as with the well-known generalized least-squares (GLSQ) technique, through the use of simple, hypothetical (toy) examples. These new Monte Carlo methods are based on stochastic
sampling of probability functions that are constructed with the use of theoretical and experimental data by applying the principle of maximum entropy. No further assumptions are involved in either
UMC-G or UMC-B. However, the GLSQ procedure requires the linearization of non-linear terms, such as those that occur when cross section ratio data are included in an evaluation. It is shown that
these two stochastic techniques yield results that agree well with each other, and with the GLSQ method, when linear data are involved, or when the perturbations due to data discrepancies and
nonlinearity effects are small. Otherwise, there can be noticeable differences. The present investigation also demonstrates, as observed in earlier work, that the least-squares approach breaks down
when these conditions are not satisfied. This paper also presents an actual evaluation of the ^55Mn(n,γ)^56Mn neutron dosimetry reaction cross section in the energy range from 100 keV to 20 MeV,
which was performed using both GLSQ and UMC-G approaches.
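To give a flavor of the sampling idea in a toy setting (not the authors' implementation; a hypothetical one-parameter case in which the prior model value and a single measurement are both Gaussian):

    import numpy as np

    rng = np.random.default_rng(0)
    prior_mean, prior_sd = 1.00, 0.10   # assumed model value
    meas, meas_sd = 1.25, 0.05          # assumed measurement

    # Sample the model prior and weight by the experimental likelihood.
    s = rng.normal(prior_mean, prior_sd, size=200_000)
    w = np.exp(-0.5 * ((s - meas) / meas_sd) ** 2)
    umc_mean = np.average(s, weights=w)

    # For a linear Gaussian problem this matches the least-squares result.
    w1, w2 = 1 / prior_sd**2, 1 / meas_sd**2
    glsq_mean = (w1 * prior_mean + w2 * meas) / (w1 + w2)
    print(umc_mean, glsq_mean)          # approximately equal (about 1.20)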
evaluation methods, Unified Monte Carlo, generalized least-squares, nuclear data
Paper ID: STP155020120014
Committee/Subcommittee: E10.07
DOI: 10.1520/STP155020120014 | {"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP155020120014.htm","timestamp":"2014-04-20T13:32:18Z","content_type":null,"content_length":"13585","record_id":"<urn:uuid:7c93d982-6d22-44cb-827c-ed47daaa04c0>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Structural engineering
Structural engineering is the field of civil engineering particularly concerned with the design of load-bearing structures. In practice, it is largely the implementation of mechanics to the design of structures, such as buildings, walls (including retaining walls), bridges, etc.
Structural engineers need to design structures so that while serving their useful function, they do not collapse, and do not bend, twist, or vibrate in undesirable ways. In addition they are
responsible for making efficient use of funds and materials to achieve these structural goals. Typically, apprentice structural engineers may design simple beams, columns, and floors of a new
building, including calculating the loads on each member and the load capacity of various building materials (steel, timber, masonry, concrete). An experienced engineer would tend to design more
complex structures, such as multi-storey buildings (including skyscrapers), or bridges.
Loads are generally classified as: "live loads" such as the weight of occupants and furniture in a building, the forces of wind or weights of water, and the forces due to an earthquake; or "dead
loads" such as the weight of the building itself.
Traditionally, structural engineering used careful placement of coordinate axes to simplify complex equations associated with tensor quantities such as stress and resulting displacements of
structural elements, such as beams. This simplification was essential to being able to solve problems. A successful engineer must design a structure to withstand the loads specified to be placed upon
it. As long as the design loads are not exceeded the structure should spring back when the load is lifted, or hold steady indefinitely. The advance of computer software has allowed many of the more
complicated calculations to be carried out more accurately and quickly.
One of the most straightforward mechanisms of analyzing structures is the method of statics in which Newton's laws of motion are used to determine the forces acting on the components of a structure,
generally by assuming that the material is rigid and uniform.
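For instance, the reactions of a simply supported beam follow directly from these equilibrium conditions (a hypothetical example with assumed values, sketched in Python):

    # Simply supported beam of span L carrying a point load P
    # at distance a from the left support.
    L, P, a = 6.0, 10.0, 2.0     # m, kN, m (assumed values)

    R_right = P * a / L          # moment balance about the left support
    R_left = P - R_right         # vertical force balance
    print(R_left, R_right)       # 6.67 kN and 3.33 kN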
Another mechanism, capable of handling more complicated situations, is the finite element method, which is capable of calculating forces in structures made of various materials with differing properties.
Circle Summary
A circle is defined as the set of all points in a plane that are a given distance from a fixed point in the plane. The fixed point is referred to as the center of the circle. The fixed distance,
which is the distance from the center of the circle to any point on the rim of the circle, is called the radius. The center of the circle is considered a point of symmetry. Any line through it is
called a line of symmetry. A circle has infinitely many lines of symmetry and one point of symmetry. Circles that have the same center but not the same radius are called concentric circles.
The circle belongs to the group of curves known as conic sections. The circle can be shown as the intersection of a plane that is perpendicular to the axis of the cone and a right circular cone...
Role of applications in modern mathematics
In older days, scientists were universalists, and philosophy, physics, and mathematics were parts of the same question: understanding the world. Nowadays one may get the feeling that the role of applications in the development of modern mathematics is negligible; of course, it depends on the field. The aim of the question is to get different opinions from different points of view.
Question 1 What is the role of applications in modern mathematics?
Question 2 Different countries have different mechanisms to stimulate interaction between mathematics and applications - what are these mechanisms, and what are their advantages and disadvantages?
Question 3 (for pure mathematicians) What is your personal stance on applications? Is it outside your scope of interests, or do you have (or try to have) some contact with applications?
applications soft-question big-list
9 It might be better to ask each question separately, since the questions are very different. – Ben McKay Mar 6 '13 at 20:16
1 META discussion tea.mathoverflow.net/discussion/1551/… – Alexander Chervov Mar 9 '13 at 7:37
add comment
closed as not constructive by Federico Poloni, Douglas Zare, Felipe Voloch, Emil Jeřábek, Mark Sapir Mar 9 '13 at 1:48
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate,
arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.If this question can be reworded to fit the rules
in the help center, please edit the question.
4 Answers
active oldest votes
Question 1 What is the role of applications in modern mathematics ?
I don't know if there is such a thing as THE role of applications, but I think it's a great motivation for developing a theory and/or solving a particular problem if it has an application
outside of mathematics.
Then again, the same can be said about applications to different branches of mathematics. Besides, applications are not the only source of motivation to do mathematics. Perhaps, one thing
real-life applications play an important role in is to make it easier for non-mathematicians to see that mathematics isn't pretentious junk art with no practical or intelectual value. This
isn't to say that mathematics with no real-life applications is junk art. But if it has a practical application, it'd be a lot easier to defend your own work from such criticisms from
Question 2 Different countries have different mechanisms to stimulate interaction between mathematics and applications - what are these mechanisms and what are their advantages and
disadvantages ?
I don't know much about other countries and other branches of mathematics, but it appears that in Japanese universities discrete mathematicians tend to work at non-math departments
(typically but not exclusively departments of computer science or computer engineering) slightly more often than in some other countries, regardless of whether their work has immediate applications to the respective fields. And when I say discrete mathematics, I include some branches that some may not consider discrete mathematics, such as number theory and algebra.
From graduate students' perspective, this is a great thing because you can work on "mathematics for the sake of mathematics" if that's your thing while naturally getting a lot of exposure to
the applied side of mathematics. For example, I finished my undergraduate study at a traditional mathematics department and went to the graduate school of mathematics of the same university.
Then a couple years later, I transfered to the information science department at a different university. I didn't change my research topic during my Ph.D. program, but this transfer
definitely influenced my view and attitude toward mathematics and gave me a lot of opportunities to learn stuff outside mathematics, which I don't think would have occurred, at least not to
the same degree, if I stayed at the math department.
The negative side effect I hear from faculties is that it makes it harder to get graduate students with strong backgrounds in mathematics because math majors tend to apply for math graduate
programs like I did at first.
Question 3 (for pure mathematicians)
Hmm... I don't know how you define that pure mathematicians thing, but looking at my own publication list, I guess I'm not pure or innocent anymore. Oh, well.
These questions are too broad, and not very clearly stated. First of all what's "an application"? Application to what? Is application to some "physics" which itself has little or no
connection to real world (like string theory) counted as an application? We all know examples when this kind of "physics" stimulate a lot of interesting mathematics. Is an application in a
completely different area of mathematics counted as an application?
Or only an application which brings profit is counted?
Anyway, no matter how one defines an application, my answers are these.
1. Outside applications play an important role for mathematics as a whole. Probably as important as it was in the past. There is a huge (and not very well defined) part of mathematics
which is called "applied mathematics". This usually does not include "mathematical physics" (at least in the US. But this is a matter of labels).
What goes under the label of pure mathematics, also frequently has applications, sometimes very important.
Second question. The mechanism that I know is financial. Those who distribute money want "applications". In many cases they do not really understand what it means but they want it to be called this way. Various financing is available for applications, real and imaginary. Sometimes there is administrative pressure. But some money is usually involved behind it.
Third question. I qualify myself as a "pure mathematician". I define this as follows: the main criterion for choice of a problem to work on is usually aesthetic; I just like the problem.
An alternative consideration is that a problem has potential applications in the real world. This is rarely a motivation for me.
Sometimes my results find applications in various areas, like material science, computer science (always unexpected to me). By "applications" here I mean that scientists from other
sciences (who do not call themselves mathematicians) sometimes cite and use my results. I understand that parts of other sciences can be also very remote from the real world. But I am
always pleased when people from outside use my results.
Sometimes an applied mathematician or non-mathematician asks a math question. I always try to help if I can. Sometimes I can. Sometimes the problem is even mathematically interesting. This
is also very pleasant. In my youth, I was sometimes involved in "applied research" for money and other benefits. I did not like it. I'd rather teach to make my living.
Answer to Question #3: As a pure mathematician I know that what I do has an application, but I don't know what that is. Would I be curious how it's applied? You're damn right I would, for
knowing would help me to live and better judge my own place in history. Considering that the old dichotomy of the contexts of justification and discovery has been shown to be a false way of
looking at the sciences, even the purest of mathematicians should be interested in the history of their own ideas!
Paul
The question is vague, and the answer could be used against "pure" mathematicians, especially by people in administrative positions.
First of all, let me say that as far as "applied math" is concerned, I am on the same page with the late V.I. Arnold who liked to say that there is no applied math, there are only
applications of math. I say this as a person who started as a hard core "applied mathematician", doing optimal control.
At some point I found the experience intellectually unsatisfying and I moved on to "pure" math. This is not a value judgement, it's a matter of taste. I still keep an eye open towards
"applications", because I sometime see glimpses of interesting math.
Like other posters I am confused by the term application. Does applying analysis to answer e.g. a famous topology question count as application? (Perelman comes to mind.)
Do Persi Diaconis' card tricks (backed by highly nontrivial math) count as applications?
Do we rank applications according to the number of zeros in a research grant?
Thank you for the answer. My question is about applications of math outside math. I do not see the sense of specifying "application" very precisely - I hope everybody understands the vague meaning and that is enough. I "assume good will" - if someone thinks it is worth writing in the answer about what he thinks deserves to be shared with the community - go on... –
Alexander Chervov Mar 7 '13 at 11:33
Description of the DLR TAU Code
Grid partitioning
For parallel computations the grids are partitioned into the requested number of domains at the start of the simulation. For this, a simple bisection algorithm is employed. The load balancing is
performed on edge- and point weights which are adjusted to the needs of the solver, which is the most time consuming part of the simulation system. During the migration of the grid domains onto the
different processes the communication tables are stored in the grid partition, which contains the necessary information for updating data on points in the overlap region between a domain and its
neighbours. For adapted grids the grid hierarchy, which contains the information for grid de-refinement, is distributed over the domains as well.
After the grid partitioning, all other modules of TAU, which are described below, compute the requested data for a single domain per process. Grid re-partitioning is performed either if the grid was locally (de-)refined in an adaptation or if the number of domains is changed, when a simulation is restarted on a different number of CPUs.
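To give an idea of the principle, a toy recursive coordinate bisection is sketched below in Python (the real TAU partitioner balances edge and point weights; this version only balances point counts and assumes a power-of-two number of domains):

    import numpy as np

    def bisect(points, ndomains):
        # Split the point cloud along its widest axis at the median,
        # recursively, until the requested number of domains is reached.
        if ndomains == 1:
            return [points]
        axis = np.argmax(points.max(axis=0) - points.min(axis=0))
        order = np.argsort(points[:, axis])
        half = len(points) // 2
        left, right = points[order[:half]], points[order[half:]]
        return bisect(left, ndomains // 2) + bisect(right, ndomains // 2)

    pts = np.random.default_rng(1).random((1000, 3))
    print([len(d) for d in bisect(pts, 8)])  # roughly equal domain sizes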
Pre-processing needs to be employed once for a given primary grid. It computes the dual grid, composed of general control volumes, from the primary elements [2]. They are stored in an edge-based data
structure, which makes the solver independent of the element types of the primary grid. All metrics are given by normal vectors, representing size and orientation of the faces, the geometric
coordinates of the grid nodes and the volumes of the auxiliary cells (i.e. dual cells). The connectivity of the grid is given by linking the two nodes on both sides of each face to the corresponding
edge from the primary grid elements. In order to enable the use of a multi-grid technique the agglomeration approach [16] is employed to obtain coarse grids by fusing fine grid control volumes
together. As the coarse grids employ the same type of metric description as the fine dual grids, the solution on coarse grids can be computed with the same approach as on the finest grid. The
transfer operators needed for the communication between the different grids are obtained directly during the agglomeration process.
In order to optimize the solver efficiency the edges of the dual grid are sorted such that cache-loads are minimised in the flux-computation part of the solver. The point indices are re-ordered to
optimise memory and cache-line accesses as far as possible. This optimisation reduces the solver runtime to less than half for standard PC-architectures.
For use in turbulence models, wall distances are computed for each grid point, and regions of laminar flow are flagged depending on user input or on the result of a transition prediction method.
Fig. 1: Convergence behaviour of the hybrid TAU-Code for calculations of viscous flow around a delta wing at M=0.5, alpha=9°. Comparison shows baseline Runge-Kutta scheme (RK) and implicit LU-SGS
As the explicit approach leads to severe restrictions of the CFL number which in turn often resulted in slow convergence, especially in case of large scale applications an implicit approximate
factorization scheme has recently been implemented [17], in order to improve the performance and robustness of the solver. The LU-SGS (Lower-Upper Symmetric Gauss-Seidel) scheme has been selected
because this method has low memory requirements, low operation counts and can be parallelized with relative ease. Compared to the explicit Runge-Kutta method, the LU-SGS scheme is stable with almost
no time step restrictions. An example of the performance improvement achieved is given in Fig. 1, where two convergence histories for viscous calculations on a delta wing are shown. The calculations
were performed with multi-grid on 16 processors of a Linux cluster. The figure shows the residual and the rolling moment against iteration count. In terms of iterations LU-SGS can be seen to converge
approximately twice as fast as the Runge-Kutta scheme. Furthermore, one iteration of LU-SGS costs roughly 80% of one Runge-Kutta step. This results in a reduction of the overall calculation time by a
factor of 2.5.
For time accurate computations the dual time stepping approach of Jameson is employed. As the solver also respects the geometric conservation law, both grid deformation and bodies in
arbitrary motion can be simulated.
Grid Adaptation
In order to efficiently resolve detailed flow features, a grid adaptation algorithm for hybrid meshes based on local grid refinement and wall-normal mesh movement in semi-structured near-wall layers
was implemented [2]. This algorithm has been extended to allow also for de-refinement of earlier refined elements thus enabling the code to be used for unsteady time-accurate adaptation in unsteady
flows [18]. Fig. 2 gives an example of the de-/refinement process for the flow in a shock tube: After a second diaphragm to the right of the depicted area is broken, a complex interaction between
shock waves and expansions begins. The figures display the density gradients resulting from the simulation as well as the grid which is automatically adapted to those gradients in each time step.
This local refinement approach greatly reduces the number of grid points needed for the total simulation compared to a simulation on globally refined grids. Thus, given a limited computer memory it
allows better resolution at the cost of additional CPU time (usually below 20 %) for adaptation per time step. Compared to the same resolution on globally refined grids this reduces the CPU and
memory requirements considerably.
Fig. 2: Dynamic mesh refinement and de-refinement for the flow in a shock tube 50, 70 and 110 ms after breaking of the second diaphragm. (left: computed Schlieren pictures, right: grid development)
Grid Deformation
A grid deformation tool is used to account for moderate changes of the geometry, defined e.g. by an optimization technique during shape design or by the structural response of the geometry on
aerodynamic loads in a coupled simulation, e.g. [13],[14]. An algebraic method has been developed in order to avoid time consuming iterative solutions of equations based e.g. on linear elasticity or
spring analogy. The displacements which are the input for the deformation tool and the rotation of surface points are transported into the interior of the grid by an advancing front technique.
Depending on the ratio between the local point displacement and the cell size, the displacement is reduced by some fraction in each step of the front. The procedure ends when no more grid points are
moved during a sweep. In parallel computations, because displacements can arrive from neighbouring domains, the sweeps are continued until the grid no longer changes. Since a single sweep requires
almost negligible effort, this is not a significant drawback of the parallel mode, where usually on the order of 10 to 20 sweeps are needed.
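A minimal Python sketch of the advancing-front idea follows; the damping law used here (attenuation by the ratio of displacement to local cell size) is an assumption for illustration, since the exact reduction fraction used in TAU is not given in the text:

```python
import numpy as np
from collections import deque

def propagate_displacements(neighbours, cell_size, disp, surface_points, tol=1e-9):
    """Advancing-front transport of surface displacements into the volume.
    neighbours[p] lists the dual-grid edge neighbours of point p, and disp
    maps points to displacement vectors (illustrative damping law below)."""
    front = deque(surface_points)
    while front:                      # sweeps continue until nothing moves
        p = front.popleft()
        d = np.linalg.norm(disp[p])
        if d == 0.0:
            continue
        for q in neighbours[p]:
            # attenuate by the ratio of displacement to local cell size
            candidate = disp[p] * d / (d + cell_size[q])
            if np.linalg.norm(candidate) > np.linalg.norm(disp[q]) + tol:
                disp[q] = candidate
                front.append(q)
    return disp
```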
This algebraic method is robust enough for small and sometimes also for medium displacements and can handle, e.g., wing tip deflections of one or several chord lengths. It has been observed that the
limit, i.e. the deflection which results in the collapse of a first cell, can be extended considerably if it is accepted that a few more cells collapse. Repairing these cells with another algorithm in
a second stage of the deformation increases the robustness considerably and allows for large grid deformations. In this algorithm, each region in the grid containing collapsed cells is marked such
that it is bounded by valid cells only. With the shape of this boundary in both the deformed and the non-deformed grid a transformation can be computed applying the volume spline technique, which
allows rebuilding the collapsed cells as images of the original ones. As long as these regions remain small the additional computational costs for the local volume spline is low.
Fig. 3 indicates that this robust approach allows going beyond realistic deformations. It shows the maximum possible wing tip deflections for a hybrid grid composed of 2.5×10^6 points. The CPU time
required on a single Opteron CPU is less than 2 minutes for small deflections (of about a chord length) and less than 10 minutes for the maximum wing tip deflection.
Fig. 3: (Left) Maximum wing tip deflections and details of the deformed hybrid grid for the DLR F6 configuration (available from the second DPWS); (Right) Maximum wing tip deflections in parallel
(left) and sequential mode (right)
CHIMERA technique
As the Chimera technique has been recognized as an important feature to efficiently simulate manoeuvring aircraft, it has also been integrated into the TAU-Code [19]. In the context of hybrid meshes
the overlapping grid technique allows an efficient handling of complex configurations with movable control surfaces. For the point update on chimera boundaries linear interpolation based on a finite
element approach is used in case of tetrahedral mesh elements. For other types of elements (prisms, hexahedrons, pyramids) either linear interpolation is performed by splitting the elements into
tetrahedrons or non-linear interpolation for the different element types is used. The search for cells which are used for interpolation is performed using the data structure of an alternating digital
tree. The current implementation of the Chimera technique can handle both steady and unsteady simulations for inviscid and viscous flows with multiple moving bodies and is also available in parallel
mode. Applications of this technique in TAU can be found e.g. in [5],[6],[14].
Transition and Turbulence modelling
The turbulence models implemented within the TAU code include linear as well as non-linear eddy viscosity models spanning both one- and two-equation model families.
The standard turbulence model in TAU is the Spalart-Allmaras model with Edwards modification, yielding highly satisfactory results for a wide range of applications while being numerically robust. The
k-ω model provides the basis for the two-equation models, of which the most widely used is probably the Menter SST model. Besides this, a number of different k-ω models, like Wilcox and Kok-TNT, are
available. Also nonlinear explicit algebraic Reynolds stress models (EARSM) and the linearized LEA model [20] have been integrated. The implementation of RSM models is ongoing work. A number of
rotation corrections for vortex dominated flows are available for the different models.
For a long time only the low-Reynolds formulation has been available in TAU for accuracy reasons, but recently model-specific “universal” wall functions have been introduced to achieve a higher
efficiency of the solver, especially for use in design or optimisation, as well as to allow for a “first quick look” at new configurations. Although tests are still ongoing, this very promising
approach [21] seems to be able to deliver nearly as good results as the low-Reynolds approach for pressure and skin friction distributions over a wide range of y+ values for the first cell height at
the wall, while saving up to 75 percent of computation time and 40 percent of memory.
Finally, there are options to perform Detached Eddy Simulations (DES) [22] based on the Spalart-Allmaras or the Menter SST model or the so-called Extra-Large Eddy Simulation (XLES) [23]. Since the
DES method is a development strategy that can be applied, in principle, to any eddy viscosity turbulence model, the original implementation within the TAU code required only the calculation of
additional terms and a suitable switch to activate the DES model within the calculation of the source term of the turbulence model. In computational terms, the overhead for the solution of a time
step using Detached Eddy Simulation is negligible compared to URANS in three-dimensional cases. However, the true cost arises from the need to sufficiently resolve the temporal scales such that
unsteadiness in a solution can grow in a physical way.
In order to allow for modelling of transitional flow the turbulent production terms are suppressed in regions which are flagged in the grid as being of laminar flow type. Flagging of laminar regions
can be done in the pre-processing by the definition of polygon-lines which encircle the laminar region on the surface grid and the definition of a maximum height over the surface. The polygon lines
for laminar regions can be defined by the user for simulations with fixed transition to turbulent flow or can be computed by a transition prediction module, which is under ongoing development and is thus
described in more detail in the following chapter.
TAU version for hypersonic and reacting flows
To extend the range of applicability of TAU to hypersonic [24] or high-enthalpy flows, additional modifications and extensions were introduced into the code some time ago, enabling e.g.
simulations of re-entry vehicles including chemical reactions of air as a five-component gas. These modifications range from stabilization of the solver for high Mach numbers over additions for
thermo-chemical equilibrium flows to the consideration of non-equilibrium gases. For the latter, additional conservation equations for the partial densities and the vibrational energies of the
species are introduced in the code, and to close the system, models for the state of the species as well as fits for their viscosity (Blottner) and the resulting heat conductivity (modified Eucken
correction) are taken into account. Furthermore, mixture rules (Wilke and Herning/Zipperer), diffusion (following Fick's law) and detailed chemistry based on the Arrhenius ansatz with thermal coupling
after Park are implemented together with thermal relaxation after Landau-Teller to allow for full thermo-chemical non-equilibrium simulations.
With respect to boundary conditions, walls with fully catalytic, finitely catalytic or non-catalytic surfaces can be considered, as well as radiation-adiabatic walls or effusion-cooled walls (porous walls).
Currently, databases are under construction to consider
• air plasma with 11 components (including electrical conductivity for MHD effects),
• a CO2 atmosphere (Mars) as well as
• H2-O2 combustion including probability density functions (PDF) for coupling of chemistry and turbulence.
Re: Detailed algorithms on inline optimization
Holger Siegel <holgersiegel74@yahoo.de>
Wed, 20 Jan 2010 01:57:06 +0100
From comp.compilers
From: Holger Siegel <holgersiegel74@yahoo.de>
Newsgroups: comp.compilers
Date: Wed, 20 Jan 2010 01:57:06 +0100
Organization: Compilers Central
References: 10-01-058
Keywords: optimize
Posted-Date: 19 Jan 2010 23:10:38 EST
On Monday, 18.01.2010, at 07:09 -0800, Peng Yu wrote:
> I'm looking for detailed algorithms on inline optimization. But I only
> find some document like http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html,
> which describes different ways of doing inline. Is 'inline' so trivial
> that it is not worthwhile to explain how it is implemented?
> If there is some reference that describe inline in details, could
> somebody let me?
> [Unless you're trying to in-line a recursive function, it's pretty straightforward. -John]
The paper 'Secrets of the Glasgow Haskell Compiler inliner' by Simon
Peyton Jones and Simon Marlow explains how inlining is implemented in
the Haskell compiler GHC:
"... The purpose of this paper is, therefore, to articulate the key
lessons we learned from a full-scale ``production'' inliner, the one
used in the Glasgow Haskell compiler. We focus mainly on the algorithmic
aspects, but we also provide some indicative measurements to
substantiate the importance of various aspects of the inliner."
You can find it at
It sketches some of the heuristics needed to avoid exponential bloat and
shows how one can deal with recursive declarations.
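For a flavor of the core mechanism (substitution of the callee's body at the call site, guarded by a size budget to avoid the exponential bloat the paper discusses), here is a minimal Python sketch over a toy expression language; this is illustrative only, not GHC's representation, and all names are made up:

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Call:
    fn: str
    args: list

def size(expr):
    """Crude size metric used as the inlining budget."""
    if isinstance(expr, Call):
        return 1 + sum(size(a) for a in expr.args)
    return 1

def substitute(body, params, args):
    """Replace formal parameters in the body with the actual arguments."""
    if isinstance(body, Var) and body.name in params:
        return args[params.index(body.name)]
    if isinstance(body, Call):
        return Call(body.fn, [substitute(a, params, args) for a in body.args])
    return body

def inline(expr, defs, budget=20, seen=frozenset()):
    """Inline calls whose bodies fit the budget; the `seen` set stops
    repeated inlining through (mutually) recursive definitions."""
    if isinstance(expr, Call):
        args = [inline(a, defs, budget, seen) for a in expr.args]
        if expr.fn in defs and expr.fn not in seen:
            params, body = defs[expr.fn]
            if size(body) <= budget:
                return inline(substitute(body, params, args),
                              defs, budget, seen | {expr.fn})
        return Call(expr.fn, args)
    return expr
```

Real inliners (GHC's included) add much more machinery on top of this: occurrence analysis, call-site context, and loop-breaker selection for recursive groups.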
American Mathematical Society
The Chronicle of Higher Education
Date: March 1, 1996
Section: Opinion
Page: B1
U. of Rochester Plan to Cut Mathematics Is Recipe for Disaster
By Arthur Jaffe, Joseph Lipman, and Morton Lowengrub
Financially beleaguered, the University of Rochester recently announced its "Renaissance Plan," designed to improve the institution's quality by reducing the student body by 20 per cent and the
faculty by 10 per cent, or 37 positions. Four graduate programs are to be terminated: mathematics, chemical engineering, comparative literature, and linguistics. Four others are to be reduced. The
faculty reductions will occur mostly in these eight departments, through attrition. Mathematics will be hit the hardest, shrinking from 21 to 10 faculty members.
Even though more than 70 per cent of Rochester's undergraduates enroll in calculus courses, Richard Aslin, Rochester's vice-provost and dean, says: "There are other ways to service our need for
calculus instruction, including the hiring of non-research adjunct faculty and/or the redirection of other qualified faculty from other disciplines."
The plan to downgrade mathematics at Rochester has produced an extraordinary wave of protest, not only from mathematicians, but also from well-known biologists, chemists, computer scientists,
economists, physicists, and others. Four Nobel laureates have agreed to serve on a 27-member task force, with representatives from the sciences and business, formed by the American Mathematical
Society to try to resolve the situation at Rochester. Four other Nobel laureates and several dozen members of the National Academy of Sciences are among the leaders in science, industry, and
education who have sent letters and resolutions to the Rochester administration.
The letter writers state forcefully that advances in their fields increasingly depend on sophisticated mathematical methods, which only active researchers in mathematics can teach properly. Some
characterize the plan to rely heavily on adjuncts and faculty members from other departments to teach calculus as a "recipe for disaster." Accomplished scholars in mathematics can offer students
inspiration, insights, and approaches that are not available from textbooks, computerized tutorials, or even from other scholars who do not devote their intellectual lives to the discipline. The
overall message is that a university cannot maintain a distinguished reputation in either research or teaching in the physical sciences and other quantitative areas without nurturing mathematics at
all levels.
The letter writers and members of the task force include past and present top administrators at leading universities, people who understand how limited resources require difficult choices. Like them,
we are well aware that most universities are in stringent financial circumstances, and we applaud Rochester's creativity in confronting its problems by restructuring itself, to give its
undergraduates a superior education while maintaining its character as a research university. But reducing a mathematics program of recognized excellence to the status of a service department is a
bad choice. It cannot serve the interests of students or help the university's reputation. It is like deciding to lose weight by cutting off a foot.
Not all academic subjects are equal. Without mathematics, science and technology would be in a primitive state. Mathematical concepts underlie our view of the physical world, and they pervade our
culture in many subtle ways, through disciplines such as economics, architecture, and even the fine arts.
Speculation about "mathematical truth" lies at the foundation of the philosophy of knowledge. Mathematics has been studied for more than 2,500 years, with an exponential rate of progress in the past
few decades. It is a universal human language: Modern scholars can still read mathematical texts written by Babylonian, Chinese, Greek, and Indian mathematicians thousands of years ago. Through
mathematics, we can understand phenomena on scales ranging from the subatomic to the structure of the universe itself -- phenomena that are otherwise unfathomable.
A shrinking job market for Ph.D.'s in the sciences and technology already has reduced the number of graduate students in mathematics and other disciplines nationwide, and more reductions will
certainly take place. Paring down a graduate program in mathematics is not unreasonable, but eliminating it totally at a prominent university like Rochester makes little sense.
Severe cuts in a mathematics department, like the ones planned at Rochester, are likely to drive the best mathematics faculty members to seek other jobs. New adjunct faculty members will not have a
long-term commitment to the department. This is hardly a situation conducive to high-quality instruction, or to outstanding research. Nor is such a department likely to attract talented new members
with fresh ideas. Furthermore, if other universities follow the lead Rochester is proposing, the consequences for the quality of American scientific and technological research overall could be severe.
The University of Rochester's president, Thomas H. Jackson, has rejected "the notion that tenure-track mathematicians and mathematics Ph.D. students ... are the only potential groups capable of
offering high-quality mathematics instruction." Indeed, why should it be better to have courses taught by graduate students, for example, than by adjuncts and faculty members in other departments who
may even have Ph.D.'s in mathematics? After all, at some institutions, departments such as business and engineering, which require students to take mathematics, already offer their own math courses.
We would argue that transmitting a discipline -- a mode of thinking, a "miniculture" -- to thousands of students is the task of a team, not of isolated individuals. If you needed brain surgery, would
you rather go to a hospital with a stable surgical team run by crack neurosurgeons, familiar with new developments in their area and involved in training residents? Or to one with a team of
dispirited surgeons -- many of them temporary employees -- with no high-level teaching program, and which saves money by assigning operations to surgeons who learned the basics of the brain when they
were younger, but who now spend most of their time on orthopedics?
It is disturbing that Rochester made such a drastic decision about a department without the benefit of careful external evaluation. In fact, many of the letters sent by prominent mathematicians
assert high regard for Rochester's mathematics department, and the administration acknowledges the presence of world-class mathematicians on its faculty.
Among the justifications given by the administration for its action are that "despite good intentions by several faculty in Math, undergraduate instruction is less than optimal," and that "linkages
with other departments and programs are minimal." The mathematics department has refuted these charges in detail, listing teaching innovations, comparing evaluations of mathematics instruction with
university-wide averages, and providing specific examples of instruction linked to other programs and collaboration with faculty members in other departments.
Whatever changes are desirable in the role played by mathematics at Rochester will not be brought about by crippling the program. The mathematics department has given the administration a plan for
more contact between faculty members and students, and for further links with other departments. Even if the department were cut back by 10 per cent, in line with the proposed university average, it
could effectively implement this plan, preserve its existing strengths, and support Rochester's restructuring goals.
We urge the administration at Rochester to accept a limited reduction, such as that proposed by the mathematics department. We do not believe that eliminating graduate education in mathematics makes
sense for any university in the front ranks of research in science and technology.
Arthur Jaffe is a professor of mathematics and physics at Harvard University and president-elect of the American Mathematical Society. Joseph Lipman is a professor of mathematics at Purdue University
and chair of the American Mathematical Society's Committee on the Profession. Morton Lowengrub is dean of the College of Arts and Sciences at Indiana University, and chair of the American
Mathematical Society's Task Force on Excellence in Mathematics Scholarship.
Copyright (c) 1996 by The Chronicle of Higher Education, Inc.
Title: U. of Rochester Plan to Cut Mathematics Is Recipe for Disaster
Published: 96/03/01 | {"url":"http://www.ams.org/news/chronicle_3-1","timestamp":"2014-04-19T18:08:24Z","content_type":null,"content_length":"45927","record_id":"<urn:uuid:27eef0e2-ec83-4687-9170-b3d63136d679>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
primitive roots proof
November 3rd 2009, 04:21 PM #1
Show that if m is a number having primitive roots then the product of the positive integers less than or equal to m and relatively prime to it is congruent to -1 (mod m)
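Not a proof, but a quick numerical sanity check of the statement (this is Gauss's generalization of Wilson's theorem); the Python below is purely illustrative:

```python
from math import gcd

def has_primitive_root(m):
    """m has a primitive root iff some unit g has multiplicative order phi(m)."""
    units = [a for a in range(1, m + 1) if gcd(a, m) == 1]
    phi = len(units)
    for g in units:
        x, order = 1, 0
        while True:
            x = (x * g) % m
            order += 1
            if x == 1:
                break
        if order == phi:
            return True
    return False

for m in range(3, 50):
    prod = 1
    for a in range(1, m + 1):
        if gcd(a, m) == 1:
            prod = (prod * a) % m
    if has_primitive_root(m):
        assert prod == m - 1, m   # product is congruent to -1 (mod m)
```

For moduli without primitive roots (e.g. m = 8, 12, 15) the product comes out to +1 instead, which is the other half of Gauss's theorem.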
Kalman Filter and INS
There are 16 messages in this thread.
Re: Kalman Filter and INS - Malachy Moses - 2009-04-15 00:49:00
> Two questions:
> One: Why is acceleration a state? I thought you were measuring it.
Acceleration should definitely be one of the states. Simply because
you are measuring acceleration does not somehow disqualify it from
also being a state.
> Two: If you _are_ measuring acceleration, how do you know what direction
> it's coming from? IOW with an accelerometer you can measure
> acceleration with respect to your vehicle frame of motion, but doesn't
> that rotate with respect to the local inertial frame?
As posed by the OP, the problem is one of estimating distance
travelled along the frame of reference defined by the heading of the
vehicle/robot/whatever. Currently, the OP has not asked for an
estimation of current position in some inertial frame.
If it were, then your point is completely valid, and changes the
design of the state model etc., such that roll pitch and heading etc
would probably form part of the state vector.
In answer to the original question, which is:
"Here is what i have, a measurement of a distance from an external
sensor at
a lower sampling rate and acceleration measurements from accelerometer
(IMU at a higher rate). How can i use a kalman filter to fuse these to
produce a good estimate of the position travelled."
Ans: Change your H matrix each time interval so that it matches to the
current measurement(s) being taken. For example, if your current
measurement is a high-rate measurement of acceleration only, then your
H matrix is [0 0 1]. If your current measurement is both position and
acceleration, then your H matrix is [1 0 1] (correct dimensionality
omitted for purposes of emphasis).
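To make the above concrete, here is a minimal Python sketch of a multirate filter with a switching H matrix; the state is [position, velocity, acceleration], the noise values are illustrative assumptions, and read_accel()/read_dist() are hypothetical sensor reads:

```python
import numpy as np

def predict(x, P, F, Q):
    """Propagate state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Incorporate one measurement z with model z = H x + noise."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

dt = 0.01                                    # high-rate (accelerometer) step
F = np.array([[1.0, dt, 0.5 * dt**2],        # constant-acceleration model
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
Q = 1e-4 * np.eye(3)                         # illustrative process noise
H_acc = np.array([[0.0, 0.0, 1.0]])          # accelerometer observes acc
H_pos = np.array([[1.0, 0.0, 0.0]])          # distance sensor observes pos
R_acc, R_pos = np.array([[0.1]]), np.array([[0.5]])

x, P = np.zeros(3), np.eye(3)
for k in range(1000):
    x, P = predict(x, P, F, Q)
    # read_accel()/read_dist() are hypothetical stand-ins for real sensors
    x, P = update(x, P, read_accel(), H_acc, R_acc)   # every step
    if k % 100 == 0:                                  # slow position fix
        x, P = update(x, P, read_dist(), H_pos, R_pos)
```

Doing one predict and then sequential updates is valid when the measurement noises are independent, which is why the low-rate position fix can simply be folded in whenever it arrives.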
Re: Kalman Filter and INS - Tim Wescott - 2009-04-15 01:22:00
Malachy Moses wrote:
>> Two questions:
>> One: Why is acceleration a state? I thought you were measuring it.
> Acceleration should definitely be one of the states. Simply because
> you are measuring acceleration does not somehow disqualify it from
> also being a state.
That would depend on how you're modeling the system, and how you're
measuring acceleration. If the higher-order inputs and dynamics are
unknown then you probably want to treat acceleration as a measurement,
not a state.
>> Two: If you _are_ measuring acceleration, how do you know what
>> it's coming from? IOW with an accelerometer you can measure
>> acceleration with respect to your vehicle frame of motion, but doesn't
>> that rotate with respect to the local inertial frame?
> As posed by the OP, the problem is one of estimating distance
> travelled along the frame of reference defined by the heading of the
> vehicle/robot/whatever. Currently, the OP has not asked for an
> estimation of current position in some inertial frame.
As posed by the OP, frames of reference are not mentioned. In other
posts on similar topics this same OP has shown confusion about the need
for tracking heading along with position and velocity.
> If it were, then your point is completely valid, and changes the
> design of the state model etc., such that roll pitch and heading etc
> would probably form part of the state vector.
> In answer to the original question, which is:
> "Here is what i have, a measurement of a distance from an external
> sensor at
> a lower sampling rate and acceleration measurements from accelerometer
> (IMU at a higher rate). How can i use a kalman filter to fuse these to
> produce a good estimate of the position travelled."
> Ans: Change your H matrix each time interval so that it matches to the
> current measurement(s) being taken. For example, if your current
> measurement is a high-rate measurement of acceleration only, then your
> H matrix is [0 0 1]. If your current measurement is both position and
> acceleration, then your H matrix is [1 0 1] (correct dimensionality
> omitted for purposes of emphasis).
Or find a good treatment of a mixed continuous/discrete Kalman filter,
extend it to a multi-rate filter, and find how you can avoid a lot of
computations when you're not getting position fixes.
Tim Wescott
Wescott Design Services
Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
Re: Kalman Filter and INS - arvkr - 2009-04-15 12:29:00
>As posed by the OP, frames of reference are not mentioned. In other
>posts on similar topics this same OP has shown confusion about the need
>for tracking heading along with position and velocity.
Thanks for all the replies, guys. As Tim has mentioned, I have had other
posts in the forum about using just an accelerometer (3-axis) to
measure distance without any external sensor, and that idea, for valid
reasons, was quashed (e.g., the bias grows as 1/2*t^2 when
integrated). I am relatively new to INS (less than 6 weeks), and I
am involved in a project developing a low-cost INS to measure the
distance travelled, hence I am trying to get as much info as possible.
The IMU I am using spits out 3-axis accelerometer and gyro readings. This,
coupled with another external sensor I have and a 3D compass, is what I am
using to develop a low-cost, robust INS.
Ideally I would have liked to just use the MEMS unit without any external
sensors for ease of installation, but as I mentioned, Tim and others who have
more experience than I do in this have strongly suggested that's not
possible with a low-cost IMU.
Can you guys think of any non-contact speed sensors with which I might be able
to replace my current speed sensor, which has a somewhat tedious installation?
Re: Kalman Filter and INS - Tim Wescott - 2009-04-15 21:31:00
On Wed, 15 Apr 2009 11:29:00 -0500, arvkr wrote:
>>As posed by the OP, frames of reference are not mentioned. In other
>>posts on similar topics this same OP has shown confusion about the need
>>for tracking heading along with position and velocity.
> Thanks for all the replies, guys. As Tim has mentioned, I have had other
> posts in the forum about using just an accelerometer (3-axis) to
> measure distance without any external sensor, and that idea, for valid
> reasons, was quashed (e.g., the bias grows as 1/2*t^2 when
> integrated). I am relatively new to INS (less than 6 weeks), and I
> am involved in a project developing a low-cost INS to measure the
> distance travelled, hence I am trying to get as much info as possible.
> The IMU I am using spits out 3-axis accelerometer and gyro readings. This,
> coupled with another external sensor I have and a 3D compass, is what I am
> using to develop a low-cost, robust INS.
> Ideally I would have liked to just use the MEMS unit without any external
> sensors for ease of installation, but as I mentioned, Tim and others who
> have more experience than I do in this have strongly suggested that's not
> possible with a low-cost IMU.
> Can you guys think of any non-contact speed sensors with which I might be
> able to replace my current speed sensor, which has a somewhat tedious
> installation?
If you have a 6-axis IMU, a distance measurement from a reference point,
and a vehicle that's moving enough, you should be able to settle to a
trajectory that is accurate but for a cylindrically symmetric uncertainty
centered around the reference point (i.e., you'll have the right shape
with respect to the reference point but you won't know which way north is).
Add in a compass or distances to two references, and you should be there.
Re: Kalman Filter and INS - arvkr - 2009-04-16 13:52:00
>If you have a 6-axis IMU, a distance measurement from a reference point,
>and a vehicle that's moving enough, you should be able to settle to a
>trajectory that is accurate but for a cylindrically symmetric uncertainty
>centered around the reference point (i.e., you'll have the right shape
>with respect to the reference point but you won't know which way north
>is).
>Add in a compass or distances to two references, and you should be there.
Thanks Tim. I have a slightly off-topic question: do you know of any good
Doppler-radar-based speed sensors with the lowest power consumption and the
smallest size, etc.?
Re: Kalman Filter and INS - Vladimir Vassilevsky - 2009-04-16 14:01:00
Tim Wescott wrote:
> If you have a 6-axis IMU, a distance measurement from a reference point,
> and a vehicle that's moving enough, you should be able to settle to a
> trajectory that is accurate but for a cylindrically symmetric uncertainty
> centered around the reference point (i.e., you'll have the right shape
> with respect to the reference point but you won't know which way north
> is).
> Add in a compass or distances to two references, and you should be there.
But would the INS be of any use if you have a compass (or a steering wheel
position sensor) and an odometer? Navigation by trivial dead
reckoning is many orders of magnitude more accurate than a MEMS INS.
Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant | {"url":"http://www.dsprelated.com/showmessage/111654/2.php","timestamp":"2014-04-19T12:03:09Z","content_type":null,"content_length":"32641","record_id":"<urn:uuid:b2077e47-0b77-4b23-b912-74888ff498ec>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area Tool
Use this tool to determine how the length of the base and the height of a figure can be used to determine its area. Can you find the similarities and differences between the area formulas for
trapezoids, parallelograms, and triangles?
Investigate trapezoids, parallelograms, and triangles, using the tabs along the top of the frame. Drag the vertices to explore different sizes of each shape.
The "Add to Table" button can be used to record data in the table, which may be useful in trying to identify patterns.
Investigate various sizes for each of the different shapes.
• What is the formula for finding the area of a trapezoid? How is the length of the midline involved in the formula?
• What is the formula for finding the area of a parallelogram? How is it related to the formula for finding the area of a rectangle?
• What is the formula for finding the area of a triangle? How is it related to the formula for finding the area of a rectangle? | {"url":"http://illuminations.nctm.org/Activity.aspx?id=3567","timestamp":"2014-04-20T14:15:13Z","content_type":null,"content_length":"33403","record_id":"<urn:uuid:7450206e-f051-4a3f-8575-031fd26df88d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
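For reference, the standard formulas the activity leads toward (with b denoting base lengths and h the height) are:

```latex
A_{\text{trapezoid}} = \frac{b_1 + b_2}{2}\,h
\qquad
A_{\text{parallelogram}} = b\,h
\qquad
A_{\text{triangle}} = \tfrac{1}{2}\,b\,h
```

The factor (b1 + b2)/2 in the trapezoid formula is exactly the length of the midline, and the triangle formula is half of the parallelogram on the same base and height, which is the connection the last two questions point at.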
Converting Cartesian coordinate term to spherical
June 10th 2011, 08:12 PM
Converting Cartesian coordinate term to spherical
I am working through a problem where the vector has a Cartesian term and want to convert it to spherical:
(Sorry, I couldn't figure out how to use the equation editor)
sqrt(x^2+y^2)/sqrt(x^2+y^2+z^2) (x component of vector)
I know:
x=rsin\theta cos\phi
y=rsin\theta sin\phi
I also know the answer is:
(rsin\phi )/r = sin\phi
I can't figure out how rsin\phi = sqrt(x^2+y^2) in the answer.
June 10th 2011, 08:28 PM
Also sprach Zarathustra
I am working through a problem where the vector has a Cartesian term and want to convert it to spherical:
(Sorry, I couldn't figure out how to use the equation editor)
sqrt(x^2+y^2)/sqrt(x^2+y^2+z^2) (x component of vector)
I know:
x=rsin\theta cos\phi
y=rsin\theta sin\phi
I also know the answer is:
(rsin\phi )/r = sin\phi
I can't figure out how rsin\phi = sqrt(x^2+y^2) in the answer.
How is it possible that the first red is NOT an equation and the second red is an equation?
June 10th 2011, 08:55 PM
The second red, just goes one step further to simplify the answer.
June 10th 2011, 09:02 PM
Also sprach Zarathustra
June 10th 2011, 09:10 PM
I'll ask the question this way: how did the left side convert to the right side? Cartesian to spherical
sqrt(x^2+y^2)/sqrt(x^2+y^2+z^2) = [r sin(phi)]/r
June 11th 2011, 03:49 AM
sqrt(x^2 + y^2 + z^2) = r. That's where the denominator on the right comes from.
x = r cos(theta) sin(phi), y = r sin(theta) sin(phi), so x^2 = r^2 cos^2(theta) sin^2(phi) and y^2 = r^2 sin^2(theta) sin^2(phi).
x^2 + y^2 = r^2 cos^2(theta) sin^2(phi) + r^2 sin^2(theta) cos^2(phi) = r^2 sin^2(phi) (cos^2(theta) + sin^2(theta))
= r^2 sin^2(phi). That's the numerator.
June 11th 2011, 10:15 AM
Thanks for the help.
Did you mean for the second term in the x^2+y^2 equation to be sin^2(phi) instead of cos^2(phi)?
Second term: r^2 sin^2(theta) cos^2(phi)
You used; x= r cos(theta) sin(phi)
My electromagnetics book uses; x= r cos(phi) sin(theta)
So I get sin(theta) as an answer, the correct answer is sin(phi). Am I missing something?
Theta is defined as the angle between the z-axis and the position vector.
Phi is measured from the x-axis to the plane of the vector. | {"url":"http://mathhelpforum.com/calculus/182812-converting-cartersian-coordinate-term-spherical-print.html","timestamp":"2014-04-20T13:51:56Z","content_type":null,"content_length":"8633","record_id":"<urn:uuid:f7e8734e-02e5-4359-8f18-374d8ca11ea6>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
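For the record, the discrepancy in this thread comes down to the two common naming conventions for the spherical angles (a standard fact, added here for clarity):

```latex
\text{Convention A (}\phi\text{ polar): } x = r\cos\theta\sin\phi,\quad y = r\sin\theta\sin\phi,\quad z = r\cos\phi
\;\Longrightarrow\; \frac{\sqrt{x^2+y^2}}{\sqrt{x^2+y^2+z^2}} = \sin\phi.

\text{Convention B (}\theta\text{ polar): } x = r\sin\theta\cos\phi,\quad y = r\sin\theta\sin\phi,\quad z = r\cos\theta
\;\Longrightarrow\; \frac{\sqrt{x^2+y^2}}{\sqrt{x^2+y^2+z^2}} = \sin\theta.
```

Both answers are therefore correct under their respective conventions: the answer key's sin(phi) presumes Convention A, while the OP's electromagnetics book (theta measured from the z-axis) gives sin(theta).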
Marblehead SAT Math Tutor
Find a Marblehead SAT Math Tutor
...I also enjoy helping others to master different types of questions. I have several years part-time experience holding office hours and working in a tutorial office. I majored in philosophy and
mathematics, which has given me a broad exposure to the kinds of material on a high school equivalency test.
29 Subjects: including SAT math, English, reading, writing
...I scored a 32 on the ACT English section during my senior year of high school. I scored a 32 overall on my exam. English has always been one of my strong courses - I also placed out of several
English requirements in college due to my taking and excelling in AP English in high school.
19 Subjects: including SAT math, Spanish, English, geometry
...As they progress I help them see how these particular examples and problems fit into the big ideas they are studying. I have also found that study skills and organization play a large role in
students' academic success, and that certain study techniques are particularly useful for math and physics. I like to check my students' notebooks and give them guidelines for note-taking and
9 Subjects: including SAT math, physics, calculus, geometry
...It show a tremendous amount of courage to face this type of challenge. I find that the most difficult aspect of the process for students is getting past the fear of facing material that is
unfamiliar to them. I believe that working together we can build both skills and confidence and that students will find that they have abilities that they never realized they possessed.
64 Subjects: including SAT math, reading, English, geometry
...My experience with tutoring began in college where I took a student position at the University's Learning Center. I worked with college level students, some with minor learning disabilities,
on a one-to-one basis throughout the week. Some students came in only as needed while others had recurring appointments to meet with me.
49 Subjects: including SAT math, English, reading, calculus | {"url":"http://www.purplemath.com/Marblehead_SAT_Math_tutors.php","timestamp":"2014-04-17T07:47:28Z","content_type":null,"content_length":"24155","record_id":"<urn:uuid:37895cd5-2a9f-4680-adb0-cad668ec4524>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
User bobuhito
Apr 9 · comment on "coin reversal puzzle with one hand and two stacks": @Zack In similar words, I want to reverse one LIFO queue (i.e., a stack) using two other LIFO queues, but the reversed data needs to be in the original LIFO queue and the other two queues cannot feed each other directly. I'll give mathematicians a chance first because I think this is quite difficult and perhaps no algorithm is even known.
Apr 9 · revised "coin reversal puzzle with one hand and two stacks": removed constraint on emptying hand since it was not needed
Apr 9 · asked "coin reversal puzzle with one hand and two stacks"
20 · awarded Yearling
Jan 16 · comment on "Consecutive Primes mod 3": @quid As Greg said there, that effect goes to zero for large primes, so it wouldn't apply here (I'm looking for effects that stay finite in the asymptotic limit).
Jan 16 · comment on "Consecutive Primes mod 3": Just considering consecutive pairs, has anyone proven that there is no asymptotic correlation? For example, is it impossible to have "21" and "12" each occur with a 30% rate, and "11" and "22" each occur with a 20% rate?
Jan 16 · comment on "Consecutive Primes mod 3": Is Shiu's result based on hypothesis or pure theory? I can't find the reference from Google and want to try to understand if this is proven with no assumptions.
Jan 16 · comment on "Consecutive Primes mod 3": Interesting. Let's say the sequence is "1212222121221111122..." from which I naturally calculate a "predomination sequence", P, as "1111222222222222112...". I can't use your summation (since the starting prime is considered secret), but I suppose that I could calculate the average P and see that it is greater than, for example, 1.501, even as my starting secret prime goes to infinity. Of course, this is all based on hypotheses, so I wonder if someone has tried to measure this limit.
Jan 16 · asked "Consecutive Primes mod 3"
Jan 15 · accepted an answer to "Joint Modular Distribution of Primes"
Jan 15 · comment on "Joint Modular Distribution of Primes": I really wanted to pose a question about the independence across consecutive primes too, but was vague...so, I'll mark your answer as correct and open a new question.
Jan 15 · asked "Joint Modular Distribution of Primes"
1 · awarded Caucus
Aug 30 · comment on "Solving a System of Quadratic Equations": I have 6/7 cases like the one I posted (but also set j=f=0 like I said in a different comment). Thanks for the continuing effort!
Aug 22 · comment on "Solving a System of Quadratic Equations": Oh, I get it now and Mathematica then computes it quickly. Now that I think about things, I would prefer a minimal solution with j=0 and f=0 (or j=0 and i=0), but Mathematica can't compute this in reasonable time...do you see one? Sorry, I guess I underspecified my problem originally.
Aug 22 · comment on "Solving a System of Quadratic Equations": I wasn't aware of the Groebner basis method. But, when I try this approach in Mathematica (version 6), the software is still thinking after hours, so it's been of no use to me. Are you using your own home-built software? I'll wait to close this question until others have had a chance to answer about minimization approaches, but I do believe your method is best in cases where a zero exists.
Aug 22 · comment on "Solving a System of Quadratic Equations": Interesting idea, but I would guess any improvement from the convexity is more than offset by the increase in functional complexity (from the inverse matrix). Still, I hope you try my example (or a similar one with higher constants so that there is no zero and the problem is truly a minimization problem) and prove me wrong...please keep us posted.
Aug 21 · comment on "Solving a System of Quadratic Equations": Real solutions only. @Dietrich Great, but are you able to find one solution from that basis in a reasonable time?
Aug 21 · asked "Solving a System of Quadratic Equations"
Aug 18 · comment on "Embedding of Two Objects Into Higher Dimensions With Their Sum": I agree. Though the simpler problem is NP-hard, it's still analytically solvable. This tougher problem, however, is forcing me into numerical solutions.
Posts about taniyama-shimura on A Mind for Madness
We’ve done a lot of work so far just to try to define the terms in the Taniyama-Shimura conjecture, but today we should finally make it. Our last piece of information is to write down what the
L-function of a modular form is. Since I don’t want to build a whole bunch of theory needed to define the special class of modular forms we’ll be considering, I’ll just say that we actually need to
restrict our definition of “modular form” to “normalized cuspidal Hecke eigenform”. I’ll point out exactly why we need this, but it doesn’t change anything in the conjecture except that every
elliptic curve actually corresponds to an even nicer type of modular form.
Let ${f\in S_k(\Gamma_0(N))}$ be a weight ${k}$ cusp form with ${q}$-expansion ${\displaystyle f=\sum_{n=1}^\infty a_n q^n}$. Since this is an analytic function on the disk, we have the tools and
theorems of complex analysis at our disposal. We can perform something called the Mellin transform. It is just a standard integral transform given by the formula $\displaystyle {\Lambda (s) = \int_0^
\infty f(it)t^s\frac{dt}{t}}$.
After some computation you find that this transformed function is a product of really nice functions. We get $\displaystyle {\Lambda (s)=\frac{N^{s/2}}{(2\pi)^s}\Gamma(s)L(f,s)}$, where ${\Gamma(s)}$
is the usual Gamma function. Now if you actually went through and worked this out you would find out that ${L(f,s)}$ has a really nice form in terms of the Fourier coefficients. The so-called
L-series associated to the Mellin transform is given by
$\displaystyle L(f,s)=\sum_{n=1}^\infty \frac{a_n}{n^s}$.
If your eyes glazed over for the Mellin transform talk, then just think of the L-function of the modular form as taking all of its Fourier coefficients and throwing them in the numerator of this
series to make a new function. A quick remark is that if all the ${a_n}$ are ${1}$ (this won’t happen) we recover the Riemann zeta function. Thus you could think of the L-function we get as some sort
of generalization of the zeta function. If you’ve been through some elementary number theory you have probably even seen a proof that $\displaystyle {\sum_{n=1}^\infty \frac{1}{n^s}=\prod \frac{1}
{1-p^{-s}}}$ where the product is over all primes called an Euler product. Now in general if I hand you a sequence of integers ${a_n}$ that has some reasonable growth condition, then ${\sum_{n=1}^\
infty \frac{a_n}{n^s}}$ will be a nice convergent series, probably with an analytic continuation to the plane. The tricky part is to figure out what types of sequences allow this Euler product expansion.
This is where we have to use that ${f}$ was of this special form. In the theory of modular forms there is something called Atkin-Lehner theory which tells us that the ${a_n}$ for a cusp form of this
special type actually satisfy some nice relations such as ${a_{nm}=a_na_m}$ when ${(m,n)=1}$. These relations are precisely the ones needed to conclude that there is a nice Euler product expansion
and it is given by
$\displaystyle L(f,s)=\prod_{p|N}(*)\prod_{p\nmid N} \frac{1}{1-a_pp^{-s}+p^{k-1-2s}}.$
We say that a variety is modular if ${L(X,s)}$ coincides with ${L(f,s)}$ up to finitely many primes for some ${f\in S_k(\Gamma_0(N))}$. We’ve been ignoring the technicalities of dealing with the
primes of bad reduction and the primes that divide the level (a surprisingly hard problem to determine when these are the same set!), but now we see that for the definition of a variety being modular
this doesn’t even matter. There are other subtleties in defining all of this for when the variety does not have ${2}$-dimensional middle cohomology, but again for our immediate purposes you can trust
that people have made the suitable adjustments.
Now we see the truly shocking results of Taniyama-Shimura. We take this incredibly symmetric analytic object (so symmetric it is surprising any exist at all) and we take this completely algebraic
variety defined over ${\mathbb{Q}}$ and the conjecture claims that we can always find one of these symmetric things that match up with this action on the cohomology. Wiles and Taylor are often
credited with proving it in 1994, but the full conjecture wasn’t actually proved until 2001 by Breuil, Conrad, Diamond, and Taylor. This was the elliptic curve case.
Just last year Gouvea and Yui proved that all rigid Calabi-Yau threefolds are modular. It is a conjecture that all Calabi-Yau varieties over ${\mathbb{Q}}$ should be modular, so this includes K3
surfaces. It might seem weird that K3 surfaces haven’t been proven but the threefold case has been. This just has to do with those technicalities of what to do if the middle cohomology is bigger than
2-dimensional, which it always is. There you have it. The famous Taniyama-Shimura conjecture which led to a proof of Fermat’s Last Theorem.
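To make this concrete, here is a quick numerical check (a sketch added for illustration, not part of the original argument) of the classical conductor-32 example: the elliptic curve ${E: y^2=x^3-x}$ should match the weight ${2}$, level ${32}$ newform ${\eta(4\tau)^2\eta(8\tau)^2}$, i.e. ${a_p = p+1-\#E(\mathbb{F}_p)}$ should equal the ${p}$-th Fourier coefficient of the eta product:

```python
# Sanity check of modularity for E: y^2 = x^3 - x (conductor 32), whose
# newform is the eta product q * prod (1 - q^{4n})^2 (1 - q^{8n})^2.
# The truncation bound 60 is arbitrary.

def a_p_from_curve(p):
    count = 1                                  # the point at infinity
    for x in range(p):
        rhs = (x * x * x - x) % p
        count += sum(1 for y in range(p) if (y * y) % p == rhs)
    return p + 1 - count

def eta_product(n_max):
    coeffs = [0] * (n_max + 1)
    coeffs[1] = 1                              # the leading factor q
    for step in (4, 8):
        for m in range(step, n_max + 1, step):
            for _ in range(2):                 # each factor is squared
                # multiply the series by (1 - q^m), descending to stay in place
                for i in range(n_max - m, -1, -1):
                    coeffs[i + m] -= coeffs[i]
    return coeffs

a = eta_product(60)
for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59):
    assert a_p_from_curve(p) == a[p], p
print("a_p matches the eta-product coefficients for all primes below 60")
```

Every prime below 60 checks out (e.g. ${a_5=-2}$, ${a_{13}=6}$, ${a_{29}=-10}$ on both sides), which is exactly the coincidence of L-functions that modularity asserts in this small example.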
by hilbertthm90 2 Comments
Taniyama-Shimura 3: L-Series
For today, we assume our ${d}$-dimensional variety ${X/\mathbb{Q}}$ has the property that its middle etale cohomology is 2-dimensional. It won’t hurt if you want to just think that ${X}$ is an
elliptic curve. We will first define the L-series via the Galois representation that we constructed last time. Fix ${p}$ a prime not equal to ${\ell}$ and of good reduction for ${X}$. Let ${M=\
overline{\mathbb{Q}}^{\ker \rho_X}}$. By definition the representation factors through ${{Gal} (M/\mathbb{Q})}$. For ${\frak{p}}$ a prime lying over ${p}$ the decomposition group ${D_{\frak{p}}}$
surjects onto ${{Gal} (\overline{\mathbf{F}}_p/\mathbf{F}_p)}$ with kernel ${I_{\frak{p}}}$. One of the subtleties we’ll jump over to save time is that ${\rho_X}$ acts trivially on ${I_{\frak{p}}}$
(it follows from the good reduction assumption), so we can lift the generator of ${{Gal} (\overline{\mathbf{F}}_p/\mathbf{F}_p)}$ to get a conjugacy class ${{Frob}_p}$ whose image under ${\rho_X}$
has well-defined trace and determinant.
We define
$\displaystyle L(X,s) =(*)\prod_{p \ good} \frac{1}{1-{tr}(\rho_X({Frob}_p))p^{-s}+\det(\rho_X({Frob}_p))p^{-2s}}$
where ${(*)}$ is a product of terms at the bad primes. Note that since this is a two-dimensional representation, basic linear algebra tells us that the product is over the simpler expression ${\det(1-\rho_X({Frob}_p)p^{-s})^{-1}}$.
If you don’t like all this Galois representation stuff, we can describe this L-series without reference to the Galois representation at all. In order to ease notation we will denote the reduction of
${X}$ at a fixed good prime ${p}$ by ${Y:=X_{\mathbf{F}_p}}$ and base changing to the algebraic closure ${\overline{Y}:=X_{\overline{\mathbf{F}_p}}}$. To simplify notation let ${k=\mathbf{F}_p}$.
We have several natural Frobenius actions on ${\overline{Y}}$. The first we will call the absolute Frobenius which we will denote ${F_{ab}:\overline{Y}\rightarrow \overline{Y}}$. This is the identity
on the topological space and the ${p}$-th power map on the structure sheaf. On affine patches ${{Spec} A\rightarrow {Spec} \overline{k}}$ the map is the one induced by ${a\mapsto a^p}$ on ${A}$. We
can check directly that the map on topological spaces is the identity. For any prime ideal ${\frak{q}\in{Spec} A}$ the contraction ${\frak{q}^c=\{a\in A: a^p\in \frak{q}\}=\{a\in A: a\in\frak{q}\}=\
frak{q}}$ by the property of ${\frak{q}}$ being prime. This map translates in the language of schemes to ${(id, F): (\overline{Y}, \mathcal{O}_{\overline{Y}})\rightarrow (\overline{Y}, \mathcal{O}_{\
overline{Y}})}$ where ${F}$ is raising sections of the sheaf to the ${p}$-th power.
Note that the absolute Frobenius is not a map of ${\overline{Y}}$ over ${\overline{k}}$. The map is also not the pullback despite making the same commutative diagram.
Let the standard structure map be ${\phi: \overline{Y}\rightarrow {Spec}\overline{k}}$ and define ${\overline{Y}^{(p)}=\overline{Y}\otimes\overline{k}}$ to be the pullback of Frobenius acting on the
base field. Since we have a commutative diagram we get by the universal property of a pullback diagram some map ${F_r: \overline{Y}\rightarrow \overline{Y}^{(p)}}$ called the relative Frobenius. We
define the arithmetic Frobenius to be the projection on the first factor ${F_{ar}:\overline{Y}^{(p)}\rightarrow \overline{Y}}$. A nice exercise to see if you understand these would be to write down a
big commuting diagram that relates all these. Due to wordpress constraints, I won’t actually do this.
Instead, we’ll do an example. Let ${Y={Spec} k[t]}$ (recall that ${k=\mathbf{F}_p}$). This means that ${\overline{Y}={Spec} \overline{k} [t]={Spec} (k[t]\otimes_k \overline{k})}$. The descriptions in
terms of the ring homomorphism that induces the map on the spectra are as follows. The absolute Frobenius is still just ${f\mapsto f^p}$. The relative Frobenius is ${F_{ab}\otimes id}$. Since the
absolute raises elements of ${k[t]}$ to the ${p}$, everything in ${k}$ is fixed by this map, and on ${\overline{k}}$ it is defined to be fixed. This means that the relative Frobenius only alters the
${t}$ by ${t\mapsto t^p}$. This is sometimes referred to as “raising coordinates to the ${p}$-th power”. The arithmetic Frobenius does nothing to the ${k[t]}$ part, but raises the ${\overline{k}}$
coefficients to the ${p}$, so ${\sum a_nt^n\mapsto \sum a_n^p t^n}$. Likewise, the geometric Frobenius takes the ${p}$-th root of the coefficients.
Straightforward (but non-trivial) computations also give that the map that the absolute Frobenius induces on the étale site is trivial. If we look at our diagram we see that ${F_{ab}=F_{ar}\circ
F_r}$. Since the induced map on cohomology is contravariant this gives ${F^*_r\circ F^*_{ar}=id}$. This means that on cohomology ${F^*_r=(F^*_{ar})^{-1}=F^*_{ge}}$ by definition of the geometric
Now the smooth, proper base change theorem for \’{e}tale cohomology tells us that ${H^d_{\textit{\'{e}t}}(\overline{Y}, \mathbb{Q}_\ell)\simeq H^d_{\textit{\'{e}t}}(\overline{X}, \mathbb{Q}_\ell)}$
which is two-dimensional. Since the ${F_r}$ action here is a linear operator on a vector space it makes sense to take the trace and determinant. We can define the L-series without use of the Galois
representation as:
$\displaystyle L(X,s)=(*)\prod_{p \ good} \frac{1}{1-{tr}(F_r^*)p^{-s}+\det(F_r^*)p^{-2s}}$
where again the ${(*)}$ is a product of terms involving primes of bad reduction. Since there are only finitely many this is irrelevant for the definition of modularity. Of course we could have
defined this without all the different Frobenius actions (we only used the relative one), but now we can get to the punchline. These two L-series are actually the same.
We just sketched above that the action of ${F_r}$ and ${F_{ge}}$ were the same on the \’{e}tale site. But ${F_{ge}=1\times {Frob}_p^{-1}}$ where ${{Frob}_p}$ is the canonical generator of ${{Gal} (\
overline{k}/k)}$. We have a surjection ${{Gal} (\overline{\mathbb{Q}}/\mathbb{Q})\rightarrow {Gal} (\overline{k}/k)}$ and if we consider ${{Frob}_p}$ a lift of this element, then by the functoriality
and equivariant isomorphisms above we get that ${{tr}(\rho_X({Frob}_p))={tr}(\rho_X'({Frob}_p^{-1}))={tr}(F_r^*)}$. The determinant term turns out to always be ${p^3}$ since it can be checked to be
the third power of the ${\ell}$-adic cyclotomic character in both cases. Thus the two L-series are the same. This also tells us the representation is odd.
Note that they appear to be off by an inverse, but we actually took the contragredient representation of the one that acts on ${H^d_{\textit{\'{e}t}}(\overline{X}, \mathbb{Q}_\ell)}$, so the inverse
corrects for this and they are actually the same.
This got a little technical at parts, so the one thing to take away from this post is that to any ${X/\mathbb{Q}}$ with two-dimensional middle cohomology we can produce some function which is just
defined in terms of the trace and determinant of certain operators on the cohomology. This is called the L-series and will be crucial in the definition of modularity.
by hilbertthm90 2 Comments
Taniyama-Shimura 2: Galois Representations
Fix some proper variety ${X/\mathbb{Q}}$. Our goal today will seem very strange, but it is to explain how to get a continuous representation of the absolute Galois group of ${\mathbb{Q}}$ from this
data. I’m going to assume familiarity with etale cohomology, since describing Taniyama-Shimura is already going to take a bit of work. To avoid excessive notation, all cohomology in this post
(including the higher direct image functors) are done on the etale site.
For those that are intimately familiar with etale cohomology, we’ll do the quick way first. I’ll describe a more hands on approach afterwards. Let ${\pi: X\rightarrow \mathrm{Spec} \mathbb{Q}}$ be
the structure morphism. Fix an algebraic closure ${v: \mathrm{Spec} \overline{\mathbb{Q}}\rightarrow \mathrm{Spec}\mathbb{Q}}$ (i.e. a geometric point of the base). We’ll denote the base change of $
{X}$ with respect to this morphism ${\overline{X}}$. Suppose the dimension of ${X}$ is ${n}$.
Let ${\ell}$ be a prime. We consider the constructible sheaf ${R^n\pi_*(\mathbb{Z}/\ell^m)}$. Now we have an equivalence of categories between these sheaves and continuous ${G=Gal(\overline{\mathbb
{Q}}/\mathbb{Q})}$-modules by taking the stalk at our geometric point. Thus ${R^n\pi_*(\mathbb{Z}/\ell^m)_v\simeq H^n(\overline{X}, \mathbb{Z}/\ell^m)}$ has a continuous action of ${G}$ on it, and
hence we get a continuous representation ${\rho_{X,m}: G\rightarrow Aut(H^n(\overline{X}, \mathbb{Z}/\ell^m)\simeq GL_d(\mathbb{Z}/\ell^m)}$. These all form a compatible family and hence we can take
the inverse limit and tensor with ${\mathbb{Q}_\ell}$ to get what is known as an ${\ell}$-adic Galois representation ${\rho_X: G\rightarrow GL_d(\mathbb{Q}_\ell)}$. For a technicality that will come
up later, we will abuse notation and now relabel ${\rho_X}$ to be the dual (or contragredient) representation.
If you aren’t comfortable with etale cohomology, then you can just use it as a black box cohomology theory to get the same thing as follows. First take the base change ${\overline{X}\rightarrow \
mathrm{Spec} \overline{\mathbb{Q}}}$. Given any element of the Galois group ${\sigma \in G}$ we get an automorphism of ${\overline{\mathbb{Q}}}$. Thus we can fill in the diagram:
${\begin{matrix} \overline{X} & \stackrel{\sigma}{\rightarrow} & \overline{X} \\ \downarrow & & \downarrow \\ \mathrm{Spec} \overline{\mathbb{Q}} & \stackrel{\sigma}{\rightarrow} & \mathrm{Spec} \
overline{\mathbb{Q}} \end{matrix}}$
Since ${\sigma}$ was an automorphism, then only thing you have to believe about cohomology is that you then get an isomorphism via pullback ${H^n(\overline{X}, \mathbb{Q}_\ell)\stackrel{\sigma^*}{\
rightarrow} H^n(\overline{X}, \mathbb{Q}_\ell)}$. Thus we get a continuous group homomorphism ${G\rightarrow Aut(H^n(\overline{X}, \mathbb{Q}_\ell))}$ as before. Again, we’ll actually use the dual of
this in the future.
To return to an elliptic curve ${E}$ over ${\mathbb{Q}}$, we know that these are just tori, and hence the first Betti number is ${2}$. In this case we get that our Galois representation ${\rho_E: G\
rightarrow GL_2(\mathbb{Q}_\ell)}$. If you’ve seen Taniyama-Shimura explained before, this should look familiar. This turns out to be exactly the same representation as the one you get from the
Galois action on the Tate module. But the definition of the Tate module requires a group law, and hence the ability to get such a representation doesn’t generalize to all varieties in the way that
using middle $\ell$-adic cohomology does. This is the standard modern approach to defining modularity for other types of varieties.
Taniyama-Shimura 1
It’s time to return to plan A. I started this year by saying I’d post on some fundamental ideas in arithmetic geometry. The local system thing is hard to get motivated about, since the way I was
going to use it in my research seems irrelevant at the moment. My other option was to blog some stuff about class field theory, since there is a reading group on the topic that I currently belong to.
The first goal of this new series is to understand the statement of the famous Taniyama-Shimura conjecture that led to the proof of Fermat’s Last Theorem. A lot of people can probably mumble
something about the conjecture if they have any experience in algebraic/arithmetic geometry or any of the number theory type fields, but most people probably can't say anything precise about what the
conjecture says (I’ll continue to call it a “conjecture” even though it has been proved).
The statement of the conjecture is that every elliptic curve over ${\mathbb{Q}}$ is modular. Simple enough, but to unravel what it means to be modular we are going to have to take many posts just for
the definition. If you’ve seen this explained before, it might still be interesting to read this series because I’m going to set up the machinery in a slightly different (but equivalent) way so that
it will generalize to varieties other than elliptic curves in the future.
We’ll first define modular forms. A modular form of weight ${k}$ and level ${N}$ is an element of the vector space ${M_k(\Gamma_0(N))}$ which consists of holomorphic functions on the upper half plane
${f:\mathcal{H}\rightarrow \mathbf{C}}$ satisfying the additional transformation property that
$\displaystyle \displaystyle f \left(\frac{az+b}{cz+d}\right)=(cz+d)^kf(z)$
for all matrices ${\left(\begin{matrix} a & b \\ c & d \end{matrix}\right)\in SL_2(\mathbf{Z})}$ such that ${c\equiv 0 \mod N}$ (plus something else that we’ll get to shortly).
This is an analytic object if there ever was one. If this is the first time you’ve seen this, then the thing to pay attention to is that these depend on a choice of weight, ${k}$, and level, ${N}$.
To get a feel for the level, note that it becomes "easier" to satisfy this transformation law as the level increases, because the set of matrices we have to check is smaller. For example, when ${N=1}$ this says our ${f}$ has to behave nicely under every single linear fractional transformation that sends the upper half plane to the upper half plane. One might reasonably guess that ${0}$ is the
only holomorphic function with this property. More on this later. The weight is a little harder to get a feel for.
The map ${z\mapsto e^{2\pi i z}}$ is a holomorphic map from the upper half plane onto the punctured unit disk. Note that ${e^{2\pi i z}\rightarrow 0}$ as ${z}$ tends to infinity along the imaginary
axis. We can compose with this map and consider our modular form to be a holomorphic function on the punctured disk. This is well-defined because if ${e^{2\pi i z}=e^{2\pi i w}}$, then ${z}$ and ${w}$ differ by an integer and ${f(z+n)=f\left(\left(\begin{matrix}1 & n \\ 0 & 1 \end{matrix}\right)\cdot z\right)=(1)^kf(z)=f(z)}$.
We say ${f}$ extends to be holomorphic at infinity if there is a holomorphic extension to the whole disk. We require modular forms to have this property. Thus a modular form has a Fourier expansion
called a ${q}$-expansion denoted
$\displaystyle \displaystyle f=\sum_{n=0}^\infty a_nq^n \ \text{where} \ q=e^{2\pi i z}$
(note that a Fourier series in general involves negative powers, but these would give a pole at infinity). The cusp forms are the subspace denoted $S_k(\Gamma_0(N))$ of the modular forms that vanish
at all cusps. To define cusp, just think of the extended upper half plane as ${\mathbf{H}\cup \mathbb{P}^1_{\mathbb{Q}}}$. We stick all the rational numbers along the real line in and also throw in a
point at infinity. In practice, we only have to check holomorphic extension across finitely many of these cusps because, by the transformation law, we only need to pick one cusp in each equivalence
class under the action of the matrix group. When for instance $N=1$ again, all we have to check is that $f$ vanishes at infinity, or upon composing to the disk we get that $a_0=0$.
Do any of these things exist? Well, as we’ve already noted, for small N it seems very hard to satisfy these properties. In fact, our guess was right, $\dim_\mathbb{C} S_k(\Gamma_0(1))=0$ for $1<k<12$
. So until we bump the weight up to 12, we actually only have the 0 function satisfying our properties. For weight 12, there is only one up to scalar multiple. This doesn't look good, but actually
when we allow the level to grow we get a lot (even of low weight). But before next time, just ponder how severe the symmetry condition we are imposing is. Somehow every elliptic curve is closely
related to one of these which is why the result is so surprising.
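For weight 12 and level 1 the essentially unique cusp form is the discriminant form Δ = q∏_{n≥1}(1−q^n)^{24}. Here is a minimal sketch (mine, not the post's) computing its first few q-expansion coefficients, the Ramanujan τ-values:

```python
# Compute the q-expansion of Delta = q * prod_{n>=1} (1 - q^n)^24 up to q^N
# by truncated polynomial multiplication; coeffs[k] is the coefficient of q^k.

N = 10
coeffs = [0] * (N + 1)
coeffs[1] = 1  # start from the leading factor q
for n in range(1, N + 1):
    for _ in range(24):
        # multiply in place by (1 - q^n), truncating above q^N;
        # iterating k downward uses the pre-update value of coeffs[k - n]
        for k in range(N, n - 1, -1):
            coeffs[k] -= coeffs[k - n]

print(coeffs[1:])
# [1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
```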
Now we have our basic analytic object of the conjecture. The next several posts will go back to the algebraic side of things. Depending on how much detail I decide to give to define the terms in the
Taniyama-Shimura conjecture could take anywhere from 4 to 8 or so posts, just to give you an idea of how long you have to hold out for the statement.
2012 Blogging
It’s the start of a new year, so I’m going to start up something new here. My research interests have recently tended towards arithmetic geometry, so my plan for 2012 is to write some basics of
algebraic number theory and arithmetic geometry that I use a lot. I’ll try to avoid redoing some of the stuff that has already been done at Climbing Mount Bourbaki.
I’d like to explain what modular forms are and what some of their basic properties are. I may detour a little into Galois representations at some point. I definitely want to talk about L-functions of
varieties and what it means for a variety to be modular. This may lead to some discussion about Fermat’s Last Theorem and the Taniyama-Shimura conjecture. Scattered throughout I’ll probably have to
cover some more classical algebraic number theory.
If you’re interested in related topics just post a comment and maybe I’ll get to it. Maybe in a few weeks I’ll scratch this whole idea and do something else. Who knows? | {"url":"http://hilbertthm90.wordpress.com/tag/taniyama-shimura/","timestamp":"2014-04-20T00:58:55Z","content_type":null,"content_length":"106824","record_id":"<urn:uuid:146532c9-ccac-4def-a6e7-cafee65f1e7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compensating variation

Compensating variation (CV) is a measure of utility change introduced by John Hicks (1939). 'Compensating variation' refers to the amount of additional money an agent would need to reach its initial utility after a change in prices, a change in product quality, or the introduction of new products. Compensating variation can be used to find the effect of a price change on an agent's net welfare. CV reflects the new prices and the old utility level. It is often written using an expenditure function, e(p,u):
$CV = e\left(p_1, u_1\right) - e\left(p_1, u_0\right)$
$= w - e\left(p_1, u_0\right)$
$= e\left(p_0, u_0\right) - e\left(p_1, u_0\right)$
where $w$ is the wealth level, $p_0$ and $p_1$ are the old and new prices respectively, and $u_0$ and $u_1$ are the old and new utility levels respectively. The first equation can be interpreted as
saying that, under the new price regime, the consumer would accept CV in exchange for allowing the change to occur.
More intuitively, the equation can be written using the value function, v(p,w):

$v\left(p_1, w - CV\right) = v\left(p_0, w\right) = u_0$

one of the equivalent definitions of the CV.
Compensating variation is the metric behind Kaldor-Hicks efficiency; if the winners from a particular policy change can compensate the losers it is Kaldor-Hicks efficient, even if the compensation is
not made.
Equivalent variation (EV) is a closely related measure that uses old prices and the new utility level. It measures the amount of money a consumer would pay to avoid a price change, before it happens.
When the good is neither normal nor inferior, or when there are no income effects for the good, then EV (equivalent variation) = CV (compensating variation) = CS (consumer surplus).
Example of Adding a New Product
Assume a log-linear demand function for a product given by $x\left(p,y\right)=Ap^{\alpha}y^{\delta}$.
The compensating variation resulting from the introduction of this new product is
$CV = \left[\frac{(1-\delta)\,y^{-\delta}}{1+\alpha}\left(p_{n_0} x_0 - p_{n_1} x_1\right) + y^{1-\delta}\right]^{1/(1-\delta)} - y.$
Assuming no income effect ($\delta = 0$) and no sales of the product prior to introduction ($p_{n_0}x_0 = 0$), this simplifies to

$CV = -\frac{p_{n_1} x_1}{1+\alpha}.$
For no income effect but previous products on the market at a different price,

$CV = -\frac{p_1 x_1 - p_0 x_0}{1+\alpha}.$
In the case of online book sellers, Brynjolfsson, Hu, and Smith find that the compensating variation is quite large and mostly the result of a wider assortment of books being offered.
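A small numeric sketch of the formulas above (not from the entry; all parameter values below are illustrative only):

```python
# Log-linear CV formula: alpha, delta are the demand parameters, y is income,
# and (p0, x0), (p1, x1) are prices/quantities before and after the change.

def compensating_variation(alpha, delta, y, p0, x0, p1, x1):
    if delta == 0:  # no income effect: the simplified case in the text
        return -(p1 * x1 - p0 * x0) / (1 + alpha)
    inner = ((1 - delta) * y ** (-delta) / (1 + alpha) * (p0 * x0 - p1 * x1)
             + y ** (1 - delta))
    return inner ** (1 / (1 - delta)) - y

# Existing product repriced, no income effect: CV = -(p1*x1 - p0*x0)/(1+alpha)
print(compensating_variation(alpha=-2.5, delta=0, y=50000,
                             p0=15, x0=2, p1=10, x1=4))
```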
See also

Equivalent variation (EV) is a closely related measure of welfare change.
Hicks, J.R. Value and capital: An inquiry into some fundamental principles of economic theory Oxford: Clarendon Press, 1939
Brynjolfsson, E., Y. Hu, and M. Smith. "Consumer Surplus in the Digital Economy: Estimating the Value of Increased Product Variety at Online Booksellers," Management Science, Vol. 49, No. 11, November 2003, pp. 1580-1596.
Surface integral computation
January 10th 2012, 10:21 AM
Surface integral computation
Okay, so this is a step in the proof to Liouville's theorem, and it comes from Fritz John's Partial Differential Equations 4th ed (p109, sec. 4.3). The problem is, it requires computation of a
surface integral (I think), and I don't know how to do that.
Here's the setup:
Let $u$ be harmonic in $\mathbb{R}^n$ with $\xi$ in the ball of radius $a>0$ about the origin. Let $\omega_n=2\pi^{n/2}/\Gamma(n/2)$ be the surface area of the unit sphere $S^{n-1}\subset\mathbb
{R}^n$ (i.e. $\omega_2=2\pi$, $\omega_3=4\pi$, and so on.)
We are also given the following identity:
$u_{\xi_i}(0)=\frac{n}{\omega_n a^{n+1}}\int_{|x|=a}x_i u(x)\;dS_x$
Frankly, I'm not sure what $dS_x$ is supposed to denote. I assume it's the "surface integral" with respect to x, but then I'm not sure what the definition of a surface integral is, much less how
to evaluate them routinely.
Anyway, we have to use the above information to prove the following estimate:

$|u_{\xi_i}(0)|\leq \frac{n}{a}\max_{|x|=a}|u(x)|$

Any ideas on how to do this? Any help would be much appreciated!
January 11th 2012, 09:28 AM
Re: Surface integral computation
I'm not sure what the definition of a surface integral is, much less how to evaluate them routinely.
Maybe you should revisit your notes; it is quite helpful to be able to deal with them.
Now for the estimate on the integral. Let $\omega_n a^{n-1}=\int_{|x|=a}dS_x$ be the area of the sphere of radius $a$ in $\mathbb{R}^n$.
Use the Cauchy–Schwarz inequality and basic properties of the integral to obtain
$|u_{\xi_i}(0)|\leq\frac{n}{\omega_n a^{n+1}}\left(\int_{|x|=a}\sum|x_i|^2dS_x\right)^{ 1/2}\left(\int_{|x|=a}|u|^2dS_x\right)^{1/2}$
$|u_{\xi_i}(0)|\leq\frac{n}{\omega_n a^{n+1}}\left(a\left[\omega_n a^{n-1}\right]^{1/2}\right)\left((\sup|u|)\left[\omega_n a^{n-1}\right]^{1/2}\right)$
from where the result follows. | {"url":"http://mathhelpforum.com/differential-equations/195102-surface-integral-computation-print.html","timestamp":"2014-04-17T16:21:32Z","content_type":null,"content_length":"8407","record_id":"<urn:uuid:785d28d7-0af9-4310-9ae8-d45387c58118>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
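A quick numerical check of the stated identity in the plane (n = 2, ω₂ = 2π) with the harmonic function u(x, y) = x, for which u_{ξ₁}(0) = 1 — a sketch, not part of the original thread:

```python
import math

# For n = 2 the identity reads u_x(0) = (2 / (2*pi*a^3)) * \int_{|x|=a} x*u dS.
# With u(x, y) = x the integrand is x^2; the exact value of the RHS is 1.
a = 1.7
M = 100_000
total = 0.0
for k in range(M):
    t = 2 * math.pi * k / M
    x = a * math.cos(t)
    total += x * x          # x_1 * u(x) on the circle |x| = a
total *= 2 * math.pi * a / M  # arc-length element ds = a dt

print(2 / (2 * math.pi * a ** 3) * total)  # ~1.0
```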
Summary: Modules, comodules and cotensor products
over Frobenius algebras
Lowell Abrams
Department of Mathematics
Rutgers University
New Brunswick, NJ 08903
We characterize noncommutative Frobenius algebras A in terms of the existence of a coproduct which is a map of left A^e-modules. We
show that the category of right (left) comodules over A, relative to
this coproduct, is isomorphic to the category of right (left) modules.
This isomorphism enables a reformulation of the cotensor product of
Eilenberg and Moore as a functor of modules rather than comodules.
We prove that the cotensor product M □ N of a right A-module M and a left A-module N is isomorphic to the vector space of homomorphisms from a particular left A^e-module D to N ⊗ M, viewed as a left A^e-module. Some properties of D are described. Finally, we show that when A is a symmetric algebra, the cotensor product M □ N and
Arvada, CO ACT Tutor
Find an Arvada, CO ACT Tutor
...I worked as an Electrical Engineer for 30 years. I am very knowledgeable in digital design and assembly-level programming. I am considered an expert in the field of digital design and Boolean
13 Subjects: including ACT Math, calculus, physics, algebra 2
...I've used this software for over eight years, under the supervision of our tax accountant. I obtained a master’s degree in Chemical Engineering from Colorado State University. I worked five
years as a process engineer in the mining industry and five years as an R&D engineer in the medical device industry.
24 Subjects: including ACT Math, chemistry, reading, writing
...I have education training in terms of lesson planning, test and study strategies. I love working with all ages and I am very patient and easy going with all my students. If you have any
questions please feel free to ask me!
31 Subjects: including ACT Math, reading, biology, algebra 2
...Mathematics, University of Louisiana, Lafayette, LA B.S. Physics, University of Louisiana, Lafayette, LA M.S. Paleoclimatology, Georgia Institute of Technology, Atlanta, GA Ph.D.
16 Subjects: including ACT Math, chemistry, calculus, physics
...To tutor Matlab, I try to gather information on where the student's skill level is currently and build upon the basics to get the student up to speed. This includes demonstration of proper
coding techniques, higher level functions to simplify a task, and use of the debugging tool in coding. I g...
20 Subjects: including ACT Math, chemistry, physics, calculus
Matrix Transformation
June 1st 2011, 10:38 AM #1
Apr 2011
Matrix Transformation
Hey, our lecture course on matrices has finished and I have no idea how to solve this problem, could someone point me in the right direction?
'Find the transformation that takes:
3x^2 - 2y^2 - z^2 -4xy +12yz +8xz
Into diagonal form.
Thanks a lot
Last edited by ChemistryJack; June 1st 2011 at 10:38 AM. Reason: I need to make it easier to read
This is a quadratic form and can be written as
$\begin{bmatrix} x & y & z \end{bmatrix} \begin{bmatrix} 3 & -2 & 4 \\ -2 & -2 & 6 \\ 4 & 6 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$
Since this is a real symmetric matrix it can be diagonalized. Can you finish from here?
I find the eigenvalues and arrange them diagonally? Is there a good way of cancelling the determinant on this one? Also, do you mind explaining to me how you derived your matrix, I see how it
works but I don't think I'd be able to come up with it by myself at the moment.
Thanks a lot
After you subtract lambda down the diagonal, if you use these row operations you can get two zeros in the matrix.
$2R_2+R_3 \to R_3$
$-\frac{2}{3}R_2+R_1 \to R_1$
That will made taking the determinant a bit easier.
for the next part, notice that the matrix is symmetric, and it always can be chosen to be
Also note that
$a_{(1,2)}+a_{(2,1)}=$ the coefficient on the xy term and
$a_{(1,3)}+a_{(3,1)}=$ the coefficient on the xz term
$a_{(2,3)}+a_{(3,2)}=$ the coefficient on the yz term
If you multiply the matrix out you will see why this is true. Also note this is not unique, but we can always choose to write it as a real symmetric matrix.
This choice is important because real symmetric matrices are always orthogonally diagonalizable.
There are, actually, many different ways of setting up a matrix that will give you
$3x^2 - 2y^2 - z^2 -4xy +12yz +8xz$
What TheEmptySet did was take the simplest: the numbers on the diagonal are the coefficients of $x^2$, $y^2$, and $z^2$. The off-diagonal numbers are half the coefficients of the mixed terms.
If you go ahead and do the multiplication
$\begin{bmatrix}x & y & z\end{bmatrix}\begin{bmatrix}3 & -2 & 4 \\ -2 & -2 & 6 \\ 4 & 6 & -1\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}$
you will see why that works.
Topic: Probabilities not in [0,1]?
Replies: 8 Last Post: Mar 12, 2013 10:56 PM
Probabilities not in [0,1]?
Posted: Mar 10, 2013 2:36 PM
Is there a theory of probability in which probabilities do not lie in the
real interval [0,1]? More specifically, is there one in which the "space"
of probabilities may have points x, y, z with x < y, x < z but y and z not
necessarily comparable?
Using Opera's revolutionary email client: http://www.opera.com/mail/
Posts by
Total # Posts: 6
Please help! Does this question have a formula? If so, how can I answer this question without the formula? Is it possible? Hockey teams receive 2 points when they win and 1 point when they tie. The team won a championship with 50 points. They won 10 more games than they tied. H...
can someone please help me. The perimeter of a rectangle is 80m. The length is 7m more than twice the width. Find the dimensions. I would appreciate help with the steps so I can practice it for my next problem. Thanks.
In an automatic clothes drier, a hollow cylinder moves the clothes on a vertical circle (radius r = 0.39 m), as the drawing shows. The appliance is designed so that the clothes tumble gently as they
dry. This means that when a piece of clothing reaches an angle of above the ho...
Suppose the water near the top of Niagara Falls has a horizontal speed of 2.3 m/s just before it cascades over the edge of the falls. At what vertical distance below the edge does the velocity vector
of the water point downward at a 55° angle below the horizontal? I'm ...
A golfer imparts a speed of 32.9 m/s to a ball, and it travels the maximum possible distance before landing on the green. The tee and the green are at the same elevation. (a) How much time does the
ball spend in the air? s (b) What is the longest "hole in one" that t...
A jetliner is moving at a speed of 220 m/s. The vertical component of the plane's velocity is 36.0 m/s. Determine the magnitude of the horizontal component of the plane's velocity. | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Hawkins","timestamp":"2014-04-21T08:29:19Z","content_type":null,"content_length":"7694","record_id":"<urn:uuid:759f3121-1e9e-46cd-8ec8-08b0b84e2f1a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
Surface Networks: New techniques for their automated extraction, ge...
Surface Networks: New techniques for their automated extraction, generalisation and application
The nature of data structures used for the representation of terrain has a great influence on the possible applications and reliability of consequent terrain analyses.
This research demonstrates a concise review and treatment of the surface network data structure, a topological data structure for terrains. A surface network represents the terrain as a graph where
the vertices are the fundamental topographic features (also called critical points), namely the local peaks, pits, passes (saddles) and the edges are the ridges and channels that link these vertices.
Despite their obvious and widely believed potential for being a natural and intuitive representation of terrain datasets, surface networks have only attracted limited research, leaving several
unsolved aspects, which have restricted their use as viable digital terrain data structures. The research presented here presents novel techniques for the automated generation, analysis and
application of surface networks.
The research reports a novel method for generating the surface networks by extending the ridges and channels, unlike the conventional critical points-based approach. This proposed algorithm allows
incorporation of much wider variety of terrain features in the surface network data structure.
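As a rough illustration of the critical-points step the thesis contrasts itself with (a sketch under my own assumptions, not code from the thesis), each interior cell of a gridded DEM can be classified as a peak, pit, or pass by the sign pattern of height differences around its eight neighbours:

```python
import numpy as np

def classify_critical_points(dem):
    """Label interior grid cells of a 2D elevation array as peak/pit/pass."""
    ring = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    labels = {}
    rows, cols = dem.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            diffs = [dem[i+di, j+dj] - dem[i, j] for di, dj in ring]
            if all(d < 0 for d in diffs):
                labels[(i, j)] = "peak"      # higher than all neighbours
            elif all(d > 0 for d in diffs):
                labels[(i, j)] = "pit"       # lower than all neighbours
            else:
                # four or more sign alternations around the ring mark a saddle
                signs = [d > 0 for d in diffs]
                changes = sum(signs[k] != signs[k-1] for k in range(8))
                if changes >= 4:
                    labels[(i, j)] = "pass"
    return labels
```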
Several ways of characterising terrain structure based on the graph-theoretic analysis of surface networks are presented. It is shown that terrain structures display certain empirical characteristics
such as the stability of the structure under changes and relationship between hierarchies of topographic features. Previous proposals for the simplification of surface networks have been evaluated
for potential limitations and solutions have been presented including a user-defined simplification. Also methods to refine (to add more details to) a surface network have been shown.
Finally, it is shown how surface networks can be successfully used for spatial analyses e.g. optimisation of visibility index computation time, augmenting the visualisation of dynamic raster surface
animation, and generating multi-scale morphologically consistent terrain generalisations.
find the area coordinate triangle
Hello Given 3 points: (3,2) (-3,-3) (5,-1) are the vertices of a triangle. Find the area of the triangle.
Do you have to use a particular method? Try using the distance formula to find the lengths of the sides, then use Heron's formula. You could show off and find the equations of the lines that make up the triangle's sides and then integrate.(Nerd) $\int_{-3}^{3}\left[\left(\frac{5x}{6}-\frac{1}{2}\right)-\left(\frac{x}{4}-\frac{9}{4}\right)\right]dx+\int_{3}^{5}\left[\left(\frac{-3x}{2}+\frac{13}{2}\right)-\left(\frac{x}{4}-\frac{9}{4}\right)\right]dx$
i tried using ABS formula Let (x1,y1)=(3,2) ; (x2,y2)=(-3,-3) ; (x3,y3) = (5,-1) ABS(x2*y1-x1*y2+x3*y2-x2*y3+x1*y3-x3*y1)/2 =ABS[(-3)(2)-(3)(-3)+(5)(-3)-(-3)(-1)+(3)(-1)-(5)(2)]/2 =ABS
(-6+9-15-3-3-10)/2 =ABS(-28)/2 =-14 for some reason i get -14...am i suppose to get a "-" for an area??
i didn't check your calculations, but if you take the absolute value of a negative number, the answer is positive. so ABS(-28)/2 = 28/2 = 14 so provided your other calculations are correct, you would
get +14 as the answer, which makes more sense than a negative answer
oooh yea i get it now thank you very much! | {"url":"http://mathhelpforum.com/pre-calculus/21017-find-area-coordinate-triangle-print.html","timestamp":"2014-04-18T10:19:31Z","content_type":null,"content_length":"6345","record_id":"<urn:uuid:771f668e-36a1-4fb4-8711-e9fb16bdc367>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
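For reference, the same computation as a compact function (a sketch, not from the thread) — this is the shoelace formula the ABS expression implements:

```python
def shoelace_area(pts):
    """Area of a simple polygon from its vertices via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i+1) % n][1] - pts[(i+1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

print(shoelace_area([(3, 2), (-3, -3), (5, -1)]))  # 14.0
```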
San Bruno Algebra Tutor
Find a San Bruno Algebra Tutor
...In the past I have tutored undergraduate engineering students on the use of MATLAB to help solve math problems. To tutor MATLAB programming I will create a series of exercises designed to show
the student the specific tools that they will need for their desired application. I am a trained engineer, with an M.S. from UC Berkeley, and a B.S. from the University of Illinois at
15 Subjects: including algebra 1, algebra 2, Spanish, Microsoft Excel
...I love to teach, and the extra money is definitely a plus, so I am excited about the opportunity to spread my knowledge.I am a student of the game and have been playing for 20 years. My 2008
Brookwood High School Baseball Team was ranked 1st in the country in the USA Today, and finished the seas...
33 Subjects: including algebra 2, algebra 1, calculus, geometry
...My availability is: Monday-Friday: 10:00-8:00 Saturday: 10:00-3:00I have worked as an eighth grade English teacher and tutored many of my students after school. These study sessions were held
to review material for upcoming quizzes and tests. I taught my students study strategies and mnemonic devices so that they could become more independent.
14 Subjects: including algebra 1, reading, English, grammar
With a BA in Economics from the University of Chicago, and an MFA in Creative Writing from the University of Georgia, I can tutor a wide variety of subjects. I have worked with kids of all ages
through 826 Valencia, and I currently teach undergraduate writing at the University of San Francisco. I was also a Research Fellow at Stanford Law School, where I did empirical economics research.
39 Subjects: including algebra 2, algebra 1, reading, English
...During college at UC Berkeley, I took several advance physiology and anatomy classes. Since then, many of the college classes I have taught have had a heavy-emphasis on human anatomy. I have
always been interested in the human body, which is why I completed several advance human physiology classes during college.
13 Subjects: including algebra 1, chemistry, biology, anatomy | {"url":"http://www.purplemath.com/san_bruno_algebra_tutors.php","timestamp":"2014-04-16T10:15:13Z","content_type":null,"content_length":"24181","record_id":"<urn:uuid:7ba25033-313f-4ed6-8a41-721191d4709f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
Floating point numbers - what else can be done?
Avoiding errors
So what does using base-10 floating point actually mean? Well, as there effectively isn't a standard governing decimal floats we can't say exactly, but to give us an idea lets stick with the
structure of the IEEE base-2 float.
The representation for a 32-bit decimal float is of the form 10^exp * 1.n where n is an arithmetic sequence 1 * 10^-1 + 1 * 10^-2 + … + 1 * 10^-5. You may remember that the equivalent binary sequence
continued to 2<sup-23, the difference is because each decimal term takes 4-bits and five terms use 20-bits.
The IEEE representation allows 23-bits for the mantissa and, although there are three bits left over, these aren't adequate to represent a BCD number so we don't have a 10^-6 term and we can
represent six digit numbers in our decimal float. This compares to eight useful digits of a decimal number when we use 32-binary float representations so the difference in storage efficiency is
easily seen.
Six digits may seem too small to be useful - it's only five decimal places after all - but two things should be taken into consideration; firstly this is for a 32-bit float, most of the time 64-bit
floats are available which would allow for a more respectable 12-digit number.
Secondly, the format for decimal floating point numbers is probably going to change as part of the revision to the floating point standard. Currently, if you want to start using decimal arithmetic
and you're a Java programmer you're in luck, there's the java.math.BigDecimal class in the library.
If you're a C++ programmer there are plenty of libraries out there. The IBM one is in the early stages of development and contains known bugs, but as the IBM people are heavily involved in the
associated extensions to C and C++, this library is probably going to resemble what will eventually be seen in the C++ language that bit more than the others.
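For illustration (the article itself doesn't mention Python), the same trade-off can be seen with Python's built-in decimal module:

```python
# Binary floats suffer the familiar base-conversion error; decimals do not.
from decimal import Decimal, getcontext

print(0.1 + 0.2)                        # 0.30000000000000004 (binary float)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (exact in base 10)

getcontext().prec = 6                   # mimic a six-digit decimal float
print(Decimal(1) / Decimal(3))          # 0.333333 -- rounding still exists
```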
This isn't all good news, however. Firstly, decimal floating calculations are going to be mostly done in software until the new floating standard is ratified and the hardware catches up. This means
that for a while using decimal arithmetic is going to imply a performance cost and, depending on the constraints you're working to, that may or may be acceptable.
Secondly, converting to and from base-2 isn't the only source of error in floating point calculations. It is, however, the one that we've all seen a bit too often and, although we're still going to
need to understand about floats and numerical analysis to do serious things with floats, at least it won't be as easy to shoot yourself in the foot with the basics.
So let's be thankful that we no longer work in the dark days when storage was at a premium and that we can do things that were unthinkable in the past; such as using four digits to store the year and
sacrificing a few bits to make life that bit easier for the poor misunderstood man in the trench writing the code that keeps our satellites in orbit. ®
<a href="http://www.petebecker.com/js% | {"url":"http://www.theregister.co.uk/2006/09/20/floating_point_numbers_2?page=3","timestamp":"2014-04-17T08:29:23Z","content_type":null,"content_length":"49338","record_id":"<urn:uuid:b43c1c13-4c82-4a40-86b0-2964756e51ce>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
Montgomery, IL Prealgebra Tutor
Find a Montgomery, IL Prealgebra Tutor
...I have my undergraduate degree in special education from Illinois State University. I also have my teaching credentials for elementary education, and I received my Master of Arts in Education from Aurora University.
21 Subjects: including prealgebra, reading, biology, dyslexia
...Whether it is math abilities, general reasoning, or test taking abilities that need improvements, I can help you progress substantially. I work with systems of linear equations and matrices
almost every day. My PhD in physics and long experience as a researcher in theoretical physics make me well qualified for teaching linear algebra.
23 Subjects: including prealgebra, physics, calculus, statistics
...While I was abroad in Spain in fall 2012, I took all five of my college courses in Spanish, and this spring I was invited to join the Spanish honor society on campus, Sigma Delta Pi. More
generally, I do have some coursework in general biology, chemistry, and basic music theory. I've also been ...
29 Subjects: including prealgebra, Spanish, English, grammar
...Even if it's in a book form?? :-)I have been to places like Greece, Italy, France, Belgium, Monaco, Austria, Netherlands, Germany and more and live now in Chicago area. Traveling around the
world and meeting people of different backgrounds taught me to appreciate the languages more, including my...
13 Subjects: including prealgebra, reading, algebra 1, English
...As a tutor, I will explain math concepts in several different ways until understanding is achieved. I will also supply appropriate practice problems that will lead your student to the common
missteps that are made. I will provide problems that cover a varying degree of difficulty so that your student is prepared for any level problem.
2 Subjects: including prealgebra, algebra 1 | {"url":"http://www.purplemath.com/Montgomery_IL_Prealgebra_tutors.php","timestamp":"2014-04-19T20:04:16Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:84421258-601a-4ff9-ab29-96b105e0f491>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Infosys Placement Paper Aptitude Questions
18 June 2009
Infosys Placement Paper Questions
Aptitude / Reasoning objective type – multiple choice questions. Latest Sample RS Aggarwal and Barons Placement Test Problems.
1. If ROSE is coded as 6786 and HOT is coded as 879, then how will SEARCH be coded? In this question one number is assigned to each letter, so take SEARCH letter by letter and apply the given condition.
2. 600, 180, 54, ... complete the series.
3. sedative : pain :: solace : ...... Ans: grief.
4. play : director :: news : ...... Ans: editor.
5. river : dam :: traffic : ...... (a) signal (b) motion (c) vehicle (d)
6. Find the greatest number: (a) half of 50% of 50 (b) 3 times 40% of 40. Looking at the options, place "3 times 40% of 40" blindly.
7. Find the compound interest on Rs. 1000 at the rate of 5% p.a. for 15 years.
8. Find the greater of 1000^1000 and 1001^999.
9. The product of two numbers is constant. If one number is increased by 50%, by how much is the other decreased? Ans: 33.3%.
10. The L.C.M. and H.C.F. of two numbers are 84 and 21 respectively, and the ratio of the two numbers is 1:4. Find the greater number. Ans: 84, place blindly.
11. If x is 90% of y, then y is how much percent of x?
12. The cost of 15 apples and 20 mangoes is the same as the cost of 15 mangoes and 20 apples. What is the relation between their costs? (a) an apple costs as much as a mango. Place option (a) blindly.
13. There are 20 men and 5 women; how many couples can be made at most? Ans: 20C1 * 5C1 = 100, place blindly.
14. A bag contains 8 white and 6 black balls. Find the probability of drawing a white ball. Ans: 4/7, place blindly.
15. The numerator of a fraction is less than the denominator by 2. If 1 is subtracted from the numerator and 1 is added to the denominator, the ratio becomes half. What is the fraction? Ans: 5/7, place blindly.
16. If a certain sum of money becomes twice itself in 5 years, in how many years will Rs. 300 become Rs. 2400 at the same rate?
17. If the average of three numbers is 135 and the difference between the other two is 25, find the lowest number.
18. If thrice the first of three consecutive odd numbers is three more than twice the last, find the third (largest) odd number. Ans: 11.
19. There are 5 questions in each of two sections, and three questions must be attempted to pass the exam. In how many ways can a student choose the questions? Ans: 100, place blindly.
20. If the length of a rectangle is decreased by 4 and the breadth is increased by 3, it becomes a square whose area is equal to that of the rectangle. What is the perimeter of the original rectangle?
21. There is a hemisphere of radius 2 cm. How many litres will it hold? Given 1 litre = 1000 cubic cm.
22. There is a water tank with enough water to be consumed in 60 days. If there is a leakage, the water lasts only 40 days. If there are more similar leakages, how long will the water last?
23. A man saves Rs. 1000 each year and deposits it at the end of the year at 5% p.a. compound interest. How much will he have saved after 25 years?
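Two of the given answers can be checked quickly (my sketch, not part of the paper):

```python
# Q9: if x*y is constant and x grows by 50%, y must shrink by 1/3 (about 33.3%).
print((1 - 1 / 1.5) * 100)        # 33.33...

# Q14: probability of drawing a white ball from 8 white and 6 black.
from fractions import Fraction
print(Fraction(8, 8 + 6))         # 4/7
```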
One Comment »
• raman said:
Hi ,
What about the cube questions ,,ie. cube is cut into pieces and all
that is the pet question of INFOSYS.
Small Rings
Thanks to students Kristine, Peter D, David H, Brandon, Nathan, Matt, Ben, Liz T, Serena, Elizabeth M, Peter Q, Jit, and Emily for their contributions! (All are in Math 322, the second semester of
Abstract Algebra.) Thanks also to Professor Siehler for his valuable assistance in this work. This page is copyright 2005 by Gregory Dresden, of the Department of Mathematics here at Washington & Lee
Our goal is to find "nice" descriptions of small rings. Mathworld has a definition and discussion of rings, if you'd like a brief refresher. Note that our page is one of the references listed on the
Mathworld site! We're also (for now at least) the first item returned by Google when searching for "Small Rings". (Here's a cache of Google's search results, in case Google changes its mind.)
We would like to represent each ring using only subsets of:
• matrices (over Z or Z[n])
• modular rings Z[n]
• factor rings of Z[x]
• direct products of the above three types of rings
We are being fairly arbitrary with what is a "nice" description of a ring, but these seem to fit most people's description of "nice" rings. Notice also that the third item on the list can cover a lot
of different cases; for example, the ring of size four given by Z[2][i] is the factor ring Z[x]/<2, x^2+1>.
For further study, here are some good references:
• Christof Nöbauer's web site contains some info on finite rings. Of particular interest are his technical report Numbers of small rings (ps-file, middle of the page) and this chart on the number
of rings of prime-power order.
• Colin Fletcher's article Rings of small order (Mathematical Gazette, volume 64 [1980], no. 427, pages 9--22) can be found in our library. (Sorry, I can't find it online.)
• Benjamin Fine has a nice article, Classification of Finite Rings of Order p² (Mathematics Magazine, Vol. 66, No. 4 [Oct., 1993], pages 248--252) which can be downloaded for free from this JSTOR
link so long as you're on a University computer. (If you're having trouble, the math/science library can help you out.)
• R. Raghavendran has a lovely article, Finite Associative Rings (Compositio Mathematica, Vol. 21, No. 2 [1969], pages 195-229), which covers much of the material later used by Nöbauer
• Eric Weisstein's Mathworld site was mentioned above, but it's worth repeating.
• The number of rings of size 0, 1, 2, 3, 4, 5, etc. forms the sequence 0, 1, 2, 2, 11, 2, etc., also known as sequence number A037234 from Neil Sloane's On-line Encyclopedia of Integer Sequences.
If you find any other good articles or references, let me know!
Here's what we have so far:
Rings of Size 4
There are eleven rings of size 4, as follows:
• Three rings over Z[4].
• Eight rings over Z[2]+Z[2]:
□ Three commutative with unity.
□ Three commutative without unity.
□ Two non-commutative.
This was also the solution to problem E1648 in the MAA Monthly (Vol. 71, No. 8 [Oct., 1964], pages 918--920) by David Singmaster and D. M. Bloom, and can be found at this JSTOR link.
If you're interested, here are my old hand-written notes on rings of size 4.
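A brute-force check of the Z[2]+Z[2] count (my sketch, not from the page): enumerate the bilinear, associative multiplications on the Klein four-group and count them up to isomorphism.

```python
import itertools

E = [(0, 0), (0, 1), (1, 0), (1, 1)]  # Z2 + Z2; addition is XOR

def add(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

def table(p11, p12, p21, p22):
    """Extend the four basis products e_i * e_j bilinearly to all of E."""
    prods = [[p11, p12], [p21, p22]]
    t = {}
    for a in E:
        for b in E:
            r = (0, 0)
            for i in range(2):
                for j in range(2):
                    if a[i] and b[j]:
                        r = add(r, prods[i][j])
            t[(a, b)] = r
    return t

rings = [t for ps in itertools.product(E, repeat=4)
         for t in [table(*ps)]
         if all(t[(t[(a, b)], c)] == t[(a, t[(b, c)])]
                for a in E for b in E for c in E)]

def canon(t):
    # additive automorphisms = GL(2, F2): send e1, e2 to distinct nonzero vectors
    forms = []
    for i1, i2 in itertools.permutations([(0, 1), (1, 0), (1, 1)], 2):
        s = {e: add(i1 if e[0] else (0, 0), i2 if e[1] else (0, 0)) for e in E}
        inv = {v: k for k, v in s.items()}
        forms.append(tuple(sorted((a, b, s[t[(inv[a], inv[b])]])
                                  for a in E for b in E)))
    return min(forms)

print(len({canon(t) for t in rings}))
# expect 8 (plus the three rings on Z_4 gives the eleven rings of size 4)
```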
Rings of Size p
There are only two rings of size p (for p prime), as follows:
• The ring Z[p] (standard multiplication).
• The ring of size p with trivial multiplication, which can be represented as the subring <p> of the ring Z[p^2].
Rings of Size pq
There are four rings of size pq (for p,q distinct prime). Any ring of size pq will have an ideal of size p and an ideal of size q, with trivial multiplication occurring between them. Thus, any ring
of size pq can be written as a direct product of rings of prime powers; that is, the ring will have elements of form (a,b) where a is from a ring of size p, and b from a ring of size q.
Rings of Size p^2
Just as in the case of rings of size 4, there are eleven rings of size p^2, as follows:
• Three rings over Z[p^2].
• Eight rings over Z[p]+Z[p]:
□ Three commutative with unity.
□ Three commutative without unity.
□ Two non-commutative.
Benjamin Fine's article (discussed above) contains an abstract description of these eleven rings.
Rings of Size p^2q
Any ring of size p^2q will be a direct product of two smaller rings with trivial multiplication occurring "between" the two rings. Thus, there will be twenty-two rings of size p^2q.
Rings of Size p^3
There are 52 (if p=2) or 53 (if p>2) rings of size p^3, as follows:
• Four rings over Z[p^3].
• Twenty (or twenty-one if p>2) rings over Z[p^2]+Z[p].
• Twenty-eight rings over Z[p]+Z[p]+Z[p].
There are eleven (twelve if p>2) rings of size p^3 with identity, broken down as follows:
• One over Z[p^3] (namely, the ring Z[p^3]).
• Three (or four if p>2) over Z[p^2]+Z[p]. According to R. Raghavendran's article, these rings are the following:
□ Z[p^2]+Z[p] with standard multiplication.
□ Z[x]/<p^2, px, x^2>.
□ Z[x]/<p^2, px, x^2 - p>.
□ Z[x]/<p^2, px, x^2 - mp>, for m a quadratic non-residue mod p. (Note that this is only possible for p>2 !!)
• Seven over Z[p]+Z[p]+Z[p].
Gregory Dresden, Department of Mathematics at Washington & Lee University | {"url":"http://home.wlu.edu/~dresdeng/smallrings/","timestamp":"2014-04-20T10:54:41Z","content_type":null,"content_length":"9402","record_id":"<urn:uuid:aae280ab-3ec1-481f-9bae-9f5cfed4263b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modeling the Force From a Fan | Science Blogs | WIRED
By Rhett Allain | 09.07.12 | 8:25 am
I am obsessed with helicopters. You probably already knew that. In my previous post, I looked at the power and force for a hovering helicopter. The basic assumption was that by pushing air down, the
air pushed back up. From this, I obtained the following expression for the force from the helicopter rotor:

F_air = ρAv²
I am calling this F[air] because it is the force of the air pushing on the helicopter. Don’t confuse this with the air resistance for an object moving through the air (which I often call the same
thing). In this model, ρ is the density of air (about 1.2 kg/m^3), A is the area of the rotors and v is the speed of the air after it passes through the fan.
For the hovering helicopter, this air force would be equal to the weight of the helicopter. Using this, I could get an expression for the velocity of the air. With the velocity of the air and the
force on the air, I could calculate the power needed to hover. This is what I got (without putting a value in for the air speed).
Now for some more data. I found this old fan cart in one of the labs.
Can I measure the force this fan exerts on the cart? Yup. Can I measure the power going into the fan motor? Yup. So, even though it isn’t really a helicopter, it is sort of like one. First, let’s get
to the force.
Measuring the Force
You can put this cart on a track and let it go. In order to get the acceleration, I will create a plot of position vs. time using the Vernier motion sensor. So, before I do this, let me get an
estimate of the amount of friction in this system. If I just give the cart a push with the fan off, I get an average acceleration of about 0.027 m/s^2. I suspect this will be small enough to ignore,
so I will for now.
Here is a plot for the motion of the cart with the fan on after it is released from rest.
The fitting parameter in front of the t^2 term is 0.1757 m/s^2. Since this is the same as the (1/2)*a term in the kinematic equation:
Then the acceleration is twice that parameter giving it a value of 0.3514 m/s^2. I am sure I have said this point about finding the acceleration before, but it is easy to forget. I repeated the
process a few times and found an average acceleration of 0.354 m/s^2. This seems large enough for me to neglect the effects of friction – if this were a real lab report, I would include friction.
If I consider this fan to be the only horizontal force on the cart, then I can find the value of this force from the acceleration and the mass. The cart plus its batteries has a mass of 0.576 kg.
This puts the fan force at:
But wait! There’s more. The fan has a “high” and a “low” setting. The above force is for the “high” setting. If I repeat the experiment on the “low” setting, I get a force from the fan of 0.122
Oh, I guess I should be clear about my assumptions. I already said I was ignoring friction. The other assumption is that the force from the fan doesn’t change with the speed of the cart. Of course,
this isn’t really true. As the carts goes faster, it won’t push the air as hard. Also, as it it goes faster there would be an air resistance force. In this case, the car is going pretty slowly so it
shouldn’t matter too much. Also, for the case of the hovering helicopter the speed would be zero.
I want to know the power going into this motor. The simplest way is to measure the change in electric potential (voltage) across the battery at the same time as measuring the current through the
battery. With this, the power would be:

P = I*ΔV
In “high” mode, the fan has 4.22 volts across it with a current of 2.12 Amps. This gives a power of 8.95 Watts. In “low” mode, the fan potential is 3.44 volts with a current of 1.59 Amps. This gives
a power of 5.47 Watts.
Perhaps I should make one change to my expression for power:

P_air = e*I*ΔV
The power the motor gets is indeed IΔV. But not all of this power goes into pushing the air. There is some loss. So, the e is the efficiency of this power transfer.
Oh, there is one more thing to measure – the fan size. In my helicopter analysis, I used this for the rotor area. This fan has a radius of 7.5 cm which gives it a rotor area of 0.0176 m^2.
Comparing Models to Data
I really measured two things. I measured the force from the fan and the power of the air. I don’t know the speed of the air coming off the fan. Let me solve the force expression for the velocity and
plug that into the power equation. This gives:
I already see that I am going to have a tough time here with just two data points. Ok. What if I solve for the efficiency for both high and low modes?
With this I get an efficiency of (remember the density of air is about 1.2 kg/m^3) 0.0378 for low power and 0.0500 for high power. Odd. I thought it would be much higher than that. At least the
efficiency is in the same ball park for high and low setting. Still, I am troubled. Maybe these tiny fan blades just don’t work as well as larger helicopter blades. Maybe I am an idiot and messed up
Even More Data
I couldn’t leave it alone. I had to get more data. So, I stuck some more batteries on the cart.
With this, I could run the fan in “high” and “low” mode with 1 extra battery and 2 extra batteries. This gives me a total of 6 different settings. Of course, the mass of the cart changes with more
batteries. That just means that I will have to multiply the acceleration by a different value to get the force of the fan.
Let me just throw that efficiency thing out. Here is a plot of the measured fan power vs. the calculated power.
At least it looks linear. However, the slope of this fitting function is only 0.0618. If I interpret this as the efficiency, it would only be about 6% efficient. I don’t know. Maybe these small fans
just aren’t the same as the big helicopter rotors. Clearly, I have no idea what I am doing.
You know, it would be cool if I repeated this with a very large low friction cart with a large (person-sized) fan. Maybe. | {"url":"http://www.wired.com/2012/09/modeling-the-force-from-a-fan/","timestamp":"2014-04-19T17:52:36Z","content_type":null,"content_length":"107743","record_id":"<urn:uuid:88e60d91-6f54-40fc-8f2b-9b1cc9a8d611>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help needed
October 4th 2012, 06:56 AM
Help needed
These are the questions
1) Evaluate : tan 95 + tan 40 - tan 95 * tan 40
2) If (√8 + i)^50 = 3^49(a+ib), find a^2 and b^2
3) Prove that : 2cos(pi/13)cos(9pi/13)+cos(3pi/13)+cos(5pi/13)=0
One is from complex numbers. I'm in XIth, need help, i have the formulas, but i can't solve it ,unfortunately. Someone explain it, or atleast solve it please.
And if anyone can give related questions for me to practise, i don't mind.
Thanks in advance. I've been trying it the whole day, and my its doing my head in.
October 4th 2012, 08:38 AM
Re: Help needed
Nobody still? I could use some help, to understand this atleast.
October 4th 2012, 09:02 AM
Re: Help needed
Perhaps we do not understand what you mean by "help". You have posted several problems but not shown what you have tried yourself or what you know about the problems.
October 4th 2012, 09:08 AM
Re: Help needed
By help i mean i need help to solve these problems. I've tried applying the identities but i couldn't get the final answer. I need to know how to solve them.
Especially the first one.
October 5th 2012, 04:28 AM
Re: Help needed
For the first one, use the trig identity for tan(A+B) and notice that 95+4.
For the second one, the general advice is that to raise a complex number to some power, first switch to polars.
For the third one make use of the identity for cosA + cosB.
October 5th 2012, 04:43 AM
Re: Help needed
I've done the last one.
Still didn't get you on the first one. How do i use the tan(A+B) identity?
October 5th 2012, 05:02 AM
Re: Help needed
Just write down the identity and follow your nose !
October 5th 2012, 05:10 AM
Re: Help needed
BTW I've just noticed that in an earlier post a 0 = 135 got chopped from the end of a line.
The line should have finished as ........ 95 + 40 = 135.
October 5th 2012, 06:09 AM
Re: Help needed
but the identity is tan(a+b) = (tan a + tan b)/(1 - tan a * tan b)
There is no 1-tan a tan b in denominator, and i tried adding it, and it didn't work.
October 6th 2012, 02:39 AM
Re: Help needed
95 + 40 = 135 is important.
October 7th 2012, 05:53 AM
Re: Help needed
use 95+40=135 ==> 95=135-40
tan(135-40) + tan (40) - tan(135-40)*tan(40) = ...
tan135 = -1, so after expanding, the expression will contain only tan(40), and no tan(95) any more.(Wink)
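Quick numeric checks of problems 1 and 2 (a sketch, not from the thread):

```python
import math

t, d = math.tan, math.radians
# Problem 1: tan 95 + tan 40 - tan 95 * tan 40 = -1, since tan 135 = -1
print(t(d(95)) + t(d(40)) - t(d(95)) * t(d(40)))   # -1.0

# Problem 2: |sqrt(8) + i| = 3, so |(sqrt(8)+i)^50 / 3^49| = 3 and a^2 + b^2 = 9
z = (math.sqrt(8) + 1j) ** 50 / 3 ** 49
print(abs(z) ** 2)                                  # ~9.0
```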
October 7th 2012, 06:18 AM
Re: Help needed
2) If (√8 + i)^50 = 3^49(a+ib), find a^2 and b^2
For the beginning of 2): 3^50 (√8/3 + (1/3)i)^50 = 3^49 (a+ib)
√8/3 = cos(φ), 1/3 = sin(φ)
3) Prove that : 2cos(pi/13)cos(9pi/13)+cos(3pi/13)+cos(5pi/13)=0
Use identity: 2cos(x)*cos(y) = cos(x+y) + cos(x-y)
group of order 4
August 15th 2009, 09:50 PM #1
Junior Member
Mar 2009
group of order 4
1. Let G be a group of order 4. Prove that every element of G has order 1,2, or 4.
2. Classify groups of order 4 by considering the following two cases:
a. G contains an element of order 4
b. Every element of G has order <4
could anyone please help me to solve these problems?
and what does it mean by 'Classify groups of order 4'?
the group itself will have order 4, and |e| = 1.
By Lagrange's theorem the order of any element divides 4, so for any element a, |a| = 1, 2 or 4.
If |a| = 1 then a = e.
If there is an a such that |a| = 4, then the group is cyclic, thus isomorphic to Z(4), so it will also have a subgroup of order 2.
If such an element does not exist, then the 3 non-identity elements will have order 2, and the group is isomorphic to Z(2)*Z(2).
"classify" actually means classify the group up to isomorphism
Body Mass Index
Body Mass Index (BMI) Calculator
The BMI Calculator calculates the Body Mass Index which is a measure of obesity for adult men and women. The BMI is calculated by dividing the body weight (in kilograms) by the height (in meters)
squared (BMI = weight/height^2). The BMI will overestimate fatness in people who are muscular or athletic, but fit persons will have a Waist-to-Height ratio less than 0.5, whereas overweight people
have Waist-to-Height ratios greater than 0.5.
Underweight BMI less than 18.5
Normal weight BMI 18.5 to 24.9
Overweight BMI 25 to 29.9
Obese BMI 30 or greater
Morbidly Obese BMI 40 or greater
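For illustration, the whole calculation fits in a few lines of Python (a sketch of the formula and table above; the function names are our own):

def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def category(b):
    """Map a BMI value onto the classes in the table above."""
    if b < 18.5:
        return "Underweight"
    if b < 25:
        return "Normal weight"
    if b < 30:
        return "Overweight"
    if b < 40:
        return "Obese"
    return "Morbidly obese"

print(round(bmi(70, 1.75), 1), category(bmi(70, 1.75)))   # prints: 22.9 Normal weight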
Ronnie Coleman (Mr. Olympia): BMI 41.4, an example of a muscular athlete whose BMI overstates body fat.
[BMI scale figure: 17.5 (anorexia threshold), 18.5 to 24.9 (normal range), 30 (obesity), 40 (morbid obesity)]
BMI Calculator for Women, BMI Calculator for Men
BMI is calculated based on the weight and the height of a person. There is no such thing as a BMI calculator for females or a BMI calculator for males, but as the pictures show, men and women have
different weight distributions. Men tend to accumulate mass at the waist, whereas women accumulate weight in the hips and buttocks.
The Body Mass Index was invented by Adolphe Quetelet in the first half of the 19th century. Although the index does not measure the percentage of body fat, it is used to estimate a healthy body
weight based on a person's height. It is the most widely used metric for identifying individuals with weight problems within a population due to its ease of measurement and calculation.
1. Special thanks to Professor Brian Curless at the Department of Computer Science & Engineering, University of Washington for creating the woman images. [Brian Curless homepage]
2. Brett Allen, Brian Curless, Zoran Popović, Digital Humans [link]
3. Brett Allen, Brian Curless, Zoran Popović, The space of human body shapes: reconstruction and parameterization from range scans. [link]
© Copyright - Antonio Zamora | {"url":"http://www.scientificpsychic.com/health/Body-Mass-Index-BMI.html","timestamp":"2014-04-21T05:04:37Z","content_type":null,"content_length":"27005","record_id":"<urn:uuid:52aade35-c7a0-4f43-9e01-02e823c4b5fd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haskell Typeclass for Tuples
I was playing around with typeclasses and made this:
class Firstable f where
fst :: f a -> a
class Secondable f where
snd :: f a -> a
I then tried to add an implementation for (,) and realized that I could do:
instance Secondable ((,) a) where
snd (x,y) = y
I'm pretty sure this works because Secondable expects something of kind (* -> *), and ((,) a) has that kind. However, I don't know how to implement Firstable for ((,) * a), where * is the bound variable. In
my interpretation I am trying to do the equivalent of:
instance Firstable (flip (,) a) where ...
Is there a way to do this in Haskell? Preferably without extensions?
haskell typeclass
AFAIK, no: you'd need TypeSynonymInstances, but type synonyms cannot be partially evaluated. But are you aware of the alternative with MultiParamTypeClasses? That's perhaps a bit ugly, but it
works. – leftaroundabout Jun 5 '12 at 15:28
1 You might be interested in how the tuple package handles this: hackage.haskell.org/package/tuple – John L Jun 16 '12 at 2:54
@JohnL really cool, thanks! – Charles Durham Jun 18 '12 at 20:58
3 Answers
A version with worse parametricity guarantees can be had with MPTCS and Fundeps or with TypeFamilies.
type family Fst p
type instance Fst (a,b) = a
type instance Fst (a,b,c) = a
class First p where
fst :: p -> Fst p
instance First (a,b) where
fst (a,_) = a
instance First (a,b,c) where
fst (a,_,_) = a
but ultimately, you'll need to use some extensions.
Is type instance Fst (a,b,c) = b supposed to read type instance Fst (a,b,c) = a instead? – mithrandi Jun 14 '12 at 6:47
Yep, fixed it. =) – Edward Kmett Jun 16 '12 at 2:08
You can use type families like so (A different take on what Edward wrote):
{-# LANGUAGE TypeFamilies #-}
class Firstable a where
type First a :: *
fst :: a -> First a
class Secondable a where
type Second a :: *
snd :: a -> Second a
instance Firstable (a,b) where
type First (a, b) = a
fst (x, _) = x
instance Secondable (a,b) where
type Second (a, b) = b
snd (_, y) = y
class Firstable f where
fst :: f a b -> a
class Secondable f where
snd :: f a b -> b
This way only 2-tuples could be made an instance of that class, no? That would kind of defeat the purpose of having the type class to begin with. – sepp2k Jun 5 '12 at 15:38
@sepp2k First, he never specified the purpose and I interpreted it to mean he just wanted to generalize type constructors of (at least) two arguments. Second, his two original classes
have the exact same signature, implying he either got them wrong, or he should just use one class to describe both fields. – Gabriel Gonzalez Jun 5 '12 at 15:44
@GabrielGonzalez yeah, I intended the tuples to be able to be implemented for (,),(,,)... – Charles Durham Jun 5 '12 at 15:52
1 This approach will actually work if you're willing to accept counting from the right instead of from the left, but I don't think you're going to make it work the other way around. –
Louis Wasserman Jun 5 '12 at 16:12
That's an interesting idea, although in that case, you would want to do: fst :: f a -> a, snd :: f b a -> b, ... – Gabriel Gonzalez Jun 5 '12 at 17:03
Dataflow Analysis of Scalar Vars
goren@cse.ucsc.edu (Sezer Goren)
17 Jan 1998 00:05:06 -0500
From comp.compilers
| List of all articles for this month |
From: goren@cse.ucsc.edu (Sezer Goren)
Newsgroups: comp.compilers
Date: 17 Jan 1998 00:05:06 -0500
Organization: UC Santa Cruz CIS/CE
Keywords: analysis, question
Dear Compiler People:
My specialty is behavioral synthesis which is nothing but hardware
compilation from an HDL specification such as Verilog. I have
developed a state-of-the-art tool for that.
In doing that, I believe I devised a very useful algorithm for
dataflow analysis of scalar variables. The algorithm is of quadratic
complexity even for non-reducible flow graphs. According to the DRAGON
book, the worst case complexity of known algorithms for non-reducible
flow graphs is exponential. I am not sure if I made a breakthrough, or
am not aware of the new research on the subject, or am making a
mistake in my algorithm. If I am making a mistake, it should not be a
trivial one as my algorithm is used in hundreds of Verilog designs and
always compiles correct hardware, and also it is very solid in terms
of its graph theoretical basis.
I would like to publish my research. Since I am not a compiler
researcher, I would like to team up with a compiler expert and
publish a joint paper.
I have contacted a well-known prof at Stanford but I don't think I
will get enough attention if any. I am looking for interested
researchers or maybe you could point me to the right person.
PS: I am posting this from my fiancee's account since we have a
firewall at work. Please reply to me at fatih@aluxs.micro.lucent.com.
A short bio:
I got my PhD from CWRU in Jan 95. I joined General Motors Research in
March 93 before I finished my PhD. I worked on behavioral synthesis
there until Aug 97 when I joined Lucent Microelectronics in Silicon
Valley as an ASIC design consultant.
Fatih Ugurdag
Lucent Technologies, Santa Clara, CA
Math Notation in Email Messages or Web Forms
Communicating about mathematics via email can be difficult, because you must work within the restrictions imposed by your mail program. You can either (1) try to represent mathematical expressions in
the body of your message using only the standard character set, or (2) send typeset documents as attachments.
If you would like to talk about math in a web form (such as this Dr. Math submission form), you are limited to what you can type into the form window. Typing math in a web form involves the same
problems as typing it in the body of an email message.
1. Math Notation in the Body of an Email Message or Web Form
If you have ever tried to represent math symbols in an email message, you are aware of the limitations involved in doing so. Even if your mail program allows special typesetting, you shouldn't
use it unless you're certain that the recipient of your message can process this information. To communicate with the broadest possible audience, you need to use only ASCII - the standard set of
characters. Following are guidelines for and examples of ASCII math notation:
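Typical conventions (representative, not an exhaustive list) include:
Exponents and subscripts: x^2, x_1 or a(n)
Fractions: use parentheses to avoid ambiguity, e.g. (a+b)/(c+d) rather than a+b/c+d
Roots: sqrt(2); fractional exponents for other roots, e.g. x^(1/3)
Multiplication: 2*x or 2x; spell out Greek letters, e.g. pi, theta
Sums and integrals: sum_(n=1)^infinity 1/n^2, int_0^1 x^2 dx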
2. Sending Attached Documents
This method first requires that you know how to use some math typesetting program (such as MathType or TeX) to create the files. You also need to make sure that your mail program allows you to
send attached documents. Assuming you've gotten this far, you still need to take into account whether or not the recipient of your message is able to accept attached files with their mail
program. If they can, you will also need to make sure that they are able to read the particular type of file you are sending.
The bottom line is, sending a math typeset document as an attachment is the right choice if you and the recipient of your message are comfortable using the technology at hand. If not, you might
want to opt for ASCII notation method (1). | {"url":"http://mathforum.org/typesetting/email.html","timestamp":"2014-04-21T12:40:25Z","content_type":null,"content_length":"5327","record_id":"<urn:uuid:c963bd49-2259-41bc-b5d4-d1ec1e1fce6b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Algebra Tutors
New Castle, DE 19720
Experienced Math Tutor Available in New Castle!
...I completed math classes at the university level through advanced calculus. This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in
linear algebra
, differential equations, analysis, complex variables, number theory, and...
Offering 10+ subjects including calculus | {"url":"http://www.wyzant.com/Chester_PA_Linear_Algebra_tutors.aspx","timestamp":"2014-04-19T22:52:54Z","content_type":null,"content_length":"60958","record_id":"<urn:uuid:bc0fcccc-2497-44d4-ac29-653167fa3ea0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Motion Of A 4-lb Block B In A Horizontal... | Chegg.com
The motion of a 4-lb block B in a horizontal plane is defined by the relations r = 3t^2 - t^3 and θ = 2t^2, where r is expressed in feet, t in seconds, and θ in radians. Determine the radial and
transverse components of the force exerted on the block when (a) t = 0, (b) t = 1 s.
Fig. P12.68 | {"url":"http://www.chegg.com/homework-help/motion-4-lb-block-b-horizontal-plane-defined-relations-r-3t-chapter-12-problem-68p-solution-9780072976939-exc","timestamp":"2014-04-18T14:42:12Z","content_type":null,"content_length":"42888","record_id":"<urn:uuid:7c4cf7b9-41a8-44d2-9c56-81253715fca5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
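A sketch of the standard approach (taking g = 32.2 ft/s^2, so the mass is m = 4/32.2 slug): differentiate the given motion,
\[
\dot r = 6t - 3t^2,\qquad \ddot r = 6 - 6t,\qquad \dot\theta = 4t,\qquad \ddot\theta = 4,
\]
and apply Newton's second law in polar coordinates,
\[
F_r = m\,(\ddot r - r\dot\theta^{2}),\qquad F_\theta = m\,(r\ddot\theta + 2\dot r\dot\theta).
\]
At t = 0: r = 0, dr/dt = 0, dθ/dt = 0, giving F_r = m(6) ≈ 0.745 lb and F_θ = 0. At t = 1 s: r = 2, dr/dt = 3, dθ/dt = 4, giving F_r = m(0 - 2·16) ≈ -3.98 lb and F_θ = m(8 + 24) ≈ +3.98 lb.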
N(t) = γA(t-τ)
June 25th 2008, 09:03 AM #1
Jun 2008
N(t) = γA(t-τ)
A step forward from the N(t) = γA(t) situation.
N(t) = γA(t-τ)
As before, N(t) is a known number (of peatlands) at time t, A(t) is a known area (deglaciated) at time t. γ is a hypothetical number of peatlands that can form per unit of deglaciated area.
τ is a delay period (before peatlands get established).
Ok, so I got γ solved. I'm not quite sure though if I should use the value of γ at each time or the average value for all. I tried both. Neither makes any sense with the following
> A(t-τ) = N(t)/γ
> A(t)-A(τ) = N(t)/γ
> A(τ) = A(t) - N(t)/γ
Is this correct so far? Probably not, because it doesn't give any meaningful answer. And I don't know what A(τ) actually is, what I want to find is τ.
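A numerical sketch of one way to estimate τ (illustrative only, not from the thread: it assumes N and A are sampled at the same increasing times t, and the names are made up). Grid-search τ, fitting γ by least squares at each candidate delay and keeping the delay with the smallest misfit:

import numpy as np

def estimate_tau(t, N, A, taus):
    """For N(t) = gamma * A(t - tau): try each candidate tau, interpolate
    A at the shifted times, fit gamma by least squares, keep the best tau."""
    best_tau, best_err = None, np.inf
    for tau in taus:
        A_shift = np.interp(t - tau, t, A, left=np.nan, right=np.nan)
        ok = ~np.isnan(A_shift)
        denom = np.dot(A_shift[ok], A_shift[ok])
        if ok.sum() < 2 or denom == 0:
            continue
        gamma = np.dot(A_shift[ok], N[ok]) / denom
        err = np.sum((N[ok] - gamma * A_shift[ok]) ** 2)
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau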
How to graph more than two restrictions on a TI-84 Plus Silver Edition?
I am doing a calculator art project and I can't figure out how to graph pieces of functions together using more than just two restrictions on one equation.
For example:
I want to graph the line 0.02X^2+15.8 when (X>-3.4), (X<-2.55), (X>-0.43), (X<0.43), (X>2.55), and (X<3.4). I know about the whole typing the equation before putting in the restrictions separated by
division signs, but no line at all shows up (well, at least in the window) when I enter more than two restrictions. Why is this?! It is frustrating as hell and I have spent about the last hour
looking everywhere online, but I haven't found anything related to this. Is it simply impossible for a TI-84 Plus Silver Edition to graph with more than two restrictions? If not, how can I get all
three sets of inequalities to graph on my calculator?
Hmm, it works if you do something like this (tested!):
0.02X^2+15.8/(((X>-3.4) and (X<-2.55)) or ((X>-0.43) and (X<0.43)) or ((X>2.55) and (X<3.4)))
Notice that if you use _only_ "and" you will have a problem, because the x value can't be in all 3 domains at a time, so "(((X>-3.4) and (X<-2.55)) and ((X>-0.43) and (X<0.43)) and ((X>2.55) and (X<3.4)))"
will always answer 0, the division is undefined everywhere, and nothing plots!
I can see the result but I need to zoom in on a particular zone.
Is it ok for you ?
Good luck with your art project :) | {"url":"http://www.ti-84-plus.com/calculatorquestions/index.php/1664/how-graph-more-than-two-restrictions-84-plus-silver-edition","timestamp":"2014-04-17T18:24:10Z","content_type":null,"content_length":"41523","record_id":"<urn:uuid:7207c11a-aab4-4aa2-aca7-61fcab307eb4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Sciences—MS
Students can pursue a Master's Degree in Mathematical Sciences with an emphasis in discrete mathematics, pure mathematics, statistics, or applied mathematics. Applicants are not required to have an
undergraduate degree in mathematics. Each concentration area has a set of required courses. Students completing a thesis qualify for a Plan A degree; a Plan B degree requires completion of a project,
and a Plan C degree requires completion of course work plus passing the qualifying exams at the MS level.
Concentrations—Core Courses and Electives
All students pursuing an MS in mathematical sciences must choose one of four concentrations.
Applied Mathematics
Applied Mathematics students develop expertise in the theory and application of ordinary and partial differential equations, optimization, and computational methods.
Core courses:
Elective courses (choose two):
Discrete Mathematics
Students of Discrete Mathematics study design and coding theory, graph theory number theory, and algebra.
Core courses:
Elective courses (choose two):
Pure Mathematics
The curriculum in Pure Mathematics consists of a blend of both theoretical and applied courses.
Core courses:
Elective courses (choose three):
While providing a broad statistics background, the Statistics program also specializes in statistical genetics, computational methods, and functional data analysis.
Core courses:
Note: It is important to recognize that many of these courses are offered only in alternate years. Students must plan carefully to complete the MS degree in the expected two academic years.
MS Plans
There are three different plans under which the MS in mathematical sciences can be earned. Regardless of the plan, students must complete the core courses in their chosen concentration (see above).
Thesis Option
This option requires a research thesis prepared under the supervision of the advisor. The thesis describes a research investigation and its results. The scope of the research topic for the thesis
should be defined in such a way that a full-time student could complete the requirements for a master’s degree in twelve months or three semesters following the completion of course work by regularly
scheduling graduate research credits. The thesis must be prepared following the current procedures.
At least two weeks prior to the oral examination, students must
• Schedule their examination using the Pre-defense form
• Distribute the thesis to the examining committee
The Degree schedule form (M4) must be approved before a defense is scheduled.
Students must also report the results of the oral examination and submit a final thesis to the Graduate School prior to completing their degrees.
The minimum requirements are as follows:
Course work (minimum) 20 credits
Thesis research 6–10 credits
Total (minimum) 30 credits
Distribution of course work credit
5000–6000 series (minimum) 12 credits
3000–4000 level (maximum) 12 credits
Report Option
This option requires a report describing the results of an independent study project. The scope of the research topic should be defined in such a way that a full-time student could complete the
requirements for a master’s degree in twelve months or three semesters following the completion of course work by regularly scheduling graduate research credits. The report must be prepared following
the current procedures.
At least two weeks prior to the oral examination, students must
• Schedule their examination using the Pre-defense form
• Distribute the report to the examining committee
The Degree schedule form (M4) must be approved before a defense is scheduled.
Students must also report the results of the oral examination and submit a single paper copy of the corrected and approved report in a sturdy binder including an original signature page to the
Graduate School.
Of the minimum total of 30 credits, at least 24 must be earned in course work other than the project.
Course work 24 credits
Report 2–6 credits
Total (minimum) 30 credits
Distribution of course work credit
5000–6000 series (minimum) 12 credits
3000–4000 level (maximum) 12 credits
Coursework Option
This option requires a minimum of 30 credits be earned through coursework. Research credits may be used on a case-by-case basis following the approval of the graduate program director.
A graduate program may require an oral or written examination before conferring the degree.
Distribution of coursework credit
5000–6000 series (minimum) 18 credits
3000–4000 level (maximum) 12 credits
If you choose a research-based plan (Plan A or B), you should find an advisor by the end of your first semester, if possible, and no later than the end of your second semester.
If you choose Plan C, you should take the Qualifying Exam no later than your second semester. Some students require more than one try to pass the exam.
Problems where Conjugate gradient works much better than GMRES
up vote 7 down vote favorite
I am interested in cases where Conjugate gradient works much better than GMRES method.
In general, CG is preferable choice in many cases of SPD because it requires less storage and theoretical bound on convergence rate for CG is double of that GMRES. Are there any problems where such
rates are actually observed? Is there any characterization of cases where GMRES performs better or comparable to CG for same number of spmvs.
Since Residual history is only available, in many cases to judge how well an algorithm has performed, would GMRES have always lower residual norm than CG in that case?
linear-algebra numerical-linear-algebra na.numerical-analysis
1 This is probably not apt for MO. You should probably try scicomp.stackexchange.com. – user11000 May 1 '13 at 4:09
2 From the look of it, yes, but there is probably a completely theoretical question in matrix approximation theory hidden behind this one. – Federico Poloni May 1 '13 at 7:14
add comment
Engineering books
Free books on technology subjects
Calculus Based Physics is a two-volume introductory physics textbook complete with ancillary materials. It can be used as is or edited/modified by users. Ancillary materials include physics
problems with screen-capture video solutions, Physics question slides, and on-line quizzes.
This book is intended for undergraduate students in Mechanical, Chemical, and Aeronautical Engineering.
The book contains chapters on Isentropic Flow (nozzle flow), Isothermal Nozzle, Shock wave and Oblique shocks, and Prandtl-Meyer flow as well chapters on Isothermal Flow, Fanno Flow, and Rayleigh
Classic kinetics (e.g. in chemistry) is based on the assumption that reactions take place in small vessels ... is often not justified. This book formulates a basis for a kinetics where the
“mixing condition” is relaxed: the condition is qualitatively deleted – not merely neutralized by use of various approximations.
An examination of the benefits and potential dangers of the new technology revolution
The book is out-of-print. A scanned version (PDF format) may be downloaded for personal use.
Molecular Cell Biology
A really nice book with a weird interface where you're required to type in a keyword search to access the chapters. Apparently there's no way to read it chapter by chapter.
In quantum computers we exploit quantum effects to compute in ways that are faster or more efficient than, or even impossible, on conventional computers. Quantum computers use a specific physical
implementation to gain a computational advantage over conventional computers. Properties called superposition and entanglement may, in some cases, allow an exponential amount of parallelism.
Also, special purpose machines like quantum cryptographic devices use entanglement and other peculiarities like quantum uncertainty.
• Universal algebra for computer science - all the algebra computer scientists will need
• Lessons in Electric Circuits - 6 volumes, the last one published in January 2004, for students in Electrical Engineering. Scroll down for complete downloads of all the books in a single tar.gz
• Structure and interpretation of classical mechanics - There has been a remarkable revival of interest in classical mechanics in recent years. We now know that there is much more to classical
mechanics than previously suspected. The behavior of classical systems is surprisingly rich; derivation of the equations of motion, the focus of traditional presentations of mechanics, is just
the beginning. Classical systems display a complicated array of phenomena such as nonlinear resonances, chaotic behavior, and transitions to chaos.
• How language works: the cognitive science of linguistics - Students studying linguistics for the first time often have misconceptions about what it is about and what it can offer them. They may
think that linguists are authorities on what is correct and what is incorrect in a given language. But linguistics is the science of language; it treats language and linguistic behavior as
phenomena to be studied scientifically. Linguists want to figure out how language works. They are no more in the business of making value judgments about people's language than geologists are in
the business of making value judgments about the behavior of the earth.
• Modern Signal Processing - Signal processing is a ubiquitous part of modern technology. Its mathematical basis and many areas of application are the subject of this book, based on a series of
graduate-level lectures held at the Mathematical Sciences Research Institute. Emphasis is on current challenges, new techniques adapted to new technologies, and certain recent advances in
algorithms and theory. The book covers two main areas: computational harmonic analysis, envisioned as a technology for efficiently analyzing real data using inherent symmetries; and the
challenges inherent in the acquisition, processing and analysis of images and sensing data in general - ranging from sonar on a submarine to a neuroscientist's fMRI study.
• Model Theory, Algebra, and Geometry - Model theory is a branch of mathematical logic that has found applications in several areas of algebra and geometry. It provides a unifying framework for the
understanding of old results and more recently has led to significant new results, such as a proof of the Mordell-Lang conjecture for function fields in positive characteristic. Perhaps
surprisingly, it is sometimes the most abstract aspects of model theory that are relevant to these applications.
• Comparison Geometry - Comparison Geometry asks: What can we say about a Riemannian manifold if we know a (lower or upper) bound for its curvature, and perhaps something about its topology?
Powerful results that allow the exploration of this question were first obtained in the 1950s by Rauch, Alexandrov, Toponogov, and Bishop, with some ideas going back to Hopf, Morse, Schoenberg,
Myers, and Synge in the 1930s.
• Mathematical Tools for Physics - This text is in PDF format, and is my attempt to provide a less expensive alternative to some of the printed texts currently available for this course. If you
find any mistakes or any parts that are unclear or any topics that you think I should not have omitted, please tell me. I intend this for the undergraduate level, providing a one-semester bridge
between some of the introductory math courses and the physics courses in which we expect to use the mathematics. This is the course typically called Mathematical Methods in Physics at many
• The Chaos Hypertextbook - Mathematics in the age of computers
• Immunology Overview - bacteriology, virology, mycology, parasitology, infectious diseases and lots of fun stuff if you're a biology or medical major
• A Radically Modern Approach to Introductory Physics - This text has developed out of an alternate beginning physics course at New Mexico Tech designed for those students with a strong interest in
physics. The course includes students intending to major in physics, but is not limited to them. The idea for a "radically modern" course arose out of frustration with the standard two-semester
treatment. It is basically impossible to incorporate a significant amount of "modern physics" (meaning post-19th century!) in that format. Furthermore, the standard course would seem to be
specifically designed to discourage any but the most intrepid students from continuing their studies in this area - students don't go into physics to learn about balls rolling down inclined
planes - they are (rightly) interested in quarks and black holes and quantum computing, and at this stage they are largely unable to make the connection between such mundane topics and the
exciting things that they have read about in popular books and magazines. It would, of course, be easy to pander to students - teach them superficially about the things they find interesting,
while skipping the "hard stuff". However, I am convinced that they would ultimately find such an approach as unsatisfying as would the educated physicist.
• The Unknowable - Having published four books on this subject, why a fifth?! Because there's something new: I compare and contrast Godel's, Turing's and my work in a very simple and
straight-forward manner using LISP.
• Exploring randomness - I really want you to follow my example and hike off into the wilderness and explore AIT on your own! You can stay on the trails that I've blazed and explore the well-known
part of AIT, or you can go off on your own and become a fellow researcher, a colleague of mine! One way or another, the goal of this book is to make you into a participant, not a passive observer
of AIT. In other words, it's too easy to just listen to a recording of AIT, that's not the way to learn music. I'd like you to learn to play it on an instrument yourself, or, better still, to
even become a composer!
• Introduction to packet radio - This series of eighteen articles was originally written in 1988 to appear in Nuts & Volts, the newsletter of the San Francisco Amateur Radio Club. The series has
been widely distributed since then, with revisions issued in 1991, 1993, and 1995. Occasional revisions were made to this version on the web thereafter, in the late 1990s. The author is no longer
active in packet radio and is unable to provide up to date information on packet radio; however he has left this material on the Internet for access by those who might find it helpful.
• Newtonian Physics - not a programming or computer science book, but considering how many computer science majors have to take physics sooner or later in their life, this is a good place to start.
A free Physics textbook suitable for introductory college Physics course.
• Fundamentals of Die Casting - PDF file. Technologies for die casting professionals: Technologies developed in recent years are described in this book. Errors of the old models and the violations
of physical laws are shown. Examples: The "common" p-Q² diagram violates many physical laws, such as the first and second laws of thermodynamics. The "common" p-Q² diagram produces
trends that don't reflect reality.
• A problem course in Mathematical Logic - A Problem Course in Mathematical Logic is intended to serve as the text for an introduction to mathematical logic for undergraduates with some
mathematical sophistication. It supplies definitions, statements of results, and problems, along with some explanations, examples, and hints. The idea is for the students, individually or in
groups, to learn the material by solving the problems and proving the results for themselves. The book should do as the text for a course taught using the modified Moore-method. The material and
its presentation are pretty stripped-down and it will probably be desirable for the instructor to supply further hints from time to time or to let the students consult other sources. Various
concepts and topics that are often covered in introductory mathematical logic or computability courses are given very short shrift or omitted entirely, among them normal forms, definability,
and model theory.
• A Heat Transfer Textbook - We are placing a mechanical engineering textbook into an electronic format for worldwide, no-charge distribution. The aim of this effort is to explore the possibilities
of placing textbooks online -- effectively giving them away. Two potential benefits should accrue from doing this. First, in electronic format, textbooks can be continually corrected and updated,
without the delays inherent in printed books (second and later editions are typically published on a five-year cycle). Second, free textbooks hold the potential for fundamentally altering the
economics of higher education, particularly in those environments where money is scarce.
• Stephen Wolfram - New Kind of Science - When Stephen Wolfram of Mathematica fame self-published A New Kind of Science in 2002, he raised the suspicions of many in scientific communities that he
was taking advantage of a lot of other people's work for his sole financial gain and that he was going against the open nature of academia by using restrictive copyright. Yesterday, Wolfram and
company released the entire contents of NKS for free on the Web (short registration required). Perhaps Wolfram is giving back to the scientific community; perhaps it is simply clever marketing
for a framework that is beginning to gain momentum. For any matter, the entire encyclopedic volume is online, and this appears to be a positive step for scientific writing.
• Cheap Complex Devices - Computers can play chess as well as any grandmaster. They can diagnose cancer as well as any oncologist, find oil as well as any seismologist. But can they do that most
human of all activities: can they tell a story? Read Cheap Complex Devices and find out. This volume, edited by John Compton Sundman, (an erstwhile technical writer whose out-of-print manuals
command large sums at online auctions, now a recluse), contains the two winning entries of the novel-writing contest sponsored by the Society for Analytical Engines (SAE). The introduction to
Cheap Complex Devices, written by the SAE Contest Committee, contains the history of the contest and explains the criteria by which the entries were judged.
• The Scientist and Engineer's Guide to Digital Signal Processing - available in PDF format. Excellent book to start with digital signal processing, by the way.
• Computer Aids for VLSI Design - HTML book with nice table of contents.
• Neural Nets: A Neural Network is an interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal neuron. The processing ability of the
network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns. In order to see how very different this
is from the processing done by conventional computers it is worth examining the underlying principles that lie at the heart of all such machines.
• Other Maths books for free download - Most college mathematics textbooks attempt to be all things to all people and, as a result, are much too big and expensive. This perhaps made some sense when
these books were rather expensive to produce and distribute--but this time has passed.
• Machine Language For Beginners Machine Language or Assembly language is the programming language that directly talks with the machine or computer at the basic machine level. So, machine language
is fast, flexible and consumes less memory, but at the same time is a bit complex for novices. This free book download by Richard Mansfield describes the language with reference to easily
comprehensible BASIC language with great figures and charts, to make it simple for any programmer. Although understanding machine language requires a study of microprocessor architecture, this
book alone is not enough to understand microprocessor architecture. The book describes mostly 6502 machine language, but on the positive side it has appendices and instructions to convert
the programming examples given into any machine language version. Another advantage with this book is that it describes programming for most personal computers.
Their infinite wisdom
Graphic: Christine Daniloff
(PhysOrg.com) -- Hotel guests come and go. But in the first decade of the 1900s, a pair of frequent Russian visitors to the Hotel Parisiana, near the Sorbonne on Paris' Left Bank, stood out vividly.
The children of the hotel's proprietors, the Chamont family, remembered them into the 1970s as 'hardworking' and 'pious' men. The guests, Dimitri Egorov and Nikolai Luzin, were mathematicians,
studying in Paris; they often prayed and went to church.
The Russians were embarking on a grand project: exploring the unknown features of infinity, the notion that a quantity can always increase. Infinity’s riddles have fascinated intellectuals from
Aristotle to Jorge Luis Borges to David Foster Wallace. In ancient Greece, Zeno’s Paradox stated that a runner who keeps moving halfway toward a finish line will never cross it (in effect, Zeno
realized the denominator of a fraction can double infinitely, from 1/2 to 1/4 to 1/8, and so on). Galileo noticed but left unresolved another brain-teaser: A series that includes every integer (1, 2,
3, and so on) seems like it should contain more numbers than one that only includes even integers (2, 4, 6, and so on). But if both continue infinitely, how can one be bigger than the other?
As it happens, infinity does come in multiple sizes. And by discovering some of its precise characteristics, the Russians helped show that infinity is not just one abstract concept. Egorov and Luzin,
with the help of another colleague, Pavel Florensky, created a new field, Descriptive Set Theory, which remains a pillar of contemporary mathematical inquiry. They also founded the Moscow School of
mathematics, home to generations of leading researchers.
The Russians’ success in grasping infinity concretely went hand in hand with their unorthodox religious beliefs, according to MIT historian of science Loren Graham. In a recent book, Naming Infinity:
A True Story of Religious Mysticism and Mathematical Creativity, co-written with French mathematician Jean-Michel Kantor and published this year by Harvard University Press, Graham describes how the
Russians were “Name-worshippers,” a cult banned in their own country. Members believed they could know God in detail, not just as an abstraction, by repeating God’s name in the “Jesus prayer.”
Graham thinks this openness to apprehending the infinite let the trio make its discoveries--before Egorov and Florensky were swept up in Stalin’s purges. “The impact of the Russian mathematicians has
been enormous,” says Graham, who has spent a half-century studying the history of science in Russia. “But their fates were tragic.”
Settling set theory
In studying infinity, the Russians followed Georg Cantor, the German theorist who from the 1870s to the 1890s formalized the notion that infinity comes in multiple sizes. As Cantor showed, the
infinite set of real numbers is greater than the infinite set of integers. Because real numbers can be expressed as infinite decimals (like 6.52918766145 … ), there are infinitely many in between
each integer. The set containing this continuum of real numbers must thus be larger than the set of integers. In Cantor’s terms, when there is no one-to-one correspondence between members of infinite
sets, those infinities have different sizes.
Cantor’s work made it clear that the study of infinity was actually the study of sets: their properties and the functions used to create them. Today, set theory has become the foundation of modern
math. But in the aftermath of Cantor, the basics of set theory were unclear. As Graham and Kantor describe it, even leading mathematicians found the situation unsettling. Three French thinkers —
Emile Borel, Henri Lebesgue, and Rene Baire — who made advances in set theory nonetheless decided by the early 1900s that the study of infinity had lost its way. They felt theorists were relying more
on arbitrary rule-making than rigorous inquiry. “The French lost their nerve,” says Graham.
By contrast, Graham and Kantor assert, the Russian trio found “freedom” in the mathematical uncertainties of the time. It turns out there were plenty of concrete advances in set theory yet to be
made; Luzin in particular pushed the field forward in the 1910s and 1920s, making discoveries about numerous types of sets involving the continuum of real numbers (the larger of the infinities Cantor
found); Descriptive Set Theory details the properties of these sets. In turn, many of Luzin’s students in the Moscow School also became prominent figures in the field, including Andrei Kolmogorov,
the best-known Russian mathematician of the 20th century.
What’s in a name?
Naming Infinity argues that the Russians thought their mathematical inquiries corresponded to their religious practices. The Name-worshippers believed the name of God was literally God, and that by
invoking it repeatedly in their prayer, they could know God closely — a heretical view for some.
Graham and Kantor think the Russians saw their explorations in math the same way; they were defining (and naming) sets in areas where others thought knowledge was impossible. Luzin, for one, often
stressed the importance of “naming” infinite sets as a part of discovering them. The Russians “believed they made God real by worshipping his name,” the book states, “and the mathematicians … thought
they made infinities real” by naming and defining them.
Graham also suggests a parallel between the Russians and Isaac Newton, another believer (and heretic). Historians today largely view Newton’s advances in physics as part of a larger personal effort —
including readings in theology and alchemy experiments — to find divine order in the world. Similarly, the Russians thought they could comprehend infinity through both religion and mathematics.
Mathematicians have responded to Naming Infinity with enthusiasm. “It’s a wonderful book for many reasons,” says Barry Mazur, the Gerhard Gade University Professor at Harvard, who regards it as “an
excellent way of getting into the development of set theory at the turn of the century.”
Moreover, Mazur agrees that the connection between the religious impulses of the three Russians and their mathematical studies seems significant, even if there is only a general affinity between the
two areas in matters such as naming objects. “It is more a conveyance of energy, than a conveyance of logic,” Mazur says. Religion could not trigger precise mathematical moves, he thinks, but it
provided the Russians with the intellectual impetus to move forward.
Victor Guillemin, a professor of mathematics at MIT, also finds this account convincing. In the 1970s, it was Guillemin, staying at the Hotel Parisiana like Egorov and Luzin before him, who discussed
the Russians’ lives with the Chamont family daughters (then elderly women, having been children just after the turn of the century). While reading Graham and Kantor’s book, Guillemin says, “I was
fascinated at the idea that the Russians were able to push the subject further because they had less trepidation at dealing with infinity.”
As Graham and Kantor point out, many other prominent mathematicians have had a mystical bent, from Pythagoras to Alexander Grothendieck, an innovative French theorist of the 1960s who now lives as a
recluse in the Pyrenees. Yet Graham emphasizes that mysticism is not a precondition for mathematical insight. “To see if science and religion are opposed to each other, or help each other,” Graham
says, “you have to select a specific episode and study it.”
Egorov’s exile, Florensky’s fate
Naming Infinity also starkly recounts the sorry fates of Egorov and Florensky, as publicly religious figures in atheist, postrevolutionary Russia. Egorov was exiled to the provinces and starved to
death in 1931. Florensky, a flamboyant figure who wore priestly garb in public, was executed in 1937. Luzin was spared after the physicist Peter Kapitsa made a direct appeal to Stalin on his behalf.
These men were not just endangered by their religiosity, however, but also by their style of math. The intangible nature of infinity contradicted the Marxist notion that intellectual activity should
be grounded in material matters, a charge made by one of their accusers: Ernst Kol’man, a mathematician and seemingly sinister figure called “the dark angel” for his role as an informant on other
Soviet intellectuals.
Graham, who knew both Kapitsa and Kol’man, says Kol’man “really believed his Marxism, and believed it was wrong to think mathematics has no relationship to the material world. He thought this was a
threat to the Soviet order.” Even so, Kol’man, who died in 1979, left behind writings acknowledging he had judged such matters “extremely incorrectly.”
The Russian trio was thus part of a singular saga, belonging to a now-vanished historical era. Naming Infinity rescues that story for readers who never had the chance to hear it directly from the
owners of the Hotel Parisiana.
Provided by Massachusetts Institute of Technology
Dec 18, 2009
Although the size (i.e. cardinality) of the set of real numbers is indeed bigger than that of the integers, it is not because "...there are infinitely many in between each integer." The set of
rational numbers is a counterexample to this argument; there are infinitely many rationals between each integer but the set of rationals is the same size as the set of integers.
Jan 27, 2010
I would compare the part about infinity coming in different sizes to time and its different speed of flow. The only real number, quite paradoxically, is zero. All the rest of the numbers are there to
represent zero's flow through time.
-1+1,-2+2,-3+3,-4+4,-5+5,-infinity+infinity = 0
Numbers are just time and space, simplified, condensed into one.
Another interesting study is the story 1 to 5 and 7 & 8. The numbers 6 and 9 don't count as they essentially represent recyclement/emerging from an egg/zero. Thus transition of 1 to 5 into 7 (which
lasts for a mini eternity represented by 8) then comes transition again, represented by 9. 7 after 8 breaks down to 1 (but this 1 is a new level one, represented by the presence of a 0. 1 becomes one
with 0 (the universe), thus comes ten. And the universe begins again in the image of 1. 11,12,13,14 etc)
Jan 27, 2010
P.S: Not only do numbers represent zero's passage through time, but they also represent energy/zero's subtle shifts in its tone/characteristic/properties during that time, where each number is a
Jan 31, 2010
P.P.S: I forgot to mention that the opposite (regarding numbers and zero) is also true. After all, that's why it's called a zero. Both theories, that everything is zero and that zero is nothing/
unachievable, are true. Kinda like your right hand is left in the mirror.
Blaise Pascal
Blaise Pascal was born on June 19, 1623 at Clermont in Auvergne. His father, Etienne, was a successful lawyer. Three years after Pascal's birth, Etienne lost his wife Antoinette and found himself
responsible for three children: Gilberte, Jacqueline, and Blaise, who was a sickly child. It seems that growing up without his mother permanently affected his health and
development. His father insisted on educating him at home, because he did not want his son to be mistreated in school. Etienne was an accomplished mathematician and lawyer,
and probably thought that he could teach his son as well as a schoolteacher. From 1631 until 1640, when Pascal's family moved to Rouen, he pursued a course of education under his father's supervision
which gave him the opportunity to master Latin, Greek, mathematics and science. It was in Rouen where Blaise soon
began the career for which his education had prepared him. Gilberte, Pascal's sister, tells a story of how her brother discovered Pythagoras' theorem by do-it-yourself methods when he was about
twelve. In 1640 he published a little treatise on projective geometry. In 1642 he invented a calculating machine called the machine arithmetique, which is on display in Clermont-Ferrand, where
Pascal was born. By 1644 a Rouen craftsman under his supervision had actually built the first of a small number of machines, all of which worked. Today's computer is designed on the same essential
lines as the mechanical calculator. By 1646 he had begun to work on the problem of the vacuum, which was to win him public renown; from this work grew Pascal's law, or Pascal's principle, in fluid
mechanics: the principle that in a fluid at rest in a closed container, a pressure change in one part of the fluid is transmitted without loss to every portion of the fluid and to the walls of the
container. Of his mathematical works, the most solid is to do with what is now called probability theory. Pascal begins with a simple example. Two players have agreed to play dice until one wins
three throws. Each has put in 32 pistoles, a substantial sum; one has won two throws, his opponent one.
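For this example the fair division works out in one line: the trailing player can take the stakes only by winning both of the next two throws, an event of probability
\[
\tfrac12 \times \tfrac12 = \tfrac14,
\]
so the 64 pistoles should be split in the ratio 3/4 to 1/4, that is, 48 pistoles to the leader and 16 to his opponent.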
At the end of June 1662, Pascal was so ill that he moved in with his sister Gilberte, now installed in Paris, and there, in the heart of the Latin Quarter, at the age of thirty-nine, he died in
physical distress but spiritual serenity on August 19, 1662. He was buried in the parish church of Saint-Etienne-du-Mont, where his memorial can still be seen, next to what is now the Pantheon.
Reconstructing recombination network from sequence data: the small parsimony problem
- ISBRA 2007. LNCS (LNBI), 2007
"... Phylogenies play a major role in representing the interrelationships among biological entities. Many methods for reconstructing and studying such phylogenies have been proposed, almost all of
which assume that the underlying history of a given set of species can be represented by a binary tree. Alt ..."
Cited by 6 (3 self)
Add to MetaCart
Phylogenies play a major role in representing the interrelationships among biological entities. Many methods for reconstructing and studying such phylogenies have been proposed, almost all of which
assume that the underlying history of a given set of species can be represented by a binary tree. Although many biological processes can be effectively modeled and summarized in this fashion, others
cannot: recombination, hybrid speciation, and horizontal gene transfer result in networks, rather than trees, of relationships. In a series of papers, we have extended the maximum parsimony (MP)
criterion to phylogenetic networks, demonstrated its appropriateness, and established the intractability of the problem of scoring the parsimony of a phylogenetic network. In this work we show the
hardness of approximation for the general case of the problem, devise a very fast (linear-time) heuristic algorithm for it, and implement it on simulated as well as biological data.
- IEEE/ACM Trans. Comput. Biology Bioinform., 2009
"... Phylogenies—the evolutionary histories of groups of organisms—play a major role in representing the interrelationships among biological entities. Many methods for reconstructing and studying
such phylogenies have been proposed, almost all of which assume that the underlying history of a given set o ..."
Cited by 5 (2 self)
Add to MetaCart
Phylogenies—the evolutionary histories of groups of organisms—play a major role in representing the interrelationships among biological entities. Many methods for reconstructing and studying such
phylogenies have been proposed, almost all of which assume that the underlying history of a given set of species can be represented by a binary tree. Although many biological processes can be
effectively modeled and summarized in this fashion, others cannot: recombination, hybrid speciation, and horizontal gene transfer result in networks of relationships rather than trees of
relationships. In previous works, we formulated a maximum parsimony (MP) criterion for reconstructing and evaluating phylogenetic networks, and demonstrated its quality on biological as well as
synthetic data sets. In this paper, we provide further theoretical results as well as a very fast heuristic algorithm for the MP criterion of phylogenetic networks. In particular, we provide a novel
combinatorial definition of phylogenetic networks in terms of “forbidden cycles, ” and provide detailed hardness and hardness of approximation proofs for the “small ” MP problem. We demonstrate the
performance of our heuristic in terms of time and accuracy on both biological and synthetic data sets. Finally, we explain the difference between our model and a similar one formulated by Nguyen et
al., and describe the implications of this difference on the hardness and approximation results.
"... Abstract. Answering a problem posed by Nakhleh, we prove that counting the number of phylogenetic trees inferred by a (binary) phylogenetic network is #P-complete. An immediate consequence of
this result is that counting the number of phylogenetic trees commonly inferred by two (binary) phylogenetic ..."
Add to MetaCart
Abstract. Answering a problem posed by Nakhleh, we prove that counting the number of phylogenetic trees inferred by a (binary) phylogenetic network is #P-complete. An immediate consequence of this
result is that counting the number of phylogenetic trees commonly inferred by two (binary) phylogenetic networks is also #P-complete. Key words. Phylogenetic trees, phylogenetic networks, #P-complete
Maxwells equations
From Exampleproblems
Maxwell's equations are the set of four equations, attributed to James Clerk Maxwell (written by Oliver Heaviside), that describe the behavior of both the electric and magnetic fields, as well as
their interactions with matter.
Maxwell's four equations express, respectively, how electric charges produce electric fields (Gauss' law), the experimental absence of magnetic charges, how currents produce magnetic fields (Ampere's
law), and how changing magnetic fields produce electric fields (Faraday's law of induction). Maxwell, in 1864, was the first to put all four equations together and to notice that a correction was
required to Ampere's law: changing electric fields act like currents, likewise producing magnetic fields. (This additional term is called the displacement current.)
Furthermore, Maxwell showed that waves of oscillating electric and magnetic fields travel through empty space at a speed that could be predicted from simple electrical experiments—using the data
available at the time, Maxwell obtained a velocity of 310,740,000 m/s. Maxwell (1865) wrote:
This velocity is so nearly that of light, that it seems we have strong reason to conclude that light itself (including radiant heat, and other radiations if any) is an electromagnetic disturbance
in the form of waves propagated through the electromagnetic field according to electromagnetic laws.
Maxwell was correct in this conjecture, though he did not live to see its vindication by Heinrich Hertz in 1888. Maxwell's quantitative explanation of light as an electromagnetic wave is considered
one of the great triumphs of 19th-century physics. (Actually, Michael Faraday had postulated a similar picture of light in 1846, but had not been able to give a quantitative description or predict
the velocity.) Moreover, it laid the foundation for many future developments in physics, such as special relativity and its unification of electric and magnetic fields as a single tensor quantity,
and Kaluza and Klein's unification of electromagnetism with gravity and general relativity.
Historical developments of Maxwell's equations and relativity
Maxwell's 1865 formulation was in terms of 20 equations in 20 variables, which included several equations now considered to be auxiliary to what are now called "Maxwell's equations" — the corrected
Ampere's law (three component equations), Gauss' law for charge (one equation), the relationship between total and displacement current densities (three component equations), the relationship between
magnetic field and the vector potential (three component equations, which imply the absence of magnetic charge), the relationship between electric field and the scalar and vector potentials (three
component equations, which imply Faraday's law), the relationship between the electric and displacement fields (three component equations), Ohm's law relating current density and electric field
(three component equations), and the continuity equation relating current density and charge density (one equation).
The modern mathematical formulation of Maxwell's equations is due to Oliver Heaviside and Willard Gibbs, who in 1884 reformulated Maxwell's original system of equations to a far simpler
representation using vector calculus. (In 1873 Maxwell also published a quaternion-based notation that ultimately proved unpopular.) The change to the vector notation produced a symmetric
mathematical representation that reinforced the perception of physical symmetries between the various fields. This highly symmetrical formulation would directly inspire later developments in
fundamental physics.
In the late 19th century, because of the appearance of the velocity
$c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}$
in the equations, Maxwell's equations were only thought to express electromagnetism in the rest frame of the luminiferous aether (the postulated medium for light, whose interpretation was
considerably debated). When the Michelson-Morley experiment, conducted by Edward Morley and Albert Abraham Michelson, produced a null result for the change of the velocity of light due to the Earth's
hypothesized motion through the aether, however, alternative explanations were sought by Lorentz and others. This culminated in Einstein's theory of special relativity, which postulated the absence
of any absolute rest frame (or aether) and the invariance of Maxwell's equations in all frames of reference.
The electromagnetic field equations have an intimate link with special relativity: the magnetic field equations can be derived from consideration of the transformation of the electric field equations
under relativistic transformations at low velocities. (In relativity, the equations are written in an even more compact, "manifestly covariant" form, in terms of the rank-2 antisymmetric
field-strength 4-tensor that unifies the electric and magnetic fields into a single object.)
Kaluza and Klein showed in the 1920s that Maxwell's equations can be derived by extending general relativity into five dimensions. This strategy of using higher dimensions to unify different forces
is an active area of research in particle physics.
Summary of the equations
All variables that are in bold represent vector quantities.
General case
│ Name │ Differential form │ Integral form │
│ Gauss' law: │ $\nabla \cdot \mathbf{D} = \rho$ │ $\oint_S \mathbf{D} \cdot d\mathbf{A} = \int_V \rho \, dV$ │
│ Gauss' law for magnetism (absence of magnetic monopoles): │ $\nabla \cdot \mathbf{B} = 0$ │ $\oint_S \mathbf{B} \cdot d\mathbf{A} = 0$ │
│ Faraday's law of induction: │ $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$ │ $\oint_C \mathbf{E} \cdot d\mathbf{l} = -{d \over dt} \int_S \mathbf{B} \cdot d\mathbf{A}$ │
│ Ampère's law (with Maxwell's extension): │ $\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}$ │ $\oint_C \mathbf{H} \cdot d\mathbf{l} = \int_S \mathbf{J} \cdot d\mathbf{A} + {d \over dt} \int_S \mathbf{D} \cdot d\mathbf{A}$ │
where the following table provides the meaning of each symbol and the SI unit of measure:
│ Symbol │ Meaning │ SI Unit of Measure │
│ $\mathbf{E}$ │ electric field │ volt per metre │
│ $\mathbf{H}$ │ magnetic field strength │ ampere per metre │
│ $\mathbf{D}$ │ electric displacement field │ coulomb per square metre │
│ $\mathbf{B}$ │ magnetic flux density, also called the magnetic induction │ tesla, or equivalently, weber per square metre │
│ $\rho$ │ free electric charge density, not including dipole charges bound in a material │ coulomb per cubic metre │
│ $\mathbf{J}$ │ free current density, not including polarization or magnetization currents bound in a material │ ampere per square metre │
│ $d\mathbf{A}$ │ differential vector element of surface area A, with infinitesimally small magnitude and direction normal to surface S │ square metre │
│ $dV$ │ differential element of volume V enclosed by surface S │ cubic metre │
│ $d\mathbf{l}$ │ differential vector element of path length tangential to contour C enclosing surface S │ metre │
$\nabla \cdot$ is the divergence operator (SI unit: 1 per metre),
$\nabla \times$ is the curl operator (SI unit: 1 per metre).
Although SI units are given here for the various symbols, Maxwell's equations will hold unchanged in many different unit systems (and with only minor modifications in all others). The most commonly
used systems of units are SI units, used for engineering, electronics and most practical physics experiments, and Planck units (also known as "natural units"), used in theoretical physics, quantum
physics and cosmology. An older system of units, the cgs system, is sometimes also used.
The second equation is equivalent to the statement that magnetic monopoles do not exist. The force exerted upon a charged particle by the electric field and magnetic field is given by the Lorentz
force equation:
$\mathbf{F} = q (\mathbf{E} + \mathbf{v} \times \mathbf{B}),$
where $q \$ is the charge on the particle and $\mathbf{v} \$ is the particle velocity. This is slightly different when expressed in the cgs system of units below.
Maxwell's equations are generally applied to macroscopic averages of the fields, which vary wildly on a microscopic scale in the vicinity of individual atoms (where they undergo quantum mechanical
effects as well). It is only in this averaged sense that one can define quantities such as the permittivity and permeability of a material, below (the microscopic Maxwell's equations, ignoring
quantum effects, are simply those of a vacuum — but one must include all atomic charges and so on, which is normally an intractable problem).
In linear materials
In linear materials, the polarization density P (in coulombs per square meter) and magnetization density M (in amperes per meter) are given by:
$\mathbf{P} = \chi_e \varepsilon_0 \mathbf{E}$
$\mathbf{M} = \chi_m \mathbf{H}$
and the D and B fields are related to E and H by:
$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = (1 + \chi_e) \varepsilon_0 \mathbf{E} = \varepsilon \mathbf{E}$
$\mathbf{B} = \mu_0 ( \mathbf{H} + \mathbf{M} ) = (1 + \chi_m) \mu_0 \mathbf{H} = \mu \mathbf{H}$
χ[e] is the electrical susceptibility of the material,
χ[m] is the magnetic susceptibility of the material,
ε is the electrical permittivity of the material, and
μ is the magnetic permeability of the material
(This can actually be extended to handle nonlinear materials as well, by making ε and μ depend upon the field strength; see e.g. the Kerr and Pockels effects.)
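As a small worked illustration of these constitutive relations in Python (the susceptibilities and field magnitudes below are invented for the example, not drawn from the text):

    epsilon_0 = 8.854e-12            # permittivity of free space, F/m
    mu_0 = 4e-7 * 3.141592653589793  # permeability of free space, H/m

    chi_e, chi_m = 2.0, 0.5          # illustrative susceptibilities
    E, H = 100.0, 10.0               # field magnitudes, V/m and A/m

    P = chi_e * epsilon_0 * E        # polarization density, C/m^2
    M = chi_m * H                    # magnetization, A/m
    D = epsilon_0 * E + P            # equals (1 + chi_e) * epsilon_0 * E
    B = mu_0 * (H + M)               # equals (1 + chi_m) * mu_0 * H
    print(D, B)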
In non-dispersive, isotropic media, ε and μ are time-independent scalars, and Maxwell's equations reduce to
$\nabla \cdot \varepsilon \mathbf{E} = \rho$
$\nabla \cdot \mu \mathbf{H} = 0$
$\nabla \times \mathbf{E} = - \mu \frac{\partial \mathbf{H}}{\partial t}$
$\nabla \times \mathbf{H} = \mathbf{J} + \varepsilon \frac{\partial \mathbf{E}}{\partial t}$
In a uniform (homogeneous) medium, ε and μ are constants independent of position, and can thus be furthermore interchanged with the spatial derivatives.
More generally, ε and μ can be rank-2 tensors (3×3 matrices) describing birefringent (anisotropic) materials. Also, although for many purposes the time/frequency-dependence of these constants can be
neglected, every real material exhibits some material dispersion by which ε and/or μ depend upon frequency (and causality constrains this dependence to obey the Kramers-Kronig relations).
In vacuum, without charges or currents
The vacuum is a linear, homogeneous, isotropic, dispersionless medium, and the proportionality constants in the vacuum are denoted by ε[0] and μ[0] (neglecting very slight nonlinearities due to
quantum effects).
$\mathbf{D} = \varepsilon_0 \mathbf{E}$
$\mathbf{B} = \mu_0 \mathbf{H}$
Since there is no current or electric charge present in the vacuum, we obtain the Maxwell's equations in free space:
$\nabla \cdot \mathbf{E} = 0$
$\nabla \cdot \mathbf{H} = 0$
$\nabla \times \mathbf{E} = - \mu_0 \frac{\partial\mathbf{H}}{\partial t}$
$\nabla \times \mathbf{H} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$
These equations have a simple solution in terms of travelling sinusoidal plane waves, with the electric and magnetic field directions orthogonal to one another and the direction of travel, and with
the two fields in phase, travelling at the speed
$c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}$
Maxwell discovered that this quantity c is simply the speed of light in vacuum, and thus that light is a form of electromagnetic radiation. The currently accepted values for the speed of light, the
permittivity, and the permeability are summarized in the following table:
│ Symbol │ Name │ Numerical Value │ SI Unit of Measure │ Type │
│ $c \$ │ Speed of light │ $2.998 \times 10^{8}$ │ meters per second │ defined │
│ $\ \varepsilon_0$ │ Permittivity │ $8.854 \times 10^{-12}$ │ farads per meter │ derived │
│ $\ \mu_0 \$ │ Permeability │ $4 \pi \times 10^{-7}$ │ henries per meter │ defined │
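As a quick numerical sanity check of $c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}$ using the values in the table, here is a minimal Python sketch (the variable names are ours):

    import math

    mu_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
    epsilon_0 = 8.854e-12       # permittivity of free space, F/m

    # the speed Maxwell's equations predict for electromagnetic waves in vacuum
    c = 1 / math.sqrt(mu_0 * epsilon_0)
    print(c)                    # approximately 2.998e8 m/s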
Charge density and the electric field
$\nabla \cdot \mathbf{D} = \rho$,
where ρ is the free electric charge density (in units of C/m^3), not including dipole charges bound in a material, and $\mathbf{D}$ is the electric displacement field (in units of C/m^2). This
equation corresponds to Coulomb's law for stationary charges in vacuum.
The equivalent integral form (by the divergence theorem), also known as Gauss' law, is:
$\oint_A \mathbf{D} \cdot d\mathbf{A} = Q_\mbox{enclosed}$
where $d\mathbf{A}$ is the area of a differential square on the closed surface A with an outward facing surface normal defining its direction, and Q[enclosed] is the free charge enclosed by the surface.
In a linear material, $\mathbf{D}$ is directly related to the electric field $\mathbf{E}$ via a material-dependent constant called the permittivity, ε:
$\mathbf{D} = \varepsilon \mathbf{E}$.
Any material can be treated as linear, as long as the electric field is not extremely strong. The permittivity of free space is referred to as ε[0], and appears in:
$\nabla \cdot \mathbf{E} = \frac{\rho_t}{\varepsilon_0}$
where, again, $\mathbf{E}$ is the electric field (in units of V/m), ρ[t] is the total charge density (including bound charges), and ε[0] (approximately 8.854 pF/m) is the permittivity of free space.
ε can also be written as $\varepsilon_0 \cdot \varepsilon_r$, where ε[r] is the material's relative permittivity or its dielectric constant.
Compare Poisson's equation.
The structure of the magnetic field
$\nabla \cdot \mathbf{B} = 0$
$\mathbf{B}$ is the magnetic flux density (in units of teslas, T), also called the magnetic induction.
Equivalent integral form:
$\oint_A \mathbf{B} \cdot d\mathbf{A} = 0$
$d\mathbf{A}$ is the area of a differential square on the surface A with an outward facing surface normal defining its direction.
Like the electric field's integral form, this equation only works if the integral is done over a closed surface.
This equation is related to the magnetic field's structure because it states that given any volume element, the net magnitude of the vector components that point outward from the surface must be
equal to the net magnitude of the vector components that point inward. Structurally, this means that the magnetic field lines must be closed loops. Another way of putting it is that the field lines
cannot originate from somewhere; attempting to follow the lines backwards to their source or forward to their terminus ultimately leads back to the starting position. Hence, this is the mathematical
formulation of the assumption that there are no magnetic monopoles.
A changing magnetic flux and the electric field
$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$
Equivalent integral Form:
$\oint_{s} \mathbf{E} \cdot d\mathbf{s} = - \frac {d\Phi_{\mathbf{B}}} {dt}$ where $\Phi_{\mathbf{B}} = \int_{A} \mathbf{B} \cdot d\mathbf{A}$
Φ[B] is the magnetic flux through the area A described by the second equation
E is the electric field generated by the magnetic flux
s is a closed path in which current is induced, such as a wire.
The electromotive force (sometimes denoted $\mathcal{E}$, not to be confused with the permittivity above) is equal to the value of this integral.
This law corresponds to the Faraday's law of electromagnetic induction.
Some textbooks show the right-hand side of the integral form with an N (representing the number of coils of wire that are around the edge of A) in front of the flux derivative. The N can be taken
care of in calculating A (multiple wire coils mean multiple surfaces for the flux to pass through), and it is an engineering detail, so it has been omitted here.
The negative sign is necessary to maintain conservation of energy. It is so important that it even has its own name, Lenz's law.
This equation relates the electric and magnetic fields, but it also has many practical applications. It describes how electric motors and electric generators work. Specifically,
it demonstrates that a voltage can be generated by varying the magnetic flux passing through a given area over time, such as by uniformly rotating a loop of wire through a fixed magnetic field. In a
motor or generator, the fixed excitation is provided by the field circuit and the varying voltage is measured across the armature circuit. In some types of motors/generators, the field circuit is
mounted on the rotor and the armature circuit is mounted on the stator, but other types of motors/generators employ the reverse configuration.
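As a worked example of the induction law (all numbers invented for illustration), consider a flat coil of N turns rotating at angular frequency ω in a uniform field, so that the flux per turn is Φ[B] = B A cos(ωt) and the peak EMF is N B A ω:

    import math

    N = 100                    # turns
    B = 0.5                    # field magnitude, T
    A = 0.01                   # loop area, m^2
    omega = 2 * math.pi * 50   # rotation at 50 Hz

    def emf(t):
        # EMF = -N * dPhi/dt with Phi = B * A * cos(omega * t)
        return N * B * A * omega * math.sin(omega * t)

    # peak over one rotation period; should approach N*B*A*omega ~ 157 V
    print(max(emf(k / 1e4) for k in range(200)))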
Maxwell's equations apply to a right-handed coordinate system. To apply them unmodified to a left-handed system would mean a reversal of polarity of magnetic fields (not inconsistent, but confusingly
against convention).
The source of the magnetic field
$\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}$
where H is the magnetic field strength (in units of A/m), related to the magnetic flux density B by a constant called the permeability, μ (B = μH), and J is the current density, defined by $\mathbf{J} = \rho_q \mathbf{v}$,
where $\mathbf{v}$ is a vector field called the drift velocity that describes the velocities of the charge carriers, whose density is given by the scalar function $\rho_q$.
In free space, the permeability μ is the permeability of free space, μ[0], which is defined to be exactly 4π×10^-7 Wb/(A·m). Also, the permittivity becomes the permittivity of free space ε[0]. Thus, in
free space, the equation becomes:
$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$
Equivalent integral form:
$\oint_s \mathbf{B} \cdot d\mathbf{s} = \mu_0 I_\mbox{encircled} + \mu_0\varepsilon_0 \int_A \frac{\partial \mathbf{E}}{\partial t} \cdot d \mathbf{A}$
s is the edge of the open surface A (any surface with the curve s as its edge will do), and I[encircled] is the current encircled by the curve s (the current through any surface is defined by the
equation: I[through A] = ∫[A]J·dA).
If the electric flux density does not vary rapidly, the second term on the right hand side (the displacement flux) is negligible, and the equation reduces to Ampere's law.
Maxwell's equations in CGS units
The above equations are given in the International System of Units, or SI for short. In a related unit system, called cgs (short for centimeter-gram-second), the equations take the following form:
$\nabla \cdot \mathbf{E} = 4\pi\rho$
$\nabla \cdot \mathbf{B} = 0$
$\nabla \times \mathbf{E} = -\frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}$
$\nabla \times \mathbf{B} = \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t} + \frac{4\pi}{c} \mathbf{J}$
Where c is the speed of light in a vacuum. For the electromagnetic field in a vacuum, the equations become:
$\nabla \cdot \mathbf{E} = 0$
$\nabla \cdot \mathbf{B} = 0$
$\nabla \times \mathbf{E} = -\frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}$
$\nabla \times \mathbf{B} = \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t}$
The force exerted upon a charged particle by the electric field and magnetic field is given by the Lorentz force equation:
$\mathbf{F} = q (\mathbf{E} + \frac{\mathbf{v}}{c} \times \mathbf{B}),$
where $q \$ is the charge on the particle and $\mathbf{v} \$ is the particle velocity. This is slightly different from the SI-unit expression above. For example, here the magnetic field $\mathbf{B} \
$ has the same units as the electric field $\mathbf{E} \$.
Formulation of Maxwell's equations in special relativity
Mathematical note: In this section the abstract index notation will be used.
In special relativity, in order to more clearly express the fact that Maxwell's equations (in vacuum) take the same form in any inertial coordinate system, the vacuum Maxwell's equations are written
in terms of four-vectors and tensors in the "manifestly covariant" form:
$J^b = \partial_a F^{ab}$,
$0 = \partial_c F_{ab} + \partial_b F_{ca} + \partial_a F_{bc}$
the last of which is equivalent to:
$0 = \epsilon_{dabc}\partial^a F^{bc}$
where $J^a$ is the 4-current, $F^{ab}$ is the field strength tensor (written as a 4 × 4 matrix), $\epsilon_{abcd}$ is the Levi-Civita symbol, and $\partial_a = (\partial/\partial ct, \nabla)$
is the 4-gradient (so that $\partial_a \partial^a$ is the d'Alembertian operator). (The a in the first equation is implicitly summed over, according to Einstein notation.) The first tensor equation
expresses the two inhomogeneous Maxwell's equations: Gauss' law and Ampere's law with Maxwell's correction. The second equation expresses the other two, homogenous equations: Faraday's law of
induction and the absence of magnetic monopoles.
More explicitly, $J^a = \, (c \rho, \vec J)$ (as a contravariant vector), in terms of the charge density ρ and the current density $\vec J$. The 4-current satisfies the continuity equation
$J^a{}_{,a} \, = 0$
In terms of the 4-potential (as a contravariant vector) $A^{a} = \left(\frac{\phi}{c}, \vec A \right)$, where φ is the electric potential and $\vec A$ is the magnetic vector potential in the Lorenz gauge $\left ( \partial_a A^a = 0 \right )$, F can be expressed as:
$F^{ab} = \partial^b A^a - \partial^a A^b$
which leads to the 4 × 4 matrix rank-2 tensor:
$F^{ab} = \left( \begin{matrix} 0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\ \frac{E_x}{c} & 0 & -B_z & B_y \\ \frac{E_y}{c} & B_z & 0 & -B_x \\ \frac{E_z}{c} & -B_y & B_x & 0 \end{matrix} \right) .$
The fact that both electric and magnetic fields are combined into a single tensor expresses the fact that, according to relativity, both of these are different aspects of the same thing—by changing
frames of reference, what seemed to be an electric field in one frame can appear as a magnetic field in another frame, and vice versa.
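To make this concrete, the Python sketch below (an illustration of ours, not part of the original article) assembles $F^{ab}$ from arbitrary field components and checks its antisymmetry together with the two combinations $\mathbf{E} \cdot \mathbf{B}$ and $E^2 - c^2 B^2$, which are the same in every inertial frame even though $\mathbf{E}$ and $\mathbf{B}$ separately are not:

    import numpy as np

    c = 2.998e8                      # speed of light, m/s
    E = np.array([1.0, 2.0, 3.0])    # arbitrary electric field, V/m
    B = np.array([0.5, -0.2, 0.1])   # arbitrary magnetic field, T

    # the field-strength tensor F^{ab} as written above
    F = np.array([
        [0.0,     -E[0]/c, -E[1]/c, -E[2]/c],
        [E[0]/c,   0.0,    -B[2],    B[1]],
        [E[1]/c,   B[2],    0.0,    -B[0]],
        [E[2]/c,  -B[1],    B[0],    0.0],
    ])

    assert np.allclose(F, -F.T)      # antisymmetry of F^{ab}

    # two Lorentz-invariant combinations of the fields
    print("E.B            =", E @ B)
    print("E^2 - c^2 B^2  =", E @ E - c**2 * (B @ B))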
Using the tensor form of Maxwell's equations, the first equation implies
$\Box F^{ab} = 0$ (See Electromagnetic four-potential for the relationship between the d'Alembertian of the four-potential and the four-current, expressed in terms of the older vector operator notation.)
Different authors sometimes employ different sign conventions for the above tensors and 4-vectors (which does not affect the physical interpretation).
$\, F^{ab}$ and $\, F_{ab}$ are not the same: they are related by the Minkowski metric tensor η: $F_{ab} =\, \eta_{ac} \eta_{bd} F^{cd}$. This introduces sign changes in some of F's components; more
complex metric dualities are encountered in general relativity.
Maxwell's equations in terms of differential forms
In a vacuum, where ε and μ are constant everywhere, Maxwell's equations simplify considerably once the language of differential geometry and differential forms is used. The electric and magnetic
fields are now jointly described by a 2-form F in a 4-dimensional spacetime manifold. Maxwell's equations then reduce to the Bianchi identity
$d\bold{F} = 0$
where d denotes the exterior derivative - a differential operator acting on forms - and the source equation
$d{*}\bold{F} = {*}\bold{J}$
where * is the Hodge star (dual) operator. Here, the fields are represented in natural units where ε[0] is 1. Here, J is a 1-form called the "electric current" or "current form" satisfying the
continuity equation
$d{*}\bold{J} = 0$
In a linear, macroscopic theory, the influence of matter on the electromagnetic field is described through a linear transformation in the space of 2-forms, Λ^2. We call this the constitutive
$C:\Lambda^2 \ni \bold{F}\mapsto \bold{G}=C\bold{F}\in\Lambda^2$
The rôle of this transformation is comparable to the Hodge duality transformation and we write the Maxwell equations in the presence of matter as:
$d*\bold{G} = *\bold{J}$
$d\bold{F} = 0$
When the fields are expressed as linear combinations (of exterior products) of basis forms $\bold{\theta}^p$,
$\bold{F} = F_{pq}\bold{\theta}^p\wedge\bold{\theta}^q$
the constitutive relation takes the form
$G_{pq} = C_{pq}^{mn}F_{mn}$
where the field coefficient functions are antisymmetric in the indices and the constitutive coefficients are antisymmetric in the corresponding pairs.
This shows that the expression of Maxwell's equations in terms of differential forms leads to a further notational simplification. Whereas Maxwell's equations were once eight scalar equations, they
could be written as two tensor equations, from which the propagation of electromagnetic disturbances and the continuity equation could be derived with a little effort. Using the differential forms
notation, however, leads to an even simpler derivation of these results. The price to pay for this simplification is that one needs knowledge of more technical mathematics.
Classical electrodynamics as a line bundle
An elegant and intuitive way to formulate Maxwell's equations is to use line bundles or principal bundles with fibre U(1). The connection on the line bundle is d+A, with A the four-vector composed of
the electric potential and the magnetic vector potential. The curvature of the connection F=dA is the field strength. Some feel that this formulation allows a more natural description of the
Aharonov-Bohm effect, namely in terms of the holonomy of a curve on a line bundle. (See Michael Murray, Line Bundles, 2002 (PDF web link) for a simple mathematical review of this formulation. See
also R. Bott, On some recent interactions between mathematics and physics, Canadian Mathematical Bulletin, 28 (1985), no. 2, pp. 129-164.)
See also
Journal articles
• James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459-512 (1865). (This article accompanied a December 8, 1864
presentation by Maxwell to the Royal Society.)
University level textbooks
• Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.), Prentice Hall. ISBN 013805326X.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics (5th ed.), W. H. Freeman. ISBN 0716708108.
• Edward M. Purcell, Electricity and Magnetism (McGraw-Hill, New York, 1985).
• Banesh Hoffman, Relativity and Its Roots (Freeman, New York, 1983).
• Charles F. Stevens, The Six Core Theories of Modern Physics, (MIT Press, 1995) ISBN 0-262-69188-4.
For what value of k.. no solution / unique solution / infinitely many solutions
September 23rd 2011, 02:19 AM
For what value of k.. no solution / unique solution / infinitely many solutions
The question is really confusing me; I have no clue what it's asking or how to do it. What I do know:
a) No solution occurs when the bottom row is all zeroes except for the constant. Like (0 0 0 4) because 0 = 4 is impossible.
b) A unique solution occurs when the leading 1s are in a perfect diagonal like:
(1 0 0 0)
(0 1 0 0)
(0 0 1 0)
c) And infinitely many solutions for anything else.
How would I go about doing this? I'm completely confused. (Headbang)
September 23rd 2011, 03:39 AM
Re: For what value of k.. no solution / unique solution / infinitely many solutions
Whoa there! Please do not define these concepts in terms of the mechanical outcomes. Concentrate on the concepts of dependence vs independence and consistent vs inconsistent.
Have you met the Determinant? It will help, here.
An obvious observation is k = 1. What does that do to the three equations?
k = -2 is a little less obvious, but equally interesting. | {"url":"http://mathhelpforum.com/advanced-algebra/188619-what-value-k-no-solution-unique-solution-infinitely-many-solutions-print.html","timestamp":"2014-04-17T16:29:45Z","content_type":null,"content_length":"4675","record_id":"<urn:uuid:8107d4e2-8723-4a42-810b-86487eea7bab>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
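Since the original system was attached as an image that has not survived here, the sketch below works through a hypothetical stand-in in Python/SymPy — the classic symmetric system kx + y + z = 1, x + ky + z = 1, x + y + kz = 1, which happens to match the k = 1 and k = -2 hints. The determinant sorts the k values into the three cases:

    import sympy as sp

    k = sp.symbols('k')
    A = sp.Matrix([[k, 1, 1],
                   [1, k, 1],
                   [1, 1, k]])       # hypothetical coefficient matrix
    b = sp.Matrix([1, 1, 1])         # hypothetical right-hand side

    print(sp.factor(A.det()))        # (k - 1)**2 * (k + 2)

    # det != 0 gives a unique solution; at k = 1 and k = -2 row-reduce:
    for val in (1, -2):
        print(val, A.subs(k, val).row_join(b).rref()[0])
    # k = 1:  one independent equation -> infinitely many solutions
    # k = -2: a row (0 0 0 | 1) appears -> no solution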
LINGUIST List 21.5213: General Linguistics: von Mengden (2010)
LINGUIST List 21.5213
Wed Dec 22 2010
Review: General Linguistics: von Mengden (2010)
Editor for this issue: Monica Macaulay
This LINGUIST List issue is a review of a book published by one of our supporting publishers, commissioned by our book review editorial staff. We welcome discussion of this book review on the list, and particularly invite the author(s) or editor(s) of this book to join in.
1. Stephen Chrisomalis, Cardinal Numerals: Old English from a Cross-Linguistic Perspective
Message 1: Cardinal Numerals: Old English from a Cross-Linguistic Perspective
Date: 22-Dec-2010
From: Stephen Chrisomalis
Subject: Cardinal Numerals: Old English from a Cross-Linguistic Perspective
Announced at http://linguistlist.org/issues/21/21-1110.html
AUTHOR: von Mengden, Ferdinand
TITLE: Cardinal Numerals
SUBTITLE: Old English from a Cross-Linguistic Perspective
SERIES: Topics in English Linguistics [TiEL] 67
PUBLISHER: De Gruyter Mouton
YEAR: 2010
Stephen Chrisomalis, Department of Anthropology, Wayne State University
This monograph is a systematic analysis of Old English numerals that goes far
beyond descriptive or historical aims to present a theory of the morphosyntax of
numerals, including both synchronic and diachronic perspectives, and to
contribute to the growing linguistic literature on number concepts and numerical
The volume is organized into five chapters and numbered subsections throughout
and for the most part is organized in an exemplary fashion. Chapters II and
III, where the evidence for the structure of the Old English numerals is
presented, will be of greatest interest to specialists in numerals. Chapter IV
will be of greatest interest to specialists in Old English syntax. Chapter V is
a broader contribution to the theory of word classes and should be of interest
to all linguists.
The author begins with an extensive theoretical discussion of number concepts
and numerals, working along the lines suggested by Wiese (2003). Chapter I
distinguishes numerals (i.e., numerically specific quantifiers) from other
quantifiers, and distinguishes systemic cardinal numerals from non-systemic
expressions like 'four score and seven'. As the book's title suggests, cardinal
numerals are given theoretical priority over ordinal numerals, and nominal forms
like 'Track 29' or '867-5309' are largely ignored. Cardinal numerals exist in
an ordered sequence of well-distinguished elements of expandable but
non-infinite scope. Here the author builds upon the important work of Greenberg
(1978) and Hurford (1975, 1987), without presenting much information about Old
English numerals themselves.
Chapter II introduces the reader to the Old English numerals as a system of
simple forms joined through a set of morphosyntactic principles. It is
abundantly data-rich and relies on the full corpus of Old English to show how
apparent allomorphs (like HUND and HUNDTEONTIG for '100') in fact are almost
completely in complementary distribution, with the former almost always being
used for multiplicands, the latter almost never. This analysis allows the
author to maintain the principle that each numeral has only one systemic
representation, but at the cost of making a sometimes arbitrary distinction
between systemic and non-systemic expressions. This links to a fascinating but
all-too-brief comparative section on the higher numerals in the ancient Germanic
languages, which demonstrates the typological variability demonstrated even
within a closely related subfamily of numeral systems.
Chapter III deals with complex numerals, a sort of hybrid category encompassing
various kinds of complexities. The first sort of complexity, common in Old
English, involves the use of multiple noun phrases to quantify expressions that
use multiple bases (e.g. 'nine hundred years and ten years' for '910 years').
The second complexity is the typological complexity of Old English itself; the
author cuts through more than a century of confusion from Grimm onward in
demonstrating conclusively that there is no 'duodecimal' (base 12) element to
Old English (or present-day English) -- that oddities like 'twelve' and
'hundendleftig' (= 11x10) can only be understood in relation to the decimal
base. The third is the set of idiosyncratic expressions ranging from the
not-uncommon use of subtractive numerals, to the overrunning of hundreds (as in
modern English 'nineteen hundred'), to the multiplicative phrases used
sporadically to express numbers higher than one million. Where a traditional
grammar might simply list the common forms of the various numeral words, here we
are presented with numerals in context and in all their variety.
Chapter IV presents a typology of syntactic constructions in which Old English
numerals are found: Attributive, Predicative, Partitive, Measure, and Mass
Quantification. In setting out the range of morphosyntactic features
demonstrated within the Old English corpus, the aim is not simply descriptive,
but rather, assuming that numerals are a word class, to analyze that class in
terms of the variability that any word class exhibits, without making
unwarranted comparisons with other classes.
In Chapter V the author argues against the prevalent view that numerals are
hybrid combinations of nouns and adjectives. While there are similarities,
these ought not to be considered as definitional of the category, but as results
of the particular ways that cardinal numerals are used. Because it is
cross-linguistically true that higher numerals behave more like nouns than lower
ones, this patterned variability justifies our understanding the cardinal
numerals as a single, independent word class. It is regarded as the result of
higher numerals being later additions to the number sequence -- rather than
being 'more nounish', they are still in the process of becoming full numerals.
They are transformed from other sorts of quantificational nouns (like
'multitude') into systemic numerals with specific values, but retain vestiges of
their non-numeral past.
This is an extremely important volume, one that deserves a readership far beyond
historical linguists interested in Germanic languages. It is not the last word
on the category status of cardinal numerals, cross-linguistic generalizations
about number words, or the linguistic aspects of numerical cognition, but it
represents an exceedingly detailed and well-conceived contribution to all these
areas. While virtually any grammar can be relied upon to present a list of
numerals, virtually none deals with the morphosyntactic complexities and
historical dimensions of this particular domain that exist for almost any
language. Minimal knowledge of Old English is required to understand and
benefit from the volume.
The specialist in numerals will be struck by the richness and depth of the
author's specific insights regarding numerical systems in general, using the Old
English evidence to great effect. Because it is one of very few monographs to
be devoted specifically to a single numeral system, and by far the lengthiest
and theoretically the most sophisticated (cf. Zide 1978, Olsson 1997, Leko
2009), there is time and space to deal with small complexities whose broader
relevance is enormous. The volume thus strikes that fine balance between
empiricism and theoretical breadth required of this sort of cross-linguistic
study rooted in a single language.
With regard to the prehistory of numerals, we are very much working from a
speculative framework, and where the author treads into this territory, of
necessity the argument is more tenuous. It may be true that for most languages,
the hands and fingers are the physical basis for the counting words, but
Hurford's ritual hypothesis (1987), of which von Mengden does not think highly,
is at the very least plausible for some languages if not for all. These issues
are not key to the argument, which is all the more striking given that they are
presented conclusively in Chapter I.
A potential limitation of the volume is that, by restricting his definition of
numerals to cardinals (by far the most common form in the Old English corpus),
the author is forced into an exceedingly narrow position, so that, ultimately,
ordinals, nominals, frequentatives, and other forms are derived from numerals
but are not numerals as a word class, but something else. But the morphosyntax
of each of these forms has its own complexities -- think of the nominal '007' or
the decimal '6.042' - that deserve attention from specialists on numerals.
Numerals may well be neither adjectives nor nouns, but omitting the clearly
numerical is not a useful way to show it. Similarly, the insistence that each
language possesses one and only one systemic set of cardinal numerals is
problematic in light of evidence such as that presented by Bender and Beller (2006).
When comparing with other sorts of numerical expressions, e.g. numerical
notations, the author is on shakier grounds. It is certainly not the case, as
the author claims, that the Inka khipus had a zero symbol, and it is equally the
case that the Babylonian sexagesimal notation and the Chinese rod-numerals did
(Chrisomalis 2010). Similarly, the author seems to suggest that in present-day
English, any number from 'ten' to 'ninety-nine' can be combined multiplicatively
with 'hundred', whereas in fact *ten hundred, *twenty hundred, … *ninety hundred
are well-formed in Old English but not in later varieties.
It is curious that von Mengden does not link the concept of numerical 'base' to
that of 'power', but rather to the patterned recurrence of sequences of
numerals. Rather than seeing '10', '100' and '1000' as powers of the same base
(10), they are conceptualized as representing a series of bases that combine
with the recurring sequence 1-9. But a system that is purely decimal, except
that numbers ending with 5 through 9 are constructed as 'five', 'five plus one'
… 'five plus four', would by this definition have a base of 5 even though powers
of 5 have no special structural role and even though 5 never serves as a
multiplicand. This definition is theoretically useful in demonstrating that Old
English does not have a duodecimal (base-12) component, but as a
cross-linguistic definition will likely prove unsatisfactory.
Because the Old English numerals are all Germanic in origin, with no obvious
loanwords, it is perhaps unsurprising that language contact and numerical
borrowing play no major role in this account. Yet on theoretical grounds the
borrowing of numerals, including the wholesale replacement of structures and
atoms for higher powers, is of considerable importance cross-linguistically.
Comparative analysis will need to demonstrate whether morphosyntactically,
numerical loanwords are similar to or different from non-loanwords.
The author has incorporated the work of virtually every major recent theorist on
numerals, and the volume is meticulously referenced. There are a few irrelevant
typos, and a few somewhat more serious errors in tables and text that create
ambiguity or confusion, but no more than might be expected in any volume of this
This monograph is a major contribution to the literature on numerals and
numerical cognition. Its value will be in its rekindling of debates long left
dormant, and its integration of Germanic historical linguistics, syntax,
semantics, and cognitive linguistics within a fascinating study of this
neglected lexical domain.
Bender, A., and S. Beller. 2006. Numeral classifiers and counting systems in
Polynesian and Micronesian languages: Common roots and cultural adaptations.
Oceanic Linguistics 45, no. 2: 380-403.
Chrisomalis, Stephen. 2010. Numerical Notation: A Comparative History. New York:
Cambridge University Press.
Greenberg, Joseph H. 1978. Generalizations about numeral systems. In Universals
of Human Language, edited by J. H. Greenberg. Stanford: Stanford University Press.
Hurford, James R. 1975. The Linguistic Theory of Numerals. Cambridge: Cambridge
University Press.
Hurford, James R. 1987. Language and Number. Oxford: Basil Blackwell.
Leko, Nedžad. 2009. The syntax of numerals in Bosnian. Lincom Europa.
Olsson, Magnus. 1997. Swedish numerals: in an international perspective. Lund
University Press.
Wiese, Heike. 2003. Numbers, Language, and the Human Mind. Cambridge: Cambridge
University Press.
Zide, Norman H. 1978. Studies in the Munda numerals. Central Institute of Indian Languages.
Stephen Chrisomalis is an assistant professor in the Department of Anthropology and the Linguistics Program at Wayne State University. His research interests include numerals, linguistic
anthropology, and writing systems / literacy.
From Fall 2010 Workshop #1 - MTC Austin
Problem 1: Pick any integer greater than or equal to 1. Multiply it by 9. Permute its digits in any way. Show that the number you get is a multiple of 9.
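A quick empirical check of Problem 1 in Python (an illustration, not a proof — the key fact is that permuting digits preserves the digit sum, and a number is divisible by 9 exactly when its digit sum is):

    import random
    from itertools import permutations

    for _ in range(500):
        n = 9 * random.randint(1, 10**4)
        for p in permutations(str(n)):
            assert int("".join(p)) % 9 == 0
    print("every tested permutation of a multiple of 9 is a multiple of 9")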
Problem 2: "The Infected Checkerboard" An infection spreads among the squares of an n by n checkerboard in the following manner: If a square has 2 or more infected neighbors, then it becomes
infected itself. Neighbors are orthogonal only, so each square has at most 4 neighbors. Show that you cannot infect the whole board if you begin with fewer than n infected squares. | {"url":"https://sites.google.com/site/mtcaustin/home/side-board-problems/fromfall2010workshop1","timestamp":"2014-04-17T07:33:45Z","content_type":null,"content_length":"25807","record_id":"<urn:uuid:9488e047-a6d2-435d-bc3c-88de18d196d5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
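A useful handle on Problem 2 (our hint, not part of the original handout) is the perimeter of the infected region: infecting a square with k >= 2 infected neighbors adds 4 - k exposed edges while covering k previously exposed ones, so the total perimeter never increases. The starting squares have perimeter at most 4 times their count, and the fully infected board has perimeter exactly 4n, so fewer than n starting squares cannot infect everything. The Python sketch below simulates the spread and prints the perimeter after each round:

    def neighbors(r, c):
        return [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]

    def perimeter(infected):
        # edges of infected squares not shared with another infected square
        return sum(nb not in infected for sq in infected for nb in neighbors(*sq))

    def spread(n, start):
        infected = set(start)
        print("perimeter:", perimeter(infected))
        while True:
            new = {(r, c) for r in range(n) for c in range(n)
                   if (r, c) not in infected
                   and sum(nb in infected for nb in neighbors(r, c)) >= 2}
            if not new:
                return infected
            infected |= new
            print("perimeter:", perimeter(infected))  # never goes up

    board = spread(4, [(i, i) for i in range(4)])     # the diagonal works
    print("fully infected:", len(board) == 16)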
categorical shape theory
In the original form of abstract shape theory, the restrictions that (i) it should refer to a subcategory of ‘nice’ or ‘good’ objects, and (ii) that subcategory should be pro-reflective (shape
theorist's 'dense') were unnecessarily restrictive, eliminating interesting situations where similar constructions could be useful, and tended to obscure the nature of certain of the results, which
were much more general than was initially apparent.
This was explored by Deleanu and Hilton in the early 1970s.
Let $K : D\to C$ be a functor. The shape category of $K$ is the category with objects the same objects as those of $C$ but with morphisms from $c$ to $c^\prime$ being the functors between the
corresponding comma categories $(c^\prime/K)$ and $(c/K)$ (note reversal of order) that are compatible with the codomain functors to $D$.
(More to come later!)
Contact Heterogeneity and Phylodynamics: How Contact Networks Shape Parasite Evolutionary Trees
Interdiscip Perspect Infect Dis. 2011; 2011: 238743.
Contact Heterogeneity and Phylodynamics: How Contact Networks Shape Parasite Evolutionary Trees
The inference of population dynamics from molecular sequence data is becoming an important new method for the surveillance of infectious diseases. Here, we examine how heterogeneity in contact shapes
the genealogies of parasitic agents. Using extensive simulations, we find that contact heterogeneity can have a strong effect on how the structure of genealogies reflects epidemiologically relevant
quantities such as the proportion of a population that is infected. Comparing the simulations to BEAST reconstructions, we also find that contact heterogeneity can increase the number of sequence
isolates required to estimate these quantities over the course of an epidemic. Our results suggest that data about contact-network structure will be required in addition to sequence data for accurate
estimation of a parasitic agent's genealogy. We conclude that network models will be important for progress in this area.
1. Introduction
Epidemiology is a data-driven field, and it is currently being infused at an increasing rate with molecular sequence data. This new and growing data source has led to a call for multi-level models of
the relationship between sequence data and infectious disease dynamics [1, 2], dubbed phylodynamic models.
By allowing for additional data to be used and integrated, phylodynamic modeling may lead to improvements in the accuracy and quality of the surveillance of infectious diseases. For example, the
number of norovirus outbreaks reported increased in 2002. It was not clear, however, whether the higher reported numbers were a sign of more outbreaks or more frequent reporting of outbreaks.
Case-reporting bias does not affect molecular data, however. So coalescent analysis of molecular data [3] provided a valuable and largely independent line of evidence that the increase in outbreaks
was real. Of course, coalescent analysis will have its own biases, and here we examine those that result from host heterogeneity in contact.
To model heterogeneity in contact, we represent individuals in a population as nodes, and we represent the potential for two hosts to infect each other as an edge that links two nodes. Researchers
call the resulting networks contact networks. Contact-network structure necessarily affects the genealogy of any replicating infectious agent that is spreading through a host population. In this
paper, we use the term parasite to refer to all such infectious agents, including bacteria and viruses. The genealogy of these parasites must fit inside the tree of infections that forms as the
parasite spreads from host to host, and this tree of infections must fit inside the host population's contact network. While more elaborate elements of contact-network structure may be important, we
here focus simply on variation in the number of edges coming out of nodes, which corresponds to heterogeneity in contact rates.
Contact heterogeneity has often not been discussed as a possible bias in coalescent analyses (e.g., [4–6]). Researchers performing coalescent analyses have considered contact heterogeneity in a
variety of other ways. Hughes et al. [7] linked it to the phylogenetic clustering of sequence isolates. Biek et al. [8] mentioned that it may have contributed to changes in an estimation of R[0] (the
expected number of new cases that a single case produces in a susceptible population). Nakano et al. [9] discussed how iatrogenic transmission may have been an important type of transmission in the
spread of hepatitis C. Bennett et al. [10] pointed out that population-size estimates from coalescent analyses are more accurately interpreted as ratios of population size to reproductive variance.
But researchers have rarely quantitatively considered how contact heterogeneity might be directly influencing the results of their coalescent analyses. Volz et al. [11] did account for contact
heterogeneity in their coalescent model with a saturation parameter, but this application does not provide a general illustration of how contact-network structure can affect genealogies.
Our primary goal here is to assess how contact heterogeneity affects the relationship between coalescent reconstructions and the reality of parasite population dynamics. First, we build contact
networks with different levels of heterogeneity. Then, we simulate the spread of parasites through the networks, generating epidemic dynamics and a genealogy of the parasite with each simulation.
Then, we use the BEAST software package [12] to produce Bayesian skyride [13] reconstructions of parasite population dynamics based on the simulated genealogies. We also use the framework of Volz et
al. [11] to predict the skyride reconstructions based on the simulated epidemic dynamics. We explain how the contact-network structure affects the epidemic dynamics that, in turn, affect the
predicted reconstructions. The close agreement between the predicted skyrides and the skyride reconstructions validates this explanation. We also examine how much of the simulated genealogy the
skyride reconstruction requires as input in order to produce a reconstruction that agrees with the theoretical prediction.
2. Materials and Methods
We simulated infectious disease progression on networks. The nodes of the networks represented hosts and had states of being susceptible, infectious, or recovered. The edges of the network determined
the set of possible transmission events; infectious hosts transmitted infection across edges shared with susceptible hosts until the infectious hosts recovered. The number of nodes in the network was
kept at 10,000, and the mean degree (degree is the number of edges coming out of a node) was kept at 4. The networks were built to be either regular, meaning that all nodes have the same degree, or
with degree distributions sampled from Poisson, exponential, or Pareto distributions. The minimum degree in the Pareto networks was 1. The regular networks served as models with zero heterogeneity,
Poisson networks as models with heterogeneity similar to a Poisson process, exponential networks as models with heterogeneity similar to a variety of social networks [14], and Pareto networks
(scale-free networks) as models with the extreme levels of heterogeneity that might be found in sexual contact networks [15]. We used the Erdös-Rényi algorithm [16] to generate Poisson networks and
an edge-shuffling algorithm [17] to generate the regular, exponential, and Pareto networks.
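The following Python sketch shows one way such degree sequences could be generated and turned into graphs. It is our own illustration, not the authors' code: it uses networkx's configuration model in place of the Erdös-Rényi and edge-shuffling algorithms cited above, and the exponential and Pareto parameters are assumptions chosen only to be plausible (the paper fixes just the mean degree at 4):

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    N, MEAN = 10_000, 4

    def degree_sequence(kind):
        if kind == "regular":
            seq = np.full(N, MEAN)
        elif kind == "poisson":
            seq = rng.poisson(MEAN, N)
        elif kind == "exponential":
            seq = np.rint(rng.exponential(MEAN, N)).astype(int)
        elif kind == "pareto":
            # shifted so the minimum degree is 1, as in the paper
            seq = np.floor(rng.pareto(2.0, N) + 1).astype(int)
        seq = np.clip(seq, 0, N - 1)
        if seq.sum() % 2:              # configuration model needs an even sum
            seq[0] += 1
        return seq

    g = nx.Graph(nx.configuration_model(degree_sequence("exponential").tolist()))
    g.remove_edges_from(nx.selfloop_edges(g))
    print(g.number_of_nodes(), 2 * g.number_of_edges() / g.number_of_nodes())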
We simulated epidemics and genealogies in continuous time using a method based on the Stochastic Simulation Algorithm [18, 19]. Epidemics began with one node infectious and the rest of the nodes
being susceptible. Infectious nodes recovered at a set rate and transmitted infection to susceptible neighbors (nodes sharing an edge) at a set rate. We drew the time to the next event from an
exponential distribution with a rate equal to the sum of the rates of all possible events. We then selected an event with probability proportional to its rate, updated the state of the network
accordingly, and drew the time until the next event. This process was iterated until either the time evolution of the epidemic reached a set time point or no more events were possible.
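A compact Python sketch of that event loop (ours, not the authors' code; the transmission and recovery rates are placeholder values). Each iteration draws an exponential waiting time from the total event rate and then chooses a transmission or recovery with probability proportional to its rate:

    import random

    def gillespie_sir(adj, beta=0.3, gamma=0.1, t_max=100.0, seed=1):
        rng = random.Random(seed)
        status = {v: "S" for v in adj}
        source = rng.choice(list(adj))
        status[source] = "I"
        infectious, t = {source}, 0.0
        while infectious and t < t_max:
            # every S-I edge transmits at rate beta; every I node recovers at gamma
            si_edges = [(i, s) for i in infectious
                        for s in adj[i] if status[s] == "S"]
            total = beta * len(si_edges) + gamma * len(infectious)
            yield t, len(infectious), beta * len(si_edges)  # time, prevalence, incidence
            t += rng.expovariate(total)
            if rng.random() < beta * len(si_edges) / total:
                _, s = rng.choice(si_edges)                 # transmission
                status[s] = "I"
                infectious.add(s)
            else:
                r = rng.choice(list(infectious))            # recovery
                status[r] = "R"
                infectious.discard(r)

    # a toy 50-node ring standing in for the paper's 10,000-node networks
    adj = {v: [(v - 1) % 50, (v + 1) % 50] for v in range(50)}
    for t, prevalence, incidence in gillespie_sir(adj):
        pass
    print("last event at t =", round(t, 2))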
Simulation source code is available from the authors upon request. The code made use of the GNU scientific library [20, version 1.13+dfsg-1] to generate random numbers and the igraph library [21,
version 0.5.3-6] to construct networks.
The output of a simulation included a time series of prevalence, that is, the count of infected nodes (given a fixed population of 10,000 nodes), and incidence, that is, the sum of the rates of all
possible transmissions. Simulations also generated infection trees in which each transmission was a bifurcating node, each recovery was a terminal node, and branch lengths were equal to the time
between events. We sampled from the full infection trees to generate the trees for input in the skyride coalescent analyses. We sampled by selecting a set of nodes uniformly at random from the full
infection tree to become tip branches of an infection subtree. To generate the subtree, we cut the branches of the full infection tree at the subset of randomly selected nodes that had no descendants
in the set of randomly selected nodes, and we pruned off any paths that did not terminate in this subset of nodes.
Using the sampled infection trees as genealogies, we obtained a posterior distribution for the skyride population sizes with the time-aware method of Minin et al. [13], implemented in BEAST [12,
version 1.5.4]. The MCMC chain lengths were 100,000 states, and every 10th state was written to a log file. We discarded the first 10,000 states as burn in. In all cases, effective sample sizes were
well above 200. Thus, convergence had occurred. Examples of BEAST XML input files are available from the authors upon request.
Using the posterior skyride population-size distributions, we obtained the skyride trajectories with Tracer [22, version 1.5]. Using the framework of Volz et al. [11], we calculated a predicted
skyride as described next in the Results.
To plot time series from different stochastic simulations on a common time scale, we used the time at which growth became nearly deterministic in each simulation as time zero for that simulation.
3. Results
3.1. Theory
Coalescent theory is an area of population genetics that models the structure of genealogies backward in time from a set of lineages sampled from a large population. A simple coalescent process turns
out to be a good model for the genealogies of a wide range of scenarios in population genetics [23]. In the coalescent process, each pair of lineages in the sample coalesces into a common ancestral
lineage at a constant rate. When time is measured in units of generations, this rate is the reciprocal of the effective population size. So the rate at which any of the pairs coalesces is equal to
the number of pairs of lineages divided by the effective population size.
The skyride uses this simple relationship between effective population size and the expected time before coalescence to estimate population size from the length of intracoalescent intervals in a
genealogy. The median of a skyride reconstruction $y_{\mathrm{rec}}$ at time t within an intracoalescent interval is approximately
$y_{\mathrm{rec}} \approx \binom{n}{2}\, u \approx N_e \tau, \qquad (1)$
where $N_e$ is the effective population size, τ is the generation time, $\binom{n}{2}$ is the average number of pairs of lineages in the sample within the intracoalescent interval, and u is the length of the intracoalescent interval.
Predicting a skyride from the dynamics of an epidemic model is simply a matter of calculating the rate at which a pair of lineages will coalesce, that is, the rate at which two chains of infection
merge into a single chain. Volz et al. [11] have described how coalescence rates follow from prevalence and incidence. Prevalence, given a fixed population size, refers to the count of cases of
infection, and so we denote it by I. Incidence refers to the rate at which new cases are occurring, and so we denote it by r[i]. The rate of coalescence of a single pair of cases is
where P is the probability that we can trace a particular pair of cases back to a single case before the last transmission event. We have
making the approximation that the last transmission event was equally likely to have taken place between any pair of current cases. Therefore, the predicted skyride y[pred] satisfies
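$$y_{\text{pred}} = \frac{1}{\lambda_{\text{pair}}} = \frac{I(I-1)}{2\,r_i} \qquad (4)$$
(Reconstructed by taking the predicted skyride to be the reciprocal of the pairwise coalescence rate in (2)–(3).)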
The similarity of (4) and (1) reflects the similarity of the coalescent process to the transmission process in a continuous-time epidemic model. N[e] and τ, however, are often considered as
parameters of a discrete-time population model that has nonoverlapping generations. The coalescent process describes the genealogy in such a model when we sample a small fraction of the lineages in a
population. So how do we interpret N[e] and τ in the terms of a continuous-time epidemic model that has overlapping generations? Following Frost and Volz [24] and the general theory of Wakeley and
Sargsyan [25], we say that generation time τ is equal to the expected time before an infected individual transmits infection:
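$$\tau = \frac{I}{r_i} \qquad (5)$$
(Reconstructed: the ratio of prevalence to incidence, consistent with the discussion of Figure 3 below.)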
Then from (1) and (4) and y[rec] = y[pred], we have
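$$N_e = \frac{y_{\text{pred}}}{\tau} = \frac{I-1}{2} \qquad (6)$$
(Reconstructed from (4) and (5), assuming the skyride in (1) tracks the product $N_e \tau$; this matches the effective number of infections of roughly half the prevalence discussed by Frost and Volz [24].)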
3.2. Simulation
To determine the effect of sampling on the ability of the skyride to reconstruct prevalence history, we simulated genealogies and pruned off a variable number of branches from the genealogies. We
found that small amounts of pruning rapidly reduced the number of coalescent events in the sampled genealogy that occurred in the peak and late phases of the epidemic, thereby restricting accurate
reconstruction to the early phase of the epidemic (Figure 1).
Figure 1. Low levels of proportional sampling may prevent accurate reconstruction of prevalence during and after the epidemic peak. We consider reconstruction to be accurate when the skyride and the predicted skyride match. The light-blue ribbons are the middle ...
To demonstrate the effect of network structure on the reconstruction of prevalence history, epidemics were simulated on networks with varying heterogeneity. Keeping the extent of sampling equal and
increasing heterogeneity compressed the coalescent events in the sampled genealogy into the beginning of the epidemic. Figure 2 shows a representative example of this general trend that holds across
intermediate levels of sampling. Consequently, increasing heterogeneity has a similar effect to reducing the proportion of nodes sampled: the time at which the prediction of the skyride based on
prevalence and incidence diverges from the estimated skyride based on the genealogy occurs earlier.
Figure 2. Contact heterogeneity determines the amount of time over which the skyride estimated from the genealogy is informative of the skyride predicted by prevalence and incidence. Contact heterogeneity also affects the relationship between the skyride and prevalence ...
Figure 3 shows how differences in the scaling between the skyride and prevalence follow from differences in the trajectories of prevalence and incidence. The ratio of prevalence to incidence is the expected
time until an infected host transmits infection, and we here define it as the generation time (5). In Figure 3, we see that generation times are at, or quickly reach, a minimum after an epidemic
begins and then gradually increase until the epidemic ends. In the regular networks, the decline in the number of susceptible hosts over the course of the epidemic causes this increase to happen. In
the other networks, which have hosts of varying degree, infection first moves to the high-degree hosts and then to progressively lower- and lower-degree hosts [26–28]. Because the degree of a host
determines how much his/her infection increases incidence, this movement of infection from high- to low-degree hosts translates into generation times being at first shorter and then longer in
heterogeneous networks relative to regular networks (Figure 3).
Figure 3. Contact-network structure, infectious disease dynamics, and genealogical structure interact. The ratio of prevalence to incidence is the generation time, which scales prevalence of the predicted skyride (up to a constant factor). Dividing the predicted ...
4. Discussion
The effects of contact heterogeneity can be important in relating the structure of genealogies to infectious disease dynamics (Figure 3). The strength of the effect will vary from system to system,
and for some systems other aspects of contact-network structure such as the frequency of short paths [29] and the dynamics of edge formation [30–33] may also be important. More generally, models may
also require more detailed models of the course of infection within hosts (including incubation periods, e.g.), the effects of natural selection [34, 35], and other additions before they can make
precise predictions in real-world systems.
But are the data requirements of these more complex models feasible? To begin answering this question, we next discuss the implications of obtaining the equivalent of our simulated data from a
real-world system.
We knew the true infection tree in our simulations. In typical coalescent analyses of an infectious disease (e.g., [13, 36]), we do not know the true genealogy and so we must infer it along with the
dynamics of the effective population size. Although there is a large set of methods for the inference of trees from sequences [37–39], the variety of methods available reflects the difficulty of the
task. Additionally, as is well known by practitioners of phylogenetics, substitution rates set fundamental limits on the amount of phylogenetic information that sequences may contain. Sequences with
common ancestors that are very recent may not have any polymorphic sites that could suggest the structure of the branching of the tree connecting them. Sequences with common ancestors that are too
distant similarly contain little information about the true genealogy [40].
It may be possible to work around the second problem by collecting sequences over time such that there are no branching points in the tree that are too far from every pair of tips. For the first
problem, there is simply no information that the sequences alone can provide, and additional knowledge of events in the chain of infection is necessary to determine the infection tree. The panels
labeled “Time to coalescence” in Figure 3 show that this additional information is most likely to be needed early in the epidemic and when there is a large amount of variance in the contact network.
It is then perhaps fortunate that contact-tracing methods are practiced by many health departments for sexually transmitted diseases (STDs) [41, 42], which are thought to have higher contact
heterogeneity than airborne diseases [15]. However, we probably need more widespread practice of contact tracing for large genealogies to be assembled. A recent survey of physicians in the United States [43] found that less than one-third of physicians routinely screened patients for STDs and that many relied on patients to notify health departments and partners; similar surveys in other countries [42, 44, 45] likewise indicate that contact tracing is not routine in general medical care of STDs.
There also may be a need for contact tracing to establish the genealogy for airborne infections because many airborne transmissions may occur in a single day during which a single strain may be
dominant in a host, as the super-spreading events in the 2003 SARS-coronavirus outbreak demonstrated [46]. Contact tracing is also practiced for airborne diseases. It has been used to help contain
the SARS-coronavirus outbreak [47], smallpox [48], and tuberculosis [49]. Given that contacts for airborne diseases can be quite transient, it seems that, even with the addition of contact-tracing
data, we may generally know less about parasite genealogies for airborne diseases compared to STDs. On the upside, our results suggest that the ability to reconstruct early parts of the epidemic is
robust to much pruning of the full genealogy (Figure 1). However, this robustness may depend on our sampling scheme. Using discrete-time simulations, Stack et al. [50] found that the difference
between reconstructed prevalence and simulated prevalence depended largely on how the samples were distributed over the course of the epidemic. Also, it is unclear how any of our sampling levels
might compare to realistic amounts of contact tracing and molecular data for a specific infectious disease.
In addition to being necessary to fill gaps in molecular data, contact tracing may be necessary because genealogies do not always match infection trees. Such discordance is likely to occur when there
is relatively little time between transmissions. When there is little time for a mutant to become fixed between transmissions, the order in which alleles at loci of a sequence appear in transmitting
inocula (or sequence isolates) need not match the order in which the alleles appeared in the within-host population. Measures of within-host viral load and sequence diversity may be informative of
the chance of such discordance. If populations tend to be large and diverse, then sequence data may be useless for reconstructing the recent details of chains of infection but still useful in
reconstructing deeper branches in the tree. Sequence data from diverse within-host populations could also be useful in parameter estimation for coalescent models (e.g., [51]) that include the
within-host dynamics of the parasite. Two properties that parasites may have that would help increase the chance that infection trees and genealogies match are a low level of diversity in
transmitting inocula (i.e., a strong bottleneck effect at transmission) and reduction of diversity in an incubation period that precedes all transmission.
In our simulations, we also knew the variance of the degree distribution. We do have some data about the structure of contact networks for some systems. We have survey data about human sexual-contact
networks (e.g., [52, 53]) and survey data about networks of close, but not sexual, human contacts [54–56]. Researchers have used field data to construct hypothetical contact networks for wildlife and
vector-borne diseases (e.g., [57, 58]), and researchers have also used census data to construct hypothetical contact networks for human diseases (e.g., [59, 60]). It seems likely, however, that in
the analysis of real sequence data the heterogeneity of the contact network will be at least as uncertain as disease incidence and prevalence. Thus, estimation of contact heterogeneity may be an
important goal of the analysis. We note that previous work (e.g., [61]) has also discussed the potential use of sequence data to estimate contact heterogeneity.
5. Conclusions
Contact heterogeneity is well known to have a strong effect on infectious disease dynamics. We have shown how the relationship between infectious disease dynamics and genealogies is similarly
sensitive to the contact heterogeneity specified by a network. We have argued that direct knowledge of the tree of infections is likely needed in addition to sequence data for the accurate inference
of prevalence from sequence data. Thus, it seems that understanding the structure of the contact networks for various diseases will be important for progress in phylodynamics.
This work was supported by NSF Grant EF-0742373. The Texas Advanced Computing Center at UT provided computing resources.
1. Grenfell BT, Pybus OG, Gog JR, et al. Unifying the epidemiological and evolutionary dynamics of pathogens. Science. 2004;303(5656):327–332.
2. Holmes EC, Grenfell BT. Discovering the phylodynamics of RNA viruses. PLoS Computational Biology. 2009;5(10), Article ID e1000505.
3. Siebenga JJ, Lemey P, Kosakovsky Pond SL, Rambaut A, Vennema H, Koopmans M. Phylodynamic reconstruction reveals norovirus GII.4 epidemic expansions and their molecular determinants. PLoS Pathogens. 2010;6(5), Article ID e1000884.
4. Fraser C, Donnelly CA, Cauchemez S, et al. Pandemic potential of a strain of influenza A (H1N1): early findings. Science. 2009;324(5934):1557–1561.
5. Smith GJD, Vijaykrishna D, Bahl J, et al. Origins and evolutionary genomics of the 2009 swine-origin H1N1 influenza A epidemic. Nature. 2009;459(7250):1122–1125.
6. Rambaut A, Holmes EC. The early molecular epidemiology of the swine-origin A/H1N1 human influenza pandemic. PLoS Currents: Influenza. 2009;1, article RRN1003.
7. Hughes GJ, Fearnhill E, Dunn D, Lycett SJ, Rambaut A, Leigh Brown AJ. Molecular phylodynamics of the heterosexual HIV epidemic in the United Kingdom. PLoS Pathogens. 2009;5(9), Article ID e1000590.
8. Biek R, Henderson JC, Waller LA, Rupprecht CE, Real LA. A high-resolution genetic signature of demographic and spatial expansion in epizootic rabies virus. Proceedings of the National Academy of Sciences of the United States of America. 2007;104(19):7993–7998.
9. Nakano T, Lu L, He Y, Fu Y, Robertson BH, Pybus OG. Population genetic history of hepatitis C virus 1b infection in China. Journal of General Virology. 2006;87(1):73–82.
10. Bennett SN, Drummond AJ, Kapan DD, et al. Epidemic dynamics revealed in dengue evolution. Molecular Biology and Evolution. 2010;27(4):811–818.
11. Volz EM, Kosakovsky Pond SL, Ward MJ, Leigh Brown AJ, Frost SDW. Phylodynamics of infectious disease epidemics. Genetics. 2009;183(4):1421–1430.
12. Drummond AJ, Rambaut A. BEAST: Bayesian evolutionary analysis by sampling trees. BMC Evolutionary Biology. 2007;7(1), article 214.
13. Minin VN, Bloomquist EW, Suchard MA. Smooth skyride through a rough skyline: Bayesian coalescent-based inference of population dynamics. Molecular Biology and Evolution. 2008;25(7):1459–1471.
14. Bansal S, Grenfell BT, Meyers LA. When individual behaviour matters: homogeneous and network models in epidemiology. Journal of the Royal Society Interface. 2007;4(16):879–891.
15. Liljeros F, Edling CR, Amaral LAN. Sexual networks: implications for the transmission of sexually transmitted infections. Microbes and Infection. 2003;5(2):189–196.
16. Erdős P, Rényi A. On random graphs. I. Publicationes Mathematicae. 1959;6:290–297.
17. Viger F, Latapy M. Efficient and simple generation of random simple connected graphs with prescribed degree sequence. In: Computing and Combinatorics. Berlin, Germany: Springer; 2005:440–449.
18. Gillespie DT. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of Computational Physics. 1976;22(4):403–434.
19. Gillespie DT. Stochastic simulation of chemical kinetics. Annual Review of Physical Chemistry. 2007;58:35–55.
20. Galassi M, Davies J, Theiler J, et al. GNU Scientific Library Reference Manual. 3rd edition. Network Theory Ltd.; 2009.
21. Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal. 2006;Complex Systems:1695.
23. Kingman JFC. On the genealogy of large populations. Journal of Applied Probability. 1982;19:27–43.
24. Frost SDW, Volz EM. Viral phylodynamics and the search for an effective number of infections. Philosophical Transactions of the Royal Society B. 2010;365(1548):1879–1890.
25. Wakeley J, Sargsyan O. Extensions of the coalescent effective population size. Genetics. 2009;181(1):341–345.
26. Barthélemy M, Barrat A, Pastor-Satorras R, Vespignani A. Velocity and hierarchical spread of epidemic outbreaks in scale-free networks. Physical Review Letters. 2004;92(17), Article ID 178701.
27. Barthélemy M, Barrat A, Pastor-Satorras R, Vespignani A. Dynamical patterns of epidemic outbreaks in complex heterogeneous networks. Journal of Theoretical Biology. 2005;235(2):275–288.
28. Volz E. SIR dynamics in random networks with heterogeneous connectivity. Journal of Mathematical Biology. 2008;56(3):293–310.
29. Miller JC. Spread of infectious disease through clustered populations. Journal of the Royal Society Interface. 2009;6(41):1121–1134.
30. Volz E, Meyers LA. Susceptible-infected-recovered epidemics in dynamic contact networks. Proceedings of the Royal Society B. 2007;274(1628):2925–2933.
31. Altmann M. Susceptible-infected-removed epidemic models with dynamic partnerships. Journal of Mathematical Biology. 1995;33(6):661–675.
32. Morris M, Kretzschmar M. A microsimulation study of the effect of concurrent partnerships on the spread of HIV in Uganda. Mathematical Population Studies. 2000;8:109–133.
33. Volz E, Meyers LA. Epidemic thresholds in dynamic contact networks. Journal of the Royal Society Interface. 2009;6(32):233–241.
34. O'Fallon BD, Seger J, Adler FR. A continuous-state coalescent and the impact of weak selection on the structure of gene genealogies. Molecular Biology and Evolution. 2010;27(5):1162–1172.
35. Welch D, Nicholls GK, Rodrigo A, Solomon W. Integrating genealogy and epidemiology: the ancestral infection and selection graph as a model for reconstructing host virus histories. Theoretical Population Biology. 2005;68(1):65–75.
36. Drummond AJ, Rambaut A, Shapiro B, Pybus OG. Bayesian coalescent inference of past population dynamics from molecular sequences. Molecular Biology and Evolution. 2005;22(5):1185–1192.
37. Yang Z. Computational Molecular Evolution. Oxford, UK: Oxford University Press; 2006. (Oxford Series in Ecology and Evolution).
38. Felsenstein J. Inferring Phylogenies. 2nd edition. Sinauer Associates; 2003.
39. Lemey P, Salemi M, Vandamme A-M, editors. The Phylogenetic Handbook: A Practical Approach to Phylogenetic Analysis and Hypothesis Testing. 2nd edition. Cambridge, UK: Cambridge University Press; 2009.
40. Hagstrom GI, Hang DH, Ofria C, Torng E. Using Avida to test the effects of natural selection on phylogenetic reconstruction methods. Artificial Life. 2004;10(2):157–166.
41. Stokes T, Schober P. A survey of contact tracing practice for sexually transmitted diseases in GUM clinics in England and Wales. International Journal of STD and AIDS. 1999;10(1):17–21.
42. McCarthy M, Haddow LJ, Furner V, Mindel A. Contact tracing for sexually transmitted infections in New South Wales, Australia. Sexual Health. 2007;4(1):21–25.
43. St. Lawrence JS, Montaño DE, Kasprzyk D, Phillips WR, Armstrong K, Leichliter JS. STD screening, testing, case reporting, and clinical and partner notification practices: a national survey of US physicians. American Journal of Public Health. 2002;92(11):1784–1788.
44. Chan RKW, Tan HH, Chio MTW, Sen P, Ho KW, Wong ML. Sexually transmissible infection management practices among primary care physicians in Singapore. Sexual Health. 2008;5(3):265–271.
45. Heal C, Muller R. General practitioners' knowledge and attitudes to contact tracing for genital Chlamydia trachomatis infection in North Queensland. Australian and New Zealand Journal of Public Health. 2008;32(4):364–366.
46. Ruan Y, Wei CL, Ee LA, et al. Comparative full-length genome sequence analysis of 14 SARS coronavirus isolates and common mutations associated with putative origins of infection. Lancet. 2003;361(9371):1779–1785.
47. Donnelly CA, Ghani AC, Leung GM, et al. Epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in Hong Kong. Lancet. 2003;361(9371):1761–1766.
48. Fenner F, Henderson DA, Arita I, Jezek Z, Ladnyi ID. Smallpox and Its Eradication. Vol. 6. Geneva, Switzerland: World Health Organization; 1988. (History of International Public Health).
49. Rothenberg RB, McElroy PD, Wilce MA, Muth SQ. Contact tracing: comparing the approaches for sexually transmitted diseases and tuberculosis. International Journal of Tuberculosis and Lung Disease. 2003;7(12):S342–S348.
50. Stack JC, Welch JD, Ferrari MJ, Shapiro BU, Grenfell BT. Protocols for sampling viral sequences to study epidemic dynamics. Journal of the Royal Society Interface. 2010;7(48):1119–1127.
51. Edwards CTT, Holmes EC, Wilson DJ, et al. Population genetic estimation of the loss of genetic diversity during horizontal transmission of HIV-1. BMC Evolutionary Biology. 2006;6, article 28.
52. Potterat JJ, Phillips-Plummer L, Muth SQ, et al. Risk network structure in the early epidemic phase of HIV transmission in Colorado Springs. Sexually Transmitted Infections. 2002;78(1):i159–i163.
53. Rothenberg RB, Long DM, Sterk CE, et al. The Atlanta urban networks study: a blueprint for endemic transmission. AIDS. 2000;14(14):2191–2200.
54. Wallinga J, Teunis P, Kretzschmar M. Using data on social contacts to estimate age-specific transmission parameters for respiratory-spread infectious agents. American Journal of Epidemiology. 2006;164(10):936–944.
55. Hens N, Goeyvaerts N, Aerts M, Shkedy Z, Van Damme P, Beutels P. Mining social mixing patterns for infectious disease models based on a two-day population survey in Belgium. BMC Infectious Diseases. 2009;9, article 5.
56. Mossong J, Hens N, Jit M, et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Medicine. 2008;5(3), article e74.
57. Craft ME, Volz E, Packer C, Meyers LA. Distinguishing epidemic waves from disease spillover in a wildlife population. Proceedings of the Royal Society B. 2009;276(1663):1777–1785.
58. Salkeld DJ, Salathé M, Stapp P, Jones JH. Plague outbreaks in prairie dog populations explained by percolation thresholds of alternate host abundance. Proceedings of the National Academy of Sciences of the United States of America. 2010;107(32):14247–14250.
59. Meyers LA, Pourbohloul B, Newman MEJ, Skowronski DM, Brunham RC. Network theory and SARS: predicting outbreak diversity. Journal of Theoretical Biology. 2005;232(1):71–81.
60. Eubank S. Network based models of infectious disease spread. Japanese Journal of Infectious Diseases. 2005;58(6):S9–S13.
61. Sander LM, Warren CP, Sokolov IM, Simon C, Koopman J. Percolation on heterogeneous networks as a model for epidemics. Mathematical Biosciences. 2002;180:293–305.
Articles from Interdisciplinary Perspectives on Infectious Diseases are provided here courtesy of Hindawi Publishing Corporation
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2995904/?tool=pubmed","timestamp":"2014-04-16T23:32:16Z","content_type":null,"content_length":"112154","record_id":"<urn:uuid:4e8763a8-f039-4845-b6de-c52f43aa9848>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Their infinite wisdom
Graphic: Christine Daniloff
(PhysOrg.com) -- Hotel guests come and go. But in the first decade of the 1900s, a pair of frequent Russian visitors to the Hotel Parisiana, near the Sorbonne on Paris' Left Bank, stood out vividly.
The children of the hotel's proprietors, the Chamont family, remembered them into the 1970s as 'hardworking' and 'pious' men. The guests, Dimitri Egorov and Nikolai Luzin, were mathematicians,
studying in Paris; they often prayed and went to church.
The Russians were embarking on a grand project: exploring the unknown features of infinity, the notion that a quantity can always increase. Infinity’s riddles have fascinated intellectuals from
Aristotle to Jorge Luis Borges to David Foster Wallace. In ancient Greece, Zeno’s Paradox stated that a runner who keeps moving halfway toward a finish line will never cross it (in effect, Zeno
realized the denominator of a fraction can double infinitely, from 1/2 to 1/4 to 1/8, and so on). Galileo noticed but left unresolved another brain-teaser: A series that includes every integer (1, 2,
3, and so on) seems like it should contain more numbers than one that only includes even integers (2, 4, 6, and so on). But if both continue infinitely, how can one be bigger than the other?
As it happens, infinity does come in multiple sizes. And by discovering some of its precise characteristics, the Russians helped show that infinity is not just one abstract concept. Egorov and Luzin,
with the help of another colleague, Pavel Florensky, created a new field, Descriptive Set Theory, which remains a pillar of contemporary mathematical inquiry. They also founded the Moscow School of
mathematics, home to generations of leading researchers.
The Russians’ success in grasping infinity concretely went hand in hand with their unorthodox religious beliefs, according to MIT historian of science Loren Graham. In a recent book, Naming Infinity:
A True Story of Religious Mysticism and Mathematical Creativity, co-written with French mathematician Jean-Michel Kantor and published this year by Harvard University Press, Graham describes how the
Russians were “Name-worshippers,” a cult banned in their own country. Members believed they could know God in detail, not just as an abstraction, by repeating God’s name in the “Jesus prayer.”
Graham thinks this openness to apprehending the infinite let the trio make its discoveries--before Egorov and Florensky were swept up in Stalin’s purges. “The impact of the Russian mathematicians has
been enormous,” says Graham, who has spent a half-century studying the history of science in Russia. “But their fates were tragic.”
Settling set theory
In studying infinity, the Russians followed Georg Cantor, the German theorist who from the 1870s to the 1890s formalized the notion that infinity comes in multiple sizes. As Cantor showed, the
infinite set of real numbers is greater than the infinite set of integers. Because real numbers can be expressed as infinite decimals (like 6.52918766145 … ), there are infinitely many in between
each integer. The set containing this continuum of real numbers must thus be larger than the set of integers. In Cantor’s terms, when there is no one-to-one correspondence between members of infinite
sets, those infinities have different sizes.
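(A standard way to make this precise, not spelled out above, is Cantor's diagonal argument: given any proposed list x1, x2, x3, ... of real numbers between 0 and 1, construct a new number y whose nth decimal digit differs from the nth digit of xn, avoiding the digits 0 and 9 to dodge dual decimal expansions. Then y is a real number missing from the list, so no list indexed by the integers can exhaust the reals. Galileo's puzzle, by contrast, dissolves: pairing each integer n with the even integer 2n matches the two sets one-to-one, so in Cantor's terms they have the same size.)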
Cantor’s work made it clear that the study of infinity was actually the study of sets: their properties and the functions used to create them. Today, set theory has become the foundation of modern
math. But in the aftermath of Cantor, the basics of set theory were unclear. As Graham and Kantor describe it, even leading mathematicians found the situation unsettling. Three French thinkers —
Emile Borel, Henri Lebesgue, and Rene Baire — who made advances in set theory nonetheless decided by the early 1900s that the study of infinity had lost its way. They felt theorists were relying more
on arbitrary rule-making than rigorous inquiry. “The French lost their nerve,” says Graham.
By contrast, Graham and Kantor assert, the Russian trio found “freedom” in the mathematical uncertainties of the time. It turns out there were plenty of concrete advances in set theory yet to be
made; Luzin in particular pushed the field forward in the 1910s and 1920s, making discoveries about numerous types of sets involving the continuum of real numbers (the larger of the infinities Cantor
found); Descriptive Set Theory details the properties of these sets. In turn, many of Luzin’s students in the Moscow School also became prominent figures in the field, including Andrei Kolmogorov,
the best-known Russian mathematician of the 20th century.
What’s in a name?
Naming Infinity argues that the Russians thought their mathematical inquiries corresponded to their religious practices. The Name-worshippers believed the name of God was literally God, and that by
invoking it repeatedly in their prayer, they could know God closely — a heretical view for some.
Graham and Kantor think the Russians saw their explorations in math the same way; they were defining (and naming) sets in areas where others thought knowledge was impossible. Luzin, for one, often
stressed the importance of “naming” infinite sets as a part of discovering them. The Russians “believed they made God real by worshipping his name,” the book states, “and the mathematicians … thought
they made infinities real” by naming and defining them.
Graham also suggests a parallel between the Russians and Isaac Newton, another believer (and heretic). Historians today largely view Newton’s advances in physics as part of a larger personal effort —
including readings in theology and alchemy experiments — to find divine order in the world. Similarly, the Russians thought they could comprehend infinity through both religion and mathematics.
Mathematicians have responded to Naming Infinity with enthusiasm. “It’s a wonderful book for many reasons,” says Barry Mazur, the Gerhard Gade University Professor at Harvard, who regards it as “an
excellent way of getting into the development of set theory at the turn of the century.”
Moreover, Mazur agrees that the connection between the religious impulses of the three Russians and their mathematical studies seems significant, even if there is only a general affinity between the
two areas in matters such as naming objects. “It is more a conveyance of energy, than a conveyance of logic,” Mazur says. Religion could not trigger precise mathematical moves, he thinks, but it
provided the Russians with the intellectual impetus to move forward.
Victor Guillemin, a professor of mathematics at MIT, also finds this account convincing. In the 1970s, it was Guillemin, staying at the Hotel Parisiana like Egorov and Luzin before him, who discussed
the Russians’ lives with the Chamont family daughters (then elderly women, having been children just after the turn of the century). While reading Graham and Kantor’s book, Guillemin says, “I was
fascinated at the idea that the Russians were able to push the subject further because they had less trepidation at dealing with infinity.”
As Graham and Kantor point out, many other prominent mathematicians have had a mystical bent, from Pythagoras to Alexander Grothendieck, an innovative French theorist of the 1960s who now lives as a
recluse in the Pyrenees. Yet Graham emphasizes that mysticism is not a precondition for mathematical insight. “To see if science and religion are opposed to each other, or help each other,” Graham
says, “you have to select a specific episode and study it.”
Egorov’s exile, Florensky’s fate
Naming Infinity also starkly recounts the sorry fates of Egorov and Florensky, as publicly religious figures in atheist, postrevolutionary Russia. Egorov was exiled to the provinces and starved to
death in 1931. Florensky, a flamboyant figure who wore priestly garb in public, was executed in 1937. Luzin was spared after the physicist Peter Kapitsa made a direct appeal to Stalin on his behalf.
These men were not just endangered by their religiosity, however, but also by their style of math. The intangible nature of infinity contradicted the Marxist notion that intellectual activity should
be grounded in material matters, a charge made by one of their accusers: Ernst Kol’man, a mathematician and seemingly sinister figure called “the dark angel” for his role as an informant on other
Soviet intellectuals.
Graham, who knew both Kapitsa and Kol’man, says Kol’man “really believed his Marxism, and believed it was wrong to think mathematics has no relationship to the material world. He thought this was a
threat to the Soviet order.” Even so, Kol’man, who died in 1979, left behind writings acknowledging he had judged such matters “extremely incorrectly.”
The Russian trio was thus part of a singular saga, belonging to a now-vanished historical era. Naming Infinity rescues that story for readers who never had the chance to hear it directly from the
owners of the Hotel Parisiana.
Provided by Massachusetts Institute of Technology
not rated yet Dec 18, 2009
Although the size (i.e. cardinality) of the set of real numbers is indeed bigger than that of the integers, it is not because "...there are infinitely many in between each integer." The set of
rational numbers is a counterexample to this argument; there are infinitely many rationals between each integer but the set of rationals is the same size as the set of integers.
not rated yet Jan 27, 2010
I would compare the part about infinity coming in different sizes to time and its different speed of flow. The only real number, quite paradoxically, is zero. All the rest of the numbers are there to
represent zero's flow through time.
-1+1,-2+2,-3+3,-4+4,-5+5,-infinity+infinity = 0
Numbers are just time and space, simplified, condensed into one.
Another interesting study is the story 1 to 5 and 7 & 8. The numbers 6 and 9 don't count as they essentially represent recyclement/emerging from an egg/zero. Thus transition of 1 to 5 into 7 (which
lasts for a mini eternity represented by 8) then comes transition again, represented by 9. 7 after 8 breaks down to 1 (but this 1 is a new level one, represented by the presence of a 0. 1 becomes one
with 0 (the universe), thus comes ten. And the universe begins again in the image of 1. 11,12,13,14 etc)
not rated yet Jan 27, 2010
P.S.: Not only do numbers represent zero's passage through time, they also represent energy/zero's subtle shifts in its tone/characteristic/properties during that time, where each number is a
not rated yet Jan 31, 2010
P.P.S.: I forgot to mention that the opposite (regarding numbers and zero) is also true. After all, that's why it's called a zero. Both theories, about everything being zero and zero being nothing/
unachievable are true. Kindda like your right hand is left in the mirror. | {"url":"http://phys.org/news180030744.html","timestamp":"2014-04-20T03:38:12Z","content_type":null,"content_length":"78971","record_id":"<urn:uuid:30ddfc0c-b916-45fa-9dae-907f305c7967>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Juhi Jang
Courant Institute of Mathematical Sciences
New York University
251 Mercer Street
New York, NY 10012
E-mail: juhijang-at-cims-dot-nyu-dot-edu
Office: Warren Weaver Hall, Room 1027
Phone: (212) 998-3260
Courant Instructor at the Courant Institute (NYU). Here is my CV.
Teaching at NYU
Research Interests
• Nonlinear PDEs arising in fluid and gas dynamics, kinetic theory, plasma physics, and astrophysics
• NSF Research Grant DMS-0908007 (PI), 2009-2012
• Nonlinear Instability in Gravitational Euler-Poisson system for γ = 6/5, Archive for Rational Mechanics and Analysis 188 (2008), no. 2, 265-307
• Vlasov-Maxwell-Boltzmann diffusive limit, Archive for Rational Mechanics and Analysis 194 (2009), no. 2, 531-584
• Local Well-Posedness of Dynamics of Viscous Gaseous Stars, Archive for Rational Mechanics and Analysis 195 (2010), no. 3, 797-863
• Well-posedness for compressible Euler equations with physical vacuum singularity (with Nader Masmoudi), Communications on Pure and Applied Mathematics 62 (2009), no. 10, 1327-1385
• Local Hilbert Expansion for the Boltzmann equation (with Yan Guo and Ning Jiang), Kinetic and Related Models 2 (2009), no. 1, 205-214
• Acoustic limit of the Boltzmann equation: classical solutions (with Ning Jiang), Discrete and Continuous Dynamical Systems - Series A 25 (2009), no. 3, 869-882
• Acoustic Limit for the Boltzmann equation in Optimal Scaling (with Yan Guo and Ning Jiang), Communications on Pure and Applied Mathematics 63 (2010), no. 3, 337-361
• Global Hilbert expansion for the Vlasov-Poisson-Boltzmann system (with Yan Guo), accepted for publication in Communications in Mathematical Physics
• Vacuum in Gas and Fluid dynamics (with Nader Masmoudi), accepted for publication in Proceedings of the IMA summer school on Nonlinear Conservation Laws and Applications
* Most of my papers including recent preprints are available on arXiv.org and the final versions can be obtained on request. *
Preprints and In Preparation
• Well-posedness of compressible Euler equations in a physical vacuum (with Nader Masmoudi), submitted
• Wave operators for semilinear wave and Klein-Gordon equations with critical nonlinearities (with Dong Li and Xiaoyi Zhang), in preparation
• The 2D Euler-Poisson System with Spherical Symmetry, unpublished note | {"url":"http://www.cims.nyu.edu/~juhijang/","timestamp":"2014-04-17T18:24:05Z","content_type":null,"content_length":"4139","record_id":"<urn:uuid:5a8c1e58-43f3-440a-a418-5323970c7262>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Beam Pointing Direction of Traveling-Wave Arrays
A series of curves for dielectric-filled waveguide arrays illustrate how the beam direction is dependent on various array parameters. Example problems consider the various tradeoffs available.
A. Derivation of the main beam-pointing equation
A linear traveling-wave array of radiating elements with a constant interelement spacing, d, is considered; see Fig. A. The change in phase from a to c is 2π/λ ∙ d cos ϕ where λ is the free space
wavelength and ϕ the main beam pointing direction from forward endfire. Similarly, the phase change from a to b is 2π/λ[g] ∙ d; λ[g] is the waveguide wavelength. An additional phase shift of –mπ (m =
0, 1, 2, 3) is introduced with each succeeding element. The value of m depends on how the elements are fed—in or out of phase (m = 1 and m = 3 correspond to staggered slots and m = 0 and m = 2
correspond to collinear or inline slots). Points c and b are in phase yielding.
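$$\frac{2\pi}{\lambda}\,d\cos\phi \;=\; \frac{2\pi}{\lambda_g}\,d \;-\; m\pi \qquad (1)$$
$$\cos\phi \;=\; \frac{\lambda}{\lambda_g} \;-\; \frac{m\lambda}{2d} \qquad (2)$$
(Eqs. (1)–(2) reconstructed from the phase terms defined above; the typeset equations are missing from this copy.)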
Solving Eq. 2 for ϕ gives an expression for the main beam pointing direction
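$$\phi \;=\; \cos^{-1}\!\left(\frac{\lambda}{\lambda_g} - \frac{m\lambda}{2d}\right) \qquad (3)$$
(Reconstructed, as above.)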
For the TE[10] mode, the waveguide wavelength, λ[g], is related to the relative dielectric constant, K, and the inside width, a, of the rectangular waveguide by
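$$\lambda_g \;=\; \frac{\lambda}{\sqrt{K - \left(\lambda/2a\right)^{2}}} \qquad (4)$$
(Reconstructed; this is the standard TE10 guide-wavelength relation for a dielectric-filled rectangular guide.)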
The beam pointing direction is, thus
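$$\phi \;=\; \cos^{-1}\!\left(\sqrt{K - \left(\lambda/2a\right)^{2}} \;-\; \frac{m\lambda}{2d}\right) \qquad (5)$$
(Reconstructed by substituting (4) into (3).)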
Eq. 5 represents the beam pointing direction for the array factor of an array of omnidirectional elements.
B. Bias toward broadside
The effective bias toward broadside of the beam pointing direction of traveling-wave arrays is caused by the variation of the magnitude of the element factor from broadside to endfire. Radiation
pattern beams of non-broadside arrays are asymmetrical because the magnitude of the element factor is maximum at broadside and decreases toward forward endfire and rear endfire. In particular, the
forward endfire (rear endfire) portion of the main lobe of a non-broadside beam is somewhat lower than the corresponding portion of a broadside beam from an array of the same length. Lowering the
portion of the main beam closest to forward endfire (rear endfire) with respect to the portion closest to broadside effectively moves the beam peak toward broadside. The wider the beam and the closer
it is to forward endfire (rear endfire), the greater the relative reduction of the forward endfire (rear endfire) portion of the main beam and the greater the effective bias toward broadside.
C. Grating lobes
The maximum interelement spacing, d[M], of radiating elements in a traveling-wave array (which will not allow gain reducing grating lobes to appear in the radiation pattern) is a function of the
angular offset, ϕ[0], of the main beam from endfire, as expressed in Eq. 6 for infinitely narrow beamwidths.
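$$d_M \;=\; \frac{\lambda}{1 + \cos\phi_0} \qquad (6)$$
(Reconstructed; this form reproduces the pairs quoted below: d = 0.55λ corresponds to ϕ[0] ≈ 35 degrees and d = 0.65λ to ϕ[0] ≈ 57.3 degrees.)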
where λ is the operating wavelength. A more general expression applicable to arrays of various beamwidths is
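$$d_M \;=\; \frac{\lambda}{1 + \cos\!\left(\phi_0 - \tfrac{1}{2}\,BW_{10\,\mathrm{dB}}\right)} \qquad (7)$$
(A plausible reconstruction only, since the original display equation is missing from this copy: Eq. (6) applied at the near-endfire 10 dB edge of the main beam rather than at its peak.)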
where BW[10 dB] is the 10 dB beamwidth or the angular width of the main beam measured at the 10 dB down points. Figs. 1 through 11 are each plotted with interelement spacings of 0.55λ and 0.65λ which
correspond to minimum angular offsets of the main beam from endfire of 34.9 degrees and 57.3 degrees, respectively, (which will not allow grating lobes to appear in the radiation pattern) for
infinitely narrow beams. The appropriate portions of the curves have been eliminated due to grating lobes appearing in the radiation pattern as predicted by Eq. 6. The antenna designer should use the
more general expression of Eq. 7 to check their particular array for possible grating lobes.
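A quick numerical check of Eq. (6) (an illustrative sketch, not from the article; the function names are ours):

import math

def max_spacing(wavelength, phi0_deg):
    # Eq. (6): largest element spacing with no grating lobes for an
    # (idealized) infinitely narrow beam offset phi0 from forward endfire.
    return wavelength / (1.0 + math.cos(math.radians(phi0_deg)))

# Invert Eq. (6) for the two spacings used in the figures.
for d_over_lambda in (0.55, 0.65):
    phi0 = math.degrees(math.acos(1.0 / d_over_lambda - 1.0))
    print(f"d = {d_over_lambda} lambda -> minimum offset from endfire {phi0:.1f} deg")
# Prints about 35.1 and 57.4 degrees, consistent (to rounding) with the
# 34.9 and 57.3 degrees quoted above.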
D. Double moding
Eq. 5 was derived for a TE[10] mode of propagation in a rectangular waveguide. Under certain conditions the TE[20] mode will propagate, and those conditions (waveguide widths allowing TE[20] cut off
frequencies below the operating frequency) have been eliminated from the curves. The expression for TE[20] cut off frequency is
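$$f_c\,(\mathrm{TE}_{20}) \;=\; \frac{c}{a\sqrt{K}} \qquad (8)$$
(Reconstructed from the standard rectangular-waveguide cutoff relation, using the symbols defined below.)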
c = velocity of light
K = dielectric constant
a = inside width of the rectangular waveguide
E. Broadside beam designs
Traveling-wave arrays are not usually designed with broadside beams. Commonly used longitudinal shunt slot radiating elements staggered (non-staggered) about the waveguide broadwall centerline are
spaced odd (even) multiples of a half waveguide wavelength apart along the traveling-wave array when broadside beams are desired. However, the input admittance of the array for either odd or even
multiples of half waveguide wavelength interelement spacing is the sum of the admittances of the individual shunt slot radiating elements plus the characteristic admittance of the waveguide. Since
the sum of the admittances of the individual shunt elements cannot equal zero, the input admittance of the array cannot equal the characteristic admittance of the waveguide; thus, the array will not
be matched without a transformer. Therefore, the more efficient standing wave arrays rather than traveling-wave arrays are commonly used to generate broadside beams.
The basic design curves, Figs. 1 through 11, include the broadside beam pointing direction. However, the engineer should not design a traveling-wave array with a beam pointing direction closer than 5 degrees to broadside at any operating frequency because of the matching problems.
IVT and Jump discontinuity
February 11th 2011, 04:44 PM #1
Jan 2011
IVT and Jump discontinuity
As I understand it, the IVT states that every value between the function's endpoint values is attained somewhere on the interval, which is pretty easy to understand. If it doesn't work, it means you have some kind of discontinuity. However, this problem in the book is asking me to draw a graph of a function f(x) on [0,4] with the given property:
Jump discontinuity at x = 2, yet f does satisfy the conclusion of the IVT on [0,4]. I've tried to wrap my head around this but it has me stumped. I assume it's a trick question, but before I lay it to rest I have to ask.
I just had an idea: is it a tan function? Because technically it can break at x = 2 but still satisfy every M for every c between [0,4]. But is that considered a jump discontinuity or an infinite discontinuity?
Last edited by Newskin01; February 11th 2011 at 04:48 PM. Reason: add more thoughts :)
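For what it's worth, here is one function of the kind the problem seems to want (an illustration, not necessarily the book's intended answer): let f(x) = x for 0 ≤ x < 2 and f(x) = x − 2 for 2 ≤ x ≤ 4. This has a jump at x = 2 (the left-hand limit is 2 but f(2) = 0), yet every value M between f(0) = 0 and f(4) = 2 is still attained somewhere on [0,4], which is exactly the conclusion of the IVT. Note that tan has an infinite discontinuity at its breaks (the one-sided limits are ±∞), not a jump; a jump discontinuity requires both one-sided limits to exist and be finite but unequal.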
February 11th 2011, 04:45 PM #2
Jan 2011 | {"url":"http://mathhelpforum.com/calculus/170942-ivt-jump-discontinuity.html","timestamp":"2014-04-16T07:35:01Z","content_type":null,"content_length":"31970","record_id":"<urn:uuid:b6b45c40-dd09-414f-ac5e-a85fc3609a1e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
16y + 0 = 16y
• Associative Property of Addition
• Zero Property of Multiplication
• Commutative Property of Addition
• Identity Property of Addition
• one year ago
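For reference: adding 0 to a quantity leaves it unchanged, so 16y + 0 = 16y illustrates the Identity Property of Addition (0 is the additive identity).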
{"url":"http://openstudy.com/updates/50f453a0e4b0abb3d87065e2","timestamp":"2014-04-19T07:21:59Z","content_type":null,"content_length":"46545","record_id":"<urn:uuid:4eb3db54-fddc-4f1e-ab4b-368663a6d862>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Reference - Asymptotic geodesics on compact surfaces without conjugate points
up vote 8 down vote favorite
I would like to ask about possible references on the following problem: consider a compact surface and a metric without conjugate points. Consider it's universal covering endowed whith the lifting of
the metric (so there is no conjugate points on the universal covering as well).
Suppose that there exists two geodesics on the covering wich are strongly asymptotic on the future (the distance between them goes to zero as $t\rightarrow\infty$). Is there any hope of obtainning
estimatives for the distance between them on the past (as $t\rightarrow -\infty$)?
I see that on the hyperbolic plane if the geodesics get closer on the future, they deviate on the past. I'm wondering if there is some similar "deviating behavior" (even if it was not monotonically
increasing) in abscense of conjugate points too.
Thanks on advance.
riemannian-geometry ds.dynamical-systems
Since Anton's answer is likely to disappear in the near future, here are some comments: 1. Giving a positive or negative answer for this question looks like a nice PhD thesis. 2. There are few
3 positive ans some negative results in this direction, the most interesting counter-example is in Keith Burns' 1992 paper "The Flat Strip Theorem Fails for Surfaces with No Conjugate Points". 3. On
the positive side, if you assume, in addition, that the metric has no focal points, strongly asymptotic geodesics will probably diverge in the opposite direction, see O'Sullivan's 1976 paper... –
Misha Mar 23 '13 at 13:34
"Riemannian Manifolds Without Focal Points". 4. Lastly, you should ask Keith Burns directly (he is still in Northwestern University) about the status of this question, since there were several
followup paper since his 1992 work. – Misha Mar 23 '13 at 13:37
Dear Anton, sorry for taking too long to unnacept the answer. I was really busy these days so I didn't see the changes on the status of the question. And thanks to Misha for pointing out the
papers of Burns and Sullivan – matgaio Mar 24 '13 at 21:07
Browse other questions tagged riemannian-geometry ds.dynamical-systems or ask your own question. | {"url":"http://mathoverflow.net/questions/125107/reference-asymptotic-geodesics-on-compact-surfaces-without-conjugate-points","timestamp":"2014-04-20T08:52:31Z","content_type":null,"content_length":"50803","record_id":"<urn:uuid:e6a54a73-c543-4b0a-b19f-395377f98d64>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert lb to ton [short, US] - Conversion of Measurement Units
›› Convert pound to ton [short, US]
›› More information from the unit converter
How many lb in 1 ton [short, US]? The answer is 2000.
We assume you are converting between pound and ton [short, US].
You can view more details on each measurement unit:
lb or ton [short, US]
The SI base unit for mass is the kilogram.
1 kilogram is equal to 2.20462262185 lb, or 0.00110231131092 ton [short, US].
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between pounds and tons.
Type in your own numbers in the form to convert the units!
›› Definition: Pound
The pound (abbreviation: lb) is a unit of mass or weight in a number of different systems, including English units, Imperial units, and United States customary units. Its size can vary from system to
system. The most commonly used pound today is the international avoirdupois pound. The international avoirdupois pound is equal to exactly 453.59237 grams. The definition of the international pound
was agreed by the United States and countries of the Commonwealth of Nations in 1958. In the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act 1963.
An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7,000 grains.
›› Definition: Ton
The short ton is a unit of mass equal to 2000 lb (exactly 907.18474 kg). In the United States it is often called simply "ton" without distinguishing it from the metric ton (or tonne) and the long ton
—rather, the other two are specifically noted. There are, however, some U.S. applications for which "tons", even if unidentified, are usually long tons (e.g., Navy ships) or metric tons (e.g., world
grain production figures).
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
{"url":"http://www.convertunits.com/from/lb/to/ton+%5Bshort,+US%5D","timestamp":"2014-04-18T08:05:03Z","content_type":null,"content_length":"21032","record_id":"<urn:uuid:c5a104a3-f2e4-4e70-b91e-c302fe28da0e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Hillside, NJ Prealgebra Tutor
Find a Hillside, NJ Prealgebra Tutor
I believe that the best way to learn a subject -- no matter how basic or advanced -- is to first thoroughly understand the basics. Only after making this knowledge truly your own, will you be
able to apply these concepts -- not only to the subject at hand, but in other subjects also. A good example of this is logarithms.
10 Subjects: including prealgebra, chemistry, English, biology
...I began studying in fifth grade, via the alto saxophone. By seventh grade, I was playing both the tenor and baritone sax and participating in an advanced jazz ensemble. During my middle school
years, I attended a performing arts academy.
11 Subjects: including prealgebra, reading, writing, algebra 1
...Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal. I also make myself available by phone and e-mail outside of lessons--My goal is for you to
succeed on your tests. My expertise is in basic and advanced math: algebra 1/2, trigonometry, geometry, precalculus/analysis, calculus (AB/BC), and statistics.
10 Subjects: including prealgebra, physics, calculus, geometry
I am presently a business founder and owner for an online marketing and advertising company. Over the past few years I have given private lessons in the field of business, web design and
marketing. I have a very relaxed and comfortable (but firm) approach when it comes to my tutoring style.
22 Subjects: including prealgebra, reading, writing, business
...I know the content that I teach and know how to make that content accessible through multi-sensory and multi-intelligence approaches. In other words, I do not think there is just one way to
teach something and will use multiple strategies to find the one that clicks with a student! I am reliable and ready to tutor!
12 Subjects: including prealgebra, reading, English, writing
Related Hillside, NJ Tutors
Hillside, NJ Accounting Tutors
Hillside, NJ ACT Tutors
Hillside, NJ Algebra Tutors
Hillside, NJ Algebra 2 Tutors
Hillside, NJ Calculus Tutors
Hillside, NJ Geometry Tutors
Hillside, NJ Math Tutors
Hillside, NJ Prealgebra Tutors
Hillside, NJ Precalculus Tutors
Hillside, NJ SAT Tutors
Hillside, NJ SAT Math Tutors
Hillside, NJ Science Tutors
Hillside, NJ Statistics Tutors
Hillside, NJ Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Cranford prealgebra Tutors
Elizabeth, NJ prealgebra Tutors
Elizabethport, NJ prealgebra Tutors
Harrison, NJ prealgebra Tutors
Irvington, NJ prealgebra Tutors
Kenilworth, NJ prealgebra Tutors
Maplewood, NJ prealgebra Tutors
Roselle Park prealgebra Tutors
Roselle, NJ prealgebra Tutors
South Orange prealgebra Tutors
Springfield, NJ prealgebra Tutors
Townley, NJ prealgebra Tutors
Union Center, NJ prealgebra Tutors
Union, NJ prealgebra Tutors
Weequahic, NJ prealgebra Tutors | {"url":"http://www.purplemath.com/hillside_nj_prealgebra_tutors.php","timestamp":"2014-04-20T09:03:48Z","content_type":null,"content_length":"24244","record_id":"<urn:uuid:5b848dae-ec90-4795-b46f-5a0c44e06b7f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elliptic Curve Primality Proof
This page lists record primes that were first proven prime by the elliptic curve primality proving algorithm. It is shown here as a convenience for those watching the heated contest between the chief programmers. Originally these were François Morain (who first set a titanic prime record for proving primality via ECPP) and Marcel Martin (who wrote a version called Primo for Windows machines; both programs are available). In 2003, J. Franke, T. Kleinjung and T. Wirth greatly increased the size of numbers that could be handled with a new program of their own. Morain has worked with this trio and they have both improved their programs [FKMW2003].
Martin's Primo is by far the easiest of these programs to set up and use. There seems to be some question which is fastest on a single CPU.
ECPP has replaced the groups of order n-1 and n+1 used in the classical test with a far larger range of group sizes (see our page on elliptic curve primality proving). The idea is that we can keep
switching elliptic curves until we find one we can "factor". This improvement comes at the cost of having to do a great deal of work to find the actual size of these groups--but works for all
numbers, not just those with very special forms.
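The certificate underlying each ECPP step is the Goldwasser-Kilian criterion; roughly (stated here for orientation, with minor technical hypotheses omitted): if gcd(N, 6) = 1, E is an elliptic curve mod N, P is a point on E, and m and q are integers with q dividing m and q > (N^(1/4) + 1)^2, such that mP is the identity while (m/q)P is defined and is not the identity, then N is prime whenever q is prime. Proving q prime the same way yields the recursive chain of certificates.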
About 1986 S. Goldwasser & J. Kilian [GK86] and A. O. L. Atkin [Atkin86] introduced elliptic curve primality proving methods. Atkin's method, ECPP, was implemented by a number of mathematicians,
including Atkin & Morain [AM93]. Heuristically, ECPP is O((log n)^5+eps) (with fast multiplication techniques) for some eps > 0 [LL90]. It has been proven to be polynomial time for almost all choices
of inputs. A version attributed to J. O. Shallit is O((log n)^4+eps). Franke, Kleinjung and Wirth combined with Morain to improve their respective programs (both now use Shallit's changes), creating
what they "fastECPP" [FKMW2003].
The editors expect this page should remain our only Top Twenty Page dedicated to a proof method rather than a form of prime. Note that "fastECPP" is simply a name--their use of the adjective 'fast'
should not be construed as a comparison to programs by other authors (which may also follow Shallit's approach). | {"url":"http://primes.utm.edu/top20/page.php?id=27","timestamp":"2014-04-20T15:52:14Z","content_type":null,"content_length":"18741","record_id":"<urn:uuid:bc11a59c-96ec-4f5c-a66a-eafd164b0626>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expected number of questions to win a game
Let's say a person is taking part in a quiz competition.
For each question in the quiz there are 3 possible answers, and for each correct answer he gets 1 point.
When he gets 5 points, he wins the game.
But if he gives 4 consecutive wrong answers, then his points reset to zero (i.e., if his score is now 4 and he then gives 4 wrong answers in a row, his score resets to 0).
My question is: on average, how many questions does he need to answer to win the game?
Please, can someone give an answer?
Last edited by concept on Thu Apr 04, 2013 5:00 am, edited 1 time in total.
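A minimal Monte Carlo sketch of the setup as stated, assuming every answer is a uniform random guess so each question is correct with probability 1/3 (an exact value would come from solving the linear hitting-time equations of the Markov chain on states of score and current wrong streak):

import random

def questions_to_win(p=1/3, target=5, reset_after=4, trials=200_000):
    # Average number of questions asked before reaching `target` points.
    total = 0
    for _ in range(trials):
        score = wrong_streak = asked = 0
        while score < target:
            asked += 1
            if random.random() < p:      # correct answer
                score += 1
                wrong_streak = 0
            else:                        # wrong answer
                wrong_streak += 1
                if wrong_streak == reset_after:
                    score = 0            # four misses in a row: reset
                    wrong_streak = 0
        total += asked
    return total / trials

print(questions_to_win())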
concept wrote:Let, a person is taking part in a quiz competition.
For each questions in the quiz, there are 3 answers, and for each correct answer he gets 1 point.
When he gets 5 points, he wins the game.
But, if he gives 2 consecutive wrong answers, then his points resets to zero (i.e. if his score is now 4 and he gives 2 wrong answers, then his score resets to 0).
My question is, on an average how much questions he needs to answer to win the game?
Plz, someone give answer.
Sorry, but giving out the answers doesn't generally happen here. You'll need to participate, too.
For a start, what is the probability of answering each question correctly? Are you assuming mere chance (that is, that the person is utterly ignorant), or are there other conditions? When you reply,
please show your work so far. Thank you! | {"url":"http://www.purplemath.com/learning/viewtopic.php?t=3138","timestamp":"2014-04-21T04:54:26Z","content_type":null,"content_length":"19252","record_id":"<urn:uuid:8c7de1ff-9378-4acc-9856-70509f36124e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
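Once you've had a go, here is one way to set it up (a sketch of my own, assuming pure guessing, so p = 1/3 per question, and assuming a reset also clears the wrong-answer streak): treat the game as an absorbing Markov chain over states (score s, current wrong streak w) and solve the linear system for the expected number of questions to absorption.

[python]
# Expected questions to reach 5 points. Assumptions (not stated in the
# original post): each answer is a blind guess with probability 1/3 of
# being correct, and a reset clears both the score and the wrong streak.
import numpy as np

P_CORRECT = 1 / 3
SCORES, STREAKS = 5, 4        # score s = 0..4, wrong streak w = 0..3

def idx(s, w):
    return s * STREAKS + w

n = SCORES * STREAKS
A = np.zeros((n, n))          # E[i] - p*E[correct] - (1-p)*E[wrong] = 1
b = np.ones(n)
for s in range(SCORES):
    for w in range(STREAKS):
        i = idx(s, w)
        A[i, i] = 1.0
        if s + 1 < SCORES:                     # correct answer, no win yet
            A[i, idx(s + 1, 0)] -= P_CORRECT
        # (a correct answer at s = 4 wins: it contributes 0 further questions)
        if w + 1 < STREAKS:                    # wrong answer, streak grows
            A[i, idx(s, w + 1)] -= 1 - P_CORRECT
        else:                                  # 4th consecutive wrong answer
            A[i, idx(0, 0)] -= 1 - P_CORRECT   # everything resets
E = np.linalg.solve(A, b)
print(E[idx(0, 0)])           # expected questions starting from 0 points
[/python]

Changing P_CORRECT answers the moderator's question about what happens under other assumptions about the player's skill.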
Find a Deckers, CO Math Tutor
...I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it. I have taught Algebra 1 for many years to middle and high school students. We have worked on applications and how the material relates to real life. I realize that Algebra is a big step up from the math many students have worked at previously with great...
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...While in grad school, I was a teaching assistant responsible for an undergraduate class, spending long hours in the library with students having trouble, or those who just wanted to learn more (get deeper into) the topics we discussed in class. Also, I was an adjunct instructor at UCD where I taugh...
42 Subjects: including geometry, Arabic, ACT Math, statistics
...The sciences I have taught include earth science, physical science, chemistry and physics. The math classes I have taught include all middle school math, pre-algebra, algebra and geometry. I
have coached for the Science Olympiad, Science Fair and the middle school math competition, MathCounts, and have had several students qualify for state level.
9 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I have used Take Flight for individual tutoring, and frequently utilize these strategies with all my students. I recently attended the Diverse Learners Conference in Denver, which has helped
me better understand ADD and ADHD as learning styles. I help children set and monitor their own goals in both academics and behavior.
53 Subjects: including precalculus, trigonometry, ACT Math, SAT math
...I have a great deal of knowledge about the blues, classic rock, ska, reggae, hip-hop, funk, & alternative music. I have completed three college-level courses with a focus on C++. I became versed in object-oriented programming, video game design, and advanced programming topics from these classes...
57 Subjects: including ACT Math, English, writing, trigonometry
Programming Praxis - Happy Numbers
Today's problem had to do with Happy Numbers.
I split the problem into two parts: splitting a number up into its digits, and checking whether or not the number is a happy number.
The first part I implemented in two different ways.
Method 1:
[python]
from math import ceil, log10

def chop(x):
    r = []
    # number of digits; log10 avoids the rounding trouble that
    # log(x, 10) can show at exact powers of 10
    length = int(ceil(log10(x)))
    if log10(x) == int(log10(x)):  # exact powers of 10 need one extra digit
        length += 1
    # a(dd), s(ubtract)
    for a, s in zip(range(0, length), range(length - 1, -1, -1)):
        temp = x // (10 ** s)
        for i, e in enumerate(r):
            temp -= e * 10 ** (len(r) - i)
        r.append(temp)
    return r
[/python]
Method 2:
[python]
def chop2(x):
    return [int(d) for d in str(x)]
[/python]
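As a quick sanity check (my addition, not in the original post), both versions agree on a sample input:

[python]
>>> chop(1234)
[1, 2, 3, 4]
>>> chop2(1234)
[1, 2, 3, 4]
[/python]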
Method 2 is shorter and faster, but I really wanted to try solving it by just using numbers.
String conversions feel a bit "dirty".
Then finally, checking whether a number is a happy number or not:
[python]
def isHappyNumber(x):
    if x <= 0:
        return False
    numbers = [x]
    while x != 1:
        x = sum([a ** 2 for a in chop(x)])
        if x in numbers:
            return False
        numbers.append(x)
    return True
[/python]
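For example (again my addition), listing the happy numbers below 50 reproduces the familiar sequence:

[python]
>>> [n for n in range(1, 50) if isHappyNumber(n)]
[1, 7, 10, 13, 19, 23, 28, 31, 32, 44, 49]
[/python]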
The source can also be found on Bitbucket.