marcy mathworks bridge to algebra answers
Punchline Algebra Answer Key.
Mathematics Curriculum Salem City Schools Grade 8. Punchline Bridge to Algebra, Marcy Mathworks, 2001, pp. 84-100. Calculator Use the figure below to answer the
Marcy Mathworks Punchline Algebra Book B.
Marcy Mathworks Punchline Algebra Book B Answer Key. Enrichment And Extras Coordinate Grids 9 Punchline. Algebra. Apush Chapter 33 Open Book Test Answer Key Mlynde.
Marcy Mathworks Punchline Algebra Book A.
Cassius and Marcus Brutus are part of the conspiracy to kill Caesar for the good of Rome. Brutus used to be Caesar’s best friend but betrayed him by helping to kill him.
NEW FOR 2011! Punchline Algebra is now available as an interactive DVD. from Steve and Janis Marcy Authors of Algebra with Pizzazz, Pre-Algebra with Pizzazz,
What is the answer to how do golf balls get around 15.2 punchline algebra book b marcy mathworks?
Who is Cassius and Marcus Brutus – The.
Untitled Document.
Marcy mathworks punchline algebra b answers download on GoBookee.com free books and manuals search – Punchline Bridge To Algebra Answers
Marcy mathworks punchline algebra b.
Listed below are the 10 puzzle sections in Punchline Bridge to Algebra • 2nd Edition, each with a link to a sample puzzle from that section.
punch line algebra book a 2006 marcy mathworks answer key for page2 10
Marcy Mathworks Punchline Algebra Book A Answer Key. Apush Chapter 33 Open Book Test Answer Key Mlynde. Biology 12 Resource Exam A Answer Key. Chm 2045fall 2009 Exam
Marcy Mathworks
Lagrangian interpolation
April 20th 2010, 01:04 PM #1
Jan 2008
Lagrangian interpolation
Let Q(t) be a quadratic polynomial such that Q(0) = 0,
Q(1) = 2, and Q(3) = 6. Use Lagrangian interpolation to
obtain Q(2).
I'm not understanding these things at all.
any help?
April 20th 2010, 02:12 PM #2
Senior Member
Jan 2010
Basically, they are giving you three points on the polynomial:
$Q({\color{red}0}) = 0$
$Q({\color{blue}1}) = 2$
$Q({\color{magenta}3}) = 6$
You need to determine the quadratic (i.e. degree 2) polynomial that will go through all three of these points. The formula looks like this:
$Q(t) = 0 \cdot \frac{t-{\color{blue}1}}{{\color{red}0}-{\color{blue}1}} \cdot \frac{t-{\color{magenta}3}}{{\color{red}0}-{\color{magenta}3}} + 2 \cdot \frac{t-{\color{red}0}}{{\color{blue}1}-{\color{red}0}} \cdot \frac{t-{\color{magenta}3}}{{\color{blue}1}-{\color{magenta}3}} + 6 \cdot \frac{t-{\color{red}0}}{{\color{magenta}3}-{\color{red}0}} \cdot \frac{t-{\color{blue}1}}{{\color{magenta}3}-{\color{blue}1}}$
I've color-coded the numbers to make it easier to see where each number comes from. For example, look at just the second term:
$2 \cdot \frac{t-{\color{red}0}}{{\color{blue}1}-{\color{red}0}} \cdot \frac{t-{\color{magenta}3}}{{\color{blue}1}-{\color{magenta}3}}$
In the front we multiply by the function value at t=1, which is 2. Then there is one fraction corresponding to each point given OTHER than t=1. Hopefully you can see the general format by
studying what I am doing here. It's a bit hard to explain this online so perhaps you will get it just by studying the pattern in my work.
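For reference, the general pattern being followed is the Lagrange interpolation formula: given points $(t_0, y_0), \dots, (t_n, y_n)$,

$Q(t) = \sum_{i=0}^{n} y_i \prod_{j \ne i} \frac{t - t_j}{t_i - t_j}$

Each basis product equals 1 at $t = t_i$ and 0 at every other given point.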
Once you have written out the terms like I did above, you can simplify it to get to a simple polynomial form, although it's not completely necessary. In the problem, it asks you to solve for $Q(2)$, which just means you take the polynomial $Q(t)$ we just created and then plug in $t=2$.
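Carrying the simplification through for these particular points, as a check (the first term vanishes because it is multiplied by $Q(0) = 0$):

$Q(t) = 2 \cdot \frac{t(t-3)}{(1)(-2)} + 6 \cdot \frac{t(t-1)}{(3)(2)} = -t(t-3) + t(t-1) = 2t$

so the three given points happen to be collinear, the "quadratic" degenerates to a line, and $Q(2) = 4$.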
April 20th 2010, 02:35 PM #3
Jan 2008
ahh i get it now, thank you for that.
often represent incomplete rather than incorrect knowledge.^22 From the current research base, we can make several observations about the kinds of learning opportunities that instruction must provide
students if they are to develop proficiency with rational numbers. These observations address both representing rational numbers and computing with them.
Representing Rational Numbers
As with whole numbers, the written notations and spoken words used for decimal and common fractions contribute to—or at least do not help correct— the many kinds of errors students make with them.
Both decimals and common fractions use whole numbers in their notations. Nothing in the notation or the words used conveys their meaning as fractured parts. The English words used for fractions are
the same words used to tell order in a line: fifth in line and three fifths (for 3/5).
Research does not prescribe a one best set of learning activities or one best instructional method for rational numbers. But some sequences of activities do seem to be more effective than others for
helping students develop a conceptual understanding of symbolic representations and connect it with the other strands of proficiency.^23 The sequences that have been shown to promote mathematical
proficiency differ from each other in a number of ways, but they share some similarities. All of them spend time at the outset helping students develop meaning for the different forms of
representation. Typically, students work with multiple physical models for rational numbers as well as with other supports such as pictures, realistic contexts, and verbal descriptions. Time is spent
helping students connect these supports with the written symbols for rational numbers.
In one such instructional sequence, fourth graders received 20 lessons introducing them to rational numbers.^24 Almost all the lessons focused on helping the students connect the various
representations of rational number with concepts of rational number that they were developing. Unique to this program was the sequence in which the forms were introduced: percents, then decimal
fractions, and then common fractions. Because many children
Scope of the present volume
This volume presents Einstein's 49 contributions to Annalen der Physik, together with four introductory essays based on recent historical studies. The first three essays, by David Cassidy, Jürgen
Renn, and Robert Rynasiewicz, discuss key aspects of the scientific revolution triggered by the pathbreaking papers of Einstein's annus mirabilis 1905, which changed our understanding of space, time,
matter, and radiation. Various ramifications of these papers are worked out in Einstein’s subsequent contributions to the Annalen. These papers document Einstein’s further exploration of the quantum
hypothesis and the triumphs of statistical physics as well as various stages of Einstein's journey from special to general relativity. General relativity is the subject of the fourth historical
essay, by Michel Janssen.
The earliest contributions were written just after Einstein graduated with a teacher's diploma from the Swiss Federal Polytechnic School in Zurich; the latest while Einstein was working in Berlin as
a member of the Prussian Academy of Sciences and as director of the Kaiser-Wilhelm Institute of Physics. The rise of Nazism in Germany put an end to this glorious period of the history of science.
Einstein was forced to emigrate from Germany in 1933 and was never to return again. This volume, published in the centenary of Einstein's annus mirabilis, offers the reader a comprehensive overview
of the breathtaking scope and depth of the investigations of the towering figure of 20^th-century physics, focusing on his most productive years. The dramatically changing historical circumstances
under which these papers were written may also serve as a reminder of the fragility of the scientific enterprise and the need both to reflect on its contexts and to strengthen it by civil courage,
just as Einstein has taught us.
1 Foundation and role of the Annalen
The Annalen der Physik, one of the most influential journals in the history of physics, was founded in 1790 by Friedrich Albert Carl Gren, a professor of physics and chemistry at Halle University. As
is described in the masterful account of the rise of theoretical physics by Christa Jungnickel and Russell McCormmach (Intellectual Mastery of Nature, University of Chicago Press), the original
mission of the Annalen was to familiarize its German-speaking readership with the results of investigations pertaining to the mathematical and chemical parts of the theory of nature, including
reports from other journals, foreign as well as German. From the outset, the spirit of the journal was international and integrative and continued to be so under the subsequent editors, in particular
Ludwig Wilhelm Gilbert and Johann Christian Poggendorff, who succeeded in turning it into a principal point of reference for the German-speaking scientific community in physics and chemistry, which
included not only university professors, but also teachers, doctors, and apothecaries.
Original contributions published in the Annalen were soon translated or reported in foreign journals. In spite of the rising specialization, the editors paid close attention to the interconnections
between the broad variety of subjects treated in the articles. While the emphasis was on experimental work, the rising significance of theoretical contributions was acknowledged as well. The wide
distribution of the Annalen, which was available not only in university libraries, but also in secondary and technical schools, furthered the formation of a broadly accessible scientific culture.
Accordingly, the Annalen remained open to contributions not only from established physicists and institute directors, but also to articles submitted by students, assistants, and teachers. Its role as
an intellectual reference point was reinforced by the foundation of the Beiblätter, which offered brief reports on work not published in the Annalen.
The subjects treated in the Annalen over the years reflect the development of research in 19th century physics and chemistry. Under the editorship of Gustav Wiedemann, who took office after
Poggendorff's death in 1877, the broad perspective of the journal was maintained and occasionally even included articles on the history of science. All in all, the journal was transformed into a
means of communication oriented towards the increasingly professionalized community of physicists. Yet the growing hints at the existence and relevance of a microworld of atoms and molecules for the
understanding of nature kept alive the promise of unity in the dispersive multitude of results published in the Annalen.
The Annalen as food for thought
This was roughly the situation when the young Einstein began to avidly study the Annalen, which had been edited since 1900 by Paul Drude. Drude's work on an atomistic theory of conduction in metals
was of special interest to Einstein and the precocious young student even entered into a controversy with Drude.
Einstein's originality is often attributed to his autodidactic training. But the possibility to learn independently obviously very much depends on the availability of appropriate reading material.
Although academically isolated, it was the Annalen that offered Einstein an up-to-date overview of contemporary physics, stimulating many of the original ideas he pursued during his student days and
his time at the Swiss patent office in Bern. His contemporary correspondence suggests that he often believed he just had to put the pieces of a puzzle together in order to achieve a breakthrough,
pieces he often found in papers he read in the Annalen.
Apart from Drude's work on the electron theory of metals, which eventually stimulated Einstein's development of statistical mechanics, he also read Max Planck's work on black-body radiation and
Philipp Lenard's studies of the photoelectric effect, which triggered his work on the light quantum hypothesis. Also Wien's report on the problematic attempts to detect the translatory motion of the
ether offered an important stepping stone towards the rejection of the ether and the formulation of the special theory of relativity.
The Annalen as a source of income
The Annalen also served as a source of modest additional income for Einstein, who wrote more than twenty reports for its Beiblätter - mainly on the theory of heat - thus demonstrating an impressive
mastery of the contemporary literature. This activity started in 1905 and probably resulted from his earlier publications in the Annalen in this field. Going by his publications between 1900 and
early 1905, one would conclude that Einstein's specialty was thermodynamics.
Beginners' papers
The collection begins with what Einstein later designated his two "worthless beginners' papers," one on capillarity published in early 1901 and the other on dilute salt solutions published in 1902.
Both are dedicated to an investigation of the nature of molecular forces through the effect of such forces on phenomena in liquids, a subject Einstein also planned to investigate for his
dissertation, a plan he then abandoned.
His early exploration of a molecular theory of solutions nevertheless helped shape many of the techniques used in the dissertation he did complete in 1905. It dealt with the determination of
molecular dimensions. It was published in the Annalen in 1906 and is included in this collection. The investigations documented by Einstein's first papers also provided a motivation for generalizing
the methods of the kinetic theory, and for establishing statistical mechanics independently from Josiah Gibbs.
Statistical mechanics
The pivotal role of statistical mechanics in Einstein's early work is clearly visible in this collection. While its development was obviously driven by his early atomistic speculations, the
statistical framework he established between 1902 and 1904 provided the backbone for his papers on the light quantum and on Brownian motion of 1905. It pointed to the crucial role of fluctuations in
discerning the non-classical character of heat radiation, and revealed atomic dimensions in his analysis of Brownian motion.
The annus mirabilis 1905
Without detracting from the singularity of Einstein's 1905 papers in the history of science, this collection may help to frame these contributions in the context of his intellectual development, as
is discussed in the historical essays opening this volume. The 1905 papers deal with subjects as diverse as heat radiation, Brownian motion, and the electrodynamics of moving bodies. How were these
topics related in Einstein’s mind? In view of his earlier publication record and of insights gained from his contemporary correspondence, it seems plausible to assume that one unifying theme goes back
to Einstein's early pursuit of atomistic ideas, which includes both the quest for evidence for the existence of atoms and speculative ideas such as that of a corpuscular constitution of light.
Later these speculations turned into the exploration of the limits of classical physics, as Einstein encountered them when critically reading the Annalen. His perception of these limits was sharpened
by the philosophical acumen he had developed through his reading of authors such as Hume, Kant, Mach, and Poincaré. All three of the revolutions that Einstein initiated in 1905 originated from
problems at the borders between the major conceptual domains of classical physics: mechanics, electrodynamics, and thermodynamics. Special relativity emerged from the electrodynamics of moving
bodies, an area at the intersection of electrodynamics and mechanics; the light quantum hypothesis can be seen as an attempt to cope with the problem of heat radiation, a problem at the intersection
of electrodynamics and thermodynamics; while Einstein's work on Brownian motion deals with a borderline problem of mechanics and thermodynamics.
The year 1905 was just the beginning of Einstein's career and of the scientific revolution triggered by his pathbreaking contributions. This becomes evident from his own subsequent publications,
which show that Einstein’s contributions should not be seen as a series of isolated achievements, but as integrated in a lively scientific context, involving collaborative efforts and discussions -
polemics even - with his colleagues.
Electrodynamics in moving media
Einstein's 1908 paper with Jakob Laub on the electrodynamics of moving media was, for instance, a direct continuation of his 1905 work on the electrodynamics of moving bodies, which focused on
microscopic electron theory, extending it, following prior work by Minkowski, to the macroscopic theory of electromagnetic and optical phenomena in polarizable and magnetizable material media in
motion. It was in this context that Einstein was first confronted with the four-dimensional spacetime formalism developed by Minkowski. In their own work, Einstein and Laub avoided this formalism,
the value of which Einstein only gradually learned to appreciate.
Specific heats
The present collection also documents Einstein's early efforts to further explore the consequences of his revolutionary interpretation of Planck's formula for black-body radiation as hinting at a
non-classical foundation of physics. Such an exploration was needed all the more since Einstein's interpretation - in particular the light quantum hypothesis - met, in contrast to his other 1905
achievements, with little sympathy from his established colleagues.
A first milestone of this exploration was Einstein's 1907 paper on the specific heat of solid bodies, which exploited the insight into the non-classical behavior of atomic oscillators for a new
understanding of the thermal properties of solid bodies, in particular at lower temperatures. The experimental confirmation in Nernst's laboratory of the prediction of the decrease of specific heats
with temperature turned out to be crucial for Einstein's career and his eventual move to Berlin in 1914.
Elastic behavior of solids
This line of research is continued in a paper of 1911 about the relation between molecular vibrations and optical wavelengths in the infrared region, which exploits the connection that Einstein had
established between molecular vibrations and specific heats. He thus succeeded in propagating the quantum discontinuity from its original locus in radiation theory to yet another range of physical
phenomena, identifying, very much in the vein of his early atomistic speculations, a link between the thermal and mechanical properties of a solid.
Collaboration with Hopf
Planck and others remained skeptical of Einstein’s claim that a new radiation theory was required. Challenged by this skepticism, Einstein in 1910 published two papers together with Ludwig Hopf on the
statistical properties of the radiation field. Their main purpose was to provide support for the claim that classical radiation theory leads to unacceptable implications for heat radiation and that
Planck's radiation formula does imply a break with classical physics.
Critical opalescence
Einstein's 1910work on critical opalescencewas both a direct continuation of his earlierwork on fluctuations and a reaction to a contemporary issue raised by the Polish physicist, Marian von
Smoluchowski, who in 1905 had analyzed independently from Einstein the statistical properties of Brownian motion. In 1908 Smoluchowski published a paper on critical opalescence in the Annalen, which
dealt with the optical effects occurring near the critical point of a gas and near the critical point of a binary mixture of liquids. In his paper, Einstein provided a quantitative derivation of the
effect from a treatment of density fluctuations. His key insight was that both critical opalescence and the blue color of the sky can be explained with the help of such density fluctuations, which
originate from the atomistic constitution of matter.
Photochemical equivalence law
Another contribution illustrating Einstein's attempts to explore the quantum hypothesis at a time when he had already begun to despair about ever capturing it in a coherent theory is his influential
1912 paper about the photochemical equivalence law, the beginning of a line of research that would lead him in 1916 to his ground-breaking rederivation of Planck's law based on the concepts of
spontaneous and induced emission.
Zero-point energy
The 1913 paper by Einstein and Otto Stern also testifies to the early struggle to understand the status of Planck's radiation law and its implications for applying the quantum hypothesis to the
atomistic conception of matter. Einstein and Stern attempted to develop a quantum theory of rotating diatomic molecules, showing that the notion of zero-point energy - first introduced by Planck
in his "second quantum theory" - could be used to interpret measurements of the specific heat of hydrogen at low temperatures. But Einstein soon became skeptical of some of the arguments in this
paper and considered zero-point energy, as he put it in a letter to his friend Paul Ehrenfest, "as dead as a doornail."
Light deflection
While ever more desperate about the quantum, Einstein became increasingly involved with the idea of formulating a relativistic field theory of gravitation, modeled on electromagnetic field theory. As
early as 1907, while working on a review of special relativity, he had realized that, if such a theory were to incorporate Galileo's principle that all bodies fall with the same acceleration, it would
require yet another fundamental revision of our concepts of space and time. This led Einstein to formulate his famous equivalence principle, by which gravitation and inertia ultimately became as
intertwined as the electric and the magnetic field in the first relativity revolution.
This collection contains some of the early papers marking Einstein's path from special to general relativity such as his 1911 paper predicting the deflection of light by the gravitational field of
the sun.
Static gravitational fields
The collection also includes a number of papers illustrating some of the heuristic strategies Einstein adopted as well as some of the obstacles he had to overcome in his search for a relativistic
field theory of gravitation. As documented by the papers in this volume, he started in 1912 by treating the special case of a static gravitational field with the help of the equivalence principle,
which allowed him to use knowledge about acceleration in the absence of gravity to draw conclusions about physical effects in the presence of a gravitational field.
While Einstein was making impressive advances in this way, such as the prediction of light deflection and the recognition of the need for non-Euclidean geometry, these early successes consolidated a framework of
expectations rooted in classical physics, many of which had to be abandoned or seriously modified before general relativity could be established.
Controversy with Abraham
One can argue that, unlike special relativity, general relativity was essentially the achievement of a single man. As a matter of fact, most of Einstein's established colleagues were skeptical about
his attempt to build a new theory of gravitation on the idea of curved spacetime described by a ten-component metric tensor rather than the familiar scalar potential of Newton's theory.
It is important to realize, however, that Einstein was not only supported by some friends and collaborators such as his Swiss companions Marcel Grossmann and Michele Besso and by the astronomer Erwin
Freundlich, but that he also had to face competitors and opponents who provided his endeavor with a scientific context that was crucial for the emergence of general relativity. It was Max Abraham,
for instance, and not Einstein, who first formulated a comprehensive gravitational field theory in 1912, thus challenging Einstein to integrate his own considerations based on the equivalence
principle into a coherent theory as well. Our collection contains the papers resulting from these efforts while offering some glimpses of the heated controversy in which this early competition was embedded.
Nordström's special relativistic theory of gravitation
While Einstein was initially convinced that the problem of gravitation could not successfully be addressed within the framework of special relativity, Abraham's failed attempt to provide such a theory was followed by a more convincing theory developed by Gunnar Nordström in the years between 1912 and 1913. Nordström's theory was a serious competitor of nascent general relativity. It might
well have become the dominating relativistic theory of gravitation for some time had it not been for Einstein's philosophically motivated quest to combine such a theory with the attempt to generalize
the principle of relativity. This collection features a paper resulting from a collaboration with Adriaan Fokker and showing how Nordström's theory can be reformulated in terms of the absolute
differential calculus, the mathematical language Einstein had adopted in his own search for a field theory of gravitation. In this way, it became possible to compare the two approaches more directly
and to reveal the assumptions underlying Nordström's theory. At the same time it suggested that Nordström's theory, like Einstein's, went beyond special relativity and would likewise involve curved spacetime.
Foundations of general relativity
It took Einstein eight years, from 1907 to 1915, to attain his goal of a relativistic field theory of gravitation that preserved both the heritage of mechanics and that of field theory. The drama of
this struggle with the conceptual foundations of classical and special relativistic physics is documented by Einstein's research manuscripts, by his correspondence, by several intermediary
publications, and in particular by the famous sequence of communications to the Prussian Academy of November 1915.
A comprehensive reconstruction of this drama including key sources appears elsewhere (The Genesis of General Relativity, Kluwer Academic Publishers, edited by J. Renn). The present collection
features the outcome of this quest - the general theory of relativity - in the form of Einstein's first masterful exposition of the finished theory in his famous 1916 contribution to the Annalen.
This paper bears clear traces of the gestation period of the theory, as is demonstrated in the historical essay of Michel Janssen.
Einstein's subsequent work on general relativity is no longer extensively documented in the Annalen. As a newly minted member of the Prussian Academy in Berlin, his outlet of choice in this period
are the Academy's own Sitzungsberichte. Both the four celebrated papers of November 1915 documenting the final breakthrough in Einstein's search for a relativistic field theory of gravity and the
famous paper on cosmology of 1917 appeared in the Sitzungsberichte. This volume, however, does contain a short but important paper of 1918 on the foundations of general relativity, in which Einstein
formally introduced what he called "Mach's Principle," the requirement that matter fully determines the metric field. The volume ends with a short paper of 1922 providing at least a hint at the fate
of general relativity, which was subsequently turned from a philosophically motivated integration of the classical knowledge about gravitation with the kinematics of relativity into the theoretical
foundation of modern cosmology describing an expanding universe. In this 1922 paper, Einstein reacted to a proposal by Franz Selety for resolving Einstein's objections to Newtonian cosmology of 1917
by what he called a "hierarchical molecular world." Einstein rejected this proposal because it did not, in his view, comply with Mach's principle. He also rejected the interpretation of the spiral
nebulae as galaxies similar to our own Milky Way, referring to the evidence of contemporary observations. The cosmological mission of general relativity was yet to be accomplished.
The present collection offers a first entry point into Einstein's work, which is being published comprehensively in an annotated documentary edition by the Collected Papers of Albert Einstein
(Princeton University Press). Here the reader will find more extensive commentaries and annotations that offer insights into the genesis and historical context of Einstein's papers. In line with
Einstein's legacy and spirit of broadly sharing scientific knowledge, the Editor-in-Chief of the Annalen, Ulrich Eckern, and WILEY-VCH have consented, in agreement with the Albert Einstein Archives
at the Hebrew University Jerusalem and the Collected Papers, and in collaboration with the Max Planck Society, to make the papers in this collection freely accessible on the Internet.
Acknowledgements
I would like to thank Lindy Divarci for her role as editorial assistant in the preparation of this volume.
Jürgen Renn (Berlin)
This text is taken from "Einstein’s Annalen Papers", WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim, 2005
Homework Help
Posted by Rebekah on Monday, October 22, 2012 at 8:29pm.
#1. (3x^3-4x^2-3x+4)/(x^3-5x)
my answers:
y-int: NONE
x-int: 1, -1, and 4/3
x asymptote: x = 0 and +- square root of 5
y asym: y=3
If it crosses horiz asym: idk i need help on this
#2. (x^4-7x^2+12)/(x^2-5x+4)
x-int: -2, 2, and +- square root of 3
y-int: 3
x-asy: x= 4 and 1
y asym: y=-7/4x^2
I cant find if/where it intersects the horiz asym
y-int: (0,5/3)
x-int: 0 and +- square root of 5
xasym: x= 0 and +- square root of 3
I cant find the y asym or where it intersects
#4 (1-x)/(x^3-2x^2+x-2)
x asym: x=2
x-int: 1,0
y int: (0, -1)
yasym: y=0
once again i cant find where it intersects
Thankyou soooo much!! i know its longgg....
• Precalc - Steve, Monday, October 22, 2012 at 11:56pm
take a visit to
and you can play around with graphing functions. It should help a lot, especially as you review what you already (maybe) know.
You can also use wolframalpha.com for a "big picture" plot, without having to worry about domain and range. Just type in the function in the box, and it will plot it, as well as show other
interesting related info.
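As a worked illustration of the part you said you were stuck on in #1 (whether the graph crosses its horizontal asymptote): set the function equal to the asymptote value and solve. With y = 3,

3x^3 - 4x^2 - 3x + 4 = 3(x^3 - 5x)
-4x^2 - 3x + 4 = -15x
4x^2 - 12x - 4 = 0
x^2 - 3x - 1 = 0
x = (3 ± √13)/2

Neither root makes the denominator zero, so the graph of #1 does cross y = 3, at those two x-values. (For #2, note the numerator's degree exceeds the denominator's by two, so there is no horizontal asymptote at all; long division gives the quadratic asymptote y = x^2 + 5x + 14 instead.)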
MathGroup Archive: June 2000 [00295]
Re: plotting surfaces
• To: mathgroup at smc.vnet.net
• Subject: [mg24018] Re: plotting surfaces
• From: hwolf at debis.com
• Date: Tue, 20 Jun 2000 03:07:33 -0400 (EDT)
• Organization: debis Systemhaus
• References: <8i9oit$2b5@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
A Prashanth pg ee wrote:
> I have a set of functions for plotting. Some of them are algebraic, some
> transcendental, some are explicitly solvable for z and some are not (my
> functions, of course, have three variables: x, y and z).
> Could you tell me, sir, the relevant commands for plotting these surfaces.
> Do I need Mathematica 3.0 as well? I have right now the 2.2 version with
> me.
> One of my functions, for instance, happens to be sin(xyz) + log(z) = 1, which
> clearly is not solvable for z. So how would I proceed with the plot with
> the version I right now have?
> sincerely,
> prashanth.
I think it's not so much a matter of the Mathematica Version (except possibly
for speed); I'd advise you to
(1) try to get an explicit description of your surface (z = z[x,y]); then use Plot3D
(2) else try to get a parametric description (x=x[u,v], y=y[u,v], z=z[u,v]);
then use ParametricPlot3D
(3) else you may try
<< Graphics`ContourPlot3D`
ContourPlot3D[Sin[x y z] + Log[z], {x, 1/2, 2}, {y, 1/2, 2}, {z, 0.01, 10.},
Contours -> {1}, BoxRatios -> {1, 1, 1}, PlotPoints -> 7] // Timing
This took approx. 7 minutes on my machine, and gives you only a short glimpse at
the surface.
(4) Better you try to _study_ the surfaces beforehand, e.g. do
Plot[Sin[z] + Log[z], {z, 0., 10 Pi}]
p = Plot3D[Sin[x z] + Log[z], {x, 0.01, 3.}, {z, 0.01, 5 Pi}, PlotPoints -> 40]
Show[p, ViewPoint -> {-1.3, -2.4, 2.}]
Perhaps the best (affordable) view for that surface is with
<< Graphics`ImplicitPlot`
ImplicitPlot[Sin[x z] + Log[z] == 1, {x, 0.01, 10.}, {z, 0.01, 8.},
PlotPoints -> 150]
although this gives you only a 2-dim picture, you can see much more from this
than from the 3-dim ContourPlot3D above (since the dependency on x and y is only
through x*y).
But if you really *need* the 3-dim plot, you could construct it from that last
computation; though not complicated, that will cost you some programming work
(build 3d-polygons from the 2d-line you have got).
Kind regards,
Hartmut Wolf
Evariste Galois: The Man Who Never Lived
Evariste Galois, the famous French mathematician whose life is tragic and inspiring at the same time was born 200 years ago. Gonit Sora is celebrating his life by bringing forth a series of articles
on the life and works of Galois. This is the first of the planned five articles to be published throughout this month and beyond.
A famous and oft repeated quote is “Whom Gods love, die young!” Although, there seems to be no scientific evidence nor any coherent study confirming or discarding this statement, it has been noticed
now and again that great men indeed die young. Not everyone, of course; but there have been some glaring examples in many fields; take for instance the great poet John Keats who died very young. But
the field in which such examples are galore is mathematics. Throughout the history of mathematics, there have been many examples of extraordinarily brilliant minds living for a very short span of time. Take for example the great Indian mathematician S. Ramanujan, who died at the young age of 32; the famous Norwegian mathematician Niels Henrik Abel died at the age of 27. Some other notable people who lived for a short span of time are Riemann and Pascal, both geniuses of the first order, and both could have achieved a lot had fate been kinder to them. However, no example is as tragic and as heart-touching as that of Evariste Galois, the now famous French mathematician who died at the age of 20!
Evariste Galois (pronounced ‘Gelwa’) was born in Bourg-la-Reine in the then French Empire on 25^th of October, 1811. Galois, like many mathematicians before and after him, showed a tenacity and zeal for higher mathematics at a very small age that could only be described as haunting. Galois started his formal education at the age of 10, being self-tutored at home, and later joined the Lycée in his hometown. As was expected, Galois showed a tremendous amount of scholarship in his studies and soon rose to the top of his class. But such is the tale of genius that at the age of 14, he
became bored with the regular school curriculum and started taking an uncanny liking towards mathematics. This was eventful not only for him, but for the whole of mathematics as he did some
pioneering work in the fields he touched upon; even now we are yet to reap the full benefits of the seeds that he sowed.
During this period of his life, Galois began studying the masters of mathematics. It is said that he finished the famous mathematician Legendre’s book on Geometry in almost 5 days cover to cover and
all the while he read it like a novel. It must be mentioned that even now professional mathematicians find this book too difficult to master. At the age of 15, Galois started to follow the original
research papers of another great mathematician, Lagrange. This not only fueled his deep passion for mathematics but also encouraged him to unravel the mathematical mysteries on his own. In April,
1829 Galois published his first research paper, on continued fractions, at the age of only 17. Thus began the journey of a legend. Galois’ deep and varied contributions in many different fields of mathematics have earned him the respect and adulation of one and all today. He was the first person to use the word ‘group’ to define a certain class of mathematical objects that are today
omnipresent not only in almost all branches of mathematics but in fields as varied as physics, chemistry, biology, engineering and even economics.
Galois, after completing his school with excellent marks in mathematics, decided to try and enter the distinguished Ecole Polytechnique, and so sat its entrance exam. However, Galois failed to secure a seat in this institute of unique importance and had to enroll in the far inferior Ecole Normale Supérieure. Here Galois studied for some time, and then again decided to try and enter the Ecole Polytechnique. Meanwhile, on the personal front, Galois lost his father, who committed suicide by hanging himself in public. This was a major blow to the teenaged Galois and further fueled his Republican tendencies. The French nation was at that time going through enormous imbalance in its monarchy and system of governance. Galois too decided to join the revolution at the cost of his mathematics. History is testament to the fact that Galois was even jailed a few times for his revolutionary activities, and this got him into trouble even in his institution. All these incidents happened when he was preparing for his entrance exam at the Polytechnique. It was again a surprise when Galois failed a second time to clear it. The genius of Galois was not recognised at that hallowed institution of learning. Eric Temple Bell, the famous historian of mathematics, in his book “Men of Mathematics” writes
“People not fit to sharpen his (Galois’) pencils sat on judgement of him.”
Such failure prompted Galois to almost leave doing mathematics and light the fire of revolution once again, which was later the cause of his death too.
Galois’ major contribution to mathematics lies in his theory of equations, where he gave a very novel approach to one of the major outstanding problems of his time. He, along with Abel, showed the impossibility of solving the general quintic equation by radicals. This is regarded as a giant leap in the 19^th century mathematical scene. Galois made fundamental contributions to a new field of mathematics which is now termed ‘Galois Theory’. Galois wrote one paper on Number Theory where he discussed the concept of a ‘finite field’ for the first time. Galois’ entire mathematical
research output was a mere 66 pages. This was all that he gave to world mathematics, and this is what made him immortal. It took major advances in group theory to fully understand the implications of
the works of Evariste Galois.
The story of this great man came to a very cruel end on 31^st May, 1832 at Paris, when he was just 20 years old. Galois was killed in a duel. There have been numerous speculations as to what
may have been the cause of his death, and it seems that the most likely explanation could be that he fell in love with his physician’s daughter and it was at her instigation that he challenged
someone to a duel and was killed as a result. The sadder part of this story is that Galois didn’t receive any medical attention for many hours after he was shot; maybe this giant of mathematics could have been saved had help arrived on time. Galois died a very slow and painful death at the tender age of 20, and the world lost a brilliant mind who was just showing his capabilities. His last words
to his brother were
“Don’t cry, Alfred! I need all my courage to die at twenty.”
Galois never received the admiration from his peers that he should have received in his lifetime. Even his grave is unmarked, and he died almost an anonymous person. It was only years after his death, when the letters and manuscripts that Galois wrote just before he died were published, that the world started revering Galois and his unparalleled genius. The night before he died, Galois, sensing his end was near, wrote down many letters, both mathematical and political, to his numerous friends and his brother. These letters contain some very thought-provoking mathematical ideas that have forever sealed Galois’ name in the annals of mathematical wizardry. The famous mathematician Hermann Weyl, while describing these letters, said
“This letter, judged by the novelty and profundity of ideas it contains, is perhaps the most substantial piece of writing in the whole literature of mankind.”
Galois may have died, but his legacy still lingers on. His life shows us what legends are made of, and is a true testament to the fact that whether a man is a legend or not, is determined by history,
not fortune tellers. Galois seems to be a perfect man on whom the words of Albert Einstein used to describe Mahatma Gandhi fit perfectly
“Generations to come and generations to go will scarcely believe that such a one as he ever walked upon this earth in flesh and blood.”
[This article is written by our Editor and Co-Founder Manjil P. Saikia, one of whose heroes is Galois.]
Proving the Interior of S is null
September 18th 2011, 07:34 PM #1
Junior Member
Oct 2009
Proving the Interior of S is null
Again, I'm not sure how to prove this statement either. If you could show me how to prove this that'd be great! Much appreciated:
Let S = {(x,y): y = 1, 0 <= x <= 1}. Prove that Interior(S) = ∅ (the empty set).
September 18th 2011, 07:50 PM #2
Re: Proving the Interior of S is null
Let me ask you this: if $\text{int}(S)$ were non-empty, then you'd be able to find some $(x,y)\in S$ and some open ball $B$ with $(x,y)\in B\subseteq S$. Now, is it possible for an open ball to contain points with only ONE $y$ coordinate?
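For completeness, a sketch of how that observation finishes the proof: take any $(x,1)\in S$ and any radius $r>0$. The open ball $B((x,1),r)$ contains the point $(x,\,1+r/2)$, whose $y$-coordinate is not $1$, so that point lies outside $S$. Hence no open ball about a point of $S$ fits inside $S$, and therefore $\text{Interior}(S)=\emptyset$.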
An Efficient Radix Sort
We now consider the problem of sorting n integer keys in the range [0, M-1] that are distributed equally over a p-processor distributed memory machine. An efficient and well known stable algorithm is
radix sort, which decomposes each key into r-bit blocks, for a suitably chosen r, and sorts the keys by sorting on each of the r-bit blocks beginning with the block containing the least
significant bit positions. Here we only sketch the algorithm Counting Sort for sorting on individual blocks.
The Counting Sort algorithm sorts n integers in the range [0, R-1] by using R counters to accumulate the number of keys equal to i in bucket i, for each i in [0, R-1], computing the rank of each key from prefix sums of these counts, and finally using an h-relation personalized communication to move each element into the correct position; in this case h = n/p. Counting Sort is a stable sorting routine; that is, if two keys are identical, their relative order in the final sort remains the same as their initial order.
The pseudocode for our Counting Sort algorithm uses six major steps and is as follows.
• Step (1): For each processor i, count the frequency of its keys; that is, compute I[i][k], the number of keys equal to k, for each k in [0, R-1].
• Step (2): Apply the transpose primitive to the I array using the block size R/p.
• Step (3): Each processor locally computes the prefix-sums of its rows of the array I.
• Step (4): Apply the (inverse) transpose primitive to the R corresponding prefix-sums augmented by the total count for each bin. The block size of the transpose primitive is again R/p.
• Step (5): Each processor computes the ranks of local elements.
• Step (6): Perform a personalized communication of keys to rank locations using our h-relation algorithm for h = n/p.
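To make the per-block pass concrete, here is a minimal sequential sketch of Counting Sort in Python; the function names and the choice r = 8 are illustrative rather than taken from the paper, and the parallel algorithm above additionally redistributes the count array across processors with the transpose primitive so that the prefix sums can be computed locally.

```python
def counting_sort_by_block(keys, shift, r):
    """Stably sort `keys` on the r-bit block starting at bit `shift`."""
    R = 1 << r
    mask = R - 1
    counts = [0] * R
    for k in keys:                        # analogue of Step (1): histogram
        counts[(k >> shift) & mask] += 1
    ranks = [0] * R                       # analogue of Step (3): exclusive
    for i in range(1, R):                 # prefix sums give each block value
        ranks[i] = ranks[i - 1] + counts[i - 1]   # its first output slot
    out = [0] * len(keys)
    for k in keys:                        # analogue of Steps (5)-(6): route
        b = (k >> shift) & mask           # each key to its rank; scanning in
        out[ranks[b]] = k                 # input order keeps the sort stable
        ranks[b] += 1
    return out

def radix_sort(keys, key_bits=32, r=8):
    """Sort non-negative integers, least significant r-bit block first."""
    for shift in range(0, key_bits, r):
        keys = counting_sort_by_block(keys, shift, r)
    return keys
```

For example, radix_sort([170, 45, 75, 90, 2, 802, 24, 66]) returns the keys in ascending order; stability of each pass is what makes the block-by-block composition correct.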
Table vi: Total execution time for radix sort on 32-bit integers (in seconds), comparing the AIS and our implementations.
Table vi presents a comparison of our radix sort with another implementation of radix sort in SPLIT-C by Alexandrov et al. [1] This other implementation, which was tuned for the Meiko CS-2, is
identified in the table as AIS, while our algorithm is referred to as BHJ. The input [R] is random, [C] is cyclically sorted, and [N] is a random Gaussian approximation [4]. Additional performance
results are given in Figure 5 and in [4].
Figure 5: Scalability of radix sort with respect to machine and problem size, on the Cray T3D and the IBM SP-2-TN
David R. Helman
H-E-L-P!!! Pentagon ABCDE is shown on the coordinate plane below. If pentagon ABCDE is rotated 180° around the origin to create pentagon A’B’C’D’E’, what is the ordered pair of point C’? (-5, 2), (5, 2), (2, -5), (-2, 5)
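Whatever the coordinates of C read from the figure, a 180° rotation about the origin follows a single rule: every point (x, y) is sent to (-x, -y). So C’ is obtained by negating both coordinates of C; for example, a vertex at (2, -5) would land at (-2, 5).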
open quantum system
Open systems and reversibility
Consider two quantum systems, Q and E, where Q is some system of interest and E is some system that is external to Q and that is in some fixed pure state $|e\rangle$. Now let us suppose that the two
systems interact and evolve via some unitary operator on the combined Hilbert space of each, $\mathcal{H}^{(QE)}$. In this situation Q is known as an open system and E is the environment.
In the dilation construction of quantum states (see Stinespring’s dilation theorem), i.e. in the quantum operation formalism, the evolution of a system is often written in the more condensed manner
$\rho'=\varepsilon (\rho)$.
Here we refer to $\varepsilon (\rho)$ as a superoperator.
Suppose $\varepsilon$ is a linear map on Q-operators. Then the following three conditions are equivalent:
• $\varepsilon$ represents a “physically reasonable” evolution for density operators on Q.
• $\varepsilon$ is given by unitary evolution on an extended system as in the quantum operation formalism.
• $\varepsilon$ has a Kraus decomposition with normalized Kraus operators as in the quantum channel formalism.
This is proven in Appendix D of
• Schumacher, Benjamin and Westmoreland, Michael, Q-PSI: Quantum Processes, Systems, and Information, Cambridge University Press, Cambridge, 2010
where there is also an explanation of “physically reasonable.”
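As a small numerical illustration of the Kraus condition in the lemma, $\varepsilon(\rho) = \sum_k A_k \rho A_k^\dagger$ with $\sum_k A_k^\dagger A_k = 1$, here is a sketch in Python; the amplitude-damping channel and the value of gamma are standard textbook choices, not anything taken from the text above, and the helper name apply_channel is mine.

```python
import numpy as np

# Amplitude-damping channel on one qubit: a standard example of a Kraus
# decomposition rho' = sum_k A_k rho A_k^dagger with sum_k A_k^dagger A_k = I.
gamma = 0.3  # illustrative damping probability
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
A1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
kraus = [A0, A1]

def apply_channel(rho, ops):
    """The superoperator epsilon(rho) written out as a Kraus sum."""
    return sum(A @ rho @ A.conj().T for A in ops)

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # the pure state |+><+|
rho_out = apply_channel(rho, kraus)

# Normalized Kraus operators resolve the identity, so the evolution is
# trace preserving -- one of the "physically reasonable" requirements.
assert np.allclose(sum(A.conj().T @ A for A in kraus), np.eye(2))
assert np.isclose(np.trace(rho_out).real, 1.0)
```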
Ian Durham: Is there a convenient category theoretic way to prove the above lemma?
Posts about Gluon Propagator on The Gauge Connection
Dust is finally settling…
The situation with Yang-Mills theory is finally settling down. I do not mean that the mathematicians’ community has finally decided the winner of the Millennium Prize, but rather that people working on the study of two-point functions in a pure Yang-Mills theory finally have a complete scenario for it. These studies have seen very hot debates and breakthrough moments with the use of important computing resources at different facilities. I have tried to sum up this very beautiful piece of the history of physical science here. Just today a paper by Attilio Cucchieri, David Dudal and Nele Vandersickel has appeared on arXiv, making clear a fundamental aspect of this scenario. Attilio is a principal figure in the Brazilian group that carried out fundamental results in this area of research and was instrumental in the breakthrough at Regensburg 2007. David and Nele were essential to the realization of the Ghent conference in 2010, and their work, as we will see in a moment, displays interesting results that could be important for a theoretical understanding of Yang-Mills theory.
The question of the Green functions of Yang-Mills theory can be recounted as two very different views about their behavior at very low energies. Understanding the behavior of these functions in this energy limit could play an essential role in understanding confinement, one of the key problems of physics today. Of course, propagators depend on the gauge choice, and so, when we talk of them here, we just mean in the Landau gauge. But they also encode some information that does not depend on the gauge at all, such as the mass spectrum. So, if one wants to know whether the gluon becomes massive and how big that mass is, she should turn her attention to these functions. But also, if I want to do QCD at very low energies I need these functions to be able to do computations, something that theoretical physicists are not yet able to do precisely, missing this piece of information.
In the ’90s, the work performed by several people seemed to convince everyone that the gluon propagator should go to zero lowering momenta and the ghost propagator should run to infinity faster than in the case of a free particle. Difficulties with computational resources made it impossible to reach, on the lattice, volumes large enough to draw clear-cut conclusions about this. But another solution was emerging, with a lot of difficulties and while a paradigm seemed already imposed, showing that the gluon propagator should reach a finite non-null limit at zero momenta and the ghost propagator should behave like that of a free particle. A massive gluon propagator had already been proposed in the ’80s by John Cornwall, and this idea was finally gaining interest. After Regensburg 2007, this latter solution finally came into play, as lattice results on huge volumes showed unequivocally that the massive solution was the right one. The previous solution was then called the “scaling solution” while the massive one was dubbed the “decoupling solution”.
A striking result obtained by Axel Maas (see here) showed that, in two dimensions, the propagators agree with the scaling solution. This is quite different from the three- and four-dimensional case, where the massive solution is seen instead. This problem was a main concern for people working on the lattice, as a theoretical understanding was clearly needed here. Attilio asked me if I could come up with an explanation from my approach. I have found a possible answer here, but this was not the answer Attilio was looking for. With this paper he has found the answer by himself.
The idea is the following. In order to understand the behavior of the propagators in different dimensions one has to solve the set of coupled Dyson-Schwinger equations for the ghost and gluon
propagators as one depends on the other. In this paper they concentrate just on the equation for the ghost propagator and try to understand, in agreement with the no-pole idea of Gribov that the
ghost propagator must have no poles, when its solution is consistent. This is a generalization of an idea due to Boucaud, Gómez, Leroy, Yaouanc, Micheli, Pène and Rodríguez-Quintero (see here):
Consider the equation of the ghost propagator and compute it fixing a form for the gluon propagator, then see when the solution is physically consistent. In their work, Boucaud et al. fix the gluon
propagator to be Yukawa-like, a typical massive propagator for a free particle. Here I was already happy because this is fully consistent with my scenario (see here): I have a propagator being the
sum of Yukawa-like propagators typical of a trivial infrared fixed point where the theory becomes free. Attilio, David and Nele apply this technique to a propagator devised by Silvio Paolo Sorella,
David Dudal, John Gracey, Nele Vandersickel and Henry Verschelde that funded the so-called “Refined Gribov-Zwanziger” scenario (see here). The propagator they get can be simply rewritten as the sum
of three Yukawa propagators and so, it is fully consistent with my results. Attilio, David and Nele use it to analyze the behavior of the ghost propagator and to understand its behavior at different
dimensions, using the Gribov no-pole condition. Their results are indeed striking. They recover a critical coupling at which the scaling solution works in 2 and 3 dimensions: only when the coupling has this particular value can the scaling solution apply, but this is not the real case. Also, as Attilio, David and Nele remind us, this critical point is unstable, as recently shown by Axel Weber (see here). This agrees with the preceding finding by Boucaud et al. but extends the conclusions to different dimensions. In two dimensions a strange thing happens: there is a logarithmic singularity at one loop for the ghost propagator that can only be removed by taking the gluon propagator to zero, so that the Gribov no-pole condition holds. This is indeed a beautiful physical explanation and
gives an idea of what is going on with these propagators as the dimension changes. I would like to emphasize that the refined Gribov-Zwanziger scenario also agrees perfectly well with my idea of a trivial infrared fixed point, which is also confirmed by lattice data, the gluon propagator being a sum of Yukawa propagators. I think we can merge our results at some stage by fixing the parameters.
Given the clear view that has finally emerged, maybe it is time to turn to phenomenology. There are a lot of people, for example at CERN, waiting for fully working models of low-energy QCD.
All the people I cited here and a lot more I would like to name have given the answer.
Attilio Cucchieri, David Dudal, & Nele Vandersickel (2012). The No-Pole Condition in Landau gauge: Properties of the Gribov Ghost
Form-Factor and a Constraint on the 2d Gluon Propagator arXiv arXiv: 1202.1912v1
Axel Maas (2007). Two- and three-point Green’s functions in two-dimensional Landau-gauge Yang-Mills theory Phys.Rev.D75:116004,2007 arXiv: 0704.0722v2
Boucaud, P., Gómez, M., Leroy, J., Le Yaouanc, A., Micheli, J., Pène, O., & Rodríguez-Quintero, J. (2010). Low-momentum ghost dressing function and the gluon mass Physical Review D, 82 (5) DOI:
Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6
David Dudal, John Gracey, Silvio Paolo Sorella, Nele Vandersickel, & Henri Verschelde (2008). A refinement of the Gribov-Zwanziger approach in the Landau gauge: infrared propagators in harmony with
the lattice results Phys.Rev.D78:065047,2008 arXiv: 0806.4348v2
Axel Weber (2011). Epsilon expansion for infrared Yang-Mills theory in Landau gauge arXiv arXiv: 1112.1157v1
No scaling solution with massive gluons
Some time ago, while I was just at the beginning of my current understanding of low-energy Yang-Mills theory, I wrote to Christian Fischer to ask whether a mass gap could be derived from the scaling solution, the one with the gluon propagator going to zero at lower momenta and the ghost propagator running to infinity faster than that of a free particle in the same limit. Christian has always been very kind in answering my requests for clarification and did the same for this very particular question, telling me that this indeed was not possible. This is a rather disappointing truth, as we are accustomed to the idea that short-ranged forces need some kind of massive carrier. But physics has taught us that a first intuition can be wrong, and so I decided not to take this as an argument against the scaling solution. Until today.
Looking at arxiv, I follow with a lot of interest the works of the group of people collaborating with Philippe Boucaud. They support the decoupling solution, as this is what comes out of their numerical computations through the Dyson-Schwinger equations. A person working with them, Jose Rodríguez-Quintero, is producing several interesting results in this direction, and the most recent ones
appear really striking (see here and here). The question Jose is asking is: when and how does a scaling solution appear in solving the Dyson-Schwinger equations? I would like to recall that this kind of solution was found from these equations with a truncation technique, and so it is really important to better understand its emergence. Jose solves the equations with a method recently devised by Joannis Papavassiliou and Daniele Binosi (see here) to get a sensible truncation of the Dyson-Schwinger hierarchy of equations. What is different in Jose's approach is to try an ansatz with a massive propagator (this just means Yukawa-like) and to see under what conditions a scaling solution can emerge. A quite shocking result is that there exists a critical value of the strong coupling that can produce it, but at the price of having the Dyson-Schwinger equations no longer converge toward a consistent solution with a massive propagator, with the scaling solution representing just an unattainable limiting case. So, the scaling solution implies no mass gap, as Christian already told me a few years ago.
The point is that we now have a lot of evidence that the massive solution is the right one, and there is no physical reason whatsoever to presume that the scaling solution should be the true one at the critical coupling found by Jose. So, all this mounting evidence is there to say that the old idea of Hideki Yukawa still works: massive carriers imply limited-range forces.
J. Rodríguez-Quintero (2011). The scaling infrared DSE solution as a critical end-point for the family
of decoupling ones arxiv arXiv: 1103.0904v1
J. Rodríguez-Quintero (2010). On the massive gluon propagator, the PT-BFM scheme and the low-momentum
behaviour of decoupling and scaling DSE solutions JHEP 1101:105,2011 arXiv: 1005.4598v2
Daniele Binosi, & Joannis Papavassiliou (2007). Gauge-invariant truncation scheme for the Schwinger-Dyson equations of
QCD Phys.Rev.D77:061702,2008 arXiv: 0712.2707v1
SU(2) lattice gauge theory revisited
As my readers know, there are several groups around the world doing groundbreaking work in lattice gauge theories. I would like here to cite the names of I. L. Bogolubsky, E.-M. Ilgenfritz, M. Müller-Preussker, and A. Sternbeck, jointly working in Russia, Germany and Australia. They have already produced a lot of meaningful papers in this area and today have come out with another one worth citing (see here). I would like to mention a couple of their results. Firstly, they show again that the decoupling-type solution in the infrared is supported; they present a figure of the gluon propagator making exactly this point.
The gauge is the Landau gauge. They keep the physical volume constant at 10 fm while varying the linear dimension and the coupling. This picture is really beautiful confirming an emergent
understanding of the behavior of Yang-Mills theory in the infrared that we have supported since we opened up this blog. But, I think that a second important conclusion from these authors is that
Gribov copies do not seem to matter. Gribov ambiguity has been a fundamental idea in building our understanding of gauge theories, and now it just seems it has been a blind alley for a lot of researchers.
All this scenario is fully consistent with our works on pure Yang-Mills theory. As far as I can tell, there is no theoretical attempt to solve these equations other than ours that is in such agreement with lattice data (running coupling included).
I would finally like to point out to your attention a very good experimental paper from the KLOE collaboration. This is a detector at the ${\rm DA\Phi NE}$ accelerator in Frascati (Rome). They are carrying out a lot of very good work. This time they give the decay constant of the pion for energies ranging from 0.1 to 0.85 ${\rm GeV^2}$ (see here).
We cannot see the light
An interesting paper by Alkofer, Huber and Schwenzer appeared today on arxiv (see here). Reinhard Alkofer and Lorenz von Smekal are the proponents of an infrared solution of Yang-Mills theory in D=4
having the following properties
• Gluon propagator goes to zero at lower momenta
• Ghost propagator goes to infinity at lower momenta faster than the free propagator
• Running coupling reaches a fixed point at lower momenta
and this scenario disagrees with lattice evidence in D=4 but agrees with lattice in D=2, where the theory is trivial, having no dynamics. After some years during which other researchers were claiming that a different solution, one that indeed agrees with lattice computations, can be obtained from the same Dyson-Schwinger equations, Alkofer's group accepted this fact, but with a lot of skepticism, pointing out that this solution has several difficulties, last but not least that it breaks BRST symmetry. The solution proposed by Alkofer and von Smekal, for its part, gives no mass gap whatsoever and no low-energy spectrum to be compared either with lattice or with experiments to understand the current light unflavored meson spectrum. So, whoever is right, we are in a damned situation in which no meaningful computations can be carried out to get some real physical understanding. The new paper is again on this line, with the authors proposing a perturbative approach to evaluate the vertices of the theory in the infrared and obtaining again comforting agreement with their scenario.
I will avoid entering this never-ending controversy about Dyson-Schwinger equations; rather, I would ask a more fundamental question: is an approach worthwhile that, at best, only saves a philosophical understanding of confinement without giving any real understanding of QCD? My view is that one should start from lattice data and try to understand the real mathematical form of the gluon propagator. Why does it resemble the Yukawa form so well? A Yukawa form grants a mass gap, and this is elementary quantum field theory. This is what I would like to see explained. When a method is not satisfactory, something must be changed. It is evident that solving the Dyson-Schwinger equations requires some new mathematical approach, as old views are just confusing this kind of research.
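To make the last point concrete, recall the standard textbook fact behind it (not specific to any of the papers above): a Yukawa-form propagator corresponds in position space to an exponentially screened potential, with the screening mass setting the gap,
$\int \frac{d^3p}{(2\pi)^3}\,\frac{e^{i\vec p\cdot\vec r}}{\vec p^{\,2}+m^2} = \frac{e^{-mr}}{4\pi r},$
so a propagator going like $1/(p^2+m^2)$ directly encodes a finite range $1/m$ for the force, exactly in Yukawa's spirit.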
A quite effective QCD theory
As far as my path toward an understanding of QCD is concerned, I have found a quite interesting effective theory to work with that is somewhat similar to Yukawa theory. Hideki Yukawa turns out to have had deeper insight than one might have expected.
Indeed, I have already shown that the potential in infrared Yang-Mills theory is an infinite sum of weighted Yukawa potentials, with the range at each order decided through a mass formula for glueballs that can be written down as
$m_n = (2n+1)\frac{\pi}{2K(i)}\sqrt{\sigma}$
being $\sigma$ the string tension, an experimental parameter generally taken to be $(440\ MeV)^2$, and $K(i)$ an elliptic integral, just a number.
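As a quick numerical illustration (a sketch of mine, taking the mass formula above at face value with $\sqrt{\sigma}=440\ MeV$; $K(i)$ is the complete elliptic integral of the first kind at modulus $i$, i.e. parameter $m=-1$ in SciPy's convention):

```python
# Glueball spectrum m_n = (2n+1) * pi/(2*K(i)) * sqrt(sigma)
import numpy as np
from scipy.special import ellipk

sqrt_sigma = 440.0      # square root of the string tension, in MeV
K_i = ellipk(-1.0)      # complete elliptic integral K(i) ~ 1.31103

for n in range(4):
    m_n = (2 * n + 1) * np.pi / (2 * K_i) * sqrt_sigma
    print(f"n = {n}: m_n = {m_n:.0f} MeV")
```

The ground state comes out slightly above 500 MeV, which is why the lowest glueball is identified with the $\sigma$ resonance in this approach.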
The most intriguing aspect of all this treatment is that an effective infrared QCD can be obtained through a scalar field. I am about to finish a paper with a calculation of the width of the $\sigma$
resonance, a critical parameter for our understanding of low-energy QCD. Here I put the generating functional if someone is interested in doing a similar calculation (time is rescaled as $t\rightarrow\dots$):
$Z[\eta,\bar\eta,j] \approx\exp\left\{i\int d^4x\sum_q \frac{\delta}{i\delta\bar\eta_q(x)}\frac{\lambda^a}{2\sqrt{N}}\gamma_i\eta_i^a\frac{\delta}{i\delta j_\phi(x)}\frac{i\delta}{\delta\eta_q(x)}\right\}\times$
$\exp\left\{-\frac{i}{Ng^2}\int d^4xd^4y\sum_q\bar\eta_q(x)S_q(x-y)\eta_q(y)\right\}\times$
$\exp\left\{\frac{i}{2}(N^2-1)\int d^4xd^4y j_\phi(x)\Delta(x-y)j_\phi(y)\right\}.$
As always, $S_q(x-y)$ is the free Dirac propagator for the given quark $q=u,d,s,\ldots$ and $\Delta(x-y)$ is the gluon propagator that I have discussed in depth in my preceding posts. People seriously interested in this matter should read my works (here and here).
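For anyone who wants to play with this, here is a minimal sketch (mine) of the kind of propagator meant here, written in Euclidean momentum space as a truncated sum of Yukawa terms over the glueball masses $m_n$ above; the weights $B_n$ are placeholders to be taken from the papers, and the values below are purely illustrative:

```python
import numpy as np
from scipy.special import ellipk

def glueball_mass(n, sqrt_sigma=440.0):
    """m_n = (2n+1) * pi/(2*K(i)) * sqrt(sigma), in MeV."""
    return (2 * n + 1) * np.pi / (2 * ellipk(-1.0)) * sqrt_sigma

def gluon_propagator(p2, weights):
    """Euclidean Delta(p^2) = sum_n B_n / (p^2 + m_n^2), truncated.
    The weights B_n are inputs here, not derived."""
    return sum(B / (p2 + glueball_mass(n) ** 2) for n, B in enumerate(weights))

B = [1.0, 0.2, 0.05]                 # illustrative weights only
print(gluon_propagator(0.0, B))      # finite value at zero momentum
print(gluon_propagator(1.0e8, B))    # ~ sum(B)/p^2 at large p^2
```

The finite zero-momentum value is the point of contact with the decoupling solution seen on the lattice.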
For a physical understanding of this you will have to wait for my next posting on arxiv. Anyhow, anybody can spend some time working with this theory to explore its workings and its fallacies. My hope is that, any time I post such information on my blog, I help the community anticipate possible new ways to see an old problem burdened by well-entrenched prejudices.
An inspiring paper
These days I am shut in at home due to the effects of the flu. When the worst symptoms started to relax, I was able to think about physics again. So, reading the arxiv daily listing today, I have uncovered a truly inspiring paper from Antal Jakovac and Daniel Nogradi (see here). This paper treats a very interesting problem about the quark-gluon plasma. This state was observed at RHIC at Brookhaven.
Successful hydrodynamical models make it possible to obtain values of physical quantities, like the shear viscosity, that could in principle be computed from QCD. The importance of the shear viscosity relies on the existence of an important prediction from the AdS/CFT correspondence claiming that the ratio between this quantity and the entropy density is at least $1/4\pi$. If this lower bound is proved true, we will get an important experimental verification of the AdS/CFT conjecture.
Jakovac and Nogradi carry out the computation of this ratio for SU(N) Yang-Mills theory. Their approach is quite successful, as they are able to show that the value they obtain is still consistent with the lower bound, although they have serious difficulties in evaluating the error. But what really matters here is the procedure these authors adopt to reach their aim, making this a quite simple avenue to pursue once the solution of Yang-Mills theory in the infrared is acquired. The central point is again the gluon propagator. These authors simply assume the very existence of a mass gap, taking for the propagator something like $e^{-\sigma\tau}$ in Euclidean time. Of course, $\sigma$ is here the glueball mass. This is too simplified an assumption, as we know that the gluon propagator is somewhat more complicated and a full spectrum of glueballs exists that can contribute to this computation (see my post and my paper).
So, I spent my day extending the computations of these authors to a more realistic gluon propagator. Indeed, with my gluon propagator there is no need for one-loop computations, as the tree-level identity $G_T=G_0$ no longer holds for a non-trivial spectrum, and one immediately gets an expression for the shear viscosity. I hope to give some more results in the near future.
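Concretely, the extension amounts to replacing the single exponential with a sum over the glueball tower in the Euclidean-time correlator (again a sketch of mine, with illustrative weights):

```python
import numpy as np

def correlator(tau, masses, weights):
    """G(tau) = sum_n B_n * exp(-m_n * tau), generalizing exp(-sigma * tau)."""
    return float(np.sum(np.asarray(weights) * np.exp(-np.asarray(masses) * tau)))

masses = [527.0, 1581.0, 2636.0]      # MeV, from the spectrum above
B = [1.0, 0.2, 0.05]                  # illustrative weights only
print(correlator(2.0e-3, masses, B))  # tau in units of 1/MeV
```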
Emerging scenario
Reading the arxiv dailies today I have found three different papers on the gluon and ghost propagators for Yang-Mills theory (see here, here and here). These papers prove that this line of research is very strongly alive and that there exist a lot of points to be settled before carrying on. In this post I would like to point out several pieces of evidence that should not be forgotten when one talks about this matter. First of all there are the results of Yang-Mills theory in D=1+1. We know that, for this dimensionality, Yang-Mills theory has no dynamics. Anyhow, several people tried to solve it on the lattice, or modified it, to try to relate these solutions to the ones of the Dyson-Schwinger equations with a given truncation. The bad news is that they find agreement with such solutions of the Dyson-Schwinger equations. Why is this bad news? Because this gives, beyond any doubt, a proof that such a truncation of the Dyson-Schwinger equations is faulty, as it removes any dynamics from Yang-Mills theory in higher dimensionality and appears to agree with numerical results just when such dynamics does not exist. This is already a strong indicator that lattice computations done in higher dimensionality are right. What do they tell us about the ghost and gluon propagators?
• Gluon propagator reaches a non-null finite value at zero momenta.
• Ghost propagator is that of a free particle.
• Running coupling goes to zero at lower momenta.
This means that the confinement scenarios that are normally considered are faulty and do not work at all. These results demand a better understanding of the physical situation at hand. If we are not ourselves convinced that they are right, we will keep on fumbling in the dark, losing precious resources and time. The evidence is already really heavy at this stage and should be combined with the spectrum computations carried out so far. Also in this case a lot of work must still be carried out. You can read the beautiful paper of Craig McNeile about this (a contribution to QCD 08). It is a mystery to me why these approaches are seen as separate routes into the understanding of Yang-Mills theory.
A formula I was looking for
As usual I put in this blog some useful formulas to work out computations in quantum field theory. My aim in these days is to compute the width of the $\sigma$ resonance. This is a major aim in QCD
as the nature of this particle is hotly debated. Some authors think that it is a tetraquark or molecular state, while others, such as Narison, Ochs, Minkowski and Mennessier, point out the gluonic nature of this resonance. We have expressed our view in some posts (see here and here) and our results strongly show that this resonance is a glueball, in agreement with the spectrum we have found for pure
Yang-Mills theory.
Our next step is to understand the role of this resonance in QCD. Indeed, we have shown in our recent paper (see here) that, once the gluon propagator is known, it is possible to derive a
Nambu-Jona-Lasinio model from QCD with all parameters properly fixed. We have obtained the following:
$S_{NJL} \approx \int d^4x\left[\sum_q \bar q(x)(i\gamma\cdot\partial-m_q)q(x)\right.$
$-\frac{1}{2}g^2\sum_{q,q'}\bar q(x)\frac{\lambda^a}{2}\gamma^\mu q(x)\times$
$\left.\int d^4yG(x-y)\bar q'(y)\frac{\lambda^a}{2}\gamma_\mu q'(y)\right]$
being $G(x-y)$ the gluon propagator with all color and space-time indices already saturated. This in turn means that we can use the following formula (see my papers here and here):
$e^{\frac{i}{2}\int d^4xd^4yj(x)G(x-y)j(y)}\approx {\cal N}\int [d\sigma]e^{-i\int d^4x\left[\sigma\left(\frac{1}{2}\partial^2\sigma+\frac{Ng^2}{4}\sigma^3\right)-j\sigma\right]}$
being again $G(x-y)$ the gluon propagator for SU(N) and ${\cal N}$ a normalization factor. This formula holds only in the infrared limit, that is, when the theory is strongly coupled. We plan to extract physical results from this formula and define in this way the role of the $\sigma$ resonance.
What makes the proton spin?
There is currently a beautiful puzzle to be answered that relies on sound and beautiful experimental results. The question is how the components of a proton, that is quarks and gluons, concur to
determine the value one half for the spin of the particle. During the conference QCD 08 at Montpellier I listened to a beautiful presentation by Joerg Pretz of the COMPASS Collaboration (see here and here). Hearing these results was stunning for me. I explain the reasons in a few words. The spin of the proton should be composed of the spin of the quarks, the contribution of gluons (gluons???), and orbital angular momentum. What happens is that the spin of the quarks does not contribute much. People then thought that the contribution of gluons (gluons again???) should have been decisive.
The COMPASS Collaboration realized a beautiful experiment using charmed mesons. This experiment has been described by Pretz at QCD 08. They proved in a striking way that the contribution of the glue
to proton spin can be zero and cannot be used to account for the particle spin. Of course, there are beautiful papers around that are able to explain how the proton spin comes out. I have found for
example a paper by Thomas and Myhrer at Jefferson Lab (see here and here) that describes quite well an understanding of the puzzle and is surely worth reading. But my question is another one: why does the glue not contribute?
From our preceding posts one should immediately reach an answer, the same that came to my mind when I listened to Pretz's talk. The reason is that, in the infrared, the gluons, which have spin one, are not the true carriers of the strong force. The true carriers have no spin unless higher excited states are considered. This explains why the COMPASS experiment did not see any contribution, contrary to previous expectations.
This is again strong support for our description of the gluon propagator (see here). No other theory around shows this.
Yang-Mills in D=1+1 strikes back
Today on arxiv I have found a very beautiful paper by Reinhardt and Schleifenbaum (see here). This paper is an important event, as the authors present a full account of Yang-Mills theory in D=1+1. As we know, Axel Maas produced a lattice computation of this theory (see here) and found perfect agreement with truncated Dyson-Schwinger equations. These results disagree completely with those obtained on the lattice for D=3+1. From 't Hooft's work we also know that Yang-Mills theory in D=1+1 is completely trivial, having no dynamics. This means that the agreement between Maas' lattice computations and truncated Dyson-Schwinger equations implies that the truncation eliminates any dynamics from Yang-Mills theory, and this explains the disagreement between truncated Dyson-Schwinger equations and lattice Yang-Mills in D=3+1.
In their paper Reinhardt and Schleifenbaum confirm all this, but they do a smarter thing. They consider a non-trivial Yang-Mills theory in D=1+1 by taking a compact manifold ${\sl S}^1\times {\mathbb R}$. In this case they introduce a length $L$, and this means that the "thermodynamic limit" $L\rightarrow\infty$ should recover the trivial limit of Yang-Mills theory in D=1+1. Of course, due to this deep link between the theory on the compact manifold and the one on the real line, again this case is not representative of Yang-Mills in D=3+1 but, anyhow, can give some hints on how truncated Dyson-Schwinger equations recover these results. However, it should be emphasized that Gribov copies have a prominent role in D=1+1, and this is not generally true in D=3+1. This can yield the false impression of having caught something of the disagreement between functional methods and lattice computations. Of course, this is plainly false. In order to give an idea of what is going on, they get a
gluon propagator going like $D\sim 1/L^2$ and this goes to zero in the thermodynamic limit as no dynamics is expected in this case. In D=3+1 there is nothing like this. On a compact manifold for this
case, the limit $L\rightarrow\infty$ is absolutely not trivial. Finally, they get an infrared enhanced ghost propagator and the authors claim that the reason why this is not seen on lattice
computations in the D=3+1 case is due to Gribov copies. This conclusion cannot be accepted, as the trivial limit of this theory is the D=1+1 case on the real line, which has an enhanced ghost propagator too, and this need not be true for D=3+1 where, as said, Gribov copies play no role. This latter fact is the reason for the failure of functional methods and also the reason why dynamics is removed by this approach. Indeed, to account for Gribov copies in D=3+1 one is forced to remove dynamics. This works for D=1+1, where no dynamics exists, but fails otherwise.
A note on the running coupling would have been in order from the authors. They did not give one, but if the gluon propagator goes like $\frac{1}{L^2}$, whatever the ghost propagator does, the thermodynamic limit guarantees that the coupling goes to zero. No dynamics, no interaction.
Another interesting result given by the authors is the spectrum for the theory on the compact manifold. They get the spectrum of a rigid free rotator going like $j(j+1)$. This is very nice indeed.
Finally, the authors' conclusion that functional methods turn out to have received strong support from their computations cannot be sustained. They just give an understanding, a deep one indeed, of the reason why these methods blatantly fail in the D=3+1 case. This is the role of computations in D=1+1, as already seen with Maas' work.
Patent US7577206 - OFDM signal receiving apparatus and method for estimating common phase error of OFDM signals using data subcarriers
This application claims the priority under 35 U.S.C. § 119 of Korean Patent Application No. 10-2005-0006583, filed on Jan. 25, 2005 in the Korean Intellectual Property Office, the contents of which
is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to an OFDM (Orthogonal Frequency Division Multiplexing) signal receiver, and more particularly, to an OFDM signal receiving apparatus and method of estimating a common
phase error (CPE) of received OFDM signals using data subcarriers in addition to pilot subcarriers.
2. Description of the Related Art
A multicarrier based OFDM signal may be used in a DVB-T (Terrestrial Digital Video Broadcasting) system. DVB-T (Digital Video Broadcasting) is a pan-European broadcasting standard (ETS 300 744) for
digital terrestrial television. DVB-T is directly compatible with MPEG2-coded TV-signals. The introduction of this digital service is already in progress in various European countries.
In OFDM systems, modulation and demodulation can be done digitally by computationally efficient Fast Fourier Transforms (FFT) of finite length, N. The orthogonality of the consecutive OFDM symbols is
maintained by appending a cyclic prefix (CP) of length v (the guard interval, GI) at the start of each symbol. The CP is obtained by taking the last v samples of each symbol, and consequently the total length of the transmitted OFDM symbol is N+v samples. The duration of the FFT window, N, is the duration of the "useful period", ignoring the guard interval during which the receiving antenna is presumably polluted by a mixture of the new symbol and delayed versions of the previous one (i.e., the echoes, or ghosts). The receiver discards the CP and takes only the last N samples of each
OFDM symbol for demodulation by the receiver FFT.
The DVB-T standard specifies FFT lengths (N) of 2K and 8K. Thus, an OFDM symbol consists of 2K or 8K sub-carriers, respectively. However, not all of the sub-carriers can be used for data
transmission. A number of the sub-carriers are used either for the spectral limitation of the transmission signal or for the transmission of pilot information.
A number of OFDM symbols are combined to form an OFDM DVB-T frame. One frame of an OFDM DVB-T signal is composed of 68 symbols, each having 1705 active carriers in the N=2K mode or 6817 active carriers in the N=8K mode, respectively. The active carriers of each symbol include data subcarriers and pilot subcarriers. The data subcarriers are digital signals corresponding to audio/video information to
be transmitted and received and the pilot subcarriers are digital signals to be used for synchronization, mode detection, channel estimation, etc. A pilot subcarrier is inserted between neighboring
data subcarriers in a predetermined position.
Orthogonal Frequency Division Multiplex (OFDM) systems are very sensitive to phase noise (e.g., caused by oscillator instabilities). The phase noise may be resolved into two components, namely the
Common Phase Error (CPE), also known as average phase noise offset, which affects all the subchannels equally, and the Inter Carrier Interference (ICI), which is caused by the loss of orthogonality
of the subcarriers.
FIG. 1 is a block diagram of a conventional OFDM signal receiver 100. Referring to FIG. 1, the OFDM signal receiver includes an RF (Radio Frequency) module 110, a demodulator 120, a frequency
synchronization (FS) unit 130, a FFT (Fast Fourier Transform) unit 140, an equalizer (EQ) 150, a Common Phase Error (CPE) estimation and correction unit 160, and a demapper 170.
The demodulator 120 demodulates a digital OFDM signal output from the RF module 110 (received in a signaling format such as QPSK, BPSK or QAM), to generate an in-phase (I) signal (referred to as
I-signal hereinafter) and a quadrature-phase (Q) signal (referred to as Q-signal hereinafter), which are complex signals. The demodulator 120 down-converts the digital OFDM signal into a
low-frequency signal and demodulates it. A frequency offset of the demodulated signal is compensated while the demodulated signal passes through the frequency synchronization (FS) unit 130. The
frequency synchronization (FS) unit 130 estimates the frequency offset from the demodulated signal. When an estimation error is generated due to noise and channel distortion, the signal compensated
by the frequency synchronization unit 130 may include a residual frequency offset. The signal compensated by the frequency synchronization unit 130 passes through the FFT unit 140, and is then
equalized by the equalizer (EQ) 150. The CPE estimation and correction unit 160 estimates and corrects a Common Phase Error (CPE) equally generated in all subcarriers of the OFDM signal. A CPE is the
difference between the phase of the original (transmitted) signal and the phase of a received signal, and is equally generated in all subcarriers. It is known that the CPE may be caused by a residual
frequency offset and phase noise in the output of an oscillator included in the RF module 110. In the aforementioned conventional technique, pilot subcarriers are used to estimate the CPE. The pilot
subcarriers may be used to transmit promised (predetermined, expected) values between a transmitter and a receiver in an OFDM system. The pilot subcarriers may be used by the receiver to estimate a
frequency offset or channel distortion.
In general, the CPE can be estimated using the phase rotation generated in the pilot subcarriers, because it is a common phase error generated in all subcarriers. The CPE may equal a value, $\Delta\hat{\varphi}_r$, obtained by estimating the quantity of phase rotation generated in the carriers due to a residual frequency offset, and can be represented as follows:
$\Delta\hat{\varphi}_r = \tan^{-1}\left[\sum_{k \in P} R_k \cdot S_k^*\right], \quad P = \{-21, -7, +7, +21\} \qquad \text{[Equation 1]}$
wherein k represents a subcarrier index, and $S_k$ and $R_k$ respectively denote a transmitted (expected) value and a received value with respect to the pilot subcarriers.
The CPE estimation and correction unit 160 extracts the pilot subcarriers from the equalized signal output from the equalizer 150, multiplies the complex values of the extracted pilot subcarriers $R_k$ by the complex conjugates of the transmitted original (expected) pilot subcarriers $S_k$, sums up the multiplication results, and takes the $\tan^{-1}$ (argument) of the resulting complex value as the estimated quantity of phase rotation, $\Delta\hat{\varphi}_r$. In Equation 1, the set P is an example from the IEEE 802.11a WLAN (Wireless Local Area Network) standard; in that case, subcarriers −21, −7, +7 and +21 (of the 64 subcarriers −32 through +31) are used as pilot subcarriers.
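A minimal NumPy sketch of this pilot-based estimate (mine, not from the patent; the pilot values below are just an illustrative BPSK pattern on the four 802.11a pilot positions):

```python
import numpy as np

def pilot_cpe(R, S):
    """Equation 1: phase of sum_k R_k * conj(S_k) over the pilot set P."""
    return np.angle(np.sum(np.asarray(R) * np.conj(np.asarray(S))))

S = np.array([1, 1, 1, -1], dtype=complex)   # pilots at k = -21, -7, +7, +21
rng = np.random.default_rng(0)
noise = 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
R = S * np.exp(1j * 0.1) + noise             # received: rotated + noisy
print(pilot_cpe(R, S))                       # close to the true CPE of 0.1 rad
```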
As described above, the CPE can be estimated using a phase variation between the transmitted (expected) pilot value and the received pilot value. However, a CPE estimation error can be generated when
there is noise or channel distortion. Although the number of pilots can be increased to improve CPE estimation accuracy, the total transmission rate of the system would be reduced. Thus, the number
of pilots should be appropriately determined. In particular, when a total of four pilots are used, as described above, conventional CPE estimation accuracy is low and thus the system can become
sensitive to noise and channel distortion.
An aspect of the present invention provides an Orthogonal Frequency Division Multiplexing (OFDM) signal receiver adapted to estimate a Common Phase Error (CPE) with greater reliability using data
subcarriers (e.g., determined by a Decision Directed (DD) estimation algorithm) in addition to pilot subcarriers, thus improving system performance.
Another aspect of the present invention provides a method of estimating the Common Phase Error (CPE) using the data subcarriers in addition to the pilot subcarriers, in an OFDM signal receiver.
According to an aspect of the present invention, there is provided an OFDM signal receiver including: an equalizer, a channel measurement unit, a CPE estimation unit, and a CPE compensation unit. The
equalizer equalizes an input (received) baseband signal. The channel measurement unit estimates a channel characteristic from the input (received) baseband signal to generate information about good
subcarrier indexes in the form of channel State Information (CSI). The CPE estimation unit estimates good pilot subcarriers and good data subcarriers from the equalized signal based on the CSI,
calculates first and second CPEs from the estimated subcarriers, and (variously, selectively) combines the first and second CPEs to generate a final CPE. The CPE compensation unit compensates the
phase of the equalized signal by the final CPE and outputs the phase-compensated signal.
The OFDM signal receiver further includes a demodulator, a frequency synchronization unit, and a Fast Fourier Transform (FFT) unit. The demodulator demodulates a digital OFDM signal input from an RF
module to generate a complex signal. The frequency synchronization unit compensates a frequency offset of the demodulated signal. The FFT unit fast-Fourier-transforms the frequency-compensated signal
to generate the input baseband signal.
The OFDM signal receiver further comprises a demapper demapping the phase-compensated signal according to a predetermined symbol-mapping format.
According to another aspect of the present invention, there is provided an OFDM signal receiving method including: equalizing an input (received) baseband signal; estimating a channel from the input
(received) baseband signal to generate CSI about good subcarrier indexes; estimating good pilot subcarriers and good data subcarriers from the equalized signal based on the CSI; calculating first and
second CPEs from the estimated subcarriers; combining (e.g., averaging or selecting one of) the first and second CPEs to generate a final CPE; and compensating the phase of the equalized signal with
the final CPE.
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The
invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this
disclosure will be thorough and complete, and will fully convey the concepts of the invention to those skilled in the art. Throughout the drawings, like reference numerals refer to like elements.
FIG. 1 is a block diagram of a conventional OFDM signal receiver;
FIG. 2 is a block diagram of an OFDM signal receiver according to an embodiment of the present invention;
FIG. 3 is a block diagram of the subcarrier estimation unit 281 and the CPE determination unit 285 shown in FIG. 2;
FIG. 4 is a flow chart of the method of operation of the OFDM signal receiver of FIG. 2;
FIG. 5 is an I-Q constellation graph in 64-QAM format;
FIG. 6 is an I-Q constellation graph in 256-QAM format; and
FIG. 7 is a graph illustrating the relationship between Signal to Noise Ratio (SNR) and Bit Error Rate (BER) of the OFDM signal receiver according to an embodiment of the present invention.
FIG. 2 is a block diagram of an OFDM signal receiver 200 according to an embodiment of the present invention. Referring to FIG. 2, the OFDM signal receiver 200 includes an RF module 210, a
demodulator 220, a frequency synchronization unit 230, a Fast Fourier Transform (FFT) unit 240, an equalizer 250, a channel measurement unit 270, a Common Phase Error (CPE) estimation unit 280, a
Common Phase Error (CPE) compensation unit 260, and a demapper 290.
The demodulator 220 demodulates a digital OFDM signal output from the RF module 210 (e.g., received in a format such as QAM (Quadrature Amplitude Modulation), BPSK (Binary Phase-Shift Keying), QPSK
(Quadrature Phase-Shift Keying), etc.) to generate an I-signal and a Q-signal, which are complex signals. The demodulator 220 down-converts the digital OFDM signal output from the RF module 210 into
a low-frequency signal and demodulates it. The demodulator 220 includes a synchronization circuit that reconstructs required synchronization signals including a chip-rate clock signal and a
symbol-rate clock signal. The demodulated signal output from the demodulator 220 is a baseband sampled complex signal. The frequency synchronization FS unit 230 compensates a frequency offset of the
demodulated signal. The FFT unit 240 fast-Fourier-transforms the frequency-offset-compensated signal. FFT is well known in the art. The fast-Fourier-transformed baseband signal is a frequency-domain
complex signal. The equalizer 250 equalizes the fast-Fourier-transformed baseband signal. The equalizer 250 can equalize the signal using the channel coefficients $H_k$ associated with the subcarriers, estimated by a channel estimator 271 included in the channel measurement unit 270.
The OFDM signal receiver 200 estimates a CPE using data subcarriers in addition to pilot subcarriers. For performing this method of estimating the CPE, the channel measurement unit 270 generates
Channel State Information (CSI) and outputs the CSI to the Pilot/Data subcarrier estimator 281 in the CPE estimation unit 280. The channel measurement unit 270 estimates a channel from the
fast-Fourier-transformed baseband signal to generate information about good subcarrier indexes as Channel State Information (CSI). The subcarrier estimator 281 of the CPE estimation unit 280 uses the CSI (from the channel measurement unit 270) and the equalized signal (from the equalizer 250) to estimate the good pilot subcarriers $R_k$ and the good data subcarriers $Y_k$. The CPE determination part 285 of the CPE estimation unit 280 then calculates a first CPE $\hat{\varphi}_c$, a second CPE $\hat{\varphi}_{c,data}$, and a final CPE $\varphi_{c,final}$ from the estimated subcarriers $R_k$ and $Y_k$.
The CPE compensation unit 260 compensates the phase of the equalized signal by the final CPE $\varphi_{c,final}$. The demapper 290 demaps the phase-compensated (equalized,
CPE-compensated) signal according to a predetermined symbol-mapping format such as QAM, QPSK or BPSK. The demapped signal is output to a Viterbi decoder or an RS (Reed Solomon) decoder. The decoder
performs forward error correction (FEC) on the received signal and decodes the signal. The decoded signal is processed by a predetermined signal processor to generate video display and audio signals
such that a viewer may watch and hear a program broadcast corresponding to the display and audio signals of a TV broadcast.
FIG. 4 is a flow chart of the method of operation of the OFDM signal receiver of FIG. 2.
The operations of the channel measurement unit 270 and the CPE estimation unit 280 will now be explained in more detail with reference to the flow chart in FIG. 4.
As an overview: First, the channel measurement unit 270 and the equalizer EQ 250 continuously receive the fast-Fourier-transformed baseband signal (step S41); The channel measurement unit 270
continuously estimates a channel to generate the CSI while the equalizer 250 continuously equalizes the fast-Fourier-transformed baseband signal (step S41); The CPE estimation unit 280 continuously
generates the final CPE $\varphi_{c,final}$ from the equalized signal based on the CSI (step S53).
The channel measurement unit 270 (FIG. 2) includes the channel estimator 271 and a good subcarrier indexing part 272. The channel estimator 271 continuously estimates the channel from the
fast-Fourier-transformed signal to generate the channel coefficients $H_k$ corresponding to the respective subcarriers. Each channel coefficient $H_k$ corresponds to the magnitude of the channel frequency response, whose square is proportional to the power of each subcarrier. The good subcarrier indexing part 272 calculates the mean $\overline{|H|^2}$ of the powers of the channel coefficients $H_k$ as a channel reference value (step S42). The mean $\overline{|H|^2}$ is defined in Equation 2 as follows:
$\overline{|H|^2} = \frac{1}{52}\sum_{k=-26,\,k\neq 0}^{26} |H_k|^2 \qquad \text{[Equation 2]}$
where k is a subcarrier index (ranging from −26 to 26, excluding 0), and the squared absolute values of the channel coefficients $H_k$ are proportional to the powers of the respective subcarriers. In Equation 2, it is
assumed that the number of effective subcarriers is known to be 52. Thus, the FFT length used in the system is 64 but there are 52 effective subcarriers. Furthermore, 4 of the 52 effective
subcarriers are pilot subcarriers and 48 of them are data subcarriers.
The good subcarrier indexing part 272 indexes as good subcarriers those subcarriers for which the power of the corresponding channel coefficient $H_k$ generated by the channel estimator 271 is larger than half of the mean $\overline{|H|^2}$, as shown in Decision 3, to generate the CSI about the index k (step S43).
$|H_k|^2 > \frac{\overline{|H|^2}}{2}\ ? \qquad \text{[Decision 3]}$
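In code, Equation 2 and Decision 3 amount to the following (a NumPy sketch of mine, assuming the 52-carrier layout described above):

```python
import numpy as np

def good_subcarrier_mask(H):
    """Decision 3: keep subcarriers whose channel power |H_k|^2 exceeds
    half of the mean power over the effective subcarriers (Equation 2)."""
    power = np.abs(H) ** 2
    return power > power.mean() / 2.0      # boolean CSI mask over k

# Example: a deeply faded carrier is excluded from the mask
rng = np.random.default_rng(1)
H = rng.standard_normal(52) + 1j * rng.standard_normal(52)
H[10] *= 0.05                              # put one carrier in a deep fade
mask = good_subcarrier_mask(H)
print(bool(mask[10]), int(mask.sum()))     # faded carrier excluded
```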
When the good subcarriers are indexed, the CPE estimation unit 280 determines the final CPE $\varphi_{c,final}$ from the equalized signal based on the CSI generated by the channel measurement unit 270. Referring to FIG. 2, the CPE estimation unit 280 includes a subcarrier estimator 281 and a CPE determination part 285. The subcarrier estimator 281 estimates the good pilot subcarriers $R_k$ and the good data subcarriers $Y_k$. The CPE determination part 285 calculates the first CPE $\hat{\varphi}_c$ and the second CPE $\hat{\varphi}_{c,data}$ and combines them to generate the final CPE $\varphi_{c,final}$.
FIG. 3 is a block diagram of the CPE estimation unit 280 shown in FIG. 2 comprised of the subcarrier estimator 281 and the CPE determination part 285. Referring to FIG. 3, the subcarrier estimator
281 includes a pilot extraction part 282 and a data extraction part 283; and the CPE determination part 285 includes a first CPE determination part 286, a second CPE determination part 287 and a
final determination part 288.
The pilot extraction part 282 outputs the pilot subcarriers judged to be "good" (those having channel coefficient powers larger than half of the mean $\overline{|H|^2}$) as the "good" pilot subcarriers $R_k$, based on the CSI (step S44). Here, pilots having "bad" channel characteristics are eliminated in order to improve the CPE estimation accuracy.
The data extraction part 283 outputs the data subcarriers having real components Re($Y_k$) and imaginary components Im($Y_k$) larger than half of the maximum mapping level of the constellation. That is, the data extraction part 283 selects and outputs as the "good" data subcarriers $Y_k$ those among the good subcarriers (having channel coefficient powers larger than half of the mean $\overline{|H|^2}$, based on the CSI, step S46) that satisfy Condition 4 as follows:
IF ({k is a "good subcarrier"} AND {Re($Y_k$) > (maximum mapping level)/2} AND {Im($Y_k$) > (maximum mapping level)/2}), THEN k is "selected". [Condition 4]
Here, data subcarriers having "bad" channel characteristics (those not satisfying Condition 4) are eliminated in order to improve the CPE estimation accuracy.
FIG. 5 is an I-Q constellation graph in 64-QAM symbol-mapping format, and FIG. 6 is an I-Q constellation graph in 256-QAM symbol-mapping format. Here, half of the maximum mapping level corresponds to
two blocks in each of four directions (horizontal and vertical) from the center point in 64-QAM and four blocks in each of four directions (horizontal and vertical) from the center point in 256-QAM.
Furthermore, the data extraction part 283 (FIG. 3) generates the number (m) of good data subcarriers existing within the FFT length (for example, 64) used in the system.
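A sketch of this selection (mine; I take the absolute values of the I/Q components, which is presumably the intent given the symmetric constellation):

```python
import numpy as np

def select_good_data(Y, good_mask, max_level):
    """Condition 4: a data subcarrier is 'selected' when it is marked good
    by the CSI mask and both of its I/Q components exceed half of the
    maximum mapping level of the constellation."""
    strong = (np.abs(Y.real) > max_level / 2) & (np.abs(Y.imag) > max_level / 2)
    selected = good_mask & strong
    return Y[selected], int(selected.sum())   # good data carriers and count m
```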
The first CPE determination part 286 (FIG. 3) calculates the estimated quantity of phase rotation $\Delta\hat{\varphi}_r$ using the good pilot subcarriers $R_k$ (extracted by the pilot extraction part 282), as represented by Equation 1. Here, the estimated quantity of phase rotation $\Delta\hat{\varphi}_r$ is taken as the first CPE $\hat{\varphi}_c$ (step S45). Thus, the first CPE $\hat{\varphi}_c$ equals $\Delta\hat{\varphi}_r$ of Equation 1, computed over the good pilot subcarriers $R_k$.
The second CPE determination part 287 (FIG. 3) first performs phase compensation on the good data subcarriers $Y_k$ (extracted by the data extraction part 283) using the first CPE $\hat{\varphi}_c$ (step S47). Then, the second CPE determination part 287 determines the mapping levels $G_k$ according to the constellation for the data subcarriers phase-compensated by the first CPE $\hat{\varphi}_c$ (step S48), as shown in Equation 5. In Equation 5, $\Pi$ represents a symbol-decision process according to the constellation (such as 256-QAM).
$G_k = \Pi_{256\text{-}QAM}\left(Y_k\, e^{-j\hat{\varphi}_c}\right), \quad k \text{ is "selected"} \qquad \text{[Equation 5]}$
When the mapping levels $G_k$ are determined, the second CPE determination part 287 (FIG. 3) generates the quantity of phase rotation of the good data subcarriers $Y_k$ as the second CPE $\hat{\varphi}_{c,data}$, based on the mapping levels $G_k$ (step S49), as shown in Equation 6.
$\hat{\varphi}_{c,data} = \tan^{-1}\left(\sum_{k\ \text{is "selected"}} Y_k\, G_k^*\right) \qquad \text{[Equation 6]}$
Equation 6 is similar in form to Equation 1, but in Equation 6 the quantity of phase rotation is calculated using the phase of the mapping levels $G_k$ as the reference phase, instead of the phase of the transmitted values $S_k$ of the pilot subcarriers.
Here, the second CPE determination part 287 (FIG. 3) limits the range of the calculated second CPE $\hat{\varphi}_{c,data}$. The second CPE determination part 287 determines whether the second CPE $\hat{\varphi}_{c,data}$ is larger than half of the minimum phase between neighboring points (for example, 15.4° in 64-QAM and 7.64° in 256-QAM) in the constellations shown in FIG. 5 or FIG. 6 (step S50). When the second CPE $\hat{\varphi}_{c,data}$ is larger than half of the minimum phase between neighboring points, the second CPE determination part 287 restricts it to half of the minimum phase between neighboring points (step S51). Otherwise, the second CPE determination part 287 outputs the quantity of phase rotation calculated according to Equation 6 unchanged.
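Putting Equations 5 and 6 and the range limit together (a sketch of mine; `qam_decide` stands in for the slicer $\Pi$, shown here on a toy 16-QAM grid rather than the 64-/256-QAM of the embodiment):

```python
import numpy as np

def qam_decide(z, levels=(-3.0, -1.0, 1.0, 3.0)):
    """Toy hard decision: snap each I/Q component to the nearest level."""
    lv = np.asarray(levels)
    snap = lambda v: lv[np.argmin(np.abs(lv - v))]
    return np.array([snap(w.real) + 1j * snap(w.imag) for w in np.atleast_1d(z)])

def data_cpe(Y, phi_c, max_cpe):
    """Equation 5: derotate by the first CPE and slice to mapping levels G_k.
    Equation 6: phase of sum_k Y_k * conj(G_k) over the selected carriers.
    Steps S50-S51: limit the result to half the minimum inter-point phase."""
    G = qam_decide(np.asarray(Y) * np.exp(-1j * phi_c))
    phi_data = np.angle(np.sum(np.asarray(Y) * np.conj(G)))
    return float(np.clip(phi_data, -max_cpe, max_cpe))
```

Here `max_cpe` would be half the minimum phase between neighboring constellation points, i.e. about 15.4° for 64-QAM or 7.64° for 256-QAM as stated above, converted to radians.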
The final determination part 288 (FIG. 3) generates the final CPE $\hat{\varphi}_{c,final}$ from the first CPE $\hat{\varphi}_c$ and the second CPE $\hat{\varphi}_{c,data}$, based on a decision (step S52). For example, when the number (m) of the good data subcarriers is larger than the number of pilot subcarriers used in the system (step S52), the final determination part 288 generates the weighted mean of the first CPE $\hat{\varphi}_c$ and the second CPE $\hat{\varphi}_{c,data}$ as the final CPE $\hat{\varphi}_{c,final}$ (step S53), as shown in Equation 7.
$\varphi_{c,final} = \frac{4\,\hat{\varphi}_c + m\,\hat{\varphi}_{c,data}}{4 + m} \qquad \text{[Equation 7]}$
The final determination part 288 generates the first CPE $\hat{\varphi}_c$ as the final CPE $\hat{\varphi}_{c,final}$ when the number (m) of the good data subcarriers is smaller than the number of pilot subcarriers (for example, 4) used in the system (step S54). Accordingly, the CPE compensation unit 260 (FIG. 2) compensates the phase of the equalized signal by the final CPE $\hat{\varphi}_{c,final}$ and outputs the phase-compensated signal.
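Completing the sketch, the combination and fallback of Equation 7 and step S54 are simply:

```python
def final_cpe(phi_c, phi_data, m, n_pilots=4):
    """Equation 7: mean of the pilot- and data-based CPE estimates, weighted
    by the number of carriers behind each; fall back to the pilot-only
    estimate when fewer good data carriers than pilots survive (step S54)."""
    if m < n_pilots:
        return phi_c
    return (n_pilots * phi_c + m * phi_data) / (n_pilots + m)
```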
FIG. 7 is a graph illustrating the relationship between signal-to-noise ratio (SNR) and bit error rate (BER) of the OFDM signal receiver 200 of FIG. 2 according to embodiments of the present invention.
In this simulation, 256-QAM modulation was used. The simulation result represents the performance of a multipath fading channel having a Root Mean Square (RMS) delay spread of 50 ns in an indoor
wireless environment.
FIG. 7 also shows the performances of a conventional OFDM signal receiver (“Pef FS, Pef EQ, CPE on”/“Est FS, Est EQ, CPE on”) for comparison with the OFDM signal receiver of the present invention
(“Pef FS, Pef EQ, CPE(M) on”/“Est FS, Est EQ, CPE(M) on”), and are compared to an ideal case (“Pef FS, Pef EQ, CPE off”) having perfect frequency offset compensation and equalization and no CPE
estimation. When perfect frequency offset compensation and equalization are accomplished (“Pef FS, Pef EQ”), the OFDM signal receiver according to the present invention (“CPE(M) on”), which is
operated according to the CPE estimation unit 280 (FIGS. 2 & 3), can improve the SNR by 0.3 dB over the conventional OFDM signal receiver (“CPE on”). Furthermore, when frequency offset compensation
and equalization are estimated (“Est FS, Est EQ”), the present invention (“CPE(M) on”) can improve the SNR by 0.3 dB over the conventional technique (“CPE on”).
As described above, in the OFDM signal receiver 200 (FIG. 2) according to embodiments of the present invention, the channel measurement unit 270 estimates a channel from the fast-Fourier-transformed
signal to generate the CSI about good subcarrier indexes. Furthermore, the CPE estimation unit 280 estimates the good pilot subcarriers $R_k$ and the good data subcarriers $Y_k$ from the equalized signal output from the equalizer 250 according to the CSI, calculates the first CPE $\hat{\varphi}_c$ and the second CPE $\hat{\varphi}_{c,data}$, and combines them to generate the final CPE $\hat{\varphi}_{c,final}$. Accordingly, the CPE compensation unit 260 compensates the phase of the equalized signal by the final CPE $\hat{\varphi}_{c,final}$ and outputs the
phase-compensated signal.
As described above, the OFDM signal receiver according to the present invention estimates the CPE using the data subcarriers determined with high reliability in addition to the pilot subcarriers.
Accordingly, CPE estimation accuracy and system performance can be improved.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes
in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Calculus Tutors
Columbus, NJ 08022
Math, Sciences, Writing/Grammar, SERIOUS SAT students
I am a fun, helpful, and experienced tutor for the Sciences (biology and chemistry), Math (geometry, pre-algebra, algebra, and pre-calculus), English/Grammar, and the SATs. For the SAT, I implement a results-driven and rigorous 7-week strategy. PLEASE NOTE: I only...
Offering 10+ subjects including calculus
[Numpy-discussion] Question on LinAlg Inverse Algorithm
Pauli Virtanen pav@iki...
Wed Aug 31 03:59:44 CDT 2011
On Tue, 30 Aug 2011 15:48:18 -0700, Mark Janikas wrote:
> Last week I posted a question involving the identification of linear
> dependent columns of a matrix... but now I am finding an interesting
> result based on the linalg.inv() function... sometime I am able to
> invert a matrix that has linear dependent columns and other times I get
> the LinAlgError()... this suggests that there is some kind of random
> component to the INV method. Is this normal?
I suspect that this is a case of floating-point rounding errors.
Floating-point arithmetic is inexact, so even if a certain matrix
is singular in exact arithmetic, for a computer it may still be
invertible (by a given algorithm). This type of thing is not
unusual in floating-point computations.
The matrix condition number (`np.linalg.cond`) is a better measure
of whether a matrix is invertible or not.
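For example (a quick sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 5))
A[:, 4] = A[:, 0] + A[:, 1]        # make the columns linearly dependent

# Depending on rounding, inv() may "succeed" here and silently return
# garbage instead of raising LinAlgError; cond() is the robust check:
print(np.linalg.cond(A))           # huge (around 1e16) for a singular matrix
if np.linalg.cond(A) > 1.0 / np.finfo(A.dtype).eps:
    print("numerically singular; don't trust inv()")
```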
Pauli Virtanen
More information about the NumPy-Discussion mailing list
Polar coordinate system
In mathematics, the polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a fixed point and an angle from a fixed direction.
The fixed point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the fixed direction is the polar axis. The distance from the pole is called the radial
coordinate or radius, and the angle is the angular coordinate, polar angle, or azimuth.^[1]
The concepts of angle and radius were already used by ancient peoples of the 1st millennium BC. The Greek astronomer and astrologer Hipparchus (190–120 BC) created a table of chord functions giving
the length of the chord for each angle, and there are references to his using polar coordinates in establishing stellar positions.^[2] In On Spirals, Archimedes describes the Archimedean spiral, a
function whose radius depends on the angle. The Greek work, however, did not extend to a full coordinate system.
From the 8th century AD onward, astronomers developed methods for approximating and calculating the direction to Makkah (qibla)—and its distance—from any location on the Earth.^[3] From the 9th
century onward they were using spherical trigonometry and map projection methods to determine these quantities accurately. The calculation is essentially the conversion of the equatorial polar
coordinates of Mecca (i.e. its longitude and latitude) to its polar coordinates (i.e. its qibla and distance) relative to a system whose reference meridian is the great circle through the given
location and the Earth's poles, and whose polar axis is the line through the location and its antipodal point.^[4]
There are various accounts of the introduction of polar coordinates as part of a formal coordinate system. The full history of the subject is described in Harvard professor Julian Lowell Coolidge's
Origin of Polar Coordinates.^[5] Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the concepts in the mid-seventeenth century. Saint-Vincent wrote about them privately in
1625 and published his work in 1647, while Cavalieri published his in 1635 with a corrected version appearing in 1653. Cavalieri first used polar coordinates to solve a problem relating to the area
within an Archimedean spiral. Blaise Pascal subsequently used polar coordinates to calculate the length of parabolic arcs.
In Method of Fluxions (written 1671, published 1736), Sir Isaac Newton examined the transformations between polar coordinates, which he referred to as the "Seventh Manner; For Spirals", and nine
other coordinate systems.^[6] In the journal Acta Eruditorum (1691), Jacob Bernoulli used a system with a point on a line, called the pole and polar axis respectively. Coordinates were specified by
the distance from the pole and the angle from the polar axis. Bernoulli's work extended to finding the radius of curvature of curves expressed in these coordinates.
The actual term polar coordinates has been attributed to Gregorio Fontana and was used by 18th-century Italian writers. The term appeared in English in George Peacock's 1816 translation of Lacroix's
Differential and Integral Calculus.^[7]^[8] Alexis Clairaut was the first to think of polar coordinates in three dimensions, and Leonhard Euler was the first to actually develop them.^[5]
The radial coordinate is often denoted by r, and the angular coordinate by φ, θ or t. The angular coordinate is specified as φ by ISO standard 31-11.
Angles in polar notation are generally expressed in either degrees or radians (2π rad being equal to 360°). Degrees are traditionally used in navigation, surveying, and many applied disciplines,
while radians are more common in mathematics and mathematical physics.^[9]
In many contexts, a positive angular coordinate means that the angle φ is measured counterclockwise from the axis.
In mathematical literature, the polar axis is often drawn horizontal and pointing to the right.
Uniqueness of polar coordinates
Adding any number of full turns (360°) to the angular coordinate does not change the corresponding direction. Also, a negative radial coordinate is best interpreted as the corresponding positive
distance measured in the opposite direction. Therefore, the same point can be expressed with an infinite number of different polar coordinates (r, φ ± n×360°) or (−r, φ ± (2n + 1)180°), where n is
any integer.^[10] Moreover, the pole itself can be expressed as (0, φ) for any angle φ.^[11]
Where a unique representation is needed for any point, it is usual to limit r to non-negative numbers (r ≥ 0) and φ to the interval [0, 360°) or (−180°, 180°] (in radians, [0, 2π) or (−π, π]).^[12]
One must also choose a unique azimuth for the pole, e.g., φ = 0.
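As a concrete illustration, the point (3, 60°) can equally be written as (3, 420°), (3, −300°), or (−3, 240°): adding a full turn leaves the direction unchanged, and negating the radius while adding 180° points back to the same place.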
Converting between polar and Cartesian coordinates[edit]
The polar coordinates r and φ can be converted to the Cartesian coordinates x and y by using the trigonometric functions sine and cosine:
$x = r \cos \varphi \,$
$y = r \sin \varphi \,$
The Cartesian coordinates x and y can be converted to polar coordinates r and φ with r ≥ 0 and φ in the interval (−π, π] by:^[13]
$r = \sqrt{x^2 + y^2} \quad$ (as in the Pythagorean theorem or the Euclidean norm), and
$\varphi = \operatorname{atan2}(y, x) \quad$,
where atan2 is a common variation on the arctangent function defined as
$\operatorname{atan2}(y, x) = \begin{cases} \arctan(\frac{y}{x}) & \mbox{if } x > 0\\ \arctan(\frac{y}{x}) + \pi & \mbox{if } x < 0 \mbox{ and } y \ge 0\\ \arctan(\frac{y}{x}) - \pi & \mbox{if } x < 0 \mbox{ and } y < 0\\ \frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y > 0\\ -\frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y < 0\\ \text{undefined} & \mbox{if } x = 0 \mbox{ and } y = 0 \end{cases}$
The value of φ above is the principal value of the complex number function arg applied to x+iy. An angle in the range [0, 2π) may be obtained by adding 2π to the value in case it is negative.
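For example, the conversions can be written as a short Python sketch, using the standard library's math.atan2, which implements the piecewise definition above:

import math

def cartesian_to_polar(x, y):
    # Returns (r, phi) with r >= 0 and phi in (-pi, pi].
    r = math.hypot(x, y)      # sqrt(x^2 + y^2)
    phi = math.atan2(y, x)    # note: atan2(0.0, 0.0) returns 0.0, whereas the
    return r, phi             # mathematical definition leaves it undefined

def polar_to_cartesian(r, phi):
    return r * math.cos(phi), r * math.sin(phi)

print(cartesian_to_polar(0.0, 2.0))   # (2.0, 1.5707963...), i.e. (2, pi/2)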
Polar equation of a curve[edit]
The equation defining an algebraic curve expressed in polar coordinates is known as a polar equation. In many cases, such an equation can simply be specified by defining r as a function of φ. The
resulting curve then consists of points of the form (r(φ), φ) and can be regarded as the graph of the polar function r.
Different forms of symmetry can be deduced from the equation of a polar function r. If r(−φ) = r(φ) the curve will be symmetrical about the horizontal (0°/180°) ray, if r(π − φ) = r(φ) it will be
symmetric about the vertical (90°/270°) ray, and if r(φ − α) = r(φ) it will be rotationally symmetric α counterclockwise about the pole.
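For instance, r(φ) = 2 cos φ satisfies r(−φ) = 2 cos(−φ) = r(φ), so this curve (a circle of diameter 2 passing through the pole) is symmetric about the horizontal ray.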
Because of the circular nature of the polar coordinate system, many curves can be described by a rather simple polar equation, whereas their Cartesian form is much more intricate. Among the best
known of these curves are the polar rose, Archimedean spiral, lemniscate, limaçon, and cardioid.
For the circle, line, and polar rose below, it is understood that there are no restrictions on the domain and range of the curve.
Circle[edit]
The general equation for a circle with a center at (r[0], $\gamma$) and radius a is
$r^2 - 2 r r_0 \cos(\varphi - \gamma) + r_0^2 = a^2.\,$
This can be simplified in various ways, to conform to more specific cases, such as the equation
$r(\varphi)=a \,$
for a circle with a center at the pole and radius a.^[14]
When r[0] = a, or when the origin lies on the circle, the equation becomes
$r = 2 a\cos(\varphi - \gamma)$.
In the general case, the equation can be solved for r, giving
$r = r_0 \cos(\varphi - \gamma) + \sqrt{a^2 - r_0^2 \sin^2(\varphi - \gamma)}$,
the solution with a minus sign in front of the square root gives the same curve.
Line[edit]
Radial lines (those running through the pole) are represented by the equation
$\varphi = \gamma \,$,
where ɣ is the angle of elevation of the line; that is, ɣ = arctan m where m is the slope of the line in the Cartesian coordinate system. The non-radial line that crosses the radial line φ = ɣ
perpendicularly at the point (r[0], ɣ) has the equation
$r(\varphi) = {r_0}\sec(\varphi-\gamma). \,$
Otherwise stated (r[0], ɣ) is the point in which the tangent intersects the imaginary circle of radius r[0].
Polar rose[edit]
A polar rose is a famous mathematical curve that looks like a petaled flower, and that can be expressed as a simple polar equation,
$r(\varphi) = a \cos (k\varphi + \gamma_0)\,$
for any constant ɣ[0] (including 0). If k is an integer, these equations will produce a k-petaled rose if k is odd, or a 2k-petaled rose if k is even. If k is rational but not an integer, a rose-like
shape may form but with overlapping petals. Note that these equations never define a rose with 2, 6, 10, 14, etc. petals. The variable a represents the length of the petals of the rose.
Archimedean spiral[edit]
The Archimedean spiral is a famous spiral that was discovered by Archimedes, which also can be expressed as a simple polar equation. It is represented by the equation
$r(\varphi) = a+b\varphi. \,$
Changing the parameter a will turn the spiral, while b controls the distance between the arms, which for a given spiral is always constant. The Archimedean spiral has two arms, one for φ > 0 and one
for φ < 0. The two arms are smoothly connected at the pole. Taking the mirror image of one arm across the 90°/270° line will yield the other arm. This curve is notable as one of the first curves,
after the conic sections, to be described in a mathematical treatise, and as being a prime example of a curve that is best defined by a polar equation.
Conic sections[edit]
A conic section with one focus on the pole and the other somewhere on the 0° ray (so that the conic's major axis lies along the polar axis) is given by:
$r = { \ell\over {1 + e \cos \varphi}}$
where e is the eccentricity and $\ell$ is the semi-latus rectum (the perpendicular distance at a focus from the major axis to the curve). If e > 1, this equation defines a hyperbola; if e = 1, it
defines a parabola; and if e < 1, it defines an ellipse. The special case e = 0 of the latter results in a circle of radius $\ell$.
Intersection of two polar curves[edit]
The graphs of two polar functions $r=f(\theta)$ and $r=g(\theta)$ have possible intersections in 3 cases:
1. In the origin if the equations $f(\theta)=0$ and $g(\theta)=0$ have at least one solution each.
2. All the points $[g(\theta_i),\theta_i]$ where $\theta_i$ are the solutions to the equation $f(\theta)=g(\theta)$.
3. All the points $[g(\theta_i),\theta_i]$ where $\theta_i$ are the solutions to the equation $f(\theta+(2k+1)\pi)=-g(\theta)$ where $k$ is an integer.
Complex numbers[edit]
Every complex number can be represented as a point in the complex plane, and can therefore be expressed by specifying either the point's Cartesian coordinates (called rectangular or Cartesian form)
or the point's polar coordinates (called polar form). The complex number z can be represented in rectangular form as
$z = x + iy\,$
where i is the imaginary unit, or can alternatively be written in polar form (via the conversion formulae given above) as
$z = r\cdot(\cos\varphi+i\sin\varphi)$
and from there as
$z = re^{i\varphi} \,$
where e is Euler's number, which are equivalent as shown by Euler's formula.^[15] (Note that this formula, like all those involving exponentials of angles, assumes that the angle φ is expressed in
radians.) To convert between the rectangular and polar forms of a complex number, the conversion formulae given above can be used.
For the operations of multiplication, division, and exponentiation of complex numbers, it is generally much simpler to work with complex numbers expressed in polar form rather than rectangular form.
From the laws of exponentiation:
$r_0 e^{i\varphi_0} \cdot r_1 e^{i\varphi_1}=r_0 r_1 e^{i(\varphi_0 + \varphi_1)} \,$
$\frac{r_0 e^{i\varphi_0}}{r_1 e^{i\varphi_1}}=\frac{r_0}{r_1}e^{i(\varphi_0 - \varphi_1)} \,$
$(re^{i\varphi})^n=r^ne^{in\varphi} \,$
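As a worked example, $2e^{i\pi/6} \cdot 3e^{i\pi/3} = 6e^{i\pi/2} = 6i$: the moduli multiply and the arguments add. The same product in rectangular form, $(\sqrt{3}+i)(\tfrac{3}{2}+\tfrac{3\sqrt{3}}{2}i)$, requires expanding four terms to reach the same result.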
Calculus[edit]
Calculus can be applied to equations expressed in polar coordinates.^[16]^[17]
The angular coordinate φ is expressed in radians throughout this section, which is the conventional choice when doing calculus.
Differential calculus[edit]
Using x = r cos φ and y = r sin φ , one can derive a relationship between derivatives in Cartesian and polar coordinates. For a given function, u(x,y), it follows that
$r \frac{\partial u}{\partial r} = r \frac{\partial u}{\partial x}\frac{\partial x}{\partial r} + r \frac{\partial u}{\partial y}\frac{\partial y}{\partial r},$
$\frac{\partial u}{\partial \varphi} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial \varphi} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial \varphi},$
$r \frac{\partial u}{\partial r} = r \frac{\partial u}{\partial x} \cos \varphi + r \frac{\partial u}{\partial y} \sin \varphi = x \frac{\partial u}{\partial x} + y \frac{\partial u}{\partial y},$
$\frac{\partial u}{\partial \varphi} = - \frac{\partial u}{\partial x} r \sin \varphi + \frac{\partial u}{\partial y} r \cos \varphi = -y \frac{\partial u}{\partial x} + x \frac{\partial u}{\partial y}.$
Hence, we have the following formulae:
$r \frac{\partial}{\partial r}= x \frac{\partial}{\partial x} + y \frac{\partial}{\partial y} \,$
$\frac{\partial}{\partial \varphi} = -y \frac{\partial}{\partial x} + x \frac{\partial}{\partial y} .$
Using the inverse coordinates transformation, an analogous reciprocal relationship can be derived between the derivatives. Given a function u(r,φ), it follows that
$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial u}{\partial \varphi}\frac{\partial \varphi}{\partial x},$
$\frac{\partial u}{\partial y} = \frac{\partial u}{\partial r}\frac{\partial r}{\partial y} + \frac{\partial u}{\partial \varphi}\frac{\partial \varphi}{\partial y},$
$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial r}\frac{x}{\sqrt{x^2+y^2}} - \frac{\partial u}{\partial \varphi}\frac{y}{x^2+y^2} = \cos \varphi \frac{\partial u}{\partial r} - \frac{1}{r} \sin \varphi \frac{\partial u}{\partial \varphi},$
$\frac{\partial u}{\partial y} = \frac{\partial u}{\partial r}\frac{y}{\sqrt{x^2+y^2}} + \frac{\partial u}{\partial \varphi}\frac{x}{x^2+y^2} = \sin \varphi \frac{\partial u}{\partial r} + \frac{1}{r} \cos \varphi \frac{\partial u}{\partial \varphi}.$
Hence, we have the following formulae:
$\frac{\partial}{\partial x} = \cos \varphi \frac{\partial}{\partial r} - \frac{1}{r} \sin \varphi \frac{\partial}{\partial \varphi} \,$
$\frac{\partial}{\partial y} = \sin \varphi \frac{\partial}{\partial r} + \frac{1}{r} \cos \varphi \frac{\partial}{\partial \varphi}.$
To find the Cartesian slope of the tangent line to a polar curve r(φ) at any given point, the curve is first expressed as a system of parametric equations.
$x=r(\varphi)\cos\varphi \,$
$y=r(\varphi)\sin\varphi \,$
Differentiating both equations with respect to φ yields
$\frac{dx}{d\varphi}=r'(\varphi)\cos\varphi-r(\varphi)\sin\varphi \,$
$\frac{dy}{d\varphi}=r'(\varphi)\sin\varphi+r(\varphi)\cos\varphi. \,$
Dividing the second equation by the first yields the Cartesian slope of the tangent line to the curve at the point (r(φ), φ):
$\frac{dy}{dx} = \frac{r'(\varphi)\sin\varphi + r(\varphi)\cos\varphi}{r'(\varphi)\cos\varphi - r(\varphi)\sin\varphi}.$
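As a check, for the circle r(φ) = a one has r′(φ) = 0, so the slope reduces to $\frac{a\cos\varphi}{-a\sin\varphi} = -\cot\varphi$, the negative reciprocal of the slope $\tan\varphi$ of the radius: the tangent to a circle is perpendicular to the radius, as expected.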
For other useful formulas including divergence, gradient, and Laplacian in polar coordinates, see curvilinear coordinates.
Integral calculus (arc length)[edit]
The arc length (length of a line segment) defined by a polar function is found by integration over the curve r(φ). Let L denote this length along the curve starting from point A through to point B, where these points correspond to φ = a and φ = b such that 0 < b − a < 2π. The length of L is given by the following integral
$L = \int_a^b \sqrt{ \left[r(\varphi)\right]^2 + \left[ {{dr(\varphi) } \over { d\varphi }} \right] ^2 } d\varphi$
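As a check, for the circle r(φ) = a the derivative term vanishes and the integrand is simply a, so integrating over a full turn recovers the circumference 2πa.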
Integral calculus (area)[edit]
Let R denote the region enclosed by a curve r(φ) and the rays φ = a and φ = b, where 0 < b − a ≤ 2π. Then, the area of R is
$\frac12\int_a^b \left[r(\varphi)\right]^2\, d\varphi.$
This result can be found as follows. First, the interval [a, b] is divided into n subintervals, where n is an arbitrary positive integer. Thus Δφ, the length of each subinterval, is equal to b − a
(the total length of the interval), divided by n, the number of subintervals. For each subinterval i = 1, 2, …, n, let φ[i] be the midpoint of the subinterval, and construct a sector with the center
at the pole, radius r(φ[i]), central angle Δφ and arc length r(φ[i])Δφ. The area of each constructed sector is therefore equal to
$\left[r(\varphi_i)\right]^2 \pi \cdot \frac{\Delta \varphi}{2\pi} = \frac{1}{2}\left[r(\varphi_i)\right]^2 \Delta \varphi.$
Hence, the total area of all of the sectors is
$\sum_{i=1}^n \tfrac12r(\varphi_i)^2\,\Delta\varphi.$
As the number of subintervals n is increased, the approximation of the area continues to improve. In the limit as n → ∞, the sum becomes the Riemann sum for the above integral.
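As a check, applying the area formula to the circle r(φ) = a over a full turn gives $\frac12\int_0^{2\pi} a^2\, d\varphi = \pi a^2$, the familiar area of a circle.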
A mechanical device that computes area integrals is the planimeter, which measures the area of plane figures by tracing them out: this replicates integration in polar coordinates by adding a joint so
that the 2-element linkage effects Green's theorem, converting the quadratic polar integral to a linear integral.
Using Cartesian coordinates, an infinitesimal area element can be calculated as dA = dx dy. The substitution rule for multiple integrals states that, when using other coordinates, the Jacobian
determinant of the coordinate conversion formula has to be considered:
$J = \det\frac{\partial(x,y)}{\partial(r,\varphi)} =\begin{vmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \varphi} \\[8pt] \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \varphi} \end{vmatrix} =\begin{vmatrix} \cos\varphi & -r\sin\varphi \\ \sin\varphi & r\cos\varphi \end{vmatrix} =r\cos^2\varphi + r\sin^2\varphi = r.$
Hence, an area element in polar coordinates can be written as
$dA = dx\,dy\ = J\,dr\,d\varphi = r\,dr\,d\varphi.$
Now, a function that is given in polar coordinates can be integrated as follows:
$\iint_R f(x,y) \, dA = \int_a^b \int_0^{r(\varphi)} f(r,\varphi)\,r\,dr\,d\varphi.$
Here, R is the same region as above, namely, the region enclosed by a curve r(φ) and the rays φ = a and φ = b.
The formula for the area of R mentioned above is retrieved by taking f identically equal to 1. A more surprising application of this result yields the Gaussian integral
$\int_{-\infty}^\infty e^{-x^2} \, dx = \sqrt\pi.$
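The standard derivation squares the integral and converts to polar coordinates: $\left(\int_{-\infty}^\infty e^{-x^2}\,dx\right)^2 = \iint_{\mathbb{R}^2} e^{-(x^2+y^2)}\,dx\,dy = \int_0^{2\pi}\!\int_0^\infty e^{-r^2}\,r\,dr\,d\varphi = 2\pi \cdot \tfrac12 = \pi,$ where the extra factor of r from the area element makes the inner integral elementary.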
Vector calculus[edit]
Vector calculus can also be applied to polar coordinates. For a planar motion, let $\mathbf{r}$ be the position vector (rcos(φ), rsin(φ)), with r and φ depending on time t.
We define the unit vectors
$\hat{\mathbf{r}} = (\cos(\varphi), \sin(\varphi))$
in the direction of r and
$\hat{\boldsymbol\varphi}=(-\sin(\varphi),\cos(\varphi)) = \hat {\mathbf{k}} \times \hat {\mathbf{r}} \ ,$
in the plane of the motion perpendicular to the radial direction, where $\hat{\mathbf {k}}$ is a unit vector normal to the plane of the motion.
$\mathbf{r} = (x, \ y ) = r (\cos \varphi ,\ \sin \varphi) = r \hat{\mathbf{r}}\ ,$
$\dot {\mathbf r} = (\dot x, \ \dot y ) = \dot r (\cos \varphi ,\ \sin \varphi) + r \dot \varphi (-\sin \varphi ,\ \cos \varphi) = \dot r \hat {\mathbf r} + r \dot \varphi \hat {\boldsymbol{\varphi}} \ ,$
$\ddot {\mathbf r} = (\ddot x, \ \ddot y ) = \ddot r (\cos \varphi ,\ \sin \varphi) + 2\dot r \dot \varphi (-\sin \varphi ,\ \cos \varphi) + r\ddot \varphi (-\sin \varphi ,\ \cos \varphi) - r {\dot \varphi }^2 (\cos \varphi ,\ \sin \varphi) = \left( \ddot r - r\dot\varphi^2 \right) \hat{\mathbf r} + \left( r\ddot\varphi + 2\dot r \dot\varphi \right) \hat{\boldsymbol{\varphi}} = (\ddot r - r\dot\varphi^2)\hat{\mathbf{r}} + \frac{1}{r}\; \frac{d}{dt} \left(r^2\dot\varphi\right) \hat{\boldsymbol{\varphi}}$
Centrifugal and Coriolis terms[edit]
The term $r\dot\varphi^2$ is sometimes referred to as the centrifugal term, and the term $2\dot r \dot\varphi$ as the Coriolis term. For example, see Shankar.^[18] Although these equations bear some
resemblance in form to the centrifugal and Coriolis effects found in rotating reference frames, nonetheless these are not the same things.^[19] For example, the physical centrifugal and Coriolis
forces appear only in non-inertial frames of reference. In contrast, these terms that appear when acceleration is expressed in polar coordinates are a mathematical consequence of differentiation;
these terms appear wherever polar coordinates are used. In particular, these terms appear even when polar coordinates are used in inertial frames of reference, where the physical centrifugal and
Coriolis forces never appear.
Co-rotating frame[edit]
For a particle in planar motion, one approach to attaching physical significance to these terms is based on the concept of an instantaneous co-rotating frame of reference.^[20] To define a
co-rotating frame, first an origin is selected from which the distance r(t) to the particle is defined. An axis of rotation is set up that is perpendicular to the plane of motion of the particle, and
passing through this origin. Then, at the selected moment t, the rate of rotation of the co-rotating frame Ω is made to match the rate of rotation of the particle about this axis, dφ/dt. Next, the
terms in the acceleration in the inertial frame are related to those in the co-rotating frame. Let the location of the particle in the inertial frame be (r(t), φ(t)), and in the co-rotating frame be
(r(t), φ′(t)). Because the co-rotating frame rotates at the same rate as the particle, dφ′/dt = 0. The fictitious centrifugal force in the co-rotating frame is mrΩ^2, radially outward. The velocity
of the particle in the co-rotating frame also is radially outward, because dφ′/dt = 0. The fictitious Coriolis force therefore has a value −2m(dr/dt)Ω, pointed in the direction of increasing φ only.
Thus, using these forces in Newton's second law we find:
$\boldsymbol{F} + \boldsymbol{F_{cf}} + \boldsymbol{F_{Cor}} = m \ddot{\boldsymbol{r}} \ ,$
where over dots represent time differentiations, and F is the net real force (as opposed to the fictitious forces). In terms of components, this vector equation becomes:
$F_r + mr\Omega^2 = m\ddot r$
$F_{\varphi}-2m\dot r \Omega = mr \ddot {\varphi} \ ,$
which can be compared to the equations for the inertial frame:
$F_r = m \ddot r -mr \dot {\varphi}^2 \$
$F_{\varphi} = mr \ddot \varphi +2m \dot r \dot {\varphi} \ .$
This comparison, plus the recognition that by the definition of the co-rotating frame at time t it has a rate of rotation Ω = dφ/dt, shows that we can interpret the terms in the acceleration
(multiplied by the mass of the particle) as found in the inertial frame as the negative of the centrifugal and Coriolis forces that would be seen in the instantaneous, non-inertial co-rotating frame.
For general motion of a particle (as opposed to simple circular motion), the centrifugal and Coriolis forces in a particle's frame of reference commonly are referred to the instantaneous osculating
circle of its motion, not to a fixed center of polar coordinates. For more detail, see centripetal force.
Connection to spherical and cylindrical coordinates[edit]
The polar coordinate system is extended into three dimensions with two different coordinate systems, the cylindrical and spherical coordinate system.
Applications[edit]
Polar coordinates are two-dimensional and thus they can be used only where point positions lie on a single two-dimensional plane. They are most appropriate in any context where the phenomenon being
considered is inherently tied to direction and length from a center point. For instance, the examples above show how elementary polar equations suffice to define curves—such as the Archimedean
spiral—whose equation in the Cartesian coordinate system would be much more intricate. Moreover, many physical systems—such as those concerned with bodies moving around a central point or with
phenomena originating from a central point—are simpler and more intuitive to model using polar coordinates. The initial motivation for the introduction of the polar system was the study of circular
and orbital motion.
Polar coordinates are used often in navigation, as the destination or direction of travel can be given as an angle and distance from the object being considered. For instance, aircraft use a slightly
modified version of the polar coordinates for navigation. In this system, the one generally used for any sort of navigation, the 0° ray is generally called heading 360, and the angles continue in a
clockwise direction, rather than counterclockwise, as in the mathematical system. Heading 360 corresponds to magnetic north, while headings 90, 180, and 270 correspond to magnetic east, south, and
west, respectively.^[21] Thus, an aircraft traveling 5 nautical miles due east will be traveling 5 units at heading 90 (read zero-niner-zero by air traffic control).^[22]
Systems displaying radial symmetry provide natural settings for the polar coordinate system, with the central point acting as the pole. A prime example of this usage is the groundwater flow equation
when applied to radially symmetric wells. Systems with a radial force are also good candidates for the use of the polar coordinate system. These systems include gravitational fields, which obey the
inverse-square law, as well as systems with point sources, such as radio antennas.
Radially asymmetric systems may also be modeled with polar coordinates. For example, a microphone's pickup pattern illustrates its proportional response to an incoming sound from a given direction,
and these patterns can be represented as polar curves. The curve for a standard cardioid microphone, the most common unidirectional microphone, can be represented as r = 0.5 + 0.5sin(φ) at its target
design frequency.^[23] The pattern shifts toward omnidirectionality at lower frequencies.
References[edit]
• Adams, Robert; Christopher Essex (2013). Calculus: a complete course (Eighth ed.). Pearson Canada Inc. ISBN 978-0-321-78107-9.
• Anton, Howard; Irl Bivens, Stephen Davis (2002). Calculus (Seventh ed.). Anton Textbooks, Inc. ISBN 0-471-38157-8.
• Finney, Ross; George Thomas, Franklin Demana, Bert Waits (June 1994). Calculus: Graphical, Numerical, Algebraic (Single Variable Version ed.). Addison-Wesley Publishing Co. ISBN 0-201-55478-X.
1. ^ Brown, Richard G. (1997). Andrew M. Gleason, ed. Advanced Mathematics: Precalculus with Discrete Mathematics and Data Analysis. Evanston, Illinois: McDougal Littell. ISBN 0-395-77114-5.
2. ^ Friendly, Michael. "Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization". Retrieved 2006-09-10.
3. ^ King, David A. (2005). The Sacred Geography of Islam. p.166. In Koetsier, Teun; Luc, Bergmans, eds. (2005). Mathematics and the Divine: A Historical Study. Amsterdam: Elsevier. pp. 162–78. ISBN
4. ^ King (2005, p. 169). The calculations were as accurate as could be achieved under the limitations imposed by their assumption that the Earth was a perfect sphere.
5. ^ ^a ^b Coolidge, Julian (1952). "The Origin of Polar Coordinates". American Mathematical Monthly (Mathematical Association of America) 59 (2): 78–85. doi:10.2307/2307104. JSTOR 2307104.
6. ^ Boyer, C. B. (1949). "Newton as an Originator of Polar Coordinates". American Mathematical Monthly (Mathematical Association of America) 56 (2): 73–78. doi:10.2307/2306162. JSTOR 2306162.
7. ^ Miller, Jeff. "Earliest Known Uses of Some of the Words of Mathematics". Retrieved 2006-09-10.
8. ^ Smith, David Eugene (1925). History of Mathematics, Vol II. Boston: Ginn and Co. p. 324.
9. ^ Serway, Raymond A.; Jewett, Jr., John W. (2005). Principles of Physics. Brooks/Cole—Thomson Learning. ISBN 0-534-49143-X.
10. ^ "Polar Coordinates and Graphing" (PDF). 2006-04-13. Retrieved 2006-09-22.
11. ^ Lee, Theodore; David Cohen, David Sklar (2005). Precalculus: With Unit-Circle Trigonometry (Fourth ed.). Thomson Brooks/Cole. ISBN 0-534-40230-5.
12. ^ Stewart, Ian; David Tall (1983). Complex Analysis (the Hitchhiker's Guide to the Plane). Cambridge University Press. ISBN 0-521-28763-4.
13. ^ Torrence, Bruce Follett; Eve Torrence (1999). The Student's Introduction to Mathematica. Cambridge University Press. ISBN 0-521-59461-8.
14. ^ Claeys, Johan. "Polar coordinates". Retrieved 2006-05-25.
15. ^ Smith, Julius O. (2003). "Euler's Identity". Mathematics of the Discrete Fourier Transform (DFT). W3K Publishing. ISBN 0-9745607-0-7. Retrieved 2006-09-22.
16. ^ Husch, Lawrence S. "Areas Bounded by Polar Curves". Retrieved 2006-11-25.
17. ^ Lawrence S. Husch. "Tangent Lines to Polar Graphs". Retrieved 2006-11-25.
18. ^ Ramamurti Shankar (1994). Principles of Quantum Mechanics (2nd ed.). Springer. p. 81. ISBN 0-306-44790-8.
19. ^ In particular, the angular rate appearing in the polar coordinate expressions is that of the particle under observation, $\dot{\varphi}$, while that in classical Newtonian mechanics is the
angular rate Ω of a rotating frame of reference.
20. ^ For the following discussion, see John R Taylor (2005). Classical Mechanics. University Science Books. p. §9.10, pp. 358–359. ISBN 1-891389-22-X.
21. ^ Santhi, Sumrit. "Aircraft Navigation System". Retrieved 2006-11-26.
22. ^ "Emergency Procedures" (PDF). Retrieved 2007-01-15.
23. ^ Eargle, John (2005). Handbook of Recording Engineering (Fourth ed.). Springer. ISBN 0-387-28470-2.
External links[edit] | {"url":"http://blekko.com/wiki/Polar_coordinate_system?source=672620ff","timestamp":"2014-04-21T15:04:52Z","content_type":null,"content_length":"114084","record_id":"<urn:uuid:3e024e16-1e6c-4fea-a5f6-51d1618ead16>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Brief Introduction To Fluid Mechanics 0th... Textbook Solutions | Chegg.com
Calculate the power by using the formula
$P = \frac{W}{T}$
Here, work done is W, and time is T.
Calculate the dimension of work done, in the MLT system, by using the formula
$W = F \times D$
Here, force is F, and distance is D.
Calculate the dimension of force, in the MLT system, by using the formula
$F = ma = m\,\frac{v}{t}$
Here, mass is m, acceleration is a, velocity is v, and time is t.
Substitute M for m, L for D, and T for t. Since velocity has dimension L/T, the dimension of force is
$[F] = M \times \frac{L/T}{T} = MLT^{-2}$
Substitute $MLT^{-2}$ for F, and L for D, in the formula for work done:
$[W] = (MLT^{-2})(L) = ML^{2}T^{-2}$
Substitute $ML^{2}T^{-2}$ for W, and T for T, in the formula for power:
$[P] = \frac{ML^{2}T^{-2}}{T} = ML^{2}T^{-3}$
Therefore, the dimension of the power in the MLT system is $ML^{2}T^{-3}$.
From the table 1.1, “Dimension Associated with Common Physical Quantities,” the dimension listed for power in the MLT system is likewise $ML^{2}T^{-3}$, which confirms the result. | {"url":"http://www.chegg.com/homework-help/a-brief-introduction-to-fluid-mechanics-0th-edition-solutions-9780470372074","timestamp":"2014-04-17T04:18:49Z","content_type":null,"content_length":"50351","record_id":"<urn:uuid:27f96c36-34eb-4a63-b608-6afdd1669904>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum computing research edges toward practicality
Posted: Oct 05, 2010
(Nanowerk News) An important step – one that is essential to the ultimate construction of a quantum computer – was taken for the first time by physicists at UC Santa Barbara. The discovery is
published in the current issue of the journal Nature.
The research involves the entanglement of three quantum bits of information, or qubits. Before now, entanglement research in the solid state has only been developed with two qubits. The UCSB finding
comes from a collaboration of the research groups of physicists Andrew Cleland and John Martinis. Graduate student Matthew Neeley is the first author on the Nature paper. Meanwhile, a research group
at Yale reported the same result.
"These entangled states are interesting in their own right, but they are also very important from the perspective of the larger, long-term goal of creating a quantum computer with many qubits," said
The Cleland-Martinis group is studying superconducting quantum circuits and their potential uses in quantum computing. Quantum circuits are fabricated on microchips using techniques similar to those
used in making conventional computers. When cooled to very low temperatures – just a few hundredths of a degree above absolute zero – they become superconducting and exhibit quantum effects.
Essentially behaving like artificial atoms, they can be manipulated and measured using electrical signals. Unlike atoms, however, these circuits can be designed to have only the properties that the
scientists desire for various experiments 末 providing a tool for exploring many of the fundamental aspects of quantum mechanics.
The simplest type of quantum system is one with just two possible states, known as a quantum bit by analogy with the classical bits that are the fundamental elements of conventional computers.
UCSB's team uses quantum circuits of a type known as phase qubits, designed to behave as two-level quantum systems. In this most recent work, the team fabricated and operated a device with three
coupled phase qubits, using them to produce entangled quantum states.
"Entanglement is one of the strangest and most counterintuitive features of quantum mechanics," said Neeley. "It is a property of certain kinds of quantum states in which different parts of the
system are strongly correlated with each other. This is often discussed in the context of bipartite systems with just two components. However, when one considers tripartite or larger quantum systems,
the physics of entanglement becomes even richer and more interesting."
In this work, the team produced entangled states of three qubits. Neeley explained that unlike the two-qubit case, three qubits can be entangled in two fundamentally different ways, exemplified by a
state known as GHZ, and another state known as W. The GHZ state is highly entangled but fragile, and measuring just one of the qubits collapses the other two into an unentangled state.
"The W state is in a certain sense less entangled, but nevertheless more robustly so 末 two thirds of the time, measuring one qubit will still leave the other two in an entangled state," Neeley said.
"We produced both of these states with our phase qubits, and measured their fidelity compared to the theoretical ideal states. Experimentally, the fidelity is never perfect, but we showed that it is
high enough to prove that the three qubits are entangled."
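For reference, the standard forms of these two states are $|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$ and $|W\rangle = \tfrac{1}{\sqrt{3}}(|001\rangle + |010\rangle + |100\rangle)$. Measuring one qubit of the GHZ state leaves the other two in the unentangled state |00⟩ or |11⟩, while measuring one qubit of the W state yields 0 with probability 2/3, leaving the remaining pair in the entangled state $(|01\rangle + |10\rangle)/\sqrt{2}$, matching the "two thirds of the time" figure quoted above.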
"Entanglement is a resource that gives quantum computers an advantage over classical computers, and so producing multipartite entanglement is an important step for any system with which we might hope
to construct a quantum computer," said Neeley.
The same result was published simultaneously, based on similar research from the group of Rob Schoelkopf, a physics professor at Yale. Both results are the first work showing three coupled
superconducting qubits. This is a significant step toward scaling to increasingly larger numbers of qubits.
| {"url":"http://www.nanowerk.com/news/newsid=18342.php","timestamp":"2014-04-19T22:05:50Z","content_type":null,"content_length":"38715","record_id":"<urn:uuid:5e484161-ee00-4311-8dd8-3c82c70ba5d8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Metric (idea)
There are a few other uses of the word metric in mathematics, that all revolve around the concept described above:
• A metric can also be a scalar differential, say ds, which is something you integrate along a curve to find its length. In R^3, it is given by ds^2 = dx^2 + dy^2 + dz^2, essentially a differential statement of Pythagoras' Theorem.
• In a more general context, ds is given by ds^2 = M_ij dx^i dx^j (using abstract index notation), where (M_ij) is a real symmetric matrix, which is also called a metric tensor. In General Relativity, it is denoted g_αβ. | {"url":"http://everything2.com/user/90%2525+fat+free/writeups/Metric","timestamp":"2014-04-17T09:46:35Z","content_type":null,"content_length":"19606","record_id":"<urn:uuid:755abbdb-0b48-4c28-8dba-60a97c77a149>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Upper Darby Trigonometry Tutor
Find an Upper Darby Trigonometry Tutor
...I love to teach, I love to pass what I know on to others. This is an opportunity for you to learn skills and tricks that will usher you into a new realm of understanding some of the subjects
you’re having difficulties with. I am most fulfilled when I demystify what some see as difficult-to-follow, difficult-to-understand, baffling and confusing subjects/topics.
36 Subjects: including trigonometry, English, chemistry, statistics
...One quick note about my cancellation policy, as it's different than most tutors: Cancel one or all sessions at any time, and there is NO CHARGE. Thank you for considering my services, and the
best of luck in all your endeavors! Warm regards, Dr.
14 Subjects: including trigonometry, calculus, physics, geometry
...Most of the time people get hung up on the language or complex symbols used in math and science when really the key to understanding is to be able to look beyond those things and visualize
something physical. I promote using some imagination when looking at these topics, especially in physics. ...
16 Subjects: including trigonometry, Spanish, calculus, physics
...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local
university. I hold degrees in economics and business and an MBA.
13 Subjects: including trigonometry, calculus, algebra 1, geometry
...I developed a high school-level survey course for home-schooled students (details upon request). I have worked successfully with several Aspergers students at Delaware high schools (Wilmington
Charter, St. Mark's, Newark). A solid clinical diagnosis and a thorough IEP are crucial. Most students with Aspergers are uncomfortable in social situations.
32 Subjects: including trigonometry, chemistry, English, biology | {"url":"http://www.purplemath.com/Upper_Darby_Trigonometry_tutors.php","timestamp":"2014-04-16T10:22:40Z","content_type":null,"content_length":"24360","record_id":"<urn:uuid:697ef57c-cc7c-4ca5-a012-39314d563578>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
James Eells
Born: 25 October 1926 in Cleveland, Ohio, USA
Died: 14 February 2007 in Cambridge, England
James Eells attended Western Reserve Academy [2]:-
... until his exuberance led to his expulsion.
However he still was accepted into Bowdoin College, Maine. After studying mathematics there he graduated in 1947 and decided to take a 'gap' year abroad. He went to Turkey where he taught mathematics
at Robert College in Istanbul. Robert College is today Bogazici University (or Bosporos University). He returned to the United States in 1948 and was appointed as an instructor in mathematics at
Amherst College in Amherst, Massachusetts. At this time Amherst was a men's college but it was here that he met Nan Munsell. They married in 1950 and had one son and three daughters.
Eells had not undertaken graduate studies up to this time but after two years teaching at Amherst he decided that he wanted to make a career in mathematics. He applied for graduate study at Harvard
and there began research under Hassler Whitney. He was awarded his doctorate in 1954 for his thesis Geometric Aspects of Integration Theory. After spending a session at the Institute for Advanced
Study at Princeton, he was appointed to the University of California at Berkeley. In these early years of his career, Eells published a number of papers including Geometric aspects of currents and
distributions (1955), (with Charles B Morrey) A variational method in the theory of harmonic integrals (1955), (with Richard F Arens) On embedding uniform and topological spaces (1956), and (with
Charles B Morrey) A variational method in the theory of harmonic integrals (1956).
For a few years Eells taught at the Columbia University, New York. He spent 1963 at Churchill College, Cambridge then was appointed as a full professor at Cornell University in the following year. He
returned to Cambridge to spend the year 1966-67 there and while in England he visited the University of Warwick in the summer of 1967 to run a symposium. I [EFR] was a research student at the
University of Warwick at this time and I met Eells who was captivated by the lively research atmosphere, the energy, and the excitement at the newly established university. Eells was keen to find a
permanent appointment at Warwick and in 1969 he was appointed as the first Professor of Analysis. The year 1967 in which Eells ran the symposium at Warwick was also the year that he published
Singularities of smooth maps which mainly deals with on Morse theory. He writes in the Preface:-
Aside from minor changes these notes form the first half of a course given at Columbia University in 1960 - 61. This is not a text book; it consists of reprinted lecture notes, of an informal,
incomplete and definitely temporary character.
In [2] Elworthy described Eells' mathematics and how it fitted into the way things were developing at Warwick:-
It was an appointment which fitted perfectly with the philosophy of the department at that time, which was to feature research in global, rather than traditional, analysis; and it was already
becoming known as a centre for the global approach to dynamical systems theory. It is tempting to describe global analysis as a holistic approach to mathematics. In it the whole geometry or
topology of the spaces involved play a role, rather than just the equations describing the behaviour or motion in small areas. Non-linearity, especially that caused by curvature, is a prevalent
aspect. A prime example is Eells's most famous article, "Harmonic Mappings of Riemannian Manifolds", published in the American Journal of Mathematics in 1964. Written with J H Sampson of Johns
Hopkins University, it founded the theory of "harmonic maps" and the "non-linear heat flow".
Among the features of mathematics at Warwick were the year-long symposia which brought leading mathematicians in a particular area to spend time at Warwick during the year the symposium ran. Eells
had first been attracted to Warwick through the mini summer symposium he ran in 1967 so once on the permanent staff at Warwick it was natural that he should run year long symposia. This he did with
"Global Analysis" in 1971-72, "Geometry of the Laplace Operator" in 1976-77, and "Partial Differential Equations in Differential Geometry", in 1989-90. However, despite being fully committed to
Warwick, Eells took on another role in addition, namely one at the International Centre for Theoretical Physics at Trieste. In fact his association with this Centre came about in a fairly similar way to his association with the University of Warwick. In the summer of 1972 he organised a symposium there, being a continuation of the 1971-72 Warwick symposium. It was in fact the first mathematics symposium at the International Centre for Theoretical Physics which had, up till then, been only involved with physics. The success of Eells' symposium led to the setting up of a Mathematics Division of the Centre and Eells became its first director in 1986. It was a role he filled for six years in addition to his role at Warwick. The work of the Centre is particularly aimed at helping scientists in Third World countries to participate fully in their specialities, and Eells' role as Director of the Mathematics Division reflected his passion to support mathematicians working in low income countries.
We should say more about Eells' substantial and deep contributions to mathematics. His work on harmonic maps has already been mentioned in connection with his 1964 paper Harmonic Mappings of
Riemannian Manifolds. In fact Eells went on to publish two definitive surveys on the topic with Luc Lemaire who studied for a doctorate under Eells' supervision. These were A report on harmonic maps
(1978) and Another report on harmonic maps (1988) both published in the Bulletin of the London Mathematical Society. In 1992 a selection of Eells' papers on Harmonic maps was published as a book with
this title in 1992. In this book Eells points out that:-
... harmonic maps pervade differential geometry and mathematical physics: they include geodesics, minimal surfaces, harmonic functions, Abelian integrals, Riemannian fibrations with minimal
fibres, holomorphic maps between Kähler manifolds, chiral models, and strings.
Other books written by Eells on this topic were Selected topics in harmonic maps (1983) with Luc Lemaire, Harmonic maps and minimal immersions with symmetries (1992) with Andrea Ratto (another PhD
student of Eells' who received his doctorate in 1987), and Harmonic maps between Riemannian polyhedra (2001) with the Danish mathematician B Fuglede.
David Elworthy describes Eells in [2]:-
Jim Eells was a man of irrepressible enthusiasm for mathematics, and for most other things; especially people, irreverent fun, lots of wine, and music. His interest in the latter ranged across
most styles and he would delight the younger children of his colleagues with lively performances of scatological songs.
David Elworthy writes in [1]:-
Jim Eells had a phenomenal memory for people, matched by an interest in them. It has been claimed that in his early days he could recognise every member of the American Mathematical Society. His
wife Nan became well known to the younger researchers especially, and their families, due to her friendship and splendid dinner parties, with only mild attempts to control the exuberance of her
husband. With Jim came tremendous mathematical excitement combined with an intensity of fun. He is missed a lot.
In 1992 Eells retired from his professorship at the University of Warwick and ended his role as director of the Mathematics Division of the International Centre for Theoretical Physics at Trieste. He
went to Cambridge which became his base, although he maintained his world-wide travels.
Article by: J J O'Connor and E F Robertson
List of References (2 books/articles)
Honours awarded to James Eells
BMC morning speaker 1972
JOC/EFR © August 2007 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
| {"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Eells.html","timestamp":"2014-04-16T07:15:26Z","content_type":null,"content_length":"17138","record_id":"<urn:uuid:e024b9f2-93d7-4b73-b47b-42411427e3e3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
ResearchEd 2013
ResearchEd 2013 took place today. It was a (UK) conference intended to bring together teachers and education academics, to try to better support collaboration between the two. One of the most
pleasing things was the appetite for the conference: 500 attendees got themselves to London on a Saturday, with 400 more on the waiting list. And this for a conference that had never been run before,
advertised mainly through Twitter and blogs. I went — here’s some of my thoughts.
Coe on Evidence
Robert Coe’s talk was my favourite of the day (slides here, PPT). He mentioned the Education Endowment Foundation, and their work on trying to summarise useful educational research (a bit like the
Cochrane Review?) — something I want to look into further.
Coe sounded some notes of caution on transferring research into practice: using his example, “Assessment for Learning” apparently comes out very well in trials, but this did not translate into a
massive effect in practice when the government pushed it. Understanding why would be useful for future efforts to transfer research into practice.
Coe also pointed out that some practices continue to be used despite a lack of evidence for their effectiveness. Tom Bennett previously documented most of the obviously barmy ones (in his book
Teacher Proof, my review here), but as a less obvious example, Coe questioned where the evidence is that classroom observation (teachers observing their peers) improves teaching.
The Effect Size Debate
Coe also cropped up in another interesting session: a debate between Coe and Ollie Orange about whether effect size is a good measure. Coe was for effect size, Orange against. I think Coe’s argument
boiled down to: it’s not ideal, but it is a useful, slightly crude, heuristic in several circumstances (comparing incompatible measures of the same outcome, performing meta analyses).
Orange’s argument was not as convincing. A large part of his argument was that proponents/inventors of effect size do not have maths/statistics degrees (he actually listed them out loud and their
degrees) and that mathematicians do not use effect sizes. Dealing with the first part: I agree that the lack of training could be a warning sign, but it is not itself an argument against effect size.
Science and rationalism are about reasoned arguments, not who said what. Coe said in counter-argument to the second point: why would a pure mathematician use an effect size? It’s a pragmatic measure
used by empirical researchers (in education, psychology, medicine and so forth). It seemed a shame that Orange did not dispense with all this and spend more time on critiquing the mathematical
properties of effect size instead. (Not all the audience might have followed it, but it seems to me that debating effect size requires getting into the mathematics at least a little.)
In the comments afterwards, discussion inevitably moved to Hattie, who based his large meta-meta-analysis on effect size. Coe said that he thought Hattie’s work was “riddled with errors” (which
sounds like it roughly agrees with my assessment of the book). I think it's important not to let inappropriate uses of a statistic become an argument against all uses of the statistic. The mean is a bad measure for skewed data (like salaries), but that does not imply that we should all stop using the mean as a statistical measure.
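(For readers unfamiliar with the statistic: one common effect size, Cohen's d, is simply the difference in group means divided by a pooled standard deviation. A minimal Python sketch, with made-up scores:)

import statistics

def cohens_d(treatment, control):
    # Standardized mean difference: (mean1 - mean2) / pooled standard deviation.
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

print(cohens_d([72, 85, 78, 90, 81], [70, 75, 68, 80, 72]))  # ~1.4, conventionally a "large" effect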
Pick and Mix
A few leftover bits. Ben Goldacre’s keynote was good, and he did admirably well at surviving the nightmare hitch of not being able to display his slides. Useful point: if we start properly assessing
the claims of new education intiatives and products, this will encourage their proponents to make smaller, more reasonable claims. Amanda Spielman in her talk mentioned having a “drawer of debunking
papers” ready to hand out to people who suggested a known-to-be-ineffective initiative to her — I liked that notion. Tom Bennett: good science seeks to disprove itself, not to confirm existing
Overall, I enjoyed the conference and wished I could have gone to a couple more sessions (the conference had six parallel sessions!). However, I gather many of the sessions were recorded, so I should
shortly get my wish. A good sign from my perspective was meeting Sue Sentance there, who now works for Computing At School, and who is keen on encouraging more research collaboration between
computing teachers and academics in the UK. It’s clearly some kind of zeitgeist.
4 responses to “ResearchEd 2013”
1. Reblogged this on The Echo Chamber.
2. HI Neil. Hope you enjoyed the debate as much as I did. Professor Coe was correct to say that ‘Pure’ Mathematicians wouldn’t be interested in the Effect Size but of course Statisticians would be
very interested if it were correct.
I assumed that anybody attending the debate would have already read my blog where I had talked a bit about the Maths of it all, maybe that was an error. I had 6 minutes to persuade a group of
non-Mathematicians that there were problems with the Effect Size. The line of argument that I took was that there has been very little input or interest from Mathematicians and most of the people
involved don’t have Maths degrees. By the end some of the people were starting to ask “Why is it only Education and Psychology that are using the Effect Size?”
To be honest the whole thing was worth it just to hear Professor Coe say that “Hattie’s book is riddled with errors”.
□ Hi Ollie. I agree it’s tricky to get too in-depth in six minutes, but I still think that the line of argument that the people who proposed it are not mathematicians is not in itself a good
enough argument: several developments in various disciplines have arisen from people coming in from outside with a fresh perspective or because they have different needs. But those versed in
the discipline should be able to say why it is wrong. And popular is not necessarily correct: for example, significance tests are problematic for various reasons (e.g. 0.05 is arbitrary,
statistical significance is not practical significance, etc) but they are used in a huge range of disciplines because they are simple and people understand how to do them.
Anyway, I’m pleased the debate took place, I enjoyed it (and am still happy with choosing to attend it vs the other fine-sounding 5 sessions), and thank you for participating in it.
Filed under Uncategorized | {"url":"http://academiccomputing.wordpress.com/2013/09/07/researched-2013/","timestamp":"2014-04-20T01:02:09Z","content_type":null,"content_length":"59456","record_id":"<urn:uuid:e882d078-e2be-4c93-9545-c567efeb1c41>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Gini Coefficient
Once a Lorenz curve is constructed, calculating the Gini coefficient is pretty straightforward. The Gini coefficient is equal to A/(A+B), where A and B are as labeled in the diagram above. (Sometimes
the Gini coefficient is represented as a percentage or an index, in which case it would be equal to (A/(A+B))x100%.)
As stated in the Lorenz curve article, the straight line in the diagram represents perfect equality in a society, and Lorenz curves that are further away from that diagonal line represent higher
levels of inequality. Therefore, larger Gini coefficients represent higher levels of inequality and smaller Gini coefficients represent lower levels of inequality (i.e. higher levels of equality).
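As a sketch, the coefficient can also be approximated from a finite sample of incomes, by building the empirical Lorenz curve and applying the identity Gini = 1 − 2B, which follows from A + B = 1/2 (the area under the diagonal):

import numpy as np

def gini(incomes):
    # Empirical Lorenz curve: cumulative income share vs. cumulative population share.
    x = np.sort(np.asarray(incomes, dtype=float))
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    population = np.linspace(0.0, 1.0, len(lorenz))
    area_b = np.trapz(lorenz, population)   # area under the Lorenz curve (region B)
    return 1.0 - 2.0 * area_b               # Gini = A/(A+B) = 1 - 2B since A+B = 1/2

print(gini([100, 100, 100, 100]))  # perfect equality -> 0.0
print(gini([0, 0, 0, 400]))        # one person holds everything -> 0.75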
In order to mathematically calculate the areas of regions A and B, it is generally necessary to use calculus to calculate the areas below the Lorenz curve and between the Lorenz curve and the
diagonal line. | {"url":"http://economics.about.com/od/measures-of-income-inequality/ss/The-Gini-Coefficient_3.htm","timestamp":"2014-04-18T23:16:27Z","content_type":null,"content_length":"42806","record_id":"<urn:uuid:b1c981cf-dbcb-49cf-a6fb-f71fa1a0b8da>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Urgent help needed please!!!
June 28th 2008, 12:11 AM #1
Jun 2008
Scott is saving for an overseas holiday and he is looking for the best bank in which to invest his money. Three banks are offering different interest rates as follows:
Bank Able: 8.1% (nominal p.a.) compounded monthly
Bank Beble: 8.2% (nominal p.a.) compounded twice yearly
Bank Ceble: 8.4% (effective p.a.) effective interest rate per year
a) compare effective rates to find which bank offers the best return on investments DONE
b) Check your answer to a) by considering the amount of interest earned by each option over a 5 year time period. I used $5000 as example. DONE
c) Scott chooses to invest with bank ceble. Find their quarterly interest rate.
i need help with this one!! how do i find the quarterly interest rate???
Cheers for the help!!
June 28th 2008, 02:40 AM #2
Super Member
May 2006
Lexington, MA (USA)
Hello, slanno!
Bank Ceble: 8.4% (effective p.a.) effective interest rate per year
c) Scott chooses to invest with bank Ceble. Find their quarterly interest rate.
Let $r$ = quarterly interest rate.
If he invests $P$ dollars for a year, his final balance is: . $P(1 + r)^4\;\;{\color{blue}[1]}$
Since the effective rate is 8.4%.
. . his final balance is: . $P + 0.084P \:=\:(1.084)P\;\;{\color{blue}[2]}$
Equate [1] and [2]: . $P(1+r)^4 \:=\:(1.084)P \quad\Rightarrow\quad (1+r)^4 \:=\:1.084$
. . . . . . $1 + r \:=\:\sqrt[4]{1.084} \quad\Rightarrow\quad r \:=\:\sqrt[4]{1.084} - 1$
Therefore: . $r \;=\;0.020369152 \;\approx\;2\%$
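A quick numeric check, here in Python (any calculator will do):

>>> 1.084 ** 0.25 - 1        # quarterly rate implied by an 8.4% effective annual rate
0.0203691...
>>> 1.0203691 ** 4           # compounding four quarters recovers the effective rate
1.0839999...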
| {"url":"http://mathhelpforum.com/math-topics/42603-urgent-help-needed-please.html","timestamp":"2014-04-18T16:58:53Z","content_type":null,"content_length":"35743","record_id":"<urn:uuid:c7c9763a-f2e0-40c7-bb1c-06b860f79d82>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenOffice.org Forum :: Array Formula Difference from Excel
Author Message
Guest Posted: Wed Feb 04, 2004 7:28 am Post subject: Array Formula Difference from Excel
Sorry if this has been asked (no way to search the forum). Assume I have dates in cells A1:A5. In Excel the following formula works fine. In Calc it does not (I get #VALUE). Is this a
bug or does Calc intentionally handle array formulas differently than Excel (i.e. will it ever be changed)?
In Excel
In Calc
SergeM Posted: Wed Feb 04, 2004 8:49 am Post subject:
Super User
Joined: 09 Sep 2003
Posts: 3211
Location: Troyes France
Sorry, I don't understand the semantic of your formula :
{=sum(if(year(a1:a5)=1996;1;0))}
"if(year(a1:a5)=1996;1;0)" returns for me a one if all years are 1996 and 0 otherwise. I am not surprised by this result !
and then I don't see why a sum after this operation which can only return 0 or 1 !
and then I don't understand how we can expect values and not only one value !
Can you give an example of what Excel is doing with this formula ?
_________________
Linux & Windows OOo3.0
UNO & C++ : WIKI
In French
Guest Posted: Wed Feb 04, 2004 9:25 am Post subject:
This is a cut down version of a true formula which is more complicated. Basically if you picture the following:
1996-01-01 100
1996-02-01 200
1997-01-01 100
I want to sum up all of the values for 1996. Again, this is simplified, the complicated formula makes using database functions difficult.
If the formula was written as:
{=sum(if(year(a1:a3)=1996;b1:b3,0))}
would return 300
Guest Posted: Wed Feb 04, 2004 9:29 am Post subject:
Also if I do use {=sum(if(year(a1:a3)=1996;1;0))} then I would expect the value to return 2, so that I knew I had 2 rows with a year of 1996.
SergeM Posted: Wed Feb 04, 2004 10:13 am Post subject:
Super User
I can't find a solution because I have not enough experience with OOocalc.
The first problem is the semantic of IF which is not what you expect.
=IF(YEAR(A1:A5)=1996) is true only in the case that all the years are equal to 1996...
Have a look perhaps at the function COUNTIF ...
Joined: 09 Sep 2003
Posts: 3211
Location: Troyes France
_________________
Linux & Windows OOo3.0
UNO & C++ : WIKI
http://wiki.services.openoffice.org/wiki/Using_Cpp_with_the_OOo_SDK
In French
SergeM Posted: Wed Feb 04, 2004 10:18 am Post subject:
Super User
Joined: 09 Sep 2003
Posts: 3211
Location: Troyes France
Quote:
If the formula was written as:
{=sum(if(year(a1:a3)=1996;b1:b3,0))}
would return 300
Ok the sum will return only one value and then I don't understand why do you use
instead of
Linux & Windows OOo3.0
UNO & C++ : WIKI
In French
Guest Posted: Wed Feb 04, 2004 10:40 am Post subject:
What should be happening (I shouldn't say should because I don't know what really should be happening, but how Excel is currently working and how my spreadsheets are created to work)
is that the if test will check the year for each row one at a time, for any whose year is equal to 1996, it will grab the matching value in the true array.
The formula you present does not do this since it requires an array formula.
Now I don't know if Excel is working incorrectly or bastardizing the meaning of array formulas, but it is how it works. So for compatibility reasons either Calc needs to work this way
or it needs to convert the formula to one which does fit into Calc's model.
Otherwise, at least for me, I don't really have compatible products and any solution either forces me to pick one or redesign all my worksheets depending on the environment. | {"url":"http://www.oooforum.org/forum/viewtopic.phtml?t=5533&view=previous","timestamp":"2014-04-18T13:13:11Z","content_type":null,"content_length":"35729","record_id":"<urn:uuid:587d1ec4-d722-47cf-a812-5b033cff7e8f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
when is Aut(G) abelian
up vote 25 down vote favorite
Let $G$ be a group such that $Aut(G)$ is abelian. Is then $G$ abelian?
This is a sort of generalization of the well-known exercise that $G$ is abelian when $Aut(G)$ is cyclic. But I have no idea. At least the finitely generated abelian groups $G$ such that $Aut(G)$ is abelian can be classified.
5 Answers
From MathReviews:
MR0367059 (51 #3301) Jonah, D.; Konvisser, M. Some non-abelian $p$-groups with abelian automorphism groups. Arch. Math. (Basel) 26 (1975), 131--133.
This paper exhibits, for each prime $p$, $p+1$ nonisomorphic groups of order $p^8$ with elementary abelian automorphism group of order $p^{16}$. All of these groups have elementary
abelian and isomorphic commutator subgroups and commutator quotient groups, and they are nilpotent of class two. All their automorphisms are central. With the methods of the reviewer
and Liebeck one could also construct other such groups, but the orders would be much larger.
FYI, I found this via a google search.
The first to construct such a group (of order $64 = 2^6$) was G.A. Miller* in 1913. If you know something about this early American group theorist (he studied groups of order 2, then
groups of order 3, then...and he was good at it, and wrote hundreds of papers!), this is not so surprising. I found a nice treatment of "Miller groups" in Section 8 of
(*): The wikipedia page seems a little harsh. As the present example shows, he was a very clever guy.
thanks :) . – Martin Brandenburg Dec 28 '09 at 11:23
The answer is No. (Pete beat me to it.)
The earliest example seems to be in a 1913 paper of G.A. Miller's, A non-abelian group whose group of isomorphisms is abelian, Messenger of Math. 48 (1913) 124--125.
Well, I'm voting for you, anyway. For such questions I admit to following the advice of SO immortal Jon Skeet: submit a brief version first and then edit to add the desired
frills. – Pete L. Clark Dec 28 '09 at 11:29
That's very gracious of you -- thanks! – José Figueroa-O'Farrill Dec 28 '09 at 14:38
Two additional remarks:
1. Any group whose automorphism group is abelian must have nilpotency class at most two, because the inner automorphism group, being a subgroup of the automorphism group, is abelian.
2. For finite groups, being abelian and the automorphism group being abelian as well implies cyclic. In the infinite case, there are locally cyclic groups that are not cyclic, and these
have abelian automorphism groups. For instance, the additive group of rational numbers has an abelian automorphism group (the multiplicative group of rational numbers).
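A short justification of the last example, since it is easy to check: if $\varphi\in\operatorname{Aut}(\mathbb{Q},+)$ and $\varphi(1)=q$, then $b\,\varphi(a/b)=\varphi(a)=a\,\varphi(1)=aq$, so $\varphi(a/b)=q\cdot(a/b)$. Hence every automorphism is multiplication by some $q\in\mathbb{Q}^\times$, giving $\operatorname{Aut}(\mathbb{Q},+)\cong(\mathbb{Q}^\times,\cdot)$, which is abelian.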
Two other references (in addition to Miller and Jonah-Konvisser mentioned above) for examples of 2-groups with abelian automorphism groups:
1. Some nonabelian 2-groups with abelian automorphism groups by Rebecca Roth Struik, Archiv der Mathematik, ISSN 1420-8938 (Online), ISSN 0003-889X (Print), Volume 39, Number 4, Pages 299-302 (1982); MathReviews Number: 0684397
2. Some new non-abelian 2-groups with abelian automorphism groups by Ali-Reza Jamali, Journal of Group Theory, ISSN 14435883 (print), ISSN 14434446 (online), Volume 5, Number 1, Pages 53-57 (2002); MathReviews Number: 1879516
I have more notes on groups whose automorphism group is abelian here and here.
There has been some activity on this topic recently. No example of a non-special finite $p$-group having abelian automorphism group was known until quite recently. A class of such groups
is constructed in "V. K. Jain, M. K. Yadav, On finite p-groups whose automorphisms are all central, Israel J. Math. 189 (2012), 225 - 236." This paper also contains a quick survey of
results on the topic and a big bibliography. Some more, different kinds of examples are available at http://arxiv.org/pdf/1304.1974.pdf
A related one: If $Aut(G)$ is cyclic then $G$ is cyclic.
... which is not true. – Martin Brandenburg May 26 '12 at 6:47
I find this: towson.edu/math/Zassenhaus_2011_Conference/Full_Text_of_Talk/… – Buschi Sergio May 26 '12 at 15:06
value of 'e'
Re: value of 'e'
Do you have a hint?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: value of 'e'
Take the common log of both sides.
Can you finish now?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: value of 'e'
That will help you get the number of digits in 9^9^9. But, gAr has already done that
Re: value of 'e'
Nope, that will get you the front digits.
Re: value of 'e'
Does the answer start with 428... ?
Real Member
Re: value of 'e'
The front digits are 325460436031586856...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: value of 'e'
That is not correct.
Full Member
Re: value of 'e'
Re: value of 'e'
Re: value of 'e'
Is post 30 correct?
Re: value of 'e'
Yes, post 30 is correct. Sorry, I did not see it.
Re: value of 'e'
I think anonymnistefy has taken the log base e; that is why he is getting it wrong
Re: value of 'e'
Yes, you need to take the common log else you can not get x by itself!
Real Member
Re: value of 'e'
Ah, that's true. I forgot Log[x] was ln(x).
Here it is: 428124773175747048036987115931...
Re: value of 'e'
Yes, that is correct. Very good!
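For anyone who wants to reproduce both ends of 9^9^9 directly, a small Python sketch (extra mpmath precision is needed because multiplying by 9^9, about 3.9*10^8, costs roughly nine digits of accuracy):

from mpmath import mp, mpf, log10, floor, power

mp.dps = 60                      # headroom for ~30 correct leading digits
k = 9**9                         # 387420489
t = k * log10(mpf(9))            # common log of 9^9^9
print(int(floor(t)) + 1)         # number of digits: 369693100
print(power(10, t - floor(t)))   # leading digits: 4.2812477317574704803...

print(pow(9, k, 10**10))         # last ten digits, exactly, via modular power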
Re: value of 'e'
Re: value of 'e'
Re: value of 'e'
Okay, will the person below please shed light on the easier method?
Re: value of 'e'
Maxima can do it offhand and so can Alpha!
Re: value of 'e'
Re: value of 'e'
Re: value of 'e'
That does not explain how it is done in Maxima?
Re: value of 'e'
Have you just tried entering it as a floating point number?
Re: value of 'e'
I do not know how to do anything with Maxima at all except to look at the manual and call functions
Re: value of 'e'
Okay, then you have done well enough.
Newton's Method Tutorial
Introduction to Scientific Programming
In this tutorial we will explore Newton's method for finding the roots of equations, as explained in Chapter 14.
We will be using a Newton's method simulator throughout this tutorial. You can start it by clicking on the following button.
Finding Roots
This tutorial explores a numerical method for finding the root of an equation: Newton's method. Newton's method is discussed in Chapter 14 as a way to solve equations in one unknown that cannot be
solved symbolically.
For example, suppose that we would like to solve the simple equation
x^2 = 5
To solve this equation using Newton's method, we first manipulate it algebraically so that one side is zero.
x^2 - 5 = 0
Finding a solution to this equation is then equivalent to finding a root of the function
f(x) = x^2 - 5
This function is plotted in the simulation window.
We next make a guess for the root. In the simulation window, the guess is -5. The point
(guess, f(guess))
is displayed with a pink dot. The coordinates of the dot are displayed at the bottom of the simulation window.
The yellow line is tangent (to the curve whose root we are seeking) at the pink dot. Newton's method relies on the observation that this tangent line will often cross the x-axis at a point closer to
a root than is the guess.
To see Newton's method in action, click on the button labeled "Step". The pink dot will slide down the tangent line until it reaches the x-axis, and it will then move vertically until it reaches the
curve. A new tangent line will be displayed. The new x-coordinate of the pink dot is the new guess to the root of the function. For many functions and for many initial guesses, repeating this process
a few times will yield an excellent approximation to a root.
If you click on Step a few more times, the pink dot will move closer to the point where the curve crosses the x-axis. (You can zoom in by using the mouse to drag a rectangle around the region that
you'd like to enlarge. There is also a "Zoom" menu in the menu bar.) At any point of the simulation, the x-coordinate of the guess will be an approximation to the root, and the y-coordinate of the
guess will be the value of the function at that guess.
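The update the simulator is animating is the classic Newton step x_new = x - f(x)/f'(x); a minimal Python version, using the example function above and the tutorial's starting guess:

def newton(f, df, guess, steps=8):
    for _ in range(steps):
        guess = guess - f(guess) / df(guess)   # slide along the tangent to the x-axis
    return guess

# f(x) = x^2 - 5, starting from the guess of -5:
print(newton(lambda x: x**2 - 5, lambda x: 2*x, -5.0))   # -2.2360679... (-sqrt(5))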
1. You can place the guess by clicking the mouse where you would like the guess to go. Experiment with placing the guess and observing convergence.
2. If you place the guess at -5, how many steps are required until the approximate root is good to three decimal places? (The root near -2 is -2.236067978 to ten digits.) How does this compare to the behavior of the bisection method when the positive guess is at -5 and the negative guess is at 1? (You can switch to the bisection method by using the Method menu.)
3. Use the "Function" menu to display the curve for cos(x). Notice that four different roots are displayed. What guesses lead to which roots? (Be sure to watch the coordinates of the guess. You may
need to zoom out to see where the guess is.)
4. It can be difficult to predict exactly how Newton's method will behave. Use the "Function" menu to display the curve for cos(10x) + 4x. Watch what happens if you start from the guess that is
displayed by default.
5. The function x^2 + 1 has no root. Experiment with how Newton's method behaves with it.
6. Choose the function sin(5x) + x^2 - 3x and place the guess between 1.2 and 2.0 so that the tangent line crosses the x-axis between 1.0 and 2.0. Step through Newton's method several times. What
do you notice? What does this say about finding the roots of this function with Newton's method?
7. Choose the function ln(x^2 - (4/5)x + 1) and zoom out by 5. There are 2 points on the function such that, if you choose your first guess between these two points (excluding the function minimum and its very close proximity), Newton's method will converge, and if you choose your first guess outside these points Newton's method will diverge. What are these two points?
8. Experiment with some of the other functions to get a feel for how Newton's method works and for how many steps it takes for it to come up with a good approximation.
Last modified 19Nov96. | {"url":"http://www.cs.utah.edu/~zachary/isp/applets/Root/Newton.html","timestamp":"2014-04-16T04:12:27Z","content_type":null,"content_length":"5751","record_id":"<urn:uuid:e8b422a7-bd6f-4234-bc59-fbcffe91b1aa>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Word Problem
July 31st 2007, 08:28 PM #1
Junior Member
Jun 2006
Word Problem
You have 10 L of pure juice; you take out x liters of juice and replace them with water. Then you take out x liters again and add water again. The final concentration is 69% juice. Find x.
Lol I feel so stupid lol
Umm, I miss Math. I am rusty now---no practice. Let me start practicing again with your question.
You take away x liters of pure juice, then you put in x liters of water (no juice in it) to make it 10 liters again. So the "juiciness" of the new mixture, based on the quantity of pure juice in the new mixture, is:
(10-x)(100% pure juice) + x(0% pure juice)
= (10-x)(1.00) + x(0) = (10-x) liters of pure juice,
which over 10 liters gives a concentration of (10-x)/10 = 0.1(10-x) <------ "juiciness", as if acidity.
Then you take away x liters of the new mixture then replace that with x liters of water again to go back to 10 liters again. The resulting newer mixture is 69% pure juice. So, basing on the
quantity of pure juice again in the newer mixture,
(10-x)[0.1(10-x)] +x(0) = 10(0.69)
(10-x)^2 = 69
10-x = sqrt(69)
x = 10 -sqrt(69) = 1.6934 liters -----------answer.
We remove $x$ litres, so the concentration of the diluted juice is $\frac{10-x}{10}$, and then we remove another $x$ litres of the mixture which contains $x\frac{10-x}{10}$ litres of pure juice, so in total we have removed:
$x+x\frac{10-x}{10}$
litres of pure juice. So the concentration is now:
$\frac{10-\left( x+x\frac{10-x}{10}\right)}{10}=0.69$
Now multiply through by $100$ and simplify:
$x^2-20x+31=0$
which has roots $x\approx 18.307\ \wedge\ x\approx 1.693$; the first of these is clearly non-physical, so the second is the solution we seek.
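A quick numeric check of both solutions above (sympy used purely for convenience):

from sympy import symbols, solve, sqrt

x = symbols('x')
print(solve(x**2 - 20*x + 31, x))        # [10 - sqrt(69), 10 + sqrt(69)]

x0 = 10 - sqrt(69)                       # ~1.6934 liters
print((((10 - x0) / 10)**2).evalf())     # concentration after two swaps: 0.69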
Introduction to Cryptography with Open-Source Software
• Teaches the key concepts in a practical way via Sage, an open-source algebraic mathematics software
• Includes examples that can be implemented on any modern computer
• Enables students to run their own programs and develop a deep and solid understanding of the mechanics of cryptography
• Takes students through the necessary mathematics gradually, introducing more advanced concepts one chapter at a time
• Provides exercises at the end of every chapter
Once the privilege of a secret few, cryptography is now taught at universities around the world. Introduction to Cryptography with Open-Source Software illustrates algorithms and cryptosystems using
examples and the open-source computer algebra system of Sage. The author, a noted educator in the field, provides a highly practical learning experience by progressing at a gentle pace, keeping
mathematics at a manageable level, and including numerous end-of-chapter exercises.
Focusing on the cryptosystems themselves rather than the means of breaking them, the book first explores when and how the methods of modern cryptography can be used and misused. It then presents
number theory and the algorithms and methods that make up the basis of cryptography today. After a brief review of "classical" cryptography, the book introduces information theory and examines the
public-key cryptosystems of RSA and Rabin’s cryptosystem. Other public-key systems studied include the El Gamal cryptosystem, systems based on knapsack problems, and algorithms for creating digital
signature schemes.
The second half of the text moves on to consider bit-oriented secret-key, or symmetric, systems suitable for encrypting large amounts of data. The author describes block ciphers (including the Data
Encryption Standard), cryptographic hash functions, finite fields, the Advanced Encryption Standard, cryptosystems based on elliptical curves, random number generation, and stream ciphers. The book
concludes with a look at examples and applications of modern cryptographic systems, such as multi-party computation, zero-knowledge proofs, oblivious transfer, and voting protocols.
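As a taste of that hands-on style (the book itself works in Sage; the sketch below is plain Python with toy-sized primes, far too small for any real security):

# Toy RSA with textbook-sized numbers -- illustration only
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

m = 42                       # the "message"
c = pow(m, e, n)             # encrypt
print(pow(c, d, n))          # decrypt -> 42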
Table of Contents
Introduction to Cryptography
Hiding information: confidentiality
Some basic definitions
Attacks on a cryptosystem
Some cryptographic problems
Cryptographic protocols
Some simple ciphers
Cryptography and computer security
Basic Number Theory
Some basic definitions
Some number theoretic calculations
Primality testing
Classical Cryptosystems
The Caesar cipher
Translation ciphers
Transposition ciphers
The Vigenère cipher
The one-time pad
Permutation ciphers
Matrix ciphers
Introduction to Information Theory
Entropy and uncertainty
Perfect secrecy
Estimating the entropy of English
Unicity distance
Public-Key Cryptosystems Based on Factoring
The RSA cryptosystem
Attacks against RSA
RSA in Sage
Rabin’s cryptosystem
Rabin’s cryptosystem in Sage
Some notes on security
Public-Key Cryptosystems Based on Logarithms and Knapsacks
El Gamal’s cryptosystem
El Gamal in Sage
Computing discrete logarithms
Diffie-Hellman key exchange
Knapsack cryptosystems
Breaking the knapsack
Digital Signatures
RSA signature scheme
Rabin digital signatures
The El Gamal digital signature scheme
The Digital Signature Standard
Block Ciphers and the Data Encryption Standard
Block ciphers
Some definitions
Substitution/permutation ciphers
Modes of encryption
Exploring modes of encryption
The Data Encryption Standard (DES)
Feistel ciphers
Simplified DES: sDES
The DES algorithm
Security of S-boxes
Security of DES
Using DES
Experimenting with DES
Lightweight ciphers
Finite Fields
Groups and rings
Introduction to fields
Fundamental algebra of finite fields
Polynomials mod 2
A field of order 8
Other fields GF(2n)
Multiplication and inversion
Multiplication without power tables
The Advanced Encryption Standard
Introduction and some history
Basic structure
The layers in detail
Experimenting with AES
A simplified Rijndael
Security of the AES
Hash Functions
Uses of hash functions
Security of hash functions
Constructing a hash function
Provably secure hash functions
New hash functions
Message authentication codes
Using a MAC
Elliptic Curves and Cryptosystems
Basic definitions
The group on an elliptic curve
Background and history
Elliptic curve cryptosystems
Elliptic curve signature schemes
Elliptic curves over binary fields
Pairing based cryptography
Exploring pairings in Sage
Random Numbers and Stream Ciphers
Pseudo-random number generators
Some cryptographically strong generators
The shrinking generator
ISAAC and Fortuna
Stream ciphers
The Blum-Goldwasser cryptosystem
Advanced Applications and Protocols
Secure multi-party computation
Zero knowledge proofs
Oblivious transfer
Digital cash
Voting protocols
Appendix A: Introduction to Sage
Appendix B: Advanced Computational Number Theory
Exercises appear at the end of each chapter.
Author Bio(s)
Alasdair McAndrew is a senior lecturer in the School of Engineering and Science at Victoria University in Melbourne, Australia.
Editorial Reviews
"This very well-written book is recommended to graduate or final-year undergraduate students intended to start research work on both theoretical and experimental cryptography. Most of the
cryptographic protocols are illustrated by various examples and implemented using the open-source algebra software Sage. The book provides a rigorous introduction to the mathematics used in
cryptographic and covers almost all modern practical cryptosystems. Also, the book is certainly a valuable resource for practitioners looking for experimental cryptography with a computer algebra
—Abderrahmane Nitaj (LMNO, Université de Caen Basse Normandie), IACR book reviews, February 2014
"It would make a great first course in cryptography but it is also easy enough to read to make it suitable for solitary study. … Overall this is an excellent book. It is far from the theorem-proof
format and it does try to explain the ideas and motivate the reader. The pattern of mixing some theory followed by some practice is good at keeping the less theory-minded reader rolling along as the
need for the theory becomes all too apparent. … this is a really good book. If you want to master cryptography, this is a great place to start."
—Mike James, IProgrammer, August 2011 | {"url":"http://www.crcpress.com/product/isbn/9781439825709","timestamp":"2014-04-19T17:33:47Z","content_type":null,"content_length":"106388","record_id":"<urn:uuid:ec8f5bf7-334b-4df0-8078-5fc19fff53b1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Avondale, AZ Trigonometry Tutor
Find an Avondale, AZ Trigonometry Tutor
...In my classes, I have many students on the Autism spectrum, including students with Asperger's syndrome, and have great success helping them achieve their educational, social, and behavioral
goals. I am a certified Cross-Categorical Special Education teacher in the state of Arizona which means I...
40 Subjects: including trigonometry, English, reading, writing
...The basics is the understanding of matrices and the Gauss-Jordan Method. Later you get into inverses, proofs of a vector space(zero, scalar, addition), eigen values, dot product, and much more.
Differential equations and Mathematical structures are good prerequisites to take before starting Linear Algebra.
21 Subjects: including trigonometry, chemistry, calculus, physics
...Currently I am tutoring GED classes. I enjoy the challenge of helping people to master mathematics. One of my fellow GED teachers suggested that I post my profile on WyzAnt's website.
15 Subjects: including trigonometry, calculus, geometry, algebra 2
...I achieved a perfect score on the ACT, SAT, & GRE Math tests, but ONLY AFTER I overcame the challenging hurdle of mastering algebra. Contact me for steady help or "crisis intervention"! I
graduated with a 4.0 GPA in Chemical Engineering (undergrad) and Chemistry/Biochemistry (graduate school), so I can help you with General, Physical, Biochemistry, Biophysical, and Inorganic
20 Subjects: including trigonometry, chemistry, calculus, physics
...As my educational background shows, I have both a bachelors and a masters in geology. I am also currently teaching GLG 101 and GLG 103 at Mesa Community College. This is one of favorite
subjects to tutor as the majority of students need help in the same course that I'm teaching.
28 Subjects: including trigonometry, English, chemistry, writing | {"url":"http://www.purplemath.com/Avondale_AZ_trigonometry_tutors.php","timestamp":"2014-04-20T16:23:37Z","content_type":null,"content_length":"24272","record_id":"<urn:uuid:d9d76f65-e702-4ece-86cf-115cdc55d823>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Work and Power
Problem :
A 10 kg object experiences a horizontal force which causes it to accelerate at 5 m/s^2 , moving it a distance of 20 m, horizontally. How much work is done by the force?
The magnitude of the force is given by F = ma = (10)(5) = 50 N. It acts over a distance of 20 m, in the same direction as the displacement of the object, implying that the total work done by the
force is given by W = Fx = (50)(20) = 1000 Joules.
Problem :
A ball is connected to a rope and swung around in uniform circular motion. The tension in the rope is measured at 10 N and the radius of the circle is 1 m. How much work is done in one revolution
around the circle?
Recall from our study of uniform circular motion that centripetal force is always directed radially, or toward the center of the circle. Also, of course, the displacement at any given time is always tangential, or directed tangent to the circle:
Figure: Work in Uniform Circular Motion
Clearly the force and the displacement will be perpendicular at all times. Thus the cosine of the angle between them is 0. Since W = Fx cosθ, no work is done on the ball.
Problem :
A crate is moved across a frictionless floor by a rope that is inclined 30 degrees above horizontal. The tension in the rope is 50 N. How much work is done in moving the crate 10 meters?
In this problem a force is exerted which is not parallel to the displacement of the crate. Thus we use the equation W = Fx cosθ . Thus
W = Fx cosθ = (50)(10)(cos 30) = 433 J
Problem :
A 10 kg weight is suspended in the air by a strong cable. How much work is done, per unit time, in suspending the weight?
The weight, and thus the point of application of the force, does not move. Thus, though a force is applied, no work is done on the system.
Problem :
A 5 kg block is moved up a 30 degree incline by a force of 50 N, parallel to the incline. The coefficient of kinetic friction between the block and the incline is .25. How much work is done by the 50
N force in moving the block a distance of 10 meters? What is the total work done on the block over the same distance?
Finding the work done by the 50 N force is quite simple. Since it is applied parallel to the incline, the work done is simply W = Fx = (50)(10) = 500 J.
Finding the total work done on the block is more complex. The first step is to find the net force acting upon the block. To do so we draw a free body diagram:
Because of its weight, mg, the block experiences a force down the incline of magnitude mg sin 30 = (5)(9.8)(.5) = 24.5 N. In addition, a frictional force is felt opposing the motion, and thus down the incline. Its magnitude is given by F_k = μF_N = (.25)(mg cos 30) = 10.6 N. Also, the normal force and the component of the gravitational force that is perpendicular to the incline cancel exactly. Thus the net force acting on the block is:
50 N − 24.5 N − 10.6 N = 14.9 N,
directed up the incline. It is this net force that exerts a "net work" on the block. Thus the work done on the block is
W = Fx = (14.9)(10) = 149 J
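A quick numeric re-check of this last problem (g = 9.8 m/s^2, as above):

import math

m, g, theta = 5.0, 9.8, math.radians(30)
F_applied, mu, d = 50.0, 0.25, 10.0

F_gravity = m * g * math.sin(theta)            # 24.5 N down the incline
F_friction = mu * m * g * math.cos(theta)      # ~10.6 N down the incline
F_net = F_applied - F_gravity - F_friction     # ~14.9 N up the incline
print(F_net, F_net * d)                        # ~14.9 N and ~149 J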
[Numpy-discussion] Dyadic Python
Rob rob at pythonemproject.com
Sat Oct 12 15:16:03 CDT 2002
As a help for me learning Dyadic vector analysis, I have been working on
a Numpy class for manipulating Dyads. I am wondering if anyone else has
any similar interests. Most of the impetus for the work comes from
"Methods for Electromagnetic Field Analysis" by Ismo Lindell. In
numerous searches I found no computer code dealing with Dyads.
My biggest stumbling block so far is integrating the symbolism of Dyadic
analysis into something understandable. Operator overloading isn't
going to help me. For example in my program:
A=dyad(a,b) where a and b are complex vectors
B=dyad(c,d) "
A.dmmd(B) is equivalent to double dyadic multiplication of the dyads A
and B. :) In a book this would be written something like A xx B
except that the x's would be aligned vertically.
Ok, its an insane project. Rob.
The Numeric Python EM Project
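For anyone wanting to experiment along these lines, here is one minimal NumPy sketch of such a class; note that the double-dot convention varies between authors (this one contracts as (ab):(cd) = (a.d)(b.c)), so it should be checked against Lindell's definition before being trusted:

import numpy as np

class Dyad:
    """Dyad ab built from two (possibly complex) 3-vectors a and b."""
    def __init__(self, a, b):
        self.m = np.outer(a, b)        # 3x3 matrix representation of the dyad

    def dmmd(self, other):
        # double dyadic product: contract both index pairs of the two dyads
        return np.sum(self.m * other.m.T)

a, b = np.array([1, 0, 0], dtype=complex), np.array([0, 1, 0], dtype=complex)
c, d = np.array([0, 1, 0], dtype=complex), np.array([1, 0, 0], dtype=complex)
A, B = Dyad(a, b), Dyad(c, d)
print(A.dmmd(B))                       # (a.d)(b.c) = 1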
Biharmonic function with a constant modulus
A bi-harmonic function $u:U\to C$, where $U$ is an open subset of the complex plane $C$, is a solution of the equation $\Delta^2u=0$. Can a nonconstant bi-harmonic mapping have a constant modulus on an open set?
Example: The mapping $f(x)=x/|x|$ is a bi-harmonic mapping of the space $R^3$, so the answer to the above question for the space is YES.
1 Answer
It seems the answer is Yes as can be seen by the example $f(z)=z/\bar z$.
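A quick verification in polar coordinates (not spelled out in the answer): writing $z=re^{i\theta}$ gives $f=e^{2i\theta}$, so
$$\Delta f=\frac{1}{r^2}\partial_\theta^2 f=-\frac{4e^{2i\theta}}{r^2},\qquad \Delta\left(-\frac{4e^{2i\theta}}{r^2}\right)=\left(-\frac{24}{r^4}+\frac{8}{r^4}+\frac{16}{r^4}\right)e^{2i\theta}=0,$$
hence $\Delta^2 f=0$ on $\mathbb{C}\setminus\{0\}$ while $|f|\equiv 1$.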
Physics! Please help
Posted by Jessi on Thursday, June 7, 2012 at 1:19pm.
A 1320-N uniform beam is attached to a vertical wall at one end and is supported by a cable at the other end. A 1960-N crate hangs from the far end of the beam. Using the data shown in the figure,
find (a) the magnitude of the tension in the wire and the magnitudes of the (b) horizontal and (c) vertical components of the force that the wall exerts on the left end of the beam.
From the wall to the beam creates a 50 degree angle. From the beam to the crate is a 30 degree angle.
• Physics! Please help - Elena, Thursday, June 7, 2012 at 6:05pm
The drawing shows the beam and the five forces that act on it: the horizontal and vertical components S(x) and S(y) that the wall exerts on the left end of the beam, the weight W(b) of the beam, the force due to the weight W(c) of the crate, and the tension T in the cable. The beam is uniform, so its center of gravity is at the center of the beam, which is where its weight can be assumed to act. Since the beam is in equilibrium, the sum of the torques about any axis of rotation must be zero (Σ τ = 0), and the sum of the forces in the horizontal and vertical directions must be zero (Σ F(x) = 0, Σ F(y) = 0). These three conditions will allow us to determine the magnitudes of S(x), S(y), and T.
We will begin by taking the axis of rotation to be at the left end of the beam. Then the torques produced by S(x) and S(y) are zero, since their lever arms are zero. When we set the sum of the torques equal to zero, the resulting equation will have only one unknown, T, in it. Setting the sum of the torques produced by the three forces equal to zero gives (with L equal to the length of the beam)
Σ τ = − W(b)•{0.5•L•cos30°} − W(c)•{L•cos30°} + T•{L•sin80°} = 0.
Algebraically eliminating L from this equation and solving for T gives
T = [W(b)•0.5•cos30° + W(c)•cos30°]/sin80° = {1320•0.5•cos30° + 1960•cos30°}/sin80° = 2883 N.
Since the beam is in equilibrium, the sum of the forces in the vertical direction must be zero:
Σ F(y) = + S(y) − W(b) − W(c) + T•sin50° = 0.
Solving for S(y) gives
S(y) = W(b) + W(c) − T•sin50° = 1320 + 1960 − 2883•sin50° = 1071 N.
The sum of the forces in the horizontal direction must also be zero:
Σ F(x) = + S(x) − T•cos50° = 0,
so that
S(x) = T•cos50° = 2883•cos50° = 1853 N.
Need help with an exercise.
So I got an exam now, and I am not ready for it, so I'm counting on you guys!
N people came to a conference. To transport them from the hotel to the conference a number of cars is given; their capacity (not including the driver) is K or M people. Cars at the hotel go this way: first a car with capacity K (people), after that comes a car with capacity M, then again K, and M, and so on. Each car can transport only as many people as given in the exercise. We need to make a program that computes how many cars are needed to transport all the people.
P.S. Sorry for the English translation, the exercise is in Latvian.
The user inserts 3 natural numbers - the values N, K and M. It's known that 0<N<2147483648, 0<K<100, 0<M<100.
I'm really counting on your help guys, thanks for your time.
#include <iostream>
using namespace std;

int main()
{
    long long N;   // 0 < N < 2147483648, so use long long to be safe
    int K, M;
    cout << "Enter how many people came to the conference: ";
    cin >> N;
    cout << "Enter how many people fit in the first (K) car: ";
    cin >> K;
    cout << "Enter how many people fit in the second (M) car: ";
    cin >> M;

    long long pairs = N / (K + M);    // complete K-car + M-car pairs
    long long left = N % (K + M);     // people remaining after those pairs
    long long cars = 2 * pairs;
    if (left > 0)
        cars += (left <= K) ? 1 : 2;  // one extra K car, or a K car plus an M car
    cout << "Cars needed: " << cars << endl;
    return 0;
}
got so far :/
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/81391/","timestamp":"2014-04-19T22:13:41Z","content_type":null,"content_length":"8918","record_id":"<urn:uuid:83602025-9122-45fd-9793-20485ea1e75b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
Controlling pressure with or without through-flow
Figure 1. With no through-flow, there is one pure integrator in the loop, making it a Type 1 control system
Figure 2. This simplified schematic shows the through-flow path as the dashed line through the load. An analytical schematic would be the same, even if the through path was connected at the valve
work port. The important issue is whether flow occurs through the control valve in steady state.
An example of a pressure control system that has through-flow is the SAE test method for hydraulic hose. This test requires pressure to vary between maximum and minimum limits according to a
prescribed pressure profile and within a specified temperature range. The easiest way to maintain temperature in test specimens is by providing a flow path (an orifice) downstream of the test hose.
This allows fluid to change continuously.
In this and similar cases, the control valve not only must supply the volume of fluid needed to fill and compress the volume, it must also supply any amount of steady-state through-flow that results
at the controlled pressure level. A complication is that the through-flow probably varies as the controlled pressure level varies. Therefore, the valve must be open, even when the controlled pressure
is held constant at the target value. In addition, some error must exist between the command and feedback. The error is what holds the valve open to supply the through-flow.
Here is the explanation: If the valve is open, the current must be non-zero. If the current is nonzero, the input to the amplifier must be non-zero as well, leading to the conclusion that the error
is not zero. If the error is not zero, then it immediately follows that the command and feedback values must differ. The conclusion? A steady-state error in the system is just sufficient to hold the
valve open enough to provide the through-flow at the output pressure level, but the command and feedback signals are not the same. This reality places a pressure-dependent flow into the circuit,
which creates feedback around the integrating volume that, in turn, eliminates its pure integration function. The integrator is still there. However, the feedback around it converts the mathematical
process from one of integration to a nonlinear first order time constant.
Type 0 and Type 1 regulators
To illustrate, the circuit in Figure 1 has no through-flow. Therefore, the controlled volume acts as a pure integrator, leading to the conclusion that the command and feedback are equal when the
output pressure reaches the commanded level.
The circuit in Figure 2, with its steady-state through-flow, necessarily has valve opening and current-and error between command and feedback. This is a classic Type 0 regulator. As such, it
functions with steady-state error.
The presence of the steady-state error in the Type 0 system often is objectionable because it is more difficult to analyze than a Type 1, zero error system, which can be analyzed "off the top of your
head." It is not so easy to analyze the Type 0 system. In the Type 1 system, the command and feedback are, ideally, equal to each other, and the calibration thinking is simple. If you enter 5 V of
command, for example, then in steady state, you expect the feedback to be 5 V as well. Further, if you calibrate the feedback transducer for, say, 1 V/1000 psi, then it is simple arithmetic to conclude that a command of 5 V will result in an output pressure of 5000 psi.
But with the through-flow of the system under discussion at the moment, an error must exist between the command and feedback in order to sustain the through-flow. In such a situation, the command
will always be greater than the feedback signal, and the calibration factor for the pressure transducer will not be the same as the calibration for the feedback control system.
Take the above data for instance, calibrated for 1000 psi/V. If an error of 0.5 V is needed to hold the valve open enough to maintain through-flow, then a command of 5.5 V is needed to get a
controlled output pressure of 5000 psi. Note, then, the balance at the input to the amplifier: 5.5 V of command, minus 5.0 V of feedback, equals 0.5 V of error. Recall that this is the amount of
error needed to hold the valve open enough to sustain the through-flow. In the Type 1 system, it is only necessary to know the calibration on the feedback transducer. But in this, the Type 0 system,
we need to know the gain of the forward branch as well. You may get the idea that if 0.5 V of error is needed to hold the valve open the right amount, then why not just use an op-amp circuit function (called an offset) to add in 0.5 V? Voilà, you'd have the required voltage for through-flow purposes, and the command would only need to be 5.0 V. On the surface, this appears to be a simple way to get the calibration of 1 V/1000 psi. But there is a flaw in this logic: the 0.5 V of offset is correct only when the commanded target pressure is 5000 psi. If the target pressure is
different, so is the amount of offset.
The reason this is true can be seen by looking at the hydraulic circuit and answering the question, "How far must the control valve be open under a variety of target pressure levels?" First, note
that the through-flow results from some unspecified, but real, orifice offering a continuous flow path through the circuit. Because the opening is an orifice, the flow through it will depend upon the pressure drop across it.
Therefore, as the target pressure changes (hopefully, the output pressure changes along with the changing command), the through-flow must change. More specifically, the through-flow will increase
with increased commanded pressure. The only conclusion we can draw is that because the through-flow is higher, the control valve will have to open farther at the higher pressures than at the lower
pressures. If the valve must open farther, then the current into the valve must be greater, requiring a greater error.
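A back-of-the-envelope model of that dependence (the gains below are made-up, illustrative numbers, and the orifice follows the usual square-root law):

import math

K_amp = 200.0        # amplifier gain, mA of coil current per volt of error (illustrative)
K_valve = 0.05       # valve gain, flow units per mA (illustrative)
K_orifice = 0.071    # orifice coefficient, flow units per sqrt(psi) (illustrative)
K_fb = 1.0 / 1000.0  # feedback transducer, 1 V per 1000 psi

for p in (1000.0, 3000.0, 5000.0):
    q = K_orifice * math.sqrt(p)       # steady-state through-flow at pressure p
    error = q / (K_amp * K_valve)      # error voltage needed to hold the valve open
    command = p * K_fb + error         # command that actually produces p
    print(p, round(error, 3), round(command, 3))

With these numbers the required error grows from about 0.22 V at 1000 psi to about 0.50 V at 5000 psi, which is exactly why a single fixed offset cannot restore the simple 1 V/1000 psi calibration.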
Details on this subject, including formula derivations, are available in Designer's Handbook for Electrohydraulic Servo and Proportional Systems. For more information, contact IDAS Engineering Inc.,
at (262) 642-7021; fax (262) 642-7025; or www.idaseng.com. The book is also available from Barnes & Noble.
The website describes testing, research, and design facilities and services, instructional videos, books, and software, all geared toward electrohydraulic system design. It also contains complete,
unedited versions of this and previous "Motion Control" columns. | {"url":"http://hydraulicspneumatics.com/200/TechZone/HydraulicValves/Article/False/9780/TechZone-HydraulicValves","timestamp":"2014-04-18T01:35:38Z","content_type":null,"content_length":"75001","record_id":"<urn:uuid:e6720560-4bfe-4f65-9fad-7f04f65feed1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computational and Mathematical Methods in Medicine
Volume 2013 (2013), Article ID 902143, 8 pages
Research Article
Improving Spatial Adaptivity of Nonlocal Means in Low-Dosed CT Imaging Using Pointwise Fractal Dimension
^1College of Computer Science, Sichuan University, No. 29 Jiuyanqiao Wangjiang Road, Chengdu 610064, Sichuan, China
^2School of Computer Science, Sichuan Normal University, No. 1819 Section 2 of Chenglong Road, Chengdu 610101, Sichuan, China
^3School of Automation Engineering, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu 611731, Sichuan, China
^4School of Information Science and Technology, East China Normal University, No. 500, Dong-Chuan Road, Shanghai 200241, China
Received 25 January 2013; Accepted 6 March 2013
Academic Editor: Shengyong Chen
Copyright © 2013 Xiuqing Zheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
NLMs is a state-of-the-art image denoising method; however, it sometimes oversmooths anatomical features in low-dose CT (LDCT) imaging. In this paper, we propose a simple way to improve the spatial adaptivity (SA) of NLMs using the pointwise fractal dimension (PWFD). Unlike existing fractal image dimensions that are computed on whole images or blocks of images, the new PWFD, named the pointwise box-counting dimension (PWBCD), is computed for each image pixel. PWBCD uses a fixed-size local window centered at the considered image pixel to fit the different local structures of images. Then, based on PWBCD, a new method that uses PWBCD to improve the SA of NLMs directly is proposed. That is, PWBCD is combined with the weight of the difference between local comparison windows for NLMs. Smoothing results for test images and real sinograms show that PWBCD-NLMs with well-chosen parameters can preserve anatomical features better while suppressing noise efficiently. In addition, PWBCD-NLMs also performs better in both visual quality and peak signal to noise ratio (PSNR) than NLMs in LDCT imaging.
1. Introduction
Radiation exposure and associated risk of cancer for patients from CT examination have been increasing concerns in recent years. Thus minimizing the radiation exposure to patients has been one of the
major efforts in modern clinical X-ray CT radiology [1–8]. However, the presence of serious noise and many artifacts degrades the quality of low-dose CT images dramatically and decreases diagnostic accuracy. Although many strategies have been proposed to reduce the noise and artifacts [9–14], filtering noise from clinical scans is still a challenging task, since these scans contain artifacts and consist of many structures with different shape, size, and contrast, which should be preserved for making a correct diagnosis.
Recently, nonlocal means (NLMs) was proposed for improving the performance of classical adaptive denoising methods [15–17], and it shows good performance even in low-dose CT (LDCT) imaging [18–20].
There are two novel ideas behind NLMs. One is that similar points should be found by comparing the difference between their local neighborhoods instead of by comparing their gray levels directly. Since the gray levels of LDCT images are polluted seriously by noise and artifacts, finding similar points by local neighborhoods instead of by gray levels directly helps NLMs find correct similar points. The other important idea of NLMs is that similar points should be searched in large windows to guarantee the reliability of the estimation.
Following the previous discussion, NLMs denoising should be performed with two windows: one is the comparison patch and the other is the searching window. The sizes of these two windows and the standard deviation of the Gaussian kernel, which is used for computing the distance between two neighborhoods, should be determined according to the standard deviation of the noise [15–17], and these three parameters are identical across an image.
Some researchers find that identical sizes of the two windows and an identical Gaussian kernel across an image are not the best choice for image denoising [21–25]. The most direct motivation is that the parameters should be modified according to the different local structures of images. For example, the parameters near an edge should be different from the parameters in a large smooth region.
An important work to improve the performance of NLMs is quasi-local means (QLMs) proposed by us [21, 22]. We argue that nonlocal searching windows are not necessary for most of image pixels. In fact,
for points in smooth regions, which are the majority of image pixels, local searching windows are big enough, while for points near singularities, only the minority of image pixels, nonlocal search
windows are necessary. Thus the method is named quasi-local where it is local for most image pixels and nonlocal only for pixels near singularities. The searching windows for quasi-local means
(QLMs) are variable for different local structures, and QLMs can get better singularity preservation in image denoising than classical NLMs.
Other important works on improving the spatial adaptivity of NLMs have been proposed very recently [23–25]. The starting point for these works is that the image pixels are partitioned into different groups using supervised learning or semisupervised learning and clustering. However, the learning and clustering waste a lot of computation time and resources, which hampers their application in medical imaging. Thus we must propose a new method that improves the spatial adaptivity in a simpler way.
In this paper we propose a simple and powerful method to improve the spatial adaptivity of NLMs in LDCT imaging using the pointwise fractal dimension (PWFD), where PWFD is computed pixel by pixel in a fixed-size window centered at the considered pixel. According to the new definition of PWFD, different local structures have different local fractal dimensions; for example, pixels near edge regions have relatively big PWFDs, while PWFDs of pixels in smooth regions are zero. Thus PWFD can provide local structure information for image denoising. Having defined PWFD, which can fit different local structures of images well, we design a new weight function by combining the PWFD difference between two considered pixels with the weight of original NLMs measured by the gray level difference between two comparison windows. Using this new weight function, the proposed method not only preserves the gray level adaptivity of NLMs but also improves its SA.
The arrangement of this paper is as follows: In Section 2, the backgrounds are introduced, then the new proposed method is presented in Section 3, the experiment results are shown and discussed in
Section 4, and the final part is the conclusions and acknowledgment.
2. Backgrounds
In this section, we will introduce related backgrounds of the proposed method.
2.1. Noise Models
Based on repeated phantom experiments, low-mA (or low-dose) CT calibrated projection data after logarithm transform were found to follow approximately a Gaussian distribution with an analytical
formula between the sample mean and sample variance; that is, the noise is a signal-dependent Gaussian distribution [11].
The photon noise is due to the limited number of photons collected by the detector. For a given attenuating path in the imaged subject, $N_0$ and $N_i$ denote the incident and the penetrated photon numbers, respectively. Here, $i$ denotes the index of detector channel or bin and $\varphi$ is the index of projection angle. In the presence of noise, the sinogram should be considered as a random process, and the attenuating path is given by
$p_i = \ln(N_0/N_i),$
where $N_0$ is a constant and $N_i$ follows a Poisson distribution with mean $\lambda_i$.
Thus we have
$N_i \sim \mathrm{Poisson}(\lambda_i).$
Both its mean value and variance are $\lambda_i$.
Gaussian distributions of polyenergetic systems were assumed based on the central limit theorem for high-flux levels and confirmed by many repeated experiments in [11]. We have
$\sigma_i^2 = f_i \exp(\bar{p}_i/\gamma),$
where $\bar{p}_i$ is the mean and $\sigma_i^2$ is the variance of the projection data at detector channel or bin $i$, $\gamma$ is a scaling parameter, and $f_i$ is a parameter adaptive to different detector bins.
The most common conclusion for the relation between the Poisson distribution and the Gaussian distribution is that the photon count will obey a Gaussian distribution in the case of large incident intensity and a Poisson distribution at feeble intensity [11].
2.2. Nonlocal Means (NLMs)
Given a discrete noisy image $v = \{v(i)\}$, the estimated value $NL(v)(i)$, for a pixel $i$, is computed as a weighted nonlocal average:
$NL(v)(i) = \sum_{j \in S_i} w(i,j)\,v(j),$
where $S_i$ indicates a neighborhood centered at $i$, called the searching window. The family of weights $\{w(i,j)\}_j$ depends on the similarity between the pixels $i$ and $j$ and satisfies $0 \le w(i,j) \le 1$ and $\sum_j w(i,j) = 1$.
The similarity between two pixels $i$ and $j$ depends on the similarity of the intensity gray level vectors $v(N_i)$ and $v(N_j)$, where $N_k$ denotes a square window with fixed size centered at a pixel $k$, named the comparison patch:
$d(i,j) = \|v(N_i) - v(N_j)\|_{2,a}^2,$
and the weights are computed as
$w(i,j) = \frac{1}{Z(i)} \exp\left(-\frac{\|v(N_i)-v(N_j)\|_{2,a}^2}{h^2}\right), \qquad Z(i) = \sum_j \exp\left(-\frac{\|v(N_i)-v(N_j)\|_{2,a}^2}{h^2}\right),$
where $\sigma$ denotes the standard deviation of the noise and $h$ is the filtering parameter set depending on the value of $\sigma$.
2.3. Box-Counting Dimension
Box-counting dimension, also known as Minkowski dimension or Minkowski-Bouligand dimension, is a way of determining the fractal dimension of a set $S$ in a Euclidean space, or more generally in a metric space. To calculate this dimension for a fractal $S$, put the fractal on an evenly spaced grid and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm.
Suppose that $N(\varepsilon)$ is the number of boxes of side length $\varepsilon$ required to cover the set. Then the box-counting dimension is defined as
$\dim_{\mathrm{box}}(S) = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}.$
Given an image whose maximal gray level is $G$, the image is partitioned into $r \times r$ grids, each associated with a column of cubic boxes. If for the $(u,v)$th grid the greatest gray level falls in the $l$th box and the smallest in the $k$th box, then the box number for covering that grid is
$n_r(u,v) = l - k + 1.$
Therefore the box number for covering the whole image is
$N_r = \sum_{u,v} n_r(u,v).$
Selecting different scales $r$, we get the related $N_r$; thus we have a group of pairs $(\log(1/r), \log N_r)$. The group can be fitted with a line using least-squares fitting; the slope of the line is the box-counting dimension.
3. The New Method
In this section, we will present our new proposed algorithm in detail. The motivation for the proposed method is that SA of NLMs should be improved in a simpler way. The new PWFD is introduced
firstly to adapt complex image local structures, and then the new weight functions based on PWFD are discussed. At the end of this section, the procedures of the proposed method are shown.
3.1. Pointwise Box-Counting Dimension
In image processing, the fractal dimension is usually used for characterizing the roughness and self-similarity of images. However, most works only focus on how to compute fractal dimensions for whole images or blocks of images [26–30]. Since the fractal dimension can characterize roughness and self-similarity, it can also be used for characterizing the local structures of images by generalizing it to PWFD, which is computed pixel by pixel using a fixed-size window centered at the considered pixel. Thus, each pixel in an image has a PWFD, and it equals the fractal dimension of the fixed-size window centered at the considered pixel.
Following the previous discussion, the pointwise box-counting dimension (PWBCD) starts from replacing each pixel $i$ with a fixed-size window $W_i$ centered at $i$. It is obvious that PWFD can be generalized to all definitions of fractal dimensions. However, in order to make our explanation clearer, we only work the new definition out for PWBCD.
According to the new PWFD, the PWBCD is computed for each pixel in the image, within a fixed-size window centered at that pixel. The window is partitioned into $s \times s$ grids, each associated with a column of cubic boxes. If, for the $(i,j)$th grid, the greatest gray level falls in the $l$th box and the smallest in the $k$th box, then the box number for covering that grid is $n_r(i,j) = l - k + 1$, and the box number for covering the whole window is $N_r = \sum_{i,j} n_r(i,j)$. Selecting different scales $r$, we get the related $N_r$ and a group of pairs $(\log(1/r), \log N_r)$; the group can be fit with a line using least-squares fitting, and the slope of the line is the box-counting dimension.
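Reusing the sketch above, the pointwise version is just a sliding-window loop (the window radius is again an illustrative choice):

```r
pwbcd <- function(img, radius = 8, scales = c(2, 4, 8)) {
  n <- nrow(img); m <- ncol(img)
  D <- matrix(NA_real_, n, m)
  for (i in (1 + radius):(n - radius)) for (j in (1 + radius):(m - radius)) {
    win <- img[(i - radius):(i + radius), (j - radius):(j + radius)]
    D[i, j] <- box_count_dim(win, scales)   # fractal dimension of the local window
  }
  D                                          # one PWBCD value per pixel
}
```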
Note that each pixel in an image has a PWBCD value; thus we can test the rationality of PWBCD by displaying the PWBCD values as an image. In these PWBCD images, high PWBCD values are shown as white points, while low PWBCD values are shown as gray or black points. If the PWBCD images are similar to the original images, with large PWBCD values near singularities and small PWBCD values in smooth regions, the rationality is verified.
Figure 1 shows PWBCD images for three images: a test image composed of blocks with different gray levels, an LDCT image, and Barbara. The white points signify pixels with large fractal dimensions, while black points signify pixels with small fractal dimensions. Note that the white parts correspond to the texture parts of Barbara and to the soft tissues of the second image in the first row. Moreover, the PWBCD images are very similar to the original images, which demonstrates that PWBCD can be used to characterize the local structure of images.
3.2. The New Weight Function
After defining the PWBCD, we must find an efficient and powerful way to use it in NLMs directly. As discussed in the previous subsection, PWBCD characterizes the local structures of images well; thus PWBCD should be used to weight the points in the searching window. That is, the weights in (6) should be changed as in (12), where $D(i)$ is the PWBCD value for the considered pixel $i$, computed according to the method proposed in Section 3.1, $\sigma$ denotes the standard deviation of the noise, and $h_1$, $h_2$ are the filtering parameters. The term $d(i,j)$ in (13) is the similarity between two pixels $i$ and $j$, depending on the similarity of the intensity gray level vectors $v(N_i)$ and $v(N_j)$, where $N_k$ denotes a square window of fixed size centered at a pixel $k$.
Given a discrete noisy image $v$, the estimated value $NL[v](i)$ for a pixel $i$ is then computed as the weighted nonlocal average (14), where $S_i$ indicates a searching window centered at $i$. Note that the family of weights depends on the similarity between the pixels $i$ and $j$ and satisfies $0 \le w(i,j) \le 1$ and $\sum_j w(i,j) = 1$.
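To fix ideas, one plausible reading of (12), combining a Gaussian kernel on the comparison-patch difference with a second Gaussian kernel on the PWBCD difference as described above, is
$$w(i,j) = \frac{1}{Z(i)} \exp\!\left(-\frac{\|v(N_i) - v(N_j)\|^2}{h_1^2}\right) \exp\!\left(-\frac{(D(i) - D(j))^2}{h_2^2}\right),$$
with $Z(i)$ the sum over $j$ of the two exponential factors so that the weights sum to one. This is an illustration of the structure implied by the prose, not the paper's verbatim equation.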
3.3. The Steps of the New Method
The steps of PWBCD-NLMs are as follows.
(1) Compute the pointwise box-counting dimension for each pixel: for each pixel, given the window size and the scales, compute the PWBCD according to Section 3.1, obtaining a matrix with the same size as the image.
(2) Compute the weights: determine the parameters $h_1$, $h_2$, the size of the comparison patch, and the size of the searching window; compute the difference between two comparison patches using (13); compute the weights using (12).
(3) Estimate the real gray levels using (14).
4. Experiments and Discussion
The main objective in smoothing LDCT images is to remove the noise while preserving anatomical features in the images.
In order to show the performance of PWBCD-NLMs, a 2-dimensional test phantom is shown in Figure 1(a). The number of bins per view is 888, with 984 views evenly spanned on a circular orbit. The detector arrays are on an arc concentric to the X-ray source at a distance of 949.075 mm. The distance from the rotation center to the X-ray source is 541 mm. The detector cell spacing is
The LDCT projection data (sinogram) were simulated by adding Gaussian noise with signal-dependent variance, whose analytic form relating its mean and variance is given in (3). The projection data were reconstructed by standard Filtered Back Projection (FBP). Since both the original projection data and the noisy sinogram are available, the evaluation is based on the peak signal to noise ratio (PSNR) between the ideal reconstructed image and the filtered reconstructed image.
The PWBCDs for the images are computed according to Section 3.1. The proposed method is compared with NLMs; their common parameters include the standard deviation of the noise, the size of the comparison patch, and the size of the searching window. The remaining parameter for NLMs, which is the Gaussian kernel for the weights defined in (13), and the parameters for the new method, which are the sizes of the Gaussian kernels for the two weights defined in (12) (one for the difference between comparison patches and one for the difference between two PWBCDs), were all chosen by hand after many experiments to give the best performance.
Table 1 summarizes the PSNR between the ideal reconstructed image and the filtered reconstructed images. PWBCD-NLMs performs better than NLMs at different noise levels in terms of PSNR.
Figure 2 shows the noisy test images and their reconstructed images using NLMs and the proposed method. Although the reconstructed images are very similar to each other, the reconstructed images using the new method show better edge preservation, especially for weak and curved edges, than NLMs. Since PWBCD-NLMs provides a more flexible way of handling different local image structures, it achieves good denoising performance while preserving structures.
Abdominal CT images of a 62-year-old woman were scanned on a 16 multidetector row CT unit (Somatom Sensation 16; Siemens Medical Solutions) using 120 kVp and 5 mm slice thickness. The remaining scanning parameters were: gantry rotation time, 0.5 second; detector configuration (number of detector rows by section thickness); table feed per gantry rotation, 24 mm; pitch, 1:1; and reconstruction method, the Filtered Back Projection (FBP) algorithm with the soft-tissue convolution kernel "B30f". Different CT doses were obtained by using two different fixed tube currents, 60 mAs for LDCT and 150 mAs for SDCT, respectively. The CT dose index volumes (CTDIvol) for the LDCT and SDCT images are in positive linear correlation with the tube current and are calculated to range approximately between 3.16 mGy and 15.32 mGy [18].
On the sinogram space, the PWBCDs for the images are computed according to Section 3.1. The proposed method is compared with NLMs; their common parameters include the standard deviation of the noise, the size of the comparison patch, and the size of the searching window. The remaining parameter for NLMs, which is the Gaussian kernel for the weights defined in (13), and the parameters for the new method, the sizes of the Gaussian kernels for the two weights defined in (12) (one for the difference between comparison patches and one for the difference between two PWBCDs), were again chosen by hand.
Comparing the original SDCT images and LDCT images in Figure 3, we find that the LDCT images are severely degraded by nonstationary noise and streak artifacts. In Figure 3(d), the proposed approach obtains smoother images. In both Figures 3(c) and 3(d), we can observe better noise/artifact suppression and edge preservation than in the LDCT image. In particular, compared to the corresponding original SDCT images, the fine features representing the hepatic cyst are well restored by the proposed method. The noise grains and artifacts are significantly reduced in both the NLMs- and PWBCD-NLMs-processed LDCT images with suitable parameters (Figures 3(c) and 3(d)), and the fine anatomical/pathological features are well preserved compared to the original SDCT images acquired under standard dose conditions (Figure 3(a)).
5. Conclusions
In this paper, we propose a new PWBCD-NLMs method for LDCT imaging based on the pointwise box-counting dimension and a new weight function. Since PWBCD characterizes the local structures of images well and can be combined with NLMs easily, it provides a more flexible way to balance noise reduction and the preservation of anatomical details. Smoothing results for phantoms and real sinograms show that PWBCD-NLMs with suitable parameters performs well in visual quality and PSNR.
This paper is supported by the National Natural Science Foundation of China (no. 60873102), the Major State Basic Research Development Program (no. 2010CB732501), and the Open Foundation of the Visual Computing and Virtual Reality Key Laboratory of Sichuan Province (no. J2010N03). Ming Li also acknowledges support by the NSFC under Project Grant nos. 61272402, 61070214, and 60873264, and by the 973 plan under Project Grant no. 2011CB302800.
1. D. J. Brenner and E. J. Hall, "Computed tomography - an increasing source of radiation exposure," New England Journal of Medicine, vol. 357, no. 22, pp. 2277–2284, 2007.
2. J. Hansen and A. G. Jurik, "Survival and radiation risk in patients obtaining more than six CT examinations during one year," Acta Oncologica, vol. 48, no. 2, pp. 302–307, 2009.
3. H. J. Brisse, J. Brenot, N. Pierrat et al., "The relevance of image quality indices for dose optimization in abdominal multi-detector row CT in children: experimental assessment with pediatric phantoms," Physics in Medicine and Biology, vol. 54, no. 7, pp. 1871–1892, 2009.
4. L. Yu, "Radiation dose reduction in computed tomography: techniques and future perspective," Imaging in Medicine, vol. 1, no. 1, pp. 65–84, 2009.
5. J. Weidemann, G. Stamm, M. Galanski, and M. Keberle, "Comparison of the image quality of various fixed and dose modulated protocols for soft tissue neck CT on a GE Lightspeed scanner," European Journal of Radiology, vol. 69, no. 3, pp. 473–477, 2009.
6. W. Qi, J. Li, and X. Du, "Method for automatic tube current selection for obtaining a consistent image quality and dose optimization in a cardiac multidetector CT," Korean Journal of Radiology, vol. 10, no. 6, pp. 568–574, 2009.
7. A. Kuettner, B. Gehann, J. Spolnik et al., "Strategies for dose-optimized imaging in pediatric cardiac dual source CT," RoFo, vol. 181, no. 4, pp. 339–348, 2009.
8. P. Kropil, R. S. Lanzman, C. Walther et al., "Dose reduction and image quality in MDCT of the upper abdomen: potential of an adaptive post-processing filter," RoFo, vol. 182, no. 3, pp. 248–253.
9. H. B. Lu, X. Li, L. Li et al., "Adaptive noise reduction toward low-dose computed tomography," in Proceedings of Medical Imaging 2003: Physics of Medical Imaging, parts 1 and 2, vol. 5030, pp. 759–766, February 2003.
10. J. C. Giraldo, Z. S. Kelm, L. S. Guimaraes et al., "Comparative study of two image space noise reduction methods for computed tomography: bilateral filter and nonlocal means," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, pp. 3529–3532, 2009.
11. H. B. Lu, I. T. Hsiao, X. Li, and Z. Liang, "Noise properties of low-dose CT projections and noise treatment by scale transformations," in Proceedings of the IEEE Nuclear Science Symposium Conference Record, vol. 1–4, pp. 1662–1666, November 2002.
12. P. J. La Rivière, "Penalized-likelihood sinogram smoothing for low-dose CT," Medical Physics, vol. 32, no. 6, pp. 1676–1683, 2005.
13. S. Hu, Z. Liao, and W. Chen, "Reducing noises and artifacts simultaneously of low-dosed X-ray computed tomography using bilateral filter weighted by Gaussian filtered sinogram," Mathematical Problems in Engineering, vol. 2012, Article ID 138581, 14 pages, 2012.
14. S. Hu, Z. Liao, and W. Chen, "Sinogram restoration for low-dosed X-ray computed tomography using fractional-order Perona-Malik diffusion," Mathematical Problems in Engineering, vol. 2012, Article ID 391050, 13 pages, 2012.
15. A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005.
16. A. Buades, B. Coll, and J. M. Morel, "A non-local algorithm for image denoising," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 2, pp. 60–65, June 2005.
17. A. Buades, B. Coll, and J. M. Morel, "Nonlocal image and movie denoising," International Journal of Computer Vision, vol. 76, no. 2, pp. 123–139, 2008.
18. C. Yang, C. Wufan, Y. Xindao et al., "Improving low-dose abdominal CT images by weighted intensity averaging over large-scale neighborhoods," European Journal of Radiology, vol. 80, no. 2, pp. e42–e49, 2011.
19. Y. Chen, Z. Yang, W. Chen et al., "Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means," Physics in Medicine and Biology, vol. 57, no. 9, pp. 2667–2688, 2012.
20. Y. Chen, D. Gao, C. Nie et al., "Bayesian statistical reconstruction for low-dose X-ray computed tomography using an adaptive-weighting nonlocal prior," Computerized Medical Imaging and Graphics, vol. 33, no. 7, pp. 495–500, 2009.
21. Z. Liao, S. Hu, and W. Chen, "Determining neighborhoods of image pixels automatically for adaptive image denoising using nonlinear time series analysis," Mathematical Problems in Engineering, vol. 2010, Article ID 914564, 2010.
22. Z. Liao, S. Hu, M. Li, and W. Chen, "Noise estimation for single-slice sinogram of low-dose X-ray computed tomography using homogenous patch," Mathematical Problems in Engineering, vol. 2012, Article ID 696212, 16 pages, 2012.
23. T. Thaipanich, B. T. Oh, P.-H. Wu, and C.-J. Kuo, "Adaptive nonlocal means algorithm for image denoising," in Proceedings of the IEEE International Conference on Consumer Electronics (ICCE '10).
24. T. Thaipanich and C.-C. J. Kuo, "An adaptive nonlocal means scheme for medical image denoising," in Proceedings of the SPIE Medical Imaging 2010: Image Processing, vol. 7623, March 2010.
25. R. Yan, L. Shao, S. D. Cvetkovic, and J. Klijn, "Improved nonlocal means based on pre-classification and invariant block matching," Journal of Display Technology, vol. 8, no. 4, pp. 212–218, 2012.
26. A. K. Bisoi and J. Mishra, "On calculation of fractal dimension of images," Pattern Recognition Letters, vol. 22, no. 6-7, pp. 631–637, 2001.
27. R. Creutzberg and E. Ivanov, "Computing fractal dimension of image segments," in Proceedings of the 3rd International Conference of Computer Analysis of Images and Patterns (CAIP '89), 1989.
28. M. Ghazel, G. H. Freeman, and E. R. Vrscay, "Fractal image denoising," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1560–1578, 2003.
29. M. Ghazel, G. H. Freeman, and E. R. Vrscay, "Fractal-wavelet image denoising revisited," IEEE Transactions on Image Processing, vol. 15, no. 9, pp. 2669–2675, 2006.
30. B. Pesquet-Popescu and J. L. Vehel, "Stochastic fractal models for image processing," IEEE Signal Processing Magazine, vol. 19, no. 5, pp. 48–62, 2002.
Triangle Calculator
Enter values for three of the six sides and angles of the triangle, and the other three values will be computed. The number of significant figures entered will determine the number of significant figures in the results.
Frequently Asked Questions
Q: I like your triangles, especially the ones that have interior angles not summing to 180 degrees. Neat! Try sides equal to 1,2,2.
A: Because each of the sides you entered has so few significant figures, the angles are all rounded to come out to 80, 80, and 30 (each with one significant figure). Entering sides of values 1.00,
2.00, and 2.00 will yield much more accurate results of 75.5, 75.5, and 29.0.
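For the curious, the side-side-side case in this FAQ is just the law of cosines. A small R sketch (not the calculator's actual code) reproduces the full-precision answer:

```r
sss_angles <- function(a, b, c) {
  deg <- function(x) x * 180 / pi
  A <- deg(acos((b^2 + c^2 - a^2) / (2 * b * c)))  # law of cosines, angle opposite side a
  B <- deg(acos((a^2 + c^2 - b^2) / (2 * a * c)))  # angle opposite side b
  c(A = A, B = B, C = 180 - A - B)                 # angles always sum to 180 exactly
}
round(sss_angles(1, 2, 2), 1)
#>    A    B    C
#> 29.0 75.5 75.5
```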
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
Public License for more details. | {"url":"http://ostermiller.org/calc/triangle.html","timestamp":"2014-04-19T13:08:59Z","content_type":null,"content_length":"25034","record_id":"<urn:uuid:f4d96b95-64a6-4dac-90db-412dc5e31864>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Investment problem
How do I solve this problem-
Mr. A and Mr. B invested 1000 and 15000 dollars, respectively, for a year in a project. After 6 months, Mr. C invested 12000 dollars for 6 months. At the end of the year the whole profit from the project is 100000 dollars. I need to distribute the profit among these 3 according to their investment and time, so what will be the distributed profits?
6+2 (3j-2) = 4(l+j) please show work
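A worked solution, on the assumption that the "l" on the right-hand side is a mistyped digit 1:

6 + 2(3j - 2) = 4(1 + j)
6 + 6j - 4 = 4 + 4j
6j + 2 = 4j + 4
2j = 2
j = 1

If "l" really is a second variable, the same steps only tie j to l: 2 + 6j = 4l + 4j, so j = 2l - 1.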
MathGroup Archive: September 2010 [00357]
Solving differential equations?
• To: mathgroup at smc.vnet.net
• Subject: [mg112497] Solving differential equations?
• From: Zhiyong Shen <zshen2002 at yahoo.com>
• Date: Fri, 17 Sep 2010 06:42:56 -0400 (EDT)
Following are the two differential equations:
dx/dr = a*(1 - x^2 + y^2)/[r*(r-1)]^(1/2) - 2b*[r/(r-1)]^(1/2)*x - 0.0152*[r/(r-1)]^(1/2)*x*y
dy/dr = -2a*x*y/[r*(r-1)] - 2b*[r/(r-1)]^(1/2)*y + 0.0076*[r/(r-1)]^(1/2)*(1 + x^2 - y^2)
with initial conditions:
x(5.67)= - {a[br + (b^2*r^2 + a^2 + 0.000058*r^2)^(1/2)]}/(a^2 + 0.000058*r^2)
y(5.67)= -{0.0076*r[br + (b^2*r^2 + a^2 + 0.000058*r^2)^(1/2)]}/(a^2 + 0.000058*r^2)
where r is the independent variable, x and y are dependent variables to be solved, and a and b are constant parameters.
It would be a great help to me if someone could give me a suggestion on how to solve these equations. | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Sep/msg00357.html","timestamp":"2014-04-18T19:07:32Z","content_type":null,"content_length":"25637","record_id":"<urn:uuid:83637687-ed5f-427c-9f33-92453917409c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
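One route, sketched here in R rather than Mathematica (NDSolve would be the analogous tool): treat it as a plain initial-value problem in r and integrate numerically. The poster left a and b unspecified, so the values below are placeholders only.

```r
library(deSolve)

a <- 1; b <- 1                      # placeholder constants, not from the post
rhs <- function(r, state, parms) {
  x <- state[1]; y <- state[2]
  s <- sqrt(r / (r - 1))
  dx <- a * (1 - x^2 + y^2) / sqrt(r * (r - 1)) - 2*b*s*x - 0.0152 * s * x * y
  dy <- -2*a*x*y / (r * (r - 1)) - 2*b*s*y + 0.0076 * s * (1 + x^2 - y^2)
  list(c(dx, dy))
}
r0   <- 5.67
root <- sqrt(b^2 * r0^2 + a^2 + 0.000058 * r0^2)
den  <- a^2 + 0.000058 * r0^2
y0   <- c(x = -a * (b*r0 + root) / den,          # the given initial conditions at r = 5.67
          y = -0.0076 * r0 * (b*r0 + root) / den)
sol <- ode(y = y0, times = seq(r0, 100, by = 0.05), func = rhs, parms = NULL)
```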
Hölder’s inequality via complex analysis
In this post I will give a complex variables proof of Hölder’s inequality due to Rubel. The argument is very similar to Thorin’s proof of the Riesz-Thorin interpolation theorem. I imagine that there
is a multilinear form of Riesz-Thorin that provides a common generalization of the two arguments, however we won’t explore this here. We start by establishing the well-known three lines lemma.
Lemma (three lines lemma) Let ${\phi(z)}$ be a bounded analytic function in the strip ${B=\{z : 0<\Re(z)<1\}}$. Furthermore, assume that $\phi(z)$ extends to a continuous function on the boundary of
$B$ and satisfies
$\displaystyle |\phi(it)|\leq M_{0} \hspace{1cm}\text{and}\hspace{1cm} |\phi(1+it)|\leq M_{1}$
for $t\in \mathbb{R}$. Then, for $0\leq \sigma \leq 1$, we have that
$\displaystyle|\phi(\sigma+it)|\leq M^{1-\sigma}_{0}M^{\sigma}_{1}.$
Proof: Let $\epsilon>0$ and consider the function (analytic in $B$)
$\displaystyle \phi_{\epsilon}(z)=\phi(z)M_{0}^{z-1}M_{1}^{-z} e^{\epsilon z(z-1)}.$
One easily checks that ${|\phi_{\epsilon}(z)|\leq 1}$ if ${\Re(z)=0}$ or ${\Re(z)=1}$. Furthermore, since ${|\phi(z)|}$ is uniformly bounded for ${z \in \overline{B}}$, we must have that
$\displaystyle \lim_{\Im{z}\rightarrow\infty}|\phi_{\epsilon}(z)|=0$
for ${ 0 < \Re z < 1}$. We now claim that, for ${A}$ sufficiently large, ${|\phi_{\epsilon}(z)|\leq 1}$ for ${z \in B_{A}}$ where ${B_{A}=\{z : 0\leq\Re(z)\leq 1, -A \leq \Im(z) \leq A\}}$. This
follows by the previous remarks on the boundary of ${B_{A}}$, and by the maximum modulus principle in its interior. Since ${A}$ was arbitrary, ${|\phi_{\epsilon}(z)|\leq 1}$ throughout ${B}$; letting $\epsilon \rightarrow 0$ gives the claimed bound. This completes the proof. $\Box$
We are now ready to give a complex variables proof of Hölder’s Inequality.
Theorem (Hölder’s Inequality)Let ${(X, \mathcal{M}, \mu)}$ be a measure space, ${p,q\in [1,\infty]}$ such that ${\frac{1}{p}+\frac{1}{q}=1}$. If ${f \in L^p(X)}$ and ${g \in L^q(X)}$ then
$\displaystyle ||fg||_{L^1} \leq ||f||_{L^p}||g||_{L^q}.$
Proof: By a standard limiting argument (performed first with, say, ${g}$ fixed) it will suffice to assume that ${f}$ and ${g}$ are simple functions. If we let ${z=1/q}$ we may now rewrite Hölder’s
inequality as
$\displaystyle \int_{X}|f|^{p(1-z)}|g|^{qz}d\mu \leq ||f||_{L^p}^{p(1-z)}||g||_{L^q}^{qz}$
Indeed, with ${z=1/q}$ we have ${p(1-z)=p/p=1}$ and ${qz=1}$. Using the fact that ${f}$ and ${g}$ are simple, we can define a function, ${\phi(z)}$, analytic in the strip ${B}$ by
$\phi(z)=\int_{X}|f|^{p(1-z)}|g|^{qz}d\mu=\sum_{n=1}^{N}\sum_{m=1}^{M}a_n b_m e^{\lambda_n p(1-z)}e^{\kappa_m q z}.$
It follows that ${|\phi(\sigma+it)| \leq |\phi(\sigma)|}$, and that ${\phi(z)}$ is bounded on the closure of ${B}$. We record that ${|\phi(it)| \leq |\phi(0)| = \int_{X}|f|^{p}d\mu}$ and ${|\phi
(1+it)| \leq \phi(1) = \int_{X}|g|^{q}d\mu }$. Now, by the three lines lemma, we have that
$\displaystyle \int_{X}|f|^{p(1-\sigma)}|g|^{q\sigma}d\mu=|\phi(\sigma)|\leq (\int_{X}|f|^{p}d\mu)^{1-\sigma}(\int_{X}|g|^{q}d\mu)^{\sigma}.$
Taking ${\sigma=1/q}$ we recover Hölder’s inequality. $\Box$
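For example, when ${p=q=2}$ (so that ${\sigma=1/2}$) the conclusion specializes to the Cauchy–Schwarz inequality:

$\displaystyle \int_{X}|fg|\,d\mu \leq \Big(\int_{X}|f|^{2}d\mu\Big)^{1/2}\Big(\int_{X}|g|^{2}d\mu\Big)^{1/2}.$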
Updated 10/13/2009: typos corrected
Updated 10/14/2009: the statement of the three lines lemma was truncated in the original post
Update 10/31/2009: typo in definition of B.
6 Responses
1. This is nice. But to get a multi-linear version [without the reduction of the multilinear case to the bilinear case], it appears you would need a higher dimensional version of the three lines lemma.
□ Thanks for inaugurating the comments!
I agree with your remarks. Of course, as you point out, you should be able to reduce the multilinear case to the bilinear case.
2. Interesting proof, which I haven’t seen before.
The basic idea seems to be that since your function phi(z) is analytic, it must be bounded by the values on the lines Re(z)=0 and Re(z)=1. Also, on each line Re(z)=constant it is maximised at the
point Im(z)=0, which gives you the bounds on Re(z)=0 and Re(z)=1. Using almost the same argument, phi(z) is log-convex on [0,1].
However, log-convexity of phi(z) follows directly from the fact that it is an integral over exponential functions of z, which also proves the Holder inequality (and a bit more efficiently imo).
3. There’s a typo in your definition of B. It should be 0<R(z)<1 rather than 0<R(z)<0. | {"url":"http://lewko.wordpress.com/2009/10/13/holders-inequality-via-complex-analysis/","timestamp":"2014-04-20T15:51:07Z","content_type":null,"content_length":"71558","record_id":"<urn:uuid:3fa44536-52b4-4a39-90bd-e7563af12ac8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
Almagre Chronologies
On an earlier occasion, I posted up our updated measurement data from Almagre. I’ve been working on this material in preparation for AGU (Dec 14). Today I’m going to show some initial chronology calculations.
In doing these calculations, I’m trying to reconcile exactly to standard methodology while, at the same time, trying to be a little bit reflective about the statistical meaning of their methods (and
porting the software to R as much as possible.) I have done runs in Arstan on our data and on Graybill’s co524. I also corresponded with Rob Wilson on this matter. I had trouble reconciling the
archived Graybill chronology with Arstan options and Rob kindly identified the precise Arstan option used by Graybill.
Data Sets
In addition to our collection in 2007, there have been two previous measurement data archives for Almagre: Lamarche in 1968 (co071) and Graybill in 1983 (co524). The Graybill archive includes some
(but strangely not all) Lamarche cores. According to my best efforts at concordance, Graybill trees with id over 30 were actually collected by Lamarche. Three of our trees (30, 33, 47) matched
Graybill trees.
For the calculations here, I’ve collated a data set using the “fresh” Graybill measurements, the original (and more complete) Lamarche measurements and, from our update, I’ve excluded the first two
sites (Elk Park, Almagre Base) which are not in the same area as the Graybill samples. I don’t believe that the results are particularly sensitive to the exact collation. (I’ve examined some
sensitivities but am still doing analysis.)
In a first pass analysis, I’m using all past and present data (strip bark and whole bark) in order to reconcile results to past results as much as possible and will then look at strip bark-whole bark differences.
The resulting data set has 37 trees and 77 cores.
Arstan detrending is done by first trying to fit a “generalized” negative exponential to each core. A “generalized” exponential has the shape:
$\text{RingWidth} = A + B \exp(-C \cdot \text{age})$
There are some interesting numerical analysis issues pertaining to this sort of non-linear fit. I can substantially replicate Arstan results by doing these fits using the R function nls and even more
conveniently using nlsList (in the nlme package). I can often obtain convergence when Arstan convergence failed and I suspect that they’ve set their iteration limit a little low relative to their tolerance.
A second Arstan option is a negatively sloping line if a negative exponential fit fails, and then a line through the mean.
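For concreteness, a stripped-down version of the fit in R (the data frame layout, column names and start values here are illustrative; the actual script is a little more elaborate):

```r
library(nlme)   # for nlsList

# Generalized negative exponential fit for one core/tree; 'core' holds
# ring width (rw) and ring age (age). Start values are rough guesses.
fit <- nls(rw ~ A + B * exp(-C * age), data = core,
           start = list(A = min(core$rw), B = diff(range(core$rw)), C = 0.01))

# Tree-by-tree fits across the whole collation in one call:
fits <- nlsList(rw ~ A + B * exp(-C * age) | tree, data = collation,
                start = c(A = 0.2, B = 1, C = 0.01))
```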
An odd feature in Arstan detrending – and I don’t think that it matters much in this example, but doesn’t seem to make any sense – is that the “age” of each core is determined individually even
though another core may have established an earlier date for the tree. This is shown in the example below (which has the most cores of any tree on the site). I’ve plotted the ring width information
from each core together with the Arstan fit. For example, the green core is fitted as though it’s a new tree even though the blue core has shown that the tree was already at least 150 years old and
no longer juvenile at the commencement of the green core. In solid black, I’ve shown the negative exponential fit using all available data for the tree. This seems far more rational than trying to
treat each core separately. At the end of the day, I don’t suppose that the decision makes much difference, but detrending on a tree basis (rather than a core basis) seems a safer approach,
especially when one is worried about potential strip bark problems. Dendros have been increasingly moving towards standardization on a more regional basis, and standardization at a tree (rather than
a core basis) is at least consistent with that trend.
After fitting a curve, the standard dendro procedure is to divide the measured ring width by the fitted width to produce a dimensionless ratio. There are occasional discussions about whether to use
residuals, as would be far more conventional in mainstream statistics, but ratios are well-established. I’ve used ratio approaches here in order not to vary too many things at the same time.
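In script form, the ratio step and a bare-bones chronology average look roughly like this (again with illustrative object names, building on the nlsList fit above):

```r
cf  <- coef(fits)                                   # one (A, B, C) row per tree from nlsList
idx <- as.character(collation$tree)
collation$fit   <- cf[idx, "A"] + cf[idx, "B"] * exp(-cf[idx, "C"] * collation$age)
collation$ratio <- collation$rw / collation$fit     # dimensionless ring width index
chron <- tapply(collation$ratio, collation$year, mean, na.rm = TRUE)
```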
Power Transformations
A second decision in chronology-making is the decision of whether to transform the data to “normalize” it. Ring width data, even after detrending is typically very non-normal and this is the case
here. Willis showed a pretty violin plot in the earlier thread and I’ve applied this below in combination with a QQnorm plot to illustrate the distributions. First here is the distribution of
“standardized” detrended ring width ratios calculated according to the above procedures. As you can see, this is highly non-normal with a positive skew.
Figure 2. Violin and QQnorm plots of Almagre detrended ring width ratios.
Cook has initiated the use of power transformations to normalize ring width distributions (a technique nearly always used by Rob Wilson in his work), and it is an excellent idea. I haven’t explored the
criteria that they use to select the power transformation index. I experimented with several different transformations starting with k=0.5 and after a couple of attempts used k=0.375. This resulted
in the following distribution. Because of the severe non-normality of the Almagre data, I have the impression that the power transformation, by reducing non-normality artefacts, significantly reduces some of the variation seen in the older chronologies.
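The transform itself is a one-liner; candidate exponents were checked by eye against qqnorm plots (sketch):

```r
k <- 0.375
collation$ratio_pt <- collation$ratio^k            # power transformation toward normality
qqnorm(scale(collation$ratio_pt)); qqline(scale(collation$ratio_pt))
chron_pt <- tapply(collation$ratio_pt, collation$year, mean, na.rm = TRUE)
```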
First here is a plot comparing a chronology using only updated measurements to a chronology using only Graybill 1983 measurements (excluding Lamarche for now.) This shows a couple of things: that our
sampling actually did replicate Graybill’s results (r=0.89 for this period). I’m not sure whether this is biased upward by the crossdating exclusions – possible biases there are a large study in themselves. For now, we can say that, whatever interpretation one may put on the final chronology itself, there does seem to be data that can be independently recovered.
First here is a plot showing the original Lamarche and Graybill chronologies, together with our extension to 2007 (without a power transformation). Values were at high levels in the late 19th century
and again in the 1950s and have declined since the 1950s reaching more or less average levels in the 1990s-2000s.
The most notable feature on the recent portion of this graphic (and perhaps the entire graphic) is the severely reduced ring widths in the 1840s. Steve Mosher has linked to some references reporting
severe drought in Colorado in the 1840s http://www.ncdc.noaa.gov/paleo/pubs/woodhouse2002/woodhouse2002.html
and it’s hard to avoid the view that the reduced growth in Almagre bristlecones in the 1840s is associated with contemporaneous drought throughout the state – a view which is certainly consistent
with the impression of our most knowledgeable botanical observers of moisture limitation at the site.
Here is a blow-up of the same chronology for the 1830-2007 period, covering the low-growth 1840s.
Power Transformation Chronologies
Here is the power-transformation chronology (k=0.375) for the 1830-2007 period. Much of the variation has been damped down and one is left with an impression of rather limited variation other than
for extreme events like the 1840s (the 1920s were also low-growth here).
Power Transform Chronology k=0.375.
Finally for today, here is the power transform chronology for 900-2007 (with an 11-year smooth):
The correlation of the ring width chronology to the HadCRU3 gridcell (annual) is 0. The graph below shows the correlation of the chronology to monthly temperatures at the nearest USHCN station (Cheesman, adjusted). In addition to the usual barplot showing the correlations to the current and preceding year, I’ve shown correlations of the ring width to the temperatures in the following year.
The most “significant” correlation is between ring width and April temperature of the following year – a “teleconnection” that is appealingly Mannian.
Despite some evidence that large-scale drought such as the 1840s can affect growth, there is little correlation between statewide precipitation index and ring widths as shown below.
“Reconstruction” of Cheesman Reservoir Temperature
The graphic below shows a “reconstruction” of Cheesman Reservoir July-August temperature done in one of the common dendro ways – by variance matching. The r2 of this “reconstruction” is under 0.01 –
not that this precludes a Mannian model. After all, this may teleconnect with temperatures in Bali or Beijing or Rio de Janeiro or Antarctica. 2002 and 2003 were warm summers at Cheesman Reservoir,
but did not result in exceptional growth.
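For the record, “variance matching” here is nothing more than rescaling the chronology to the mean and standard deviation of the target series over the overlap. A sketch with assumed object names:

```r
# temp: Cheesman Jul-Aug mean temperature; chron: the chronology; both named by year
yrs   <- intersect(names(chron), names(temp))
recon <- mean(temp[yrs]) +
  (chron[yrs] - mean(chron[yrs])) * sd(temp[yrs]) / sd(chron[yrs])
cor(recon, temp[yrs])^2        # the r2 quoted above is under 0.01
```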
58 Comments
1. Looking at 1980 – present I have one overbearing thought:
Wot no hockey stick?
2. Maybe it should be called the Toronto Maple Leaf graph
They don’t do hockey either.
3. But, I thought it was settled science that BCPs were thermometers, not rain gauges.
From Salzer, M.W. and K.F. Kipfmueller. 2005. Reconstructed temperature and precipitation on a millennial timescale from tree-rings in the southern Colorado Plateau, U.S.A. Climatic Change, 70:
Climatically responsive trees useful in dendroclimatology contain quantifiable variables (e.g., ring-width) that are the result of internal tree processes directly or indirectly limited by
climatic factors. It follows that the most responsive trees are found near distributional edges and ecotonal boundaries, where climatic factors are most limiting. Hence, boundary areas, such
as the lower forest border and subalpine treeline, are ideal for developing tree-ring chronologies at both the cold and arid limits of trees (Figure 1). On the southern Colorado Plateau,
lower elevation pines (Pinus ponderosa and Pinus edulis) and Douglas-fir (Pseudotsuga menziesii) provide information on past precipitation, while high elevation Bristlecone Pine (Pinus
aristata) renders details about prior temperatures. Through a comparison of these two growth records, paleoclimatic insight unobtainable from either record alone is generated, allowing an
integrated view of temperature and precipitation variations.
4. I don’t mean to be a noob on bristlecone pine tree proxies, but on your correlations by month: In what month of the year does a tree ring “begin” and “end?” The way you labeled the months seems to imply a ring matches a calendar year, but my understanding was that the inner or first part of a ring was the new spring growth, so the actual year should be April to April or May to May or something similar?
5. It’s good to see that your 900-2007 chronology closely tracks the others and largely negates the criticism that your methods were non-standard. Something happened with Lamarche from about
1180-1410 though. Congratulations on extending the data 25 years on the recent end and 200 years on the ancient end with a “significant” peak at about 980.
6. #5. We did not extend the ancient end. The update includes the ancient data and merely overplots it.
7. I don’t think a state-wide precipitation index means much at high elevations. I would imagine that snowpack amounts and the number of thunderstorms that happen to occur there are especially important.
8. #7 Agreed on snowpack.
9. Steve,
it s hard to avoid the iview that the reduced growth in Almagre bristlecones in the 1840s isn t associated with contemporaneous drought throughout the state
You’ve got yourself confused by a common misuse of the double-negative here, I believe. I think you’re trying to say the reduced growth is associated with the statewide drought, in which case it
should be “is” rather than “isn’t” after “1840s”. “Hard to avoid” is essentially the same as “easy to accept”. If you read that sentence using the latter version, it’s hard to avoid the fact that
you’ve not used good grammar.
10. 1) They got a hockey stick out of THAT?!?! WTF?
2) Trees make lousy thermometers, apparently.
11. RE: #7 and 8 – Adding to this, I would say that summer thunderstorms are probably a bit more reliable on the Front Range than in the Whites. Both are of course highly dependent on snow pack
persistence as a control on how much moisture is available in July and August. The ideal scenario is where winter comes in two surges, one that dumps lots and lots of snow, then a hiatus in, say,
March or April, allowing the ground to thaw, followed by a late series of snow events, dumping massively through late April and May, atop the now thawed ground. A typical pattern in the Whites,
perhaps also common in the Front Range.
12. You see a similar, albeit smaller dip in 1934. Granted I looked at 1934 for a particular reason.
The rebound in 1935 is remarkable.
The question for the biologists is how rapidly does a tree respond to climate differences
( BCPs aint tomatoes) Anyway, the dip in the 1934 range would seem to be negatively correlated
with temp (assuming no lag in the response) and then 1935 shows a spurt of growth.
Record rainfall 25 miles from colorado springs in 1935. 24 inches in 6 hours. in a region
that sees 16–18 inches/yr
Colorado spring rainfall… http://www.crh.noaa.gov/pub/?n=/climate/cospcpn.php
MrPete will comment on the use of colorado springs precip data.
13. Just to clarify:
it s hard to avoid the iview that the reduced growth in Almagre bristlecones in the 1840s isn t associated with contemporaneous drought throughout the state
Do you mean you think drought was likely widespread across the state?
Also, I was looking for USHCN data for Cheesman Reservoir. The only source that I found for this was this one. I’m curious why the table for Cheesman ends at 1984. If there is a more recent
record for this site, could someone link it?
14. the table for Cheesman ends at 1984
That should read “1994″. Sorry. Same question. What happened the last 13 years?
15. I’m confused over the grey “cloud” around the chronology graphs. Are these the yearly values with the graphs giving smoothed values?
Can you estimate confidence bands? If the “clouds” are yearly values I’m guessing such bands would be pretty big.
And finally, how do we get from these curves to temperatures?
16. Should be interesting in San Francisco.
how do we get from these curves to temperatures?
How indeed. There’s no correlation during the calibration period. Which would imply the temperature reconstruction confidence bands are infinitely wide.
18. Sorry – the clouds show all the individual measurements.
And finally, how do we get from these curves to temperatures?
No idea. When there’s an r2 of 0, it’s about as informative as a Mannian reconstruction.
19. I love a good crosspost. Independent convergence, as opposed to forced consensus.
20. #13. My USHCN collation has Cheesman through to 2006.
21. O.k. Found it.
22. MattN: That’s just it, they didn’t get a hockeystick out of this kind of data. This kind of data was used to create the flat shaft. The blade was created by splicing on the modern thermometer
record to the end of the chart. Locate the record that shows what you want for each time period. Splice them together. Prove anything you want.
“Numbers are like people. Torture them long enough, and they’ll tell you anything you want to hear.”
Steve: There’s more to making the proxy blade than that – as I’ve discussed at length on the blog elsewhere.
23. #16, 17. I’ve added a dendro-style “reconstruction” of Cheesman Reservoir temperature.
24. Steve:
There s more to making the proxy blade than that – as I ve discussed at length on the blog elsewhere.
Just for clarification purposes, is it accurate to say that the Sheep Mountain (California) BCPs are believed to be the source of the hockey stick, rather than these Colorado BCPs?
Steve: Yes, Sheep Mountain is by far more important. See the Ababneh update though.
25. So this is the result of the negative exponential fit of each tree, averaged together and with an 11-year smooth through the incredibly noisy data.
Can this whole procedure have any statistical or physical integrity? Does bristlecone tree growth relate to any useful climatic variable?
Steve: Perhaps it is teleconnected.
26. To any newbies: this is a major development in the so-called “hockey stick debate”, which asks: “are current temperatures ‘unprecedented’ compared to previous times, such as the MWP?”. Past
studies have relied on data that were 20 years old. These data are new. They illustrate a huge 20th century divergence between the tree-ring record and the instrumental temperature record. i.e.
Treering vs. temperature calibrations that appeared to work well up until the 1980s no longer work.
Steve: Yes, Sheep Mountain is by far more important. See the Ababneh update though.
Thanks Steve. I guess I was trying to clarify for the purposes of those posters who are looking for the hockey stick in the Almagre BCPs. As I understand it, there never was a hockey stick from the Almagre chronology, only from the teleconnected sweet-spot of Sheep Mountain.
Sorry if this is a bit OT for this thread, but have you tried updating the Sheep Mountain data with Ababneh’s data and running it through a pseudo Mann-o-matic, in the same way you did with the
tech stocks? I try to keep up with the blog, and if I missed this, I apologize for eating the bandwidth.
28. Steve, in #24 you say “Perhaps it is teleconnected.” Are you being funny or are you serious? (I personally find this teleconnected idea to be complete nonsense, but I’m not a scientist.) And am I right in thinking that even Bender may find it plausible?
29. Regarding Cheesman temperature record
Perhaps you’ve discussed already: It’s difficult not to speculate on the effect of the Hayman Fire on the 2002 temperature reading. It was Colorado’s largest wildfire. Photo shows Cheesman in the
cleft of the northward-spreading fire. The year was 2002, the “tip” of the hockeystick blade, I believe.
30. I’ve added a comparison of a chronology developed only from our Graybill site measurements and the Graybill 1983 measurements. They match very closely (r=0.89). So the chronology itself appears
to be replicable and not merely random. Determining whether or not the chronology is teleconnected to precipitation in Bombay, births in Honduras, wine sales in Australia or the first consonant
of Presidential surnames will undoubtedly require “sophisticated” analysis.
31. #27
He’s being funserious. Let’s have our lecture from JEG on what teleconnection *really* means. Then we can judge. Agnosticism is hard, I know.
32. Steve, As an engineer, rather than a statistician, I find most information from the graphs rather than the basic numbers, but one thing has been totally baffling me with these rings, apart from
the point you make. Just looking at rings from trees cut down in my own garden, the widths vary around the tree, and they don’t vary evenly, so, for instance, one side of the slice has ring, say,
1995 wider than 1996, yet further round the trunk, it’s the other way round. Given variations like this, just how can anything at all be extracted from tree rings? This is shown in post #39 by
Willis in the earlier Almagre thread. Some major events do seem to line up, but in other places, the two lines are going in different directions, forgive me, but, huh….?
The subsequent question is how are the detrending equations obtained? Is there some sort of recent calibration, or is this just an educated guess on someone’s part?
Finally, silly question perhaps, but has anyone done a plot of this century’s rings against “global temperature”?
Hope this is not too simplistic, as I really am interested and have been following for quite a while now.
33. Sorry, I’d intended to quote your # 17, but haven’t quite got the hang of block quotes yet.
You said
Sorry – the clouds show all the individual measurements.
And finally, how do we get from these curves to temperatures?
No idea. When there s an r2 of 0, it s about as informative as a Mannian reconstruction.
34. Re #32:
Given variations like this, just how can anything at all be extracted from tree rings?
That is a very good question…
Sorry, I d intended to quote your # 17, but haven t quite got the hang of block quotes yet.
Highlight all of the text you want to quote and hit the B-Quote button. The window will put appropriate tags around your quoted text.
36. #28 asks
am I right in thinking that even Bender may find it plausible?
CA search for “bender” and “teleconnection” leads to:
Short answer: it depends what you mean by “it”. There are several layers to the dendro-teleconnection postulate.
-climate at A is correlated (shares a common low-frequency signal) with climate at B (reasonable; e.g. Tasmania-California via ENSO)
-trees at A respond causally to climate at A (reasonable)
-trees at A spuriously correlate better with climate at B than A (reasonable; given B is detected post-hoc and correlation is a statistic subject to sampling error)
-trees at A appear to respond better to climate at B (reasonable; thinking correlation may imply causation)
-trees at A actually do to respond better to climate at B than A (unreasonable)
Fishing for teleconnective dendro-correlations is prone to logical errors of the type post hoc ergo propter hoc. Fine for building working hypotheses. But not robust enough to withstand a
challenge in court.
37. The correlation between this analysis and that of Graybill almost a quarter-century ago seems to be very important, and a ringing endorsement of the methodology employed in both cases. It is
extraordinary that, on an individual tree level, data is so noisy as to appear almost useless, yet the whole contains a long-term signal that is reproducible.
trees at A spuriously correlate better with climate at B than A (reasonable; given B is detected post-hoc and correlation is a statistic subject to sampling error)
Aye, there’s the rub. If I have enough ‘proxy’ series and enough temperature series I can find many correlations. It is worse if the series are trending/non-stationary and I don’t adjust
appropriately for it. Standard statistics says that about 1 in 20 regressions will show ‘significant’ correlation where there is none — how many regressions do you think researchers run before
they fix on the one they publish?
Give me enough data and I will find a pattern in it. A pattern that might even be ex-post rationalisable. But dollars to doughnuts it won’t be a real pattern.
It is extraordinary that, on an individual tree level, data is so noisy as to appear almost useless, yet the whole contains a long-term signal that is reproducible.
That’s because the “noise” isn’t due to sampling error. It’s the internal complexity of the trees’ response to the environments in which they live. The sampling error on these chronologies is
minuscule because the populations have been so intensely sampled. Don’t mistake complexity for uselessness.
40. Questions and comments:
Confused why the final power transformation chronology, from 900 on, is displayed starting at or before 800?
Be careful about assuming tremendous replication power. Yes, with appropriate smoothing and a small data set, the data largely duplicates Graybill’s over the displayed period. That’s comforting.
Is the correlation as nice over a longer period of time, and with other Graybill trees? I dunno.
41. JS, I (and Kenneth Fritsch) anxiously await JEG’s explanation to the contrary.
42. 32, Tony: I think it goes this way: If you draw, say two or three lines from the center (pith) of the tree perpendicular to the rings to the outside of the tree and measure the ring widths along
each line, you will generally see that the relative ratios of the widths along each line are the same, even though the rings vary in width around the diameter of the tree. It’s this ratio that is
supposed to be a thermometer. Of course, if the core isn’t taken perpendicular to the pith, you don’t have the proper ratios, so you have measurement errors, and I don’t know how these are dealt
43. Could you graph against sunspot numbers?
I think those trees like lots of sunspots.
44. Some more hints. USHCN have precip data for cheeseman, canon city, etc. You can get monthly data.
A glance at the data suggest this: Watch the spikes and jolts. Watch the tree ring response when the
rainfall jolts below the mean of 16 in. Basically, look what happens if
1. year X is mean precip (16 in or slight above)
2. Year X+1 is 2-3Sd below it.
Basically what happens if a tree get 16 inches ( mean) one year and 10 the next ( sd = 3)
Its different if the precip falls from 19 to 13.
Years to check, 1923-24, 1933-34, 38-39, 42-43, 77-78.
( eyeballing)
When the rainfall falls a 2 or 3 stdv below the mean, the tree responds.I looks to me.
( i havent looked at all the rings ) but the jolts below the mean precip seem to indicate something.
What;s left when you remove this signal?
45. Steve M thanks for taking time to explain some of the steps used in reducing the tree ring measurements to more meaningful data. Attempting to understand what tree ring growth means and how it is
handled and manipulated would not have been something that I would have thought would be occupying much of my time in retirement. The real life participation in these activities that you and Mr.
Pete have reported here would make it difficult not to take an interest.
Am I correct in assuming that your TRW are being measured in exactly the same manner as those used in the original reconstruction? Did they use MXD in conjunction with TRs at that time or is that
a more recent development? I seem to recall that a number of Rob Wilson’s papers described calibrations that apparently were weighted more heavily by the MXD variable than TRW.
46. One thing struck me: in standard dendro treatment there seems to be no consideration of non-stationarity. Does anyone know why this is ignored? Annual ring widths would appear to invite time
series-based analytical techniques. And things do look non-stationary, meaning that the negative exponential is a bad idea. Am I missing something here? I’ve thought of 2 justifications: Dendros
go for the residuals and they are white noise – not sure I believe this and how can these (straight residuals or these ratios) then reveal a hockey stick (not white noise)? The other theory: when
dendro techniques were being developed time series techniques were not easily implemented, apart from trivial examples. I’d love to hear from the dendro folks that drop by here.
47. #45. MXD wasn’t done in the original bristlecone studies, nor by us.
I’ve done chronologies using ARSTAN but I’ve done fits here using R scripts that I’ve reconciled to ARSTAN. The only slight difference is standardizing on a tree basis rather than for each core.
It doesn’t really make any difference. For consistency’s sake, I’ll archive an ARSTAN version as well. I have a secondary interest in showing dendros that you can make chronologies using a
statistical technique known off the Island.
I have a secondary interest in showing dendros that you can make chronologies using a statistical technique known off the Island.
Will we see this soon?
49. #46
Hey, I look at it and think that it looks like a massive panel data set. With enough samples you can estimate year dummies and age dummies across the entire stand. No need to impose any structure
on the data or growth patterns of trees. You could parameterise it if you felt the need – but that wouldn’t be necessary with enough data. You might have some identification problems for really
ancient dates but that may not be a significant problem depending on the objective. Take the year dummies and you have your ‘climate’ signal. Although you will still have the problem of
interpreting that climate signal, your normalisation should be better than what you would get with a tightly parameterised and restrictive functional form as is used in the standard detrending.
The techniques are the same as are used to estimate interesting effects in, say, the PSID (Panel Study of Income Dynamics) and the statistical properties are well established. (Like, for example,
separating cohort effects from age effects.)
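To make JS’s suggestion concrete, a minimal fixed-effects version in R (the long-format layout and the log link are my assumptions, not his):

```r
# Long-format panel: one row per (tree, year) with ring width rw and ring age.
# Zero (missing) rings would need handling before taking logs.
fe <- lm(log(rw) ~ factor(year) + factor(age) + factor(tree), data = collation)

# The year dummies are the nonparametric 'climate' signal
yr <- coef(fe)[grep("^factor\\(year\\)", names(coef(fe)))]
plot(as.numeric(sub("factor\\(year\\)", "", names(yr))), yr, type = "l",
     xlab = "year", ylab = "year effect (log ring width)")
```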
50. I’ve estimated chronologies using some settings in the nlme package in R. You have to estimate cross factors and it’s a little tricky.
For this particular data set, where the cores are often long, the detrending doesn’t have as much impact as one would think. The juvenile portion wears off after about 50 years and is flat
thereafter. My suspicion is that there is some bias introduced with these series, since they usually don’t hit the pith and the juvenile bias could simply be a decline from a high quasi-cycle. A
ring width average looks very similar.
51. …and a followup speculation.
If the trees reliably go stripbark at an advanced age, then the age dummies would pick that up (on average) and you would have a novel way of partialling out that particular confounding
influence. The big panel data approach would identify the strip-bark growth pulse with age not year. Maybe I’ll fire up Stata and poke around in the data set when I don’t have anything better to do.
52. Of chronologies archived at the ITRDB that go back to AD1000, it looks like Almagre is the highest!
53. Correlations
Steve, a year ago I expressed reservations about using mid-level correlation coefficients, preferring r^2 above 0.9. I am pleased our thoughts are converging and that we both agree that a
correlation coefficient of zero has significant interpretative confidence.
Re # 32 Tony Edwards
Graphs are ok, but have a closer look at Steve’s graph “Graybill & Updated…” The two curves do look well correlated, but when you do a count you find that the eye sees mainly about 25 peaks in
this 400 year term, marching in step. It is not uncommon in many fields of science that the devil is in the detail, requiring an excellent statistician (which I am not) to extract the max from
the data. But you’d know this. It’s part of the reason why the hockey stick had so much public impact.
Steve – Re “detrending” exponential correction. I presume this is used because the ring width is a greater part of the diameter when the tree is young, then it evens out somewhat in midlife.
However, towards death, the ring width should thin (relatively) as its function and load decreases. I have not specifically searched recent papers for a more sophisticated curve, but did work
through the maths of this with foresters 20 years ago. We also did live weight vs. time curves for plantation trees to predict best harvest time; this is possibly related to ring growth.
Looking forward to the final conclusions from your stay in Calif. Fascinating numbers.
54. SteveM are all the cores used in the plot shown above, both cores for each tree? If so, then as well as the strip bark-whole bark disaggregation, it would be interesting to see that data
55. I had asked in the previous thread how the dendros do it … Steve Mc, many thanks for explaining how they do it.
However, it seems to me that the standard method, with or without a power transformation, throws away a lot of information. I would respectfully submit that my method preserves that information.
I had looked for a method that would preserve the pattern, minimize single-year jumps, and have a relatively normal distribution. The derivative, or “first difference”, was the obvious choice.
However, the normal (discrete) first difference definition of (Y(t) – Y(t-1))/(X(t) – X(t-1)) doesn’t work because a ring width change of 1 mm on a 2 mm ring is very different from a 1 mm change
on a 5 mm ring. To equalize the effects of wide and narrow rings, it is better to use a percentage change. (Using the median of percentage change, curiously, will also remove most of the “early
fast growth” bias which the dendros remove with ARSTAN.)
I initially used a straight percentage of increase, Y(t) / Y(t-1) – 1. This does not give a normal distribution. A better form is logarithmic, ln( Y(t) / Y(t-1) ). This can be restated as ln(Y(t)) – ln(Y(t-1)), and is not too far from normality. (The normality only matters for the error calculation.)
Then I took the median of the individual tree changes for each year. I used the median to minimize the “one-year jump” error. Finally, I inverted the original logarithmic transformation to yield
the reconstructed dataset.
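(In code, the recipe reads roughly as follows; the array layout and the handling of missing rings are my assumptions, not something specified above.)

import numpy as np

def median_logdiff_chronology(rings):
    # rings[i, t] = ring width of tree i in year t (np.nan where the core is absent)
    logdiff = np.diff(np.log(rings), axis=1)        # ln Y(t) - ln Y(t-1), per tree
    yearly = np.nanmedian(logdiff, axis=0)          # median change across trees, each year
    return np.exp(np.concatenate(([0.0], np.cumsum(yearly))))  # invert the transform

The return value is a relative chronology with the first year normalized to 1.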
To understand why this transformation is better than the ARSTAN==>average transformation, we can consider a much simpler question. What is the most accurate way to estimate a given single year’s
change in the record?
Now, we can average year t and year t-1, and subtract one from the other, that’s one way. The problem is that it doesn’t deal well with single-year jumps. If a couple of the trees took big
single-year jumps last year, it will skew the record. It may make a year of declining ring width look like a year of increasing ring width. And unfortunately, it will skew the record for as long
as it does not return to its former position. Taking the median of the first difference (as a percentage) avoids or greatly minimizes those problems.
In addition, I’m nervous about ARSTAN. I get nervous in general when you fit a line to a bunch of natural data and say “ceteris paribus, it should be like this” …
Ringwidth = A + B exp(-C*age)
For starters, it assumes that you know the tree’s age. Given that in this species, the location of the heart is often anyone’s guess, does a core reveal the age of the tree? I would think in many
cases, no.
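(For reference, fitting the quoted curve form is itself a one-liner in any least-squares package. A generic sketch, not ARSTAN, with invented toy data:)

import numpy as np
from scipy.optimize import curve_fit

def neg_exp(age, A, B, C):
    return A + B * np.exp(-C * age)

rng = np.random.default_rng(0)
age = np.arange(300.0)                               # ring number stands in for tree age
widths = neg_exp(age, 0.4, 1.2, 0.03) * rng.lognormal(0.0, 0.2, age.size)
(A, B, C), _ = curve_fit(neg_exp, age, widths, p0=(0.5, 1.0, 0.02))
index = widths / neg_exp(age, A, B, C)               # dimensionless ring-width index

Note that the fit silently assumes the first ring really is year 1 of the tree's life, which is exactly the objection raised above.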
Steve is pointing in the right direction with detrending on the basis of the whole tree or the stand, rather than the individual core. But this still does not deal with the one-year jump problem
… and it still requires that we know the age of the tree. The core with the green crosses illustrates the problem perfectly. If we didn’t have the longer core, we would ARSTAN detrend that whole
tree totally incorrectly.
The result of this incorrect detrending is that it removes good information from the dataset, and replaces it with wrong information. Does this make a difference? I don’t know. I also don’t like
the idea that some trees get detrended with a straight line with a negative slope … and when you are looking for a trend in data, that doesn’t seem like a good plan.
This is a particularly insidious error, because (as in the green cross data in Fig. 1) the un-detrended change may represent valid data. The insidious part is, under ARSTAN, it sometimes will fit
a line or a curve with an erroneous negative slope, but it will never fit a line or curve with an erroneous positive slope. This has the potential of introducing an erroneous positive bias in the
results, because some valid negative trends will have been removed.
As I said, I don’t know whether this is a problem, or how to quantify it. I point it out as a potential hazard of ARSTAN detrending. I suspect that some broader average, based on Steve M’s
“grassplots” or the like, applied only to the trees where it is clear that there is a real need based on some statistical evidence, would do a better job.
Finally, I like my method better than the dendro method because it is conceptually much simpler. I am calculating the integral of the median of the first derivative. This avoids fitted curves,
and allows the inversion of the final result back into the units we started with (ring width).
Anyhow, gotta run, work calls. I’ll look further at these questions and report back.
56. I wondered if someone could explain something I could not grasp from the intro: Where did the 900 – 1100 data come from?
57. I’ve always been baffled by the normalization of the raw data. (The step where you fit an exponential curve to the data and subtract off the exponential trend.)
As I understand it, this step removes the trend from the data. How then can you use the detrended data to say anything about the trend in the data, i.e., that it is warmer in later years than in
earlier years? Don’t the results we are interested in depend critically on the curve used to detrend the data?
And even if some information manages to survive the detrending, don’t the confidence intervals explode because the detrending process itself introduces noise into the results?
Given that the Team can, with a straight face, use the confidence intervals from the calibration period in the backcasted period, I would be astonished if the Team has even thought of this, but
has anyone thought of it?
58. A bit hard to find the right slot for this, but it’s about CO2 fertilization of tree growth and possible effects on tree ring analysis. It’s about FACE, a method where free air CO2 enrichment is
given to growing systems. It’s typically coordinated by Oak Ridge National Laboratories, who commonly tag the added CO2 with isotopes to follow its path.
On Unthreaded #32 @ 150 I posted on March 20 2008 that the preamble to some FACE work (for oceans) had ideological preconceptions and I gave this excerpt:
Free Air CO2 Enrichment (FACE) experiments. Both SOLAS and the IMBER project have proposed FACE-like experiments for the ocean. The benefit of such experiments is that they are more likely to
show the actual long-term effects that will occur in the future. The major anticipated drawback is that it might be impossible to use for pelagic communities without enclosing them in some
way or somehow using a Lagrangian approach. There is a need to start with a feasibility study because the amount of CO2 or acid required for a full-scale pelagic FACE experiment may be very
high. The other drawback is the public perception problem. This drawback might be approached by pointing out that the effects of elevated CO2 under “business as usual” scenarios may be so
severe that understanding them might cause policymakers to think more carefully about emission controls or other mitigation methods.
Now on to trees and compounds of forest with high CO2 added.
ARGONNE, Ill. (Dec. 20, 2005) — Researchers from the U.S. Department of Energy’s Argonne National Laboratory – with collaborators from Oak Ridge National Laboratory, Kansas State University
and Texas A&M University– have shown that soils in temperate ecosystems might play a larger role in helping to offset rising atmospheric carbon dioxide (CO2 ) concentrations than earlier
studies had suggested. Results of the new study are published in the current issue of Global Change Biology.
Higher CO2 concentrations often stimulate plant growth. A subsequent increase in the amount of decaying plant material might then lead to an accumulation of carbon in soil. Yet nearly all
field experiments to date have failed to demonstrate changes in soil carbon against the large and variable background of existing soil organic matter.
In this new study, funded by DOE’s Office of Science, scientists overcame that issue using a statistical technique called meta-analysis. This analysis of earlier published experiments showed
that elevated CO2 concentrations – ranging from double pre-industrial levels to double current levels – increased carbon in soil surface layers by an average of 5.6 percent across diverse
temperate ecosystems. If a response of this magnitude occurred globally for all temperate systems in a CO2 -enriched world, the authors calculated that increased soil carbon storage might
remove 8 to 13 billion metric tons of carbon from the atmosphere over a period of about 10 years.
A minor derivation from this press release is that maybe reduction of atmospheric CO2 can be done by adding more CO2 to the atmosphere.
Note, however, that no promises are given that the new carbon will stay in the soil forever. Other agricultural studies suggest it will be temporary, otherwise soils would soon turn to coal or the like.
The reason I raise FACE here is to show you CA maths people the power of meta-analysis. The sought effect was not found until meta-analysis was done. Why have you not informed and educated the
rest of us before of the power of this method? (He asks sarcastically).
Finally, another CA contributor has noted an absence of reported temperature measurements, so that tree ring analysis in the future, from these trees, might be difficult to interpret.
Is FACE worth a separate thread?
Transportation Geography and Network Science/Network formation
Models of Network Formation
This section will discuss Models of Network Formation, also referred to as “Generative Network Models.” These models offer an explanation of why a network should have a certain set of
predetermined parameters (e.g. power-law degree distributions). That is, they model the mechanisms by which networks are created and explore the resultant network structures of certain hypothesized
generative mechanisms. “If the structures are similar to those of networks we observe in the real world, it suggests-though does not prove-that similar generative mechanisms may be at work in real
networks.” (Newman, 2010)
Preferential Attachment
Many networks are observed to have degree distributions that approximately follow power laws, at least in the tail of the distribution. In the 1970s, Derek Price first proposed a network formation model that explained this observation, inspired by the work of economist Herbert Simon and statistician Udny Yule (the Yule process). Price extended Simon’s mathematical explanation of how the “rich get richer” (which also yields a power-law distribution) to the network context, calling it “cumulative advantage” (better known as preferential attachment, the term coined by Barabasi and Albert in 1999).
Price’s Model
Price specifically applied this process to a model of a citation network. In this model, papers are published continually (added one by one), yet not necessarily at a constant rate and newer papers
cite older papers. The edges in this network are directional and are created but never destroyed. The central assumption of Price’s model is that a newly appearing paper cites previous ones chosen at
random with probability proportional to the number of citations those previous papers already have, plus a constant a, where a>0. Consider a Price-model network in which $q_i$ denotes the in-degree of vertex i, and suppose a new vertex is added to the network. The probability that a citation made by the new vertex goes to a particular vertex i is proportional to $q_i+a$. Normalizing, the probability of citing vertex i is

$p_i = \frac{q_i+a}{\sum_j (q_j+a)} = \frac{q_i+a}{n(c+a)}$

where n is the current number of vertices. If the average number of citations per paper is c (with some variance determined by the properties of the network), then as the network size becomes very large, the fraction of vertices in the network that have in-degree q approaches (Newman, 2010, pp. 490-494):

$p_q = \frac{B(q+a,\, 2+a/c)}{B(a,\, 1+a/c)}$

where:
c=average out-degree of network (or average bibliography size)
where beta function is defined as $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$
and gamma function as $\Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\,dt$
Furthermore, the Beta function falls off as a power law for large values of x, with exponent y. Applying this, we find that for large values of q (q>>a) the degree distribution goes to:
$p_q \sim q^{-(2+\frac{a}{c})}$
Thus Price’s model for citation network gives rise to a degree distribution with a power-law tail. More generally speaking, each paper could be considered a node and each citation a link. Here, links
are added to existing nodes with probability proportional to the number of links already reaching that node.
Model of Barabasi and Albert
Developed independently in the 1990s, the Barabasi-Albert model is the best known generative network model in use today. Similar to Price’s model, vertices are added one by one to a growing network
and connected to existing vertices. However, these connections are undirected and the number of connections made by each vertex is exactly c (unlike Price’s model where c is allowed to vary from step
to step). Connections are made in proportion to the undirected degree, k. By treating the Barabasi and Albert model as a special case of Price’s model, Newman, 2010 shows that:
$p_k = \frac{2c(c+1)}{k(k+1)(k+2)}$ for k ≥ c
c=number of connections made by each vertex
k=degree of vertex
$p_k$ is the proportion of vertices with k degrees.
In this case, when k becomes very large the degree distribution becomes:
$p_k \sim k^{-3}$
Extensions and Further Properties of Preferential Attachment Models
Model Property/Extension | Intuitive Rationale | Resultant Distribution | Contributor
Incorporating Time of Creation | By preferential attachment, older vertices will have more time to acquire links. | Contains leading algebraic factor, does not follow power-law distribution | (Dorogovtsev, Mendes, & Samukhin, 2000)
Sizes of In-Components | Distribution of the set of vertices from which vertex i can be reached by following a directed path | For component sizes << network size, follows power-law distribution | (Newman, 2010)
Addition of Extra Edges | Connections added between two existing vertices | Power-law distribution | (Krapivsky, Rodgers, & Redner, 2001)
Removal of Edges | Considered reverse preferential attachment; higher-degree vertices are more likely to lose an edge | Power-law distribution to a point, then stretched exponential | (Moore, Ghoshal, & Newman, 2006)
Non-Linear Preferential Attachment | Considers a non-linear attachment process rather than a linear process in the degree of the vertex | Stretched exponential | (Krapivsky, Redner, & Leyvraz, 2000)
Vertices of Varying Quality or Attractiveness | Quality and attractiveness are incorporated into attachment process | Depends on distribution of “fitness” values | (Bianconi & Barabasi, 2001)
Thought Question
What other networks’ formation could be accurately described using preferential attachment (considering either Price’s model or Barabasi-Albert)?
Online Simulation
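The references include Wilensky’s (2005) NetLogo Preferential Attachment model, which can be run interactively in a browser. As a minimal offline alternative, here is an unoptimized Python sketch of the Barabasi-Albert mechanism with c = 1 edge per new vertex (the variable names are ours, not from any of the cited papers):

import random

def barabasi_albert(n, seed=0):
    # Picking a uniform entry from the flat edge-endpoint list selects a vertex
    # with probability proportional to its degree: preferential attachment.
    random.seed(seed)
    endpoints = [0, 1]                 # start from a single edge 0 -- 1
    for v in range(2, n):
        target = random.choice(endpoints)
        endpoints += [v, target]       # attach the new vertex v to target
    return endpoints

degree = {}
for v in barabasi_albert(100_000):
    degree[v] = degree.get(v, 0) + 1
# A log-log histogram of degree.values() shows a tail close to k^-3.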
Vertex Copying Models
Though preferential attachment offers a plausible explanation for power-law degree distributions, it is not the only means by which networks can grow (Newman, 2010). Suppose each newly added vertex copies a fraction of the connections of an existing vertex and fills out the remainder with other existing vertices (e.g., a new paper copies a fraction of the bibliography of an existing paper). As analyzed by Newman (2010), the degree distribution of a large network under this model will asymptotically follow a power-law distribution.
Network Optimization Models
Previous network structures are based, for the most part, on successive random processes, which are blind to the large-scale structures that are being created. A network growth mechanism that may be
more suited to transportation networks is structural optimization. The optimization in this case involves a trade-off between travel time and cost. One of the simplest forms of this type of mechanism
was developed by Ferrer i Cancho and Sole in 2003. This model seeks to minimize the following quality function (Ferrer i Cancho & Sole, 2003):
$E(e,l)=\lambda e+(1-\lambda)l$
e=number of edges in a network
l=mean geodesic distance between vertex pairs (dissatisfaction measure)
λ=a parameter in the range 0≤λ≤1
In this model, the cost of maintaining the network is represented by the number of edges e. This is the same as saying the cost of operating an airline is proportional to the number of routes it operates,
for example. Following the airline example, the variable l would be the average number of legs required to journey from one point to another. This is obviously a simplification of a complex network.
The parameter λ provides a balance between a network with the minimum number of edges possible and a fully connected network. From this model it can be seen that by placing a moderate weight on λ, l
is minimized and the optimal result is a star graph. This offers a simple explanation of why the hub-and-spoke system is so efficient (Newman, 2010). As it turns out, a non-star graph solution only
appears when: $\lambda < \frac{2}{n^{2}+2}$.
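To make the trade-off concrete, the following sketch (our own, following the definition of E(e,l) above) evaluates the quality function for the star and the complete graph on n vertices:

def E(lam, edges, mean_geodesic):
    return lam * edges + (1 - lam) * mean_geodesic

def star(n):
    # hub plus n-1 leaves: e = n-1 edges, mean geodesic distance 2(n-1)/n
    return n - 1, 2.0 * (n - 1) / n

def complete(n):
    return n * (n - 1) // 2, 1.0

n = 20
for lam in (0.001, 0.01, 0.1, 0.5):
    print(lam, E(lam, *star(n)), E(lam, *complete(n)))
# Only for very small lambda (of order 1/n^2) does the complete graph beat the star.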
The degree distributions of this model, as determined numerically by Ferrer i Cancho and Sole, range from an exponential distribution to a power-law distribution as the value of λ changes; these results describe local minima of the quality function only.
Gastner and Newman, 2006 generalizes the model presented by Ferrer i Cancho and Sole by considering not only the number of legs in a journey but also the geographic distance traveled (Newman, 2010).
The term l is replaced by t, where t=u+vr (Newman, 2010). In this expression u and v are constants associated with a fixed time and pace (1/speed) on a link and r is the distance traveled on the
link. For example, u could be considered the check in, waiting, taxiing, etc. time at an airport and pace would be the reciprocal of the average velocity traveled during a flight. This introduces a
spatial aspect into the model where before only topological considerations were accounted for. However, this model also produces only numerical results.
Thought Question
By changing the values of u and v in Gastner and Newman’s model, what assumptions could be drawn about the geometry of the resultant network?
Further Reading
A Survey of Models of Network Formation: Stability and Efficiency by Matthew O. Jackson
A General Theory of Bibliometric and Other Cumulative Advantage Processes by Derek de S. Price
The Emergence of Hierarchy in Transportation Networks by Bhanu M. Yerra and David M. Levinson
References

Barabasi, A.-L., & Albert, R. (1999). Emergence of Scaling in Random Networks. Science, 286, 509-512.
Bianconi, G., & Barabasi, A.-L. (2001). Bose-Einstein Condensation in Complex Networks. Physical Review Letters, 86, 5632-5635.
Dorogovtsev, S. N., Mendes, J. F. F., & Samukhin, A. N. (2000). Structure of Growing Networks with Preferential Linking. Physical Review Letters, 85, 4633-4636.
Ferrer i Cancho, R., & Sole, R. V. (2003). Statistical Mechanics of Complex Networks (Vol. 625). (R. Pastor-Satorras, M. Rubi, & A. Diaz-Guilera, Eds.) Berlin: Springer.
Gastner, M. T., & Newman, M. E. (2006). Optimal Design of Spatial Distribution Networks. Physical Review E, 74, 016117.
Jackson, M. O. (2005). A Survey of Models of Network Formation: Stability and Efficiency. In G. Demange, & M. Wooders (Eds.), Group Formation in Economics: Networks, Clubs, and Coalitions (Chapter 1). Cambridge: Cambridge University Press.
Krapivsky, P. L., Redner, S., & Leyvraz, F. (2000). Connectivity of Growing Random Networks. Physical Review Letters, 85, 4629-4632.
Krapivsky, P. L., Rodgers, G. J., & Redner, S. (2001). Degree Distributions of Growing Networks. Physical Review Letters, 86, 5401-5404.
Moore, C., Ghoshal, G., & Newman, M. E. (2006). Exact Solutions of Models of Evolving Networks with Addition and Deletion of Nodes. Physical Review E, 74(3), 036121.
Newman, M. E. (2010). Networks: An Introduction. New York, NY: Oxford University Press Inc.
Price, D. d. (1976). A General Theory of Bibliometric and Other Cumulative Advantage Processes. Journal of the American Society for Information Science , 27, 292-306.
Rodrigue, J.-P., Comtois, C., & Slack, B. (2009). The Geography of Transport Systems (2nd Edition ed.). New York, NY: Routledge.
Simon, H. A. (1955). On a Class of Skew Distribution Functions. Biometrika , 42 (3/4), 425-440.
Wilensky, U. (2005). NetLogo Preferential Attachment model. http://ccl.northwestern.edu/netlogo/models/PreferentialAttachment. Center for Connected Learning and Computer-Based Modeling, Northwestern
University, Evanston, IL
Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
Yerra, B. M., & Levinson, D. M. (2005). The Emergence of Hierarchy in Transportation Networks. Annals of Regional Science , 39, 541-553.
st: RE: Data set up for nlogit and nlogitrum
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Data set up for nlogit and nlogitrum
Date Tue, 8 Mar 2005 16:51:10 -0000
Yes. See the help for -fillin- and more
explicitly SJ Tip 17 in Stata Journal 5(1) 2005.
Prahm, Jeremy
> This question concerns the set up of data for estimation using nlogit
> and nlogitrum commands.
> Do I need to populate the dataset with observations for all
> the choices
> the individual did not make? Let "chosen" be the dependent
> variable, I
> only have data for the observations where "chosen" equals 1. For
> example, of possible choices: A, B, C, D, I have an observation that
> they selected D and can set the variable "chosen" equal to 1.
> Do I need
> to generate three more observations where "chosen" takes a value of 0
> for choices A, B, and C, thus making my dataset 4 times as large? My
> understanding is that I do, but it doesn't seem to be very efficient
> considering I have a very large dataset. Thanks in advance for any
> help.
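For readers coming from other packages: -fillin- rectangularizes a dataset, creating one observation per combination of its arguments. A rough sketch of the same expansion in Python/pandas (illustrative only; the variable names are invented):

import pandas as pd

# One row per case, recording only the chosen alternative.
df = pd.DataFrame({"id": [1, 2, 3], "choice": ["D", "A", "B"]})

# Expand to one row per (case, alternative) pair, as -nlogit- expects.
alts = pd.DataFrame({"alt": ["A", "B", "C", "D"]})
long = df.merge(alts, how="cross")
long["chosen"] = (long["choice"] == long["alt"]).astype(int)
long = long.drop(columns="choice")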
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
[arndt@jjj.de: Re: bugfix for charpoly with finite fields]
Karim Belabas on Fri, 01 Aug 2008 02:43:59 +0200
Hi pari-dev, [ sorry, looong mail ]
I received an interesting test-case from Joerg Arndt (simplified version attached).
"The used function gauss_poly2() calls only pari/gp internals, else it
does arithmetic with binary polynomials no more."
Some timings:
ver 2.3.4: 11.9 s. / 11.8 s [GMP]
ver 2.4.2: 360 s. / 356 s. [ GMP ]
\\ Anticipating a little bit on the semi-happy ending (the solution is awkward)
current svn: 12.5s / 11.9s [ GMP ]
A simpler example:
a = x^1771 + x^82 + x^8 + x;
b = x^1449 + x^1448 + x^1446 + x^1412 + x^1411 + x^1408 + x^1406 + x^1403 + x^1366 + x^1362 + x^1360 + x^1317 + x^537 + x^500 + x^496 + x^494 + x^460 + x^459 + x^456 + x^454 + x^451 + x^411 + x^410 + x^408;
T = x^1861 + 1;
z = Mod(Mod(1,2), Mod(1,2)*T);
(z * a) * (z * b)
is about 50 times slower in 2.4.2 (and onwards) than in 2.3.*
It's easy to identify the cause, but it can't be reverted:
1) historically, the t_INTMOD and t_POLMOD types were introduced as a
(reasonably) convenient user interface to construct "arbitrary" base rings in an
otherwise typeless language. They were very slow (and still are), but
did the job. All the basic polynomial arithmetic and linear algebra was
written purely in terms of generic elementary operations and no explicit
notion of a "base ring".
2) with time a number of "heuristic" improvements (read "incorrect" but
OK in most practical circumstances) were added to improve functions like
Euclidean division in a polynomial ring K[X]. In that case, we basically
ignore zeroes and use some kind of sparse representation for the
divisor. Nobody cared too much about "minor" problems like
(01:09) gp > (x^2+Mod(1,3)*x+Mod(1,3)) % (x + Mod(0,2))
%1 = Mod(1, 3)
(01:09) gp > (x^2 + 100) % (x + Mod(0,2))
%2 = 100
(01:09) gp > Mod(0,2)*x
%3 = 0 \\ this is the integer 0 (t_INT). Base ring is lost
(01:09) gp > Mod(0,2)/x
%4 = 0
Some zeroes do carry information, even "exact zeroes".
3) in parallel, a large number of dedicated functions were written in libpari to
re-implement polynomial arithmetic over specific simple base rings
(like Z/nZ or Fp[x]/(T(x))), in a much more efficient way than could be
done using generic operations only, i.e. the function has a predefined
notion of a common base ring containing all coefficients it will handle.
Much simpler to program and optimize.
Unsurprisingly, all of libpari is now written in terms of these
functions ( except the part implementing generic arithmetic, which are
hardly ever used in libpari itself ). Unfortunately, it is hard for
GP users to access these functions: the simple example above would be written
\\ a,b,T as above
Flx_to_ZX(Flxq_mul(ZX_to_Flx(a,2), ZX_to_Flx(b,2), ZX_to_Flx(T,2), 2))
Timings do improve: 539ms --> 16ms in 2.4.2, vs 12ms for the original
code in 2.3.4 (stable remains a little more efficient because Flx_rem
does not use sparse representation but Newton/FFT-based arithmetic in
this example; conversions are negligible)
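To see why dense kernels of this kind win, note that arithmetic in GF(2)[x] is pure bit manipulation. The following sketch is my illustration in Python, not PARI code: a polynomial over GF(2) is encoded as an integer whose bit k is the coefficient of x^k, so the a and T of the simpler example above become shifted-bit constants.

def gf2_mul(a, b):
    # carry-less product: XOR together shifted copies of a
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    # schoolbook remainder: repeatedly clear the top bit of a using m
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

a = (1 << 1771) | (1 << 82) | (1 << 8) | (1 << 1)   # x^1771 + x^82 + x^8 + x
T = (1 << 1861) | 1                                  # x^1861 + 1

gf2_mod(gf2_mul(a, a), T), for instance, squares a modulo T using nothing but shifts and XORs; this is essentially what the Flx functions do (with better algorithms), and what the generic t_INTMOD path cannot reduce to.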
4) In version 2.4.2, PARI (both libpari and GP) became much stricter
with the zeroes it would actually ignore, restricting severely the old
improvements / hacks such as 2). Fixing a large number of bugs at the
same time of course:
\\ current svn
(01:29) gp > (x^2+Mod(1,3)*x+Mod(1,3)) % (x + Mod(0,2))
%1 = Mod(0, 1)
(01:29) gp > (x^2 + 100) % (x + Mod(0,2))
%2 = Mod(0, 2)
(01:29) gp > Mod(0,2)*x
%3 = Mod(0, 2)
(01:29) gp > Mod(0,2)/x
%4 = Mod(0, 2)/x
5) In our case, this immensely slows down Euclidean division in (Z/2Z)[X]
since all the zeroes are actually Mod(0,2), which we are no longer free
to ignore, and we no longer get a useful sparse representation.
6) Of course it would be trivial to solve 5) if our routines knew the base
ring from the start and could answer precisely the question "what is zero ?".
As an experiment, I modified a few generic functions to do just that
(gdiv, gmul, gsqr operating on t_POLMODs) and try to determine a (very) simple
base ring before starting. In most cases we fail immediately and go on.
When we hit the jackpot and recognize Z/nZ, we call specialized
functions [ and unfortunately have to convert between representations,
twice: to get rid of t_INTMODs, then re-introduce them ]
7) This may slow down the library a little: most polynomials we use are
in Z[X], and we have to scan them to the end to make sure no coefficient
belongs to some Z/nZ [ for nothing: this will never happen in libpari:
we don't use INTMODs... ]
In libpari proper, this is not the case: no critical function uses the
generic operations anymore, and certainly not with t_POLMOD arguments,
in any case.
8) This is not a general solution, in the sense that Euclidean division
in slightly more complicated base rings and sparse polynomials of huge degrees
will still be abysmally slower than 2.3.*. [ As far as I can see,
division is the worse case by far: other operations are much less
sensitive. ]
-- I do not want to revert to the old happy-go-lucky behaviour (always
algebraically acceptable, but very often semantically incorrect),
-- It would be very, very nice if polynomials (and matrices) carried
with them their coefficient ring, without our trying to guess in each
single function where exactly we are working (often giving up).
Knowing precisely "what is 0" would be useful in many contexts: our
current approach is too conservative and misses simplifications, as
above. But I do not want to force users to specify base rings in advance
or use complicated install-ed functions from libpari; so I'm stuck there.
Maybe adding a "domain" tag to each object whenever we can safely do so
would be doable. But can't be done without wrecking backward compatibility.
(Or introducing completely new types.)
-- Catering for more rings "by hand" will complicate the code, for
probably marginal improvements.
Any bright idea ?
Karim Belabas, IMB (UMR 5251) Tel: (+33) (0)5 40 00 26 17
Universite Bordeaux 1 Fax: (+33) (0)5 40 00 69 50
351, cours de la Liberation http://www.math.u-bordeaux.fr/~belabas/
F-33405 Talence (France) http://pari.math.u-bordeaux.fr/ [PARI/GP]
gauss_poly2(n, t)=
{ /* return field polynomial for type-t Gaussian normal basis */
/* All computations over GF(2) */
/* NOTE: slower than the algorithm using complex numbers */
local(p, M, r, F, t21, t2, Z);
\\ if ( 0==gauss_test(n,t), return(0) );
p = t*n + 1;
\\ print(" n=", n, " t=", t, ": p=", p);
r = znprimroot(p)^n; \\ element of order t mod p
\\ print(" :: r=", r, " ord(r)=", znorder(r), " == t=", t);
\\ M = sum(k=0, p-1, 'x^k); \\ The polynomial modulus
M = 'x^p - 1; \\ Use redundant modulus
M *= Mod(1,2); \\ ... over GF(2)
if ( 1==t, return( sum(k=0, p-1, 'x^k) ) ); \\ for type 1
\\ print(" :: M =", lift(M));
F = Mod(1, M);
t21 = Mod(2,p); t2 = Mod(1,p);
for (k=1, n,
\\ print(" :: ------- k=", k);
\\ for(j=0, t-1, print( lift(Mod('x^lift(t2*r^j), M) ) ) );
\\ Z = sum(j=0, t-1, 'x^lift(t2*r^j)); \\ faster (but unclean?)
Z = Mod( sum(j=0, t-1, 'x^lift(t2*r^j)), M); \\ faster
\\ Z = sum(j=0, t-1, Mod('x^lift(t2*r^j), M) ); \\ fast
\\ Z = sum(j=0, t-1, Mod('x, M)^lift(t2*r^j) ); \\ slower
\\ print(" :: Z =", lift(component(Z, 2)));
F = ('x+Z)*F;
\\ print(" :: w=", sum(j=0, t-1, 'x^lift(t2*r^j) ) );
\\ print(" :: w%=", lift(Mod(sum(j=0, t-1, 'x^lift(t2*r^j) ), sum(k=0, p-1, 'x^k) ) ) );
\\ print(" :: F =", lift(component(F,2)) );
t2 *= t21;
);
\\ final reduction for redundant modulus:
\\ M = sum(k=0, p-1, 'x^k); \\ The polynomial modulus
\\ F = lift( Mod( lift(F), M) );
\\ final reduction for redundant modulus (simplified):
F = lift(F);
if ( 0==polcoeff(F,0), F=sum(k=0, n, (1-polcoeff(F,k))*'x^k) );
return ( F );
} /* ----- */
gauss_poly2(620, 3);
Duals and annihilators
Duals and annihilators
I have this problem in my book that I really have trouble with:
Given a vector space V and a subspace U of V. Prove that phi of U [phi is the natural isomorphism from U to U** given by phi(u)(f) = f(u)] is equal (not just isomorphic!) to the double
annihilator of U. Given G in the double annihilator, find u in U such that G=phi(u)
Any help would be greatly appreciated!!
Re: Duals and annihilators
given u in U, it is enough to show that φ(u) is in Ann(Ann(U)) (this proof only works in finite-dimensional vector spaces, by the way, as in general, U is not isomorphic to U**).
but if f is in Ann(U), this means f(u) = 0 for all u in U. hence for any such f, φ(u)(f) = f(u) = 0, that is φ(u) is contained in Ann(Ann(U)), which means that φ(U) = U** is contained in Ann(Ann(U)).
but clearly Ann(Ann(U)) is a subset of U**, so the two sets are equal (the finite-dimensionality comes in by way of asserting φ is ONTO U**).
now suppose we take G in Ann(Ann(U)). since Ann(Ann(U)) = U**, and φ is an isomorphism, φ^-1 is well-defined. letting u = φ^-1(G), we have:
φ(u) = φ(φ^-1(G)) = id[U**](G) = G.
Re: Duals and annihilators
(Yes, I had forgotten to say that V is finite dimensional. We haven't gotten to infinite-dimensional vector spaces yet.)
I'm really confused about this... Ann(U) is the set of all f in V* such that f(u) = 0 for all u in U. So Ann(U) is a subset of V*. The same way, Ann(Ann(U)) would be a subset of V**. More
precisely, I can't see how a function from U* to the field (an element of U**) could be equal to a function from V* to the field (an element of Ann(Ann(U))). Maybe I got my definitions wrong?
Re: Duals and annihilators
the isomorphism φ is actually induced from a similar isomorphism (defined the same way) from V to V**.
i understand your confusion. let's look at a concrete example. suppose V = R^3. suppose U = span(S), where S = {(1,0,1),(1,1,0)}.
now Ann(U) = {f in V*: f(u) = 0, for all u in U}. let's find a basis for Ann(U). let {p[j] in V*: p[j](e[i]) = δ[ij]}. this is a basis for V*.
so, for example, p[2](x,y,z) = y. now suppose f = a[1]p[1]+a[2]p[2]+a[3]p[3] is in Ann(U).
this means f(1,0,1) = 0, and f(1,1,0) = 0. so from the first equation we get: a[1] + a[3] = 0, and from the second we get: a[1] + a[2] = 0.
combining these two, we get: a[2] = a[3]. so f is of the form: f = -cp[1] + cp[2] + cp[3], for c in R. this is a 1-dimensional subspace of V*, with basis {-p[1]+p[2]+p[3]}.
now Ann(Ann(U)) = {G in V**: G(f) = 0, for all f in Ann(U)}. this in turn means for any G in Ann(Ann(U)), -G(p[1])+G(p[2])+G(p[3]) = 0.
now let's find a basis for Ann(Ann(U)). to do this, we need a basis for V**. let's use B = {E[1],E[2],E[3]} where E[k](p[j]) = δ[jk].
so if G = b[1]E[1]+b[2]E[2]+b[3]E[3], from -G(p[1])+G(p[2])+G(p[3]) = 0
we get -b[1]+b[2]+b[3] = 0. we thus have 2 free parameters, say s and t, and G = s(E[1]+E[3]) + t(E[1]+E[2]),
that is, a basis for Ann(Ann(U)) is {E[1]+E[2],E[1]+E[3]}. see the similarity with U?
ok, so suppose we consider the isomorphism ψ:V-->V** given by [ψ(v)](f) = f(v), for f in V* let's see what we get for v = (1,1,0), and v = (1,0,1).
given f = a[1]p[1]+a[2]p[2]+a[3]p[3] in V* we have:
ψ(1,1,0)(f) = f(1,1,0) = a[1]+a[2]
ψ(1,0,1)(f) = f(1,0,1) = a[1]+a[3]
but (E[1]+E[2])(f) = E[1](a[1]p[1]+a[2]p[2]+a[3]p[3]) + E[2](a[1]p[1]+a[2]p[2]+a[3]p[3]) = a[1]+a[2]
and (E[1]+E[3])(f) = E[1](a[1]p[1]+a[2]p[2]+a[3]p[3]) + E[3](a[1]p[1]+a[2]p[2]+a[3]p[3]) = a[1]+a[3]
that is: ψ(1,1,0) = E[1]+E[2], ψ(1,0,1) = E[1]+E[3].
so ψ does indeed map U to Ann(Ann(U)).
in particular, if f = -cp[1]+cp[2]+cp[3] (that is, if f is in Ann(U)), we see:
ψ(x,y,z)(f) = c(y+z-x). if we extend the basis {(1,1,0),(1,0,1)} to the basis {(1,1,0),(1,0,1),(-1,1,1)} for R^3, and consider f ≠ 0 in Ann(U) on these:
ψ(v)(f) = ψ[d[1](1,1,0)+d[2](1,0,1)+d[3](-1,1,1)](f) = f(d[1]+d[2]-d[3],d[1]+d[3],d[2]+d[3])
= 3cd[3], which is 0 if and only if d[3] = 0, that is, if v is in U, so ψ(v) is in Ann(Ann(U)) if and only if v is in U.
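As a quick numerical cross-check of the example above (my own sketch, identifying V, V* and V** with R^3 via the bases {e[i]}, {p[j]} and {E[k]}):

import numpy as np
from scipy.linalg import null_space

U = np.array([[1.0, 0.0, 1.0],     # rows span U inside V = R^3
              [1.0, 1.0, 0.0]])
ann_U = null_space(U)              # columns: coordinates of Ann(U) in {p[j]}
print(ann_U.ravel())               # proportional to (-1, 1, 1), i.e. -p1 + p2 + p3
ann_ann_U = null_space(ann_U.T)    # columns: coordinates of Ann(Ann(U)) in {E[k]}
# Ann(Ann(U)) and psi(U) span the same plane: stacking them does not raise the rank.
print(np.linalg.matrix_rank(np.vstack([U, ann_ann_U.T])))   # 2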
Re: Duals and annihilators
Thanks, it's all clear now.
Mathematical Treasure: Babylonian Scribal Exercises
Thousands of clay cuneiform tablets have been excavated in the Middle East. For many years such collections languished, awaiting appropriate scholarly attention. However, within the last decade much
progress has been made in understanding the information contained on these artifacts. Jöran Friberg, mathematician and Mesopotamian scholar, presently Professor Emeritus at Chalmers University of
Technology, Sweden, has produced a series of popular publications that greatly clarify the mathematical ability and procedures of ancient Babylonia. In particular, his Remarkable Collection of
Babylonian Texts (2007) opened the curtain on a better understanding of the ancient mathematics of the Middle East. Professor Friberg has kindly made some of his personal images of Old Babylonian
mathematical work available to Convergence. The following problem inscriptions were, most probably, scribal exercises.
Given a square with diagonals inscribed in a circle, the student scribe was required to find the area of a quarter of the circle and a quarter of the square.
Arrays of squares were a popular geometric theme in Old Babylonian culture. Here, in a connection between decorative art and geometry, a pattern was superimposed on a dense grid of lines. The
finished design may have been intended for a tiled floor or a woven fabric.
It has been determined that this tablet originally contained sixteen exercises. In the one still visible, the scribe was given the task of finding the length of a chord in a circle. The diameter of
the circle is given as 20 units and the sagitta of the small segment as 2 units.
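Presumably the intended computation, in modern form: with radius r = 10 (half the given diameter) and sagitta s = 2, the half-chord, the segment r - s, and the radius form a right triangle, so the half-chord is sqrt(r^2 - (r - s)^2) = sqrt(100 - 64) = 6, and the chord has length 12.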
Here the student scribe was given an equilateral triangle with a smaller one inscribed in it. The side lengths of both triangles were provided and the scribe was required to find the area bounded between the triangles. In his solution, he considered the sought-after area to be composed of a chain of three congruent trapezoids. He found the area of one trapezoid and computed the desired area.
How would you do this problem?
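One modern route, writing a and b for the outer and inner side lengths (the tablet's actual values are not quoted here): the band between two concentric, parallel equilateral triangles splits into three congruent trapezoids, each with parallel sides a and b and height equal to the difference of the apothems, (a - b)sqrt(3)/6. One trapezoid then has area ((a + b)/2) * (a - b)sqrt(3)/6 = sqrt(3)(a^2 - b^2)/12, and the whole band has three times that, sqrt(3)(a^2 - b^2)/4, in agreement with the difference of the two triangle areas.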
Further information and images can be obtained from Jöran Friberg’s books and publications. In particular, see reviews in Convergence of three of Friberg's recent books:
Results 1 - 10 of 23
, 2003
"... We present a new version of the Seal Calculus, a calculus of mobile computation. We study observational congruence and bisimulation theory, and show how they are related. ..."
Cited by 24 (3 self)
Add to MetaCart
We present a new version of the Seal Calculus, a calculus of mobile computation. We study observational congruence and bisimulation theory, and show how they are related.
- Imperial College London , 2003
"... We introduce the Xdπ calculus, a peer-to-peer model for reasoning about dynamic web data. Web data is not just stored statically. Rather it is referenced indirectly, for example using
hyperlinks, service calls, or scripts for dynamically accessing data, which require the complex coordination of data ..."
Cited by 23 (3 self)
Add to MetaCart
We introduce the Xdπ calculus, a peer-to-peer model for reasoning about dynamic web data. Web data is not just stored statically. Rather it is referenced indirectly, for example using hyperlinks,
service calls, or scripts for dynamically accessing data, which require the complex coordination of data and processes between sites. The Xdπ calculus models this coordination, by integrating the XML
data structure with process orchestration techniques associated with the distributed pi-calculus. We study behavioural equivalences for Xdπ, to analyze the various possible patterns of data and
process interaction.
, 2005
"... The Seal Calculus is a process language for describing mobile computation. Threads and resources are tree structured; the nodes thereof correspond to agents, the units of mobility. The Calculus
extends a �-calculus core with synchronous, objective mobility of agents over channels. This paper syste ..."
Cited by 22 (0 self)
Add to MetaCart
The Seal Calculus is a process language for describing mobile computation. Threads and resources are tree structured; the nodes thereof correspond to agents, the units of mobility. The Calculus
extends a π-calculus core with synchronous, objective mobility of agents over channels. This paper systematically compares all previous variants of Seal Calculus. We study their operational behaviour
with labelled transition systems and bisimulations; by comparing the resulting algebraic theories we highlight the differences between these apparently similar approaches. This leads us to identify
the dialect of Seal that is most amenable to operational reasoning and can form the basis of a distributed programming language. We propose type systems for characterising the communications in which
an agent can engage. The type systems thus enforce a discipline of agent mobility, since the latter is coded in terms of higher-order communication.
- IN PROC. OF ICALP’03, VOLUME 2719 OF LNCS , 2003
"... We study the behavioural theory of Cardelli and Gordon's Mobile Ambients. We give an LTS based operational semantics, and a labelled bisimulation based equivalence that coincides with reduction
barbed congruence. We also provide two up-to proof techniques that we use to prove a set of algebraic laws ..."
Cited by 21 (3 self)
Add to MetaCart
We study the behavioural theory of Cardelli and Gordon's Mobile Ambients. We give an LTS based operational semantics, and a labelled bisimulation based equivalence that coincides with reduction
barbed congruence. We also provide two up-to proof techniques that we use to prove a set of algebraic laws, including the perfect firewall equation.
, 2004
"... We discuss a basic process calculus useful for modelling applications over global computing systems and present the associated semantic theories as determined by some basic notions of
observation. The main features of the calculus are explicit distribution, remote operations, process mobility and ..."
Cited by 17 (6 self)
Add to MetaCart
We discuss a basic process calculus useful for modelling applications over global computing systems and present the associated semantic theories as determined by some basic notions of observation.
The main features of the calculus are explicit distribution, remote operations, process mobility and asynchronous communication through distributed data spaces. We introduce some natural notions of
extensional observations and study their closure under operational reductions and/or language contexts to obtain barbed congruence and may testing. For these equivalences, we provide alternative
tractable characterizations as labelled bisimulation and trace equivalence. We discuss some of the induced equational laws and relate them to design choices of the calculus. In particular, we show
that some of these laws do not hold any longer if the language is rendered less abstract by introducing (asynchronous and undetectable) failures or by implementing remote communications via process
migrations and local exchanges. In both
- In: Proceedings of the 3rd International Conference on Theoretical Computer Science (IFIP TCS , 2004
"... We study a behavioural theory of Mobile Ambients, a process calculus for modelling mobile agents in wide-area networks, focussing on reduction barbed congruence. Our contribution is threefold.
(1) We prove a context lemma which shows that only parallel and nesting contexts need be examined to recove ..."
Cited by 12 (1 self)
Add to MetaCart
We study a behavioural theory of Mobile Ambients, a process calculus for modelling mobile agents in wide-area networks, focussing on reduction barbed congruence. Our contribution is threefold. (1) We
prove a context lemma which shows that only parallel and nesting contexts need be examined to recover this congruence. (2) We characterise this congruence using a labelled bisimilarity: this requires
novel techniques to deal with asynchronous movements of agents and with the invisibility of migrations of secret locations. (3) We develop refined proof methods involving up-to proof techniques,
which allow us to verify a set of algebraic laws and the correctness of more complex examples.
, 2004
"... Groupoidal relative pushouts (GRPOs) have recently been proposed by the authors as a new foundation for Leifer and Milner's approach to deriving labelled bisimulation congruences from reduction
systems. In this paper, we develop the theory of GRPOs further, proving that well-known equivalences, othe ..."
Cited by 11 (1 self)
Add to MetaCart
Groupoidal relative pushouts (GRPOs) have recently been proposed by the authors as a new foundation for Leifer and Milner's approach to deriving labelled bisimulation congruences from reduction
systems. In this paper, we develop the theory of GRPOs further, proving that well-known equivalences, other than bisimulation, are congruences. To demonstrate the type of category theoretic arguments
which are inherent in the 2-categorical approach, we construct GRPOs in a category of `bunches and wirings.' Finally, we prove that the 2-categorical theory of GRPOs is a generalisation of the
approaches based on Milner's precategories and Leifer's functorial reactive systems.
- ACM Transactions on Programming Languages and Systems , 2006
"... We develop a semantics theory for SAP, a variant of Levi and Sangiorgi’s Safe Ambients, SA. The dynamics of SA relies upon capabilities (and co-capabilities) exercised by mobile agents, called
ambients, to interact with each other. These capabilities contain references, the names of ambients with wh ..."
Cited by 11 (0 self)
Add to MetaCart
We develop a semantics theory for SAP, a variant of Levi and Sangiorgi’s Safe Ambients, SA. The dynamics of SA relies upon capabilities (and co-capabilities) exercised by mobile agents, called
ambients, to interact with each other. These capabilities contain references, the names of ambients with which they wish to interact. In SAP we generalise the notion of capability: in order to
interact with an ambient n, an ambient m must exercise a capability indicating both n and a password h to access n; the interaction between n and m takes place only if n is willing to perform a
corresponding co-capability with the same password h. The name h can also be looked upon as a port to access ambient n via port h. In SAP by managing passwords/ports, for example generating new ones
and distributing them selectively, an ambient may now program who may migrate into its computation space, and when. Moreover in SAP an ambient may provide different services/resources depending on
the port accessed by the incoming clients. Then, we give an lts-based operational semantics for SAP and a labelled bisimulation equivalence which is proved to coincide with reduction barbed
congruence. We use our notion of bisimulation to prove a set of algebraic laws which are subsequently exploited to prove more significant examples.
- In ASIAN’03, number 2896 in LNCS , 2003
"... Resource control has attracted increasing interest in foundational research on distributed systems. This paper focuses on space control and develops an analysis of space usage in the context of
an ambient-like calculus with bounded capacities and weighed processes, where migration and activation ..."
Cited by 7 (1 self)
Add to MetaCart
Resource control has attracted increasing interest in foundational research on distributed systems. This paper focuses on space control and develops an analysis of space usage in the context of an
ambient-like calculus with bounded capacities and weighed processes, where migration and activation require space.
- Global Computing - Programming Environments, Languages, Security and Analysis of Systems, volume 2874 of LNCS , 2003
"... A tutorial introduction to the key concepts of ambient calculi and their type disciplines, illustrated through a number of systems proposed in the last few years, such as Mobile Ambients, Safe
Ambients, Boxed Ambients, and other related calculi with types. ..."
Cited by 6 (2 self)
Add to MetaCart
A tutorial introduction to the key concepts of ambient calculi and their type disciplines, illustrated through a number of systems proposed in the last few years, such as Mobile Ambients, Safe
Ambients, Boxed Ambients, and other related calculi with types. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=108411","timestamp":"2014-04-18T09:41:00Z","content_type":null,"content_length":"36481","record_id":"<urn:uuid:fc40b88d-e5a9-420c-9f5f-51c58fdf58bd>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
One of the basic problems in the field of operator algebras is to develop a functional calculus for either a single operator ${A}$, or a collection ${A_1, A_2, \ldots, A_k}$ of operators. These
operators could in principle act on any function space, but typically one either considers complex matrices (which act on a complex finite dimensional space), or operators (either bounded or
unbounded) on a complex Hilbert space. (One can of course also obtain analogous results for real operators, but we will work throughout with complex operators in this post.)
Roughly speaking, a functional calculus is a way to assign an operator ${F(A)}$ or ${F(A_1,\ldots,A_k)}$ to any function ${F}$ in a suitable function space, which is linear over the complex numbers,
preserves the scalars (i.e. ${c(A) = c}$ when ${c \in {\bf C}}$), and should be either an exact or approximate homomorphism in the sense that
$\displaystyle FG(A_1,\ldots,A_k) = F(A_1,\ldots,A_k) G(A_1,\ldots,A_k), \ \ \ \ \ (1)$
should hold either exactly or approximately. In the case when the ${A_i}$ are self-adjoint operators acting on a Hilbert space (or Hermitian matrices), one often also desires the identity
$\displaystyle \overline{F}(A_1,\ldots,A_k) = F(A_1,\ldots,A_k)^* \ \ \ \ \ (2)$
to also hold either exactly or approximately. (Note that one cannot reasonably expect (1) and (2) to hold exactly for all ${F,G}$ if the ${A_1,\ldots,A_k}$ and their adjoints ${A_1^*,\ldots,A_k^*}$
do not commute with each other, so in those cases one has to be willing to allow some error terms in the above wish list of properties of the calculus.) Ideally, one should also be able to relate the
operator norm of ${f(A)}$ or ${f(A_1,\ldots,A_k)}$ with something like the uniform norm on ${f}$. In principle, the existence of a good functional calculus allows one to manipulate operators as if
they were scalars (or at least approximately as if they were scalars), which is very helpful for a number of applications, such as partial differential equations, spectral theory, noncommutative
probability, and semiclassical mechanics. A functional calculus for multiple operators ${A_1,\ldots,A_k}$ can be particularly valuable as it allows one to treat ${A_1,\ldots,A_k}$ as being exact or
approximate scalars simultaneously. For instance, if one is trying to solve a linear differential equation that can (formally at least) be expressed in the form
$\displaystyle F(X,D) u = f$
for some data ${f}$, unknown function ${u}$, some differential operators ${X,D}$, and some nice function ${F}$, then if one’s functional calculus is good enough (and ${F}$ is suitably “elliptic” in
the sense that it does not vanish or otherwise degenerate too often), one should be able to solve this equation either exactly or approximately by the formula
$\displaystyle u = F^{-1}(X,D) f,$
which is of course how one would solve this equation if one pretended that the operators ${X,D}$ were in fact scalars. Formalising this calculus rigorously leads to the theory of pseudodifferential
operators, which allows one to (approximately) solve or at least simplify a much wider range of differential equations than one what can achieve with more elementary algebraic transformations (e.g.
integrating factors, change of variables, variation of parameters, etc.). In quantum mechanics, a functional calculus that allows one to treat operators as if they were approximately scalar can be
used to rigorously justify the correspondence principle in physics, namely that the predictions of quantum mechanics approximate those of classical mechanics in the semiclassical limit ${\hbar \rightarrow 0}$.
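In the simplest constant-coefficient case, the recipe ${u = F^{-1}(X,D) f}$ can be carried out literally with the Fourier transform. A toy numerical sketch (mine, not from the post itself), solving ${(1 - \partial_x^2) u = f}$ on a periodic grid:

import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
f = np.exp(np.cos(x))                          # smooth periodic data
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # Fourier frequencies of the grid
u = np.fft.ifft(np.fft.fft(f) / (1 + xi**2)).real
# u satisfies u - u'' = f to spectral accuracy; here D really does act as a scalar.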
There is no universal functional calculus that works in all situations; the strongest functional calculi, which are close to being an exact *-homomorphisms on very large class of functions, tend to
only work for under very restrictive hypotheses on ${A}$ or ${A_1,\ldots,A_k}$ (in particular, when ${k > 1}$, one needs the ${A_1,\ldots,A_k}$ to commute either exactly, or very close to exactly),
while there are weaker functional calculi which have fewer nice properties and only work for a very small class of functions, but can be applied to quite general operators ${A}$ or ${A_1,\ldots,A_k}$
. In some cases the functional calculus is only formal, in the sense that ${f(A)}$ or ${f(A_1,\ldots,A_k)}$ has to be interpreted as an infinite formal series that does not converge in a traditional
sense. Also, when one wishes to select a functional calculus on non-commuting operators ${A_1,\ldots,A_k}$, there is a certain amount of non-uniqueness: one generally has a number of slightly
different functional calculi to choose from, which generally have the same properties but differ in some minor technical details (particularly with regards to the behaviour of “lower order”
components of the calculus). This is similar to how one has a variety of slightly different coordinate systems available to parameterise a Riemannian manifold or Lie group. This is in contrast to
the ${k=1}$ case when the underlying operator ${A = A_1}$ is (essentially) normal (so that ${A}$ commutes with ${A^*}$); in this special case (which includes the important subcases when ${A}$ is
unitary or (essentially) self-adjoint), spectral theory gives us a canonical and very powerful functional calculus which can be used without further modification in applications.
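For a single Hermitian matrix this canonical calculus is straightforward to realize numerically; a minimal sketch:

import numpy as np

def spectral_calculus(f, A):
    # f(A) for Hermitian A via the spectral theorem: A = U diag(w) U*
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

A = np.array([[2.0, 1.0], [1.0, 2.0]])
E = spectral_calculus(np.exp, A)
assert np.allclose(E @ spectral_calculus(lambda t: np.exp(-t), A), np.eye(2))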
Despite this lack of uniqueness, there is one standard choice for a functional calculus available for general operators ${A_1,\ldots,A_k}$, namely the Weyl functional calculus; it is analogous in
some ways to normal coordinates for Riemannian manifolds, or exponential coordinates of the first kind for Lie groups, in that it treats lower order terms in a reasonably nice fashion. (But it is
important to keep in mind that, like its analogues in Riemannian geometry or Lie theory, there will be some instances in which the Weyl calculus is not the optimal calculus to use for the application
at hand.)
I decided to write some notes on the Weyl functional calculus (also known as Weyl quantisation), and to sketch the applications of this calculus both to the theory of pseudodifferential operators.
They are mostly for my own benefit (so that I won’t have to redo these particular calculations again), but perhaps they will also be of interest to some readers here. (Of course, this material is
also covered in many other places. e.g. Folland’s “harmonic analysis in phase space“.)
Maxwell’s demon in the quantum world
Figure credit: (a) Shutterstock.com/Scott Maxwell/LuMaxArt; S. W. Kim et al.; (b) S. W. Kim et al.
Information and thermodynamics are intimately connected. This idea was first illustrated by Maxwell with his celebrated demon: an intelligent being who uses his knowledge about the position and
velocity of the molecules in a gas to transfer heat against a temperature gradient without expenditure of work, beating the second law of thermodynamics. The Szilard engine is a stylized version of
the demon, where a yes/no measurement of a classical single-particle system allows one to extract a tiny amount of energy, $kT\ln 2$, $k$ being the Boltzmann constant, from a thermal reservoir at
temperature $T$. The engine has been around for almost a century now [1]. Along the way it has furnished insight into the foundations of statistical mechanics, become the canonical model for
investigations of feedback-controlled systems, and even spurred the creation of a new field: the thermodynamics of computation.
In a paper appearing in Physical Review Letters [2], Sang Wook Kim at the Pusan National University, Korea, and the University of Tokyo, Japan, along with collaborators from the University of Tokyo
analyze a multiparticle quantum version of the Szilard engine, highlighting how the quantum statistics of fermions and bosons can dramatically affect the engine’s performance. The paper helps to
clarify the interplay between information and entropy in the quantum world, the thermodynamic consequences of measurement, and the information content of spatially extended multiparticle states.
The original Szilard engine consists of a classical particle in a box attached to a thermal reservoir at temperature $T$. An external agent inserts a wall in the middle of the box, confining the
particle in one half. Next, the agent measures in which half of the box the particle is trapped, and then slowly moves the wall to the opposite side of the box, allowing the particle to perform
mechanical work. The motion of the wall is an isothermal expansion and the amount of work performed can easily be calculated, $kT\ln 2$. In the classical case, the insertion of the wall can be done, ideally, at zero energy cost. Therefore the whole process results in the extraction of a net amount of energy, $kT\ln 2$, from the thermal bath or a decrease of the entropy by $k\ln 2$. This is precisely the information gathered by the measurement, in the appropriate units (the information in a yes/no measurement is one bit or $\ln 2$ nats, a unit of information that uses natural instead of base $2$
logarithms). The second law of thermodynamics demands that this decrease in the entropy must be compensated by an increase in the entropy of the agent operating the engine. This can occur either in
the measurement or in the erasure of the information gathered [1]. Sagawa and Ueda have unified both possibilities in a simple and elegant theoretical framework [3].
Despite the age of the problem, the analysis of the Szilard engine continues to benefit from new theoretical and experimental developments. The engine has been studied using a new class of powerful
results in nonequilibrium statistical mechanics that characterize the fluctuations in the energetics of arbitrary thermodynamic processes: work and fluctuation theorems. In particular, the extraction
of work has been related to the time-reversal asymmetry of the engine’s operation [4, 5]. Additionally, the engine has recently been realized in the laboratory—almost one century since Szilard
proposed the engine as a gedanken experiment—using a charged Brownian rotor in an electrostatic field controlled by feedback [6].
In their paper, Kim et al. present a general analysis of the isothermal Szilard engine using an arbitrary number of quantum particles. The quantum Szilard engine exhibits intriguing differences with
respect to the classical case: the insertion of the wall cannot be done without expending work [7], and the measurement generally involves the collapse of the wave function [1]. Although these
subtleties have been analyzed to some extent in previous works, the present article extends these results. Most importantly, by considering two or more noninteracting quantum particles, quantum
statistics enters the game, bringing in new effects. For an illustration of the operation of this quantum Szilard engine, see Fig. 1.
Take, for example, the case of two particles: Kim et al. show that the work extracted in one operational cycle is $-2kTf_0\ln f_0$, where $f_0$ is the probability of finding both particles in the same half of the box. For distinguishable particles, $f_0=1/2$, and one recovers the classical result for a single particle. The reason is that although the measurement carries more information ($2$ bits) than in the one-particle case, two of the possible outcomes (namely, one particle in each half of the box) are not used to extract energy.
More interesting effects occur when both particles are bosons or fermions, especially close to zero temperature where the quantum nature of the particles is more significant. For fermions near zero
temperature, the Pauli exclusion principle forces the two fermions to be found in separate halves of the box, thus $f_0\approx 0$ and the work extracted is much smaller than in the classical case. On the other hand, at zero temperature, bosons like to clump together. Kim et al. have shown that in this case, $f_0=1/3$, and the work extracted is larger than in the classical case. Consequently, bosons
more efficiently extract work from measurement, whereas fermions can be completely inefficient. These differences with respect to the classical case remain even when the thermal energy $kT$ is far
above the energy of the ground state.
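A quick numerical comparison of the three cases just discussed (an added illustration, not from the original article; it takes the extracted work per cycle to be $-2kTf_0\ln f_0$, in units of $kT$):

import math

def work_over_kT(f0):
    # Extracted work per cycle in units of kT, W = -2 f0 ln f0 (assumed sign convention)
    return -2.0 * f0 * math.log(f0) if f0 > 0.0 else 0.0

for label, f0 in [("distinguishable  ", 0.5),
                  ("bosons, T -> 0   ", 1.0 / 3.0),
                  ("fermions, T -> 0 ", 1e-12)]:
    print(f"{label} f0 = {f0:.3g}  W/kT = {work_over_kT(f0):.4f}")

# distinguishable gives ln 2 = 0.6931 (the classical single-particle result);
# bosons give (2/3) ln 3 = 0.7324 > ln 2; fermions give essentially 0.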
While the field of quantum information is well developed, we are still a long way from a full understanding of the interplay between information and thermodynamics in quantum systems. Although the
paper of Kim et al. does not make explicit use of quantum coherence, it is a first step in this pursuit. Experimental realization of a quantum Szilard engine is still some way off, although potential
candidates include trapped cold atoms in the bosonic case and quantum dots from semiconductor heterostructures for the fermionic version.
1. H. S. Leff and A. F. Rex, Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing (Institute of Physics, Bristol, 2003).
2. S. W. Kim, T. Sagawa, S. De Liberato, and M. Ueda, Phys. Rev. Lett. 106, 070401 (2011).
3. T. Sagawa and M. Ueda, Phys. Rev. Lett. 102, 250602 (2009).
4. R. Kawai, J. M. R. Parrondo, and C. Van den Broeck, Phys. Rev. Lett. 98, 080602 (2007).
5. J. M. Horowitz and S. Vaikuntanathan, Phys. Rev. E 82, 061120 (2010).
6. S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki, and M. Sano, Nature Phys. 6, 988 (2010).
7. J. Gea-Banacloche and H. S. Leff, Fluct. Noise Lett. 5, C39 (2005). | {"url":"http://physics.aps.org/articles/v4/13","timestamp":"2014-04-18T00:22:41Z","content_type":null,"content_length":"25658","record_id":"<urn:uuid:bda12567-075d-4c21-8d90-5cb7ca16c11a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: Help with complete inner product space
Replies: 0
Posted: Dec 10, 2012 1:27 PM
Hi, I'm struggling with this problem; it goes like this:
V is the space of polynomials with complex coefficients (viewed as maps from R to C), made into an inner product space with $(f,g) = \int_0^\infty f(x)\overline{g(x)}\,e^{-x}\,dx$.
I need to show that V is not complete; can someone post a few tips/steps that I can follow?
I know I need to pick out a Cauchy sequence in V and show that it does not converge to any element of V (its limit lies outside V), but I'm not sure how.
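One standard route, sketched here as an added illustration (it uses the fact that the Laguerre polynomials $L_n$ form an orthonormal basis of $L^2([0,\infty), e^{-x}dx)$): expand a non-polynomial function in this basis. For $f(x) = e^{-x}$ the coefficients are
$$(f, L_n) = \int_0^\infty e^{-x} L_n(x)\, e^{-x}\, dx = \int_0^\infty e^{-2x} L_n(x)\, dx = 2^{-(n+1)},$$
so the partial sums $p_N = \sum_{n=0}^{N} 2^{-(n+1)} L_n$ are polynomials and form a Cauchy sequence in $V$ (the coefficients are square-summable), yet they converge in norm to $e^{-x}$, which is not a polynomial. Since the $L^2$ limit is unique, no element of $V$ can be the limit, and $V$ is not complete.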
Proof that cross product = Area of parallelogram
February 14th 2010, 10:11 AM #1
Not sure if I should be in this thread or in the geometry one, but here goes:
The problem that I was given was:
Find the area of a parallelogram spanned by the vectors a = i + j and b = 4i - j.
Now prove the following: |a × b| = |a||b|sin(θ) for ANY (2-dimensional) vectors a and b, where θ is the angle between the vectors.
Now I know that the area of a parallelogram is the magnitude of the cross product, and I can easily show that the area of the parallelogram is the same as |a||b|sin(θ), but how do you prove that |a × b| = area of the parallelogram?
Actually, I would be inclined to look at it the other way: with the cross product of two vectors defined by $|\vec{u}\times \vec{v}| = |\vec{u}||\vec{v}|\sin(\theta)$ and the direction given by the "right hand rule", prove that $(a\vec{i}+ b\vec{j}+ c\vec{k})\times (x\vec{i}+ y\vec{j}+ z\vec{k}) = (bz- cy)\vec{i}- (az- cx)\vec{j}+ (ay-bx)\vec{k}$, but, of course, you can go either way.
In order to show your way, define a "new" product, say *, by $|\vec{u}*\vec{v}| = |\vec{u}||\vec{v}|\sin(\theta)$, and the "right hand rule". We can easily get the rules $\vec{i}*\vec{j}= \vec{k}$, $\vec{j}*\vec{k}= \vec{i}$, and $\vec{k}*\vec{i}= \vec{j}$ as well as $\vec{i}*\vec{i}= \vec{j}*\vec{j}= \vec{k}*\vec{k}= \vec{0}$, and the general $\vec{u}*\vec{v}= -\vec{v}*\vec{u}$.
Now, for $\vec{u}= a\vec{i}+ b\vec{j}+ c\vec{k}$ and $\vec{v}= x\vec{i}+ y\vec{j}+ z\vec{k}$, multiply $\vec{u}*\vec{v}= (a\vec{i}+ b\vec{j}+ c\vec{k})*(x\vec{i}+ y\vec{j}+ z\vec{k})$ "term by term" and use the basic products above to show that this "new" product is precisely the cross product.
I think I see. How would you do this in two dimensions though?
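For the numbers in the original question, a quick check (added here as an illustration; it assumes numpy is available). In two dimensions the parallelogram area is $|a_x b_y - a_y b_x|$, which equals $|\vec{a}\times\vec{b}|$ once the vectors are embedded in the plane z = 0:

import numpy as np

a = np.array([1.0, 1.0, 0.0])    # a = i + j, embedded in 3D
b = np.array([4.0, -1.0, 0.0])   # b = 4i - j
print(np.linalg.norm(np.cross(a, b)))   # 5.0, the parallelogram area
print(abs(a[0] * b[1] - a[1] * b[0]))   # 5.0, the 2D determinant shortcut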
Results 1 - 10 of 11
- Mathematical Systems Theory , 1994
"... There is currently considerable interest among computational linguists in grammatical formalisms with highly restricted generative power. This paper concerns the relationship between the class
of string languages generated by several such formalisms viz. Combinatory Categorial Grammars, Head Grammar ..."
Cited by 79 (5 self)
Add to MetaCart
There is currently considerable interest among computational linguists in grammatical formalisms with highly restricted generative power. This paper concerns the relationship between the class of
string languages generated by several such formalisms viz. Combinatory Categorial Grammars, Head Grammars, Linear Indexed Grammars and Tree Adjoining Grammars. Each of these formalisms is known to
generate a larger class of languages than Context-Free Grammars. The four formalisms under consideration were developed independently and appear superficially to be quite different from one another.
The result presented in this paper is that all four of the formalisms under consideration generate exactly the same class of string languages.
- COMPUTATIONAL LINGUISTICS , 1991
"... ... this paper that the number-name system of Chinese is generated neither by this formalism nor by any other equivalent or weaker ones, suggesting that such a task might require the use of the
more powerful Indexed Grammar formalism. Given that our formal results apply only to a proper subset of Ch ..."
Cited by 14 (0 self)
Add to MetaCart
... this paper that the number-name system of Chinese is generated neither by this formalism nor by any other equivalent or weaker ones, suggesting that such a task might require the use of the more
powerful Indexed Grammar formalism. Given that our formal results apply only to a proper subset of Chinese, we extensively discuss the issue of whether they have any implications for the whole of
that natural language. We conclude that our results bear directly either on the syntax of Chinese or on the interface between Chinese and the cognitive component responsible for arithmetic reasoning.
Consequently, either Tree Adjoining Grammars, as currently defined, fail to generate the class of natural languages in a way that discriminates between linguistically warranted sublanguages, or
formalisms with generative power equivalent to Tree Adjoining Grammar cannot serve as a basis for the interface between the human linguistic and mathematical faculties.
- Linguistics and Philosophy , 1994
"... We consider the use of evolving algebra methods of specifying grammars for natural languages. We are especially interested in distributed evolving algebras. We provide the motivation for doing
this, and we give a reconstruction of some classical grammatical formalisms in directly dynamic terms. Fina ..."
Cited by 8 (0 self)
Add to MetaCart
We consider the use of evolving algebra methods of specifying grammars for natural languages. We are especially interested in distributed evolving algebras. We provide the motivation for doing this,
and we give a reconstruction of some classical grammatical formalisms in directly dynamic terms. Finally, we consider some technical questions arising from the use of direct dynamism in grammatical
formalisms.
, 1986
"... We examine the relationship between the two grammatical formalisms: Tree Adjoining Grammars and Head Grammars. We briefly investigate the weak equivalence of the two formalisms. We then turn to
a discussion comparing the linguistic expressiveness of the two formalisms. ..."
Cited by 5 (1 self)
Add to MetaCart
We examine the relationship between the two grammatical formalisms: Tree Adjoining Grammars and Head Grammars. We briefly investigate the weak equivalence of the two formalisms. We then turn to a
discussion comparing the linguistic expressiveness of the two formalisms.
"... Abstract. It is investigated for which choice of a parameter q, denoting the number of contexts, the class of simple external contextual languages is iteratively learnable. On one hand, the
class admits, for all values of q, polynomial time learnability provided an adequate choice of the hypothesis ..."
Cited by 5 (2 self)
Add to MetaCart
Abstract. It is investigated for which choice of a parameter q, denoting the number of contexts, the class of simple external contextual languages is iteratively learnable. On one hand, the class
admits, for all values of q, polynomial time learnability provided an adequate choice of the hypothesis space is given. On the other hand, additional constraints like consistency and conservativeness
or the use of a one-one hypothesis space changes the picture — iterative learning limits the long term memory of the learner to the current hypothesis and these constraints further hinder storage of
information via padding of this hypothesis. It is shown that if q> 3, then simple external contextual languages are not iteratively learnable using a class preserving one-one hypothesis space, while
for q = 1 it is iteratively learnable, even in polynomial time. It is also investigated for which choice of the parameters, the simple external contextual languages can be learnt by a consistent and
conservative iterative learner.
- Handbook of Logic and Language. North , 1995
"... We consider the use of evolving algebra methods of specifying grammars for natural languages. We are especially interested in distributed evolving algebras. We provide the motivation for doing
this, and we give a reconstruction of some classical grammatical formalisms in directly dynamic terms. Fina ..."
Cited by 3 (2 self)
Add to MetaCart
We consider the use of evolving algebra methods of specifying grammars for natural languages. We are especially interested in distributed evolving algebras. We provide the motivation for doing this,
and we give a reconstruction of some classical grammatical formalisms in directly dynamic terms. Finally, we consider some technical questions arising from the use of direct dynamism in grammatical
formalisms.
, 1995
"... By relating positive inductive definitions to space-bounded computations of alternating Turing machines, Rounds, Comp. Linguistics 14, 1988, has given uniform grammatical characterizations of
the EXPTIME and PTIME languages. But his proof gives fairly poor bounds for language recognition with contex ..."
Cited by 1 (1 self)
Add to MetaCart
By relating positive inductive definitions to space-bounded computations of alternating Turing machines, Rounds, Comp. Linguistics 14, 1988, has given uniform grammatical characterizations of the
EXPTIME and PTIME languages. But his proof gives fairly poor bounds for language recognition with context-free resp. head grammars. We improve Rounds' analysis in two respects: first, we introduce a
modified class of language definitions that allow restricted forms of negative inductions, and second, we show how to build table-driven recognizers from such definitions. For a wide and natural
class of language definitions we thereby obtain fairly efficient recognizers; we can recognize the boolean closure of context-free resp. head languages in the well-known O(n 3 ) resp. O(n 6 ) steps
on a RAM . Our `bounded' fixed-point formulas apparently can not define an arbitrary PTIME language. Our method is based on the existence of fixed-points for a class of operators that need neither be
, 2005
"... We discuss two standard formal tools used to study models of grammar. One of these is formal language theory, which provides a way to describe the complexity of languages in terms of a sequence
of standard language classes known as the Chomsky hierarchy. The other tool is learnability theory, which ..."
Add to MetaCart
We discuss two standard formal tools used to study models of grammar. One of these is formal language theory, which provides a way to describe the complexity of languages in terms of a sequence of
standard language classes known as the Chomsky hierarchy. The other tool is learnability theory, which can describe, for a given class of languages, whether or not there exists a single learner that
can learn every language in the class; we use a particular model for learning developed by Gold. These two tools can be used to obtain formal properties of a grammar system, and to evaluate the
validity of a theory of natural language. After presenting the tools, we show how they can be applied to the linguistic theory of categorial grammars, and we discuss the results.
The Point-Slope Form
Math Solver:
The Point-Slope Form:
Find the equation of a line given the slope (m), expressed as a fraction, and the coordinates, (x₁, y₁), of a point on the line. Input the slope and (x₁, y₁) and press the SOLVE button.
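A minimal sketch of the computation behind such a solver (hypothetical code, not the site's own implementation): from the point-slope form y − y₁ = m(x − x₁), the slope-intercept form is y = mx + (y₁ − m·x₁).

from fractions import Fraction

def point_slope_to_intercept(m, x1, y1):
    # y - y1 = m(x - x1)  =>  y = m*x + (y1 - m*x1)
    m, x1, y1 = Fraction(m), Fraction(x1), Fraction(y1)
    return m, y1 - m * x1

m, b = point_slope_to_intercept(Fraction(2, 3), 3, 1)
print(f"y = ({m})x + ({b})")   # y = (2/3)x + (-1)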
Summary: Indexed Containers
Thorsten Altenkirch, Neil Ghani, Peter Hancock, Conor McBride, and Peter Morris
School of Computer Science and Information Technology, Nottingham University
Abstract. The search for an expressive calculus of datatypes in which canonical algorithms can be easily written and proven correct has proved to be an enduring challenge to the theoretical computer science community. Approaches such as polynomial types, strictly positive types and inductive types have all met with some success, but they tend not to cover important examples such as types with variable binding, types with constraints, nested types, dependent types, etc.
In order to compute with such types, we generalise from the traditional treatment of types as free-standing entities to families of types which have some form of indexing. The hallmark of such indexed types is that
Hi Guys! algebra help needed for engineering studies!
February 18th 2013, 02:42 PM
Hi there, I'm Andy from Liverpool.
I'm currently studying for an engineering degree, but first have to complete a foundation course in maths. It's going well so far - until the dreaded algebra questions came along! I've understood some of it, but the assessment questions have tied me in knots and I feel completely stuck. These are the questions giving me particular difficulty; any help would be much appreciated:
Solve: $16 = 24(1-e^{-\frac{t}{2}})$
The next question is: evaluate $A = \frac{V}{100}\left(Q - \frac{mV^2}{g}\right)$, where Q = 50.28, m = 17, V = 5 and g = 9.81.
Many thanks
February 18th 2013, 02:48 PM
Re: Hi Guys! algebra help needed for engineering studies!
Sorry, the second equation isn't very clear: it should be V divided by 100, times (Q minus mV squared divided by g).
February 18th 2013, 02:54 PM
Re: Hi Guys! algebra help needed for engineering studies!
$16 = 24(1-e^{-\frac{t}{2}})$
divide both sides by 24 to get
$\frac{2}{3} = 1 -e^{-\frac{t}{2}}$
add - 1 to both sides
$\frac{-1}{3} = -e^{-\frac{t}{2}}$
multiply both sides by -1
$\frac{1}{3} = e^{-\frac{t}{2}}$
take natural log of both sides
$ln(\frac{1}{3}) = -\frac{t}{2}$
use log rules $a*ln(x) = ln(x^a)$
to get $2*ln(\frac{1}{3}) = ln((\frac{1}{3})^2) = ln(\frac{1}{9})$
so $-t = ln(\frac{1}{9})$
$t = - ln(\frac{1}{9})=ln((\frac{1}{9})^{-1}) = ln(9)$
t = ln(9)
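A quick numerical check of both answers (added here; the second formula is the reconstruction given above, not part of the original thread):

import math

t = math.log(9)                      # the solution t = ln(9) ~ 2.197
print(24 * (1 - math.exp(-t / 2)))   # 16.0, as required

Q, m, V, g = 50.28, 17.0, 5.0, 9.81  # the second question
A = (V / 100.0) * (Q - m * V**2 / g)
print(A)                             # ~ 0.348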
February 18th 2013, 02:55 PM
Re: Hi Guys! algebra help needed for engineering studies!
thanks very much for reply, really helped!
February 20th 2013, 04:24 PM
Re: Hi Guys! algebra help needed for engineering studies!
$16 = 24(1-e^{-\frac{t}{2}})$
divide both sides by 24 to get
$\frac{2}{3} = 1 -e^{-\frac{t}{2}}$
add - 1 to both sides
$\frac{-1}{3} = -e^{-\frac{t}{2}}$
multiply both sides by -1
$\frac{1}{3} = e^{-\frac{t}{2}})$
take natural log of both sides
$ln(\frac{1}{3}) = -\frac{t}{2}$
use log rules $a*ln(x) = ln(x^a)$
to get $2*ln(\frac{1}{3}) = ln((\frac{1}{3})^2) = ln(\frac{1}{9})$
so $-t = ln(\frac{1}{9})$
$t = - ln(\frac{1}{9})=ln((\frac{1}{9})^{-1}) = ln(9)$
t = ln(9)
you are a genius.. | {"url":"http://mathhelpforum.com/new-users/213356-hi-guys-algebra-help-needed-engineering-studies-print.html","timestamp":"2014-04-20T18:27:15Z","content_type":null,"content_length":"10452","record_id":"<urn:uuid:795de2e6-3018-4320-91fe-980b2f4068bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
On Operators of Saphar Type
Christoph Schmoeger
Mathematisches Institut I, Universität Karlsruhe,
Postfach 6980, D-7500 Karlsruhe 1 - GERMANY
Abstract: A bounded linear operator $T$ on a complex Banach space $X$ is called an operator of Saphar type if $T$ is relatively regular and if its null space is contained in its generalized range $\bigcap^{\infty}_{n=1}T^n(X)$. This paper contains some characterizations of operators of Saphar type. Furthermore, for a function $f$ admissible in the analytic calculus, we obtain a necessary and sufficient condition in order that $f(T)$ is an operator of Saphar type.
Watch as David Attenborough signals his interest in mating with a male cicada. Scientists think that cicadas have 13- or 17-year mating cycles because, being prime, those periods share no common factors with the shorter life cycles of potential predators. From Stephen J. Gould:
Many potential predators have 2-5-year life cycles. Such cycles are not set by the availability of cicadas (for they peak too often in years of nonemergence), but cicadas might be eagerly
harvested when the cycles coincide. Consider a predator with a life-cycle of five years: if cicadas emerged every 15 years, each bloom would be hit by the predator. By cycling at a large prime
number, cicadas minimize the number of coincidences (every 5 x 17, or 85 years, in this case). Thirteen- and 17-year cycles cannot be tracked by any smaller number.
It's a bit more complicated than that, but Gould's argument covers the basics. (thx, @mwilkie) | {"url":"http://kottke.org/tag/primenumbers","timestamp":"2014-04-21T14:43:20Z","content_type":null,"content_length":"10089","record_id":"<urn:uuid:30ddef86-064c-4763-988e-23c1d57d74dc>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
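The arithmetic is easy to check (an added illustration): the first year a brood and a predator cycle coincide is the least common multiple of the two periods.

from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for cycle in (12, 15, 17):
    print(cycle, [lcm(cycle, p) for p in (2, 3, 4, 5)])
# 12 [12, 12, 12, 60]
# 15 [30, 15, 60, 15]
# 17 [34, 51, 68, 85] -- the prime cycle pushes every coincidence as far out as possible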
Factoring Polynomials: Factoring Trinomials with Leading Coefficient Other Than 1
Factor $18x^2-56x+6$.
We see that each term is even, so we can factor out 2.
$2(9x^2-28x+3)$
Notice that the constant term is positive. Thus, we know that the factors of 3 that we are looking for must have the same sign. Since the sign of the middle term is negative, both factors must be negative.
Factor the first and last terms.
There are not many combinations to try, and we find that $9x$ and $-3$ are to be multiplied and $x$ and $-1$ are to be multiplied.
$18x^2-56x+6 = 2(9x^2-28x+3) = 2(9x-1)(x-3)$
If we had not factored the 2 out first, we would have gotten the factorization
$18x^2-56x+6 = (9x-1)(2x-6)$
The factorization is not complete since one of the factors may be factored further.
$18x^2-56x+6 = (9x-1)(2x-6) = (9x-1)\cdot 2(x-3) = 2(9x-1)(x-3)$ (by the commutative property of multiplication)
The results are the same, but it is much easier to factor a polynomial after all common factors have been factored out first. | {"url":"http://cnx.org/content/m21912/latest/","timestamp":"2014-04-18T23:25:06Z","content_type":null,"content_length":"316213","record_id":"<urn:uuid:6cd61a4f-06e6-4ded-b637-189f71ff9d55>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
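A quick machine check of the result (an added illustration; it assumes the sympy library):

import sympy as sp

x = sp.symbols('x')
print(sp.factor(18*x**2 - 56*x + 6))   # 2*(x - 3)*(9*x - 1)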
How to calculate my average price per share.
post #1 of 8
7/7/06 at 8:53pm
Thread Starter
Ok, this is probably 6th grade math, but say I bought the same stock at three different price levels and a different amount of shares each time. How do I work out my average price per share?
You'll want to check this out:
total $/ total shares = Avg PPS
total $ spent on shares (divided by) total shares (equals) Average pps
But if you want to work out your 'breakeven' price (the price before you start making profit) don't forget to include the commissions.
E.g. if you buy 2 lots of 100 shares at $1 and $2 then you have spent $300 on 200 shares = $1.50 per share average. But you will have paid 2 commissions (and you will pay another to sell). Let's assume
that each commission is $10 then your profit only starts if you sell for more than $330. Therefore your breakeven price is not $1.50, but 330/200 = $1.65.
Excellent, thanks for the info!
Hi all,
Looking for some real help. I am a personal trainer in the UK looking to set up and deliver CPD courses.
I have potential investors to help do this but I need to get my head around issuing shares and making it attractive for them. I am not educated in this area, so any help would be great.
I am looking to show:
• the start-up investment for each investor
• the price of each share
• 6 years in: a model based on a 100% return of the initial investment
I then need to show the value of the shares after 6 years.
I want to be able to grow this business for the long term. How many shares can I issue / attract further investment from other investors?
please help
($ per share A) x (# of A shares) + ($ per share B) x (# of B shares) + ($ per share C) x (# of C shares) = Total Spent
Total Spent / Total number of shares = Average Price per Share
For Example:
You buy 50 shares of XYZ at $20, then later you buy 75 shares of XYZ at $18, finally you buy 25 shares of XYZ for $15. To find the average price per share of your XYZ positions, you multiply each lot's cost per share by the number of shares purchased.
50 x $20 = $1000
75 x $18 = $1350
25 x $15 = $375
now add the totals together
1000 + 1350 + 375 = $2,725
divide this number by the total number of XYZ shares you have.
50 + 75 + 25 = 150 shares
$2,725 / 150 shares = $18.1666666 = average price per share
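A small calculator that reproduces both worked examples in this thread (an added illustration, not from the original posts):

def average_price(lots):
    # lots: list of (shares, price_per_share) pairs
    shares = sum(n for n, _ in lots)
    return sum(n * p for n, p in lots) / shares

def breakeven(lots, commission_per_trade, n_trades):
    # total cost including flat commissions, divided by total shares
    shares = sum(n for n, _ in lots)
    cost = sum(n * p for n, p in lots) + commission_per_trade * n_trades
    return cost / shares

print(average_price([(50, 20.0), (75, 18.0), (25, 15.0)]))  # 18.1666..., as above
print(breakeven([(100, 1.0), (100, 2.0)], 10.0, 3))         # 1.65, matching the $330/200 example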
I know that to figure a break-even point one takes the total amount invested including commissions divided by the total number of shares, and adds the sell commission per share. But what about a more complex case:
suppose I have an account that was a managed account part of the time and commission based for another time period.
If I add up the total number of shares bought or sold under management and divide it into the total management fees for that period, that will give me an average management cost (AMC) per share. Now if I calculate the average cost per share of a stock excluding management fees but including any commissions, and add that to the AMC per share, I will have an average cost per share that must be exceeded to make a profit, but I still have to account for the sale commission (I switched from a managed account to commission-based before the sale).
Is this procedure correct so far, and if I know that the sale commission will be 2%, how do I complete the calculation for a break-even point?
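One way to finish that calculation (a sketch added here, not a reply from the original thread): if C is the total cost per share built up as described (purchase commissions and the per-share management cost included) and the sale commission is a fraction c of the sale proceeds, then the breakeven price P satisfies P(1 - c) = C, so P = C / (1 - c). For c = 2% this simply inflates the per-share cost basis by 1/0.98 ≈ 1.0204.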
Exclusive ρ^0 production in deep inelastic scattering at HERA
Exclusive ρ^0 electroproduction at HERA has been studied with the ZEUS detector using 120 pb^-1 of integrated luminosity collected during 1996–2000. The analysis was carried out in the kinematic
range of photon virtuality 2 <Q^2 < 160 GeV^2, and γ*p centre-of-mass energy 32 <W < 180 GeV. The results include the Q^2 and W dependence of the γ*p → ρ^0p cross section and the distribution of the
squared-four-momentum transfer to the proton. The helicity analysis of the decay-matrix elements of the ρ^0 was used to study the ratio of the γ*p cross sections for longitudinal and transverse photons
as a function of Q^2 and W. Finally, an effective Pomeron trajectory was extracted. The results are compared to various theoretical predictions.
PACS Codes: 13.60.Hb, 13.60.Le
1 Introduction
Two of the most surprising aspects of high-energy deep inelastic scattering (DIS) observed at the HERA ep collider have been the sharp rise of the proton structure function, F[2], with decreasing
value of Bjorken x and the abundance of events with a large rapidity gap in the hadronic final state [1]. The latter are identified as due to diffraction in the deep inelastic regime. A contribution
to the diffractive cross section arises from the exclusive production of vector mesons (VM).
High-energy exclusive VM production in DIS has been postulated to proceed through two-gluon exchange [2,3], once the scale, usually taken as the virtuality Q^2 of the exchanged photon, is large
enough for perturbative Quantum Chromodynamics (pQCD) to be applicable. The gluons in the proton, which lie at the origin of the sharp increase of F[2], are also expected to cause the VM cross
section to increase with increasing photon proton centre-of-mass energy, W, with the rate of increase growing with Q^2. Moreover, the effective size of the virtual photon decreases with increasing Q^
2, leading to a flatter distribution in t, the four-momentum-transfer squared at the proton vertex. All these features, with varying levels of significance, have been observed at HERA [4-10] in the
exclusive production of ρ^0, ω, φ, and J/ψ mesons.
This paper reports on an extensive study of the properties of exclusive ρ^0-meson production,
γ*p → ρ^0p,
based on a high statistics data sample collected with the ZEUS detector during the period 1996–2000, corresponding to an integrated luminosity of about 120 pb^-1.
2 Theoretical background
Calculations of the VM production cross section in DIS require knowledge of the wave-function of the virtual photon, specified by QED and which depends on the polarisation of the virtual photon. For
longitudinally polarised photons, γ*[L], qq̄ pairs of small transverse size dominate [3]. The opposite holds for transversely polarised photons, γ*[T], where qq̄ configurations with large transverse size dominate.
The favourable feature of exclusive VM production is that, at high Q^2, the longitudinal component of the virtual photon is dominant. The interaction cross section in this case can be fully
calculated in pQCD [11], with two-gluon exchange as the leading process in the high-energy regime. For heavy vector mesons, such as the J/ψ or the ϒ, perturbative calculations apply even at Q^2 = 0,
as the smallness of the dipole originating from the photon is guaranteed by the mass of the quarks.
Irrespective of particular calculations [12], in the region dominated by perturbative QCD the following features are predicted:
• the total γ*p → Vp cross section, σ[γ*p], exhibits a steep rise with W, which can be parameterised as σ ~ W^δ, with δ increasing with Q^2;
• the Q^2 dependence of the cross-section, which for a longitudinally polarised photon is expected to behave as Q^-6, is moderated to become Q^-4 by the rapid increase of the gluon density with Q^2;
• the distribution of t becomes universal, with little or no dependence on W or Q^2;
• breaking of the s-channel helicity conservation (SCHC) is expected.
In the region where perturbative calculations are applicable, exclusive vector-meson production could become a complementary source of information on the gluon content of the proton. At present, the
following theoretical uncertainties have been identified:
• the calculation of σ(γ*p → Vp) involves the generalised parton distributions [13,14], which are not well tested; in addition [15], it involves gluon densities outside the range constrained by
global QCD analyses of parton densities;
• higher-order corrections have not been fully calculated [16]; therefore the overall normalisation is uncertain and the scale at which the gluons are probed is not known;
• the rapid rise of σ[γ*p ]with W implies a non-zero real part of the scattering amplitude, which is not known;
• the wave-functions of the vector mesons are not fully known.
In spite of all these problems, precise measurements of differential cross sections separated into longitudinal and transverse components [17] should help to resolve the above theoretical uncertainties.
It is important in these studies to establish a region of phase space where hard interactions dominate over the non-perturbative soft component. If the relative transverse momentum of the qq̄ pair is
small, the colour dipole is large and perturbative calculations do not apply. In this case the interaction looks similar to hadron-hadron elastic scattering, described by soft Pomeron exchange as in
Regge phenomenology [18].
The parameters of the soft Pomeron are known from measurements of total cross sections for hadron-hadron interactions and elastic proton-proton measurements. It is usually assumed that the Pomeron
trajectory is linear in t:
α[ℙ](t) = α[ℙ](0) + α′[ℙ]·t.   (1)
The parameter α[ℙ](0) determines the energy behaviour of the total cross section, σ[tot] ∝ (W^2)^(α[ℙ](0)−1), and α′[ℙ] describes the increase of the slope b of the t distribution with increasing W. The value of α′[ℙ] is inversely proportional to the square of the typical transverse momenta participating in the exchanged trajectory. A large value of α′[ℙ] suggests the presence of low transverse momenta typical of soft interactions. The accepted values of α[ℙ](0) [19] and α′[ℙ] [20] are α[ℙ](0) ≈ 1.08 and α′[ℙ] ≈ 0.25 GeV^-2.
The non-universality of α[ℙ](0) has been established in inclusive DIS, where the slope of the γ*p total cross section with W has a pronounced Q^2 dependence [21]. The value of α′[ℙ] can be determined from exclusive VM production at HERA via the W dependence of the exponential b slope of the t distribution for fixed values of W, where b is expected to behave as
b = b[0] + 4α′[ℙ] ln(W/W[0]),
where b[0] and W[0] are free parameters. The value of α′[ℙ] can also be derived from the W dependence of dσ/dt at fixed t,
dσ/dt = F(t)·W^(4(α[ℙ](t)−1)),
where F(t) is an arbitrary function. This approach has the advantage that no assumption needs to be made about the t dependence. The first indications from measurements of α[ℙ](t) in exclusive J/ψ photoproduction [8,22] are that α[ℙ](0) is larger and α′[ℙ] is smaller than those of the above soft Pomeron trajectory.
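As a rough numerical illustration of the shrinkage formula above (added here; b[0] and W[0] are placeholder values, and α′[ℙ] = 0.25 GeV^-2 is the soft-Pomeron slope quoted above, not a fit to these data):

import math

alpha_prime = 0.25          # GeV^-2, soft-Pomeron value
b0, W0 = 10.0, 32.0         # illustrative normalisation: b0 in GeV^-2, W0 in GeV

def b_slope(W):
    # b(W) = b0 + 4 * alpha' * ln(W / W0)
    return b0 + 4.0 * alpha_prime * math.log(W / W0)

print(b_slope(180.0) - b_slope(32.0))   # ~1.73 GeV^-2 of shrinkage over 32 < W < 180 GeV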
3 Experimental set-up
The present measurement is based on data taken with the ZEUS detector during two running periods of the HERA ep collider. During 1996–1997, protons with energy 820 GeV collided with 27.5 GeV
positrons, while during 1998–2000, 920 GeV protons collided with 27.5 GeV electrons or positrons. The sample used for this study corresponds to an integrated luminosity of 118.9 pb^-1, consisting of
37.2 pb^-1 e^+ p sample from 1996–1997 and 81.7 pb^-1 from the 1998–2000 sample (16.7 pb^-1 e^- and 65.0 pb^-1 e^+)^1.
A detailed description of the ZEUS detector can be found elsewhere [23,24]. A brief outline of the components that are most relevant for this analysis is given below.
Charged particles are tracked in the central tracking detector (CTD) [25-27]. The CTD consists of 72 cylindrical drift chamber layers, organised in nine superlayers covering the polar-angle^2 region
15° <θ <164°. The CTD operates in a magnetic field of 1.43 T provided by a thin solenoid. The transverse-momentum resolution for full-length tracks is σ(p[T])/p[T ]= 0.0058p[T ]⊕ 0.0065 ⊕ 0.0014/p[T
], with p[T ]in GeV.
The high-resolution uranium-scintillator calorimeter (CAL) [28-31] covers 99.7% of the total solid angle and consists of three parts: the forward (FCAL), the barrel (BCAL) and the rear (RCAL)
calorimeters. Each part is subdivided transversely into towers and longitudinally into one electromagnetic section (EMC) and either one (in RCAL) or two (in BCAL and FCAL) hadronic sections. The CAL
energy resolutions, as measured under test-beam conditions, are σ(E)/E = 0.18/√E for electrons and σ(E)/E = 0.35/√E for hadrons, with E in GeV.
The position of the scattered electron was determined by combining information from the CAL, the small-angle rear tracking detector [32] and the hadron-electron separator [33].
In 1998, the forward plug calorimeter (FPC) [34] was installed in the 20 × 20 cm^2 beam hole of the FCAL with a small hole of radius 3.15 cm in the centre to accommodate the beam pipe. The FPC
increased the forward calorimeter coverage by about one unit in pseudorapidity to η ≤ 5.
The leading-proton spectrometer (LPS) [35] detected positively charged particles scattered at small angles and carrying a substantial fraction, x[L], of the incoming proton momentum; these particles
remained in the beam-pipe and their trajectories were measured by a system of silicon microstrip detectors, located between 23.8 m and 90.0 m from the interaction point. The particle deflections
induced by the magnets of the proton beam-line allowed a momentum analysis of the scattered proton.
During the 1996–1997 data taking, a proton-remnant tagger (PRT1) was used to tag events in which the proton dissociates. It consisted of two layers of scintillation counters perpendicular to the beam
at Z = 5.15 m. The two layers were separated by a 2 mm-thick lead absorber. The pseudorapidity range covered by the PRT1 was 4.3 <η < 5.8.
The luminosity was measured from the rate of the bremsstrahlung process ep → eγp. The photon was measured in a lead-scintillator calorimeter [36-38] placed in the HERA tunnel at Z = -107 m.
4 Data selection and reconstruction
The following kinematic variables are used to describe exclusive ρ^0 production and its subsequent decay into a π^+π^- pair:
• the four-momenta of the incident electron (k), scattered electron (k'), incident proton (P), scattered proton (P') and virtual photon (q);
• Q^2 = -q^2 = -(k - k')^2, the negative squared four-momentum of the virtual photon;
• W^2 = (q + P)^2, the squared centre-of-mass energy of the photon-proton system;
• y = (P·q)/(P·k), the fraction of the electron energy transferred to the proton in its rest frame;
• M[ππ], the invariant mass of the two decay pions;
• t = (P - P')^2, the squared four-momentum transfer at the proton vertex;
• three helicity angles, Φ[h], θ[h ]and φ[h ](see Section 9).
The kinematic variables were reconstructed using the so-called "constrained" method [10,39], which uses the momenta of the decay particles measured in the CTD and the reconstructed polar and
azimuthal angles of the scattered electron.
The online event selection required an electron candidate in the CAL, along with the detection of at least one and not more than six tracks in the CTD.
In the offline selection, the following further requirements were imposed:
• the presence of a scattered electron, with energy in the CAL greater than 10 GeV and with an impact point on the face of the RCAL outside a rectangular area of 26.4 × 16 cm^2;
• E - P[Z] > 45 GeV, where E - P[Z] = ∑[i](E[i] - p[Z,i]) and the summation is over the energies and longitudinal momenta of the final-state electron and pions. This cut excludes events with
high energy photons radiated in the initial state;
• the Z coordinate of the interaction vertex within ± 50 cm of the nominal interaction point;
• in addition to the scattered electron, exactly two oppositely charged tracks, each associated with the reconstructed vertex, and each having pseudorapidity |η| less than 1.75 and transverse
momentum greater than 150 MeV; this excluded regions of low reconstruction efficiency and poor momentum resolution in the CTD. These tracks were treated in the following analysis as a π^+π^- pair;
• events with any energy deposit larger than 300 MeV in the CAL and not associated with the pion tracks (so-called 'unmatched islands') were rejected [40-42].
In addition, the following requirements were applied to select kinematic regions of high acceptance:
• the analysis was restricted to the kinematic regions 2 <Q^2 < 80 GeV^2 and 32 <W < 160 GeV in the 1996–1997 data and 2 <Q^2 < 160 GeV^2 and 32 <W < 180 GeV in the 1998–2000 sample;
• only events in the π^+π^- mass interval 0.65 <M[ππ ]< 1.1 GeV and with |t| < 1 GeV^2 were taken. The mass interval is slightly narrower than that used previously [10], in order to reduce the effect
of the background from non-resonant π^+π^- production. In the selected M[ππ ]range, the resonant contribution is ≈ 100% (see Section 8).
The above selection yielded 22,400 events in the 1996–1997 sample and 49,300 events in the 1998–2000 sample, giving a total of 71,700 events for this analysis.
5 Monte Carlo simulation
The relevant Monte Carlo (MC) generators have been described in detail previously [10]. Here their main features are summarised.
The program ZEUSVM [43] interfaced to HERACLES4.4 [44] was used. The effective Q^2, W and t dependences of the cross section were parameterised to reproduce the data [42].
The decay angular distributions were generated uniformly and the MC events were then iteratively reweighted using the results of the present analysis for the 15 combinations of matrix elements r[ik]^04 and r[ik]^α (see Section 9).
The contribution of the proton-dissociative process was studied with the EPSOFT [45] generator for the 1996–1997 data and with PYTHIA [46] for the 1998–2000 data. The Q^2, W and t dependences were
parameterised to reproduce the control samples in the data. The decay angular distributions were generated as in the ZEUSVM sample.
The generated events were processed through the same chain of selection and reconstruction procedures as the data, thus accounting for trigger as well as detector acceptance and smearing effects. For
both MC sets, the number of simulated events after reconstruction was about a factor of seven greater than the number of reconstructed data events.
All measured distributions are well described by the MC simulations. Some examples are shown in Fig. 1, for the W, Q^2, t variables, and the three helicity angles, θ[h], φ[h], and Φ[h], and in Fig. 2
for the transverse momentum p[T ]of the pions, for different Q^2 bins.
Figure 1. Comparison between the data and the ZEUSVM MC distributions for (a) W, (b) Q^2, (c) |t|, (d) cosθ[h], (e) φ[h ]and (f) Φ[h ]for events with 0.65 <M[ππ ]< 1.1 GeV and |t| < 1.0 GeV^2. The MC
distributions are normalised to the data.
Figure 2. Comparison between the data and the ZEUSVM MC distributions for the transverse momentum, p[T], of π^+ and π^- particles, for different ranges of Q^2, as indicated in the figure. The events
are selected to be within 0.65 <M[ππ ]< 1.1 GeV and |t| < 1.0 GeV^2. The MC distributions are normalised to the data.
6 Systematics
The systematic uncertainties of the cross section were evaluated by varying the selection cuts and the MC simulation parameters. The following selection cuts were varied:
• the E - P[Z ]cut was changed within the appropriate resolution of ±3 GeV;
• the p[T ]of the pion tracks (default 0.15 GeV) was increased to 0.2 GeV;
• the distance of closest approach of the extrapolated track to the matched island in the CAL was changed from 30 cm to 20 cm;
• the π^+π^--mass window was changed to 0.65–1.2 GeV;
• the Z vertex cut was varied by ±10 cm;
• the rectangular area of the electron impact point on the CAL was increased by 0.5 cm in X and Y ;
• the energy of an unmatched island was lowered to 0.25 GeV and then raised to 0.35 GeV.
The dependence of the results on the precision with which the MC reproduces the performance of the detector and the data was checked by varying the following inputs within their estimated uncertainties:
• the reconstructed position of the electron was shifted with respect to the MC by ±1 mm;
• the electron-position resolution was varied by ±10% in the MC;
• the W^δ-dependence in the MC was changed by varying δ by ±0.03;
• the exponential t-distribution in the MC was reweighted by changing the nominal slope parameter b by ±0.5 GeV^-2;
• the angular distributions in the MC were reweighted assuming SCHC;
• the Q^2-distribution in the MC was reweighted by (Q^2 + M[ρ]^2)^k, where k = ±0.05.
The largest uncertainty of about ± 4% originated from the variation of the energy of the unmatched islands. All the other checks resulted on average in a 0.5% change in the measured cross sections.
All the systematic uncertainties were added in quadrature. In addition, the cross-section measurements have an overall normalisation uncertainty of ±2% due to the luminosity measurement.
7 Proton dissociation
The production of ρ^0 mesons may be accompanied by the proton-dissociation process, γ*p → ρ^0N. For low masses M[N ]of the dissociative system N, the hadronisation products may remain inside the
beam-pipe, leaving no signals in the main detector. The contribution of these events to the exclusive ρ^0 cross section was estimated from MC generators for proton-dissociative processes.
A class of proton dissociative events for which the final-state particles leave observed signals in the surrounding detectors was used to tune the M[N ]and the t distribution in the MC. In the
1998–2000 running period, these events were selected by requiring a signal in the FPC detector with energy above 1 GeV. The comparison of the data with PYTHIA expectations for the energy distribution
in the FPC is shown in Fig. 3(a). The same procedure was repeated with a sample of ρ^0 events for which the FPC energy was less than 1 GeV and a leading proton was measured in the LPS detector, with
the fraction of the incoming proton momentum x[L ]< 0.95. The comparison between the x[L ]distribution measured in the data and that expected from PYTHIA is shown in Fig. 3(b), where the elastic peak
in the data (x[L ]> 0.95) is also observed. Also shown in Fig. 3(c–e) is the fraction of proton-dissociative events expected in the selected ρ^0 sample as a function of Q^2, W and t. The fraction is
at the level of 19%, independent of Q^2 and W, but increasing with increasing |t|. The combined use of the FPC and LPS methods leads to an estimate of the proton dissociative contribution for |t| < 1
GeV^2 of 0.19 ± 0.02(stat.) ± 0.03(syst.). The systematic uncertainty was estimated by varying the parameters of the M[N ]distribution and by changing the FPC cut.
Figure 3. (a) The energy distribution in the FPC. The data (full dots) are compared to the expectations from the PYTHIA MC, normalised to the data. (b) The x[L ]distribution in the LPS. The data
(open circles) are compared to the expectations from the PYTHIA MC, normalised to the data for x[L ]< 0.95. The extracted fraction of proton-dissociation events, from the FPC data (dots) and from the
LPS data (open circles), as a function of (c) Q^2, (d) W and (e) |t|. All events were selected in the ρ^0 mass window (0.65–1.1 GeV). The dotted line in (c) and (d) represents a fit of a constant to
the proton-dissociation fraction.
In the 1996–1997 data-taking period, a similar procedure was applied, after tuning the EPSOFT MC to reproduce events with hits in the PRT1 or energy deposits in the FCAL. The proton-dissociative
contribution for |t| < 1 GeV^2 was determined to be 0.07 ± 0.02 after rejecting events with hits in the PRT1 or energy deposits in the FCAL. This number is consistent with that determined from the
LPS and FPC because of the different angular coverage of the PRT1.
After subtraction of the proton-dissociative contribution, a good agreement between the cross sections derived from the two data-taking periods was found. For all the quoted cross sections integrated
over t, the overall normalisation uncertainty due to the subtraction of the proton-dissociative contributions was estimated to be ± 4% and was not included in the systematic uncertainty. The
proton-dissociative contribution was statistically subtracted in each analysed bin, unless stated otherwise.
8 Mass distributions
The π^+π^--invariant-mass distribution is presented in Fig. 4. A clear enhancement in the ρ^0 region is observed. Background coming from the decay φ → K^+ K^-, where the kaons are misidentified as
pions, is expected [42] in the region M[ππ ]< 0.55 GeV. That coming from ω events in the decay channel ω → π^+π^-π^0, where the π^0 remains undetected, contributes [42] in the region M[ππ ]< 0.65
GeV. Therefore defining the selected ρ^0 events to be in the window 0.65 <M[ππ ]< 1.1 GeV ensures no background from these two channels.
Figure 4. The π^+π^- acceptance-corrected invariant-mass distribution. The line represents the best fit of the Söding form to the data in the range 0.65 <M[ππ ]< 1.1 GeV. The vertical lines indicate
the range of masses used for the analysis. The dashed line is the shape of a relativistic Breit-Wigner with the fitted parameters given in the figure. The dotted line is the interference term between
the non-resonant background (dash-dotted line) and the ρ^0 signal.
In order to estimate the non-resonant π^+π^- background under the ρ^0, the Söding parameterisation [47] was fitted to the data, with results shown in the figure. The resulting mass and width values
are in agreement with those given in the Particle Data Group [48] compilation. The integrated non-resonant background is of the order of 1% and is thus neglected.
The π^+π^- mass distributions in different regions of Q^2 and t are shown in Fig. 5 and Fig. 6, respectively. The shape of the mass distribution changes neither with Q^2 nor with t. The results of the fit to the Söding parameterisation are also shown. Note that the interference term decreases with Q^2 as expected but is independent of t, indicating that the non-exclusive background is small.
Figure 5. The π^+π^- acceptance-corrected invariant-mass distribution, for different Q^2 intervals, with mean values as indicated in the figure. The lines are defined in the caption of Fig. 4.
Figure 6. The π^+π^- acceptance-corrected invariant-mass distribution, for different t intervals, with mean values as indicated in the figure. The lines are defined in the caption of Fig. 4.
9 Angular distributions and decay-matrix density
The exclusive electroproduction and decay of ρ^0 mesons is described, at fixed W, Q^2, M[ππ ]and t, by three helicity angles: Φ[h ]is the angle between the ρ^0 production plane and the electron
scattering plane in the γ*p centre-of-mass frame; θ[h ]and φ[h ]are the polar and azimuthal angles of the positively charged decay pion in the s-channel helicity frame. In this frame, the
spin-quantisation axis is defined as the direction opposite to the momentum of the final-state proton in the ρ^0 rest frame. In the γ*p centre-of-mass system, φ[h ]is the angle between the decay
plane and the ρ^0 production plane. The angular distribution as a function of these three angles, W(cos θ[h], φ[h], Φ[h]), is parameterised by the ρ^0 spin-density matrix elements, ρ[ik]^α, where i, k =
-1, 0, 1 and by convention α = 0, 1, 2, 4, 5, 6 for an unpolarised charged-lepton beam [49]. The superscript denotes the decomposition of the spin-density matrix into contributions from the following
photon-polarisation states: unpolarised transverse photons (0); linearly polarised transverse photons (1,2); longitudinally polarised photons (4); and from the interference of the longitudinal and
transverse amplitudes (5,6).
The decay angular distribution can be expressed in terms of combinations, r^04[ik] and r^α[ik] (α = 1, 2, 5, 6), of the density matrix elements,
where ε is the ratio of the longitudinal- to transverse-photon fluxes and R = σ[L]/σ[T], with σ[L ]and σ[T ]the cross sections for exclusive ρ^0 production from longitudinal and transverse virtual photons, respectively. In the kinematic range of this analysis, the value of ε varies between 0.96 and 1 with an average value of 0.996; hence the contributions of r^0[ik] and r^4[ik] cannot be distinguished.
The Hermitian nature of the spin-density matrix and the requirement of parity conservation reduces the number of independent parameters to 15 [49]. A 15-parameter fit was performed to the data and
the obtained results are listed in Table 1 and shown in Fig. 7 as a function of Q^2. The published ZEUS results [50] at lower Q^2 values and the expectations of SCHC, when relevant, are also
included. The observed Q^2 dependence, expected in some calculations [51] and previously reported by H1 [52], is driven by the R dependence on Q^2 under the assumption of helicity conservation and
natural parity exchange. The significant deviation of r^5[00] from zero shows that SCHC does not hold [51], as was observed previously [50,52].
Table 1. Spin density matrix elements for electroproduction of ρ^0, for different intervals of Q^2. The first uncertainty is statistical, the second systematic.
Figure 7. The 15 density-matrix elements obtained from a fit to the data (dots), as a function of Q^2. Also shown in the figure are results from an earlier measurement [50] (open circles). The inner
error bars indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature. The dotted line at zero is the expectation from SCHC
when relevant.
The angular distribution for the decay of the ρ^0 meson, integrated over φ[h ]and Φ[h], reduces to
W(cos θ[h]) = (3/4) [ (1 - r^04[00]) + (3 r^04[00] - 1) cos^2 θ[h] ] .    (3)
The element r^04[00] may be extracted from a one-dimensional fit to the cos θ[h ]distribution. The cos θ[h ]distributions, for different Q^2 intervals, are shown in Fig. 8, together with the results of a one-dimensional fit of the form (3). The data are well described by the fitted parameter at each value of Q^2.
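As an illustration of how the fit of Eq. (3) works in practice — a sketch on synthetic inputs, not the analysis code; the bin values, noise level and use of scipy.optimize are assumptions:

    import numpy as np
    from scipy.optimize import curve_fit

    def w_costh(c, r0400):
        # Eq. (3): W(cos theta_h) = (3/4) * [ (1 - r) + (3r - 1) * cos^2 theta_h ]
        return 0.75 * ((1.0 - r0400) + (3.0 * r0400 - 1.0) * c * c)

    np.random.seed(0)
    c = np.linspace(-0.9, 0.9, 10)                     # cos(theta_h) bin centres
    w = w_costh(c, 0.7) * (1.0 + 0.02 * np.random.randn(10))

    (r_fit,), cov = curve_fit(w_costh, c, w, p0=(0.5,))
    print("r^04_00 = %.3f +- %.3f" % (r_fit, np.sqrt(cov[0, 0])))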
Figure 8. The acceptance-corrected cos θ[h ]distribution, for different Q^2 intervals, with mean values indicated in the figure. The line represents the fit to the data of Eq. (3).
10 Cross section
The measured γ*p cross sections are averaged over intervals listed in the appropriate tables and are quoted at fixed values of Q^2 and W. The cross sections are corrected for the mass range 0.28 <M[
ππ ]< 1.5 GeV and integrated over the full t-range, where applicable.
10.1 t dependence of σ(γ*p → ρ^0p)
The determination of σ(γ*p → ρ^0p) as a function of t for W = 90 GeV was performed by averaging over 40 <W < 140 GeV. The differential cross-section dσ/dt(γ*p → ρ^0p) is shown in Fig. 9 and listed in
Table 2, for different ranges of Q^2. An exponential form proportional to e^-b|t| was fitted to the data in each range of Q^2; the results are shown in Fig. 10. The exponent b, listed in Table 3,
decreases as a function of Q^2. After including the previous results at lower Q^2 [10,53], a sharp decrease of b is observed at low Q^2; the value of b then levels off at about 5 GeV^-2.
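As an illustration of the slope extraction — a minimal sketch, not the collaboration's fitting code; the |t| bins, values and uncertainties below are invented for the example:

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0.05, 0.15, 0.25, 0.35, 0.45])        # hypothetical |t| bin centres (GeV^2)
    dsdt = np.array([310.0, 190.0, 115.0, 70.0, 43.0])  # hypothetical dsigma/d|t| values
    dsdt_err = 0.05 * dsdt                              # assumed 5% uncertainties

    def expo(t, norm, b):
        # dsigma/d|t| = norm * exp(-b * |t|)
        return norm * np.exp(-b * t)

    pars, cov = curve_fit(expo, t, dsdt, p0=(300.0, 5.0), sigma=dsdt_err)
    print("b = %.2f +- %.2f GeV^-2" % (pars[1], np.sqrt(cov[1, 1])))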
Table 2. The differential cross-section dσ/dt for the reaction γ*p → ρ^0p for different Q^2 intervals. The first column gives the Q^2 bin, while the second column gives the Q^2 value at which the
cross section is quoted. The normalisation uncertainty due to luminosity (± 2%) and proton-dissociative background (± 4%) is not included.
Table 3. The slope b resulting from a fit to the differential cross-section dσ/dt to an exponential form for the reaction γ*p → ρ^0p, for different Q^2 intervals. The first column gives the Q^2 bin,
while the second column gives the Q^2 value at which the differential cross sections are quoted. The first uncertainty is statistical, the second systematic.
Figure 9. The differential cross-section dσ/d|t| as a function of |t| for γ*p → ρ^0p, for fixed values of Q^2, as indicated in the figure. The line represents an exponential fit to the data. The
inner error bars indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
Figure 10. The value of the slope b from a fit of the form dσ/d|t| ∝ e^-b|t| for exclusive ρ^0 electroproduction, as a function of Q^2. Also shown are values of b obtained previously at lower Q^2
values [10, 53]. The inner error bars indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
A compilation of the value of the slope b for exclusive VM electroproduction, as a function of Q^2 + M^2, is shown in Fig. 11. Here M is the mass of the corresponding final state. It also includes
the exclusive production of a real photon, the deeply virtual Compton scattering (DVCS) measurement [54]. When b is plotted as a function of Q^2 + M^2, the trend of b decreasing with increasing scale to an asymptotic value of 5 GeV^-2 seems to be a universal property of exclusive processes, as expected in perturbative QCD [2].
Figure 11. A compilation of the value of the slope b from a fit of the form dσ/d|t| ∝ e^-b|t| for exclusive vector-meson electroproduction, as a function of Q^2 + M^2. Also included is the DVCS
result. The inner error bars indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
10.2 Q^2 dependence of σ(γ*p → ρ^0p)
The determination of σ(γ*p → ρ^0p) as a function of Q^2 for W = 90 GeV was performed by averaging over 40 <W < 140 GeV. The results are shown in Fig. 12 with corresponding values given in Table 4. As
expected, a steep decrease of the cross section with Q^2 is observed. The photoproduction and the low-Q^2 (< 1 GeV^2) measurements are also shown in the figure. An attempt to fit the Q^2 dependence
with a simple propagator term of the form σ ∝ (Q^2 + M[ρ]^2)^-n,
Table 4. Cross-section measurements at Q^2 and W = 90 GeV averaged over the Q^2 and W intervals given in the table. The normalisation uncertainty due to luminosity (± 2%) and proton-dissociative
background (± 4%) is not included.
Figure 12. The Q^2 dependence of the cross section for exclusive ρ^0 electroproduction, at a γ*p centre-of-mass energy W = 90 GeV. The ZEUS 1994 [53] and the ZEUS 1995 [10] data points have been
extrapolated to W = 90 GeV using the parameterisations reported in the respective publications. The inner error bars indicate the statistical uncertainty, the outer error bars represent the
statistical and systematic uncertainty added in quadrature.
with the normalisation and n as free parameters, failed to produce results with an acceptable χ^2. The data appear to favour an n value which increases with Q^2.
10.3 W dependence of σ(γ*p → ρ^0p)
The values of the cross section σ(γ*p → ρ^0p) as a function of W, for fixed values of Q^2, are plotted in Fig. 13 and given in Table 5. The cross sections increase with increasing W, with the rate of
increase growing with increasing Q^2.
Figure 13. The W dependence of the cross section for exclusive ρ^0 electroproduction, for different Q^2 values, as indicated in the figure. The inner error bars indicate the statistical uncertainty,
the outer error bars represent the statistical and systematic uncertainty added in quadrature. The lines are the result of a fit of the form W^δ to the data.
Table 5. Cross-section values obtained at Q^2 and W as a result of averaging over bins of the Q^2 and W intervals given in the table. The normalisation uncertainty due to luminosity (± 2%) and proton-dissociative background (± 4%) is not included.
In order to quantify the rate of growth and its significance, the W dependence for each Q^2 value was fitted to the functional form
σ ~ W^δ.
The resulting δ values are presented as a function of Q^2 in Fig. 14 and listed in Table 6. For completeness, the δ values from lower Q^2 are also included. A clear increase of δ with Q^2 is
observed. Such an increase is expected in pQCD, and reflects the change of the low-x gluon distribution of the proton with Q^2.
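Since σ ~ W^δ is linear in log-log form, δ can be read off with an ordinary least-squares fit; a minimal sketch with invented numbers (not the measured cross sections):

    import numpy as np

    W = np.array([45.0, 65.0, 90.0, 120.0, 160.0])    # hypothetical W values (GeV)
    sigma = np.array([21.0, 25.0, 30.0, 35.0, 41.0])  # hypothetical cross sections (nb)

    # sigma ~ W^delta  =>  log(sigma) = delta * log(W) + const
    delta, const = np.polyfit(np.log(W), np.log(sigma), 1)
    print("delta = %.2f" % delta)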
Figure 14. The value of δ from a fit of the form W^δ for exclusive ρ^0 electroproduction, as a function of Q^2. Also shown are values of δ obtained previously at lower Q^2 values [10, 53]. The inner
error bars indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
Table 6. The value of δ obtained from fitting σ ~ W^δ. The first column gives the Q^2 bin, while the second column gives the Q^2 value at which the cross section was quoted.
To facilitate the comparison, the ZEUS cross-section data as a function of W have been replotted in the Q^2 bins used by H1 [9]. The results are shown in Fig. 15. The agreement between the two
measurements is reasonable. However, in some Q^2 bins the shape of the W dependence is somewhat different.
Figure 15. Comparison of the H1 (squares) and ZEUS (dots) measurements of the W dependence of σ(γ*p → ρ^0p), for different Q^2 values, as indicated in the figure. The inner error bars indicate the statistical
uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
A compilation of the value of the slope δ for exclusive VM electroproduction, as a function of Q^2 + M^2, is shown in Fig. 16. It also includes the DVCS result [54]. When plotted as a function of Q^2
+ M^2, the value of δ and its increase with the scale are similar for all the exclusive processes, as expected in perturbative QCD [2].
Figure 16. A compilation of the value of δ from a fit of the form W^δ for exclusive vector-meson electroproduction, as a function of Q^2 + M^2. It includes also the DVCS results. The inner error bars
indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
11 R = σ[L]/σ[T ]and r^04[00]
The SCHC hypothesis implies that r^1[1-1] = -Im r^2[1-1] and Re r^5[10] = -Im r^6[10]. In this case, the ratio R = σ[L]/σ[T ]can be related to the r^04[00] matrix element,
R = (1/ε) r^04[00] / ( 1 - r^04[00] ) ,    (4)
and thus can be extracted from the θ[h ]distribution alone.
If the SCHC requirement is relaxed, then the relation between R and r^04[00] is modified. In the kinematic range of the measurements presented in this paper, the non-zero value of Δ implies a correction of ~3% on R up to the highest Q^2 value, where it is ~10%, and is neglected.
Under the assumption that Eq. (4) is valid and for values of ε studied in this paper, <ε > = 0.996, the matrix element may be interpreted as
r^04[00] ≈ σ[L]/σ[tot] ,
where σ[tot ]= σ[L ]+ σ[T]. When the value of r^04[00] is close to one, as is the case for this analysis, the error on R becomes large and highly asymmetrical. It is then advantageous to study the properties of r^04[00] itself, which carries the same information, rather than R.
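To see why the uncertainty on R becomes large and one-sided, one can propagate a small uncertainty δr on r^04[00] through Eq. (4) — a standard exercise, not a result from this analysis:
dR/dr^04[00] = 1 / [ ε ( 1 - r^04[00] )^2 ] ,
so at r^04[00] = 0.9 and ε ≈ 1 the same δr is magnified by a factor of about 100 on R, and the magnification grows rapidly as r^04[00] → 1.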
The Q^2 dependence of r^04[00] for W = 90 GeV, averaged over the range 40 <W < 140 GeV, is shown in Fig. 17 and listed in Table 7 together with the corresponding R values. The figure includes three data points at lower Q^2 from previous studies [10,53]. An initial steep rise of r^04[00] with Q^2 is observed and, above Q^2 ≃ 10 GeV^2, the rise becomes milder. At Q^2 = 40 GeV^2, σ[L ]constitutes about
90% of the total γ*p cross section.
Figure 17. The ratio r^04[00] as a function of Q^2 for W = 90 GeV. Also included are values of r^04[00] from previous measurements at lower Q^2 values [10, 53]. The inner error bars indicate the statistical
uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
Table 7. The spin matrix element r^04[00] and the ratio of cross sections for longitudinally and transversely polarised photons, R = σ[L]/σ[T], as a function of Q^2, averaged over the Q^2 and W bins given in
the table. The first uncertainty is statistical, the second systematic.
The comparison of the H1 and ZEUS results is presented in Fig. 18 in terms of the ratio R. The H1 measurements are at W = 75 GeV and those of ZEUS at W = 90 GeV. Given the fact that R seems to be
independent of W (see below), both data sets can be directly compared. The two measurements are in good agreement.
Figure 18. Comparison of the H1 (squares) and ZEUS (dots) measurements of R as a function of Q^2. The H1 data are at W = 75 GeV and those of ZEUS at W = 90 GeV. Also included are measurements
performed previously at lower Q^2 values [10, 53]. The inner error bars indicate the statistical uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature.
The dependence of R on M[ππ ]is presented in Fig. 19 for two Q^2 intervals. The value of R falls rapidly with M[ππ ]above the central ρ^0 mass value. Although a change of R with M[ππ ]was anticipated
to be ~10% [55], the effect seen in the data is much stronger. The effect remains strong also at higher Q^2, contrary to expectations [55]. Once averaged over the ρ^0 mass region, the main
contribution to R comes from the central ρ^0 mass value.
Figure 19. The ratio R as a function of M[ππ], for W = 80 GeV, and for two values of Q^2, as indicated in the figure. The inner error bars indicate the statistical uncertainty, the outer error bars
represent the statistical and systematic uncertainty added in quadrature.
The W dependence of r^04[00], for different values of Q^2, is shown in Fig. 20 and listed in Table 8. Within the measurement uncertainties, r^04[00] is independent of W, for all Q^2 values. This implies that the W behaviour of σ[L ]is the same as that of σ[T], a result which is somewhat surprising. The qq̄ configurations in the wave function of a longitudinally polarised photon typically have a small transverse size, while the configurations contributing to a transversely polarised photon may have a large transverse size. The contribution to σ[T ]of large-size configurations, which are more hadron-like, is expected to lead to a shallower W dependence than in the case of σ[L]. Thus, the result presented in Fig. 20 suggests that the large-size configurations of the transversely polarised photon are suppressed.
Figure 20. The ratio r^04[00], as a function of W for different values of Q^2, as indicated in the figure. The inner error bars indicate the statistical uncertainty, the outer error bars represent the
statistical and systematic uncertainty added in quadrature.
Table 8. The spin matrix element r^04[00] and the ratio of cross sections for longitudinally and transversely polarised photons, R = σ[L]/σ[T], as a function of W for different values of Q^2, averaged over
the Q^2 and W bins given in the table. The first uncertainty is statistical, the second systematic.
The above conclusion can also explain the behaviour of r^04[00] as a function of t, shown in Fig. 21 and presented in Table 9 for two Q^2 values. Different sizes of interacting objects imply different t distributions, in particular a steeper dσ[T]/dt compared to dσ[L]/dt. This turns out not to be the case. In both Q^2 ranges, r^04[00] is independent of t, reinforcing the earlier conclusion about the
suppression of the large-size configurations in the transversely polarised photon.
Figure 21. The ratio r^04[00] as a function of |t| for different values of Q^2, as indicated in the figure. The inner error bars indicate the statistical uncertainty, the outer error bars represent the
statistical and systematic uncertainty added in quadrature.
Table 9. The spin matrix element r^04[00] and the ratio of cross sections for longitudinally and transversely polarised photons, R = σ[L]/σ[T], as a function of |t| for two values of Q^2, averaged over the Q
^2 and W bins given in the table. The first uncertainty is statistical, the second systematic.
12 Effective Pomeron trajectory
An effective Pomeron trajectory can be determined from exclusive ρ^0 electroproduction by using Eq. (2). Since the W dependence of the proton-dissociative contribution was established to be the same as that of the exclusive ρ^0 sample, no subtraction for proton-dissociative events was performed.
A study of the W dependence of the differential dσ/dt cross section at fixed t results in values of α[ℙ](t), listed in Table 10 and displayed in Fig. 22, for Q^2 = 3 GeV^2 (upper plot) and 10 GeV^2
(lower plot). A linear fit of the form of Eq. (1), shown in the figures, yields values of α[ℙ](0) and α′[ℙ], shown in Fig. 23 and listed in Table 11. The value of α[ℙ](0) increases slightly with Q^2, while the value of α′[ℙ] is Q^2 independent, within the measurement uncertainties. Its value tends to be lower than that of the soft Pomeron [56].
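For convenience: the linear form referred to as Eq. (1) is, presumably, the standard Regge parameterisation
α[ℙ](t) = α[ℙ](0) + α′[ℙ] t ,
with the intercept α[ℙ](0) and the slope α′[ℙ] as the two parameters of the straight-line fit.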
Table 10. The values of the effective Pomeron trajectory α[ℙ](t) as a function of |t|, for two Q^2 values. The first uncertainty is statistical, the second systematic.
Table 11. The values of the effective Pomeron trajectory intercept α[ℙ](0) and slope α′[ℙ], for two Q^2 values. The first uncertainty is statistical, the second systematic.
Figure 22. The effective Pomeron trajectory α[ℙ](t) as a function of t, for two values of Q^2, with average values indicated in the figure. The inner error bars indicate the statistical uncertainty,
the outer error bars represent the statistical and systematic uncertainty added in quadrature.
Figure 23. The parameters of the effective Pomeron trajectory in exclusive ρ^0 electroproduction, (a) α[ℙ](0) and (b) α′[ℙ], as a function of Q^2. The inner error bars indicate the statistical
uncertainty, the outer error bars represent the statistical and systematic uncertainty added in quadrature. The band in (a) and the dashed line in (b) are at the values of the parameters of the soft
Pomeron [19, 20].
An alternative way of measuring the slope of the Pomeron trajectory is to study the W dependence of the b slope, for fixed Q^2 values. Figure 24 displays the values of b as a function of W for two Q^
2 intervals (see also Table 12). The curves are a result of fitting the data to the expression b = b[0 ]+ 4α′[ℙ] ln(W/W[0]). The resulting slopes of the trajectory, for <Q^2 > = 3.5 GeV^2 and <Q^2 > = 11 GeV^2, are consistent with those presented in Table 11.
Table 12. The slope b resulting from a fit of the differential cross section dσ/dt for the reaction γ*p → ρ^0p to an exponential form, for different W values, for two Q^2 values. The first
uncertainty is statistical, the second systematic.
Figure 24. The b slope as a function of W for two ranges of Q^2, with average values as indicated in the figure. The inner error bars indicate the statistical uncertainty, the outer error bars
represent the statistical and systematic uncertainty added in quadrature. The lines are the results of fitting Eq. (2) to the data.
13 Comparison to models
In this section, predictions from several pQCD-inspired models are compared to the measurements.
13.1 The models
All models are based on the dipole representation of the virtual photon, in which the photon first fluctuates into a qq̄ pair (the colour dipole), which then interacts with the proton to produce the ρ^0. The ingredients necessary in such calculations are the virtual-photon wave-function, the dipole-proton cross section, and the ρ^0 wave-function. The photon wave-function is known from QED. The
models differ in the treatment of the dipole-proton cross section and the assumed ρ^0 wave-function.
The models of Frankfurt, Koepf and Strikman (FKS) [57,58] and of Martin, Ryskin and Teubner (MRT) [59,60] are based on two-gluon exchange as the dominant mechanism for the dipole-proton interaction.
The gluon distributions are derived from inclusive measurements of the proton structure function. In the FKS model, a three-dimensional Gaussian is assumed for the ρ^0 wave-function, while MRT use
parton-hadron duality and normalise the calculations to the data. For the comparison with the present measurements the MRST99 [61] and CTEQ6.5M [62] parameterisations for the gluon density were used.
Kowalski, Motyka and Watt (KMW) [63] use an improved version of the saturation model [64,65], with an explicit dependence on the impact parameter and DGLAP [66-69] evolution in Q^2, introduced
through the unintegrated gluon distribution [70]. Forshaw, Sandapen and Shaw (FSS) [71] model the dipole-proton interaction through the exchange of a soft [56] and a hard [72] Pomeron, with (Sat) and
without (Nosat) saturation, and use the DGKP and Gaussian ρ^0 wave-functions. In the model of Dosch and Ferreira (DF) [73], the dipole cross section is calculated using Wilson loops, making use of
the stochastic vacuum model for the non-perturbative QCD contribution.
While the calculations based on two-gluon exchange are limited to relatively high-Q^2 values (typically ~4 GeV^2), those based on modelling the dipole cross section incorporate both the perturbative
and non-perturbative aspects of ρ^0 production.
13.2 Comparison with data
The different predictions discussed above are compared to the Q^2 dependence of the cross section in Fig. 25. None of the models gives a good description of the data over the full kinematic range of
the measurement. The FSS model with the three-dimensional Gaussian ρ^0 wave-function describes the low-Q^2 data very well, while the KMW and DF models describe the Q^2 > 1 GeV^2 region well.
Figure 25. The Q^2 dependence of the γ*p → ρ^0p cross section at W = 90 GeV. The same data are plotted in (a) and (b), compared to different models, as described in the text. The predictions are
plotted in the range as provided by the authors.
The various predictions are also compared with the W dependence of the cross section, for different Q^2 values, in Fig. 26. Here again, none of the models reproduces the magnitude of the cross
section measurements. The closest to the data, in shape and magnitude, are the MRT model with the CTEQ6.5M parametrisation of the gluon distribution in the proton and the KMW model. The KMW model
gives a good description of the Q^2 dependence of δ, as shown in Fig. 27.
Figure 26. The W dependence of the γ*p → ρ^0p cross section for different values of Q^2, as indicated in the figure. The same data are plotted in (a) and (b), compared to different models, as
described in the text. The predictions are plotted in the range as provided by the authors.
Figure 27. The value of δ from a fit of the form σ ~ W^δ for the reaction γ*p → ρ^0p, as a function of Q^2. The lines are the predictions of models as denoted in the figure (see text).
The dependence of b on Q^2 is given only in the FKS and the KMW models as shown in Fig. 28. The FKS expectations are somewhat closer to the data.
Figure 28. The value of the slope b from a fit of the form dσ/d|t| ~ e^-b|t| for the reaction γ*p → ρ^0p, as a function of Q^2. The lines are the predictions of models as denoted in the figure (see text).
The expected Q^2 dependence of r^04[00] is compared to the measurements in Fig. 29. The MRT prediction, using the CTEQ6.5M gluon density, is the only prediction which describes the data in the whole Q^2 range. While all the models exhibit a mild dependence of r^04[00] on W, consistent with the data as shown in Figs. 30 and 31, none of them reproduces correctly the magnitude of r^04[00] in all the Q^2 bins.
Figure 29. The ratio r^04[00] as a function of Q^2 compared to the predictions of models as denoted in the figure (see text).
Figure 30. The ratio r^04[00] as a function of W for different values of Q^2 compared to the predictions of models as indicated in the figure (see text).
Figure 31. The ratio r^04[00] as a function of W for different values of Q^2 compared to the predictions of models as indicated in the figure (see text).
In summary, none of the models considered above is able to describe all the features of the data presented in this paper. The high precision of the measurements can be used to refine models for
exclusive ρ^0 electroproduction.
14 Summary and Conclusion
Exclusive ρ^0 electroproduction has been studied by ZEUS at HERA in the range 2 <Q^2 < 160 GeV^2 and 32 <W < 180 GeV with a high statistics sample. The Q^2 dependence of the γ*p → ρ^0p cross section
is a steeply falling function of Q^2. The cross section rises with W and its logarithmic derivative in W increases with increasing Q^2. The exponential slope of the t distribution decreases with
increasing Q^2 and levels off at about b = 5 GeV^-2. The decay angular distributions of the ρ^0 indicate s-channel helicity breaking. The ratio of cross sections induced by longitudinally and
transversely polarised virtual photons increases with Q^2, but is independent of W and of |t|, suggesting suppression of large-size configurations of the transversely polarised photon. The effective
Pomeron trajectory, averaged over the full Q^2 range, has a larger intercept and a smaller slope than those extracted from soft interactions. All these features are compatible with expectations of
perturbative QCD. However, none of the available models which have been compared to the measurements is able to reproduce all the features of the data.
The ZEUS Collaboration
S. Chekanov^1, M. Derrick, S. Magill, B. Musgrave, D. Nicholass^2, J. Repond, R. Yoshida
Argonne National Laboratory, Argonne, Illinois 60439-4815, USA ^n
M.C.K. Mattingly
Andrews University, Berrien Springs, Michigan 49104-0380, USA
M. Jechow, N. Pavel^†, A.G. Yagües Molina
Institut für Physik der Humboldt-Universität zu Berlin, Berlin, Germany ^b
S. Antonelli, P. Antonioli, G. Bari, M. Basile, L. Bellagamba, M. Bindi, D. Boscherini, A. Bruni, G. Bruni, L. Cifarelli, F. Cindolo, A. Contin, M. Corradi, S. De Pasquale, G. Iacobucci, A. Margotti,
R. Nania, A. Polini, G. Sartorelli, A. Zichichi
University and INFN Bologna, Bologna, Italy ^e
D. Bartsch, I. Brock, H. Hartmann, E. Hilger, H.-P. Jakob, M. Jüngst, O.M. Kind^3, A.E. Nuncio-Quiroz, E. Paul^4, R. Renner^5, U. Samson, V. Schönberg, R. Shehzadi, M. Wlasenko
Physikalisches Institut der Universität Bonn, Bonn, Germany ^b
N.H. Brook, G.P. Heath, J.D. Morris
H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom ^m
M. Capua, S. Fazio, A. Mastroberardino, M. Schioppa, G. Susinno, E. Tassi
Calabria University, Physics Department and INFN, Cosenza, Italy ^e
J.Y. Kim^6, K.J. Ma^7
Chonnam National University, Kwangju, South Korea ^g
Z.A. Ibrahim, B. Kamaluddin, W.A.T. Wan Abdullah
Jabatan Fizik, Universiti Malaya, 50603 Kuala Lumpur, Malaysia ^r
Y. Ning, Z. Ren, F. Sciulli
Nevis Laboratories, Columbia University, Irvington on Hudson, New York 10027 ^o
J. Chwastowski, A. Eskreys, J. Figiel, A. Galas, M. Gil, K. Olkiewicz, P. Stopa, L. Zawiejski
The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland ^i
L. Adamczyk, T. Bold, I. Grabowska-Bold, D. Kisielewska, J. Lukasik, M. Przybycień, L. Suszycki
Faculty of Physics and Applied Computer Science, AGH-University of Science and Technology, Cracow, Poland ^p
A. Kotański^8, W. Slomiński^9
Department of Physics, Jagellonian University, Cracow, Poland
V. Adler^10, U. Behrens, I. Bloch, C. Blohm, A. Bonato, K. Borras, R. Ciesielski, N. Coppola, A. Dossanov, V. Drugakov, J. Fourletova, A. Geiser, D. Gladkov, P. Göttlicher^11, J. Grebenyuk, I.
Gregor, T. Haas, W. Hain, C. Horn^12, A. Hüttmann, B. Kahle, I.I. Katkov, U. Klein^13, U. Kötz, H. Kowalski, E. Lobodzinska, B. Löhr, R. Mankel, I.-A. Melzer-Pellmann, S. Miglioranzi, A. Montanari,
T. Namsoo, D. Notz, L. Rinaldi, P. Roloff, I. Rubinsky, R. Santamarta, U. Schneekloth, A. Spiridonov^14, H. Stadie, D. Szuba^15, J. Szuba^16, T. Theedt, G. Wolf, K. Wrona, C. Youngman, W. Zeuner
Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany
W. Lohmann, S. Schlenstedt
Deutsches Elektronen-Synchrotron DESY, Zeuthen, Germany
G. Barbagli, E. Gallo, P. G. Pelfer
University and INFN Florence, Florence, Italy ^e
A. Bamberger, D. Dobur, F. Karstens, N.N. Vlasov^17
Fakultät für Physik der Universität Freiburg i.Br., Freiburg i.Br., Germany ^b
P.J. Bussey, A.T. Doyle, W. Dunne, M. Forrest, D.H. Saxon, I.O. Skillicorn
Department of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom ^m
I. Gialas^18, K. Papageorgiu
Department of Engineering in Management and Finance, Univ. of Aegean, Greece
T. Gosau, U. Holm, R. Klanner, E. Lohrmann, H. Salehi, P. Schleper, T. Schörner-Sadenius, J. Sztuk, K. Wichmann, K. Wick
Hamburg University, Institute of Exp. Physics, Hamburg, Germany ^b
C. Foudas, C. Fry, K.R. Long, A.D. Tapper
Imperial College London, High Energy Nuclear Physics Group, London, United Kingdom ^m
M. Kataoka^19, T. Matsumoto, K. Nagano, K. Tokushuku^20, S. Yamada, Y. Yamazaki^21
Institute of Particle and Nuclear Studies, KEK, Tsukuba, Japan ^f
A.N. Barakbaev, E.G. Boos, N.S. Pokrovskiy, B.O. Zhautykov
Institute of Physics and Technology of Ministry of Education and Science of Kazakhstan, Almaty, Kazakhstan
V. Aushev^1, M. Borodin, A. Kozulia, M. Lisovyi
Institute for Nuclear Research, National Academy of Sciences, Kiev and Kiev National University, Kiev, Ukraine
D. Son
Kyungpook National University, Center for High Energy Physics, Daegu, South Korea ^g
J. de Favereau, K. Piotrzkowski
Institut de Physique Nucléaire, Université Catholique de Louvain, Louvain-la-Neuve, Belgium ^q
F. Barreiro, C. Glasman^22, M. Jimenez, L. Labarga, J. del Peso, E. Ron, M. Soares, J. Terrón, M. Zambrana
Departamento de Física Teórica, Universidad Autónoma de Madrid, Madrid, Spain ^l
F. Corriveau, C. Liu, R. Walsh, C. Zhou
Department of Physics, McGill University, Montréal, Québec, Canada H3A 2T8 ^a
T. Tsurugai
Meiji Gakuin University, Faculty of General Education, Yokohama, Japan ^f
A. Antonov, B.A. Dolgoshein, V. Sosnovtsev, A. Stifutkin, S. Suchkov
Moscow Engineering Physics Institute, Moscow, Russia ^j
R.K. Dementiev, P.F. Ermolov, L.K. Gladilin, L.A. Khein, I.A. Korzhavina, V.A. Kuzmin, B.B. Levchenko^23, O.Yu. Lukina, A.S. Proskuryakov, L.M. Shcheglova, D.S. Zotkin, S.A. Zotkin
Moscow State University, Institute of Nuclear Physics, Moscow, Russia ^k
I. Abt, C. Büttner, A. Caldwell, D. Kollar, W.B. Schmidke, J. Sutiak
Max-Planck-Institut für Physik, München, Germany
G. Grigorescu, A. Keramidas, E. Koffeman, P. Kooijman, A. Pellegrino, H. Tiecke, M. Vázquez^19, L. Wiggers
NIKHEF and University of Amsterdam, Amsterdam, Netherlands ^h
N. Brümmer, B. Bylsma, L.S. Durkin, A. Lee, T.Y. Ling
Physics Department, Ohio State University, Columbus, Ohio 43210 ^n
P.D. Allfrey, M.A. Bell, A.M. Cooper-Sarkar, R.C.E. Devenish, J. Ferrando, B. Foster, K. Korcsak-Gorzo, K. Oliver, S. Patel, V. Roberfroid^24, A. Robertson, P.B. Straub, C. Uribe-Estrada, R. Walczak
Department of Physics, University of Oxford, Oxford United Kingdom ^m
P. Bellan, A. Bertolin, R. Brugnera, R. Carlin, F. Dal Corso, S. Dusini, A. Garfagnini, S. Limentani, A. Longhin, L. Stanco, M. Turcato
Dipartimento di Fisica dell' Università and INFN, Padova, Italy ^e
B.Y. Oh, A. Raval, J. Ukleja^25, J.J. Whitmore^26
Department of Physics, Pennsylvania State University, University Park, Pennsylvania 16802 ^o
Y. Iga
Polytechnic University, Sagamihara, Japan ^f
G. D'Agostini, G. Marini, A. Nigro
Dipartimento di Fisica, Università 'La Sapienza' and INFN, Rome, Italy ^e
J.E. Cole, J.C. Hart
Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, United Kingdom ^m
H. Abramowicz^27, R. Ingbir, S. Kananov, A. Kreisel, A. Levy, O. Smith, A. Stern
Raymond and Beverly Sackler Faculty of Exact Sciences, School of Physics, Tel-Aviv University, Tel-Aviv, Israel ^d
M. Kuze, J. Maeda
Department of Physics, Tokyo Institute of Technology, Tokyo, Japan ^f
R. Hori, S. Kagawa^28, N. Okazaki, S. Shimizu, T. Tawara
Department of Physics, University of Tokyo, Tokyo, Japan ^f
R. Hamatsu, H. Kaji^29, S. Kitamura^30, O. Ota, Y.D. Ri
Tokyo Metropolitan University, Department of Physics, Tokyo, Japan ^f
M.I. Ferrero, V. Monaco, R. Sacchi, A. Solano
Università di Torino and INFN, Torino, Italy ^e
M. Arneodo, M. Ruspa
Università del Piemonte Orientale, Novara, and INFN, Torino, Italy ^e
S. Fourletov, J.F. Martin
Department of Physics, University of Toronto, Toronto, Ontario, Canada M5S 1A7 ^a
S.K. Boutle^18, J.M. Butterworth, C. Gwenlan^31, T.W. Jones, J.H. Loizides, M.R. Sutton^31, M. Wing
Physics and Astronomy Department, University College London, London, United Kingdom ^m
B. Brzozowska, J. Ciborowski^32, G. Grzelak, P. Kulinski, P. Łużniak^33, J. Malka^33, R.J. Nowak, J.M. Pawlak, T. Tymieniecka, A. Ukleja, A.F. Żarnecki
Warsaw University, Institute of Experimental Physics, Warsaw, Poland
M. Adamus, P. Plucinski^34
Institute for Nuclear Studies, Warsaw, Poland
Y. Eisenberg, I. Giller, D. Hochman, U. Karshon, M. Rosin
Department of Particle Physics, Weizmann Institute, Rehovot, Israel ^c
E. Brownson, T. Danielson, A. Everett, D. Kçira, D.D. Reeder^4, P. Ryan, A.A. Savin, W.H. Smith, H. Wolfe
Department of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA ^n
S. Bhadra, C.D. Catterall, Y. Cui, G. Hartner, S. Menary, U. Noor, J. Standage, J. Whyte
Department of Physics, York University, Ontario, Canada M3J 1P3 ^a
^1 supported by DESY, Germany
^2 also affiliated with University College London, UK
^3 now at Humboldt University, Berlin, Germany
^4 retired
^5 self-employed
^6 supported by Chonnam National University in 2005
^7 supported by a scholarship of the World Laboratory Björn Wiik Research Project
^8 supported by the research grant no. 1 P03B 04529 (2005–2008)
^9 This work was supported in part by the Marie Curie Actions Transfer of Knowledge project COCOS (contract MTKD-CT-2004-517186)
^10 now at Univ. Libre de Bruxelles, Belgium
^11 now at DESY group FEB, Hamburg, Germany
^12 now at Stanford Linear Accelerator Center, Stanford, USA
^13 now at University of Liverpool, UK
^14 also at Institut of Theoretical and Experimental Physics, Moscow, Russia
^15 also at INP, Cracow, Poland
^16 on leave of absence from FPACS, AGH-UST, Cracow, Poland
^17 partly supported by Moscow State University, Russia
^18 also affiliated with DESY
^19 now at CERN, Geneva, Switzerland
^20 also at University of Tokyo, Japan
^21 now at Kobe University, Japan
^22 Ramón y Cajal Fellow
^23 partly supported by Russian Foundation for Basic Research grant no. 05-02-39028-NSFC-a
^24 EU Marie Curie Fellow
^25 partially supported by Warsaw University, Poland
^26 This material was based on work supported by the National Science Foundation, while working at the Foundation.
^27 also at Max Planck Institute, Munich, Germany, Alexander von Humboldt Research Award
^28 now at KEK, Tsukuba, Japan
^29 now at Nagoya University, Japan
^30 Department of Radiological Science
^31 PPARC Advanced fellow
^32 also at Łódź University, Poland
^33 Łódź University, Poland
^34 supported by the Polish Ministry for Education and Science grant no. 1 P03B 14129
^† deceased
^a supported by the Natural Sciences and Engineering Research Council of Canada (NSERC)
^b supported by the German Federal Ministry for Education and Research (BMBF), under contract numbers 05 HZ6PDA, 05 HZ6GUA, 05 HZ6VFA and 05 HZ4KHA
^c supported in part by the MINERVA Gesellschaft für Forschung GmbH, the Israel Science Foundation (grant no. 293/02-11.2) and the U.S.-Israel Binational Science Foundation
^d supported by the German-Israeli Foundation and the Israel Science Foundation
^e supported by the Italian National Institute for Nuclear Physics (INFN)
^f supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) and its grants for Scientific Research
^g supported by the Korean Ministry of Education and Korea Science and Engineering Foundation
^h supported by the Netherlands Foundation for Research on Matter (FOM)
^i supported by the Polish State Committee for Scientific Research, grant no. 620/E-77/SPB/DESY/P-03/DZ 117/2003–2005 and grant no. 1P03B07427/2004–2006
^j partially supported by the German Federal Ministry for Education and Research (BMBF)
^k supported by RF Presidential grant N 8122.2006.2 for the leading scientific schools and by the Russian Ministry of Education and Science through its grant Research on High Energy Physics
^l supported by the Spanish Ministry of Education and Science through funds provided by CICYT
^m supported by the Particle Physics and Astronomy Research Council, UK
^n supported by the US Department of Energy
^o supported by the US National Science Foundation. Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the
views of the National Science Foundation.
^p supported by the Polish Ministry of Science and Higher Education as a scientific project (2006–2008)
^q supported by FNRS and its associated funds (IISN and FRIA) and by an Inter-University Attraction Poles Programme subsidised by the Belgian Federal Science Policy Office
^r supported by the Malaysian Ministry of Science, Technology and Innovation/Akademi Sains Malaysia grant SAGA 66-02-03-0048
^1From now on, the word "electron" will be used as a generic term for both electrons and positrons.
^2The ZEUS coordinate system is a right-handed Cartesian system, with the Z axis pointing in the proton direction, referred to as the "forward direction", and the X axis pointing left towards the
centre of HERA. The coordinate origin is at the nominal interaction point.
It is a pleasure to thank the DESY Directorate for their strong support and encouragement. The remarkable achievements of the HERA machine group were essential for the successful completion of this
work and are greatly appreciated. The design, construction and installation of the ZEUS detector has been made possible by the efforts of many people who are not listed as authors. We thank E.
Ferreira, J. Forshaw, M. Strikman, T. Teubner and G. Watt, for providing the results of their calculations.
1. Rev Mod Phys. 1999, 71:1275.
2. Phys Rev D. 1994, 50:3134.
3. ZEUS Coll., Breitweg J, et al.: Phys Lett B. 2000, 487:273.
4. ZEUS Coll., Chekanov S, et al.: Nucl Phys B. 2005, 718:3.
5. ZEUS Coll., Chekanov S, et al.: Nucl Phys B. 2004, 695:3.
6. Phys Lett B. 2000, 483:360.
7. Eur Phys J C. 2006, 46:585.
8. Collins JC, Frankfurt L, Strikman M: Phys Rev D. 1997, 56:2982.
9. Ivanov IP, Nikolaev NN, Savin AA: Phys Part Nucl. 2006, 37:1.
10. Phys Rev D. 1997, 56:5524.
11. J Phys G. 1998, 24:1181.
12. Frankfurt L, McDermott M, Strikman M: JHEP. 2001, 103:45.
13. Yu Ivanov D, Szymanowski L, Krasnikov G: J Exp Theor Phys Lett. 2004, 80:226.
14. The Dipole picture of small x physics: A Summary of the Amirim meeting, DESY-00-126. 2000.
15. Collins PDB: An Introduction to Regge Theory and High Energy Physics. Cambridge University Press, Cambridge, England; 1977.
16. Phys Lett B. 1997, 395:311.
17. Nucl Phys B. 1984, 231:189.
18. Phys Lett B. 2001, 520:183.
19. ZEUS Coll., Chekanov S, et al.: Eur Phys J C. 2002, 24:345.
20. ZEUS Coll., Holm U (ed): [http://www-zeus.desy.de/bluebook/bluebook.html]
21. Phys Lett B. 1992, 293:465.
22. Nucl Inst Meth A. 1989, 279:290.
23. Nucl Phys Proc Suppl B. 1993, 32:181.
24. Nucl Inst Meth A. 1994, 338:254.
25. Nucl Inst Meth A. 1991, 309:77.
26. Nucl Inst Meth A. 1991, 309:101.
27. Nucl Inst Meth A. 1992, 321:356.
28. Nucl Inst Meth A. 1993, 336:33.
29. Nucl Inst Meth A. 1996, 382:419.
30. Nucl Inst Meth A. 1989, 277:176.
31. Nucl Inst Meth A. 2000, 450:235.
32. Z Phys C. 1997, 73:253.
33. First measurement of HERA luminosity by ZEUS lumi monitor, Preprint DESY-92-066, DESY. 1992.
34. Z Phys C. 1994, 63:391.
35. Phys Lett B. 1995, 356:601.
36. Kwiatkowski A, Spiesberger H, Möhring H-J: Proceedings of the Workshop on Physics at HERA. Volume III. Edited by Buchmueller W, Ingelman G. DESY, Hamburg; 1991:1294.
37. Comp Phys Comm. 2001, 135:238.
38. Particle Data Group, Yao W-M, et al.: J Phys G. 2006, 33:1.
39. Nucl Phys B. 1973, 61:381.
40. Phys Rev D. 1998, 58:114026.
41. Phys Lett B. 2002, 539:25.
42. Phys Atom Nucl. 1998, 61:81; and correction in σ[L]/σ[T ]in the ρ^0 meson diffractive electroproduction [arXiv:hep-ph/9704279].
43. Phys Lett B. 1992, 296:227.
44. Frankfurt L, Koepf W, Strikman M: Phys Rev D. 1996, 54:3194.
45. Frankfurt L, Koepf W, Strikman M: Phys Rev D. 1998, 57:512.
46. Martin AD, Ryskin MG, Teubner T: Phys Rev D. 1997, 55:4329.
47. Martin AD, Ryskin MG, Teubner T: Phys Rev D. 2000, 62:014022.
48. Eur Phys J C. 1998, 4:463.
49. Phys Rev D. 2006, 74:074016.
50. Phys Rev D. 1999, 59:014017.
51. Phys Rev D. 1999, 60:114023.
52. Nucl Phys B. 1977, 126:298.
53. Kimber MA, Martin AD, Ryskin MG: Phys Rev D. 2001, 63:114027.
54. Forshaw JR, Sandapen R, Shaw G: Phys Rev D. 2004, 69:094013.
55. Phys Lett B. 2001, 518:63.
56. Dosch HG, Ferreira E: Eur Phys J C. 2007, 51:83.
quadratic equation from amazing lil boy
November 25th 2007, 11:01 AM
quadratic equation from amazing lil boy
hi there i want help with this quadratic equation.. im quiet young.. so try to explain it step by step so i can understand :)
y= 2x3+3x+19
p.s. the first 3 is squared
can someone identify a b and c value?? Thanks very much.. try fully explain it to me so i can do it next time by myself.. thanks
November 25th 2007, 11:11 AM
hi there i want help with this quadratic equation.. im quiet young.. so try to explain it step by step so i can understand :)
y= 2x3+3x+19
p.s. the first 3 is squared
can someone identify a b and c value?? Thanks very much.. try fully explain it to me so i can do it next time by myself.. thanks
Did you mean $3x^2 + 3x + 19$ ?
A quadratic equation is in the format: $ax^2 + bx + c$
Do you want the equation solved?
$0 = 3x^2 + 3x + (19 - y)$ [That part in brackets is your c-value]
We know that $x = \frac{ -b +- \sqrt{b^2 - 4ac}}{2a}$
So simply set the values into the equation:
$x = \frac{ -(3) +- \sqrt{(3)^2 - 4(3)(19 - y)}}{2(3)}$
$x = \frac{ -(3) +- \sqrt{12y - 219}}{6}$
It seems that for this equation to have real solutions, $y \geq 18.25$
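Checking the discriminant step explicitly (this just redoes the arithmetic above):
$b^2 - 4ac = 3^2 - 4(3)(19 - y) = 9 - 228 + 12y = 12y - 219$
and real solutions require $12y - 219 \geq 0$, i.e. $y \geq \frac{219}{12} = 18.25$.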
November 25th 2007, 11:20 AM
i didnt understand anything :S
November 25th 2007, 11:26 AM
November 25th 2007, 11:32 AM
Where did u get the 4 from?? and why do you put numbers 3 on brackets??
November 25th 2007, 11:35 AM
Where did u get the 4 from?? and why do you put numbers 3 on brackets??
Have you heard about the quadratic formula yet?
November 25th 2007, 11:37 AM
November 25th 2007, 11:42 AM
If you have the formula: $2x^2 + 3x + 4$, then the 2 is your a value, the 3 is your b value, and the 4 is your c value.
Like i said before: quadratic equations are in the format: $ax^2 + bx + c$
I'm sure you understand that.
The quadratic formula: $x = \frac{ -b +- \sqrt{b^2 - 4ac}}{2a}$
Thats how the formula looks, and you have to learn that format.
Let's say we want to solve that equation at the top that i used to explain.
Then wherever there is an a, we substitute it with 2.
Wherever there is a b, we substitute it with 3.
Wherever there is a c, we substitute it with 4.
Do you understand?
November 25th 2007, 11:52 AM
I'm just letting you know that I have to leave now. Sorry.
Soroban is around, he'll probably continue where I left off.
Have a good night. (Handshake)
November 25th 2007, 12:12 PM
ye thanks very much.. i understood almost everything just want to know how did you get 12 and 219... can anyone explain it?? and then how did he get 18??
a value is= 3
b= 4
c = 19
am i right??
November 25th 2007, 01:10 PM
oh ye one more thing.. you said wherever theres an a, i should substitute it with a 2.. why if you look at number 2 theres a bracket after it with number 3.. but the equation shows 2a.. why did u
put 2(3) instead of 2(2)
November 25th 2007, 06:43 PM
Solving Quadratic Equations
hi there i want help with this quadratic equation.. im quiet young.. so try to explain it step by step so i can understand :)
y= 2x3+3x+19
p.s. the first 3 is squared
can someone identify a b and c value?? Thanks very much.. try fully explain it to me so i can do it next time by myself.. thanks
To be a quadratic equation, it needs x^2 in it, where the power symbol ^ is used to mean that x is to the power 2.
You can learn a lot by reading some worked out examples as well as entering your own at webgraphing.com:
Free Online Quadratic Equation Solver: Solve by Quadratic Formula
It will give you a step-by-step walk-through on any quadratic equation. To start you might just click the button for solving a random equation, just to see how it looks. Then enter your own and read the worked-out solution.
November 25th 2007, 06:44 PM
If you have the formula: $2x^2 + 3x + 4$, then the 2 is your a value, the 3 is your b value, and the 4 is your c value.
Like i said before: quadratic equations are in the format: $ax^2 + bx + c$
I'm sure you understand that.
The quadratic formula: $x = \frac{ -b +- \sqrt{b^2 - 4ac}}{2a}$
Thats how the formula looks, and you have to learn that format.
Let's say we want to solve that equation at the top that i used to explain.
Then wherever there is an a, we substitute it with 2.
Wherever there is a b, we substitute it with 3.
Wherever there is a c, we substitute it with 4.
Do you understand?
use \pm to get $\pm$
November 26th 2007, 04:10 AM
oh ye one more thing.. you said wherever theres an a, i should substitute it with a 2.. why if you look at number 2 theres a bracket after it with number 3.. but the equation shows 2a.. why did u
put 2(3) instead of 2(2)
Oh yes, it should be 2(2). Thanks for the spot.
Try to spend a little time on it yourself to see how i got the values there. I simplified everything under the root.
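If it helps to see the whole recipe in one place, here is a small Python sketch of the quadratic formula (the example numbers are mine, not from the thread):

    import math

    def solve_quadratic(a, b, c):
        # solves a*x^2 + b*x + c = 0 for real roots
        disc = b * b - 4 * a * c      # the discriminant b^2 - 4ac
        if disc < 0:
            return None               # no real solutions
        root = math.sqrt(disc)
        return ((-b + root) / (2 * a), (-b - root) / (2 * a))

    # example: x^2 - 5x + 6 = 0 has roots 3 and 2
    print(solve_quadratic(1, -5, 6))  # (3.0, 2.0)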
November 26th 2007, 04:33 AM
hi there i want help with this quadratic equation.. im quiet young.. so try to explain it step by step so i can understand :)
y= 2x3+3x+19
p.s. the first 3 is squared
can someone identify a b and c value?? Thanks very much.. try fully explain it to me so i can do it next time by myself.. thanks
See this | {"url":"http://mathhelpforum.com/algebra/23458-quadratic-equation-amazing-lil-boy-print.html","timestamp":"2014-04-21T15:05:49Z","content_type":null,"content_length":"23479","record_id":"<urn:uuid:2a8807cd-91ff-4250-a2f2-f7721ef85c27>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverse tangent: Riemann surfaces
Real and imaginary parts of the continuation of arctan(z) over the complex z-plane. The real part is a faithful representation of the Riemann surface of arctan(z).
Real part of the continuation of arctan(z) over the complex z-plane. A logarithmic branch point is clearly visible. The viewpoint is from the lower half‐plane.
Real parts of the continuation of arctan(z) over the Riemann sphere. The branch points (the images of z = ±i) lie on the equator.
Need emergency help with setting Normal vectors! [Archive] - OpenGL Discussion and Help Forums
06-28-2010, 11:28 PM
Hello all,
I have a huge question about setting normal vectors for coned surfaces.
So I have a piece of code where i want to draw a tree by approximating it with 4 triangles
glBegin( GL_TRIANGLES);
glColor3f(0, 1, 0);
double yval = 0.5;
//One triangle side
glTexCoord2f( leafRep/2,leafRep);
glNormal3f(-1, yval, -1);
glNormal3f(1, yval,-1);
glTexCoord2f(0.0 ,0.0);
glBegin( GL_TRIANGLES);
//another side
glNormal3f(0, 1, 0);
glTexCoord2f( leafRep/2,leafRep);
glNormal3f(1, yval,-1);
glNormal3f(1, yval, 1);
glTexCoord2f(0.0 ,0.0);
glBegin( GL_TRIANGLES);
//another side
glNormal3f(0, 1, 0);
glTexCoord2f( leafRep/2,leafRep);
glNormal3f(1, yval, 1);
glNormal3f(-1, yval, 1);
glTexCoord2f(0.0 ,0.0);
glBegin( GL_TRIANGLES);
//another side
glNormal3f(0, 1, 0);
glTexCoord2f( leafRep/2,leafRep);
glNormal3f(-1, yval, -1);
glTexCoord2f(0.0 ,0.0);
I have been told that i am setting the normals incorrectly.
So i went on the internet and found a number of tutorials on Gouraud shading and how I should calculate the normals for each triangular surface and then calculate the average of the surface normals that intersect at a particular vertex to get the proper normal vector for that vertex.
But then I was told this:
This technique is only necessary if all you know is the what the triangles
are. When an object is defined ONLY in terms of the triangles, then
Gouraud requires that you calculate the normals that way.
However, when drawing a cylinder or cone or some object which is
analytically defined, you don't have to do that because you KNOW what the
normal is. Using symmetry, it is very easy to show that the answer you
get when averaging the inidividual normals is in fact the true normal.
So the bottom line is if you apply the technique you outline, you will
laboriously calculate a normal which you can determine by inspection.
I do not understand how that applies to my code because i did not use any kind of parametric equation involving sines or cosines to generate my shape.
How would i properly calculate the normals for the vertices to show a "curved effect" without going through the whole averaging the surface normals technique?
Is there even such a way? Do i have to calculate the slope from the top of my tree to one of the edge vertices and then figure something out. I am really confused :( | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-171420.html","timestamp":"2014-04-18T15:41:28Z","content_type":null,"content_length":"7154","record_id":"<urn:uuid:87250025-8bbe-4fac-8276-826d7093a03a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
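Not an answer from the thread, but to make the quoted advice concrete: for a cone (or a pyramid approximating one) with apex height h and base radius r, the smooth normal at a slant point with base angle theta is known analytically — it points along (h·cos(theta), r, h·sin(theta)) for a y-up cone. A minimal numpy sketch (h, r and the four corner angles are illustrative assumptions):

    import numpy as np

    h, r = 1.0, 1.0   # assumed cone height and base radius

    def cone_normal(theta):
        # analytic outward normal on the slant surface of a y-up cone,
        # apex at (0, h, 0), base circle of radius r in the y = 0 plane
        n = np.array([h * np.cos(theta), r, h * np.sin(theta)])
        return n / np.linalg.norm(n)

    # normals at the four base corners of a 4-triangle approximation
    for theta in np.arange(4) * np.pi / 2 + np.pi / 4:
        print(cone_normal(theta))

This gives the "curved" look without averaging face normals, because every vertex gets the normal of the ideal cone it approximates.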
Identifying points with the smallest Euclidean distance
I have a collection of n dimensional points and I want to find which 2 are the closest. The best I could come up for 2 dimensions is:
from numpy import *
myArr = array( [[1, 2],
[3, 4],
[5, 6],
[7, 8]] )
n = myArr.shape[0]
cross = [[sum( ( myArr[i] - myArr[j] ) ** 2 ), i, j]
         for i in xrange( n )
         for j in xrange( n )
         if i != j]
print min( cross )
which gives
[8, 0, 1]
But this is too slow for large arrays. What kind of optimisation can I apply to it?
@Ηλίας: Roughly how many points do you have? Please note that it's possible to have a set more than 2 points (even all the points) with the same distances (but inaccurate computations may not
reflect this, so eventually you need to be able to set a threshold trh where distance differences below trh are considered equal). You are not interested to find out closest point to a given one? –
eat Feb 25 '11 at 20:31
@eat It is a hierarchy cluster that I am building, and I need to find the two closest centroids. Normally less than a thousand points, but I need to see how much it can scale. Rounding errors,
won't be that important in my case. – Ηλίας Feb 25 '11 at 20:57
6 Answers
Try scipy.spatial.distance.pdist(myArr). This will give you a condensed distance matrix. You can use argmin on it and find the index of the smallest value. This can be converted into the pair information.
What is the easiest way to get those coordinates from that single integer? – Ηλίας Feb 25 '11 at 18:58
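One way to answer that follow-up — converting the flat index from the condensed matrix back into an (i, j) pair — is sketched below; this is my own illustration, not part of the answer, and the squareform detour trades some memory for simplicity:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    myArr = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])

    d = squareform(pdist(myArr))   # full n x n distance matrix
    np.fill_diagonal(d, np.inf)    # ignore zero self-distances
    i, j = np.unravel_index(np.argmin(d), d.shape)
    print(i, j, d[i, j])           # -> 0 1 2.828...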
There's a whole Wikipedia page on just this problem, see: http://en.wikipedia.org/wiki/Closest_pair_of_points
Executive summary: you can achieve O(n log n) with a recursive divide and conquer algorithm (outlined on the Wiki page, above).
2 Neat! I'm glad I hit refresh before writing: "Obviously the complexity is O(n^2)" ;o) – das_weezul Feb 25 '11 at 16:27
Great. If the points are to be added successively, and the minimum distance pair is to be updated, then maintaining a Delaunay triangulation structure is efficient. – Alexandre
C. Feb 25 '11 at 16:28
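For completeness, a minimal Python sketch of that O(n log n) divide-and-conquer — written for this writeup rather than taken from the answer, and assuming distinct 2-D points given as tuples:

    import math

    def closest_pair(points):
        # returns (distance, (p, q)) for the closest two points in the plane
        pts = sorted(points)                    # sorted by x
        by_y = sorted(pts, key=lambda p: p[1])  # sorted by y

        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        def rec(px, py):
            n = len(px)
            if n <= 3:  # brute force the small cases
                return min(((dist(p, q), (p, q))
                            for i, p in enumerate(px)
                            for q in px[i + 1:]), key=lambda t: t[0])
            mid = n // 2
            mid_x = px[mid][0]
            left = set(px[:mid])
            best = min(rec(px[:mid], [p for p in py if p in left]),
                       rec(px[mid:], [p for p in py if p not in left]),
                       key=lambda t: t[0])
            # check the strip around the dividing line
            strip = [p for p in py if abs(p[0] - mid_x) < best[0]]
            for i, p in enumerate(strip):
                for q in strip[i + 1:i + 8]:    # at most 7 neighbours matter
                    d = dist(p, q)
                    if d < best[0]:
                        best = (d, (p, q))
            return best

        return rec(pts, by_y)

    print(closest_pair([(1, 2), (3, 4), (5, 6), (7, 8)]))
    # -> (2.828..., ((1, 2), (3, 4)))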
You could take advantage of the latest version of SciPy's (v0.9) Delaunay triangulation tools. You can be sure that the closest two points will be an edge of a simplex in the
triangulation, which is a much smaller subset of pairs than doing every combination.
Here's the code (updated for general N-D):
import numpy
from scipy import spatial

def closest_pts(pts):
    # set up the triangulation
    # let Delaunay do the heavy lifting
    mesh = spatial.Delaunay(pts)

    # TODO: eliminate redundant edges (numpy.unique?)
    edges = numpy.vstack((mesh.vertices[:,:dim], mesh.vertices[:,-dim:]))

    # the rest is easy
    x = mesh.points[edges[:,0]]
    y = mesh.points[edges[:,1]]
    dists = numpy.sum((x-y)**2, 1)
    idx = numpy.argmin(dists)
    return edges[idx]
    #print 'distance: ', dists[idx]
    #print 'coords:\n', pts[closest_verts]

dim = 3
N = 1000*dim
pts = numpy.random.random(N).reshape(N/dim, dim)

Seems close to O(n).
May actually work in 2D. Have you made any timings? However this approach fails miserably in higher dim. Thanks – eat Feb 25 '11 at 21:31
@eat: why do you say it "fails miserably"? 3D is 4-5X slower than the same N in 2D. But any approach (except for the naive brute approach) is going to see slowdowns with D. – Paul Feb
25 '11 at 21:56
Well, it's kind of pointless to try to do Delaunay triangulation in 123D! So this won't solve OP's question (unless his nD is 2 or 3). Don't get me wrong, I'm actually very happy that
scipy is able to perform Delaunay triangulation so fast. Please make some timings with pdist for n= 2...123, you'll see. Thanks – eat Feb 25 '11 at 22:16
@eat: I missed the fact that the OP wanted a general N-D solution, I was under the impression it was strictly 2D. I'm a little "bridge-and-tunnel" and sometimes consider 3D not only as
"high dimensional", but the highest! – Paul Feb 26 '11 at 0:59
There is a scipy function pdist that will get you the pairwise distances between points in an array in a fairly efficient manner:

pdist(myArr)

that outputs the N*(N-1)/2 unique pairs (since r_ij == r_ji). You can then search on the minimum value and avoid the whole loop mess in your code.
Perhaps you could proceed along these lines:
In []: from scipy.spatial.distance import pdist as pd, squareform as sf
In []: m= 1234
In []: n= 123
In []: p= randn(m, n)
In []: d= sf(pd(p))
In []: a= arange(m)
In []: d[a, a]= d.max()
In []: where(d< d.min()+ 1e-9)
Out[]: (array([701, 730]), array([730, 701]))
With substantially more points you need to be able to somehow utilize the hierarchical structure of your clustering.
How fast is it compared to just doing a nested loop and keeping track of the shortest pair? I think creating a huge cross array is what might be hurting you. Even O(n^2) is still pretty quick if you're only doing 2 dimensional points.
It helps, but quickly degenerates for large matrices – Ηλίας Feb 25 '11 at 16:43
Real vs Nominal Interest Rates
John Creighto
If monetary policy worked (which it doesn't), then they should be identical (in shape, not necessarily in value), as inflation should be constant.
I see. What confuses me is that even if the shape is the same for both of them, once you change the money supply the price levels change, and that causes the nominal interest rate to change but not the real interest rate, effectively distorting the original curve in the context of real vs nominal interest rates.
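For reference, the two rates are tied together by the Fisher relation: (1 + i) = (1 + r)(1 + pi), i.e. i ≈ r + pi for small rates, where i is the nominal rate, r the real rate and pi the inflation rate. So a change in (expected) inflation moves the nominal rate while the real-rate curve can stay put, which is the distortion described above.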
Angle Measurement ( Read ) | Geometry
Angles are tricky things. Have you ever tried to measure or draw an angle?
Cassie is working with a protractor in math class. She has been assigned the task of drawing a 35 degree angle. She is wondering if this angle will work for a ramp at the skateboard park. The trouble
is that she isn't sure how to draw one or what one will look like.
Do you know?
This Concept will teach you how to draw angles using a protractor. When finished, you will know what a 35 degree angle will look like.
Previously we worked on naming angles using points on the rays and the vertex. We can also measure angles. If you look at the two angles that you just named, you will see that they are different
sizes. Angles are measured in degrees. The larger the angle the higher the number of degrees.
How can we measure angles?
Angles are measured using a special tool called a protractor.
Here is a picture of a protractor.
Notice that you can see all of the degrees on the protractor.
How can we use a protractor?
1. First, you line up the vertex with the little hole in the middle of the protractor, then carefully align the bottom ray with the bottom line of the protractor.
2. Then, you follow the top ray to the number of degrees that the angle measures.
Here are a few for you to try. Answer the following questions about angles.
Example A
How many degrees are in a straight line?
Solution: 180 degrees
Example B
If one ray of the angle is to the left of 90, is the angle less than 90 or greater than 90?
Solution: The angle is less than 90.
Example C
If an angle measures 120 degrees, is one of the rays to the left or right of 90?
Solution: To the right of 90
Remember Cassie and the angle? Here is the original problem once again.
Cassie is working with a protractor in math class. She has been assigned the task of drawing a 35 degree angle. She is wondering if this angle will work for a ramp at the skateboard park. The trouble
is that she isn't sure how to draw one or what one will look like.
To draw an angle, first you have to draw a line segment that will serve as the bottom ray of the angle. Then you can measure 35 degrees and here is what the angle will look like.
This angle is 35 degrees.
Vocabulary
Point
a location in space that does not have size or shape.
Ray
a line that has one endpoint and continues indefinitely in one direction.
Line
a set of connected points without endpoints.
Line Segment
a set of connected points with two endpoints.
Point of Intersection
the point where two intersecting lines meet.
Intersecting Lines
lines that cross or meet at some point.
Parallel Lines
lines that do not cross or meet EVER and are equidistant.
Angle
a geometric figure formed by two rays that connect at a single point or vertex.
Vertex
the point of intersection of the lines or rays that form an angle.
Protractor
a tool used to measure an angle in terms of degrees.
Guided Practice
Here is one for you to try on your own.
Look at the following angles. Use a protractor to measure each angle.
First, line up your protractor to measure each angle. Be sure that the vertex is at 0. Then write down each number of degrees.
1. 120 degrees
2. 180 degrees
3. 35 degrees
4. 90 degrees
Video Review
Khan Academy: Using a Protractor
Directions: Draw the following angles. Be sure to use a protractor and check your work with a friend.
1. 180 degrees
2. 100 degrees
3. 90 degrees
4. 15 degrees
5. 20 degrees
6. 45 degrees
7. 60 degrees
8. 110 degrees
9. 25 degrees
10. 160 degrees
11. 180 degrees
12. 140 degrees
13. 42 degrees
14. 115 degrees
15. 173 degrees | {"url":"http://www.ck12.org/geometry/Angle-Measurement/lesson/Angle-Measurement-Grade-6/","timestamp":"2014-04-18T01:31:25Z","content_type":null,"content_length":"107944","record_id":"<urn:uuid:2b9da909-33d2-406e-b61d-0c682236c2e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
Selection Sorting Algorithm in Computer Programming: C
Selection sort, also called in-place comparison sort, is a sorting algorithm in computer programming. It is well known for its simplicity and for performing few swaps. The article shows the process with C language code and an example.

As the name indicates, first we select the smallest item in the list and exchange it with the first item. Then we select the second smallest in the list and exchange it with the second element, and so on. Finally, all the items will be arranged in ascending order. Since the next smallest item is repeatedly selected and exchanged into place until the elements are sorted, this technique is called selection sort.
For example, consider the elements 50, 40, 30, 20, 10 and sort using selection sort.
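Tracing the passes (each pass swaps the smallest remaining element into place):

Initial: 50 40 30 20 10
Pass 1:  10 40 30 20 50  (10 exchanged with 50)
Pass 2:  10 20 30 40 50  (20 exchanged with 40)
Pass 3:  10 20 30 40 50  (30 already in place)
Pass 4:  10 20 30 40 50  (40 already in place)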
The position of the smallest element from the i-th position onwards can be obtained using the following code:

pos = i;
for(j = i + 1; j < n; j++)
{
    if(arr[j] < arr[pos])    /* remember the index of the smaller element */
        pos = j;
}
After finding the position of the smallest number, it should be exchanged with the i-th position. The equivalent statements are shown below:

temp = arr[pos];
arr[pos] = arr[i];
arr[i] = temp;
The above procedure has to be performed for each value of i in the range 0 <= i <= n-2. The equivalent algorithm to sort N elements using selection sort is shown below:

Step 1: [Input the number of items]
        Read: n
Step 2: [Read n elements]
        for i = 0 to n-1
            Read: arr[i]
        [End of for]
Step 3: for i = 0 to n-2 do
            pos = i
            for j = i+1 to n-1 do
                if(arr[j] < arr[pos]) then
                    pos = j
                [End of if]
            [End of for]
            temp = arr[pos]
            arr[pos] = arr[i]
            arr[i] = temp
        [End of for]
Step 4: [Display sorted items]
        for i = 0 to n-1
            Write: arr[i]
        [End of for]
Step 5: Exit

C program to implement the selection sort:
#include <stdio.h>
#include <conio.h>   /* for clrscr() and getch() (Turbo C) */

int main()
{
    int n, arr[10], pos, i, j, temp;
    clrscr();
    printf("Enter the number of items:\n");
    scanf("%d", &n);
    printf("Input the n items here:\n");
    for(i = 0; i < n; i++)
    {
        scanf("%d", &arr[i]);
    }
    for(i = 0; i < n; i++)
    {
        pos = i;
        for(j = i + 1; j < n; j++)
        {
            if(arr[j] < arr[pos])
                pos = j;
        }
        /* exchange the smallest remaining item with position i */
        temp = arr[pos];
        arr[pos] = arr[i];
        arr[i] = temp;
    }
    printf("The sorted elements are as:\n");
    for(i = 0; i < n; i++)
    {
        printf("%d\n", arr[i]);
    }
    getch();
    return 0;
}
The program output will look like this:
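For example, an illustrative run with the values used earlier:

Enter the number of items:
5
Input the n items here:
50 40 30 20 10
The sorted elements are as:
10
20
30
40
50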
Advantages:
• Very simple and easy to implement.
• Straightforward approach.
Disadvantages:
• It is not efficient; more efficient sorting techniques exist.
• Even if the elements are already sorted, n-1 passes are required.
Trig graphs
July 19th 2008, 12:57 PM
Trig graphs
Hope someone can help with a bit of confusion I have!? (Worried)

I'm happy with sketching trig graphs generally, but this question caught me out and I can't get my head around it!...

"Sketch the graph of y = 1 + 3cosec2x". I thought that was simply the graph of y = cosec x stretched with scale factor 1/2 and 3 in the x and y directions respectively, followed by a translation of vector (0, 1). This is where I went wrong: I drew my graph with the translation of 1 unit in the y direction, when actually the stretch in the y direction had moved the vertices of the upper branches a further 3 units up.

Why does a stretch in the y direction on this graph move the series of branches away from the x axis, thus moving the vertices, when the same transformation applied to a single parabola, i.e. y = 3(-x^2), does not move the location of the vertex?
Hope that makes sense!
Thanks for any help in advance.
July 19th 2008, 01:07 PM
If the function that the coefficient multiplies is anything other than zero, it will move away from the x axis. The vertex of x^2 is the same as that of 3(x^2) because 3 times 0 = 0. But csc(pi/2) = 1, and 3*1 = 3.
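To see it concretely for y = 1 + 3cosec2x (a worked check, not part of the original reply): at x = pi/4, cosec(2x) = cosec(pi/2) = 1, so y = 1 + 3(1) = 4; at x = 3pi/4, cosec(2x) = cosec(3pi/2) = -1, so y = 1 + 3(-1) = -2. So the vertices of the upper branches sit at y = 4 and those of the lower branches at y = -2, not at 2 and 0 as a translation-only reading would suggest.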
July 19th 2008, 01:22 PM
Thanks a lot, just had an experiment on my calculator! seems a bit clearer now but we shall see!(Wink) | {"url":"http://mathhelpforum.com/trigonometry/44068-trig-graphs-print.html","timestamp":"2014-04-20T00:31:36Z","content_type":null,"content_length":"4687","record_id":"<urn:uuid:9a539f44-c0fc-4d63-ab9d-97fa3fda43a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cadernos de Saúde Pública
Print version ISSN 0102-311X
Cad. Saúde Pública vol.25 n.9 Rio de Janeiro Sep. 2009
ARTIGO ARTICLE
Establishing the risk of neonatal mortality using a fuzzy predictive model
Modelo preditivo fuzzy para estabelecer o risco de morte neonatal
Luiz Fernando C. Nascimento^I; Paloma Maria S. Rocha Rizol^II,III; Luciana B. Abiuzi^III
^IDepartamento de Medicina, Universidade de Taubaté, Taubaté, Brasil
^IIFaculdade de Engenharia de Guaratinguetá, Universidade Estadual Paulista, Guaratinguetá, Brasil
^IIIInstituto Tecnológico de Aeronáutica, São José dos Campos, Brasil
The objective of this study was to develop a fuzzy model to estimate the possibility of neonatal mortality. A computational model was built, based on the fuzziness of the following variables: newborn birth weight, gestational age at delivery, Apgar score, and previous report of stillbirth. Mamdani's inference method was used, and the output was the risk of neonatal death given as a percentage. Twenty-four rules were created according to the inputs. The model was validated against a real data file with records from a Brazilian city. The receiver operating characteristic (ROC) curve was used to estimate the accuracy of the model, while average risks were compared using the Student t test. MATLAB 6.5 software was used to build the model. The average risks were smaller in surviving newborns (p < 0.001). The accuracy of the model was 0.90. The highest accuracy occurred at a risk cutoff of 25%, corresponding to a sensitivity of 0.70, a specificity of 0.98, a negative predictive value of 0.99 and a positive predictive value of 0.22. The model showed good accuracy as well as a good negative predictive value, and could be used in general hospitals.
Neonatal Mortality; Fuzzy Logic; Medical Informatics Computing; Risk Factors; Predictive Value of Tests
O objetivo do artigo foi avaliar o uso da lógica fuzzy para estimar possibilidade de óbito neonatal. Desenvolveu-se um modelo computacional com base na teoria dos conjuntos fuzzy, tendo como
variáveis peso ao nascer, idade gestacional, escore de Apgar e relato de natimorto. Empregou-se o método de inferência de Mamdani, e a variável de saída foi o risco de morte neonatal. Criaram-se 24
regras de acordo com as variáveis de entrada, e a validação do modelo utilizou um banco de dados real de uma cidade brasileira. A acurácia foi estimada pela curva ROC; os riscos foram comparados pelo
teste t de Student. O programa MATLAB 6.5 foi usado para construir o modelo. Os riscos médios foram menores para os que sobreviveram (p < 0,001). A acurácia do modelo foi 0,90. A maior acurácia foi
com possibilidade de risco igual ou menor que 25% (sensibilidade = 0,70, especificidade = 0,98, valor preditivo negativo = 0,99 e valor preditivo positivo = 0,22). O modelo mostrou acurácia e valor
preditivo negativo bons, podendo ser utilizado em hospitais gerais.
Mortalidade Neonatal; Lógica Fuzzy; Computação em Informática Médica; Fatores de Risco; Valor Preditivo dos Testes
Uncertainty, vagueness, and imprecision are very common in medicine, in notions such as fever (high or low) and weight (high or low), where the best and most useful descriptions of diseases often involve terms that are unavoidably vague.
Fuzzy set theory has been developed to deal with the concept of partial truth values, ranging from completely true to completely false, and has become a powerful tool for dealing with imprecision and
uncertainty, aiming at tractability, robustness and low-cost solutions for real-world problems.
These features and the ability to deal with linguistic terms could explain the increasing number of works applying Fuzzy Logic to problems in medicine ^1,2. In fact, the theory of Fuzzy Sets has
become an important mathematical approach in diagnosis systems ^3, and, more recently, in epidemiology and public health ^4. For example, a model using birth weight and gestational age was used to
estimate neonatal death risk ^5.
Neonatal mortality is defined as a death that occurs up to the 28th day of life, and it is a very important population health indicator. This indicator provides information on social welfare and on ethical and political aspects of a population under certain conditions. Low birth weight (a birth weight below 2,500g), preterm birth (before 37 completed weeks of gestation) ^6, severe depression at birth (an Apgar score below seven) and a previous report of stillbirth are important causes of neonatal mortality.
The incidences of low birth weight and preterm birth in Brazil are around 7% (Department for Informatics at the Unified National Health System. http://tabnet.datasus.gov.br/cgi/tabcgi.exe?sinasc/
cnv/nvuf.def, accessed on 14/Jun/2007). Neonatal mortality in the State of Sao Paulo, the most industrialized state in Brazil, was 9.89/1,000 livebirths in 2004 (Department for Informatics at the
Unified National Health System. http://tabnet.datasus.gov.br/cgi/tabcgi.exe?sim/cnv/infuf.def, accessed on 14/Jun/2007).
The estimate of risk of neonatal death can provide important information to pediatricians, especially to neonatal intensive care physicians, with respect to the attention a newborn requires. It is
evident that the care provided to a newborn infant will differ depending on the hospital and its location. In fairly small hospitals it is common for there to be no pediatrician present at the time
of birth, and other professionals are in charge of evaluating the newborn ^5.
To estimate the risk of neonatal death, regression models using dichotomous independent variables (Yes or No, Present or Absent) have been applied ^7. Fuzzy Logic allows assigning, for
instance, a newborn with birth weight of 2,350g to a fuzzy subset low birth weight with 0.63 membership degree and to a normal birth weight fuzzy subset with 0.25 membership degree, taking into
account the inherent uncertainties of this record. In fact, a newborn weighing 2,490g at birth and another weighing 2,510g at birth, who are classically categorized as low birth weight and normal
birth weight respectively, do not show significant differences across biological, anatomical and physiological aspects. In the fuzzy approach each element may be compatible with several categories,
with different membership degrees. The advantage of the fuzzy theory is to consider an even and more realistic classification of the children relating to the two variables assumed ^5.
The theory of fuzzy sets was introduced by Lotfi A. Zadeh in the 1960s as a means to model the uncertainty within natural language, bringing in the concept of vagueness. According to this alternative view, uncertainty is considered essential to science. To the reader who wishes to learn more about fuzzy logic theory, the book by Yen & Langari ^8 is recommended.
Thus, a theoretical fuzzy linguistic model is presented in the study, which is a low cost program able to evaluate more appropriately the risk of neonatal death based on birth weight, gestational
age, Apgar score and previous report of stillbirth.
A computational model is used with a fuzzy linguistic model to evaluate the risk of neonatal death. This model involves four previously named inputs: birth weight, gestational age, Apgar score and previous report of stillbirth. The model was developed from the knowledge of one expert, who elaborated three fuzzy sets for the variable birth weight (very low birth weight, low birth weight and normal birth weight); two fuzzy sets for the variable gestational age (preterm and term); two fuzzy sets for the variable Apgar score (low, when the values are below seven, and high, when the values are above eight); and two fuzzy sets for the variable previous report of stillbirth (few, if there were zero or one stillbirths, and many, if there were two or more). The output is the death risk, with five linguistic labels: very high, high, middle high, middle and low. These fuzzy sets were built by fuzzifying the classical pediatric classification. Situations such as small for gestational age, adequate for gestational age and large for gestational age were not considered in this study.
A fuzzy linguistic model is a rule-based system that uses fuzzy sets theory to address the issue. Its basic structure includes four main components, as shown in Figure 1:
A fuzzifier, which translates crisp inputs (classical numbers) into fuzzy values;
An inference engine that applies a fuzzy reasoning mechanism to obtain a fuzzy output (in the case of Mamdani inference);
A knowledge base, which contains both a set of fuzzy rules and a set of membership functions representing the fuzzy sets of the linguistic variable; and
A defuzzifier, which translates the fuzzy output into a crisp value.
The decision process is performed by the inference engine using the rules contained in the rule base. These fuzzy rules define the connection between fuzzy input and output. A fuzzy rule has a form:
if antecedent then consequent, where antecedent is a fuzzy expression composed of one or more fuzzy sets connected by fuzzy operators, and consequent is an expression that assigns fuzzy values to the
output variables. The inference process evaluates all rules in the rule base and combines the weighted consequents of all relevant rules into a single output fuzzy set (Mamdani's model). The fuzzy
output set may then be replaced by a "crisp" output value obtained by a process called defuzzification ^8.
The base rules are given in Table 1. When a newborn is very low birth weight and preterm, the Apgar score is low and the previous report of stillbirth is few, then the risk of neonatal death is very high, as
shown by rule 1. Note that the sequence of input is: birth weight; gestational age; Apgar score; previous report of stillbirth; and the output is risk of neonatal death, after the step named
defuzzification. Centroid was the defuzzification method used in this study and the risk of neonatal death was estimated as a percentage.
Note that, by combining all possible inputs, it is possible to build 24 rules. The procedure of the fuzzy linguistic model, given four of the above inputs for any child, consists of calculating the
membership degree of these values in all fuzzy sets of birth weight, gestational age, Apgar Score and previous report of stillbirth. Next, the risk of neonatal death is determined by inference of the
fuzzy rule set, using Mamdani's inference and defuzzification of the fuzzy output.
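A minimal sketch of a Mamdani system of this shape in Python, using the scikit-fuzzy package (the membership-function breakpoints below are illustrative assumptions, since the fuzzy sets are specified only graphically in Figure 2, and only two of the 24 rules are shown):

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# universes of discourse (ranges are assumed, based on the variables described above)
weight = ctrl.Antecedent(np.arange(0, 5001, 10), 'birth_weight')      # grams
age = ctrl.Antecedent(np.arange(20, 45.1, 0.1), 'gestational_age')    # weeks
apgar = ctrl.Antecedent(np.arange(0, 10.1, 0.1), 'apgar')
still = ctrl.Antecedent(np.arange(0, 6, 1), 'stillbirths')
risk = ctrl.Consequent(np.arange(0, 101, 1), 'risk')                  # percent; centroid defuzzification by default

# illustrative membership functions (breakpoints assumed)
weight['very_low'] = fuzz.trapmf(weight.universe, [0, 0, 1000, 1600])
weight['low'] = fuzz.trapmf(weight.universe, [1000, 1600, 2200, 2700])
weight['normal'] = fuzz.trapmf(weight.universe, [2200, 2800, 5000, 5000])
age['preterm'] = fuzz.trapmf(age.universe, [20, 20, 35, 38])
age['term'] = fuzz.trapmf(age.universe, [35, 38, 45, 45])
apgar['low'] = fuzz.trapmf(apgar.universe, [0, 0, 6, 8])
apgar['high'] = fuzz.trapmf(apgar.universe, [6, 8, 10, 10])
still['few'] = fuzz.trapmf(still.universe, [0, 0, 1, 2])
still['many'] = fuzz.trapmf(still.universe, [1, 2, 5, 5])
risk['low'] = fuzz.trimf(risk.universe, [0, 0, 25])
risk['very_high'] = fuzz.trimf(risk.universe, [75, 100, 100])

# two of the 24 rules as examples (the first mirrors rule 1 in the text)
rules = [
    ctrl.Rule(weight['very_low'] & age['preterm'] & apgar['low'] & still['few'], risk['very_high']),
    ctrl.Rule(weight['normal'] & age['term'] & apgar['high'] & still['few'], risk['low']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['birth_weight'] = 3500
sim.input['gestational_age'] = 38
sim.input['apgar'] = 9
sim.input['stillbirths'] = 0
sim.compute()
print(sim.output['risk'])   # centroid-defuzzified risk, in percent

With all 24 rules and the membership functions of Figure 2, the same simulation object would reproduce the worked examples given below.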
The fuzzy sets related to the linguistic variables birth weight, Apgar score, previous report of stillbirth and gestational age are presented in Figure 2.
This model was validated using a real data set containing the same variables as the defined fuzzy sets. The real data set was taken from São José dos Campos, a mid-sized city in the Southeast
of Brazil, in 2003. This data file contained information from the Brazilian Birth Certificate, an official document necessary for civil registration. This data file contained information about the
newborn's situation up to 28 days of life - dead or alive. The accuracy of the model was estimated by the ROC (receiver operating characteristic) curve and the risk values were evaluated using the
Student t test. The Median test or Mann-Whitney test was used when the risk values did not have a normal distribution. The MATLAB software (MathWorks, Natick, USA) was used to perform the analysis.
There were 58 neonatal deaths in 1,351 records. The mean of the risk values was 9.85% (SD = 14.02), the range of these values was 4.67-90.33% and the median value was 4.67%. The risk values did not have a normal distribution according to a Kolmogorov-Smirnov test (z = 5.47, p < 0.001). The Mann-Whitney test resulted in a mean rank of 1,194.01 for neonatal deaths and 652.76 for surviving newborns (z = -14.79, p < 0.001). The median test resulted in 49 neonatal deaths with risk values above the median (4.67%) and 1,071 surviving newborns with risk values equal to or below the median (χ^2 = 152.7, p < 0.001).
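These comparisons can be reproduced with standard tools; an illustrative sketch (synthetic stand-in data, not the study records; the paper's own analysis used MATLAB):

import numpy as np
from scipy import stats

# illustrative stand-ins for the risk values, split by outcome
rng = np.random.default_rng(0)
survivors = rng.exponential(scale=8.0, size=1293)   # skewed, like the reported distribution
deaths = rng.exponential(scale=30.0, size=58)
risks = np.concatenate([survivors, deaths])

# normality check (Kolmogorov-Smirnov against a fitted normal)
ks_stat, ks_p = stats.kstest(risks, 'norm', args=(risks.mean(), risks.std()))
# group comparison (Mann-Whitney U)
u_stat, mw_p = stats.mannwhitneyu(deaths, survivors, alternative='two-sided')
print(ks_p, mw_p)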
Figure 3 shows the membership functions of the output variable risk of neonatal death. The surfaces of the neonatal death risk as functions of gestational age and birth weight (in grams), and of Apgar score and birth weight (in grams), are shown in Figure 4.

It can be noted in this graph that the risk of neonatal death decreases monotonically when birth weight or gestational age increases, as expected, as it also does with a higher Apgar score (Apgar score vs. birth weight).
In order to validate the computational model created, six records were taken from the real data set with the following inputs: birth weight; gestational age; Apgar score; previous report of stillbirth.
The output (risk of neonatal death) was given by the model.
Consider, for example, a newborn with a birth weight of 3,500g, gestational age of 38 weeks, Apgar score of 5 and previous report of stillbirth of 0. With these four antecedents the following membership functions were activated: normal birth weight for the variable birth weight; term for the variable gestational age; low for the variable Apgar score; and few for the variable previous report of stillbirth. Rule 21 was activated and the output set activated was middle. After defuzzification through the centroid method, the result of the system (risk) is 25%.
Below are presented other examples:
Birth weight of 3,500, gestational age of 38 weeks, Apgar of 9, previous report of stillbirth of 0. Risk: 4.7%.
Birth weight of 1,500, gestational age of 38 weeks, Apgar of 9, previous report of stillbirth of 0. Risk: 25%.
Birth weight of 1,500, gestational age of 32 weeks, Apgar of 9, previous report of stillbirth of 0. Risk: 35.5%.
Birth weight of 1,500, gestational age of 32 weeks, Apgar of 5, previous report of stillbirth of 0. Risk: 65.5%.
Birth weight of 1,500, gestational age of 32 weeks, Apgar of 9, previous report of stillbirth of 0. Risk: 35.3%.
Birth weight of 1,500, gestational age of 32 weeks, Apgar of 5, previous report of stillbirth of 2. Risk: 79.5%.
In the first two cases both newborns survived.
Accuracy is highest at a risk cutoff of 25%, corresponding to a sensitivity of 0.70, specificity of 0.98, negative predictive value of 0.99 and positive predictive value of 0.22. At a 4.7% risk cutoff, we obtained a sensitivity of 0.82, specificity of 0.82, negative predictive value of 0.99 and positive predictive value of 0.16. The ROC curve is shown in Figure 5; the area under the curve
is 0.90 (95%CI: 0.84-0.96) (p < 0.001).
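Given per-newborn risks and observed outcomes, the area under the ROC curve can be computed directly; an illustrative sketch with made-up values (the paper's computation used MATLAB):

import numpy as np
from sklearn.metrics import roc_auc_score

# illustrative stand-ins: 1 = neonatal death, 0 = survival; risks are model outputs in percent
outcomes = np.array([0, 0, 0, 1, 1, 1])
risks = np.array([4.7, 12.0, 25.0, 35.5, 65.5, 79.5])
print(roc_auc_score(outcomes, risks))   # 1.0 for this perfectly separated toy example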
In this study, a fuzzy linguistic model to evaluate the risk of neonatal death based on birth weight, gestational age, Apgar score and previous report of stillbirth was proposed.
This study is not an epidemiological study about neonatal mortality; it aimed to build a computational predictive model by using fuzzy logic.
Neonatal mortality is a main component of childhood mortality (SUS Information Department. http://tabnet.datasus.gov.br/cgi/tabcgi.exe?sim/cnv/infuf.def, accessed on 14/Jun/2007). A means of identifying newborns at high risk of neonatal death can offer information to the physicians who attend these newborns, allowing them to take action and prevent devastating outcomes.
There are several methods to estimate the risk of neonatal death. The most commonly used methods are Pediatric Risk of Mortality (PRISM) ^9, the Score for Neonatal Acute Physiology (SNAP) ^10 and the
Clinical Risk Index for Babies (CRIB) ^11.
These scores use several variables and several blood analysis measures obtained while newborns are admitted to neonatal intensive care units. The ROC-curve accuracies obtained in those studies were 0.90 for CRIB and 0.92 for PRISM.
Furthermore, other predictive models need a considerable number of records to establish an association between the outcome, neonatal death, and determinant variables, such as birth weight, Apgar
score, previous report of stillbirth and gestational age, which is not necessary in the fuzzy model. Other approaches, like artificial neural networks or neuro-fuzzy systems, need records to train, check and
validate the model. The model presented here provided good results as shown in the ROC curve.
The advantage of the risk estimator presented here is that the model values do not change over time, which is not true of experts' opinions. In fact, experts could provide different values for death risk under the same conditions, depending on their positive or negative feelings and also on their geographic location. It is common to get different answers from experts for the same question within a week's time. In this sense, the model presented here could offer a standardization of the classification process. Moreover, unlike PRISM, SNAP and CRIB, our model does not require blood analyses.
In addition, this model prevents the variability in the analysis of newborn conditions provided by different health professionals, which could yield inequalities in the treatment. Besides, the fuzzy
model is very simple and involves low costs in terms of computing, making it an easy and inexpensive option, factors that are particularly relevant in developing and poor countries.
On the other hand, it is not possible to compare this model with other predictive models because the fuzzy model does not use blood analyses and current models such as PRISM, SNAP or CRIB do not use
the fuzzy variables.
In cities where no experts are available, the model can help in understanding and evaluating the risk of neonatal death based only on information regarding birth weight, Apgar score, previous report of stillbirth and gestational age, without the need for laboratory tests, with the values obtained immediately after birth. This information is available even in very modest conditions. A similar model was developed based only on expert opinions with agreements ^4.
On the other hand, it is important to bear in mind that the number of fuzzy rules grows exponentially with the number of input variables, and this can impair the model's performance. Besides, the inclusion of new variables does not guarantee the improvement and robustness of the model.
The application of fuzzy sets theory in medicine and, particularly, in pediatrics, is a new area of research. Nevertheless, this approach has provided promising results in several medical
applications, proposing a paradigmatic shift in health sciences ^2,12. The possibility of building a computational interface makes this fuzzy model a promising and useful predictive tool.
All authors participated equally in the study.
1. Sadegh-Zadeh K. Fuzzy genomes. Artif Intell Med 2000; 18:1-28.

2. Sousa CA, Duarte PS, Pereira JCR. Fuzzy logic and logistic regression in the decision making for parathyroid scintigraphy study. Rev Saúde Pública 2006; 40:898-906.

3. Adlassnig K-P. Proceedings of Erudit Workshop Fuzzy Diagnostic and Therapeutic Decision Support. Vienna: Österreichische Computer Gesellschaft; 2000.

4. Ortega NRS, Sallum PC, Massad E. Fuzzy dynamical system in epidemic modelling. Kybernetes 2000; 29:201-18.

5. Nascimento LFC, Ortega NRS. Fuzzy linguistic model for evaluating the risk of neonatal death. Rev Saúde Pública 2002; 36:686-92.

6. Abrams B, Newman V. Small for gestational age birth: maternal predictors and comparison with risk factors of spontaneous preterm delivery in the same cohort. Am J Obstet Gynecol 1991; 164:785-90.

7. Menezes AMB, Barros FC, Victora CG, Tomasi E, Halpern R, Oliveira ALB. Fatores de risco para mortalidade perinatal em Pelotas. Rev Saúde Pública 1998; 32:209-16.

8. Yen J, Langari R. Fuzzy logic: intelligence, control and information. Upper Saddle River: Prentice-Hall; 1999.

9. Pollack MM, Ruttiman EU, Getson PR. Pediatric risk of mortality (PRISM) score. Crit Care Med 1988; 16:1110-6.

10. Richardson DK, Gray JE, McKormick MC, Workman K, Goldmann DA. Score for neonatal acute physiologic severity index for neonatal intensive care. Pediatrics 1993; 91:617-23.

11. International Neonatal Network. The CRIB (critical risk index for babies) score: a tool for assessing initial neonatal risk and comparing performance of neonatal intensive care units. Lancet 1993; 342:193-8.

12. Sadegh-Zadeh K. Fundamentals of clinical methodology: 3. Nosology. Artif Intell Med 1999; 17:87-108.
L. F. C. Nascimento
Departamento de Medicina, Universidade de Taubaté
Rua Durval Rocha 500
Guaratinguetá, SP 12515-710, Brasil
Submitted on 23/Jul/2008
Final version resubmitted on 09/Mar/2009
Approved on 11/May/2009 | {"url":"http://www.scielosp.org/scielo.php?script=sci_arttext&pid=S0102-311X2009000900018&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-18T21:02:11Z","content_type":null,"content_length":"47911","record_id":"<urn:uuid:31594eaa-8c75-44b1-b6f0-e7fead63b500>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
Home» Gate »Gate Syllabus
Gate Syllabus
Linear Algebra: Matrices and Determinants, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line; surface and volume integrals; Stokes, Gauss and Green's theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy's and Euler's equations; Laplace transforms; PDEs - Laplace, heat and wave
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rules; single and multi-step methods for differential equations.
Sources of power on the farm-human, animal, mechanical, electrical, wind, solar and biomass; design and selection of machine elements - gears, pulleys, chains and sprockets and belts; overload safety
devices used in farm machinery; measurement of force, torque, speed, displacement and acceleration on machine elements.
Soil tillage; forces acting on a tillage tool; hitch systems and hitching of tillage implements; functional requirements, principles of working, construction and operation of manual, animal and power
operated equipment for tillage. sowing, planting, fertilizer application, inter-cultivation, spraying, mowing, chaff cutting, harvesting, threshing and transport; testing of agricultural machinery
and equipment; calculation of performance parameters -field capacity, efficiency, application rate and losses; cost analysis of implements and tractors.
Thermodynamic principles of I.C. engines; I.C. engine cycles; engine components; fuels and combustion; lubricants and their properties; I.C. engine systems - fuel, cooling, lubrication, ignition,
electrical, intake and exhaust; selection, operation, maintenance and repair of I.C. engines; power efficiencies and measurement; calculation of power, torque, fuel consumption, heat load and power
Tractors and power tillers - type, selection, maintenance and repair; tractor clutches and brakes; power transmission systems - gear trains, differential, final drives and power take-off; mechanics
of tractor chassis; traction theory; three point hitches- free link and restrained link operations; mechanical steering and hydraulic control systems used in tractors; human engineering and safety in
tractor design; tractor tests and performance.
Ideal and real fluids, properties of fluids; hydrostatic pressure and its measurement; hydrostatic forces on plane and curved surface; continuity equation; Bernoulli's theorem; laminar and turbulent
flow in pipes, Darcy-Weisbach and Hazen-Williams equations, Moody's diagram; flow through orifices and notches; flow in open channels.
Engineering properties of soils, fundamental definitions and relationships; index properties of soils; permeability and seepage analysis; shear strength, Mohr's circle of stresses; active and passive
earth pressures; stability of slopes.
Hydrological cycle; precipitation measurement, analysis of precipitation data; abstraction from precipitation; runoff; hydrograph analysis, unit hydrograph theory and application; stream flow
measurement; flood routing, hydrological reservoir and channel routing.
Mechanics of soil erosion, factors affecting erosion; soil loss estimation; biological and engineering measures to control erosion, terraces and bunds; vegetative waterways; gully control structures,
drop, drop inlet and chute spillways; farm ponds; earthen dams; principles of watershed management.
Water requirement of crops; consumptive use and evapo-transpiration; irrigation scheduling; irrigation efficiencies; design of prismatic and silt loaded channels; methods of irrigation water
application; design and evaluation of irrigation methods; drainage coefficient; surface and subsurface drainage systems; leaching requirement and salinity control; irrigation and drainage water
quality; classification of pumps; pump characteristics; pump selection; types of aquifer; evaluation of aquifer properties; well hydraulics; ground water recharge.
Steady state heat transfer in conduction, convection and radiation; transient heat transfer in simple geometry; condensation and boiling heat transfer; working principles of heat exchangers;
diffusive and convective mass transfer; simultaneous heat and mass transfer in agricultural processing operations.
Material and energy balances in food processing systems; water activity, sorption and desorption isotherms; centrifugal separation of solids, liquids and gases; kinetics of microbial death -
pasteurisation and sterilization of liquid foods; preservation of food by cooling and freezing; psychrometry - properties of air-vapour mixture; concentration and dehydration of liquid foods -
evaporators, tray, drum and spray dryers.
Mechanics and energy requirement in size reduction of granular solids; particle size analysis for comminuted solids; size separation by screening; fluidisation of granular solids; cleaning and
grading efficiency and effectiveness of grain cleaners; conditioning and hydrothermal treatments for grains; dehydration of food grains; processes and machines for processing of cereals, pulses and
oilseeds; design considerations for grain silos.
City planning: Historical development of cities; principles of city planning; new towns; survey methods, site planning, planning regulations and building bye-laws.
Housing: Concept of shelter; housing policies and design; community planning; role of government agencies; finance and management.
Landscape Design: Principles of landscape design and site planning; history and landscape styles; landscape elements and materials; planting design.
Computer Aided Design: Application of computers in architecture and planning; understanding elements of hardware and software; computer graphics; programming languages - C and Visual Basic and usage
of packages such as AutoCAD.
Environmental and Building Science: Elements of environmental science; ecological principles concerning environment; role of micro-climate in design; climatic control through design elements; thermal
comfort; elements of solar architecture; principles of lighting and illumination; basic principles of architectural acoustics; air pollution, noise pollution and their control.
Visual and Urban Design: Principles of visual composition; proportion, scale, rhythm, symmetry, harmony, balance, form and colour; sense of place and space, division of space; focal point, vista,
imageability, visual survey.
History of Architecture: Indian - Indus valley, Vedic, Buddhist, Indo-Aryan, Dravidian and Mughal periods; European - Egyptian, Greek, Roman, medieval and renaissance periods.
Development of Contemporary Architecture: Architectural developments and impacts on society since industrial revolution; influence of modern art on architecture; works of national and international
architects; post modernism in architecture.
Building Services: Water supply, Sewerage and Drainage systems; Sanitary fittings and fixtures; principles of electrification of buildings; elevators, their standards and uses; air-conditioning
systems; fire fighting systems.
Building Construction and Management: Building construction techniques, methods and details; building systems and prefabrication of building elements; principles of modular coordination; estimation,
specification, valuation, professional practice; project management, PERT, CPM.
Materials and Structural Systems: Behavioural characteristics of all types of building materials e.g. mud, timber, bamboo, brick, concrete, steel, glass, FRP; principles of strength of materials;
design of structural elements in wood, steel and RCC; elastic and limit state design; complex structural systems; principles of pre-stressing.
Planning Theory: Planning process; multilevel planning; comprehensive planning; central place theory; settlement pattern; land use and land utilization.
Techniques of Planning: Planning surveys; Preparation of urban and regional structure plans, development plans, action plans; site planning principles and design; statistical methods; application of
remote sensing techniques in urban and regional planning.
Traffic and Transportation Planning: Principles of traffic engineering and transportation planning; methods of conducting surveys; design of roads, intersections and parking areas; hierarchy of roads
and levels of services; traffic and transport management in urban areas; traffic safety and traffic laws; public transportation planning; modes of transportation.
Services and Amenities: Principles and design of water supply systems, sewerage systems, solid waste disposal systems, power supply and communication systems; Health, education, recreation and
demography related standards at various levels of the settlements.
Development Administration and Management: Planning laws; development control and zoning regulations; laws relating to land acquisition; development enforcements, land ceiling; regional and urban
plan preparations; planning and municipal administration; taxation, revenue resources and fiscal management; public participation and role of NGO.
Linear Algebra: Matrix algebra, Systems of linear equations, Eigen values and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative, Maxima and
minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy's and Euler's equations, Initial and boundary value
problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy's integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rules; single and multi-step methods for differential equations.
Mechanics: Bending moments and shear forces in statically determinate beams; simple stress and strain: relationship; stress and strain in two dimensions, principal stresses, stress transformation,
Mohr's circle; simple bending theory; flexural shear stress; thin-walled pressure vessels; uniform torsion.
Structural Analysis: Analysis of statically determinate trusses, arches and frames; displacements in statically determinate structures and analysis of statically indeterminate structures by force/
energy methods; analysis by displacement methods (slope-deflection and moment-distribution methods); influence lines for determinate and indeterminate structures; basic concepts of matrix methods of
structural analysis.
Concrete Structures: Basic working stress and limit states design concepts; analysis of ultimate load capacity and design of members subject to flexure, shear, compression and torsion (beams, columns
and isolated footings); basic elements of prestressed concrete: analysis of beam sections at transfer and service loads.
Steel Structures: Analysis and design of tension and compression members, beams and beam-columns, column bases; connections - simple and eccentric, beam-column connections, plate girders and trusses;
plastic analysis of beams and frames.
Soil Mechanics: Origin of soils; soil classification; three-phase system, fundamental definitions, relationship and inter-relationships; permeability and seepage; effective stress principle:
consolidation, compaction; shear strength.
Foundation Engineering: Sub-surface investigation - scope, drilling bore holes, sampling, penetrometer tests, plate load test; earth pressure theories, effect of water table, layered soils; stability
of slopes - infinite slopes, finite slopes; foundation types - foundation design requirements; shallow foundations; bearing capacity, effect of shape, water table and other factors, stress
distribution, settlement analysis in sands and clays; deep foundations - pile types, dynamic and static formulae, load capacity of piles in sands and clays.
Fluid Mechanics and Hydraulics: Hydrostatics, applications of Bernoulli equation, laminar and turbulent flow in pipes, pipe networks; concept of boundary layer and its growth; uniform flow, critical
flow and gradually varied flow in channels, specific energy concept, hydraulic jump; forces on immersed bodies; flow measurement in channels; tanks and pipes; dimensional analysis and hydraulic
modeling. Applications of momentum equation, potential flow, kinematics of flow; velocity triangles and specific speed of pumps and turbines.
Hydrology: Hydrologic cycle; rainfall; evaporation infiltration, unit hydrographs, flood estimation, reservoir design, reservoir and channel routing, well hydraulics.
Irrigation: Duty, delta, estimation of evapo-transpiration; crop water requirements; design of lined and unlined canals; waterways; head works, gravity dams and Ogee spillways. Designs of weirs on
permeable foundation, irrigation methods.
Water requirements; quality and standards, basic unit processes and operations for water treatment, distribution of water. Sewage and sewerage treatment: quantity and characteristic of waste water
sewerage; primary and secondary treatment of waste water; sludge disposal; effluent discharge standards.
Highway planning; geometric design of highways; testing and specifications of paving materials; design of flexible and rigid pavements.
Linear Algebra: Matrix algebra, Systems of linear equations, Eigen values and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative, Maxima and
minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy's and Euler's equations, Initial and boundary value
problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy's integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rules; single and multi-step methods for differential equations.
Process Calculations and Thermodynamics: Laws of conservation of mass and energy; use of tie components; recycle, bypass and purge calculations; degree of freedom analysis.
First and Second laws of thermodynamics and their applications; equations of state and thermodynamic properties of real systems; phase equilibria; fugacity, excess properties and correlations of
activity coefficients; chemical reaction equilibria.
Fluid Mechanics and Mechanical Operations: Fluid statics, Newtonian and non-Newtonian fluids, Bernoulli equation, Macroscopic friction factors, energy balance, dimensional analysis, shell balances,
flow through pipeline systems, flow meters, pumps and compressors, packed and fluidized beds, elementary boundary layer theory, size reduction and size separation; free and hindered settling;
centrifuge and cyclones; thickening and classification, filtration, mixing and agitation; conveying of solids.
Heat Transfer: Conduction, convection and radiation, heat transfer coefficients, steady and unsteady heat conduction, boiling, condensation and evaporation; types of heat exchangers and evaporators
and their design.
Mass Transfer: Fick's law, molecular diffusion in fluids, mass transfer coefficients, film, penetration and surface renewal theories; momentum, heat and mass transfer analogies; stagewise and
continuous contacting and stage efficiencies; HTU & NTU concepts design and operation of equipment for distillation, absorption, leaching, liquid-liquid extraction, crystallization, drying,
humidification, dehumidification and adsorption.
Chemical Reaction Engineering: Theories of reaction rates; kinetics of homogeneous reactions, interpretation of kinetic data, single and multiple reactions in ideal reactors, non-ideal reactors;
residence time; non-isothermal reactors; kinetics of heterogeneous catalytic reactions; diffusion effects in catalysis.
Instrumentation and Process Control: Measurement of process variables; sensors, transducers and their dynamics, dynamics of simple systems, dynamics such as CSTRs, transfer functions and responses of
simple systems, process reaction curve, controller modes (P, PI, and PID); control valves; analysis of closed loop systems including stability, frequency response (including Bode plots) and
controller tuning, cascade, feed forward control.
Plant Design and Economics: Design and sizing of chemical engineering equipment such as compressors, heat exchangers, multistage contactors; principles of process economics and cost estimation
including total annualized cost, cost indexes, rate of return, payback period, discounted cash flow, optimization in Design.
Chemical Technology: Inorganic chemical industries; sulfuric acid, NaOH, fertilizers (Ammonia, Urea, SSP and TSP); natural products industries (Pulp and Paper, Sugar, Oil, and Fats); petroleum
refining and petrochemicals; polymerization industries; polyethylene, polypropylene, PVC and polyester synthetic fibers.
Mathematical Logic: Propositional Logic; First Order Logic.
Probability: Conditional Probability; Mean, Median, Mode and Standard Deviation; Random Variables; Distributions; uniform, normal, exponential, Poisson, Binomial.
Set Theory & Algebra: Sets; Relations; Functions; Groups; Partial Orders; Lattice; Boolean Algebra.
Combinatorics: Permutations; Combinations; Counting; Summation; generating functions; recurrence relations; asymptotics.
Graph Theory: Connectivity; spanning trees; Cut vertices & edges; covering; matching; independent sets; Colouring; Planarity; Isomorphism.
Linear Algebra: Algebra of matrices, determinants, systems of linear equations, Eigen values and Eigen vectors.
Numerical Methods: LU decomposition for systems of linear equations; numerical solutions of non linear algebraic equations by Secant, Bisection and Newton-Raphson Methods; Numerical integration by
trapezoidal and Simpson's rules.
Calculus: Limit, Continuity & differentiability, Mean value Theorems, Theorems of integral calculus, evaluation of definite & improper integrals, Partial derivatives, Total derivatives, maxima &
Formal Languages and Automata Theory: Regular languages and finite automata, Context free languages and Push-down automata, Recursively enumerable sets and Turing machines, Undecidability.
Analysis of Algorithms and Computational Complexity: Asymptotic analysis (best, worst, average case) of time and space, Upper and lower bounds on the complexity of specific problems, NP-completeness.
Digital Logic: Logic functions, Minimization, Design and synthesis of Combinational and Sequential circuits; Number representation and Computer Arithmetic (fixed and floating point);
Computer Organization: Machine instructions and addressing modes, ALU and Data-path, hardwired and micro-programmed control, Memory interface, I/O interface (Interrupt and DMA mode), Serial
communication interface, Instruction pipelining, Cache, main and secondary storage.
Data structures: Notion of abstract data types, Stack, Queue, List, Set, String, Tree, Binary search tree, Heap, Graph;
Programming Methodology: C programming, Program control (iteration, recursion, Functions), Scope, Binding, Parameter passing, Elementary concepts of Object oriented, Functional and Logic Programming;
Algorithms for problem solving: Tree and graph traversals, Connected components, Spanning trees, Shortest paths; Hashing, Sorting, Searching; Design techniques (Greedy, Dynamic Programming,
Compiler Design: Lexical analysis, Parsing, Syntax directed translation, Runtime environment, Code generation, Linking (static and dynamic); Operating Systems: Classical concepts (concurrency,
synchronization, deadlock), Processes, threads and Inter-process communication, CPU scheduling, Memory management, File systems, I/O systems, Protection and security.
Databases: Relational model (ER-model, relational algebra, tuple calculus), Database design (integrity constraints, normal forms), Query languages (SQL), File structures (sequential files, indexing,
B+ trees), Transactions and concurrency control;
Computer Networks: ISO/OSI stack, sliding window protocol, LAN Technologies (Ethernet, Token ring), TCP/UDP, IP, Basic concepts of switches, gateways, and routers.
CH - CHEMISTRY
Structure: Quantum theory - principles and techniques; applications to particle in a box, harmonic oscillator, rigid rotor and hydrogen atom; valence bond and molecular orbital theories and Huckel
approximation, approximate techniques: variation and perturbation; symmetry, point groups; rotational, vibrational, electronic, NMR and ESR spectroscopy.
Equilibrium: First law of thermodynamics, heat, energy and work; second law of thermodynamics and entropy; third law and absolute entropy; free energy; partial molar quantities; ideal and non-ideal
solutions; phase transformation: phase rule and phase diagrams- one, two, and three component systems; activity, activity coefficient, fugacity and fugacity coefficient ; chemical equilibrium,
response of chemical equilibrium to temperature and pressure; colligative properties; kinetic theory of gases; thermodynamics of electrochemical cells; standard electrode potentials: applications -
corrosion and energy conversion; molecular partition function (translational, rotational, vibrational and electronic).
Kinetics: Rates of chemical reactions, theories of reaction rates, collision and transition state theory; temperature dependence of chemical reactions; elementary reactions, consecutive elementary
reactions; steady state approximation, kinetics of photochemical reactions and free radical polymerization, homogenous and heterogeneous catalysis.
Non-Transition Elements: General characteristics, structure and reactions of simple and industrially important compounds, boranes, carboranes, silicates, silicones, diamond and graphite; hydrides,
oxides and oxoacids of N, P, S and halogens; boron nitride, borazines and phosphazenes; xenon compounds. Shapes of molecules, hard-soft acid base concept.
Transition Elements: General characteristics of d and f block elements; coordination chemistry: structure and isomerism, stability, theories of metal-ligand bonding (CFT and LFT), electronic spectra
and magnetic properties of transition metal complexes and lanthanides; metal carbonyls, metal-metal bonds and metal atom clusters, metallocenes; transition metal complexes with bonds to hydrogen,
alkyls, alkenes, and arenes; metal carbenes; use of organometallic compounds as catalysts in organic synthesis; mechanisms of substitution and electron transfer reactions of coordination complexes.
Role of metals with special reference to Na, K, Mg, Ca, Fe, Co, Zn, and Mo in biological systems.
Solids: Crystal systems and lattices, Miller planes, crystal packing, crystal defects; Bragg's Law; ionic crystals, band theory, metals and semiconductors. Spinels.
Instrumental methods of analysis: atomic absorption, UV-visible spectrometry, chromatographic and electro-analytical methods.
Synthesis, reactions and mechanisms involving the following: Alkenes, alkynes, arenes, alcohols, phenols, aldehydes, ketones, carboxylic acids and their derivatives; halides, nitro compounds and
amines; stereochemical and conformational effects on reactivity and specificity; reactions with diborane and peracids. Michael reaction, Robinson annulation, reactivity umpolung, acyl anion
equivalents; molecular rearrangements involving electron deficient atoms.
Photochemistry: Basic principles, photochemistry of olefins, carbonyl compounds, arenes, photo oxidation and reduction.
Pericyclic reactions: Cycloadditions, electrocyclic reactions, sigmatropic reactions; Woodward-Hoffmann rules.
Heterocycles: Structural properties and reactions of furan, pyrrole, thiophene, pyridine, indole.
Biomolecules: Structure, properties and reactions of mono- and di-saccharides, physico-chemical properties of amino acids, structural features of proteins and nucleic acids.
Spectroscopy: Principles and applications of IR, UV-visible, NMR and mass spectrometry in the determination of structures of organic compounds.
Linear Algebra: Matrix Algebra, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector
identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy's and Euler's
equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.
Complex variables: Analytic functions, Cauchy's integral theorem and integral formula, Taylor's and Laurent' series, Residue theorem, solution integrals.
Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and
Binomial distribution, Correlation and regression analysis.
Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.
Transform Theory: Fourier transform, Laplace transform, Z-transform.
Networks: Network graphs: matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems: superposition,
Thevenin and Norton's, maximum power transfer, Wye-Delta transformation. Steady state sinusoidal analysis using phasors. Linear constant coefficient differential equations; time domain analysis of
simple RLC circuits, Solution of network equations using Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations
for networks.
Electronic Devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, resistivity. Generation and recombination of
carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photo diode, LASERs. Device technology: integrated circuits fabrication process,
oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.
Analog Circuits: Equivalent circuits (large and small-signal) of diodes, BJTs, JFETs, and MOSFETs. Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and
FET amplifiers. Amplifiers: single-and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters.
Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits. Power supplies.
Digital circuits: Boolean algebra, minimization of Boolean functions; logic gates; digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: arithmetic circuits, code converters,
multiplexers and decoders. Sequential circuits: latches and flip-flops, counters and shift-registers. Sample and hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor (8085): architecture,
programming, memory and I/O interfacing.
Signals and Systems: Definitions and properties of Laplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier Transform, z-transform. Sampling
theorems. Linear Time-Invariant (LTI) Systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros, frequency response, group delay, phase delay. Signal
transmission through LTI systems. Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density.
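As a small worked illustration of the convolution property of LTI systems listed above (the signal and impulse-response values are arbitrary):

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input signal
h = np.array([0.5, 0.5])        # impulse response of a 2-tap averager
y = np.convolve(x, h)           # y[n] = sum_k x[k] * h[n-k]
print(y)                        # [0.5 1.5 2.5 1.5]

The output length is len(x) + len(h) - 1, the standard full-convolution length.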
Control Systems: Basic control system components; block diagrammatic description, reduction of block diagrams. Open loop and closed loop (feedback) systems and stability analysis of these systems.
Signal flow graphs and their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Tools and techniques for LTI control
system analysis: root loci, Routh-Hurwitz criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of Proportional-Integral-Derivative (PID)
control. State variable representation and solution of state equation of LTI control systems.
Communications: Analog communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers; elements of hardware,
realizations of analog communication systems; signal-to-noise ratio (SNR) calculations for amplitude modulation (AM) and frequency modulation (FM) for low noise conditions. Digital communication
systems: pulse code modulation (PCM), differential pulse code modulation (DPCM), delta modulation (DM); digital modulation schemes-amplitude, phase and frequency shift keying schemes (ASK, PSK, FSK),
matched filter receivers, bandwidth considerations and probability of error calculations for these schemes.
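For the probability-of-error item above, the standard coherent BPSK result with a matched-filter receiver (a textbook formula, quoted here for orientation) is

$P_e = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right), \qquad Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty} e^{-t^2/2}\,dt$

where $E_b$ is the energy per bit and $N_0/2$ is the two-sided noise power spectral density.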
Electromagnetics: Elements of vector calculus: divergence and curl; Gauss' and Stokes' theorems, Maxwell's equations: differential and integral forms. Wave equation, Poynting vector. Plane waves:
propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance
matching; pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas: Dipole antennas; antenna arrays; radiation pattern;
reciprocity theorem, antenna gain.
Linear Algebra: Matrix Algebra, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector
identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy's and Euler's
equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.
Complex variables: Analytic functions, Cauchy's integral theorem and integral formula, Taylor's and Laurent's series, Residue theorem, solution of integrals.
Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and
Binomial distributions, Correlation and regression analysis.
Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.
Transform Theory: Fourier transform, Laplace transform, Z-transform.
Electrical Circuits and Fields: Network graph, KCL, KVL, node/cut set, mesh/tie set analysis, transient response of d.c. and a.c. networks; sinusoidal steady-state analysis; resonance in electrical
circuits; concepts of ideal voltage and current sources, network theorems, driving point, immittance and transfer functions of two port networks, elementary concepts of filters; three phase circuits;
Fourier series and its application; Gauss theorem, electric field intensity and potential due to point, line, plane and spherical charge distribution, dielectrics, capacitance calculations for simple
configurations; Ampere's and Biot-Savart's law, inductance calculations for simple configurations.
Electrical Machines: Single phase transformer - equivalent circuit, phasor diagram, tests, regulation and efficiency; three phase transformers - connections, parallel operation; auto transformer and
three-winding transformer; principles of energy conversion, windings of rotating machines: D. C. generators and motors - characteristics, starting and speed control, armature reaction and
commutation; three phase induction motors-performance characteristics, starting and speed control; single-phase induction motors; synchronous generators-performance, regulation, parallel operation;
synchronous motors - starting, characteristics, applications, synchronous condensers; fractional horse power motors; permanent magnet and stepper motors.
Power Systems: Electric power generation - thermal, hydro, nuclear; transmission line parameters; steady-state performance of overhead transmission lines and cables and surge propagation;
distribution systems, insulators, bundle conductors, corona and radio interference effects; per-unit quantities; bus admittance and impedance matrices; load flow; voltage control and power factor
correction; economic operation; symmetrical components, analysis of symmetrical and unsymmetrical faults; principles of over current, differential and distance protections; concept of solid state
relays and digital protection; circuit breakers; concept of system stability-swing curves and equal area criterion; basic concepts of HVDC transmission.
Control Systems: Principles of feedback; transfer function; block diagrams: steady-state errors; stability-Routh and Nyquist criteria; Bode plots; compensation; root loci; elementary state variable
formulation; state transition matrix and response for Linear Time Invariant systems.
Electrical and Electronic Measurements: Bridges and potentiometers, PMMC, moving iron, dynamometer and induction type instruments; measurement of voltage, current, power, energy and power factor;
instrument transformers; digital voltmeters and multimeters; phase, time and frequency measurement; Q-meter, oscilloscopes, potentiometric recorders, error analysis.
Analog and Digital Electronics: Characteristics of diodes, BJT, FET, SCR; amplifiers-biasing, equivalent circuit and frequency response; oscillators and feedback amplifiers, operational amplifiers-
characteristics and applications; simple active filters; VCOs and timers; combinational and sequential logic circuits, multiplexer, Schmitt trigger, multivibrators, sample and hold circuits, A/D and
D/A converters; microprocessors and their applications.
Power Electronics and Electric Drives: Semiconductor power devices-diodes, transistors, thyristors, triacs, GTOs, MOSFETs and IGBTs - static characteristics and principles of operation; triggering
circuits; phase control rectifiers; bridge converters-fully controlled and half controlled; principles of choppers and inverters, basic concepts of adjustable speed dc and ac drives.
PART - I
Earth and planetary system; size, shape, internal structure and composition of the earth; atmosphere and greenhouse effect; isostasy; elements of seismology; continents and continental processes;
physical oceanography; palaeomagnetism, continental drift plate tectonics, geothermal energy.
Weathering; soil formation; action of river, wind and glacier; oceans and oceanic features; earthquakes, volcanoes, orogeny and mountain building; elements of structural geology; crystallography;
classification, composition and properties of minerals and rocks; engineering properties of rocks and soils, role of geology in the construction of engineering structures.
Processes of ore formation, occurrence and distribution of ores on land and on ocean floor; coal and petroleum resources in India; ground water geology including well hydraulics, geological time
scale and geochronology; stratigraphic principles and stratigraphy of India; basic concepts of gravity, magnetic and electrical prospecting for ores and ground water.
PART - IIA: GEOLOGY
Crystal symmetry, forms, twinning; crystal chemistry; optical mineralogy, classification of minerals, diagnostic properties of rock minerals.
Mineralogy, structure, texture and classification of igneous, sedimentary and metamorphic rocks, their origin and evolution; application of thermodynamics; structure and petrology of sedimentary
rocks; sedimentary processes and environments, sedimentary facies, basin studies; basement cover relationship;
Primary and secondary structures; geometry and genesis of folds, faults, joints, unconformities, cleavage, schistosity and lineation; methods of projection. Tectonites and their significance; shear
zone; superposed folding.
Morphology, classification and geological significance of important invertebrates, vertebrates, microfossils and palaeoflora; stratigraphic principles and Indian stratigraphy; geomorphic processes
and agents; development and evolution of landforms; slope and drainage; processes on deep oceanic and near-shore regions; quantitative and applied geomorphology; air photo interpretation and remote
sensing; chemical and optical properties of ore minerals; formation and localization of ore deposits; prospecting and exploration of economic minerals; coal and petroleum geology; origin and
distribution of mineral and fuel deposits in India; ore dressing and mineral economics.
Cosmic abundance; meteorites; geochemical evolution of the earth; geochemical cycles; distribution of major, minor and trace elements; isotope geochemistry; geochemistry of waters including solution
equilibria and water rock interaction.
Engineering properties of rocks and soils; rocks as construction material; geology of dams, tunnels and excavation sites; natural hazards; the fly ash problem; ground water geology and exploration;
water quality; impact of human activity; Remote sensing techniques for the interpretation of landforms and resource management.
PART - IIB: GEOPHYSICS
The earth as a planet; different motions of the earth; gravity field of the earth and its shape; geochronology; isostasy, seismology and interior of the earth; variation of density, velocity,
pressure, temperature, electrical and magnetic properties inside the earth; earthquakes-causes and measurements; zonation and seismic hazards; geomagnetic field, palaeomagnetism; oceanic and
continental lithosphere; plate tectonics; heat flow; upper and lower atmospheric phenomena.
Theories of scalar and vector potential fields; Laplace, Maxwell and Helmholtz equations for solution of different types of boundary value problems in Cartesian, cylindrical and spherical polar
coordinates; Green's theorem; Image theory; integral equations and conformal transformations in potential theory; Eikonal equation and ray theory.
'G' and 'g' units of measurement, density of rocks, gravimeters, preparation, analysis and interpretation of gravity maps; derivative maps, analytical continuation; gravity anomaly type curves;
calculation of mass.
Earth's magnetic field, units of measurement, magnetic susceptibility of rocks, magnetometers, corrections, preparation of magnetic maps, magnetic anomaly type curve, analytical continuation,
interpretation and application; magnetic well logging.
Conduction of electricity through rocks, electrical conductivities of metals, metallic, non-metallic and rock forming minerals, D.C. resistivity units and methods of measurement, electrode
configuration for sounding and profiling, application of filter theory, interpretation of resistivity field data, application; self potential - origin, classification, field measurement and
interpretation; induced polarization - time, frequency and phase domains; IP units and methods of measurement, interpretation and application; ground-water exploration.
Origin of electromagnetic field, elliptic polarization, methods of measurement for different source-receiver configurations, components in EM measurements, interpretation and applications; earth's
natural electromagnetic field, tellurics, magneto-tellurics; geomagnetic depth sounding principles, methods of measurement, processing of data and interpretation.
Seismic methods of prospecting: Reflection, refraction and CDP surveys; land and marine seismic sources, generation and propagation of elastic waves, velocity increasing with depth, geophones,
hydrophones, recording instruments (DFS), digital formats, field layouts, seismic noises and noise profile analysis, optimum geophone grouping, noise cancellation by shot and geophone arrays, 2D and
3D seismic data acquisition and processing, CDP stacking charts, binning, filtering, dip-moveout, static and dynamic corrections, deconvolution, migration, signal processing, Fourier and Hilbert
transforms, attribute analysis, bright and dim spots, seismic stratigraphy, high resolution seismics, VSP.
Principles and techniques of geophysical well-logging, SP, resistivity, induction, micro gamma ray, neutron, density, sonic, temperature, dip meter, caliper, nuclear magnetic, cement bond logging.
Quantitative evaluation of formations from well logs; well hydraulics and application of geophysical methods for groundwater study; application of bore hole geophysics in ground water, mineral and
oil exploration. Remote sensing techniques and application of remote sensing methods in geophysics.
Linear Algebra: Matrix Algebra, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector
identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy's and Euler's
equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.
Complex variables: Analytic functions, Cauchy's integral theorem and integral formula, Taylor's and Laurent's series, Residue theorem, solution of integrals.
Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and
Binomial distributions, Correlation and regression analysis.
Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.
Transform Theory: Fourier transform, Laplace transform, Z-transform.
Measurement Basics and Metrology: Static and dynamic characteristics of measurement systems. Standards and calibration. Error and uncertainty analysis, statistical analysis of data, and curve
fitting. Linear and angular measurements; Measurement of straightness, flatness, roundness and roughness.
Transducers, Mechanical Measurements and Industrial Instrumentation: Transducers - elastic, resistive, inductive, capacitive, thermo-electric, piezoelectric, photoelectric, electro-mechanical,
electro-chemical, and ultrasonic. Measurement of displacement, velocity (linear and rotational), acceleration, shock, vibration, force, torque, power, strain, stress, pressure, flow, temperature,
humidity, viscosity, and density. Energy storing elements, suspension systems and dampers.
Analog Electronics: Characteristics of diodes, BJTs, JFETs and MOSFETs; Diode circuits; Amplifiers: single and multi-stage, feedback; Frequency response; Operational amplifiers - design,
characteristic, linear and non-linear applications: difference amplifiers; instrumentation amplifiers; precision rectifiers, I-to-V converters, active filters, oscillators, comparators, signal
generators, wave shaping circuits.
Digital Electronics: Combinational logic circuits, minimization of Boolean functions; IC families (TTL, MOS, CMOS), arithmetic circuits, multiplexer and decoders. Sequential circuits: flip-flops,
counters, shift registers. Schmitt trigger, timers, and multivibrators. Analog switches, multiplexers, S/H circuits. Analog-to-digital and digital-to-analog converters. Basics of computer
organization and architecture. 8-bit microprocessor (8085), applications, memory, I/O interfacing, and microcontrollers.
Signals and Systems: Vectors and matrices; Fourier series; Fourier transforms; Ordinary differential equations. Impulse and frequency responses of first and second order systems. Laplace transform
and transfer function, convolution and correlation. Amplitude and frequency modulations and demodulations. Discrete time systems, difference equations, impulse and frequency responses; Z-transforms
and transfer functions; IIR and FIR filters.
Electrical and Electronic Measurements: Measurement of R, L and C; bridges and potentiometers. Measurement of voltage, current, power, power factor, and energy; Instrument transformers; Q meter,
waveform analyzers. Digital volt-meters and multi-meters. Time, phase and frequency measurements; Oscilloscope. Noise and interference in instrumentation.
Control Systems & Process Control: Principles of feedback; transfer function, signal flow graphs. Stability criteria, Bode plots, root-loci, Routh and Nyquist criteria. Compensation techniques; State
space analysis. System components: mechanical, hydraulic, pneumatic, electrical and electronic; Servos and synchros; Stepper motors. On-off, cascade, P, PI, PID and feed-forward controls. Controller
tuning and general frequency response.
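A minimal discrete-time PID sketch in Python for the PID item above (the gains, set-point and sampling period are arbitrary illustrative values, not a prescribed design):

# Discrete PID: u = Kp*e + Ki*integral(e) + Kd*(de/dt).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # rectangular integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(setpoint=1.0, measurement=0.8)  # control effort for a 0.2 error

In practice, controller tuning (e.g. the frequency-response methods listed above) fixes the three gains.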
Analytical, Optical and Biomedical Instrumentation: Principles of spectrometry, UV, visible, IR, mass spectrometry, X-ray methods; nuclear radiation measurements, gas, solid and semiconductor lasers
and their characteristics, interferometers, basics of fibre optics, transducers in biomedical applications, cardiovascular system measurements, instrumentation for clinical laboratory.
Linear Algebra: Finite dimensional vector spaces. Linear transformations and their matrix representations, rank; systems of linear equations, eigenvalues and eigenvectors, minimal polynomial,
Cayley-Hamilton theorem, diagonalisation, Hermitian, skew-Hermitian and unitary matrices. Finite dimensional inner product spaces, self-adjoint and normal linear operators, spectral theorem,
Quadratic forms.
Complex Analysis: Analytic functions, conformal mappings, bilinear transformations, complex integration: Cauchy's integral theorem and formula, Liouville's theorem, maximum modulus principle, Taylor
and Laurent's series, residue theorem and applications for evaluating real integrals.
Real Analysis: Sequences and series of functions, uniform convergence, power series, Fourier series, functions of several variables, maxima, minima, multiple integrals, line, surface and volume
integrals, theorems of Green, Stokes and Gauss; metric spaces, completeness, Weierstrass approximation theorem, compactness. Lebesgue measure, measurable functions; Lebesgue integral, Fatou's lemma,
dominated convergence theorem.
Ordinary Differential Equations: First order ordinary differential equations, existence and uniqueness theorems, systems of linear first order ordinary differential equations, linear ordinary
differential equations of higher order with constant coefficients; linear second order ordinary differential equations with variable coefficients, method of Laplace transforms for solving ordinary
differential equations, series solutions; Legendre and Bessel functions and their orthogonality, Sturm-Liouville systems, Green's functions.
Algebra: Normal subgroups and homomorphism theorems, automorphisms. Group actions, Sylow's theorems and their applications, groups of order less than or equal to 20, finite p-groups. Euclidean
domains, principal ideal domains and unique factorization domains. Prime ideals and maximal ideals in commutative rings.
Functional Analysis: Banach spaces, Hahn-Banach theorems, open mapping and closed graph theorems, principle of uniform boundedness; Hilbert spaces, orthonormal sets, Riesz representation theorem,
self-adjoint, unitary and normal linear operators on Hilbert Spaces.
Numerical Analysis: Numerical solution of algebraic and transcendental equations: bisection, secant method, Newton-Raphson method, fixed point iteration; interpolation: existence and error of
polynomial interpolation, Lagrange, Newton and Hermite (osculatory) interpolations; numerical differentiation and integration: Trapezoidal and Simpson rules, Gaussian quadrature (Gauss-Legendre and
Gauss-Chebyshev), method of undetermined parameters; least square and orthonormal polynomial approximation; numerical solution of systems of linear equations: direct and iterative methods (Jacobi,
Gauss-Seidel and SOR) with convergence; matrix eigenvalue problems: Jacobi and Givens methods; numerical solution of ordinary differential equations: initial value problems, Taylor series method,
Runge-Kutta methods, predictor-corrector methods; convergence and stability.
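As a compact illustration of the interpolation topics above, a Lagrange-form interpolant in Python (the nodes and test function are illustrative choices):

# Evaluate the Lagrange interpolating polynomial through points (xs[i], ys[i]).
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total

# Four nodes reproduce a cubic exactly: f(x) = x^3.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0]
print(lagrange(xs, ys, 1.5))  # 3.375, which equals 1.5**3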
Partial Differential Equations: Linear and quasilinear first order partial differential equations, method of characteristics; second order linear equations in two variables and their classification;
Cauchy, Dirichlet and Neumann problems, Green's functions; solutions of Laplace, wave and diffusion equations in two variables; Fourier series and transform methods of solution of the above equations
and applications to physical problems.
Mechanics: Forces in three dimensions, Poinsot central axis, virtual work, Lagrange's equations for holonomic systems, theory of small oscillations, Hamiltonian equations.
Topology: Basic concepts of topology, product topology, connectedness, compactness, countability and separation axioms, Urysohn's Lemma, Tietze extension theorem, metrization theorems, Tychonoff
theorem on compactness of product spaces.
Probability and Statistics: Probability space, conditional probability, Bayes' theorem, independence, Random variables, joint and conditional distributions, standard probability distributions and
their properties, expectation, conditional expectation, moments. Weak and strong law of large numbers, central limit theorem. Sampling distributions, UMVU estimators, sufficiency and consistency,
maximum likelihood estimators. Testing of hypotheses, Neyman-Pearson tests, monotone likelihood ratio, likelihood ratio tests, standard parametric tests based on normal, chi-square, t and F
distributions.
Linear regression and test for linearity of regression. Interval estimation.
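For orientation on the conditional-probability entry above, Bayes' theorem in its standard form is

$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad P(B) = \sum_i P(B \mid A_i)\,P(A_i)$

for a partition $\{A_i\}$ of the sample space; the second identity is the law of total probability.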
Linear Programming: Linear programming problem and its formulation, convex sets and their properties, graphical method, basic feasible solution, simplex method, big-M and two phase methods, infeasible
and unbounded LPPs, alternate optima. Dual problem and duality theorems, dual simplex method and its application in post optimality analysis, interpretation of dual variables. Balanced and
unbalanced transportation problems, unimodular property and u-v method for solving transportation problems. Hungarian method for solving assignment problems.
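A small worked LP for the topics above, using scipy's linprog (the objective and constraints are arbitrary illustrative data; linprog minimizes, so the maximization objective is negated):

from scipy.optimize import linprog

# maximize 3x + 5y  s.t.  x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x, y >= 0
c = [-3, -5]                              # negate to convert max -> min
A_ub = [[1, 2], [-3, 1], [1, -1]]         # the ">=" row is negated into "<=" form
b_ub = [14, 0, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                    # optimum at (2, 6) with value 36

Solving graphically or by the simplex method by hand reaches the same vertex (2, 6).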
Calculus of Variations and Integral Equations: Variational problems with fixed boundaries; sufficient conditions for extremum, Linear integral equations of Fredholm and Volterra type, their iterative
solutions. Fredholm alternative.
Linear Algebra: Matrix algebra, Systems of linear equations, Eigen values and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative, Maxima and
minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy's and Euler's equations, Initial and boundary value
problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy's integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial
distributions.
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rule; single and multi-step methods for differential equations.
Engineering Mechanics: Equivalent force systems, free-body concepts, equations of equilibrium, trusses and frames, virtual work and minimum potential energy. Kinematics and dynamics of particles and
rigid bodies, impulse and momentum (linear and angular), energy methods, central force motion.
Strength of Materials: Stress and strain, stress-strain relationship and elastic constants, Mohr's circle for plane stress and plane strain, shear force and bending moment diagrams, bending and shear
stresses, deflection of beams, torsion of circular shafts, thin and thick cylinders, Euler's theory of columns, strain energy methods, thermal stresses.
Theory of Machines: Displacement, velocity and acceleration analysis of plane mechanisms, dynamic analysis of slider-crank mechanism, planar cams and followers, gear tooth profiles, kinematics of
gears, governors and flywheels, balancing of reciprocating and rotating masses.
Vibrations: Free and forced vibration of single degree freedom systems, effect of damping, vibration isolation, resonance, critical speed of rotors.
Design of Machine Elements: Design for static and dynamic loading, failure theories, fatigue strength; design of bolted, riveted and welded joints; design of shafts and keys; design of spur gears,
rolling and sliding contact bearings; brakes and clutches; belt, rope and chain drives.
Fluid Mechanics: Fluid properties, fluid statics, manometry, buoyancy; control-volume analysis of mass, momentum and energy; fluid acceleration; differential equations of continuity and momentum;
Bernoulli's equation; viscous flow of incompressible fluids; boundary layer; elementary turbulent flow; flow through pipes, head losses in pipes, bends etc.
Heat-Transfer: Modes of heat transfer; one dimensional heat conduction, resistance concept, electrical analogy, unsteady heat conduction, fins; dimensionless parameters in free and forced convective
heat transfer, various correlations for heat transfer in flow over flat plates and through pipes; thermal boundary layer; effect of turbulence; radiative heat transfer, black and grey surfaces, shape
factors, network analysis; heat exchanger performance, LMTD and NTU methods.
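A tiny worked LMTD calculation for the heat-exchanger item above (the terminal temperature differences are arbitrary illustrative values for a counter-flow unit):

import math

dT1 = 60.0   # hot-end temperature difference, K
dT2 = 20.0   # cold-end temperature difference, K
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)
print(lmtd)  # ~36.4 K; the duty then follows from Q = U * A * LMTD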
Thermodynamics: Zeroth, First and Second laws of thermodynamics; thermodynamic system and processes; irreversibility and availability; behaviour of ideal and real gases, properties of pure
substances, calculation of work and heat in ideal processes; analysis of thermodynamic cycles related to energy conversion; Carnot, Rankine, Otto, Diesel, Brayton and vapour compression cycles.
Power Plant Engineering: Steam generators; steam power cycles; steam turbines; impulse and reaction principles, velocity diagrams, pressure and velocity compounding; reheating and reheat factor;
condensers and feed heaters.
I.C. Engines: Requirements and suitability of fuels in IC engines, fuel ratings, fuel-air mixture requirements; normal combustion in SI and CI engines; engine performance calculations.
Refrigeration and air-conditioning: Refrigerant compressors, expansion devices, condensers and evaporators; properties of moist air, psychrometric chart, basic psychometric processes.
Turbomachinery: Components of gas turbines; compression processes, centrifugal and axial flow compressors; axial flow turbines, elementary theory; hydraulic turbines; Euler-turbine equation; specific
speed, Pelton-wheel, Francis and Kaplan turbines; centrifugal pumps.
Engineering Materials: Structure and properties of engineering materials and their applications, heat treatment.
Metal Casting: Casting processes (expendable and non-expendable) - patterns, moulds and cores, heating and pouring, solidification and cooling, gating design, design considerations, defects.
Forming Processes: Stress-strain diagrams for ductile and brittle materials, Plastic deformation and yield criteria, fundamentals of hot and cold working processes, Bulk metal forming processes
(forging, rolling, extrusion, drawing), sheet metal working processes (punching, blanking, bending, deep drawing, coining, spinning), load estimation using homogeneous deformation methods, defects.
Processing of powder metals - atomization, compaction, sintering, secondary and finishing operations. Forming and shaping of plastics - extrusion, injection moulding.
Joining Processes: Physics of welding, fusion and non-fusion welding processes, brazing and soldering, adhesive bonding, design considerations in welding, weld quality defects.
Machining and Machine Tool Operations: Mechanics of machining, single and multi-point cutting tools, tool geometry and materials, tool life and wear, cutting fluids, machinability, economics of
machining, non-traditional machining processes.
Metrology and Inspection: Limits, fits and tolerances, linear and angular measurements, comparators, gauge design, interferometry, form and finish measurement, measurement of screw threads, alignment
and testing methods.
Tool Engineering: Principles of work holding, design of jigs and fixtures.
Computer Integrated Manufacturing: Basic concepts of CAD, CAM and their integration tools.
Manufacturing Analysis: Part-print analysis, tolerance analysis in manufacturing and assembly, time and cost analysis.
Work-Study: Method study, work measurement, time study, work sampling, job evaluation, merit rating.
Production Planning and Control: Forecasting models, aggregate production planning, master scheduling, materials requirements planning.
Inventory Control: Deterministic and probabilistic models, safety stock inventory control systems.
Operations Research: Linear programming, simplex and dual simplex methods, transportation, assignment, network flow models, simple queuing models, PERT and CPM.
Linear Algebra: Matrices and Determinants, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line, surface and volume integrals; Stokes, Gauss and Green's theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy's and Euler's equations; Laplace transforms; PDEs - Laplace, heat and wave
equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rule; single and multi-step methods for differential equations.
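A compact sketch of the two quadrature rules named above (the integrand and interval are illustrative choices):

import math

# Composite trapezoidal and Simpson's rules on n subintervals (n even for Simpson).
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

# Integral of sin(x) over [0, pi] is exactly 2.
print(trapezoid(math.sin, 0.0, math.pi, 100))  # ~1.99984
print(simpson(math.sin, 0.0, math.pi, 100))    # ~2.0000000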
Mechanics: Equivalent force systems, equations of equilibrium, two dimensional frames and trusses, free body diagrams, friction forces, particle kinematics and dynamics.
Mine Development, Geomechanics and Strata Control: Drivages for underground mine development, drilling methods and machines, explosives, blasting devices and practices, shaft sinking.
Physico-mechanical properties of rocks, rock mass classification, ground control instrumentation and stress measurement techniques, theories of rock failure, ground vibrations, stress distribution
around mine openings, subsidence, design of supports in roadways and workings, stability of open pits, slopes.
Mining Methods and Machinery: Surface mining - layout, development, loading, transportation and mechanization, continuous surface mining systems. Underground coal mining - bord and pillar system,
longwall mining, thick seam mining methods. Underground metal mining: different stoping methods, stope mechanization, ore handling systems, mine filling. Generation and transmission of mechanical,
hydraulic, and pneumatic power. Materials handling - haulages, conveyors, ropeways, face and development machinery, hoisting systems, and pumps.
Ventilation, Underground Hazards and Surface Environment: Underground atmosphere, heat load sources and thermal environment, air cooling, mechanics of air flow distribution, natural and mechanical
ventilation, mine fans and their usage, auxiliary ventilation. Subsurface hazards from fires, explosions, gases, dust, and inundation, rescue apparatus and practices, safety in mines, accident
analysis, noise, mine lighting. Air and water pollution: causes, dispersion, quality standards, and control.
Surveying, Mine Planning and Systems Engineering: Fundamentals of engineering surveying, Levels and levelling, Theodolite, tacheometry, triangulation, contouring, errors and adjustments, correlation,
underground surveying, curves, photogrammetry, field astronomy, GPS fundamentals. Principles of planning - Sampling methods and practices, reserve estimation techniques, basics of geostatistics,
optimization of facility location, cash flow concepts and mine valuation, open pit design. Work study, concepts of reliability, reliability of series and parallel systems. Linear programming,
transportation and assignment problems, queueing, network analysis, inventory control.
Linear Algebra: Matrices and Determinants, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line, surface and volume integrals; Stokes, Gauss and Green's theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy's and Euler's equations; Laplace transforms; PDEs - Laplace, heat and wave
equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rule; single and multi-step methods for differential equations.
Thermodynamics and Rate Processes: Laws of thermodynamics, activity, equilibrium constant, applications to metallurgical systems, solutions, phase equilibria, basic kinetic laws, order of reactions,
rate constants and rate limiting steps; principles of electrochemistry, aqueous corrosion and protection of metals, oxidation and high temperature corrosion - characterization and control; momentum
transfer - concepts of viscosity, shell balances, Bernoulli's equation; heat transfer - conduction, convection and heat transfer coefficient relations, radiation; mass transfer - diffusion and Fick's laws.
Extractive Metallurgy: Flotation, gravity and other methods of mineral processing; agglomeration; pyro-, hydro- and electro-metallurgical processes; material and energy balances; principles and
processes for the extraction of non-ferrous metals - aluminium, copper, zinc, lead, magnesium, nickel, titanium and other rare metals; iron and steel making - principles, blast furnace, direct
reduction processes, primary and secondary steel making, deoxidation and inclusion in steel; ingot and continuous casting; stainless steel making, design of furnaces; fuels and refractories.
Physical Metallurgy: Crystal structure and bonding characteristics of metals, alloys, ceramics and polymers; solid solutions; solidification; phase transformation and binary phase diagrams;
principles of heat treatment of steels, aluminum alloys and cast irons; recovery, recrystallization and grain growth; industrially important ferrous and non-ferrous alloys; elements of X-ray and
electron diffraction; principles of scanning and transmission electron microscopy; elements of ceramics, composites and electronic materials; electronic basis of thermal, optical, electrical and
magnetic properties of materials.
Mechanical Metallurgy: Elements of elasticity and plasticity; defects in crystals; elements of dislocation theory - types of dislocations, slip and twinning, stress fields of dislocations,
dislocation interactions and reactions, methods of seeing dislocations; strengthening mechanisms; tensile, fatigue and creep behaviour; superplasticity; fracture - Griffith theory, ductile to brittle
transition, fracture toughness; failure analysis; mechanical testing - tension, compression, torsion, hardness, impact, creep, fatigue, fracture toughness and formability tests.
Manufacturing Processes: Metal casting - patterns, moulds, melting, gating, feeding and casting processes, defects in castings, hot and cold working of metals; Metal forming - fundamentals of metal
forming, rolling, wire drawing, extrusion, forming, sheet metal forming processes, defects in forming; Metal joining - soldering, brazing and welding, common welding processes, welding metallurgy,
problems associated with welding of steels and aluminium alloys, defects in welding, powder metallurgy; NDT methods - ultrasonic, radiography, eddy current, acoustic emission and magnetic.
PH - PHYSICS
Mathematical Physics: Linear vector space, matrices; vector calculus; linear differential equations; elements of complex analysis; Laplace transforms, Fourier analysis, elementary ideas about
tensors.
Classical Mechanics: Conservation laws; central forces; collisions and scattering in laboratory and centre of mass reference frames; mechanics of system of particles; rigid body dynamics; moment of
inertia tensor; noninertial frames and pseudo forces; variational principle; Lagrange's and Hamilton's formalisms; equation of motion, cyclic coordinates, Poisson bracket; periodic motion, small
oscillations, normal modes; wave equation and wave propagation; special theory of relativity - Lorentz transformations, relativistic kinematics, mass-energy equivalence.
Electromagnetic Theory: Laplace and Poisson equations; conductors and dielectrics; boundary value problems; Ampere's and Biot-Savart's laws; Faraday's law; Maxwell's equations; scalar and vector
potentials; Coulomb and Lorentz gauges; boundary conditions at interfaces; electromagnetic waves; interference, diffraction and polarization; radiation from moving charges.
Quantum Mechanics: Physical basis of quantum mechanics; uncertainty principle; Schrodinger equation; one and three dimensional potential problems; Particle in a box, harmonic oscillator, hydrogen
atom; linear vectors and operators in Hilbert space; angular momentum and spin; addition of angular momentum; time independent perturbation theory; elementary scattering theory.
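For orientation on the one-dimensional potential problems listed above, the standard particle-in-a-box (infinite well of width $L$) results are

$E_n = \frac{n^2 \pi^2 \hbar^2}{2mL^2}, \qquad \psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \quad n = 1, 2, 3, \ldots$

a textbook solution of the time-independent Schrodinger equation with hard-wall boundary conditions.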
Atomic and Molecular Physics: Spectra of one-and many-electron atoms; LS and jj coupling; hyperfine structure; Zeeman and Stark effects; electric dipole transitions and selection rules; X-ray
spectra; rotational and vibrational spectra of diatomic molecules; electronic transition in diatomic molecules, Franck-Condon principle; Raman effect; NMR and ESR; lasers.
Thermodynamics and Statistical Physics: Laws of thermodynamics; macrostates, phase space; probability ensembles; partition function, free energy, calculation of thermodynamic quantities; classical
and quantum statistics; degenerate Fermi gas; black body radiation and Planck's distribution law; Bose-Einstein condensation; first and second order phase transitions, critical point.
Solid State Physics: Elements of crystallography; diffraction methods for structure determination; bonding in solids; elastic properties of solids; defects in crystals; lattice vibrations and thermal
properties of solids; free electron theory; band theory of solids; metals, semiconductors and insulators; transport properties; optical, dielectric and magnetic properties of solids; elements of
superconductivity.
Nuclear and Particle Physics: Rutherford scattering; basic properties of nuclei; radioactive decay; nuclear forces; two nucleon problem; nuclear reactions; conservation laws; fission and fusion;
nuclear models; particle accelerators, detectors; elementary particles; photons, baryons, mesons and leptons; Quark model.
Electronics: Network analysis; semiconductor devices; bipolar transistors; FETs; power supplies, amplifier, oscillators; operational amplifiers; elements of digital electronics; logic circuits.
Linear Algebra: Matrix algebra, Systems of linear equations, Eigen values and eigenvectors.
Calculus: Functions of single variable, Limit, continuity and differentiability, Mean value theorems, Evaluation of definite and improper integrals, Partial derivatives, Total derivative, Maxima and
minima, Gradient, Divergence and Curl, Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.
Differential equations: First order equations (linear and nonlinear), Higher order linear differential equations with constant coefficients, Cauchy's and Euler's equations, Initial and boundary value
problems, Laplace transforms, Solutions of one dimensional heat and wave equations and Laplace equation.
Complex variables: Analytic functions, Cauchy's integral theorem, Taylor and Laurent series.
Probability and Statistics: Definitions of probability and sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Poisson, Normal and Binomial
distributions.
Numerical Methods: Numerical solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rule; single and multi-step methods for differential equations.
Engineering Materials: Structure and properties of engineering materials and their applications; effect of strain, strain rate and temperature on mechanical properties of metals and alloys; heat
treatment of metals and alloys.
Applied Mechanics: Engineering mechanics - equivalent force systems, free body concepts, equations of equilibrium, virtual work and minimum potential energy; strength of materials- stress, strain and
their relationship, Mohr's circle, deflection of beams, bending and shear stress, Euler's theory of columns.
Theory of Machines and Design: Analysis of planar mechanisms, plane cams and followers; governors and flywheels; design of elements - failure theories; design of bolted, riveted and welded joints;
design of shafts, keys, belt drives, brakes and clutches.
Thermal Engineering: Fluid mechanics - fluid statics, Bernoulli's equation, flow through pipes, equations of continuity and momentum; Thermodynamics - Zeroth, First and Second laws of thermodynamics,
thermodynamic system and processes, calculation of work and heat for systems and control volumes; Heat transfer - fundamentals of conduction, convection and radiation.
Metal Casting: Casting processes; patterns-materials; allowances; moulds and cores - materials, making and testing; melting and founding of cast iron, steels and nonferrous metals and alloys;
solidification; design of casting, gating and risering; casting defects and inspection.
Metal working: Stress-strain in elastic and plastic deformation; deformation mechanisms; hot and cold working-forging, rolling, extrusion, wire and tube drawing; sheet metal working; analysis of
rolling, forging, extrusion and wire /rod drawing; metal working defects, high energy rate forming processes-explosive, magnetic, electro and electrohydraulic.
Metal Joining Processes: Welding processes - gas shielded metal arc, TIG, MIG, submerged arc, electroslag, thermit, resistance, pressure and forge welding; thermal cutting; other joining processes -
soldering, brazing, braze welding; welding codes, welding symbols, design of welded joints, defects and inspection; introduction to modern welding processes - friction, ultrasonic, explosive,
electron beam, laser and plasma.
Machining and Machine Tool Operations: Machining processes - turning, drilling, boring, milling, shaping, planing, sawing, gear cutting, thread production, broaching, grinding, lapping, honing, super
finishing; mechanics of cutting - Merchant's analysis, geometry of cutting tools, cutting forces, power requirements; selection of process parameters; tool materials, tool wear and tool life, cutting
fluids, machinability; nontraditional machining processes and hybrid processes - EDM, CHM, ECM, USM, LBM, EBM, AJM, PAM and WJM; economics of machining.
Metrology and Inspection: Limits and fits, linear and angular measurements by mechanical and optical methods, comparators; design of limit gauges; interferometry; measurement of straightness,
flatness, roundness, squareness and symmetry; surface finish measurement; inspection of screw threads and gears; alignment testing.
Powder Metallurgy and Processing of Plastics: Production of powders, compaction, sintering; Polymers and composites; injection, compression and blow molding, extrusion, calendering and thermoforming;
molding of composites.
Tool Engineering: Work holding - location and clamping, principles and methods; design of jigs and fixtures; design of press working tools, forging dies.
Manufacturing Analysis: Sources of errors in manufacturing; process capability; part-print analysis; tolerance analysis in manufacturing and assembly; process planning; parameter selection and
comparison of production alternatives; time and cost analysis; Issues in choosing manufacturing technologies and strategies.
Computer Integrated Manufacturing: Basic concepts of CAD, CAM, CAPP, group technology, NC, CNC, DNC, FMS, Robotics and CIM.
Product Design and Development: Principles of good product design, component and tolerance design; efficiency, quality and cost considerations; product life cycle; standardization, simplification,
diversification, value analysis, concurrent engineering.
Engineering Economy and Costing: Financial statements; elementary cost accounting, methods of depreciation; break-even analysis, techniques for evaluation of capital investments.
Work System Design: Taylor's scientific management, Gilbreths' contributions; productivity concepts and measurements; method study, micro-motion study, principles of motion economy; human factors
engineering, ergonomics; work measurement - time study, PMTS, work sampling; job evaluation, merit rating, wage administration, incentive systems; business process reengineering.
Logistics and Facility Design: Facility location factors, evaluation of alternatives, types of plant layout, evaluation; computer aided layout; assembly line balancing; material handling systems;
supply chain management.
Production Planning and Inventory Control: Inventory - functions, costs, classifications; deterministic and probabilistic models; quantity discount; safety stock; inventory control systems; Forecasting
techniques - causal and time series models, moving average, exponential smoothing; trend and seasonality; aggregate production planning; master scheduling; bill of materials and material requirement
planning; order control and flow control, routing, scheduling and priority dispatching; JIT; Kanban PULL systems; bottleneck scheduling and theory of constraints.
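As a one-line orientation for the deterministic inventory models above, the classical economic order quantity (Wilson formula) is

$Q^* = \sqrt{\frac{2DS}{H}}$

with $D$ the annual demand, $S$ the ordering cost per order and $H$ the annual holding cost per unit; with illustrative numbers $D = 12000$ units/yr, $S = 100$ per order and $H = 6$ per unit-yr, $Q^* = \sqrt{2 \cdot 12000 \cdot 100 / 6} \approx 632$ units.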
Operations Research: Linear programming - problem formulation, simplex method, duality and sensitivity analysis; transportation; assignment; network flow models, constrained optimization and Lagrange
multipliers; simple queuing models; dynamic programming; simulation; PERT and CPM, time-cost trade-off, resource leveling.
Quality Control: Taguchi method; design of experiments; quality costs, statistical quality assurance, process control charts, acceptance sampling, zero defects; quality circles, total quality
management (TQM).
Reliability and Maintenance: Reliability, availability and maintainability; probabilistic failure and repair times; system reliability; preventive maintenance and replacement, TPM.
Management Information System: Value of information; information storage and retrieval system - database and data structures; interactive systems; knowledge based systems.
Intellectual Property System: Definition of intellectual property, importance of IPR; TRIPS and its implications, WIPO and global IP structure, and IPS in India; patent, copyright, industrial design
and trademark; meanings, rules and procedures, terms, infringements and remedies.
Natural Products: Pharmacognosy & Phytochemistry - Chemistry, tests, isolation, characterization and estimation of phytopharmaceuticals belonging to the groups of Alkaloids, Glycosides, Terpenoids,
Steroids, Bioflavonoids, Purines, Guggul lipids. Pharmacognosy of crude drugs which contain the above constituents. Standardisation of raw materials and herbal products. WHO guidelines. Quantitative
microscopy including modern techniques used for evaluation. Biotechnological principles and techniques for plant development; tissue culture.
Pharmacology: General pharmacological principles including Toxicology. Drug interaction. Pharmacology of drugs acting on Central nervous system, Cardiovascular system, Autonomic nervous system,
Gastro intestinal system and Respiratory system. Pharmacology of Autocoids, Hormones, Chemotherapeutic agents including anticancer drugs. Bioassays. Immuno Pharmacology.
Medicinal Chemistry: Structure, nomenclature, classification, synthesis, SAR and metabolism of the following category of drugs which are official in Indian Pharmacopoeia and British Pharmacopoeia
Hypnotics and Sedatives, Analgesics, NSAIDs, Neuroleptics, Antidepressants, Anxiolytics, Anticonvulsants, Antihistaminics, Local anaesthetics, Cardiovascular drugs - Antianginal agents, Vasodilators,
Adrenergic & cholinergic drugs, Cardiotonic agents, Diuretics, Antihypertensive drugs, Hypoglycemic agents, Antilipidemic agents, Coagulants, Anticoagulants, Antiplatelet agents. Chemotherapeutic
agents - Antibiotics, Antibacterials, Sulpha drugs, Antiprotozoal, Antiviral, Antitubercular, Antimalarial, Anticancer and Antiamoebic drugs. Diagnostic agents. Preparation, storage and uses of
official Radiopharmaceuticals. Vitamins and Hormones.
Pharmaceutics: Development, manufacturing standards, labeling, packing as per the pharmacopoeal requirements, Storage of different dosage forms and new drug delivery systems. Biopharmaceutics and
Pharmacokinetics and their importance in formulation. Formulation and preparation of cosmetics - lipstick, shampoo, creams, nail preparations and dentifrices. Pharmaceutical calculations.
Pharmaceutical Jurisprudence: Legal aspects of manufacture, storage and sale of drugs. D and C Act and rules. Pharmacy Act.
Pharmaceutical Analysis: Principles, instrumentation and applications of the following. Absorption spectroscopy (UV, visible & IR), Fluorimetry, Flame photometry, Potentiometry, Conductometry and
Polarography. Pharmacopoeial assays. Principles of NMR, ESR, Mass spectroscopy, X-ray diffraction analysis and different chromatographic methods.
Biochemistry and Clinical Pharmacy: Biochemical role of hormones, Vitamins, Enzymes, Nucleic acids. Bioenergetics. General principles of immunology. Immunological techniques. Adverse drug
interactions.
Microbiology: Principles and methods of microbiological assays of the Pharmacopoeia. Methods of preparation of official sera and vaccines. Serological and diagnostic tests. Applications of
microorganisms in Bio Conversions and in Pharmaceutical industry.
Linear Algebra: Matrices and Determinants, Systems of linear equations, Eigen values and eigen vectors.
Calculus: Limit, continuity and differentiability; Partial Derivatives; Maxima and minima; Sequences and series; Test for convergence; Fourier series.
Vector Calculus: Gradient; Divergence and Curl; Line, surface and volume integrals; Stokes, Gauss and Green's theorems.
Differential Equations: Linear and non-linear first order ODEs; Higher order linear ODEs with constant coefficients; Cauchy's and Euler's equations; Laplace transforms; PDEs - Laplace, heat and wave
equations.
Probability and Statistics: Mean, median, mode and standard deviation; Random variables; Poisson, normal and binomial distributions; Correlation and regression analysis.
Numerical Methods: Solutions of linear and non-linear algebraic equations; integration by trapezoidal and Simpson's rule; single and multi-step methods for differential equations.
Textile Fibres: Classification of textile fibres according to their nature and origin; general characteristics of textile fibres-their chemical and physical structures and their properties; essential
characteristics of fibre forming polymers; uses of natural and man-made fibres; physical and chemical methods of fibre and blend identification and blend analysis.
Melt Spinning processes with special reference to polyamide and polyester fibres; wet and dry spinning of viscose and acrylic fibres; post spinning operations-drawing, heat setting, texturing- false
twist and air-jet, tow-to-top conversion. Methods of investigating fibre structure, e.g. X-ray diffraction, birefringence, optical and electron microscopy, I.R. absorption, thermal methods; structure
and morphology of principal natural and man-made fibres; mechanical properties of fibres; moisture sorption in fibres; fibre structure and property correlation.
Textile Testing: Sampling techniques, sample size and sampling errors; measurement of fibre length, fineness, crimp, strength and reflectance; measurement of cotton fibre maturity and trash content;
HVI and AFIS for fibre testing. Measurement of yarn count, twist and hairiness; tensile testing of fibres, yarns and fabrics; evenness testing of slivers, rovings and yarns; testing equipment and
test methods for fabric properties like thickness, compressibility, air permeability, drape, crease recovery, tear strength, bursting strength and abrasion resistance. Correlation analysis,
significance tests and analysis of variance; frequency distributions and control charts.
Yarn Manufacture and Yarn Structure: Modern methods of opening, cleaning and blending of fibrous materials; the technology of carding with particular reference to modern developments; causes of
irregularity introduced by drafting, the development of modern drafting systems; principles and techniques of preparing material for combing; recent development in combers; functions and
synchronization of various mechanisms concerned with roving production; forces acting on yarn and traveller, ring and traveller designs; causes of end breakages; properties of double yarns; new
methods of yarn production such as rotor spinning, air jet spinning and friction spinning.
Yarn diameter; specific volume, packing coefficient; twist-strength relationship; fibre orientation in yarn; fibre migration.
Fabric Manufacture and Fabric Structure: Principles of cheese and cone winding processes and machines; random and precision winding; package faults and their remedies; yarn clearers and tensioners;
different systems of yarn splicing; features of modern cone winding machines; different types of warping creels; features of modern beam and sectional warping machines; different sizing systems,
sizing of spun and filament yarns, modern sizing machines; principles of pirn winding processes and machines; primary and secondary motions of loom, effect of their settings and timings on fabric
formation, fabric appearance and weaving performance; dobby and jacquard shedding; mechanics of weft insertion with shuttle; warp and weft stop motions, warp protection, weft replenishment;
functional principles of weft insertion systems of shuttleless weaving machines, principles of multiphase and circular looms. Principles of weft and warp knitting; basic weft and warp knitted
structures; classification, production and areas of application of nonwoven fabrics.
Basic woven fabric constructions and their derivatives; crepe, cord, terry, gauze, lino and double cloth constructions.
Peirce's equations for fabric geometry; thickness, cover and maximum sett of woven fabrics
Textile Chemical Processing: Preparatory processes for natural and synthetic fibres and their blends; mercerization of cotton; machines for yarn and fabric mercerization.
Dyeing and printing of natural- and synthetic- fibre fabrics and their blends with different dye classes; dyeing and printing machines; styles of printing; fastness properties of dyed and printed
textile materials.
Finishing of textile materials, wash and wear, durable press, soil release, water repellent, flame retardant and antistatic finishes; shrink-resistance finish for wool; heat setting of
synthetic-fibre fabrics, finishing machines; energy efficient processes; pollution control.
The syllabi of the sections of this paper are as follows:
SECTION A. ENGINEERING MATHEMATICS (Compulsory)
Linear Algebra : Determinants, algebra of matrices, rank, inverse, system of linear equations, symmetric, skew-symmetric and orthogonal matrices. Hermitian, skew-hermitian and unitary matrices.
Eigenvalues and eigenvectors, diagonalisation of matrices, Cayley-Hamilton theorem, quadratic forms.
Calculus : Functions of single variables, limit, continuity and differentiability, Mean value theorems, Indeterminate forms and L'Hospital's rule, Maxima and minima, Taylor's series, Fundamental and
mean value-theorems of integral calculus. Evaluation of definite and improper integrals, Beta and Gamma functions, Functions of two variables, limit, continuity, partial derivatives, Euler's theorem
for homogeneous functions, total derivatives, maxima and minima, Lagrange method of multipliers, double and triple integrals and their applications, sequence and series, tests for convergence, power
series, Fourier Series, Fourier integrals.
Complex variable: Analytic functions, Cauchy's integral theorem and integral formula (without proof), Taylor's and Laurent's series, Residue theorem (without proof) with application to the evaluation of
real integrals.
Vector Calculus: Gradient, divergence and curl, vector identities, directional derivatives, line, surface and volume integrals, Stokes, Gauss and Green's theorems (without proofs) with applications.
Ordinary Differential Equations: First order equation (linear and nonlinear), higher order linear differential equations with constant coefficients, method of variation of parameters, Cauchy's and
Euler's equations, initial and boundary value problems, power series solutions, Legendre polynomials and Bessel's functions of the first kind.
Partial Differential Equations: Variables separable method, solutions of one dimensional heat, wave and Laplace equations.
Probability and Statistics: Definitions of probability and simple theorems, conditional probability, mean, median, mode and standard deviation, random variables, discrete and continuous distributions,
Poisson, normal and binomial distributions, correlation and regression.
Numerical Methods: L-U decomposition for systems of linear equations,Newton-Raphson method, numerical integration(trapezoidal and Simpson's rule), numerical methods for first order differential
equation (Euler method)
Numerical Methods: Truncation errors, round off errors and their propagation; Interpolation; Lagrange, Newton's forward, backward and divided difference formulas, least square curve fitting, solution
of non-linear equations of one variable using bisection, false position, secant and Newton Raphson methods; Rate of convergence of these methods, general iterative methods. Simple and multiple roots
of polynomials. Solutions of system of linear algebraic equations using Gauss elimination methods, Jacobi and Gauss-Seidel iterative methods and their rate of convergence; ill conditioned and well
conditioned systems. Eigenvalues and eigenvectors using power methods. Numerical integration using trapezoidal, Simpson's and other quadrature formulas. Numerical differentiation. Solution of
boundary value problems. Solution of initial value problems of ordinary differential equations using Euler's method, predictor corrector and Runge Kutta method.
Programming : Elementary concepts and terminology of a computer system and system software, Fortran77 and C programming.
Fortran : Program organization, arithmetic statements, transfer of control, Do loops, subscripted variables, functions and subroutines.
C language : Basic data types and declarations, flow of control- iterative statement, conditional statement, unconditional branching, arrays, functions and procedures.
Electric Circuits: Ideal voltage and current sources; RLC circuits, steady state and transient analysis of DC circuits, network theorems; alternating currents and voltages, single-phase AC circuits,
resonance; three-phase circuits.
Magnetic circuits: MMF and flux, and their relationship with voltage and current; transformer, equivalent circuit of a practical transformer, three-phase transformer connections.
Electrical machines: Principle of operation, characteristics, efficiency and regulation of DC and synchronous machines; equivalent circuit and performance of three-phase and single-phase induction motors.
Electronic Circuits: Characteristics of p-n junction diodes, zener diodes, bipolar junction transistors (BJT) and junction field effect transistors (JFET); MOSFET's structure, characteristics, and
operations; rectifiers, filters, and regulated power supplies; biasing circuits, different configurations of transistor amplifiers, class A, B and C power amplifiers; linear applications of
operational amplifiers; oscillators; tuned and phase shift types.
Digital circuits: Number systems, Boolean algebra; logic gates, combinational circuits, flip-flops (RS, JK, D and T) counters.
Measuring instruments: Moving coil, moving iron, and dynamometer type instruments; shunts, instrument transformers, cathode ray oscilloscopes; D/A and A/D converters.
Fluid Properties: Relation between stress and strain rate for Newtonian fluids
Hydrostatics, buoyancy, manometry
Concept of local and convective accelerations; control volume analysis for mass, momentum and energy conservation.
Differential equations of continuity and momentum (Euler's equation of motion); concept of fluid rotation, stream function, potential function; Bernoulli's equation and its applications.
Qualitative ideas of boundary layers and its separation; streamlined and bluff bodies; drag and lift forces.
Fully-developed pipe flow; laminar and turbulent flows; friction factor; Darcy Weisbach relation; Moody's friction chart; losses in pipe fittings; flow measurements using venturimeter and orifice plate.
Dimensional analysis; similitude and concept of dynamic similarity; importance of dimensionless numbers in model studies.
Atomic structure and bonding in materials: metals, ceramics and polymers.
Structure of materials: Crystal systems, unit cells and space lattice; determination of structures of simple crystals by X-ray diffraction; Miller indices for planes and directions. Packing geometry
in metallic, ionic and covalent solids.
Concept of amorphous, single and polycrystalline structures and their effects on properties of materials.
Imperfections in crystalline solids and their role in influencing various properties.
Fick's laws of diffusion and applications of diffusion in sintering, doping of semiconductors and surface hardening of metals.
Alloys: solid solution and solubility limit. Binary phase diagram, intermediate phases and intermetallic compounds; iron-iron carbide phase diagram. Phase transformation in steels. Cold and hot
working of metals, recovery, recrystallization and grain growth.
Properties and applications of ferrous and nonferrous alloys.
Structure, properties, processing and applications of traditional and advanced ceramics.
Polymers: classification, polymerization, structure and properties, additives for polymer products, processing and application.
Composites: properties and application of various composites.
Corrosion and environmental degradation of materials (metals, ceramics and polymers).
Mechanical properties of materials: Stress-strain diagrams of metallic, ceramic and polymeric materials, modulus of elasticity, yield strength, plastic deformation and toughness, tensile strength and
elongation at break; viscoelasticity, hardness, impact strength. Ductile and brittle fracture. Creep and fatigue properties of materials.
Heat capacity, thermal conductivity, thermal expansion of materials.
Concept of energy band diagram for materials; conductors, semiconductors and insulators in terms of energy bands. Electrical conductivity, effect of temperature on conductivity in materials,
intrinsic and extrinsic semiconductors, dielectric properties of materials.
Refraction, reflection, absorption and transmission of electromagnetic radiation in solids.
Origin of magnetism in metallic and ceramic materials, paramagnetism, diamagnetism, antiferromagnetism, ferromagnetism, ferrimagnetism in materials and magnetic hysteresis.
Advanced materials: Smart materials exhibiting ferroelectric, piezoelectric, optoelectronic, semiconducting behaviour; lasers and optical fibers; photoconductivity and superconductivity in materials.
Equivalent force systems; free-body diagrams; equilibrium equations; analysis of determinate and indeterminate trusses and frames; friction.
Simple relative motion of particles; force as function of position, time and speed; force acting on a body in motion; laws of motion; law of conservation of energy; law of conservation of momentum
Stresses and strains; principal stresses and strains; Mohr's circle; generalized Hooke's Law; equilibrium equations; compatibility conditions; yield criteria.
Axial, shear and bending moment diagrams; axial, shear and bending stresses; deflection (for symmetric bending); torsion in circular shafts; thin cylinders; energy methods (Castigliano's Theorems);
Euler buckling.
Basic Concepts: Continuum, macroscopic approach, thermodynamic system (closed and open or control volume); thermodynamic properties and equilibrium; state of a system, state diagram, path and
process; different modes of work; Zeroth law of thermodynamics; concept of temperature; heat.
First Law of Thermodynamics: Energy, enthalpy, specific heats, first law applied to systems and control volumes, steady and unsteady flow analysis.
Second Law of Thermodynamics: Kelvin-Planck and Clausius statements, reversible and irreversible processes, Carnot theorems, thermodynamic temperature scale, Clausius inequality and concept of
entropy, principle of increase of entropy; availability and irreversibility.
Properties of Pure Substances: Thermodynamic properties of pure substances in solid, liquid and vapour phases, P-V-T behaviour of simple compressible substances, phase rule, thermodynamic property
tables and charts, ideal and real gases, equations of state, compressibility chart.
Thermodynamic Relations: T-ds relations, Maxwell equations, Joule-Thomson coefficient, coefficient of volume expansion, adiabatic and isothermal compressibilities, Clapeyron equation.
Ideal Gas Mixtures: Dalton's and Amagat's laws, calculations of properties, air-water vapour mixtures.
XL - LIFE SCIENCES
The syllabi of the Sections of this paper are as follows:
SECTION H. CHEMISTRY (Compulsory)
Atomic structure and periodicity: Quantum chemistry; Planck's quantum theory, wave particle duality, uncertainty principle, quantum mechanical model of hydrogen atom; electronic configuration of
atoms; periodic table and periodic properties; ionization energy, electron affinity, electronegativity, atomic size.
Structure and bonding: Ionic and covalent bonding, M.O. and V.B. approaches for diatomic molecules, VSEPR theory and shape of molecules, hybridisation, resonance, dipole moment, structure parameters
such as bond length, bond angle and bond energy, hydrogen bonding, van der Waals interactions. Ionic solids; ionic radii, lattice energy (Born-Haber Cycle).
s, p and d Block Elements: Oxides, halides and hydrides of alkali and alkaline earth metals, B, Al, Si, N, P and S, silicones, general characteristics of 3d elements, coordination complexes: valence
bond and crystal field theory, color, geometry and magnetic properties.
Chemical Equilibria: Colligative properties of solutions, ionic equilibria in solution, solubility product, common ion effect, hydrolysis of salts, pH, buffer and their applications in chemical analysis.
Electrochemistry: Conductance, Kohlrausch law, Half Cell potentials, emf, Nernst equation, galvanic cells, thermodynamic aspects and their applications.
Reaction Kinetics: Rate constant, order of reaction, molecularity, activation energy, zero, first and second order kinetics, equilibrium constants (Kc, Kp and Kx) for homogeneous reactions, catalysis
and elementary enzyme reactions.
Thermodynamics: First law, reversible and irreversible processes, internal energy, enthalpy, Kirchhoff's equation, heat of reaction, Hess law, heat of formation, Second law, entropy, free energy, and
work function. Gibbs-Helmholtz equation, Clausius-Clapeyron equation, free energy change and equilibrium constant, Trouton's rule, Third law of thermodynamics.
Mechanistic Basis of Organic Reactions: Elementary treatment of SN1, SN2, E1 and E2 reactions, Hoffmann and Saytzeff rules, Addition reactions, Markovnikov rule and Kharasch effect, Diels-Alder
reaction, aromatic electrophilic substitution, orientation effect as exemplified by various functional groups.
Structure-Reactivity Correlations: Acids and bases, electronic and steric effects, optical and geometrical isomerism, tautomerism, concept of aromaticity
Organization of life. Importance of water. Cell structure and organelles. Structure and function of biomolecules: Carbohydrates, Lipids, Proteins and Nucleic acids. Biochemical separation techniques.
Spectroscopic methods; UV-visible and fluorescence. Protein structure, folding and function: Myoglobin, Hemoglobin, Lysozyme, ribonuclease A, Carboxypeptidase and Chymotrypsin. Enzyme kinetics and
regulation, Coenzymes.
Metabolism and bioenergetics. Generation and utilization of ATP. Photosynthesis. Major metabolic pathways and their regulation. Biological membranes. Transport across membranes. Signal transduction;
hormones and neurotransmitters.
DNA replication, transcription and translation. Biochemical regulation of gene expression. Recombinant DNA technology and applications. Genomics and Proteomics.
The immune system. Active and passive immunity. Complement system. Antibody structure, function and diversity. Cells of the immune system: T, B and macrophages. T and B cell activation. Major
histocompatibility complex. T cell receptor. Immunological techniques: Immunodiffusion, immunoelectrophoresis, RIA and ELISA.
Recombinant DNA technology for the production of therapeutic proteins. Micro array technology. Heterologous protein expression systems in bacteria, yeast etc.
Architecture of plant genome; plant tissue culture techniques; methods of gene transfer into plant cells; manipulation of phenotypic traits in plants; plant cell fermentations and production of
secondary metabolites using suspension/ immobilized cell culture; methods for plant micro propagation; crop improvement and development of transgenic plants. Expression of animal proteins in plants.
Animal cell metabolism and regulation; cell cycle; primary cell culture; nutritional requirements for animal cell culture; techniques for the mass culture of animal cell lines; production of
vaccines; growth hormones and interferons using animal cell culture; cytokines- production and therapeutic uses; hybridoma technology; vectors for gene transfer and expression in animal cells.
Transgenic animals and molecular pharming.
Microbial production of industrial enzymes; methods for immobilization of enzymes; kinetics of soluble and immobilized enzymes; application of soluble and immobilized enzymes; enzyme-based sensors.
Microbial growth kinetics; batch, fed batch and continuous culture of microbial cells; media for industrial fermentations; sterilization of air and media; design features and operation of stirred
tank, air-lift and fluidized bed reactors; aeration and agitation in aerobic fermentations; recovery and purification of fermentation products- filtration, centrifugation, cell disintegration,
solvent extraction and chromatographic separations; industrial fermentations for the production of ethanol, citric acid, lysine, penicillin and other biomolecules; simple calculations based on
material and energy balance of fermentation processes; application of microbes in the management of domestic and industrial wastes.
Anatomy: Roots, stem and leaves of land plants, meristems, vascular system, their ontogeny, structure and functions. Plant cell structure, organisation, organelles, cytoskeleton, cell wall and cell membrane.
Development: Cell cycle, cell division, senescence, hormonal regulation of growth; life cycle of an angiosperm, pollination, fertilization, embryogenesis, seed formation, seed storage proteins, seed
dormancy and germination. Concept of cellular totipotency, organogenesis and somatic embryogenesis, somaclonal variation, embryo culture, in vitro fertilization.
Physiology and Biochemistry: Plant water relations, transport of minerals and solutes, N2 metabolism, proteins and nucleic acid, respiration, photophysiology, photosynthesis, photorespiration;
biosynthesis, mechanism of action and physiological effects of plant growth regulators.
Genetics: Principles of Mendelian inheritance, linkage, recombination and genetic mapping; extrachromosomal inheritance; eukaryotic genome organization (chromatin structure) and regulation of gene
expression, gene mutation, chromosome aberrations (numerical and structural), transposons.
Plant Breeding: Principles, methods - selection, hybridization, heterosis; male sterility, self and inter-specific incompatibility; haploidy; somatic cell hybridization; molecular marker-assisted
selection; gene transfer methods viz. direct and vector-mediated, transgenic plants and their applications in agriculture.
Economic Botany: Economically important plants - cereals, pulses, plants yielding fiber, timber, sugar, beverages, oils, rubber, dyes, gums, drugs and narcotics - a general account.
Systematics: Systems of classification (non-phylogenetic vs. phylogenetic - outline), plant groups, molecular systematics.
Plant Pathology: Nature and classification of plant diseases, diseases of important crops caused by fungi, bacteria and viruses, and their control measures, mechanism(s) of pathogenesis and
resistance, molecular detection of pathogens; plant-microbe beneficial interactions.
Ecology and Plant Geography: Ecosystems - types, dynamics, degradation, ecological succession; food chains; vegetation types of the world; pollution and global warming; speciation and extinction,
conservation strategies, cryopreservation.
Historical perspective - Discovery of the microbial world; Controversy over spontaneous generation; Role of microorganisms in transformation of organic matter and in the causation of diseases.
Methods in microbiology - Pure culture techniques; Theory and practice of sterilization; Principles of microbial nutrition; Construction of culture media; Enrichment culture techniques for isolation
of chemoautotrophs, chemoheterotrophs and photosynthetic microorganisms.
Microbial evolution, systematics and taxonomy - Evolution of earth and earliest life forms; Primitive organisms and their metabolic strategies; New approaches to bacterial taxonomic classification
including ribotyping; Nomenclature.
Microbial diversity - Bacteria, archaea and their broad classification; Eukaryotic microbes, yeast, fungi, slime mold and protozoa; Viruses and their classification.
Microbial growth -The definition of growth, mathematical expression of growth, growth curve, measurement of growth and growth yields; Synchronous growth; Continuous culture.
Nutrition and metabolism - Overview of metabolism; Microbial nutrition; Energy classes of microorganisms; Culture media; Energetics, modes of ATP generation; ATP generation by heterotrophs;
Fermentation; Glycolysis; Respiration; The citric acid cycle; Electron transport systems; Alternate modes of energy generation; Pathways (anabolism) in the biosynthesis of amino acids, purines,
pyrimidines and fatty acids.
Metabolic diversity among microorganisms - Photosynthesis in microorganisms; Role of chlorophylls, carotenoids and phycobilins; Calvin cycle; Chemolithotrophy; Hydrogen-, iron- and nitrite-oxidizing
bacteria; Nitrate and sulfate reduction; Methanogenesis and acetogenesis.
Prokaryotic cells: structure-function - Cells walls of eubacteria (peptidoglycan) and related molecules; Outer-membrane of gram-negative bacteria; Cell wall and cell membrane synthesis; Flagella and
motility; Cell inclusions like endospores, gas vesicles.
Microbial diseases and host parasite relationships - Normal microflora of skin; Oral cavity; Gastrointestinal tract; Entry of pathogens into the host; Infectious disease transmission; Respiratory
infections caused by bacteria and viruses; Tuberculosis; Sexually transmitted diseases including AIDS; Diseases transmitted by animals (Rabies, plague), insects and ticks (rickettsias, Lyme disease,
malaria); Food and water borne diseases; Public health and water quality; Pathogenic fungi; Emerging and resurgent infectious diseases.
Chemotherapy/Antibiotics - Antimicrobial agents; Sulfa drugs; Antibiotics; Penicillins and cephalosporins; Broad-spectrum antibiotics; Antibiotics from prokaryotes; Antifungal antibiotics; Mode of
action; Resistance to antibiotics.
Microbial genetics - Genes, mutation and mutagenesis - UV and chemical mutagens; Types of mutations; Ames test for mutagenesis; Methods of genetic analysis. Bacterial genetic system - Transformation;
Conjugation; Transduction; Recombination; Plasmids and Transposons; Bacterial genetic map with reference to E. coli. Viruses and their genetic system - Phage λ and its life cycle; RNA phages; RNA
viruses; Retroviruses; Genetic systems of yeast and Neurospora; Extrachromosomal inheritance and mitochondrial genetics; Basic concept of genomics.
Animal world: Animal diversity, distribution, systematic and classification of animals, the phylogenetic relationship.
Evolution: Origin of life, history of life on earth, evolutionary theories, natural selection, adaptation, speciation.
Genetics: Principles of inheritance, molecular basis of heredity, the genetic material, transmission of genetic material, mutations, cytoplasmic inheritance.
Biochemistry and Molecular Biology: Nucleic acids, proteins and other biological macromolecules. Replication, transcription and translation, regulation of gene expression, organization of genome,
Krebs cycle, glycolysis, enzyme catalysis, hormones and their action.
Cell Biology: Structure of cell, cellular organelles and their structure and function, cell cycle, cell division, cellular differentiation, chromosome and chromatin structure. Eukaryotic gene
organisation and expression.
Animal Anatomy and Physiology: Comparative physiology, the respiratory system, circulatory system, digestive system, the nervous system, the excretory system, the endocrine system, the reproductive
system, the skeletal system, osmoregulation.
Parasitology and Immunology: Nature of parasite, host-parasite relation, protozoan and helminthic parasites, the immune response, cellular and humoral immune response, evolution of the immune system.
Development Biology: Embryonic development, cellular differentiation, organogenesis, metamorphosis, genetic basis of development.
Ecology: The ecosystem, habitats the food chain, population dynamics, species diversity, zoogeography, biogeochemical cycles, conservation biology.
Animal Behaviour: Types of behaviours, courtship, mating and territoriality, instinct, learning and memory, social behaviour across the animal taxa, communication, pheromones, evolution of animal behaviour.
Mathematical Logic: Propositional Logic; First Order Logic.
Probability: Conditional Probability; Mean, Median, Mode and Standard Deviation; Random Variables; Distributions; uniform, normal, exponential, Poisson, Binomial.
Set Theory & Algebra: Sets; Relations; Functions; Groups; Partial Orders; Lattice; Boolean Algebra.
Combinatorics: Permutations; Combinations; Counting; Summation; generating functions; recurrence relations; asymptotics.
Graph Theory: Connectivity; spanning trees; Cut vertices & edges; covering; matching; independent sets; Colouring; Planarity; Isomorphism.
Linear Algebra: Algebra of matrices, determinants, systems of linear equations, eigenvalues and eigenvectors.
Numerical Methods: LU decomposition for systems of linear equations; numerical solutions of non linear algebraic equations by Secant, Bisection and Newton-Raphson Methods; Numerical integration by
trapezoidal and Simpson's rules.
Calculus: Limit, Continuity & differentiability, Mean value Theorems, Theorems of integral calculus, evaluation of definite & improper integrals, Partial derivatives, Total derivatives, maxima & minima.
Regular Languages: finite automata, regular expressions, regular grammar.
Context free languages: push down automata, context free grammars
Digital Logic: Logic functions, minimization, design and synthesis of combinational and sequential circuits, number representation and computer arithmetic (fixed and floating point)
Computer organization: Machine instructions and addressing modes, ALU and data path, hardwired and microprogrammed control, memory interface, I/O interface (interrupt and DMA mode), serial
communication interface, instruction pipelining, cache, main and secondary storage
Data structures and Algorithms: the notion of abstract data types, stack, queue, list, set, string, tree, binary search tree, heap, graph, tree and graph traversals, connected components, spanning
trees, shortest paths, hashing, sorting, searching, design techniques (greedy, dynamic, divide and conquer), asymptotic analysis (best, worst, average cases) of time and space, upper and lower
bounds, intractability
Programming Methodology: C programming, program control (iteration, recursion, functions), scope, binding, parameter passing, elementary concepts of object oriented programming
Operating Systems (in the context of Unix): classical concepts (concurrency, synchronization, deadlock), processes, threads and interprocess communication, CPU scheduling, memory management, file
systems, I/O systems, protection and security
Information Systems and Software Engineering: information gathering, requirement and feasibility analysis, data flow diagrams, process specifications, input/output design, process life cycle,
planning and managing the project, design, coding, testing, implementation, maintenance.
Databases: relational model, database design, integrity constraints, normal forms, query languages (SQL), file structures (sequential, indexed), b-trees, transaction and concurrency control
Data Communication: data encoding and transmission, data link control, multiplexing, packet switching, LAN architecture, LAN systems (Ethernet, token ring), Network devices: switches, gateways, routers
Networks: ISO/OSI stack, sliding window protocols, routing protocols, TCP/UDP, application layer protocols & systems (http, smtp, dns, ftp), network security
Web technologies: three tier web based architecture; JSP, ASP, J2EE, .NET systems; html, XML
| {"url":"http://www.vyomworld.com/gate/prepare/syllabus.asp","timestamp":"2014-04-21T07:11:17Z","content_type":null,"content_length":"280752","record_id":"<urn:uuid:1b3eeb47-b14e-4f2a-9c06-06671c8c0000>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Multinomial distribution
December 8th 2010, 06:27 PM
Multinomial distribution
If five balanced dice are rolled, what is the probability that the number 1 and the number 4 will appear the same number of times?
I know that n=5, but I can't figure out what other numbers to plug into the equation. 1 can appear once and 4 can appear once, or 1 can appear twice and 4 can appear twice. Anyone know how to do
this??? The answer in the book is (2424)/(6^5)
Thanks in advance!!
December 8th 2010, 07:05 PM
So this includes the occurrences where, of the 5 dice:
both #1 and #4 appear once together
both #1 and #4 appear twice together
both #1 and #4 appear not at all
Can you think of any other ways?
December 9th 2010, 06:27 AM
No I cannot think of any other ways. And I don't know how to solve for the probability still
December 9th 2010, 06:32 AM
The number of ways:
□ zero times $4^5$
□ one time $(5)(4)(4^3)$
□ two times $\binom{5}{2}\binom{3}{2}(4)$
Now divide by $6^5$
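(A quick arithmetic check of those counts, my own tally and worth verifying: $4^5 = 1024$, $(5)(4)(4^3) = 1280$ and $\binom{5}{2}\binom{3}{2}(4) = 10\cdot 3\cdot 4 = 120$, so the total is $1024 + 1280 + 120 = 2424$, matching the book's $\frac{2424}{6^5}$.)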
December 9th 2010, 09:58 AM
That makes sense but does not give me the answer in the back of the book of 2424/(6^5)
December 9th 2010, 10:12 AM
Never mind, I figured it out. That 3 should be a 2 above
December 9th 2010, 10:42 AM | {"url":"http://mathhelpforum.com/advanced-statistics/165758-multinomial-distribution-print.html","timestamp":"2014-04-17T07:51:57Z","content_type":null,"content_length":"6767","record_id":"<urn:uuid:927cf86c-9f53-4159-b135-2ccd7a374614>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tangential and Radial Acceleration | Motion in Two Dimensions
Tangential and Radial Acceleration
Figure 4.17 The motion of a particle along an arbitrary curved path lying in the xy plane. If the velocity vector v (always tangent to the path) changes in direction and magnitude, the component
vectors of the acceleration a are a tangential component a[t] and a radial component a[r].
Now let us consider a particle moving along a curved path where the velocity changes both in direction and in magnitude, as shown in Figure 4.17. As is always the case, the velocity vector is tangent
to the path, but now the direction of the acceleration vector a changes from point to point. This vector can be resolved into two component vectors: a radial component vector a[r] and a tangential
component vector a[t]. Thus, a can be written as the vector sum of these component vectors:

a = a[r] + a[t]          (4.16)
The tangential acceleration causes the change in the speed of the particle. It is parallel to the instantaneous velocity, and its magnitude is

a[t] = |dv/dt|          (4.17)
The radial acceleration arises from the change in direction of the velocity vector as described earlier and has an absolute magnitude given by

a[r] = v^2/r          (4.18)

where r is the radius of curvature of the path at the point in question.
Because a[r] and a[t] are mutually perpendicular component vectors of a, it follows that a = √(a[r]^2 + a[t]^2). As in the case of uniform circular motion, a[r] in nonuniform circular motion always points
toward the center of curvature, as shown in Figure 4.17. Also, at a given speed, a[r] is large when the radius of curvature is small (as at points A and B in Figure 4.17) and small when r is large
(such as at point C). The direction of a[t] is either in the same direction as v (if v is increasing) or opposite v (if v is decreasing). In uniform circular motion, where v is constant, a[t] = 0 and
the acceleration is always completely radial, as we described in Section 4.4. (Note: Eq. 4.18 is identical to Eq. 4.15.) In other words, uniform circular motion is a special case of motion along a
curved path. Furthermore, if the direction of v does not change, then there is no radial acceleration and the motion is one-dimensional (in this case, a[r] = 0, but a[t] may not be zero).
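As a quick numerical illustration (a worked check, not part of the original text): suppose a particle rounds a curve of radius r = 50 m at speed v = 20 m/s while its speed increases at 2 m/s^2. Then a[r] = v^2/r = (20 m/s)^2/(50 m) = 8 m/s^2 and a[t] = 2 m/s^2, so a = √(a[r]^2 + a[t]^2) = √(64 + 4) ≈ 8.2 m/s^2, directed mostly toward the center of curvature.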
Figure 4.18 (a) Descriptions of the unit vectors r̂ and θ̂. (b) The total acceleration a of a particle moving along a curved path (which at any instant is part of a circle of radius r) is the sum of
radial and tangential components. The radial component is directed toward the center of curvature. If the tangential component of acceleration becomes zero, the particle follows uniform circular
It is convenient to write the acceleration of a particle moving in a circular path in terms of unit vectors. We do this by defining the unit vectors r̂ and θ̂ shown in Figure 4.18a, where r̂ is a unit
vector lying along the radius vector and directed radially outward from the center of the circle and θ̂ is a unit vector tangent to the circle. The direction of θ̂ is in the direction of increasing θ,
where θ is measured counterclockwise from the positive x axis. Note that both r̂ and θ̂ “move along with the particle” and so vary in time. Using this notation, we can express the total acceleration as

a = a[t] + a[r] = (dv/dt)θ̂ − (v^2/r)r̂          (4.19)

The negative sign on the v^2/r term in Equation 4.19 indicates that the radial acceleration is always directed radially inward, opposite r̂. | {"url":"http://www.kshitij-school.com/Study-Material/Class-11/Physics/Motion-in-two-dimensions/Tangential-and-radial-acceleration.aspx","timestamp":"2014-04-18T13:06:03Z","content_type":null,"content_length":"51487","record_id":"<urn:uuid:dd22dd7e-ea90-4c02-bce7-b91d381510f8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
In the PLT Scheme Simulation Collection, a set is a general structure that maintains a (doubly linked) list of its elements. The length of the list, i.e. the n field, is implemented using a variable
and, therefore, provides automatic data collection.
(struct set-cell
((next set-cell?)
(previous set-cell?)
(priority real?)
(item any?)))
The set-cell structure represents an item in a set.
(struct set
((variable-n variable?)
(first-cell (union/c set-cell? #f))
(last-cell (union/c set-cell? #f))
(type (one-of/c #:fifo #:lifo #:priority))))
The set structure represents a collection of items.
variable-n - a variable that stores the number of elements in the set. This allows automatic data collection of the number of elements. Note that the actual number of elements is available (as a
natural number) using the pseudo-field set-n.
(make-set type)
(case-> (-> (one-of/c #:fifo #:lifo #:priority) set?)
(-> set?))
This function returns a new (empty) set of the specified type. If type is not specified, a #:fifo set is created (i.e. a queue).
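A minimal usage sketch (mine, not from the reference; it assumes the set bindings above have been required from the collection):

(define q (make-set))       ; default #:fifo set, i.e. a queue
(set-insert! q 'a)
(set-insert! q 'b)          ; FIFO: 'b goes to the end
(set-first q)               ; => a
(set-remove! q)             ; => a (removed from the front)
(set-n q)                   ; => 1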
(set-n set)
(-> set? natural-number/c)
This function returns the value of the n field of the variable-n field of set. This is the number of elements in the set. Note that this is stored in a variable to facilitate automatic data collection of the number of elements in a set.
(set-empty? x)
(-> any? boolean?)
This function returns #t if x is a set and is empty, or #f otherwise.
(set-first set)
(-> set? any)
This function returns the first element in set. An error is signaled if set is empty.
(set-last set)
(-> set? any)
This function returns the last element in set. An error is signaled if set is empty.
(set-for-each-cell set proc)
(-> set? procedure? void)
This function iterates over the cells in set and applies the procedure proc to each cell in turn.
(set-for-each set proc)
(-> set? procedure? void)
This function iterates over the elements in set and applies the procedure proc to each element in turn.
(set-find-cell set item)
(-> set? any? (union/c set-cell? #f))
This function returns the cell in set that contains item, or #f if it is not found.
(set-insert-cell-first! set cell)
(-> set? set-cell? void?)
This function inserts the cell as the first element of set. This is independent of the type of the set.
(set-insert-first! set item)
(-> set? any? void?)
This function creates a new cell containing item and inserts it as the first element of set. This is independent of the type of the set.
(set-insert-cell-last! set cell)
(-> set? set-cell? void?)
This function inserts the cell as the last element of set. This is independent of the type of the set.
(set-insert-last! set item)
(-> set? any? void?)
This function creates a new cell containing item and inserts it as the last element of set. This is independent of the type of the set.
(set-insert-cell-first! set cell)
(-> set? set-cell? void?)
This function inserts the cell into set with its stored priority.
(set-insert-first! set item priority)
(-> set? any? real? void?)
This function creates a cell containing item and inserts it into set with the given priority.
(set-remove-cell! set cell)
(-> set? set-cell? void?)
This function removes the cell from set. No error is raised if the cell is not found.
(set-remove-item! set item)
(-> set? any? (union/c set-cell? #f))
This function removes the cell containing the item from set. If the item was found, the cell containing it is returned, otherwise #f is returned.
(set-remove-first-cell! set error-thunk)
(set-remove-first-cell! set)
(case-> (-> set? procedure? any)
(-> set? set-cell?))
This function removes the first cell from the set and returns it. If the set is empty and the error-thunk is provided, it is evaluated and the result returned. Otherwise, if the set is empty, an error is raised.
(set-remove-first! set error-thunk)
(set-remove-first! set)
(case-> (-> set? procedure? any)
(-> set? any))
This function removes the first item from the set and returns it. If the set is empty and the error-thunk is provided, it is evaluated and the result returned. Otherwise, if the set is empty, an error is raised.
(set-remove-last-cell! set error-thunk)
(set-remove-last-cell! set)
(case-> (-> set? procedure? any)
(-> set? set-cell?))
This function removes the last cell from the set and returns it. If the set is empty and the error-thunk is provided, it is evaluated and the result returned. Otherwise, if the set is empty, an error is raised.
(set-remove-last! set error-thunk)
(set-remove-last! set)
(case-> (-> set? procedure? any)
(-> set? any))
This function removes the last item from the set and returns it. If the set is empty and the error-thunk is provided, it is evaluated and the result returned. Otherwise, if the set is empty, an error is raised.
(set-insert! set item priority)
(set-insert! set item)
(case-> (-> set? any? real? void?)
(-> set? any? void?))
This function creates a cell containing item and inserts it into set with the given priority, according to the type of the set (a usage sketch follows the list below):
#:fifo - the item is inserted at the end of the set. The priority, if provided, is ignored.
#:lifo - the item is inserted at the beginning of the set. The priority, if provided, is ignored.
#:priority - the item is inserted in the set according to the priority. If priority is not provided, 100 is used.
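For example (an illustrative sketch; whether smaller or larger priority values sort toward the front of the set is the library's convention, so verify on a small case before relying on it):

(define pq (make-set #:priority))
(set-insert! pq 'routine 100.0)   ; inserted with an explicit priority
(set-insert! pq 'urgent 1.0)
(set-first pq)                    ; the element the priority ordering puts first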
(set-remove! set item)
(set-remove! set)
(case-> (-> set? any? void?)
(-> set? any?))
If an item is specified, the cell in the set containing the item is removed and returned.
If an item is not specified, then this function removes the first item from the set and returns it. An error is signaled if the set is empty.
The furnace model will be used in Chapter 10 Continuous Simulation Models to illustrate building a continuous model. This initial model is a purely discrete-event model of the same system. The
furnace itself is modeled by a set.
This simulation model is derived from an example in Introduction to Combined Discrete-Continuous Simulation Using SIMSCRIPT II.5 by Abdel-Moaty M Fayek[1].
;;; Model 1 - Discrete Event Model

(require (planet "simulation-with-graphics.ss"
                 ("williams" "simulation.plt")))
(require (planet "random-distributions.ss"
                 ("williams" "science.plt")))

;;; Simulation Parameters
(define end-time 720.0)
(define n-pits 7)

;;; Data collection variables
(define total-ingots 0)
(define wait-time (make-variable))

;;; Model Definition
(define random-sources (make-random-source-vector 2))

(define furnace #f)
(define pit #f)

;; The scheduler must be a process because it calls wait.
(define-process (scheduler)
  (let loop ()
    (schedule now (ingot))
    (wait (random-exponential (vector-ref random-sources 0) 1.5))
    (loop)))

(define-process (ingot)
  (let ((arrive-time (current-simulation-time)))
    (with-resource (pit)
      ;; record how long this ingot waited for a pit
      (set-variable-value!
       wait-time (- (current-simulation-time) arrive-time))
      (set-insert! furnace self)
      (work (random-flat (vector-ref random-sources 1) 4.0 8.0))
      (set-remove! furnace self))
    (set! total-ingots (+ total-ingots 1))))

(define (stop-sim)
  (printf
   "Report after ~a Simulated Hours - ~a Ingots Processed~n"
   (current-simulation-time) total-ingots)
  (printf "~n-- Ingot Waiting Time Statistics --~n")
  (printf "Mean Wait Time = ~a~n"
          (variable-mean wait-time))
  (printf "Variance = ~a~n"
          (variable-variance wait-time))
  (printf "Maximum Wait Time = ~a~n"
          (variable-maximum wait-time))
  (printf "~n-- Furnace Utilization Statistics --~n")
  (printf "Mean No. of Ingots = ~a~n"
          (variable-mean (set-variable-n furnace)))
  (printf "Variance = ~a~n"
          (variable-variance (set-variable-n furnace)))
  (printf "Maximum No. of Ingots = ~a~n"
          (variable-maximum (set-variable-n furnace)))
  (printf "Minimum No. of Ingots = ~a~n"
          (variable-minimum (set-variable-n furnace)))
  ;; assumes the graphics interface's history-plot procedure
  (printf "~a~n"
          (history-plot
           (variable-history (set-variable-n furnace))
           "Furnace Utilization History"))
  ;; assumes the collection provides stop-simulation to end the run
  (stop-simulation))

(define (initialize)
  (set! total-ingots 0)
  (set! wait-time (make-variable))
  (set! pit (make-resource n-pits))
  (set! furnace (make-set))
  (accumulate (variable-history (set-variable-n furnace)))
  (tally (variable-statistics wait-time))
  (schedule (at end-time) (stop-sim))
  (schedule (at 0.0) (scheduler)))

;; body reconstructed -- assumes the collection's
;; with-new-simulation-environment and start-simulation
(define (run-simulation)
  (with-new-simulation-environment
   (initialize)
   (start-simulation)))
The following is the output from the simulation.
Report after 720.0 Simulated Hours - 479 Ingots Processed
-- Ingot Waiting Time Statistics --
Mean Wait Time = 0.1482393804317038
Variance = 0.24601817483957691
Maximum Wait Time = 3.593058032365832
-- Furnace Utilization Statistics --
Mean No. of Ingots = 4.0063959874393795
Variance = 3.2693449366238347
Maximum No. of Ingots = 7
Minimum No. of Ingots = 0 | {"url":"http://planet.racket-lang.org/package-source/williams/simulation.plt/2/0/html/simulation-Z-H-10.html","timestamp":"2014-04-18T18:33:32Z","content_type":null,"content_length":"33304","record_id":"<urn:uuid:53a67804-4bd8-461b-9b89-3c4cab5d13b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Browse by Author
Number of items: 2.
Liaw, S
Liu, R. T. and Liaw, S. S. and Maini, P. K. (2007) Oscillatory Turing Patterns in a Simple Reaction-Diffusion System. Journal of the Korean Physical Society, 50 (1). pp. 234-238.
Liu, R. T. and Liaw, S. S. and Maini, P. K. (2006) Two-stage Turing model for generating pigment patterns on the leopard and the jaguar. Physical Review E, 74 (1). 011914-1-011914-8.
Liu, R
Liu, R. T. and Liaw, S. S. and Maini, P. K. (2007) Oscillatory Turing Patterns in a Simple Reaction-Diffusion System. Journal of the Korean Physical Society, 50 (1). pp. 234-238.
Liu, R. T. and Liaw, S. S. and Maini, P. K. (2006) Two-stage Turing model for generating pigment patterns on the leopard and the jaguar. Physical Review E, 74 (1). 011914-1-011914-8.
Maini, P
Liu, R. T. and Liaw, S. S. and Maini, P. K. (2007) Oscillatory Turing Patterns in a Simple Reaction-Diffusion System. Journal of the Korean Physical Society, 50 (1). pp. 234-238.
Liu, R. T. and Liaw, S. S. and Maini, P. K. (2006) Two-stage Turing model for generating pigment patterns on the leopard and the jaguar. Physical Review E, 74 (1). 011914-1-011914-8.
This list was generated on Sun Apr 20 03:25:25 2014 BST. | {"url":"http://eprints.maths.ox.ac.uk/view/author/Liaw=3AS=2E_S=2E=3A=3A.html","timestamp":"2014-04-20T03:11:55Z","content_type":null,"content_length":"9691","record_id":"<urn:uuid:84559234-d44a-4ae6-91ad-c3d8e38a7a4f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parallelogram Question
May 24th 2009, 12:54 PM #1
May 2009
Parallelogram Question
ABC has vertices of A(3,2,-5) B(4,-1,7) C(-8,3,-6)
Determine the coordinates of D such that ABCD is a parallelogram
Is the parallelogram a rectangle? Justify your answer.
Since ABCD is a parallelogram, $\vec{AB} = \vec{DC}$, i.e. $B - A = C - D$; solving gives $D = A + C - B$, and that should give you the coordinates of D.
As to know whether it's a rectangle or not... you should know that a rectangle is a parallelogram with a right angle.
So check if the scalar product between $\vec{AB}$ and $\vec{BC}$ is 0.
Memo : if $A(x_a,y_a,z_a)$ and $B(x_b,y_b,z_b)$, then $\vec{AB} = (x_b - x_a,\; y_b - y_a,\; z_b - z_a)$
(coordinates of the vector)
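Carrying the arithmetic through (my own computation, worth double-checking): $D = A + C - B = (3 - 4 - 8,\; 2 + 1 + 3,\; -5 - 7 - 6) = (-9, 6, -18)$. For the rectangle test, $\vec{AB} = (1, -3, 12)$ and $\vec{BC} = (-12, 4, -13)$, so $\vec{AB}\cdot\vec{BC} = -12 - 12 - 156 = -180 \neq 0$, and the parallelogram is not a rectangle.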
May 24th 2009, 01:18 PM #2 | {"url":"http://mathhelpforum.com/geometry/90312-parallelogram-question.html","timestamp":"2014-04-18T12:34:27Z","content_type":null,"content_length":"34350","record_id":"<urn:uuid:29bd13e6-f69c-47ac-8a21-7bc893d3a821>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding depth from a 2d screen point [Archive] - OpenGL Discussion and Help Forums
06-02-2004, 09:11 AM
I am writing a 3d design/viewing program in Visual Basic and I am trying to create a function where the user can draw lines on a plane (right now I just have y=0) by clicking on the screen. I need to
know how to transform the 2d point (x,0) to a 3d point. Another related problem is that the mouse's coordinates (x,y) are not the same as the camera or world space coordinates.
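One standard approach to the first question (a sketch, not from this thread): unproject the mouse point twice with gluUnProject, once at window depth 0 (the near plane) and once at depth 1 (the far plane), using the current modelview matrix, projection matrix and viewport. That yields two 3D points N and F defining a ray through the scene; intersect the ray with the plane y = 0 via t = -N.y / (F.y - N.y), giving the world point P = N + t(F - N). Note that gluUnProject expects window coordinates with the origin at the bottom-left, so the mouse y usually needs flipping: winY = viewportHeight - mouseY.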
Also, VB's line drawing function is too slow for my application... is there a high performance line drawing function available? | {"url":"https://www.opengl.org/discussion_boards/archive/index.php/t-159660.html","timestamp":"2014-04-20T21:09:39Z","content_type":null,"content_length":"11304","record_id":"<urn:uuid:f1cc8c56-455f-4196-8b70-ba29dad67c4d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5795727 - Gravitational attractor engine for adaptively autoclustering n-dimensional datastreams
This is a continuation of application Ser. No. 08/039,465, filed Apr. 26, 1993, now U.S. Pat. No. 5,627,040, which is a continuation-in-part of U.S. Ser. No. 07/751,020, filed Aug. 28, 1991, now
This invention relates to a method for classifying multi-parameter data in real time (or from recorded data) into cluster groups for the purpose of defining different populations of particles in a
sample. This invention is particularly useful in the field of flow cytometry wherein multi-parameter data is recorded for each cell that passes through an illumination and sensing region. It is
especially useful for classifying and counting immunofluorescently labeled CD3, CD4 and CD8 lymphocytes in blood samples from AIDS patients.
Particle analysis generally comprises the analysis of cells, nuclei, chromosomes and other particles for the purpose of identifying the particles as members of different populations and/or sorting
the particles into different populations. This type of analysis includes automated analysis by means of image and flow cytometry. In either instance, the particle, such as a cell, may be labeled with
one or more markers and then examined for the presence or absence of one or more such markers. In the case of a cell, such as a leukocyte, tumor cell or microorganism, the marker can be directed to a
molecule on the cell surface or to a molecule in the cytoplasm. Examination of a cell's physical characteristics, as well as the presence or absence of marker(s), provides additional information
which can be useful in identifying the population to which a cell belongs.
Cytometry comprises a well known methodology using multi-parameter data for identifying and distinguishing between different cell types in a sample. For example, the sample may be drawn from a
variety of biological fluids, such as blood, lymph or urine, or may be derived from suspensions of cells from hard tissues such as colon, lung, breast, kidney or liver. In a flow cytometer, cells are
passed, in suspension, substantially one at a time through one or more sensing regions where in each region each cell is illuminated by an energy source. The energy source generally comprises an
illumination means that emits light of a single wavelength such as that provided by a laser (e.g., He/Ne or argon) or a mercury arc lamp with appropriate filters. Light at 488 nm is a generally used
wavelength of emission in a flow cytometer having a single sensing region.
In series with a sensing region, multiple light collection means, such as photomultiplier tubes (or "PMT"), are used to record light that passes through each cell (generally referred to as forward
light scatter), light that is reflected orthogonal to the direction of the flow of the cells through the sensing region (generally referred to as orthogonal or side light scatter) and fluorescent
light emitted from the cell, if it is labeled with fluorescent marker(s), as the cell passes through the sensing region and is illuminated by the energy source. Each of forward light scatter (or
FSC), orthogonal light scatter (SSC), and fluorescence emissions (FL1, FL2, etc.) comprise a separate parameter for each cell (or each "event"). Thus, for example, two, three or four parameters can
be collected (and recorded) from a cell labeled with two different fluorescence markers.
Flow cytometers further comprise data acquisition, analysis and recording means, such as a computer, wherein multiple data channels record data from each PMT for the light scatter and fluorescence
emitted by each cell as it passes through the sensing region. The purpose of the analysis system is to classify and count cells wherein each cell presents itself as a set of digitized parameter
values. Typically, by current analysis methods, the data collected in real time (or recorded for later analysis) is plotted in 2-D space for ease of visualization. Such plots are referred to as "dot
plots" and a typical example of a dot plot drawn from light scatter data recorded for leukocytes is shown in FIG. 1 of U.S. Pat. No. 4,987,086. By plotting orthogonal light scatter versus forward
light scatter, one can distinguish between granulocytes, monocytes and lymphocytes in a population of leukocytes isolated from whole blood. By electronically (or manually) "gating" on only
lymphocytes using light scatter, for example, and by the use of the appropriate monoclonal antibodies labeled with fluorochromes of different emission wavelength, one can further distinguish between
cell types within the lymphocyte population (e.g., between T helper cells and T cytotoxic cells). U.S. Pat. Nos. 4,727,020, 4,704,891, 4,599,307 and 4,987,086 describe the arrangement of the various
components that comprise a flow cytometer, the general principles of use and one approach to gating on cells in order to discriminate between populations of cells in a blood sample.
Of particular interest is the analysis of cells from patients infected with HIV, the virus which causes AIDS. It is well known that CD4.sup.+ T lymphocytes play an important role in HIV infection and
AIDS. For example, counting the number of CD4.sup.+ T lymphocytes in a sample of blood from an infected individual will provide an indication of the progress of the disease. A cell count under 400
per mm.sup.3 is an indication that the patient has progressed from being seropositive to AIDS. In addition to counting CD4.sup.+ T lymphocytes, CD8.sup.+ T lymphocytes also have been counted and a
ratio of CD4:CD8 cells has been used in understanding AIDS.
In both cases, a sample of whole blood is obtained from a patient. Monoclonal antibodies against CD3 (a pan-T lymphocyte marker), CD4 and CD8 are labeled directly or indirectly with a fluorescent
dye. These dyes have emission spectra that are distinguishable from each other. (Examples of such dyes are set forth in example 1 of U.S. Pat. No. 4,745,285.) The labeled cells then are run on the
flow cytometer and data is recorded. Analysis of the data can proceed in real time or be stored in list mode for later analysis.
While data analyzed in 2-D space can yield discrete populations of cells, most often the dot plots represent projections of multiple clusters. As a result, often it is difficult to distinguish
between cells which fall into regions of apparent overlap between clusters. In such cases, cells can be inadvertently classified in a wrong cluster, and thus, contribute inaccuracy to the population
counts and percentages being reported. In blood from an HIV infected patient for example, over-inclusion of T cells as being CD4.sup.+ could lead a clinician to believe a patient had not progressed
to AIDS, and thus, certain treatment which otherwise might be given could be withheld. In cancers, such as leukemia, certain residual tumor cells might remain in the bone marrow after therapy. These
residual cells are present in very low frequencies (i.e., their presence is rare and thus their occurrence in a large sample is a "rare event"), and thus, their detection and classification are both
difficult and important.
Current data analysis methods fail to provide sufficient means to discriminate between clusters of cells, and thus, fail to permit more accurate identification and/or sorting of cells into different
populations. In addition, such methods fail to predict if the preparative conditions used by the technician were done properly (e.g., improper staining techniques leading to non-specific staining or
pipetting improper amounts of reagent(s) and/or sample(s)). Finally, most methods work well for mononuclear preparations from whole blood or on erythrocyte lysed whole blood but perform poorly on
unlysed whole blood because of the over abundance of red cells and debris in a sample.
The autoclustering method, described herein as the "gravitational attractor engine", addresses the need to automatically assign classifications to multi-parameter events as they arrive from an array
of sensors such as the light collection means of a cytometer. It also functions in the post-classification of recordings of multi-parameter events in list-mode or database format. It is particularly
useful in clustering 2-parameter data from CD3 and CD4 as well as CD3 and CD8 T cells labeled with immunofluorescent markers in blood samples from AIDS patients.
The gravitational attractor consists of a geometric boundary surface of fixed size, shape and orientation, but of variable position, a computational engine by which the boundary surface positions
itself optimally to enclose a cluster of multi-parameter events. Multiple attractors may be employed simultaneously for the purposes of classifying multiple clusters of events within the same
datastream or recorded data distribution, the strategy being to assign one attractor per population to be identified and/or sorted. Classification of events in the datastream consists of a two-step
process: In the first step (pre-analysis), the datastream is analyzed for purposes of precisely centering each attractor's membership boundary surface about the statistical center-of-mass of the data
cluster (i.e., population) it is intending to classify. Pre-analysis is terminated after a pre-determined number of events have been analyzed or if insignificant deviations in an attractor position
is found. In the second step (classification), each attractor's membership boundary is "locked down in place", and incoming datastream events are tested against membership boundaries for
classification inclusion vs. exclusion.
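To make the two-step process concrete, here is a minimal sketch of a single spherical attractor (an illustration under stated assumptions, not the patent's code: the names are invented, the update runs as a batch iteration over a recorded list rather than incrementally over the datastream, and the stopping tolerance is arbitrary):

#lang racket

;; Euclidean distance between two parameter vectors.
(define (dist u v)
  (sqrt (for/sum ((a (in-vector u)) (b (in-vector v)))
          (expt (- a b) 2))))

;; Pre-analysis: re-center the membership boundary on the center of
;; mass of the events it currently captures; iterate until the move
;; is negligible. If no events fall inside (a missing cluster), the
;; attractor stays at its seed location.
(define (pre-analyze events centroid radius (eps 0.01))
  (define inside
    (for/list ((e events) #:when (<= (dist e centroid) radius)) e))
  (if (null? inside)
      centroid
      (let* ((n (length inside))
             (new-centroid
              (for/vector ((i (in-range (vector-length centroid))))
                (/ (for/sum ((e inside)) (vector-ref e i)) n))))
        (if (< (dist new-centroid centroid) eps)
            new-centroid
            (pre-analyze events new-centroid radius eps)))))

;; Classification: the centroid is locked down in place; each incoming
;; event is simply tested against the membership boundary.
(define (member-of? event centroid radius)
  (<= (dist event centroid) radius))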
Major benefits of the gravitational attractor engine are that it: 1) requires no list-mode recording of events in the process of their classification (i.e., data may be analyzed in real time); 2)
provides a classification method tolerant of between-sample drift in the central value of a data cluster which may arise from any arbitrary combination of instrumentation, sample-preparation and
intrinsic sample variance sources; 3) exhibits stability in the case of multiple missing clusters and can count particles in a population down to absolute zero in the vicinity of where the cluster is
expected to locate; and 4) provides continuous access to population vector means and membership counts during sampling of the datastream, allowing continuous process quality assurance (or "PQA")
during time-consuming, rare-event assays.
Several extensions to the gravitational attractor engine increase its benefits: 1) hyperspherical boundary surfaces can be elongated on a preferred axis to obtain a cigar-shaped attractor; 2) the
boundary surface used to gate events for gravitational interaction during pre-analysis can be different in shape and extent from the membership boundary applied during classification; and 3) the
subset of parameters used to cluster events can be different for different attractors, allowing smear-inducing parameters to be ignored and permitting data classification at varying degrees of
dimensional collapse.
The primary advantage of the gravitational attractor engine is its capacity for accurate and efficient autoclustering, that is, it can replace manual-clustering methods which require human judgment
to adapt gating geometry to normal variances in the positions of target clusters. By comparison, prior autoclustering methods which rely on histogram curve analysis to locate threshold-type
separators are less-robust in the handling of missing populations (especially multiple missing clusters).
A cigar-shaped attractor engine performs well at classifying diagonally-elongated clusters whose "stretch" originates from partially-correlated (i.e., uncompensated) events. By comparison, prior
methods utilizing 1-D histogram analysis do not work as well with uncompensated clusters because their 1-D histogram projections consume excessive curvespace. Since an attractor can be defined in
arbitrary N-dimensional space, the problem of overlapping clusters may be redressed through the addition of extra parameters to tease them apart at no additional computational complexity. The
simplicity and highly parallel nature of the attractor engine's computations, together with its stream-oriented data interaction, makes this autoclassification method ideally suited to real time
classification performed on high-event rate, multi-parameter datastreams. Compared to prior methods which require remembering a list-mode recording in order to perform data analysis, the attractor
engine's memory requirements are small and unrelated to the datastream length being sampled, thus making practicable routine analyses in which several million events are sampled. The salient benefit
of such mega-assays in cellular diagnostics is to detect diseased cells at thresholds as low as 1 per million normal cells (i.e., rare-event assays), thus, enabling earlier detection and milder
interventions to arrest disease.
The file of the patent includes at least one drawing executed in color.
FIGS. 1A and 1B illustrate two multi-dimensional attractors (one spherical and one cigar-shaped) at their seed locations in multi-space prior to pre-analysis. FIGS. 1A and 1B depict two such
projective scatterplots (5 and 6), showing the spherical attractor's centroid (1), radius (2) and orbital band (7), and the cigar attractor's centerline (3), radius (4) and orbital band (8).
FIGS. 2A and 2B illustrate the same two attractors, by the same projection scatterplots, at their center-of-mass locations in multi-space during classification.
FIGS. 3A, 3B and 3C comprise a series of colored 2-D dot plots of FSC vs. SSC (FIG. 3A), log fluorescence FITC vs. log fluorescence PE (FIG. 3B), and log fluorescence FITC vs. log fluorescence PerCP (FIG. 3C) for data collected in list mode from erythrocyte-lysed whole blood to which different fluorescently labeled monoclonal antibodies have been added. The three gravitational attractors and their
respective seed locations are shown prior to autoclustering. The blue dots and boundaries identify the NK cell attractor; the red dots and boundaries identify the B lymphocyte attractor; and the green dots and boundaries identify the T lymphocyte attractor.
FIGS. 4A, 4B and 4C comprise the colored 2-D dot plots as set forth in FIGS. 3A, 3B and 3C post analysis showing the autoclustered populations and final positions of the attractors. The gray dots
represent unclustered events (e.g., monocytes, granulocytes and debris) in the sample.
FIGS. 5A and 5B comprise two dot plots of log PE versus PE/Cy5 fluorescence showing three autoclustered populations from a sample of unlysed whole blood from an AIDS patient to which a solution
containing a known concentration of fluorescently labeled microbeads and fluorescently labeled (FIG. 5A) anti-CD3 and anti-CD4 monoclonal antibodies or (FIG. 5B) anti-CD3 and anti-CD8 monoclonal
antibodies have been added.
FIGS. 6A and 6B comprise a dot plot as in FIGS. 5A and 5B, however, the blood is taken from a normal individual but the sample has been rejected by PQA.
A gravitational attractor is a small computational "engine". Initially, it contains one or more geometric parameters set by the user for each type of sample to be analyzed or fixed to define an
expected target cluster's shape, size and approximate location. The attractor engine further comprises a method for locating a cluster's actual center-of-mass in the datastream being analyzed, and to
subsequently classify events in the arriving datastream which satisfy the attractor's geometric membership predicate. The term "gravitational" is apt because the attractor finds its optimal location
enclosing the data cluster by falling to its center-of-mass location under the accumulative gravitational force of events in proximity to its expected location in multi-space. The term "attractor",
drawn from dynamical systems theory, refers to the behavior of a system whereby a multitude of initial state vectors move toward, and converge upon a common, equilibrium end-state vector. In this
case, the state vector corresponds to the instantaneous vector location of a roving geometric boundary surface (specifically a rigidly-attached reference point within it), as the boundary moves from
an initial, expected "seed" location to equilibrium at a data cluster's actual center-of-mass location.
The gravitational attractor described below illustrates the simplest case of membership geometry, the hypersphere. The engine of a spherical attractor comprises the following fixed and variable
Fixed components:
    s    seed, or initial centroid vector of the hypersphere, representing the approximate expected location of the cluster
    r    radius of the hypersphere
Variable components:
    c    current centroid vector of the hypersphere
    n    number of gravitationally-interacting events so far within the current datastream
Before a datastream begins, the invariant aspects of the target cluster are first specified in terms of seed location, s, and radius, r. The specifications of s and r are made by observing projections of the cluster in 2-D projection scatterplots, whereby two coordinates of s are adjusted at a time using a 2-D locator device, and r is edited by "pulling" on its appearance with a
locator device until satisfactory.
The events in the datastream encountered consist of a variable number of multi-parameter events e_i, where i indexes the number (or sequence) of the event in the stream and e is the vector of
parameter values comprising that particular event. Prior to analyzing the datastream, c is initialized to the seed location, s.
Attractor autoclustering of the datastream comprises a two-step process: In the first step, pre-analysis, the datastream is analyzed for purposes of precisely centering the attractor's membership
boundary surface about the statistical center-of-mass of the data cluster it is intending to classify. Upon arrival of the first event, and that of each subsequent event during pre-analysis, the
spherical attractor transforms each event vector into its own local coordinate system, whose origin is based at c:
local e_i = e_i - c (transformation to local coordinates)
Next, the attractor decides whether local e_i is short enough in length (e_i is close enough in proximity to c) to be allowed attractive pull on c. The interaction gating predicate, g, evaluates affirmatively if local e_i has vector length less than r:
g(local e_i) = length(local e_i) < r
If the above proximity test is met, e_i is permitted to exert an increment of attractive pull on c (i.e., to enter into the center-of-mass calculation). The center-of-mass of a lone cluster in an otherwise vacuous dataspace can be defined simply as the vector-mean of all N event vectors, e_i:
c = Σ e_i / N (center-of-mass for lone cluster)
In multi-cluster distributions, each cluster applies its interaction gating function, g, whose job is to protect its centroid calculation from the influence of density pockets elsewhere in space:
c = Σ e_i · g(local e_i) / N
Rather than update c continuously with each interaction (an inefficient approach prone to instability in the case of missing clusters), the attractor's centroid, c, is updated on a fixed schedule at
prescribed interaction count milestones (i.e., s1, s2, s3 . . . sm). For this purpose, the attractor keeps a running vector sum sigma of all its interacting event vectors. At the start of
pre-analysis, sigma and n are zeroed. During pre-analysis, each arriving event vector which satisfies the above gating predicate is accumulated, by vector addition, into sigma, the interaction count
n is incremented,
sigma = sigma + e_i (effect of each event interaction)
and if n is one of the scheduled update milestones (e.g., S1) the centroid is updated
c = sigma / n (effect of centroid update)
whereby the new value of c is the running vector sum sigma, scalar divided by n. At the completion of each update, c contains the vector mean of all events which have so far interacted with the
attractor. This new, refined value of c, which carries the weight of more data than did its previous value, governs subsequent interaction gating until the next update milestone is reached.
The initial seed point, s, serves as a default centroid to get the calculation started. It should reflect the best available information about expected cluster position. Once the gated vector sum has
accumulated some actual data (e.g., s1=50), the computed centroid, c, replaces s as the best-available central value for anchoring the interaction gate.
local e_i = e_i - c (computed c replaces s)
The update schedule for c subserves the goal of only improving its accuracy over time. The first attractor update milestone s1 is called the threshold of inertia. It must be overcome in order to
replace the seed value s, as a check on wandering. If a cluster were depleted down to a handful of events, and the centroid were allowed to update on the first gated event, and that event fell just
inside the gate, the updated gate could be dislocated up to a distance r from the seed point, possibly excluding centrist events from further consideration. If the threshold of inertia cannot be
surmounted, no positional refinement is allowed (i.e., the seed value, s, specifies default emplacement of cluster membership geometry). Consequently, clusters which have become so depleted that no
density landmark can be established are default-gated about the point where they were expected to have been found.
If the threshold of inertia can be surmounted, the attractor is allowed to gravitate toward the local center-of-mass. Periodic centroid updates (e.g., every 50 interactions) would move the
attractor toward a convergence point, but a more efficient update schedule observes the statistical rule that residual error diminishes as the inverse square root of the number of interactions.
Therefore, a parabolic update schedule (e.g., s1=100, s2=400, s3=900, s4=1600 . . . ) provides statistically significant centroid corrections on every update, whereas periodic updates take the
centroid along a more oscillatory path toward the same eventual outcome.
The cessation of the pre-analysis activity for a single attractor is triggered by either attainment of the number of interactions, sm, specified as the final scheduled update milestone (the
attractor's "interaction quota"), or a global time-out metered in time or total events acquired, which ever comes first. If multiple attractors are interacting with the pre-analysis datastream,
attractors which have reached their interaction quotas lay dormant while awaiting the attainment of quota by all other attractors, or the global time-out, whichever comes first. If pre-analysis is
terminated by global time-out, each attractor which fell short of its interaction quota but which surpassed its threshold of inertia is given a final centroid update, so that event interactions
accumulated since its last previous update are represented in the final value of the centroid, c. The specification of a global time-out, as a function of time or total acquired events, is necessary to guarantee termination of datastream pre-analysis, unless there are a priori guarantees of sufficient population for each target cluster in each datastream sample, which would always guarantee termination by satisfaction of interaction quotas.
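To make the pre-analysis mechanics above concrete, the following is a minimal illustrative sketch in Python (it is not code from the patent; the class and attribute names, the NumPy vector representation of events, and the default milestone schedule are assumptions made for the example):

import numpy as np

class SphericalAttractor:
    """Illustrative sketch of a spherical gravitational attractor (not patent code)."""

    def __init__(self, seed, radius, milestones=(100, 400, 900, 1600)):
        self.s = np.asarray(seed, dtype=float)  # fixed seed location (expected cluster position)
        self.r = float(radius)                  # fixed hypersphere radius
        self.milestones = tuple(milestones)     # parabolic update schedule s1, s2, ... sm
        self.quota = self.milestones[-1]        # interaction quota sm
        self.c = self.s.copy()                  # current centroid, initialized to the seed
        self.sigma = np.zeros_like(self.s)      # running vector sum of interacting events
        self.n = 0                              # interaction count
        self.count = 0                          # membership count (used during classification)

    def pre_analyze(self, e):
        """Offer one event vector to the attractor during pre-analysis."""
        if self.n >= self.quota:
            return                              # quota reached: lie dormant
        local_e = e - self.c                    # transform to local coordinates
        if np.linalg.norm(local_e) < self.r:    # interaction gating predicate g
            self.sigma += e
            self.n += 1
            if self.n in self.milestones:       # scheduled centroid update
                self.c = self.sigma / self.n

    def finalize(self):
        """Final centroid update at global time-out, if the threshold of inertia (s1) was passed."""
        if self.n >= self.milestones[0]:
            self.c = self.sigma / self.n

Feeding the datastream is then a matter of calling pre_analyze on each arriving event vector, and finalize at the global time-out.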
Attractor-based autoclustering is a 2-step process. In the second step, "classification", each attractor's hyperspherical membership boundary is locked down in place at its centroid, c, frozen after
the last pre-analysis centroid update was completed (or at s if no update took place). As each subsequent incoming event arrives in the continuation of the same datastream which was pre-analyzed, the
incoming event is tested against each membership boundary for classification inclusion vs. exclusion, and a membership count incremented at each inclusion decision. If multiple classification and
counting of the same event is unnatural or undesirable, one provides a contention-resolving mechanism to assure that each event is classified and counted by but one attractor. A straightforward
mechanism is to prioritize competing classifications, another is to award membership based on closest Euclidean proximity. One distinct advantage of prioritized classifications is that it can easily
extend to attractors with more complex geometries which can overlap in more complex ways, and for this reason it has been adopted into practice.
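Continuing the illustrative sketch above, classification with prioritized contention resolution might look like the following (again an assumption-laden sketch, with list order standing in for priority):

import numpy as np

def classify(event, attractors):
    """Test an event against each locked-down membership boundary, in priority order.
    Returns the index of the winning attractor, or None for an unclustered event."""
    for i, a in enumerate(attractors):         # earlier in the list = higher priority
        if np.linalg.norm(event - a.c) < a.r:  # membership predicate on the frozen centroid
            a.count += 1                       # accretional membership count
            return i
    return None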
During classification, the membership count so far accumulated by each attractor is available for deciding when enough target events have been counted to terminate the assay. These accretional counts
may be used for early detection of missing clusters, for example, indicating a sample-preparation omission which is cause for aborting the assay.
The cessation of classification is triggered by attainment of "membership quotas" for all attractors, or a global time-out expressed in time or total acquired events during the classification phase.
Both during and after the classification phase, each attractor holds its cluster population count and centroid (location) vectors, and thus, provides additional benefits to data analyses. Such
benefits include quality-assurance mechanisms by which the user can define acceptable vs. aberrant datastream distributions, and automatically have the latter flagged.
A "minimum expected population" (defined apriori for each cluster as a function of membership count or a derivative thereof) is compared to the actual membership counts (or a derivative thereof)
during, and after termination of, classification. An error condition or warning is generated for each cluster evidencing an unexpectedly low population. This type of PQA benefits from the unique
missing cluster stability of the gravitational attractor classification method (i.e., the attractor will accurately count down to absolute zero the occurrence of events in the vicinity where the
cluster was expected to have presented itself). A check on attainment of minimum expected population per each target cluster makes the overall autoclustering system vigilant to any number of instrumentation, sample preparation, and intrinsic sample aberrations that express as absent target populations.
As a second benefit, a "tether" may be employed to define the permissible roving distance of each attractor from its seed position. A tether length (defined a priori and expressed as a scalar distance
in multi-space) is compared to the actual displacement of c from its starting seed location, s, to determine if the tether length has been exceeded. If exceeded, an error or warning is generated
indicating that a cluster has been found too far from its expected location. A test on proximity of actual cluster (vector mean) location to expected cluster location, per each target cluster, makes
the overall autoclustering system vigilant to any number of instrumentation, sample preparation, and intrinsic sample aberrations that express as unreasonable displacements in multi-space cluster locations.
Though other classification methods can yield a population vector mean (and can compare proximity to an a priori expected location), the attractor method has the unique advantage of requiring no
list-mode recording. Because the tether constraint can be checked each time the attractor moves its position during pre-analysis, it is practical to detect cluster position aberrance early in
exposure to the datastream, thus a time-consuming mega-assay can be interrupted early on, rather than waiting until its completion to find out it must be rejected for PQA reasons.
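Such a tether check reduces to a single distance comparison per centroid update; a minimal sketch (illustrative names, not patent code):

import numpy as np

def tether_ok(attractor, tether_length):
    # Flag a cluster found too far from its expected (seed) location.
    displacement = np.linalg.norm(attractor.c - attractor.s)
    return displacement <= tether_length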
As a third benefit, a well-formed cluster should consist of a dense area of events surrounded by a void region. To assure proper cluster membership and classification where a cluster is less well
formed, an orbital band can be placed around the cluster membership boundary. The purpose of the orbital band is to guard against the movement of a cluster too far from its boundary, an unexpected
change in the shape of a cluster and higher than expected noise. In any or all of such situations, a high number of events within the orbital band (or "orbiters") is an indication that the data may
be unacceptable. Generally, less than 3% of the events for a cluster should fall within the orbital band.
Referring to FIGS. 1A and 1B, the centroid (1), radius (2) and orbital band (7) are shown for a spherical attractor. The thickness of the orbital band is arbitrary. A "thin" band will include fewer
orbiters than a "thick" band. FIGS. 2A and 2B show the movement of all of the components during classification.
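As an illustrative sketch (assuming the spherical-attractor sketch above, and reading the 3% figure as the ratio of orbiters to cluster members, which is one interpretation of the text):

import numpy as np

def orbital_band_report(attractor, events, band_width):
    # Count cluster members and orbital-band events over recorded event vectors.
    d = np.linalg.norm(np.asarray(events, dtype=float) - attractor.c, axis=1)
    members = int(np.sum(d < attractor.r))
    orbiters = int(np.sum((d >= attractor.r) & (d < attractor.r + band_width)))
    suspect = members > 0 and orbiters / members > 0.03  # PQA warning threshold
    return members, orbiters, suspect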
A limitation of the hyperspherical attractor (i.e., its not being well-fitted to the elongated shape of many multispace data clusters) can be overcome by a modification to the gating (or boundary
surface) geometry. The characteristic of the attractors, whereby each employs an interaction gating function, g, whose job is to protect its centroid calculation from the influence of events in other
clusters, makes it advantageous to deploy gating geometries that closely approximate actual cluster shape. Better-fitting boundaries allow the targeting of more populations within a fixed-size dataspace.
An adaptation that elongates the spherical attractor is to replace its centroid vector, c, with a straight line segment in multi-space running between two endpoint vectors, e_1 and e_2. The
line connecting the two endpoints is called the attractor's "centerline". Instead of measuring the proximity of an event in terms of its distance from a single centerpoint, by extension, proximity is
measured in terms of distance from the nearest point on the centerline. The locus of points equidistant from the centerline gives rise to a boundary surface that is a hypercylinder with rounded ends.
In 3-D space, this solid assumes the shape of a cigar.
The cigar attractor's radius, cr, specifies both the cigar's cylindrical radius and the radius of curvature of its endcaps.
The midpoint, mp, of the centerline is the center of the cigar, and serves as the origin of the cigar's local coordinate system.
The geometric components of the cigar attractor that differ from those of the spherical attractor are:
Fixed components:
    seed centerline = [e_1s, e_2s]    seed endpoints of the initial centerline of the cigar, representing the approximate expected location and orientation of the cluster
    cr                                radius of the cigar cylinder and endcaps
Variable components:
    centerline = [e_1, e_2]           current endpoints
    mp                                midpoint of the current centerline
The cigar attractor's interaction gating function g(e_i) for event e_i is:
g(e_i) = distance(e_i, centerline) < cr
The distance function first finds p, the nearest point on the centerline to e_i (the projection of e_i onto the centerline). If p projects beyond the end of the centerline, the distance to the closest endpoint is computed; otherwise the distance between p and e_i is used.
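This is the standard point-to-segment distance; a minimal sketch (illustrative, not patent code):

import numpy as np

def distance_to_centerline(e, e1, e2):
    # Distance from event vector e to the nearest point on segment [e1, e2].
    seg = e2 - e1
    t = np.dot(e - e1, seg) / np.dot(seg, seg)  # projection onto the infinite line
    t = min(max(t, 0.0), 1.0)                   # beyond an end: clamp to the endpoint
    p = e1 + t * seg                            # nearest point on the centerline
    return np.linalg.norm(e - p)

def cigar_gate(e, e1, e2, cr):
    return distance_to_centerline(e, e1, e2) < cr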
When the cigar attractor commences an update of its location during pre-analysis, the new midpoint mp assumes the value of the gated vector mean of all events which have thus far interacted. The
endpoints of the centerline, maintaining rigid values in local coordinates, receive the same delta vector as was applied to mp, thus the centerline moves as a rigid structure under the pull of
combined gravitational event force on its midpoint.
The cigar membership gating function applied during classification is the same as g(e_i) above.
The proximity function and centerline update are the only aspects of the cigar attractor that differ from the spherical attractor. All other behaviors are identical. A primary benefit of the cigar
attractor is its ability to handle correlated multi-parameter clusters. If two sensory channels are identical in their sensitivity and fed the same signal, all their 2-D event vectors will fall on
the diagonal characterized by the equation (x=y). If two sensory channels have partially-overlapping sensitivities and are exposed to each other's uncorrelated input signals, the joint distribution
will retain some diagonal stretch by virtue of unintended channel-crosstalk (uncompensated data). Electronic compensation (the subtracting out of cross talk components) is difficult to specify as the
number of sensory channels and cross-talk interactions increases. A more practical approach, reduced to practice in this invention, is to cluster directly on raw, uncompensated event vectors
employing a cigar attractor oriented along the principal stretch vector of the cluster in multi-space. The specification of the centerline endpoints is made by observing projections of the cluster in
2-D projection scatterplots, whereby two coordinates of the endpoint are adjusted at a time using a 2-D locator device. The specification of cr is edited by "pulling" on its appearance with a
locator device until satisfactory.
Referring to FIGS. 1A and 1B, the centerline (3), radius (4) and orbital band (8) are shown for a cigar attractor. FIGS. 2A and 2B show the movement of these components during classification.
A slightly different geometry (other than cigar-shaped) suitable for elongated clusters is the hyperellipse. The attachment of an elliptical boundary surface to the attractor behavior claimed herein
will be referred to as the elliptical attractor.
The orientation axis of the ellipse is specified by its two foci vectors f_1 and f_2. The proximity of an event is measured in terms of the sum of its two Euclidean distances from the two foci, and the ellipse radius, er, specifies the upper limit of this sum for event inclusion.
The elliptical attractor's interaction gating function, g(e_i), for event e_i is:
g(e_i) = distance(e_i, f_1) + distance(e_i, f_2) < er
The midpoint, mp, of the orientation axis is the center of the ellipse, and serves as its local coordinate system origin. The specification of the principal axis and the gating function are the only
two aspects of the elliptical attractor that differentiate it from the cigar attractor. As in the case of the cigar attractor, positional deltas applied to the midpoint propagate to each focus so that the ellipse can maintain its fixed orientation, size and shape.
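A sketch of the two elliptical-attractor operations just described (names illustrative, not patent code):

import numpy as np

def ellipse_gate(e, f1, f2, er):
    # Sum-of-distances membership test against the two foci.
    return np.linalg.norm(e - f1) + np.linalg.norm(e - f2) < er

def translate_ellipse(f1, f2, new_mp):
    # Move the ellipse rigidly so its midpoint lands on the new center-of-mass.
    delta = new_mp - (f1 + f2) / 2.0
    return f1 + delta, f2 + delta  # orientation, size and shape are preserved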
The purpose of an attractor's classification geometry is to suitably enclose its target cluster's event cloud when deployed at its center-of-mass. The purpose of its interaction geometry is to define
a "seek area" in which the attractor can expect to find its cluster (and little else). Since these two geometry's serve differing purposes, it is sometimes advantageous to customize the geometry's
subserving interactions and classifications.
A spherical attractor may employ a "membership radius" different from its "interaction radius". Other geometries can be invoked for defining an attractor's interaction and membership boundaries
(i.e., squares, rectangles, tilted rectangles, ellipses or arbitrary mouse-drawn regions). In general, cluster membership boundaries are chosen to approximate the actual size and shape of their
target clusters. Attractor interaction boundaries are chosen that both 1) delimit the scan area where center-of-mass should be found and 2) exclude neighboring clusters from possible interaction.
An attractor can be defined on a subset of arriving parameters. Different attractors may be defined on different subsets of arriving parameters, if useful for clustering their respective populations.
A mask, M, or vector of binary switches, is stored within each attractor to signify which parameters of incoming event vectors are to be attended to and which ignored. Since the attractor engine can
be defined in any N-dimensional space, it can be defined on a subset of parameters without embellishment beyond the mere requirement to specify M. The vector operations that underlie the attractor
engine are implemented in such a way that masked out parameters are treated as non-existent in a completely transparent fashion. The benefits of parameter masking are that it 1) permits data clusters
to be defined in the subset of parameters which affords the sharpest cluster definition, 2) allows parameters to be ignored which smear an otherwise well-formed cluster and 3) supports classification at varying degrees of dimensional collapse. The latter benefit requires that a single event be permitted to be classified by multiple attractors.
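A sketch of such masking (representing M as a boolean vector is an assumption of the example):

import numpy as np

def masked_distance(e, c, mask):
    # Distance over attended parameters only; masked-out ones are treated as absent.
    diff = (e - c)[mask]
    return np.linalg.norm(diff)

# For example, cluster on FSC and SSC only, ignoring three fluorescence channels:
mask = np.array([True, True, False, False, False])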
Referring to FIGS. 3A, 3B, 3C, 4A, 4B and 4C, peripheral whole blood was obtained from normal adult volunteers in EDTA-containing evacuated blood collection tubes. Erythrocytes were lysed in a lysing solution comprising NH4Cl, KHCO3 and EDTA. The lysed cells were spun down and removed.
The remaining cells were placed in a test tube containing PBS. To this tube were added, in sequence, Leu 4 FITC (anti-CD3; BDIS), Leu 11+19 PE (anti-CD16, CD56; BDIS) and Leu 12 PerCp (anti-CD19;
BDIS). These antibodies will label T lymphocytes, NK cells and B lymphocytes respectively. After incubation the cells were washed and then run on a FACScan brand flow cytometer (BDIS) equipped with
Consort FACScan Research Software (BDIS). The data was acquired and stored in list-mode. 15,000 events were recorded.
In FIGS. 3A, 3B and 3C, the seed location, s, and radius, r or cr, of each population's attractor was identified prior to analysis based upon well known and published data. A spherical attractor was
applied for B lymphocytes while cigar attractors were used for NK cells and T lymphocytes. Each attractor then was mouse-drawn to represent the expected locations of each population when the data was
analyzed for scatter (FIG. 3A), PE vs. FITC fluorescence (FIG. 3B) and PerCp vs. FITC fluorescence (FIG. 3C). Gray dots are shown interposed on the dot plots showing unclustered events. (In other
embodiments, it will be appreciated that these unclustered events need not be displayed in either real time or list-mode analysis.)
In FIGS. 4A, 4B and 4C, the results of classification are displayed after all recorded events have been analyzed. The parameters measured, and thus included in each event vector, were FSC, SSC, log
PE fluorescence, log FITC fluorescence and log PerCP fluorescence. For B lymphocytes, 757 cells (or approximately 19% of all clustered events) were within this cluster. For T lymphocytes, 2596 (or
approximately 66% of all clustered events) were within the cluster; and for NK cells, 587 events were within this cluster. It should be appreciated that the data analysis for all of the attractors
occurs at the same time. FIGS. 4A, 4B and 4C represent the 2-D projections of each attractor post analysis.
Referring to FIGS. 5A, 5B, 6A and 6B, whole blood was obtained from an AIDS patient (FIGS. 5A and 5B) and from a normal adult volunteer in EDTA-containing evacuated blood collection tubes. Each
sample was split into two aliquots. A mixture of 50,000 fluorescent microbeads, titered amounts of antibody and buffer to make 400 μl was prepared for each aliquot. To one aliquot from each sample
the antibodies consisted of Leu 4 PE/Cy5 and Leu 3a PE. (Cy5 was obtained from Biological Detection Systems.) To the other aliquot from each sample the antibodies consisted of Leu 4 PE/Cy5 and Leu 2a
PE. (Leu2a is an anti-CD8 monoclonal antibody available from BDIS.) To the mixture in each aliquot was added 50 μl of whole blood. The aliquots were incubated for 30 minutes, vortexed and then run on
a FACSCount brand flow cytometer. Data was acquired and stored in list-mode. A fluorescence threshold was set in the PE/Cy5 channel to exclude the majority of red blood cells; however, care was taken to assure that the threshold was to the left of the far-most expected edge of the CD4− and CD8− attractors.
Three elliptical attractors were applied to the bead, CD4− and CD4+ or CD8− and CD8+ clusters. One difficulty encountered in the analysis of CD8 cells is that, unlike CD4 cells, CD8 cells do not differentiate into well-defined positive and negative clusters. A small number of CD8 cells will appear to be "dim." These dim cells are CD8+ and therefore must be included in the count if the absolute count is to be accurate.
A new clustering tool was developed to solve this problem. A "pipe" is drawn connecting the upper (i.e., CD8+) cluster with the lower (i.e., CD8−) cluster. It is drawn so that in a 2-D plot one side extends from the left-most edge of the upper cluster boundary to the left-most edge of the lower cluster boundary and the other side extends from the right-most edge of the upper cluster boundary to the right-most edge of the lower cluster boundary. Any events falling within the orbital bands surrounding the cluster boundaries of the pipe are monitored as a PQA check assuring proper containment of CD8dim cells and as a PQA check against encroachment by debris.
In addition to the pipe region tool described above, an additional tool was developed to handle the special case where fluorescent control and/or reference beads are included in the analysis of
fluorescently labelled cells. In this instance, a circular 2-D bead peak attractor is used to pinpoint the vector mean of the beads, which is then used to predict, by fixed vector offsets, the most
likely positions of the cell population clusters. The goal is that the bead peak location will reveal drift in the optical power alignment and sensitivity of the instrument. Any drift in the bead
peak predicts similar drift in the cell clusters; therefore, any offset in the location of the bead peak will cause the seed locations to be offset by a similar amount in a similar direction. This
may be accomplished by a two-step analysis where only beads are analyzed initially in order to establish the bead peak or by means of analysis of a control tube prior to actual sample acquisition. In
the former case, a circular attractor is employed to establish the bead peak while an elliptical attractor is employed in the analysis step.
FIGS. 5A and 5B display the final positions of the clusters and the events that fell within each cluster for whole blood from an AIDS patient. In FIG. 5A, the majority of events within a cluster occur within the CD4− or CD8− clusters. There are few events that fall outside the cluster that are not either CD4+ or CD4− T cells or beads. In FIG. 5B, the events are distributed in a manner similar to CD4+ cells; however, the pipe region is applied to collect those CD8+ cells that express "dim" amounts of fluorescence. Table I sets forth the numbers of
events that fell within each cluster as well as those non-red blood cell events that were not clustered.
TABLE I
           CD4 Tube                            CD8 Tube
Beads    6729    Orb. Beads     4    Beads    17229    Orb. Beads     17
CD4+      874    Orb. CD4+     56    CD8+      5101    Orb. CD8+     426
CD4−     4579    Orb. CD4−    223    CD8−      1548    Orb. CD8−     227
                                     CD8dim     586    Orb. CD8dim   117
Based upon this data, the number of CD4+ cells per μl of whole blood was calculated as 156; the number of CD3+ cells per μl of whole blood was calculated as 972 in the CD4 tube and 978 in the CD8 tube; and the number of CD8+ cells per μl of whole blood was calculated as 769. The number of cells in the orbital bands was low, confirming the integrity of the clusters.
The data from FIGS. 5A and 5B are to be compared with the data from FIGS. 6A and 6B to show how this invention provides PQA. For example, from FIG. 6A it can be seen that the CD4− cluster is contaminated with debris and red blood cells, whereas in FIG. 5A there is a separation between the red blood cells/debris and the CD4− cells. This problem also shows up in Table II where the number of events occurring in the orbital bands for CD4− and CD8− is higher than should be expected if cluster integrity had been maintained. Based on this data, the sample in FIGS. 6A and
6B should have been rejected.
TABLE II
           CD4 Tube                            CD8 Tube
Beads    9468    Orb. Beads     1    Beads    17453    Orb. Beads     13
CD4+     2501    Orb. CD4+     69    CD8+       713    Orb. CD8+      82
CD4−     1519    Orb. CD4−    763    CD8−      2501    Orb. CD8−     343
                                     CD8dim     189    Orb. CD8dim   173
Another aspect of this invention also is shown in Table II. For both the CD4 and CD8 tubes, once the number of events that were CD4+ exceeded 2500, the counting ceased. The instrument had been set with 2500 events in the CD4+ window as an auto-shutoff. The same is true for the number of CD8− events in the CD8 tube.
All publications and patent applications mentioned in this specification are indicative of the level of ordinary skill in the art to which this invention pertains. All publications and patent
applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
It will be apparent to one of ordinary skill in the art that many changes and modifications can be made in the invention without departing from the spirit or scope of the appended claims.
So How Many People Can the Aquifer Support?
So how many people can the aquifer support? In reality, most people do not depend solely on water from the aquifer. A combination of surface water and groundwater is used to support the population.
Also, variations throughout the huge extent of the aquifer make it difficult to estimate the total volume of water humans could actually pump from it. Over time, variations in climate, population
growth, water use, and other variables will also alter the actual length of time that the aquifer can support people's water needs.
In order to come up with an estimate of the number of people an aquifer could support, some assumptions must be made. Use the assumed values below, or come up with your own to calculate an estimate
that answers the investigation question.
10. Calculate an estimate of the number of people who could get all the water they would need all their lives from the Denver Basin aquifer system. (A worked example follows the assumptions below.) Assume that:
The aquifer contains 15 trillion gallons of water.
People use 150 gallons of water per person per day.
Humans live eighty years.
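One way to work item 10 with these assumed values (a sketch; substituting your own assumptions changes the answer):

gallons_available = 15e12                 # 15 trillion gallons in the aquifer
gallons_per_lifetime = 150 * 365 * 80     # 150 gal/day x 365 days/year x 80 years = 4,380,000
people = gallons_available / gallons_per_lifetime
print(f"about {people:,.0f} people")      # about 3,424,658

So, under these assumptions, the aquifer could supply roughly 3.4 million people with all the water they would use in their lifetimes.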
Quantum mechanics as a theory of probability
Pitowsky, Itamar (2005) Quantum mechanics as a theory of probability. [Preprint]
We develop and defend the thesis that the Hilbert space formalism of quantum mechanics is a new theory of probability. The theory, like its classical counterpart, consists of an algebra of events,
and the probability measures defined on it. The construction proceeds in the following steps: (a) Axioms for the algebra of events are introduced following Birkhoff and von Neumann. All axioms,
except the one that expresses the uncertainty principle, are shared with the classical event space. The only models for the set of axioms are lattices of subspaces of inner product spaces over a
field K. (b) Another axiom due to Soler forces K to be the field of real, or complex numbers, or the quaternions. We suggest a probabilistic reading of Soler's axiom. (c) Gleason's theorem fully
characterizes the probability measures on the algebra of events, so that Born's rule is derived. (d) Gleason's theorem is equivalent to the existence of a certain finite set of rays, with a
particular orthogonality graph (Wondergraph). Consequently, all aspects of quantum probability can be derived from rational probability assignments to finite "quantum gambles". (e) All experimental
aspects of entanglement (the violation of Bell's inequality in particular) are explained as natural outcomes of the probabilistic structure. (f) We hypothesize that even in the absence of decoherence macroscopic entanglement can very rarely be observed, and provide a precise conjecture to that effect. We also discuss the relation of the present approach to quantum logic, realism and truth, and
the measurement problem.
Item Type: Preprint
Additional Information: Forthcoming in a Festschrift for Jeffrey Bub, ed. W. Demopoulos and the author, Springer (Kluwer): University of Western Ontario Series in Philosophy of Science.
Subjects: Specific Sciences > Physics > Quantum Mechanics
Depositing User: Itamar Pitowsky
Date Deposited: 13 Oct 2005
Last Modified: 07 Oct 2010 11:13
Item ID: 2474
URI: http://philsci-archive.pitt.edu/id/eprint/2474
counting trees with two kinds of vertices and a fixed number of edges between one kind
I'm interested in the following problem: given vertices $v_1, \dots, v_j$ and $w_{j+1}, \dots, w_n$ I want to count the number of trees with these $n$ vertices such that the number of edges between
the $v_i$ vertices is exactly $t$, for fixed $t$. Clearly, we should only consider $t \in \{0, 1, \dots, n-j-1\}$.
The way in which I tried to solve it seems to be unsuccessful and probably not the best approach. The $t$ edges between the $j$ vertices $v_i$ will form a forest with $j-t$ components. We can do this
in $\binom{j-1}{j-t-1} j^t$ ways, I think. If we consider any such tree $T$ and if we remove the vertices $v_i$, then we get a forest with the $w_i$ vertices. The number of components in this forest
could go from 1 to $n-j$. Hence, we can compute the sum over all the possible number of components $c$ and repeat the above computation for the number of forest with $n-j$ vertices and $c$
components. Finally, fixed a forest with the $v_i$ vertices ($j-t$ components) and one with the $w_k$ vertices ($c$ components), we need to count the number of spanning trees in $K_{j-t, c}$. A
complete bipartite graph $K_{m,n}$ has $m^{n−1} n^{m−1}$ spanning trees. However, each edge - say, from component $A$ to component $B$ - corresponds to $|A| |B|$ possible edges, as each component can
have several vertices: but we don't have that information! If we fix a spanning tree $T$, then the number of corresponding trees for answering the original question associated to $T$ is equal to the
product of the size of each component $C_i$ to the power of $deg_T(C_i)$. Is there any "expectation" approach that could work?
I would appreciate any idea, suggestion or help! Thanks.
graph-theory co.combinatorics
Here is a suggested way to count the quantity. Take a tree T and select t edges from it. Compute the number of vertices k incident to these t edges. Paint the k vertices with a subset of the first
j vertices, and then paint the rest of the vertices. You will end up for that tree T with a weighted sum of terms of the form k!(n-k)! as the number of ways to color that tree, with the sum ranging
over t-subsets of edges of T. The nice thing is that k is bounded by simple functions of t, so you can roughly approximate the sum quickly. Gerhard "Ask Me About System Design" Paseman, 2012.06.13
– Gerhard Paseman Jun 13 '12 at 16:48
1 Answer
We can do it by the matrix tree theorem. Let $T_{n,j,t}$ be the number of trees as you define them. Make an $n\times n$ matrix $A=(a_{rs})$ thus: off-diagonal elements $a_{rs}$ are
$-x$ if $r,s\le j$ and $-1$ otherwise. Diagonal elements are set to make the row sums 0. Now strike off the last row and the last column. The determinant of what is left is $$F_{n,j}
(x) = \sum_t T_{n,j,t}x^t$$ by the matrix tree theorem.
Now someone can find a cute proof, but Maple is adamant that $$F_{n,j}(x) = n^{n-j-1} (jx+n-j)^{j-1},$$ and so $$T_{n,j,t} = n^{n-j-1} \binom{j-1}{t} j^t(n-j)^{j-t-1}. $$
Can someone see a slick way to find that determinant? The eigenvalues are evidently $1$ (once), $n$ ($n-j-1$ times) and $jx+n-j$ ($j-1$ times).
Or maybe the answer suggests a direct proof?
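For what it's worth, the identity is easy to machine-check for small $n$ and $j$; here is a quick SymPy sketch (added for illustration, not part of the original answer; the function name F and the ranges tested are arbitrary choices):

import sympy as sp

x = sp.symbols('x')

def F(n, j):
    # Build the weighted Laplacian: off-diagonal entries are -x when both
    # indices are among the first j vertices, -1 otherwise; diagonal entries
    # are set so that each row sums to 0.
    A = sp.zeros(n, n)
    for r in range(n):
        for s in range(n):
            if r != s:
                A[r, s] = -x if (r < j and s < j) else -1
    for r in range(n):
        A[r, r] = -sum(A[r, s] for s in range(n) if s != r)
    # Strike off the last row and column; by the weighted matrix tree theorem
    # the determinant is sum_t T_{n,j,t} x^t.
    return sp.expand(A[:-1, :-1].det())

for n in range(2, 7):
    for j in range(1, n):
        closed_form = sp.expand(n**(n - j - 1) * (j*x + n - j)**(j - 1))
        assert sp.expand(F(n, j) - closed_form) == 0
print("matches n^(n-j-1) * (jx + n - j)^(j-1) for all tested n, j")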
Can all varieties with a given Hilbert polynomial be rigid?
Let $X$ be a canonically polarized variety with Hilbert polynomial $h$.
Does there exist a non-rigid canonically polarized variety with Hilbert polynomial $h$?
When is this the case, and when is this not the case?
ag.algebraic-geometry complex-geometry deformation-theory
Math Forum Discussions - minimum perimeter
Date: Nov 7, 1997 4:03 PM
Author: Joshua Zucker
Subject: minimum perimeter
The question:
Given an acute triangle, find the inscribed triangle (with one point
on each side of the original triangle) of minimum perimeter.
The answer:
The triangle formed by the feet of the altitudes of the original
triangle has the smallest perimeter.
The proof:
I don't know yet.
--Joshua Zucker
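A quick numerical sanity check of the claimed answer, added for illustration (not part of the original post): for one sample acute triangle it compares the orthic-triangle perimeter against a brute-force grid search over one point per side.

import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.3, 3.1)  # a sample acute triangle

def lerp(P, Q, t):
    return (P[0] + t * (Q[0] - P[0]), P[1] + t * (Q[1] - P[1]))

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def perim(t1, t2, t3):
    # one point on each side: BC, CA, AB
    P, Q, R = lerp(B, C, t1), lerp(C, A, t2), lerp(A, B, t3)
    return dist(P, Q) + dist(Q, R) + dist(R, P)

def foot(P, Q, R):
    # foot of the perpendicular from P onto line QR
    dx, dy = R[0] - Q[0], R[1] - Q[1]
    t = ((P[0] - Q[0]) * dx + (P[1] - Q[1]) * dy) / (dx * dx + dy * dy)
    return lerp(Q, R, t)

Ha, Hb, Hc = foot(A, B, C), foot(B, C, A), foot(C, A, B)
orthic = dist(Ha, Hb) + dist(Hb, Hc) + dist(Hc, Ha)

steps = [i / 60 for i in range(1, 60)]
grid_min = min(perim(t1, t2, t3) for t1 in steps for t2 in steps for t3 in steps)

print(f"orthic (altitude-feet) perimeter: {orthic:.6f}")
print(f"best perimeter on a 59^3 grid:    {grid_min:.6f}")  # >= orthic, and close to it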
Impossible Lewisian Modal Realism
Many people think that talking about possible worlds is useful in philosophy. A good number of those people think that talking about impossible worlds is also useful. In most cases, talking about
impossible worlds as well as possible worlds is innocuous. On most of our views about what worlds are, impossible worlds are no more ontologically problematic than possible worlds: sets of
propositions all of which can’t be true together are no more mysterious than sets of propositions all of which can be true together; if talk of possible worlds is a merely pragmatically useful
fiction, talk of impossible worlds can be such without any further mystery. And so if you hold some such view about worlds, the question as to whether we should talk about impossible worlds depends
solely on whether to do so is useful – there is no metaphysical problem in doing so.
Not so, seemingly, if you are a Lewisian realist about worlds. For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world with round squares is a world with
round squares as parts. And so, to believe in the latter world is to believe in round squares. And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which
could not exist. In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents
think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, or etc. Impossible worlds are not metaphysically mysterious on the latter views
because there is no metaphysical puzzle in there being a description of something that couldn’t exist, or a picture of something that couldn’t exist; but they are a metaphysical puzzle for Lewis,
because there is a metaphysical puzzle in there being something that couldn’t exist.
Nonetheless, some think this is a price worth paying: they like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to
include impossible worlds. I’ve heard this move a few times in conversation, but the one person I know of who has defended it in print is Ira Kiourti. (Yagisawa defends a view similar to Lewisian
realism with impossible worlds, but with some crucial differences.) Now, there are some big, and familiar, problems with believing in genuine impossible worlds that they each try to deal with, but I
am yet to be convinced can be solved. (See my critical study of Yagisawa.) But I want to raise a problem for Lewisian realism with impossible worlds that I haven’t seen discussed and which I don’t
even know how one would start to answer.
I can see how Lewisian realism with impossible worlds is supposed to deal with impossibilities like ‘There is a round square’ or ‘Frank is taller than Jim and Jim is taller than Frank’. I just need
to believe in impossible objects – a round square and a man that is both taller and shorter than some other man – and then I can believe in worlds composed in part of such objects. Now, personally I
can’t conceive of such objects – but so what? If I’ve got good reason to believe in them, I can postulate them. But I don’t see how we are meant to account for an impossibility like ‘2+2=5’. For
Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not
part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4. And
so, since contingency just is, for Lewis, variation across certain portions of concrete reality – namely, the worlds, which are just big concrete objects – there is simply no room for contingency
with respect to the mathematical truths. Necessary truths about the realm of concreta are necessary because each relevant portion of concrete reality is a certain way: that is, no matter what
variation you get across these concrete portions of reality, things are that way. But necessary truths about the realm of pure abstracta are necessary because they have nothing to do with how
concrete reality is: so irrespective of what variation you get across these concrete portions of reality, things are that way. (See my paper on Lewis and reduction, esp. footnotes 3-5, and the
corresponding discussion in the text.)
In that case, while I can add to my ontology round squares if I wish, and hence believe in a big concrete object with a round square as a part, and thereby have a world that represents the impossible
circumstance of there being a round square, I don’t see even what weird metaphysical move to make to get a Lewisian world that represents 2+2 as being 5. Worlds don’t have numbers as parts: they are
sums of concrete individuals; and if we give that up, we don’t have something worth calling an extension of Lewisian realism (Yagisawa gives this up, which is why I’m ignoring his view here). A
Lewisian world nonetheless represents that there are numbers, because numbers exist from the standpoint of that world. And while the concrete objects that exist in one Lewisian world are never the
same as the concrete objects that exist at another world, the numbers that exist from the standpoint of each world are just the same. And that’s why mathematical truths are necessary, for Lewis:
because a world represents some mathematical claim as being the case just because the numbers represented as existing from the standpoint of that world are as the claim says they are. And since the
same numbers exist from the standpoint of any two worlds, no two worlds differ in what mathematical claims they represent as being the case. Given Lewis’s account of what a world is and how they
represent something to be the case, there is simply no room for variation across worlds in what mathematical claims are represented as true: hence there is no room for contingency in mathematical
claims, but nor is there room for a world that represents some impossible mathematical claim as true, no matter what we think about the extent of what concrete worlds there are. So the Lewisian
simply can’t extend her ontology to admit worlds that account for every impossible situation.
26 responses to “Impossible Lewisian Modal Realism”
1. Dear Professor Cameron,
I think you are onto something importantly right. However, as it stands, it seems Lewis could offer two replies. First, since we’re dealing with impossible worlds, he might deny that all parts of
such worlds are concrete. Perhaps “having abstract parts” is part of what makes an impossible world impossible. (Even so, one rejoinder might be to insist that some impossible worlds are such
that 2+2=5 AND that they have no abstract parts. That also could be an impossible state-of-affairs, but that too could be what defines an impossible world as such.)
Second, I’m unsure whether Lewis is committed to worlds having all concrete parts, even in the case of possible worlds. In _Plurality_, he writes “As for the parts of worlds, certainly some of
them are concrete, such as the other-worldly donkeys and protons and puddles and stars. But if universals or tropes are non-spatiotemporal parts of ordinary particulars that in turn are parts of
worlds, then here we have abstractions that are parts of worlds” (p. 86).
Still, I may be inclined to deny, as van Inwagen does, the intelligibility of “abstract parts.” (We sometimes say things like “algebra is a part of mathematics,” but that could be dismissed as
loose talk.) So even though there may be two Lewisian replies to your claim, your point still seems defensible in the end.
2. On the second point: that’s why I talked only about ‘pure’ abstracta, like numbers. It’s these that Lewis can’t admit as parts of worlds. I agree that if there are abstracta that depend on
concreta, like tropes or universals, Lewis can admit that those are parts of worlds.
On your first point: I agree that something along those lines might be an option a theorist could take, but I stand by my claim that it would be too serious a departure for it still to be
legitimately thought of as an extension of Lewisian realism. Here’s one way of seeing that: you can’t just say ‘Okay, let abstract objects be parts of some worlds’ and leave it at that. One now
needs a totally different definition of ‘world’: a world can’t now be a sum of spatio-temporally related things, because pure abstracta aren’t s-t related to anything. So what IS the new
definition of ‘world’? And remember that we can’t invoke modality for it to be anything like Lewisian realism. (In effect that’s what Yagisawa does, which is why he doesn’t face this issue: he in
effect has a primitive worldmate relation.)
□ I think you’re right that the abstract-parts view would be too much of a departure from Lewisian realism. Nonetheless, it seems we must take the worldmate-relation as primitive regardless,
once we introduce impossible worlds. Argument: If the relation is not primitive, there will be some true sentence of the form “x and y are worldmates iff Rxy,” for some appropriately selected
predicate ‘R’. Suppose we also take your earlier suggestion that impossible worlds can (at the very least) be represented by sets of inconsistent statements. Then, without further provisos,
there will be impossible worlds represented partly by statements of the form “x and y are worldmates and ~Rxy.” But if that holds in some impossible world, then “R” cannot define the
worldmate-relation across both possible and impossible worlds. Yet since “R” is arbitrary, the worldmate-relation here is undefinable…
☆ Not sure I buy that. Why can’t we hold Lewis’s definition: x and y are worldmates iff they are spatio-temporally related, and then say that the impossibility of a and b being worldmates
without being s-t related is realised at the world where a and b both are and aren’t s-t related?
And in any case, if your problem is a problem, why does having a primitive worldmate relation help? We still have to account for the impossible world where two things are worldmates but
don’t stand in the worldmate relation.
3. Hmm, would it be Lewisian enough to try something funky with counterpart relations? Step one: if there can be (say) a universal that exists in two worlds, we could fiddle with the counterpart
relations so that it in w1 doesn't count (according to counterpart relation R) as cross-world identical to it in w2. Step two: if that's right for things that literally are parts of two worlds (e.g.
universals), it should be right for things that aren’t literally parts of two worlds but count as existing ‘from the perspective’ of both (e.g. singletons of universals). And if that’s right, it
should be in principle possible to pull the same trick for pure sets. So maybe we can multiply possibilities here by multiplying counterpart relations; a(n im)possibility where 2+2=5 is just like
a possibility where Lumpl isn’t Goliath: one where some pure set x is a counterpart of 4 under a “two-plus-two” relation and another pure set y is a counterpart of 5 under a “five” relation.
Yes, this is all super sketchy, and no I don’t see how to make it less sketchy, and yes, maybe any attempt to make it work properly will collapse Lewisian Modal Realism into Dorrian
Counterpairing Realism. But it seems to me like the most natural thing to try.
4. Parts of classes?
5. What about it?
6. Doesn’t it give a Lewis-acceptable model of how one might duck out of commitment to pure abstracta, and thus reduce your second problematic type of impossibility to the first?
7. Are you thinking of the appendix? It’s been a while – I’d have to re-read it and think through how that plays out.
8. I was thinking that given a very large (inaccessible cardinal sized) pluriverse of concrete mereological atoms and plural quantification we could treat mathematical claims like 2+2=4 in something
like the way the in re arithmetical structuralist does, and 2+2=5 as realized by there being an impossible concrete such structure.
9. Hey, Ross–one thing that struck me about your construal of modal realism (I don’t know if you handle it somewhere, so sorry if it’s old hat) is that it might be bad news for Lewis’s argument
against Boxes and Diamonds on OTPOW p10-12. “So [Humphrey] satisfies both ‘x is human’ at all worlds and ‘x does not exist’ at some worlds; so he satisfies both of them at some worlds; yet though
he satisfies both conjuncts he doesn’t satisfy their conjunction. How can that be?”
Here’s how that can be: if “x is human” is indeed necessary, then the conceptual truth it rests on has the same status as mathematical necessity. Then Humphrey satisfies “x is human” and “x does
not exist” *in different ways* at worlds where he satisfies both, and hence he does not in any one way satisfy their conjunction…
10. Yeah, Andy, that might be right. Interesting that you have to go that way, but yeah it sounds like a workable Lewis-friendly option.
Richard – sorry, I’m not following that very well.
□ Maybe you don’t need the pluriverse to be that size if there’s an impossible object that is big enough (and not big enough).
11. Mathematics is not such a special case: For Lewis, logic is absolutely general and not dependent on any worldly features. So, in the Lewisian picture, mathematical and logical necessity, both,
are world-invariant, their necessary status not arising in virtue of particular features of each world. This is the basis of the joke in Stalnaker’s ‘Impossibilities’ where ‘Louis’ replies to
‘Will’’s postulation of impossible worlds by challenging how we can begin to attribute logical properties to a world, or what it would be like to discover that some world is not deductively
closed. So – Conjecture: whatever one says about how logical laws vary from world to world is applicable to the case of mathematical truths varying from world to world.
Moreover, if I understand this correctly, advocates of non-classical logics such as Priest, Mortensen, Routley have developed inconsistent number systems, which are based on non-classical logical
systems. This again seems to suggest that if there is room for non-classical logics in a concrete pluriverse, ditto for alternative mathematical structures.
So, let's focus on the case of logical laws: I discuss variation in logical laws between worlds in chapter 5. One way to conceive of logical properties as anchored in a world is to see them as
worldly structural properties – complex (second order) properties of some kind. The fact that we cannot imagine how a world might realise some such abstract structure is again beside the point.
Chris Mortensen, in ‘Anything is Possible’, reminds us that just because some general truth can be presented in a totally abstract way this doesn’t imply it is without anchor in physical reality
– think of, say, complex abstract equations in physics. All we want is for worlds to realise different such structural properties to get going. (We may want to deny that worlds have any such
structural features at all, but then a case could be made that all such features arguably stand or fall together, at which point, arguably, the status of all kinds of necessities may be called
into question.)
[Ontologically, both logical and mathematical necessities can be pictured in the Lewisian framework as relations (higher order properties) holding between certain kinds of impure sets: in the one
case we are dealing with propositions, i.e. sets of worlds, (sets that are not purely abstract) – and in the other, we are dealing with numbers, i.e. constructs out of the empty set, (again not
purely abstract, given how tongue-in-cheek Lewis is about the null-set: all it has to be is member-less. ‘A possum will do’. Indeed any concrete part of any world we are interested in). This
picture, again, may allow for at least some minimal anchoring of mathematical entities in worlds.]
Beyond this, you may ask about impossibilities of the following kind: it is impossible that there is a world where 2+2=5 but which has no structural properties – so there must be such a world,
albeit impossible. I would have to say that a world, which both does and does not have such properties, may have to suffice.
□ Thanks, Ira. I’m not exactly sure what the answer is, from that, but I’ll have a look at Ch.5 of your thesis in more detail!
☆ Hi Ross,
The second bit of the answer is quick, yes. But the point I am making – which I take to be stated clearly enough – is that mathematics is not a special case. So whatever goes for logic
goes for mathematics. This is key as you seem to think the question of mathematical truths uniquely problematic – I say not so. No need to go into the thesis for that. I didn’t explain in
detail how logical structures can differ from world to world (a difficult issue John Divers raises), but we can talk about that if you like (that’s the bit I talk about in the thesis). :)
☆ No, I wasn’t thinking maths was a special case. Just picking an example where the goings-on concern abstracta rather than concreta.
12. Ok. In that case, the only way forward with the general problem is to allow such mathematical and logical truths to be anchored in concrete reality in some way that allows them to vary.
Structural properties seems to be the way to go (my only worry is a clash with humean supervenience here, but even Lewis does not take the thesis to be necessarily true). The fact that
mathematical entities do not have to be considered pure abstracta, if we ‘naturalise’ the null set as Lewis seems to suggest in Parts of Classes, may further detract a little from the worry about
purely abstract truth.
(Btw. Thinking of the impossible as that which cannot possibly exist is an ambiguous statement if you are debating concrete impossible worlds. Arguably, just like the physically impossible cannot
possibly exist given a particular set of physical laws (and not absolutely), the logically impossible cannot possibly exist given a particular set of logical laws, which, ex hypothesi, given the
consideration of logically impossible worlds, are not taken to determine existence. You cannot go for impossible worlds and agree with Lewis on this matter of the status of logical laws.)
13. Cool. One thing both you and Andy are bringing to light though is that the best way for the impossible Lewisian to go is to embrace what Lewis says in PoC. I think that’s an interesting
consequence; but I also personally am disappointed, because I like what Lewis says about worlds but don’t really like what he says about sets!
□ Interesting indeed. Ah, I tend to quite like this consequence. But I have to go back to examine it more carefully.
□ Maybe the deeper source of the issue that you’re raising is the non-recombinable nature of ‘magical’ selection relations. It’s tempting to allow magical set-theoretic such relations but
forswear modal/semantic ones (perhaps on indispensability grounds). But this may not be stable if a suitably rich metaphysical account of the former makes trouble for the way you hoped to
avoid commitment to the latter.
14. Well, I *do* like how intricately related every bit of Lewis’s system is to every other bit! But I would like to be able to do all the modal stuff without saying *crazy* things about sets! The
empty set ain’t no possum!
□ Ha! And what a surprise that a possum will serve just as well!
15. I think that impossible worlds are an analogy too far for dealing with things that cannot exist, like square wheels. I think the real problem here is that such things are literally non-sense, i.e.
there is not (and cannot be) anything that they refer to. So to postulate a world in which such things do reside cannot therefore itself make sense, i.e. there cannot be something it refers to
either. The difference between a possible world and nonsense is that it is possible that there could be something that is referred to in a possible world: it can be populated with individuals
that the statements do refer to.
16. I feel your pain Matthew. Logic was a great gift to the world but in the hands of significant intellects it is liable to suck in time and vitality only to deposit its victims on high, barren
platforms, scaffolded by non-sense and easily demolished with simple observation.
The Pseudo-scientists are all the time eating into the territory where we can still resonate but yet you continue to pick at the dry crumbs left in the intellectual hermitage.
Enjoy your swim with blue swans, I probably won’t see you there and nor will anyone else.
Physics Courses
Program | Faculty | Master's | Doctoral | Courses
All courses carry 3 credits unless otherwise specified.
530 Radiation Physics
For science majors specializing in nuclear medicine, radiology, environmental sciences, radiation protection, and applied areas using ionizing radiations. Principles of atoms and nuclei,
radioactivity, interaction of radiation with matter, radiation detectors and methods, applications of radioactive and stable nuclei as tracers. Special topics. Consent of instructor required. Credit,
531 Electronics for Scientists I
Operation and use of the basic elements of modern electronics, both analog and digital. Analog circuit analysis, filters, diodes, transistors, operational amplifiers, oscillators, power supplies,
integrated circuits. Gate construction and families, flip-flops and flip-flop circuits, the 68000 microprocessor, machine language, and the building of a computer based on the 68000. A “hands-on”
experience for those using electronic equipment in research, testing, and analysis. Prerequisites: a freshman course in electricity and magnetism; knowledge of basic dc and ac circuit concepts.
Credit, 4.
553 Optics
Lecture, discussion, laboratory. Modern optics. Geometrical and classical physical optics. Matrix methods in optical design. Optical instruments. Interference and spatial coherence. Diffraction.
Fourier transform spectroscopy. Prerequisite: PHYSICS 422.
556 Nuclei and Elementary Particles
Nuclear properties and models, nuclear decays and reactions. Interactions of hadrons and leptons, internal symmetries and quantum numbers, quarks, unified interactions and gauge symmetry.
Prerequisite: PHYSICS 424.
558 Solid State Physics
Introduction to the properties of solids. Emphasis on the key role played by quantum mechanics in determining the electrical and thermal properties of metals, insulators, semiconductors, and
magnets. For senior and graduate students in physics and astronomy, the physical sciences, and engineering. Prerequisites: PHYSICS 423 and 424.
562 Advanced Electricity and Magnetism
Description of electric and magnetic fields in a dynamical context-electromagnetic radiation theory, optics, plasma physics, relativistic electrodynamics, cavity resonators, waveguides. Prerequisite:
PHYSICS 422.
564 Introductory Advanced Quantum Mechanics
Breakdown of classical physics, wave mechanics including the Schroedinger equation and its interpretation, one-dimensional problems, uncertainty principle, harmonic oscillator, hydrogen atom.
Prerequisites: PHYSICS 422, 424.
568 Cosmology and General Relativity
Mathematical and conceptual aspects of the special and general theories of relativity. Lorentz transformations, covariant formulation of the laws of nature. The equivalence principle, curved spaces,
solutions of the equations of relativity. Prerequisite: PHYSICS 422.
601 Classical Mechanics
Lagrange’s and Hamilton’s equations, central force problem, rigid bodies, small oscillations, continuum mechanics, fluid dynamics.
602 Statistical Physics
Survey of thermodynamics. Boltzmann distribution, statistical interpretation of thermodynamics, Gibbsian ensembles and the Darwin-Fowler method; quantum distributions and their applications,
transport phenomena. Prerequisites: PHYSICS 601, 606 (the latter may be taken concurrently).
605 Methods of Mathematical Physics
Selected topics with application to physics in linear algebra and Hilbert space theory, complex variables, Green’s functions, partial differential equations, integral transforms, integral equations.
Credit, 4.
606 Classical Electrodynamics I
Electrostatic and magnetostatic fields in vacuum and material medium. Maxwell’s equations, radiation, and special relativity. Covariant formulation of the field equations. Fields of a moving charge,
motion of particles, radiation reaction, applications to physical phenomena as time permits. Prerequisite: PHYSICS 601. Credit, 4.
614 Intermediate Quantum Mechanics I
Abstract quantum mechanics, Hilbert space, representation theory, three-dimensional problems, angular momentum, spin, vector coupling, bound state perturbation theory, variational method.
Prerequisite: PHYSICS 605.
615 Intermediate Quantum Mechanics II
Angular momentum, time dependent and time independent perturbation theory, semi-classical and quantum treatment of radiation, scattering theory, Klein-Gordon equation, Dirac equation. Prerequisite:
PHYSICS 614.
696 Independent Study
Special study in some branch of physics, either theoretical or experimental, under direction of a faculty member.
699 Master’s Thesis
Credit, 6.
714 Introductory High Energy Physics
Introduction to physics of elementary particles; treating the development of the field, the particle spectrum, symmetries, quarks, experimental methods, an introduction to theories of the strong,
electromagnetic and weak interaction, and recent developments. Prerequisites: PHYSICS 614, 606.
715 Introductory Solid State Physics
Solids treated as structures with translational symmetry, and the effect of that symmetry on x-ray and particle scattering and on the thermal and vibrational properties of solids. Binding energy of
solids, electrons in periodic potentials, and the formation of bands. The free electron model of metals. Prerequisite: PHYSICS 614.
716 Introduction to Superfluidity and Superconductivity
Description of fundamental experiments and properties of superfluid ³He and ⁴He and of superconductors. The two-fluid model, elementary excitations, fluid structure, vortices, superfluid films and
macroscopic quantum effects in superfluidity. Type I and II superconductors, the mixed state, the Meissner effect, superconducting junctions and an introduction to devices. Prerequisite: PHYSICS 614.
719 Nuclear Physics
Basic concepts of nuclear physics, instruments and methods. Natural radioactivity, nuclear radiations—their properties and interaction with matter, nuclear-radiation detectors, electrostatic and
magnetic analyzers, mass spectrometry, charged particle accelerators, elementary discussion of alpha and beta decay, nuclear isomerism, internal conversion, nuclear reactions, neutron physics,
fissions, nuclear spin and magnetic moments, cosmic rays and elementary particles. Prerequisite: PHYSICS 614.
723 Topics in Mathematical Physics
Subjects vary depending on instructor. Most recently has included topics in nonlinear dynamics. Prerequisite: consent of instructor.
724 Group Theory in Quantum Mechanics
Finite dimensional groups and their representations; representations of the permutation group; representations of SU(n), tensor representations, decomposition of direct product representations;
three-dimensional rotation group. Clebsch-Gordan and Racah coefficients; the Lorentz group and its representations; applications to atomic, solid state, nuclear and high energy physics.
Prerequisite: PHYSICS 615.
811 Field Theory
Klein-Gordon and Dirac equations, field quantization, interacting fields, S-matrix, perturbation theory and Feynman diagrams, renormalization, path integrals, and recent developments.
813 High Energy Physics
Advanced study of particle physics. Topics vary with instructor; may include the theory of the weak interactions, deep inelastic scattering, phenomenology of the strong and weak interactions, quantum
chromodynamics, gauge theory, attempts at unification, and recent developments. Prerequisite: PHYSICS 714.
816 Solid State Physics
Transport phenomena in solids including semiconductors, optical properties of solids, superconductivity, superfluidity, magnetism. Topics vary with instructor. Prerequisite: PHYSICS 715.
817 Advanced Statistical Physics
Phase transitions, including condensation; description of imperfect gases. Transport theory and other nonequilibrium phenomena. Irreversible processes. Field theoretic quantum statistical physics.
Prerequisite: PHYSICS 602.
821 General Relativity
Mathematical and conceptual aspects of the special and general theories of relativity. Lorentz transformations, covariant formulation of the laws of nature. The equivalence principle, curved spaces,
solutions of the equations of relativity. Prerequisite: PHYSICS 606.
850 Advanced Topics in Physics
One or more subjects of special interest covered in lectures. Consent of instructor required.
851 Special Topics in Nuclear Physics
Advanced and current topics in nuclear physics.
852 Special Topics in High Energy Physics
Advanced and current topics in high energy physics. Prerequisite: PHYSICS 813.
853 Special Topics in Solid State Physics
Advanced and current topics in solid state physics. Prerequisite: PHYSICS 816.
860 Seminar on Research Topics
Instruction via reading assignments and seminars on research topics not currently covered in regular courses. Consent of instructor required. Credit, 1-3.
899 Doctoral Dissertation
Credit, 18.
The n-Category Café
Joint Math Meetings in Washington DC
Posted by John Baez
In just a few days, hordes of mathematicians will descend on Washington DC for the big annual joint meeting of the American Mathematical Society (AMS), Mathematical Association of America (MAA),
Society for Industrial and Applied Mathematics (SIAM), and sundry other societies, organizations, clubs, conspiracies and cabals:
I’ll be there. Will you?
I’m giving two talks — you can see the slides here.
The first is at the special session on Homotopy Theory and Higher Categories, run by Tom Fiore, Mark Johnson, Jim Turner, Steve Wilson and Donald Yau. When I last checked, it was scheduled for
Wednesday January 7th, 1–1:20 pm in Virginia Suite C, Lobby Level in the Marriott:
• Classifying Spaces for Topological 2-Groups — joint work with Danny Stevenson.
Abstract: Categorifying the concept of topological group, one obtains the notion of a topological 2-group. This in turn allows a theory of ‘principal 2-bundles’ generalizing the usual theory of
principal bundles. It is well known that under mild conditions on a topological group $G$ and a space $M$, principal $G$-bundles over $M$ are classified by either the Čech cohomology $H^1(M,G)$
or the set of homotopy classes $[M,B G]$, where $B G$ is the classifying space of $G$. Here we review work by Bartels, Jurco, Baas-Bökstedt-Kro, Stevenson and myself generalizing this result to
topological 2-groups. We explain various viewpoints on topological 2-groups and the Čech cohomology $H^1(M,\mathbf{G})$ with coefficients in a topological 2-group $\mathbf{G}$, also known as
‘nonabelian cohomology’. Then we sketch a proof that under mild conditions on $M$ and $\mathbf{G}$ there is a bijection between $H^1(M,\mathbf{G})$ and $[M,B|N\mathbf{G}|]$, where $B|N\mathbf{G}|$
is the classifying space of the geometric realization of the nerve of $\mathbf{G}$.
Here are some talks by friends of mine at
the same session
• Peter May, Permutative and bipermutative categories revisited.
• Mike Shulman, Limits, derived functors, and homotopical category theory.
• Julie Bergner, Homotopical versions of Hall algebras.
• Tom Fiore, The homotopy theory of n-fold categories.
I’ll also give talk at the special session on Categorification and Link Homology, run by Aaron Lauda and Mikhail Khovanov. When I last checked, it was scheduled for Wednesday January 7th, 5:10 - 5:30
pm, in the Harding Room, Mezzanine Level, Marriott:
• Groupoidification — joint work with James Dolan, Todd Trimble, Alex Hoffnung and Christopher Walker.
Abstract: There is a systematic process that turns groupoids into vector spaces and spans of groupoids into linear operators. ‘Groupoidification’ is the attempt to reverse this process, taking
familiar structures from linear algebra and enhancing them to obtain structures involving groupoids. Like quantization, groupoidification is not entirely systematic. However, examples show that
it is a good thing to try! For example, groupoidifying the quantum harmonic oscillator yields combinatorial structures associated to the groupoid of finite sets, while groupoidifying the $q$
-deformed oscillator yields structures associated to finite-dimensional vector spaces over the field with $q$ elements. Starting with flag varieties defined over the field with $q$ elements, we
can also groupoidify Hecke and Hall algebras.
Here are some talks by friends and fellow math bloggers at the same session:
• Scott Morrison, The 2-point and 4-point Khovanov categories.
• Ben Webster, A categorification of quantum tangle invariants via quiver varieties.
• Hendryk Pfeiffer, Every modular category is the category of modules over an algebra.
• Alex Hoffnung, A categorification of Hecke algebras.
Alex’s talk will pick up where mine leaves off.
My student Alissa Crans will also be there; she and Sam Nelson are running the special session on Algebraic Structures in Knot Theory. Charles Frohman and Louis Kauffman will be talking at that — I
used to see them fairly often, back when I was into knots and quantum gravity.
During the conference I’ll stay with my friend the combinatorialist Bill Schmitt, who lives in Bethesda, conveniently located next to the subway into DC. We went to grad school at MIT together. He’s
a student of Rota, and he’s the one who got me interested in Joyal’s work on ‘species’.
Given this profusion of friends attending the conference, I expect there’ll be some serious socializing going on. Mathematically speaking, that’s actually more productive than the ridiculously short
20-minute talks. And, it’s more fun. I really like it when two people I know and admire, who haven’t ever met, finally meet. There’ll be a lot of that going on here.
Who else reading this will be there?
Posted at December 31, 2008 2:38 AM UTC
Re: Joint Math Meetings in Washington DC
I’ll be there. A convenient farewell tour.
Posted by: John Armstrong on December 31, 2008 4:04 AM
Re: Joint Math Meetings in Washington DC
On the same day that your recent papers appear on the ArXiv, we have Raphael Rouquier writing on 2-Kac-Moody algebras:
We construct a 2-category associated with a Kac-Moody algebra and we study its 2-representations. This generalizes earlier work with Chuang for type $A$. We relate categorifications relying on
$K_0$ properties and 2-representations.
At the end of the introduction he writes:
Certain specializations of the nil Hecke algebras associated with quivers and the resulting monoidal categories associated with “half” Kac-Moody algebras have been introduced independently by
Khovanov and Lauda.
Posted by: David Corfield on December 31, 2008 12:31 PM
Re: Joint Math Meetings in Washington DC
Alas, I will miss the joint meetings. Happily, I will be at Knots in Washington. I think Masahico will be at the joint meetings and KinW.
Posted by: Scott Carter on December 31, 2008 5:31 PM
Re: Joint Math Meetings in Washington DC
As you know, Alissa Crans will be staying for Knots in Washington. Alex Hoffnung, Aaron Lauda, Louis Kauffman and Mikhail Khovanov will also be staying for that. My friend Bill Schmitt teaches at
George Washington University, where this conference is being held, so you may also bump into him — try!
I can’t stay for that conference. Some of us need to teach!
Posted by: John Baez on December 31, 2008 7:22 PM
Re: Joint Math Meetings in Washington DC
I’ll be there! AND I’ll really, really, REALLY want to meet all the bloggers I regularly interact with!
Posted by: Mikael Vejdemo Johansson on January 1, 2009 1:25 PM
Re: Joint Math Meetings in Washington DC
Which sessions will you be going to? On Wednesday I’ll be at Homotopy Theory and Higher Categories and Categorification and Link Homology. I don’t yet know what I’ll be doing the other days. Where
might I bump into you?
Do you look like this? Are you always on the phone?
Posted by: John Baez on January 1, 2009 5:49 PM
Re: Joint Math Meetings in Washington DC
I haven’t yet looked through the schedule in detail - but one place I’ll be guaranteed to be is the Topological Methods in Applied Mathematics session - since that’s why my trip gets paid. :-)
I’ll make sure I’ll put highlights up on http://blog.mikael.johanssons.org once I’ve dug through the program.
I do look like that - though not always with a cellphone glued to my ear, and not always dressed in my wedding frockcoat. :-) It’s probable that I’ll be wearing t-shirts, possibly with a Stanford
long-sleeved t-shirt or sweater over it, for most if not all of the meeting.
Posted by: Mikael Vejdemo Johansson on January 2, 2009 10:07 AM
Re: Joint Math Meetings in Washington DC
I’m trying to decide which talks to attend on Monday. Here are two interesting ones at the same time:
• MAA Session on Performing Mathematics
Washington Room 4, Lower Level, Marriott, 2:15 p.m.
Leon Harkleroad, Möbius and Grassmann on musical tuning systems.
• AMS Special Session on Algebraic Structures in Knot Theory
Delaware Suite B, Lobby Level, Marriott, 2:15 p.m.
Michael Eisermann, Set-theoretic Yang-Baxter operators and their deformations.
Posted by: John Baez on January 2, 2009 9:19 PM
The uncertainty of uncertainty
There is a paper by Roe and Baker out in Science arguing that “Both models and observations yield broad probability distributions for long-term increases in global mean temperature
expected from the doubling of atmospheric carbon dioxide, with small but finite probabilities of very large increases. We show that the shape of these probability distributions is an inevitable and
general consequence of the nature of the climate system.” Predictably enough it will get misinterpreted, and indeed Nature itself leads the field in doing so. See also the Grauniad.
For a general take, you’ll want to read RealClimate (of course) before reading my quibbles. Hopefully JA will weigh in on this too. Because I am very dubious of it. [Update: he has]
Ah, but before I should go further, let me point out that this isn’t my thing. I’m guessing, and may be wrong and/or confused.
OK, so what goes on? Well, we can start by saying that dT = l.dR (1), where dT is the change in temperature, dR is the change in radiative forcing, and l is a constant of proportionality (the
sensitivity parameter). R+B assert, and I’m going to believe them, that in the absence of feedbacks l is well known to be about 0.3 K/(W/m2) (they note in the SOM that it’s 0.26 just considering
Stefan–Boltzmann). The interesting point is including feedbacks. We’ll call l-without-feedbacks l0. We then *assume* that the feedbacks are proportional to dT, and look like extra radiative forcing,
so instead of dR we have dR + C.dT for some constant C. Plugging this into our equation (1) we get dT = l0.(dR + C.dT) = l0.dR + f.dT (2), if f = l0.C. f is going to be the thing we call the feedback
factor, and is important. (2) can be solved for dT as dT = l0.dR/(1 – f) (3). Or, more simply, dT is proportional to 1/(1 – f). Obviously, there is space to quibble with all these simple feedback
models, particularly if the changes are large (as R+B say in the supporting online material (SOM), “Now let dRf be some specified, constant, radiative forcing (here, due to anthropogenic emissions
of CO2). dRf is assumed small compared with F and S so that the changes it introduces in system dynamics are all small and linear in the forcing…” But of course dRf *isn’t* small, nor is S, in the
cases they consider). But put all that aside for now.
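In code, the algebra so far is tiny. Here's a minimal sketch (plain Python; the variable names are mine, and the 3.7 W/m2 forcing for doubled CO2 is a standard figure I'm assuming rather than a number taken from R+B):

```python
# Equations (1)-(3) above. L0 is the no-feedback sensitivity; f the feedback factor.
L0 = 0.3    # K per (W/m2), as quoted from R+B
DR = 3.7    # W/m2, the usual 2*CO2 forcing (my assumption, not from the paper)

def dT(f, l0=L0, dR=DR):
    """Equilibrium warming from eq (3): dT = l0 * dR / (1 - f)."""
    return l0 * dR / (1.0 - f)

for f in (0.0, 0.5, 0.7, 0.9):
    print(f"f = {f:.1f}  ->  dT = {dT(f):5.1f} K")
# f = 0.0  ->  dT =   1.1 K   (no feedbacks)
# f = 0.5  ->  dT =   2.2 K
# f = 0.7  ->  dT =   3.7 K   (roughly the canonical sensitivity)
# f = 0.9  ->  dT =  11.1 K   (dT blows up as f -> 1: the long tail)
```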
This, to a large extent, is the sum total of the paper: since dT ~ 1/(1 – f), if you let f get close to 1 then you get large dT. As RC points out, R+B aren’t the first to explore this area, but they
may be the first to put this quite so clearly.
And indeed R+B *do* let f get close to 1. Indeed, they let f be normally distributed, with mean about 0.7 and SD about 0.14. Which means (of course) that f has a non-zero probability of being greater
than or equal to 1 (which is the unphysical state of infinite gain). For f > 1, dT becomes negative in eq (3), and the situation is totally unphysical (amusingly none of the RC commenters have
noticed this).
How do R+B solve this little quibble? By ignoring it completely, at least in the main text, though they do discuss it in the SOM (last section). In fact, formally, I think you can: it amounts to
truncating the pdf of f at 1 (and then renormalising, of course). But I think this should be taken as a little hint that a normal pdf on f is not appropriate.
But I’m not at all sure this is a quibble: it’s actually the heart of the entire matter: the shape of the pdf you assume for f gives you the pdf for dT. If you choose as R+B do, you get a long tail in
dT. If you choose a different pdf for f, you could get a short tail. Indeed, you can choose your pdf for dT, and derive the pdf on f. It’s a two-way process and it’s not at all clear (to me at least)
why you should give primacy to f, nor is it clear that this is at all the “standard” way to do it. [Looking further, the SOM refs Allen et al for a Gaussian in f (though it gets the URL wrong). But
it’s not clear to me exactly what in A et al is being ref'd. And they follow that up with “A natural choice is that all feedbacks are equally likely”, which is quite opaque to me.]
Continuing onwards, R+B is a “transforming pdfs from one space to another” paper. But we don’t really have enough data points to constrain the pdfs to a particular form, at least as far as I can see.
Thus we have to make a choice of pdf, given only a few data points, even if we were reasonably sure of the mean (also unclear). Thus asserting that this tells you anything about the real world is
dubious. OTOH, as RC points out, if you believe Annan+Hargreaves 2006, we *do* have good grounds to believe that dT (for 2*CO2) is less than 6 oC. Which would then be good grounds for believing that f
is well away from 1. Which doesn’t make R+B wrong, except in their choice of pdf for f. OTOH they may be saying something useful, which would be that going via f to calculate your dT is going to lead
to a long tail (so don’t do it?).
Oh, and last off, Allen is still saying all-equally-likely stuff in the commentary, which is of course impossible.
Talking point: R+B address the skewness issue (ie, the long tail in dT) in the SOM. And end up saying that fixing this “…would require that f diminish by about 0.15 per degree of climate change”.
This requires you to believe the results of the previous section, which I haven’t struggled through, but it also appears to run us into the “Bayesian problem”, which is that these pdfs aren’t
properties of f, they are expressions of our ignorance about f. f has (if you believe these simplified models) a given value; we just don’t know what it is. So I’m not sure how having f diminish can
be a problem in any physical sense. Or to say another thing, I can’t see why declaring that our pdf for f is going to be uniform on [0.4,0.7], say, is a problem. In which case, of course, there is no
tail on dT.
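To see how much of the tail comes from the choice of pdf on f, here's a hedged Monte Carlo sketch (NumPy; the truncate-and-renormalise step is just discarding the unphysical f >= 1 samples, and the uniform-on-[0.4,0.7] prior is the alternative floated above):

```python
import numpy as np

rng = np.random.default_rng(42)
L0, DR = 0.3, 3.7               # as in the sketch above; DR is my assumed 2*CO2 forcing
N = 1_000_000

# R+B-style prior: f ~ Normal(0.7, 0.14), truncated below 1 and renormalised.
f_gauss = rng.normal(0.7, 0.14, size=N)
f_gauss = f_gauss[f_gauss < 1.0]    # drop the infinite-gain tail

# Alternative prior: f uniform on [0.4, 0.7].
f_unif = rng.uniform(0.4, 0.7, size=N)

for name, f in (("truncated Gaussian f", f_gauss), ("uniform f", f_unif)):
    dT = L0 * DR / (1.0 - f)
    med, p95, p99 = np.percentile(dT, [50, 95, 99])
    print(f"{name:20s}  median {med:5.1f} K   95% {p95:6.1f} K   99% {p99:7.1f} K")

# The Gaussian prior on f gives a heavy upper tail in dT (99th percentile far
# above the median), while the bounded uniform prior caps dT at
# L0*DR/(1-0.7) ~ 3.7 K -- no tail at all, as claimed above.
```

Whether that counts as an argument for or against the Gaussian choice is, of course, exactly the point at issue.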
1. #1 Mark Hadfield 2007/10/28
“Ah, but before I should go further, let me point out that this isn’t my thing. I’m guessing, and may be wrong and/or confused.”
You certainly are: you’ve got a link to RealClimate with an empty href, meaning it points to the current page.
[Oops thanks, now fixed. If you're interested, it was because I wrote it before the RC post came out -W]
2. #2 mz 2007/10/28
James Annan has been quite quiet lately, I take that as a sign that he’s working on something, maybe looking at this stuff specifically.
3. #3 Eli Rabett 2007/10/29
IEHO a lot of this relates to how ignorant you want your prior to be. The purists insist that nothing is known. In that sense, you can’t object to non-physical choices, since if you know nothing,
you don’t know the boundaries either (see Annan vs. Frame death match). I prefer using theoretical estimates for the prior to be matched against the data. You could, if you wish use the theory to
estimate the bounds and then pick a uniform prior within the bounds.
4. #4 Mark Hadfield 2007/10/29
“I can’t see why declaring that our pdf for f is going to be uniform on [0.4,0.7], say, is a problem.”
So you would be declaring that f might be 0.41 (at least, this value is as likely as any other between 0.4 & 0.7) but couldn’t possibly be 0.39. Do you have any reason for believing this?
[No. It was only an illustration -W]
5. #5 Eric Steig 2007/10/30
I asked Gerard Roe to provide some thoughts in response to your concerns. What he wrote is what I’ve been trying to explain. To my mind, one of the most important points he makes is that f does
*not* have, as you put it, “a given value”. It is inherently a probability distribution. Hopefully this will clear up some confusion on whether assuming f is Gaussian is useful (my view) or
misleading (your view).
[Hi Eric. Thanks for the response, and from R. I'm on a course now, so can only reply soon if its boring (hopefully not). The question about f having a value or a distribution is a good one; I
still think it has a value but unknown -W]
“I think Connolley is a little off base. Fundamentally we come from the physics side of things, and most studies are coming from the observation side of things. You are right – it would have to be
really nonGaussian to remove the tail. And if it were, the problem of reducing uncertainty in the response gets even worse – you have piled up all the probabilities into some ungainly shape (that
result is given in the text, and demonstrated in the SOM)
There are two contributions in the paper. The first is the equation, and the second is a test of the equation against different studies. Strictly, Gaussian must be wrong at some level of detail,
but it is also almost certainly good enough to make the point. And given the result above, it is likely to be an optimistic estimate of how much progress can be made.
We actually test the ‘climateprediction.net’ results against the GCM results of Soden & Held and Colman. The GCM feedback studies show uncorrelated feedback parameters. The appropriate way to combine
them then is to do a sum of squares. When we do that we get back the climateprediction.net results, a strong suggestion that that is also what the Hadley GCM is doing.
Another way of thinking about it is at the level of individual parameters in climate models. Those model parameters always have a distribution of values, and quite possibly for any given
parameter not Gaussian. Each of those parameters can be cast as its own feedback parameter (though there will definitely be some correlations that come in and muct be accounted for). But with
enough random distributions, Central Limit Theorem kicks in and you would expect that as the number of parameters increases you converge towards a Gaussian pdf in f.
To get rid of the tail, you could do something drastic with the pdf of f (which is unlikely to be true, and leaves you with a greater inability to make progress). You could also do something to
how f changes with the system response. If feedbacks are not linear (for example the -ve feedbacks get stronger with T and the +ve feedbacks get weaker), then the transfer function is not 1/(1-x)
any more. In the SOM we calculate what the transfer function would have to be to eliminate any of the stretching of the high side tail. It would take ridiculously huge changes in feedback
strength as a function of climate state, far larger than suggested by models, and far larger than we have physical theories for. That is in the SOM.
It remains somewhat unclear whether uncertainty in f is real or a result of ignorance. It is the sum of all feedbacks, not just a single number (some of the work Myles Allen has done has
unhelpfully thought about it as a number). If the climate system is chaotic (which is somewhat has to be) f is truly only meaningfully characterized as a probability distribution. The framework
of climate sensitivity is also fundamentally a linear one, which is also unlikely to be true. Climate sensitivity does not exist – it is just a model of nature, and nature is always going to be a
fuzzy version of that model. “
6. #6 Oliver 2007/10/30
Tried to comment yesterday and botched it — just to say that the paper is in Science…
7. #7 viento 2007/10/30
I see some inconsistency in the view that f is intrinsically a random variable and not a number. Consider a planet with a solid, perfectly emitting, surface, without water: just a rock ball. The
only feedback is the black-body radiation feedback. For some reason Roe and Backer do not consider this as feedback, and treat this as the ‘reference climate sensitivity’, but formally this is a
feedback of the same nature as the others, it is singled out just for formal reasons (see Bony et al or Gregory et al).
In this situation one can calculate exactly the feedback parameter by calculating the derivative of the black-body radiation law. Actually Roe and backer give a number for this in their paper,
without any uncertainties attached.
The fact that the atmosphere contains water and the Earth is a more complex system than a rock ball does not change the problem conceptually, it just makes the estimation of the feedbacks a more
complicated thing. The probability distribution, rather a likelihood, for f just describes this complexity
8. #8 Eric Steig 2007/10/30
Fair enough. But the point is that f can very likely change for a lot of very non-linear reasons, making it essentially probabilistic. It may not be quite like the electron around the nucleus
(which is truly probabilistic at a very fundamental level), but in practice it may amount to the same sort of thing.
[But the electron's *mass* isn't probabilistic. Isn't there a fundamental problem if we can't even agree whether f has a well defined value or not? Perhaps we're at the early 20th C pre-QM stage? -W]
9. #9 crandles 2007/10/30
Disclaimer: I haven’t read Roe and Baker, I probably don’t know what I am talking about, and I am probably unjustifiably over-interpreting things and should probably add a lot more disclaimers.
>”Oh, and last off, Allen is still saying all-equally-likely stuff in the commentary, which is of course impossible.”
I don’t know where you see this. What I see in ‘Call off the Quest’ is:
“There are even more fundamental problems. Roe and Baker equate observational uncertainty in f with the probability distribution for f. This means that they implicitly assume all values of f to
be equally likely before they begin. If, instead, they initially assumed all values of S to be equally likely, they would obtain an even higher upper bound.”
[Sorry, I'm baffled. Why isn't "This means that they implicitly assume all values of f to be equally likely before they begin" obvious nonsense? As a pdf it's impossible, and physically R+B don't
believe in f > 1 -W]
What I see in this should warm James' heart rather than being still more of the same disagreement per James. I don’t know to what extent Roe and Baker demonstrate Gaussian-like observational
uncertainty in f. However, saying this cannot be converted into a pdf without having a prior is exactly what I would expect James to say. There might be a slight quibble about whether it is
possible to have a prior where all values of f are equally likely and still be able to apply Bayes' theorem, but this does seem to me to be Allen and Frame singing from James' hymn sheet.
It certainly seems Allen and Frame are inferring something rather than ‘still saying’.
If you think that is the end of what Eli calls a ‘death match’, then I am afraid not quite. According to James the dispute is about this, and he may see this as a vindication of his view. However,
from Allen and Frame's view most of the problem is James' misreading of Frame et al 05, and the point James is trying to make is largely accepted. Accepting it in this perspectives piece doesn’t really
end a ‘death match’ but does show that the argument is not about what James claims the argument is about.
[A+F may be doing their best to "frame" this in terms of them being sadly misunderstood, but does anyone believe that? -W]
10. #10 Aaron 2007/10/30
I think the idea is that climate sensitivity is relative to the reference state you choose. If we consider the reference state you describe, as a rock ball, and the outgoing longwave radiation as
a negative feedback, the reference state has an infinite climate sensitivity. That is, there is no way for the reference state to get rid of a radiative perturbation, and the reference climate
state continues to increase in temperature indefinitely, as there is no way to get rid of the radiative perturbation. The reference state is infinitely unstable, and it is thus difficult to
preform any feedback anaylsis on it.
I think using the Earth radiating as a blackbody is a natural reference state for this study. It resolves the problem addressed above as the reference state is now stable since it can get rid of
radiative perturbations by increasing its temperature and, therefore, outgoing longwave radiation. Furthermore, it represents the zeroeth order energy balance of the climate system (incoming
solar radiation equals outgoing longwave radiation). Lastly, the physics is rock solid; Planck's law is undeniable.
11. #11 crandles 2007/10/31
>”[Sorry, I'm baffled. Why isn't "This means that they implicitly assume all values of f to be equally likely before they begin" obvious nonsense? As a pdf its impossible, and physically R+B
don't believe in f > 1 -W]”
Do you accept that
1. Observational uncertainty in f does not imply this can be taken as a pdf because the *probability* also depends on the prior?
2. That Allen and Frame are saying this?
[I'm saying something rather more basic: that the quoted statement is wrong. It is wrong because "all values of f", from -inf to +inf, being equally likely, is impossible. Do you agree with that?
And if so, what do you think the quoted statement is supposed to mean? -W]
If the observational uncertainty is known and the pdf is known then do you accept that there must be some prior that does the conversion?
If so, how would you describe the shape of prior required for there to be no difference in shape between observational uncertainty and probability (preferably using as little space as “all values
of f to be equally likely before they begin”)?
(Perhaps this is a stretch but I did put in an overinterpreting caveat as well as others.)
I would prefer to see the effects of at least a couple of plausibly different priors to see the effect this has. However, preferring and expecting to get may be rather different things.
>”[A+F may be doing their best to "frame" this in terms of them being sadly misunderstood, but does anyone believe that? -W]”
I think I have made it clear enough that I do believe that, and I have had much more discussion with James than with Myles and Dave. Presumably you mean who else?
12. #12 viento 2007/10/31
I am still not convinced, but probably I will have to think about it longer. The concept of reference state and reference climate sensitivity is just formally useful because one wants to derive
an expression of the gain 1/(1-f). But the whole argument could be just expressed in terms of the feedback parameter lambda and the climate sensitivity S, without any reference state. With several
feedbacks, several lambdas, the sensitivity would be just the inverse of the sum of lambdas.
In this framework, I can think of a rock planet, with just the black-body feedback as I explained before. In this case I cannot talk about any pdf for lambda or for S (ok, glossing over the
latitude dependence of the mean T). My point is whether S or lambda are described by a pdf just for formal reasons, or whether there is indeed intrinsically a physical need to treat them as random
variables.
I think, in a linear set-up, which is the only set-up in which S is meaningful, those pdfs just describe our lack of knowledge. In other words, if we had a collection of twin-earths, each one of
them would have the same S as our Earth, exactly the same, and not a distribution of S’s
13. #13 crandles 2007/10/31
>”[I'm sayong something rather more basic: that the quoted statemnet is wrong. It is wrong because "all values of f", from -inf to +inf, being equally likely, is impossible. Do you agree with
that? And if so, what do you think the quoted statement is supposed to mean? -W]”
*All* values of f from -inf to +inf are not equally likely but that is due to what f is rather than the distribution. Is it possible to talk about any distribution where all values are equally
likely? Well, yes, I think it is possible to talk about it. Would that be a *probability* distribution? No, you cannot get probabilities for any particular range unless the answer is always 0. Can
you use it as a prior and use Bayes' theorem? No, I have already said that. Does a prior have to be a *probability* distribution rather than just a vague wild guess at relative likelihoods? Thinking
about the units, I would feel safer saying that strictly yes the prior used does have to be a probability distribution. Which of these questions did you want answering when you asked “Do you
agree with that?” or have I still failed to answer?
What is it supposed to mean? I think I would prefer them to have said:
In order to get such a similarly shaped pdf, they have to have (at least implicitly) assumed a prior before they began where all reasonable values of f are similarly likely (ie a flat-ish shape
of distribution).
That is a bit longer but I should accept that they could have got most of it by adding reasonable and changing equally to similarly. On this thread, it seems to me everyone is talking about
distributions over reasonable values and only you are talking about unreasonable values of f. I think it is so obvious that it is meant to be over reasonable values that this shouldn’t need to be
explicitly stated. If you do insist it should be stated then it is a minor matter.
14. #14 crandles 2007/10/31
I tend to regard increased radiation from a hotter body as part of the system rather than a feedback.
This is probably very silly but I hope you don’t mind me asking.
Suppose you modelled a spherical rock without water, ice or atmosphere, and also a spherical rock without atmosphere but with ice and some weird process where increases in temperature cause ice
on the surface to melt and disappear into the rock, while falls in temperature cause more ice to form on the surface.
Presumably the ice albedo feedback will depend on the extent of ice coverage. Would it be possible to figure out some relationship of how the feedback changes with ice coverage? If this was
possible would this relationship mean anything for Earths climate?
What I am trying to ask is would it be sensible to model climate sensitivity not as an unknown constant but as a variable that varied with ice coverage in a manner that would be expected from (a
better version of) this virtual planet modelling?
There would still be uncertainty about the value of the variable and I am guessing the uncertainty if reduced at all would only be by a negligable amount. However if we are sure that at a time in
the past the feedback was less than 1 and there was less ice coverage now than then, could this help us to be sure that the feedbacks are less than some smaller value, say 0.9 now?
15. #15 Eli Rabett 2007/11/01
A problem with Bayesian stuff is that you have to have a way of picking a prior. Pure ignorance is not a good place to start because then you will overweigh unlikely outcomes (see the Frame and
Annan death motel, two ideas check in but only one checks out), but you have to be sure not to include any of the information you are going to test the prior against in the prior and you have to
make sure that the prior includes the entire range of all possible outcomes. In essence this means that the ideal prior will exclude all impossible outcomes. This is not trivial.
Some time ago, I pointed out that the prior might include negative climate sensitivities. James responded that observational data ruled this out. Fair enough, but he was using (I think) some of
the same data to test the prior against, which is a type of double dipping.
16. #16 James Annan 2007/11/01
While I don’t disagree much with what Eli says, I think I’d prefer to say that physical implausibility rules out negative sensitivities. Not that it actually matters if the prior assigns non-zero
probability to negative values. But using the observation that we exist, to set a plausible prior, is not really equivalent to double counting the detailed measurements that we have made.
Besides, when the prior is taken from historical texts like the Charney report, it’s a fair bet that it does not depend too heavily on the numerous observations that have been made subsequently
:-) People who complain vaguely about double-counting the data somehow never quite manage to address this specific point (I don’t mean you in particular, but rather one or two prominent climate
17. #17 Eli Rabett 2007/11/01
The problem is that the new data itself may be biased by previous results. A pernicious example could be bias in the method of measurement, for example, borehole data appears to be biased a bit
below that of dendro data (or vice versa). IEHO working towards optimal priors is a major part of any B analysis. I take your point about physical improbability, and that is a better way of
restricting a uniform prior (which I dislike on other grounds), but even there you might find data analyses which use the same restriction to constrain their transformation of raw measurements to
(in this case) temperatures.
18. #18 Hank Roberts 2007/11/02
[Hmm, I think I'd go for "unfortunate but inevitable" for that -W]
19. #19 Eric Steig 2007/11/04
William, you replied to me saying “[But the electrons *mass* isn't probabalistic. Isn't there a fundamental problem if we can even agree if f has a well defined value or not? Perhaps we're at the
early 20th C pre-QM stage? -W]”
I don’t think this is a problem. Everyone will agree that f CANNOT have a well defined value, in the sense of having a constant value. f can and will change as the boundary conditions change,
obviously. For example, we all know sea ice albedo is a big positive feedback. It goes away when sea ice is all gone. (And it must change in value as sea ice declines.) Every other contributor
to f must behave like this. Modeling this precisely is impossible so there *has* to be some probability distribution. That doesn’t mean what Roe and Baker assumed (gaussian) is right, but I have
yet to see a good argument that it is a bad start. The older work of Schlesinger actually reaches the same conclusion, effectively.
["Modeling this precisely is impossible so there *has* to be some probability distribution" - I don't think this is a correct argument. That there is some initial-condition uncertainty on f is
true, I should think. But if we're thinking of this in terms of the variation provided year-to-year by our current climate, it is small. Quite how small I wouldn't know - maybe 0.1 oC. Almost all
the pdf in f is reflecting our ignorance of f, not its true variation -W]
20. #20 James Annan 2007/11/04
I am sure that William is correct here.
Eric, consider the following: I have a copy of the MIROC model here. From all the published material, you could not possibly replicate its behaviour exactly, even if I told you the specific
parameter values I used. Would you say that the sensitivity of this model version is a probability distribution?
Let’s say I do a 100-year hindcast (and some paleo time-slices), give you the equivalent of the obs we have of the real climate system, and ask you to estimate the model sensitivity. In this
case, it is surely clear that the uncertainty in your estimate is merely an expression of your ignorance, not some intrinsic uncertainty in the system (because there is none).
21. #21 Magnus W 2007/11/05
Knowing that I’m way out of my area… I can’t help but ask some questions.
I just can’t see how Eric is wrong here: if the ice goes, the CO2 reflection will change? Different outgoing radiation will affect the sensitivity? OK, ideally it will get a number, a new number,
but at different stages different numbers… a distribution if you like. I can’t, however (as I showed at Annan’s blog), think of anything that will make a huge difference. It feels like the CO2 (and
other greenhouse gases) cycle is the big uncertainty?
[The initial state of the climate is bound to affect the CS a bit, but this isn't really part of its meaning, in that CS is supposed to be a useful concept because it doesn't change that much. At
any event, if we're interested in the CS between *now* and 2*CO2, then the ice state isn't a problem: it's here now, and it won't be then, so why should it affect the CS, why should it contribute
to the uncertainty.
Stepping back a bit, how can there possibly be this level of ambiguity over the definition of so important and oft-studied a concept? Are we like the old philosophers, wasting words arguing over
the meaning of infinity, when we haven't even defined it? -W]
22. #22 Eric Steig 2007/11/05
James, and William,
You’re getting way too esoteric on something that is quite simple! I’m not talking about “initial condition” uncertainty. Perhaps you think I am because I made a reference to Lorenz. The point
is that the sensitivity must necessarily depend on the boundary conditions, and as the boundary conditions (e.g. sea ice) change (as part of the feedback), then so does the sensitivity. This is what
Roe and Baker are talking about (mostly). An example point is that using e.g. the last 100 years to get at sensitivity won’t necessarily give you the right number to use for the next 100 years.
Of course this is what is really nice about James’s work on using both ice age and modern data to constrain sensitivity. I readily concede that point, and would really like Roe and Baker to address
it in future work. But again, I am NOT talking about initial conditions, and nor were Roe and Baker, so that part of James’s original post on this is quite beside the point.
[We may all be missing each other's points. OK, so: as I understand it, CS can be considered in several ways: as a characteristic of the planet in general (in which case, it's clearly subject to
variation in the way you describe); as a "from here to 2*CO2" (in which case it isn't); or as a "tangent to the curve, ie instantaneous value" (in which case again it isn't).
Clearly, the cp.net stuff is considering here-to-2*CO2, so problems with sea ice state don't arise. And R+B explicitly state that they are considering preind to 2*CO2; so the same is true.
So yes, your point can in certain circumstances be true, but not in this case -W]
23. #23 Magnus W 2007/11/05
I don't know, I might be oversimplifying things here, but… at any given time between two times the CS is a number (OK, it might change a little depending on natural variations in the several-Earths
example); however, from the same starting point, going for a longer or shorter time, the CS would be slightly different (e.g. sea ice)… a distribution. In some way I guess that is what is said with the
2-4.5 CS?
And this new discussion is about the possibility of larger CS due to possible variations in the far past. Thinking of CS as a distribution might be wrong since it seems that at any given time it
should not change much? Which I think could be interpreted quite wrongly in the media… So that would mean that, for all that we know now, it is a number most probably around 3, and we don't have any hard
evidence for it to show big changes in the next, say, 100 years.
And now I just read W's comments above… so well… hmm… I'll be back…
24. #24 Magnus W 2007/11/05
Bwha… my sucky library hasn't got access to the article yet, so well… guess it ends here.
25. #25 crandles 2007/11/05
“Modeling this precisely is impossible so there *has* to be some probability distribution.”
This seems to me to indicate that Eric is splitting this up into ignorance that might be overcome and ignorance that cannot be overcome. James and William are writing off the ignorance that
cannot be overcome as small, perhaps 0.1C. I would have thought that whether that is reasonable depends on what you include in that. Precise timings of El Niño cycles seem likely to be in the
cannot-be-overcome category.
What else will affect it? One thing I suggest affects the CS from *now* to 2*CO2: it may depend on whether we are currently committed (or not quite committed) to losing the Arctic summer sea ice. It
may be possible to know this with a perfect model, but is it reasonable to suggest a perfect model is something we could gain? Eric seems to think there could be lots of other things like that. If
we don't know what they are, how can we assess whether they have large effects or not? James doesn't seem to think this is a problem. Getting a perfect model seems like a major problem to me, and
until we get much closer to a much better model it almost seems like arguing over how many angels can dance on a pinhead.
If rather than this intangible split you classify everything as our ignorance that could be avoided with a truly perfect model that can even cope with chaos, then the CS from an equilibrium state
to a 2*CO2 equilibrium state is a number, not a pdf.
Is that anywhere close to the state of play in this discussion?
[I think so. Now the game is to reconcile that with R+B and other papers... -W] | {"url":"http://scienceblogs.com/stoat/2007/10/26/uncertainty-in-s-and-f/","timestamp":"2014-04-19T07:28:24Z","content_type":null,"content_length":"95500","record_id":"<urn:uuid:aef9610b-d018-4b59-8ec6-aed5304b2e7d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Transforming a set of vertices
May 9th 2010, 11:08 AM #1
May 2010
Transforming a set of vertices
Hi All,
I have a set of 'm' *unit* vectors, v1, v2, ..., vm, with k dimensions each.
The angle between any two vectors is at most 90 degrees; by that I mean that the dot/inner product of any two vectors is non-negative: vi.vj >= 0 for all i, j in [1, m].
Now since all the angles between the vectors are at most 90 degrees, they can all 'fit' in one quadrant (quadrant/octant/orthant... by quadrant I mean a region in the k-dimensional space where the
j-th co-ordinates of all the 'm' vectors have the same sign, for all j).
The problem is to find one such transformation matrix U such that all U*vi, for all i, are in the *first quadrant*.
It suffices to give a transformation matrix that takes the set of vectors into *any* quadrant; then a set of reflections brings them into the *first quadrant* (just change all
the negative co-ordinates en masse to positive, as in the sketch below).
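For concreteness, that final reflection step might look like this (a minimal numpy sketch, assuming the vectors have already been brought into a single orthant; the function name and the rows-as-vectors layout are illustrative choices, not from the thread):

import numpy as np

def reflect_to_first_orthant(V):
    # V: (m, k) array whose rows are the vectors, assumed to share a sign
    # pattern in every coordinate (i.e. they already sit in one orthant).
    signs = np.ones(V.shape[1])
    for j in range(V.shape[1]):
        nonzero = V[:, j][V[:, j] != 0]
        if nonzero.size > 0 and nonzero[0] < 0:
            signs[j] = -1.0
    D = np.diag(signs)   # diagonal +/-1 matrix: an orthogonal reflection
    return D, V @ D      # rows of V @ D have non-negative coordinates

Composing D with whatever transformation put the vectors into that orthant gives one candidate U.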
Any replies will be greatly appreciated.
| {"url":"http://mathhelpforum.com/geometry/143851-transforming-set-vertices.html","timestamp":"2014-04-18T01:01:12Z","content_type":null,"content_length":"29529","record_id":"<urn:uuid:79587fe9-df52-4ecd-863e-7f1f18d04c00>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
The ICT 1301 Resurrection Project.
The Software Archive
The Aims of the project
Aim 1.
To restore one 1301 to working order to the point where we can retrieve the software locked up in 1300 format punched cards and reels of ten track magnetic tape (which were recorded 30+ years ago) so
that they can be recorded on modern media and ultimately made available, hopefully on the web, along with a 1301 simulator.
The CD
Nothing Yet !
Online Archive
Only the music in the Media section, two card packs, and
an article about flatbed scanning of card packs in the Documents! section.
Machine manuals are added as they are prepared as PDFs,
split into downloadable chunks of about 10 MB.
Offline Archive
We do have a fair collection of manuals, many boxes of cards and (hopefully) most of the system software on magnetic tape.
Here are some of the libraries we are trying to recover.
I.C.T. 1300 Series Mathematic Routines (June 1965). A collection of
(pre-existing) I/O, arithmetic, transcendental and matrix subroutines,
and solutions to various mathematical, engineering, statistical and
linear programming problems.
I.C.T. 1300 Series Magnetic Tape Routines and Conventions. April 1965
I.C.T. 1300 Series Magnetic Tape Sorting.
and lots more.
I.C.T. Supplied Library
In the following, bank means a bank of 40 consecutive characters on
the line printer; there are three banks in the printer.
Space means moving the line printer forward one line.
I.A.S. means Immediate Access Store - the main memory.
Routine Number Title
A/00/00 Print Punch Feed Control (400 word I.A.S.)
A/00/05 Print Punch Feed Control (3 bank character printing)
A/00/06 Print Punch Feed Control (2 bank printing)
A/00/07 Print Punch Feed Control (1 bank printing)
A/00/08 Print Punch Feed Control Resetting
A/02/06 Print 1 Line (1 bank) & Space
A/02/07 Print 1 Line (2 banks) & Space
A/02/08 Print 1 Line (3 banks) & Space
A/02/09 Print 1 Line (1 bank) Numeric & Space 1
A/02/10 Print 1 Line (1 bank) Space 1
A/02/18 Print Register B
A/02/19 Print 1 Line (3 banks) numeric and space
A/03/03 Punch 1 card
A/03/04 Punch 1 card - Numeric data only
A/03/05 Punch 1 card - timesharing
A/04/04 Read 1 card - partial timesharing
A/04/06 Read 1 card - timesharing II
A/04/07 Read 1 card
A/04/08 Read 1 card - timesharing 1
A/05/00 Print Punch Feed Routine (3 bank printing)
A/05/01 Print Punch Feed Routine (2 bank printing)
A/05/02 Print Punch Feed Routine (1 bank printing)
A/05/03 Print Punch Feed Routine (3 banks) no punching
A/05/04 Print Punch Feed Routine (2 banks) no punching
A/05/05 Print Punch Feed Routine (1 bank) no punching
A/05/06 Print Punch Feed Routine (3 banks) no reading
A/05/07 Print Punch Feed Routine (2 banks) no reading
A/05/08 Print Punch Feed Routine (1 bank) no reading
A/05/09 Print Punch Feed Routine - no printing
A/06/00 General print distribution
A/06/03 Conversion of digits > 9 for printing
A/06/04 Print and space routine for use with A/06/00
A/06/05 Alternative Type 1 for A/06/00
A/07/03 Row Binarise Numeric 0-15 Punch data
A/07/04 Row Binarise (Decimal & Alpha) Punch data
A/07/05 Row Binarise Zero suppressed Punch Data
A/07/06 Row Binarise Numeric 0-11 Punch data
A/07/08 Row Binarise all Standard Punchings
A/09/00 Card list program
A/09/02 Manchester Auto Code Card list
A/09/03 Manchester Auto Code Card updating
A/10/02 Distribution during card reading
A/10/03 Input distribution - Name & Address Card (1)
A/10/06 Input distribution - Name & Address Card (2)
A/11/00 Input/Output Control routine
A/12/01 Spacing routine for Printer
A/12/02 To find sprag engaged on Printer
A/13/00 Read Paper Tape
A/13/01 Set up Paper Tape
A/13/02 Paper Tape Read and Code Convert
A/14/00 Punch Paper Tape
A/14/01 Paper Tape Code convert and Punch
A/15/00 Type In
A/15/01 Type Out
A/15/02 Clear all "Tabs"
A/15/03 Set desired "Tabs"
A/16/00 Read Card Image through the Punch
A/17/00 Fixed or floating point number input
B/06/00 Drum Sort Generator
Half inch and one inch magnetic tape
C/00/00 Write - Single Unit
C/00/01 Write Multiple Units
C/00/02 Write Exceptions Single Unit
C/00/03 Write Exceptions Multiple Units
C/01/00 Read - Single Unit
C/01/01 Read Multiple Units
C/01/02 Read Exceptions Single Unit
C/02/00 Magnetic Tape Read and Write
C/03/00 4 Tape Record merge
C/03/01 Pre-stringing tape records
C/03/02 Tape record merge - 3 decks
C/03/04 3 TAPE Sort using Drum
C/03/05 4 TAPE Sort package
C/04/00 Write initial labels and Virgin tapes
C/04/02 Errors storing
C/04/03 Print tape Statistics
C/05/00 Dump
C/05/01 Restart
C/05/02 Tape Repositioning
C/06/00 Tape Control Routine
C/06/01 Tape Control Routine
C/07/01 Present Tape Records
C/07/02 Present Selected Tape Records
C/07/03 Present Selected Tape Records
C/07/04 Tape Block Reconstruction
C/09/00 Job Set-up
C/09/01 Write Program to tape
C/09/02 Transfer Program to drum
C/09/03 Insert program on tape
C/09/04 Program Tape Maintenance
C/09/08 Program Present (with C/02/00)
C/09/13 Program tape updating routine
C/10/00 Card to tape conversion of variable length records
C/10/01 Card Image to tape, print
C/10/06 Reproduction of Magnetic tape files
C/11/00 Compare two tape files and print difference
Quarter inch magnetic tape
D/00/01 Write Package
D/01/01 Read Package
D/02/00 Magnetic Tape Read and Write Routines
D/03/00 4 Tape Merge
D/03/01 Prestringing
D/03/02 3 Tape Merge
D/04/00 Create Initial Labels and Virgin Tapes
D/04/03 Print Tape Statistics
D/05/00 Dump
D/05/01 Restart
D/07/05 Record Present
D/08/00 Tape read & write package without control routine
D/08/01 Read/write package with control routine
D/08/05 Read/Write Package
D/09/00 Job Set-up
D/09/01 Prepare Program to write to tape
D/09/02 Convert a program block from tape
D/09/04 Program Tape Maintenance
D/09/05 Program present with Read Package (D/01/01)
D/09/06 Program present with Read/Write Package (D/08/00)
D/09/07 Program present with Read/Write Package (D/08/01)
D/09/08 Program Present with Read/Write Package D/08/05
D/09/09 Write Program to Tape with Write Package (D/00/01)
D/09/10 Write Program to Tape with Read/Write Package (D/08/00)
D/09/11 Write Program to Tape with Read/Write Package (D/08/01)
D/09/12 Write Program to Tape with Read/Write Package D/08/05
D/09/13 Program tape updating routine
D/10/01 Card Image to Tape, Print Image from Tape
D/10/06 Reproduction of Magnetic Tape Files
D/11/00 Compare two tape files and print differences
E/00/00 P.A.Y.E. - Weekly Pay
E/00/01 P.A.Y.E. - Monthly Pay
E/00/02 P.A.Y.E. - Weekly Pay
E/00/03 P.A.Y.E. - Monthly Pay
E/01/00 Coin Analysis
E/01/01 Multiple Unit Analysis
E/02/01 Conversion from Sterling to Decimals of a Pound
E/02/02 Sterling Conversion to pence
E/02/03 Pence conversion to sterling
E/02/10 Pence conversion to sterling
E/02/11 Conversion from Sterling to Decimal of Pound
E/03/00 Graduated Pension Contribution (General)
E/03/01 Graduated Pension Contribution (Weekly)
E/04/00 Sterling amounts to English
E/05/00 Percentage
I/00/00 Manchester Auto Code (1200 I.A.S.)
I/00/01 Manchester Auto Code (1200 I.A.S.)
I/00/02 Manchester Auto Code (800 I.A.S.)
I/00/03 Manchester Auto Code (400 I.A.S.)
I/00/04 Manchester Auto Code (400 I.A.S.)
I/00/05 Manchester Auto Code (Own Coding Facility)
I/01/00 Thirteenhundred Assembly System 1
I/02/00 Thirteenhundred Assembly System 2
I/02/01 Thirteenhundred Assembly System 2 (1/4 inch magnetic tape - Control Pack)
I/03/00 Mnemonic Programming Language 1
I/03/01 Mnemonic Programming Language 1 with Paper Tape
I/03/02 Mnemonic Programming Language 2 (1/4 inch magnetic tape system)
I/03/03 Mnemonic Programming Language 2 (1/2 inch and 1 inch magnetic tape systems)
I/03/10 Mnemonic Programming Language 1 Standard Pack A
I/03/11 Mnemonic Programming Language 1 Standard Pack B
I/03/12 Mnemonic Programming Language 1 Standard Pack C
I/03/13 Mnemonic Programming Language 1 Standard Pack D
I/03/20 Mnemonic Programming Language Source List
K/00/03 Rapidwrite Sterling to Pence
K/00/04 Rapidwrite Division
K/00/05 Rapidwrite Square Root
K/00/06 Rapidwrite Reciprocation
K/00/07 Rapidwrite Left Shift through mill
K/00/08 Rapidwrite Size error
K/00/09 Rapidwrite Table transfer
K/01/00 Convert picture to numeric
K/01/01 Convert numeric to picture
K/02/00 Rapidwrite (Card) Compiler
K/02/01 Rapidwrite (Card) to Cobol translator
K/02/02 Rapidwrite Standard Pack
K/02/03 Cobol Print Out
M/03/00/08 Matrix Inversion (800 I.A.S.)
M/03/00/12 Matrix Inversion (1200 I.A.S.)
M/03/01/08 Simultaneous Equations (800 I.A.S.)
M/03/01/12 Simultaneous Equations (1200 I.A.S.)
M/03/02/12 Structural Frame Analysis
M/03/03/12 Multiple Regression (1200 I.A.S.)
M/03/04/08 Analysis of Variance (800 I.A.S.)
M/03/05/08 Linear Programming (800 I.A.S.)
M/03/06/08 Fourier Analysis (800 I.A.S.)
M/03/06/12 Fourier Analysis (1200 I.A.S.)
M/03/07/12 Eigen Roots and Vectors (1200 I.A.S.)
M/03/08/08 Eigen Roots and Vectors (Jacobi's) (800 I.A.S.)
M/03/08/12 Eigen Roots and Vectors (Jacobi's) (1200 I.A.S.)
M/03/09/04 Transformation of Axes (400 I.A.S.)
M/03/09/08 Transformation of Axes (800 I.A.S.)
M/03/10/04 Varying Section Beam Analysis (400 I.A.S.)
M/03/11/12 Probit Analysis (1200 I.A.S.)
M/03/12/12 Rotating Disc
M/03/14/04 Numerical Solution of Polynomial Equations (400 I.A.S.)
M/03/14/08 Numerical Solution of Polynomial Equations (800 I.A.S.)
M/03/14/12 Numerical Solution of Polynomial Equations (1200 I.A.S.)
M/03/15/12 Trim Loss (Replaced by M/03/24,25,26)
M/03/20/08 Least Squares Polynomial Fit (with constraints)
M/03/21/04 Continuous Beam Analysis
M/03/21/08 Continuous Beam Analysis
M/03/21/12 Continuous Beam Analysis
M/03/23/08 Multiple Regression
M/03/24/12 Trim Loss
M/03/25/08 Trim Loss
M/03/26/08 Trim Loss
M/03/27/12 Structural Frame Analysis, Data Validity Check
M/03/28/12 Read Complex Matrix
M/03/29/12 Print Complex Matrix
M/03/30/12 Complex Matrix Multiplication
M/03/31/12 Complex Matrix Multiplication
M/03/32/12 Inversion of complex matrices
M/05/00/04 Matrix Print (400 I.A.S.)
M/05/00/08 Matrix Print (800 I.A.S.)
M/05/01/12 Symmetric Matrix Print (1200 I.A.S.)
M/05/02/04 Matrix Transposition (400 I.A.S.)
M/05/02/08 Matrix Transposition (800 I.A.S.)
M/05/02/12 Matrix Transposition (1200 I.A.S.)
M/05/03/04 Matrix Multiplication (400 I.A.S.)
M/05/03/08 Matrix Multiplication (800 I.A.S.)
M/05/03/12 Matrix Multiplication (1200 I.A.S.)
M/05/04/08 Matrix Read (800 I.A.S.)
M/05/05/04 Matrix Inversion (400 I.A.S.)
M/05/05/08 Matrix Inversion (800 I.A.S.)
M/05/05/12 Matrix Inversion (1200 I.A.S.)
M/05/06/04 Simultaneous Equations (400 I.A.S.)
M/05/06/08 Simultaneous Equations (800 I.A.S.)
M/05/06/12 Simultaneous Equations (1200 I.A.S.)
M/05/07/04 Solution simultaneous differential Equations (400 I.A.S.)
M/05/07/08 Solution simultaneous differential Equations (800 I.A.S.)
M/05/07/12 Solution simultaneous differential Equations (1200 I.A.S.)
M/05/08/08 Eigen Roots and Vectors (Jacobi's) (800 I.A.S.)
M/05/09/04 Numerical Solution of Polynomial Equations (400 I.A.S.)
M/05/09/08 Numerical Solution of Polynomial Equations (800 I.A.S.)
M/05/09/12 Numerical Solution of Polynomial Equations (1200 I.A.S.)
N/00/06 Division - General decimal
N/00/09 Division - Comprehensive
N/00/10 Division - 2 Positive numbers
N/00/11 Division - Decimal only
N/00/12 Division - Integer by integer
N/00/13 Division - Sterling by decimal (table)
N/00/14 Division - Sterling by decimal
N/00/15 Division - Sterling/Sterling
N/00/17 Division - Positive unrounded quotient
N/00/18 Division - Positive rounded quotient
N/00/19 Division - Positive remainder
N/00/20 Division - Positive negative
N/00/21 Division - Sterling/Decimal to Specified No. of Dec. places
N/01/00 Square root of a fraction
N/01/04 Floating point square root
N/02/07 Floating point arithmetic 4 functions
N/03/00 Double length package
O/00/00 Evaluation of Exponential
O/00/01 Evaluation of Natural Logarithm (Fast)
O/00/02 Evaluation of Natural Logarithm (Slow)
O/00/06 Evaluation of Exponential (Fast)
O/01/00 Evaluation of Sine/Cosine
O/01/01 Evaluation of Arcsin, Arccos, Arctan
O/01/07 Evaluation of Arctangent
O/02/00 Evaluation of Sinh/Cosh
P/00/00 Linear Programming
P/01/02 Matrix Transposition (Fixed & Floating point)
P/02/02 Matrix Multiplication (Fixed point)
P/02/03 Matrix Multiplication Drum floating point
P/03/00 Matrix Inversion (Fixed point)
P/03/03 Matrix Inversion (floating point) Fast
P/03/04 Matrix Inversion (floating point) Slow
P/04/02 Matrix Addition/Subtraction (fixed point)
P/04/03 Matrix Addition/Subtraction (floating point)
P/05/01 Matrix Input
P/06/00 Solution of Simultaneous Linear equations (floating point)
P/06/01 Solution of Simultaneous Linear equations (fixed point)
Q/00/00 Runge-Kutta (N<20)
Q/00/01 Runge-Kutta
Q/01/00 Simpson Quadrature
Q/01/01 Gaussian Quadrature (Finite Limits)
Q/01/02 Gaussian Quadrature (Infinite Limits)
R/00/01 Calculation of Pseudo-Random Numbers
R/00/02 Generation of Pseudo-Random 9 digit Nos. (Lehmer model)
R/00/03 Generate a Pseudo-Random number with rectangular distribution
R/03/00 Evaluation of the Normal Probability Integral
S/00/00 Road Construction Calculations - Cut and Fill
S/00/01 Road Construction Calculations - Cut and Fill Validity Check
S/02/00 General Transportation Problem
S/03/00 Traffic Allocation by the shortest route Method
S/04/00 I.C.T. 1301/01 PERT Program
T/03/00 Formula Translator
X/00/02 Drum Parity Error Routine
X/00/05 Drum Transfer Parity Routine
X/00/06 Drum Parity Error Routine
X/02/01 Logical Not
X/02/02 Exclusive Or
X/02/03 Pack two digit number
X/02/04 Logical Equality
X/02/05 Logical Implies
X/02/06 Logical Neither - Nor
X/03/00 Word Inversion
X/04/00 Divide by 2
X/06/03 Preserve and reset Program indicators 10-19
X/06/04 Preserve and reset Program indicators "self resetting"
X/06/05 Preserve and reset indicators 01-04
X/07/00 Zero Suppression - General
X/07/01 Zero Suppression-Normal Sterling or Decimal
X/07/03 Zero Suppression - Decimal Only
X/07/04 Zero Suppression - Decimal any point
X/07/05 Zero Suppression of integers or sterling with expansion and symbol
X/07/06 Zero Suppression - Decimal currency
X/07/07 Zero Suppression - Decimal currency
X/07/08 Zero Suppression for use with B/02/00
X/07/09 Zero Suppression for use with B/02/00
X/08/03 Zeroise I.A.S.
X/09/00 Trace
X/09/01 Manual Indicator Trace Escape
X/09/02 Indicator trace
X/09/05 Evade S/R Trace
X/09/06 Looping trace
X/10/01 Punch Fast Read Cards from the Drum
X/10/02 Punch Fast Read Cards
X/10/03 Punch Fast Read (Engineers Card Format)
X/11/00 I.A.S. merging Sort - Fixed Length
X/11/01 I.A.S. Extraction Sort - Fixed Length
X/11/02 I.A.S. Exchanging Sort - Fixed Length
X/11/03 Insertion of one word records into a string
X/11/04 Channel merge (400 I.A.S.) Fixed length
X/11/05 Channel merge (800 I.A.S.) Fixed length
X/11/06 Channel reshuffle (400 I.A.S.)
X/11/07 Channel reshuffle (800 I.A.S.)
X/11/08 Insertion of record into string of fixed length
X/11/09 Insertion of record into string of variable length
X/11/10 Insertion of record into string of fixed length
X/11/11 Insertion of record into string of variable length
X/11/12 Insertion of record into string of fixed length
X/11/13 Insertion Sort, variable length
X/11/14 Drum Sort fixed length (400 I.A.S.)
X/11/15 Drum Sort fixed length (800 or more I.A.S.)
X/11/16 I.A.S. Extraction Sort variable length
X/11/17 Insertion of record into string of 2 word records
X/11/18 I.A.S. merging Sort variable length
X/11/19 I.A.S. merging Sort fixed length
X/11/20 Channel merge variable length
X/11/21 Channel merge fixed length
X/11/22 I.A.S. partition sort fixed length
X/12/01 Data Transfer
X/12/03 Block Transfer within I.A.S.
X/13/00 Main Program Printer test
X/15/00 Memory Dump (print)
X/15/01 I.A.S. Print Out (400 I.A.S.)
X/15/02 Memory Dump (print)- Extension to 30 entries
X/15/03 Memory Dump (2 bank print)
X/16/00 Block Collapse
X/17/01 General testing program
X/18/00 Initial Orders standard coding
X/18/02 I.A.S. Drum & Tape Print Out
X/18/03 I.A.S. Drum & Tape Print Out (2 bank printer)
X/18/07 Load X/18/00
X/18/08 Check contents of reserved bands
X/18/09 Bootstrap
X/18/10 Amend Initial Orders to read Y/7/2 from cards
X/18/11 Print Program Sheets standard 1300 Series
X/18/12 Reproduce cards
X/18/17 I.A.S. Drum & Tape Print Out (abbreviated format) half inch and one inch magnetic tape
X/18/19 I.A.S. Drum & Tape Print Out (2 bank with console log) half inch and one inch magnetic tape
X/18/20 I.A.S. Drum & Tape Print Out with console log, quarter inch magnetic tape
X/18/21 I.A.S. Drum & Tape Print Out (2 bank with console log) quarter inch magnetic tape
X/19/00 Units conversion
X/20/00 Alpha conversion 12:12 to 1:1
X/20/01 Alpha conversion 1:1 to 12:12
X/20/02 Card Code Conversion
X/20/03 Convert Machine Code to Powers Punching
X/20/04 Convert Powers Punching to Machine Code
X/20/05 Conversion 10 to 11 and vice-versa
X/23/00 Simulator
X/24/00 Program Updating Routine
X/25/01 Amend on Read
X/26/00 Validity Check
X/26/02 Hash Totalling
X/27/00 Cross Reference Routine
X/30/00 Zone Compression
X/30/01 Conversion from Compressed to normal
X/32/00 Variable word length record assembly
X/32/01 Variable word length record decoder
X/33/00 Simple List Processor
Phew ! | {"url":"http://ict1301.co.uk/13010710.htm","timestamp":"2014-04-20T03:49:06Z","content_type":null,"content_length":"23447","record_id":"<urn:uuid:75c9fb42-e363-4301-9805-ad6a88005aac>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rodeo, CA Algebra 2 Tutor
Find a Rodeo, CA Algebra 2 Tutor
...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I
was trained in and provided materials for each of these topics. I often find, when working with my stu...
20 Subjects: including algebra 2, calculus, geometry, biology
GENERAL EXPERIENCE: As an undergraduate student at Florida International University, I often found myself tutoring my study groups in several subject areas, from Biochemistry to advanced Calculus,
which greatly helped my performance in each class. By my senior year, I obtained a teaching assistant...
24 Subjects: including algebra 2, chemistry, calculus, physics
...I passed the teacher's CSET I & II sections that go far beyond the SAT basics. I'll run you through practice tests and focus on improving your weaknesses and using your strengths. I'll train
you in the key test taking talents.
37 Subjects: including algebra 2, reading, English, writing
...I have tutored algebra 2, geometry, and Spanish as well as various sciences. I also have experience in the "Lindamood-Bell" literacy, comprehension, and math techniques. I graduated Summa Cum
Laude from Creighton University with a B.S. in Environmental Science and Spanish.
24 Subjects: including algebra 2, chemistry, Spanish, reading
...I have tutored others, off the record, in various K-6 subject matter. All in all, I have had plenty of experience tutoring elementary level students and have helped them better understand their
subject material. I started playing the clarinet in 5th grade.
34 Subjects: including algebra 2, chemistry, writing, calculus | {"url":"http://www.purplemath.com/Rodeo_CA_algebra_2_tutors.php","timestamp":"2014-04-19T10:16:13Z","content_type":null,"content_length":"23836","record_id":"<urn:uuid:b5f2e135-f627-4b21-af6e-5aa92ba3695c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00262-ip-10-147-4-33.ec2.internal.warc.gz"} |
It Ain't No Repeated Addition
June 2008
In my column for September 2007, which was titled What is conceptual understanding? I remarked that I wished schoolteachers would stop telling pupils that multiplication is repeated addition. It was
little more than a throwaway line, albeit one that I feel strongly about. I put it in to provide a further illustration for the overall theme of the column, to indicate that there are examples beyond
the ones I had focused on. In the intervening months, however, I've received a number of emails from teachers asking for elaboration. Their puzzlement, they make clear, stems from their understanding
that multiplication actually is repeated addition.
If ever there were needed a strong argument that professional mathematicians need to interest themselves in K-12 mathematics education and get involved, this example alone should provide it. The
teachers who contact me do so because they genuinely want to know what I mean, having been themselves taught, presumably either in schools of education or else from school textbooks, that
multiplication is repeated addition.
Let's start with the underlying fact. Multiplication simply is not repeated addition, and telling young pupils it is inevitably leads to problems when they subsequently learn that it is not.
Multiplication of natural numbers certainly gives the same result as repeated addition, but that does not make it the same. Riding my bicycle gets me to my office in about the same time as taking my
car, but the two processes are very different. Telling students falsehoods on the assumption that they can be corrected later is rarely a good idea. And telling them that multiplication is repeated
addition definitely requires undoing later.
How much later? As soon as the child progresses from whole-number multiplication to multiplication by fractions (or arbitrary real numbers). At that point, you have to tell a different story.
"Oh, so multiplication of fractions is a DIFFERENT kind of multiplication, is it?" a bright kid will say, wondering how many more times you are going to switch the rules. No wonder so many people end
up thinking mathematics is just a bunch of arbitrary, illogical rules that cannot be figured out but simply have to be learned - only for them to have the rug pulled from under them when the rule
they just learned is replaced by some other (seemingly) arbitrary, illogical rule.
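To see the rub concretely: 3 × 4 can be read as 4 + 4 + 4, so the repeated-addition story works. But what repeated addition yields 3/8 × 4/5? You cannot add 4/5 to itself three-eighths of a time. Read as scaling, though, it is perfectly natural: three-eighths of four-fifths, namely 3/10.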
Pretending there is just one basic operation on numbers (be they whole numbers, fractions, or whatever) will surely lead to pupils assuming that numbers are simply an additive system and nothing more.
Why not do it right from the start?
Why not say that there are (at least) two basic things you can do to numbers: you can add them and you can multiply them. (I am discounting subtraction and division here, since they are simply the
inverses to addition and multiplication, and thus not "basic" operations. This does not mean that teaching them is not difficult; it is.) Adding and multiplying are just things you do to numbers -
they come with the package. We include them because there are lots of useful things we can do when we can add and multiply numbers. For example, adding numbers tells you how many things (or parts of
things) you have when you combine collections. Multiplication is useful if you want to know the result of scaling some quantity.
You don't have to use these applications, but both are simple and familiar, and to my mind they are about as good as it gets in terms of appropriateness. (I do think that you need to present simple
everyday examples of applications. Teaching a class of elementary school students about axiomatic integral domains is probably not a good idea! This column is not a rant in favor of the "New Math", a
term that I use here to denote the popular conception of the long-ago aborted education reform that bears that name.)
Once you have established that there are two distinct (I don't say unconnected) useful operations on numbers, then it is surely self-evident that repeated addition is not multiplication, it is just
addition - repeated!
But now, you have set the stage for that wonderful moment when you can tell kids, or even better maybe they can discover for themselves, this wonderful trick that multiplication gives you a super
quick way to calculate a repeated addition sum. Why deprive the kids of that wonderful piece of magic?
[Of course, any magic trick loses a lot once you see behind the scenes. In the very early days of the development of the number concept, around 10,000 years ago, there were only whole numbers, and it
may be that the earliest precursor of what is now multiplication was indeed repeated addition. But that was all 10,000 years ago, and things have changed a lot since then. We don't try to understand
how the iPod works in terms of the abacus, and we should not base our education system on what people knew and did in 8,000 B.C.]
Mathematics is chock full of examples where something that is about A turns out to be useful to do B.
Exponentiation turns out to provide a quick way to do repeated multiplication - wow, it's happened again! Is this math thing cool or what!
Anti-differentiation turns out to be a quick way to calculate an integral. Boy, is that deep!
I can just hear some pupils wondering, "Hey, how many more examples are there like this? This is really, really intriguing. It all seems to fit together. Something deep must be going on here. I've
gotta find out more."
I assume the reason for the present state of affairs is that teachers (which really means their instructors or the writers of the textbooks those teachers have to use) feel that children will be
unable to cope with the fact that there are two basic operations you can perform on numbers. And so they tell them that there is really only one, and the other is just a variant of it. But do we
really believe that two operations is harder to come to terms with than one? The huge leap to abstraction comes in the idea of abstract numbers that you can do things with. Once you have crossed that
truly awesome cognitive chasm, it makes little difference whether you can do one abstract thing with numbers or a dozen or more.
Of course, there are not just two basic operations you can do on numbers. I mentioned a third basic operation a moment ago: exponentiation. University professors of mathematics struggle valiantly to
rid students of the false belief that exponentiation is "repeated multiplication." Hey, if you can confuse pupils once with a falsehood, why not pull the same stunt again? I'm teasing here. But with
the best intentions of drawing attention to something that I think needs to be fixed.
And the way to fix it is to make sure that when we train future teachers, and when authors write, or states adopt, textbooks, we all do it right. We mathematicians bear the ultimate responsibility
here. We are the world's credentialed experts in mathematical structures, including the various numbers systems. ("Systems" here includes the operations that can be performed on them.) Our
professional predecessors constructed those structures. They are part of our world view, things we mastered so long ago in our educational journey that they are second nature. For too long we have
tacitly assumed that our knowledge and understanding of those systems is shared by others. But that isn't the case. I have a file of puzzled emails from qualified teachers that testifies to the gap.
I should end by noting that I have not tried to prescribe how teachers should teach arithmetic. I am not a trained K-12 teacher, nor do I have any first-hand experience to draw on. But the term
"mathematics teaching" comprises two words, and I do have expertise in the first. That is my focus here, and I defer to others who have the expertise in teaching. The best way forward, surely, is for
the two groups of specialists, the mathematicians and the teachers, to dialog - regularly and often.
In the meantime, teachers, please stop telling your pupils that multiplication is repeated addition.
Devlin's Angle is updated at the beginning of each month. Mathematician Keith Devlin (email: devlin@csli.stanford.edu) is the Executive Director of the Center for the Study of Language and
Information at Stanford University and The Math Guy on NPR's Weekend Edition. Devlin's most recent book, Solving Crimes with Mathematics: THE NUMBERS BEHIND NUMB3RS, is the companion book to the hit
television crime series NUMB3RS, and is co-written with Professor Gary Lorden of Caltech, the lead mathematics adviser on the series. It was published last September by Plume. | {"url":"http://www.maa.org/external_archive/devlin/devlin_06_08.html","timestamp":"2014-04-18T17:38:21Z","content_type":null,"content_length":"10748","record_id":"<urn:uuid:cafc2127-856c-461f-9d78-2003f8ea85aa>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with question on matrices
April 11th 2012, 12:19 AM #1
Apr 2012
Help with question on matrices
My daughter is doing a college assignment and I managed to help her with the majority of the question, but I cannot figure out the last part.
I have the characteristic equation and have made that a cubic, but I am unsure about the $M^{-1}$ part.
Last edited by Jane1947; April 11th 2012 at 05:25 AM.
Re: Help with question on matrices
OK, this is what I have, but you may want to check my arithmetic, because I DO make mistakes:
the Cayley-Hamilton theorem tells us M satisfies its own characteristic equation, which is $(x+1)(x-2)(x-4) = x^3 - 5x^2 + 2x + 8$.
therefore: $M^3 - 5M^2 + 2M + 8I = 0$, or put another way:
$M^3 = 5M^2 - 2M - 8I$.
the trouble is now how to express $M^2$ in terms of $M^{-1},M,I$. to accomplish that, we write:
$M^3 - 5M^2 + 2M = -8I$
$\frac{-1}{8}(M^2 - 5M + 2I)(M) = I$, so that evidently:
$M^{-1} = \frac{-1}{8}(M^2 - 5M + 2I)$, or:
$-8M^{-1} = M^2 - 5M + 2I$, and re-arranging:
$M^2 = 5M - 2I - 8M^{-1}$.
substituting back in our original equation:
$M^3 = 5M^2 - 2M - 8I = 5(5M - 2I - 8M^{-1}) - 2M - 8I$
$= 23M - 18I -40M^{-1}$, obtaining:
p = 23, q = -18, r = -40.
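A quick numerical sanity check: the matrix $M$ itself isn't reproduced in this thread, so the sketch below simply manufactures an arbitrary matrix with eigenvalues $-1, 2, 4$; any such matrix has the same characteristic polynomial and so must satisfy the same identity.

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))                        # almost surely invertible
M = S @ np.diag([-1.0, 2.0, 4.0]) @ np.linalg.inv(S)   # eigenvalues -1, 2, 4

I = np.eye(3)
lhs = np.linalg.matrix_power(M, 3)
rhs = 23 * M - 18 * I - 40 * np.linalg.inv(M)
print(np.allclose(lhs, rhs))                           # True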
Re: Help with question on matrices
Thanks very much for the time and effort you have put into this. I appreciate the answer and shall sit down with a pencil and try and explain it to her!
| {"url":"http://mathhelpforum.com/advanced-algebra/197075-help-question-matrices.html","timestamp":"2014-04-18T09:16:49Z","content_type":null,"content_length":"38578","record_id":"<urn:uuid:dbd1a808-dc95-4dfc-b7d4-6101d2fdecc5>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Considering just positive integers, there are 6 factorizations of 495 into two factors:
1*495, 3*165, 5*99, 9*55, 11*45 and 15*33. Each of these corresponds to one of
the six (x,y) pairs bobbym listed in post #4. For example: 1*495=248^2-247^2 and
24^2-9^2 = (24+9)(24-9) = 33*15.
In general, for an odd composite number, each of its unique factorizations (other than a perfect
square factorization) corresponds to a difference of squares. For 9 = 1*9 we get 5^2-4^2
= (5+4)(5-4) = 9*1, but 3*3 has no difference-of-squares representation unless we allow zero:
3^2-0^2 = (3+0)(3-0) = 3*3.
But we were talking about POSITIVE integers.
If M is an odd composite number and M = n*m where n and m are different, we get
M = ((n+m)/2)^2 - ((n-m)/2)^2
as a difference of squares factorization.
If I recall correctly this was involved in one of Fermat's methods of factoring odd composites.
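A few lines of Python make the correspondence explicit (this just walks the factor pairs, exactly as described above; the helper name is illustrative):

def diff_of_squares(M):
    # all ways to write the odd number M as x^2 - y^2 with x > y >= 0
    pairs = []
    for n in range(1, int(M ** 0.5) + 1):
        if M % n == 0:
            m = M // n                          # M = n*m with n <= m
            x, y = (n + m) // 2, (m - n) // 2   # integers, since M is odd
            pairs.append((x, y, n, m))
    return pairs

for x, y, n, m in diff_of_squares(495):
    print(f"{x}^2 - {y}^2 = {n}*{m} = 495")

For 495 this prints the six representations, from 248^2 - 247^2 down to 24^2 - 9^2.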
Have a grrreeeeaaaaaaat day!
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=250796","timestamp":"2014-04-16T04:41:37Z","content_type":null,"content_length":"15705","record_id":"<urn:uuid:0143608e-fa32-4d76-b3f7-13183d9dbde5>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brownian Motion maximum principle
April 9th 2013, 09:59 AM #1
Oct 2009
Brownian Motion maximum principle
First some notation.
B(t) is a standard Brownian Motion.
M(t) = max(B(u): 0<=u<=t)
T(x) = min(u >= 0 : B(u) = x)
phi(x) = standard normal cdf
I know that P(B(t) > x) = 1 - phi(x/sqrt(t)) and P(M(t) >= x) = P(T(x) <= t) = 2*(1 - phi(x/sqrt(t))).
I have a problem that asks for P(M(4)<=2). Wouldn't this be equal to 2*(phi(x/sqrt(t))) = 2*(phi(2/sqrt(4))) = 2*phi(1)?? However, the book says the answer is 0.6826, which is not equal to 2*phi
(1). What am I doing wrong?
Re: Brownian Motion maximum principle
Hey BrownianMan.
What did you get for your distribution of the maximum of B(u) for 0<=u<=t?
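For what it's worth, the formulas quoted in the question settle it once the complement is taken carefully: P(M(4) <= 2) = 1 - P(M(4) >= 2) = 1 - 2*(1 - phi(1)) = 2*phi(1) - 1 = 0.6826, not 2*phi(1). A quick numerical check (exact value plus a crude Monte Carlo; the names and simulation parameters below are illustrative):

import numpy as np
from scipy.stats import norm

print(2 * norm.cdf(1) - 1)   # 0.68268..., matching the book's answer

rng = np.random.default_rng(1)
paths, steps, T = 10_000, 1_000, 4.0
dW = rng.standard_normal((paths, steps)) * np.sqrt(T / steps)
running_max = np.maximum(np.cumsum(dW, axis=1).max(axis=1), 0.0)  # include B(0)=0
print((running_max <= 2).mean())
# roughly 0.69: the discretized walk slightly under-samples the true maximum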
| {"url":"http://mathhelpforum.com/advanced-statistics/217116-brownian-motion-maximum-principle.html","timestamp":"2014-04-18T05:07:19Z","content_type":null,"content_length":"32010","record_id":"<urn:uuid:cd7e902b-bf73-40b4-8cb8-4abb6602ff4d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
The QR Method
For reasonably-sized matrices, this is the most efficient and widely used method. There's very good code on NETLIB EISPACK to do this. For large matrices, people are looking into Arnoldi methods (see
Lehoucq). This method appeared in 1961 (Francis). Here's an elementary introduction. Given a matrix $A$ whose eigenvalues we want to compute:
Note: The term ``orthogonal matrix'' should be restricted to real matrices. However, usage of the term has extended to complex matrices. But in the complex case the matrices should be understood as
being ``Unitary.''
The basic tool is the Householder matrix

$P = I - 2\frac{ww^T}{w^Tw};$

this is the general form of a ``Householder Matrix''. Any such $P$ is symmetric and orthogonal, so $P^2 = I$.

Given a vector $x$ (say, the first column of $A$), pick

$w = x \pm \|x\|_2 e_1,$

where $e_1$ is the first coordinate vector. Multiplication by the resulting $P$ annihilates every component of $x$ below the first:

$Px = \mp\|x\|_2 e_1.$

Substituting this into the first component shows that the sign should be chosen to agree with the sign of $x_1$. This choice maximizes $|w_1|$ and so avoids cancellation (loss of significance).

Now repeat the construction on the second column, using a Householder matrix that acts only on rows $2,\dots,n$, and so on down the diagonal. If at the $k$-th stage $P_k$ denotes the corresponding Householder matrix, form the product

$Q = P_1 P_2 \cdots P_{n-1},$

which is orthogonal. Then

$A = QR$

with $R$ upper triangular.

Now that we know how to obtain a $QR$ factorization, the QR method iterates it: set $A_1 = A$ and, for $k = 1, 2, \dots$, factor $A_k = Q_k R_k$ and form

$A_{k+1} = R_k Q_k = Q_k^T A_k Q_k.$

Note that each $A_{k+1}$ is orthogonally similar to $A_k$, so every iterate has the same eigenvalues as $A$.

Convergence of QR: Let the eigenvalues of $A$ satisfy

$|\lambda_1| > |\lambda_2| > \cdots > |\lambda_n| > 0;$

then the iterates $A_k$ converge to an upper triangular matrix with the eigenvalues on the diagonal. For matrices with eigenvalues not satisfying this condition (for example, complex conjugate pairs of a real matrix), the iterates converge instead to a block upper triangular form, in which the eigenvalues are carried by $1 \times 1$ and $2 \times 2$ diagonal blocks.
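To make the construction concrete, here is a short self-contained sketch of Householder QR and the (unshifted) QR iteration; a minimal illustration, not the EISPACK routines mentioned above:

import numpy as np

def householder_qr(A):
    # Reduce A to upper triangular R by successive Householder reflections,
    # accumulating Q so that A = Q @ R.
    R = A.astype(float).copy()
    n = R.shape[0]
    Q = np.eye(n)
    for k in range(n - 1):
        x = R[k:, k]
        w = x.copy()
        # sign chosen to agree with x[0]: maximizes |w[0]|, avoids cancellation
        w[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        beta = w @ w
        if beta == 0.0:
            continue
        R[k:, :] -= np.outer(2.0 * w / beta, w @ R[k:, :])   # apply P on the left
        Q[:, k:] -= np.outer(Q[:, k:] @ w, 2.0 * w / beta)   # accumulate Q = Q @ P
    return Q, R

def qr_iteration(A, iters=500):
    # A_{k+1} = R_k Q_k = Q_k^T A_k Q_k: similar to A, tends to triangular form.
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = householder_qr(Ak)
        Ak = R @ Q
    return np.sort(np.diag(Ak))[::-1]   # eigenvalue estimates

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(qr_iteration(A))   # close to 3 + sqrt(3), 3, 3 - sqrt(3)

For this symmetric test matrix, whose eigenvalues have distinct moduli, the iterates converge to a diagonal matrix and the printed values agree with np.linalg.eigvalsh(A).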
Juan Restrepo 2003-04-12 | {"url":"http://math.arizona.edu/~restrepo/475A/Notes/sourcea-/node80.html","timestamp":"2014-04-18T08:03:33Z","content_type":null,"content_length":"29269","record_id":"<urn:uuid:5f39fc57-617a-406d-b6eb-e2ceeb560ee2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Can anyone please explain the keplers' Law of planetary motion to me? Thanks!
Best Response
You've already chosen the best response.
Kepler (1571-1630) had studied for many years the records of observations on the planets made by TYCHO BRAHE ,and discovered three laws now known by his name.Kepler's law states 01.The planet
describes ellipses about the sun as one focus 02.The line joining the sun and the planet sweeps out equal areas in equal times 03.The squares of the periods of revolution of the planets are
proportional to the cubes of their mean distances from the sun.
Best Response
You've already chosen the best response.
I law :-Law of elliptical orbits "The path of a planet is an elliptical orbit ,with sun at one of its foci". II Law:-Law of equal Areas "The radius vector,drawn from the sun to a planet sweeps
equal areas in equal interval of time" Or "The areal velocity(area swept out by planets per unit time) of a planet is constant". III Law:-Law of periods :(Harmonic law) "The square of the period
of the revolution of a planet round the sun is proportional to the cube of the semi-major axis of the elliptical orbit ". Kepler's third law is consistent with the inverse square nature of the
law of universal gravitation. According to Kepler's second law, when the planet is closest to the sun its speed is maximum and when it is farthest from the sun its speed it minimum. HOPE IT HELPS
YOU ^_^
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f635cd5e4b079c5c631b9c6","timestamp":"2014-04-19T19:45:17Z","content_type":null,"content_length":"31234","record_id":"<urn:uuid:ce346a51-56fd-42e7-a5de-75c4ae5e4971>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Linear regression with constraints
dmitrey dmitrey.kroshko@scipy....
Wed May 7 12:36:42 CDT 2008
As far as I had seen from the example
(btw this one has analytical df)
specialized solvers (like bvls) yield better results (objfunc value)
than scipy l_bfgs_b and tnc.
You could try using algencan, I had no checked the one (my OO<->
algencan connection was broken that time). It will hardly yield better
objfunc value, but maybe it will have less time spent for some
large-scale problems.
Regards, D.
Jose Luis Gomez Dans wrote:
> Hi,
> I have a set of data (x_i,y_i), and would like to carry out a linear regression using least squares. Further, the slope and intercept are bound (they have to be between 0 and slope_max and 0 and slope_min, respectively).
> I have though of using one of the "easy to remember" :D optimization methods in scipy that allow boundaries (BFGS, for example). i can write the equation for the slope and intercept based on x_i and y_i, but I gather that I must provide a gradient estimate of the function at the point of evaluation. How does one go about this? Is this a 2-element array of grad(L) at m_eval, c_eval?
> Thanks!
> Jose
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2008-May/016722.html","timestamp":"2014-04-16T08:00:03Z","content_type":null,"content_length":"3873","record_id":"<urn:uuid:3981d55c-18be-439e-970b-56329688a31d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |