Merrillville Math Tutor
Find a Merrillville Math Tutor
...My professional career is in business information systems and in addition I have taught computer programming to many students. I am also experienced in web design, graphic design, and I am an
accomplished photographer. I also speak fluent French.
11 Subjects: including prealgebra, French, geometry, photography
...I teach Accounting and Computer classes. However, I have also completed much coursework in Math. Since I was in middle school, I was asked by many cousins and classmates to help them
out with Math, and later other subjects in high school and college.
19 Subjects: including algebra 2, calculus, accounting, algebra 1
...Although I am heavily knowledgeable in the natural sciences, I am better at teaching algebra and writing. I have instructed many of my peers in college, nursing school, and high school on their
writing and they received A's on their essays. I began tutoring Algebra as a student in high school and continued to help my peers with Algebra and Calculus in College.
4 Subjects: including algebra 1, prealgebra, algebra 2, writing
...My passion is for probability/statistics, both in theory and applied material. I took AP Statistics in high school, and received a 5 on the AP Exam. I also have experience programming in SAS
for larger data analysis problems.
5 Subjects: including algebra 1, prealgebra, statistics, probability
Hi, I've taught 5th through 8th grade students for over 20 years at the same institution. Mathematics is my passion and the fundamentals are my specialty, from pre-algebra through algebra and analytic
geometry. Many of my students have won awards in the field of Math.
8 Subjects: including linear algebra, ACT Science, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/merrillville_math_tutors.php","timestamp":"2014-04-20T11:18:39Z","content_type":null,"content_length":"23708","record_id":"<urn:uuid:ba0b9e59-cd3e-4120-96ea-89f8e7fa3c7c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Projectile Range
Hi to all. I am not good at physics and trigonometry, and I tried to do a projectile system. My launch point is higher than the target point. I know the velocity v, the target distance d, gravity g, and the launch height y0. I added a link that shows the equation. I tried on my own but could not solve the equation. Please, someone help me. Thanks in advance.
d = 30, v = 22, g = 9.8, y0 = 2
I want to find the angle theta.
What two points do you want to find the angle between?
Nerd: it’s the launch angle for the projectile (see his previous thread).
rajesh: the equation for projectile travel is really very simple, if you’re still banging your head against it then my recommendation is to go back and study some basic algebra.
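For reference, here is a small Python sketch (not from the original thread; the function name and structure are illustrative) of that calculation. It starts from the standard trajectory equation y = y0 + x·tan(θ) − g·x²/(2·v²·cos²(θ)), substitutes 1/cos² = 1 + tan², and solves the resulting quadratic in tan(θ) for the angles that land the projectile at (d, 0):

```python
import math

def launch_angles(v, d, g, y0):
    """Return the two launch angles (radians) that make a projectile fired at
    speed v from height y0 land at horizontal distance d (target height 0).
    Derived from y0 + d*t - (g*d^2/(2*v^2))*(1 + t^2) = 0 with t = tan(theta)."""
    k = g * d * d / (2.0 * v * v)
    disc = d * d - 4.0 * k * (k - y0)     # discriminant of k*t^2 - d*t + (k - y0) = 0
    if disc < 0:
        raise ValueError("target out of range at this speed")
    root = math.sqrt(disc)
    return math.atan((d - root) / (2.0 * k)), math.atan((d + root) / (2.0 * k))

# Values from the post: d = 30, v = 22, g = 9.8, y0 = 2
low, high = launch_angles(v=22.0, d=30.0, g=9.8, y0=2.0)
print(math.degrees(low), math.degrees(high))   # roughly 14.4 and 71.8 degrees
```

The lower angle gives the flatter, faster trajectory; the higher one lofts the projectile onto the same target.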
Thank you Reedbeta & nerd for the reply.
You suggested this to me:
Using that link I learned a lot, especially about the angle of launch. That projectile motion worked fine for me when the launch and target are at the same level. But in my case I need both situations: launch at the same level and launch higher than the target level. One of my problems is solved by your suggestion. I found this equation from that link;
this equation gives me the confidence to make my projectile work properly.
Waiting for some help | {"url":"http://devmaster.net/posts/17708/projectile-range","timestamp":"2014-04-17T04:01:46Z","content_type":null,"content_length":"16213","record_id":"<urn:uuid:aa5f6f7b-a267-44e8-8f20-649b80a82193>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Book III
[ Book II | Main Euclid page | Book IV ]
Book III
Byrne's edition - page by page
71 72-73 74-75 76-77 78-79 80-81 82-83 84-85 86-87 88-89 90-91 92-93 94-95 96-97 98-99 100-101 102-103 104-105 106-107 108-109 110-111 112-113 114-115 116-117 118-119 120-121 122
Proposition by proposition
With links to the complete edition of Euclid with pictures in Java by David Joyce, and the well known comments from Heath's edition at the Perseus collection of Greek classics.
David Joyce's Introduction to Book III
Definitions from Book III
Byrne's edition - Definitions 1, 2, 3, 4
Byrne's edition - Definitions 5, 6, 7, 8, 9
Byrne's edition - Definition 10
David Joyce's Euclid
Heath's comments
Proposition III.1
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.2
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.3
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.4
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.5
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.6
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.7
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.8
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.9
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.10
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.11
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.12
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.13
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.14
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.15
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.16
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.17
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.18
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.19
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.20
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.21
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.22
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.23
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.24
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.25
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.26
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.27
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.28
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.29
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.30
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.31
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.32
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.33
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.34
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.35
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.36
Byrne's edition
David Joyce's Euclid
Heath's comments
Proposition III.37
Byrne's edition
David Joyce's Euclid
Heath's comments | {"url":"http://www.math.ubc.ca/~cass/Euclid/book3/book3.html","timestamp":"2014-04-19T04:33:38Z","content_type":null,"content_length":"23048","record_id":"<urn:uuid:7bbdcca8-3e19-4946-aceb-c4d5ec1f08cb>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Improved VOF Scheme for FLOW-3D
FLOW-3D VOF Simulation
In its continuing quest to ensure that FLOW-3D users have the best possible methods for predicting fluid flows in all circumstances, Flow Science will introduce a new volume of fluid (VOF) option
with Version 9.1 called the Split Lagrangian method. This method has several advantages over the existing VOF options in FLOW-3D.
In the new VOF scheme, given a function of fluid fraction, F, the free surface is reconstructed using a piecewise linear interface representation, often called PLIC. The key to this construction is
to accurately find a normal vector in each cell in question, as shown in Figure 1. A newly developed scheme based on the least-squares method developed earlier was adopted for this purpose. After free surface reconstruction, the fluid volume passing between adjacent cells in all three directions in the computational domain can then be calculated, as shown in Figure 2.
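As a rough illustration of the least-squares idea only (this is not Flow Science's implementation; the 2-D restriction, the 3×3 neighborhood, and the plane model are assumptions made here for brevity), one can estimate the interface normal in a cell by fitting a plane to the volume-fraction values of the surrounding cells and taking the negative, normalized gradient of that fit:

```python
import numpy as np

def interface_normal_2d(F, i, j, h=1.0):
    """Estimate the interface normal in cell (i, j) of a 2-D volume-fraction
    field F by least-squares fitting a plane a + b*x + c*y to the 3x3
    neighborhood of cell-centered values (h = cell size). The normal is the
    negative, normalized gradient (b, c) of that fit."""
    A, rhs = [], []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            A.append([1.0, di * h, dj * h])   # plane basis at the neighbor offset
            rhs.append(F[i + di, j + dj])     # observed volume fraction there
    coeff, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    n = -coeff[1:]                            # -(dF/dx, dF/dy) points from fluid into gas
    norm = np.linalg.norm(n)
    return n / norm if norm > 0.0 else n

# Tiny example: fluid (F = 1) in the lower rows, gas (F = 0) above,
# with a partially filled row of interface cells in between.
F = np.zeros((5, 5))
F[0:2, :] = 1.0
F[2, :] = 0.5
print(interface_normal_2d(F, 2, 2))           # ~[1, 0]: points along the row axis, toward the gas
```

In a full PLIC scheme the fitted plane is then shifted within the cell so that it cuts off exactly the fluid fraction F, which is the reconstruction step the article refers to.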
To illustrate the strengths of the new VOF scheme, two example simulations are shown here.
The first example involves the simulation of a classic two-dimensional circular droplet of water 1.0 mm in diameter. The surface tension model is turned on, but gravity is not, so the fluid should be
at rest with the uniform pressure distribution determined by the constant curvature of the free surface. In practice, it is difficult to achieve these conditions in a numerical model because of the
discrete nature of the approximations. Truncation errors in determining the surface curvature and in tracking the minute changes in the VOF function at the interface result in small variations in
pressure and velocities. These variations are often called “parasitic currents.” The accuracy of a numerical method can be measured by the amplitude of these perturbations – smaller amplitude for
better accuracy. In the worst cases, the amplitude grows significantly with time.
Figure 3 shows the evolution of the mean kinetic energy obtained with the standard VOF method, IFVOF=4, (red curve) and with the new method, IFVOF=6, (black curve). The time scale is 50 ms. Not only
do the maximum values of the two curves differ by a factor of 50.0, but at the end of the time segment the new VOF method predicts almost three orders of magnitude smaller perturbations of the
equilibrium solution. Both simulations also include new improvements to the surface tension model in FLOW-3D Version 9.1 that help keep the solution well-behaved.
The second example is a simulation of a sloshing tank. The simulation time was set to 15 seconds, which includes seven sloshing cycles. Because of the nature of the three-dimensional sloshing flow,
it is a challenge for any VOF method to conserve fluid volume to an acceptable degree. In FLOW-3D, the volume error is usually small over one wave period, but when multiple wave periods are modeled,
the error may accumulate and reach several percent of the initial value. This in turn may significantly affect the solution. The new Split Lagrangian method helps to alleviate these problems. For
this test volume error and fluid volume are shown in Figure 4, while an animation of the results is shown at the top of the page. Almost constant fluid volume in the tank and small volume error, less
than 0.02%, were observed. These results confirm the superior fluid volume conservation properties of the new VOF scheme.
The new features described here will be available with the release of FLOW-3D Version 9.1. | {"url":"http://www.flow3d.com/resources/news_05/vof-scheme-flow3d.html","timestamp":"2014-04-21T15:28:31Z","content_type":null,"content_length":"22599","record_id":"<urn:uuid:31b8d795-b360-4d65-a91a-b39d86b5820e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The Intersection R-Torsion for Finite Cone
Xianzhe Dai
Xiaoling Huang
1 Introduction
Torsion invariants were originally introduced in the 3-dimensional setting by K. Reidemeister [23] in
1935 who used them to give a homeomorphism classification of 3-dimensional lens spaces. The Rei-
demeister torsions (R-torsions for short) are defined using linear algebra and combinatorial topology.
The salient feature of R-torsions is that it is not a homotopy invariant but rather a simple homo-
topy invariant; hence a homeomorphism invariant as well. From the index theoretic point of view,
R-torsion is a secondary invariant with respect to the Euler characteristic. For geometric operators
such as the Gauss-Bonnet and Dolbeault operator, the index is the Euler characteristic of certain
cohomology groups. If these groups vanish, the Index Theorem has nothing to say, and a secondary
geometric and topological invariant, i.e., R-torsion, appears. The R-torsions were generalized to
arbitrary dimensions by W. Franz [13] and later studied by many authors (Cf. [19]).
Analytic torsion (or Ray-Singer torsion), which is a certain combination of determinants of
Hodge Laplacians on k-forms, is an invariant of Riemannian manifolds defined by Ray and Singer
[22] as an analytic analog of R-torsions. Based on the evidence presented by Ray and Singer, Cheeger
[4] and Müller [20] proved the Ray-Singer conjecture, i.e., the equality of analytic and Reidemeister
torsion, on closed manifolds using different techniques. Cheeger's proof uses surgery techniques to
reduce the problem to the case of a sphere, while M¨uller's proof examines the convergence of the | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/318/1700111.html","timestamp":"2014-04-20T16:49:33Z","content_type":null,"content_length":"8712","record_id":"<urn:uuid:92ef0dfc-c473-4462-841f-9903a1595734>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
User Martin Worsek
bio website
visits member for 2 years, 9 months
seen Feb 9 at 15:52
stats profile views 71
Apr 23: revised "Repeated Homotopy Category of Chain Complexes" (added 192 characters in body)
Apr 23: asked "Repeated Homotopy Category of Chain Complexes"
Jun 28: awarded Scholar
Jun 28: accepted "Are evaluation maps for sections of a fiber bundle weak homotopy equivalences?"
Jun 28: comment on "Are evaluation maps for sections of a fiber bundle weak homotopy equivalences?" (all comments below are on this question): I thank you for the help and clarification. To sum up: the statement in my first paragraph were true if I had included contractibility of the base space, which of course makes for a much more sensible statement. I will now mark your answer as accepted, unless I have misunderstood your last comment.
Jun 28: comment: @Torsten Thank you for convincing me of the falseness of the statement in my first paragraph in general.
Jun 28: revised "Are evaluation maps for sections of a fiber bundle weak homotopy equivalences?" (update)
Jun 28: comment: Thus my concrete current question is: why is the evaluation map $\varepsilon:\Gamma^0(E^k_{\omega})\to (p^k_{\omega})^{-1}(0)$ a w.h.e., noting that $(p^k_{\omega})^{-1}(0)\subset J^k_0
Jun 28: comment: To me it also seemed to be a more general phenomenon which is why I formulated the more general variant (which may be too general). If I knew that the evaluation map is a w.h.e. I'd be indeed done, this is why I was hoping to get a positive answer to the first question. You have shown that in the first question the space of sections and the fiber over the point are homotopy equivalent, but I would need the fact that explicitly the evaluation maps are appropriate (weak) homotopy equivalences, which by the comment of Torsten and my intuition seems to be untrue in general.
Jun 28: comment: I forgot to type that $J^k_0(D^m,F)$ is supposed to be the space of $k$-jets "at $0$" of smooth maps $D^m\to F$.
Jun 28: comment: @Dan I forgot to include your name for you to receive the notification.
Jun 28: comment: Unfortunately I'm not sure I understand the argument completely. I know that $E^k_{\omega}(0)$ is a subspace of the space $J^k_0(D^m,F)$ of $k$-jets of smooth maps $D^m\to F$ where $F$ is the fiber of $E$. You are arguing that thus $\Gamma^0(E^k_{\omega})$ is homotopy equivalent to $(p^k_{\omega})^{-1}(0)$ which we know is homeomorphic to $\Gamma^0(E^k_{\omega}(0))$. My gripe is that this does not show that the explicit map $\rho$ is a w.h.e. but only that the domain and range are (weak) homotopy equivalent. It's very well possible that I'm missing the point here.
Jun 28: awarded Student
Jun 28: revised "Are evaluation maps for sections of a fiber bundle weak homotopy equivalences?" (corrections)
Jun 28: awarded Editor
Jun 28: revised "Are evaluation maps for sections of a fiber bundle weak homotopy equivalences?" (spelling)
Jun asked Are evaluation maps for sections of a fiber bundle weak homotopy equivalences? | {"url":"http://mathoverflow.net/users/16062/martin-worsek?tab=activity","timestamp":"2014-04-17T15:40:28Z","content_type":null,"content_length":"44159","record_id":"<urn:uuid:c4858eee-7c50-43dc-92fc-4eaf9cd64ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00209-ip-10-147-4-33.ec2.internal.warc.gz"} |
Just another one of those problems :)
Hey guys im glad i found this site!
With a and b as irrationals, is it possible for their sum or difference to be rational? Give a convincing argument for your response.
Is it possible for a^b to be rational? Give a convincing argument.
Hey guys im glad i found this site!
With a and b as irrationals, is it possible for their sum or difference to be rational? Give a convincing argument for your response.
$\color{red} \sqrt{2} + {(-\sqrt{2})} = 0 \in \mathbb{Q}$
$\color{red}\sqrt{2} - {\sqrt{2}} = 0 \in \mathbb{Q}$
Is it possible for a^b to be rational? Give a convincing argument.
$\color{red}(2^{\sqrt{2}})^{\sqrt{2}} = 4 \in \mathbb{Q}$
I'm not sure my teacher would let me write "∈ Q" in my answer because I have no idea what that means
Q is the symbol for rational numbers.
E is the symbol for "is an element of".
All Isomorphism is saying with this notation is that 0 and 4 are rational numbers ......
Isomorphism has given you a couple of examples of values of a and b that clearly and convincingly show the answer to both questions is yes.
(For the second part, "Is it possible for a^b to be rational", if your teacher wants you to prove that $a = 2^{\sqrt{2}}$ is irrational, just smirk and refer him/her to the Gelfond-Schneider
Theorem ......)
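To spell out the arithmetic behind Isomorphism's second example (this step is implicit in the post above):

$\left(2^{\sqrt{2}}\right)^{\sqrt{2}} = 2^{\sqrt{2}\cdot\sqrt{2}} = 2^{2} = 4 \in \mathbb{Q}$

and by the Gelfond-Schneider theorem $2^{\sqrt{2}}$ is irrational (in fact transcendental), since 2 is algebraic and different from 0 and 1 while $\sqrt{2}$ is algebraic and irrational. So both the base $a = 2^{\sqrt{2}}$ and the exponent $b = \sqrt{2}$ are irrational, yet $a^b = 4$ is rational.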
April 17th 2008, 08:31 PM #4 | {"url":"http://mathhelpforum.com/pre-calculus/34982-just-another-one-those-problems.html","timestamp":"2014-04-18T07:43:29Z","content_type":null,"content_length":"40752","record_id":"<urn:uuid:aedb1975-c721-4e88-8da8-40ffbd332dbe>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
USBR Water Measurement Manual - Chapter 2 - Basic Concepts Related to Flowing Water and Measurement, Section 3. Basic Principles of Water Measurement
3. Basic Principles of Water Measurement
Most devices measure flow indirectly. Flow measuring devices are commonly classified into those that sense or measure velocity and those that measure pressure or head. The head or velocity is
measured, and then charts, tables, or equations are used to obtain the discharge.
Some water measuring devices that use measurement of head, h, or pressure, p, to determine discharge, Q, are:
(1) Weirs
(2) Flumes
(3) Orifices
(4) Venturi meters
(5) Runup measurement on a flat "weir stick"
Head, h, or depth commonly is used for the open channel devices such as flumes and weirs. Either pressure, p, or head, h, is used with tube-type flowmeters such as a venturi.
Pressure, p, is the force per unit area as shown on figure 2-1 that acts in every direction normal to containing or submerged object boundaries. If an open vertical tube is inserted through and flush
with the wall of a pipe under pressure, water will rise to a height, h, until the weight, W, of water in the tube balances the pressure force, F[p], on the wall opening area, a, at the wall
connection. These tubes are called piezometers. The volume of water in the piezometer tube is designated ha. The volume times the unit weight of water, γha, is the weight, W. The pressure force, F_p, on the tap connection area is designated pa. The weight and pressure force are equal, and dividing both by the area, a, gives the unit pressure on the wall of the pipe in terms of head, h, written:

h = p/γ

Thus, head is pressure, p, divided by the unit weight of water, γ (about 62.4 lb/ft^3). Pressure is often expressed in psi or pounds per square inch (lb/in^2), which may be converted to feet of water by multiplying the (lb
/in^2) value by 2.31. For example, 30 lb/in^2 is produced by 69.3 feet of water.
Figure 2-1 -- Pressure definition
When the head principle is used, the discharge, Q, is computed from an equation such as the one used for a sharp-crested rectangular weir of length, L:

Q = CLh^(3/2)
A coefficient, C, is included that accounts for simplifying assumptions and other deficiencies in deriving the equation. The coefficient can vary widely in nonstandard installations, but is well
defined for standard installations or is constant over a specified range of discharge.
The flow cross-sectional area, A, does not appear directly in the equation, but an area can be extracted by rewriting this equation:

Q = CLh(h^(1/2))

in which:

A = Lh
In this form, C also contains a hidden square root of 2g, which, when multiplied by (h)^1/2, is the theoretical velocity. This velocity does not need to be directly measured or sensed. Because the
weir equation computes velocity from a measuring head, a weir is classified as a head measuring device.
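As a simple numerical illustration (the coefficient value below is an assumed placeholder for this sketch, not a value taken from this manual), the head-to-discharge relation for a rectangular weir and the psi-to-head conversion can be evaluated directly:

```python
def weir_discharge(C, L, h):
    """Discharge over a sharp-crested rectangular weir, Q = C * L * h**1.5,
    with Q in ft^3/s when L and h are in feet; C carries the hidden sqrt(2g)
    and the other correction factors mentioned in the text."""
    return C * L * h ** 1.5

def psi_to_feet_of_head(psi):
    """Convert a pressure in lb/in^2 to feet of water (1 psi ~ 2.31 ft)."""
    return psi * 2.31

# Hypothetical example: a 3 ft long weir flowing at 0.5 ft of measuring head,
# using an assumed coefficient C = 3.33 purely for illustration.
print(weir_discharge(C=3.33, L=3.0, h=0.5))   # about 3.5 ft^3/s with this assumed C
print(psi_to_feet_of_head(30.0))              # 69.3 ft, matching the example in the text
```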
Some devices that actually sample or sense velocities, v, are:
(1) Float and stopwatch
(2) Current and propeller meters
(3) Vane deflection meters
These devices generally do not measure the average velocity, V, for an entire flow cross section. Thus, the relationship between sampled velocities, v, and the mean velocity, V, must be known as well
as the flow section area, A, to which the mean velocity applies. Then, the discharge, Q, sometimes called the flow rate, is the product, AV.
Discharge or rate of flow has units of volume divided by unit time. Thus, discharge can be accurately determined by measuring the time, t, to fill a known volume, V_o:

Q = V_o/t
Water measurement devices can be calibrated using very accurate volumetric tanks and clocks. More commonly, weight of water in the tanks is used by converting the weight of water per unit volume. The
weight of water per cubic foot, called unit weight or specific weight, ^3 at standard atmospheric conditions. | {"url":"http://www.usbr.gov/pmts/hydraulics_lab/pubs/wmm/chap02_03.html","timestamp":"2014-04-17T15:59:14Z","content_type":null,"content_length":"7743","record_id":"<urn:uuid:7439a2b2-ce42-4b2b-be84-91ea96457bf3>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
A hierarchical model for ordinal matrix factorization
Publication: Research - peer-review › Journal article – Annual report year: 2011
title = "A hierarchical model for ordinal matrix factorization",
publisher = "Springer New York LLC",
author = "Ulrich Paquet and Blaise Thomson and Ole Winther",
year = "2012",
doi = "10.1007/s11222-011-9264-x",
volume = "22",
number = "4",
pages = "945--957",
journal = "Statistics and Computing",
issn = "0960-3174",
TY - JOUR
T1 - A hierarchical model for ordinal matrix factorization
A1 - Paquet,Ulrich
A1 - Thomson,Blaise
A1 - Winther,Ole
AU - Paquet,Ulrich
AU - Thomson,Blaise
AU - Winther,Ole
PB - Springer New York LLC
PY - 2012
Y1 - 2012
N2 - This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to
incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms may be
implemented in the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is
asked to predict their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation
metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling outperforms variational Bayes on this task, despite the large
number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
AB - This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to
incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms may be
implemented in the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is
asked to predict their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation
metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling outperforms variational Bayes on this task, despite the large
number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
KW - Collaborative filtering
KW - Bayesian inference
KW - Ordinal regression
KW - Variational Bayes
KW - Low rank matrix decomposition
KW - Gibbs sampling
KW - Hierarchial modelling
KW - Large scale machine learning
U2 - 10.1007/s11222-011-9264-x
DO - 10.1007/s11222-011-9264-x
JO - Statistics and Computing
JF - Statistics and Computing
SN - 0960-3174
IS - 4
VL - 22
SP - 945
EP - 957
ER - | {"url":"http://orbit.dtu.dk/en/publications/a-hierarchical-model-for-ordinal-matrix-factorization(3dbbacfb-4476-4272-8c63-250e253c537f)/export.html","timestamp":"2014-04-19T03:12:25Z","content_type":null,"content_length":"20636","record_id":"<urn:uuid:e36364f4-0f2e-4d1f-915e-293a1b6d85c2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Strategies for measuring evolutionary conservation of RNA secondary structures
BMC Bioinformatics. 2008; 9: 122.
Evolutionary conservation of RNA secondary structure is a typical feature of many functional non-coding RNAs. Since almost all of the available methods used for prediction and annotation of
non-coding RNA genes rely on this evolutionary signature, accurate measures for structural conservation are essential.
We systematically assessed the ability of various measures to detect conserved RNA structures in multiple sequence alignments. We tested three existing and eight novel strategies that are based on
metrics of folding energies, metrics of single optimal structure predictions, and metrics of structure ensembles. We find that the folding energy based SCI score used in the RNAz program and a simple
base-pair distance metric are by far the most accurate. The use of more complex metrics like for example tree editing does not improve performance. A variant of the SCI performed particularly well on
highly conserved alignments and is thus a viable alternative when only little evolutionary information is available. Surprisingly, ensemble based methods that, in principle, could benefit from the
additional information contained in sub-optimal structures, perform particularly poorly. As a general trend, we observed that methods that include a consensus structure prediction outperformed
equivalent methods that only consider pairwise comparisons.
Structural conservation can be measured accurately with relatively simple and intuitive metrics. They have the potential to form the basis of future RNA gene finders, that face new challenges like
finding lineage specific structures or detecting mis-aligned sequences.
RNA secondary structures serve important functions in many non-coding RNAs and cis-acting regulatory elements of mRNAs [1,2]. They mediate RNA-protein/RNA-RNA interactions in many different
biological pathways and some even show enzymatic activity themselves. Functional constraints lead to evolutionary conservation of the RNA structure that in many cases can exceed the level of sequence
conservation. Therefore, conserved structures are characteristic evolutionary signatures of functional RNAs. Most programs developed for the detection of novel functional RNAs rely on these evolutionary signatures.
QRNA [3] was the first program that detects conserved RNAs. It models RNA structure in a pair of sequences using a stochastic context free grammar. Similarly, EvoFold [4] models the structure of a
multiple alignment taking into account a phylogenetic tree (phylo-SCFG). AlifoldZ [5] also analyzes multiple alignments. It uses, however, a thermodynamic folding model based on the RNAalifold
algorithm [6]. All three programs fold and evaluate the conservation of the potential RNA at the same time. As a consequence, their scores combine contributions of RNA stability and conservation.
RNAz [7] disentangles both contributions by calculating two separate scores for stability and conservation. The latter, dubbed structure conservation index (SCI), is thus a measure for structural
conservation only. Two other programs, MSARi [8] and ddbRNA [9], are available that also calculate a pure conservation score.
In this paper, we revisit the problem and propose a series of other possible strategies to measure structural conservation and compare their performance on a large data set of structural RNA
families. The main motivation is to explore alternatives and possible improvements to currently applied measures, especially the SCI used in RNAz. This study seems worthwhile, since comparative
approaches like RNAz and others are starting to get extensively used to annotate RNA structures on a genome wide scale [4,10-21]. At the same time, however, the increasing availability of additional
sequence data makes it necessary to already reconsider and adapt these strategies. For example, while for the first prototype-screens in the human genome [4,15] only 7 vertebrate genomes were
available, we now face the challenge of analyzing alignments of up to 28 species [22]. While the signal from RNA stability is important when only few sequences are available, more emphasis has to be
put on the evolutionary signature in future screens. This might improve the specificity of the predictions, a major limitation of current algorithms [23].
However, the results presented here are not only of relevance for comparative de novo ncRNA prediction. The SCI, for example, has also been used to measure structural similarity in a clustering
approach to find new ncRNA families within one species [13,24]. In principle, conservation measures of that kind could also be useful for general RNA homology search algorithms that combine sequence
and structure conservation [25].
Moreover, using a structure conservation measure on an alignment of sequences that are known to have a conserved RNA structure can help to assess the quality of the alignment. This idea has been used
to benchmark the performance of multiple alignment programs on structural RNAs [26,27], and more recently to detect mis-aligned sequences and assist in the semi-automatic improvement of RNA
alignments [28].
Finally it must be noted that assessing structural conservation, at the same time, means measuring change of RNA structures throughout evolution. Exploring different ways to quantify such structural
changes can help inferring structure based phylogenies [29,30] and might improve our understanding of RNA structure evolution [30,31].
Methods for measuring structural conservation
Structural conservation can be measured on different levels. In the following sections we describe 11 different methods that are based on (i) comparison of predicted minimum free energies (i.e. not
on their minimum free energy structures), (ii) comparison of single structures, (iii) comparison of ensembles of structures representing the whole folding space, and (iv) the two specialized methods
used by ddbRNA and MSARi. A short summary of all methods is given in Table 1.
Methods based on folding energies
The idea to evaluate structure similarity indirectly through the minimum free energy (MFE) rather than by direct comparison of the structure itself seems to be counter-intuitive at the first glance.
The principle, however, becomes clear when considering the RNAalifold algorithm. RNAalifold implements a consensus folding algorithm for a set of aligned RNA sequences. It extends standard dynamic
programming algorithms for RNA secondary prediction [32] by averaging the energy contributions over all sequences and incorporating covariation terms into the energy model to reward compensatory
mutations and to penalize non-compatible base-pairs. This procedure results in a "consensus MFE" for the alignment. The absolute value of the consensus MFE is of little value to assess the
conservation of structures since it mainly reflects the folding energy, which is heavily dependent on the nucleotide composition and the length of the alignment. Therefore, the consensus MFE $E_{\text{cons}}$ is normalized by the average MFE $\bar{E}_{\text{single}}$ of the single sequences as computed by RNAfold, giving the structure conservation index

$$\mathrm{SCI} = \frac{E_{\text{cons}}}{\bar{E}_{\text{single}}}$$
If the sequences show folding energies that are as stable when forced to fold into a common structure as when folded independently, this indicates a conserved structure and the SCI is high. The
lower bound of the SCI is zero, indicating that RNAalifold is not able to find a consensus structure, while a SCI close to one corresponds to perfect structure conservation. Compensatory mutations
adding additional bonus energies to the consensus MFE can even give rise to a SCI higher than one.
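As a minimal illustration (not code from the paper), the SCI is simply a ratio of folding energies; in practice the consensus MFE comes from RNAalifold and the single-sequence MFEs from RNAfold, and the numbers below are hypothetical:

```python
def sci(consensus_mfe, single_mfes):
    """Structure conservation index: consensus MFE of the alignment divided by
    the mean MFE of the individual sequences (energies in kcal/mol, e.g. read
    from RNAalifold and RNAfold output)."""
    return consensus_mfe / (sum(single_mfes) / len(single_mfes))

# Hypothetical energies for a three-sequence alignment
print(sci(-25.3, [-27.1, -26.0, -24.8]))   # ~0.97, i.e. a well conserved structure
```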
The SCI, as given above, requires the computation of a consensus structure for the whole alignment. Alternatively, one can consider formulating a similar measure based on pairwise comparisons of all
sequences. To this end, the folding energy of each sequence is evaluated when forced to fold into the structures of the other sequences. The pairwise SCI for an alignment $A$ is given by
$$\mathrm{SCI}_{\text{RNAeval}}(\mathcal{A}) = \frac{\sum_{x,y\in\mathcal{A},\,x\neq y} E(x|S_y)}{(N-1)\sum_{x\in\mathcal{A}} E(x|S_x)}$$
where $E(x|S_y)$ denotes the free energy of sequence x when adopting the minimum free energy structure $S_y$ of sequence y, and N is the number of sequences in the alignment. The free energies for a
given sequence in a given structure can be easily evaluated with the program RNAeval from the Vienna RNA package [33]. Therefore, we refer to this method as the "RNAeval" method.
Methods based on single structures
A more intuitive way to assess structural similarity is by comparing structures themselves rather than comparing the energies associated with these structures. Conservation measures derived from
various structure metrics are described in this section. Unlike the energy based methods from the previous section that are inherently linked to thermodynamic folding, the following methods do not
depend on the way of how structures are predicted. There are several different ways, like thermodynamic energy minimization [34], kinetic folding [35] or probabilistic models [36-38], but the choice
of the method will not influence the underlying concept. However, since the goal of this study is not to compare the accuracy of different folding algorithms, we use here exclusively energy
minimization (RNAfold) to ensure comparability between all methods.
Base-pair distance
The most simple distance measure between two sequences is the Hamming distance, i.e. the number of positions with different nucleotides. For RNA structures, one could think of calculating the Hamming
distance of two strings in dot bracket notation with the three characters "(", ".", ")". However, this does not account for the correlations between the opening and closing positions that are
characteristic for the structure.
An alternative to the Hamming distance more suitable for secondary structures is the so-called base-pair distance. The base-pair distance between two RNA secondary structures $S_x$ and $S_y$ is defined as the number of base-pairs not shared by the two structures. Formally it can be described in terms of set theory, where the base-pair distance corresponds to the cardinality of the symmetric set difference:

$$d_{\mathrm{BP}}(S_x, S_y) = \sum_{i<j} \left[\delta_{ij}^{x}\,(1-\delta_{ij}^{y}) + \delta_{ij}^{y}\,(1-\delta_{ij}^{x})\right]$$

with $\delta_{ij}^{x} = 1$ if (i,j) is a base-pair of structure $S_x$, and $\delta_{ij}^{x} = 0$ otherwise. $d_{\mathrm{BP}}$ itself is not a suitable measure for comparison as long as it is not set in relation to the union of the base-pairs in $S_x$ and $S_y$. The normalized base-pair distance scaled to the interval [0, 1] between two structures is given by

$$D_{\mathrm{BP}}(S_x, S_y) = \frac{d_{\mathrm{BP}}(S_x, S_y)}{|S_x \cup S_y|}$$
The overall score for a multiple alignment $A$ can either be calculated as the average of all pairwise sequence comparisons
or as the average of all comparisons of each sequence to a consensus structure
If not stated otherwise, also all other methods that are based on pairwise comparisons can be calculated either as the average over all (N - 1)N/2 pairwise comparisons, or the average of all N
comparisons to the consensus structure.
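To make the base-pair distance concrete, here is a small Python sketch (illustrative only, not code from the paper): it parses dot-bracket structures into sets of pairs with a stack and returns the symmetric-difference count normalized by the size of the union of the two pair sets.

```python
def pairs(dot_bracket):
    """Set of base-pairs (i, j) encoded by a dot-bracket string."""
    stack, bp = [], set()
    for i, c in enumerate(dot_bracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            bp.add((stack.pop(), i))
    return bp

def normalized_bp_distance(s1, s2):
    """Base pairs not shared by the two structures, scaled to [0, 1] by the
    union of their base pairs (0 = identical, 1 = no pair in common)."""
    p1, p2 = pairs(s1), pairs(s2)
    union = p1 | p2
    if not union:                     # two open chains: define the distance as 0
        return 0.0
    return len(p1 ^ p2) / len(union)

print(normalized_bp_distance("((((...))))", "(((.....)))"))   # 0.25: one of four pairs differs
```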
Mountain metric
The mountain metric is based on the mountain representation of RNA secondary structures [39] and follows the idea that the distance between two structures $S_x$ and $S_y$ can be expressed as the difference of the two mountain graphs. For this purpose, an $l_p$-norm can be defined that induces a metric $d_M^p$ on two secondary structures $S_x$ and $S_y$ as the difference of the two mountain functions $m(S_x)$ and $m(S_y)$ [40]:

$$d_M^p(S_x, S_y) = \left(\sum_{k} \left| m_k(S_x) - m_k(S_y) \right|^p \right)^{1/p}$$
The mountain function $m_k(S)$ is defined as the number of base-pairs enclosing position k. The effect that base-pairs are weighted differently can be overcome by scaling each base-pair to the range
it spans.
As $d_M^p$ is expected to grow with the length of the sequences, we need to define a normalized distance measure to be able to compare distances for sequence pairs of different length. The maximal distance of a secondary structure $S_{\max}$ on a sequence of length n to the open chain $S_{\mathrm{open}}$ is obtained if $S_{\max}$ is a stem of maximal height (n - 3)/2. The normalized distance $D_M^p$ is then defined as the ratio of the distance $d_M^p(S_x, S_y)$ of two secondary structures with length n to the maximal distance $d_M^p(S_{\max}, S_{\mathrm{open}})$ at length n:

$$D_M^p(S_x, S_y) = \frac{d_M^p(S_x, S_y)}{d_M^p(S_{\max}, S_{\mathrm{open}})}$$
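A small illustrative sketch of the unweighted mountain representation and an l_p-type mountain distance follows (not code from the paper; the normalization against the maximal stem is omitted for brevity, and the depth convention used here is one of several possible ones):

```python
def mountain(dot_bracket):
    """Mountain function: m[k] = number of base pairs enclosing position k,
    i.e. the nesting-depth profile of the structure."""
    m, depth = [], 0
    for c in dot_bracket:
        if c == "(":
            depth += 1
        m.append(depth)
        if c == ")":
            depth -= 1
    return m

def mountain_distance(s1, s2, p=1):
    """l_p distance between the mountain functions of two equal-length structures."""
    m1, m2 = mountain(s1), mountain(s2)
    return sum(abs(a - b) ** p for a, b in zip(m1, m2)) ** (1.0 / p)

print(mountain_distance("((((...))))", "(((.....)))"))   # 5.0 for this pair of structures
```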
Tree editing
RNA secondary structures can be represented as ordered, rooted trees [41-43]. The tree representation can be deduced from the dot-bracket notation (characters "(" and ")" correspond to the 5' base
and the 3' base in the base-pair, respectively, while "." denotes an unpaired base), as the brackets clearly imply parent-child relationships. The ordering among the siblings of a node is imposed by
the 5' to 3' nature of the RNA molecule. To avoid formation of an unconnected forest of trees, a virtual root has to be introduced.
The tree representation at full resolution without any loss of information with regard to the dot-bracket notation can be derived by assigning each unpaired base to a leaf node and each base-pair to
an internal node. The resulting tree can be rewritten to a homeomorphically irreducible tree (HIT) by collapsing all base-pairs in a stem into a single internal node and adjacent unpaired bases into
a single leaf node [43]. Each node is then assigned a weight reflecting the number of nodes or leaves that were combined.
Shapiro proposed another encoding that retains only a coarse-grained shape of a secondary structure [41]. This is useful in the case of comparison of major structural elements of a RNA molecule but
it comes along with a loss of information (cf. section "Abstract shapes"). A secondary structure can be decomposed into stems (S), hairpin loops (H), interior loops (I), multi-loops (M), and external
nucleotides (E). While external nucleotides are assigned to a leaf, unpaired bases in a multi-loop are lost. The weighted coarse-grained approach compensates the effect of information reduction at
least by assigning to each node or leaf the number of elements that were condensed to it.
Tree editing induces a metric in the space of trees and hence a metric in the space of RNA secondary structures. An edit script, which is a series of edit operations, namely deletion, insertion and
relabeling of a node, each assigned a cost, can transform any tree $T_x$ into any other tree $T_y$. The distance between two trees $d(T_x, T_y)$ is then defined as the cost of the edit script with minimal cost. Normalization of the tree editing distance is done by comparing the distance of two trees $d(T_x, T_y)$ to the sum of the costs of deleting either of the two secondary structures, where • denotes a tree consisting solely of a root:

$$D_{\mathrm{tree}}(T_x, T_y) = \frac{d(T_x, T_y)}{d(T_x, \bullet) + d(T_y, \bullet)}$$
Among the methods used here, tree editing is the only one that can act on structures of unequal length. In this work we will focus on two different implementations of tree editing. RNAdistance [33] a
tool from the Vienna RNA package implements a tree editing algorithm initially proposed by Shapiro [41] and acts on the full representation, HIT representation [43], coarse-grained and weighted
coarse-grained representation [41]. Allali & Sagot [44] pointed out some shortcomings of the classic tree editing operations and introduced novel editing operations called node-fusion and edge-fusion
, implemented in the program MiGaL. MiGaL uses a new concept of encoding trees at different levels of abstraction called layers [45], which are interconnected to each other via vertex coloring
Methods considering the entire folding space
Distance of structure ensembles
Because the stabilizing energies of base-pair formation are in the same energy range as the thermal energy, RNA molecules in physiological conditions are far away from being caged into one rigid
secondary structure. Instead, one usually observes an ensemble of RNA structures, which can be represented by an energy weighted Boltzmann distribution. McCaskill proposed a dynamic programming
algorithm [46] that allows to efficiently compute the partition function

$$Q = \sum_{S \in \mathcal{S}} e^{-\Delta G(S)/RT},$$

where ΔG is the conformational Gibbs free energy change, R is the gas constant, T is the absolute temperature, and $\mathcal{S}$ is the ensemble of possible secondary structures.
The probability of a single structure S is then given by

$$p(S) = \frac{e^{-\Delta G(S)/RT}}{Q},$$
and hence the probability of a single base-pair (i, j) is

$$p_{ij} = \sum_{S \in \mathcal{S}} \delta_{ij}^{S}\, p(S),$$
where $\delta_{ij}^{S}$ is one if (i, j) is a base-pair of structure S, and zero otherwise. Using these assumptions the equation of the base-pair distance can be remodeled to calculate the average base-pair distance $\langle d_{\mathrm{BP}}(\mathcal{S}_x, \mathcal{S}_y)\rangle$ between all structures of the two ensembles $\mathcal{S}_x$ and $\mathcal{S}_y$:

$$\langle d_{\mathrm{BP}}(\mathcal{S}_x, \mathcal{S}_y)\rangle = \sum_{S \in \mathcal{S}_x} \sum_{S' \in \mathcal{S}_y} p(S)\, p(S')\, d_{\mathrm{BP}}(S, S') = \sum_{i<j} \left[ p_{ij}^{x}\,(1-p_{ij}^{y}) + p_{ij}^{y}\,(1-p_{ij}^{x}) \right]$$

As one can see in the last expression, this corresponds to the naïve approach of multiplying the probability of the base-pair (i, j) in the ensemble $\mathcal{S}_x$ with the probability of not expecting the base-pair (i, j) in the ensemble $\mathcal{S}_y$ and vice versa. Taking a closer look at this equation, one can see that the distance between the structure ensemble of one sequence and itself, $\langle d_{\mathrm{BP}}(\mathcal{S}_x, \mathcal{S}_x)\rangle$, is in general not zero. Correcting for these self-distances, the ensemble distance $D_{\mathrm{ensemble}}(\mathcal{S}_x, \mathcal{S}_y)$ between two ensembles $\mathcal{S}_x$ and $\mathcal{S}_y$ is then defined as follows:

$$D_{\mathrm{ensemble}}(\mathcal{S}_x, \mathcal{S}_y) = \langle d_{\mathrm{BP}}(\mathcal{S}_x, \mathcal{S}_y)\rangle - \tfrac{1}{2}\left[ \langle d_{\mathrm{BP}}(\mathcal{S}_x, \mathcal{S}_x)\rangle + \langle d_{\mathrm{BP}}(\mathcal{S}_y, \mathcal{S}_y)\rangle \right] = \sum_{i<j} \left( p_{ij}^{x} - p_{ij}^{y} \right)^2$$
The result, which is simply the sum over the squared differences of the pair probabilities, is a very intuitive distance measure of two ensembles. Note that this measure is not a metric since the
triangle inequality is not fulfilled. However, $\sqrt{D_{\mathrm{ensemble}}(\mathcal{S}_x, \mathcal{S}_y)}$ is a metric, as it corresponds to the Euclidean distance between two vectors.
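An illustrative NumPy sketch of this ensemble distance (not code from the paper) takes two base-pair probability matrices, as produced for example by a partition-function folding of each sequence, and sums the squared differences over the upper triangle:

```python
import numpy as np

def ensemble_distance(P_x, P_y):
    """Sum of squared differences of base-pair probabilities over all pairs
    i < j, given two symmetric (or upper-triangular) probability matrices."""
    iu = np.triu_indices(P_x.shape[0], k=1)      # indices with i < j
    diff = P_x[iu] - P_y[iu]
    return float(np.sum(diff ** 2))

# Tiny hypothetical 4 nt example with a single possible pair (0, 3)
P_x = np.zeros((4, 4)); P_x[0, 3] = 0.9
P_y = np.zeros((4, 4)); P_y[0, 3] = 0.4
print(ensemble_distance(P_x, P_y))               # 0.25
```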
Also the mountain metric approach previously discussed can be readily extended to incorporate base-pairing probabilities [47]. The mountain function $m_k(\mathcal{S})$ then gives the number of base-pairs that are expected to enclose position k on average:

$$m_k(\mathcal{S}) = \sum_{(i,j):\, i \le k < j} p_{ij}$$
Distance of one dimensional pair-probability vectors
Another method to compare the folding space of two RNA sequences is by aligning one dimensional base-pairing probability vectors [48], as implemented in the program RNApdist. From all base-pairing
probabilities of base i the probabilities of being paired downstream ($p_i^{<}$), paired upstream ($p_i^{>}$), and unpaired ($p_i^{o}$) are computed:

$$p_i^{<} = \sum_{j>i} p_{ij}, \qquad p_i^{>} = \sum_{j<i} p_{ji}, \qquad p_i^{o} = 1 - p_i^{<} - p_i^{>}$$
In this study we use an RNApdist-like variant $D_{\mathrm{RNApdist}}$ as a distance measure for a precomputed alignment of two sequences x and y as follows:
$$D_{\mathrm{RNApdist}}(x,y) = \frac{1}{L}\sum_{i=1}^{L} \delta_i(x,y)$$
where L is the length of the alignment and $\delta_i$ is given by
$$\delta_i(x,y) = \begin{cases} 1 - p_i^{<}(x)\,p_i^{<}(y) - p_i^{>}(x)\,p_i^{>}(y) - p_i^{o}(x)\,p_i^{o}(y) & \text{aligned position} \\ 0 & \text{inserted or deleted position} \end{cases}$$
Abstract shapes
Giegerich et al. [49] introduced the concept of abstract shapes, coarse-grained abstractions of full secondary structures. The current implementation of RNAShapes offers five levels of abstraction
and partitions the folding space into structural families represented by the different shapes. The probabilities for shapes are calculated by summing up the probabilities of all structures that are
assigned to the same shape [50,51].
A pairwise similarity measure s comparing two shape spaces $\mathcal{S}_x$ and $\mathcal{S}_y$ can be defined from the shape probabilities p(S|x) and p(S|y), i.e. the probabilities of shape S given sequence x and y, respectively.
Other Methods
One key characteristic of conserved structures are compensatory mutations. Compensatory mutations that maintain the secondary structure will accumulate as this helps keeping the RNA molecule
functioning. While all methods described so far include structure predictions and only indirectly depend on such compensatory mutations, Di Bernardo et al. [9] proposed a method that is solely based
on the existence of compensatory mutations. ddbRNA counts compensatory mutations in all possible stem loops in all sequences of an alignment without making use of a folding model of any sort. In this
paper we will use the number of compensatory mutations per length that is calculated by ddbRNA as measure for structural conservation.
Coventry et al. [8] follow with their MSARi algorithm a similar but more elaborate strategy than that of ddbRNA. Decision about structural conservation is made upon statistical significance of short,
contiguous potential base-paired regions. The partition function implementation of RNAfold is used to predict base-pair probabilities. Each base-pair (i, j) with a base-pairing probability higher
than 5% is then examined individually. For each sequence in the alignment a window of length seven is centered on nucleotide i and compared with a series of windows centered around j ± {0, 1, 2} (to
compensate slight mis-alignments). The window pair with the maximal number of reverse complementary positions is chosen for further analysis, which is the evaluation of the probability of seeing at
least as many compensatory positions against a null-hypothesis distribution for random mutations. The estimation of the significance of observed base-pairs is then used to assess the total
significance of the alignment.
The main interest of this paper is to detect structural similarities in a given alignment. Clearly, the problem of calculating the alignment and detecting a conserved structure is closely related.
For example, structural alignment algorithms based on the Sankoff algorithm [52] can be used to detect conserved structures [18,19] or homologues of a given structure [53]. Aligning sequences
requires a notion of sequence similarity and, therefore, sequence substitution models of RNAs have been developed. Examples are the RIBOSUM matrices for the homology search program RSEARCH [53] or a
specifically parametrized general time reversible (GTR) model for ITS2 sequences [54]. We do not cover methods here that are primarily focused on the alignment problem, such as Sankoff based
algorithms, nor methods that combine sequence and structure comparison such as the family of edit distances on arc annotated sequences by Zhang and coworkers [55] (although RNAdistance represents a
special case of these) or tree alignment as implemented in RNAforester [56]. If used with a sequence weight of zero, we would expect these methods to give similar results to the RNAdistance tree
editing. Liu & Wang [57] recently proposed a method for RNA secondary structure similarity analysis based on the Lempel-Ziv compression algorithm. However, since the authors do not provide an
implementation of their method it could not be considered in this study.
To assess the performance of the various methods to detect conserved RNA structures in multiple sequence alignments, we conducted a comprehensive benchmark on the BRAliBase database version 2.1 [27].
This database provides a reasonable sized data set of homologous RNAs of different families. In addition to the structural alignments provided by the database we generated for each alignment a
corresponding sequence-based alignment using CLUSTAL W [58].
Despite their shortcomings, pure sequence based alignments represent a more realistic scenario because structural alignments are not always available in real life situations (e.g. genome wide
screens). There are many structural alignment programs available. As mentioned before, the problem of structural alignment and finding structural similarities is closely related. However, we do not
want to compare the efficiency of different alignment programs and thus stick with the two extreme cases of purely sequence based alignments and manually curated reference alignments. At this point
we want to mention, that our results might be interesting for some of the alignment algorithms. For example, the heuristic algorithm of CMFinder [59] uses a distance measure based on tree editing in
one of the first alignment steps.
As a negative control of alignments that do not harbor a conserved structure we randomized each alignment of the database by shuffling. The procedure is described in detail in reference [5]. It is as
conservative as possible and keeps the most relevant alignment parameters like base composition, conservation patterns, gap-patterns etc. intact while any correlation arising from the original
structure is efficiently removed.
The sensitivity to detect a conserved RNA structure depends on the sequence variation in the alignment. It is difficult to detect any signature of a conserved structure in alignments with high
sequence identity. The more sequence changes in the alignment the more information is available. The overall "information content" is thus dependent on (i) the divergence of the sequences and (ii)
the number of the sequences in the alignment. A common measure describing sequence variation in a multiple sequence alignment is the average pairwise sequence identity (API). Although this measure is
widely used, it is only capable of assessing sequence variation, and does not take the number of sequences of the alignment into account. We found it helpful to use a combined measure for the content
of evolutionary information for presenting the results of our analysis. We used the normalized Shannon entropy H. In the case of alignments of RNA sequences we are dealing with an alphabet Σ =
{A,C,G,U,-} composed of the four nucleotides plus the gap character "-". The probabilities are approximated by the observed frequencies (e.g. $p_A^i$ is the frequency of the character A in column i divided by the number of sequences in the alignment). The normalized Shannon entropy of an alignment $\mathcal{A}$ is then defined as the sum of the Shannon entropies of the individual columns divided by the length of the alignment denoted by L:

$$H(\mathcal{A}) = -\frac{1}{L}\sum_{i=1}^{L} \sum_{c \in \Sigma} p_c^{i} \log_2 p_c^{i}$$
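A straightforward implementation of this quantity (illustrative only; the base-2 logarithm is an assumption made here) is:

```python
import math

def normalized_shannon_entropy(alignment, alphabet="ACGU-"):
    """Mean per-column Shannon entropy (log base 2) of an alignment given as a
    list of equal-length rows over the alphabet {A, C, G, U, -}."""
    n_seqs, length = len(alignment), len(alignment[0])
    total = 0.0
    for i in range(length):
        column = [row[i] for row in alignment]
        for c in alphabet:
            p = column.count(c) / n_seqs      # observed frequency of c in column i
            if p > 0.0:
                total -= p * math.log2(p)
    return total / length

aln = ["GCCAAGGU-GGU",
       "GCCAAGGUAGGU",
       "GUCAAGGCAGGU"]
print(normalized_shannon_entropy(aln))        # 0 for identical columns, larger with variation
```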
Although it is convenient to use this measure, most people are more familiar with the API. Fig. 1 shows the relation of the API and the Shannon entropy for alignments with different numbers of sequences.
Relation between the average pairwise sequence identity and the normalized Shannon entropy. The Shannon entropy is used as measure for information content contained in an alignment throughout this
paper. It depends on the average pairwise identity and ...
In order to assess and compare the performance of the various strategies, we perform receiver operating characteristic (ROC) curve analysis. A ROC curve [60] is a plot of the true positive rate
(sensitivity) versus the false positive rate (1-specificity), while varying the discrimination threshold of a scoring classifier. The more a ROC curve is shifted to the upper left corner of the plot,
the better the discrimination is. The area under the ROC curve (AUC) is a single scalar value ranging from 0 to 1 representing the overall discrimination capability of a method. A random classifier
has an AUC value around 0.5, while perfect classification is indicated by an AUC value of 1.
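For reference, the same ROC/AUC evaluation can be reproduced with a few lines of Python using scikit-learn; the labels and scores below are made-up placeholders, not data from the benchmark:

```python
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = alignment with a conserved structure, 0 = shuffled control;
# scores are the values returned by one of the conservation measures
# (for distance measures, where smaller means more conserved, pass the negated score).
y_true  = [1, 1, 1, 1, 0, 0, 0, 0]
y_score = [0.95, 0.80, 0.60, 0.40, 0.55, 0.30, 0.20, 0.10]

print(roc_auc_score(y_true, y_score))               # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points of the ROC curve itself
```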
Results and Discussion
The results of the benchmark are summarized in Tab. 2 and Fig. 2. The test set was binned by entropy and for each bin we calculated the average AUC as overall performance measure for
each method. In Table 2, we additionally give the sensitivity of each method for a given specificity of 95%. In other words, this number is the percentage of correctly identified conserved
structures at a false positive rate of 5%, a somewhat more practical measure than the AUC. Almost all methods can be applied in a pairwise comparison manner and as a comparison of single structures
to a consensus structure/energy. We will simply refer to these cases as 'pairwise' and 'consensus'.
Results of the benchmark. AUC values (area under the ROC curve) are shown as general performance measure for different methods, different alignment sets and different regions of information content.
Also refer to Tab. 2.
Comparison of different strategies
As a main result one can note that over all entropy ranges and for both the structural and sequence based alignments, either an energy based method (SCI/RNAeval) or the base-pair distance performs
best. These methods are followed by the tree editing methods based on RNAdistance. MiGaL based tree editing, mountain metric and ensemble methods perform significantly worse.
SCI/RNAeval and base-pair distance
In general, the SCI shows the best overall discrimination power on the structural alignments. On the medium and high entropy sets it apparently makes use of the large number of consistent/
compensatory mutations that are explicitly considered in the SCI through the RNAalifold consensus energy that contains a covariation score. The use of the covariation scoring model in RNAalifold does
improve the discrimination capability of the SCI significantly compared to a version where the covariation score was turned off (data not shown).
Only on the low entropy set, which contains highly conserved alignments with little evolutionary information, is the SCI outperformed by the RNAeval and base-pair distance measures. In cases with only a
few structural changes, the base-pair distance, which considers the exact position of pairs, seems to be more sensitive than the SCI that uses the folding energy as abstraction of the structure.
Interestingly, the clear winner in the low entropy set is the RNAeval method that, similar to the SCI, also uses the folding energy instead of the structure itself. Still, it performs significantly
better (p-values < 0.001) than the SCI. The SCI and the RNAeval approach operate on two different scales. While the SCI is bounded below by 0, the RNAeval approach is bounded above by 1, which causes
favoring of two extreme cases. In the case of the SCI an alignment with loads of compensatory and consistent mutations will yield a SCI above 1 due to the covariance score. The RNAeval approach will
give at most 1 as compensatory and consistent mutations are not specially rewarded. In the case of an alignment of sequences that do not share a common fold the SCI will be 0, while the RNAeval
approach will yield a value below 0 as the evaluation of a sequence forced to fold into a structure that is not likely to be adopted by that sequence will give positive energy values. Hence, in the
case of the SCI we are dealing with a better dispersion of positive examples, and vice versa in the RNAeval approach with a better dispersion of negative examples.
The overall trend looks slightly different on the CLUSTAL W generated alignments. The SCI loses discrimination power and the base-pair distance performs equally well or, in most cases, even better.
So it seems that the base-pair distance is more robust against alignment errors than the SCI.
Another difference between the results for the structural and CLUSTAL W sets is the overall shape of the curves in Fig. 2. For the structural alignments, the classification power increases
with increasing information content. This trend is of course entirely expected, and it is also visible for the CLUSTAL W alignments. However, there are two marked valleys at about 0.6 and 0.9 Shannon
entropy. The first one is caused by a prevalence of pairwise alignments with low sequence identity. An average pairwise identity of 60% to 65% or below is considered critical with regard to secondary structure prediction for alignments generated solely from sequence information [26]. This results in a relatively low discrimination capability in this region. As soon as low identity pairwise
alignments do not constitute the majority of instances in a bin, the predictive power rises again. The second performance drop is again caused by prevalence of alignments with low sequence identity,
in this case alignments with three sequences.
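Equation 21, the normalized Shannon entropy used for binning, is not reproduced in this excerpt. Purely to make "entropy of an alignment" concrete, a commonly used column-wise variant is sketched below; the exact normalization in the paper (for instance the treatment of gaps) may differ.

```python
import math

def mean_column_entropy(alignment):
    """Mean per-column Shannon entropy (bits) of an alignment given as a list
    of equal-length strings.  Illustrative stand-in for equation 21 only."""
    n_cols = len(alignment[0])
    total = 0.0
    for i in range(n_cols):
        column = [seq[i] for seq in alignment]
        freqs = [column.count(c) / len(column) for c in set(column)]
        total += -sum(p * math.log2(p) for p in freqs)
    return total / n_cols
```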
Tree editing
The best tree editing approach (the consensus approach using the HIT representation) in general shows weaker performance than the SCI on both the structural and the CLUSTAL W generated data sets.
Detailed results for all tree editing methods are shown in Fig. 3. There is a clear hierarchy among tree editing approaches: abstracting away structural detail in the representation is accompanied by a loss in discrimination power, which is especially pronounced on the structural data set. Tree editing using the full and HIT representations, which encode an RNA secondary structure without any loss of information, gives the best results, while the coarse-grained approach, which abstracts the most, shows the weakest performance.
Detailed benchmark results for the tree editing methods. AUC values are shown for all variants of the tree editing methods, including different algorithms and abstraction levels. Also refer to Tab.
The weighted coarse-grained approach maintains a higher level of structural information than the coarse-grained representation and therefore generally performs better. The choice of costs for the tree editing operations has a significant influence on the discrimination power of the methods. Tree editing distances of the coarse-grained and weighted coarse-grained representations were calculated using the cost matrix of the Vienna RNA package and the costs originally proposed by Shapiro [42]. Although the editing costs are in both cases chosen more or less arbitrarily, the weighted coarse-grained approach using the Vienna RNA package costs performs significantly better than, or at least as well as, the weighted coarse-grained approach using Shapiro's costs on both structural and CLUSTAL W generated alignments (data not shown).
As MiGaL also makes use of the nucleotide sequence, and not of the secondary structure alone, we evaluated it only in pairwise comparisons. For the MiGaL methods, too, we observe the trend that the more information is encoded in a representation or layer, the better the discrimination capability. However, despite its more sophisticated algorithm, MiGaL performs worse than the simpler tree editing algorithms of the Vienna RNA package.
Tree editing is the only method that can be applied directly to sequences of unequal length and is hence not subject to alignment quality, which seems to be an advantage. However, this only holds for pairwise comparisons, as the calculation of a consensus structure depends on a given alignment. Since the consensus approaches show much better performance than their pairwise counterparts on structural alignments, and at least comparable results on CLUSTAL W generated alignments, the advantage of alignment-independent pairwise comparisons is questionable.
Mountain metric
The mountain metric shows the weakest performance of all methods that are based on single structures, and this becomes even worse when base-pairing probabilities are used. Although the mountain representation allows easy comparison of RNA structures by visual inspection, the approach fails when formalized as the mountain metric. The weak performance indicates that the difference in the mountain functions of closely related RNA molecules is in many cases in the same range as the differences obtained by comparing unrelated structures.
Ensemble methods
In principle, secondary structure predictions that take into account the whole thermodynamic ensemble of the folded RNA hold more information than the MFE structure alone. However, we observe that
this does not translate into improved detection performance for conserved RNA structures (Fig. 2, Table 2). The ensemble distance shows only moderate performance on structural
alignments, and fails completely on CLUSTAL W generated alignments. It seems that taking into account sub-optimal base-pairs only adds noise to the comparison and blurs the signal instead of
improving it.
The extreme sensitivity to alignment errors can be explained by the fact that each probability of each possible base-pair of one sequence has to be compared to the corresponding probability of the
other sequences or the consensus, respectively. A base-pair present in one ensemble that does not have a counterpart in the other ensembles adds its full squared probability to the distance.
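The effect described here can be seen directly in a naive squared-difference distance between two base-pair probability maps. This is a simplified stand-in for the actual ensemble distance; the probabilities themselves would come from RNAfold -p or RNAalifold -p.

```python
def pair_probability_distance(probs_a, probs_b):
    """Sum of squared differences between two base-pair probability maps,
    each given as a dict {(i, j): probability}.  A pair present in only one
    ensemble contributes its full squared probability, which is exactly what
    makes this measure so sensitive to alignment errors.  Sketch only."""
    all_pairs = set(probs_a) | set(probs_b)
    return sum((probs_a.get(p, 0.0) - probs_b.get(p, 0.0)) ** 2 for p in all_pairs)
```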
The RNApdist-like methods show the best overall performance of the ensemble-based methods. This is consistent with the observations above, since the RNApdist method only considers a condensed and thus less noisy version of the full pair-probability matrix.
Consensus versus pairwise comparison
In general, one can observe that methods based on the comparison to a consensus structure perform better than methods based on pairwise comparisons only. The consensus structure predicted by RNAalifold, which is usually more accurate than single-sequence structure predictions, improves the discrimination power significantly. There are two exceptions: for the ensemble methods and in the low entropy test set, the trend is reversed, with pairwise methods performing better than their consensus variants.
In the case of the ensemble methods, this is apparently due to the way base-pairing probabilities are calculated by RNAfold and RNAalifold. For a single sequence there are no special rules for two bases to form a base-pair; they just have to belong to the set of valid base-pairs, so RNAfold can assign a base-pair probability to each valid base-pair. On the alignment level this is more
complicated as we are dealing with columns of nucleotides rather than with single nucleotides. In the RNAalifold algorithm, only those column pairs in which at least 50% of the sequences can form a
base-pair are used in the computation. In the case of the consensus comparison approach there may be many base-pairing probabilities in the single sequences that do not have a consensus counterpart.
Also in the low entropy range, which is dominated by alignments with little sequence variation, pairwise comparison approaches show better discrimination capability than their consensus counterparts.
Here, there is almost no additional mutational information that could give RNAalifold an advantage over RNAfold on single sequences.
Other methods
As both ddbRNA and MSARi are limited in the kinds of data sets to which they can be applied, we evaluated both methods only on appropriate subsets of our test set. In the case of ddbRNA these are pairwise and three-way alignments, and in the case of MSARi 10-way and 15-way alignments.
In this study we use ddbRNA to evaluate the number of compensatory mutations per length as a measure of evolutionary conservation of structure. The ddbRNA approach shows only moderate discrimination
capability and performs significantly worse than the SCI on both structural and CLUSTAL W generated alignments (Fig. 4). ddbRNA is extremely sensitive to the alignment quality as the
detected stems must be present in all sequences of an alignment.
Performance of the MSARi and ddbRNA algorithms. Left: AUC values for ddbRNA in comparison to the SCI. Only pairwise and three-way alignments were considered. Right: ROC curves of 10- and 15-way
alignments for MSARi in comparison to the SCI.
As MSARi implements a strategy that compensates for slight mis-alignments, the results are almost identical for structural and CLUSTAL W generated alignments, but it shows significantly lower discrimination capability than most other methods tested in this paper, e.g. the SCI, as shown in Fig. 4. The shape of the ROC curves for MSARi indicates that only a few conserved instances are
detected as truly conserved. They are assigned very low p-values and it is not likely to find false positive examples at this low level. However, a large fraction of conserved instances is not
considered to be conserved and is assigned a p-value of 1.
Due to the exponential growth of the shape space with the length of the sequence and the resulting computational costs, we evaluated the RNAshapes approach as a proof of concept only on a small set
of tRNAs. Although this method shows clear discrimination capability, it is far below the performance of the SCI, which is able to perfectly separate this specific tRNA test set (Fig. 5). The observation that shape type 1 (lowest level of abstraction) performs significantly better than shape type 5 (highest level of abstraction) is consistent with the observation that increasing abstraction of structural detail is associated with a loss in discrimination power.
Performance of the RNAshapes based method. ROC curves are shown for different abstraction levels on a test set of 461 five-way alignments of tRNAs from the structural data set.
Correlation of methods
We have tested a variety of different methods in order to measure the same property, namely structural conservation. A question that is still open is whether all these methods essentially detect the
same features or focus on different aspects of the conserved structures. To shed light on this question, we investigated the correlation between selected measures (Fig. 6). All methods
correlate statistically significantly (p < 0.001) with each other on the tested subset. The degree of correlation varies, however. Not surprisingly, among the highest correlations (correlation
coefficient 0.93) are the two tree editing methods using the HIT representation and MiGaL Layer 3, as they both act on trees of full structural detail. The base-pair distance is also highly
correlated with the tree editing methods. The SCI shows the highest correlation to RNAeval (0.68), which is not unexpected, as both measures are based on folding energies. However, the relatively high degree of correlation between SCI/RNAeval and the other methods is remarkable. RNAeval, for example, is correlated with the pairwise base-pair distance to the same degree (0.82) as the pairwise base-pair distance is with the pairwise RNAdistance measure. This shows that the SCI and RNAeval, methods that do not actually inspect the structure, effectively measure it. This seems noteworthy, as the name "structure conservation index" has been criticized in the past as misleading because the SCI does not measure structural conservation explicitly.
Correlation of selected methods. Lower triangular matrix: scatter plots of the different scores with local regression indicated by red lines. Upper triangular matrix: the corresponding Pearson
correlation coefficients. Data points are shown for ...
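The coefficients in the figure are ordinary Pearson correlations between per-alignment score vectors. For completeness, a self-contained sketch follows; the score vectors themselves would be the outputs of the individual methods on the same alignments.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```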
Dependence on base composition
All scores used in this study are normalized with respect to sequence length and the number of sequences in the alignment. In principle, all our methods should also be independent of the base
composition. The energy-based methods SCI and RNAeval compare folding energies in such a way that the absolute value of the free energy (which clearly depends on the GC content) is normalized out.
All other methods, except tree editing using MiGaL with Layer 3, do not even explicitly consider the sequence but act on the predicted structure only. Although all methods should be normalized for
base composition by construction, we still investigated how they are affected by the GC content.
The somewhat surprising results are shown in Fig. 7. While pairwise tree editing, base-pair distance and mountain metric approaches do not show any significant correlation to the GC content,
energy based methods and tree editing using a consensus structure derived by RNAalifold show high correlation. The consensus base-pair distance method shows little correlation, but correlation
increases slightly when moving to higher entropy ranges (data not shown).
Dependency on nucleotide composition of selected methods. The scores of a subset of randomized pairwise alignments of tRNAs in an entropy range from 0.4 to 0.6 are plotted against the average GC
content of the sequences in the alignment. Correlation coefficients ...
These results suggest that the observed GC dependence is mainly a consequence of using an RNAalifold consensus structure. In the case of the SCI, this is easiest to understand. The SCI is the ratio of
the consensus energy and the mean of the single sequence energies. Both components are functions of the base composition, with higher GC content resulting in lower free energies. Although consensus
predictions use the same energy model as single sequence predictions, the additional constraints imposed by folding several sequences together result in a slightly different GC dependence. Similar
effects seem to be responsible for the GC dependence of the RNAeval measure and the consensus based tree editing measures.
For the purpose of this study, the GC dependence does not directly affect the results because of the design of our benchmark: the positive and negative test sets contain sequences with the same base composition. However, when these measures are used in RNA gene finding algorithms, the effect is of practical relevance. The GC dependence of the SCI seems to be the main reason why the RNAz program shows a small bias towards GC-rich regions [61].
Statistical significance of the scores
In this study we compared the different methods on the basis of their ability to discriminate between alignments containing true conserved structures and random controls. While this approach gives us
information on the performance of the methods relative to each other, none of the scores used in this study (except the MSARi p-value) is normalized for sequence diversity. Alignments with 100%
sequence identity get, by definition, the highest score of perfect structure conservation. For the purpose of detecting evolutionarily conserved structures this is of little help. Ideally, one would
like to answer the question of whether there is an unusually conserved structure in an alignment despite the given sequence diversity.
This problem can be addressed in different ways. The optimal solution is to devise a direct statistical model as in the case of MSARi. However, this seems only feasible if one considers a simplified
score like the base-pair derived score in MSARi. It seems impossible to analytically derive the background distribution of a more complex score like the SCI, since it depends on complex folding
algorithms that cannot be modeled directly.
As an alternative, machine learning algorithms can be used. In the case of RNAz, the dependence of the SCI on the number of sequences and the average pairwise identity is trained on a large test set
of known ncRNAs and random alignments.
Yet another possibility is to derive the background distribution empirically for each alignment under test. This approach is used by AlifoldZ, which calculates a z-score by comparing the score of the
original alignment to the score distribution of randomized alignments.
This last method is computationally demanding, but has the advantage that it can be applied to any score without modification.
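A sketch of this last, score-agnostic strategy: given the score of the native alignment and the scores of its shuffled versions, report a z-score and an empirical p-value. The number of shuffles and the direction of "extreme" are choices of the user, not prescriptions from the paper.

```python
import statistics

def z_score_and_empirical_p(native_score, shuffled_scores, higher_is_better=True):
    """Compare a native alignment score with scores of shuffled controls.
    Returns (z, p): the standard score of the native value and the fraction
    of shuffled scores that are at least as extreme (add-one smoothed)."""
    mu = statistics.mean(shuffled_scores)
    sd = statistics.stdev(shuffled_scores)
    z = (native_score - mu) / sd if sd > 0 else 0.0
    if higher_is_better:
        n_extreme = sum(s >= native_score for s in shuffled_scores)
    else:
        n_extreme = sum(s <= native_score for s in shuffled_scores)
    p = (n_extreme + 1) / (len(shuffled_scores) + 1)
    return z, p
```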
We have set up a web-server that calculates relevant scores used in this study for a given alignment and assesses the statistical significance by calculating a z-score and an empirical p-value. The
web server can be accessed under http://rna.tbi.univie.ac.at/cgi-bin/SCA.cgi.
Conclusions
The aim of this work was to find the most effective ways to detect evolutionarily conserved RNA structures in sequence alignments. A few methods and algorithms have been proposed previously. Here, we
devised a series of novel measures and evaluated their performance systematically on a large test set of known conserved RNA structures.
As the most accurate measures we could identify the folding energy based "structure conservation index" and a measure based on the base-pair-distance structure metric. Interestingly, these two are
among the simplest methods tested and generally outperform all of the more sophisticated methods. Only the methods based on tree editing distances could compete to some degree with the SCI/base-pair
distance. Here we can note that more complex tree representations show better performance than simplified "coarse grained" abstractions. However, more sophisticated algorithms like MiGaL do not give
better results than the basic algorithms as implemented in the Vienna RNA package. All other methods show only very poor performance and do not appear to be a reasonable choice in any "real-life"
application. Among these methods we have to list the mountain metric, all methods based on structure ensembles and also the ddbRNA and MSARi algorithms.
As a general trend we could observe that the measures relying on a consensus structure prediction by the RNAalifold algorithm have a clear advantage over methods that only use single sequence structure predictions.
All these results are fairly consistent over all tested alignments with one notable exception. For highly conserved sequences the RNAeval approach based on pairwise folding energy comparisons shows
the highest accuracy and all other measures, including the SCI, perform significantly worse.
Taken together we can conclude that the simple methods based on either folding energies or base-pair distance are the methods of choice. Although the SCI was the only method that was tested when RNAz
was first published, our results clearly show that this was a reasonable choice. An interesting new aspect is the GC dependence of the SCI that we observed here. This makes it necessary to consider
base composition when evaluating the statistical significance of the SCI, for example by including the GC content as an additional classifier in the RNAz machine learning algorithm. This can be
expected to increase the specificity of the program.
Another result which has practical implications is the fact that the SCI performs poorly on highly conserved sequences. The RNAeval method turned out to be significantly better and might help to
improve ncRNA gene prediction under these particularly difficult conditions.
The ever-growing pace of current genome sequencing projects confronts current RNA gene finders with new problems. Having sequences of dozens or even hundreds of species, the paradigm of detecting
conserved structures will change. Only a few extraordinarily conserved RNAs like tRNAs or rRNAs will show a signal of structure conservation across the whole phylogeny. The next generation of RNA
gene finders will have to deal with the problem of finding lineage specific and evolving structures. The strategies presented here can be the basis of algorithms that find sub-groups of related
structures or detect outliers of mis-aligned sequences. We plan to enhance our programs RNAz and AlifoldZ with such capabilities. The results obtained here guide such efforts as they clearly show
which measures are worth considering and which should be avoided.
Methods
All results presented in this paper are based on the BRAliBase 2.1 data set [27]. It consists of 18,990 structural alignments of 36 RNA families. Alignments are divided into subsets of alignments
with 2, 3, 5, 7, 10, and 15 sequences (see additional File 1). For each alignment in BRAliBase 2.1 a corresponding sequence based alignment using CLUSTAL W, version 1.83, with standard settings was
generated. Negative controls (i.e. alignments without naturally evolved secondary structure) were generated by shuffling using shuffle-aln.pl [5] with option "--conservative2". This shuffling
procedure maintains the gap pattern and only columns with the same degree of conservation are shuffled. This results in randomized alignments of the same length, the same number of sequences, the
same nucleotide composition, the same overall conservation, the same local conservation and the same gap pattern. For each alignment in the original BRAliBase 2.1 and CLUSTAL W data set,
respectively, five randomized alignments were generated for subsequent ROC analysis.
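A much-simplified sketch of such "conservative" column shuffling follows. Here columns are grouped only by their character-count profile, whereas shuffle-aln.pl additionally preserves the exact gap pattern and uses its own definition of conservation classes; this is not a reimplementation of that script.

```python
import random
from collections import defaultdict

def conservative_shuffle(alignment, rng=random):
    """Shuffle alignment columns only among columns with the same sorted
    character-count profile.  Length, base composition and overall
    conservation are preserved; simplified stand-in for the --conservative2
    mode of shuffle-aln.pl, not a reimplementation of it."""
    n_cols = len(alignment[0])
    columns = [tuple(seq[i] for seq in alignment) for i in range(n_cols)]
    groups = defaultdict(list)
    for idx, col in enumerate(columns):
        profile = tuple(sorted(col.count(c) for c in set(col)))
        groups[profile].append(idx)
    shuffled = list(columns)
    for indices in groups.values():
        targets = indices[:]
        rng.shuffle(targets)
        for src, dst in zip(indices, targets):
            shuffled[dst] = columns[src]
    return ["".join(col[k] for col in shuffled) for k in range(len(alignment))]
```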
Alignments in both data sets were split according to their normalized Shannon entropy (equation 21) into subsets with a bin size of 0.05. To determine a minimal sample size, we followed the strategy proposed by Hanley & McNeil [60]. A minimal sample size of 200 positive and 200 negative instances yields reasonable results (i.e. a low standard error), and the relative gain from a larger sample size is small. To statistically assess the significance of the difference between two AUC values we used the non-parametric method of DeLong [62].
Calculation of AUC values was done using the R statistical package, version 2.5.1, and the ROCR package [63].
As many methods can only be applied to structures of equal length, RNA sequences without gap characters were folded using RNAfold. The alignment of the sequences was then used to reintroduce gaps into the structures (denoted simply as '.') or to adjust the positions of base-pairs when base-pairing probabilities were used.
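A sketch of this gap re-introduction step: the dot-bracket structure predicted for the ungapped sequence is mapped back onto its gapped alignment row, with gap columns written as '.' as described above. Adjusting base-pair probabilities works analogously, on pair indices rather than characters.

```python
def reintroduce_gaps(structure, gapped_sequence, gap_char="-"):
    """Map a dot-bracket structure of the ungapped sequence back onto the
    gapped alignment row; alignment gaps become '.' in the structure."""
    result, k = [], 0
    for char in gapped_sequence:
        if char == gap_char:
            result.append(".")
        else:
            result.append(structure[k])
            k += 1
    assert k == len(structure), "structure and ungapped sequence differ in length"
    return "".join(result)

# Example: reintroduce_gaps("((...))", "GC-AUA-GC") == "((.....))"
```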
Programs and options used
The following program versions and options were used from the Vienna RNA package, version 1.6.5: RNAfold for calculations of MFE structures and base-pair probabilities of single sequences with
options -p -d2. RNAalifold for calculation of consensus structures and consensus base-pair probabilities with options -p -d2. RNAeval for energy evaluations of a sequence in a given secondary
structure. RNAdistance for calculation of base-pair distances with option -DP and tree editing distances with options -Dfhwc and additional option -S when calculations are done using Shapiro's cost matrix.
Other programs not part of the Vienna RNA package: RNAshapes version 2.1.1 with options -p-t [1|2|3|4|5]. migal version 2 with options -M --memory 1000. ddbRNA with standard options. MSARi with
standard options.
Authors' contributions
All authors contributed to the design of the study and the interpretation of the results. ARG carried out the analysis. ARG and SW wrote the manuscript. All authors read and approved the final manuscript.
Supplementary Material
Additional file 1:
Overview of the BRAliBase 2.1 data set. The number of alignments in the different entropy bins is shown. The red line indicates the minimal threshold of
positive instances we used to obtain reasonable significance levels in the ROC analysis. Bins below this threshold were not considered.
Acknowledgements
We acknowledge funding from the Austrian GEN-AU projects "noncoding RNA" and "Bioinformatics Integration Network" and thank Peter Stadler and Rolf Backofen for valuable discussions.
References
• Bompfünewerer A, Flamm C, Fried C, Fritzsch G, Hofacker I, Lehmann J, Missal K, Mosig A, Müller B, Prohaska S, Stadler B, Stadler P, Tanzer A, Washietl S, Witwer C. Evolutionary patterns of
non-coding RNAs. Theor Biosci. 2005;123:301–369. [PubMed]
• Mignone F, Gissi C, Liuni S, Pesole G. Untranslated regions of mRNAs. Genome Biol. 2002;3:REVIEWS0004. [PMC free article] [PubMed]
• Rivas E, Eddy SR. Noncoding RNA gene detection using comparative sequence analysis. BMC Bioinformatics. 2001;2:8–8. [PMC free article] [PubMed]
• Pedersen JS, Bejerano G, Siepel A, Rosenbloom K, Lindblad-Toh K, Lander ES, Kent J, Miller W, Haussler D. Identification and classification of conserved RNA secondary structures in the human
genome. PLoS Comput Biol. 2006;2 [PMC free article] [PubMed]
• Washietl S, Hofacker IL. Consensus folding of aligned sequences as a new measure for the detection of functional RNAs by comparative genomics. J Mol Biol. 2004;342:19–30. [PubMed]
• Hofacker IL, Fekete M, Stadler PF. Secondary structure prediction for aligned RNA sequences. J Mol Biol. 2002;319:1059–1066. [PubMed]
• Washietl S, Hofacker IL, Stadler PF. Fast and reliable prediction of noncoding RNAs. Proc Natl Acad Sci USA. 2005;102:2454–2459. [PMC free article] [PubMed]
• Coventry A, Kleitman DJ, Berger B. MSARi: multiple sequence alignments for statistical detection of RNA secondary structure. Proc Natl Acad Sci USA. 2004;101:12102–12107. [PMC free article] [
• di Bernardo D, Down T, Hubbard T. ddbRNA: detection of conserved secondary structures in multiple alignments. Bioinformatics. 2003;19:1606–1611. [PubMed]
• Backofen R, Bernhart SH, Flamm C, Fried C, Fritzsch G, Hackermuller J, Hertel J, Hofacker IL, Missal K, Mosig A, Prohaska SJ, Rose D, Stadler PF, Tanzer A, Washietl S, Will S. RNAs everywhere:
genome-wide annotation of structured RNAs. J Exp Zoolog B Mol Dev Evol. 2007;308:1–25. [PubMed]
• Mourier T, Carret C, Kyes K, Christodoulou Z, Gardner P, Jeffares DC, Pinches R, B B, Berriman M, Griffiths-Jones S, Ivens A, Newbold C, Pain A. Genome wide discovery and verification of novel
structured RNAs in Plasmodium falciparum. Genome Research. 2008;18:281–292. [PMC free article] [PubMed]
• Stark A, Lin MF, Kheradpour P, Pedersen JS, Parts L, Carlson JW, Crosby MA, Rasmussen MD, Roy S, Deoras AN, Ruby JG, Brennecke J, Curators HF, Project BD, Hodges E, Hinrichs AS, Caspi A, Paten B,
Park SW, Han MV, Maeder ML, Polansky BJ, Robson BE, Aerts S, van Helden J, Hassan B, Gilbert DG, Eastman DA, Rice M, Weir M, Hahn MW, Park Y, Dewey CN, Pachter L, Kent WJ, Haussler D, Lai EC,
Bartel DP, Hannon GJ, Kaufman TC, Eisen MB, Clark AG, Smith D, Celniker SE, Gelbart WM, Kellis M, Crosby MA, Matthews BB, Schroeder AJ, Sian Gramates L, St Pierre SE, Roark M, Wiley KL, Jr,
Kulathinal RJ, Zhang P, Myrick KV, Antone JV, Gelbart WM, Carlson JW, Yu C, Park S, Wan KH, Celniker SE. Discovery of functional elements in 12 Drosophila genomes using evolutionary signatures.
Nature. 2007;450:219–232. [PMC free article] [PubMed]
• Rose D, Hackermueller J, Washietl S, Reiche K, Hertel J, Findeiss S, Stadler PF, Prohaska SJ. Computational RNomics of Drosophilids. BMC Genomics. 2007;8:406. [PMC free article] [PubMed]
• Steigele S, Huber W, Stocsits C, Stadler PF, Nieselt K. Comparative analysis of structured RNAs in S. cerevisiae indicates a multitude of different functions. BMC Biol. 2007;5:25–25. [PMC free
article] [PubMed]
• Washietl S, Hofacker IL, Lukasser M, Hüttenhofer A, Stadler PF. Mapping of conserved RNA secondary structures predicts thousands of functional noncoding RNAs in the human genome. Nat Biotechnol.
2005;23:1383–1390. [PubMed]
• Missal K, Zhu X, Rose D, Deng W, Skogerbo G, Chen R, Stadler PF. Prediction of structured non-coding RNAs in the genomes of the nematodes Caenorhabditis elegans and Caenorhabditis briggsae. J Exp
Zoolog B Mol Dev Evol. 2006;306:379–392. [PubMed]
• Missal K, Rose D, Stadler PF. Non-coding RNAs in Ciona intestinalis. Bioinformatics. 2005;21:77–78. [PubMed]
• Uzilov AV, Keegan JM, Mathews DH. Detection of non-coding RNAs on the basis of predicted secondary structure formation free energy change. BMC Bioinformatics. 2006;7:173. [PMC free article] [
• Torarinsson E, Sawera M, Havgaard JH, Fredholm M, Gorodkin J. Thousands of corresponding human and mouse genomic regions unalignable in primary sequence contain common RNA structure. Genome Res.
2006;16:885–9. [PMC free article] [PubMed]
• Weinberg Z, Barrick JE, Yao Z, Roth A, Kim JN, Gore J, Wang JX, Lee ER, Block KF, Sudarsan N, Neph S, Tompa M, Ruzzo WL, Breaker RR. Identification of 22 candidate structured RNAs in bacteria
using the CMfinder comparative genomics pipeline. Nucleic Acids Res. 2007;35:4809–19. [PMC free article] [PubMed]
• Yao Z, Barrick J, Weinberg Z, Neph S, Breaker R, Tompa M, Ruzzo WL. A Computational Pipeline for High-Throughput Discovery of cis-Regulatory Noncoding RNA in Prokaryotes. PLoS Comput Biol. 2007;3
:e126. [PMC free article] [PubMed]
• Miller W, Rosenbloom K, Hardison RC, Hou M, Taylor J, Raney B, Burhans R, King DC, Baertsch R, Blankenberg D, Kosakovsky Pond SL, Nekrutenko A, Giardine B, Harris RS, Tyekucheva S, Diekhans M,
Pringle TH, Murphy WJ, Lesk A, Weinstock GM, Lindblad-Toh K, Gibbs RA, Lander ES, Siepel A, Haussler D, Kent WJ. 28-Way vertebrate alignment and conservation track in the UCSC Genome Browser.
Genome Res. 2007;17:1797–808. Epub 2007 Nov 5. [PMC free article] [PubMed]
• Babak T, Blencowe BJ, Hughes TR. Considerations in the identification of functional RNA structural elements in genomic alignments. BMC Bioinformatics. 2007;8:33. [PMC free article] [PubMed]
• Will S, Reiche K, Hofacker IL, Stadler PF, Backofen R. Inferring noncoding RNA families and classes by means of genome-scale structure-based clustering. PLoS Comput Biol. 2007;3:e65. [PMC free
article] [PubMed]
• Freyhult EK, Bollback JP, Gardner PP. Exploring genomic dark matter: a critical assessment of the performance of homology search methods on noncoding RNA. Genome Res. 2007;17:117–25. [PMC free
article] [PubMed]
• Gardner PP, Wilm A, Washietl S. A benchmark of multiple sequence alignment programs upon structural RNAs. Nucleic Acids Res. 2005;33:2433–2439. [PMC free article] [PubMed]
• Wilm A, Mainz I, Steger G. An enhanced RNA alignment benchmark for sequence alignment programs. Algorithms Mol Biol. 2006;1:19–19. [PMC free article] [PubMed]
• Andersen ES, Lind-Thomsen A, Knudsen B, Kristensen SE, Havgaard JH, Torarinsson E, Larsen N, Zwieb C, Sestoft P, Kjems J, Gorodkin J. Semiautomated improvement of RNA alignments. RNA. 2007;13
:1850–1859. Epub 2007 Sep 5. [PMC free article] [PubMed]
• Collins LJ, Moulton V, Penny D. Use of RNA secondary structure for studying the evolution of RNase P and RNase MRP. J Mol Evol. 2000;51:194–204. [PubMed]
• Caetano-Anolles G. Evolved RNA secondary structure and the rooting of the universal tree of life. J Mol Evol. 2002;54:333–45. [PubMed]
• Holmes I. A probabilistic model for the evolution of RNA structure. BMC Bioinformatics. 2004;5:166. [PMC free article] [PubMed]
• Zuker M, Stiegler P. Optimal computer folding of large RNA sequences using thermodynamics and auxiliary information. Nucleic Acids Res. 1981;9:133–148. [PMC free article] [PubMed]
• Hofacker IL, Fontana W, Stadler PF, Bonhoeffer LS, Tacker M, Schuster P. Fast folding and comparison of RNA secondary structures. Monatsh Chem. 1994;125:167–188.
• Mathews DH, Turner DH. Prediction of RNA secondary structure by free energy minimization. Curr Opin Struct Biol. 2006;16:270–8. [PubMed]
• Flamm C, Fontana W, Hofacker IL, Schuster P. RNA folding at elementary step resolution. RNA. 2000;6:325–338. [PMC free article] [PubMed]
• Dowell RD, Eddy SR. Evaluation of several lightweight stochastic context-free grammars for RNA secondary structure prediction. BMC Bioinformatics. 2004;5:71. [PMC free article] [PubMed]
• Knudsen B, Hein J. Pfold: RNA secondary structure prediction using stochastic context-free grammars. Nucleic Acids Res. 2003;31:3423–8. [PMC free article] [PubMed]
• Do CB, Woods DA, Batzoglou S. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics. 2006;22:e90–8. [PubMed]
• Hogeweg P, Hesper B. Energy directed folding of RNA sequences. Nucleic Acids Res. 1984;12:67–74. [PMC free article] [PubMed]
• Moulton V, Zuker M, Steel M, Pointon R, Penny D. Metrics on RNA secondary structures. J Comput Biol. 2000;7:277–292. [PubMed]
• Shapiro BA. An algorithm for comparing multiple RNA secondary structures. Comput Appl Biosci. 1988;4:387–393. [PubMed]
• Shapiro BA, Zhang KZ. Comparing multiple RNA secondary structures using tree comparisons. Comput Appl Biosci. 1990;6:309–318. [PubMed]
• Fontana W, Konings DA, Stadler PF, Schuster P. Statistics of RNA secondary structures. Biopolymers. 1993;33:1389–1404. [PubMed]
• Allali J, Sagot MF. A new distance for high level RNA secondary structure comparison. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2005;2:3–14. [PubMed]
• Allali J, Sagot MF. String Processing and Information Retrieval. Vol. 3772. Springer, Berlin; 2005. A multiple graph layers model with application to RNA secondary structures comparison; pp.
• McCaskill JS. The equilibrium partition function and base pair binding probabilities for RNA secondary structure. Biopolymers. 1990;29:1105–1119. [PubMed]
• Huynen MA, Perelson A, Vieira WA, Stadler PF. Base pairing probabilities in a complete HIV-1 RNA. J Comput Biol. 1996;3:253–274. [PubMed]
• Bonhoeffer S, McCaskill JS, Stadler PF, Schuster P. RNA multi-structure landscapes. A study based on temperature dependent partition functions. Eur Biophys J. 1993;22:13–24. [PubMed]
• Giegerich R, Voss B, Rehmsmeier M. Abstract shapes of RNA. Nucleic Acids Res. 2004;32:4843–4851. [PMC free article] [PubMed]
• Voss B, Giegerich R, Rehmsmeier M. Complete probabilistic analysis of RNA shapes. BMC Biol. 2006;4:5–5. [PMC free article] [PubMed]
• Steffen P, Voss B, Rehmsmeier M, Reeder J, Giegerich R. RNAshapes: an integrated RNA analysis package based on abstract shapes. Bioinformatics. 2006;22:500–503. [PubMed]
• Sankoff D. Simultaneous Solution of the RNA Folding, Alignment and Protosequence Problems. SIAM Journal on Applied Mathematics. 1985;45:810–825.
• Klein RJ, Eddy SR. RSEARCH: finding homologs of single structured RNA sequences. BMC Bioinformatics. 2003;4:44–44. [PMC free article] [PubMed]
• Wolf M, Achtziger M, Schultz J, Dandekar T, Müller T. Homology modeling revealed more than 20,000 rRNA internal transcribed spacer 2 (ITS2) secondary structures. RNA. 2005;11:1616–1623. [PMC free
article] [PubMed]
• Jiang T, Lin G, Ma B, Zhang K. A General Edit Distance between RNA Structures. J Comp Biol. 2002;9:371–88. [PubMed]
• Hochsmann M, Toller T, Giegerich R, Kurtz S. Local Similarity in RNA Secondary Structures. csb. 2003;2:159–168. [PubMed]
• Liu N, Wang T. A method for rapid similarity analysis of RNA secondary structures. BMC Bioinformatics. 2006;7:493–493. [PMC free article] [PubMed]
• Thompson JD, Higgins DG, Gibson TJ. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix
choice. Nucleic Acids Res. 1994;22:4673–80. [PMC free article] [PubMed]
• Yao Z, Weinberg Z, Ruzzo WL. CMfinder-a covariance model based RNA motif finding algorithm. Bioinformatics. 2006;22:445–452. [PubMed]
• Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143:29–36. [PubMed]
• Washietl S, Pedersen JS, Korbel JO, Stocsits C, Gruber AR, Hackermüler J, Hertel J, Lindemeyer M, Reiche K, Tanzer A, Ucla C, Wyss C, Antonarakis SE, Denoeud F, Lagarde J, Drenkow J, Kapranov P,
Gingeras TR, Guigó R, Snyder M, Gerstein MB, Reymond A, Hofacker IL, Stadler PF. Structured RNAs in the ENCODE selected regions of the human genome. Genome Res. 2007;17:852–864. [PMC free article
] [PubMed]
• DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44:837–845. [PubMed]
• Sing T, Sander O, Beerenwinkel N, Lengauer T. ROCR: visualizing classifier performance in R. Bioinformatics. 2005;21:3940–3941. [PubMed]
• Flamm C, Hofacker IL, Maurer-Stroh S, Stadler PF, Zehl M. Design of multistable RNA molecules. RNA. 2001;7:254–65. [PMC free article] [PubMed]
Calculus: Concepts and Contexts (4th Edition, 2010)
Synopses & Reviews
Publisher Comments:
Stewart's CALCULUS: CONCEPTS AND CONTEXTS, 3rd Edition focuses on major concepts and supports them with precise definitions, patient explanations, and carefully graded problems. Margin notes clarify
and expand on topics presented in the body of the text. The Tools for Enriching Calculus CD-ROM contains visualizations, interactive modules, and homework hints that enrich your learning experience.
iLrn Homework helps you identify where you need additional help, and Personal Tutor with SMARTHINKING gives you live, one-on-one online help from an experienced calculus tutor. In addition, the
Interactive Video Skillbuilder CD-ROM takes you step-by-step through examples from the book. The new Enhanced Review Edition includes new practice tests with solutions, to give you additional help
with mastering the concepts needed to succeed in the course.
Stewart's CALCULUS: CONCEPTS AND CONTEXTS, FOURTH EDITION offers a streamlined approach to teaching calculus, focusing on major concepts and supporting those with precise definitions, patient
explanations, and carefully graded problems. CALCULUS: CONCEPTS AND CONTEXTS is highly regarded because this text offers a balance of theory and conceptual work to satisfy more progressive programs
as well as those who are more comfortable teaching in a more traditional fashion. Each title is just one component in a comprehensive calculus course program that carefully integrates and coordinates
print, media, and technology products for successful teaching and learning.
About the Author
James Stewart received his M.S. from Stanford University and his Ph.D. from the University of Toronto. He did research at the University of London and was influenced by the famous mathematician
George Polya at Stanford University. Stewart is currently Professor of Mathematics at McMaster University, and his research field is harmonic analysis. Stewart is the author of a best-selling
calculus textbook series published by Cengage Learning Brooks/Cole, including CALCULUS, CALCULUS: EARLY TRANSCENDENTALS, and CALCULUS: CONCEPTS AND CONTEXTS, as well as a series of precalculus texts.
Table of Contents
Preface. To the Student. Diagnostic Tests. A Preview of Calculus. 1. FUNCTIONS AND MODELS. Four Ways to Represent a Function. Mathematical Models: A Catalog of Essential Functions. New Functions from
Old Functions. Graphing Calculators and Computers. Exponential Functions. Inverse Functions and Logarithms. Parametric Curves. Laboratory Project: Running Circles around Circles. Review. Principles
of Problem Solving. 2. LIMITS AND DERIVATIVES. The Tangent and Velocity Problems. The Limit of a Function. Calculating Limits Using the Limit Laws. Continuity. Limits Involving Infinity. Derivatives
and Rates of Change. Writing Project: Early Methods for Finding Tangents. The Derivative as a Function. What Does f′ Say about f? Review. Focus on Problem Solving. 3. DIFFERENTIATION RULES.
Derivatives of Polynomials and Exponential Functions. Applied Project: Building a Better Roller Coaster. The Product and Quotient Rules. Derivatives of Trigonometric Functions. The Chain Rule.
Laboratory Project: Bezier Curves. Applied Project: Where Should a Pilot Start Descent? Implicit Differentiation. Inverse Trigonometric Functions and their Derivatives. Derivatives of Logarithmic
Functions. Discovery Project: Hyperbolic Functions. Rates of Change in the Natural and Social Sciences. Linear Approximations and Differentials. Laboratory Project: Taylor Polynomials. Review. Focus
on Problem Solving. 4. APPLICATIONS OF DIFFERENTIATION. Related Rates. Maximum and Minimum Values. Applied Project: The Calculus of Rainbows. Derivatives and the Shapes of Curves. Graphing with
Calculus and Calculators. Indeterminate Forms and l'Hospital's Rule. Writing Project: The Origins of l'Hospital's Rule. Optimization Problems. Applied Project: The Shape of a Can. Newton's Method.
Antiderivatives. Review. Focus on Problem Solving. 5. INTEGRALS. Areas and Distances. The Definite Integral. Evaluating Definite Integrals. Discovery Project: Area Functions. The Fundamental Theorem
of Calculus. Writing Project: Newton, Leibniz, and the Invention of Calculus. The Substitution Rule. Integration by Parts. Additional Techniques of Integration. Integration Using Tables and Computer
Algebra Systems. Discovery Project: Patterns in Integrals. Approximate Integration. Improper Integrals. Review. Focus on Problem Solving. 6. APPLICATIONS OF INTEGRATION. More about Areas. Volumes.
Discovery Project: Rotating on a Slant. Volumes by Cylindrical Shells. Arc Length. Discovery Project: Arc Length Contest. Average Value of a Function. Applied Project: Where To Sit at the Movies.
Applications to Physics and Engineering. Discovery Project: Complementary Coffee Cups. Applications to Economics and Biology. Probability. Review. Focus on Problem Solving. 7. DIFFERENTIAL EQUATIONS.
Modeling with Differential Equations. Direction Fields and Euler's Method. Separable Equations. Applied Project: How Fast Does a Tank Drain? Applied Project: Which Is Faster, Going Up or Coming Down?
Exponential Growth and Decay. Applied Project: Calculus and Baseball. The Logistic Equation. Predator-Prey Systems. Review. Focus on Problem Solving. 8. INFINITE SEQUENCES AND SERIES. Sequences.
Laboratory Project: Logistic Sequences. Series. The Integral and Comparison Tests; Estimating Sums. Other Convergence Tests. Power Series. Representations of Functions as Power Series. Taylor and
Maclaurin Series. Laboratory Project: An Elusive Limit. Writing Project: How Newton Discovered the Binomial Series. Applications of Taylor Polynomials. Applied Project: Radiation from the Stars.
Review. Focus on Problem Solving. 9. VECTORS AND THE GEOMETRY OF SPACE. Three-Dimensional Coordinate Systems. Vectors. The Dot Product. The Cross Product. Discovery Project: The Geometry of a
Tetrahedron. Equations of Lines and Planes. Laboratory Project: Putting 3D in Perspective. Functions and Surfaces. Cylindrical and Spherical Coordinates. Laboratory Project: Families of Surfaces.
Review. Focus on Problem Solving. 10. VECTOR FUNCTIONS. Vector Functions and Space Curves. Derivatives and Integrals of Vector Functions. Arc Length and Curvature. Motion in Space: Velocity and
Acceleration. Applied Project: Kepler's Laws. Parametric Surfaces. Review. Focus on Problem Solving. 11. PARTIAL DERIVATIVES. Functions of Several Variables. Limits and Continuity. Partial
Derivatives. Tangent Planes and Linear Approximations. The Chain Rule. Directional Derivatives and the Gradient Vector. Maximum and Minimum Values. Applied Project: Designing a Dumpster. Discovery
Project: Quadratic Approximations and Critical Points. Lagrange Multipliers. Applied Project: Rocket Science. Applied Project: Hydro-Turbine Optimization. Review. Focus on Problem Solving. 12.
MULTIPLE INTEGRALS. Double Integrals over Rectangles. Iterated Integrals. Double Integrals over General Regions. Double Integrals in Polar Coordinates. Applications of Double Integrals. Surface Area.
Triple Integrals. Discovery Project: Volumes of Hyperspheres. Triple Integrals in Cylindrical and Spherical Coordinates. Applied Project: Roller Derby. Discovery Project: The Intersection of Three
Cylinders. Change of Variables in Multiple Integrals. Review. Focus on Problem Solving. 13. VECTOR CALCULUS. Vector Fields. Line Integrals. The Fundamental Theorem for Line Integrals. Green's
Theorem. Curl and Divergence. Surface Integrals. Stokes' Theorem. Writing Project: Three Men and Two Theorems. The Divergence Theorem. Summary. Review. Focus on Problem Solving. APPENDIXES. A.
Intervals, Inequalities, and Absolute Values. B. Coordinate Geometry. C. Trigonometry. D. Precise Definitions of Limits. E. A Few Proofs. F. Sigma Notation. G. Integration of Rational Functions by
Partial Fractions. H. Polar Coordinates. I. Complex Numbers. J. Answers to Odd-Numbered Exercises.
Does homology have a coproduct?
Standard algebraic topology defines the cup product which defines a ring structure on the cohomology of a topological space. This ring structure arises because cohomology is a contravariant functor
and the pullback of the diagonal map induces the product (using the Kunneth formula for full generality, I think.)
I've always been mystified about why a dual structure, perhaps an analogous (but less conventional) "co-product", is never presented for homology. Does such a thing exist? If not, why not, and if so,
is it such that the cohomology ring structure can be derived from it?
I am aware of the intersection products defined using Poincare duality, but I'm seeking a true dual to the general cup product, defined via homological algebra and valid for the all spaces with a
cohomology ring.
at.algebraic-topology gn.general-topology homology
3 Answers
The Eilenberg-Zilber theorem says that for singular homology there is a natural chain homotopy equivalence:
S[*](X) ⊗ S[*](Y) ≅ S[*](X × Y)
The map in the reverse direction is the Alexander-Whitney map. Therefore we obtain a map
S[*](X) → S[*](X × X) → S[*](X) ⊗ S[*](X)
which makes S[*](X) into a coalgebra.
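(For concreteness, and as an addition not present in the original answer: on a singular n-simplex σ the induced comultiplication is the usual front-face/back-face formula

Δ(σ) = Σ_{p+q=n} σ|[v_0,…,v_p] ⊗ σ|[v_p,…,v_n],

i.e. the composite of the map induced by the diagonal with the Alexander-Whitney map.)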
My source (Selick's Introduction to Homotopy Theory) then states that this gives H[*](X) the structure of a coalgebra. However, I think that the Kunneth formula goes the wrong way. The
Kunneth formula says that there is a short exact sequence of abelian groups:
0 → H[*](C) ⊗ H[*](D) → H[*](C ⊗ D) → Tor(H[*](C), H[*](D)) → 0
(the astute will complain about a lack of coefficients. Add them in if that bothers you)
This is split, but not naturally, and when it is split it may not be split as modules over the coefficient ring. To make H[*](X) into a coalgebra we need that splitting map. That requires H
[*](X) to be flat (in which case, I believe, it's an isomorphism).
That's quite a strong condition. In particular, it implies that cohomology is dual to homology.
Of course, if one works over a field then everything's fine, but then integral homology is so much more interesting than homology over a field.
In the situation for cohomology, only some of the directions are reversed, which means that the natural map is still from the tensor product of the cohomology groups to the cohomology of
the product. Since the diagonal map now gets flipped, this is enough to define the ring structure on H^*(X).
There are deeper reasons, though. Cohomology is a representable functor, and its representing object is a ring object (okay, graded ring object) in the homotopy category. That's the real
reason why H^*(X) is a ring (the Kunneth formula has nothing to do with defining this ring structure, by the way). It also means that cohomology operations (aka natural transformations)
are, by the Yoneda lemma, much more accessible than the corresponding homology operations (I don't know of any detailed study of such).
Rings and algebras, being varieties of algebras (in the sense of universal or general algebra) are generally much easier to study than coalgebras. Whether this is more because we have a
greater history and more experience, or whether they are inherently simpler is something I shall leave for another to answer. Certainly, I feel that I have a better idea of what a ring
looks like than a coalgebra. One thing that makes life easier is that often spectral sequences are spectral sequences of rings, which makes them simpler to deal with - the more structure,
the less room there is for things to get out of hand.
Added Later: One interesting thing about the coalgebra structure - when it exists - is that it is genuinely a coalgebra. There's no funny completions of the tensor product required. The
comultiplication of a homology element is always a finite sum.
Two particularly good papers that are worth reading are the ones by Boardman, and Boardman, Johnson, and Wilson in the Handbook of Algebraic Topology. Although the focus is on operations of
cohomology theories, the build-up is quite detailed and there's a lot about general properties of homology and cohomology theories there.
Added Even Later: One place where the coalgebra structure has been extremely successfully exploited is in the theory of cohomology cooperations. For a reasonably cohomology theory, the
cooperations (which are homology groups of the representing spaces) are Hopf rings, which are algebra objects in the category of coalgebras.
Is there any reason one couldn't have spectral sequences of coalgebras? – Ben Webster♦ Oct 13 '09 at 15:36
The short answer is that I don't see why not, but you'd need every term in the spectral sequence to be flat in order to get this. I'm also not so sure how much help it would be. The point
about rings is that once you know where x goes to then you know where x^2 goes to. But knowing where x goes to doesn't obviously tell you where everything in the comultiplication of x
goes to. – Andrew Stacey Oct 13 '09 at 17:58
Homology is not naturally a coalgebra unless you take field coefficients or unless your object has torsion free homology groups over the integers. The basic issue, as mentioned above, is
that even though you have a split exact universal coefficient sequence for the homology of a product, the splitting isn't natural. You don't actually need homology to be dual to cohomology
because that would involve some additional finiteness properties.
However, if your space has torsion-free homology with integer coefficients, then H_*(X;R) = H_*(X) ⊗ R for all R, and so you get a coalgebra structure on the homology of X with coefficients
in R simply as the base change of the one over the integers. If R is an algebra over a field then you get a coalgebra structure with no assumptions on X by base-change from said field.
I should probably point out that the Kunneth formula is more complicated than stated in a previous answer. There's an exact sequence
0 → H_*(C;Z) ⊗ H_*(D;M) → H_*(C ⊗ D;M) → Tor(H_*(C;Z), H_*(D;M)) → 0
but notice that one side involves integer coefficients and the other coefficients in a general module. If you want the universal coefficient theorem with the same coefficients on both sides
it takes the form of a spectral sequence with E_2-term
Tor^R_{p,q}(H_*(X;R), H_*(Y;R))
converging to H_*(X x Y;R). (The bigrading on Tor comes because we're taking Tor of graded modules.)
In general if E is a generalized homology-cohomology theory then flatness of E_* X over the ground ring E_* guarantees a coalgebra structure on the E-homology of X. This also may or may not
have anything to do with duality, because flatness and projectiveness are not the same.
As mentioned, you still do have a coalgebra structure on the chains of X (or the "E-homology object of X" in the stable homotopy category), and this is really just some kind of failure of
the homology groups to mimic what's going on behind the scenes.
The universal coefficient theorem wasn't stated in a previous answer. – Andrew Stacey Oct 16 '09 at 8:21
yargh, I meant the Kunneth formula. fixed. – Tyler Lawson Oct 16 '09 at 12:01
Also, although this is the most general form for chains, for singular cohomology then it's a little elaborate, isn't it? After all, singular chains are free (by definition!) so the
complication of coefficients doesn't arise. Or am I missing something? – Andrew Stacey Oct 21 '09 at 9:02
You need that singular chains are free to get the conclusion about the Tor spectral sequence in the first place; the spectral sequence is a general one computing H_*(C ⊗_R D) from H_* C
and H_* D when C and D are (nonnegative) chain complexes of R-modules with one of them levelwise free. One example is to look at the mod-4 homology of RP^2 x RP^2 from the mod-4 homology
of its factors. Having said that, the spectral sequences you get always collapse at E_3 because everything is arising from integral coefficients, but if you leave the higher Tors out it
doesn't work. – Tyler Lawson Oct 21 '09 at 12:40
Yes, homology is a coalgebra, at least with coefficients over a field, with the coproduct given by the pushforward X -> X x X. This is dual to the ring structure on cohomology, so they
determine each other.
I know this is included in my answer above, but so that answers are self-contained: for this to be always correct everything needs to be over a field right from the start. – Andrew Stacey
Oct 13 '09 at 17:59
This is the answer I'd have brought - with the added comment that the reason I hear most often /why/ we don't study the coalgebra of homology is that coalgebras are so much less studied
than algebras, and we don't have a firm intuition formed for coalgebras in the same way. – Mikael Vejdemo-Johansson Oct 13 '09 at 18:05
@ Andrew Does it have to be a field? Or is it enough that all Tor vanish? And for that matter, does 'is a field' follow from 'all Tor vanish'? – Mikael Vejdemo-Johansson Oct 13 '09 at
Well, semi-simple assures that all modules are flat. As this question of Anton's shows, any other examples would be pretty pathological. mathoverflow.net/questions/208/… – Ben Webster♦ Oct
13 '09 at 18:39
I'd need to look it up to be sure, but the slogan that I've absorbed from algebraic topology is that to have "good" behaviour for all spaces then you need the coefficient ring to be a
(graded) field. But "good" generally means Kunneth formula and cohomology dual to homology, rather than just one of them. I don't know enough about homological algebra to state the
conditions precisely, though, without looking them up (the Boardman et al papers are a good reference, btw). But that's why the Morava K-theories are so popular: they are the only ones
where the coefficient ring is a graded field. – Andrew Stacey Oct 14 '09 at 6:45
What mathematicians get up to
Issue 2
May 1997
After 5,000 years, the game of Nine Men's Morris has succumbed to the power of modern computing. William Hartston looks at other recent mathematical discoveries in the world of games.
Ancient games from Iceland, India and England
hala-tafl (mentioned in the 14th-century Grettis Saga); White's 13 geese try to stifle the black fox to death before it "eats" them by jumping over them.
Pulijudam (the tiger game): three tigers fight against 15 lambs on a 23-square board. The tigers can jump over (and "eat") the lambs; the lambs must try to crowd the tigers to prevent them from moving.
"The fold stands empty in the drowned field
And the crows are fatted with the murrain flock;
The Nine Men's Morris is filled up with mud,
And the quaint mazes in the wanton green,
For lack of tread are indistinguishable."
-- 'A Midsummer Night's Dream' Act 2, scene 1.
"Nine Men's Morris is a draw."
-- Ralph Gasser, Games of No Chance, 1996.
As computers have become faster and faster, their capacity has grown to analyse traditional games to extinction. While chess and Go still contain vastly more possibilities than any computer can hope
to analyse exhaustively, other games, in which the space of possible positions is less huge, can now be played perfectly by machine.
We are not talking here simply about machines playing better than humans. When mathematicians play games they will be satisfied by nothing less than perfection, and a recently published book shows
how perfection has been achieved in a number of games, and how close to or far away from it machines still are in others.
Games of No Chance, edited by Richard J Nowakowski (Cambridge University Press, £40), is a series of articles based on the seminars given at a workshop in 1994 on the branch of mathematics that is
known as combinatorial game theory.
Mathematicians are quite clear about what comprises a game:
1. There are two players;
2. They move alternately;
3. There are no dice or other chance devices.
4. They both have perfect information about all the elements of the game;
5. The game cannot go on for ever;
6. There are no draws;
7. The last player to move wins.
So that cuts out bridge (by rule 4) and chess (by rule 6) and backgammon (by rule 3) for a start, but just to show how magnanimous they are towards our less rigorous games, the workshop participants
spend a good deal of their time looking into traditional, faulty games, as well as those that satisfy the above definition.
One such impure game is Nine Men's Morris, which has been around since 1400BC. Each player starts with nine men, which are placed in turn on a board of 24 squares (see illustration). When all 18 men
have been placed on the board, play continues by moving them a square at a time. The object, both in the initial phase and the later play, is to form a straight (not diagonal) row of three men. Each
time this is accomplished, one of the opponent's men may be removed from the board.
Now here's the abstract of Ralph Gasser's paper from the book:
"We describe the combination of two search methods used to solve Nine Men's Morris. An improved analysis algorithm computes endgame databases comprising about 10 billion states. An 18-ply alpha-beta
search then used these databases to prove that the value of the initial position is a draw. Nine Men's Morris is the first non-trivial game to be solved that does not seem to benefit from
knowledge-based methods."
In other words, they got the machine to work out and tabulate 10 billion positions that they knew were a win for one side or the other, then worked forward 18 moves from the beginning of the game
until their opening analysis met their endgame analysis. Result, a recipe for perfect play that guarantees either side freedom from defeat.
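Gasser's actual program is not reproduced in the article, but the alpha-beta idea it mentions is easy to sketch. The following is a minimal, game-agnostic illustration in Python; legal_moves, play and evaluate are placeholders that a particular game would have to supply, and in Gasser's setting evaluate would amount to a lookup in the precomputed endgame databases:

# Placeholders (not part of Gasser's solver):
#   legal_moves(state) -> list of moves, empty when the game is over
#   play(state, move)  -> resulting position
#   evaluate(state)    -> score for the maximizing player

def alpha_beta(state, depth, alpha, beta, maximizing):
    """Plain alpha-beta search: minimax with pruning of branches that
    cannot change the final result."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alpha_beta(play(state, move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent already has a better option elsewhere
                break
    else:
        best = float("inf")
        for move in moves:
            best = min(best, alpha_beta(play(state, move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
    return best

An 18-ply search of this kind, meeting the 10 billion tabulated endgame positions coming the other way, is essentially the combination the abstract describes.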
As you may have guessed by now, some of the mathematics is formidable, and some of the numbers involved are enormous. For example, another game that has succumbed to computer power is Pentominoes.
You start with an empty chessboard and the 12 regular pentominoes - the distinct shapes that can be formed from five connected squares. At each turn a player selects a pentomino and places it in an
empty space on the board, not overlapping any previously placed piece. Each pentomino may be used only once. The first player who cannot move, loses.
Hilarie Korman's paper "Pentominoes: A First Player Win" proves precisely that - and it took a Sun IPC Sparcstation five days of solid computing to do it. As she says, however, the program did have
to look at 22 billion board positions to reach its conclusion.
As for the game of draughts, you may be interested to know that the total number of possible positions in the game is 500,995,484,682,338,672,639, and even that may soon not be beyond the largest
None of this, however, is likely to stop determined humans from playing all the above-mentioned games. So turn your computers' faces to the wall and have a go at two particularly ancient ones,
illustrated above.
Hala-tafl (the fox game) is played on the board above. Both the fox (the black piece) and geese (white ones) can move one square along a line to a vacant point. The fox may also jump over a goose (as
in draughts) to land on the point on the other side - and if it does so, the goose is "eaten" and removed from the board. The fox can eat more than one goose in a single move, by a series of
connected leaps. The geese cannot jump over the fox, but they can win by crowding him to death, by depriving him of moves. This game has been known for a long time to be a win for the geese, but it
may be an improvement to increase the number of geese to 17 and not let them move backward.
Pulijudam (the tiger game)
The Indian game of Lambs and Tigers (above) is similar: tigers jump and eat, lambs cannot capture, but you start with an empty board on which the three tigers and 15 lambs must be placed in the early stages.
The tigers must be put on the three white squares; none of the lambs may move until all 15 have been placed on the board. Off you go - and remember, no computing. | {"url":"http://plus.maths.org/content/what-mathematicians-get","timestamp":"2014-04-17T18:53:17Z","content_type":null,"content_length":"29694","record_id":"<urn:uuid:94cc410c-cda3-49c7-9939-48a5d9db3a39>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plane (geometry)
From Citizendium, the Citizens' Compendium
This article is about the geometrical concept. For other uses of the term Plane, please see Plane (disambiguation).
In Euclidean geometry, a plane is an abstract concept that models the common notion of a flat surface — without depressions, protrusions, holes and a boundary — that for any two of its points
entirely contains the straight line joining them.
Assuming a common (intuitive, physical) idea of the geometry of space, "plane" can be defined in terms of distances, orthogonality, lines, coordinates etc. In a more abstract approach (vector spaces)
planes are defined as two-dimensional affine subspaces. In an axiomatic approach, basic concepts of elementary geometry, such as "point", "line" and "plane", are undefined primitives.
Non-axiomatic approach
A remark
To define a plane is more complicated than it may seem.
It is tempting to define a plane as a surface with zero curvature, where a surface is defined as a geometric object having length and breadth but no depth. However, this is not a good idea; such
definitions are useless in mathematics, since they cannot be used when proving theorems. Planes are treated by elementary geometry, but the notions of surface and curvature are not elementary, they
need more advanced mathematics and more sophisticated definitions. Fortunately, it is possible to define a plane via more elementary notions, and this way is preferred in mathematics. Still, the
definitions given below are tentative. They are criticized afterwards, see axiomatic approach.
The definitions of "plane" given below may be compared with the definition of a circle as consisting of those points in a plane that are a given distance (the radius) away from a given point (the
center). A circle is a set of points chosen according to their relation to some given parameters (center and radius). Similarly, a plane is a set of points chosen according to their relation to some
given objects (points, lines etc). However, a circle determines its center and radius uniquely; for a plane, the situation is different.
Four equivalent definitions of "plane" are given below. Any other definition is equally acceptable provided that it is equivalent to these. Note that a part of a plane is not a plane. Likewise, a
line segment is not a line.
Below, all points, lines and planes are situated in the space (assumed to be a three-dimensional Euclidean space), and by lines we mean straight lines.
Definition via distances
Let two different points A and B be given. The set of all points C that are equally far from A and B — that is,
— is a plane.
This is the plane orthogonal to the line AB through the middle point of the line segment AB.
Definition via right angles (orthogonality)
Let two different points A and B be given. The set of all points C such that the lines AB and AC are orthogonal (that is, the angle BAC is right) is a plane.
This is the plane orthogonal to the line AB through the point A.
Definition via lines
Let three points A, B and C be given, not lying on a line. Consider the lines DE for all points D (different from B) on the line AB and all points E (also different from B) on the line BC. The union
of all these lines, together with the point B, is a plane.
This is the plane through A, B and C.
In other words, this plane is the set of all points F such that either F coincides with B or there exists a line through F that intersects the lines AB and BC (in distinct points).
Definition via Cartesian coordinates
In terms of Cartesian coordinates x, y, z ascribed to every point of the space, a plane is the set of points whose coordinates satisfy the linear equation
$ax+by+cz = d$.
Here real numbers a, b, c and d are parameters such that at least one of a, b, c does not vanish.
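As a simple illustration of how the definitions fit together, take $a=b=0$, $c=1$ and $d=0$: the equation reduces to $z=0$, the plane containing the x- and y-axes. The same plane arises from the definition via distances with $A=(0,0,1)$ and $B=(0,0,-1)$, since

$|CA| = |CB| \quad\Longleftrightarrow\quad x^2+y^2+(z-1)^2 = x^2+y^2+(z+1)^2 \quad\Longleftrightarrow\quad z = 0 .$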
Some properties of planes
Most basic properties
For any three points not situated in the same straight line there exists one and only one plane that contains these three points.
If two points of a straight line lie in a plane, then every point of the line lies in that plane.
If two planes have a common point then they have at least a second point in common.
Every plane contains at least three points not lying in the same straight line, and the space contains at least four points not lying in a plane.
Further properties
Two planes either do not intersect (are parallel), or intersect in a line, or coincide.
A line either does not intersect a plane (is parallel to it), or intersects it in a single point, or is contained in the plane.
Two lines perpendicular to the same plane are parallel to each other (or coincide).
Two planes perpendicular to the same line are parallel to each other (or coincide).
Axiomatic approach
What is wrong with the definitions given above?
The definitions given above assume implicitly that the 3-dimensional Euclidean space is already defined, together with (at least one of) such notions as distances, angles, straight lines, Cartesian
coordinates, while planes are not defined yet. However, this situation never appears in mathematical theory.
In the axiomatic approach points, lines and planes are undefined primitives.
The modern approach (below) defines planes in a completely different way.
How does it work
Axiomatic approach is similar to chess in the following aspect.
A chess piece, say a rook, cannot be defined before the whole chess game is defined, since such a phrase as "the rook moves horizontally or vertically, forward or back, through any number of
unoccupied squares" makes no sense unless it is already known that "chess is played on a square board of eight rows and eight columns" etc. And conversely, the whole chess game cannot be defined
before each piece is defined; the properties of the rook are an indispensable part of the rules of the game. No chess without rooks, no rooks outside chess! One must introduce the game, its pieces
and their properties in a single combined definition.
Likewise, Euclidean space, its points, lines, planes and their properties are introduced simultaneously in a set of 20 assumptions known as Hilbert's axioms of Euclidean geometry.^[1] The "most basic
properties of planes" listed above are roughly the plane-related assumptions (Hilbert's axioms), while "further properties" are the first plane-related consequences (theorems).
Modern approach
The modern approach defines the three-dimensional Euclidean space more algebraically, via linear spaces and quadratic forms, namely, as a real affine space whose difference space is a
three-dimensional inner product space. For further details see Affine space#Euclidean space and space (mathematics).
In this approach a plane in an n-dimensional affine space (n ≥ 2) is defined as a (proper or improper) two-dimensional affine subspace.
A less formal version of this approach uses points, vectors and scalar product (called also dot product or inner product) of vectors without mentioning linear and affine spaces. Optionally, Cartesian
coordinates of points and vectors are used. See algebraic equations below. There, in particular, equivalence between the definition via right angles (orthogonality) and the definition via Cartesian
coordinates is explained.
Plane geometry
Plane geometry (also called "planar geometry") is a part of solid geometry that restricts itself to a single plane ("the plane") treated as a geometric universe. In other words, plane geometry is the
theory of the two-dimensional Euclidean space, while solid geometry is the theory of the three-dimensional Euclidean space.
Plane geometry studies the properties of plane figures (and configurations). Plane figures in elementary geometry are sets of points, lines, line segments and sometimes curves that fall on the same
plane. For example, triangles, polygons and circles. In plane geometry every figure is plane, in contrast to solid geometry.
Algebraic equations
In analytic geometry several closely related algebraic equations are known for a plane in three-dimensional Euclidean space. A few algebraic representations will be discussed.
Point-normal representation
One such equation is illustrated in the figure. Point P is an arbitrary point in the plane and O (the origin) is drawn outside the plane. The point A in the plane is chosen such that vector
$\vec{d} = \overrightarrow{OA}$
is orthogonal to the plane. The collinear vector
$\vec{n}_0 = \frac{1}{d} \vec{d} \quad \hbox{with}\quad d = \left|\vec{d}\,\right|$
is a unit (length 1) vector normal (perpendicular) to the plane which is known as the normal of the plane in point A. Note that d is the distance of O to the plane. The following relation holds for
an arbitrary point P in the plane (according to the definition via right angles):
$\left(\vec{r}-\vec{d}\;\right)\cdot \vec{n}_0 = 0 \quad\hbox{with}\quad \vec{r} = \overrightarrow{OP}\quad\hbox{and}\quad \vec{r}-\vec{d} = \overrightarrow{AP} .$
This equation for the plane can be rewritten in terms of coordinates with respect to a Cartesian frame with origin in O. Dropping arrows for component vectors (real triplets) that are written bold,
we find
$\left( \mathbf{r} - \mathbf{d}\right)\cdot \mathbf{n}_0 = 0 \Longleftrightarrow \mathbf{r} \cdot \mathbf{n}_0 = \mathbf{d}\cdot \mathbf{n}_0 \Longleftrightarrow x a_0 +y b_0+z c_0 = d$
$\mathbf{d} = (a,\;b,\; c), \quad \mathbf{n}_0 = (a_0,\;b_0,\; c_0), \quad \mathbf{r} = (x,\;y,\; z),$
$\mathbf{d}\cdot \mathbf{n}_0 = \mathbf{d}\cdot \frac{1}{d} \mathbf{d} = \frac{1}{d} \mathbf{d}^2 = \frac {d^2}d = d = \sqrt{a^2+b^2+c^2}.$
The definition via Cartesian coordinates is thus derived from the definition via right angles.
Moreover, the Hesse normal form for the plane (called after the 19th century mathematician Ludwig Otto Hesse) is obtained,
$\mathbf{r}\cdot\mathbf{n}_0 = d;$
it is characterized by the use of a unit-length vector $\mathbf{n}_0$ rather than an arbitrary vector orthogonal to the plane.
Conversely, the definition via right angles can be derived from the definition via Cartesian coordinates as follows. Given a linear equation for a plane
$ax+by+cz = e, \,$
we write
$\mathbf{r} = (x,\;y,\; z), \quad\mathbf{f} = (a,\;b,\; c), \quad\hbox{and}\quad \mathbf{d} = \left(\frac{e}{a^2+b^2+c^2}\right) \mathbf{f}.$
It follows that
$\mathbf{f}\cdot\mathbf{r} = e = \mathbf{f}\cdot \mathbf{d}.$
Hence we find the same orthogonality relation,
$\mathbf{f}\cdot(\mathbf{r}-\mathbf{d}) = 0 \;\Longrightarrow\; (\mathbf{r}-\mathbf{d})\cdot\mathbf{n}_0 = 0 \quad\hbox{with}\quad \mathbf{n}_0 = \frac{1}{\sqrt{a^2+b^2+c^2}}\mathbf{f}$
where f , d, and n[0] are collinear. The equation may also be written in the following mnemonically convenient form
$\mathbf{d}\cdot(\mathbf{r}-\mathbf{d}) = 0,$
which is the equation for a plane through a point A perpendicular to $\overrightarrow{OA}=\vec{d}$.
Three-point representation
The figure shows a plane that by definition passes through three different points A, B, and C that are not on one line. The point P is an arbitrary point in the plane and the reference point O is
again drawn outside the plane, but the case that the plane passes through O is not excluded. Referring to figure 2 we introduce the following definitions
$\vec{a} = \overrightarrow{OA},\quad \vec{b} = \overrightarrow{OB},\quad\vec{c} = \overrightarrow{OC},\quad \vec{r} = \overrightarrow{OP}.$
Clearly the following two non-collinear vectors belong to the plane
$\vec{u} = \overrightarrow{AB}= \vec{b}-\vec{a} ,\quad \vec{v} = \overrightarrow{AC}= \vec{c}-\vec{a}.$
Because a plane (an affine space), with a given fixed point as origin is a 2-dimensional linear space and two non-collinear vectors with "tails" in the origin are linearly independent, it follows
that any vector in the plane can be written as a linear combination of these two non-collinear vectors. (This is also expressed as: Any vector in the plane can be decomposed into components along the
two non-collinear vectors.) In particular, taking A as origin in the plane,
$\overrightarrow{AP}= \vec{r}-\vec{a} = \lambda \vec{u} + \mu\vec{v},\qquad \lambda,\mu \in \mathbb{R}.$
The real numbers λ and μ specify the direction of $\overrightarrow{AP}$. Hence the following equation for the position vector $\vec{r}$ of the arbitrary point P in the plane:
$\vec{r} = \vec{a} + \lambda \vec{u} + \mu\vec{v}$
is known as the point-direction representation of the plane. This representation is equal to the three-point representation
$\vec{r} = \vec{a}+ \lambda (\vec{b}-\vec{a}) + \mu(\vec{c}-\vec{a}),$
where $\vec{a}$, $\vec{b}$, and $\vec{c}$ are the position vectors of the three points that define the plane.
Writing for the position vector of the arbitrary point P in the plane
$\vec{r} = (1-\lambda-\mu)\, \vec{a}+ \lambda\, \vec{b} + \mu\,\vec{c} \;\equiv\; \xi_1\, \vec{a} +\xi_2\,\vec{b} + \xi_3\, \vec{c}\; ,$
we find that the real triplet (ξ[1], ξ[2], ξ[3]) with ξ[1] + ξ[2] + ξ[3] = 1 forms a set of coordinates for P. The numbers {ξ[1], ξ[2], ξ[3] | ξ[1]+ ξ[2]+ ξ[3] = 1 } are known as the barycentric
coordinates of P. It is trivial to go from barycentric coordinates to the "three-point representation",
$\vec{r} = \xi_1 \vec{a} + \xi_2\vec{b} + \xi_3 \vec{c}\quad\hbox{with}\quad \xi_1 = 1- \xi_2-\xi_3 \;\Longleftrightarrow\; \vec{r} = \vec{a} + \xi_2 (\vec{b}-\vec{a}) + \xi_3(\vec{c}-\vec{a}).$
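A short worked example (the points are chosen here only for illustration): take $A=(1,0,0)$, $B=(0,1,0)$ and $C=(0,0,1)$. Then

$\vec{u} = \vec{b}-\vec{a} = (-1,1,0), \qquad \vec{v} = \vec{c}-\vec{a} = (-1,0,1),$

and the three-point representation gives $\vec{r} = (1-\lambda-\mu,\;\lambda,\;\mu)$, so the coordinates of every point of this plane satisfy $x+y+z=1$, in agreement with the definition via Cartesian coordinates.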
Beyond mathematics
In industry, a surface plate^[2] is a piece of cast iron or other appropriate material whose surface (or rather a part of it) is made as close as possible to a geometric plane (or rather a part of
it, usually a square). An old method of their manufacturing is the three-plate method: three roughly flat surfaces become more and more flat when rubbing against each other: first and second; second
and third; third and first; first and second again, and so on. It is possible to achieve a surface close to a plane up to 10^–5 of its size.
1. ↑ D. Hilbert, Grundlagen der Geometrie, B. G. Teubner, Leipzig (1899) 2nd German edition
2. ↑ Miller, Jimmie A. (5 October 2004). Surface Plate. University of North Carolina. Retrieved on 29 October 2013. | {"url":"http://en.citizendium.org/wiki/Plane_(geometry)","timestamp":"2014-04-19T14:32:07Z","content_type":null,"content_length":"54714","record_id":"<urn:uuid:db013920-0062-4d9a-bde1-f034d7c36f71>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: PERIODIC FLAT MODULES, AND FLAT MODULES FOR FINITE GROUPS
D. J. BENSON AND K. R. GOODEARL
Abstract. If R is a ring of coefficients and G a finite group, then a flat RG-module which is projective as an R-module is necessarily projective as an RG-module. More generally, if H is a subgroup of finite index in an arbitrary group Γ, then a flat RΓ-module which is projective as an RH-module is necessarily projective as an RΓ-module. This follows from a generalization of the first theorem to modules over strongly G-graded rings. These results are proved using the following theorem about flat modules over an arbitrary ring S: If a flat S-module M sits in a short exact sequence 0 → M → P → M → 0 with P projective, then M is projective. Some other properties
of flat and projective modules over group rings of finite groups, involving reduction modulo primes,
are also proved.
1. Introduction
In the representation theory of finite groups, a great deal of attention has been given to the
problem of determining whether a module over a group ring is projective. For example, a well
known theorem of Chouinard [13] states that a module is projective if and only if its restriction
to each elementary abelian subgroup is projective. A theorem of Dade [16] states that over an
algebraically closed field of characteristic p, a finitely generated module for an elementary abelian
p-group is projective if and only if its restriction to each cyclic shifted subgroup is projective,
where a cyclic shifted subgroup is a certain kind of cyclic subgroup of the group algebra. For an
infinitely generated module, the statement is no longer valid, but in [8] it is proved that an infinitely | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/685/3725609.html","timestamp":"2014-04-20T12:05:06Z","content_type":null,"content_length":"8808","record_id":"<urn:uuid:d24fecaf-2f04-4ffe-89ba-940e3a260a9c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
Heron's Formula
I have to use methods to write a program to allow the user to input the 3 lengths of their triangle and then the program must output the area of the triangle using Heron's Formula to calculate the
area. I think my code so far is correct because there are no errors but I keep getting the area =0. I think there may be something wrong with my calculation steps. If somebody could help me it would
be much appreciated!
import java.util.Scanner;
public class HeronsFormula {
public static void main(String[] args){
Scanner input = new Scanner(System.in);
System.out.print("Enter the length of the three sides of your triangle: ");
double a = input.nextDouble();
double b = input.nextDouble();
double c = input.nextDouble();
System.out.println("The lengths of the sides of your triangle are: " + '\n' + a + '\n' + b + '\n' + c);
System.out.println("The are of your triangle according to Heron's Formula is: " + getArea(a,b,c));
public static double getArea(double a, double b, double c) {
double s = (a + b + c)/2;
double x = ((s) * (s-a) * (s-b) * (s-c));
double area = Math.sqrt(x);
return area; | {"url":"http://www.dreamincode.net/forums/topic/293893-java-code-using-to-calculate-the-area-of-a-triangle-wherons-formula/","timestamp":"2014-04-18T22:45:36Z","content_type":null,"content_length":"98044","record_id":"<urn:uuid:14668022-163a-48e8-bea8-859c1ec298c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rebinding do notation for indexed monads
up vote 13 down vote favorite
I was following Conor McBride's "Kleisli arrows of outrageous fortune" paper and I've posted my implementation of his code here. Briefly, he defines the following types and classes:
type a :-> b = forall i . a i -> b i
class IFunctor f where imap :: (a :-> b) -> (f a :-> f b)
class (IFunctor m) => IMonad m where
skip :: a :-> m a
bind :: (a :-> m b) -> (m a :-> m b)
data (a := i) j where
V :: a -> (a := i) i
Then he defines two types of binds, the latter of which uses (:=) to restrict the initial index:
-- Conor McBride's "demonic bind"
(?>=) :: (IMonad m) => m a i -> (a :-> m b) -> m b i
(?>=) = flip bind
-- Conor McBride's "angelic bind"
(>>=) :: (IMonad m) => m (a := j) i -> (a -> m b j) -> m b i
m >>= f = bind (\(V a) -> f a) m
The latter bind works perfectly fine for rebinding do notation to use indexed monads with the RebindableSyntax extension, using the following corresponding definitions for return and fail:
return :: (IMonad m) => a -> m (a := i) i
return = skip . V
fail :: String -> m a i
fail = error
... but the problem is that I cannot get the former bind (i.e. (?>=)) to work. I tried instead defining (>>=) and return to be:
(>>=) :: (IMonad m) => m a i -> (a :-> m b) -> m b i
(>>=) = (?>=)
return :: (IMonad m) => a :-> m a
return = skip
Then I created a data type guaranteed to inhabit a specific index:
data Unit a where
Unit :: Unit ()
But when I try to rebind do notation using the new definitions for (>>=) and return, it does not work, as demonstrated in the following example:
-- Without do notation
test1 = skip Unit >>= \Unit -> skip Unit
-- With do notation
test2 = do
Unit <- skip Unit
skip Unit
test1 type-checks, but test2 does not, which is weird, since I thought all that RebindableSyntax did was let do notation desugar test2 to test1, so if test1 type-checks, then why does not test2? The
error I get is:
Couldn't match expected type `t0 -> t1'
with actual type `a0 :-> m0 b0'
Expected type: m0 a0 i0 -> (t0 -> t1) -> m Unit ()
Actual type: m0 a0 i0 -> (a0 :-> m0 b0) -> m0 b0 i0
In a stmt of a 'do' block: Unit <- skip Unit
In the expression:
do { Unit <- skip Unit;
skip Unit }
The error remains even when I use the explicit forall syntax instead of the :-> type operator.
Now, I know there is another problem with using the "demonic bind", which is that you can't define (>>), but I still wanted to see how far I could go with it. Can anybody explain why I cannot get GHC
to desugar the "demonic bind", even when it would normally type-check?
add comment
1 Answer
active oldest votes
IIUC, the GHC desugarer actually runs after the typechecker (source). That explains why the situation you observe is theoretically possible. The typechecker probably has some
special typing rules for the do-notation, and those may be inconsistent with what the typechecker would do with the desugarred code.
up vote 4 down
vote accepted Of course, it's reasonable to expect them to be consistent, so I would recommend filing a GHC bug.
1 Thanks for the link. I will check this out. If they agree that's the reason for the type error I'll accept your answer. – Gabriel Gonzalez Jun 16 '12 at 0:32
2 I'm also keen to know what's going on. I faced the same issue, but was less agitated. I expect the demonic polymorphism was unexpected: it's surprised lots of people. –
pigworker Jun 16 '12 at 0:42
add comment
Not the answer you're looking for? Browse other questions tagged haskell or ask your own question. | {"url":"http://stackoverflow.com/questions/11042685/rebinding-do-notation-for-indexed-monads","timestamp":"2014-04-18T01:42:04Z","content_type":null,"content_length":"67854","record_id":"<urn:uuid:0072759c-b3bf-41c3-8a63-cf9fa05bf75e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using SPSS for One Way Analysis of Variance
This tutorial will show you how to use SPSS version 12 to perform a one-way, between- subjects analysis of variance and related post-hoc tests.
This tutorial assumes that you have:
• Downloaded the standard class data set (click on the link and save the data file)
• Started SPSS (click on Start | Programs | SPSS for Windows | SPSS 12.0 for Windows)
• Loaded the standard data set
The one way analysis of variance (ANOVA) is an inferential statistical test that allows you to test if any of several means are different from each other. It assumes that the dependent variable has
an interval or ratio scale, but it is often also used with ordinally scaled data.
In this example, we will test if the response to the question "If you could not be a psychology major, which of these majors would you choose? (Math, English, Visual Arts, or History)" influences the
person's GPAs. We will follow the standard steps for performing hypothesis tests:
1. Write the null hypothesis:
H[0]: µ[Math] = µ[English] = µ[Visual Arts] = µ[History]
Where µ represents the mean GPA.
2. Write the alternative hypothesis:
H[1]: not H[0]
(Remember that the alternative hypothesis must be mutually exclusive and exhaustive of the null hypothesis.)
3. Specify the α level: α = .05
4. Determine the statistical test to perform: In this case, GPA is approximately ratio scaled, and we have multiple (4) groups, so the between-subjects ANOVA is appropriate.
5. Calculate the appropriate statistic:
SPSS assumes that the independent variable (technically a quasi-independent variable in this case) is represented numerically. In the sample data set, MAJOR is a string. So we must first convert
MAJOR from a string variable to a numerical variable. See the tutorial on transforming a variable to learn how to do this. We need to automatically recode the MAJOR variable into a variable
called MAJORNUM.
Once you have recoded the independent variable, you are ready to perform the ANOVA. Click on Analyze | Compare Means | One-Way ANOVA:
The One-Way ANOVA dialog box appears:
In the list at the left, click on the variable that corresponds to your dependent variable (the one that was measured.) Move it into the Dependent List by clicking on the upper arrow button. In
this example, the GPA is the variable that we recorded, so we click on it and the upper arrow button:
Now select the (quasi) independent variable from the list at the left and click on it. Move it into the Factor box by clicking on the lower arrow button. In this example, the quasi-independent
variable is the recoded variable from above, MAJORNUM:
Click on the Post Hoc button to specify the type of multiple comparison that you would like to perform. The Post Hoc dialog box appears:
Consult your statistics text book to decide which post-hoc test is appropriate for you. In this example, I will use a conservative post-hoc test, the Tukey test. Click in the check box next to
Tukey (not Tukey's-b):
Click on the Continue Button to return to the One-Way ANOVA dialog box. Click on the Options button in the One-Way ANOVA dialog box. The One-Way ANOVA Options dialog box appears:
Click in the check box to the left of Descriptives (to get descriptive statistics), Homogeneity of Variance (to get a test of the assumption of homogeneity of variance) and Means plot (to get a
graph of the means of the conditions.):
Click on the Continue button to return to the One-Way ANOVA dialog box. In the One Way ANOVA dialog box, click on the OK button to perform the analysis of variance. The SPSS output window will
appear. The output consists of six major sections. First, the descriptive section appears:
For each dependent variable (e.g. GPA), the descriptives output gives the sample size, mean, standard deviation, minimum, maximum, standard error, and confidence interval for each level of the
(quasi) independent variable. In this example, there were 7 people who responded that they would be a math major if they could not be a psychology major, and their mean GPA was 3.144, with a
standard deviation of 0.496. There were 16 people who would be an English major if they could not be a psychology major, and their mean GPA was 2.937 with a standard deviation of 0.5788.
The Test of Homogeneity of Variances output tests H[0]: σ^2[Math] = σ^2[English] = σ^2[Art] = σ^2[History]. This is an important assumption made by the analysis of variance. To interpret this
output, look at the column labeled Sig. This is the p value. If the p value is less than or equal to your α level for this test, then you can reject the H[0] that the variances are equal. If the
p value is greater than α level for this test, then we fail to reject H[0] which increases our confidence that the variances are equal and the homogeneity of variance assumption has been met. The
p value is .402. Because the p value is greater than the α level, we fail to reject H[0] implying that there is little evidence that the variances are not equal and the homogeneity of variance
assumption may be reasonably satisfied.
The ANOVA output gives us the analysis of variance summary table. There are six columns in the output:
│ Column │ Description │
│Unlabeled │The first column describes each row of the ANOVA summary table. It tells us that the first row corresponds to the between-groups estimate of variance (the estimate that measures the│
│(Source of │effect and error). The between-groups estimate of variance forms the numerator of the F ratio. The second row corresponds to the within-groups estimate of variaince (the estimate of│
│variance) │error). The within-groups estimate of variance forms the denominator of the F ratio. The final row describes the total variability in the data. │
│Sum of │The Sum of squares column gives the sum of squares for each of the estimates of variance. The sum of squares corresponds to the numerator of the variance ratio. │
│Squares │ │
│ │The third column gives the degrees of freedom for each estimate of variance. │
│ │ │
│ │The degrees of freedom for the between-groups estimate of variance is given by the number of levels of the IV - 1. In this example there are four levels of the quasi-IV, so there │
│ │are 4 - 1 = 3 degrees of freedom for the between-groups estimate of variance. │
│df │ │
│ │The degrees of freedom for the within-groups estimate of variance is calculated by subtracting one from the number of people in each condition / category and summing across the │
│ │conditions / categories. In this example, there are 2 people in the Math category, so that category has 7 - 1 = 6 degrees of freedom. There are 16 people in the English category, so│
│          │conditions / categories. In this example, there are 7 people in the Math category, so that category has 7 - 1 = 6 degrees of freedom. There are 16 people in the English category, so│
│ │find there are 6 + 15 + 14 + 6 = 41 degrees of freedom for the within-groups estimate of variance. The final row gives the total degrees of freedom which is given by the total │
│ │number of scores - 1. There are 45 scores, so there are 44 total degrees of freedom. │
│ │The fourth column gives the estimates of variance (the mean squares.) Each mean square is calculated by dividing the sum of square by its degrees of freedom. │
│Mean Square│ │
│ │MS[Between-groups] = SS[Between-groups] / df[Between-groups] │
│ │MS[Within-groups] = SS[Within-groups] / df[Within-groups] │
│F │The fifth column gives the F ratio. It is calculated by dividing mean square between-groups by mean square within-groups. │
│ │F = MS[Between-groups] / MS[Within-groups] │
│ │The final column gives the significance of the F ratio. This is the p value. If the p value is less than or equal your α level, then you can reject H[0] that all the means are │
│Sig. │equal. In this example, the p value is .511 which is greater than the α level, so we fail to reject H[0]. That is, there is insufficient evidence to claim that some of the means may│
│ │be different from each other. │
We would write the F ratio as: The one-way, between-subjects analysis of variance failed to reveal a reliable effect of other major on GPA, F(3, 41) = 0.781, p = .511, MS[error] = 0.292, α = .05.
The 3 is the between-groups degrees of freedom, 41 is the within-groups degrees of freedom, 0.781 is the F ratio from the F column, .511 is the value in the Sig. column (the p value), and 0.292
is the within-groups mean square estimate of variance. (The same quantities are computed by hand in the short sketch after step 6.)
6. Decide whether to reject H[0]: If the p value associated with the F ratio is less than or equal to the α level, then you can reject the null hypothesis that all the means are equal. In this case,
the p value equals .511, which is greater than the α level (.05), so we fail to reject H[0].
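The arithmetic in the ANOVA table can also be reproduced outside SPSS. The short Python sketch below is not part of the tutorial and uses made-up GPA scores rather than the class data set; it only shows how the sums of squares, degrees of freedom, mean squares and F ratio fit together:

def one_way_anova(groups):
    """groups: dict mapping each level of the factor to a list of scores.
    Returns (F, df_between, df_within), computed as in the ANOVA table."""
    scores = [x for g in groups.values() for x in g]
    grand_mean = sum(scores) / len(scores)
    group_means = {k: sum(g) / len(g) for k, g in groups.items()}
    ss_between = sum(len(g) * (group_means[k] - grand_mean) ** 2
                     for k, g in groups.items())
    ss_within = sum((x - group_means[k]) ** 2
                    for k, g in groups.items() for x in g)
    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)
    ms_between = ss_between / df_between   # mean square = SS / df
    ms_within = ss_within / df_within
    return ms_between / ms_within, df_between, df_within

# Made-up GPAs for four hypothetical "other major" groups:
gpas = {"Math": [3.1, 3.4, 2.9], "English": [2.8, 3.0, 3.2],
        "Art": [3.3, 2.7, 3.1], "History": [2.9, 3.5, 3.0]}
print(one_way_anova(gpas))

The p value reported in the Sig. column would then be obtained from the F distribution with (df_between, df_within) degrees of freedom.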
When the F ratio is statistically significant, we need to look at the multiple comparisons output. Even though our F ratio is not statistically significant, we will look at the multiple comparisons
to see how they are interpreted.
The Multiple Comparisons output gives the results of the Post-Hoc tests that you requested. In this example, I requested Tukey multiple comparisons, so the output reflects that choice. Different
people have different opinions about when to look at the multiple comparisons output. One of the leading opinions is that the multiple comparison output is only meaningful if the overall F ratio is
statistically significant. In this example, it is not statistically significant, so technically I should not check the multiple comparisons output.
The output includes a separate row for each level of the independent variable. In this example, there are four rows corresponding to the four levels of the quasi-IV. Lets consider the first row, the
one with major equal to art. There are three sub-rows within in this row. Each sub-row corresponds to one of the other levels of the quasi-IV. Thus, there are three comparisons described in this row:
│ Comparison │ H[0] │ H[1] │
│Art vs English│H[0]: µ[Art] = µ[ English] │H[1]: µ[Art] ≠ µ[English]│
│Art vs History│H[0]: µ[Art] = µ[ History] │H[1]: µ[Art] ≠ µ[History]│
│Art vs Math │H[0]: µ[Art] = µ[ Math] │H[1]: µ[Art] ≠ µ[Math] │
The second column in the output gives the difference between the means. In this example, the difference between the GPA of the people who would be art majors and those who would be English majors is
0.2532. The third column gives the standard error of the mean. The fourth column is the p value for the multiple comparison. In this example, the p value for comparing the GPAs of people who would be
art majors with those who would be English majors is 0.565, meaning that it is unlikely that these means are different (as you would expect given that the difference (0.2532) is small.) If the p value is less than or equal to the α level, then you can reject the corresponding H[0]. In this example, the p value is .565 which is larger than the α level of .05, so we fail to reject H[0] that
the mean GPA of the people who would be art majors is different from the mean GPA of the people who would be English majors. The final two columns give you the 95% confidence interval.
The next part of the SPSS output (shown above) summarizes the results of the multiple comparisons procedure. Often there are several subset columns in this section of the output. The means listed in
each subset column are not statistically reliably different from each other. In this example, all four means are listed in a single subset column, so none of the means are reliably different from any
of the other means. That is not to say that the means are not different from each other, but only that we failed to observe a difference between any of the means. This is consistent with the fact
that we failed to reject the null hypothesis of the ANOVA.
The final part of the SPSS output is a graph showing the dependent variable (GPA) on the Y axis and the (quasi) independent variable (other major) on the X axis:
Because the quasi-independent variable is nominally scaled, the plot really should be a bar plot. Double click on the plot to invoke the SPSS Chart Editor:
In the Chart Editor, click on one of the data points:
In the Chart Editor, select Chart | Change Data Element Type | Simple Bar:
The new bar graph appears in the editor:
Make any other changes to the bar graph that you want. (See the tutorial on editing graphs if you don't remember how to make changes.)
Close the Chart Editor by selecting File | Close in the chart editor. | {"url":"http://academic.udayton.edu/gregelvers/psy216/SPSS/1wayanova.htm","timestamp":"2014-04-19T11:56:41Z","content_type":null,"content_length":"16216","record_id":"<urn:uuid:97347698-9dbe-4ecb-9c7f-2e449781cf73>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigometric for odd function
November 28th 2009, 08:39 PM
Trigometric for odd function
For trigonometric integrals, if the integrand inside the integral (i.e., the function to be integrated in theta) is ODD, how do we approach the problem? Is there a way?
November 28th 2009, 10:02 PM
If the function can be transformed to this form :
$g(\theta) = \sin(a\theta)f(\cos(a\theta))$
For finding the anti-derivative , we just sub.
$x = \cos(a\theta) ,~~ \sin(a\theta) ~ d\theta = -\frac{1}{a}dx$
$I = -\frac{1}{a} \int f(x) ~dx$
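For a concrete case, take $a = 1$ and $g(\theta) = \sin\theta\cos^2\theta$, so that $f(x) = x^2$ :

$I = -\int x^2 ~dx = -\frac{x^3}{3} + C = -\frac{\cos^3\theta}{3} + C$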
We can always find out such function but hardly transform it to that form ( not impossible )
For example ,
$g(\theta) = \sin^{2n+1}(\theta)$
we have $f(\cos(\theta)) = \sin^{2n}(\theta)$
so $f(x) = (1-x^2)^n$ | {"url":"http://mathhelpforum.com/calculus/117291-trigometric-odd-function-print.html","timestamp":"2014-04-18T14:06:12Z","content_type":null,"content_length":"5819","record_id":"<urn:uuid:d062621a-0f89-4870-92dc-55e16bb20991>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Breakthrough Circuit Lower Bound
On October 8th, in my graduate complexity course, as we discussed the 1987 Razborov-Smolensky result that one can't compute the Mod[q] function by constant-depth polynomial-size circuits with AND, OR, NOT and Mod[p] gates for primes p≠q, I noted the limits of our knowledge,
We can't even prove that NEXP (the exponential time version of NP) can't be solved by constant-depth polynomial-size circuits with just Mod[6] gates, even without AND and ORs.
Now we can. And by "we" I mean Ryan Williams who just yesterday posted
his proof
that NEXP cannot be solved in the class ACC
, the class of constant-depth poly-size circuits with AND, OR, NOT and Mod
gates for any fixed m. For the larger class EXP
, Ryan gets exponential lower bounds.
already heaped their praise on these new results. No doubt, this is perhaps the most interesting computational complexity result since
and the best progress in circuit lower bounds in nearly a quarter century.
It's not so much the result itself, in class I'll now just say "We don't even know how to prove that NEXP isn't in TC^0 (poly-size constant-depth circuits with majority gates)" but an outright example that the proof techniques described in Ryan's STOC paper can be used to get new lower bounds, really opening the door to the first fresh approach to circuits in a long time. This approach converts weak algorithms for solving circuit satisfiability questions into circuit lower bounds. Ryan's proof doesn't use deep mathematical techniques but rather puts together a series of known tools in amazingly clever ways.
Ryan breaks through the natural proofs barrier in an interesting way. We didn't know if one can have one-way functions described by ACC^0 circuits, so it wasn't clear if natural proofs applied to lower bounds for ACC^0. Ryan's paper doesn't break these one-way functions; rather he avoids the issue by using diagonalization, and so his proof does not fulfill the constructivity requirement of natural proofs. Modifying his proof to make it "natural" would imply showing there aren't ACC^0 one-way functions but that's still open. So there is no inherent reason Ryan's techniques couldn't work on TC^0 or larger classes without violating natural proofs.
We're still a long long long way from showing NP does not have arbitrary polynomial-size circuits and P ≠ NP but Ryan has taken the first real baby step in decades.
10 comments:
1. What a wonderful result. As a small aside, we actually think we do have one-way functions in ACC0 (and even NC0), I believe the open question is regarding pseudorandom functions.
2. Maybe a silly question, but is anything known about what happens to the class ACC^0 if we allow mod_m gates and mod_n gates for two distinct m, n?
3. Why mod6?
4. For mod_m and mod_n with different m and n, we can replace the two types of gates with one gate mod_(mn).
5. Anonymous: ACC^0 supports multiple moduli. You can simulate mod_m and mod_n with mod_{m*n}. So as long as there are a constant number of such moduli, ACC^0 remains the same.
6. Anonymous 3: Razborov-Smolensky already showed that modp can't be computed by ACC with modq gates for p \neq q distinct primes. (And so NEXP is not in ACC with modp gates for p prime.) 6 is the
smallest product of distinct primes.
7. What is NEXP? Could someone describe what just happened on a slightly less technical level?
8. He uses "diagonalization", and we're calling that progress? Umm, no. Try again. Non-constructive proofs aren't worth the paper they're printed on.
9. I second the last "Anonymous": where is the hard function? To call circuit lower bounds obtained by diagonalization and counting "breakthrough" seems somewhat too generous ... I don't say the
result is not interesting: it is! But not in the same level with, say, Razborov's *explicit* lower bounds.
10. One thing I find worth noticing in the blog post is the statement:
"Ryan's proof doesn't use deep mathematical techniques but rather puts together a series of known tools in amazingly clever ways."
Cleverness is often a key to breakthroughs in algorithm: sometimes a small twist makes a huge difference. However, some referees seem to think that papers need to have fundamental new technique:
that a strong result doesn't do it. | {"url":"http://blog.computationalcomplexity.org/2010/11/breakthrough-circuit-lower-bound.html","timestamp":"2014-04-20T21:54:36Z","content_type":null,"content_length":"171355","record_id":"<urn:uuid:da9a3e41-6a87-494b-b502-cc9675aa81ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trying to use C++ to solve a math problem
a=40; b=30; c=10;
I had this riddle to solve and I know the answer to be
around ~ 12.31185723778668829963
What steps can i go about to get this to compute this problem without doing the work for this problem and other problems similar to this?
I'm not used to c++ and just need help with initialization and outputting the answer I got.
Its based off the very old riddle if anyone cares :
Re: Trying to use C++ to solve a math problem
You would first turn it from a set of flat mathematical equations/rules into an actual set of logic steps
How would you solve this without a computer and with only a pen and a sheet of paper. That's you're first step. Then you turn the "method" of how you solved it manually into a program.
Re: Trying to use C++ to solve a math problem
I'm not used to c++ and just need help with initialization and outputting the answer I got.
Use variables and number literals of type double throughout.
Declare double variables like this,
double a=40.0;
or several on one line
double b=30.0, c=10.0;
All double literals should have a . in them otherwise things may go wrong. For example 1/3 may evaluate to 0 because you may get integer division whereas 1.0/3.0 will evaluate to
0.3333333333..... as it should.
To output a number you can do,
std::cout << a << std::endl;
or several like
std::cout << b << ", " << c << std::endl;
To calculate powers you need to use the pow function (or multiply the number several times with itself like say x^3 = x*x*x).
Make sure to include the necessary header files.
Last edited by nuzzle; January 11th, 2013 at 07:54 AM.
Re: Trying to use C++ to solve a math problem
Okay thank OReubens and nuzzle that helped alot.
The only header ill need should be <cmath> and <iostream> since its just calculations and outputting.
Re: Trying to use C++ to solve a math problem
I had this riddle to solve and I know the answer to be
around ~ 12.31185723778668829963
Note that the precision of a double is only about 17 digits (including all digits before and after the decimal dot). Specifying more digits is superfluous.
Anyway what's that number above and how did you arrive at it?
Looking at the site you referred to you have these relations,
1/h = 1/A + 1/B
A^2 = b^2 - w^2
B^2 = a^2 - w^2
Solved for h you get this formula,
h = 1 / (1/sqrt(b^2 - w^2) + 1/sqrt(a^2 - w^2))
where a and b are the ladder lengths, h is the height above the alley floor up to where the ladders cross, and w is the alley width.
I suppose the number you posted is w, isn't it? When it's entered into the formula,
double a=40.0, b=30.0, w=12.311857237786688;
double h = 1.0 / (1.0/std::sqrt(b*b-w*w) + 1.0/std::sqrt(a*a-w*w));
std::cout << h << std::endl;
the result isn't 10 (as I guess it should be because c=10). So your solution doesn't seem correct.
Last edited by nuzzle; January 13th, 2013 at 01:28 AM. | {"url":"http://forums.codeguru.com/showthread.php?532883-Trying-to-use-C-to-solve-a-math-problem&p=2100279","timestamp":"2014-04-17T05:07:00Z","content_type":null,"content_length":"89451","record_id":"<urn:uuid:ddccb98d-b7cf-4585-970e-ed9e12834e78>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
circular motion
Hi. The centrifugal force appears in the frame of reference of the ball (which is a non-inertial frame). It never arises in the frame of reference of the ground. If we draw a free-body diagram of the ball in the ground frame of reference, then the only forces acting on the ball will be the tension (= centripetal force) and the gravitational force. Speaking of Newton's third law, you must remember that the equal-and-opposite pair of forces (in general) will not act on the same body. Here the string exerts an inward force on the ball, and the ball exerts an outward force on the string, both being
equal in magnitude. I hope this clears things for you . | {"url":"http://www.physicsforums.com/showthread.php?s=8700f965c7e8148e16533eaea1d96992&p=4564461","timestamp":"2014-04-24T21:49:57Z","content_type":null,"content_length":"35207","record_id":"<urn:uuid:a86fc02f-06a6-4923-a176-efa987195775>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
PostgreSQL Extension Network
mpq data type
The mpq data type can store rational numbers whose denominator and numerator have arbitrary size.
Rational numbers are converted in canonical form on input (meaning that the denominator and the numerator have no common factors) and all the operators will return a number in canonical form.
PostgreSQL integer types (int16, int32, int64), numeric and mpz can be converted to mpq without loss of precision and without surprise. Floating point types (float4, float8) are converted without
loss as well... but with some surprise, as many fractions with finite decimal expansion have no finite expansion in binary.
=# select 10.1::numeric::mpq as "numeric",
-# 10.1::float4::mpq as "single",
-# 10.1::float8::mpq as "double";
numeric | single | double
101/10 | 5295309/524288 | 5685794529555251/562949953421312
mpq values can be converted to integer types (both PostgreSQL’s and mpz): the result will be truncated. Conversion to float4 and float8 will round the values to the precision allowed by the types (in
case of overflow the value will be Infinity). Conversion to numeric will perform a rounding to the precision set for the target type.
=# select mpq('4/3')::integer as "integer",
-# mpq('4/3')::float4 as "single",
-# mpq('4/3')::decimal(10,3) as "decimal";
integer | single | decimal
1 | 1.33333 | 1.333
mpq values can be compared using the regular PostgreSQL comparison operators. Indexes on mpq columns can be created using the btree or the hash method.
mpq textual input/output
mpq(text, base)
Convert a textual representation into an mpq number. The form text::mpq is equivalent to mpq(text).
The string can be an integer like 41 or a fraction like 41/152. The fraction will be converted in canonical form, so common factors between denominator and numerator will be removed.
The numerator and optional denominator are parsed the same as in mpz. White space is allowed in the string, and is simply ignored. The base can vary from 2 to 62, or if base is 0 then the leading
characters are used: 0x or 0X for hex, 0b or 0B for binary, 0 for octal, or decimal otherwise. Note that this is done separately for the numerator and denominator, so for instance 0xEF/100 is 239
/100, whereas 0xEF/0x100 is 239/256.
The maximum base accepted by GMP 4.1 is 36, not 62.
text(q, base)
Convert the mpq q into a string. The form q::text is equivalent to text(q).
The string will be of the form num/den, or if the denominator is 1 then just num.
base may vary from 2 to 62 or from −2 to −36. For base in the range 2..36, digits and lower-case letters are used; for −2..−36, digits and upper-case letters are used; for 37..62, digits,
upper-case letters, and lower-case letters (in that significance order) are used. If base is not specified, 10 is assumed.
The maximum base accepted by GMP 4.1 is 36, not 62.
mpq conversions
mpq(num, den)
Return an mpq from its numerator and denominator.
The function signature accepts mpz values. PostgreSQL integers are implicitly converted to mpz so invoking the function as mpq(30,17) will work as expected. However if the numbers become too big
for an int8 they will be interpreted by PostgreSQL as numeric and, because the cast from numeric to mpz is not implicit, the call will fail. Forcing a cast to mpz (e.g. mpq(30::mpz,17::mpz)) will
work for numbers of every size.
Return the numerator or the denominator of q as mpz.
Arithmetic Operators and Functions
All the arithmetic operators and functions return their output in canonical form.
Arithmetic operators
Operator | Description            | Example                  | Return
-        | Unary minus            | - '4/3'::mpq             | -4/3
+        | Unary plus             | + '4/3'::mpq             | 4/3
+        | Addition               | '2/3'::mpq + '5/6'::mpq  | 3/2
-        | Subtraction            | '1/3'::mpq - '5/6'::mpq  | -1/2
*        | Multiplication         | '2/3'::mpq * '5/6'::mpq  | 5/9
/        | Division               | '2/3'::mpq / '5/6'::mpq  | 4/5
<<       | Multiplication by 2^n  | '2/3'::mpq << 3          | 16/3
>>       | Division by 2^n        | '2/3'::mpq >> 3          | 1/12
Return the absolute value of q.
Return 1/q.
limit_den(q, max_den=1000000)
Return the closest rational to q with denominator at most max_den.
The function is useful for finding rational approximations to a given floating-point number:
=# select limit_den(pi(), 10);
or for recovering a rational number that’s represented as a float:
=# select mpq(cos(pi()/3));
=# select limit_den(cos(pi()/3));
=# select limit_den(10.1::float4);
This function is not part of the GMP library: it is ported instead from the Python library.
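Since limit_den is ported from Python, its behaviour can be previewed with the standard-library fractions module; this is just a quick sanity check, not part of the pgmp extension:

from fractions import Fraction
import math

print(Fraction(math.pi).limit_denominator(10))              # 22/7
print(Fraction(math.cos(math.pi / 3)).limit_denominator())  # 1/2
print(Fraction(10.1).limit_denominator())                   # 101/10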
© Copyright 2011, Daniele Varrazzo. Created using | {"url":"http://www.pgxn.org/dist/pgmp/1.0.0-b3/docs/html/mpq.html","timestamp":"2014-04-19T17:01:59Z","content_type":null,"content_length":"18190","record_id":"<urn:uuid:5cb30448-7920-43cb-a6ba-86d89ba729df>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Functions of Matrices
This Demonstration computes some standard functions of a set of rather arbitrary matrices. The test matrix has distinct eigenvalues; two of the matrices are symbolic but triangular, with different and multiple eigenvalues; several others are numeric with the same multiple eigenvalues but different Jordan decomposition forms; the last is a numerical random matrix.
Different methods of computing a function of a matrix are described in: F. R. Gantmacher, The Theory of Matrices, trans. K. A. Hirsch, 2 vols., New York: Chelsea Publishing Company, 1959.
This Demonstration uses the matrix exponential of a matrix with no zero eigenvalues to compute an arbitrary function of the matrix. Replacing the exponential by sin, cos, erf, or other functions computes the corresponding function of the matrix.
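One standard way to compute f(A) for a diagonalizable matrix A (not necessarily the method used by this Demonstration) is to apply f to the eigenvalues and transform back; a brief NumPy sketch:

import numpy as np

def matrix_function(A, f):
    """Evaluate f(A) via A = V diag(w) V^-1; valid when A is diagonalizable.
    It breaks down for defective matrices, which is exactly the case the
    Jordan-form examples mentioned above are designed to probe."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(f(w)) @ np.linalg.inv(V)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # distinct eigenvalues 2 and 3
print(matrix_function(A, np.exp))   # matrix exponential of A
print(matrix_function(A, np.sin))   # matrix sine of A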
The computed matrix satisfies a matrix differential equation whose form depends on the chosen function; for the exponential, X'(t) = A X(t) with X(0) = I. | {"url":"http://demonstrations.wolfram.com/FunctionsOfMatrices/","timestamp":"2014-04-18T20:43:30Z","content_type":null,"content_length":"45219","record_id":"<urn:uuid:b8cae160-1757-4e48-a2f2-18a50fadb061>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry TE - Enrichment
Chapter 3: Geometry TE - Enrichment
Created by: CK-12
Geometry- Teacher’s Edition- Enrichment Jen Kershaw
The goal of an enrichment section is just what is implied in the title, “to enrich.” By enrichment, we mean something that breathes a new or different life into something else- to make it better to
enliven it. This is the goal of this branch of the teacher’s edition. This is an opportunity for you and your students to locate and explore the wonderful world of geometry in other subjects such as
architecture or music or art. It is a chance for students to see how the world of mathematics can connect to other subjects that they are passionate about.
Our goal is that using this Enrichment Flexbook will help you to expand your own personal creativity as well as the creativity of your students. The projects/topics in this flexbook can be used in
several different ways. They can be used as a discussion point, an example to highlight during a lesson, a project to expand on whether students complete the project in class or at home or as a way
to broaden student thinking by using a web search once per week as an example. It is not the intention that every single lesson be used in this flexbook. Take what inspires you and use it to inspire
your students. Isn’t that what the world of mathematics is all about!
HP OpenVMS
Compaq C
Language Reference Manual
9.4 Error Codes (<errno.h>)
The <errno.h> header file defines several macros used for error reporting.
Error codes that can be stored in errno . They expand to integral constant expressions with unique nonzero values.
Variable or Macro
An external variable or a macro that expands to a modifiable lvalue with type int , depending on the operating system.
The errno variable is used for holding implementation-defined error codes from library routines. All error codes are positive integers. The value of errno is 0 at program startup, but is never
set to 0 by any library function. Therefore, errno should be set to 0 before calling a library function and then inspected afterward.
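As an illustrative sketch (not part of the manual's text), the usual pattern looks like this:
#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double y;

    errno = 0;                    /* clear errno before the library call       */
    y = sqrt(-1.0);               /* negative argument causes a domain error   */
    if (errno == EDOM)
        printf("sqrt reported a domain error\n");
    else
        printf("result: %f\n", y);
    return 0;
}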
9.5 ANSI C Limits (<limits.h> and <float.h>)
The <limits.h> and <float.h> header files define several macros that expand to various implementation-specific limits and parameters, most of which describe integer and floating-point properties of
the hardware. See your platform-specific Compaq C documentation for details.
9.6 Localization (<locale.h>)
The <locale.h> header file declares two functions and one type and defines several macros.
struct lconv
A structure containing members relating to the formatting of numeric values. The structure contains the following members in any order, with values shown in the comments:
char *decimal_point; /* "." */
char *thousands_sep; /* "" */
char *grouping; /* "" */
char *int_curr_symbol; /* "" */
char *currency_symbol; /* "" */
char *mon_decimal_point; /* "" */
char *mon_thousands_sep; /* "" */
char *mon_grouping; /* "" */
char *positive_sign; /* "" */
char *negative_sign; /* "" */
char int_frac_digits; /* CHAR_MAX */
char frac_digits; /* CHAR_MAX */
char p_cs_precedes; /* CHAR_MAX */
char p_sep_by_space; /* CHAR_MAX */
char n_cs_precedes; /* CHAR_MAX */
char n_sep_by_space; /* CHAR_MAX */
char p_sign_posn; /* CHAR_MAX */
char n_sign_posn; /* CHAR_MAX */
These members are described under the localeconv function in this section.
Expand to integral constant expressions with distinct values, and can be used as the first argument to the setlocale function.
char *setlocale(int category, const char *locale);
Selects the appropriate portion of the program's locale as specified by the category and locale arguments. This function can be used to change or query the program's entire current locale or
portions thereof.
The following values can be specified for the category argument:
LC_ALL---affects the program's entire locale.
LC_COLLATE---affects the behavior of the strcoll and strxfrm functions.
LC_CTYPE---affects the behavior of the character-handling functions and multibyte functions.
LC_MONETARY---affects the monetary-formatting information returned by the localeconv function.
LC_NUMERIC---affects the decimal-point character for the formatted I/O functions and string-conversion functions, as well as the nonmonetary formatting information returned by the localeconv
LC_TIME---affects the behavior of the strftime function.
The following values can be specified for the locale argument:
□ "C"---specifies the minimal environment for C translation
□ ""---specifies the use of the environment variable corresponding to category. If this environment variable is not set, the LANG environment variable is used. If LANG is not set, an error is
At program startup, the equivalent of the following is executed:
setlocale(LC_ALL, "C");
The setlocale function returns one of the following:
□ If a pointer to a string is specified for locale and the selection can be honored, setlocale returns a pointer to the string associated with the specified category for the new locale. If the
selection cannot be honored, setlocale returns a null pointer and the program's locale is not changed.
□ If a null pointer is specified for locale, setlocale returns a pointer to the string associated with the category for the program's current locale. The program's locale is not changed.
In either case, the returned pointer to the string is such that a subsequent call with that string value and its associated category will restore that part of the program's locale. This string
must not be modified by the program, but it can be overwritten by subsequent calls to setlocale .
struct lconv *localeconv(void);
Sets the components of an object with type struct lconv with values appropriate for formatting numeric quantities according to the rules of the current locale.
The structure members with type char * are pointers to strings, any of which (except decimal_point ) can point to "", which indicates that the value has zero length or is not available in the
current locale. Structure members of type char are nonnegative numbers, any of which can be CHAR_MAX to indicate that the value is not available in the current locale. Structure members include
the following:
char *decimal_point
The decimal-point character used to format nonmonetary quantities.
char *thousands_sep
The character used to separate groups of digits before the decimal point in formatted nonmonetary quantities.
char *grouping
A string whose elements indicate the size of each group of digits in formatted nonmonetary quantities.
char *int_curr_symbol
The international currency symbol applicable to the current locale. The first three characters contain the alphabetic international currency symbol in accordance with those specified in
ISO 4217 Codes for the Representation of Currency and Funds. The fourth character (immediately preceding the null character) is the character used to separate the international currency
symbol from the monetary quantity.
char *currency_symbol
The local currency symbol applicable to the current locale.
char *mon_decimal_point
The decimal-point character used to format monetary quantities.
char *mon_thousands_sep
The character used to separate groups of digits before the decimal point in formatted monetary quantities.
char *mon_grouping
A string whose elements indicate the size of each group of digits in formatted monetary quantities.
char *positive_sign
The string used to indicate a nonnegative formatted monetary quantity.
char *negative_sign
The string used to indicate a negative formatted monetary quantity.
char int_frac_digits
The number of fractional digits to be displayed in internationally formatted monetary quantities.
char frac_digits
The number of fractional digits to be displayed in formatted monetary quantities.
char p_cs_precedes
Set to 1 if the currency_symbol precedes the value for a nonnegative formatted monetary quantity; set to 0 if the currency_symbol follows the value.
char p_sep_by_space
Set to 1 if the currency_symbol is separated by a space from the value for a nonnegative formatted monetary quantity; set to 0 if there is no space.
char n_cs_precedes
Set to 1 if the currency_symbol precedes the value for a negative formatted monetary quantity; set to 0 if the currency_symbol follows the value.
char n_sep_by_space
Set to 1 if the currency_symbol is separated by a space from the value for a negative formatted monetary quantity; set to 0 if there is no space.
char p_sign_posn
Set to a value indicating the positioning of the positive_sign for a nonnegative formatted monetary quantity.
char n_sign_posn
Set to a value indicating the positioning of the negative_sign for a negative formatted monetary quantity.
The elements of grouping and mon_grouping are interpreted according to the following:
□ CHAR_MAX ---no further grouping is to be performed.
□ 0---the previous element is to be repeatedly used for the remainder of the digits.
□ other---the integer value is the number of digits that comprise the current group. The next element is examined to determine the size of the next group of digits before the current group.
The value of p_sign_posn and n_sign_posn is interpreted as follows:
□ 0---parentheses surround the quantity and currency_symbol
□ 1---the sign string precedes the quantity and currency_symbol
□ 2---the sign string follows the quantity and currency_symbol
□ 3---the sign string immediately precedes the currency_symbol
□ 4---the sign string immediately follows the currency_symbol
The localeconv function returns a pointer to the filled in structure. The structure must not be modified by the program, but might be overwritten by subsequent calls to localeconv or to setlocale
with categories LC_ALL , LC_MONETARY , or LC_NUMERIC .
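As an illustrative sketch (not taken from the manual), a program might query the current numeric conventions like this:
#include <locale.h>
#include <stdio.h>

int main(void)
{
    struct lconv *lc;

    setlocale(LC_ALL, "C");                /* select the minimal C locale        */
    lc = localeconv();                     /* query its formatting conventions   */
    printf("decimal point: \"%s\"\n", lc->decimal_point);
    printf("thousands separator: \"%s\"\n", lc->thousands_sep);
    return 0;
}
In the "C" locale this prints "." for the decimal point and an empty string for the thousands separator, matching the values shown in the struct lconv comments above.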
9.7 Mathematics (<math.h>)
The <math.h> header file defines types, macros, and several mathematical functions. The functions take double arguments and return double-precision values.
The behavior of the functions in this header is defined for all representable values of their input arguments. Each function executes as if it were a single operation, without generating any
externally visible exceptions.
For all functions, a domain error occurs if an input argument is outside the domain over which the mathematical function is defined. The description of each function lists any domain errors. On a
domain error, the function returns an implementation-defined value; the value of the EDOM macro is stored in errno .
For all functions, a range error occurs if the result of the function cannot be represented as a double value. If the result overflows (the magnitude of the result is so large that it cannot be
represented in an object of the specified type), the function returns the value of the macro HUGE_VAL , with the same sign (except for the tan function) as the correct value of the function; the
value of the ERANGE macro is stored in errno . If the result underflows (the magnitude of the result is so small that it cannot be represented in an object of the specified type), the function
returns 0; whether the value of the ERANGE macro is stored in errno is implementation-defined.
HUGE_VAL
Expands to a positive double expression.
INFINITY
Expands to a constant expression of type float representing positive or unsigned infinity, if available; otherwise, expands to a positive constant of type float that overflows at translation time.
NAN
Expands to a constant expression of type float representing a quiet NaN.
Trigonometric Functions
double acos(double x);
Returns the value, in radians, of the arc cosine of x in the range [0, pi]. A domain error occurs for arguments not in the interval [ - 1,+1].
double asin(double x);
Returns the value, in radians, of the arc sine of x in the range [-pi/2,+pi/2]. A domain error occurs for arguments not in the interval [ - 1,+1].
double atan(double x);
Returns the value, in radians, of the arc tangent of x in the range [-pi/2,+pi/2].
double atan2(double y, double x);
Returns the value, in radians, of the arc tangent of y/x, using the signs of both arguments to determine the quadrant of the return value. The value returned is in the range [-pi,+pi]. A domain
error may occur if both arguments are 0.
double cos(double x);
Returns the value, in radians, of the cosine of x.
double sin(double x);
Returns the value, in radians, of the sine of x.
double tan(double x);
Returns the value, in radians, of the tangent of x.
Hyperbolic Functions
double cosh(double x);
Returns the value of the hyperbolic cosine of x. A range error occurs if the magnitude of x is too large.
double sinh(double x);
Returns the value of the hyperbolic sine of x. A range error occurs if the magnitude of x is too large.
double tanh(double x);
Returns the value of the hyperbolic tangent of x.
Exponential and Logarithmic Functions
double exp(double x);
Returns the value of the exponential function of x. A range error occurs if the magnitude of x is too large.
double frexp(double value, int *eptr);
Breaks the floating-point number value into a normalized fraction in the interval [1/2, 1) or 0, which it returns, and an integral power of 2, which it stores in the int object pointed to by eptr
. If value is 0, both parts of the result are 0.
double ldexp(double x, int exp);
Multiplies a floating-point number by an integral power of 2, and returns the value x * 2^exp. A range error may occur.
double log(double x);
Returns the natural logarithm of x. A domain error occurs if the argument is negative. A range error may occur if the argument is 0.
double log10(double x);
Returns the base-ten logarithm of x. A domain error occurs if x is negative. A range error may occur if x is 0.
double modf(double value, double *iptr);
Breaks the argument value into integral and fractional parts, each of which has the same sign as the argument. The modf function returns the signed fractional part and stores the integral part as
a double in the object pointed to by iptr.
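As an illustrative sketch (not part of the manual), frexp, ldexp, and modf can be used to take a double apart and put it back together:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double frac, ipart;
    int exp;

    frac = frexp(20.0, &exp);              /* 20.0 == 0.625 * 2^5                */
    printf("%f * 2^%d\n", frac, exp);
    printf("%f\n", ldexp(frac, exp));      /* multiplies back to 20.0            */

    frac = modf(-3.75, &ipart);            /* ipart == -3.0, frac == -0.75       */
    printf("%f and %f\n", ipart, frac);
    return 0;
}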
Power Functions
double pow(double x, double y);
Returns the value x^y. A domain error occurs if x is negative and y is not an integral value. A domain error occurs if the result cannot be represented when x is 0 and y is less than or equal to
0. A range error may occur.
double sqrt(double x);
Returns the nonnegative square root of x. A domain error occurs if x is negative.
Nearest Integer, Absolute Value, and Remainder Functions
double ceil(double x);
Returns the smallest integral value not less than x.
double fabs(double x);
Returns the absolute value of a floating-point number x.
double floor(double x);
Returns the largest integral value not greater than x.
double fmod(double x, double y);
Computes the floating-point remainder of x/y. The fmod function returns the value x - i * y, for some integer i such that if y is nonzero, the result has the same sign as x and magnitude less
than the magnitude of y. The function returns 0 if y is 0.
9.8 Nonlocal Jumps (<setjmp.h>)
The <setjmp.h> header file contains declarations that provide a way to avoid the normal function call and return sequence, typically to permit an intermediate return from a nested function call.
int setjmp(jmp_buf env)
Sets up the local jmp_buf buffer and initializes it for the jump (the jump itself is performed with longjmp .) This macro saves the program's calling environment in the environment buffer
specified by the env argument for later use by the longjmp function. If the return is from a direct invocation, setjmp returns 0. If the return is from a call to longjmp , setjmp returns a
nonzero value.
jmp_buf
An array type suitable for holding the information needed to restore a calling environment.
void longjmp(jmp_buf env, int value);
Restores the context of the environment buffer env that was saved by invocation of the setjmp function in the same invocation of the program. The longjmp function does not work if called from a
nested signal handler; the result is undefined.
The value specified by value is passed from longjmp to setjmp . After longjmp is completed, program execution continues as if the corresponding invocation of setjmp had just returned value. If
value is passed to setjmp as 0, it is converted to 1.
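As an illustrative sketch (not part of the manual), a typical error-recovery pattern is:
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void fail(void)
{
    longjmp(env, 2);                       /* unwind; setjmp will return 2       */
}

int main(void)
{
    if (setjmp(env) == 0) {                /* direct invocation returns 0        */
        fail();
        printf("not reached\n");
    } else {
        printf("returned via longjmp\n");
    }
    return 0;
}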
Explorations in overriding MATLAB functions
In a recent blog post, I demonstrated how to use the MATLAB 2012a Symbolic Toolbox to perform Variable precision QR decomposition in MATLAB. The result was a function called vpa_qr which did the
necessary work.
>> a=vpa([2 1 3;-1 0 7; 0 -1 -1]);
>> [Q R]=vpa_qr(a);
I’ve suppressed the output because it’s so large but it definitely works. When I triumphantly presented this function to the user who requested it he was almost completely happy. What he really
wanted, however, was for this to work:
>> a=vpa([2 1 3;-1 0 7; 0 -1 -1]);
>> [Q R]=qr(a);
In other words he wants to override the qr function such that it accepts variable precision types. MATLAB 2012a does not allow this:
>> a=vpa([2 1 3;-1 0 7; 0 -1 -1]);
>> [Q R]=qr(a)
Undefined function 'qr' for input arguments of type 'sym'.
I put something together that did the job for him but felt that it was unsatisfactory. So, I sent my code to The MathWorks and asked them if what I had done was sensible and if there were any better
options. A MathWorks engineer called Hugo Carr sent me such a great, detailed reply that I asked if I could write it up as a blog post. Here is the result:
Approach 1: Define a new qr function, with a different name (such as vpa_qr). This is probably the safest and simplest option and was the method I used in the original blog post.
• Pros: The new function will not interfere with your MATLAB namespace
• Cons: MATLAB will only use this function if you explicitly define that you wish to use it in a given function. You would have to find all prior references to the qr algorithm and make a decision
about which to use.
Approach 2: Define a new qr function and use the ‘isa’ function to catch instances of ‘sym’. This is the approach I took in the code I sent to The MathWorks.
function varargout = qr( varargin )
if nargin == 1 && isa( varargin{1}, 'sym' )
    [varargout{1:nargout}] = vpa_qr( varargin{:} );
else
    [varargout{1:nargout}] = builtin( 'qr', varargin{:} );
end
• Pros: qr will always select the correct code when executed on sym objects
• Cons: This code only works for shadowing built-ins and will produce a warning reminding you of this fact. If you wish to extend this pattern for other class types, you’ll require a switch
statement (or nested if-then-else block), which could lead to a complex comparison each time qr is invoked (and subsequent performance hit). Note that switch statements in conjunction with calls
to ‘isa’ are usually indicators that an object oriented approach is a better way forward.
Approach 3: The MathWorks do not recommend that you modify your MATLAB install. However for completeness, it is possible to add a new ‘method’ to the sym class by dropping your function into the sym
class folder. For MATLAB 2012a on Windows, this folder is at
C:\Program Files\MATLAB\R2012a\toolbox\symbolic\symbolic\@sym
For the sake of illustration, here is a simplified implementation. Call it qr.m
function result = qr( this )
result = feval(symengine,'linalg::factorQR', this);
Pros: Functions saved to a class folder take precedence over built in functionality, which means that MATLAB will always use your qr method for sym objects.
Cons: If you share code which uses this functionality, it won’t run on someone’s computer unless they update their sym class folder with your qr code. Additionally, if a new method is added to a
class it may shadow the behaviour of other MATLAB functionality and lead to unexpected behaviour in Symbolic Toolbox.
Approach 4: For more of an object-oriented approach it is possible to sub-class the sym class, and add a new qr method.
classdef mySym < sym
    methods
        function this = mySym(arg)
            this = this@sym(arg);
        end
        function result = qr( this )
            result = feval(symengine,'linalg::factorQR', this);
        end
    end
end
Pros: Your change can be shipped with your code and it will work on a client’s computer without having to change the sym class.
Cons: When calling superclass methods on your mySym objects (such as sin(mySym1)), the result will be returned as the superclass unless you explicitly redefine the method to return the subclass.
N.B. There is a lot of literature which discusses why inheritance (subclassing) to augment a class’s behaviour is a bad idea. For example, if Symbolic Toolbox developers decide to add their own qr
method to the sym API, overriding that function with your own code could break the system. You would need to update your subclass every time the superclass is updated. This violates encapsulation, as
the subclass implementation depends on the superclass. You can avoid problems like these by using composition instead of inheritance.
Approach 5: You can create a new sym class by using composition, but it takes a little longer than the other approaches. Essentially, this involves creating a wrapper which provides the functionality
of the original class, as well as any new functions you are interested in.
classdef mySymComp
    methods
        function this = mySymComp(symInput)
            this.SymProp = symInput;   % SymProp must also be declared in a properties block
        end
        function result = qr( this )
            result = feval(symengine,'linalg::factorQR', this.SymProp);
        end
    end
end
Note that in this example we did not add any of the original sym functions to the mySymComp class, however this can be done for as many as you like. For example, I might like to use the sin method
from the original sym class, so I can just delegate to the methods of the sym object that I passed in during construction:
classdef mySymComp
    methods
        function this = mySymComp(symInput)
            this.SymProp = symInput;   % SymProp must also be declared in a properties block
        end
        function result = qr( this )
            result = feval(symengine,'linalg::factorQR', this.SymProp);
        end
        function G = sin(this)
            G = mySymComp(sin(this.SymProp));
        end
    end
end
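For illustration (my addition, not from the original post), the composed class could then be used along these lines:
a = mySymComp(vpa([2 1 3; -1 0 7; 0 -1 -1]));
b = sin(a);    % still a mySymComp, delegating to sym/sin internally
F = qr(a);     % dispatches to the new qr method, i.e. linalg::factorQR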
Pros: The change is totally encapsulated, and cannot be broken save for a significant change to the sym api (for example, the MathWorks adding a qr method to sym would not break your code).
Cons: The wrapper can be time consuming to write, and the resulting object is not a ‘sym’, meaning that if you pass a mySymComp object ‘a’ into the following code:
isa(a, 'sym')
MATLAB will return ‘false’ by default.
January 31st, 2013 at 05:46
Note that you can do your approach 3 without modifying the matlab installation, and I think that is the best approach. If you add the qr method to *any* @sym folder on the path it will get dispatched
to when you call qr on an instance of sym. It can even be in your own working folder.
If you are concerned that sym may implement their own qr method then you can always look at the meta class information for the sym class (look at what you get with cls = ?sym in MATLAB) and look at
its MethodList to see what it defines and you should be able to assert that you are not overriding sym’s native implementation.
February 4th, 2013 at 16:50
Andy – good idea. We had a similar thought. The ability to extend a class with methods in another directory is a feature of MATLAB’s classic OO system (struct-based), but is not supported in the
modern OO system introduced in R2008a (with classdef files). Since syms are implemented with this newer object system, this technique won’t work. | {"url":"http://www.walkingrandomly.com/?p=4776","timestamp":"2014-04-20T00:37:45Z","content_type":null,"content_length":"51767","record_id":"<urn:uuid:d242a277-051c-402f-8755-73502123c696>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parallel Lines and Three-Dimensional Space
Date: 12/18/2005 at 20:30:28
From: Tammy
Subject: Parallel lines and Space
If two lines are perpendicular to the same line or plane, are they
parallel to each other? Why or why not? I do not agree with my
textbook, which states that the lines are not necessarily parallel.
If the two lines never intersect, why can't they be parallel? Isn't
that the definition of parallel?
Date: 12/19/2005 at 10:06:57
From: Doctor Peterson
Subject: Re: Parallel lines and Space
Hi, Tammy.
There are two questions in one here, and I think you're really talking
only about the first:
If two lines are perpendicular to the same line, are they parallel
to each other?
If two lines are perpendicular to the same plane, are they parallel
to each other?
The answer to the first is no, and to the second is yes.
To see why the first is true, look back at the definition your book
gave for parallel lines. If they did their job right, they will have
said that two lines are parallel if they are IN THE SAME PLANE and
never intersect. That's important: two lines in different planes that
don't intersect are called "skew lines".
To picture the difference between parallel and skew lines, look at the
edges of your room. Take the vertical line up one corner, and first
look at the top and bottom of the wall to its left. These are both
perpendicular to the vertical line, and they are also in the same
plane (the wall). They are parallel; they go in the same direction.
Now take the BOTTOM of the wall on the left, and the TOP of the wall
on the right. These are NOT in the same plane, and they do NOT go in
the same direction. They are skew lines.
The real idea behind "parallel" is "going the same direction". We
don't put that in the definition for a simple reason: we'd first have
to define what we mean by "direction", and that's not easy! But
having defined "parallel" as we do, you can then take that as the
definition of "in the same direction", and that can help you see why
we don't want to define parallel lines as ANY two lines that never
(As for the second question, looking at your room again, all four
vertical lines in the corners are perpendicular to the floor plane,
and they are parallel! Part of the reason this is true is that any
two of them ARE in the same plane.)
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/69183.html","timestamp":"2014-04-17T04:15:32Z","content_type":null,"content_length":"7625","record_id":"<urn:uuid:b0857b17-1112-4baf-9b22-80b0bacf84ac>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
Walk Like a Sabermetrician
I'm not really sure why I'm writing this post, since it uses a metric that I don't particularly wish to propagate and doesn't offer any serious analytical application.
Nonetheless, herein is a (relatively) simple and reasonably accurate way to predict a team's win-loss record from its OPS and OPS Allowed. It combines two simple rules of thumb (one relating OPS to
runs, and one relating runs to wins) into a single OPS to wins conversion. And that is the reason I am sharing it here despite my antipathy towards OPS--the two rules of thumb both take the same
form, and so their combination is fairly elegant. I guess what I'm trying to say is that it is "neat", even if I don't think you should use it.
This will be a metric of the type that I call predicted winning percentage, which is based on component statistics, as opposed to expected winning percentage, which is based on actual runs scored and
The first rule of thumb is the conversion between OPS and runs. OPS has, roughly, a 2:1 relationship with runs scored. If a team has an OPS 5% better than league average, then we expect them to score
about 10% more runs than league average. I have noted this relationship before, but of course it is well-known.
Above and in the linked post, I used this in relation to the league average, but it can be applied to a team and its opponents as well. So we can estimate a team's run ratio (R/RA) as:
RR = 2*OPS/OPS Allowed - 1
Run Ratio can be related to wins in any number of ways, some more accurate than others--the most notable application is the Pythagorean formula. A linearization of the Pythagorean formula for the
normal range of run scoring is what Bill James called "Double the Edge"--a team that scores 5% more runs than its opponents should win about 10% more games. So we can estimate a team's win ratio (W/
L) as:
WR = 2*RR - 1
We can substitute the OPS relationship in, and get:
WR = 2*(2*OPS/OPS Allowed - 1) - 1
which simplifies to:
WR = 4*OPS/OPS Allowed - 3
As you can see, this is a steep function, and is a consequence of combining the pair of 2:1 functions. A team with an OPS 5% better than its opponents figures to have a W/L ratio of 4*1.05 - 3 = 1.2.
Win Ratio can be converted to a more familiar form, W%, very simply, as WR/(WR + 1). If we substitute the OPS relationship into that equation, we get this formula that takes us directly from OPS and
OPS Allowed to W%:
W% = (4*OPS/OPS Allowed - 3)/(4*OPS/OPS Allowed - 2)
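To make the arithmetic concrete, plug in a hypothetical team with an OPS of 800 and an OPS Allowed of 750:

W% = (4*800/750 - 3)/(4*800/750 - 2) = (4.267 - 3)/(4.267 - 2) = 1.267/2.267 = .559

So a club that out-OPSes its opponents by about 7% projects as roughly a .559 team, or about 90-91 wins over 162 games.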
How well does this work? Not too shabby...over the past two seasons (not the largest sample size in the world, but nothing in this post is meant to be rigorous in any way, shape, or form) it has a
RMSE in predicting actual W% of 5.64. My PW% estimate using Base Runs and Pythagenpat has a similar RMSE over the same period (5.52). Using it to estimate expected W% (Pythagenpat record, based on
actual runs scored and allowed), the OPS knockoff has a RMSE of 4.06, while PW% has a RMSE of 3.46.
When you estimate W% from component statistics (in other words, without the benefit of R and RA), you have three areas where errors can occur:
1. error in predicting runs scored
2. error in predicting runs allowed
3. error in converting between runs and wins
If you estimate W% from runs and runs allowed, you obviously only have to worry about the third type of error. But with so much going on in figuring PW% (as I have defined it), it doesn't really
matter whether you use "state of the art" methods (BsR + Pythagenpat), or use chicken scratchings based on OPS. You're going to have some fairly significant error either way. The theoretical
superiority of the "state of the art" approach is hinted at by its better tracking of EW%.
Anyway, there are still a number of weaknesses with the OPS method (I'll give it a name just for convenience--let's call it the Reynolds estimate, since we all know Harold loves his OPS). These
include, but are not necessarily limited to:
1. The simple fact that it's based on OPS. OPS has a lot of problems, but they don't manifest themselves too much when you deal with real teams in the normal performance range, so it's not too much
of a concern here.
2. It can't be used with OPS+. OPS+ is no great shakes either, but given it's prominence in the Total Baseball and later the ESPN Encyclopedia and Baseball-Reference, it gets used just as much in the
sabermetric community as ordinary OPS. The Reynolds estimate is incompatible with OPS+, as OPS+ does not have a 2:1 relationship with runs (it has a 1:1 relationship--the misunderstanding of the OPS
and OPS+ relationships with runs is a never-ending frustration of mine). This is a selling point for OPS+, but it means it doesn't work here (you can of course work out a PW% estimate based on OPS+,
but that's besides the point).
3. It breaks down at the extremes. The OPS to runs relationship, particularly when using outs, will cause you all sorts of problems if you attempt to use it to estimate how many runs Babe Ruth
created in 1920. The double the edge estimate of W% is fine in the normal performance range, but you don't want to use it to figure individual Offensive W% or anything. Combining those two issues,
you don't want to figure a pitcher's estimated W% or a hitter's OW% with this method.
4. It does not have the property of reciprocity between a team and its opponents. For example, the 2007 Red Sox had an OPS of 806 and allowed an OPS of 705. That gives them a Reynolds estimate of .611.
But if you plug in the Red Sox opponents (a team with an OPS of 705 and an OPS Allowed of 806), you get a Reynolds estimate of .333. In order for this to make theoretical sense (unless you know
something about run distributions that the rest of us don't), the Red Sox and their opponents need to add up to 1.
Why does this happen? Well, for one thing I played fast and loose by equating OPS Allowed with League OPS when plugged into the regression equation. In fact, this is a shortcut that works fine for
average teams but will cause problems at extremes. Let me reintroduce an equation for estimating runs from OPS and outs:
Runs = (.496*OPS - .182)*(AB - H)
The Red Sox OPS of 806 means they should score about .218 runs/out, and their OPS Allowed of 705 means they should allow about .168 runs/out, for a run ratio of 1.299. Our shortcut (2*OPS/OPS Allowed
- 1) yields an estimated run ratio of 1.287. Not a huge difference, but a small source of error, and due entirely to a shortcut.
It's worse for the Red Sox opponents, who should be estimated with a run ratio of .77 (.168/.218). But the shortcut estimates a run ratio of .749. To make matters worse, the shortcut estimates a
1.287 run ratio for the Red Sox, which has a reciprocal of .779. But the use of the shortcut eliminates reciprocity between the run ratio of a team and its opponents.
To state it again, the reason this happens is that 2*OPS/LgOPS - 1 relates to runs scored by a team, and is centered around LgOPS. It really should be applied separately to estimate runs scored from
OPS and runs allowed from OPS Allowed.
An even bigger reciprocity problem arises from the use of WR = 2*RR - 1. This is why analysts who have worked with that equation (like Bill Kross) have used a different formula for teams whose run
ratio < 1. We could invert OPS and OPS Allowed and subtract from one:
W% = 1 - (4*OPS Allowed/OPS - 3)/(4*OPS Allowed/OPS - 2)
Which can be simplified to:
W% = 1/(4*OPS Allowed/OPS - 2)
Doing it this way, with separate equations, the RMSE against actual W% drops to 5.40, which is actually a tad better than the Base Run/Pythagenpat estimate (remember, this is only a small sample of
sixty teams, and I'm not using the most accurate BsR formula available). The entire approach is a shortcut itself, and so I'm not advocating using separate formulas; that would defeat the purpose of
a quick and dirty estimate. If you want something deeper than a quick and dirty estimate, you shouldn't be using OPS at all.
Anyway, the reason I got to thinking about this at all was that in the Bill James Gold Mine, the statistical summary for each team includes OPS and OPS Allowed. I certainly don't go out of my way to
look up team OPS. Then it dawned on me that it was a neat coincidence that the conversion could be made by combining the pair of 2:1 functions, and that it would at least look nice. But make no
mistake--like anything involving OPS, it's an "accident" that it works out so nicely. The 2:1 relationship between runs and wins is well documented, and it is the basis for a few W% estimators
(including Pythagorean). But OPS is not a meaningful, real-life baseball number; it's a made-up statistic that happens to relate to runs on the team level at 2:1.
To end on a *truly* frivolous note, the inclusion of OPS and OPS Allowed in the Gold Mine caused me to notice something I hadn't before--that a team's raw run total over 162 games is relatively close
to its OPS without the decimal place (in mathematical terms, OPS*1000). For 2008-2009, the RMSE of this direct estimate (looking at OPS-->Runs and OPS Allowed-->Runs Allowed) is 43.94. Of course, a
real estimate based on OPS will have a much lower RMSE, somewhere in the general vicinity of 26 runs. But that involves applying a formula like:
Runs = (.496*OPS - .182)*(AB - H)
Just looking at a team's OPS over the course of a 162 game season, without any sort of mathematical manipulation, gives you an estimate of team runs that is in the same accuracy ballpark as running a
regression for runs based on batting average. This is not any great shakes, of course, and you'd be a fool to estimate that because the Rangers allowed a 817 OPS last year, they should have allowed
817 runs (they actually allowed 967). But in many other cases, it will put you in the right ballpark, although you will be stuck in the nosebleed seats.
The reason this "works" can be seen by looking at the regression equation. The average team will make about 4080 outs (AB-H) per season (25.2 outs/game * 162 games). Substituting 4080 into the
equation for outs, you can simplify it to roughly:
Runs = 2*OPS - 743
Over the past two years, the average major league team has scored 765 runs and compiled a 753 OPS. So for an average team, there isn't much difference between figuring 2*OPS - 743 or just taking OPS,
since their OPS is pretty close to 743 as it is. As you move away from the average, this "formula's" accuracy will take a nose dive (exemplified by the Rangers example above).
This is the part where I set off the secret beacon in the Statue of Liberty and perform a mind-wipe, and you forget everything you just read and never, ever actually use the Reynolds estimate, okay?
Answer the following question using the data collected in the collision tables.
* I got the results in yellow charts, but I am not sure how to process the information, please help!
a. Is momentum conserved in each collision?
b. Is kinetic energy conserved in each collision?
a) In any closed system, momentum is conserved. The momentum of a body is calculated by multiplying the mass and velocity of the body together.
If you have two bodies colliding (let's call them body x and body y), their total momentum is calculated by just adding them together.
Momentum is considered conserved if the momentum before is the same as the momentum after. In your first example, the total momentum of the blue and green object before the collision is 40`(kg*m)/(s)
` . Afterwards it is still 40`(kg*m)/(s)` . Thus, momentum is conserved.
b) Once you understand that 'conserved' means 'the same total before and after', you can also see that in your first example, kinetic energy is conserved because the total kinetic energy before and
after are the same.
a. The momentum is always conserved no matter what! It is the Law of Conservation of Momentum:
`P_(i)= P_(f)`
b. Since `E_(K_i)!=E_(K_f)` the collision is inelastic.
The collision between the blue object and the green object is inelastic.
In an inelastic collision, the total kinetic energy of the system is not conserved.
*If we are talking about the totals, then:
In an elastic collision, the total kinetic energy of the system is conserved:
160 J = 160 J
90 J = 90 J
Combination Problems
Do you know how to use combination notation? Take a look at this dilemma.
Evaluate the following combination.
Find ${{_8}C{_3}}$
To figure this out, you will need to understand combination notation. Pay attention and you will know how to evaluate this combination by the end of the Concept.
Order is important for some groups of items but not important for others. Consider a list of the words: POTS, STOP, SPOT, and TOPS.
• For the spelling of each individual word, order is important. The words POTS, STOP, SPOT, and TOPS all use the same letters, but spell out very different words.
• For the list itself, order is not important. Whether the words are presented in one order – such as POTS, STOP, SPOT, TOPS, or another order, such as STOP, SPOT, TOPS, POTS, or a third order,
such as TOPS, POTS, SPOT, STOP – makes no difference. As long as the list includes all 4 words, the order of the 4 words doesn’t matter.
A combination is an arrangement of items in which order, or how the items are arranged, is not important. The collection of one order of the items is not functionally different than any other order.
Think about a pizza. It doesn’t matter which order you put on the toppings once they are all on there. You can put a combination of toppings on a pizza.
When evaluating a combination, you can use a tree diagram. Use a tree diagram can be time consuming, combination notation is a much simpler option.
To use combination notation, you must first understand factorials. Do you remember factorials?
A factorial is a special number that represents the product of a set of values in descending order.
Take a look at this one.
To evaluate 5! We can say that this is the product of values starting with 5 in descending order.
$5 \times 4 \times 3 \times 2 \times 1 = 120$
The answer is 120.
We can use factorials and combination notation to evaluate combinations without using lists or tree diagrams. Let’s take a look at how this works.
The notation for combinations is similar to the notation for permutations. To represent the number of combinations there are for 6 items taken 4 at a time, write:
${{_6}C{_4}}$
In general, combinations are written as:
${{_n}C{_r}}$, where $n$ is the total number of items and $r$ is the number of items taken at a time.
To compute ${{_n}C{_r}}$, use the formula ${{_n}C{_r}}=\frac{n!}{r!(n-r)!}$.
Here is another one.
Find ${{_5}C{_2}}$
Step 1: Understand what ${{_5}C{_2}}$ means.
${{_5}C{_2}}$ represents 5 items taken 2 at a time.
Step 2: Set up the problem.
${{_5}C{_2}}=\frac{5!}{2!(5-2)!}$
Step 3: Fill in the numbers and simplify.
${{_5}C{_2}}=\frac{5!}{2! (3!)}=\frac{5 \times \overset{2}{\cancel{4}} \times \cancel{3 \times 2 \times 1}}{\cancel{2} \times \cancel{1} \times \cancel{(3 \times 2 \times 1)}}=\frac{5 \times 2}{1}=
There are 10 different possible combinations.
Evaluate each combination.
Example A
Find ${{_6}C{_3}}$
Solution: $20$
Example B
Find ${{_9}C{_2}}$
Solution: $36$
Example C
Find ${{_5}C{_4}}$
Solution: $5$
Now let's go back to the dilemma from the beginning of the Concept.
Find ${{_8}C{_3}}$
First, we can write out the numerator.
$\frac{8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1}{(3 \times 2 \times 1)(5 \times 4 \times 3 \times 2 \times 1)}$
Next, we simplify.
$\frac{8 \times 7 \times 6}{3 \times 2 \times 1}=\frac{336}{6}=56$
This is our answer: ${{_8}C{_3}}=56$.
Vocabulary
Combination
an arrangement of items or events where the order is not important.
Factorial
a special number which represents the product of numbers in descending order.
Guided Practice
Here is one for you to try on your own.
Write the following situation using combination notation. Then evaluate it.
Sixteen students went to the park. Four students could ride in four cars. How many different combinations of students could there be?
First, use combination notation.
Find ${{_{16}}C{_4}}$
Now we can evaluate the combination by simplifying first.
$\frac{16 \times 15 \times 14 \times 13}{4 \times 3 \times 2 \times 1}$
There can be $1,820$ different combinations of students.
Directions: Evaluate each combination.
1. Find ${{_5}C{_2}}$
2. Find ${{_6}C{_5}}$
3. Find ${{_7}C{_2}}$
4. Find ${{_7}C{_3}}$
5. Find ${{_8}C{_2}}$
6. Find ${{_6}C{_4}}$
7. Find ${{_9}C{_2}}$
8. Find ${{_9}C{_4}}$
9. Find ${{_8}C{_3}}$
10. Find ${{_4}C{_4}}$
Directions: Use the formula to figure out the different combinations.
11. How many different color pairs are there among red, orange, yellow, green, and blue?
12. How many different sets of 3 colors are there among red, orange, yellow, green, and blue?
13. How many different color pairs are there among red, orange, yellow, green, blue, and purple?
14. How many different sets of 3 colors are there among red, orange, yellow, green, blue, and purple?
15. How many different sets of 3 colors are there among red, orange, yellow, green, blue, purple, and white?
16. Ten tennis players are on the Davis Cup Team. Only two players can play in the doubles finals. How many different doubles teams could play in the finals? | {"url":"http://www.ck12.org/probability/Combination-Problems/lesson/Evaluate-Combinations-using-Combination-Notation/","timestamp":"2014-04-16T11:18:29Z","content_type":null,"content_length":"115472","record_id":"<urn:uuid:6ab59d46-c592-4234-9e76-0e305415ebb4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Smart Car a solution?
10-03-2008 06:06 AM
The mpg is very, very disappointing for this car, and I have also read awful reviews regarding the handling of the vehicle. Just does not seem like a very good option to me.
08-30-2009 10:38 PM
I live in Canada and the Smarts have been here for about 5 years. I have the diesel and get about 75 miles per gallon. We used to have a Camry as well but sold it because we preferred to use the
Smart. I can't believe the misinformation about it. The car buying public has really been brainwashed into being paranoid. Quit watching the Ford commercials and go try one..sheeesh.
09-09-2009 06:31 AM
The Smart Car comes from France. This shows how much fuel is used to ship it to the United States.
This is usually measured in pounds of fuel per horsepower per hour. Roughly speaking, 0.25 lbs/hp/hr is considered to be pretty good, and 100,000 hp is a low-side estimate of an average container
ship's horsepower. This then works out to 25,000 pounds of marine diesel fuel per hour. Marine diesel weighs about 7lbs/gallon, which gets us about 3600 gallons per burned per hour. A common
cruise speed is 25 knots or 28.75mph. To make the math easier, let's call it 30mph. What this means is that for a container ship to travel 30 miles, it'll burn through 3600 gallons, which is the
same as burning 120 gallons to go one mile . There are 5280 feet in a mile, so if 120 gallons is good for 5280 feet, then one gallon is burned every 44 feet!!
09-10-2009 05:52 AM
While this is useful information, then just think how much fuel is burnt transporting cars that take up twice as much space on a container ship from japan/korea. | {"url":"http://www.hybridcars.com/forums/printthread.php?t=100426&pp=10&page=2","timestamp":"2014-04-17T19:49:34Z","content_type":null,"content_length":"8908","record_id":"<urn:uuid:d255b71b-ecc1-4714-86a9-d25f6b1b65ce>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Redundancy in Random SAT Formulas
Yacine Boufkhad and Olivier Roussel, Université d'Artois
The random k-SAT model is extensively used to compare satisfiability algorithms or to find the best settings for the parameters of some algorithm. Conclusions are derived from the performances
measured on a large number of random instances. The size of these instances is, in general, small to get these experiments done in reasonable time. This assumes that the small size formulas have the
same properties than the larger ones. We show that small size formulas have at least a characteristic that makes them relatively easier than the larger ones (beyond the increase in the size of the
formulas). This characteristic is the redundancy. We show, experimentally, that the irredundant formulas are harder for both complete and incomplete methods. Besides, the randomly generated formulas
tend to be naturally irredundant as their size become larger. Thus, irredundant small formulas are more suitable for testing algorithms because they better reflect the hardness of the larger ones.
Manchester, MA Math Tutor
Find a Manchester, MA Math Tutor
...Everyone intending to pursue studies in basic science (including life sciences), engineering or economics should have a good foundation in introductory calculus. I did not really begin to
appreciate the genius of Isaac Newton until I was asked, as a young NASA employee, to code a computer progra...
7 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...The courses I've taught and tutored required differential equations, so I have experience working with them in a teaching context. In addition to undergraduate level linear algebra, I studied
linear algebra extensively in the context of quantum mechanics in graduate school. I continue to use undergraduate level linear algebra in my physics research.
16 Subjects: including algebra 1, geometry, precalculus, trigonometry
...In addition to the basic grammatical concepts that both test require you to be aware of, the ACT also looks at such ideas as mood and tone in writing and what is most appropriate. Given my
strengths in grammar and reading, I have always done well in this and have successfully helped others impro...
28 Subjects: including algebra 1, algebra 2, ACT Math, SAT math
...I will demonstrate this in our tutoring sessions. I also enjoy communicating the beauty of math by illustrating how alternative solutions will all result in the same answer. Although I do not
teach SAT Math per se through this venue, I will ensure that my coaching will also benefit the student in the Math SAT.
13 Subjects: including calculus, SAT math, trigonometry, ACT Math
...My ultimate intent is to ensure that every client has all appropriate skills needed to compensate for weaker areas, thereby improving the probability of success. I spent three years in
residential care at the Institute of Logopedics (now Heartspring) in Wichita, KS, providing academic support, l...
45 Subjects: including algebra 2, precalculus, discrete math, SAT math
MathGroup Archive: March 2007 [00885]
Re: Sequence as a universal UpValue
• To: mathgroup at smc.vnet.net
• Subject: [mg74632] Re: Sequence as a universal UpValue
• From: "Szabolcs" <szhorvat at gmail.com>
• Date: Thu, 29 Mar 2007 02:29:32 -0500 (EST)
• References: <euan47$dni$1@smc.vnet.net>
On Mar 27, 11:11 am, "Chris Chiasson" <chris at chiasson.name> wrote:
> In his presentation on working with held expressions at
> http://library.wolfram.com/conferences/devconf99/villegas/Unevaluated...
> Villegas says:
> "In fact, Sequence itself could almost be implemented as a universal
> UpValue (maybe Dave Withoff or Roman Maeder remembers if that's not
> quite true)."
> So, I am wondering, does the following input disprove that Sequence
> can be implemented as a universal UpValue? How should I think of
> Sequence? Importantly, why doesn't blocking Sequence work like
> blocking the arbitrary symbol?
I'm not sure I understand completely how these things work, but the
behaviour of Block does seem to make sense if you read its help page.
It says:
" When you execute a block, values assigned to x, y, ... are cleared.
When the execution of the block is finished, the original values of
these symbols are restored. "
When you put blahblah in a Block, the definitions associated with it
are cleared, and
its arguments are not spliced into Map. You get the expected result.
But the definitions associated with Sequence are built-in, so they can
not be cleared.
Hmm ... Now that I experimented some more, Sequence does seem to be
special in this respect:
>From In[1]:=
>From In[1]:=
So built-in definitions can be cleared after all. But the Mathematica
book does mention that Sequence is treated in a special way (unlike
other built-ins). Check Section A.4.1 (Mathematica Reference Guide ->
Evaluation -> The Standard Evaluation Sequence).
> In[1]:=
> blahblah/:h_[l___,blahblah[blahblahArgs___],r___]=h[l,blahblahArgs,r]
> UpValues@blahblah
> a[1,blahblah[2,3]]
> Block[{Sequence},f/@Sequence[1,2,3]]
> Block[{blahblah},f/@blahblah[1,2,3]]
> Out[1]=
> h[l,blahblahArgs,r]
> Out[2]=
> {HoldPattern[h_[l___,blahblah[blahblahArgs___],r___]]\[RuleDelayed]
> h[l,blahblahArgs,r]}
> Out[3]=
> a[1,2,3]
> Map::nonopt: Options expected (instead of 3) beyond position 3 in
> Map[f,1,2,3]. An option must be a rule or a list of rules.
> Out[4]=
> Map[f,1,2,3]
> Out[5]=
> blahblah[f[1],f[2],f[3]]
> Thanks for your input,
> --http://chris.chiasson.name/
Patent application title: Initializing an estimation of dynamic model parameters
The present disclosure relates to monitoring of electromechanical oscillations in electric power systems, and their identification by an adaptive algorithm based on a repeatedly measured and evaluated signal. In order for an estimation of parameters of a model of the power system to reasonably converge, proper initialization of the recursive calculation is required, including the definition of tuning parameters constraining the model and the calculation. Initialization for a second signal to be exploited can then be simplified by copying the set of tuning parameters tuned previously for a different signal. A conditioning gain multiplying the second signal establishes compatibility between the different signals, and a signal pre-filter in turn discards contributions beyond a frequency band comprising typical electromechanical oscillations.
1. A method of initializing a deduction, from estimated model parameters (a1, a2, . . . ) of a parametric model of an electric power system, of frequency and damping (f, ξ) of an electromechanical oscillation mode of the power system (1), wherein the estimation of the model parameters (a1, a2, . . . ) is based on a series of measured values (y2^1, y2^2, . . . ) of a second system quantity (y2) of the power system and wherein said model parameters (a1, a2, . . . ) are adaptively estimated every time a new value (y2^k) of the second system quantity (y2) is measured, the method comprising: tuning a set of tuning parameters (tp2) for the subsequent estimation of the model parameters (a1, a2, . . . ); tuning the set of tuning parameters (tp2) by copying tuning parameters (tp1) previously tuned for estimating the model parameters (a1, a2, . . . ) based on a first system quantity (y1) of the electric power system; and determining a conditioning gain (G2) for scaling the measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) prior to each adaptive estimation of the model parameters (a1, a2, . . . ).
2. The method according to claim 1, wherein the determination of the conditioning gain (G2) comprises comparing a statistical measure (Sr1, Sr2) about the first and second system quantities (y1, y2).
3. The method according to claim 1, comprising providing a band-pass filter (fL, fH) for filtering the series of measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) of the power system prior to the scaling by means of the conditioning gain (G2).
4. The method according to claim 3, comprising cutting off, by the band-pass filter (fL, fH), frequencies untypical of electromechanical power system oscillations.
5. The method according to claim 3, comprising providing a threshold for ignoring the latest filtered and scaled value (y2^k) if it is below said threshold.
6. A system for deducing, from estimated model parameters (a1, a2, . . . ) of a parametric model of an electric power system, frequency and damping (f, ξ) of an electromechanical oscillation mode of the power system, comprising two measuring units for measuring first and second system quantities (y1, y2), and a monitoring centre for estimating the model parameters (a1, a2, . . . ) based on a series of measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) of the power system, wherein said model parameters (a1, a2, . . . ) are adaptively estimated every time a new value (y2^k) of the second system quantity (y2) is measured, and wherein a set of tuning parameters (tp2) are tuned for initializing the subsequent estimation of the model parameters (a1, a2, . . . ), the system comprising: means for tuning the set of tuning parameters (tp2) by copying tuning parameters (tp1) previously tuned for estimating the model parameters (a1, a2, . . . ) based on the first system quantity (y1) of the electric power system; and means for determining a conditioning gain (G2) for scaling the measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) prior to each adaptive estimation of the model parameters (a1, a2, . . . ).
7. A use of the method according to claim 1 for deducing frequency and damping (f, ξ) of electromechanical oscillations in the electric power system from the model parameters (a1, a2, . . . ) estimated by Kalman filtering techniques.
8. The use according to claim 7, wherein the scaling parameter G is on-line adapted.
9. A computer program for controlling power flow and damping electromagnetic oscillations in a power system, which computer program is loadable into an internal memory of a digital computer and comprises computer program code means to make, when said program is loaded in said internal memory, the computer execute the functions of the controller according to claim 8.
10. A use of the method according to claim 4 for deducing frequency and damping (f, ξ) of electromechanical oscillations in the electric power system from the model parameters (a1, a2, . . . ) estimated by Kalman filtering techniques.
11. A computer program for controlling power flow and damping electromagnetic oscillations in a power system, which computer program is loadable into an internal memory of a digital computer to execute a method of initializing a deduction, from estimated model parameters (a1, a2, . . . ) of a parametric model of an electric power system, of frequency and damping (f, ξ) of an electromechanical oscillation mode of the power system, comprising the steps of: tuning a set of tuning parameters (tp2) for the subsequent estimation of the model parameters (a1, a2, . . . ); tuning the set of tuning parameters (tp2) by copying tuning parameters (tp1) previously tuned for estimating the model parameters (a1, a2, . . . ) based on a first system quantity (y1) of the electric power system; and determining a conditioning gain (G2) for scaling measured values (y2^1, y2^2, . . . ) of a second system quantity (y2) prior to each adaptive estimation of the model parameters (a1, a2, . . . ).
12. The computer program according to claim 11, wherein the estimation of the model parameters (a1, a2, . . . ) is based on a series of the measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) of the power system and wherein said model parameters (a1, a2, . . . ) are adaptively estimated every time a new value (y2^k) of the second system quantity (y2) is measured.
RELATED APPLICATIONS [0001]
This application claims priority under 35 U.S.C. §119 to EP Application 05405614.8 filed in Europe on Oct. 31, 2005, and as a continuation application under 35 U.S.C. § 120 to PCT/CH2006/000608 filed
as an International Application on Oct. 31, 2006 designating the U.S., the entire contents of which are hereby incorporated by reference in their entireties.
TECHNICAL FIELD [0002]
The disclosure relates to the field of monitoring electromagnetic oscillations in electric power systems comprising a plurality of generators and consumers. It departs from a method of initializing
an estimation of model parameters of a parametric model of the power system.
BACKGROUND INFORMATION [0003]
In the wake of the ongoing deregulations of the electric power markets, load transmission and wheeling of power from distant generators to local consumers has become common practice. As a consequence
of the competition between utilities and the emerging need to optimize assets, increased amounts of electric power are transmitted through the existing networks, invariably causing congestion,
transmission bottlenecks and/or oscillations of parts of the power transmission systems. In this regard, electrical transmission networks are highly dynamic. In general, electromagnetic oscillations
in electric power systems comprising several alternating current generators have a frequency of less than a few Hz and are considered acceptable as long as they decay. They are initiated by the normal
small changes in the system load, and they are a characteristic of any power system. However, insufficiently damped oscillations may occur when the operating point of the power system is changed,
e.g. due to a new distribution of power flows following a connection or disconnection of generators, loads and/or transmission lines. Likewise, the interconnection of several existing power grids,
even if the latter do not individually present any badly damped oscillations prior to their interconnection, may give rise to insufficiently damped oscillations. In these cases, an increase in the
transmitted power of a few MW may make the difference between stable oscillations and unstable oscillations, which have the potential to cause a system collapse or result in loss of synchronism, loss of interconnections and ultimately the inability to supply electric power to the customer. Appropriate monitoring of the power system can help a network operator to accurately assess power system
states and avoid a total blackout by taking appropriate actions such as the connection of specially designed oscillation damping equipment.
In the Patent Application EP-A 1 489 714, an adaptive detection of electromechanical oscillations in electric power systems is based on a linear time-varying model. A system quantity or signal such
as e.g. the amplitude or angle of the voltage or current at a selected node of the network is sampled, and the parameters of the linear model representing the behaviour of the power system are
estimated by means of Kalman filtering techniques. This process is carried out in a recursive manner, i.e. every time a new value of the system quantity is measured the parameters of the model are
updated. Finally, from the estimated parameters of the model, the parameters of the oscillatory modes, such as frequency and damping, are deduced and presented to an operator. This adaptive
identification process enables a real-time analysis of the present state of the power system, comprising in particular the damping ξ and frequency f of the dominant power oscillation mode, i.e. the
mode with the lowest relative damping ratio.
In order for such an estimation of dynamic model parameters to work properly, the estimation has to be initialized by a set of properly chosen tuning parameters, such as the order of the dynamic
model, the process and measurement noise, cut-off frequencies for signal pre-filters etc. In general, the values of the tuning parameters depend on the particular power system being monitored and on
the particular signal being selected as the input for the monitoring algorithm. These values are then being adjusted or tuned by a commissioning engineer who analyzes the respective input signal and
makes sure that the output of the subsequent estimation process, i.e. the estimated dominant frequency and damping, responds sufficiently fast and is not too sensitive with respect to measurement noise. In particular, the commissioning engineer has to adjust the values of the tuning parameters in such a way that an estimation error given by the difference between the measured signal and the signal predicted e.g. by the aforementioned linear time-varying model is minimal, and the captured oscillatory mode(s) of interest are estimated precisely enough using a possibly small order of the
dynamic model. It has turned out that this tuning procedure may be time intensive and requires a certain level of knowledge and experience of the commissioning engineer.
To identify oscillations in an electric power system, different system quantities such as amplitudes or phase angles of voltages, currents and power flows can be used as inputs to the proposed
identification procedure. However, these signals differ with respect to their statistical properties such as magnitudes and signal variance. In order to simplify the tuning procedure, i.e. to find
the best initial values of the tuning parameters to start the estimation algorithm, the abovementioned European Patent Application proposes to introduce a signal conditioning for all admissible
measurements obtained from the power system being monitored.
SUMMARY [0007]
Exemplary embodiments disclosed herein can increase the flexibility in detecting and monitoring electromechanical power oscillations in an electric power system without increasing the engineering
complexity or workload at commissioning. A method of initializing an estimation of model parameters of a parametric model of an electric power system and a system for monitoring an electric power
system are disclosed.
A method of initializing a deduction is disclosed, from estimated model parameters (a1, a2, . . . ) of a parametric model of an electric power system, of frequency and damping (f, ξ) of an electromechanical oscillation mode of the power system (1), wherein the estimation of the model parameters (a1, a2, . . . ) is based on a series of measured values (y2^1, y2^2, . . . ) of a second system quantity (y2) of the power system and wherein said model parameters (a1, a2, . . . ) are adaptively estimated every time a new value (y2^k) of the second system quantity (y2) is measured, wherein the method of initializing comprises tuning a set of tuning parameters (tp2) for the subsequent estimation of the model parameters (a1, a2, . . . ), wherein the method comprises further tuning the set of tuning parameters (tp2) by copying tuning parameters (tp1) previously tuned for estimating the model parameters (a1, a2, . . . ) based on a first system quantity (y1) of the electric power system, and determining a conditioning gain (G2) for scaling the measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) prior to each adaptive estimation of the model parameters (a1, a2, . . . ).
A system for deducing, from estimated model parameters (a1, a2, . . . ) of a parametric model of an electric power system, frequency and damping (f, ξ) of an electromechanical oscillation mode of the power system, comprising two measuring units for measuring first and second system quantities (y1, y2), and a monitoring centre for estimating the model parameters (a1, a2, . . . ) based on a series of measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) of the power system, wherein said model parameters (a1, a2, . . . ) are adaptively estimated every time a new value (y2^k) of the second system quantity (y2) is measured, and wherein a set of tuning parameters (tp2) are tuned for initializing the subsequent estimation of the model parameters (a1, a2, . . . ), wherein the system comprises means for tuning the set of tuning parameters (tp2) by copying tuning parameters (tp1) previously tuned for estimating the model parameters (a1, a2, . . . ) based on the first system quantity (y1) of the electric power system, and means for determining a conditioning gain (G2) for scaling the measured values (y2^1, y2^2, . . . ) of the second system quantity (y2) prior to each adaptive estimation of the model parameters (a1, a2, . . . ).
A computer program is disclosed for controlling power flow and damping electromagnetic oscillations in a power system, which computer program is loadable into an internal memory of a digital computer
to execute a method of initializing a deduction, from estimated model parameters (a1, a2, . . . ) of a parametric model of an electric power system, of frequency and damping (f, ξ) of an
electromechanical oscillation mode of the power system, comprising the steps of tuning a set of tuning parameters (tp2) for the subsequent estimation of the model parameters (a1, a2, . . . ); tuning
the set of tuning parameters (tp2) by copying tuning parameters (tp1) previously tuned for estimating the model parameters (a1, a2, . . . ) based on a first system quantity (y1) of the electric power
system; and determining a conditioning gain (G2) for scaling the measured values (y21, y22, . . . ) of the second system quantity (y2) prior to each adaptive estimation of the model parameters (a1,
a2, . . . ).
BRIEF DESCRIPTION OF THE DRAWINGS [0011]
The subject matter of the disclosure will be explained in more detail in the following text with reference to exemplary embodiments which are illustrated in the attached drawings, in which:
FIG. 1 schematically shows a power system,
FIG. 2 depicts a flow chart of a process of estimating model parameters,
FIG. 3 shows results from an analysis of a power system based on a first system quantity y1, and
FIG. 4 shows results from an analysis of the same power system based on a second system quantity y2 and initialized with the same tuning parameters.
The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols
in the figures.
DETAILED DESCRIPTION [0017]
According to the disclosure, advantage is taken from the fact that for one and the same electric power system, a multitude of different system quantities, i.e. different signals measured at distinct
locations, are available. In these distinct input signals however, the same dynamic phenomena, e.g. electromechanical oscillations, are observable. Hence, one may swap from one signal to another one,
e.g. in order to track geographically or temporally a certain oscillation mode, such as an inter-area mode that may be observable in a measured voltage from a first location and in a current signal
from another location of the power system, or that may shift following e.g. connection or disconnection of a transmission line or a generator.
In order to avoid independent tuning efforts for each of the system quantities when using e.g. a method of detecting electromechanical oscillations as mentioned initially, only the tuning parameters
for a first or reference system quantity are determined independently. The initialization procedure for any second or further system quantity, that may e.g. offer a better observability of a certain
oscillation mode, is then abbreviated by copying or re-using all or a fraction of the aforementioned tuning parameters and by determining an adequate re-scaling factor as a signal conditioning gain.
The latter is determined by comparing the first and the second system quantity, it renders compatible different input signals and is a prerequisite for the successful re-use of the tuning parameter
values stemming from the first system quantity. The copied set of tuning parameter values including said conditioning gain is then employed to identify model parameters representing the behaviour of
the power system based on a series of measured values of the second system quantity.
The conditioning gain can be determined by comparing statistical information contained in the measured signals such as a maximum signal power, a mean value or a root mean square value, about a number
of measured values of both the first and the second system quantity. An adaptation of the scaling factor can be arranged for in real-time.
In an exemplary embodiment, a band-pass filter is provided for the measured values of the second system quantity prior to the aforementioned signal conditioning. The filter may be based on a general
knowledge about the oscillations that are tracked, or be defined based on, i.e. centred about, the frequency of the dominant electromechanical oscillation resulting from a previous analysis based on
the first or reference system quantity.
A use of the above simplified tuning process of the parameter estimation concerns the derivation of information such as frequency or damping of the dominant oscillatory modes in the power system from
the estimated dynamic model parameters. To this end, the dynamic model parameters can be determined by Kalman filtering techniques.
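As an illustrative aside (not part of the original disclosure), one standard way to perform this deduction for a second-order autoregressive model y_k = a1·y_{k-1} + a2·y_{k-2} + e_k with sampling time Ts is to compute the roots z_i of the characteristic polynomial z^2 - a1·z - a2 = 0, map each root to a continuous-time pole s_i = ln(z_i)/Ts, and read off the modal frequency and relative damping as f_i = |Im(s_i)|/(2π) and ξ_i = -Re(s_i)/|s_i|.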
FIG. 1 shows an electric power system 1 including two generators 10, 10' and several substations, represented each by a busbar 11, 11', 11'', and interconnected by a number of transmission lines. System quantities y1, y2, y3, such as the phase angle and/or the amplitude of voltages or currents, frequencies, power flows etc., are measured by suitable measuring units 20, 20', 20'' located at various substations or nodes
throughout the power system 1. The signals measured by the measuring units 20, 20', 20'' are transmitted to and exploited in an oscillation monitoring centre 21. In general, several measuring units
20, 20', 20'' may be implemented in one single device, which in addition does not need to be a dedicated device, the respective measuring functions being executable likewise by an intelligent
electronic device provided for protection and control tasks in the system 1. Furthermore, the monitoring centre 21 could be identical with one of the measuring units 20.
As set out above, a proper initialization of the adaptive estimation of model parameters requires the tuning or off-line adjusting of the tuning parameters used for the recursive calculations. By way
of example, in the procedure as set out in the aforementioned European Patent Application EP-A 1 489 714, the selection of the dynamical order n of a discrete-time autoregressive model, which order
equals the number of parameters to be estimated, is the most important single aspect. If this order is too low, the obtained spectrum in the frequency domain will be highly smoothed, and the
oscillations of interest with low-level peaks in the spectrum are dissimulated. On the other hand, if the order n is too high, faked low-level peaks will be introduced in the spectrum. In addition,
the correlation matrices Q of the measurement noise and of the process noise represent further, less sensitive tuning parameters. Other tuning parameters are the sampling time Ts between successive measured values of the system quantity y, the cut-off frequencies fL, fH for the signal pre-filter, and a signal conditioning factor or gain G as detailed below.
FIG. 2 depicts an advantageous refinement of an adaptive real-time algorithm for the monitoring of power system oscillations as described in the aforementioned European Patent Application EP-A 1 489 714, the disclosure of which is incorporated herein for all purposes by way of reference. In initialization step 30, the tuning parameters tp2 to be used with system quantity y2 are determined according to the disclosure, i.e. copied from tuning parameters tp1 determined previously for a different system quantity y1. Step 30 includes an initial determination of the conditioning gain G2, based e.g. on an off-line analysis of a limited number of measured values {y1'}, {y2'} of the system quantities y1, y2 under consideration, and involving filtering and statistics steps as described in the following. During the repeated execution of the algorithm, new values y2(k) of the second system quantity y2 are measured in measurement step 31 with a sampling or update frequency of 1/Ts. The series of measured values of y2(k) is then band-pass filtered in filtering step 32, wherein the cut-off frequencies fL, fH as tuning parameters have been introduced above, to yield a series of filtered values of y2(k). A statistical measure of this series of filtered values is determined in statistics step 33 for an eventual update of the conditioning gain G2. Finally, the series of filtered values of y2(k) is re-scaled with the actual value of the conditioning gain G2 in scaling step 34. If the latest measured, filtered and scaled value y2(k) exceeds a certain threshold, and/or if some counter indicates so, the series of filtered and scaled values of y2(k) is further exploited in a model parameter update step 35 as known in the art.
In more detail, the band-pass filtering step 32 prior to re-scaling removes the DC components below the lower cut-off frequency fL of e.g. 0.1 Hz and the higher frequencies above the upper cut-off frequency fH of e.g. 2 Hz. The fact that the typical frequencies of power system oscillations are known allows the band-pass range to be defined as indicated; however, the cut-off frequencies fL, fH can at any time be adapted if e.g. the results of the recursive algorithm indicate to do so.
Statistical measures of a band-pass filtered series s(k) of values measured during a period T that can be considered to initialize or update the conditioning gain G2 comprise e.g. the maximum signal power, the mean value or the root-mean-square value as follows:
Sr = max_{0<k<T} s(k)  (maximum value)
Sr = (1/T) * sum_{k=1..T} s(k)  (mean value)
Sr = sqrt( (1/T) * sum_{k=1..T} s(k)^2 )  (root-mean-square value)
The conditioning gain G2 can be calculated from the respective statistical measures Sr1, Sr2 of the first and second system quantity y1, y2 under consideration by division: G2 = Sr1/Sr2. In case of a low signal-to-noise ratio, e.g. in case of a fault of a measuring unit, the incoming signal, i.e. some subsequent measured values of the system quantity, may temporarily consist of noise with a mean value close to zero rather than of realistic data. It is then advantageous to consider all measurements to equal exactly zero, otherwise the dominant frequency of the noise is estimated rather than the dominant frequency of the measured signal. Based on an observation of the average signal power, a threshold is fixed, and the estimated model parameters will be frozen (not updated) if the actual signal power is lower than the threshold.
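The following short sketch (added for illustration; the variable names and numerical values are hypothetical, chosen to mirror the worked example below) shows how such a conditioning gain and a signal-power freeze could be computed from the RMS measures of two band-pass-filtered signal segments:

import numpy as np

def rms(s):
    # root-mean-square measure Sr of a filtered signal segment s(k)
    return np.sqrt(np.mean(np.square(s)))

t = np.arange(0, 80, 0.05)                      # 1600 samples, Ts = 0.05 s
y1_seg = 0.005 * np.sin(2 * np.pi * 0.45 * t)   # e.g. voltage deviation in p.u.
y2_seg = 60.0  * np.sin(2 * np.pi * 0.45 * t)   # e.g. power-flow deviation in MW

Sr1, Sr2 = rms(y1_seg), rms(y2_seg)
G2 = Sr1 / Sr2                  # conditioning gain, here 0.005/60 = 8.3e-5
y2_scaled = G2 * y2_seg         # second signal rescaled to the range of the first

power_threshold = 1e-6          # hypothetical value
if rms(y2_scaled) ** 2 < power_threshold:
    update_model = False        # freeze the estimated model parameters
else:
    update_model = True         # pass y2_scaled to the recursive estimator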
In the following, an example shows the effectiveness of the proposed procedure, in which two completely different signals have been chosen and analysed with the developed tool for detection of
oscillations. Actually measured data comprise two series of 1600 values y1, y2 sampled at intervals of Ts = 0.05 sec, corresponding to a short data collection interval of 80 sec.
First system quantity y1: the input signal is an AC voltage with an RMS amplitude of 400 kV ± 2 kV, or 1 p.u. ± 0.005 in the conventional notation where 1 p.u. = 400 kV. This is depicted in FIG. 3, first plot. On the second plot, the filtered signal is depicted. With a certain set of a total of 19 tuning parameters tp1, the subsequent adaptive procedure results in the estimation of the dominant frequency f and its relative damping ξ as depicted in the third and fourth plot of FIG. 3, converging to values of f ≈ 0.45 Hz and ξ ≈ 17% well within the interval shown. The initial spikes in the two bottom plots are caused by the transient behaviour of the model parameter estimation algorithm when no additional information is a priori included and all estimated model parameters (a1, a2, . . . ) start from any initial value (here zero) and converge fast to their correct values.
Second system quantity y2: the input signal is the power flow in a power line with values of 1350 MW ± 60 MW, as depicted in FIG. 4, first plot. This kind of information is available to the commissioning engineer immediately after collecting a few samples and running a first analysis. According to the disclosure, the tuning parameters tp2 for this second system quantity based on power flow measurements are copied from the first set based on voltage measurements. The conditioning gain G2 to be used with the filtered second signal in this case can be calculated as G2 = 0.005/60 = 8.3e-5. As a result, the estimated oscillation parameters frequency and relative damping, using the second system quantity y2, visibly converge at a similar speed (FIG. 4, third and fourth plots) as the parameters from the recursive calculation based on the first system quantity y1 (FIG. 3). The simplified initialization thus has substantially minimized the working time and tuning effort, without negatively affecting the quality of the results.
It will be appreciated by those skilled in the art that the present disclosure can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The
presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the disclosure is indicated by the appended claims rather than the foregoing description, and all changes that come within the meaning and range of equivalence thereof are intended to be embraced therein.
LIST OF DESIGNATIONS [0031]
1 electric power system
10 generator
11 busbar
20 measuring unit
21 oscillation monitoring centre
[SciPy-dev] [SciPy-user] linsolve.factorized was: Re: Using umfpack to calculate a incomplete LU factorisation (ILU)
Robert Cimrman cimrman3@ntc.zcu...
Thu Mar 22 09:46:07 CDT 2007
Neilen Marais wrote:
>> Robert Cimrman wrote:
>> Well, I did it since I am going to need this, too :-)
>> In [3]:scipy.linsolve.factorized?
>> ...
>> Definition: scipy.linsolve.factorized(A)
>> Docstring:
>> Return a fuction for solving a linear system, with A pre-factorized.
>> Example:
>> solve = factorized( A ) # Makes LU decomposition.
>> x1 = solve( rhs1 ) # Uses the LU factors.
>> x2 = solve( rhs2 ) # Uses again the LU factors.
>> This uses UMFPACK if available.
> This is a useful improvement, thanks. But why not just extend
> linsolve.splu to use umfpack so we can present a consistent interface? The
> essential difference between factorized and splu is that you get to
> explicity control the storage of the LU factorisation and get some
> additional info (i.e. the number of nonzeros), whereas factorised only
> gives you a solve function. The actual library used to do the sparse LU is
> just an implementation detail that should abstracted wherever possible, no?
> If nobody complains about the idea I'm willing to implement it.
Sure, splu is an exception, every effort making it consistent is
welcome. But note that umfpack always gives you complete LU factors,
there is no ILU (drop-off) support - how would you tackle this?
Maybe change its name to get_superlu_obj or something like that, use
use_solver( useUmfpack = False ) at its beginning, and restore the
use_solver setting at the end?
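For reference, a minimal self-contained usage sketch of the factorized interface discussed in this thread (the small matrix is arbitrary; in later SciPy releases the function lives in scipy.sparse.linalg rather than scipy.linsolve):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import factorized   # scipy.linsolve.factorized in older releases

A = sparse.csc_matrix(np.array([[4.0, 1.0, 0.0],
                                [1.0, 3.0, 0.0],
                                [0.0, 0.0, 2.0]]))
solve = factorized(A)                   # LU decomposition done once (UMFPACK if available)
x1 = solve(np.array([1.0, 2.0, 3.0]))   # uses the LU factors
x2 = solve(np.array([4.0, 5.0, 6.0]))   # reuses the LU factors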
More information about the Scipy-dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2007-March/006783.html","timestamp":"2014-04-16T07:39:20Z","content_type":null,"content_length":"4829","record_id":"<urn:uuid:782c54c6-a1d4-4be4-aae7-b437f3e9594f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complete Boolean Algebra Examples
A selection of articles related to complete boolean algebra examples.
Original articles from our library related to the Complete Boolean Algebra Examples. See Table of Contents for further available material (downloadable resources) on Complete Boolean Algebra
Complete Boolean Algebra Examples is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Complete Boolean Algebra Examples books and related discussion.
Suggested Pdf Resources
This Chapter provides only a basic introduction to boolean algebra. This chapter uses the mapping method as an example of boolean function ...
well known that a σ-complete Boolean algebra has in general no such representation. For example.
THE SEQUENTIAL TOPOLOGY ON COMPLETE BOOLEAN ALGEBRAS 5. Example: Measure algebras.
A CHARACTERIZATION OF UNIVERSAL COMPLETE. BOOLEAN ALGEBRAS. J.
only if every subsequence of {xn} has a subsequence that converges to x algebraically. Example (Measure algebras).
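Two standard examples may help make the notion concrete. For any set X, the power set P(X) ordered by inclusion is a complete Boolean algebra, since arbitrary unions and intersections supply the required suprema and infima; the measure algebra of a σ-finite measure space (measurable sets modulo null sets) is another classical complete Boolean algebra. By contrast, the algebra of finite and cofinite subsets of an infinite set is a Boolean algebra that is not complete: the singletons of, say, the even natural numbers have no least upper bound within that algebra.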
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site. | {"url":"http://www.realmagick.com/complete-boolean-algebra-examples/","timestamp":"2014-04-20T14:27:42Z","content_type":null,"content_length":"27670","record_id":"<urn:uuid:313b2063-5243-411d-90b4-d9ea7ed9ed3f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
BIOL398-01/S11:Class Journal Week3
1. The purpose of this assignment was to know how to write and understand differential equations. Also, how matlab works with differential equations. The simulation allows us to understand how an
ordinary differential equation works and how it applies to modeling different areas of biology.
2. The part of this assignment that came easily to me was working with the simulation. It was easy since we did not have to write the actual program.
3. The part of this assignment that came most difficult was writing the ordinary differential equations. I was using the example from class, but it was hard to know when we needed to subtract the
concentrations and when it was needed to be added, or even included.
4. What I still do not understand is the writing of a differential equation. Specifically, what I mentioned in number 3. I feel that I wrote the differential equations in the wrong order, as in I
subtracted a number when it was supposed to be added. Also, in the first reaction I only wrote one rate of change equation and I was wondering if I needed one for the elements A and B. I did not
write any, but am I supposed to have d[A]/dt = -k_1[A][B]? The same equation would apply to d[B]/dt. I just was not sure whether to include them, since the reaction is only going in one direction. It makes
sense that they would have an equation, since they are also changing over time. I am just not sure.
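For reference, for a one-way reaction A + B → C with rate constant k, standard mass-action kinetics gives one rate equation per species: d[A]/dt = -k[A][B], d[B]/dt = -k[A][B], and d[C]/dt = +k[A][B]. So A and B each get their own (identical) equation even though the reaction only runs in one direction.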
5. The relationship between homeostasis and equilibrium is that homeostasis focuses on having a stable internal environment, which also means to be in equilibrium.
Alondra Vega 12:53, 30 January 2011 (EST) | {"url":"http://www.openwetware.org/wiki/BIOL398-01/S11:Class_Journal_Week3","timestamp":"2014-04-20T12:43:34Z","content_type":null,"content_length":"15469","record_id":"<urn:uuid:1fe589f2-79e7-442d-a1fb-76293a885441>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the sum up to \( n\) terms of the series: \[ \frac{1}{3} + \frac{2}{21} + \frac{3}{91} + \frac{4}{273} + \cdots \text{ up to n terms} \] PS: I encountered this problem on SPOJ (http://www.spoj.pl/problems/SUMUP/). Not much later I devised a solution which is currently #9 in the rankings (http://www.spoj.pl/ranks/SUMUP/), although there is much scope for further optimization. Good luck!
EDIT: #8 th now :)
is this beyond the scope of a standard calc with series class? I spent about 2 weeks with series so I dont know if I want to get into this:)
The mathematical part doesn't use any sort of fancy things, any high-school students should be able to do it :)
ok ok I'll try :P
is ans 1/2 ?
No, but for \(n=10000\) the answer is \(\frac 12 \)
hmm,,i used hit and trial i.e. checked each term..sum approaches 0.5 only when it gets bigger..
Nopes :) You have to derive a term of \(S_n \) summation upto \(n\) terms
i easily see the denominator proceeds as (n^2 +n +1)(n^2 -n +1).. dont know what comes next.. hold on,,leme try..
damn how do you easily see that?
somehow or the other way,,i get 1/2 only..please correct me where am going wrong(if i am :P) the given terms are (1/3)+(2/21)+(3/91)+(4/273)......... 1/(1*3) + 2/(3*7) + 3/(7*13) + 4/ (13*21) i
notice that 3-1 =2,,7-3 =4,, 13-7 =6 ,,and so on ,,i.e a table of 2 so i take 1/2 common ,,we have : 1/2( (1-1/3) + (1/3-1/7) + (1/7 -1/13).........) =1/2*1 =1/2 o.O
My head hurt but isn't it somewhere near 1/2 and 1/3
@zzr0ck3r i somehow saw that it is (n^2 +1)^2 - (n^2) dont know how,,even that amazes me!! :D
nice work
but @FoolForMath says its not 1/2..i must be wrong somewhere..hmm
and how do we know its 1/2 only upto n=10000?? sir?? you there?
i think no he is not here
code it and check
im guessing he would not have said it if it was not true:)
hmmm.. :)
\[S_{n} = \frac{n(n+1)}{2(n^{2}+n+1)}\]
ohh..i see,,i calcuated for infinite terms i.e n tends to infinity..hmm..not the general formula
how do you find summation (n/n^4 + n^2 +1) ??
@FoolAroundMath @FoolForMath
I will give my solution: \[T_{k} = \frac{1}{2}(\frac{1}{k^{2}-k+1}-\frac{1}{k^{2}+k+1})\] It is easy to see that \[(k+1)^{2} - (k+1) + 1 = k^{2} + k + 1\] Thus when summing to 'n' terms,
successive terms cancel. What remains is : \[S = \frac{1}{2}(1 - \frac{1}{n^{2}+n+1})\]
ohh lol..yes,,damn i missed that..nice :)
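For completeness, the closed form can be checked against the expression posted earlier: \[ S_n = \frac{1}{2}\left(1 - \frac{1}{n^{2}+n+1}\right) = \frac{n^{2}+n}{2(n^{2}+n+1)} = \frac{n(n+1)}{2(n^{2}+n+1)}, \] which gives \(S_1 = 1/3\) (the first term) and tends to \(1/2\) as \(n \to \infty\), consistent with the numerical observation above. The general term likewise checks out: \( \frac{1}{2}\left(\frac{1}{k^2-k+1}-\frac{1}{k^2+k+1}\right) = \frac{k}{k^4+k^2+1} \), e.g. \(k=2\) gives \(2/21\).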
Cubic Bezier Curves
December 6th 2008, 10:38 AM #1
Dec 2008
Cubic Bezier Curves
I'm trying to implement a program to draw some bi-cubic bezier curves, however there seem to be some gaps in my notes on how exactly one caculates the (x,y) coords for points on regular cublic
bezier curves (a necessary subset). Pulled from my notes:
\[ P(u) = \begin{bmatrix} u^3 & u^2 & u & 1 \end{bmatrix} \begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} P_0 \\ P_1 \\ P_2 \\ P_3 \end{bmatrix} \]
I have (x,y,z,w) for each of the 4 control points, though only x and y are variable (z=0, w=1, neither probably applies to this problem). The center matrix I know is the Bezier matrix, which is
constant. I'm trying to get 10 steps out of this, so I think this means U increments by .1 on each of 10 steps. Problem is, I have no idea what the P values are nor the outputs of the function
Question resolved. P is X or Y of each control point, output is X or Y value, calculated independently.
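For reference, multiplying out the matrix product gives the equivalent Bernstein form, \( P(u) = (1-u)^3 P_0 + 3u(1-u)^2 P_1 + 3u^2(1-u) P_2 + u^3 P_3 \), evaluated separately for the x and y coordinates of the control points; stepping \(u\) through 0, 0.1, 0.2, ..., 1.0 then produces the sample points along the curve.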
My professor has given me this programming assignment to be done in C++ (which I have never really used, but I know the basic syntax) Here is the problem:
We are given a chart with 10 cities and how long it takes to get from one to the other, we have to determine what would be the fifth shortest route to take and the third longest, visiting every
city starting from 'A' and ending in 'A'.
It looks to me like I'm going to have to make a vector to keep track of all of the trips, and to determine every trip possible I will need a permutations function (which I have already), and to
keep track of all of the distances I would need an 8X8 matrix. Using this method, I do think I know everything that's going to have to done, but it just seems like a lot (to me), I just wanted to
make sure I am on the right track before I get started. Also after I finish this, I'm going to have to write the program again in fortran, is that going to be a lot more difficult? Thanks for any
help anybody offers!
"Using this method, I do think I know everything that's going to have to done, but it just seems like a lot (to me), I just wanted to make sure I am on the right track before I get started."
Programming doesn't work that way. You make a plan, you start coding, and if needed you change your plan as you go along.
It's always good to plan first.
You should not need vectors. A 2-dimensional array (matrix) should do.
I haven't really thought deeply about this... or drawn myself any pictures... but I think you need a 10 x 10 matrix, then you ignore 10 of 'em (point A to A, etc) Actually, you only need half of
the matrix, because A-to-B is the same distance as B-to-A.
Fortran shouldn't any more difficult. It all depends on which language the programmer is more familiar with. I don't remember any Fortran, but it seemed easier to learn than C/C++.
Oops, I meant 8 cities, I don't know why I said 10, anyway though, I planned to use the vector to keep track of the order in which the trips would take, because I need the 5th and 3rd not the
fastest and longest... Thanks for the input!
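In case it helps anyone finding this thread later, here is a language-agnostic sketch of the brute-force approach described above, written in Python for brevity; the distance matrix is a placeholder, so fill in the values from the assignment chart:

from itertools import permutations

# dist[i][j] = travel time between city i and city j; 8x8, symmetric,
# with city 0 = 'A'.  The zeros below are placeholders.
dist = [[0] * 8 for _ in range(8)]

def trip_length(order):
    route = (0,) + order + (0,)          # start and end at city A
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

lengths = sorted(trip_length(p) for p in permutations(range(1, 8)))  # 7! = 5040 trips
fifth_shortest = lengths[4]   # note: ties and reversed routes are counted separately
third_longest = lengths[-3]
print(fifth_shortest, third_longest)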
I give up
This was supposed to be a reply to my other thread, but I didn't click reply, sorry everyone.
>>> This was supposed to be a reply to my other thread,
And now it is! | {"url":"http://cboard.cprogramming.com/cplusplus-programming/38544-oops-printable-thread.html","timestamp":"2014-04-21T08:02:05Z","content_type":null,"content_length":"9261","record_id":"<urn:uuid:3f8d3850-c222-47ef-b5d0-c52c12e92c5b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
ASCII Text
Marc Joye, Sung-Ming Yen, "Optimal Left-to-Right Binary Signed-Digit Recoding," IEEE Transactions on Computers, vol. 49, no. 7, pp. 740-748, July, 2000.
BibTeX
@article{ 10.1109/12.863044,
author = {Marc Joye and Sung-Ming Yen},
title = {Optimal Left-to-Right Binary Signed-Digit Recoding},
journal ={IEEE Transactions on Computers},
volume = {49},
number = {7},
issn = {0018-9340},
year = {2000},
pages = {740-748},
doi = {http://doi.ieeecomputersociety.org/10.1109/12.863044},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote
TY - JOUR
JO - IEEE Transactions on Computers
TI - Optimal Left-to-Right Binary Signed-Digit Recoding
IS - 7
SN - 0018-9340
EPD - 740-748
A1 - Marc Joye,
A1 - Sung-Ming Yen,
PY - 2000
KW - Computer arithmetic
KW - converter
KW - signed-digit representation
KW - redundant number representation
KW - SD2 left-to-right recoding
KW - canonical/minimum-weight/nonadjacent form
KW - exponentiation
KW - elliptic curves
KW - smart-cards
KW - cryptography.
VL - 49
JA - IEEE Transactions on Computers
ER -
Abstract—This paper describes new methods for producing optimal binary signed-digit representations. This can be useful in the fast computation of exponentiations. Contrary to existing algorithms,
the digits are scanned from left to right (i.e., from the most significant position to the least significant position). This may lead to better performances in both hardware and software.
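As a small illustration (not drawn from the paper itself): the ordinary binary representation of 15 is 1111, with Hamming weight four, whereas the signed-digit representation 1 0 0 0 -1 (i.e., 16 - 1) has weight two; minimal-weight recodings of this kind reduce the number of additions (or multiplications) performed during an exponentiation or elliptic-curve scalar multiplication.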
[1] A. Avizienis, “Signed-Digit Number Representations for Fast Parallel Arithmetic,” IRE Trans. Electronic Computers, vol. 10, pp. 389-400, 1961.
[2] I. Koren, Computer Arithmetic Algorithms.Englewood Cliffs, N.J.: Prentice Hall, 1993.
[3] N. Takagi,H. Yasuura,, and S. Yajima,“High-speed VLSI multiplication algorithm with a redundant binary addition tree,” IEEE Trans. Computers, vol. 34, no. 9, pp. 789-796, Sept. 1985.
[4] S. Kuninobu, T. Nishiyama, H. Edamatsu, T. Taniguchi, and N. Takagi, “Design of High Speed MOS Multiplier and Divider Using Redundant Binary Representation,” Proc. Eighth Symp. Computer
Arithmetic, pp. 80-86, 1987.
[5] Y. Harata, Y. Nakamura, H. Nagese, M. Takigawa, and N. Takagi, "A High-Speed Multiplier Using a Redundant Binary Adder Tree," IEEE J. Solid-State Circuits, vol. 22, pp. 28-34, Feb. 1987.
[6] A. Vandemeulebroecke,E. Vanzieleghem,T. Denayer, and P.G.A. Jespers,"A New Carry-Free Diversion Algorithm and Its Application to a Single-Chip 1024-b RSA Processor," IEEE J. Solid State Circuits,
vol. 25, no. 3, pp. 748-765, 1990.
[7] K. Hwang,Computer Arithmetic, Principles, Architecture, and Design.New York: John Wiley&Sons, 1979.
[8] S.M. Yen, C.S. Laih, C.H. Chen, and J.Y. Lee, “An Efficient Redundant-Binary Number to Binary Number Converter,” IEEE J. Solid-State Circuits, vol. 27, no. 1, pp. 109-112, 1992.
[9] R.L. Rivest,A. Shamir, and L.A. Adleman,"A Method for Obtaining Digital Signatures and Public Key Cryptosystems," Comm. ACM, vol. 21, pp. 120-126, 1978.
[10] Ç.K. Koç, “High-Speed RSA Implementations,” Technical Report TR 201, RSA Laboratories, Nov. 1994.
[11] D. Knuth, The Art of Computer Programming, Vol. 2, Addison-Wesley, Reading, Mass., 1998.
[12] P. Downey, B. Leong, and R. Sethi, “Computing Sequences with Addition Chains,” SIAM J. Computing, vol. 10, pp. 638-646, 1981.
[13] J. Bos and M. Coster, "Addition Chain Heuristics," Proc. Crypto '89, Lecture Notes in Computer Science, vol. 435, pp. 400-407. Springer-Verlag, 1990.
[14] T. ElGamal, A Public-Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms IEEE Trans. Information Theory, vol. 31, no. 4, pp. 469-472, 1985.
[15] A.J. Menezes, P.C. van Oorschot, and S.A. Vanstone, Handbook of Applied Cryptography, CRC Press, Boca Raton, Fla., 1996, pp. 543-590.
[16] E.F. Brickell, D.M. Gordon, K.S. McCurley, and D.R. Wilson, “Fast Exponentiation with Precomputation: Algorithms and Lower Bounds,” preprint, Mar. 1995. An earlier version appeared in Proc.
EUROCRYPT '92.
[17] F. Morain and J. Olivos, “Speeding Up the Computations on an Elliptic curve Using Addition-Subtraction Chains,” Theoretical Informatics and Applications, vol. 24, pp. 531-543, 1990.
[18] A.D. Booth, “A Signed Binary Multiplication Technique,” Quarterly J. Mechanics and Applied Math., vol. 4, pp. 236-240, 1951.
[19] G.W. Reitwiesner, “Binary Arithmetic,” Advances in Computers, vol. 1, pp. 231-308, 1960.
[20] J. Jedwab and C.J. Mitchell, Minimum Weight Modified Signed-Digit Representations and Fast Exponentiation Electronics Letters, vol. 25, no. 17, pp. 1171-1172, 1989.
[21] H. Cohen, A Course in Computational Algebraic Number Theory. Springer-Verlag, 1993.
[22] Ö. Egecioglu and Ç. K. Koç, "Exponentiation Using Canonical Recoding," Theoretical Computer Science, vol. 129, no. 2, pp. 407-417, 1994.
[23] B.S. Kaliski Jr., “The Montgomery Inverse and Its Applications,” IEEE Trans. Computers, vol. 44, no. 8, pp. 1,064-1,065, Aug. 1995.
[24] S. Arno and F.S. Wheeler, Signed Digit Representations of Minimal Hamming Weight IEEE Trans. Computers, vol. 42, no. 8, pp. 1007-1010, Aug. 1993.
[25] D.M. Gordon, “A Survey of Fast Exponentiation Methods” J. Algorithms, vol. 27, no. 1, pp. 129-146, Apr. 1998.
[26] W.E. Clark and J.J. Liang, On Arithmetic Weight for a General Radix Representation of Integers IEEE Trans. Information Theory, vol. 19, no. 6, pp. 823-826, 1973.
[27] H. Wu and M.A. Hasan, Efficient Exponentiation of a Primitive Root in$GF(2^m)$ IEEE Trans. Computers, vol. 46, no. 2, pp. 162-172, Feb. 1997.
[28] J. Omura and J. Massey, “Computational Method and Apparatus for Finite Field Arithmetic,” U.S. Patent #4,587,627, 1986.
[29] Ç.K. Koç, “Parallel Canonical Recoding,” Electronics Letters, vol. 32, pp. 2,063-2,065, 1996.
[30] Computer Arithmetic, E.E. Swartzlander Jr., ed., vols. 1 and 2. IEEE CS Press, 1990.
Index Terms:
Computer arithmetic, converter, signed-digit representation, redundant number representation, SD2 left-to-right recoding, canonical/minimum-weight/nonadjacent form, exponentiation, elliptic curves,
smart-cards, cryptography.
Marc Joye, Sung-Ming Yen, "Optimal Left-to-Right Binary Signed-Digit Recoding," IEEE Transactions on Computers, vol. 49, no. 7, pp. 740-748, July 2000, doi:10.1109/12.863044
Approximate joint probabilities using marginal probabilities
June 23rd 2011, 04:04 AM #1
Jun 2011
Let $\mathbf Y=(Y_1,...,Y_n)^T$ be an n-dimensional Gaussian random vector with known mean vector and covariance matrix. I am interested in a joint probability that the elements are contained in
some intervals, i.e. $P(Y_1 \in A_1,...,Y_n \in A_n)$.
In my particular case I have 2-dimensional marginal probabilities, which take the pairwise correlations into account, readily available, i.e. $P(Y_1 \in A_1, Y_2 \in A_2)$, $P(Y_2 \in A_2, Y_3 \in A_3)$, $P(Y_1 \in A_1, Y_3 \in A_3)$ and so on.
Is there a nice way to (roughly) approximate the n-dimensional joint probability using several (or all) of the 2D-marginals?
Deriving normal and shear stresses
When we talk about shear stresses in a fluid, we find that the shear stress is given by
\tau_{xy} = \mu(\partial_y u + \partial_x v) = \tau_{yx}
This relation we get when only looking at one side of our fluid-"cube". Now, in order to take into account the opposite side we assume that the fluid element is so small that the shear stress is
constant, leading to the average
\tau_{xy} = \frac{1}{2}2\mu(\partial_y u + \partial_x v) = \mu(\partial_y u + \partial_x v) = \tau_{yx}
Applying the same logic to the normal stresses gives me
\tau_{xx} = \frac{1}{2}\mu(\partial_x u + \partial_x u) = \mu(\partial_x u)
However, in my textbook (White) it is given as
\tau_{xx} = 2\mu(\partial_x u)
Where does this extra factor of 2 come from in the normal stress? | {"url":"http://www.physicsforums.com/showthread.php?s=4b5fd51f23c0e3810a99c1767e66a3f3&p=4607666","timestamp":"2014-04-19T12:38:46Z","content_type":null,"content_length":"20366","record_id":"<urn:uuid:8932a7dd-12f5-4696-926d-63db9913fcd2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
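For reference, the Newtonian constitutive relation for an incompressible fluid is \tau_{ij} = \mu(\partial_j u_i + \partial_i u_j) for every pair of indices, with no extra averaging step; setting i = j = x gives \tau_{xx} = \mu(\partial_x u + \partial_x u) = 2\mu \partial_x u directly, which is where the factor of 2 in the normal stress comes from.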
Jeans instability
I'm trying to understand all the properties of the Jeans instability, the Jeans mass and the Jeans length. I understand the mathematics behind it, though not all the variables. There is an [itex]m_{H}[/itex] (I've also seen it written as [itex]m_{p}[/itex]) in the Jeans length and Jeans mass. The formula is as follows:
[itex]M_{J}=\left(\frac{5 k_{B} T}{G \mu m_{H}}\right)^{3/2}\left(\frac{3}{4\pi\rho_{0}}\right)^{1/2}[/itex], and I know that μ is the average molecular mass.
Can someone help me understand this please? I'm a bit confused. I tried looking at the units, though couldn't figure it out.
Also, another question about it: Does it work for all masses of ISM? From what I picked up in class, most things don't work for very large objects in astro, and they need more "fudge factors" to
work. Is that the same for the Jeans instabilities? | {"url":"http://www.physicsforums.com/showthread.php?t=649337","timestamp":"2014-04-16T13:47:06Z","content_type":null,"content_length":"22517","record_id":"<urn:uuid:d959aeec-387a-4838-bcba-ecbe888eee54>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
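For reference, m_H is the mass of a hydrogen atom (m_p, the proton mass, is often used interchangeably), so μ·m_H is the mean mass per gas particle. A quick dimensional check of the formula: k_B T has units of energy, kg m^2 s^-2, and G μ m_H has units m^3 s^-2, so their ratio carries kg m^-1; raising that to the 3/2 power and multiplying by (3/(4πρ_0))^(1/2), which carries (m^3/kg)^(1/2), leaves kg, as required for a mass.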
Octave is a free Matlab workalike, available from http://octave.sourceforge.net/.
Octave has been built with Matlab compatibility in mind, however there are a few differences. For an overview of the key differences check this website.
Octave and qtoctave (an Octave GUI) are currently installed on the Power 7 Linux (see Getting Started on the Power755 Cluster). The login node is p2n14.canterbury.ac.nz.
Getting Started with Octave
To check the compatibility of your Matlab code with Octave, we recommend that you first test a very small version of your code on the login node (p2n14) that will take only a couple of minutes at most to run. This is to ensure that all the functions in your program work with Octave as you expect them to with Matlab.
First in a terminal, login to the Power 7 login node:
> ssh username@p2n14.canterbury.ac.nz
Then you can call octave by simply running the following commands in the same terminal:
> module load octave
> octave
GNU Octave, version 3.8.0
Copyright (C) 2013 John W. Eaton and others.
This is free software; see the source code for copying conditions.
There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. For details, type 'warranty'.
Octave was configured for "powerpc64-unknown-linux-gnu".
Additional information about Octave is available at http://www.octave.org.
Please contribute if you find this software useful.
For more information, visit http://www.octave.org/get-involved.html
Read http://www.octave.org/bugs.html to learn how to submit bug reports.
For information about changes from previous versions, type 'news'.
You can run your program the same way you would with Matlab.
Alternatively you could use the Octave GUI by doing the following:
> ssh username@p2n14.canterbury.ac.nz -X
And then call the GUI
> module load octave
> qtoctave
Note that using the GUI with X forwarding might not be ideal if you have a slow internet connection. It is best to use this option while connecting from the Canterbury University campus.
Once your program works you can start full simulations on the compute nodes via the scheduling system LoadLeveler.
Submitting jobs with Octave
Once you have ensured that your program is fully compatible with Octave, you can submit jobs to the scheduling system LoadLeveler and use a more advanced version of Octave (3.6.4).
Serial jobs
The following script RunOctave_serial.ll is an example script that you can use and modify appropriately
# Example Serial LoadLeveler Job with Octave script
# @ shell = /bin/bash
# @ job_name = octave_job
# @ job_type = parallel
# @ wall_clock_limit = 00:30:00
# @ class = p7linux
# Group and project account number
# @ group = UC
# Change your project account number below
# @ account_no = bfcs00000
# @ output = $(job_name).$(schedd_host).$(jobid).out
# @ error = $(job_name).$(schedd_host).$(jobid).err
# @ notification = complete
# @ notify_user = user@email.address #change to your email address
# @ total_tasks = 1 # In serial leave the number of tasks to 1
# @ blocking = unlimited
# @ task_affinity = core(8) # See comment below on task affinities
# @ rset = rset_mcm_affinity
# @ mcm_affinity_options = mcm_mem_pref
# @ environment = COPY_ALL
# @ queue
# suggested environment settings:
export MEMORY_AFFINITY=MCM #to use the memory closest to the cpu
# All commands that follow will be run as part of my serial job
# Display name of host running serial job
# Display current time
# Run a serial octave program (do not include .m)
# To use octave version 3.8.0
module load octave
poe octave --eval myProgram
In this example, myProgram is the name of the Octave script you wish to run. Note that it cannot be a function call with parameters (e.g. myProgram(10,2,3)); if you wish to use a function, simply create a small program that contains a call to this function.
Note that you need to specify your group with # @ group = UC, from NZ, NZ_merit, UC, or UC_merit. If you are unsure of your group, run the command "whatgroupami" on the login node. Please see
CurrentQuestions for more details.
To submit your job:
1. Login to the login node,
2. simply go to the directory where both RunOctave_serial.ll and your program are,
3. enter the command line:
> llsubmit RunOctave_serial.ll
4. To check if your job is running or listed in the queue:
> llq
or alternatively to only see your own jobs
> llq -u username
Octave uses a multi-threaded version of the ATLAS library (in place of the reference BLAS and LAPACK) and can take advantage of free parallelisation under the hood for all calculations involving matrix or vector operations. To give a task more than one core (task_affinity = core(X)) and therefore improve the speed of your code, it is recommended that you check how much faster your code becomes as you increase the value of core(X), e.g. task_affinity = core(1), core(8), core(16), core(32). On the Power 7 you can have a maximum of 32 cores per task. If you don't see a speed-up between core(8) and core(16), keep the core option at core(8). Typically, the larger the matrix sizes in the code (over 1000 by 1000), the more Octave will take advantage of a larger number of cores. In some cases you can see a speed-up of roughly 10 times when using core(8) instead of core(1).
Note that Matlab typically uses a multi-threaded version of BLAS and LAPACK (MKL) and takes advantage of multiple cores on your PC in a similar way.
Parallel jobs with pMatlab/MatlabMPI libraries
Several open-source MPI libraries have been and are being developed for Octave and Matlab. Some rely on a "true" message passing interface, while others rely on a "fake" message passing interface based on file I/O and a shared file system.
MatlabMPI, developed by MIT, is a set of Matlab scripts that implement a subset of MPI and allow any Matlab/Octave program to be run on a parallel computer. The advantages are that it is completely platform independent (no compiling is involved) and that it only relies on a shared file system, besides the presence of Octave or Matlab.
Furthermore, MIT has also developed a Distributed Computing Library, "pMatlab", that provides a user-friendly interface to work with distributed numerical arrays specified by a map construct. pMatlab relies on MatlabMPI and abstracts the communication layer from the application layer, so the user does not have to worry about parallel programming concepts such as deadlocks, barriers and explicit message passing.
For more information about pMatlab check http://www.ll.mit.edu/mission/isr/pmatlab/pmatlab.html, and specifically for an introduction to parallel programming with pMatlab see the following
documentation pMatlab_intro.pdf.
The advantage of using pMatlab/MatlabMPI libraries for parallel and memory distributed computing is that it is highly compatible with Matlab and Octave.
You can download examples from the MIT website ( http://www.ll.mit.edu/mission/isr/pmatlab/pmatlab.html) or alternatively on our Power7 system:
1. Log in to p2n14 via a terminal or Putty:
ssh username@p2n14.canterbury.ac.nz
2. Copy the "Example" folder from /usr/local/pkg/pMatlab/version/Examples into your home directory or a directory of your choice.
scp -r /usr/local/pkg/pMatlab/version/Examples .
scp -r /usr/local/pkg/pMatlab/version/Examples MY_DIRECTORY/.
Basic concepts for running pMatlab jobs on your local PC
In pMatlab, the user can issue pMatlab commands by invoking scripts or functions that contain pMatlab code. The following script RUN.m can be executed at the command line instead of directly calling
the parallel function written with pMatlab.
% pMatlab: Parallel Matlab Toolbox
% Software Engineer: Ms. Nadya Travinin (nt@ll.mit.edu)
% Architect: Dr. Jeremy Kepner (kepner@ll.mit.edu)
% MIT Lincoln Laboratory
% RUN is a generic script for running pMatlab scripts.
% Code to run.
mFile = 'pFFT';
% Interactive runs:
% Define number of processors to use.
Nprocs = 2;
% Define machines, empty means run locally.
machines = {};
% Run the script.
disp(['Running: ' mFile ' on ' num2str(Nprocs) ' processors']);
eval(pRUN(mFile, Nprocs, machines));
For example, let's look at the following Fast Fourier Transform code with pMatlab. The program is called pFFT.
% Fast Fourier Transform with pMatlab
% To run in serial without distributed arrays, set
% PARALLEL = 0
% At the Matlab/Octave prompt type
% pFFT
% To run in serial with distributed arrays, set
% PARALLEL = 1
% At the Matlab/Octave prompt type
% pFFT
% To run in parallel with distributed arrays
% at the Matlab prompt type
% eval(pRUN('pFFT',2,{}))
% Number of processors is Np (and is set automatically by pMatlab
% to Np=Nprocs when calling eval(pRUN('pFFT',Nprocs,machines)) )
N = 2^10; % NxN Matrix size.
% Turn parallelism on:1 or off:0.
PARALLEL = 1; % Can be 1 or 0.
% Create Maps.
mapX = 1; mapY = 1;
if (PARALLEL)
  % Break up channels.
  mapX = map([1 Np], {}, 0:Np-1);
  mapY = map([1 Np], {}, 0:Np-1);
end
% Allocate data structures.
X = rand(N,N,mapX);
Y = zeros(N,N,mapY);
% Do fft. Changes Y from real to complex.
Y(:,:) = fft(X);
% Finalize the pMATLAB program
To test your parallel program "interactively", you can start qtoctave and enter the following at the command prompt:
> RUN
where Nprocs and machines are defined as follows in RUN.m:
% Interactive runs:
% Define number of processors to use.
Nprocs = 2;
% Define machines, empty means run locally.
machines = {};
Alternatively, you can directly enter the following command line at the Octave prompt:
> eval(pRUN('pFFT', 2, {}));
This will run your pMatlab program pFFT.m with 2 processors on the login node (where Octave was started).
Submitting parallel Octave jobs (with pMatlab) with LoadLeveler
To submit your pMatlab program on the BlueFern Power7 systems, you will have to use a LoadLeveler script and will need to specify the number of processors (cores) you wish to use in this
LoadLeveler script only, with the option # @ total_tasks = 10 (for 10 processors/tasks).
The following script RunOctave_parallel.ll is an example script that you can use and modify appropriately:
# Example Parallel LoadLeveler Job with Octave script
# @ shell = /bin/bash
# @ job_name = octave_job
# @ wall_clock_limit = 02:00:00
# @ class = p7linux
# Group and project account number
# @ group = UC
# Change your project account number below
# @ account_no = bfcs00000
# @ output = $(job_name).$(schedd_host).$(jobid).out
# @ error = $(job_name).$(schedd_host).$(jobid).err
# @ notification = complete
# change to your email address
# @ notify_user = user@email.address
# CHANGE BELOW the number of tasks
# @ total_tasks = 10
# @ job_type = parallel
# @ blocking = unlimited
# @ task_affinity = core(2) # See comment below on task affinities
# @ rset = rset_mcm_affinity
# @ mcm_affinity_options = mcm_mem_pref
# @ environment = COPY_ALL
# @ queue
# Specify the name of your parallel Octave job below
PARALLEL_CODE='pStream' # CHANGE HERE to your own Octave script (do not include .m)
export PARALLEL_CODE
# Display name of host running master
hostname
# Display current time
date
# To use octave version 3.8.0, load the appropriate module
module load octave
# Serial preparation Step to set-up a parallel Octave job ## Do NOT change
octave --eval Mpi_Octave_init
# Execute the parallel Octave job using the distributed pMatlab library
poe octave --eval pMatlab_Octave_run -labelio yes -stdoutmode ordered ## DO NOT change
# Note that -labelio yes -stdoutmode ordered is to see the outputs from each worker and leader ordered into 1 output file
Note that you need to specify your group with # @ group = UC, choosing from NZ, NZ_merit, UC, or UC_merit. If you are unsure of your group, run the command "whatgroupami" on the login node. Please
see CurrentQuestions for more details.
Octave uses a multi-threaded version of the ATLAS library (instead of BLAS and LAPACK) and can take advantage of free parallelisation under the hood for all calculations involving matrix or vector operations. To use more threads than a single core provides (task_affinity=core(1)) and therefore improve the speed of your code, it is recommended that you check how much faster your code runs when increasing the value of core(X), e.g. task_affinity=core(1), core(8), core(16), core(32). On the Power 7 you can have a maximum of 32 cores per task. If you don't see a speed-up between core(8) and core(16), keep the core option at core(8). Typically, the larger the matrix sizes in the code (over 1000 by 1000), the more Octave will benefit from a larger number of cores. In some cases the code runs roughly 10 times faster with core(8) than with core(1).
Note that Matlab typically uses a multi-threaded version of BLAS and LAPACK (MKL) and takes advantage of the multiple cores of your PC in a similar way.
To submit your parallel Octave job:
1. Login to the login node,
2. simply go to the directory where both RunOctave_parallel.ll and your program are,
3. enter the command line:
> llsubmit RunOctave_parallel.ll
4. To check if your job is running or listed in the queue:
> llq
or alternatively to only see your own jobs
> llq -u username
Submitting parallel Octave jobs (with MatlabMPI) with LoadLeveler
To submit your MatlabMPI program on the BlueFern Power7 systems, you will have to use a LoadLeveler script and will need to specify the number of processors (cores) you wish to use in this
LoadLeveler script only, with the option # @ total_tasks = 10 (for 10 processors/tasks).
The following script RunOctave_parallel.ll is an example script that you can use and modify appropriately:
# Example Parallel LoadLeveler Job with Octave script
# @ shell = /bin/bash
# @ job_name = octave_job
# @ wall_clock_limit = 02:00:00
# @ class = p7linux
# Group and project account number
# @ group = UC
# Change your project account number below
# @ account_no = bfcs00000
# @ output = $(job_name).$(schedd_host).$(jobid).out
# @ error = $(job_name).$(schedd_host).$(jobid).err
# @ notification = complete
# change to your email address
# @ notify_user = user@email.address
# CHANGE BELOW the number of tasks
# @ total_tasks = 10
# @ job_type = parallel
# @ blocking = unlimited
# @ task_affinity = core(2) # See comment below on task affinities
# @ rset = rset_mcm_affinity
# @ mcm_affinity_options = mcm_mem_pref
# @ environment = COPY_ALL
# @ queue
# Specify the name of your parallel Octave job below
PARALLEL_CODE='basic_app' # CHANGE HERE to your own Octave MatlabMPI script (do not include .m)
export PARALLEL_CODE
# Display name of host running master
hostname
# Display current time
date
# To use octave version 3.8.0, load the appropriate module
module load octave
# Serial preparation Step to set-up a parallel Octave job ## Do NOT change
octave --eval Mpi_Octave_init
# Execute the parallel Octave job using the MPI MatlabMPI library
poe octave --eval Mpi_Octave_run -labelio yes -stdoutmode ordered ## DO NOT change
# Note that -labelio yes -stdoutmode ordered is to see the outputs from each worker and leader ordered into 1 output file
Note that you need to specify your group with # @ group = UC, choosing from NZ, NZ_merit, UC, or UC_merit. If you are unsure of your group, run the command "whatgroupami" on the login node. Please
see CurrentQuestions for more details.
Octave uses a multi-threaded version of the ATLAS library (instead of BLAS and LAPACK) and can take advantage of free parallelisation under the hood for all calculations involving matrix or vector operations. To use more threads than a single core provides (task_affinity=core(1)) and therefore improve the speed of your code, it is recommended that you check how much faster your code runs when increasing the value of core(X), e.g. task_affinity=core(1), core(8), core(16), core(32). On the Power 7 you can have a maximum of 32 cores per task. If you don't see a speed-up between core(8) and core(16), keep the core option at core(8). Typically, the larger the matrix sizes in the code (over 1000 by 1000), the more Octave will benefit from a larger number of cores. In some cases the code runs roughly 10 times faster with core(8) than with core(1).
Note that Matlab typically uses a multi-threaded version of BLAS and LAPACK (MKL) and takes advantage of the multiple cores of your PC in a similar way.
To submit your parallel Octave job:
1. Login to the login node,
2. simply go to the directory where both RunOctave_parallel.ll and your program are,
3. enter the command line:
> llsubmit RunOctave_parallel.ll
4. To check if your job is running or listed in the queue:
> llq
or alternatively to only see your own jobs
> llq -u username
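For reference, a MatlabMPI program such as the basic_app example mentioned above is an ordinary Octave/Matlab script in which each process queries its own rank and exchanges data through MPI_Send/MPI_Recv. The sketch below only indicates the typical structure under the standard MatlabMPI API (the message content and the tag value are arbitrary); consult the examples shipped with the library for authoritative usage:
% minimal_mpi.m -- skeletal structure of a MatlabMPI/Octave program
MPI_Init;                         % initialise MatlabMPI
comm = MPI_COMM_WORLD;            % default communicator
my_rank = MPI_Comm_rank(comm);    % rank of this process
nprocs = MPI_Comm_size(comm);     % total number of processes
tag = 1;                          % arbitrary message tag
if (my_rank == 0)
  % the leader sends a message to every worker
  for dest = 1:nprocs-1
    MPI_Send(dest, tag, comm, 'hello from rank 0');
  end
else
  % each worker receives the message from the leader
  msg = MPI_Recv(0, tag, comm);
  disp(msg);
end
MPI_Finalize;                     % clean up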
Octave versus Matlab and key differences on the Power 7
As highlighted previously, Octave is a free Matlab workalike, available from http://octave.sourceforge.net/.
Octave has been built with Matlab compatibility in mind, however there are a few differences. For an overview of the key differences check out http://en.wikibooks.org/wiki/MATLAB_Programming/
Below are some key differences to keep in mind when working with Octave on the Power 7.
Reading and Saving .mat files and compatibility with Matlab and Octave
If you run simulations with Octave and save the results in a .mat file you might also want to read and load the same file with Matlab on your PC for post-processing. Alternatively, you may have input
data created with Matlab that you wish to read on the Power 7 with Octave. Let's assume that we want to create "data.mat" containing two matrices A and B.
From Matlab to Octave
No specific options need to be added when saving or reading the .mat file: Octave will be able to read data.mat saved by Matlab on a different architecture.
% In Matlab
% Create some random matrices
A = rand(3,3);
B = rand(3,3);
% Save A and B in data.mat
% From Matlab: no specific options required to read in Octave
save('data.mat', 'A', 'B');
% In Octave
% Reading data.mat (which was saved with Matlab)
load('data.mat');
From Octave to Matlab
In order to read with Matlab a .mat file that was saved with Octave, you DO need to specify a particular format option: use "Matlab's v7 binary data format" when saving the .mat file (note
that you could also use format 4 or 6 instead of 7).
% In Octave
% Create some random matrices
A = rand(3,3);
B = rand(3,3);
% Save A and B in data.mat
% From Octave: use '-7' to read in Matlab
save('-7','data.mat', 'A', 'B');
% In Matlab
% Reading data.mat (which was saved with Octave)
load('data.mat');
Plotting and saving figures with Octave
When running simulations with Octave on the Power 7, it is strongly recommended to save your data in ".mat" format and to plot figures at a later post-processing stage. Octave uses gnuplot for plotting figures and does not recognise the ".fig" Matlab format. Furthermore, you can only save figures in specific image formats with Octave, e.g. eps or jpeg. The following highlights the syntax difference when saving figures:
% Plotting a figure (same syntax for Octave and Matlab)
plot(1:10, rand(1,10));
title('My plot');
% Saving a figure
% Matlab syntax (the binary .fig format is not readable by Octave)
saveas(gcf, 'myPlot.fig');
% Octave syntax (use an image format such as eps or jpeg)
print('myPlot.eps', '-deps');
ODE solvers in Octave and Matlab
ODE solvers are among the strongest differences between Octave and Matlab. The syntax, as well as the order of the input parameters, is different. See the example below:
Octave uses "lsode"
[x_sol,t]=lsode("my_ode_function_Octave",[x0;y0;z0; u0;v0;w0], time_vector);
([x0;y0;z0; u0;v0;w0] = initial values)
Matlab uses "ode23" (or ode45)
[t,x_sol]=ode23(@my_ode_function_Matlab,[0 final_time],[x0;y0;z0; u0;v0;w0]);
The two ODE right-hand-side functions take their arguments in a different order:
function z=my_ode_function_Octave(x,t)
function z=my_ode_function_Matlab(t,x) | {"url":"http://wiki.canterbury.ac.nz/plugins/viewsource/viewpagesrc.action?pageId=24216704","timestamp":"2014-04-20T11:32:32Z","content_type":null,"content_length":"37667","record_id":"<urn:uuid:ec77547c-77cf-4b42-8459-307469cdf8d6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
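As a concrete illustration of the different argument ordering (the function names below are placeholders and a simple harmonic oscillator is used purely as an example), the same system could be written for both solvers as:
% Octave version: lsode expects the right-hand side as f(x, t)
function z = my_ode_function_Octave(x, t)
  z = zeros(2, 1);
  z(1) = x(2);      % dx/dt = v
  z(2) = -x(1);     % dv/dt = -x
end
% Matlab version: ode23/ode45 expect the right-hand side as f(t, x)
function z = my_ode_function_Matlab(t, x)
  z = [x(2); -x(1)];
end
% Octave call:  x_sol = lsode("my_ode_function_Octave", [1; 0], linspace(0, 10, 101));
% Matlab call:  [t, x_sol] = ode23(@my_ode_function_Matlab, [0 10], [1; 0]);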
Silicon and Germanium Nanostructures for Photovoltaic Applications: Ab-Initio Results
Nanoscale Res Lett. 2010; 5(10): 1637–1649.
Most electric energy is currently produced from fossil fuels, and the search for viable alternatives is intense. The most appealing and promising technology is photovoltaics, which will become truly mainstream when its cost becomes comparable to that of other energy sources. One way is to significantly enhance device efficiencies, for example by increasing the number of band gaps in multijunction solar cells or by favoring charge separation in the devices. This can be done by using cells based on nanostructured semiconductors. In this paper, we present ab-initio results on the structural, electronic and optical properties of (1) silicon and germanium nanoparticles embedded in wide band gap materials and (2) mixed silicon-germanium nanowires. We show that theory can help in understanding the microscopic processes important for device performance. In particular, for embedded Si and Ge nanoparticles we calculated the dependence of the absorption threshold on size and oxidation, the role of crystallinity and, in some cases, the recombination rates; for the mixed nanowires, we demonstrated that those with a clear interface between Si and Ge show not only a reduced quantum confinement effect but also display a natural geometrical separation between electron and hole.
Keywords: Silicon, Germanium, Nanocrystals, Nanowires, Nanophotonics, Photovoltaics
Photovoltaic (PV) energy is attracting large interest, mainly due to the demand for renewable energy sources. It will become mainstream when its cost becomes comparable to that of other sources. At the moment it is too expensive for competitive production. For this reason, intense research activity is of fundamental importance to develop efficient PV devices ensuring a low cost and a low
environmental impact. Until now three generations of solar cells have been envisaged [1]. Currently, PV production is 90% first-generation and is based on Si wafers. First generation refers to high
quality and hence low-defect single crystal devices and is slowly approaching the limiting efficiencies of about 31% [2] of single-band gap devices. These devices are reliable and durable, but half
of the cost is the Si wafer. The second generation of cells makes use of cheap semiconductor thin films deposited on substrates to produce low-cost devices of lower efficiency. These thin-film cells
account for around 5–6% of the market. For these second-generation devices, the cost of the substrate represents the cost limit and higher efficiency will be needed to maintain the cost-reduction
trend [3]. Third-generation cells use, instead, new technologies to produce high-efficiency devices [4,5]. They are photo-electrochemical cells based on dye-sensitized nanocrystalline wide bandgap
semiconductors [6] or multiple energy threshold devices based on nanocrystalline silicon for the widening of the absorbed solar spectrum, due to the quantum confinement (QC) effect that enlarges the
energy gap of the nanostructures, and for the use of excess thermal generation to enhance voltages or carrier collection [7]. Moreover, silicon and germanium nanowires have also recently been used and envisaged for PV applications [8-13].
Besides the intense experimental work, devoted to the improvement of the nanostructures growth and characterization techniques and to the realization of the nanodevices, an increasing number of
theoretical works, based on empirical and on ab-initio approaches, is now available in the literature (see for example Refs. [14-16]). The importance of the theoretical efforts lies not only in the
interpretation of experimental results but also in the possibility to predict structural, electronic, optical, and transport properties aimed at the realization of more efficient devices. Important progress in the description of the electronic properties of Si and Ge nanostructures has been reported, but an exhaustive understanding is still lacking. This is due, on the one hand, to the not obvious transferability of the empirical parameters to low-dimensional systems and, on the other hand, to the deficiency of the ab-initio Density Functional Theory (DFT) approach in the correct evaluation of the excitation energies. In fact, due to their reduced dimensionality, the inclusion of many-body (MB) effects in the theoretical description, through the so-called many-body perturbation theory (MBPT), is mandatory for a proper interpretation of the excited-state properties. In particular, the quasiparticle structure is key for the calculation of the electronic gap and for the understanding of charge transport, while the inclusion of excitonic effects is essential for a description of the optical properties. In this paper, we apply DFT and MBPT to the calculation of the
structural, electronic and optical properties of two classes of systems: pure and alloyed Si/Ge nanocrystals (NCs) embedded in wide band gap SiO[2] matrices, and free-standing SiGe mixed nanowires
(NWs). These systems have been chosen for their application in photovoltaics, and therefore our results will be discussed with respect to their potentiality. The paper is organized as follows: in
section “Ab-initio Methods: DFT and MBPT”, we sketch the theoretical methods used in our computations, section “Embedded Si and Ge Nanocrystals” is devoted to the presentation of the results related to the embedded Si and Ge NCs, whereas section “Si/Ge Mixed Nanowires” discusses the outcomes for the mixed SiGe NWs; finally, some conclusions are outlined in section “Conclusions”.
Ab-initio Methods: DFT and MBPT
DFT [17,18] is a single-particle ab-initio approach successfully used to calculate the ground-state equilibrium geometry and electronic properties of materials, from bulk to systems of reduced
dimensionality like surfaces, nanowires, nanocrystals, nanoparticles.
However, the mean-field description of the MB effects, taken into account in this method by the so-called exchange-correlation (XC) term, is not enough to describe excited-state properties. Even the
time-dependent development of this approach, the TDDFT [19,20], formally appropriate to calculate the optical excitations and the dielectric response of materials, presents problems due to the
limited knowledge of the exact form of the XC functional [21,22]. For these reasons, excited state calculations based on MBPT, performed on top of DFT ones, have become state-of-the-art to obtain a
correct description of electronic and optical transition energies. The DFT simulations of our nanostructures are performed using the Quantum Espresso package [23], with a plane-wave (PW) basis set to
expand the wavefunctions (WF) and norm-conserving pseudopotentials to describe the electron-ion interaction. The local density approximation (LDA) is used for the XC potential. A repeated cell
approach allows us to simulate NCs and NWs. A full geometry optimization is performed and, after the equilibrium geometry is reached, a final calculation is made to obtain not only the occupied but also
a very high number of unoccupied Kohn-Sham (KS) eigenvalues and eigenvectors (ε[nk], ψ[n,k]) [24,25]. In fact, although they cannot be formally identified as the correct quasi-particle (QP) energies
and eigenfunctions, they are the starting point to perform MB calculations.
Indeed, the second step consists in carrying out GW calculations, which give the correct QP electronic gaps. Within the Green function formalism, the poles of the one-particle propagator correspond to the real QP excitation energies and can be determined as solutions of a QP equation which is apparently very similar to the KS equation, but where a non-Hermitian, non-local, energy-dependent self-energy (SE) operator Σ [26] replaces the XC potential.
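In its standard form (with $V_{ext}$ the external potential and $V_H$ the Hartree potential), this quasiparticle equation reads
\[ \left[-\tfrac{1}{2}\nabla^{2} + V_{ext}(\mathbf{r}) + V_{H}(\mathbf{r})\right]\psi^{QP}_{n\mathbf{k}}(\mathbf{r}) + \int d\mathbf{r}'\,\Sigma(\mathbf{r},\mathbf{r}';E^{QP}_{n\mathbf{k}})\,\psi^{QP}_{n\mathbf{k}}(\mathbf{r}') = E^{QP}_{n\mathbf{k}}\,\psi^{QP}_{n\mathbf{k}}(\mathbf{r}) . \qquad (1) \]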
The SE is approximated, here, by the product of the KS Green function G times the screened Coulomb interaction W obtained within the Random Phase Approximation (RPA): Σ = iGW [27]. Moreover, instead of solving the full QP equation, its first-order perturbative solution, with respect to Σ − V[xc], is used, and in this way the QP energies are obtained.
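In the usual first-order (linearized) form (the renormalization factor Z is introduced here only for compactness), they read
\[ E^{QP}_{n\mathbf{k}} = \varepsilon_{n\mathbf{k}} + Z_{n\mathbf{k}}\,\langle \psi_{n\mathbf{k}} \,|\, \Sigma_{x} + \Sigma_{c}(\varepsilon_{n\mathbf{k}}) - V_{xc} \,|\, \psi_{n\mathbf{k}} \rangle , \qquad Z_{n\mathbf{k}} = \frac{1}{1+\beta_{n\mathbf{k}}} , \qquad (2) \]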
where β[nk] is the linear coefficient (changed of sign) in the energy expansion of the SE around the KS energies. In eq. 2, Σ[x] represents the exchange part and Σ[c] is the correlation part. To
determine Σ[c], a plasmon pole approximation for the inverse dielectric matrix is assumed [28,29].
Regarding the ab-initio calculations of the optical properties, by means of the KS or the QP energies and WF it is possible to carry out the calculation of the macroscopic dielectric function of the
system at the independent-(quasi) particle level.
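Including local-field effects, the macroscopic dielectric function is commonly obtained from the microscopic dielectric matrix as
\[ \varepsilon_{M}(\omega) = \lim_{\mathbf{q}\to 0} \frac{1}{\varepsilon^{-1}_{\mathbf{G}=0,\,\mathbf{G}'=0}(\mathbf{q},\omega)} , \]
while, when local fields are neglected, it reduces to \( \varepsilon_{M}(\omega) = \lim_{\mathbf{q}\to 0}\varepsilon_{\mathbf{G}=0,\,\mathbf{G}'=0}(\mathbf{q},\omega) \).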
This formula relies on the fact that, although in an inhomogeneous material the macroscopic field varies with frequency ω and has a Fourier component of vanishing wave vector, the microscopic field
varies with the same frequency but with different wave vectors q + G. These microscopic fluctuations induced by the external perturbation are at the origin of the local-field effects (LF) and reflect
the spatial anisotropy of the material. In particular for NWs, as for other one-dimensional nanostructures [30,31], it has been demonstrated [32-34] that the classical depolarization is accounted for only if LF are included, and that it is responsible for the suppression of the low-energy absorption peaks [35], for features in the photoluminescence spectra of porous Si [36], and for the optical gain in Si elongated nanodots [37].
In any case, at this level of approximation, even if GW corrections are included, still no good agreement with the experimental data is found: in particular one finds optical spectra of Si NWs with
peaks at too high energy with respect to the experimental optical data, available, for example, for porous Si samples (see Ref. [32] for more details). In order to describe correctly the optical
response, the solution of the Bethe–Salpeter equation (BSE), where the coupled electron-hole (e-h) excitations are included [20,25], is required. In the Green’s functions formalism, the solution of
the BSE corresponds to diagonalize the following excitonic problem
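which, in the standard notation (A^λ being the excitonic expansion coefficients), reads
\[ (\varepsilon_{c\mathbf{k}} - \varepsilon_{v\mathbf{k}})\,A^{\lambda}_{vc\mathbf{k}} + \sum_{v'c'\mathbf{k}'} \left[\, 2V - W \,\right]_{vc\mathbf{k},\,v'c'\mathbf{k}'} A^{\lambda}_{v'c'\mathbf{k}'} = E_{\lambda}\,A^{\lambda}_{vc\mathbf{k}} , \]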
where (ε[ck] − ε[vk]) are the quasi-particle energies obtained within a GW calculation, W is the statically screened Coulomb interaction, V is the bare Coulomb interaction, and E[λ] are the resulting excitonic eigenvalues that enter the optical spectrum.
Embedded Si and Ge Nanocrystals
In this section we present ab-initio results for Si and Ge NCs, pure and alloyed, that are embedded in a SiO[2] matrix. The role of crystallinity (symmetry) is investigated by considering both the
crystalline (betacristobalite (BC)) and the amorphous phases of the SiO[2], while size and interface effects emerge from the comparison between NCs of different diameters. A mixed half-Si/half-Ge NC
is additionally introduced in order to explore the effects of alloying. The BC SiO[2] is well known to give rise to one of the simplest NC/SiO[2] interface because of its diamond-like structure [38].
The crystalline embedded structures have been obtained from a BC cubic matrix by removing all the oxygens included in a cutoff-sphere, whose radius determines the size of the NC. By centering the
cutoff-sphere on one Si atom or in an interstitial position it is possible to obtain structures with different symmetries. The pure Ge-NCs and the Si/Ge alloyed NCs are obtained from such structures
by replacing all or part of the NC Si-atoms with Ge-atoms. In such initial NC, before the relaxation, the atoms show a bond length of 3.1 Å, larger with respect to that of the Si (Ge) bulk structure,
2.35 Å (2.45 Å). No defects (dangling bonds) are present, and all the O atoms at the NC/SiO[2] interface are single bonded with the Si (Ge) atoms of the NC.
To model NCs of increasing size, we enlarge the hosting matrix so that the separation between the NC replicas is still around 1 nm, enough to correctly describe the stress localized around each NC [
39-41] and to avoid the overlapping of states belonging to the NC, due to the application of periodic boundary conditions [42].
The optimized structure has been achieved by relaxing the total volume of the cell. The relaxation of all the structures has been performed using the SIESTA code [43,44] and Troullier–Martins pseudopotentials with non-linear core corrections. A cutoff of 250 Ry on the electronic density and no additional external pressure or stress were applied. Atomic positions and cell parameters have been left totally free to move. Following the procedure described above, seven embedded systems have been produced: the Si[10], Si[17], Ge[10], and Ge[17] structures have been obtained from a BC-2x2x2
supercell (192 atoms, supercell volume V[s] = 2.94 nm^3), while for the Si[32], Ge[32], and Si[16]Ge[16] NCs the larger BC-3x3x3 supercell (648 atoms, supercell volume V[s] = 9.94 nm^3) has been
used. Table 1 (upper set) reports some structural characteristics for all the systems enumerated earlier. In all the cases, after the relaxation the silica matrix gets strongly distorted in the proximity of the NC, with Si–O–Si angles lowering from 180 to about 150–170 degrees depending on the interface region, and progressively reduces its stress far away from the interface [45]. The difference between the BC lattice constant (7.16 Å) and that of bulk-Si (5.43 Å) and bulk-Ge (5.66 Å) results in a strained NC/SiO[2] interface. Therefore, the NC has a strained structure with
respect to the bulk value [46-48], and both the NC and the host matrix symmetries are lowered by the relaxation procedure.
Structural characteristics of the embedded crystalline (upper set) and amorphous (lower set) NCs: number of NC atoms, number of core atoms (not bonded with oxygens), symmetry (cutoff sphere centered
or not on one silicon), number of oxygens bonded to ...
Together with the crystalline structure, the complementary case of an amorphous silica (a-SiO[2]) has been considered. The glass model has been generated using classical molecular dynamics (MD)
simulations of quenching from a melt, as described in Ref. [49]. The amorphous a-Si[10] and a-Si[17] embedded NCs and their corresponding Ge-based counterparts have been obtained starting from the Si
[64]O[128] glass (supercell volume V[s] = 2.76 nm^3), while for the a-Si[32] and a-Ge[32] NCs the larger Si[216]O[432] glass has been used (supercell volume V[s] = 9.13 nm^3). The structural characteristics of the embedded amorphous NCs are reported in Table 1 (lower set). We find that the number of bridge bonds (Si–O–Si or Ge–O–Ge, where Si or Ge are atoms belonging to the NC)
increases with the dimension of the NC (three for the largest case and none for the smallest NC) in nice agreement with other structures obtained by different methods [50,51]. For each structure we
calculated the eigenvalues and eigenfunctions using DFT-LDA and in some cases MBPT [23,24,52]. An energy cutoff of 60 Ry on the PW basis has been considered.
Pure Si Nanocrystals
We resume here results previously obtained for pure Si NCs embedded in SiO[2] matrices [24,53-57]. These results provide not only a good starting point for the comparison between pure Si and pure Ge
NCs (see III B) and with alloyed NCs (see III C), but also allow us to discuss our results in view of the theoretical methods used (MBPT vs DFT-LDA) and with respect to the technological applications.
As discussed in section “Ab-initio Methods: DFT and MBPT”, it is well known that the DFT-LDA severely underestimates the band gaps for semiconductors and insulators. A correction to the fundamental
band gap is usually obtained by calculating the QP energies via the GW method [20]. The QP energies, however, are still not sufficient to correctly describe a process in which e-h pairs are created,
such as in the optical absorption and luminescence. Their interaction can lead to a dramatic shift of peak positions as well as to distortions of the spectral lineshape. Table 2 shows the
highest-occupied-molecular-orbital (HOMO)—lowest-unoccupied-molecular-orbital (LUMO) gap values calculated at the DFT-LDA level for three different Si NC embedded in a crystalline or amorphous SiO[2]
matrix. These values are compared with the HOMO-LUMO gap values relative to the silica matrices. In Table 3, we report, instead, the results of the MB effects [52] on the DFT gap values,
through the inclusion of the GW, GW+BSE and GW+BSE+LF. It should be noted that in these last two cases the values are inferred from the calculated absorption spectra.
DFT-LDA HOMO-LUMO gap values (in eV) for the crystalline and amorphous silica, and for the embedded Si nanocrystals
Many-body effects on the energy gap values (in eV) for the crystalline and amorphous embedded Si[10] dots
First, we note that the HOMO-LUMO gap for the crystalline cases seems to increase with the NC size, in opposition to the behavior expected assuming the validity of the QC effect. As discussed in Ref.
[53] such deviation from the QC rule can be explained by considering the oxidation degree at the NC/SiO2 interface: for small NC diameters the gap is almost completely determined by the average
number of oxygens per interface atom, while QC plays a minor role. Besides, also other effects such as strain, defects, bond types, and so on, contribute to the determination of the fundamental gap,
making the system response largely dependent on its specific configuration. Moreover, looking at Table 3 we note that for the Si[10] and a-Si[10] embedded NCs the SE (calculated through the GW
method) and the e-h Coulomb corrections (calculated through the Bethe–Salpeter equation) more or less exactly cancel out each other (with a total correction to the gap of the order of 0.2 eV) when
the LF effects are neglected. Besides we note the presence of large exciton binding energies, of the order of 1.5 eV, similarly to other highly confined Si and Ge systems [32,58-60]. Furthermore,
some of our recent calculations (still unpublished) show that the LF effects actually blue-shift the absorption spectrum of the smallest systems (d < 1 nm), with corrections of the order of a few tenths
of eV. Instead, for larger NCs no blue-shift is observed. Therefore, while such corrections should be taken into account for a rigorous calculation, we expect that the LF effects will have the same
influence on Si and Ge NCs of the same size and geometry, allowing in principle a straightforward comparison between the responses of the two compounds. Besides, Table 3 and previous MB calculations on Si-NCs show absorption results very close to those calculated with DFT-LDA in RPA [24,55,61,62]. In fact these results show that the energy position of the absorption onset is practically not modified by the inclusion of MB effects. The arguments remarked above justify the choice of DFT-LDA for the results discussed in sections “Comparison Between Pure Si and Ge Nanocrystals” and “Alloyed Si/Ge Nanocrystals”, ensuring a good compromise between accuracy of the results and computational effort.
Concerning the applications we demonstrated [56] that the emission rates follow a trend with the emission energy that is nearly linear for the hydrogenated NCs and nearly cubic for the NCs passivated
with OH groups or embedded in SiO[2]. Moreover, the hydrogenic passivation produces higher optical yields with respect to the hydroxylic one, as also evidenced experimentally. Besides, for the
hydroxided NCs the emission is favored for systems with a high O/Si ratio. In particular the analysis of the results for the embedded NCs reveals a clear picture in which the smallest, highly
oxidized, crystalline NCs, belong to the class of the most optically-active Si/SiO[2] structures, attaining impressive rates of less than 1 ns, in nice agreement with experimental observations. From
the other side, a reduction of five orders of magnitude (10 ms) of the emission rate is achievable by a proper modification of the structural parameters, favoring the conditions for charge-separation
processes, thus photovoltaic applications [56]. In the case of strongly interacting systems (i.e. when the separation between the NCs lowers under a certain limit), the overlap of the NCs WF becomes
relevant, promoting the tunneling process. Therefore, while for the single Si/SiO[2] heterostructure the e-h pair is confined on the NC, in the case of two (or more) interacting NCs a charge
migration from one NC to the neighbor can occur [63]. Evidence of an interaction mechanism operating between NCs has been frequently reported [64-66], sometimes indicated as an active process for
optical emission [67], and sometimes even exploited as a probing technique [68]. This interaction has been widely interpreted in terms of a kind of excitonic hopping or migration between NCs,
although only more recently the mechanisms for carrier transfer among Si-NCs have been more clearly elucidated [69,70]. Roughly speaking, the possibility of charge migration reduces the QC effect,
possibly leading to the formation of minibands with indirect gaps [63]. It should be noted that, contrary to photonics applications, for PV purposes the indirect nature of the energy bandgap in
Si-NCs is advantageous, since the photogenerated e-h pair has a longer lifetime with respect to direct bandgap materials. Therefore, the NC–NC interaction can be considered as an additional parameter
(tunable by the NC density) that concurs to the characterization of the system behavior: while the NC-size primarily determines the absorption/emission energy, the interaction level affects the
absorption/emission rates. This picture opens to the possibility of creating from one side (high rates) extremely efficient Si-based emitters [71], and from the other side (low rates) PV devices
capable of harvesting the full solar energy with high yields. While the role of the NC size has been extensively investigated by many works, both theoretically and experimentally, the study of the
effects of NC–NC interplay is still at an early stage, due to the difficulties encountered.
Comparison Between Pure Si and Ge Nanocrystals
In this section, we compare the responses of the pure Si and Ge NCs at the DFT-LDA level. The density of states (DOS) calculation provides a first insight into the electronic configuration. In Fig. 1 we report the DOS for the crystalline Si and Ge NCs for an energy region focused around the band edge. All the DOS have been normalized following the constraint
Normalized DOS for the crystalline pure Si and Ge NCs. Energy units are in eV. The dotted lines mark the HOMO state, which has been positioned at 0 eV
in which E[F] is the Fermi energy located half-between the HOMO and the LUMO.
The analysis of Fig. 1 reveals that Ge-NCs present reduced gaps with respect to their Si counterparts. This could be reasonably associated with the reduced band-gap value of bulk-Ge with
respect to bulk-Si. The Ge[10] case is an outlier to this rule, showing a gap slightly larger than the Si[10] NC. This exception can be justified by considering that such NCs represent a limit case
in which all the NC atoms are localized at the interface.
It is noteworthy that the DOS profile arising from conduction states is similar for Si- and Ge-based structures of the same size, while the DOS profile arising from valence states differs for the two
species. In particular, in the case of Ge-NCs the energy region around the valence band edge tends to be densely occupied, while for Si-NCs only few discrete levels appear in that region.
The DOS for the amorphous Si and Ge embedded NCs is reported in Fig. 2. The symmetry breaking deriving from the amorphyzation evidently broadens the energy distribution of the states, consequently reducing the HOMO-LUMO gap. The discussion concerning the gap-reduction for Ge NCs is still valid in the amorphous case, with Ge systems presenting similar or smaller gaps with respect to the equivalent Si counterparts. This effect is particularly evident for the a-Ge[32] NC in which several states get localized within the band gap due to the amorphyzation.
Normalized DOS for the amorphized pure Si and Ge NCs. Energy units are in eV. The dotted lines mark the HOMO state, which has been positioned at 0 eV
The properties of the calculated DOS are reflected in the absorption spectra (represented by the imaginary part of the dielectric function calculated in RPA) presented in Fig. 3 for all the pure Si and Ge NCs. We clearly distinguish the absorption features associated with the embedding matrix, for energies above 7 eV, that depend neither on the embedding species nor on the NC size. Instead, for
energies lower than 7 eV the absorption curve shows a dramatic sensitivity to the NC configuration. In particular, we observe in this region a broadening of the absorption peaks with the NC size,
demonstrating that the interface effects become very important for smaller NCs (i.e. when the proportion of atoms at the interface becomes larger). As expected from the DOS analysis discussed
earlier, the amorphyzation tends to produce smoother spectra, both in the low-energy region (NC) and in the high-energy one (SiO[2] matrix). Finally, for all the sizes, Ge-NCs present lower absorption thresholds with respect to the Si-NCs, mostly due to the higher occupation of the valence band discussed earlier. In Fig. 4, we report the absorption thresholds, calculated from the
absorption spectra, corresponding to the minimum energy for which the absorption is greater than 2.5% of the highest peak (corresponding to about 0.1 a.u.); in this way, we introduce a sort of
“instrument resolution” that neglects very unfavorable optical transitions (for instance the 2–4 eV spectral region of the Si[10]/SiO[2] crystalline system). The absorption thresholds show a trend that generally decreases with the NC size, highlighting the fundamental role of QC at this stage. The amorphyzation tends to smooth out the curves and to reduce the absorption threshold by about 1 eV.
Also, all the Ge-NCs present lower thresholds with respect to their Si counterparts. Therefore, by varying the composition and the disorder of a NC of fixed size, we can obtain impressive variations
of the absorption threshold up to about 2.7 eV.
Imaginary part of the dielectric function for the crystalline (top) and amorphous (bottom) embedded NCs, made by 10 (left), 17 (center), and 32 (right) atoms, respectively. For each plot a comparison
of the responses of Si (dashed) and Ge (solid) NCs ...
Absorption thresholds for the pure Si and Ge NCs (see text). The lines are drawn to guide the eye
This result can motivate the employment of Ge together with Si for the production of semiconductor-based NCs, in order to improve the possibility of tuning the opto-electronic response by selecting,
in addition to the structural configuration, also the composition of the NC. Another opportunity comes from the exploitation of alloyed Si/Ge-NCs, that could provide additional control over the final
response as discussed in the next section.
Alloyed Si/Ge Nanocrystals
In this section, we consider the case of the Ge[16]Si[16] NC, which has been built starting from the pure Si[32] NC by replacing half of the NC with Ge atoms and then totally relaxing the resulting alloyed structure. The crystallinity of the system allows a clear mirror symmetry between the geometries of the two halves of the compound. This choice eliminates any complication that may arise from differences in the structural configuration of the two halves, possibly overbalancing the response of one species with respect to the other.
By comparing the responses of the pure Si and Ge systems with that of the alloy, we investigate the effects of the alloying on the electronic configuration and on the absorption spectrum. In Fig. 5 we report the DOS projected (PDOS) on the atoms belonging to the NC, for the pure and the alloyed NCs. For all the cases, the PDOS concentrates near the band edge while getting weaker for
energies lower than HOMO or higher than LUMO, in agreement with the fact that low-energy transitions mostly derive from states localized on the NC [24]. Some difference emerges between the PDOS of Si
[32] and Ge[32] NCs, in particular near the valence band edge, where the former PDOS presents a higher concentration of states in the 0–1 eV energy range. Besides, the PDOS of the alloyed NC appears
as a half-half mixture of the PDOS of the two pure NCs. Therefore, neither of the two species of the alloyed NC seems to dominate over the other, with the final response of the alloyed system lying
in-between those of the pure systems. The absorption spectra of the three samples (see Fig. 6) support this argument, showing the curve of the Ge[16]Si[16]/SiO[2] NC lying nicely between the curves of the pure systems. Also the trend of the absorption threshold (see inset of Fig. 6) shows an almost linear dependence on the alloy index, in nice agreement with the discussion earlier.
Normalized projected DOS for the 32 atoms crystalline NC in the pure-Si (left), half-half alloy (center), and pure-Ge (right) phases
Imaginary part of the dielectric function calculated in RPA for the Si[32] (dashed), Ge[16]Si[16] (dotted), and Ge[32] (solid) crystalline embedded NCs. Inset absorption threshold as a function of
the alloy index, x (color online)
In order to explore the role of the alloying more deeply, we compare the charge localization of the HOMO and LUMO states for the pure and alloyed 32-atom NCs. Due to the T[d] symmetry of the initial systems, the HOMO state forms a degenerate triplet before the relaxation, and therefore, for a proper comparison of the band-edge states we have to consider the set of non-degenerate HOMO-2, HOMO-1, and HOMO states of the relaxed system, which originates from the same symmetry group. From now on we will refer to all these states as HOMO(3). In Fig. 7 we report the HOMO(3) and LUMO states for
the Si[32]/SiO[2] (top panel) Ge[16]Si[16]/SiO[2] (center panel), and Ge[32]/SiO[2] (bottom panel) NCs. For the pure systems we observe a partial separation of the HOMO(3)-LUMO density charges, with
the former localized mainly on one half of the NC and the latter mainly on the other half. Such a separation is probably due to the distortion of the NC after the relaxation, favoring the
localization of the charge on the most strained bonds of the NC. In the case of the alloyed NC, the separation seems more pronounced, with the LUMO localized on the Ge atoms and the HOMO(3) on the Si
atoms. It is not clear at this stage whether a real charge separation effectively occurs (as in the case of Si/Ge mixed nanowires, see section “Si/Ge Mixed Nanowires”) or if such effect depends on
the particular configuration of the NC considered here. More investigations are therefore required in order to shed some light on this important aspect.
Kohn–Sham orbitals at 10% of their maximum amplitude for the Si[32] (top), Ge[16]Si[16] (center), and Ge[32] (bottom) crystalline embedded NCs. The LUMO state is represented in dark red (black), the
HOMO state is represented in blue (gray), the HOMO-1 ...
Si/Ge Mixed Nanowires
The scientific and technological importance of SiGe NWs is related to the peculiar physical properties that they present and that make them more suitable for PV with respect to the corresponding pure
Si and Ge NWs. In fact it has been demonstrated, both experimentally and theoretically [12,72-74], that the electronic and optical properties of SiGe NWs can be strongly modified by changing the size
of the system (like in the pure nanowires [75-77]), but also by changing the relative composition of Si and Ge atoms and the geometry of the Si/Ge interface [12,78,79]. This additional degree of control on the electronic structure makes this type of wire a possible route for PV, because it offers a very wide range of possibilities to modulate the electronic structure of the material in order to obtain the desired properties. As discussed in section “Introduction”, in order to improve the efficiency in PV it is necessary either to maximize the absorption spectrum, or to obtain inside the material, after the absorption of light, a strong separation of electrons and holes, or to speed up the transfer of electrons and holes to metallic electrodes. Here, we show how a particular type of SiGe NWs, called Abrupt SiGe NWs and characterized by a clear planar Si/Ge interface, can satisfy (more than the corresponding pure NWs) the requirements of a material for a solar cell.
The free-standing NWs considered here are oriented along the [110] direction (that guarantees thermodynamic stability [80]) and have an approximately cylindrical shape; the diameter range is from 0.8
to 1.6 nm and all the surface atoms have been passivated with H atoms in order to eliminate the intra-gap states. For the details of the construction of the geometry of NWs we refer to Ref.[12,78].
We have analyzed pure Si, pure Ge and Abrupt SiGe NWs. This particular type of SiGe NWs is characterized by the presence of a planar Si/Ge interface along the shortest dimension of the transverse
cross-section of the wire [12,78]. The compositional range for Abrupt SiGe NWs is 0 ≤ x ≤ 1, where x is the relative composition of one type of atom with respect to the total number of atoms in the
unit cell. An energy cutoff of 30 Ry, a Monkhorst-Pack grid of 16 × 1 × 1 points and 10 Å of vacuum between NW replicas have been found sufficient to ensure the convergence of all the calculated properties. In order to obtain the minimum-energy geometry of our structures, we have performed total energy minimization of the positions of the atoms in the plane normal to the growth direction; while to take into account the effect of the strain in the direction of growth, we have used Vegard's law for semiconductor bulk alloys [81], which very recently has been demonstrated to be valid also
for nanoalloys and which states that the relaxed lattice parameter of a binary system is a linear function of the composition of the system. After the evaluation of DFT-LDA ground state properties,
in order to calculate the optical properties of the wires, in particular the excitonic wave function localization, we have solved the Bethe–Salpeter equation (BSE) in the basis set of quasi-electron
and quasi-holes states, as described in section “Ab-initio Methods: DFT and MBPT”.
As a first step, we have estimated how the variation of the size of the system influences the electronic DFT-LDA band gap of the wires; to analyze this aspect, we have fixed the composition of Abrupt SiGe NWs (x[Ge] = 0.5) and we have calculated the scaling of the electronic band gap as a function of the inverse of the diameter of the wire. Our results are reported in Fig. 8. Clearly, for all the types of NWs, on reducing the size of the wire (that means moving from left to right in Fig. 8) the electronic band gap increases; this result, as demonstrated in many
theoretical and experimental works, is strictly related to the QC effect [16,74-77,82]. Moreover the most interesting result is that Abrupt SiGe NWs show a pronounced Reduced QC Effect (RQCE) [12,78
]: this means that, when the size of the system is reduced, the opening of the bulk band gap is not as strong as in the pure wires of similar size and therefore, at fixed diameter, the band gap of
Abrupt SiGe NWs is smaller than the one of pure wires. Then we have analyzed how, at fixed diameter, the variation of composition for Abrupt SiGe NWs influences the electronic band gap. To do this,
we have fixed the size of the wire and we added or deleted some rows of one type of atom in the transverse cross section (along the shortest dimension) of the wire in order to preserve a clear
interface between Si and Ge (that is the main feature of this type of wire). In Table 4, we report the electronic DFT-LDA band gap E[g] as a function of the relative composition x[Ge] for Abrupt SiGe NWs with d = 1.6 nm (Fig. 9).
DFT-LDA bandgap as a function of the inverse of the diameter of the wire for Si NWs (green line), Ge NWs (orange line) and for Abrupt SiGe NWs with x[Ge] = 0.5 (cyan line) (color online)
DFT-LDA electronic gaps (in eV) as function of Ge composition x[Ge] for Abrupt SiGe NWs with d = 1.6 nm
Electronic wave function localization for VBM (a) and CBM (b) at Γ point for an Abrupt SiGe NW with diameter d = 1.2 nm and composition x[Ge] = 0.5. Blue spheres represent Ge atoms, cyan spheres
represent Si atoms, while white spheres are H atoms ...
By analyzing the numerical values, we can say that the variation of the composition also causes a reduction of the electronic band gap with respect to those of the pure wires. Therefore the composition, like the diameter of the wire in the previous case, is responsible for a strong RQCE in this case. In particular, E[g] depends on x[Ge] in a quadratic form [78]. This result represents a very useful tool for engineering a material with the desired electronic properties and also for predicting the absorption spectra of the wire. Since the RQCE is responsible for a red-shift
in the absorption spectra of Abrupt SiGe NWs [83], it offers the possibility to access wavelengths that would otherwise not be available within a single material; this feature can be crucial for the
engineering of a solar cell. The physical origin of the pronounced RQCE can be ascribed to the existence of type II band offset, when there is a planar interface between the two semiconductors. This
type of offset implies that the minimum of conduction band (CBM) and the maximum of the valence band (VBM) are localized on different materials. In the next figure, we show how, for the Abrupt SiGe
NWs, the wave function spatial localization is very strong, in particular the VBM is localized on the Ge part of the wire, while the CBM is localized on the Si part of the wire. This property is also
present, if we change the diameter or the composition of the system [78]. The type II offset is still present when the composition of the wire is varied, because, during the variation, we preserve
the planar interface between Si and Ge, which offers a strong degree of control on the carrier space localization. As a confirmation of this idea, we have evaluated, through MBPT [84], the spatial localization of the lowest exciton of an Abrupt SiGe NW, with d = 1.6 nm and x[Ge] = 0.6875 (see Fig. 10). Fixing the position of the hole in the Ge part of the wire, one can note that the electronic probability distribution function is mainly localized on the Si part of the wire. This property demonstrates a clear tendency of these types of systems to strongly separate electrons and holes, a property useful in PV applications, offering a strong degree of control on the carrier localization.
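The quadratic dependence of E[g] on x[Ge] mentioned above is commonly described by a bowing expression; an illustrative generic form (the specific bowing parameter of Ref. [78] is not reproduced here) is
\[ E_{g}(x) \simeq (1-x)\,E_{g}^{\mathrm{Si\,NW}} + x\,E_{g}^{\mathrm{Ge\,NW}} - b\,x(1-x) , \]
where b is a bowing parameter.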
Side (top panel) and top (bottom panel) view of the electron distribution probability when the hole position is fixed on top of the Ge atom indicated by a black dot. The wire under consideration is
an Abrupt SiGe NW with diameter d = 1.6 nm and composition ...
Finally, we present one example of the electronic band structure for the Abrupt SiGe NWs. In Fig. 11, the one-dimensional band structure along the wire axis of an Abrupt SiGe NW with d = 1.2
nm and composition x[Ge] = 0.5 is reported. The most interesting result is that the band structure shows a direct gap at the Γ point; this property, as already demonstrated for pure Si and pure Ge
wires with the same spatial orientation [16,32,60,85], derives from the folding of the bulk energy bands onto the wire axis. Since the electronic wave function for the CBM at Γ point is completely
localized on Si, one can conclude that the direct band gap comes from the folding of one of the six equivalent valleys of the Si bulk band structure onto the Γ point. Moreover it is important to
point out that the indirect CBM (at X) is located more than 0.5 eV higher than the Γ point CBM. The presence of a direct band gap can have remarkable consequences for the technological applications
of these wires: in fact it can modify the optical properties of a device, since it offers the possibility to have optical transitions without involving phonons, thus increasing the optical intensities. However, a direct band gap structure alone does not ensure that a particular nanostructure will have strong optical transitions [86]. Therefore further calculations concerning optical properties are
needed in order to completely characterize these materials.
Electronic band structure along the wire axis for an Abrupt SiGe NW with diameter d = 1.2 nm and composition x[Ge] = 0.5 (color online)
In this paper, we have presented ab-initio computational methods for determining the structural, electronic and optical properties of Si and Ge nanostructures. We have concentrated our interest on
those nanostructures that play a role in PV applications. In particular, we presented one-particle and many-body results for Si and Ge nanocrystals embedded in oxide matrices and for mixed SiGe
nanowires. The discussed results shed light on the importance of many-body effects in systems of reduced dimensionality. In particular, we showed for embedded Si and Ge nanoparticles how the
absorption threshold depends on size and oxidation and we have calculated the exciton binding energies. Besides, we have elucidated the role of crystallinity and through the calculation of
recombination rates and absorption properties we have highlighted the best conditions for technological applications. In the case of Si/Ge embedded alloyed nanocrystals, we have shown the dependence
of the absorption spectra on the alloying and the presence of a different localization for HOMO and LUMO. Regarding the SiGe nanowires, we demonstrated that those which show a clear interface between Si and Ge present not only a reduced quantum confinement effect but also display a direct band gap and a natural separation between electron and hole, a property directly relevant to PV applications.
The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 211956 and n. 245977, by MIUR-PRIN 2007,
Ministero Affari Esteri, Direzione Generale per la Promozione e la Cooperazione Culturale and Fondazione Cassa di Risparmio di Modena. The authors acknowledge also CINECA CPU time granted by
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the
original author(s) and source are credited.
• Bagnall DM, Boreland M. Energy Policy. 2008. p. 4390. [Cross Ref]
• Shockley W, Queisser HJ. J. 1961. p. 510. COI number [1:CAS:528:DyaF3MXpslGqsQ%3D%3D]; Bibcode number [1961JAP....32..510S] [Cross Ref]
• Green M, Basore PA, Chang N, Clugston D, Egan R, Evans R, Hogg D, Jarnason S, Keever M, Lasswell P, O’Sullivan J, Schubert U, Turner A, Wenham SR, Young T. Solar Energy Mater. Solar Cells. 2004.
p. 857. COI number [1:CAS:528:DC%2BD2cXhtVektrbJ]
• Grätzel M. Nature. 2001. p. 338. Bibcode number [2001Natur.414..338G] [PubMed] [Cross Ref]
• Green MA. Third Generation Photovoltaics: Advanced Solar Energy Conversion. Springer, Berlin; 2003.
• Grätzel M. J. 2004. p. 3. [Cross Ref]
• Conibeer G, Green M, Corkish R, Cho Y, Cho E-C, Jang C-W, Fangsuwannarak T, Pink E, Huang Y, Puzzer T, Trupke T, Richeards B, Shalav A, Lin K-L. Thin Solid Films. 2006. p. 654. Bibcode number
[2006TSF...511..654C] [Cross Ref]
• Tian B, Zheng X, Kempa TJ, Fang Y, Yu N, Yu G, Huang J, Lieber CM. Nature. 2007. p. 885. COI number [1:CAS:528:DC%2BD2sXhtFOjt7bP]; Bibcode number [2007Natur.449..885T] [PubMed] [Cross Ref]
• Kelzenberg MD, Turner-Evans DB, Kayes BM, Filler MA, Putnam MC, Lewis NS, Atwater HA. Nano Lett. 2008. p. 710. COI number [1:CAS:528:DC%2BD1cXhslKgsb4%3D]; Bibcode number [2008NanoL...8..710K] [
PubMed] [Cross Ref]
• Garnett EC, Yang P. J. 2008. p. 9224. COI number [1:CAS:528:DC%2BD1cXnsl2ktbY%3D] [PubMed] [Cross Ref]
• Nduwimana A, Musin RN, Smith AM, Wang X-Q. Nano Lett. 2008. p. 3341. COI number [1:CAS:528:DC%2BD1cXhtVChs73K]; Bibcode number [2008NanoL...8.3341N] [PubMed] [Cross Ref]
• Amato M, Palummo M, Ossicini S. Phys. Rev. B. 2009. p. 201302(R). Bibcode number [2009PhRvB..79t1302A] [Cross Ref]
• Cao L, White JS, Park J-S, Schuller JA, Clemes BM, Brongersma ML. Nature Mater. 2009. p. 643. COI number [1:CAS:528:DC%2BD1MXovFSju7c%3D]; Bibcode number [2009NatMa...8..643C] [PubMed] [Cross Ref
• Ossicini S, Pavesi L, Priolo F. Light Emitting Silicon for Microphotonics, Springer Tracts on Modern Physics, vol 194. Springer, Berlin; 2003.
• Delerue C, Lannoo M. Nanostructures, Theory and Modelling. Springer, Berlin; 2004.
• Rurali R. Rev. 2010. p. 427. COI number [1:CAS:528:DC%2BC3cXktlSht7s%3D]; Bibcode number [2010RvMP...82..427R] [Cross Ref]
• Hohenberg P, Kohn W. Phys. 1964. p. B864. Bibcode number [1964PhRv..136..864H] [Cross Ref]
• Kohn W, Sham LJ. Phys. 1965. p. A1113. [Cross Ref]
• Gross EKU, Marques MAL. Ann. 2005. p. 427.
• Onida G, Reining L, Rubio A. Rev. 2002. p. 601. COI number [1:CAS:528:DC%2BD38Xlt1ymsL0%3D]; Bibcode number [2002RvMP...74..601O] [Cross Ref]
• Seidl A, Görlig A, Vogl P, Majewski JA, Levy M. Phys. Rev. B. 1996. p. 3764. COI number [1:CAS:528:DyaK28Xht1ejsrs%3D]; Bibcode number [1996PhRvB..53.3764S] [PubMed] [Cross Ref]
• Gruning M, Marini A, Rubio A. Phys. Rev. B. 2006. p. 161103(R). Bibcode number [2006PhRvB..74p1103G] [Cross Ref]
• Giannozzi P, Baroni S, Bonini N, Calandra M, Car R, Cavazzoni C, Ceresoli D, Chiarotti GL, Cococcioni M, Dabo I, Dal Corso A, de Gironcoli S, Fabris S, Fratesi G, Gebauer R, Gerstmann U,
Gougoussis C, Kokalj A, Lazzeri M, Martin-Samos L, Marzari N, Mauri F, Mazzarello R, Paolini S, Pasquarello A, Paulatto L, Sbraccia C, Scandolo S, Sclauzero G, Seitsonen AP, Smogunov A, Umari P,
Wentzcovitch RM. J. 2009. p. 395502. [PubMed] [Cross Ref]
• Guerra R, Marri I, Magri R, Martin-Samos L, Pulci O, Degoli E, Ossicini S. Phys. Rev. B. 2009. p. 155320. Bibcode number [2009PhRvB..79o5320G] [Cross Ref]
• Palummo M, Ossicini S, De Sole R. Phys. Status Solidi B. 2010. p. 2089. COI number [1:CAS:528:DC%2BC3cXpt1Ojtbg%3D]; Bibcode number [2010PSSBR.247.2089P] [Cross Ref]
• Hedin L. Phys. 1965. p. A796. Bibcode number [1965PhRv..139..796H] [Cross Ref]
• Aryasetiawan F, Gunnarsson O. Rep. 1998. p. 237. COI number [1:CAS:528:DyaK1cXitlWktLw%3D]; Bibcode number [1998RPPh...61..237A] [Cross Ref]
• Hybertsen MS, Louie SG. Phys. Rev. B. 1987. p. 5585. COI number [1:CAS:528:DyaL2sXksFSiurw%3D]; Bibcode number [1987PhRvB..35.5585H] [PubMed] [Cross Ref]
• Godby RW, Schlüter M, Sham LJ. Phys. Rev. B. 1988. p. 10159. Bibcode number [1988PhRvB..3710159G] [PubMed] [Cross Ref]
• Marinopoulos AG, Reining L, Rubio A, Vast N. Phys. 2003. p. 046402. COI number [1:STN:280:DC%2BD3szntlWhtQ%3D%3D]; Bibcode number [2003PhRvL..91d6402M] [PubMed] [Cross Ref]
• Spataru C, Ismail-Beigi S, Benedict LX, Louie SG. Appl. Phys. A. 2004. p. 1129. COI number [1:CAS:528:DC%2BD2cXitVahtrs%3D]; Bibcode number [2004ApPhA..78.1129S] [Cross Ref]
• Bruno M, Palummo M, Marini A, Del Sole R, Ossicini S. Phys. 2007. p. 036807. Bibcode number [2007PhRvL..98c6807B] [PubMed] [Cross Ref]
• Bruneval F, Botti S, Reining L. Phys. 2005. p. 219701. Bibcode number [2005PhRvL..94u9701B] [PubMed] [Cross Ref]
• Aradi B, Ramos LE, Deak P, Köhler Th, Bechstedt F, Zhang RQ, Frauenheim Th. Phys. Rev. B. 2007. p. 035305. Bibcode number [2007PhRvB..76c5305A] [Cross Ref]
• Wang N, Tang ZK, Li GD, Chen JS. Nature. 2000. p. 50. COI number [1:CAS:528:DC%2BD3cXotValu7o%3D]; Bibcode number [2000Natur.408...50W] [PubMed] [Cross Ref]
• Kovalev D. Phys. 1996. p. 2089. COI number [1:CAS:528:DyaK28XltlKjs70%3D]; Bibcode number [1996PhRvL..77.2089K] [PubMed] [Cross Ref]
• Cazzanelli M, Kovalev D, Negro LD, Gaburro Z, Pavesi L. Phys. 2004. p. 207402. Bibcode number [2004PhRvL..93t7402C] [PubMed] [Cross Ref]
• Kageshima H, Shiraishi K. In: in Proceedings of 23rd International Conference on Physics Semiconduction. M. Scheffler, R. Zimmermann, editor. World Scientific, Singapore; 1996. p. 903.
• Luppi M, Ossicini S. J. 2003. p. 2130. COI number [1:CAS:528:DC%2BD3sXlsFKgsro%3D]; Bibcode number [2003JAP....94.2130L] [Cross Ref]
• Luppi M, Ossicini S. Phys. Rev. B. 2005. p. 035340. Bibcode number [2005PhRvB..71c5340L] [Cross Ref]
• Djurabekova F, Nordlund K. Phys. Rev. B. 2008. p. 115325. Bibcode number [2008PhRvB..77k5325D] [Cross Ref]
• Daldosso N, Luppi M, Ossicini S, Degoli E, Magri R, Dalba G, Fornasini P, Grisenti R, Rocca F, Pavesi L, Boninelli S, Priolo F, Spinella C, Iacona F. Phys. Rev. B. 2003. p. 085327. Bibcode number
[2003PhRvB..68h5327D] [Cross Ref]
• Ordejón P, Artacho E, Soler JM. Phys. Rev. B. 1996. p. R10441. Bibcode number [1996PhRvB..5310441O] [PubMed] [Cross Ref]
• Soler JM, Artacho E, Gale JD, García A, Junquera J, Ordejón P, Sánchez-Portal D. J. 2002. p. 2745. COI number [1:CAS:528:DC%2BD38XivFGrsL4%3D]; Bibcode number [2002JPCM...14.2745S] [Cross Ref]
• Watanabe T, Tatsamura K, Ohdomari I. Appl. 2004. p. 125. COI number [1:CAS:528:DC%2BD2cXnvV2isLc%3D]; Bibcode number [2004ApSS..237..125W]
• Yilmaz DE, Bulutay C, Cagin T. Phys. Rev. B. 2008. p. 155306. Bibcode number [2008PhRvB..77o5306Y] [Cross Ref]
• Kroll P, Schulte HJ. Phys. Stat. Sol. B. Vol. 243. World Scientific, Singapore; 2006.
• Yilmaz DE, Bulutay C, Cagin T. Phys. Rev. B. 2008. p. 155306. Bibcode number [2008PhRvB..77o5306Y] [Cross Ref]
• Martin-Samos L, Limoge Y, Crocombette J-P, Roma G, Richard N, Anglada E, Artacho E. Phys. Rev. B. 2005. p. 014116. We gratefully thank Layla Martin-Samos for the contribution relative to the generation of the glass.
• Hadjisavvas G, Kelires PC. Physica E. 2007. p. 99. COI number [1:CAS:528:DC%2BD2sXktF2luro%3D]; Bibcode number [2007PhyE...38...99H] [Cross Ref]
• Ippolito M, Meloni S, Colombo L. Appl. 2008. p. 153109. Bibcode number [2008ApPhL..93o3109I] [Cross Ref]
• Guerra R, Degoli E, Ossicini S. Phys. Rev. B. 2009. p. 155332. Bibcode number [2009PhRvB..80o5332G] [Cross Ref]
• Degoli E, Guerra R, Iori F, Magri R, Marri I, Pulci O, Bisi O, Ossicini S. C. R. Physique. 2009. p. 575. COI number [1:CAS:528:DC%2BD1MXhtF2hs73L]; Bibcode number [2009CRPhy..10..575D] [Cross Ref
• Guerra R, Marri I, Magri R, Martin-Samos L, Pulci O, Degoli E, Ossicini S. Superlatt. Microstruct. 2009. p. 246.
• Guerra R, Ossicini S. Phys. Rev. B. 2010. p. 245307. Bibcode number [2010PhRvB..81x5307G] [Cross Ref]
• Guerra R, Degoli E, Marsili M, Pulci O, Ossicini S. Phys. Status Solidi B. 2010. p. 2113. [Cross Ref]
• Luppi E, Iori F, Magri R, Pulci O, Del Sole R, Ossicini S, Degoli E, Olevano V. Phys. Rev. B. 2007. p. 033303. Bibcode number [2007PhRvB..75c3303L] [Cross Ref]
• Iori F, Degoli E, Magri R, Marri I, Cantele G, Ninno D, Trani F, Pulci O, Ossicini S. Phys. Rev. B. 2007. p. 085302. Bibcode number [2007PhRvB..76h5302I] [Cross Ref]
• Bruno M, Palummo M, Marini A, Del Sole R, Olevano V, Kholod AN, Ossicini S. Phys. Rev. B. 2005. p. 153310. Bibcode number [2005PhRvB..72o3310B] [Cross Ref]
• Delerue C, Lannoo M, Allan G. Phys. 2000. p. 2457. COI number [1:CAS:528:DC%2BD3cXhs1Cms70%3D]; Bibcode number [2000PhRvL..84.2457D] [PubMed] [Cross Ref]
• Ramos LE, Paier J, Kresse G, Bechstedt F. Phys. Rev. B. 2008. p. 195423. Bibcode number [2008PhRvB..78s5423R] [Cross Ref]
• Guerra R. PhD Thesis. 2009. unpublished.
• Heitmann J, Müller F, Yi L, Zacharias M, Kovalev D, Eichhorn F. Phys. Rev. B. 2004. p. 195309. Bibcode number [2004PhRvB..69s5309H] [Cross Ref]
• Linnros J, Lalic N, Galeckas A, Grivickas V. J. 1999. p. 6128. COI number [1:CAS:528:DyaK1MXnsVarsb0%3D]; Bibcode number [1999JAP....86.6128L] [Cross Ref]
• Glover M, Meldrum A. Optical Materials. 2005. p. 977. COI number [1:CAS:528:DC%2BD2MXhtFGnsbo%3D]; Bibcode number [2005OptMa..27..977G] [Cross Ref]
• Shimitsu-Iwayama T, Hama T, Hole DE, Boyd IW. Solid-State Electronics. 2001. p. 1487. Bibcode number [2001SSEle..45.1487S] [Cross Ref]
• Schneibner M, Yakes M, Bracker AS, Ponomarev IV, Doty MF, Hellberg CS, Whitman LJ, Reinecke TL, Gammon D. Nature Physics. 2008. p. 291. [Cross Ref]
• Allan G, Delerue C. Phys. Rev. B. 2007. p. 195311. Bibcode number [2007PhRvB..75s5311A] [Cross Ref]
• Lockwood R, Hryciw A, Meldrum A. Phys. 2006. p. 263112.
• Dal Negro L, Cazzanelli M, Pavesi L, Ossicini S, Pacifici D, Franzó G, Priolo F. Appl. 2003. p. 4636. COI number [1:CAS:528:DC%2BD3sXkvFyjurk%3D]; Bibcode number [2003ApPhL..82.4636D] [Cross Ref]
• Musin R, Wang X. Phys. Rev. B. 2006. p. 165308. Bibcode number [2006PhRvB..74p5308M] [Cross Ref]
• Musin R, Wang X. Phys. Rev. B. 2005. p. 155318. Bibcode number [2005PhRvB..71o5318M] [Cross Ref]
• Yang J, Jin C, Kim C, Jo M. Nano Lett. 2006. p. 12,2679. [PubMed]
• Bruno M, Palummo M, Del Sole R, Ossicini S. Surface Sci. 2007. p. 277.
• Beckman S, Han J, Chelikowsky J. Phys. Rev. B. 2006. p. 165314. Bibcode number [2006PhRvB..74p5314B] [Cross Ref]
• Zhao X, Wei CM, Yang L, Chou MY. Phys. 2004. p. 236805. Bibcode number [2004PhRvL..92w6805Z] [PubMed] [Cross Ref]
• Amato M, Palummo M, Ossicini S. Phys. Rev. B. 2009. p. 235333. Bibcode number [2009PhRvB..80w5333A] [Cross Ref]
• Amato M, Palummo M, Ossicini S. Phys. Status Solidi B. 2010. p. 2096. [Cross Ref]
• Wu Y, Cui Y, Huynh L, Barrelet CJ, Bell DC, Lieber CM. Nano Lett. 2004. p. 433. COI number [1:CAS:528:DC%2BD2cXosFKqtg%3D%3D]; Bibcode number [2004NanoL...4..433W] [Cross Ref]
• Vegard L. Z. 1921. p. 17. COI number [1:CAS:528:DyaB3MXhsV2luw%3D%3D]; Bibcode number [1921ZPhy....5...17V] [Cross Ref]
• Ma DDD, Lee CS, Au FCK, Tong SY, Lee ST. Science. 2003. p. 1874. COI number [1:CAS:528:DC%2BD3sXitFKlu7s%3D]; Bibcode number [2003Sci...299.1874M] [PubMed] [Cross Ref]
• Palummo M, Palummo M, Ossicini S. Phys. Status Solidi B. 2010. (under review)
• Marini A, Hogan C, Grüning M, Varsano D. Comput. 2009. p. 1392. COI number [1:CAS:528:DC%2BD1MXovFyjtL8%3D]; Bibcode number [2009CoPhC.180.1392M] [Cross Ref]
• Kholod AN, Shaposhnikov VL, Sobolev N, Borisenko VE, D’Avitaya FA, Ossicini S. Phys. Rev. B. 2004. p. 035317. Bibcode number [2004PhRvB..70c5317K] [Cross Ref]
• Kholod AN, Saul A, Fuhr JD, Borisenko VE, D’Avitaya FA. Phys. Rev. B. 2000. p. 12949. COI number [1:CAS:528:DC%2BD3cXnvFamsbY%3D]; Bibcode number [2000PhRvB..6212949K] [Cross Ref]
Articles from Nanoscale Research Letters are provided here courtesy of Springer | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2956023/?tool=pubmed","timestamp":"2014-04-16T20:02:47Z","content_type":null,"content_length":"151403","record_id":"<urn:uuid:a7c8788f-0f66-4246-a754-6122e7139b29>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Baylor University || Mathematics Department || Differential Equations
Differential Equations
The differential equations group at Baylor conducts research in ordinary and partial differential equations, discrete dynamical systems cast as finite difference equations, and the interaction
between hybrid discrete/continuous dynamical systems (dynamic equations on time scales). We are also interested in theoretical, applied, and numerical results. We enjoy at least two active seminars
during any semester where students and faculty present their current work.
Differential Equations Group: John Davis, Johnny Henderson, Tim Sheng, Jonatan Lenells
Differential Equations Seminar
The Differential Equations Seminar meets in Sid Richardson 325 on Mondays and Wednesdays from 2:00-3:20 p.m. | {"url":"http://www.baylor.edu/math/index.php?id=53585","timestamp":"2014-04-24T01:48:06Z","content_type":null,"content_length":"17043","record_id":"<urn:uuid:ec95df25-a04a-480c-aeda-228d4f53b850>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
Where does "on the wrong side of square law" come from?
June 28, 2012 8:09 AM
Where does this phrase come from? "On the wrong side of square law" is a science aphorism for a situation in which efforts must be increased at an exponential rate to achieve only modest returns.
Does it have a specific origin or source?
posted by Jackson to Writing & Language (5 answers total)
Inverse-square law: simply put, if you double the distance between the source of radiation and the target object, you need four times as much radiation to get the same exposure.
posted by laconic skeuomorph at 8:17 AM on June 28, 2012 [2 favorites]
I've not heard this phrase before, but I would imagine it is because a power law distribution with exponent between -1 and -2 has a finite mean and an infinite variance. That means the average value is finite, but the average squared value is infinite.
That makes life very difficult experimentally or in simulations. You're studying something (say how long it takes for your system to go from X to Y, whatever those are), and you want to figure out the average time. A finite average time exists if your exponent is between -1 and -2, but since the variance is infinite you're guaranteed to measure some very large values. Your experimental values will fluctuate because of the diverging variance. Because you're only going to do a bit of sampling (i.e. you won't do your experiment millions of times), those large values will throw off your estimate of the average.
posted by bessel functions seem unnecessarily complicated at 8:26 AM on June 28, 2012
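To see the sampling point above concretely, here is a short simulation (purely illustrative): draws from a heavy-tailed Pareto-type distribution chosen so that the mean is finite but the variance is infinite give sample means that keep wandering, even for fairly large samples.

# Sample means of a finite-mean, infinite-variance distribution fluctuate a lot.
import numpy as np

rng = np.random.default_rng(0)
a = 1.5                      # Pareto/Lomax shape: mean = 1/(a-1) = 2, variance infinite
for trial in range(5):
    sample = rng.pareto(a, size=10_000)
    print(f"trial {trial}: sample mean = {sample.mean():.2f}  (true mean = 2)")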
The only reference to that phrase that google coughs up is the original mention by Fred Pollack, Intel Fellow and Director of Microprocessor Research Labs, in 2000, and references thereto. It appears
he's referring to the exponential growth of computing power predicted by Moore's Law, and how, based on the diminishing returns of making smaller and smaller transistors on processing chips, they
were failing to keep up with the "Square law," that is, the exponential increase, in computing power.
posted by Sunburnt at 8:47 AM on June 28, 2012
The correct phrase is "on the wrong side of a square law". Sunburnt's citation can be viewed online. Pollack in 2000 does appear to be the originator of the phrase.
posted by beagle at 8:50 AM on June 28, 2012
This paper says that Pollack first said it in 1999 "in his keynote at MICRO-32." Pollack's presentation at MICRO-32 took place in November 1999.
posted by beagle at 9:02 AM on June 28, 2012
This thread is closed to new comments. | {"url":"http://ask.metafilter.com/218767/Where-does-on-the-wrong-side-of-square-law-come-from","timestamp":"2014-04-23T12:53:32Z","content_type":null,"content_length":"24182","record_id":"<urn:uuid:be9a830d-c1fb-4b20-837a-30734e19cf27>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Time constant
This article shows how to calculate the time constant of a CR network, and why it's useful to know it.
How long does it take to charge a capacitor?
A capacitor is a container for electric charge, so the answer depends on how big the container is, and how fast you allow charge to flow into it. How long does it take to fill a bucket?
Why should you limit the charging rate?
If you don't, you could damage either the capacitor or the power source. Big electrolytic capacitors intended for use in switched-mode power supplies have an internal resistance measured in
milli-ohms. If this is the only resistance in the circuit, there could be a huge current spike as the capacitor begins to charge.
How fast does a capacitor charge?
Here's a capacitor C being charged from a power supply, with all the instantaneous voltages labelled so that I can set up the equations and explore what's happening.
When power is first applied, the capacitor holds no charge, so there's zero voltage across it (v[C] = 0). This means that all the applied voltage appears across R, so the current at switch-on is V/R.
If V is 10v and R is 50 mΩ (that is, if the only resistance in the circuit is the capacitor's internal resistance) the initial current spike would be 20 amps! Are you sure the capacitor could handle it?
Setting up the equation
1. The first step is to add up the voltages and write an equation that links them together, then express v[R] and v[C] in terms of current and charge.
2. Next, divide through by R, and rewrite current as rate-of-change of charge.
3. The maximum possible charge the capacitor could hold in this circuit is C V, so it makes sense to write this as Q and simplify the equation a little.
4. And finally, separate the variables. The equation is ready to be solved. I'll have to use integration, because each of the little dq's is different. As the capacitor charges, it gains voltage, so
as time passes the charging current gets smaller and smaller. (In fact, current never really stops flowing - it just shrinks past the point of being measurable.)
Solving the equation
5. So, put in the integral signs, and the limits. I want to discover the amount of charge q at any time t as the capacitor charges.
6. Doing the integration and putting in the limits shows that the relationship between q and t is logarithmic (ln means natural logarithm - that is, log to the base e.)
7. Raising e to the power of each side (exponentiating both sides) simplifies things ...
8. ... and I end up with q, expressed in terms of its maximum possible value (Q) and describing how it varies with time t during charging.
What does the answer mean?
Faced with an equation like this, my first instinct is to check what happens at the limits. What is q when t is zero? What happens when t is infinite? In other words, does the answer seem sensible?
When t is zero, the term e^-t/RC becomes e^0, which is 1, and 1-1=0, so the right-hand side of the expression simplifies to 0. So q is zero - the capacitor holds no charge - before I switch on the
power. Then when t is infinite, the term e^-t/RC becomes e^-∞, which is 0, and 1-0=1, so the right-hand side of the expression simplifies to Q. So q would become Q - the capacitor would become fully
charged - if I waited long enough. So the equation says that q rises from 0 to its final value of Q, which is of course what I specified in the integral.
The resulting charging curve is plotted with Q equal to 1, using as the horizontal scale the ratio of t to CR.
This ratio must be a number. Since t is time, CR must be a time, too. What matters is not t alone, but the value of t as a fraction of CR. And the value of CR depends solely on the values of
capacitor and resistor in this particular circuit. CR is known as the circuit's time constant. For example, if C is 10 μF and R is 1 MΩ, the time constant is 10 seconds. Microfarads times megohms
equals seconds.
After a time equal to CR - one time constant - the capacitor has charged to 63%. That's not particularly interesting or useful. It might have been better to define, say, 3CR as the time constant,
because after 3CR the charge has reached 95% of its final value.
But CR is easy to remember, and it's the reciprocal of the network's corner frequency (in radians per second), which is a useful idea when you're thinking about how RC networks affect a circuit's frequency response. I discuss this further in the articles on RC network theory.
And since the voltage across the capacitor is proportional to the charge it's holding, this exponential curve also shows how the capacitor voltage varies with time.
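As a quick numerical check of that curve, the snippet below (an illustration, not part of the original article) evaluates q/Q = 1 - e^(-t/CR) for the 10 μF and 1 MΩ example, whose time constant is 10 seconds:

# Fraction of full charge reached after a given time, for q = Q(1 - e^(-t/CR)).
import math

def charge_fraction(t, C=10e-6, R=1e6):      # 10 uF and 1 Mohm -> CR = 10 s
    return 1.0 - math.exp(-t / (C * R))

for multiple in (1, 2, 3, 5):
    print(f"after {multiple} time constant(s): {charge_fraction(multiple * 10):.1%}")
# roughly 63.2%, 86.5%, 95.0%, 99.3%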
Single-pulse generator
And here's an example to show how time constant is used in practice ...
The need often arises to generate a single fixed-length pulse in response to a trigger event. For example, many audio amplifiers do not connect the loudspeakers until the power rails have stabilised,
to avoid a disturbing 'thump' from the speakers.
Whilst the circuit is waiting for a trigger pulse, the RS flipflop is held reset. Its output is high, so the transistor conducts, placing a short-circuit across the capacitor C and so preventing it
from charging. The output of the 555 itself is low at this time, due to the inverting buffer.
As soon as the trigger pulse arrives, the lower comparator sets the RS flipflop. The output changes state, switching off the transistor, and the capacitor begins to charge through R. Eventually the
voltage across C reaches a preset threshold value. The upper comparator then resets the flipflop, ready for the next trigger pulse.
Other related pages: | {"url":"http://www.johnhearfield.com/Physics/Time_constant.htm","timestamp":"2014-04-19T22:22:09Z","content_type":null,"content_length":"9539","record_id":"<urn:uuid:c4a0b05d-bf21-41c3-8e8b-6074cf54a2a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: REWRITE-EFFICIENT ECC/INTERLEAVING FOR MULTI-TRACK RECORDING ON MAGNETIC TAPE
For writing data to multi-track tape, a received data set is segmented into unencoded subdata sets, each comprising an array having K2 rows and K1 columns. For each unencoded subdata set, N1-K1 C1-parity bytes are generated for each row and N2-K2 C2-parity bytes are generated for each column. The C1 and C2 parity bytes are appended to the ends of the row and column, respectively, to form encoded C1 and C2 codewords, respectively. All of the C1 codewords per data set are endowed with a specific codeword header to form a plurality of partial codeword objects (PCOs). Each PCO is mapped onto a logical data track according to information within the header. On each logical data track, adjacent PCOs are merged to form COs which are modulation encoded and mapped into synchronized COs. Then T synchronized COs are written simultaneously to the data tape, where T is the number of concurrent active tracks on the data tape.
1. A method for writing data to a multi-track data tape, comprising: receiving a stream of user data symbols, the stream comprising at least one data set; segmenting each data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns; for each unencoded subdata set: generating N1-K1 C1-parity bytes for each row of a subdata set and appending the C1-parity bytes to the end of the row to form an encoded C1 codeword having a length N1; and generating N2-K2 C2-parity bytes for each column of the subdata set and appending the C2-parity bytes to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords; from the S encoded subdata sets, forming a plurality of (S×N2) partial codeword objects (PCOs), each comprising a header and a C1 codeword; mapping each PCO onto a logical data track according to information within the header of the PCO; on each logical data track, merging adjacent PCOs to form codeword objects (COs), each comprising at least two adjacent PCOs; modulation encoding the COs and adding synchronization patterns to obtain T synchronized COs, where T is the number of concurrent active tracks on the data tape; and writing T synchronized COs simultaneously to the data tape.
2. The method of claim 1, wherein mapping each PCO comprises assigning to each PCO with index m=CWI_number a logical track number t based on the formula t=mod(7 floor(f(m)/S)+g(m), T), where floor(x) denotes the integer part of the real number x and mod(a,T) denotes the remainder of the division by T, where the remainder is in the range 0, 1, . . . , T-1.

3. The method of claim 2, wherein f(m)=m and g(m)=m.

4. The method of claim 2, wherein f(m)=m and g(m)=floor(m/2).

5. The method of claim 2, further comprising writing a PCO with CWI_number=m to the tape at location i=floor(m/T).

6. The method of claim 2, wherein mapping each PCO onto a logical data track is characterized by m=mod(i,2)+2T floor(i/2)+2 mod(t-7 mod(floor(i/2), T), T), where PCO-set index i=floor(m/T).

7. The method of claim 1, wherein T=16, N2=96 and S=32, and wherein further mapping each PCO comprises mapping a PCO from a CW interleave number m to a track number according to t=mod{2 floor(m/2)+6 floor(m/32)+mod[floor(m/16), 2]+mod[floor(m/256), 2]-2 mod[floor(m/256), 2]×mod[floor(m/16), 2], 16}.

8. The method of claim 7, wherein mapping each PCO onto a logical data track is characterized by m=32 floor(i/2)+mod(i,2)+2 mod[5 floor(i/2)+floor(t/2), 8]+16 mod[mod(t,2)+mod(floor(i/16), 2), 2].
9. A data storage tape device, comprising: a host interface through which a stream of user data symbols comprising a data set is received; a segmenting module operable to segment the data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns; a C1 encoder operable to generate N1-K1 C1-parity bytes for each row of a subdata set and append the C1-parity bytes to the end of the row to form an encoded C1 codeword having a length N1; a C2 encoder operable to generate N2-K2 C2-parity bytes for each column of the subdata set and append the C2-parity bytes to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords; a partial codeword object (PCO) formatter operable to form a plurality of (S×N2) PCOs, each comprising a header and a C1 codeword, from the S encoded subdata sets; a PCO interleaver operable to map each PCO onto a logical data track according to information within the header of the PCO; a codeword object formatter operable, on each logical data track, to merge adjacent PCOs into COs, each comprising at least two adjacent PCOs; a modulation encoder operable to encode the COs into synchronized COs; and a write channel, including a write head, operable to write T synchronized COs simultaneously to the tape, where T equals the number of concurrent active tracks on a data storage tape.
10. The data storage tape device of claim 9, wherein the PCO interleaver is operable to map each PCO onto a logical data track by assigning to each PCO with index m=CWI_number a logical track number t based on the formula t=mod(7 floor(f(m)/S)+g(m), T), where floor(x) denotes the integer part of the real number x and mod(a,T) denotes the remainder of the division by T, where the remainder is in the range 0, 1, . . . , T-1.

11. The data storage tape device of claim 10, wherein f(m)=m and g(m)=m.

12. The data storage tape device of claim 10, wherein f(m)=m and g(m)=floor(m/2).

13. The data storage tape device of claim 10, wherein the write channel writes T synchronized COs simultaneously to the tape by writing a PCO with CWI_number=m to the tape at location i=floor(m/T).

14. The data storage tape device of claim 9, wherein T=16, N2=96 and S=32, and wherein further the PCO interleaver maps each PCO by mapping a PCO from a CW interleave number m to a track number according to t=mod{2 floor(m/2)+6 floor(m/32)+mod[floor(m/16), 2]+mod[floor(m/256), 2]-2 mod[floor(m/256), 2]×mod[floor(m/16), 2], 16}.

15. The data storage tape device of claim 14, wherein the PCO interleaver maps each PCO onto a logical data track as characterized by m=32 floor(i/2)+mod(i,2)+2 mod[5 floor(i/2)+floor(t/2), 8]+16 mod[mod(t,2)+mod(floor(i/16), 2), 2].
16. A computer program product of a computer readable medium usable with a programmable computer, the computer program product having computer-readable code embodied therein for writing data to a multi-track data tape, the computer-readable code comprising instructions for: receiving a stream of user data symbols, the stream comprising at least one data set; segmenting each data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns; for each unencoded subdata set: generating N1-K1 C1-parity bytes for each row of a subdata set and appending the C1-parity bytes to the end of the row to form an encoded C1 codeword having a length N1; and generating N2-K2 C2-parity bytes for each column of the subdata set and appending the C2-parity bytes to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords; from the S encoded subdata sets, forming a plurality of (S×N2) partial codeword objects (PCOs), each comprising a header and a C1 codeword; mapping each PCO onto a logical data track according to information within the header of the PCO; on each logical data track, merging adjacent PCOs to form codeword objects (COs), each comprising at least two adjacent PCOs; modulation encoding the COs and adding synchronization patterns to obtain T synchronized COs, where T is the number of concurrent active tracks on the data tape; and writing T synchronized COs simultaneously to the data tape.
17. The computer program product of claim 16, wherein the instructions for mapping each PCO comprise instructions for assigning to each PCO with index m=CWI_number a logical track number t based on the formula t=mod(7 floor(f(m)/S)+g(m), T), where floor(x) denotes the integer part of the real number x and mod(a,T) denotes the remainder of the division by T, where the remainder is in the range 0, 1, . . . , T-1.

18. The computer program product of claim 17, wherein f(m)=m and g(m)=m.

19. The computer program product of claim 17, wherein f(m)=m and g(m)=floor(m/2).

20. The computer program product of claim 17, further comprising instructions for writing a PCO with CWI_number=m to the tape at location i=floor(m/T).

21. The computer program product of claim 17, wherein mapping each PCO onto a logical data track is characterized by m=mod(i,2)+2T floor(i/2)+2 mod(t-7 mod(floor(i/2), T), T), where PCO-set index i=floor(m/T).

22. The computer program product of claim 16, wherein T=16, N2=96 and S=32, and wherein further the instructions for mapping each PCO comprise instructions for mapping a PCO from a CW interleave number m to a track number according to t=mod{2 floor(m/2)+6 floor(m/32)+mod[floor(m/16), 2]+mod[floor(m/256), 2]-2 mod[floor(m/256), 2]×mod[floor(m/16), 2], 16}.

23. The computer program product of claim 22, wherein mapping each PCO onto a logical data track is characterized by m=32 floor(i/2)+mod(i,2)+2 mod[5 floor(i/2)+floor(t/2), 8]+16 mod[mod(t,2)+mod(floor(i/16), 2), 2].
24. An apparatus for encoding a stream of user data symbols comprising a data set, comprising: a segmenting module operable to segment the data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns; a C1 encoder operable to generate N1-K1 C1-parity bytes for each row of a subdata set and append the C1-parity bytes to the end of the row to form an encoded C1 codeword having a length N1; a C2 encoder operable to generate N2-K2 C2-parity bytes for each column of the subdata set and append the C2-parity bytes to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords; a partial codeword object interleaver operable to map each PCO onto a logical data track according to information within the header of the PCO; a codeword object formatter operable, on each logical data track, to merge adjacent PCOs into COs, each comprising at least two adjacent PCOs; and a modulation encoder operable to encode the COs into synchronized COs.
25. The data storage tape device of claim 24, wherein the PCO interleaver is operable to map each PCO onto a logical data track by assigning to each PCO with index m=CWI_number a logical track number t based on the formula t=mod(7 floor(f(m)/S)+g(m), T), where floor(x) denotes the integer part of the real number x and mod(a,T) denotes the remainder of the division by T, where the remainder is in the range 0, 1, . . . , T-1.

26. The data storage tape device of claim 25, wherein f(m)=m and g(m)=m.

27. The data storage tape device of claim 25, wherein f(m)=m and g(m)=floor(m/2).

28. The data storage tape device of claim 25, wherein the write channel writes T synchronized COs simultaneously to the tape by writing a PCO with CWI_number=m to the tape at location i=floor(m/T).

29. The data storage tape device of claim 24, wherein T=16, N2=96 and S=32, and wherein further the PCO interleaver maps each PCO by mapping a PCO from a CW interleave number m to a track number according to t=mod{2 floor(m/2)+6 floor(m/32)+mod[floor(m/16), 2]+mod[floor(m/256), 2]-2 mod[floor(m/256), 2]×mod[floor(m/16), 2], 16}.

30. The data storage tape device of claim 29, wherein the PCO interleaver maps each PCO onto a logical data track as characterized by m=32 floor(i/2)+mod(i,2)+2 mod[5 floor(i/2)+floor(t/2), 8]+16 mod[mod(t,2)+mod(floor(i/16), 2), 2].
31. A method for deploying computing infrastructure, comprising integrating computer readable code into a computing system, wherein the code, in combination with the computing system, is capable of performing the following: receiving a stream of user data symbols, the stream comprising at least one data set; segmenting each data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns; for each unencoded subdata set: generating N1-K1 C1-parity bytes for each row of a subdata set and appending the C1-parity bytes to the end of the row to form an encoded C1 codeword having a length N1; and generating N2-K2 C2-parity bytes for each column of the subdata set and appending the C2-parity bytes to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords; from the S encoded subdata sets, forming a plurality of (S×N2) partial codeword objects (PCOs), each comprising a header and a C1 codeword; mapping each PCO onto a logical data track according to information within the header of the PCO; on each logical data track, merging adjacent PCOs to form codeword objects (COs), each comprising at least two adjacent PCOs; modulation encoding the COs and adding synchronization patterns to obtain T synchronized COs, where T is the number of concurrent active tracks on the data tape; and writing T synchronized COs simultaneously to the data tape.
32. The method of claim 31, wherein mapping each PCO comprises assigning to each PCO with index m=CWI_number a logical track number t based on the formula t=mod(7 floor(f(m)/S)+g(m), T), where floor(x) denotes the integer part of the real number x and mod(a,T) denotes the remainder of the division by T, where the remainder is in the range 0, 1, . . . , T-1.

33. The method of claim 32, further comprising writing a PCO with CWI_number=m to the tape at location i=floor(m/T).

34. The method of claim 32, wherein mapping each PCO onto a logical data track is characterized by m=mod(i,2)+2T floor(i/2)+2 mod(t-7 mod(floor(i/2), T), T), where PCO-set index i=floor(m/T).

35. The method of claim 31, wherein T=16, N2=96 and S=32, and wherein further mapping each PCO comprises mapping a PCO from a CW interleave number m to a track number according to t=mod{2 floor(m/2)+6 floor(m/32)+mod[floor(m/16), 2]+mod[floor(m/256), 2]-2 mod[floor(m/256), 2]×mod[floor(m/16), 2], 16}.
RELATED APPLICATION DATA [0001]
The present application is related to commonly-assigned and co-pending U.S. application Ser. No. ______ [IBM Docket TUC920070102US1], entitled ECC INTERLEAVING FOR MULTI-TRACK RECORDING ON MAGNETIC
TAPE, and Ser. No. ______ [IBM Docket TUC920070254US1], entitled REWRITING CODEWORD OBJECTS TO MAGNETIC DATA TAPE UPON DETECTION OF AN ERROR, both filed on the same date as the present application,
which related applications are hereby incorporated herein by reference in their entireties.
TECHNICAL FIELD [0002]
The present invention relates generally to formatting data to be recorded onto magnetic tape and, in particular, to an adjustable ECC format and interleaving process to accommodate tape drives having
a multiple of eight transducers/sensors per head to read and write from/to a multiple of eight number of tracks simultaneously.
BACKGROUND ART [0003]
The Linear Tape Open (LTO) formats Generations 3 and 4 use error-correcting codes (ECC), which are based on a 2-dimensional product code. The C1-code is arranged along the rows of the 2-dimensional
array. It is an even/odd interleaved Reed-Solomon (RS) code of length 240 giving rise to a row of length 480. The C2-code is arranged along the columns of the array. It is an RS-code of length 64 and dimension 54. The codewords are 2-dimensional arrays of size 64×480 and they are called subdata sets in the LTO standard. It is anticipated that future generations of drives will write on more than 16 tracks simultaneously. However, all current generations of LTO formats (Gen-1 to Gen-4) are based on the above C2 coding scheme which, together with its associated interleaving, cannot accommodate future tape-drive systems that will support heads with 16, 24, 32 or 48 (or other multiple of eight) transducers/sensors per head to read/write 16, 24, 32 or 48 (or other multiple of eight) concurrent tracks, respectively. Furthermore, it is expected that future generations of drives will use longer subdata sets having rows which consist of 4-way codeword interleaves (CWI-4) of length
960 instead of the 2-way even/odd codeword interleaves (CWI-2), which are called codeword pairs of the LTO format (Gen-1 to Gen-4). In LTO Gen-1 to Gen-4, these CWI-2s are endowed with codeword pair
headers and grouped into pairs to form codeword objects (CO). When a write failure occurs, entire COs are rewritten. If the same CO-rewrite strategy is applied to subdata set rows consisting of
CWI-4s, there is a loss in efficiency because most often only one of the two CWI-4s per CO had a failure and the other CWI-4 would not need to be rewritten. Since CWI-4s are twice as long as CWI-2s,
the loss in efficiency is about twice as large in the former case as in the latter.
SUMMARY OF THE INVENTION [0004]
The present invention provides methods, apparatus and computer program product for writing data to multi-track tape. In one embodiment, a method comprises receiving a stream of user data symbols, the stream comprising a data set, and segmenting the data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns. For each unencoded subdata set, N1-K1 C1-parity bytes are generated for each row of a subdata set which are appended to the end of the row to form an encoded C1 codeword having a length N1. Similarly, for each unencoded subdata set, N2-K2 C2-parity bytes are generated for each column of the subdata set which are appended to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords. All the S×N2 C1 codewords per data set are endowed with a specific codeword header to form a plurality of S×N2 partial codeword objects (PCOs). Each PCO is mapped onto a logical data track according to information within the headers of the PCO. On each logical data track, adjacent PCOs are merged to form COs, each comprising at least two adjacent PCOs. Each CO is modulation encoded and mapped into a synchronized CO by adding various synchronization patterns. T synchronized COs are then written simultaneously to the tape, where T equals the number of concurrent active tracks on the tape.
In another embodiment, a data storage tape device comprises a host interface through which a stream of user data symbols comprising a data set is received and a segmenting module operable to segment the data set into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns. A C1 encoder is operable to generate N1-K1 C1 parity bytes for each row of a subdata set and append the C1 parity bytes to the end of the row to form an encoded C1 codeword having a length N1, and a C2 encoder is operable to generate N2-K2 C2 parity bytes for each column of the subdata set and append the C2 parity bytes to the end of the column to form an encoded C2 codeword having a length N2, whereby an encoded subdata set is generated having N2 C1 codewords. A partial codeword object formatter is operable to form a plurality of (S×N2) partial codeword objects (PCOs) from the S encoded subdata sets, each PCO comprising a header and a C1 codeword. A partial codeword object interleaver is operable to map each PCO onto a logical data track according to information within the headers of the PCO. A codeword object formatter is operable, on each logical data track, to merge adjacent PCOs into COs, each comprising at least two adjacent PCOs. A modulation encoder is operable to encode the COs into synchronized COs that contain various sync patterns in addition to modulation encoded COs. A write channel, including a write head, is operable to write T synchronized COs simultaneously to the tape, where T equals the number of concurrent active tracks on the tape.
BRIEF DESCRIPTION OF THE DRAWINGS [0006]
FIG. 1 is a block diagram of a magnetic tape drive with which the present invention may be implemented;
FIG. 2 is a schematic representation of an encoded subdata set, including interleaved C1 and C2 ECC;
FIG. 3 is a block diagram of components generalizing the LTO Gen-4 standard used to form data sets layouts from a stream of user data symbols;
FIGS. 4A and 4B are schematic representations of unencoded and encoded subdata sets, respectively;
FIG. 5 is a logic diagram of a C2-encoder for a [96, 84, 13]-RS code which may be used with the present invention;
FIG. 6 illustrates a codeword object (CO) of the LTO Gen-4 data format;
FIG. 7 illustrates an example of a distribution of subdata sets along 16 tracks of recording media based on CO-interleaving;
FIG. 8 illustrates a PCO of the present invention;
FIG. 9 is a block diagram of components of the present invention used to form data sets layouts from a stream of user data symbols;
FIG. 10 illustrates an alternative CO of the present invention;
FIG. 11 illustrates an example of a distribution of subdata sets along 16 tracks of recording media based on PCO-interleaving;
FIG. 12 illustrates a synchronized CO of the LTO Gen-4 data format; and
FIG. 13 illustrates an alternative synchronized CO of the present invention.
FIG. 14 is a flow chart illustrating the encoding processes and the generation of PCOs, COs and synchronized COs.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0020]
Some of the functional units described in this specification have been labeled as modules in order to more particularly emphasize their implementation independence. For example, a module may be
implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be
implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like. Modules may also be implemented in software for
execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance,
be organized as an object, procedure, or function. A module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments,
among different programs and across several memory devices.
Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific
details are provided, such as examples of programming, software modules, hardware modules, hardware circuits, etc., to provide a thorough understanding of embodiments of the invention. One skilled in
the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components and so forth. In other instances,
well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
FIG. 1 is a high level block diagram of a data tape drive 100 in which the present invention may be incorporated. Data to be recorded is transmitted from a host (not shown) to the drive 100 through a
host interface 102. The data undergoes a first encoding in a C1 encoder 104 and passed to a DRAM buffer controller 106. The C1-encoded data undergoes a second encoding in a C2 encoder 108 and is
stored in a DRAM buffer 110. The data is subsequently stored in an SRAM buffer 112 and formatted in a formatter 114. Formatted data is sent to a write channel and then to a write head 118 which
records the data onto the tape 120.
When the data is read back from the tape 120, a read head 122 detects the data and passes it to a read channel. The data is then processed in a de-formatter 126 and COs are verified in a verifier
128. The data is then decoded and sent to the requesting host.
The Linear Tape Open (LTO) format is based on the concept of data sets (the smallest unit written to tape) and subdata sets. A data set contains two types of data: user data and administrative
information about the data set, the latter being in a Data Set Information Table (DSIT). All data is protected by an error correction code (ECC) to minimize data loss due to errors or defects. A data
set comprises a number of subdata sets, each containing data arranged in rows. A subdata set row may contain user data or contain the DSIT. As illustrated in FIG. 2, each row consists of two
interleaved byte sequences. A first level ECC (C1 ECC) is computed separately for the even bytes and for the odd bytes for each row. The resulting C1 ECC even and odd parity bytes are appended to the
corresponding row, also in an interleaved fashion. The ECC protected row is the Codeword Pair (CWP). The even bytes form the even C1 Codeword while the odd bytes form the odd C1 Codeword. A second
level ECC (C2 ECC) is computed for each column and the resulting C2 ECC parity bytes are appended to the corresponding columns. The ECC protected column is a C2 Codeword.
The subdata set, when so protected by C1 and C2 ECC, is the smallest ECC-protected unit written to tape. Each subdata set is independent with respect to ECC; that is, errors in a subdata set affect
only that subdata set. The power of any ECC algorithm depends upon the number of parity bytes and is stated in terms of its correction capability. For a given number of N
C1-parity bytes computed for a C1 codeword, up to floor((N
)/4) errors may be corrected in each of the two interleaves of that codeword, where floor(x) denotes the integer part of the real number x. And, for a given number of N
C2-parity bytes computed for a C2 codeword, up to floor((N
)/2) errors or N
erasures may be corrected in that C2 Codeword.
It will be appreciated that multiple errors in the same subdata set can overwhelm the ability of the C1 or the C2 correction power to the extent that an error occurs when the data is read. Errors may
be caused by very small events such as small particles or small media defects. Errors may also be caused by larger events such as scratches, tracking errors or mechanical causes.
To mitigate the possibility that a single large error will affect multiple Codewords in a single subdata set, some methods of writing place Codewords from each subdata set as far apart as possible
along and across the tape surface. A single error would therefore have to affect multiple Codewords from the same subdata set before the ECC correction capability is overwhelmed. Spatial separation
of Codewords from the same subdata set reduces the risk and is accomplished in the following manner for a multi-track recording format. For each track of a set of tracks being recorded
simultaneously, a Codeword Quad (CQ) is formed by combining a Codeword Pair from one subdata set with a Codeword Pair from a different subdata set. The resulting CQ is written on one of the multiple
recorded tracks. In like manner, CQs are formed for all remaining tracks by combining Codeword Pairs, all Codeword Pairs being from differing subdata sets. The group of CQs written simultaneously is
called a CQ Set.
As illustrated in the block diagram of FIG. 3, data sets of a specified fixed size are formed by segmentation of a stream of user data symbols in a data set segmentation module 302. The data set is
further partitioned into S unencoded subdata sets. The subdata set structure is matched to an ECC module 304, which is based on a C1/C2 product code. The unencoded subdata sets comprise 2-dimensional
arrays of bytes of size K2×K1, where K1 and K2 are the dimensions of the C1 and C2 code, respectively (FIG. 4A). A C1-encoder 306 operates on rows and adds parity bytes in each row. A C2-encoder 308 operates on the C1-encoded columns and appends parity in each column. The resulting C1/C2-encoded subdata set is an N2×N1 array of bytes, where N1 and N2 are the lengths of the C1 and C2 code, respectively (FIG. 4B).
The encoding by the C2 encoder 308 may be performed by a linear feedback shift register (LFSR) 500 as shown in FIG. 5, whose feedback coefficients are given by the generator polynomial of the [96,
84, 13]-RS code. The initial state of the LFSR 500 is the all-zero state. The N2-K2=12 parity bytes of an RS codeword are obtained by clocking all the systematic K2=84 data bytes through the LFSR and reading out registers R0 to R11.
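For readers who want to experiment with this C2 encoding step, the following sketch (not part of the patent disclosure) shows one conventional way to produce the 12 parity bytes of a systematic [96, 84] RS code by polynomial division, which is what the LFSR of FIG. 5 implements in hardware. The GF(256) field polynomial 0x11D and the generator roots alpha^0..alpha^11 are assumptions chosen only for illustration; the actual polynomials used by the LTO format may differ.

# Illustrative systematic RS(96, 84) parity computation over GF(256).
# Assumed field polynomial 0x11D and generator roots alpha^0..alpha^11;
# the real LTO/patent parameters are not specified here.

def gf_mul(a, b, prim=0x11D):
    # Carry-less "Russian peasant" multiplication in GF(2^8)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= prim
    return r

def poly_mul(p, q):
    # Polynomial product with coefficients in GF(256)
    r = [0] * (len(p) + len(q) - 1)
    for i, pa in enumerate(p):
        for j, qb in enumerate(q):
            r[i + j] ^= gf_mul(pa, qb)
    return r

def rs_generator(nparity=12):
    # g(x) = (x - alpha^0)(x - alpha^1)...(x - alpha^(nparity-1)), with alpha = 2
    g, root = [1], 1
    for _ in range(nparity):
        g = poly_mul(g, [1, root])
        root = gf_mul(root, 2)
    return g                              # leading coefficient is 1

def rs_parity(data, nparity=12):
    # Remainder of data(x) * x^nparity modulo g(x): these are the parity bytes
    gen = rs_generator(nparity)
    buf = list(data) + [0] * nparity
    for i in range(len(data)):
        coef = buf[i]
        if coef:
            for j in range(1, len(gen)):
                buf[i + j] ^= gf_mul(gen[j], coef)
    return buf[len(data):]

parity = rs_parity([0x00] * 83 + [0x01])  # 84 data bytes -> 12 parity bytes
print(len(parity), parity)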
In LTO-3/4, S=64 subdata sets (or codewords) form a data set (DS), the C1 code has length N1=480 and the C2 code has length N2=64. The C1-codewords within a DS are fully determined by the subdata set (SDS) number (in the range from 0 to S-1) and by the row number within the subdata set (codeword array). In LTO-3/4, this assignment is called codeword pair designation. It is determined by the following expression:

C1-codeword_number=SDS_number+64×row_number, (1)

where SDS_number=0, 1, 2, . . . , S-1 and row_number=0, 1, . . . , 63. For LTO-3/4, the C1-codeword_number index takes values from 0 to 4095.
A structure 600 as shown in FIG. 6 is a Codeword Object (CO) structure and reflects the organization of the basic unit that includes C1-codewords and associated headers.
From the ECC module 304, a CO formatter 310 forms COs consisting of two 10-byte headers 602, 604 and of two C1-codewords 606, 608 out of the S×N2=4096 C1-codewords per DS. Thus, there are S×N2/2=2048 COs, which are numbered from 0 to 2047. The CO structure 600 with index CO_number contains the two C1-codewords with indices C1-codeword_number that are related as follows. The indices C1-codeword_number_0 and C1-codeword_number_1 of the first and second C1-codewords, respectively, are given by

C1-codeword_number_0=2×CO_number (2)

C1-codeword_number_1=2×CO_number+1. (3)

Thus, two C1-codewords in a CO are taken from two SDSs with consecutive SDS_number indices.
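As a small illustration (not part of the patent text), relations (1)-(3) translate directly into code; the LTO-3/4 value S=64 is assumed below.

# Illustrative sketch of relations (1)-(3); S=64 subdata sets as in LTO-3/4.

def c1_codeword_number(sds_number, row_number, S=64):
    # Relation (1): codeword pair designation
    return sds_number + S * row_number

def co_to_c1_codewords(co_number):
    # Relations (2) and (3): the two C1-codewords held by one CO
    return 2 * co_number, 2 * co_number + 1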
The COs are written simultaneously onto the tape in batches of T COs, where T is the number of concurrent active tracks. The CO-interleaver 312 assigns a logical track number t in the range 0, 1, . . . , T-1 to each CO of the DS. Thus, the S×N2/2 COs of a DS are grouped into batches of T COs based on their consecutive CO_number indices and then these batches are written onto the T active tracks. Thereby, one CO of each batch is written onto one of the T tracks in a one-to-one fashion, which is determined by the CO-interleaver 312.
The CO-interleaver is determined by assigning to each CO with index n=CO_number a logical track number t based on the formula

t=mod(7 floor(2n/S)+n, T) (4)

where floor(x) denotes the integer part of the real number x and mod(a,T) denotes the remainder of the division by T, where the remainder is in the range 0, 1, . . . , T-1.
In FIG. 7, the result of CO-interleaving based on formula (4) above is illustrated for a DS with S=32 SDSs by showing the data set layout along T=16 logical tracks. The 96 squares correspond to the
96 COs of the SDSs with SDS_number 0 and 1. Note that the interleaver-granularity along tape is a CO, i.e., the x-axis is measured in CO lengths. As T=16 tracks are written concurrently, the CO with
CO_number=n will be at location j=floor(n/T) from the beginning of the DS.
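Formula (4) is easy to prototype in software. The following sketch is only an illustration (it is not part of the patent disclosure) and assumes the S=32, T=16 parameters of the FIG. 7 example.

# CO-interleaver of formula (4): track and along-tape location of CO number n.
def co_track(n, S=32, T=16):
    return (7 * ((2 * n) // S) + n) % T

def co_location(n, T=16):
    return n // T            # location j from the beginning of the data set

# The first batch of T COs lands one per track:
print([co_track(n) for n in range(16)])   # expected 0..15 for the first batch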
Because of re-write considerations, it is desirable to have smaller re-write units than COs. For this reason, one defines a Partial Codeword Object (PCO), as illustrated in FIG. 8, to consist of a
CWI together with a CWI header, which contains the corresponding CWI_number (which is 4 in the FIG) specified by formula (4).
The data flow in the write path with PCO-interleaving is illustrated in FIG. 9. As with the data flow set forth in FIG. 3, data sets of a specified fixed size are formed by segmentation of a stream
of user data symbols in a data set segmentation module 902. The data set is further partitioned into S unencoded subdata sets. The subdata set structure is matched to an ECC module 904, which is
based on a C1/C2 product code. A C1-encoder 906 operates on rows and adds parity bytes in each row and a C2-encoder 908 operates on the C1-encoded columns and appends parity in each column to
generate the C1/C2-encoded subdata sets.
From the ECC module 904, a PCO formatter 910 forms PCOs. The PCO illustrated in FIG. 8 consists of a 12-byte CWI header and a 960-byte CWI.
As there are S×N
CWIs per data set, there are also S×N
PCOs per data set. Each PCO is mapped onto a logical data track according to information within the headers of the PCO.
A PCO-interleaver 912 operates on PCOs and maps them onto T concurrent logical tracks. On each logical data track, adjacent PCOs are merged in a CO formatter 914 to form COs, each comprising at least
two adjacent PCOs. Thus, the PCO-interleaver 912 operates before the CO formatter 914, which groups pairs of adjacent PCOs along the same track into COs (see FIG. 10).
The PCO-interleaver 912 is determined by assigning to each PCO with index m=CWI_number a logical track number t based on the formula
=mod(7 floor(m/S)+m, T). (5)
A more general PCO-interleaver is obtained by using the formula
=mod(7 floor(f(m)/S)+g(m), T). (6)
with predefined functions f
(.) and g(.). Formula (5) is a special case of (6) in which f(m)=m and g(m)=m. Another interesting case of the general formula is
=mod(7 floor(m/S)+floor(m/2), T). (7)
Note that (7) is obtained from (4) by using the relation of the CWI index m=CWI_number to the CO index n=CO_number given in (2) and (3).
In FIG. 11, the result of PCO-interleaving based on (7) is illustrated for a DS with S=32 SDSs by showing the data set layout along T=16 logical tracks. The N
=96 PCOs of the SDS with SDS_number 0 are represented by dots and the 96 PCOs of SDS 1 are marked by crosses. Consecutive PCOs, denoted by dots and crosses in FIG. 11, correspond to COs, denoted by
squares in FIG. 7. This illustrates the equivalence of the CO-interleaver (4) and the PCO-interleaver (7). In FIG. 11, the interleaver-granularity along tape is a PCO, i.e., the x-axis is measured in
PCO lengths. As T=16 tracks are written concurrently, the PCO with CWI_number=m will be at location i=floor(m/T) from the beginning of the DS. The finer interleaver granularity by a factor of two has
the advantage that during a rewrite PCOs are rewritten instead of COs, which allows one to reduce the loss due to rewrite by a factor of about two.
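The equivalence claimed above can be checked mechanically. The following sketch (illustrative Python, with names invented here) assigns tracks with formula (7) and verifies that the two CWIs m=2n and m=2n+1 belonging to CO number n receive exactly the track that formula (4) assigns to CO n.

S, T, N2 = 32, 16, 96              # parameters used in FIGS. 7 and 11

def co_track(n):                   # formula (4)
    return (7 * (2 * n // S) + n) % T

def pco_track(m):                  # formula (7)
    return (7 * (m // S) + m // 2) % T

for n in range(S * N2 // 2):       # every CO in one data set
    assert pco_track(2 * n) == co_track(n)
    assert pco_track(2 * n + 1) == co_track(n)
print("formula (7) agrees with formula (4) for all", S * N2 // 2, "COs")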
As is apparent from the data set layout, there are T PCOs that are written simultaneously onto the T logical tracks. Each such set of T PCOs is called a PCO set. There are S×N2 PCOs per data set and,
hence, there are S×N2/T PCO sets. All PCO sets in the data set layout are labeled from 0 to S×N2/T-1. Each CWI_number m determines a PCO-set index i=floor(m/T). Conversely, each CWI_number m is uniquely determined by the track number t and the PCO-set index i=floor(m/T). This "inverse map" is an equivalent way to characterize the PCO-interleaving. In particular, the inverse map of (7) is given by
m=mod(i,2)+2T floor(i/2)+2 mod(t-7 mod(floor(i/2), T), T). (8)
The CO-interleaver has a different granularity from the PCO-interleaver; the former is based on the natural ordering of CO-numbers n whereas the latter is based on the natural ordering of CWI_numbers
m. The CO-formatter provides the link between CO-numbers and CWI_numbers. However, the relations (2) and (3) of the CO-formatter are not compatible with the natural ordering of CO_numbers and
CWI_numbers. e.g., based on the CO_number ordering, the consecutive CWIs with indices 0 and 1 are not written simultaneously to tape. First, the CWI with index 0 is written simultaneously with the
T-1 CWIs having CWI_numbers 2, 4, . . . , 2T-2; then, the CWI with index 1 is written simultaneously with the T-1 CWIs having CWI_numbers 3, 5, . . . , 2T-1. Thus, proceeding in sequential order of
CO_numbers n for the generation of the DS layout is not the same as proceeding in sequential order of CWI_numbers m. To overcome this difference and to obtain identical DS layouts, one can apply a
fixed permutation p to the set of all CWI indices, i.e., to the set {0, 1, . . . , S×N2-1}, which achieves the desired reordering. Specifically, this permutation is given by
p(m)=2T floor(m/(2T))+T mod(m,2)+floor(mod(m,2T)/2) (9)
where mod(a,b) denotes the remainder of the division of a by b. Thus, when using CWI_numbers m as reference for the DS layout, the reordering permutation needs to be incorporated and the two functions in the
general PCO-interleaver formula (6) are f(m)=p(m) and g(m)=floor(p(m)/2). This shows that the generalized CWI-interleaver can generate the same data set layout as the CO-interleaver, which is based
on (4).
The design of PCO-interleavers above is based on (5) or (7). By modifying these equations, one can obtain alternative PCO-interleavers. The following provides a description of such an alternative
PCO-interleaver for T=16 tracks, C2-length N2=96 and S=32 subdata sets per data set. The "direct mapping" from CWI_number m to track number is defined by
t=mod{2 floor(m/2)+6 floor(m/32)+mod[floor(m/16),2]+mod[floor(m/256), 2]-2 mod[floor(m/256), 2]×mod[floor(m/16), 2], 16} (10)
together with the PCO set assignment given by i=floor(m/T). The inverse interleaving map, which assigns a CWI_number m to every PCO-set index i and logical track t, is given by
m=32 floor(i/2)+mod(i,2)+2 mod[5 floor(i/2)+floor(t/2), 8]+16 mod[mod(t,2)+mod(floor(i/16), 2), 2] (11)
This mapping is illustrated in TABLE 1. The shaded cells correspond to the PCOs of subdata set number 0. From TABLE 1, it is apparent how pairs of PCOs are grouped into COs by the CO formatter 914
shown in FIG. 9. Namely, for a fixed logical track number t, the two PCOs with PCO indices 2i and 2i+1 are combined into a CO and written onto tape in this order. For example, the two pairs of PCOs
with CWI indices (16, 17) and (58, 59) are grouped into two consecutive COs and then written onto logical track t=1.
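As a quick check (again an illustrative sketch, not part of the specification), evaluating the direct mapping (10) for the CWI indices quoted in the example confirms that 16, 17, 58 and 59 all fall on logical track t=1:

def alt_track(m):                  # formula (10) for T=16, N2=96, S=32
    a = m // 16 % 2                # mod[floor(m/16), 2]
    b = m // 256 % 2               # mod[floor(m/256), 2]
    return (2 * (m // 2) + 6 * (m // 32) + a + b - 2 * b * a) % 16

for m in (16, 17, 58, 59):
    print("CWI", m, "-> track", alt_track(m))   # all four map to track 1
assert all(alt_track(m) == 1 for m in (16, 17, 58, 59))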
There are alternative ways to perform the grouping of pairs of PCOs into COs. For instance, the PCO pairs within each CO could be swapped and, thus, PCOs with odd PCO indices 2i+1 are written prior
to those with even PCO indices 2i. Furthermore, such swapping could be a function of the PCO index i, say all PCOs are swapped within a CO if the PCO index i is in the ranges {32, 33, . . . , 63},
{96, 97, . . . , 127} and {160, 161, . . . , 191}. Such a specific swapping rule can easily be incorporated into the inverse interleaver map. For instance, the previous PCO-index-dependent swapping
is obtained by replacing the term mod(i,2) on the right hand side of (11) by the term mod(mod(i,2)+mod(floor(i/32),2), 2). As a result, one obtains a modified CWI_Number Assignment table. This
modified table differs from TABLE 1 by the swapping of all PCO pairs having a PCO index i in the ranges {32, 33, . . . , 63}, {96, 97, . . . , 127} and {160, 161, . . . , 191}.
TABLE 1. CWI_Number Assignment of the Alternative PCO-Interleaver with T = 16 Tracks, C2-Length N2 = 96 and S = 32 Subdata Sets [table rendered as images in the source; content not reproduced]
After PCO-interleaving or CO-interleaving illustrated by FIG. 9 and FIG. 3, respectively, and before writing the COs onto tape, the COs are modulation encoded and transformed into synchronized
codeword object (SCO) structures by inserting VFO, forward, resync and reverse sync fields (see FIGS. 12 and 13). In LTO-4, the headers and CWI-2s, which are called codeword pairs, are passed through
a rate-16/17 RLL encoder resulting in RLL-encoded bit-sequences of length 85 and 4080, respectively. More generally, the CO structures can be modulation encoded using an RLL-encoder of rate RH for
the header portion and an RLL-encoder of rate R for the CWIs. For the CWI-4 based CO structure, the resulting SCO-structure is illustrated in FIG. 13, where the VFO, forward sync, re-sync and reverse
sync fields have some suitable lengths L1, L2, L3, and L4, respectively.
The interleaving scheme described herein is intended to provide robustness against dead tracks and an increased robustness against stripe errors (that is, errors across all tracks). The robustness of
an ECC/CO-interleaving scheme or ECC/PCO-interleaving scheme against stripe errors depends on three factors: (i) the parameters [N2, K2, d2] of the C2-code, (ii) the interleaving depth given by the number S of subdata sets (SDS) within each Data Set (DS), and (iii) the number T of parallel channels (tracks).
In case of a stripe error, a decoder will operate as follows. The C1-decoder detects that certain rows in a number of subdata sets are uncorrectable and provides erasure-flags of these rows to the
C2-decoder. The C2-decoder performs erasure-decoding and can correct up to N2-K2-M erasures per subdata set while keeping a margin of M bytes to avoid miscorrections. If a stripe error along tape extends over no more than (S/2)×(N2-K2-M)/T SCOs, then there are no more than (S/2)×(N2-K2-M) COs which are affected by errors and these erroneous COs are evenly distributed by the inverse CO-interleaving map over the S/2 pairs of subdata sets of an affected DS. In case of PCO-interleaving, the erroneous (S/2)×(N2-K2-M) COs correspond to S×(N2-K2-M) PCOs, which are evenly distributed by the inverse PCO-interleaving map over S subdata sets. Thus, in both cases, each subdata set will contain at most N2-K2-M erased rows, which can be corrected and, therefore, the maximum stripe error length (MSEL) in terms of SCO units is given by
MSEL=S×(N2-K2-M)/(2T). (12)
Note that the absolute length of the MSEL along the tape
[in mm] depends on the length of the SCO [in mm].
The maximum number of dead tracks (MNDT) that can be tolerated in the absence of channel errors can be derived in a similar manner. Specifically, the formula
MNDT=floor((N2-K2)/(N2/T)) (13)
can be used to compute the maximum number of dead tracks.
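Using formulas (12) and (13) as reconstructed here, the MSEL and dead-track entries of TABLE 2 below can be recomputed directly; the following illustrative Python sketch (helper names are mine) checks a few of the listed configurations.

def msel(N2, K2, S, T, M=2):       # formula (12): maximum stripe error length in SCO/SCQ units
    return S * (N2 - K2 - M) // (2 * T)

def mndt(N2, K2, T):               # formula (13): maximum number of dead tracks
    return (N2 - K2) // (N2 // T)

# (N2, K2, T, S) -> (MSEL, dead tracks) as listed in TABLE 2
cases = {(64, 54, 16, 64): (16, 2),
         (128, 112, 16, 32): (14, 2),
         (96, 84, 24, 96): (20, 3),
         (192, 168, 32, 64): (22, 4)}
for (N2, K2, T, S), (exp_msel, exp_dead) in cases.items():
    assert msel(N2, K2, S, T) == exp_msel
    assert mndt(N2, K2, T) == exp_dead
print("sampled TABLE 2 rows reproduced")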
In terms of MSEL and MNDT properties, the CO-interleaver (4) and the PCO-interleaver based on (5) or (7) are equivalent. Thus, TABLES II-III below are valid for both CO-interleaving and PCO-interleaving.
Based on the synchronized codeword quad (SCQ), which is the SCO structure of LTO-4, TABLE 2 shows specific configurations of C2-code designs and properties with regard to maximum stripe error length
and dead track support. In TABLE 2, an erasure-correction margin of M=2 was assumed. All C2-codes with N2>64 have 3.7% improved format efficiency (FE) when compared to the C2-code in LTO-4 (see first row in TABLE 2). It should be emphasized that in all these cases the results hold for the
CO-interleaving formula (4) or the PCO-interleaving formula (5) or (7).
TABLE 2. Specific C2-Code Configurations for CWI-2-Based SCO-Structures

N2   K2   Tracks T  S    DS size  Relative FE in %  MSEL in SCQs  Dead tracks
64   54   16        64   1X       0                 16            2
128  112  16        32   1X       3.7               14            2
128  112  16        64   2X       3.7               28            2
128  112  32        64   2X       3.7               14            4
128  112  32        128  4X       3.7               28            4
96   84   16        64   1.5X     3.7               20            2
96   84   24        96   2.25X    3.7               20            3
96   84   32        128  3X       3.7               20            4
192  168  16        32   1.5X     3.7               22            2
192  168  24        48   2.25X    3.7               22            3
192  168  32        64   3X       3.7               22            4
In TABLE 3, the results are summarized for two embodiments of C2-codes for T=16 parallel tracks and SCO-structures, which are based on CWI-4s. Note that the scheme ECC-1 with the C2-code of length 96
can be implemented either with the PCO-interleaver (5) or (7) or by applying the alternative PCO-interleaver specified in TABLE 1. The resulting three interleavers achieve the same MSEL and the MNDT.
The length of a CWI-4 [in mm] is roughly twice as long as that of a CWI-2. Thus, a maximum stripe error length of say 20 SCQs is comparable to 10 SCOs in TABLES 2 and 3, respectively.
TABLE 3. Proposed C2-Code Configurations for CWI-4-Based SCO-Structures and T = 16 Tracks

Code   N2   K2   S   DS size  Relative FE in %  MSEL in SCOs  Dead tracks
ECC-1  96   84   32  1.5X     3.7               10            2
ECC-2  192  168  32  3X       3.7               22            2
The flow chart of FIG. 14 summarizes the foregoing process. A stream of user data symbols is received from a host, the stream including at least one data set (step 1400). Each data set is segmented
into a plurality S of unencoded subdata sets, each subdata set comprising an array having K2 rows and K1 columns (step 1402). Each unencoded subdata set is then encoded (step 1404). For each row of each unencoded subdata set, N1-K1 C1-parity bytes are generated (step 1406) and appended to the end of the row to form an encoded C1 codeword having a length N1 (step 1408). In addition, for each column of each unencoded subdata set, N2-K2 C2-parity bytes are generated (step 1410) and appended to the end of the column to form an encoded C2 codeword having a length N2 (step 1412). Thus, an encoded subdata set is generated having N2 C1 codewords.
After all of the S unencoded subdata sets have been encoded, a plurality of (S×N2) partial codeword objects (PCOs) are formed from the encoded subdata sets, each comprising a header and a C1 codeword (step 1414). Each PCO is then mapped onto a logical data track according to
information within the header of the PCO (step 1416) and, on each logical data track, adjacent PCOs are merged to form codeword objects (COs), each comprising at least two adjacent PCOs (step 1418).
The COs are modulation encoded (step 1420) and VFO and synchronization patterns are added to obtain T synchronized COs, where T is the number of concurrent active tracks on the data tape, (step
1422). The T synchronized COs are then written simultaneously to the data tape (step 1424).
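The ordering of the steps in FIG. 14 can be summarized in a small runnable toy (illustrative only: the sizes are scaled down, the "parity" bytes are placeholders rather than real Reed-Solomon parity, and the CWI numbering m = s*N2 + row is an assumption made for this sketch).

K1, K2 = 6, 4                      # unencoded row/column lengths (toy values)
N1, N2 = 8, 6                      # encoded row/column lengths (toy values)
S, T = 4, 2                        # subdata sets per data set, concurrent tracks (toy values)

def c1_encode(row): return row + [0] * (N1 - K1)      # placeholder parity, not real RS parity
def c2_encode(col): return col + [0] * (N2 - K2)      # placeholder parity, not real RS parity
def transpose(a):   return [list(x) for x in zip(*a)]

def encode_sds(sds):                                  # steps 1404-1412
    rows = [c1_encode(r) for r in sds]                # K2 rows extended to length N1
    cols = [c2_encode(c) for c in transpose(rows)]    # N1 columns extended to length N2
    return transpose(cols)                            # encoded SDS: N2 C1 codewords

user = list(range(S * K1 * K2))                                        # step 1400 (toy data set)
sdss = [[user[(s * K2 + r) * K1:(s * K2 + r + 1) * K1] for r in range(K2)]
        for s in range(S)]                                             # step 1402
encoded = [encode_sds(sds) for sds in sdss]

pcos = [(("hdr", m), encoded[m // N2][m % N2]) for m in range(S * N2)] # step 1414

tracks = [[] for _ in range(T)]
for m, pco in enumerate(pcos):                                         # step 1416, formula (7)
    tracks[(7 * (m // S) + m // 2) % T].append(pco)

cos = [[trk[i:i + 2] for i in range(0, len(trk), 2)] for trk in tracks]  # step 1418
print("COs per track:", [len(t) for t in cos])        # steps 1420-1424 (RLL, sync, write) omitted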
For the same C2-code and the same number S of SDSs per data set, the PCO-interleaving scheme described herein provides the same MSEL and MNDT properties as CO-interleaving. Thus, both schemes have
the same robustness against stripe errors and dead tracks. One benefit of the PCO-based scheme is the smaller granularity of the rewritten objects, which are PCOs rather than COs, in case of rewrites.
It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the
processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies regardless
of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable storage media include recordable-type media such as a floppy disk, a hard
disk drive, a RAM, and CD-ROMs.
The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many
modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical
application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Moreover, although described above with respect to methods and systems, the need in the art may also be met with a computer program product containing instructions for writing data to a multi-track
data tape medium or a method for deploying computing infrastructure comprising integrating computer readable code into a computing system for writing data to a multi-track data tape medium.
Lesson: Introduction to Addition
Lesson Objective
To introduce students to the meaning of the addition sign and equal sign. SWBAT identify the addends, equal sign, addition sign, sum, and solve an addition problem.
Lesson Plan
Building Number Sense (5 minutes): Counting Circle (create a circle of shapes drawn on the white board or chart paper; student groups of 4 to 5 students each choose a shape to start on; teacher points to that shape; students receive a value to count to by increments ("Count to 20 by 2's"); teacher points to each shape around the circle as the whole class counts for the group; if the count ends on a triangle that group gets a point, if it ends on another shape, the shape is erased and no points are awarded; another group is now up)
Name Collection Box (Use the number from the day of school, write number in top left of box, write different “names” for that number in box; coins, base ten blocks, tallies,
pictures, words, etc.)
Mental Math Fluency (5 minutes): Pepper with place value (all students stand, write numbers on board and ask place value; when a student answers correctly they sit and "earn their seat"; ex: the number is 45, how much is the 4 digit worth? Or what place value is the 5 in?)
Problem of the Day (7 minutes): Terrance buys 5 zhu zhu pets, Julia buys 4 zhu zhu pets, how many zhu zhu pets did they buy altogether?
How did you solve this problem?
Teacher or Student (TS): Well we knew that Terrance bought 5 and Julia bought 4. So if they both bought that many we put those numbers together. I can count on my fingers, draw
tallies, or use unifix cubes to show that when we add 5 to 4 it is 9.
How did you know that you were going to be putting the numbers together?
TS: When you buy something you are getting/adding it NOT losing it, also the word problem ends with a key vocab word “altogether” which lets us know it is adding up
How did you combine the numbers?
Student (S): I counted on my fingers, I made tallies, I used unifix cubes, etc.
Teacher (T): Well there is a special way we can show when numbers are being combined.
Mini Lesson (12 minutes): (With all questions you may elicit answers from class or give answer depending on time, class, and teaching style. I prefer to have kids drive answers and all the answers written are the conclusions I will drive student conversations to. But I still keep it very teacher driven as this is the "I" do)
What is this [draw the + sign on board]?
T: It is an addition sign, it shows when things are being put together, combined. Go over vocab for combining: more, adding, plus, combined with, and any more student added ones.
(write the addition sign on a “vocab” chart and write “+ is an addition sign it shows we are combining numbers.” )
Why do we use an addition sign?
TS: We use it to show that two numbers are being combined, added together. It shows we are adding more. So you know what to do with the answer.
What is this [draw the = sign on board]?
TS: An equal sign.
What does it show?
T: It is like a balance (take out a balance and demonstrate balancing it with unifix cubes if there is three on one side we need three on the other side to balance, place a note
card on the base with a big equal sign on it). So the equal sign is a balance each side of it needs to be balanced. Add “= equal sign is a balance, both sides of the equal sign
need to be worth the same amount”. So if there is 4 cubes on this side how many do I need on this side?
S: We need 4 unifix cubes on the other side.
T: Why?
S: We need 4 unifix cubes on the other side because we need to make both sides equal (the same), since one side has 4 we need 4 on the other side.
Stress this use of complete sentences and “because” answer format.
[Show two blue unifix cubes and 3 black unifix cubes.]
How many unifix cubes do I have in all?
T: Well I have 2 blue lets write that [draw two cubes with blue marker], and 3 black lets write that [draw 3 black cubes with black marker]. So are we combining/adding together
our numbers?
S: Yes, we are adding the numbers together.
T: Correct, why?
S: We are adding the numbers because we are combining all the cubes together, adding them together.
T: So I’m now going to write the EQUATION for this problem. First I write one of the numbers [write 2] then write the…what sign are we using? Why?
T: Addition sign [write +] and lastly the equal sign [write =]. Who has seen this before?
S: I have seen that before.
T: Where have you seen that?
S: Last year, in math, on homework, on the computer, etc.
T: Great well who knows what this is called (point to the two)? It is called an addend both the two and three are our addends [add to the vocab chart that should have addition
sign, equal sign, and addends on it at this point]
What are we going to do with these two numbers (2 and 3)? How do you know?
S: We are going to add the two numbers together, I know this because I see the + sign which means we are combining the numbers.
T: So lets combine them, how much is 2 plus 3? [draw tallies or circles or whichever you prefer the kids go to first] It is 5 so if there is five on this side of the equal sign
how could we balance out the two sides?
S: By putting a five on the right side of the equal sign.
T: This five has a special name when we write an answer to an addition fact it is called a sum (add sum to vocab chart)
But what if the problem is =2+3?
TS: It’s the same, we still have five on one side of the equal sign so we need to balance the two sides and put 5=2+3.
Work Time (Zones, Independent, Group; 30 minutes): Independent work "we do": Students will have whiteboards or notebook paper. Draw some pictures on the board of 3 blue circles and 5 red circles or 2 triangles and 1 square. Go through the problems figuring out how many shapes there are in each of them, doing the first one and then releasing more steps to students as they become comfortable writing the numbers for each set, putting an addition sign and equal sign and balancing the two sides. Ask reinforcing questions: Why do we put a + sign? What does the = sign mean? How do you know we are combining the numbers?
Move from shapes to just writing two numbers 2 and 4, and asking students to combine them.
Group Work “You Do” (Math Zones): All groups (of 4 students each) plays addition build it. Students roll two six sided dice and then pull the two numbers in unifix or base ten
cubes. Student rolls a 2 and a 3 then they pull two unfix cubes and three unifix cubes, they then write down the addition problem and solve it. Teacher is either working with a
small group that has been targeted for individual instruction or circling, taking notes on students work and misunderstanding, also asking supporting questions. So why did you
put an addition sign here? Why did you put a 6 on that side of the equal sign? (Because one side was 2+4 which is 6 so I had to balance both sides)
Math Reflection/Share (4 minutes): (This is a time to share work and discuss critically a problem a student had or explain student work. Also this time can be used to ask a difficult question that takes the concept taught one more level up in Bloom's taxonomy.)
Why is 2+7=10 wrong? Students write down answers in math notebooks or on a piece of paper using words, drawings, etc.
(It is wrong because each side of the equal sign are not balanced/equal. I know this because 2+7, shown by fingers, manipulatives, or drawings is 9 and the other side of the
equal sign says 10. So how can we fix the problem? Change the 10 to a nine, change the 2 to a 3, or the 7 to an 8. Why is the problem now correct? Both sides of the equal sign
are balanced/equal)
1. What went well? The direct translation of manipulatives into equations was great for the students, kept them engaged and gave tangibility to the addition sentence.
2. What would you change? Forgot to play through game first with whole group for "addition build it", explained and then released kids to groups; next time will play through the game me vs. class for a couple of turns.
3. What needs explanation? In general the question to "answer" format is to show a constructive lesson structure, pushing children down a path of thinking through good questions and pushing their thinking, lots of "whys" and "explain that more". This scaffolding of their thinking and input allows them to guide the lesson.
Lesson Resources
Math Unit 2 Day 1 Assessment and Answer Key Assessment 557
Math Unit 2 Day 1 Building Number Sense Examples doc Activity 537 | {"url":"http://betterlesson.com/lesson/19851/introduction-to-addition","timestamp":"2014-04-18T05:32:07Z","content_type":null,"content_length":"62073","record_id":"<urn:uuid:adde64f3-ec31-4bfd-a1e2-c0ae88986f15>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solve this equation using the quadratic formula:
(2x-3)^2 - 14 = 2x(x-7)
First, we need to simplify the equation to standard form `ax^2+bx+c=0`:
Expand the squared term:
`4x^2-12x+9-14=2x(x-7)`
Next expand the multiplication:
`4x^2-12x-5=2x^2-14x`
Finally, move all terms to the left side and collect like terms:
`2x^2+2x-5=0`
We can now solve for the roots of this equation using the quadratic formula:
`x=(-b+-sqrt(b^2-4ac))/(2a)`
a=2; b=2; c=-5
`x=(-2+-sqrt(2^2-4(2)(-5)))/(2(2))`
`x_1=(-2+sqrt(44))/4 = 1.16`
`x_2=(-2-sqrt(44))/4 = -2.16`
Graph of the function:
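Not part of the original answer: a short Python check of the arithmetic for the simplified equation 2x^2 + 2x - 5 = 0.

import math

a, b, c = 2, 2, -5
disc = b * b - 4 * a * c                 # discriminant: 4 + 40 = 44
x1 = (-b + math.sqrt(disc)) / (2 * a)
x2 = (-b - math.sqrt(disc)) / (2 * a)
print(round(x1, 2), round(x2, 2))        # 1.16 -2.16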
The Many Flaws of Dual_EC_DRBG
Update 9/19: RSA warns developers not to use the default Dual_EC_DRBG generator in BSAFE. Oh lord.
As a technical follow up to my
previous post
about the NSA's war on crypto, I wanted to make a few specific points about standards. In particular I wanted to address the allegation that NSA inserted a backdoor into the
Dual-EC pseudorandom number generator.
For those not following the story, Dual-EC is a pseudorandom number generator proposed by NIST
for international use back in 2006. Just a few months later, Shumow and Ferguson made cryptographic history by pointing out that
there might be an NSA backdoor
in the algorithm. This possibility -- fairly remarkable for an algorithm of this type -- looked bad and smelled worse. If true, it spelled almost certain doom for anyone relying on Dual-EC to keep
their system safe from spying eyes.
Now I should point out that much of this is ancient history. What is
news today is the recent
leak of classified documents
that points a very emphatic finger towards Dual_EC, or rather, to an unnamed '2006 NIST standard'. The evidence that Dual-EC is this standard has now become so hard to ignore that NIST recently took
the unprecedented step of
warning implementers to avoid it altogether
Better late than never.
In this post I'm going to try to explain the curious story of Dual-EC. While I'll do my best to keep this discussion at a high and non-mathematical level, be forewarned that I'm probably going to
fail at least at a couple of points. If you're not in the mood for all that, here's a short summary:
• In 2005-2006 NIST and NSA released a pseudorandom number generator based on elliptic curve cryptography. They released this standard -- with very little explanation -- both in the US and abroad.
• This RNG has some serious issues with just being a good RNG. The presence of such obvious bugs was mysterious to cryptographers.
• In 2007 a pair of Microsoft researchers pointed out that these vulnerabilities combined to produce a perfect storm, which -- together with some knowledge that only NIST/NSA might have -- opened a
perfect backdoor into the random number generator itself.
• This backdoor may allow the NSA to break nearly any cryptographic system that uses it.
If you're still with me, strap in. Here goes the long version.
For a good summary on the history of Dual-EC-DRBG, see this
2007 post
by Bruce Schneier. Here I'll just give the highlights.
Back in 2004-5, NIST decided to address a longstanding weakness of the
FIPS standards
, namely, the limited number of approved
pseudorandom bit generator
algorithms (PRGs, or 'DRBGs' in NIST parlance) available to implementers. This was actually a bit of an issue for FIPS developers, since the
existing random number generators
had some
known design weaknesses
NIST's answer to this problem was
Special Publication 800-90
, parts of which were later wrapped up into the international standard
ISO 18031
. The NIST pub added four new generators to the FIPS canon. None of these algorithms is a true
random number generator in the sense that they collect physical entropy. Instead, what they do is process the (short) output of a true random number generator -- like the
one in Linux
-- conditioning and stretching this 'seed' into a large number of
bits you can use to get things done.** This is particularly important for
cryptographic modules, since the
FIPS 140-2
standards typically require
you to use a DRBG as a kind of 'post-processing' -- even when you have a decent hardware generator.
The first three SP800-90 proposals used standard
components like hash functions and block ciphers.
Dual-EC was the odd one out, since it employed mathematics that are more typically used to construct
public-key cryptosystems
. This had some immediate consequences for the generator: Dual-EC is slow in a way that its cousins aren't. Up to a thousand times slower.
Now before you panic about this, the inefficiency of Dual_EC is
not necessarily
one of its flaws! Indeed, the inclusion of an algebraic generator actually makes a certain amount of sense. The academic literature includes a distinguished history of
provably secure PRGs based on number theoretic assumptions
, and it certainly didn't hurt to consider one such construction for standardization. Most developers would probably use the faster symmetric alternatives, but perhaps a small number would prefer the
added confidence of a provably-secure construction.
Unfortunately, here is where NIST ran into their first problem with Dual_EC.
Flaw #1: Dual-EC has no security proof.
Let me spell this out as clearly as I can. In the course of proposing this complex and slow new PRG where the
only damn reason you'd ever use the thing is for its security reduction,
NIST forgot to provide one. This is like selling someone a Mercedes and forgetting to attach the hood ornament.
I'd like to say this fact alone should have damned Dual_EC, but sadly this is par for the course for NIST -- which treats security proofs like those cool
Japanese cookies
you can never find. In other words, a fancy, exotic
. Indeed, NIST has a nasty habit of dumping proof-writing work onto outside academics, often
the standard has been finalized and implemented in products everywhere.
So when NIST put forward its first draft of SP800-90 in 2005, academic cryptographers were left to analyze it from scratch. Which, to their great credit,
they were quite successful
The first thing reviewers noticed is that Dual-EC follows a known design paradigm -- it's a weird variant of an elliptic curve linear congruential generator. However they
noticed that NIST had made some odd rookie mistakes.
Now here we will have to get slightly wonky -- though I will keep mathematics to a minimum. (I promise it will all come together in the end!) Constructions like Dual-EC have basically two stages:
1. A stage that generates a series of pseudorandom elliptic curve points. Just like on the graph at right, an elliptic curve point is a pair (x, y) that satisfies an elliptic curve equation. In
general, both x and y are elements of a finite field, which for our purposes means they're just large integers.***
The main operation of the PRNG is to apply mathematical operations to points on the elliptic curve, in order to generate new points that are pseudorandom -- i.e., are indistinguishable from
random points in some subgroup.
And the good news is that Dual-EC seems to do this first part beautifully! In fact Brown and Gjøsteen even proved that this part of the generator is sound provided that the Decisional
Diffie-Hellman problem is hard in the specific elliptic curve subgroup. This is a well studied hardness assumption so we can probably feel pretty confident in this proof.
2. Extract pseudorandom bits from the generated EC points. While the points generated by Dual-EC may be pseudorandom, that doesn't mean the specific (x, y) integer pairs are random bitstrings. For
one thing, 'x' and 'y' are not really bitstrings at all, they're integers less than some prime number. Most pairs don't satisfy the curve equation or are not in the right subgroup. Hence you
can't just output the raw x or y values and expect them to make good pseudorandom bits.
Thus the second phase of the generator is to 'extract' some (but not all) of the bits from the EC points. Traditional literature designs do all sorts of things here -- including hashing the point
or dropping up to half of the bits of the x-coordinate. Dual-EC does something much simpler: it grabs the x coordinate, throws away the most significant 16-18 bits, and outputs the rest.
In 2006, first Gjøsteen
and later
Schoenmakers and Sidorenko
took a close look at Dual-EC and independently came up with a surprising result:
Flaw #2: Dual-EC outputs too many bits.
Unlike those previous EC PRGs which output anywhere from 2/3 to half of the bits from the
x-coordinate, Dual-EC outputs nearly the entire thing.
This is good for efficiency, but unfortunately it also gives Dual-EC a bias.
Due to some quirks in the mathematics of the field operations, an attacker can now predict the next bits of Dual-EC output with a fairly small -- but non-trivial -- success probability, in the range
of 0.1%. While this number may seem small to non-cryptographers, it's basically a hanging offense
for a cryptographic random number generator where probability of predicting a future bit should be many orders of magnitude lower.
What's just plain baffling is that this flaw ever saw the light of day. After all, the specification was developed by bright people at NIST -- in collaboration with NSA. Either of those groups should
have discovered a bug like this, especially since this issue had been previously studied. Indeed, within a few months of public release,
two separate
groups of academic cryptographers found it, and were able to implement an attack using standard PC equipment.
So in summary, the bias is mysterious and it seems to be very much an 'own-goal' on the NSA's part. Why in the world would they release so much information from each EC point? It's hard to say, but a
bit more investigation reveals some interesting consequences:
Flaw #3: You can guess the original EC point from looking at the output bits.
By itself this isn't really a flaw, but will turn out to be interesting in just a minute.
Since Dual-EC outputs so many bits from the
x coordinate of each point -- all but the most significant 16 bits -- it's relatively easy to guess the original source point by simply brute-forcing the missing 16 bits and solving the elliptic curve
equation for
y. (
This is all high-school algebra, I swear!)
While this process probably won't
uniquely identify the original
(x, y)
, it'll give you a modestly sized list of candidates. Moreover with only 16 missing bits the search can be done quickly even on a desktop computer. Had Dual_EC thrown away more bits of the
x-coordinate, this search would not have been feasible at all.
So what does this mean? In general, recovering the EC point shouldn't actually be a huge problem. In theory it could lead to a weakness -- say predicting future outputs -- but in a proper design you
would still have to solve a
discrete logarithm
instance for
each and every point
in order to predict the next bytes output by the generator.
And here is where things get interesting.
Flaw #4: If you know a certain property about the Dual_EC parameters, and can recover an output point, you can predict all subsequent outputs of the generator.
Did I tell you this would get interesting in a minute? I totally did.
The next piece of our puzzle
was discovered
by Microsoft researchers Dan Shumow and Niels Ferguson, and announced at the CRYPTO 2007 rump session. I think this result can best be described via the totally intuitive diagram below. (Don't worry,
I'll explain it!)
Annotated diagram from Shumow-Ferguson presentation (CRYPTO 2007).
Colorful elements were added by yours truly. Thick green arrows mean 'this part is
easy to reverse'. Thick red arrows should mean the opposite. Unless you're the NSA.
The Dual-EC generator consists of two stages: a portion that generates the output bits (right) and a part that updates the internal state (left).
Starting from the "r_i" value (circled, center) and heading right, the bit generation part first computes the output point using the function "
r_i * Q
" -- where
Q is an elliptic curve point defined in the parameters -- then truncates 16 bits off its x coordinate to get the raw generator output. The "*" operator here describes elliptic point multiplication, which is a complex operation that should be relatively hard to invert.
Note that everything after the point multiplication should be easy
to invert and recover from the output, as we discussed in the previous section.
Every time the generator produces one block of output bits, it also updates
its internal state. This is designed to prevent attacks where someone compromises the internal values of a working generator, then uses this value to wind the generator backwards and guess past
outputs. Starting again from the circled "r_i" value, the generator now heads upwards and computes the point "
r_i * P
" where
P is a different elliptic curve point also described in the parameters. It then does some other stuff.
The theory here is that P and Q
should be random points, and thus it should be difficult to find "
r_i * P
" used for state update even if you know the output point "
r_i * Q
" -- which I stress you
do know, because it's easy to find. Going from one point to the other requires you to know a relationship between
P and Q, which you shouldn't actually know
since they're supposed to be random values. The difficulty of this is indicated by the thick red arrow.
Looks totally kosher to me. (Source: NIST SP800-90A)
There is, however, one tiny little exception to this rule. What if P and Q aren't
entirely random values? What if you chose them yourself specifically so you'd know the mathematical relationship between the two points?
In this case it turns out you can easily compute the next PRG state after recovering a single output point (from 32 bytes of RNG output). This means you can follow the equations through and predict
output. And the next output after that. And on forever and forever.****
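To make the algebra concrete, here is a toy sketch (mine, not from the standard) of a Dual-EC-style generator over a made-up curve. Everything in it -- the prime, the curve, the points, the trapdoor scalar d -- is invented for illustration, and the 16-bit truncation of real Dual-EC output (which the real attack must brute-force) is skipped; the point is only that an observer who knows d with P = d*Q can turn one output point into the next internal state.

# Toy sketch of the Dual_EC trapdoor algebra. Nothing here resembles the real
# NIST P-256 parameters; the numbers are throwaway values for illustration.

p = 2**127 - 1            # field modulus (a Mersenne prime)
a = 3                     # curve coefficient in y^2 = x^3 + a*x + b

def inv(x):
    return pow(x, p - 2, p)                    # modular inverse via Fermat's little theorem

def ec_add(P1, P2):
    """Affine addition on the curve; None is the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * inv(2 * y1) if P1 == P2
           else (y2 - y1) * inv(x2 - x1)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

# Put an arbitrary point Q on some curve by solving for b, then derive P from it.
Qx, Qy = 5, 7
b = (Qy * Qy - Qx**3 - a * Qx) % p
Q = (Qx, Qy)
d = 0xC0FFEE                  # the trapdoor scalar: only whoever generated the points knows it
P = ec_mul(d, Q)              # P and Q are published; the relation P = d*Q is not

def dualec_step(s):
    """One simplified Dual-EC step: next state is x(s*P), output point is s*Q."""
    return ec_mul(s, P)[0], ec_mul(s, Q)

s = 0x123456789ABCDEF         # current (secret) internal state
next_state, R = dualec_step(s)

# The attacker sees R (in reality: most of its x-coordinate) and knows d:
assert ec_mul(d, R)[0] == next_state   # d*R = d*s*Q = s*(d*Q) = s*P
print("next internal state predicted from one output point")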
This is a huge deal in the case of SSL/TLS, for example. If I use the Dual-EC PRG to generate the "Client Random" nonce transmitted in the beginning of an SSL connection, then the NSA will be able to predict the "Pre-Master" secret that I'm
going to generate during the RSA handshake. Given this information the connection is now a cleartext read. This is
not good.
So now you should all be asking the most important question of all:
how the hell
did the NSA generate the specific P and Q
values recommended in Appendix A of Dual-EC-DRBG? And do they know the relationship that allows them to run this attack? All of which brings us to:
Flaw #5: Nobody knows where the recommended parameters came from.
And if you think that's problematic, welcome to the club.
But why? And where is Dual-EC used?
The ten million dollar question of Dual-EC is why
the NSA would stick such an obviously backdoored algorithm into an important standard. Keep in mind that cryptographers found the major (bias) vulnerabilities almost immediately after Dual-EC
shipped. The possibility of a 'backdoor' was announced in summer 2007. Who would still use it?
A few people have
gone through the list of CMVP-evaluated products
and found that the answer is:
quite a few people would
. Most certify Dual-EC simply because it's implemented in OpenSSL-FIPS, and they happen to use that library. But at least one provider certifies it.
Hardcoded constants from the OpenSSL-FIPS
implementation of Dual_EC_DRBG. Recognize 'em?
It's worth keeping in mind that NIST standards carry a lot of weight -- even those that
have a backdoor. Folks who aren't keeping up on the latest crypto results could still innocently use the thing, either by accident or (less innocently) because the government asked them to. Even if
they don't use it, they might include the code in their product -- say through the inclusion of OpenSSL-FIPS or MS Crypto API -- which means it's just a function call away from being surreptitiously
Which is why people need to stop including Dual-EC immediately. We have no idea what it's for, but it needs to go away. Now.
So what about the curves?
The last point I want to make is that the vulnerabilities in Dual-EC have precisely nothing to do with the specifics of the
NIST standard elliptic curves themselves
. The 'back door' in Dual-EC comes exclusively from the relationship between P and Q
-- the latter of which is published only in the Dual-EC specification.
The attack can work even if you don't use the NIST pseudorandom curves.
However, the revelations about NIST and the NSA certainly make it worth our time to ask whether these curves themselves are somehow weak. The best answer to that question is:
we don't know
. Others have observed that NIST's process for generating the curves
leaves a lot to be desired
. But including some kind of hypothetical backdoor would be a
horrible, horrific
idea -- one that would almost certainly blow back at us.
You'd think people with common sense would realize this. Unfortunately we can't count on that anymore.
Thanks to Tanja Lange for her assistance proofing this post. Any errors in the text are entirely mine. Notes:
* My recollection of this period is hazy, but prior to SP800-90 the two most common FIPS DRBGs in production were
(1) the SHA1-based DSA generator of FIPS 186-2 and (2) ANSI X9.31
. The DSA generator was a special-purpose generator based on SHA1, and was really designed just for that purpose. ANSI X9.31 used block ciphers, but suffered from some more subtle weaknesses it
retained from the earlier X9.17 generator. These were pointed out by
Kelsey, Schneier, Wagner and Hall
** This is actually a requirement of the FIPS 140-2 specification. Since FIPS does not approve any true random number generators, it instead requires
that you run your true RNG output through a DRBG (PRNG) first. The only exception is if your true RNG has been approved 'for classified use'.
*** Specifically, x and y are integers in the range 0 to p-1, where p is a large prime number. A point is a pair (x, y) such that $y^2 = x^3 + ax + b$ (mod p). The values a and b are defined as part of the curve parameters.
**** The process of predicting future outputs involves a few guesses, since you don't know the exact output point (and had to guess at the missing 16 bits), but you can easily reduce this to a small
set of candidates -- then it's just a question of looking at a few more bits of RNG output until you guess the right one.
23 comments:
1. I have a degree from ITT Technical Institute. I was taught that the U.S. government mandates no encryption standard can surpass theirs and they have to be able to crack all encryption standards in use.
I guess a new trend in Information Technology is going to emerge. Specifically a heightened increase in national IT security and pride. Brazil has already announced plans for a new dedicated
backbone bypassing America. So much for globalization.
1. The US government tried to mandate backdoors in everything by law in the 90s and lost due to public outcry. What they couldn't mandate by law they have ever since resorted to trying to get by
subterfuge. They will never listen to what the people want, they are against the people.
And reducing USA-centric-ness of the internet isn't killing globalization, it's enhancing globalization. The whole definition of "globalization" excludes there being one massive central
authority controlling everything like the US has been doing! Why do people have an "I must be in charge to ensure harmony by force, otherwise there's chaos" attitude to everything? Open
Source has taught us that that is not the case, it's just an excuse that dictators use to keep power.
2. I have a degree from the Blaine School of Cosmetology and I would love for you to state an actually credible source for that.
3. Hey Javier,
A quick question, how is Brazil being able to communicate with Europe without being MITM'd by USA bad for globalization?
It skips America because America is hostile, it's just common sense. Connections still would be able to reach America, or go through America if that is the most convenient path.
4. I wonder if the choice of Brazil to bypass the US will help them.... Point of arrival probably will be a NSA-cooperative state. We all learned that the services of NSA and other secret
services are so damned intertwined that it is impossible to transfer packets around the globe without finally hitting one of their tapping points. And then.... With IPV6 we're lost.... Profiling is
50% of their information value, fully compatible with war time UK Y-stations, only capable to log the 'gibberish' from the Germans (Enigma traffic) but very very good in profiling the
radio networks. At the start of the war the job actually was done by half. Look at the analogy.... Amazing, isn't it?
2. You Mr Green, Lange, Schneier, Bernstein etc became famous because of hero whistleblower Mr Snowden and the evil NSA. Your life must have changed for the better. Keep it up. In math and
cryptographers we trust.
3. Thank you, that was just the detailed analysis of the issue I was hoping to eventually find here :)
I'm wondering about one thing: Assuming that somebody chose P and Q specifically for their mathematical relationship (i.e., they own the "secret key" to the DRBG), would they also be able to
"rewind" the DRBG and recover previous outputs, or does this concern future outputs only?
1. From 'just' knowing log_Q P you only get the future outputs. For getting past ones requires computing discrete logarithms. So far we don't know any way that this mathematical problem is
easier on the NIST curves.
4. Thank your for the very informative and interesting read :)
Maybe I got it wrong but
- The EC points P and Q are defined in the Dual-EC standard appendix and are recommendations (?)
- They are basically the seed for the algorithm
So, without real knowledge of crypto, I would naively assume that, if we just use other EC points than the predefined P and Q, we could bypass the NSA-knows-the-ec-point-relationship problem and
just use the basic algorithm of the standard.
Or is the problem one level deeper, at the EC itself? Is the EC fundamentally corrupted?
1. No, you're absolutely right. Replacing "Q" with one generated randomly *will* kill the backdoor. Later versions of the spec include an (optional) Appendix describing how to do this. However
I've looked at a couple of open implementations -- including OpenSSL -- and they still use the default parameters.
As I said above, the backdoor in Dual_EC has nothing to do with corruption of the elliptic curves themselves. But there's nothing wrong with an excess of caution right now.
2. Ok, thank you for the clarification :)
5. If the problem is in just two numbers P and Q, why hasn't somebody generated a new set of numbers and explained to everyone how he did it? Seems like guys from OpenSSL could have done this. Why not?
1. You can generate your own "Q" value. But it's not the default in most implementations.
6. This isn't about common sense -- it's about corruption and lies.
7. The reason why you would use the NIST (read NSA) mandated EC points is the reason you would use any NIST (read NSA) approved algorithm: it has supposedly been vetted by expert cryptographers and is A (not necessarily THE) good choice. Implementing DUAL-EC exactly as NIST specified requires programming skills, while changing the points requires some mild understanding of what an elliptic curve is.
And last but not least, I'm not sure you can get a product certified if you actually choose parameters other than the ones that were specified by NIST (read, say-it-with-me, NSA).
8. Thanks for the post, Dr. Green. In the future I hope you tackle the question of the NIST elliptic curve generation "controversy" and let us know if there is truly anything fishy with that whole
It's sad, but this whole Snowden debacle is going to cause a lot of distrust in NIST from now on. Perhaps that's a good thing as it might result in new findings from cryptographers who go back
and really analyze these standards again.
9. Assume there are more obscure hacks hiding behind the obvious fix. In the intelligence biz, it's called a 'modified limited hangout': the ability to say "oh, yes, you caught us dead to rights"
while distracting you from deeper wrongness.
10. I never fully understood what my teacher in a cryptography 101 lesson at university 20 years ago meant with: Be suspicious with algorithms where the design principles are not published. I simply
couldn't imagine how a backdoor could be planted into an algorithm - now I can!
Thanks for making this understandable to someone who is dealing with security in everyday practice, but who is not a mathematician. :)
11. Purely for the entertainment value to the audience here, I offer that it occurred to me that the suspect P-Q could have been a test case provided by the NSA, along the lines of "Given how the
algorithm is supposed to work, if we corrupt the P-Q pair by making them non-random using a specific mathematical relationship between them, then the algorithm should be provably not secure.
Demonstrating this should increase the confidence that the correctly implemented algorithm is secure." Then what happened is some arrogant scientist at NIST (full disclosure--I was formerly a
NIST employee, and the terms of my departure still burn as a fire in the pit of my stomach) conveniently "forgot" to put the correct ones in the standard, or did it on purpose since "Anyone of
modest skill in cryptography will detect the problem and come up with their own P-Q pair correctly. Anyone who doesn't deserves what they get." There are, in my estimation, people that arrogant
employed by NIST.
13. I would appreciate your advice on how to publish a freeware encryption program I constructed.
Please reply to DavidLNewton@comcast.net
15. Some of the comments ask why not just generate your own points. The naive answer is, "Sure, you can do that." NIST SP 800-90A even discusses that as an option. However, if you want to have your
device or code validated as FIPS 140-2 compliant, it has to pass some known answer tests. (You feed in the supplied value as though it was entropy, and check the first so many output blocks match
the known answers.) And those tests are (I think) dependent on P and Q being as published in NIST SP 800-90A.
So the short answer is "yes, you can", but the catch is that you will have to have some sort of option to test with the published points, and the use your own, and you will have to convince the
validation lab that this is all okay.
The "I think" is because the published examples are on the NIST web site, but that is down due to the government shutdown, so I can't go look. | {"url":"http://blog.cryptographyengineering.com/2013/09/the-many-flaws-of-dualecdrbg.html","timestamp":"2014-04-18T08:02:07Z","content_type":null,"content_length":"185338","record_id":"<urn:uuid:b1e41339-3153-4df8-a0bb-0192761b3d9f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
The current in the wire is given by the equation I = 2.5 - 0.6t, with positive to the right as shown. Which of the statements concerning the current induced in the loop is correct? a. There is no
induced current. b. The current is clockwise always. c. The direction of current depends on the value of r. d. The current is first clockwise then counterclockwise. e. The direction of current
depends on the value of A.
Do you know the right hand rule? That will give you the magnetic field's direction in the loop. But after some time the direction of the current changes, which will change the direction of the magnetic field; therefore the current in the loop will also change its direction.
The current decreases then becomes negative, so the direction of the current changes, right?
thank you
you're welcome
Paper sizes in FPDF
• From: jodleren <sonnich@xxxxxx>
• Date: Mon, 30 Nov 2009 07:59:36 -0800 (PST)
I have used FPDF for our documentation, and now they found that we
need bigger sizes - some new drawings are in - I guess - A2. And FPDF
can only handle max A3.
In the file fpdf.php line 119, I have updated this:
//Page format
'junior legal'=>array(576,360));
I want to share this, but also ask- is this right?
According to:
1) American sizes are easy, the numbers above should be at 72 dpi, so 8
inches = 8 * 72 = 576.
2) new A sizes are odd - they should be twice as big, but e.g the jump
from A2 to A1 increases by 1 mm (420 to twice that = 840 mm, but it is
841 mm?!?!).
Anyway, I calculated them as: size / 25.4 * 72
Any comments anyone? | {"url":"http://coding.derkeiler.com/Archive/PHP/comp.lang.php/2009-11/msg00749.html","timestamp":"2014-04-18T13:43:24Z","content_type":null,"content_length":"8587","record_id":"<urn:uuid:eb07120a-47d8-4c02-bb3b-523d5dcf7f10>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
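Not part of the original mail, but for anyone checking the numbers: the conversion is 1 pt = 1/72 inch and 1 inch = 25.4 mm, and the sketch below reproduces the values used in the array above. (On question 2: each smaller ISO A size is the larger one halved and rounded down to whole millimetres, so doubling does not always round-trip exactly; A1 keeps A0's 841 mm side while A2's 420 mm is floor(841/2).)

# Reproduce the point values for the page-format array: mm/inch -> PostScript points.
def mm_to_pt(mm):
    return round(mm / 25.4 * 72, 2)

def inch_to_pt(inch):
    return inch * 72

sizes_mm = {"a0": (841, 1189), "a1": (594, 841), "a2": (420, 594),
            "a3": (297, 420), "a4": (210, 297), "a5": (148, 210)}
for name, (w, h) in sizes_mm.items():
    print(name, mm_to_pt(w), mm_to_pt(h))
print("letter", inch_to_pt(8.5), inch_to_pt(11))        # 612.0 792.0
print("legal", inch_to_pt(8.5), inch_to_pt(14))         # 612.0 1008.0
print("junior legal", inch_to_pt(8), inch_to_pt(5))     # 576 360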
Polymorphic stanamically balanced binary trees
oleg@pobox.com oleg@pobox.com
Sun, 20 Apr 2003 15:25:12 -0700 (PDT)
We present a datatype of polymorphic binary trees. Nodes within a
single tree can have values of arbitrary types. The tree data
structure is subjected to a balancing constraint: for each non-leaf
node, the heights of its two children can differ at most by one.
Here, by definition the height of a node is 1 + max of the heights of
its children. A leaf node has the height of 0.
The main feature of the present approach is a blended static and
dynamic enforcement of the balancing constraint. Whenever possible, the
typechecker verifies the constraint and flags an attempt to make an
unbalanced tree as a type error. A user cannot construct an unbalanced
tree by composing make_leaf and make_node constructors.
This static enforcement of a dependent type constraint is
good. However, it's obviously limited. To verify type constraints, the
compiler needs to see the entire chain of constructor
applications. That chain however may not be not known until the run
time -- and it can depend on external conditions. How can we guarantee
that a program that builds binary trees from user input data will not
return an unbalanced tree? How can we typecheck a function that builds
a tree recursively, with the recursion depth unknown until the run-time? Obviously, a run-time check is needed.
run-time. Obviously, a run-time check is needed.
The following code shows a simple approach to a combined static and
dynamic checking. By default, a static check is enforced. If the user
cannot placate the typecheker, who keeps complaining about infinite
types, the user can delay the check until the run time, by staging the
typecheck. We enclose the dependent type into an existential envelope
to be opened and checked later. To the end user, all these
machinations are more or less transparent. To build nodes, the user
will invoke the same function, make_node. The latter does a static
check, if it can. Otherwise, it defers the check till the run time.
Another feature of the following code is an undefined arithmetics. We
can increment and compare undefined values, provided they are
appropriately typed. In fact, the height of our tree data structure
is represented by bottom. The following snippet from a uvaluator
demonstrates an undefined type subtraction:
instance (UNUM u) => UNUM (Succ u) where
uval _ = 1 + (uval (undefined::u))
Finally, yet another feature of the code is polymorphic recursion, which
specifically bypasses the monomorphic restriction.
I should point out a particularly apt error message that I received
from GHCi:
My brain just exploded.
I can't handle pattern bindings for existentially-quantified constructors.
In the binding group
(BU (cv :: ct)) = check (uval au) (uval bu)
In the definition of `heightc':
let (BU (cv :: ct)) = check (uval au) (uval bu)
in BU $ (undefined :: Succ ct)
It happened around midnight, when I was quite exhausted already. I
typed ":r", read the first line of the message "My brain just
exploded." and said, "Yeah, tell me about it...". And then I jumped
from the chair, having realized what I've been talking with. BTW, I
replaced 'let' with 'case' and the error disappeared. Incidentally,
this code gives another example of the undefined arithmetics:
incrementing the type of an existential bottom.
The literate Haskell code follows. To run it, do
ghci -fglasgow-exts -fallow-undecidable-instances /tmp/code-file.lhs
To verify the balancing condition at compile type, we need an
appropriate arithmetic type. We chose the following. We should stress
that Zero and Succ are type constructors. There are no data
constructors. In fact, those types have no values except the bottom.
> data Zero
> data Succ a
> type One = Succ Zero
> class UNUM u where
> uval:: u -> Int
Note that uval converts a type to a value. We can apply uval to
undefined and get the right answer, provided that undefined is typed
> instance UNUM Zero where
> uval _ = 0
> instance (UNUM u) => UNUM (Succ u) where
> uval _ = 1 + (uval (undefined::u))
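> -- For example, uval (undefined::Succ (Succ Zero)) evaluates to 2.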
The following datatype BU is a run-time representation of UNUMs. All
UNUM types have the same value: the bottom. Therefore, the BU value
represents UNUM's type without the value of the latter. We might
suppose that BU is quite compact: it only needs to keep track of the
enclosed type without worrying about the enclosed value.
> data BU = forall u. (UNUM u) => BU u
Incidentally, the following function umake is the inverse of uval.
The function uval converts a UNUM type to an Int value. The function
umake goes the other way around: from values to types. The function
demonstrates the type arithmetics in a value form. We create and
deconstruct types, whereas the value remains the same: undefined. We
should also point out that umake implements a polymorphic recursion,
getting around the monomorphic restriction.
> umake 0 = BU (undefined::Zero)
> umake n = case (umake (n-1)) of (BU (x::z)) -> BU $ (undefined::(Succ z))
> --test: case (umake 5) of (BU x) -> uval x
The following class HeightC expresses the balancing constraint. The
class and its instances do both static and dynamic checks.
> class HeightC lheight rheight pheight | lheight rheight -> pheight where
> heightc:: lheight -> rheight -> (Bool,pheight)
> heightc a b = (True,(undefined::pheight))
> -- The static portion of the constraint
> instance HeightC Zero Zero (Succ Zero)
> instance HeightC (Succ h) (Succ h) (Succ (Succ h))
> instance HeightC h (Succ h) (Succ (Succ h))
> instance HeightC (Succ h) h (Succ (Succ h))
The following instance does a dynamic check. It opens existential
envelopes with deferred types, does the check, and puts the result
back into the envelope. In a manner of speaking, we stage the typecheck.
> instance HeightC BU BU BU where
> heightc a@(BU au) b@(BU bu) =
> case check (uval au) (uval bu) of
> (BU (cv::ct)) -> (True, BU $ (undefined::(Succ ct)))
> where
> check av bv | av == bv = a
> check av bv | av == (bv + 1) = a
> check av bv | bv == (av + 1) = b
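> -- Note that no guard above matches when the two heights differ by more
> -- than one; such an unbalanced pair fails with a run-time exception.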
The following declarations introduce the polymorphic tree datatype.
> data Nil = Nil deriving Show
> data Leaf v h l r = Leaf v h
> data Node v h l r = Node v h l r
> class BBTree ntype vtype ht lchildtype rchildtype where
> left:: ntype vtype ht lchildtype rchildtype -> lchildtype
> right:: ntype vtype ht lchildtype rchildtype -> rchildtype
> is_leaf:: ntype vtype ht lchildtype rchildtype -> Bool
> value:: ntype vtype ht lchildtype rchildtype -> vtype
> reheight::(UNUM ht) =>
> ntype vtype ht lchildtype rchildtype ->
> ntype vtype BU lchildtype rchildtype
> height:: ntype vtype ht lchildtype rchildtype -> ht
> height _ = (undefined::ht)
> -- A statically-typed leaf
> instance BBTree Leaf vtype Zero Nil Nil where
> value (Leaf v h) = v
> is_leaf = const True
> reheight (Leaf v h) = (Leaf v (BU h))
> -- A dynamically-typed leaf
> instance BBTree Leaf vtype BU Nil Nil where
> value (Leaf v h) = v
> is_leaf = const True
> height (Leaf v h) = h
> -- A stanamically-typed node
> instance BBTree Node vtype height lchildtype rchildtype where
> is_leaf = const False
> value (Node v h a b) = v
> left (Node v h a b) = a
> right (Node v h a b) = b
> reheight (Node v h a b) = (Node v (BU h) a b)
> height (Node v h a b) = h
> make_leaf:: vtype -> Leaf vtype Zero Nil Nil
> make_leaf v = Leaf v (undefined::Zero)
> make_node:: (BBTree lnt lvt lh cl cr, BBTree rnt rvt rh cl' cr',
> HeightC lh rh ph)
> => vtype -> (lnt lvt lh cl cr) -> (rnt rvt rh cl' cr') ->
> Node vtype ph (lnt lvt lh cl cr) (rnt rvt rh cl' cr')
> make_node v l r = let (c,h) = heightc (height l) (height r)
> in if c then Node v h l r else error "balance error"
Let's make our trees instances of a class Show, so we have something to show.
> instance (Show vtype, BBTree Leaf vtype height Nil Nil) =>
> Show (Leaf vtype height Nil Nil)
> where
> show = show . value
> instance (Show vtype, BBTree Node vtype height lchildtype rchildtype,
> Show lchildtype, Show rchildtype)
> =>
> Show (Node vtype height lchildtype rchildtype)
> where
> show x = "[" ++ (show $ value x) ++ ": " ++ (show $ left x)
> ++ "," ++ (show $ right x) ++ "]\n"
Examples follow. They show off the true polymorphic nature of the trees.
> leaf1 = make_leaf 'a'
> leaf2 = make_leaf (1::Int)
> tree1 = make_node "b" leaf1 leaf2
> tree2 = make_node (Just 'a') tree1 leaf1
We can print out each of these trees by 'showing' them, or just by typing
leaf1, tree1, etc. at the GHCi prompt.
However, if we try the following,
*> tree3 = make_node False tree2 leaf1
we get a type error at compile time:
No instance for (HeightC (Succ (Succ Zero)) Zero ph)
arising from use of `make_node' at stanamically-balanced-trees.lhs:222
In the definition of `tree3': make_node False tree2 leaf1
The error tells us of an attempt to build an unbalanced node, whose
children have heights two and zero.
As we mentioned above, static checks are sometimes too restrictive and
insufficient. For example, suppose we want to write a function that
builds a full binary tree. The first attempt could be as follows:
*> make_bbtree1 n = makeit n 0
*> where
*> makeit 0 counter = make_leaf counter
*> makeit n counter = make_node counter (makeit n' c') (makeit n' (c'+1))
*> where
*> n' = n -1
*> c' = 2*counter
Alas, it does not typecheck. First, we get an error stemming from a
monomorphic restriction. Even if we managed to get around that (as we
do in the following), we would still get an error message about an
infinite type. The compiler cannot tell, statically, that (makeit n'
c') and (makeit n' (c'+1)) construct trees of the same height. Indeed,
the height of the tree is a function of 'n', which is a run-time
value. Clearly, a run-time check is needed.
To defer static checks till the run-time, we need to enclose our types
into an envelope:
> data BW = forall ntype vtype lchildtype rchildtype .
> (Show (ntype vtype BU lchildtype rchildtype),
> BBTree ntype vtype BU lchildtype rchildtype) =>
> BW (ntype vtype BU lchildtype rchildtype)
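> -- The Show constraint is packed into the envelope so that a wrapped
> -- tree can still be shown after its precise type has been forgotten
> -- (cf. bwshow below).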
Now we can write our function as follows:
> make_bbtree2 n = makeit n 0
> where
> makeit 0 counter = BW $ reheight $ make_leaf counter
> makeit n counter = case (wlk,wrk) of
> (BW lk, BW rk) -> BW $ make_node counter lk rk
> where
> wlk = (makeit n' c')
> wrk = (makeit n' (c'+1))
> n' = n -1
> c' = 2*counter
Now it types, and actually works:
> bwshow (BW t) = show t
> bwh (BW t) = height t
we can try "bwshow $ make_bbtree2 1" and "bwshow $ make_bbtree2 5"
We should point out that the invocation "make_node counter lk rk" has
exactly the same form as make_node in tree2 above. However, the latter
does a static check whereas the former checks the heights dynamically.
To see that the dynamic checking really works, suppose we wrote
make_bbtree as follows (with a small typo):
> make_bbtree3 n = makeit n 0
> where
> makeit 0 counter = BW $ reheight $ make_leaf counter
> makeit n counter = case (wlk,wrk) of
> (BW lk, BW rk) -> BW $ make_node counter lk rk
> where
> wlk = (makeit n' c')
> wrk = (makeit 0 (c'+1))
> n' = n -1
> c' = 2*counter
If we try "bwshow$ make_bbtree3 3", we get a run-time exception alerting us
of a violation of the balancing condition. | {"url":"http://www.haskell.org/pipermail/haskell/2003-April/011621.html","timestamp":"2014-04-19T15:36:28Z","content_type":null,"content_length":"15181","record_id":"<urn:uuid:5cfeeab2-4b36-4c34-8f91-1286573fc6ad>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
How would I show that a circle and a circle with a line bisecting it (with the end-points of the line touching the circumference) aren't homeomorphic?
I know I would have to use the 'cut-point principle' which states that homeomorphic sets have the same number of n-points for each n, but I don't know how to find the n-points....
For any circle (a simple closed curve) removing any two points leaves exactly two connected subsets.
We know that a simple closed curve is the union of two arcs the intersection of which is the set of their endpoints.
Is that true of the second set in this problem? What about the endpoints of the chord?
In a circle, any single point will disconnect the set right... Why specify 2?
I don't understand how to find the n-points?
For example, I know that in $S^1$, the unit circle, every point is a 1-point, and [0,1)
is not homeomorphic to $S^1$, since [0,1) has some 2-points whereas $S^1$ doesn't... can anyone explain why!?
[QUOTE=bigdoggy;396352]In a circle, any single point will disconnect the set right... Why specify 2?[/QUOTE]
How in the world could a single point's removal disconnect a circle?
How much do you know about simple closed curves and about arcs?
How in the world could a single point's removal disconnect a circle?
I was told that every point of a circle is a 1-point, which I believed implied that removing a single point would disconnect it...?
I'm trying to understand how to find the n-points in general so I can then decide if two sets are homeomorphic....
Either you are being given totally false and incompetent instruction or you do not understand.
The whole area of ‘cut-points’ as applied to arcs and simple closed curves is such a rich area of study.
It is a pity that you seem to be so confused by this topic.
I don't see the value in that comment Plato.
Surely it is apparent that I'm failing to 'connect the dots' here, hence I was hoping for some pointers!!
I think from your definition of cut points a "1-point" means you are left with 1 connected path component. So of course, on the unit circle all points are 1-points.
To show 2 sets X and Y aren't homeomorphic with cut points, either show that X (or Y) contains an n-point that isn't in Y (or X), or show that X contains more n-points than Y (for a specified n)
removing one point from a circle allows you to "open it up" to be a line segment but that is still connected. Removing another point from that line segment makes it unconnected.
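Concretely, with that definition: removing any single point leaves both spaces path-connected, so 1-points alone cannot tell them apart. But consider the two points where the chord meets the circle. Removing both from the circle-with-chord leaves three path components (the two open arcs and the open chord), whereas removing any two points from a plain circle always leaves exactly two. Hence the two spaces cannot be homeomorphic.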
What, exactly, is the definition of "n-points"?
Say the space X is path-connected; a point $x \in X$ is called an n-point if $X-\{x\}$ has n path-components...
That is a completely understandable definition.
No one could argue with its clarity.
BUT, under that definition a 1-point is not a cut-point.
Authors are not free to co-opt definitions.
That is the basis of the arguments in this thread.
The idea of a ‘cut-point’ goes back as far as a 1909 paper by R.L. Moore, the founder of point set topology.
Again a “1-point is not a cut-point.”
Mathematics for Electricity and Electronics
ISBN: 9780766827011 | 0766827011
Edition: 2nd
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 8/7/2001
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/mathematics-electricity-electronics-2nd/bk/9780766827011","timestamp":"2014-04-20T07:36:16Z","content_type":null,"content_length":"44757","record_id":"<urn:uuid:053afebe-bf86-4edf-ad7f-eb964940823d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mirada, CA Algebra Tutor
Find a Mirada, CA Algebra Tutor
Hello! My name is David, and I hope to be the tutor you are looking for. I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus,
Calculus, Probability and Statistics.
14 Subjects: including algebra 2, algebra 1, calculus, physics
...If you need help in these subjects, feel free to contact me. My classes are customized based on the students' needs. I make certain that I prepare in advance before I go to class.
7 Subjects: including algebra 2, chemistry, geometry, algebra 1
...Have tutored a couple high school students in geometry. Very qualified, been working in group settings at a middle school and with middle school students one-on-one for almost two years.
Majored in English in college and graduated with a BA in 2009.
12 Subjects: including algebra 1, algebra 2, English, reading
...In my workplace, I always wanted to automate my production steps (often repetitive tasks that take lots of time) so over the years I became very efficient in CAD, as a user also programming
with Lisp, VBA, then VisualLisp, VB and then .Net for past 8+ years. For last 10 years, I mostly manage CA...
21 Subjects: including algebra 2, algebra 1, reading, English
...I graduated with a Bachelor and Masters degree in Mathematics in the Philippines. I have experience tutoring students in advanced mathematics. I helped my students with their assignments,
projects, and examinations.
3 Subjects: including algebra 2, algebra 1, statistics | {"url":"http://www.purplemath.com/Mirada_CA_Algebra_tutors.php","timestamp":"2014-04-18T11:39:06Z","content_type":null,"content_length":"23591","record_id":"<urn:uuid:777409e6-9c1b-4976-a6fe-46225433b3a1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Problems in Engineering
Volume 2008 (2008), Article ID 678307, 19 pages
Research Article
Effect of Imperfections and Damping on the Type of Nonlinearity of Circular Plates and Shallow Spherical Shells
^1ENSTA-UME, Unité de Mécanique, Chemin de la Hunière, 91761 Palaiseau Cedex, France
^2CNAM-LMSSC, Laboratoire de Mécanique des Structures et Systèmes Couplés, 2 rue Conté, 75003 Paris, France
Received 28 November 2007; Accepted 20 February 2008
Academic Editor: Paulo Gonçalves
Copyright © 2008 Cyril Touzé et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The effect of geometric imperfections and viscous damping on the type of nonlinearity (i.e., the hardening or softening behaviour) of circular plates and shallow spherical shells with free edge is
here investigated. The Von Kármán large-deflection theory is used to derive the continuous models. Then, nonlinear normal modes (NNMs) are used for predicting with accuracy the coefficient, the sign
of which determines the hardening or softening behaviour of the structure. The effect of geometric imperfections, unavoidable in real systems, is studied by adding a static initial component in the
deflection of a circular plate. Axisymmetric as well as asymmetric imperfections are investigated, and their effect on the type of nonlinearity of the modes of an imperfect plate is documented.
Transitions from hardening to softening behaviour are predicted quantitatively for imperfections having the shapes of eigenmodes of a perfect plate. The role of 2:1 internal resonance in this process
is underlined. When damping is included in the calculation, it is found that the softening behaviour is generally favoured, but its effect remains limited.
1. Introduction
When continuous structures such as plates and shells undergo large amplitude motions, the geometrical nonlinearity leads to a dependence of free oscillation frequencies on vibration amplitude. The
type of nonlinearity describes this dependency, which can be of the hardening type (the frequency increases with amplitude), or of the softening type (the frequency decreases). A large amount of
literature is devoted to predicting this type of nonlinearity for continuous structures, and especially for structures with an initial curvature such as arches or shells because the presence of the
quadratic nonlinearity makes the problem more difficult to solve. On the other hand, the hardening behaviour of flat structures such as beams and plates is a clearly established fact, on the
theoretical as well as the experimental viewpoint, (see, e.g., [1–6]). The presence of the quadratic nonlinearity may change the behaviour from hardening to softening type, depending on the relative
magnitude of quadratic and cubic nonlinear terms.
Among the available studies concerned with this subject, quite all of them that were published before 1992 could not be considered as definitive since they generally restrict to the case of a
single-mode vibration through Galerkin method, see, for example, [7–9] for shallow spherical shells, or [10] for imperfect circular plates. Unfortunately, it has been shown by a number of more recent
investigations that too severe truncations lead to erroneous results in the prediction of the type of nonlinearity, see, for example, [11, 12], or the abundant literature on circular cylindrical
shells, where the investigators faced this problem for a long time [13–18]. As a consequence, a large number of modes must mandatory be kept in the truncation of the partial differential equations
(PDEs) of motion, in order to accurately predict the type of nonlinearity. Recent papers are now available where a reliable prediction is realized, for the case of buckled beams [19], circular
cylindrical shells [20], suspended cables [21], and shallow spherical shells [22].
However, these last studies are restricted to the case of perfect structures, and the damping is neglected in the computations; and both of them have an influence on the type of nonlinearity, so that
a complete and thorough theoretical study that could be applied to real structures need to address the effect of imperfections and damping. The geometric imperfections have a first-order effect on
the linear as well as the nonlinear characteristics of structures. A large amount of studies are available, where the effect of imperfections on the eigenfrequencies and on the buckling loads are
generally addressed, see, for example, [23–28] for the case of circular cylindrical shells, [29] for shallow cylindrical panels, and [30] for the case of rectangular plates. Nonlinear
frequency-responses curves are shown in [31, 32] for clamped circular plates, [33–35] for rectangular plates, [36] for circular cylindrical shells, and [37] for circular cylindrical panels. Even
though the presence of geometric imperfection has been recognized as a major factor that could make the hardening behaviour of the flat plate turn to softening behaviour for an imperfection amplitude
of a fraction of the plate thickness [10, 38], a quantitative study, which is not restricted to axisymmetric modes and that does not perform too crude truncations in the Galerkin expansion, is still
To the authors' knowledge, the role of the damping in the prediction of the type of nonlinearity has been only recently detected as an important factor that could change the behaviour from hardening
to softening type [39]. In particular, it is shown in [39] on a simple two degrees-of-freedom (dofs) system, that the damping generally favours the softening behaviour. The aim of the present study
is thus to apply this theoretical result to the practical case of a damped shallow spherical shell, so as to quantitatively assess the effect of structural damping of the viscous type on the type of
nonlinearity of a two-dimensional vibrating structure.
The article is organized as follows. In Section 2, local equations and boundary conditions for an imperfect circular plate with free edge are given. Then the method used for computing the type of
nonlinearity is explained. Section 3 investigates how typical imperfections may turn the hardening behaviour of flat plates to softening behaviour. Quantitative results are given for selected
imperfections having the shape of eigenmodes of the perfect structure. Section 4 is devoted to the effect of viscous damping. The particular case of a spherical imperfection is selected, and the
results are shown for three different damping dependences on frequency.
2. Theoretical Formulation
2.1. Local Equations and Boundary Conditions
A thin plate of diameter and uniform thickness is considered, with , and free-edge boundary condition. The local equations governing the large-amplitude displacement of a perfect plate, assuming the
nonlinear Von Kármán strain-displacement relationship and neglecting in-plane inertia, are given, for example, in [5, 40]. An initial imperfection, denoted by and associated with zero initial stresses
is also considered, see Figure 1. The shape of this imperfection is arbitrary, and its amplitude is small compared to the diameter (shallow assumption): . The local equations for an imperfect plate
deduce from the perfect case [18, 41, 42]. With being the transverse displacement from the imperfect position at rest, the equations of motion write where is the flexural rigidity, stands for the
laplacian operator, accounts for structural damping of the viscous type, is the Airy stress function, and is a bilinear operator, whose expression in polar coordinates reads
The equations are then written with nondimensional variables, by introducingAs nondimensional equations will be used in the remainder of the study, overbars are now omitted in order to write the
dimensionless form of the equations of motion where .
The boundary conditions for the case of a free edge write, in nondimensional form [5]
In order to discretize the PDEs, a Galerkin procedure is used. As the eigenmodes cannot be computed analytically because the shape of the imperfection is arbitrary, the eigenmodes of the perfect
plate are selected as basis functions. Analytical expressions of involve Bessel functions and can be found in [5]. The unknown displacement is expanded withwhere the time functions are now the
unknowns. In this expression, the subscript refers to a specific mode of the perfect plate, defined by a couple , where is the number of nodal diameters and the number of nodal circles. If , a binary
variable is added, indicating the preferential configuration considered (sine or cosine companion mode). Inserting the expansion (2.6) into (2.4a) and (2.4b) and using the orthogonality properties of
the expansion functions, the dynamical equations are found to be, for all Linear coupling terms between the oscillator equations are present, as the natural modes have not been used for discretizing
the PDEs. Analytical expressions of the coupling coefficients are given in [42]. The generic viscous damping term of (2.4a) has been specialized in the discretized equations so as to handle the more
general case of a modal viscous damping term of the form , where is the damping factor and the eigenfrequency of mode . On the other hand, external forces have been cancelled, as the remainder of the
study will consider free vibrations only.
In order to work with diagonalized linear parts, the matrix of eigenvectors of the linear part is numerically computed. A linear change of coordinates is processed, , where is, by definition, the
vector of modal coordinates, and is the number of expansion function kept in practical application of the Galerkin's method. Application of makes the linear part diagonal, so that the discretized
equations of motion finally writes,
The temporal equations (2.8) describe the dynamics of an imperfect circular plate. The type of nonlinearity can be inferred from these equations. Unfortunately, too severe truncations in (2.8), for
example, by keeping only one degree-of-freedom (dof) when studying the nonlinear behaviour of the th mode, lead to incorrect predictions. Nonlinear normal modes (NNMs) offer a clean framework for
deriving a single oscillator equation capturing the correct type of nonlinearity [12]. This is recalled in Section 3, where the analytical expression of the coefficient dictating the type of
nonlinearity is given.
2.2. Type of Nonlinearity
Nonlinear oscillators differ from linear ones by the frequency dependence on vibration amplitude. The type of nonlinearity defines the behaviour, which can be of the hardening or the softening type.
As shown in [12], NNMs provide an efficient framework for properly truncating nonlinear oscillator equations like (2.8) and predict the type of nonlinearity (hardening or softening behaviour). The
method has already been successfully applied to the case of undamped shallow spherical shells in [22]. The main idea is to derive a nonlinear change of coordinates, allowing one to pass from the
modal coordinates to new-defined normal coordinates , describing the motions in an invariant-based span of the phase space. The nonlinear change of coordinates is computed from Poincaré and
Poincaré-Dulac's theorems, by successive elimination of nonessential coupling terms in the nonlinear oscillator equations. Formally, it readsA third-order approximation of the complete change of
coordinates is thus computed. The analytical expressions of the introduced coefficients and are not given here for the sake of brevity. The interested reader may find them in [12] for the undamped
case, and in [39] for the damped case.
Once the nonlinear change of coordinates operated, proper truncations can be realized. In particular, keeping only the normal coordinates allows prediction of the correct type of nonlinearity for the
th mode. The dynamics onto the th NNM readswhere , , and are new coefficients coming from the change of coordinates. Their expressions involve the quadratic coefficients only, together with some of
the transformation coefficients, from (2.9a) and (2.9b) [39]:
The asymptotic third-order approximation of the dynamics onto the th NNM given by (2.10) allows one to accurately predict the type of nonlinearity of mode . A first-order perturbative development
from (2.10) gives the dependence of the nonlinear oscillation frequency on the amplitude of vibration by the relationship:where is the natural angular frequency. In this expression, is the
coefficient governing the type of nonlinearity. If , then hardening behaviour occurs, whereas implies softening behaviour. The analytical expression for writes [12, 22]
Finally, the method used for deriving the type of nonlinearity can be summarized as follows. For a geometric imperfection of a given amplitude, the discretization leading to the nonlinear oscillator
(2.8) is first computed. The numerical effort associated to this operation is the most important but remains acceptable on a standard computer. Then the nonlinear change of coordinates is computed,
which allows derivation of the and terms occuring in (2.13), the sign of which determines the type of nonlinearity. Numerical results are given in Section 3 for specific imperfections.
3. Effect of Imperfections
This section is devoted to numerical results about the effect of typical imperfections on the type of nonlinearity of imperfect plates. Two typical imperfections are selected. The first one is
axisymmetric and has the shape of mode (0,1), the second one has the shape of the first asymmetric mode (2,0). Consequently, damping is not considered, so that in each equation we have: . The study
of the effect of damping will be done separately and is postponed to Section 4.
3.1. Axisymmetric Imperfection
In this section, the particular case of an axisymmetric imperfection having the shape of mode (0,1) (i.e., with one nodal circle and no nodal diameter) is considered. The expression of the static
deflection writeswhere is the mode shape, depending only on the radial coordinate as a consequence of axisymmetry, and the considered amplitude. The mode shape depends on Bessel function [5], and is
shown in Figure 2. The eigenmode is normalized so that .
Figure 3 shows the effect of the imperfection on the eigenfrequencies, for an imperfection amplitude from 0 (perfect plate) to 10 h. It is observed that the purely asymmetric modes , having no nodal
circle and nodal diameters, are marginally affected by the axisymmetric imperfection. The computation has been done by keeping 51 basis functions: purely asymmetric modes from (2,0) to (10,0), purely
axisymmetric modes from (0,1) to (0,13); and mixed modes from (1,1) to (6,1), (1,2), (2,2), (3,2) and (1,3). More details and comparisons with a numerical solution based on finite elements are
provided in [42, 43]. The slight dependence of purely asymmetric eigenfrequencies on an axisymmetric imperfection has already been observed in [44] with the case of the shallow spherical shell.
First, the effect of the imperfection on the axisymmetric modes (0,1) and (0,2) is studied. In this case, the problem is fully axisymmetric so that all the truncations can be limited to axisymmetric
modes only, which drastically reduces the numerical burden. The result for mode (0,1) is shown in Figure 4. It is observed that the huge variation of the eigenfrequency with respect to the amplitude
of the imperfection results in a quick turn of the behaviour from the hardening to the softening type, occurring for an imperfection amplitude of = 0.38 h. This small value has a direct implication for
the case of real plates. As the behaviour changes for a fraction of the plate thickness, it should not be surprising to measure a softening behaviour with real plates having small imperfections. This
result can also be compared to an earlier result obtained by Hui [10]. Although Hui did not study free-edge boundary condition, he reported a numerical result for the case of simply supported
boundary conditions, where the behaviour changes for an imperfection amplitude of 0.28 h. The second main observation inferred from Figure 4 is the occurrence of 2:1 internal resonance between
eigenfrequencies, leading to discontinuities in the coefficient dictating the type of nonlinearity. This fact has already been observed and commented for the case of shallow spherical shells in [22].
It has also been observed for buckled beams and suspended cables [19, 21]. This is a small denominator effect typical of internal resonance, that is, when the frequency of the studied mode (0,1)
exactly fulfills the relationship with another axisymmetric mode. 2:1 resonance arises here with mode (0,2) at 1.85 h and with mode (0,3) at 5.66 h. On a practical point of view, one must bear in
mind that when 2:1 internal resonance occurs, single-mode solution does not exist anymore, only coupled solutions are possible. Hence the concept of the type of nonlinearity, intimately associated
with a single dof behaviour, loses its meaning in a narrow interval around the resonance.
The numerical result for mode (0,2) is shown in Figure 5. Once again, the geometric effect is important and leads to a change of behaviour occurring at = 0.75 h, that is, for a small level of
imperfection. 2:1 internal resonance also occurs, thus creating narrow region where hardening behaviour could be observed. This result extends Hui's analysis since only mode (0,1) was studied.
Moreover, as a single-mode truncation was used in [10], 2:1 resonances were missed.
Finally, the effect of the imperfection on asymmetric modes is shown in Figure 6 for modes (2,0) and (4,0). The very slight variation of the eigenfrequencies of these modes versus the axisymmetric
imperfection results in a very slight effect of the geometry. It is observed that before the first 2:1 internal resonance, the type of nonlinearity shows small variations. Hence it is the behaviour
of the other eigenfrequencies and the occurrence of 2:1 internal resonance that makes, in these cases, the behaviour turn from hardening to softening behaviour. For mode (2,0), this occurs for an
imperfection amplitude of = 0.44 h, where 2:1 resonance with mode (0,1) is observed. For mode (4,0), the first 2:1 resonance occurs with mode (0,2) at = 1.39 h, but do not change the behaviour. It is
the resonance with mode (0,1) at = 4 h which makes the behaviour turn from hardening to softening.
These results corroborate those obtained on shallow spherical shells [22]. The fundamental importance of axisymmetric modes in the study of asymmetric ones is confirmed, showing once again that
reduction to single mode has no chance to deliver correct results. The behaviour of purely asymmetric modes is found to be of the hardening type until the 2:1 internal resonance with mode (0,1)
occurs. However, a specificity of mode (2,0) with regard to all the other purely asymmetric modes is that after this resonance, hardening behaviour (though with a very small value of ) is observed.
This was also the case for shallow spherical shells [22]. Finally, for very large values of the imperfection, the behaviour tends to be neutral.
3.2. Asymmetric Imperfection
In this section, the effect of an imperfection having the shape of mode (2,0) is studied. Due to the loss of symmetry, degenerated modes are awaited to cease to exist : the equal eigenfrequencies of
the sine and cosine configuration of degenerated modes split. In the following, distinction is made systematically between the sine or cosine configuration of companion modes, for example, mode
(2,0,C) (resp., (2,0,S)) refers to the cosine (resp., sine) configuration. More precisely, the imperfection has the shape of (2,0,C) and is shown in Figure 7.
The behaviour of the eigenfrequencies with the imperfection is shown in Figure 8. As expected, the variation of the eigenfrequency corresponding to (2,0,C) is huge, whereas (2,0,S) keep quite a
constant value. The symmetry is not completely broken. One can show that only eigenmodes of the type split. On the other hand, as shown in Figure 8, modes (3,0), (5,0), and (1,1) are still
The numerical results for type of nonlinearity relative to the two configurations (2,0,C) and (2,0,S) are shown in Figure 9. The natural frequency of mode (2,0,C) undergoes a huge variation, which
results in a quick change of behaviour, occurring at 0.54 h. Then, a 2:1 internal resonance with (0,2) is noted, but without a noticeable change in the type of nonlinearity, as the interval where the
discontinuity is present is very narrow. In this case, the behaviour of looks like the one observed in the preceding case, that is, the variation of versus an imperfection having the same shape. On the
other hand, the eigenfrequency of mode (2,0,S) remains quite unchanged, so that the behaviour of is not much affected by the imperfection until the 2:1 internal resonance is encountered. In that
case, the resonance occurs with the other configuration, that is, mode (2,0,C).
Finally, the results for the first two axisymmetric modes (0,1) and (0,2) are shown in Figure 10. Mode (0,1) shows a very slight variation of its eigenfrequency with respect to the asymmetric
imperfection (2,0,C). Consequently, the type of nonlinearity is not much affected, until the eigenfrequency of (2,0,C) comes to two times : 2:1 internal resonance occurs, and the behaviour becomes
softening. On the other hand, the eigenfrequency of (0,2) is more affected by the imperfection. This result in an important decrease of while still remaining positive. A 2:1 internal resonance with
(0,3) is encountered for 3.51 h, and two others 2:1 resonance, with (0,4) and (0,5), occur around 8 h. However, the interval on which the type of nonlinearity changes its sign is so narrow that it
can be neglected. The behaviour is thus mainly of the hardening type for (0,2).
4. Effect of Damping
In this section, the effect of viscous damping on the type of nonlinearity is addressed. The particular case of the shallow spherical shell is selected to establish the results. The equations of
motion are first briefly recalled. Then specific cases of damping are considered, hence complementing the results of [22], that were limited to the undamped shell.
4.1. The Shallow Spherical Shell Equations
The local equations of motions for the shallow spherical shell can be obtained directly, see [44] for a thorough presentation. They can also be obtained from (2.4a) and (2.4b), by selecting an
imperfection having a spherical shape, as shown in Figure 1(c), see [42]. With the radius of curvature of the spherical shell ( to fulfill the shallow assumption), the local equations write [44]
where the aspect ratio of the shell has been introduced:and . The boundary conditions for the case of the spherical shell with free edge write exactly as in the case of the imperfect circular plates
so that (2.5a), (2.5b) and (2.5c) are still fulfilled [42, 44]. A peculiarity of the spherical shell is that all the involved quantities, linear (eigenfrequencies and mode shapes), and nonlinear
(coupling coefficients and type of nonlinearity) only depends on , which is the only free-geometric parameter. Hence all the results will be presented as functions of .
A Galerkin expansion is used for discretizing the PDEs of motion. As the eigenmodes are known analytically [44], they are used for expanding the unknown transverse displacement:The modal
displacements are the unknowns, and their dynamics are governed by, The analytical expressions for the quadratic and cubic coupling coefficients involve integrals of products of eigenmodes on the
surface, they can be found in [22, 44]. As in Section 3, a modal viscous damping term of the form is considered, whereas external forces have been cancelled as only free responses are studied.
The type of nonlinearity can be inferred from (4.4) by using the NNM method. The results for an undamped shell have already been computed and are presented in [22]. However, an extension of the
NNM-method, taking into account the damping term, has been proposed in [39]. Amongst other things, it has been shown on a simple two dofs system of coupled oscillators, that the type of nonlinearity
depends on the damping. The aim of this section is thus to complement the results presented in [22] for documenting the dependence of a shell on viscous damping and for assessing its effect.
4.2. Numerical Results
Three cases are selected in order to derive results for a variety of damping behaviours:
Case 1. For all
Case 2. For all
Case 3. For all
In the above cases, is a constant value, ranging from 0 to 0.3. Case 1 corresponds to a decay factor () that is independent from the frequency, that is, with a constant value for any mode. With a
small value of , it may model the low-frequency (i.e., below the critical frequency) behaviour of thin metallic structures such as aluminium plates [45, 46]. Case 2 describes a decay factor that is
linear with the frequency, and may model, for instance, damped structures as glass plates in the low-frequency range [45]. Finally, Case 3 accounts for a strongly damped structure, with a center
manifold limited to a few modes.
The effect of increasing damping is shown for modes (0,1) and (4,0), for Case 1 in Figure 11, Case 2 in Figure 12, and Case 3 in Figure 13. Mode (0,1) undergoes a rapid change of behaviour: the
transition from hardening to softening type nonlinearity occurs at = 1.93. Then 2:1 internal resonance with mode (0,2) occurs at = 36, but the behaviour remains of the softening type. Mode (4,0)
displays a hardening behaviour until the 2:1 resonance with mode (0,1) at . The first resonance with (0,2) at does not change the behaviour on a large interval. Adding the damping of Case 1 shows
that the discontinuity occurring at 2:1 internal resonance is smoothened. However, it happens for a quite large amount of damping in the structure. Damping values of 0, 1e-4, 1e-3, and 1e-2 have been
tested and give exactly the same behaviour so that only one curve is visible in Figure 11. Large values of the damping term , namely, 0.1 and 0.3 (which correspond to strongly damped structures) must
be selected to see the discontinuity smoothened. Moreover, outside the narrow intervals where 2:1 resonance occurs, the effect of damping is not visible. As a conclusion for Case 1, it appears that
this kind of damping has a really marginal effect on the type of nonlinearity, so that undamped results can be estimated as reliable for lightly damped structures with modal damping factor below 0.1.
Case 2 corresponds to a more damped structure than Case 1. However, it is observed in Figure 12 that the discontinuity is not smoothened at the 2:1 internal resonance. Inspecting back the analytical
results shows that this is a natural consequence of the expression of the coefficients of the nonlinear change of coordinates for asymptotic NNMs. When the specific Case of constant damping factors
is selected, small denominators remain present. On the other hand, outside the regions of 2:1 resonance, the effect of damping is pronounced and enhances the softening behaviour. But once again, very
large values of damping factors such as 0.3 must be reached to see a prominent influence.
Finally, Case 3 depicts the case of a rapidly increasing decay factor with respect to the frequency. As the overall damping in the structure is thus larger, smaller values of have been selected,
namely, 1e-4, 1e-3, and 1e-2. = 1e-4 gives quite coincident results with = 0. But from = 1e-3, the effect of the damping is very important: the discontinuities are smoothened, except the larger one
occurring for mode (4,0) with mode (0,1). For = 1e-2, 2:1 resonance are not visible anymore. A particular result with this value is for mode (4,0): the smoothening effect is so important that the
nonlinearity remains of the hardening type. Finally, the fact that the damping generally favours the softening behaviour cannot be declared as a general rule, as one counterexample has been exhibited
here. From these results, it can be inferred that the damping has little incidence on the type of nonlinearity for thin structures, until very large values are attained. It is observed that the
damping generally favours the softening behaviour, but this rule is not true in general. In particular when the transition from hardening to softening type nonlinearity is due to a 2:1 internal
resonance and is not the direct effect of the change of geometry, a large value of damping may favours hardening behaviour, as observed here for mode (4,0) in Case 3.
5. Conclusion
The effect of geometric imperfections on the hardening/softening behaviour of circular plates with a free edge has been studied. Thanks to the NNMs, quantitative results for the transition from
hardening to softening behaviour has been documented, for a number of modes and for two typical imperfections. Two general rules have been observed from the numerical results: for modes which
eigenfrequency strongly depends on the imperfection, the type of nonlinearity changes rapidly, and softening behaviour occurs for a very small imperfection with an amplitude being a fraction of the
plate thickness. On the other hand, some eigenfrequencies show a slight dependence with the considered imperfection. For these, 2:1 internal resonances are the main factor for changing the type of
nonlinearity. In a second part of the paper, the effect of viscous damping on the type of nonlinearity of shallow spherical shells has been studied. It has been shown quantitatively that this effect
is slight for usual damping values encountered in thin structures.
1. S. A. Tobias, “Free undamped nonlinear vibrations of imperfect circular disks,” Proceedings of the Institution of Mechanical Engineers, vol. 171, pp. 691–715, 1957. View at Publisher · View at
Google Scholar
2. N. Yamaki, “Influence of large amplitudes on flexural vibrations of elastic plates,” Zeitschrift für Angewandte Mathematik und Mechanik, vol. 41, no. 12, pp. 501–510, 1961. View at Publisher ·
View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
3. K. A. V. Pandalai and M. Sathyamoorthy, “On the modal equations of large amplitude flexural vibration of beams, plates, rings and shells,” International Journal of Non-Linear Mechanics, vol. 8,
no. 3, pp. 213–218, 1973. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
4. S. Sridhar, D. T. Mook, and A. H. Nayfeh, “Nonlinear resonances in the forced responses of plates—I: symmetric responses of circular plates,” Journal of Sound and Vibration, vol. 41, no. 3, pp.
359–373, 1975. View at Publisher · View at Google Scholar
5. C. Touzé, O. Thomas, and A. Chaigne, “Asymmetric nonlinear forced vibrations of free-edge circular plates—I: theory,” Journal of Sound and Vibration, vol. 258, no. 4, pp. 649–676, 2002. View at
Publisher · View at Google Scholar
6. O. Thomas, C. Touzé, and A. Chaigne, “Asymmetric nonlinear forced vibrations of free-edge circular plates—II: experiments,” Journal of Sound and Vibration, vol. 265, no. 5, pp. 1075–1101, 2003.
View at Publisher · View at Google Scholar
7. P. L. Grossman, B. Koplik, and Y.-Y. Yu, “Nonlinear vibrations of shallow spherical shells,” Journal of Applied Mechanics, vol. 36, no. 3, pp. 451–458, 1969. View at Zentralblatt MATH
8. D. Hui, “Large-amplitude vibrations of geometrically imperfect shallow spherical shells with structural damping,” AIAA Journal, vol. 21, no. 12, pp. 1736–1741, 1983. View at Zentralblatt MATH
9. K. Yasuda and G. Kushida, “Nonlinear forced oscillations of a shallow spherical shell,” Bulletin of the Japan Society of Mechanical Engineers, vol. 27, no. 232, pp. 2233–2240, 1984. View at
10. D. Hui, “Large-amplitude axisymmetric vibrations of geometrically imperfect circular plates,” Journal of Sound and Vibration, vol. 91, no. 2, pp. 239–246, 1983. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH
11. A. H. Nayfeh, J. F. Nayfeh, and D. T. Mook, “On methods for continuous systems with quadratic and cubic nonlinearities,” Nonlinear Dynamics, vol. 3, no. 2, pp. 145–162, 1992. View at Publisher ·
View at Google Scholar
12. C. Touzé, O. Thomas, and A. Chaigne, “Hardening/softening behaviour in nonlinear oscillations of structural systems using nonlinear normal modes,” Journal of Sound and Vibration, vol. 273, no.
1-2, pp. 77–101, 2004. View at Publisher · View at Google Scholar
13. M. Amabili, F. Pellicano, and M. P. Païdoussis, “Nonlinear vibrations of simply supported, circular cylindrical shells, coupled to quiescent fluid,” Journal of Fluids and Structures, vol. 12, no.
7, pp. 883–918, 1998. View at Publisher · View at Google Scholar
14. E. H. Dowell, “Comments on the nonlinear vibrations of cylindrical shells,” Journal of Fluids and Structures, vol. 12, no. 8, pp. 1087–1089, 1998. View at Publisher · View at Google Scholar
15. M. Amabili, F. Pellicano, and M. P. Païdoussi, “Further comments on nonlinear vibrations of shells,” Journal of Fluids and Structures, vol. 13, no. 1, pp. 159–160, 1999. View at Publisher · View
at Google Scholar
16. D. A. Evensen, “Nonlinear vibrations of cylindrical shells—logical rationale,” Journal of Fluids and Structures, vol. 13, no. 1, pp. 161–164, 1999. View at Publisher · View at Google Scholar
17. M. Amabili and M. P. Païdoussis, “Review of studies on geometrically nonlinear vibrations and dynamics of circular cylindrical shells and panels, with and without fluid-structure interaction,”
Applied Mechanics Reviews, vol. 56, no. 4, pp. 349–356, 2003. View at Publisher · View at Google Scholar
18. M. Amabili, Nonlinear Vibrations and Stability of Shells and Plates, Cambridge University Press, New York, NY, USA, 2008.
19. G. Rega, W. Lacarbonara, and A. H. Nayfeh, “Reduction methods for nonlinear vibrations of spatially continuous systems with initial curvature,” in IUTAM Symposium on Recent Developments in
Non-linear Oscillations of Mechanical Systems (Hanoi, 1999), vol. 77 of Solid Mech. Appl., pp. 235–246, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000. View at Zentralblatt MATH ·
View at MathSciNet
20. F. Pellicano, M. Amabili, and M. P. Païdoussis, “Effect of the geometry on the nonlinear vibration of circular cylindrical shells,” International Journal of Non-Linear Mechanics, vol. 37, no. 7,
pp. 1181–1198, 2002. View at Publisher · View at Google Scholar
21. H. N. Arafat and A. H. Nayfeh, “Nonlinear responses of suspended cables to primary resonance excitations,” Journal of Sound and Vibration, vol. 266, no. 2, pp. 325–354, 2003. View at Publisher ·
View at Google Scholar
22. C. Touzé and O. Thomas, “Nonlinear behaviour of free-edge shallow spherical shells: effect of the geometry,” International Journal of Non-Linear Mechanics, vol. 41, no. 5, pp. 678–692, 2006. View
at Publisher · View at Google Scholar
23. A. Rosen and J. Singer, “Effect of axisymmetric imperfections on the vibrations of cylindrical shells under axial compression,” AIAA Journal, vol. 12, no. 7, pp. 995–997, 1974.
24. D. Hui and A. W. Leissa, “Effects of uni-directional geometric imperfections on vibrations of pressurized shallow spherical shells,” International Journal of Non-Linear Mechanics, vol. 18, no. 4,
pp. 279–285, 1983. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
25. P. B. Goncalves, “Axisymmetric vibrations of imperfect shallow spherical caps under pressure loading,” Journal of Sound and Vibration, vol. 174, no. 2, pp. 249–260, 1994. View at Publisher · View
at Google Scholar
26. M. Amabili, “A comparison of shell theories for large-amplitude vibrations of circular cylindrical shells: Lagrangian approach,” Journal of Sound and Vibration, vol. 264, no. 5, pp. 1091–1125,
2003. View at Publisher · View at Google Scholar
27. V. D. Kubenko and P. S. Koval'chuk, “Influence of initial geometric imperfections on the vibrations and dynamic stability of elastic shells,” International Applied Mechanics, vol. 40, no. 8, pp.
847–877, 2004. View at Publisher · View at Google Scholar
28. E. L. Jansen, “The effect of geometric imperfections on the vibrations of anisotropic cylindrical shells,” Thin-Walled Structures, vol. 45, no. 3, pp. 274–282, 2007. View at Publisher · View at
Google Scholar
29. C.-Y. Chia, “Nonlinear free vibration and postbuckling of symmetrically laminated orthotropic imperfect shallow cylindrical panels with two adjacent edges simply supported and the other edges
clamped,” International Journal of Solids and Structures, vol. 23, no. 8, pp. 1123–1132, 1987. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
30. D. Hui and A. W. Leissa, “Effects of geometric imperfections on vibrations of biaxially compressed rectangular flat plates,” Journal of Applied Mechanics, vol. 50, no. 4, pp. 750–756, 1983. View
at Zentralblatt MATH
31. N. Yamaki, K. Otomo, and M. Chiba, “Nonlinear vibrations of a clamped circular plate with initial deflection and initial edge displacement—I: theory,” Journal of Sound and Vibration, vol. 79, no.
1, pp. 23–42, 1981. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
32. N. Yamaki, K. Otomo, and M. Chiba, “Nonlinear vibrations of a clamped circular plate with initial deflection and initial edge displacement—II: experiment,” Journal of Sound and Vibration, vol.
79, no. 1, pp. 43–59, 1981. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
33. N. Yamaki and M. Chiba, “Nonlinear vibrations of a clamped rectangular plate with initial deflection and initial edge displacement—I: theory,” Thin-Walled Structures, vol. 1, no. 1, pp. 3–29,
1983. View at Publisher · View at Google Scholar
34. N. Yamaki, K. Otomo, and M. Chiba, “Nonlinear vibrations of a clamped rectangular plate with initial deflection and initial edge displacement—II: experiment,” Thin-Walled Structures, vol. 1, no.
1, pp. 101–119, 1983. View at Publisher · View at Google Scholar
35. M. Amabili, “Theory and experiments for large-amplitude vibrations of rectangular plates with geometric imperfections,” Journal of Sound and Vibration, vol. 291, no. 3–5, pp. 539–565, 2006. View
at Publisher · View at Google Scholar
36. M. Amabili, “Theory and experiments for large-amplitude vibrations of empty and fluid-filled circular cylindrical shells with imperfections,” Journal of Sound and Vibration, vol. 262, no. 4, pp.
921–975, 2003. View at Publisher · View at Google Scholar
37. M. Amabili, “Theory and experiments for large-amplitude vibrations of circular cylindrical panels with geometric imperfections,” Journal of Sound and Vibration, vol. 298, no. 1-2, pp. 43–72,
2006. View at Publisher · View at Google Scholar
38. C. C. Lin and L. W. Chen, “Large-amplitude vibration of an initially imperfect moderately thick plate,” Journal of Sound and Vibration, vol. 135, no. 2, pp. 213–224, 1989. View at Publisher · View
at Google Scholar
39. C. Touzé and M. Amabili, “Nonlinear normal modes for damped geometrically nonlinear systems: application to reduced-order modelling of harmonically forced structures,” Journal of Sound and
Vibration, vol. 298, no. 4-5, pp. 958–981, 2006. View at Publisher · View at Google Scholar
40. G. J. Efstathiades, “A new approach to the large-deflection vibrations of imperfect circular disks using Galerkin's procedure,” Journal of Sound and Vibration, vol. 16, no. 2, pp. 231–253, 1971.
View at Publisher · View at Google Scholar · View at Zentralblatt MATH
41. G. L. Ostiguy and S. Sassi, “Effects of initial geometric imperfections on dynamic behaviour of rectangular plates,” Nonlinear Dynamics, vol. 3, no. 3, pp. 165–181, 1992. View at Publisher · View
at Google Scholar
42. C. Camier, C. Touzé, and O. Thomas, “Nonlinear vibrations of imperfect free-edge circular plates,” submitted to European Journal of Mechanics: A/Solids.
43. C. Camier, C. Touzé, and O. Thomas, “Effet des imperfections géométriques sur les vibrations nonlinéaires de plaques circulaires minces,” in Proceedings of 18 ème Congrès Français de Mécanique,
Grenoble, France, August 2007.
44. O. Thomas, C. Touzé, and A. Chaigne, “Nonlinear vibrations of free-edge thin spherical shells: modal interaction rules and 1:1:2 internal resonance,” International Journal of Solids and
Structures, vol. 42, no. 11-12, pp. 3339–3373, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
45. A. Chaigne and C. Lambourg, “Time-domain simulation of damped impacted plates—I: theory and experiments,” Journal of the Acoustical Society of America, vol. 109, no. 4, pp. 1422–1432, 2001. View
at Publisher · View at Google Scholar
46. M. Amabili, “Nonlinear vibrations of rectangular plates with different boundary conditions: theory and experiments,” Computers and Structures, vol. 82, no. 31-32, pp. 2587–2605, 2004. View at
Publisher · View at Google Scholar | {"url":"http://www.hindawi.com/journals/mpe/2008/678307/","timestamp":"2014-04-19T15:03:55Z","content_type":null,"content_length":"297071","record_id":"<urn:uuid:b733128b-ffb8-427a-9d29-4c5fa9fff763>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
DSMC (direct simulation Monte Carlo) is an algorithm used for simulating rarefied gas flows.
You define the simulation domain, the body inside this domain, gas flow parameters and several other options. DSMC iteratively models the behaviour of gas molecules according to a time and space decoupling scheme for the Boltzmann equation. The result of the simulation is a field of macroscopic parameters across the simulation domain.
:: Domain          -- Simulation domain.
-> Body            -- Body inside the domain.
-> Flow            -- Gas flow parameters.
-> Time            -- Time step.
-> Bool            -- If true, start with an empty domain; otherwise add the initial particle distribution to the domain.
-> Double          -- Source reservoir extrusion.
-> Double          -- Steadiness epsilon.
-> Int             -- Step count limit in the steady regime.
-> Surface         -- Model for the surface of the body.
-> (Double, Double, Double)  -- Spatial steps in X, Y, Z of the grid used for macroscopic parameter sampling.
-> Int             -- Number of test points used to calculate the volume of every cell with respect to the body; depends on the Knudsen number calculated from the cell size.
-> Int             -- Split the Lagrangian step into this many independent parallel processes.
-> IO (Int, Ensemble, MacroField)
Perform a DSMC simulation, returning the total iteration count, the final particle distribution and the field of averaged macroscopic parameters.
This is an IO action because the system entropy source is polled for seeds.
Random effects
Linear Models
Analyzing data with random effects (Littell, chapter 4)
This site supports tutorial instruction on linear models, based on Littell, et al. (2002), SAS for Linear Models. The materials in this site are appropriate for someone who has a reasonable command of basic linear regression. In a basic regression course, it is usually assumed that we are interested in modeling effects based on independent and identically distributed observations (e.g., a single cross section with simple random sampling). In the materials in this site, we expand the application of the linear model to the analysis of data arising from more complex design and sampling scenarios (e.g., experimental and quasi-experimental designs, and cases with nested or clustered samples). The site is provided by Robert Hanneman, in the Department of Sociology at the University of California, Riverside. Your comments and suggestions are welcome, as is your use of any materials you find here. The materials in this page are parallel to Littell's text.
4.1 Introduction
The key conceptual distinction is between a fixed effect and a random effect. This is important because the inference and analysis of fixed and random effects are different.
Fixed effects can be thought of as "treatment" levels that we have selected for inclusion in a study, which are the only levels of the variable in question in which we have an interest. In an
experiment, we might have a treatment group and a control group. The purpose of the study is to compare these two groups -- we are not trying to generalize to other treatments that we might have
included, but didn't. In a non-experimental setting, a variable may have a small number of levels, and we have included them all -- gender, for example, might be treated as a fixed effect because we
have included all possible levels of this "treatment" variable in our study (i.e. males and females). Or, we may have a variable that has, many possible levels, but we are only interested in
generalizing study results to the ones that we happened to include. For example, suppose that we did a survey using cluster sampling -- and selected 10 cities at random. Normally, "city" could be
thought of as a factor in the design, with 10 levels. If we did not want to generalize our results to all of the cities that might have been selected (for example, the 10 cities that we did include
could be regarded as a sample of 10 drawn from a larger population of cities) -- but only wanted to generalize to the 10 we included, then "city" could be treated as a "fixed" effect.
In the case of selecting 10 cities from a larger population of cities, just discussed, we would more naturally treat the variable "city" as a random effect. That is, we would regard the effects of
"city" as a random sample of the effects of all the cities in the full population of cities. Conceptually, a variable's effects might be treated as random effects if we can think of the levels of the
variable that we included in the study as a sample drawn from some larger (conceptual) population of levels that could (in principle) have been selected.
One key difference between fixed and random effects is in the kind of information we want from the analysis of the effects. In the case of fixed effects, we are usually interested in making explicit
comparisons of one level against another. For example, we very much would want to compare the mean of the "control group" to the mean of the "treatment group" in an experiment. If explicit comparison
of the levels of a variable against one another is the goal of the research, then the levels of the variable are usually treated as "fixed." If, on the other hand, our primary interest is in the
effects of other variables or treatments across the levels of a factor (e.g. the effect of gender on voting, across samples from 15 nations), then the "blocking" or "control" variable might be
treated as a "random" effect. In this example, the dependent variable is voting, the independent variable is gender (which would be treated as a fixed factor so we can compare men and women), and the
national context or sampling design variable (which of 15 nations) might be treated as a random factor.
In the case of a fixed factor, then, we are usually interested in comparing the scores on the dependent variable among the levels of the factor; our interest will be in differences between means. In
the case of a random factor, we are not really interested in the specific differences in means from one level of the factor to another -- but, we are interested in the extent to which the random
factor accounts for variance in the dependent variable, because we want to control for this. So, rather than being interested in the individual means across the levels of the fixed factor, we are
interested in the variance of means across the levels of a random factor.
In modern mixed-models methodology, random factors are actually treated in a more complicated way. While we are interested in the total variation in outcomes attributable to the random factor (it's
variance), we might also be interested in specific levels of random factors, in addition. Two examples: a) in using cities as PSUs for surveying how gender affects attitudes, we want to estimate
variance in mean attitudes across cities, and remove this component of the variance before making the key gender comparison; but, we might also be interested in the variance across cities itself, and
might want to test hypotheses about these differences (in a hierarchical model, we might want to introduce other variables to explain the mean differences across cities). b) In looking at differences
between individuals in a treatment group and those in a control group, we might measure each individual a number of times. Some individuals will consistently score higher than others for reasons
other than whether they were in the treatment group or not. In testing the fixed effect (does the treatment group differ from the control group), we might want to control for a "random" effect of
individual differences. We would want to remove this individual level variance from the outcome in testing for a treatment effect (this is, by the way, the basic idea of how random effects models
apply to repeated measures designs). But, in addition to estimating the variance due to the random effect of "individual," we might want to compare particular individuals, or even (again by a
hierarchical model) seek to explain this variation with other predictors.
The key issue between fixed and random effects, statistically, is whether the effects of the levels of a factor are thought of as being a draw from a probability distribution of such effects. If so,
the effect is random. If the levels of a factor are not a sample of possible levels, the effects are fixed. Usually treatment effects are fixed. Many naturally occurring qualitative variables can be
thought of as having fixed effects. Most blocking (sampling design), control, and repeated measures factors are usually treated as random.
Random effects raise two issues: how to construct tests and confidence intervals -- since the effects are a sample of possible effects, we need assumptions about the underlying distribution of all
possible effects that might have been observed. Second, if we want to compare or predict specific levels of random effects, then we need to worry about BLUP (best linear unbiased prediction) of such effects.
With balanced data, random factors do not cause inferential problems for tests of fixed effects. But for unbalanced data (very common in non-experimental applications) improper treatment can lead to
mistaken inference about treatment effects.
In SAS, the "random" statement in GLM can handle many situations. More complex situations are best dealt with using PROC MIXED. In SPSS, both analyze GLM and analyze Mixed Models do fixed and random
effects. The mixed models application is more general and provides some "wizards" to help specify models.
4.2 Nested classifications
Probably the most common sampling approach in non-experimental work (and in many experiments) is the crossed design. In such a design, each level of each factor may occur with each level of each
other factor. If we had males and females in a treatment and a control group -- and there were some of each gender in each group, the design would be crossed. If we looked at the effects of income
(with, potentially, almost as many levels as observations) and years on the job (a discrete variable with a large but finite number of levels) on prestige -- we can think of this as a crossed
observational design where we observe the mean prestige of all individuals who fall in a particular income-by-prestige combination. Carefully constructed laboratory studies often have one and only
one observation at each level of a crossed design (e.g. latin squares or randomized blocks); some studies have multiple replications in each cell of the design; many non-experimental designs have
many observations in some cells, and no observations in many others. Regardless, all are basically crossed classifications. Stratified sampling is a case where we may pay attention to one or more
factors in selecting cases, but allow all levels of other variables to occur at each level of the stratified factors (at least in theory).
In nested classifications, the levels of one factor occur only within certain levels of another factor. Many sampling designs involve this sort of clustering. Individual children are selected
within classrooms (but the children from classroom B can never be selected from the population of classroom A); classrooms may occur within schools, which may occur within districts that occur within
states, etc. Observations of Y are nested within classrooms, which are nested in....
Another common application of the same idea is in "repeated measures," where we make multiple observations on Y (or multiple Ys) for each individual subject. The simplest example is a "before-after" comparison, where the scores on Y are nested within individuals.
The text gives the example of two bacterial counts being made on each of three samples drawn from each of 20 packages of ground beef. There are 2 x 3 x 20 = 120 observations in total. These data are reproduced in the data set MICROBE.sav. For this example, the model of interest is ln(bacteria) = grand mean + effect of package + effect of sample(within package) + error, with the two observations on each sample treated as replications that define the error.
It is argued that the main variable of interest (package) is a random draw from all possible packages. It is argued that the samples within each package are a random draw of possible samples, and
that the replications are also random outcomes. So, all factors are random. We assume that all are normal and independent.
Notation: fixed effects are labeled with Greek letters (e.g., alpha), random effects are labeled with Latin letters (e.g., a). The notation b(a) means that the effect of b is nested within a.
The variance in the dependent variable can be thought of as independent components: variance due to package, to samples(within packages) and to observations(within samples, or "error"). These are
called components of the variance. In this example, this is our primary interest -- to identify the relative sizes of the components of variance.
4.2.1 ANOVA for Nested Classifications
Nested designs with random factors are easy to specify in SAS. In GLM, the random command specifies the effects that are to be treated as random -- in the example:
random package sample(package);
specifies that both the package effect, and the nested effect of sample within package are random effects. Specifying an effect as nested is as simple as using the sample(package) notation shown
above in both the model and random commands. I have not located a way in the GUI approach to SPSS to get it to estimate this basic nested model. It is possible to specify this nested random effects
model in the mixed models module, but this does not produce SS, etc. There may be a way using the command language editor, rather than the GUI.
The analysis produces a partition of the SS (type I and type III are identical for a balanced model). There are 19 df for package (20 packages), and 40 df for sample (1 df is lost within each
package, so the 60 samples from 20 packages have 40 df). Caution, the F tests produced by default for nested models are not correct -- the wrong denominator is used -- see section 4.2.3.
The variance components of the model are estimated from the mean squares. The error variance component is simply the error mean square. The sample component, in this case, is (MS sample - MS error) / 2. The weights used to construct the variance components are given by the Type III expected mean squares output. What is critical is that these variance components decompose the total sample variance. In this case, 8.5% of the variance is error, 50.7% is due to variance from sample to sample within packages, and 40.1% is due to packages. This gives a clear sense of the "reliability" of the package variance estimates (the real substantive interest in the problem).
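To make the arithmetic concrete, here is a small Python sketch of that variance-component calculation for the balanced nested design. The mean squares below are placeholders rather than the actual MICROBE output, so the printed shares will not reproduce the 8.5/50.7/40.1 percent split quoted above.

```python
# Variance components for the balanced nested design
# (20 packages x 3 samples x 2 readings); mean squares are placeholders.
ms_package, ms_sample, ms_error = 2.80, 1.30, 0.10

var_error = ms_error                          # replicate-reading variance
var_sample = (ms_sample - ms_error) / 2       # 2 readings per sample
var_package = (ms_package - ms_sample) / 6    # 3 samples x 2 readings per package

total = var_error + var_sample + var_package
for name, v in [("package", var_package),
                ("sample(package)", var_sample),
                ("error", var_error)]:
    print(f"{name:16s} {v:7.3f}  {100 * v / total:5.1f}%")
```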
4.2.2 Computing variances of means and optimum sampling plans
The variance of a mean can also be computed from the mean squares. This could be used to minimize cost in deriving a sampling design -- most effort should go to making more observations where the variance is high (in the current case, given cost constraints, more samples within packages and fewer packages would improve the standard errors of the means).
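A follow-on sketch (again with invented component values) shows how the components feed the variance of the overall mean, which is the quantity one would compare across candidate sampling plans under a cost constraint.

```python
# Variance of the grand mean for a balanced nested design with
# n_pkg packages, n_samp samples per package, n_read readings per sample.
def var_grand_mean(n_pkg, n_samp, n_read, v_pkg, v_samp, v_err):
    return (v_pkg / n_pkg
            + v_samp / (n_pkg * n_samp)
            + v_err / (n_pkg * n_samp * n_read))

v_pkg, v_samp, v_err = 0.25, 0.60, 0.10                  # placeholder components
print(var_grand_mean(20, 3, 2, v_pkg, v_samp, v_err))    # the design in the text
print(var_grand_mean(10, 6, 2, v_pkg, v_samp, v_err))    # an alternative allocation
```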
4.2.3 Using expected MS to obtain valid inference tests
In nested classifications, the "right" denominator for F tests is not always the "error" mean square -- in this case, the variance between the two readings on each sample within each package. For testing package-to-package differences, for example, the denominator should be the mean square for sample-to-sample variation within packages, ignoring the variance due to multiple measurements. Construction of proper F tests can be done by hand using the expected mean squares, or the test sub-command can be used to specify the hypothesis (numerator) and error (denominator) mean squares.
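A minimal sketch of the corrected test, using placeholder mean squares; the 19 and 40 degrees of freedom follow from the balanced 20-package, 3-sample layout described above.

```python
from scipy.stats import f

ms_package, ms_sample = 2.80, 1.30        # placeholder mean squares
f_stat = ms_package / ms_sample           # package tested against sample(package)
p_value = f.sf(f_stat, dfn=19, dfd=40)
print(f_stat, p_value)
```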
4.2.4 Variance component estimation for nested models with PROC MIXED
MIXED has a slightly different syntax, but provides an easy specification of fixed and random factors, as well as nesting. The SPSS mixed models procedure looks very similar, but I can't find any way to specify nested effects. MIXED produces the "covariance parameter estimates," which are the variance components. For unbalanced data, MIXED should be used to produce variance component estimates.
Rather than F tests of variance components, MIXED uses -2 residual log likelihoods (known elsewhere as "deviance"). Hypotheses about effects are tested by comparing the deviances of nested alternative models. Differences in deviance are distributed as chi-square with 1 df per variance component. Wald tests of the covariance parameters can also be produced by running:
proc mixed covtest;
  model depvar = fixed_factors;
  random random_effect nested_effect(effect);
run;
Warning: The Wald tests are not good approximations to normal unless sample size is very large. It is better to use F tests.
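The deviance comparison itself is simple arithmetic. The sketch below uses invented deviance values and the chi-square reference distribution with 1 df described above; it is an illustration, not MIXED output.

```python
from scipy.stats import chi2

dev_without, dev_with = 112.4, 104.9      # -2 res log likelihoods, placeholders
lr = dev_without - dev_with               # drop in deviance from adding the component
p_value = chi2.sf(lr, df=1)               # 1 df per variance component
print(lr, p_value)
```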
4.2.5 Additional tests for nested data: overall mean and BLUE
Using the "solution" or the "estimate" option in MIXED allows computation of the overall mean.
We might also be interested in whether the mean of a particular package (cluster or PSU) differs from the overall mean. This can be described from the means, but to test the differences, the standard
errors of the means must be corrected for the nested sampling design. This is done using BLUP.
4.3 Blocked designs with random blocks
The complete randomized blocks design administers each treatment level within each of a number of experimental blocks (e.g. sites or clusters). In chapter 2, the effect of block was treated as fixed.
More realistically, the blocks might be regarded as a sample of experimental sites drawn from a population of possible experimental sites -- a random effect. In this case, the variance component and
inference should be treated differently.
Often the sampling of blocks includes all blocks -- and then they are treated as fixed. Or, they may be a sample of blocks, but not drawn with any kind of random process -- also, fixed treatment
would be better. The choice of fixed versus random effect treatment of the blocking factor does not matter much for testing treatment mean differences -- usually the main interest. But, if getting
proper estimates of the treatment group means and their confidence intervals is important, then random treatment of the blocking factor is superior.
This section compares the treatment of a complete random block design to that in chapter 3, where the blocking factor was treated as fixed.
4.3.1 Random blocks analysis using PROC MIXED
The model is a basic linear model of y = intercept + vector of treatment effects + vector of block effects + individual error. Because block is now treated as random, it is represented with a Latin rather than a Greek letter. The data set is that of fruit weights for irrigation methods, with areas as the blocking factor (data set METHOD).
Analysis using proc mixed to specify block as random gives covariance parameter, residual, and likelihood tests for the random factor, and F tests for the fixed factor (irrigation method). Requesting means gives treatment means and estimated standard errors. The treatment means are identical to the analysis treating block as fixed. But...
The standard errors of the treatment means and differences among treatment means may differ from fixed effects. The standard error of a treatment mean is sqrt((variance of blocks + variance of error) / number of blocks); the standard error of the difference between treatment means is sqrt(2 x variance of error / number of blocks).
4.3.2 Differences between Fixed and Random effects analysis of the complete blocks design
Inference about treatment group differences is not affected by fixed versus random treatment of the blocking factor in a complete and balanced design (but is affected in incomplete designs). But,
inferences about treatment means themselves (e.g. do they differ from zero) is affected.
4.3.2.1 Treatment means IMPORTANT
The estimated standard error of a treatment mean differs, depending on whether we assume the blocking factor to be fixed or random. If fixed, the standard error is sqrt(MSerror / blocks). If random, the standard error is sqrt((var block + var error) / blocks). That is: if the blocks are fixed, the only error component is the sum of the individual residuals. But, if the blocks are random, then the
variation between the blocks is also error. Hence, the certainty about treatment means is reduced by the desire to generalize beyond the blocks observed to a full population of possible blocks. The
differences between blocks are treated as an estimator of this component of the uncertainty or error variance.
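To make the contrast concrete, here is a sketch with placeholder variance estimates, where b is the number of blocks. The treatment-mean standard error widens under the random-blocks view, while the standard error of a treatment difference is the same either way.

```python
import math

b = 5                                     # number of blocks (placeholder)
ms_error, var_block = 2.0, 6.0            # placeholder variance estimates

se_mean_fixed = math.sqrt(ms_error / b)                   # blocks treated as fixed
se_mean_random = math.sqrt((var_block + ms_error) / b)    # blocks treated as random
se_difference = math.sqrt(2 * ms_error / b)               # unaffected by the choice
print(se_mean_fixed, se_mean_random, se_difference)
```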
This may be important, for example, in setting a confidence interval around the overall mean of the dependent variable. If we assume that the blocks we have observed are all there are, then the only
uncertainty is estimated by variation across the sample of individuals. But, if our mean is uncertain both because of individual sampling variability, and because of uncertainty in which blocks were
chosen, the confidence intervals are wider.
Often, this difference is logically equivalent to saying: if we want to make an inference about treatment effects for this experiment or study only -- then we can treat the blocking factor as fixed.
But, if we want to consider making inferences about replications (which, by definition, use new blocks), then we should treat blocking as random. The latter type of inference, and our confidence,
will always be less than in the fixed case.
4.3.2.2 Treatment differences
It is possible to validly estimate and evaluate differences in treatment means, even in designs where it may not be possible to estimate the confidence interval on the overall mean.
4.4 The two-way mixed model
This section examines an alternative treatment of a two-way factorial experiment using the GRASSES data set (section 3.7).
Five varieties of seeds are examined with each of three propagation methods, with six replications in each of the 15 conditions. Let us suppose that the five varieties are a sample of varieties --
and we are interested in broader inference about variety effects. Method will remain fixed (say, these are the only three methods that are ever used), but the interaction of method*variety now also
becomes random, because one of its components is random.
To properly estimate standard errors for means and differences, mixed must be used. And, if there is an interest in examining specific variety effects with BLUP, then mixed must be used.
4.4.1 Working with expected mean squares to get valid tests
Code for this example is contained in the file GRASSES.sas.
Y_ijk = mu + alpha_i + b_j + (ab)_ij + e_ijk
the method mean mu_i = the grand mean plus the method effect, averaged across all levels of b -- the "population average" model
the b effects, representing varieties in this case, are assumed normal, equal variance, mean zero
the interaction effects are also assumed mean zero, normal, equal variance
error is the variance among cases using the same variety and method
The "random" statement in SAS causes the program to produce the formulas for the correct expected mean squares of the effects -- which can then be used for constructing tests. Adding a test statement after the random statement allows specification of the hypothesis (numerator) and error (denominator) components for tests.
The danger in not properly specifying effects as random, and in not using the correct expected mean squares to construct tests is that estimated errors are too small, and significance of treatment
effects may be overstated.
4.4.2 Standard errors for the mixed two-way model GLM versus MIXED
GLM standard errors are not correct for the mixed two-way factorial model. However, the variance component method of mixed analysis does not always yield non-zero components. Adding the "nobound" option is one fix-up to this problem. lsmeans and differences are correctly estimated when nobound is used.
4.4.3 More on Expected Mean Squares -- null hypotheses for fixed effects
Deals with correct forms for testing fixed effects in mixed models with unbalanced data using quadratic forms. Beyond my comprehension.
4.5 A classification with both crossed and nested effects
This section uses the data CHIPS, available for both SPSS and SAS.
The dependent variable is the amount of resistance measured across a computer chip. The primary concern is with the effects of a treatment factor (et) which has four levels. We assume that this
treatment effect is fixed. Twelve wafers of silicon (you might think of them as, for example, cities in another context) are selected, and three are assigned to each treatment. Wafers, then, are nested within treatment -- as each wafer occurs only within a given treatment. Four positions are selected for testing on each wafer; these four positions are taken here as an experimental factor and occur in each wafer in each treatment. Imagine, analogously, that we have selected a Hispanic, a White, a Black, and a mixed-race neighborhood in each city. We are interested in whether our four kinds of treatments work and whether there are systematic differences due to ethnicity, but we have used a cluster sampling technique to select cases.
For this design, where we are interested in et effects, the variance against which they are tested is the variation among wafers. Analogously, for examining our program effects, the error term is the variation among cities. For testing effects
of pos, a different error term is needed. Analogously, for testing ethnicity effects, we need a different error term than for testing program effects.
So, the effects here are both crossed and nested. However, all effects are treated as fixed.
4.5.1 Analysis of Variance
With all fixed effects, sums of squares are used. There are 3 df for et (the program factor has four levels) and 8 df for wafer (12 wafers, less one degree of freedom within each level of et, since wafer is nested within et; analogously, city effects have 8 df, since three cities are assigned to each treatment and the city effects are therefore nested within treatment). There are 3 df associated with position on wafer -- there are four positions that are crossed with all other factors of the design (analogously, there are 3 df for ethnicity effects, because the four groups occur in every one of the 12 cities). The interaction of the two treatment factors (et and position) has 9 df (3 from et times 3 from pos; analogously, 3 from program times 3 from ethnicity). The individual residual has 24 df (analogously, variation across neighborhoods within cities within programs has 24 df).
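The degrees-of-freedom bookkeeping is easy to verify directly; the short sketch below simply reproduces the counts given above for the CHIPS layout.

```python
n_et, n_wafer_per_et, n_pos = 4, 3, 4          # treatments, wafers per treatment, positions
n_obs = n_et * n_wafer_per_et * n_pos          # 48 observations

df_et = n_et - 1                               # 3
df_wafer_in_et = n_et * (n_wafer_per_et - 1)   # 8
df_pos = n_pos - 1                             # 3
df_et_pos = df_et * df_pos                     # 9
df_residual = (n_obs - 1) - (df_et + df_wafer_in_et + df_pos + df_et_pos)  # 24
print(df_et, df_wafer_in_et, df_pos, df_et_pos, df_residual)
```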
4.5.2 Expected mean squares
Shows how to design contrasts to test hypotheses about treatment (ET) effects, hypotheses about ethnicity (pos) effects within program (et); and hypotheses about program (et) effects within ethnicity
(pos). Table 4.3 (p 127) shows the proper error terms for examining each of the effects -- this is actually the key thing to try to understand.
4.5.3 Satterthwaite's formula for approximate degrees of freedom
All I get from this is that the df(error) in complex designs like this can be very complex, and there are methods for approximating the correct df.
4.5.4 MIXED analysis of the same problem
If the model is correctly specified in PROC MIXED, the complexity of finding correct error terms and df is handled by the program -- and hence this is preferred to GLM. In this case, the example
suggests that the effect wafer(et) be treated as random. By analogy, we might say that the effect of city(nested within program) might reasonably be treated as a random effect, because we may be
interested in attempting to generalize beyond the 12 cities actually examined -- but program and ethnicity are reasonably treated as fixed effects. This seems a better specification of the problem.
The example shows how the correct variance components and tests of effects and contrasts are estimated by MIXED.
4.6 Split-plot experiments
There are a wide variety of experiments of this general type. One common occurrence is where there are two or more treatment effects, and one is applied at broad areas (like cities) and another is
applied at narrower areas (like neighborhoods) within each city. Designs vary in whether they are complete or not, re-use areas or not, etc. The key issues are that the error terms for effects
applied at the larger level differ from (and are larger than) the error terms applied at the local level -- that is, macro effects need to be tested against both macro and micro variability; micro
effects can be tested against micro variability pooled across contexts. This section shows the analysis of the same split plot design using GLM and MIXED.
4.6.1 A standard split-plot experiment.
Data for this example are contained in the data set CULTIVAR (available for both SPSS and SAS).
Four main plots (rep) are selected at random from a population of plots that might have been used to do the study. Within each plot, three kinds of inoculation treatments (innoc) are applied to each of two types of grasses (cult). The dependent variable is the dry weight of the crop. There are 24 observations, being six cult*innoc conditions within each of four plots. Replication has 3 df (four plots); cultivar has 1 df (two types of grasses); the replication by cultivar interaction (differences between grasses across plots -- the "error A" or whole-plot error) has 3 df; the inoculation treatment (innoc) has 2 df.
There are two relevant error terms. For testing differences between grasses, the replication (and hence the source of the error mean square) is replication by grass (the four plots by the two grasses) -- ignoring any variation due to the inoculation treatment. For testing effects of the treatment (innoc), the replication (and hence the error mean square) is replication by cultivar by inoculation -- the variation across all the observations.
Think of a parallel case, substituting sociological variables.
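The two error strata translate directly into two F ratios. The mean squares below are placeholders, but the degrees of freedom follow the 4-replicate, 2-cultivar, 3-inoculation layout described above.

```python
from scipy.stats import f

ms_cult, ms_rep_cult = 5.0, 1.2     # whole-plot effect and whole-plot (rep*cult) error
ms_innoc, ms_resid = 9.0, 0.8       # sub-plot effect and residual (sub-plot) error

f_cult = ms_cult / ms_rep_cult      # cultivar tested against rep*cult: df (1, 3)
f_innoc = ms_innoc / ms_resid       # inoculation tested against residual: df (2, 12)
print(f.sf(f_cult, 1, 3), f.sf(f_innoc, 2, 12))
```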
4.6.1.1 Analysis using GLM
GLM yields the formulas for expected mean squares in terms of variance components.
4.6.1.2 Analysis with MIXED
Treatment main effects and interactions are treated as fixed (in this case, cult, innoc and cult*innoc). The replication across the macro units (plots) and its interactions are treated as random (in this case, rep and rep*cult). That is, usually the replication or blocking factor in a split-plot design is treated as random.
MIXED yields variance component tests for rep and rep*cult, and yields sums of squares and F tests for the fixed cult, innoc, and innoc*cult effects.
Patent US5280429 - Method and apparatus for displaying multi-frequency bio-impedance
1. Field of the Invention
My invention relates generally to methods for measurement and display of complex sinusoidal impedance and, more specifically, to the display of complex sinusoidal electrical impedance in biological
tissues at many frequencies.
2. Background of the Invention
The concept of a complex impedance for an object is well-known in the acoustical, electrical, and mechanical arts. An object impedance is usually defined as the ratio of a motivating excitation
divided by a resulting object response at a single sinusoidal frequency, ω. For example, the complex mechanical impedance, Z, of a structure can be expressed as the ratio of a sinusoidal excitation
force, F, to the resulting sinusoidal velocity, V, of the structure at the point of application of the excitation force, that is:
Z = F / V
The complex nature of such a driving point impedance arises from the time delay, d, of the peak sinusoidal velocity, V, with respect to the peak sinusoidal excitation force, F. This is commonly
expressed in the following form:
Z = (|F| / |V|) e^(jωd)
where: ω = angular sinusoidal frequency in radians/second, and
ωd = phase delay of V with respect to F in radians.
This concept of complex object impedance at a single sinusoidal frequency can be used to express a mechanical ratio of force to velocity, an acoustic ratio of pressure to displacement, an electrical
ratio of voltage to current, a thermal ratio of temperature differential to heat flow, an electromagnetic ratio of electric field to magnetic field, and so forth as is well-known in the art.
The classical method known in the art for measuring and displaying values of complex object impedance requires the measurement of the response of the object to an applied excitation at a single
sinusoidal frequency. For instance, the complex electrical impedance of an object can be determined by applying a sinusoidal voltage to the object and measuring the resultant sinusoidal current flow
through the object. The electrical object impedance magnitude may then be determined as the ratio of the root-mean-square (RMS) voltage and current values and the object impedance phase angle may be
determined as the delay in radians of the peak sinusoidal current with respect to the peak sinusoidal voltage. The real and imaginary components of the complex object impedance may be determined from
impedance magnitude and phase angle according to:
Re(Z) = |Z| cos(θ) and Im(Z) = |Z| sin(θ), where θ is the impedance phase angle.
A number of problems are known in the impedance measurement art that lead to errors and ambiguities when the values of complex object impedance are measured and displayed. The most significant
problem arises from the requirement that the complex impedance be determined at a single sinusoidal frequency. The presence of other frequency components in the excitation and response signals leads
to errors in the displayed complex impedance. Practitioners in the art have proposed analog filtering techniques and special excitation signal generation methods for minimizing the errors arising
from such unwanted signal components. However, analog filter devices are subject to calibration errors, thermal drift and other problems, which introduce amplitude and phase errors in the excitation
and response signals. These phase errors often become severe at high values of reactance (imaginary impedance) and can completely overwhelm the character of the complex impedance under measurement.
Other serious limitations to accurate complex impedance display are well-known in the art. High impedance magnitudes require the determination of a ratio between a large excitation signal and a very
small response signal, leading to errors arising from the presence of noise in the small response signal. Practitioners propose the use of very high excitation signal amplitudes to overcome the
effects of such noise, but high levels of excitation signal may damage or destroy the object to which the excitation signal is applied. This is especially problematic in cases where the object under
measurement is living biological tissue.
Another problem arising from the single frequency nature of complex object impedance is the difficulty in measuring complex object impedance at several frequencies over a wide frequency range.
Because signal filters are necessary to ensure sinusoidal purity, these filters must be retuned to permit impedance display at other sinusoidal frequencies. The range of frequencies over which analog
filters can be retuned is very limited, often to a few octaves. Alternatively, individual signal filters can be provided for each measurement frequency, but this approach seriously limits the number
of frequencies for which complex object impedance can be displayed in any particular situation.
The analog signal filtering and display components used in the art are subject to calibration drift resulting from thermal changes and variations in operating region. This problem leads to a
requirement for frequent recalibration to minimize complex impedance display errors, thereby preventing the rapid and effective measurement of complex object impedance at several frequencies.
The use of balanced bridge impedance measurement techniques known in the art can overcome many of the deficiencies of the signal ratio measurement methods mentioned above. However, balanced bridge
techniques require a well calibrated impedance standard and are limited in practical application to the determination and display of electrical impedances. Moreover, the standard analog bridge
components are presumed to be linear with respect to signal amplitude, which is an inaccurate presumption in most applications. This presumption leads to errors in display of complex object impedance
that arise with changes in excitation signal level. Finally, balanced bridge techniques are unsuitable for accurate complex impedance determination over a wide frequency range and, in fact, are often
limited to a single sinusoidal calibration frequency at a single excitation signal amplitude.
Another problem well-known in the art arises from the effects of small errors in impedance phase angle at angle values approaching π/2 radians (90 degrees). Near this value the tangent of the phase angle approaches infinity, and extremely small errors in phase angle cause
cause extremely high errors in one or both of the complex impedance components. A number of clever techniques have been proposed by practitioners in the art to overcome this problem, but most rely on
the thermal discrimination between real and imaginary electrical power flow in a circuit and are unsuitable for use over a range of frequencies or with nonelectrical impedance measurement.
Yet another problem with classical impedance measurement and display techniques is the difficulty in simultaneously determining impedance at widely separated frequencies. This problem arises when the
object under measurement experiences changes in impedance properties as a result of the application of the excitation signal. An example of this is the well-known tendency of an object to increase in
temperature in response to the application of an electrical voltage. When the complex object impedance is a function of object temperature, then the measurement of complex impedance at one frequency
will heat the object and create errors in the measurement of complex impedance at another frequency because of the difficulty in measuring such impedances simultaneously. One solution is to use a
plurality of impedance measurement and display devices to simultaneously measure complex object impedance at a plurality of sinusoidal frequencies, but this method is cumbersome and not practical for
large numbers of measurement frequencies.
These and other related difficulties with accurate display of complex object impedance are exacerbated in the situation where the object of interest is living biological tissue. The measurement and
display of complex electrical impedance in biological tissue has been of interest since the late 1800's. By 1921 it was well supported that the living cell had a well-conducting interior
surrounded by a relatively impermeable, poorly conducting membrane. In 1925, Fricke reported (Fricke, H., Mathematical Treatment of the Electrical Conductivity and Capacity of Disperse Systems, Phys.
Rev. 26:678-681, 1925) that at low frequency (LF) currents, there was little conduction through the cell because of high membrane capacitance. Thus, Fricke argued that conduction occurs primarily in
the extracellular fluid (ECF) compartment and, at a sufficiently high frequency (HF), the current is shunted through the cell membrane and conducts through both the ECF and intracellular fluid (ICF).
Knowledge of biological material properties has been obtained through complex impedance measurements across various cells, suspensions, fibers, eggs and tissues. Of primary significance were the
discoveries of cell membrane capacitance, the beta dispersion region, the additional regions of dispersion for cell suspensions, and Maxwell's mixture theory for analyzing the impedance measurements.
In general, all cells and tissues may be expected to show three major dispersion regions in relation to frequencies (α, β and γ). Of particular interest is the central β-dispersion region, which is
explained by the dielectric capacity of the cell membranes. The α and γ regions are attributed to a surface conductance and to intracellular components. The β region can be expressed in either a
simple or complex equivalent circuit form, where the parallel resistance (R_P) reflects the extracellular fluid (R_ecw) and the series orientation corresponds to intracellular fluid resistance (R_icw) and cell membrane capacitance (C_M). The equivalent circuit used in the Maxwell analysis must give the semi-circular relation (impedance and admittance loci) between the
real and imaginary components of the impedance as applied frequency is varied, according to Cole (Cole, K. S., Membrane, Ions and Impulses, University Press, Berkeley, 1968).
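As a small numerical aside (mine, not part of the patent text), the sketch below evaluates that equivalent circuit over a frequency sweep and confirms the semicircular impedance locus that Cole describes; the element values are invented for illustration.

```python
import numpy as np

# Check (with invented element values) that the parallel R_ecw / series
# (R_icw, C_M) circuit traces a semicircle in the complex-impedance plane
# as frequency sweeps the beta dispersion.
r_ecw, r_icw, c_m = 800.0, 400.0, 3e-9           # assumed values, ohms / farads
f = np.logspace(2, 7, 400)                       # 100 Hz .. 10 MHz
z_b = r_icw + 1.0 / (2j * np.pi * f * c_m)       # intracellular branch
z = r_ecw * z_b / (r_ecw + z_b)

z_lo = r_ecw                                     # low-frequency limit (ECF only)
z_hi = r_ecw * r_icw / (r_ecw + r_icw)           # high-frequency limit (ECF and ICF)
center, radius = (z_lo + z_hi) / 2.0, (z_lo - z_hi) / 2.0
print(np.allclose(np.abs(z - center), radius))   # True: semicircular locus
```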
An early attempt to apply this concept was made by Thomassett (Thomassett, A., Bio-Electrical Properties of Tissues, Lyon Med. 209:1325-1352, 1963), who measured simple impedance at high (100 kHz)
and low (1 kHz) frequency currents using a two-wire technique. The high correlations of high frequency impedance to total body water (TBW) and low frequency impedance to extracellular fluid gave
further support to the membrane and dispersion theories. However, Thomassett's approach did not develop into a practical clinical method.
Using a four-wire configuration, Hoffer, et al. (Hoffer, E., Meador, C., Simpson, C., Correlation of Whole-Body Impedance with Total Body Water Volume, J. Appl. Physio. 17, 4:531-534, 1969) reported
a high correlation between measured TBW and TBW estimated by Ht^2/Resistance at 100 kHz. However, Hoffer, et al. reported that their high standard deviations implied that further development was
necessary to make the technique a practical clinical method.
A decade later, Nyboer, et al. (Nyboer, J., Liedtke, Reid, K., Gesert, W., Nontraumatic Electrical Detection of Total Body Water and Density in Man, Proceedings of the VI ICEBI, 381-384, 1983) found
that measurements of electrical resistance at 50 kHz, combined with subject weight, height and age, could accurately determine body density (fat and lean) in human subjects. These relationships were
based on the assumption that a strong relationship must exist between Fat-Free Mass (FFM) and TBW estimated by impedance because FFM tissue is consistently hydrated. Following the Nyboer, et al.
work, a tremendous amount of interest arose for the impedance method.
Since the Nyboer, et al. presentation, many practitioners have sought to validate the single frequency impedance method of estimating human body composition in various populations (Lukaski, H. C.,
Johnson, P. E., Bolonchuk, W. W., Lykken, G. I., Assessment of Fat-Free Mass Using Bioelectrical Impedance Measurements of the Human Body, Am. J. Clin. Nutr., 41:810-817, 1985; Segal, K., Van Loan,
M., Fitzgerald, P., Hodgdon, J. A., Van Itallie, T. B., Lean Body Mass Estimation by Bioelectrical Impedance Analysis: a Four Site Cross-Validation Study, Am. J. Clin. Nutr. 47:7-14, 1988; and
Kushner, R., Kunigk, A., Alspaugh, M., Andronis, P., Leitch, C., Schoeller, D., Validation of Bioelectrical-Impedance Analysis as a Measurement of Change in Body Composition in Obesity, Am. J. Clin.
Nutr. 52:219-23, 1990). Results have been mostly positive for normal subjects but the predictions of FFM have been far less precise in the clinical and abnormal population groups. Although impedance
data continued to correlate highly to TBW, the underlying assumption that FFM is consistently hydrated has recently been found to be incorrect (Deurenberg, P., Westtrate, J. A., Hautvast, J., Changes
in Fat-Free Mass During Weight Loss Measured by Bioelectrical Impedance and by Densitometry, Am. J. Clin. Nutr. 49:33-6, 1989). Furthermore, increased awareness of the usefulness of the
multi-frequency measurements has led to a general belief that the single frequency method is too simplistic and limiting (Cohn, S., How Valid are Bioelectrical Impedance Measurements in Body
Composition Studies?, Am. J. Clin. Nutr. 42:889-890, 1985). Criticism was also raised over the clinical usefulness of TBW in view of frequently encountered fluid shifts between the ICW and ECW. Van
Itallie, et al. (Van Itallie, T., Segal, K., Nutritional Assessment of Hospital Patients: New Methods and New Opportunities, Am. J. Hum. Bio 1:205-8, 1989) assert that a practical method of
discerning the ICW from the extracellular fluid would offer much greater utility and could profoundly influence hospital patient care and diagnosis.
Recently, several practitioners (Lukaski, H. C., Bolonchuck, W. W., Estimation of Body Fluid Volumes Using Tetrapolar Bioelectrical Impedance Measurements, Aviat. Space Environ., Med Dec., 1163-69,
1988 and McDougall, D., Shizgal, H., Body Composition Measurements from Whole Body Resistance and Reactance, Surgical Forum, 36:43-44, 1986) have proposed using the reactive element in a complex
single frequency (50 kHz) impedance measurement to accurately discriminate the extracellular from the cellular mass. However, the use of multi-frequency measurements of impedance remains the
technique of choice (Boulier, A., Fricker, J., Thomassett, A. L., Apfelbaum, M., Fat-Free Mass Estimation by Two-Electrode Impedance Method, Am. J. Clin. Nutr. 52:581-5, 1990).
Several practitioners have continued to test the Thomassett technique (Jenin, P., Lenoir, J., Roullet, C., Thomassett, A., Ducrot, H., Determination of Body Fluid Compartments by Electrical Impedance
Measurements, Aviat Space Environ. Med. 46:152-5, 1975; Settle, R. G., Foster, K. R., Epstein, B. R., Mullen, J. L., Nutritional Assessment: Whole Body Impedance and Body Fluid Compartments, Nutr.
Cancer, 2:72-80, 1980; and Tedner, B. T., Equipment Using an Impedance Technique for Automatic Recording of Fluid-Volume Changes During Hemodialysis, Med. & Biol. Eng. & Comput. 21:285-290, 1983).
These practitioners use the ratio of high to low frequency simple impedance to reflect fluid compartment volume, the normal and abnormal fluid ratio, and fluid compartment change. The major problem
reported for this technique is that the simple HF/LF ratio is too simplistic because volume is not determined and the compartment actually affected is not identified, although a change in ratio does
reflect a change in compartmental volume. Furthermore, practitioners note that the lack of two-frequency simple impedance instrumentation prevents further exploration of this approach.
Because living tissue is mainly affected by ECF at frequencies below the β-dispersion region, and by ECF, C_M and ICF at frequencies above this region, Kanai, et al. (Kanai, H., Haeno, M., Sakamoto, K., Electrical Measurements of Fluid Distribution of Legs and Arms, Med. Prog. Tech. 12:159-170, 1987) computed the component values of the human muscle tissue equivalent circuit model (R_ecw, R_icw, and C_M) known in the art. By mathematically analyzing the complex impedance measurements at multiple frequencies, Kanai, et al. obtained information specific to the ECW and ICW.
However, others have been unable to replicate this advanced approach because of the lack of necessary and appropriate complex impedance instrumentation.
Expanding the prior art human body composition model from a simple two-compartment (lean/fat) model to a more complex model including ICF and ECF, protein, mineral and fat, creates a clearly-felt
need for new and improved assessment techniques. This need has been unmet until now because of the lack of appropriate complex impedance instrumentation and the lack of the necessary degree of
sophistication in the associated measurement and analysis methods. This situation has inhibited the progress of this promising technology.
Increased use of diuretic drugs, fluid monitoring difficulties in intensive care, and the shrinking cell mass and expanding ECF that accompany most systemic wasting diseases and malnutrition has led
to a well-recognized need for effective multi-frequency bioimpedance instrumentation in support of the compartmental approach to body fluid assessment. These unresolved problems and deficiencies are
clearly felt in the art and are solved by my invention in the manner described below.
My invention is a method and apparatus for the simultaneous determination and display of complex electrical bio-impedance at several frequencies. Although I teach primarily the application of my
invention to the display of multi-frequency electrical bio-impedance, my invention is also readily applicable to the determination and display of complex impedances for many other types of objects,
including other impedances such as mechanical, acoustic and the like. This can be appreciated by recognizing that my invention determines and displays complex object impedance by treating an
excitation signal and a response signal as two independent functions in the time domain. Because of this view, I can apply certain signal processing techniques to the two independent time domain
signals to obtain results in a manner previously unsuspected in the art.
The first such signal processing concept that I use is the creation of a complex cross-correlation signal as a function of time delay, τ, between the excitation, e, and response, r, signals. My
complex cross-correlation function, R_er(τ), can be developed from the integral over time of the product of the two excitation and response signals where the response signal is delayed by time τ
with respect to the excitation signal.
The second signal processing concept that I use is the complex Fourier transform, F, of the cross-correlation signal, R_er(τ). The Fourier transform of R_er(τ) is F(R_er), which is a
signal in the frequency domain having a complex value at each angular frequency ω that is proportional to the complex object impedance.
Following the conversion of the two excitation and response signals to an intermediate cross-correlation signal and a final Fourier transform signal, I display the complex object impedance as a
function of frequency over a predetermined range. My cross-correlation signal and Fourier transform signals are both complex because they exhibit amplitude and phase information at each point in the
time delay and frequency domains, respectively.
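As a rough illustration only (not the patented apparatus), the following sketch walks through the two conversions just described using synthetic single-frequency signals. The sampling rate, excitation amplitude and assumed tissue impedance are invented, and dividing the cross-spectrum by the excitation auto-spectrum is my own choice of normalization for turning the "proportional to" relationship into an explicit impedance estimate.

```python
import numpy as np

# Estimate complex impedance from sampled excitation current i[n] and
# response voltage v[n] via the cross-correlation / Fourier-transform route
# described above. All numeric values are invented for this example.
fs = 50_000.0                         # sampling rate, Hz (assumed)
f0 = 1_000.0                          # excitation frequency, Hz (assumed)
t = np.arange(2048) / fs
i_amp = 0.5e-3                        # 0.5 mA excitation amplitude (assumed)
z_true = 400.0 - 150.0j               # pretend tissue impedance at f0
i_exc = i_amp * np.sin(2 * np.pi * f0 * t)
v_res = np.abs(z_true) * i_amp * np.sin(2 * np.pi * f0 * t + np.angle(z_true))

# Cross-correlation R_iv(tau) and auto-correlation R_ii(tau)
r_iv = np.correlate(v_res, i_exc, mode="full")
r_ii = np.correlate(i_exc, i_exc, mode="full")

# Fourier transform both; their ratio at the excitation frequency supplies
# the impedance (the auto-spectrum provides the proportionality constant).
S_iv = np.fft.rfft(r_iv)
S_ii = np.fft.rfft(r_ii)
freqs = np.fft.rfftfreq(len(r_iv), d=1.0 / fs)
k = np.argmin(np.abs(freqs - f0))
print(S_iv[k] / S_ii[k])              # approximately 400 - 150j
```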
My application of these simple signal processing techniques for the conversion of time domain signals to frequency domain signals can be embodied in an apparatus using either analog or digital
electronic components. An advantage of my preferred embodiment is that my use of digital components leads to improved storage and processing accuracy and minimizes the effects of analog component
calibration drift. I have also discovered that the development of a complete cross-correlation signal, R(τ), over a large time-delay region and the subsequent development of a Fourier transform
signal, F(ω), over a broad frequency region, while possible, is not the most useful embodiment of my invention. Accordingly, I have refined this elementary convolution approach by adding the
following improvements.
First, I use an electrical current for the excitation signal instead of the electrical voltage normally used in the art. The constant current excitation signal prevents unsuspected hazards to living
biological tissues because the biological hazards are more closely related to current levels than to voltage levels as is known in the medical art. Secondly, I make use of a series of stored digital
data to define the excitation current waveform precisely. By precisely defining the excitation current waveform, I can create and apply a single-frequency excitation current to the living biological
object under measurement. Conversion of two such single-frequency excitation and response signals to a cross-correlation signal results in a sinusoidal cross-correlation signal having an amplitude
and phase representative of the complex cross-correlation between the excitation and response signal. Moreover, the Fourier transform of such a sinusoidal cross-correlation signal is a single impulse
function at the point in the frequency domain equivalent to the single frequency of the initial excitation and response signal.
These observations are particularly useful because they result in simplification of my signal and convolution conversion process to a simple signal multiplication procedure in quadrature. That is,
the complex object impedance at a single frequency can be developed from the excitation and response signals in the time domain by a multiplication and integration process in quadrature. This
quadrature signal conversion method is the preferred embodiment of my invention because of the relatively simple apparatus necessary.
A potential problem with my quadrature technique is that a different excitation waveform is required for each sinusoidal frequency value at which complex object impedance must be displayed. I have
resolved this problem by providing for the storage of a plurality of excitation waveforms in a signal generator digital memory means so that the excitation signal can be quickly stepped through a
variety of sinusoidal frequencies merely by selecting the necessary waveform data from the memory device in the desired sequence. The excitation current is then produced from the stored digital data
and the analog response voltage developed across the object under measurement is sensed. Moreover, network analyzer means may be added to compute and display a plurality of equivalent circuit
elements from a similar plurality of complex impedance data obtained at the different sinusoidal frequencies stored in my digital signal generator memory means.
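Purely as a hedged illustration of what such a network analyzer step might look like, the sketch below fits the three elements of the R_ecw / R_icw / C_M equivalent circuit mentioned in the background to complex impedance values at several frequencies. The element values, frequency list and use of a generic least-squares routine are my own assumptions, not the apparatus described here.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(f, r_ecw, r_icw, c_nf):
    """Extracellular resistance in parallel with (intracellular R + membrane C)."""
    c_m = c_nf * 1e-9                               # capacitance given in nF
    z_branch = r_icw + 1.0 / (2j * np.pi * f * c_m)
    return r_ecw * z_branch / (r_ecw + z_branch)

freqs = np.array([1e3, 5e3, 1e4, 5e4, 1e5, 5e5])    # measurement frequencies, Hz
z_meas = z_model(freqs, 800.0, 400.0, 3.0)          # stand-in "measured" data

def residuals(p):
    diff = z_model(freqs, *p) - z_meas
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residuals, x0=[500.0, 500.0, 1.0])
print(fit.x)                                        # should land near [800, 400, 3]
```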
For ease of implementation and minimization of measurement and display errors, I prefer to perform all signal conversions using digital means. Accordingly, in my preferred embodiment, the response
and excitation signals are converted to digital form by sampling means, and the multiplication of the two excitation and response signals in the time domain is performed using digital multiplication means. The real or resistive component of the complex object impedance is related to a first impedance signal, expressed as the arithmetic average of the products of a series of simultaneous samples of the excitation signal and the response signal. The imaginary or reactive component of the complex object impedance is related to a second impedance signal, expressed as the arithmetic average of a
series of products of simultaneous samples of the response signal and a delayed excitation signal, where the delay is precisely equal to π/2 radians or one-fourth cycle of the sinusoidal waveform
representing the frequency at which the complex object impedance is determined.
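The arithmetic of that quadrature step is easy to mimic with synthetic samples. The sketch below is only a rough model of the description above, not the patented circuit; the sampling rate, excitation amplitude, assumed impedance, the scaling by 2/I0^2, and the sign convention that follows from using a quarter-cycle delay are my own choices.

```python
import numpy as np

# Quadrature (synchronous) detection with synthetic signals; all values invented.
fs, f0, n_samp = 48_000.0, 1_000.0, 4_800    # 100 whole cycles in the record
t = np.arange(n_samp) / fs
i0 = 0.5e-3                                  # 0.5 mA excitation amplitude
z_true = 400.0 - 150.0j                      # pretend tissue impedance

i_exc = i0 * np.sin(2 * np.pi * f0 * t)               # excitation samples
i_del = i0 * np.sin(2 * np.pi * f0 * t - np.pi / 2)   # quarter-cycle delayed copy
v_res = np.abs(z_true) * i0 * np.sin(2 * np.pi * f0 * t + np.angle(z_true))

# First impedance signal: average of response x excitation -> resistance.
# Second impedance signal: average of response x delayed excitation -> reactance
# (the minus sign reflects the quarter-cycle *delay* convention used here).
r_est = 2.0 * np.mean(v_res * i_exc) / i0**2
x_est = -2.0 * np.mean(v_res * i_del) / i0**2
print(complex(r_est, x_est))                 # approximately (400-150j)
```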
By using digital sampling techniques and synchronizing the sampling windows of the excitation and response signals, the implementation of my preferred embodiment is simplified and the signal
conversion process made more robust (resistant to noise and calibration errors). It is not necessary to independently sample the excitation current waveform because this waveform is developed from
digital samples stored in a digital memory device and these samples can be used directly in the quadrature multiplication process. However, I prefer to independently sample the analog excitation
current waveform at the point where it is applied to the biological object for two reasons. First, this simplifies the synchronization of the sampling windows for the two excitation and response
signals. Secondly, this avoids errors arising from unexpected changes in the excitation signal between the memory storage means and the point of application to the object under measurement.
One of the advantages of my invention is that complex bio-impedance can be measured almost simultaneously at a plurality of frequencies in a wide frequency region. Another advantage of my invention
is that a change in the sinusoidal excitation signal frequency can be achieved without changing analog circuit paths or component values. This minimizes the errors arising from component calibration
errors. An objective of my invention is to avoid complex impedance errors arising from nonlinearities in the object under measurement. Yet another advantage of my invention is the rejection of noise
effects over a broad frequency range without the use of analog filter techniques. It is another object of my invention to accurately display the complex object impedance phase at all phase values
from -90° to +90°. A variety of calibration factors may be readily applied to the displayed results by using digital techniques known in the art.
An important objective of my invention is to permit the measurement of bio-impedance at a plurality of frequencies in as little as two cycles of the primary frequency waveform. Another important
advantage of my invention is that my time series convolution process eliminates most errors that otherwise arise from the undesired presence of random noise in the relevant signals.
The foregoing, together with other features and advantages of my invention will become more apparent when referring to the following specifications, claims and the accompanying drawings.
For a more complete understanding of my invention, reference is now made to the following detailed description of the embodiments illustrated in the accompanying drawings wherein:
FIG. 1, comprising FIGS. 1(a) through 1(h), shows a series of waveforms in the time domain exemplifying the excitation signal, delayed excitation signal, response signal, first and second impedance
signals and the related series of digital samples;
FIG. 2, comprising FIGS. 2A and 2B, illustrates a method for applying an excitation signal to an object and detecting a resulting response signal and shows a typical equivalent circuit representation
of a living cell;
FIG. 3 is an illustrated embodiment of an apparatus for measuring and displaying the complex impedance of an object;
FIG. 4 is an illustrated embodiment of the signal generating means from FIG. 3; and
FIG. 5 is an illustrated embodiment of an apparatus that uses my more general convolution method to display the complex impedance of an object over a region in the frequency domain.
FIG. 1 illustrates the typical waveform characteristics for the excitation and response signals used in my invention for displaying complex object impedance. FIG. 1A shows an excitation signal
current 10 that varies in amplitude sinusoidally with time in the manner shown. FIG. 1B shows the resulting series of excitation pulses or samples 12 that represent current 10 samples made at a
predetermined sampling rate. This predetermined sampling rate should be high enough (as seen from the Nyquist criteria, known in the art) to avoid losing significant phase and amplitude information
contained in current 10. FIG. 1C illustrates a delayed excitation signal current 14 that is identical to current 10 except for a time delay of π/2 radians or one-quarter cycle of the sinusoidal
waveform. FIG. 1D illustrates a series of delayed excitation pulses or samples 16 that represent current waveform 14 samples taken in synchronism with excitation current samples 12.
In operation, my invention uses excitation signal current 10 to excite a living biological tissue sample and I then monitor the sample for a response signal voltage waveform 18 shown in FIG. 1E.
Response signal voltage 18 may contain several sinusoidal frequency components, but the fundamental sinusoidal frequency of signal current 10 will predominate. FIG. 1F illustrates a series of
response voltage samples 20 that are taken in synchronism with samples 12 and 16.
FIGS. 1G and 1H illustrate the first impedance samples 22 and second impedance samples 24, respectively. Sample 22 is obtained by multiplying the concurrent values of sample 12 and sample 20 (I_n·V_n), either by analog or digital means. Sample 24 is obtained by multiplying concurrent values of delayed excitation sample 16 and response voltage sample 20 (I_90n·V_n). FIG. 1G shows a
first mean impedance signal 26 representing the long term arithmetic average of impedance samples 22. Similarly, FIG. 1H shows a second mean impedance signal 28 representing the long term arithmetic
average of samples 24.
While I believe that my invention is useful for determining many different types of impedances, including thermal, mechanical, acoustical and so forth, I limit my discussion herein to my preferred
application, which is the measuring of complex electrical impedances of living biological tissues. FIG. 2A illustrates the fundamental arrangement required to measure and display complex electrical
impedance of an object 30. Excitation signal current 10 is applied to object 30 and response signal voltage 18 is monitored.
FIG. 2B illustrates a useful equivalent circuit model for the living human cell 32 proposed by Kanai, et al., cited above. Cell model 32 comprises three lumped constant RC circuits 34, 36 and 38. RC
circuit 34 represents the cell membrane capacitance and cell membrane resistance. RC circuit 36 represents the intracellular fluid capacitance and resistance. RC circuit 38 represents the
extracellular fluid capacitance and resistance.
Human cell equivalent circuit 32 presents a single complex impedance to the pair of terminals 40 at any single sinusoidal frequency. Accordingly, the six lumped-constant model elements can be
determined only from impedance data at six (or more) different sinusoidal frequencies by means of a critically- or over-determined system of linear equations as known in the art.
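For illustration only, that "known in the art" step can be sketched as an ordinary least-squares solve of the over-determined system; the design matrix A and data vector b below are hypothetical placeholders, since their rows depend on how the six-element cell model is linearised.

```python
import numpy as np

def fit_equivalent_circuit(A, b):
    """Least-squares solve of the critically- or over-determined system A x ~= b.
    A: one row per measurement frequency, one column per unknown equivalent-circuit
    element; b: the matching impedance data (placeholders, see above)."""
    elements, residuals, rank, singular_values = np.linalg.lstsq(A, b, rcond=None)
    return elements
```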
As discussed above, I prefer to use a simplified signal multiplication procedure in quadrature to determine complex object impedance. This signal multiplication procedure was introduced and described
in connection with FIG. 1, wherein first and second mean impedance signals 26 and 28 represent real and imaginary components (Re[Z],Im[Z]). FIG. 3 shows a functional implementation of my invention,
including the necessary means for determining a plurality of equivalent circuit element values such as described in FIG. 2B for human cell 32. These equivalent circuit analyzer elements are shown
connected by dotted lines in FIG. 3, and are further described below.
A frequency selection means 42 may comprise front panel switch controls, digital memory means or other related devices suitable for selecting one or more sinusoidal operating frequencies. Each
selected frequency is passed to a signal generator means 44 and also to an impedance analyzer memory means 46. Impedance analyzer memory means 46 merely receives the frequency selection and stores it
in memory for later use with concurrent complex body impedance data. Signal generator means 44 preferably comprises a frequency sequencing means 48, a frequency memory means 50 and a generator memory
means 52 as shown in FIG. 4. Frequency sequencing means 48 selects from among a plurality of operating frequencies stored in memory means 50 and 52. Frequency memory means 50 provides the necessary
clocking pulses to generator memory means 52, which then provides a train of excitation current pulses making up a sinusoidal excitation current in response to the commands received from frequency
sequencing means 48 and frequency memory means 50.
Returning to FIG. 3, a current limiting means 54 is interposed in series with excitation current samples 12 to prevent the excitation current amplitude from exceeding biologically safe levels.
Excitation current samples 12 are then imposed on object 30 at terminals 40 and a response detector means 56 is connected across terminals 40 to measure response signal voltage 18.
First and second analog-to-digital (A/D) conversion means 58 and 60 serve to convert excitation signal current 10 and response signal voltage 18 from analog form to digital form in a manner known in
the art. The operation of A/D means 58 and 60 is synchronized with the operation of signal generator means 44 by clocking means (not shown) to ensure that the leading and lagging edges of the digital
pulses created by the three circuits are synchronous in time to within less than one microsecond.
The output of first A/D means 58 presents excitation samples 12 to the input of a quadrature delay means 62 and the first input of a first multiplier means 64. The output of second A/D means 60
presents response samples 20 to the second input of first multiplier means 64 and the first input of a second multiplier means 66. The output of quadrature delay means 62 provides delayed excitation
samples 16 that represent a digital embodiment of delayed excitation signal current 14 discussed in connection with FIG. 1. Delayed excitation signal current 14 lags excitation signal current 10 by
precisely one-fourth of a waveform cycle at the fundamental sinusoidal operating frequency. Delayed excitation samples 16 from quadrature delay means 62 are presented to the second input of second
multiplier means 66.
The output of first multiplier means 64 provides a series of pulses equal to the series of products of concurrent excitation samples 12 and response samples 20, which we have named first impedance
samples 22 in FIG. 1. Similarly, the output of second multiplier means 66 provides a series of pulses named second impedance samples 24, which is the series of products of concurrent delayed
excitation samples 16 and response samples 20. First impedance samples 22 are presented to a first averaging means 68 and second impedance samples 24 are presented to a second averaging means 70. The
outputs from first and second averaging means 68 and 70 comprise first and second mean impedance signals 26 and 28 as discussed above in connection with FIG. 1. Signals 26 and 28 are exactly
proportional to the real and imaginary components of the complex electrical impedance of object 30. Signals 26 and 28 are presented to impedance display means 72, which is preferably a multi-digit
LCD numerical display or other suitable display means known in the art.
The calibration means 74 comprises circuitry that produces a current calibration signal 76 and a voltage calibration signal 78 for use in adjusting impedance display means 72 in response to
independent calibration of excitation signal current 10 and response signal voltage 18.
Impedance display means 72 also presents the complex impedance of object 30 to impedance analyzer memory means 46 for storage together with the relevant sinusoidal frequency value presented by signal
frequency selection means 42. These data are stored in impedance analyzer memory means 46 for use in computing a plurality of lumped constant equivalent circuit elements as discussed above in
connection with FIG. 2. These equivalent circuit elements are computed and displayed by means of a network analyzer means 80 and an equivalent circuit display means 82 disposed substantially as shown
in FIG. 3. Network analyzer means 80 acts to solve a system of linear equations for a plurality of equivalent circuit elements from a plurality of complex impedance and operating frequency data pairs
in any suitable digital or analog manner known in the art for solving linear systems of equations. The resulting equivalent circuit element values are then presented by network analyzer means 80 to
equivalent circuit display means 82 for display in any suitable manner known in the art for displaying data.
Although I prefer the simple digital quadrature multiplication and integration process discussed above, I also show in FIG. 5 the use of a cross-correlator means 84 and a convolver means 86 to accomplish a similar result in lieu of the apparatus disclosed in FIG. 3. The effect of cross-correlation of signals 18 and 10 followed by convolution of the cross-correlation function R_iv(τ) shown in FIG. 5 was briefly discussed above and is further treated in detail by K. G. Beauchamp and C. Yuen, Digital Methods for Signal Analysis, George Allen & Unwin, Ltd, London, 1979. The application of these digital signal processing techniques to the measurement of complex body impedance is suggested herein for the first time. Note that the two outputs 126 and 128 from convolver 86
contain complex impedance data over the entire frequency domain and impedance display means 72 can be configured to display impedance data at a single frequency in response to an input from frequency
selection means 42.
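As a rough digital stand-in for the correlate-then-convolve chain of FIG. 5 (not the patented implementation itself), the whole-band impedance can also be estimated by dividing the spectra of the sampled response and excitation. The function below assumes hypothetical, noise-free sample arrays of equal length.

```python
import numpy as np

def impedance_spectrum(i_samples, v_samples, sample_rate):
    """Estimate Z(f) across the measured band from sampled excitation current
    and response voltage (frequency-domain sketch, assumptions as above)."""
    I = np.fft.rfft(i_samples)
    V = np.fft.rfft(v_samples)
    freqs = np.fft.rfftfreq(len(i_samples), d=1.0 / sample_rate)
    guard = 1e-12 * np.max(np.abs(I))      # crude guard against nearly empty bins
    Z = V / (I + guard)
    return freqs, Z
```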
Obviously, other embodiments and modifications of my invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, my invention is to be limited only by
the following claims which include all such obvious embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings. | {"url":"http://www.google.com/patents/US5280429?dq=7565338","timestamp":"2014-04-23T09:32:50Z","content_type":null,"content_length":"131381","record_id":"<urn:uuid:ee496f66-6eb1-48bc-b232-a036cf8c1f18>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
cbse class 9th test paper chapter triangle
CLASS 9TH CBSE MATHS CHAPTER - TRIANGLE CBSE TEST PAPER-1
1. In Δ PQR, PQ = PR and S and T are points on PR and PQ such that ∠PQS = ∠PRT. Prove that Δ PQS ≅ Δ PRT.
2. Two lines AB and CD intersect each other at the point O such that BC || DA and BC = DA. Show that O is the midpoint of both the line-segments AB and CD( join B-C and A-D)
3. In triangle PQR, PQ > PR and QS and RS are the bisectors of ∠Q and ∠R, respectively. Show that SQ > SR.
4. ABC is an isosceles triangle with AB = AC and BD and CE are its two medians. Show that BD = CE.
5. D and E are points on side BC of a Δ ABC such that BD = CE and AD = AE. Show that Δ ABD ≅ Δ ACE.
6. CDE is an equilateral triangle formed on a side CD of a square ABCD (join AE and BE). Show that Δ ADE ≅ Δ BCE.
7. BA ⊥ AC, DE ⊥ DF such that BA = DE and BF = EC. Show that Δ ABC ≅ Δ DEF
8. Q is a point on the side SR of a Δ PSR such that PQ = PR. Prove that PS > PQ.
9. S is any point on side QR of a Δ PQR. Show that: PQ + QR + RP > 2 PS
10. D is any point on side AC of a Δ ABC with AB = AC. Show that CD < BD.
11. l || m and M is the mid-point of a line segment AB. Show that M is also the mid-point of any line segment CD, having its end points on l and m, respectively.
12. Bisectors of the angles B and C of an isosceles triangle with AB = AC intersect each other at O. BO is produced to a point M. Prove that ∠MOC =∠ABC.
13. Bisectors of the angles B and C of an isosceles triangle ABC with AB = AC intersect each other at O. Show that external angle adjacent to ∠ABC is equal to ∠BOC.
14. AD is the bisector of ∠BAC. of DABC . Prove that AB > BD.
15. ABC is a right triangle and right angled at B such that ∠BCA = 2 ∠BAC. AD perpendicular to BC. Show that hypotenuse AC = 2 BC.
16. Prove that if in two triangles two angles and the included side of one triangle are equal to two angles and the included side of the other triangle, then the two triangles are congruent.
17. If the bisector of an angle of a triangle also bisects the opposite side, prove that the triangle is isosceles.
18. S is any point in the interior of Δ PQR. Show that SQ + SR < PQ + PR. { Produce QS to intersect PR at T}
CBSE TEST PAPER-2
1 marks questions
1. Which of the following is not a criterion for congruence of triangles?
(A) SAS (B) ASA (C) SSA (D) SSS
2 . If AB = QR, BC = PR and CA = PQ, then
(A) Δ ABC ≅ Δ PQR (B) Δ CBA ≅ Δ PRQ (C) Δ BAC ≅ Δ RPQ (D) Δ PQR ≅ Δ BCA
3 . In Δ ABC, AB = AC and ∠B = 50°. Then ∠C is equal to
(A) 40° (B) 50° (C) 80° (D) 130°
2 marks questions
1. In triangles ABC and PQR, ∠A = ∠Q and ∠B = ∠R. Which side of Δ PQR should be equal to side AB of Δ ABC so that the two triangles are congruent? Give reason for your answer.
2 . In triangles ABC and PQR, ∠A = ∠Q and ∠B = ∠R. Which side of Δ PQR should be equal to side BC of Δ ABC so that the two triangles are congruent? Give reason for your answer.
3 . AB is a line segment and line l is its perpendicular bisector. If a point P lies on l, show that P is equidistant from A and B.
3 marks questions
1. S is any point in the interior of Δ PQR. Show that SQ + SR < PQ + PR.
2. If the bisector of an angle of a triangle also bisects the opposite side, prove that the triangle is isosceles.
3. P is a point on the bisector of ∠ABC. If the line through P, parallel to BA meets BC at Q, prove that BPQ is an isosceles triangle.
4 marks questions
1. Prove that sum of any two sides of a triangle is greater than twice the median with respect to the third side
2. Show that in a quadrilateral AB + BC + CD + DA < 2 (BD + AC)
3. In a right triangle, prove that the line-segment joining the mid-point of the hypotenuse to the opposite vertex is half the hypotenuse.
The CBSE Class 12th Mathematics Question Paper Solution 2014, prepared by top CBSE Board school teachers and by faculty from many well-known coaching institutes, is coming soon.
IX Science and maths Original Sample Paper with OTBA 2014
IX Maths Original Sample Paper with OTBA 2014
IX Maths SA-2 Original Paper with OTBA 2014-1
IX Maths SA-2 Original Paper with OTBA 2014-2
IX Maths SA-2 Original Paper with OTBA 2014-3
IX Maths SA-2 Original Paper with OTBA 2014-4
IX Maths SA-2 Original Paper with OTBA 2014-5
CBSE Class IX Maths Original Question Paper 2013
CBSE IX Maths SA-2 Original Paper 2013-1
CBSE IX Maths SA-2 Original Paper 2013-2
CBSE IX Maths SA-2 Original Paper 2013-3
IX Science Original Sample Paper with OTBA 2014
IX Science SA-2 Original Paper with OTBA 2014-1
IX Science SA-2 Original Paper with OTBA 2014-2
IX Science SA-2 Original Paper with OTBA 2014-3
CBSE IX Science SA-2 Original Paper Of DAV 2014-4
IX Science SA-2 Original Paper with OTBA 2014-5
IX Science SA-2 Original Paper with OTBA 2014-6
IX Science SA-2 Original Paper with OTBA 2014-7
IX Science SA-2 Original Paper with OTBA 2014-8
CBSE Class IX Original Paper Of Session 2013
CBSE IX Science SA-2 Original Paper 2013-1
CBSE IX Science SA-2 Original Paper 2013-2
CBSE IX Science SA-2 Original Paper 2013-3
CBSE-XII-2014 CBSE Board
CHEMISTRY Paper & Solution Code : 56/3
Time : 3 Hrs. Max. Marks : 70
General Instructions :
(i) All questions are compulsory.
(ii) Questions number 1 to 8 are very short answer questions and carry 1 mark each.
(iii) Questions 9 to 18 are short answer questions and carry 2 marks each.
(iv) Question number 19 to 27 are also short-answer questions and carry 3 marks each.
(v) Question number 28 to 30 are long-answer questions and carry 5 marks each.
(vi) Use Log Tables, if necessary. Use of calculators is not allowed
Solved Chemistry Question Paper 2014 CBSE Board Class XII
Physics Paper & Solution Code : 55/3
Time : 3 Hrs. Max. Marks : 70
General Instructions :
(i) All questions are compulsory.
(ii) There are 30 questions in total. Questions 1 to 8 are very short answer type questions and carry one mark each.
(iii) Questions 9 to 18 carry two marks each, questions 19 to 27 carry three marks each and questions 28 to 30 carry five marks each.
(iv) One of the questions carrying three marks weightage is value based question
(v) There is no overall choice.
However, an internal choice has been provided in one question of two marks, one question of three marks and all three questions of five marks weightage each. You have to attempt only one of the choices in such questions.
Courtesy: Career Point
Class-XII Subject- English (Core)
Read the passage carefully and answer the questions that follow : 12
1. Too many parents these days can't say no. As a result, they find themselves raising children who respond
greedily to the advertisements aimed right at them. Even getting what they want doesn't satisfy some kids; they only want more. Now, a growing number of psychologists, educators and parents think
it's time to stop
the madness and start teaching kids about what's really important : values like hard work, contentment, honesty and compassion. The struggle to set limits has never been tougher - and the stakes have
never been higher. One recent study of adults who were overindulged as children, paints a discouraging picture of their
future : when given too much too soon, they grow up to be adults who have difficulty coping with life's disappointments. They also have a distorted sense of entitlement that gets in the way of
success in the workplace and in relationships.
Courtesy: Career Point
Solution Board Examinations 2013 Class-XII English (Core)
IX Sanskrit and Hindi Original Question Paper New Term-2
IX Hindi Original Paper with OTBA 2014-1
IX Hindi Original Paper with OTBA 2014-2
IX Hindi Original Paper with OTBA 2014-3
IX Hindi Original Paper with OTBA 2014-4
IX Sanskrit Original Paper with OTBA 2014-1
IX Sanskrit Original Paper with OTBA 2014-2
IX English Original Paper with OTBA 2014
CBSE Class XII Mathematics Solved Model Paper - 2013-14 New
CBSE - XII
Solved Model Exam Paper of CBSE 12th, 2013
Subject : Mathematics FM: 100
General Instructions :
All questions are compulsory.
The question paper consists of 29 questions divided into three sections A, B and C. Section A comprises of 10 questions of one mark each, Section B comprises of 12 questions of four marks each and
Section C
comprises of 7 questions of six marks each.
All questions in Section A are to be answered in one word, one sentence or as per the exact requirement of the questions.
There is no overall choice. However, internal choice has been provided in 4 questions of four marks each and 2 questions of six marks each. You have to attempt only one of the alternatives in all
such questions.
Use of calculators is not permitted. You may ask for logarithmic tables, if required.
IX(9th) Sample Papers for September 2014 CBSE Board Exam
IX Maths Sample Question paper SA-1 for 2014-2015-1
SA-1 Class IX Maths Sample paper 2014-15-2
SA 1 Class IX Science Sample Question paper 2014-15
SA 1 Class IX Social Science Sample Question paper 2014-15
SA 1 Sample paper Class IX Hindi 2014-15
SA 1 Sample paper Class IX English 2014-15
Solution of board paper 2014 maths class 10 SA-II Set-30/1
Solution of board paper 2014 maths set 30/1 class 10 [by pioneermathematics]
IX(9th) Sample Papers for March 2014 CBSE Board Exam
Solved Sample paper for CBSE Board
English Communicative (Class IX )
Hindi course A (Class IX)
Hindi Course B (Class IX)
Sanskrit (Class IX)
Mathematics (Class IX)
Science (Class IX)
Social Science (Class IX)
Download following sample and start practicing to excel in CBSE/NCERT Board exam
2014 Term-2 papers for Class IX [These are fresh]
IX Maths Original Paper with OTBA 2014
IX Science Original Paper with OTBA 2014
IX Social Science Original Paper with OTBA 2014
IX Hindi Original Paper with OTBA 2014
IX English Original Paper with OTBA 2014
Carrollton, GA Prealgebra Tutor
Find a Carrollton, GA Prealgebra Tutor
...I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for educational fields. During this course, I went to the local elementary school and sat in
with a class everyday for a semester.
40 Subjects: including prealgebra, English, reading, Spanish
My name is James and I am currently working to receive my education certificate from The University of West Georgia. Although my degree is in Music, I have also been well educated in physical
science, biology, human anatomy and physiology, literature, history, and mathematics. While at Auburn I pu...
26 Subjects: including prealgebra, reading, English, writing
...I have tutored many students in algebra who are high school students and students in college. I enjoy teaching math and helping others learn math. I will work with students and try to find the
easiest and most comfortable way for them to learn.
9 Subjects: including prealgebra, calculus, algebra 1, geometry
...Helped two twins with the math portion of the PSAT. Taught similar topics as a GMAT instructor for three years. Tutored ACT math topics during high school, college, and as a GMAT instructor for
three years.
28 Subjects: including prealgebra, calculus, finance, economics
I am certified and have taught both middle school and elementary grades. I also have experience working with special needs students-ADHD, ADD, and Aspergers. I can tutor in english, science,
reading, and social studies.
48 Subjects: including prealgebra, English, reading, writing | {"url":"http://www.purplemath.com/carrollton_ga_prealgebra_tutors.php","timestamp":"2014-04-20T21:14:38Z","content_type":null,"content_length":"23989","record_id":"<urn:uuid:4a2c4d2b-0ed5-4956-b334-b99d7b3055e1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00441-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brazilian Archives of Biology and Technology
Print version ISSN 1516-8913
Braz. Arch. Biol. Technol. vol.54 no.6 Curitiba Nov./Dec. 2011
FOOD/FEED SCIENCE AND TECHNOLOGY
Growth characteristics modeling of Bifidobacterium bifidum using RSM and ANN
Ganga S. Meena; Suneel Gupta; Gautam C. Majumdar; Rintu Banerjee^*
Microbial Biotechnology and Downstream Processing Laboratory; Department of Agricultural and Food Engineering; Indian Institute of Technology; Kharagpur-721 302 - India
The aim of this work was to optimize the biomass production by Bifidobacterium bifidum 255 using the response surface methodology (RSM) and artificial neural network (ANN) both coupled with GA. To
develop the empirical model for the yield of probiotic bacteria, additional carbon and nitrogen content, inoculum size, age, temperature and pH were selected as the parameters. Models were developed
using ¼ fractional factorial design (FFD) of the experiments with the selected parameters. The normalized percentage mean squared error obtained from the ANN and RSM models were 0.05 and 0.1%,
respectively. Regression coefficient (R^2) of the ANN model showed higher prediction accuracy compared to that of the RSM model. The empirical yield model (for both ANN and RSM) obtained were
utilized as the objective functions to be maximized with the help of genetic algorithm. The optimal conditions for the maximal biomass yield were 37.4 °C, pH 7.09, inoculum volume 1.97 ml, inoculum
age 58.58 h, carbon content 41.74% (w/v), and nitrogen content 46.23% (w/v). The work reported is a novel concept of combining the statistical modeling and evolutionary optimization for an improved
yield of cell mass of B. bifidum 255.
Key words: Probiotics, response surface methodology (RSM), FFD, artificial neural network (ANN), genetic algorithms (GA)
Bifidobacterium is the most prominent member of plethora class of bacterial species with probiotic properties. The popularity of this group of bacteria is based on the millennia of use in the food
and feed that are used in the probiotic dairy drinks and yoghurts since long (Sanders, 1999). At present, in India, the production of probiotics is reported to grow annually about 22.6 % until 2015
and the market of the probiotics is ~20.6 million rupees (€320,000). The market demand indicates that it is economically viable product. The probiotics have immense application in the food/healthcare
sector. There are plenty of industries venturing into the production and selling of the probiotics sachets to meet the increasing demand. Most common bacteria targeted by the industries for the
probiotic sachet preparation include Bifidobacterium. Microbial colonization of the human intestine starts immediately after birth (Gibson and Roberfroid, 1995). The predominant bacteria at the
infancy stage are Bifidobacteria which colonize within the first 4-7 days of birth with the numbers ranging from 10^9-10^10 CFU/g of faeces in breast-fed infants (Gismondo et al., 1999).
Bifidobacterium sp. is one of the major microorganisms in the gastrointestinal tract flora of the children and adults. These bacteria have a strong stimulatory effect for the normal development of
microbiota and maturation of gut associated lymphoid tissue (Schezenmeir and De Vrese, 2001). Probiotic bacteria such as Bifidobacterium and Lactobacillus sp. in the gastrointestinal tract can play
an important role in promoting the human health (Savage, 1977; Mitsuoka, 1990). These microorganisms can contribute to digestion, immune stimulation and inhibition of the pathogens such as
Bacteroides, Escherichia, Clostriduim and Proteus which are potentially harmful bacteria found in the gastrointestinal tract (Ziemer and Gibson, 1998).
The primary mechanism for probiotic action is known as competitive colonization or competitive suppression. It is best described as the proliferation of the probiotic bacteria in the human intestine,
leaving little space for the growth of any pathogens (Ballongue, 1992; Biavati et al., 2000). To develop the growth model of probiotic bacteria through the traditional method, i.e. one variable
at-a-time is time consuming and interactions of different variables can also affect the yield. Unlike the conventional optimization, the statistical optimization methods can take into account the
interactions of the variables in generating the process response. Process optimization through the statistical method is a technique in which changes or adjustments are made in a process to get
better results (Myers and Montgomery, 2002). There are several techniques for process optimization, i.e., Response Surface Methodology (RSM), Artificial Neural Networks (ANN), Genetic Algorithms
(GA), etc. In these engineering applications, a response of interest is usually influenced by several variables and the objective of the engineering applications is to find the variables that can
optimize the response. RSM is a tool with which we find the optimal process parameters that produce a maximum or minimum value of the response, and it represents the direct and interactive effects of
the process parameters through two and three-dimensional plots (Gangadharan et al., 2008). Artificial neural networks are computational models of nervous systems. Natural organisms, however, do not
possess only nervous systems but also genetic information stored in the nucleus of their cells (genotype). The nervous system is part of the phenotype which is derived from this genotype through the
process of development (Rajasekaran and Vijaylakshmi, 2004). Using the method of neural networks (NN), the relationship between a set of independent variables X and the dependent variables Y can be
obtained. From the given pairs of input X and output Y data, neural network directly learns, and then develops a relationship between them but does not yield any mathematical equation relating the
variables. After the learning, this network is able to predict the correct output from an input data set that has not been previously used during the learning. Genetic algorithms (GA) are a tool by
which the optimization problems can be accurately solved within a limited use of computer time (Das, 2005). The objective of this work was to optimize and improve the yield of probiotic bacteria,
Bifidobacterium bifidum by optimizing the growth parameters such as temperature, pH, inoculum volume, inoculum age and additional effect of different carbon and nitrogen sources with the help of
Response Surface Methodology, Artificial Neural Network and Genetic Algorithms.
Organism and growth condition
Pure culture of Bifidobacterium bifidum 255 was obtained from the National Collection of Dairy Cultures (NCDC) Karnal, Haryana (India). The culture was grown in a modified MRS media containing 1% (w/
v) sodium thiosulphate at 30 °C under anaerobic conditions. Biomass growth was determined by measuring the optical density (OD) at 600 nm.
Experimental design
Selection of initial parameters
For the selection of initial parameters, 'one variable at a time method' was used. The different variables viz. temperature, pH, volume of inoculum, age of inoculum and additional carbon and nitrogen
sources were selected for growth of B. bifidum.
Empirical model development
To find out the effect of different growth parameters, the predicted value of the bacterial growth, Yp, was obtained by conducting experiments on different combinations of independent variables (growth parameters) specified by a standard experimental design. During the experiments, the 'response' or values of 'dependent variables' obtained from each of the combinations of
independent variables was measured. A mathematical relationship between the independent and dependent variables was developed. This relationship was called 'model'. Using this model, the predicted
values of responses were found out within the domain of limiting values of independent variables. For the different growth parameters, a polynomial model was developed between the growth and growth
parameters to find out the following relationship between the coded values x[1], x[2], x[3], x[4], x[5] and x[6] of independent variables and dependent variable Yp, as shown below
where b[0], b[1], b[2], ... etc. are the regression constants.
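In the usual RSM formulation this polynomial is the full second-order model, Yp = b[0] + Σ b[i]·x[i] + Σ b[ii]·x[i]² + Σ b[ij]·x[i]·x[j] (stated here as an assumption, since only the regression constants are named above). A small sketch of fitting such a surface by ordinary least squares is given below; X (coded factor settings) and y (measured OD values) are hypothetical placeholders, not the authors' data or their MATLAB code.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Design matrix for a full second-order response surface in the coded
    variables x1..x6: intercept, linear, squared and two-way interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]                                   # intercept b0
    cols += [X[:, j] for j in range(k)]                   # linear terms
    cols += [X[:, j] ** 2 for j in range(k)]              # squared terms
    cols += [X[:, a] * X[:, b] for a, b in combinations(range(k), 2)]  # interactions
    return np.column_stack(cols)

# A = quadratic_design_matrix(X)
# b, *_ = np.linalg.lstsq(A, y, rcond=None)   # the regression constants b[0], b[1], ...
```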
Experimental modeling
Fractional factorial design
Using a two-level (+1 and -1) factorial design, the two values of l and s for the two sacrificing interactions were l[1], s[1], l[2] and s[2]. With the help of the factorial design, the s values were identified as (s[1]=0, s[2]=0), (s[1]=0, s[2]=1), (s[1]=1, s[2]=0), and (s[1]=1, s[2]=1). In this study, all the experiments were conducted according to the s[1]=0 and s[2]=0 design.
Neural Network modeling
ANN chosen was a radial basis function network with supervised learning. The model was based on feed forward back propagation training method. In this process, the network computed the error between
the desired output (predicted) and the actual (experimental) output. It trained the network to make adjustments to minimize the error and back propagate the same.
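A minimal stand-in for such a network — six inputs, a small hidden layer, one output — could be set up with scikit-learn as below. This is only an illustrative sketch (the authors worked in MATLAB 7.0, and the text mentions both radial-basis and feed-forward back-propagation variants); the four hidden neurons follow the count settled on later in the paper, and X, y are hypothetical placeholders for the training data.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per run holding the six growth parameters; y: measured OD values
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(4,),   # four hidden neurons, as reported below
                 activation="tanh",
                 solver="lbfgs",            # suits small data sets
                 max_iter=5000,
                 random_state=0),
)
# ann.fit(X, y)
# predicted_od = ann.predict(X_new)
```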
Genetic Algorithms
In this optimization study, GA was applied to the developed ANN based model as shown in Fig 2. The prime objective of this study was to maximize the biomass yield of B. bifidum by monitoring the growth parameters such as temperature, pH, inoculum volume, inoculum age, carbon % and nitrogen %. The task was posed as a minimization problem, as is usual in such optimization studies. Genetic optimization continued till the termination condition, i.e. the maximum biomass yield, was reached.
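A toy real-coded GA of this kind — generate a population of parameter sets, score each with the fitted yield model, and keep the best until the yield stops improving — might look like the sketch below. The predict function (the fitted RSM/ANN model) and the parameter bounds are assumed to be supplied by the caller, and the operators chosen (truncation selection, blend crossover, Gaussian mutation) are one simple option among many, not the authors' MATLAB settings.

```python
import numpy as np

def genetic_maximize(predict, bounds, pop_size=40, generations=200,
                     mutation_sigma=0.05, seed=0):
    """Tiny real-coded GA that maximizes predict(x) for x within bounds."""
    rng = np.random.default_rng(seed)
    lows = np.array([lo for lo, hi in bounds], dtype=float)
    highs = np.array([hi for lo, hi in bounds], dtype=float)
    pop = rng.uniform(lows, highs, size=(pop_size, len(bounds)))

    for _ in range(generations):
        fitness = np.array([predict(ind) for ind in pop])
        keep = pop[np.argsort(fitness)[::-1][: pop_size // 2]]    # truncation selection
        children = []
        while len(keep) + len(children) < pop_size:
            a, b = keep[rng.integers(len(keep), size=2)]
            w = rng.random(len(bounds))
            child = w * a + (1.0 - w) * b                          # blend crossover
            child += rng.normal(0.0, mutation_sigma, len(bounds)) * (highs - lows)
            children.append(np.clip(child, lows, highs))
        pop = np.vstack([keep, np.array(children)])

    fitness = np.array([predict(ind) for ind in pop])
    best = int(np.argmax(fitness))
    return pop[best], fitness[best]
```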
Software used
For proper execution of ANN and GA, MATLAB 7.0 was used to develop the empirical model.
Selection of initial parameters
Fig 3 (A-F) shows the effect of temperature, initial pH, initial inoculum volume, initial incubation period, supplementation of additional carbon and nitrogen sources on the growth of the bacterial
culture. All these parameters, their variation and optimum values are given in Table 1.
Empirical model development
From the above results, the maximum and minimum values of six independent parameters for B. bifidum were fixed as shown in Table 2. For developing the model between coded values x[1], x[2], x[3], x
[4], x[5], x[6] of independent variables and dependent variable Yp, the experiments were conducted according to the fractional factorial design. All these combinations have been given in Table 3 with their corresponding l and s values. The various combinations of process variables found at s[1]=0, s[2]=0 are shown in Table 4 with their experimental values Ye for the growth of B. bifidum.
The experimental data were fitted to the full quadratic equation. The design matrix and the fitness of each term were analyzed by means of the ANOVA (Kumari et al., 2008). Figure 4 shows the
corresponding model coefficients (R^2 = 0.840) together with the regression coefficient of determination, which is a measure of how well the regression model can be made to fit the raw data.
A self-organizing feature map network was used to predict the growth condition parameters. Different factors, viz. temperature, pH, inoculum volume, Inoculum age, additional carbon and nitrogen
sources were used as each unit of input layer. The output layer was composed of one response variable, the growth of B. bifidum. A set of factors was used for training and fed into the computer.
Several iterations were conducted with different numbers of neurons of hidden layer in order to determine the optimal ANN structure.
The optimum number of neurons in the hidden layer was iteratively determined by changing the number of neurons. This was started with two neurons and the number of neurons was increased up to six.
The least MSE value and a good prediction of the outputs of both training and validation sets were obtained with four neurons in the hidden layer (Dutta et al., 2004). The R^2 value between the
actual and estimated responses was determined as 0.930 (Fig. 5). In ANN modeling, the replicates at center point did not improve the prediction capability of the network because of the similar inputs.
Using MATLAB 7.0, the constants of regression equation and predicted value of dependent variable (OD) were found out. The 'model' which was obtained for B. bifidum 255 is given below.
The predicted values of the dependent variable and the corresponding experimental values for B. bifidum 255 are shown in Table 5. Genetic algorithms were applied on the data obtained from the neural network using MATLAB 7.0. The optimum values or combination of different process parameters at which the bacterial growth, measured by the optical density (OD), was maximum for B. bifidum are given in Table 6.
There are several reports on the optimization of growth of the probiotic bacteria which are very close to the present result. Kiviharju et al. (2005) reported maximum production of B.longum at 40ºC.
Ram and Chander, (2003) reported maximum growth of Bifidobacteria at 37 ºC and pH 7.0. Laxmi et al. (2011) reported the addition of carbon and nitrogen sources for enhanced growth of Bifidobacterium
sp. In the present study, the RSM/ANN coupled with GA methodology resulted in an enhanced biomass yield. This is a new approach not reported earlier. However, optimization studies based on the ANN-GA
for improved performance of biological systems have been reported earlier by Haider et al. (2008) and Sivapathasekaran et al. (2010).
In the present study, MATLAB 7.0 was used to fit the experimental values into a regression equation which predicted the yield of B. bifidum 255. The RSM and ANN methodologies coupled with GA were
used for optimizing the input parameters. Both models provided similar-quality predictions for the above independent variables in terms of the growth conditions, with ANN showing more accuracy in
estimation. The regression coefficients (R^2) of ANN and RSM were 0.9368 and 0.8838, respectively, which clearly reflected that the ANN was better than RSM. The optimum values obtained after the GA
study were 37.4 °C, pH 7.09, inoculum volume 1.97 ml, inoculum age 58.58 h, carbon content 41.74% (w/v) and nitrogen content 46.23% (w/v), resulting in the maximum yield of probiotic bacteria. It was
further noticed that ANN coupled with GA was the best combination for model development of B. bifidum.
Ballongue, J. (1992), Bifidobacteria and probiotic action probiotics, the scientific basis. J. Dairy Sci., 7, 357-413.
Biavati, B.; Vescovo, M.; Torriani, S. and Bottazzi, V. (2000), Bifidobacteria: history, ecology, physiology and applications. Analyt. Microbiol., 50, 117-131.
Das, H. (2005), Handbook of food processing operations analysis. Asian Books Private Limited, New Delhi.
Dutta, J.R.; Dutta, P.K. and Banerjee, R. (2004), Optimization of culture parameters for extracellular protease production from a newly isolated Pseudomonas sp. using response surface and artificial neural network models. Process Biochem., 39, 2193-98.
Gangadharan, D.; Sivaramakrishnan, S.; Nampoothiri, K.M.; Sukumaran, R.K. and Pandey, A. (2008), Response surface methodology for the optimization of alpha amylase production by Bacillus amyloliquefaciens. Bioresource Technol., 99, 4597-02.
Gibson, G.R. and Roberfroid, M. (1995), Dietary modulation of the human colonic microbiota: introducing the concept of prebiotics. J. Nutr., 125, 1401-1412.
Gismondo, M.R.; Drago, L. and Lombardi, A. (1999), Review of probiotics available to modify gastrointestinal flora. Int. J. Antimicrob. Agents, 12, 287-292.
Goldberg, D. (1989), Genetic Algorithms in Search, Optimization and Machine Learning. Pearson Education, Asia.
Gulati, T.; Chakrabarti, M.; Singh, A.; Duvuuri, M. and Banerjee, R. (2010), Comparative study of response surface methodology, artificial neural network and genetic algorithms for optimization of soybean hydration. Food Technol. Biotech., 48, 11-18.
Haider, M.A.; Pakshirajan, K.; Singh, A. and Chaudhury, S. (2008), Artificial neural network-genetic algorithm approach to optimize media constituents for enhancing lipase production by a soil microorganism. Appl. Biochem. Biotechnol., 144, 225-235.
Kiviharju, K.; Leisola, M. and Eerikäinen, T. (2005), Optimization of a Bifidobacterium longum production process. J. Biotechnol., 117, 299-308.
Kumari, K.S.; Babu, I.S. and Rao, G.H. (2008), Process optimization for citric acid production from raw glycerol using response surface methodology. Indian J. Biotech., 7, 496-501.
Laxmi, N.P.; Mutamed, M.A. and Nagendra, P.S. (2011), Effect of carbon and nitrogen sources on growth of Bifidobacterium animalis Bb12 and Lactobacillus delbrueckii ssp. bulgaricus ATCC 11842 and production of β-galactosidase under different culture conditions. Int. Food Res. J., 18, 373-380.
Mitsuoka, T. (1990), Bifidobacteria and their role in human health. J. Ind. Microbiol., 6, 263-268.
Myers, R.M. and Montgomery, D.C. (2002), Response surface methodology. John Wiley and Sons, Inc., New York.
Rajasekaran, S. and Vijaylakshmi, P.G.S. (2004), Neural networks, fuzzy logic and genetic algorithms. Prentice Hall of India, New Delhi.
Ram, C. and Chander, H. (2003), Optimization of culture conditions of probiotic bifidobacteria for maximal adhesion to hexadecane. World J. Microbiol. Biotechnol., 19, 407-410.
Sanders, M.E. (1999), Probiotics. Food Biotechnol., 53, 67-77.
Savage, D. (1977), Microbial ecology of the gastrointestinal tract. J. Microbiol., 31, 107-133.
Schezenmeir, J. and De Vrese, M. (2001), Probiotics, prebiotics, and synbiotics - approaching a definition. Am. J. Clin. Nutr., 73, 361-364.
Sen, R. and Babu, K.S. (2005), Modeling and optimization of the process conditions for biomass production and sporulation of a probiotic culture. Process Biochem., 40, 2531-38.
Sivapathasekaran, C.; Mukherjee, S.; Ray, A.; Gupta, A. and Sen, R. (2010), Artificial neural network modeling and genetic algorithm based medium optimization for the improved production of marine biosurfactant. Bioresource Technol., 101, 1884-87.
Ziemer, C.J. and Gibson, G.R. (1998), An overview of probiotics, prebiotics and synbiotics in the functional food concept: perspectives and future strategies. Int. Dairy J., 8, 473-479.
Received: June 09, 2010
Revised: December 28, 2010
Accepted: September 12, 2011.
* Author for correspondence: rb@iitkgp.ac.in | {"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1516-89132011000600023&lng=pt&nrm=iso&tlng=en","timestamp":"2014-04-21T05:04:56Z","content_type":null,"content_length":"51725","record_id":"<urn:uuid:df907eb9-2649-47c4-945d-74e622409342>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
Story Problems: Real, Realistic, Theoretical
Date: 10/24/2002 at 21:45:09
From: Jenifer Garner
Subject: Theoretical, realistic and real problems
I am trying to get the definitions for theoretical, realistic, and
real problems to be able to determine different types of story problems.
Date: 10/25/2002 at 17:26:09
From: Doctor Achilles
Subject: Re: Theoretical, realistic and real problems
Hi Jenifer,
Thanks for writing to Dr. Math.
It's easiest to start with real and work from there.
A real story problem is something that actually did happen or is
happening. For example, this morning I really had to be at work by
9:15 a.m. It really takes me 25 minutes to bike to work, 10 minutes to
eat breakfast, and 35 minutes to shower and get dressed. What time
should I have set my alarm for?
A realistic story problem is something that could happen, but did not
actually happen. For example, "Suzie went to the store and noticed
that there was a sale on 18-packs of eggs for $3. The store also sells
individual eggs for $0.25 each. She needs 14 eggs for a recipe, so any
eggs she buys over the 14 she needs will be wasted. Is it cheaper for
Suzie to buy one 18-pack or 14 individual eggs?" That problem is not
real because Suzie is just someone I made up and I also made up the
store, the eggs, the prices, and the recipe. But it is something that
COULD have happened, so it is REALISTIC.
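A quick check of the (made-up) numbers: 14 individual eggs would cost
14 x $0.25 = $3.50, while one 18-pack costs $3.00, so the 18-pack is the
cheaper choice even though 4 of its eggs go to waste.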
A theoretical story problem is a little bit stranger. Here's an
example: "Normally, when Tom goes bowling, it takes 4 seconds from
when he throws the ball until it hits the pins. However, today there
is a wizard in the bowling alley. The wizard casts a spell on the
bowling ball so that it doesn't go at a constant speed any more, but
instead it goes like this: for the first second, the ball goes at its
normal speed, for the second second, it goes at half the normal speed,
for the third second, it goes at 1/8 the normal speed, for the fourth
second it goes at 1/16 its normal speed, and so on. It doesn't
accelerate; rather, it magically changes from one speed to a slower
speed every second. Under this magic spell, how long will it take a
ball Tom throws to reach the pins?" That problem is pure fantasy. Not
only is it NOT something that ever actually happened, it isn't even
something that ever COULD happen. So it isn't even realistic, it's
just plain theoretical.
I hope this helps. If you have other questions or you'd like to talk
about this some more, please write back.
- Doctor Achilles, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/61594.html","timestamp":"2014-04-19T08:45:31Z","content_type":null,"content_length":"7752","record_id":"<urn:uuid:2984d746-f4af-4fa8-a696-5828d7c7121f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00370-ip-10-147-4-33.ec2.internal.warc.gz"} |
magnetic field
Authors: Lei Xu, Jin An, Chang-De Gong
The quantum Hall and longitudinal resistances in multi-terminal ferromagnetic graphene p-n junctions under a perpendicular magnetic field are investigated. In the Hall measurements, the transverse
contacts are assumed to be located at p-n interface to avoid the mixing of edge states at the interface and the resulting quantized resistances are then topologically protected. According to the
charge carrier type, the resistances in four-terminal p-n junction can be naturally divided into nine different regimes. The symmetric Hall and longitudinal resistances are observed, with lots of new
robust quantum plateaus revealed due to the competition between spin splitting and local potentials. | {"url":"http://graphenetimes.com/tag/magnetic-field/","timestamp":"2014-04-21T00:11:10Z","content_type":null,"content_length":"104456","record_id":"<urn:uuid:ae86ecc0-5a64-4c4a-8642-fad9bacbe770>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
CS 306, Computer Algorithms II
Jon Bentley, Programming Pearls, Addison-Wesley, Reading, Mass., 1986. QA 76.6 B453.
Jon Bentley, More Programming Pearls: Confessions of a Coder, Addison-Wesley, Reading, Mass., 1990. QA 76.6.B452.
These books collect Bentley's Programming Pearls columns that ran in the Communications of the ACM. Every working programmer should at least read (and understand) both volumes; after that you can decide whether or not you should buy them (I did). Bentley is particularly strong on what you do after you've designed and implemented your algorithms.
David Berlinski, The Advent of the Algorithm, Harcourt, New York, 2000. QA9.58 .B47.
A popular, historical development of the idea of an algorithm, and what that idea means today.
Gilles Brassard and Paul Bratley, Algorithmics: Theory and Practice, Prentice Hall, Englewood Cliffs, New Jersey, 1988. QA 9.6 B73.
Written for upper-level undergraduates, this book provides a good, general purpose introduction to algorithms.
Thomas Cormen, Charles Leiserson, and Ronald Rivest, Introduction to Algorithms, second edition, MIT Press, Cambridge, Mass., 1993. QA76.6.I5858.
An excellent book, thorough in both the theory and practice of algorithm design and analysis. This book serves well both the student and the working programmer.
Ronald Graham, Donald Knuth, and Oren Patashnik, Concrete Mathematics, Addison-Wesley, Reading, Mass., 1989. QA 39.2 G733.
A rigorous and detailed presentation of the mathematics behind algorithm analysis. Written for advanced graduate students, this is not a book for the faint of heart (or brain).
E. Horowitz and S. Sahni, Fundamentals of Computer Algorithms, Computer Science Press, Rockville, Maryland, 1978.
This book has the distinction of being the worst algorithms book I ever used. The algorithms are presented in a short and unhelpful fashion (all variable names are single letter and un-mnemonic, for
example), and the algorithm descriptions are terse and not well keyed to the algorithms themselves.
Donald Knuth, The Art of Computer Programming, Volume 1: Fundamental Algorithms, Second Edition, Addison-Wesley, Reading, Mass., 1973. QA 76.6 K64.
Donald Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Second Edition, Addison-Wesley, Reading, Mass., 1981. QA 76.6 K64.
Donald Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition, Addison-Wesley, Reading, Mass., 1973. QA 76.6 K64.
Even though Bill Gates has blurbs on the dust jackets of the second editions, these are still the books to have for algorithms, their design, and their analysis. Buy them, read them, use them in your
work, and savagely ridicule code produced by programmers that have done none of these things.
Udi Manber, Introduction to Algorithms, Addison-Wesley, Reading, Mass., 1989. QA 76.9 D35 M36.
An excellent book that uses recursion as the principal algorithm design technique.
Gregory Rawlins, Compared to What?, Computer Science Press, New York, New York, 1992.
A good introduction to algorithms, pitched to upper-level undergraduates. Has a good, informal presentation on asymptotic analysis (big-oh analysis) and its pitfalls.
Niklaus Wirth, Algorithms + Data Structures = Programs, Addison-Wesley, Reading, Mass., 1976. QA 76.6 W56.
Another classic, aimed at lower-level undergraduates. A book to recommend to your kid sister or brother.
This page last modified on 25 August 2008. | {"url":"http://bluehawk.monmouth.edu/~rclayton/web-pages/f08-306/biblio.html","timestamp":"2014-04-18T13:06:49Z","content_type":null,"content_length":"6461","record_id":"<urn:uuid:b5d63944-53a8-4818-b5de-5f1d9f60552d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the exact value of √(63/16)? A: 9√7/2  B: 3√7/4  C: 3√21/4  D: 9√7/8
factor 63 & 16: 63 = 9 * 7, 16 = 4 * 4
since 9 and 4 are perfect squares. we can simplify... can you do the rest ?
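Carrying that hint the rest of the way (one way to finish the simplification):

\[
\sqrt{\frac{63}{16}} \;=\; \frac{\sqrt{9\cdot 7}}{\sqrt{16}} \;=\; \frac{3\sqrt{7}}{4},
\]

which matches choice B.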
Copyright © University of Cambridge. All rights reserved.
'Chopped Dice' printed from http://nrich.maths.org/
Alison and Steve wish to make a new sort of die which can land on its faces and its corners.
They plan to start with a cube and make planar cuts across the corners to create a solid which, when rolled, has a good chance of landing on the 'corners' and the 'faces'
Consider designing such a die such that it is symmetrical -- i.e. each corner is to be cut off in the same way.
How many faces would the die have and what shape would they each be?
Draw a net of a cube and indicate accurately the lines along which the cuts are to be made.
Where would you align the cuts such that each of the new faces was of the same area?
Collaboration/cross curricular activity: Suppose that we wish to make a physical die of this sort such that there are equal probabilities of landing on each of the 'faces' and 'corners'. Discuss
with DT the possible construction of such a die and plan a way of producing the die. | {"url":"http://nrich.maths.org/7393/index?nomenu=1","timestamp":"2014-04-19T09:38:46Z","content_type":null,"content_length":"4438","record_id":"<urn:uuid:daceb60d-37c4-4bb5-8896-f3a3e3d2cb31>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
Items where Author is "Kalimuthu, S"
Number of items: 69.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (2004) Present scenario of seaweed exploitation and industry in India. Seaweed Research and Utilisation, 26 (1 & 2). pp. 47-53.
Ramalingam, J R and Kaliaperumal, N and Kalimuthu, S (2003) Commercial scale production of carrageenan from red algae. Seaweed Research and Utilisation, 25 (1 & 2). pp. 37-46.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (2003) Pilot scale field cultivation of the agarophyte Gracilaria edulis (Gmelin) Silva at Vadakadu (Rameswaram). Seaweed Research and
Utilisation, 25 (1 & 2). pp. 213-219.
Ramalingam , J R and Kaliaperumal, N and Kalimuthu, S (2002) Agar production from Gracilaria with improved qualities. Seaweed Research and Utilisation, 24 (1). pp. 25-34.
Kaliaperumal, N and Ramalingam, J R and Kalimuthu, S and Ezhilvalavan, R (2002) Seasonal changes in growth, biochemical constituents and phycocolloid of some marine algae of Mandapam coast. Seaweed
Research and Utilisation, 24 (1). pp. 73-77.
Ramalingam, J R and Kaliaperumal, N and Kalimuthu, S (2000) Seaweed exploitation in India. Seaweed Research and Utilisation, 22 (1 & 2). pp. 75-80.
Sundararajan, M and Rajendran, M and Kaliaperumal, N and Kalimuthu, S (1999) Studies on species of Halimeda from Lakshadweep. Seaweed Research and Utilisation, 21 (1&2). pp. 161-169.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Pillai, S Krishna and Muniyandi, K and Rao, K Rama and Rao, P V Subba and Thomas, P C and Zaidi, S H and
Subbaramaiah, K (1998) Seaweed resources and distribution in deep waters from Dhanushkodi to Kanyakumari, Tamilnadu. Seaweed Research and Utilisation, 20 (1 & 2). pp. 141-151.
Kaliaperumal, N and Kalimuthu, S (1997) Seaweed potential and its exploitation in India. Seaweed Research and Utilisation , 19 (1&2). pp. 33-40.
Kaliaperumal, N and Kalimuthu, S and Muniyandi, K and Ramalingam, J R and Pillai, S Krishna and Chennubhotla, V S Krishnamurthy and Rajagopalan, M S and Rao, P V Subba and Rao, K Rama and Thomas, P C
and Zaidi, S H and Subbaramaiah, K (1996) Distribution of marine algae and seagrass off Valinokkam-Kilakkarai, Tamil Nadu coast. Seaweed Research and Utilisation, 18 (1 & 2).
Rao, K Rama and Rao, P V Subba and Mal, T K and Subbaramaiah, K and Kaliaperumal, N and Kalimuthu, S and Muniyandi, K and Ramalingam, J R and Pillai, S Krishna and Chennubhotla, V S Krishnamurthy
(1996) Distribution of Seaweeds off Alantalai-Manapad and Vembar-Nallatanni Tivu in Tamil Nadu. Phykos, 35 (1 & 2). pp. 163-170.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1996) Effect of repeated harvesting on the growth of Sargasum spp and Turbiniria conoides occurring in Mandapam area. Seaweed Research and
Utilisation, 18 (1 & 2).
Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1995) Distribution of algae and seagrasses in the estuaries and backwaters of Tamil Nadu and Pondichery. Seaweed Research and Utilisation, 17 .
pp. 79-96.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Pillai, S Krishna and Subbaramaiah, K and Rao, K Rama and Rao, P V Subba (1995) Distribution of sea weeds
off Kattapadu - Tiruchendur coast, Tamil nadu. Seaweed Research and Utilisation, 17 (1&2). pp. 183-193.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1995) Economically important seaweeds. CMFRI Special Publication , 62 . pp. 1-35.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Najmuddin, M and Ramalingam, J R and Kalimuthu, S (1994) Biochemical composition of some common seaweeds from Lakshadweep. Journal of the
Marine Biological Association of India, 36 (1&2). pp. 316-319.
Kalimuthu, S and Kaliaperumal, N (1994) Commercial exploitation of seaweeds in India and need for their large scale cultivation. Proceedings of the national symposium on aquaculture for 2000 AD
November 1994 . pp. 215-219.
Kaliaperumal, N and Kalimuthu, S and Muniyandi, K (1994) Experimental cultivation of Gracilaria edulis at Valinokkam Bay. Proceedings of the national symposium on aquaculture for 2000 AD November
1994 . pp. 221-226.
Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1993) Effect of repeated harvesting on the growth of Gelidiella acerosa and Gracilaria corticata var corticata occurring at Mandapam coast.
Seaweed Research and Utilisation, 16 (1 & 2). pp. 155-160.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Muniyandi, K (1993) Growth of Gracilaria edulis in relation to environmental factors in field cultivation.
Seaweed Research and Utilisation, 16 (1 & 2). pp. 167-176.
Kaliaperumal, N and Kalimuthu, S (1993) Need for conservation of economically important seaweeds of Tamil Nadu coast and time-table for their commercial exploitation. Marine Fisheries Information
Service, Technical and Extension Series, 119 . pp. 5-11.
Rao, K Rama and Rao, P V Subba and Thomas, P C and Zaidi, S H and Subbaramaiah, K and Kaliaperumal, N and Kalimuthu, S and Muniyandi, K and Ramalingam, J R and Najmuddin, N and Chennubhotla, V S
Krishnamurthy (1993) Seaweed resources off Tamil nadu coast, Sector - IV Kilakkarai - Rameswaram island (Dhanushkodi. Seaweed Research and Utilisation , 16 (1&2). pp. 103-110.
Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1992) Distribution and seasonal changes of marine algal flora from seven localities around Mandapam. Seaweed Research and Utilisation, 15 (1 &
2). pp. 119-126.
Rao, P V Subba and Rao, K Rama and Mal , T K and Subbaramaiah, K and Kaliaperumal, N and Kalimuthu, S and Muniyandi, K and Ramalingam, J R and Chennubhotla, V S Krishnamurthy (1992) Seaweed resources
off Tamil Nadu coast: Sector II. Alanthali - Manapad and Vembar – Nallathanni Thivu. Seaweed Research and Utilisation, 15 (1 & 2). pp. 177-182.
Kaliaperumal, N and Kalimuthu, S and Muniyandi, K and Ramalingam, J R and Chennubhotla, V S Krishnamurthy and Rao, K Rama and Rao, P V Subba and Thomas, P C and Zaidi, S H and Subbaramaiah, K (1992)
Seaweed resources off Tamil Nadu coast: Sector III. Valinokkam - Kilakkarai. Seaweed Research and Utilisation, 15 (1 & 2). pp. 11-14.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1992) Studies on the agar content in Gracilaria arcuata Var. Arcuata and G. Corticata Var. Cylindrica. Seaweed Research and Utilisation, 15 (1 &
2). pp. 191-195.
Narasimham, K A and Marichamy, R and James, D B and Nammalwar, P and Victor, A C C and Kaliaperumal, N and Rajapandian, M E and Dharmaraj, S and Maheswarudu, G and Sundararajan, D and Arputharaj, M R
and Muniyandi, M and Kalimuthu, S and Rodrigo, Joseph Xavier and Selvaraj, M and Fernando, A Dasman and Rayan, V Soosai and Jesuraj, N and Muthukrishnan, P and Fernando, D Bosco (1992) Survey of
Valinokkam Bay and adjoining area to assess its suitability for integrated sea farming — A report. Marine Fisheries Information Service, Technical and Extension Series, 117 . pp. 1-8.
Kalimuthu, S and Kaliaperumal, N (1991) Unusual landings of agar yielding seaweed Gracilaria edulis in Kottaipattanam-Chinnamanai area. Marine Fisheries Information Service, Technical and Extension
Series, 108 . pp. 10-11.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1991) Commercially important seaweeds of India, their occurrence,chemical products and uses. Marine Fisheries
Information Service, Technical and Extension Series, 107 . pp. 11-16.
Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1991) Standing crop, algin and mannitol of some alginophytes of Mandapam coast. Journal of the Marine Biological Association of India, 33 (1&2).
pp. 170-174.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Muniyandi, K (1990) Environmental factors influencing the growth of Gracilaria edulis in culture. Marine
Fisheries Information Service, Technical and Extension Series`, 105 . pp. 10-11.
Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R and Chennubhotla, V S Krishnamurthy (1990) Present status of seaweed exploitation and seaweed industry in India. Marine Fisheries Information
Service, Technical and Extension Series, 103 . pp. 7-8.
Bensam, P and Kaliaperumal, N and Gandhi, V and Raju, A and Rangasamy, V S and Kalimuthu, S and Ramalingam, J R and Muniyandi, K (1990) Occurrence and growth of the commercially important red algae
in fish culture pond at Mandapam. Seaweed Research and Utilisation, 13 (2). pp. 101-108.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R and Subbaramaiah, K and Rao, K Rama and Rao, P V Subba (1990) Seaweed resources of the Tuticorin-Tiruchendur
coast, Tamil Nadu, India. Journal of the Marine Biological Association of India, 32 (1&2). pp. 146-149.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1990) Studies on phycocolloid contents from seaweeds of south Tamil Nadu coast. Seaweed Research and Utilisation, 12 (1 & 2). pp. 115-119.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R and Pillai, S Krishna and Subrahmanyan, M and Rao, K Rama and Rao, P V Subba (1989) Seaweed resources off
Tamil Nadu coast: Kattapadu- Tiruchendure. Marine Fisheries Information Service, Technical and Extension Series, 96 . pp. 10-11.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1989) Agar, algin and mannitol from some seaweeds of Lakshadweep. Journal of the Marine Biological Association of India, 31 (1&2). pp. 303-305.
Lal Mohan, R S and James, D B and Kalimuthu, S (1989) Mariculture potentials. CMFRI Bulletin Marine living resources of the union territory of Lakshadweep An Indicative Survey With Suggestions For
Development, 43 . pp. 243-247.
Kaliaperumal, N and Kaladharan, P and Kalimuthu, S (1989) Seaweed and seagrass resources. CMFRI Bulletin Marine living resources of the union territory of Lakshadweep An Indicative Survey With
Suggestions For Development, 43 . pp. 162-176.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S and Nair, P V Ramachandran (1987) Biology of the economically important Indian seaweeds-a review. Seaweed Research and
Utilisation, 10 (1). pp. 21-32.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Selvaraj, M and Najmuddin, M (1987) Chemical composition of seaweeds. CMFRI Bulletin, 41 . pp. 31-51.
Silas, E G and Kalimuthu, S (1987) Commercial exploitation of seaweeds in India. CMFRI Bulletin, 41 . pp. 55-59.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1987) Common seaweed products. CMFRI Bulletin, 41 . pp. 26-30.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1987) Economically important seaweeds. CMFRI Bulletin, 41 . pp. 3-18.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1987) Post harvest technology. CMFRI Bulletin, 41 . pp. 74-77.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R and Selvaraj, M and Najmuddin, M (1987) Seaweed culture. CMFRI Bulletin, 41 . pp. 60-74.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S (1987) Seaweed resources of India. CMFRI Bulletin, 41 . pp. 51-54.
Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R and Najmuddin, M and Kaliaperumal, N (1987) Transfer of technology on seaweed culture. CMFRI Bulletin, 41 . pp. 78-79.
Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Najmuddin, M and Panigrahy, R and Selvaraj, M (1986) Changes in growth and phycocolloid content of Gelidiella acerosa and Gracilaria edulis.
Seaweed Research and Utilisation, 9 (1 & 2). pp. 45-48.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R and Selvaraj, M (1986) Experimental field cultivation of Acanthophora spicifera in the near shore area of Gulf of Mannar. Indian Journal of
Fisheries, 33 (4). pp. 476-478.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Ramalingam, J R (1986) Growth, Phenology and Spore Shedding in Gracilaria arcuata var. arcuata (Zanardini) Umamaheswara Rao &
G. corticata var. cylindrica ( J . Agardh) Umamaheswara Rao (Rhodophyta). Indian Journal of Marine Sciences, 15 . pp. 107-110.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Ramalingam, J R and Kalimuthu, S (1986) Growth, reproduction and spore output in Gracilaria foliifera (Forsskal) boergesen and Gracilariopsis
sjoestedtii (Kylin) Dawson around Mandapam. Indian Journal of Fisheries, 33 (1). pp. 76-84.
Kaliaperumal, N and Kalimuthu, S and Ramalingam, J R (1983) Proven Technology. 7. Technology of Cultured Seaweed Production. Marine Fisheries Information Service, Technical and Extension Series, 54 .
pp. 19-20.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S and Selvaraj, M and Ramalingam, J R and Najmuddin, M (1982) Seasonal changes in growth & alginic acid & mannitol contents in
Sargassum ilicifolium (Turner) J. agardh & S. myriocystum J. agardh. Indian Journal of Marine Sciences, 11 . pp. 195-196.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1981) Seaweed recipes and other practical uses of Seaweeds. Seafood Export Journal, 13 (10). pp. 9-16.
Kalimuthu, S and Chennubhotla, V S Krishnamurthy and Selvaraj, M and Najmuddin, M and Panigrahy, R (1980) Alginic acid and mannitol contents in relation to Growth in Stoechospermum marginatum (C.
Agardh) Kuetzing. Indian Journal of Fisheries, 27 (1&2). pp. 267-268.
Kalimuthu, S (1980) Variations in growth and mannitol and alginic acid Contents of Sargassum myriocystum J. Agardh. Indian Journal of Fisheries, 27 (1&2). pp. 265-266.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1978) Culture of Gracilaria edulis in the inshore waters Of Gulf of Mannar (Mandapam). Indian Journal of Fisheries, 25 (1&2). pp.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Kalimuthu, S (1978) Seasonal Changes in Growth, Fruiting Cycle and Oospore Output in Turbinaria conoides (J. Agardh) Kützing. Botanica Marina,
21 . pp. 67-69.
Kaliaperumal, N and Chennubhotla, V S Krishnamurthy and Kalimuthu, S (1977) Growth, Reproduction and Liberation of Oospores in Turbinaria ornata (Turner) J. Agardh. Indian Journal of Marine Sciences,
6 (2). pp. 178-179.
Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Kaliaperumal, N and Ramalingam, J R (1977) Studies on the growth variation, Alginic acid and Mannitol contents in Padina gymnospora (Kuetzing)
Vickers. Seaweed Research Utilization, 2 (1). pp. 91-94.
Kaliaperumal, N and Kalimuthu, S (1976) Changes in growth, reproduction, alginic acid and Mannitol contents. Botanica Marina, 19 (3). pp. 161-178.
Rao, M Umamaheswara and Kalimuthu, S (1972) Changes in Mannitol and Alginic Acid Contents of Turbinaria ornata (TURNER) J. AGARDH in Relation to Growth and Fruiting. Botanica Marina, 15 (1). pp.
Book Section
Kalimuthu, S (2000) Seaweed exploitation and Industry in India. In: Souvenir 2000. UNSPECIFIED,(ed.) Central Marine Fisheries Research Institute, Mandapam, pp. 71-74.
Chennubhotla, V S Krishnamurthy and Kaliaperumal, N and Jayasankar, Reeta and Kalimuthu, S and Ramalingam, J R and Muniyandi, K and Selvaraj, M (2000) Seaweeds. In: Marine Fisheries Research and
Management. Pillai, V N and Menon, N G,(eds.) CMFRI; Kochi, Kochi, pp. 21-37.
Rao, D S and Girijavallabhan, K G and Muthusamy, S and Chandrika, V and Gopinathan, C P and Kalimuthu, S and Najmuddin, M (1991) Bioactivity in marine algae. In: Bioactive compounds from marine
organisms With Emphasis on the Indian Ocean: An Indo-United States Symposium. Thompson, Mary Frances and Sarojini, R and Nagabhushanam, R,(eds.) Oxford and IBH Publishing Company, New Delhi, pp.
Conference or Workshop Item
Chennubhotla, V S Krishnamurthy and Kalimuthu, S and Selvaraj, M (1986) Seaweed culture - its feasibility and industrial utilization. In: Proceedings of the Symposium on Coastal Aquaculture, Part 4,
MBAI, 12-18 January 1980, Cochin.
Kaliaperumal, N and Kalimuthu, S (1986) Tropical Cyclones. In: Souvenir: 35th Anniversary of the Recreation Club of RC of CMFRI , 1986, Mandapam Camp.
Kalimuthu, S (2000) Studies on some Indian members of the Rhodymeniales. PhD thesis, Bharathidasan University, Tiruchirappalli. | {"url":"http://eprints.cmfri.org.in/view/creators/Kalimuthu=3AS=3A=3A.html","timestamp":"2014-04-21T04:44:30Z","content_type":null,"content_length":"38485","record_id":"<urn:uuid:2ac04409-4a1d-4fc8-9f0a-6798ed0f947d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unknown Quantity: A Real and Imaginary History of Algebra
Derbyshire's previous book, Prime Obsession, was a tour de force whose goal was to explain the Riemann Hypothesis to non-mathematicians. The book was widely acclaimed as a success, an opinion that I
share: I read the book with pleasure and learned some mathematics along the way. I thought Derbyshire succeeded in telling the interesting history behind the mathematics in pleasant and engaging
prose, and in explaining (at least some of) the mathematics to non-specialists, a feat not to be sneered at given that the mathematics of the Riemann Hypothesis is certainly non-trivial.
So, having read Prime Obsession, I was curious and motivated to read Derbyshire's newest book, Unknown Quantity, which he describes as "a history of algebra, written for the curious
nonmathematician". In this sense, Unknown Quantity has the same goal as Prime Obsession. However, the scope of Unknown Quantity is much larger, because algebra is a vast subject with a long history,
dating back to the Babylonians, whose coherence is hard to see, especially for nonmathematicians.
Like Prime Obsession, Unknown Quantity contains large sections that describe the mathematics. In Prime Obsession this was done in the odd-numbered chapters, with the even-numbered chapters focusing
on the history. In Unknown Quantity, Derbyshire chose to sprinkle "Math Primers" along the way (there are six primers among 15 chapters). There is a fair amount of mathematics in the main text as
well. I have a feeling that a nonmathematician will need a lot of motivation to go through all this material. But such a reader will be rewarded with a reasonable sense of how algebra evolved from
concrete (and theoretical!) problems handled by the Babylonians to the solution of polynomial equations, and then will get at least an overview of how algebra become abstract and pervaded all areas
of mathematics.
Some high points in the book are: an engaging account the romantic story of the solution of cubic and quartic equations by Tartaglia, Cardano, and Ferrari; the use of complex numbers by Bombelli; the
early attempts at a theory of equations via invariants and symmetric functions; the role of Lagrange (or should we say Vandermonde?) resolvents for solving the quintic. The book even contains a
convincing "proof" of the Fundamental Theorem of Algebra.
On other topics, Derbyshire has not been as successful. Galois theory, despite the romantic aura around the short life of Galois (which Derbyshire argues is not totally warranted), is not a light
topic, despite (or perhaps because of) its great beauty. It is hard to get the point across to nonmathematicians. The same can be said, even more strongly, of Kummer's ideals and Noether's ring
theory. In particular, I think Derbyshire has failed to give a good account of Emmy Noether's work on invariants.
The book goes as near as possible to contemporary mathematics, discussing Klein's Erlangen Program, algebraic topology, algebraic geometry, algebraic number theory, category theory, and even
Grothendieck's work on the modern foundations of algebraic geometry (Derbyshire concentrates mostly on Grothendieck's "colorful" life rather than on his mathematics, which is just as well, given how
hard it is, even for mathematicians).
The prose itself is quite pleasant, as we have come to expect from Derbyshire. The book contains over 170 informative and sometimes entertaining endnotes, a good index, and 32 pictures of the main
characters in the history of algebra.
In summary, I think Derbyshire has done a good job of portraying algebra and its journey toward abstraction from its roots in early civilizations. All interested readers will learn something about
mathematics and its history. Readers with the right background will then be able to enjoy more mathematical accounts such as The Beginnings and Evolution of Algebra by Bashmakova and Smirnova and van
der Waerden's classic A History of Algebra.
Luiz Henrique de Figueiredo is a researcher at IMPA in Rio de Janeiro, Brazil. His main interests are numerical methods in computer graphics, but he remains an algebraist at heart. He is also one of
the designers of the Lua language. | {"url":"http://www.maa.org/publications/maa-reviews/unknown-quantity-a-real-and-imaginary-history-of-algebra","timestamp":"2014-04-16T23:15:09Z","content_type":null,"content_length":"98836","record_id":"<urn:uuid:ab9d6bd4-6d8e-49a2-a291-280dfad2ddd3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Do the Schimmy: Efficient Large-Scale Graph Analysis with Hadoop
Michael Schatz is an assistant professor in the Simons Center for Quantitative Biology at Cold Spring Harbor Laboratory. His research interests are in developing large-scale DNA sequence analysis
methods to search for DNA sequence variations related to autism, cancer, and other human diseases, and also to assemble the genomes of new organisms. Given the recent tremendous advances of DNA
sequencing technologies, Michael has pioneered the use of Hadoop and cloud computing for accelerating genomics, as described in a guest blog post last fall.
Jimmy Lin is an associate professor in the College of Information Studies at the University of Maryland. His research lies at the intersection of information retrieval and natural language
processing, with an emphasis on large-scale distributed algorithms. Currently, Jimmy is spending his sabbatical at Twitter.
Part 1: Graphs and Hadoop
Question: What do PageRank, the Kevin Bacon game, and DNA sequencing all have in common?
As you might know, PageRank is one of the many features Google uses for computing the importance of a webpage based on the other pages that link to it. The intuition is that pages linked from many
important pages are themselves important. In the Kevin Bacon game, we try to find the shortest path from Kevin Bacon to your favorite movie star based on who they were costars with. For example,
there is a 2-hop path from Kevin Bacon to Jason Lee: Kevin Bacon starred in A Few Good Men with Tom Cruise, who also starred in Vanilla Sky with Jason Lee. In the case of DNA sequencing, we compute
the full genome sequence of a person (~3 billion nucleotides) from many short DNA fragments (~100 nucleotides) by constructing and searching the genome assembly graph. The assembly graph connects
fragments with the same or similar sequences, and thus long paths of a particular form can spell out entire genomes.
The common aspect for these and countless other important problems, including those in defense & intelligence, recommendation systems & machine learning, social networking analysis, and business
intelligence, is the need to analyze enormous graphs: the Web consists of trillions of interconnected pages, IMDB has millions of movies and movie stars, and sequencing a single human genome requires
searching for paths between billions of short DNA fragments. At this scale, searching or analyzing a graph on a single machine would be time-consuming at best and totally impossible at worst,
especially when the graph cannot possibly be stored in memory on a single computer.
Fortunately, Hadoop and MapReduce can enable us to tackle the largest graphs around by scaling up many graph algorithms to run on entire clusters of commodity machines. The idea of using MapReduce
for large-scale graph analysis is as old as MapReduce itself – PageRank was one of the original applications for which Google developed MapReduce.
Formally, graphs are comprised of vertices (also called nodes) and edges (also called links). Edges may be “directed” (e.g., hyperlinks on Web) or “undirected” (e.g., costars in movies). For
convenient processing in MapReduce, graphs are stored as key-value pairs, in which the key is the vertex id (URL, movie name, etc), and the value is a complex record called a “tuple” that contains
the list of neighboring vertices and any other attributes of the graph vertices (text of the webpage, date of the movie, etc). The key point is that the graph will be distributed across the cluster
so different portions of the graph, including direct neighbors, may be stored on physically different machines. Nevertheless, we can process the graph in parallel using Hadoop/MapReduce, to compute
PageRank or solve the Kevin Bacon game without ever loading the entire graph on one machine.
Graph algorithms in Hadoop/MapReduce generally follow the same pattern of execution: (1) in the map phase, some computation is independently executed on all the vertices in parallel, (2) in the
shuffle phase, the partial results of the map phase are passed along the edges to neighboring vertices, including when those vertices are located on physically different machines, and (3) in the
reduce phase, the vertices compute a new value based on all the incoming values (once again in parallel). Generically, we can speak of vertices passing “messages” to their neighbors. For example, in
PageRank the current PageRank value of each vertex is divided up and distributed to their neighbors in the map and shuffle phases, and in the reduce phase the destination vertices compute their
updated PageRank value as the sum of the incoming values. If necessary, the algorithm can iterate and rerun the MapReduce code multiple times, each time updating a vertex’s value based on the new
values passed from its neighbors.
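To make the pattern concrete, here is a minimal sketch of one such PageRank-style iteration, written as Hadoop-Streaming-style Python functions (an illustration only: the function names and the simplified update rule, with no damping factor, are not from the original post):

def mapper(vertex_id, rank, neighbors):
    # Divide the current rank evenly among the neighbors and emit a
    # "message" along every outgoing edge; the shuffle phase routes each
    # message to the reducer responsible for that neighbor.
    share = rank / len(neighbors) if neighbors else 0.0
    for n in neighbors:
        yield n, share

def reducer(vertex_id, incoming_shares):
    # The new rank of a vertex is the sum of the messages shuffled to it.
    yield vertex_id, sum(incoming_shares)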
This algorithm design pattern fits the large class of graph algorithms that need to distribute “messages” between neighboring vertices. For search problems like the Kevin Bacon game, we can use this
pattern to execute a “frontier search” that initially distributes the fact that there is a 1-hop path from Kevin Bacon to all of his immediate costars in the first MapReduce iteration. In the second
MapReduce iteration the code extends these partial 1-hop paths to all of his 2-hop neighbors, and so forth until we find the shortest path to our favorite movie star. Be warned, though, that frontier
search algorithms generally require space that is exponential in the search depth – therefore a naïve frontier search in MapReduce is not appropriate for searching for very deep connections: you may
exhaust the disk storage of your cluster or spend a long time waiting for the network to shuffle terabytes upon terabytes of intermediate data. In contrast, PageRank is computed using values just from
immediate neighbors, and is therefore more suitable for parallelization with Hadoop/MapReduce.
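A frontier-search step can be sketched in the same style (again illustrative Python, not from the post): every already-reached vertex proposes a distance one hop longer to each of its neighbors, and the reducer keeps the smallest proposal. The adjacency lists would also have to be carried along between iterations, which is exactly the issue discussed next.

INF = float("inf")

def mapper(vertex_id, distance, neighbors):
    yield vertex_id, distance            # keep the vertex's current distance
    if distance < INF:                   # only reached vertices extend the frontier
        for n in neighbors:
            yield n, distance + 1        # propose a path that is one hop longer

def reducer(vertex_id, distances):
    yield vertex_id, min(distances)      # the shortest proposal wins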
The other main technical challenge of MapReduce graph algorithms is that the graph structure must be available at each iteration, but in the design above we only distribute the messages (partial
PageRank values, partial search paths, etc). This challenge is normally resolved by “passing along” the graph structure from the mappers to the reducers. In more detail: the mapper reads in a vertex
as input, emits messages for neighboring vertices using the neighboring vertex ids as the keys, and also reemits the vertex tuple with the current vertex id as the key. Then, as usual, the shuffle
phase collects key-value pairs with the same key, which effectively collects together a vertex tuple with all the messages destined for that vertex (remember, this happens in parallel on multiple
reducers). The reducer then processes each vertex tuple with associated messages, computes an updated value, and saves away the updated vertex with the complete graph structure for the next
iteration. But wait, you might ask: doesn’t this entail the mappers emitting two different types of values (messages destined for neighboring vertices and the graph structure)? Yes, this is handled
by “tagging” each value to indicate which type it is, so that the reducer can process appropriately. For more details about such graph algorithms, you be interested in Jimmy Lin and Chris Dyer’s
recent book on MapReduce algorithm design.
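A sketch of that tagging trick, extending the PageRank mapper/reducer above (still illustrative Python; the tags "S" for structure and "M" for message are made-up names used only for this example):

def mapper(vertex_id, vertex_tuple):
    rank, neighbors = vertex_tuple
    yield vertex_id, ("S", neighbors)    # re-emit the graph structure under the same key
    share = rank / len(neighbors) if neighbors else 0.0
    for n in neighbors:
        yield n, ("M", share)            # messages destined for neighboring vertices

def reducer(vertex_id, tagged_values):
    neighbors, total = [], 0.0
    for tag, value in tagged_values:
        if tag == "S":
            neighbors = value            # recover the adjacency list
        else:
            total += value               # accumulate incoming messages
    yield vertex_id, (total, neighbors)  # updated vertex, ready for the next iteration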
This basic design works and with it we can compute PageRank, solve the Kevin Bacon game, assemble together genomes, and attack many other large-scale graph problems. However, it has several
inefficiencies that needlessly slow it down, such as the poor use of locality and substantial unnecessary computation. In part two we will explore the causes of those inefficiencies, and present a
set of simple techniques called Schimmy that we developed that can dramatically improve the runtime of virtually all Hadoop/MapReduce graph algorithms without requiring any changes to the underlying
Hadoop implementation. | {"url":"http://blog.cloudera.com/blog/2010/11/do-the-schimmy-efficient-large-scale-graph-analysis-with-hadoop/","timestamp":"2014-04-20T00:37:14Z","content_type":null,"content_length":"37336","record_id":"<urn:uuid:77cf894c-a054-4fc9-bff8-bd863f9094f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Status of Quantum Computing - Omega Wellfoundedness
Axiomize@aol.com Axiomize at aol.com
Fri Oct 25 10:48:17 EDT 2002
> Charlie Volkstorf wrote:
>> Is "the probability of a Turing Machine halting" (assuming uniform
>> distribution of Turing Machines) well-defined?
>> Is the "probability of program P halting" (assuming uniform distribution
>> N) well-defined?
> I do not think so, that probability would be defined as a limit that
> (as in your example) may not exist. But in the definition of
> Chaitin's Omega number the halting probability is the limit of an
> increasing (and bounded) sequence of rationals, which always exists.
Yes. Thus Omega is not "the probability of a Turing Machine halting".
By altering the representation of the universal Turing Machine, we arrive at
different values for Omega, even though the set of numbers on which the
algorithm halts is the same in each case.
>> How is Omega different (formally, please) from any other encoding of the
>> halting predicate into a real number (e.g. Marvin Minsky, "Computation:
>> Finite and Infinite Machines", 1967, p. 159)?
> In the extremely compact way in which the information is packed in
> Chaitin's Omega number. Omega is algorithmically incompressible,
> Chaitin-random, Solovay-random, Martin-Lof-random... See for
> instance:
> - Antonin Kucera and Theodore A. Slaman: "Randomness and Recursive
> Enumerability", SIAM J. Comput, Vol.31, No.1, pp. 199-211.
Do we know that this is not true of the number discussed by Minsky?
Minsky writes: "We now define our non-computable number Ru. [u is any
universal Turing Machine.] Ru will begin with the decimal point and, going
to the right, its Nth digit is 1 if U halts on the Nth tape, 0 if U never
halts on the Nth tape."
In the example of Omega discussed here earlier, the "end" marker is bit 0, so
that the Turing Machines are, in order of enumeration, {0, 10, 110, 1110, …}.
Thus the values of 2^(-|p|) are 1/2, 1/4, 1/8, 1/16, … and Omega is
identical with Ru. Thus Ru shares the properties of Omega.
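In symbols (restating the computation just made, and reading Minsky's Ru as a base-2 expansion): with the programs enumerated as {0, 10, 110, ...}, the Nth program has length N, so Omega = SUM over {N : U halts on the Nth tape} of 2^(-N), which is the number whose Nth binary digit is 1 exactly when U halts on the Nth tape, i.e. Ru.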
Minsky further writes, "I think I first learned about such things from
Hartley Rogers' lecture notes on recursive function theory at MIT around
1960 or so. In those days, that subject was called 'Recursive Real Numbers.'
In any case, the subject was clearly understood in the 1956 article by
Shannon, Moore, deLeeuw, and Shapiro in the book 'Automata Studies'."
>Miguel A. Lerma
Charlie Volkstorf
Cambridge, MA
June 4th 2009, 07:40 PM #1
Senior Member
Apr 2009
True or False?
If a, b, and c denote real numbers and p(x) = ax^2 + bx + c, x ∈ ℝ, then there exists a real number x0 such that p(x0) = 0.
Please show work
Last edited by yoman360; June 4th 2009 at 08:02 PM.
If b^2 - 4ac > 0 then there exist two different solutions.
If b^2 - 4ac < 0 there are no solutions.
If b^2 - 4ac = 0 there are two equal (repeated) solutions.
for more information see link below
Quadratic function - Wikipedia, the free encyclopedia
Last edited by Amer; June 4th 2009 at 08:35 PM.
Putting this equation in standard form we have $p(x)=a\left(x+\frac{b}{2a}\right)^2+c-\frac{b^2}{4a}$.
Assume that $a>0$; we can conclude that this is the graph of a parabola that opens upwards and has vertex $\left(-\frac{b}{2a},\,c-\frac{b^2}{4a}\right)$. Again, since the parabola opens upward, $p(x_0)=0$ for some $x_0$ if and only if $c-\frac{b^2}{4a}\leq 0$.
And since there are enumerable instances where this statement does not hold, the answer is false
Last edited by VonNemo19; June 14th 2009 at 12:59 PM. Reason: Small touch up
what if a<0 then it would cross the x-axis thus making the statement true.
The part in red doesn't make sense. What I think you're trying to say is that $p\!\left(x_0\right)=0\iff c-\frac{b^2}{4a}\leq0$. An ordered pair cannot be less than a number.
And since there are enumerable instances where this statement does not hold, the answer is false
But the question isn't asking if this holds for all cases. The statement is an existential statement; one that asks us to find at least one case where the statement is true.
True or False?
If $a,b,c\in\mathbb{R}$ and $p\!\left(x\right)=ax^2+bx+c$, $x\in\mathbb{R}$, then there exists a real number $x_0$ such that $p\!\left(x_0\right)=0$.
This is a true statement under certain cases. Again, it all depends on the value of the discriminant $\Delta = b^2-4ac$. If
$\Delta > 0$, then there are two solutions (so there exists more than one $x_0$).
$\Delta = 0$, then there is only one solution (thus $x_0$ exists).
$\Delta < 0$, then there is no real solution. This implies that an $x_0$ does not exist.
To summarize, the only time when an $x_0$ exists is when $\Delta\geq 0$. Also, the value of $x_0$ is the real zero(s) of the quadratic equation.
Consider $f\!\left(x\right)=x^2+3x+2$. Since $1,3,2\in\mathbb{R}$, we search for an $x_0$ such that $f\!\left(x_0\right)=0$. So in essence, we solve $x_0^2+3x_0+2=0$. But $x_0^2+3x_0+2=0\implies\
left(x_0+2\right)\left(x_0+ 1\right)=0\implies\left(x_0+2\right)=0$ or $\left(x_0+1\right)=0$. So in this case, there exists two possible values for $x_0$: $x_0=-2$ and $x_0=-1$, where both
values for $x_0$ are real numbers.
Oh ok I get it now
can you help me with this one:
Wouldn't the answer be true? Because 0 is a real number, and if b and c = 0 then $p(x_0)=0$.
$p(x)= ax^2 +bx +c$
$p(x) = 1x^2+0x+0$
$p(x) = x^2$
then there's only one solution of the function.
Yes, that is true. Note that you imposed certain conditions on what you let a,b, and c equal. As I said above, the statement is true for certain cases/conditions, and this would be one of them.
You just illustrated the case where we only have one value $x_0$ such that $p\!\left(x_0\right)=0$ (I illustrated a case where we had more than one value).
So, you maintain that there exists a zero for every parabola of the form $ax^2+bx+c$?
You were right about the last statement of my argument, a small error, but the result is still sound. I changed it. But reading the question again, and taking note of the phrase "there exists", I interpret this to mean "there must exist" or "there always exists". And you and I both know that this is a false statement.
I will attempt to reword the statement. Note that a,b, and c can be any real numbers.
--For any given parabola, there exists at least one place where it touches the x-axis.--
Is this ridiculous, or is this ridiculous?
"Exists is not a predicate."Immanuel Kant.
Last edited by VonNemo19; June 14th 2009 at 01:03 PM.
The original statement was, essentially that "there exist a real number solution to every quadratic equation". The simplest way to show that is false is to take a= 1, b= 0, c= 1. There is no real
number, x, such that $x^2+ 1= 0$.
OK, it makes sense now: since the statement is not always true, it's false.
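A quick way to check the discriminant criterion discussed above (an illustrative Python sketch, not part of the original thread):

import math

def real_roots(a, b, c):
    # Real solutions of a*x**2 + b*x + c = 0, assuming a != 0.
    delta = b * b - 4 * a * c
    if delta < 0:
        return []                                   # no real x0 exists
    if delta == 0:
        return [-b / (2 * a)]                       # one (repeated) real x0
    r = math.sqrt(delta)
    return [(-b - r) / (2 * a), (-b + r) / (2 * a)]

print(real_roots(1, 3, 2))   # [-2.0, -1.0] : x0 exists
print(real_roots(1, 0, 0))   # [0.0]        : x0 exists (the b = c = 0 example)
print(real_roots(1, 0, 1))   # []           : no real x0 (the x^2 + 1 = 0 counterexample)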
$\infty$-Chern-Weil theory
Differential cohomology
Connections on bundles
Higher abelian differential cohomology
Higher nonabelian differential cohomology
Fiber integration
Application to gauge theory
The formal notion of curvature is a formalization and generalization of the intuitive notion of the (”extrinsic”) curvature of a surface embedded in a Cartesian space $\mathbb{R}^n$.
This extrinsic curvature of a surface is called Gaussian curvature?. It may also be understood intrinsically as a property of just the surface without reference to the ambient Cartesian space that it
is embedded in: the canonical metric on $\mathbb{R}^n$ induces a Riemannian metric on the surface and the surface’s curvature is encoded in the Levi-Civita connection on the tangent bundle of the
This notion of curvature of a Levi-Civita connection in turn generalizes straightforwardly to a notion of curvature of any connection on a bundle and thus gives the name to the general concept.
For instance in the first-order formulation of gravity the curvature of spacetime is literally the curvature of the Levi-Civita connection on spacetime in this sense. But also the Yang-Mills field is
a connection on a principal bundle and its curvature encodes the field strength of the Yang-Mills field, which is a concept rather remote from the intuition of a curved surface (thogh not unrelated).
Even more generally, the notion of a connection on a bundle and of a Lie algebra-valued 1-form generalizes to connections on principal 2-bundles and principal ∞-bundles and curvature of ∞-Lie
algebroid valued differential forms.
In Eilenberg-Steenrod-type differential cohomology describing abelian such higher connections these curvatures appear in the form of generalized Chern character curvature characteristic forms.
Extrinsic curvature of embedded manifolds
The curvature $\kappa(\gamma)$ of a smooth curve $\gamma$ at a point $p$ is the (signed) inverse radius of the (oriented) circle having tangency of order 1 with the curve at the point $p$. Every smooth curve in 3-dimensional space is determined up to isometry by its (parametrised) curvature and torsion. Intuitively, the curvature measures how much a curve is bent, with respect to a given metric. The Frenet-Serret formulas describe the curvature, torsion and their higher analogues for naturally parametrized curves in $n$-dimensional Euclidean space.
For higher-dimensional surfaces one can look at the normal curvature in a given tangent direction: the curvature of the curve obtained by intersecting the surface with the plane spanned by the normal vector to the surface and that tangent vector. As the tangent direction varies in the 2-dimensional tangent plane, the normal curvature attains two extreme values (the principal curvatures); their product is called the Gaussian curvature.
The sectional curvature of a higher-dimensional smooth manifold at a point $p$, along a 2-dimensional tangent plane, is the Gaussian curvature of the surface swept out by the geodesics through $p$ tangent to that plane. Since a plane is determined by a pair of tangent vectors, the sectional curvature is a function of a point and a pair of vectors, and all sectional curvatures can be read off from the Riemann curvature tensor; therefore one speaks of the curvature operator.
Curvature can also be described intrinsically, without recourse to the ambient space and its metric; therefore it makes sense in Riemannian geometry, based just on the metric tensor on a manifold. In 1917, Hermann Weyl postulated a more fundamental quantity than a Riemannian metric, namely the connection on a fibre bundle, hence giving rise to the modern, generalized idea of curvature. While a Riemannian metric gives rise to the Levi-Civita connection on the tangent bundle of the Riemannian manifold, not every connection on a vector or principal bundle is induced by a metric. In that sense the connection is the more basic notion in geometry.
Curvature of a connection
The curvature of a connection on a bundle measures how the connection is locally non-trivial.
In as far as the notion of connection on a bundle is generalized by the notion of a cocycle in differential cohomology, curvature is essentially the Chern character.
In as far as cocycles in differential cohomology represent gauge fields in physics, the curvature is the field strength of these gauge fields.
After the conception of gauge theory, the term curvature was firmly established in its generalization from this special case to the case of connections on all kinds of bundles and higher bundles.
Curvature of a Lie-algebra valued form
• A connection on a trivial line bundle on a space $X$ is just a 1-form
$A \in \Omega^1(X) \,.$
The curvature in this case is the 2-form $F = d A$.
• A connection on a trivial $G$-principal bundle for $G$ a Lie group with Lie algebra $\mathfrak{g}$ is a $\mathfrak{g}$-valued 1-form (see groupoid of Lie-algebra valued forms)
$A \in \Omega^1(X) \otimes \mathfrak{g} \,.$
Its curvature is the Lie-algebra valued 2-form
$F_A = \mathbf{d} A + [A \wedge A] \,,$
where $[-,-]$ is the Lie bracket in $\mathfrak{g}$; see below for the component expression of this curvature.
• According to the discussion at ∞-Chern-Weil theory, a connection on a trivial principal ∞-bundle is given by a collection of ∞-Lie algebroid valued differential forms. The notion of curvature in
this general context is discussed at curvature of ∞-Lie algebroid valued differential forms.
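In the Lie-algebra valued case above, for a matrix Lie algebra and writing $A = A_\mu \, d x^\mu$ in local coordinates $\{x^\mu\}$, the curvature $F_A = \mathbf{d} A + [A \wedge A]$ has the familiar components of a Yang-Mills field strength
$F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu] \,,$
where the numerical factor in front of the bracket term depends on the normalization convention adopted for $[- \wedge -]$.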
Curvature characteristic forms
Geometric interpretation of curvature 2-forms
For the geometric interpretation of the curvature 2-form of a $\mathfrak{g}$-valued 1-form
$\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : A$
it is useful to recall that both the deRham complex $\Omega^\bullet(X)$ as well as the Chevalley-Eilenberg algebra $CE(\mathfrak{g})$ are naturally interpreted as function algebras on infinitesimal
objects, as discussed at ∞-Lie algebroid.
The deRham complex may be thought of as the algebra of functions on the infinitesimal path ∞-groupoid $\Pi^{inf}(X)$. This has as objects the points of $X$, as morphisms infinitesimal paths
$x \to y$
in $X$
as 2-morphisms infinitesimal little surfaces
$\array{ x &\to& y_1 \\ \downarrow & \searrow & \downarrow \\ y_2 &\to& z }$
in $X$, and so on.
On the other hand, $CE(\mathfrak{g})$ is the algebra of functions on the infinitesimal version $\mathbf{B}G_{(1)}$ of what is called the delooping groupoid $\mathbf{B}G$ of the Lie group of which $\
mathfrak{g}$ is a Lie algebra. This has a single object ${*}$, and a morphism is an infinitesimal group element
${*} \stackrel{e + \lambda^a t_a}{\to} {*}$
for $e$ the neutral element of the group (the identity), for $t_a$ an element of the Lie algebra as before and for $\lambda^a$ some coefficient.
A 2-morphism is an infinitesimal surface bounded by such infinitesimal 1-morphisms such that going either way around the surface
$\array{ {*} &\stackrel{e + \lambda^a_1 t_a}{\to}& {*} \\ \downarrow^{\rlap{\lambda_3^a t_a}} & \searrow & \downarrow^{\rlap{\lambda_2^a t_a}} \\ {*} &\stackrel{\lambda_4^a t_a}{\to}& {*} }$
produces the same result when then morphisms are composed using the product in the Lie group: the top right way around the square here yields
$(e + \lambda_1^a t_a) )(e + \lambda_2^b t_b) ) = e + \lambda_1^a t_a + \lambda_2^b t_b + \lambda_1^a \lambda_2^b t_a t_b$
and the other way round yields
$(e + \lambda_3^a t_a) )(e + \lambda_4^b t_b) ) = e + \lambda_3^a t_a + \lambda_4^b t_b + \lambda_3^a \lambda_4^b t_a t_b \,.$
A morphism of dg-algebras of the form we have been considering
$\Omega^\bullet(X) \leftarrow CE(\mathfrak{g}) : A$
is now evidently equivalenty a morphism
$A : \Pi^{inf}(X) \to \mathbf{B}G_{(1)}$
that sends infinitesimal paths in $X$ to infinitesimal group elements of the form $e + \lambda^a t_a$:
$A : (x \to y) \;\;\mapsto\;\; ({*} \stackrel{e + A^a(x,y) t_a}{\to} {*}) \,.$
If we denote by
$v = y - x$
the tangent vector that connects the infinitesimally close points $x$ and $y$ and write $A(x,y) = A_x(v)$ as a function of the first point and the vector pointing away from it, then this reads
$A : (x \to y) \;\;\mapsto\;\; ({*} \stackrel{e + A^a_x(v) t_a}{\to} {*}) \,.$
We can now look at what this assignment $A$ of infinitesimal group elements to infinitesimal paths does to a little square in $X$ as above, with sides spanned by tangent vectors $v_1$ and $v_2$. We
$A \;\; : \;\; \array{ x &\stackrel{v_1}{\to}& y_1 \\ \downarrow^{v_2} & \searrow & \downarrow \\ y_2 &\to & z } \;\;\mapsto\;\; \array{ {*} &\stackrel{e + A_x(v_1) t_a}{\to}& {*} \\ \downarrow^{\
rlap{A_x(v_2)^a t_a}} & \searrow & \downarrow^{\rlap{A_{y_1}(v_2)^a t_a}} \\ {*} &\stackrel{A_{y_2}(v_1)^a t_a}{\to}& {*} } \,.$
For the result on the right to qualify as a 2-morphism in $\mathbf{B}G_{(1)}$ we need that
going around the top right edges, which yields
$e + A_x^a(v_1) t_a + A^b_{y_1}(v_2) t_b + A_x^a(v_1) A^b_{y_1}(v_2) t_a t_b$
is the same as
$e + A_x^a(v_2) t_a + A^b_{y_2}(v_1) t_b + A_{x}^a(v_2) A^b_{y_2}(v_1) t_a t_b \,.$
To express what this means as a condition at the point $x$, we may Taylor expand to first order
$A_{y_1} = A_x + \partial_{v_1} A_x$
$A_{y_2} = A_x + \partial_{v_2} A_x \,.$
Then some terms cancel and the above condition becomes, to second order
$\partial_{v_1} A^a_x(v_2) t_a + A_x^a(v_1) A^b_{x}(v_2) t_a t_b = \partial_{v_2} A^a_x(v_1) t_a + A_x^a(v_2) A^b_{x}(v_1) t_a t_b \,.$
In other words, the expression
$F_A(v_1,v_2) := \partial_{v_1} A^a_x(v_2) t_a - \partial_{v_2} A^a_x(v_1) t_a + \frac{1}{2} A_x^a(v_1) A^b_{x}(v_2) [t_a, t_b]$
has to vanish. This is the curvature form that we already found above by more algebraic means.
If this does not vanish, then we don’t really have a morphism $A : \Pi^{inf}(X) \to \mathbf{B}G_{(1)}$. But then we instead have some morphism that uses the 1-forms $A^a$ to assigns data to little
edges, and that uses the 2-forms $F_A^a$ to assign data to little surfaces. That morphism then will respect a condition as above, but now on little cubes. That condition is the Bianchi identity
$d F_A + [A \wedge F_A] = 0$
on the curvature 2-form.
Curvature of $\infty$-Lie algebroid-valued forms
The notion of curvature of a Lie-algebra valued 1-form discussed above generalizes to that of ∞-Lie algebroid valued differential forms.
Let $\mathfrak{g}$ be an ∞-Lie algebra. A $\mathfrak{g}$-valued differential form on a smooth manifold $X$ is a morphism
$\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : A$
of dg-algebras, where $\Omega^\bullet(X)$ is the de Rham complex and $W(\mathfrak{g})$ is the Weil algebra.
There is a canonical inclusion of graded vector spaces
$W(\mathfrak{g}) \leftarrow \mathfrak{g}^*[1] : F_{(-)} \,.$
The curvature of the $\infty$-Lie algebroid valued form $A$ is the composite
$\Omega^\bullet(X) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{F_{(-)}}{\leftarrow} \mathfrak{g}^*[1] : F_A \,.$
This consists, in general, of a tower of components: write $\mathfrak{g}_n$ for the degree $n$-part of the $\infty$-Lie algebra, then we have the further restrictions
$\Omega^\bullet(X) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{F_{(-)}}{\leftarrow} \mathfrak{g}_{k}^*[1] : (F_A)_{k+1} \,.$
$(F_A)_n \in \Omega^n(X, \mathfrak{g}_{n-1})$
is a $\mathfrak{g}_{n-1}$-valued $n$-form on $X$.
One may speak of the 2-form curvature, the 3-form curvature, the 4-form curvature and so on.
If instead of just an $\infty$-Lie algebra $\mathfrak{g}$ we take more generally an ∞-Lie algebroid, then there is also a 1-form curvature component.
Remark In some places in the literature, the lower curvature form components have been called fake curvature (BreenMessing).
Obstruction to flatness
Precisely if the curvatures $F_A$ vanish does the morphism $A : W(\mathfrak{g}) \to \Omega^\bullet(X)$ factor through the Chevalley-Eilenberg algebra $W(\mathfrak{g}) \to CE(\mathfrak{g})$.
$(F_A = 0) \;\;\Leftrightarrow \;\; \left( \array{ && CE(\mathfrak{g}) \\ & {}^{\mathllap{\exists A_{flat}}}\swarrow & \uparrow \\ \Omega^\bullet(X) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) } \right)$
in which case we call $A$ flat.
Bianchi identity
By the fact that $A$ is a dg-algebra homomorphism, its curvature forms satisfy
$d_{dR} F_A + A(d_{W(\mathfrak{g})}(-)) = 0 \,.$
This is the Bianchi identity.
Curvature characteristic forms
The algebra $inv(\mathfrak{g})$ of invariant polynomials embeds into the Weil algebra
$W(\mathfrak{g}) \leftarrow inv(\mathfrak{g}) \,.$
For $A$ a $\mathfrak{g}$-valued form, and $\langle - \rangle \in inv(\mathgfrak{g})$, the ordinary closed $n$-form
$\Omega^\bullet(X) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{}{\leftarrow} inv(\mathfrak{g}) \stackrel{\langle - \rangle}{\leftarrow} CE(b^{n-1} \mathbb{R}) : \langle F_A \rangle$
is the corresponding curvature characteristic form.
Curved dg-algebras
(TO ADD: Something about curved $A_\infty$ algebras and curved dg algebras.)
Adding lines or points to an existing barplot
January 15, 2011
By danganothererror
Sometimes you will need to add some points to an existing barplot. You might try
par(mfrow = c(1,2))
df <- data.frame(stolpec1 = 10 * runif(10), stolpec2 = 30 * runif(10))
barplot(df$stolpec1)  # draw the bars that the line will be added to
lines(df$stolpec2/10) # implicitly x = 1:10
points(df$stolpec2/10)
but you will get a funky looking line/points. It's a bit squeezed. This happens because the bars are not drawn at the positions 1:10, but rather at something else. This "else" can be seen if you save the value returned by barplot. You will notice that it's a matrix object with one column – these are the x coordinates (the bar midpoints) at which the bars were drawn. Now you need to feed this to your lines/points call as the value of the x argument and you're all set.
df.bar <- barplot(df$stolpec1)
lines(x = df.bar, y = df$stolpec2/10)
points(x = df.bar, y = df$stolpec2/10)
Another way of plotting this is using the plotrix package. The controls are a bit different and it takes some time getting used to them.
library(plotrix)
barp(df$stolpec1, col = "grey70")
In the diagram below, BD is parallel to XY. What is the value of x? Note: BD and XY have the little line over them but I have no idea how to put it. (Picture below.)
Re: First order logic and SPARQL
From: Bijan Parsia <bparsia@cs.man.ac.uk> Date: Thu, 9 Sep 2010 19:38:53 +0100 Message-Id: <4042C996-CE26-488C-84C7-EC97BFC43C47@cs.man.ac.uk> Cc: www-archive@w3.org To: Pat Hayes <phayes@ihmc.us>
On 9 Sep 2010, at 17:05, Pat Hayes wrote:
> On Sep 9, 2010, at 6:44 AM, Bijan Parsia wrote:
>> On 8 Sep 2010, at 15:58, Pat Hayes wrote:
>> The semantics of SPARQL (by which I mean the set of triples
>> <Dataset, Query, Answers> sanctioned by the specification wrt RDF
>> datasets)
> Ah. I didn't mean that. I meant a model theory, providing truth
> conditions relative to an interpretation drawn from a set of
> possible interpretations.
First, see Axel's paper.
Second, next time just read "consequence" where we say "entailment".
Or ask us to use "consequence".
>> is rather well specified (with some exceptions, e.g., DESCRIBE).
>> In the spec, this set is described primarily in terms of an
>> algebra which is pretty much the relational algebra. Equivalently,
>> Axel has shown that it can be described in terms of entailment in
>> (nonrecursive, I would think for simple entailment and SPARQL1.0)
>> Datalog with stratified negation (well, I guess it's automatically
>> stratified :)). If you conceptualize the triple as an axiom schema
>> such that Dataset |- Grounding(Query, Answer) (where Answer is
>> drawn from the set of answers), then we have a clear consequence
>> relation (or something reasonably modeled by a consequence
>> relation). This consequence relation is clearly non-monotonic.
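To see that non-monotonicity concretely, here is a minimal sketch using the Python rdflib library (the data, the names, and the OPTIONAL/!bound idiom standing in for negation as failure are illustrative additions, not taken from this exchange):

import rdflib

EX = rdflib.Namespace("http://example.org/")
g = rdflib.Graph()
g.add((EX.a, EX.p, EX.b))

q = """
PREFIX ex: <http://example.org/>
SELECT ?x WHERE {
  ?x ex:p ex:b .
  OPTIONAL { ?x ex:q ?y }
  FILTER(!bound(?y))
}
"""

print(len(list(g.query(q))))   # one answer: ex:a
g.add((EX.a, EX.q, EX.c))      # extend the dataset with one more triple
print(len(list(g.query(q))))   # no answers: the added triple removed a consequence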
> That is, yes. But monotonicity is traditionally defined in terms of
> entailment, which refers to truth.
Actually, it's typically defined in terms of *consequences*. E.g.,
"A non-monotonic logic is a formal logic whose consequence relation
is not monotonic. Most studied formal logics have a monotonic
consequence relation, meaning that adding a formula to a theory never
produces a reduction of its set of consequences. Intuitively,
monotonicity indicates that learning a new piece of knowledge cannot
reduce the set of what is known. "
"Consider the formal properties of a consequence relation from an
abstract point of view. Let
be any relation between sets of premises and single sentences. The
following properties are all satisfied by the consequence relation ⊨
of FOL:"
Just consider Default logic.
Or read Makinson.
In many cases these consequent relations can be given a model
theoretic account. Like SPARQL. As Axel has done.
But a consequence (or entailment) relation is never purely
semantically characterized. After all, we have to speak of the syntax
of the assertions and the syntax of the consequences. For many
standard logics, these coincide. But, quite obviously, they don't
have to.
>> I would go further and say that this is a bog-standard and very
>> useful way to conceptualize and reason about the situation (e.g.,
>> complexity results fall out very naturally as do implementation
>> techniques). There are other ways of conceptualizing it, but they
>> need to capture the same feature, to wit, there are cases where
>> when you extend a Dataset with an additional triple, you lose "an
>> answer", i.e. a consequence.
>> It's useful, even if you don't find it congenial, to talk this way
>> even if we didn't have a plethora of model theoretic techniques to
>> cope with it. Default logic was understandable without a model
>> theoretical account
> Actually I disagree. The general idea was of course understood
> intuitively, but there was a jumble of confusion, with many
> varieties and subspecies of defaultitude, all loosely based on
> linguistic intuitions, none of them clearly better or worse than
> any other.
I mean Reiter's default logic defined in terms of fixed point
equations. Well understood. Precisely characterized.
>> I'm not sure why you feel this is important. But I don't think we
>> have to go "assertional", at least as I understand it.
> Well, monotonicity is defined in terms of entailment which is
> defined in terms of truth. But I see now that you have been using
> 'consequence' as distinct from 'entailment' throughout.
I do this for your comfort.
>> Yes, of course, SPARQL in some sense "asserts" stuff about various
>> RDF KBs. But SPARQL doesn't allow you to assert things (as we
>> currently use it) about other SPARQL queries. That is, we only see
>> SPARQL in consequents (just as in the relational calculus we only
>> see certain constructs in the consequents). It's pretty common to
>> say that this makes SPARQL a query language :)
> I agree: I thought this idea - of treating SPARQL as an assertion
> language - was what *you* were urging.
SPARQL queries are never queried. They are only entailed. The
language of "SPARQL" antecedents are RDF and of SPARQL consequences
is SPARQL.
No syntactic change in an antecedent affects a given consequent. They
are in no way syntactically coupled.
>>> An RDF graph is a set of triples. Adding a triple to a set
>>> creates a *different* set.
>> But doesn't change the query.
> It changes what the query refers to,
Which is not a change in the syntax of the query.
> so it does change the query, implicitly.
If you mean that the very same query is not a consequence of the new
dataset, sure. It is syntactically unchanged.
> The query always refers to the query graph.
ASK {?s p o}
Does not contain any reference to any queried graph. I can evaluate it
against any arbitrary RDF graph. It will evaluate to true or false
against each graph.
> Adding a triple to that graph gives a different graph. That means
> that the query itself has changed in what it refers to.
There is no syntactic change. The query is exactly unchanged. What it
evaluates to changes, as would be no surprise. If the new graph is a
superset of the old graph, this very same query might evaluate to
false. Which means, y'know, it's not a consequent.
>>> So when we start talking about monotonicity, we need to be very
>>> careful what it means to add a triple to a graph. In RDF this is
>>> simply conjunction, but if the language itself can talk about
>>> these sets of triples, then adding a triple to the antecedent
>>> graph is a lot more than a simple logical conjunction: it
>>> involves a change to the antecedent.
>> Pat, I think you got a bit lost here. If I have the propositional
>> formula A and A |- B and then I conjoin something to A, e.g., to
>> form A&C, then, well, I have a change to the antecedent.
>> Trivially, right?
> Right, but that trivial sense is not what I meant.
But, alas, that's the one that's relevant.
> My point is that if B itself refers to A,
There is no syntactic reference.
There's no implicit syntactic reference.
In propositional logic, if I have a dataset D={A, B} and I have a
"query" ?-C? D does not entail C. D'={A, B, C&D} *does* entail C. C
is unchanged. It is a consequent of the second, but not of the first.
Now add a NAF negation \+. D does entail \+C and D' does not. \+C is unchanged.
> and if conjoining C to A *also* changes this reference to refer to
> (A & C) rather than A, then B has also changed. So the correct way
> to describe the result is not (A & C) |- B but rather (A & C) |-
> B', where B' looks very similar to B but means something subtly
> different.
Or, as is more standard and I would say sensible, say that B is no
longer entailed because e.g., some model of A&C doesn't model B.
>> The question is what sort of change and what happens as a result
>> of that change. In a monotonic logic if A |- B. then so with A&C |-
>> B. an {A,C} |- B.
> See above.
See Makinson. It might really help.
>>> This is what I meant by deleting something: again, I apologise
>>> for the misleading carelessness in my expression.
>> Even if I conceptualize adding a conjunction as "Deleting A and
>> then adding A&C", or "Deleting {A} and adding {A,C}" it is still
>> the case that these formula have a specific relation which I shall
>> imprecisely characterise as "A" is a subset of "A&C" (from the
>> correspondence in our logics between "A&C" and {A,C}. What we
>> don't have is any assertions of the form "C is not asserted (or
>> implied)". That is, "A" is not shorthand for {A, C is not
>> asserted, or implied}.
> I entirely agree. That whole line of thinking is aside from my point.
I don't see how.
>>> I think not. At the very least it would require a model theory
>>> for epistemic reflection.
>> I suspect the sticking point is that you want to reserve
>> "entailment" for the semantic consequence relation.
> Um, yes. I believe that is what the word means.
Well, there isn't a single semantic consequence relation. We can vary
the one under consideration in a number of ways. However, sometimes
it's very helpful to consider consequence relations more abstractly.
This is quite common in non-mon logic.
>> Fine. We do have that (see Axel's paper).
Oh yeah, we have that!
>> See above. It seems very odd to think that because the *question*
>> is more sophisticated we've changed the *data*.
> Well, I think "the data" is changed as soon as anything has the
> ability to refer to the data and talk *about* it rather than simply
> draw conclusions *from* it.
I don't see how this can possibly be true. I haven't changed the
language of Emma because I ask a question about it in French. I don't
change any plot events if I ask how many pages.
> If I can do something as simple as count the number of triples in a
> graph, its not just RDF any more.
I'm not able to assert that in the graph. So, nothing I write in
sparql can change, e.g., the satisfiability of my dataset.
SPARQL, however, is (semantically) sensitive to features of my
dataset that I cannot express in RDF.
Similarly, if I have an ABox:
bob loves mary.
bob loves john.
john != mary.
I can conclude that bob is an instance of the expression has min 2
loves successors. min 2 doesn't exist in RDF. The existence of min 2
doesn't change RDF. There is, of course, a superset language that
tells us how min2 is sensitive to models....so?
>>> Its the one where you agree that a dataset really is a *set*, not
>>> a thing with state which can retain its identity while having
>>> information added to it.
>> As should be clear above, no one is appealing to stateful things.
> Are you sure? Isn't the 'query set' something that can grow or
> shrink, so not actually a set?
Saying it can grow or shrink is meant as loose talk for "replaced
with a super or sub set".
>> The nonmonotonicity is expressed in terms of relations between
>> *sets* not in terms of mutation.
What I said here.
>>> I dont want to make that change, but I still disagree that the
>>> consequence relation is nonmonotonic, if you state the truth
>>> conditions for the SPARQL conclusion precisely, and in a form
>>> which acknowledges the specs requirements that an RDF graph (note
>>> the singular) is a *set* of triples. And of course that adding
>>> something to a set gives you a new set, not the original set
>>> 'enlarged'.
>> I really don't know where you're getting that from.
> That last sentence is plain old set theory. A union B =/= A, unless
> B is a subset of A already.
No, I don't know why you think I meant anything else. I have no idea
why you are attributing mutation semantics to our fairly standard use
of "grow", "shrink", and "expansion".
The reason doesn't matter. Rest assured that nothing I've written
depends on mutable sets.
>> By "expansion" I mean the belief revision theory operation (http://
>> en.wikipedia.org/wiki/Belief_revision) which does not involve
>> mutation.
> ? Of course it does. Any change to a set is a "mutation".
Mutation implies identity through change (which is what you were
harping on). Belief sets are characterized entirely set
theoretically. The result of an AGM expansion is (typically) a new
set (if you add something already in the set, obviously,you'll get
your original set).
>>> But it does have its problems, such as hallucinations of
>>> nonmonotonicity... :-)
>> I really hope at this point you see that there's no hallucination.
>> OTOH, I don't know why the question wasn't immediately settled
>> when Axel pointed to his paper which showed an answer preserving
>> reduction from SPARQL 1.0 to Datalog with stratified negation.
> Basically because Im not very interested in Datalog semantics, and
> indeed feel like I ought to use scare quotes when typing that
> phrase. But I agree this isn't a very persuasive point to make in a
> public debate.
I don't see it persuasive in any debate. I trust it's obvious that
it's not a support for "you have no model theoretic semantics" to
say "because I don't like the model theoretic semantics you use".
>> That just seems to answer every point you might make.
>> Sure, but then that's a different consequence relation. I can
>> define consequence relations using the RDF model theory in a
>> number of ways:
>> A) I can restrict the syntactic form of the consequences
>> B) I can change how the consequence relation is sensitive to the
>> semantics of the graphs
>> E.g., finite vs. infinite models
>> Minimal vs. all models
>> C) I can extend the syntactic form of the consequences
>> I can make the extended syntax sensitive to various aspects of
>> the RDF models (i.e., combine with B)
>> E.g., I can count successor nodes in a relational structure.
>> I can pay attention to certain classes of model (as above).
>> Given a "standard" consequence relation (say, simple entailment),
>> I can use these techniques to define a supra-standard (has more
>> consequences) or sub-standard standard (has fewer consequences)
>> relation or suprasub (missing some and adding some). Furthermore,
>> if I have a supra relation, i can change the monotonicity of the
>> relation wrt to the added consequences while leaving the
>> "classical" or "standard" consequences alone. See (propositional)
>> default logic.
>> It's reasonable to regard sparql as characterizing a supra-sub
>> relation. We restrict some consequences (e.g., so that any
>> particular query characterizes at most a finite number of answers)
>> and we extend others (by having a construct which is sensitive to
>> non-entailment, e.g., !bound). Those extended consequences can
>> increase *or decrease* when the queried-against KB merely
>> increases (that is, we replace it with a superset).
>> Thus, again, that consequence relation is non-monotonic.
> Ah, I see. True, of course, nonmonotonicity comes cheap
? This thread feels expensive.
> when you take this much freedom to tinker with consequences.
Like, the way which is completely bog standard? See wikipedia,
Makinson, Stanford Encyclopedia of Philosophy, arbitrary logic
programming or database books etc.
> I was, all along, intending to refer to monotonicity with respect
> to a genuine entailment relation,
You mean a consequence relation which doesn't characterize SPARQL?
But why would that be relevant?
(And, see Axel's paper.)
(And, of course, not everything in logic needs to correspond to
entailment, genuine or otherwise. We can have consequence relations
which are sub or supra relations to classical, first order entailment
characterized consequence relations. It's just maths in the end.)
> where entailment is defined in terms of truth conditions.
Truth conditions need things which have them. I.e., sentences,
characterized syntactically.
And see Axel's paper. !bound's semantic conditions are in terms of
(for example) minimal models. As is no surprise, such a consequence
relation is non-monotonic.
> Anything else is just computational hacking
Regardless of your feelings about it, I don't think you can both
claim that you are appealing to *standard* notions of non-monotonic
logic *and* that all the standard ways of characterizing are
> on top of the actual semantics (which is fine: some of my best
> friends do it; but it isn't entailment.)
Let me end with a reflection: I totally fail to see that you have any
analytic benefit from saying this nor clarity or pedagogic benefit.
Contrariwise, it seems like you have to bend yourself into nasty
shapes to maintain this (preference model semantics aren't semantics
(screw finite model satisfiability), antecedents and consequent
languages must be uniform, !bound implicitly, though not explicitly,
mentions datasets, etc.).
I don't see how you can read any papers on non-monotonic logic. At
all. Substructural logics must hurt you as well.
Received on Thursday, 9 September 2010 18:38:02 GMT | {"url":"http://lists.w3.org/Archives/Public/www-archive/2010Sep/0008.html","timestamp":"2014-04-16T10:10:19Z","content_type":null,"content_length":"26072","record_id":"<urn:uuid:ea78550d-38ce-41a4-aa4c-a9b4966908f5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Book on the history of logic
Irving ianellis at iupui.edu
Thu Feb 17 13:34:35 EST 2011
Sam Sanders asked:
Can someone explain to me why Kneale & Kneale's account of logic is
"extremely narrow"?
If their account is indeed of such nature, what else should have been
The short answer to that question must be that it depends on what one
is looking for. But it is fair to say that, like a number of mid 20th
century accounts, it is long on the Russello-Fregean history and
comparatively short on the Boole-De Morgan-Peirce Schröder line.
(Kneale & Kneale also devotes much more space to the medieval logicians
than of Boole, De Morgan, Peirce, and Schröder.)
Another such example of the comparatively sparse coverage of algebraic
logic would be Bochenski's "History of Formal Logic", which has the
added feature of being primarily composed of a patchwork of translated
selections taken from the authors being discussed, held together by a
few interstitial lines of connective tissue.
Nathan Houser and I discussed this in "The Nineteenth Century Roots of
Universal Algebra and Algebraic Logic", in Hajnal Andreka, James Donald
Monk, Istvan Nemeti (eds.), Colloquia Mathematica Societatis Janos Bolyai
54. Algebraic Logic, Budapest (Hungary), 1988 (Amsterdam/London/New
York: North-Holland, 1991), 1-36.
For works of the same vintage, one would do well, if adopting a work
such as Kneale & Kneale, to supplement that book with Nikolai
Styazhkin's "History of Mathematical Logic from Leibniz to Peano" (MIT
Press), to fill in the details of the algebraic logicians that are
missing from the more standard Russello-Fregean histories. (If I
remember correctly, Elliott Mendelson was the translator of Styazhkin's book.)
From the standpoint of historiography, one might consider Ivor
Grattan-Guinness's distinction (which he formulated in his writings on
history of mathematics) between history and heritage, the former asking
'What happened in the past?', the latter asking 'How did we get to
where we are today?', or equivalently, 'what happened in the past that
leads to me?' (The differences are discussed in my "Navigating History
of Mathematics: Essay-Review of Ivor Grattan-Guinness, Routes of
Learning: Highways, Pathways, and Byways in the History of
Mathematics", Annals of Science
Irving H. Anellis
Visiting Research Associate
Peirce Edition, Institute for American Thought
902 W. New York St.
Indiana University-Purdue University at Indianapolis
Indianapolis, IN 46202-5159
URL: http://www.irvinganellis.info
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2011-February/015296.html","timestamp":"2014-04-16T10:12:17Z","content_type":null,"content_length":"5216","record_id":"<urn:uuid:82a84075-627a-4e96-abba-9daa061b890e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
Static or rotating device : this is a mechanical difference (even for AC inductive motors which have electrical rotating magnetic flux)
But the general rule is: as long as there is inductance (or capacitance) inside the load, there is an apparent power and a reactive power.
Please note that reactive power is the power responsible for creating the magnetic flux (which in about 90% of cases applies to rotating machines :)
To know the apparent and reactive power on any machine, I'll use an inductive motor as an example,
On every motor there is a name plate, which must indicate the nameplate power (Pn, in W, kW, or HP), the voltage (in V), the phase shift (cos phi), and the nominal current (In, in A).
Those values are for full load;
for lower loads you should measure the current I, compare it to the nominal current In, and compute the ratio A = I/In.
P = Pn x A, Q = Qn x A
Now for the reactive power Q, in var:
Q = P x tan(phi)
phi can be found with any scientific calculator,
Please note that the Harmonics effects are not considered here.
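To make the steps concrete, a small worked sketch in R with invented nameplate numbers (not taken from any real motor), following the relations above:
# Made-up nameplate data, used only to illustrate the calculation described above.
Pn      <- 15000      # nameplate power Pn, W (invented)
cos_phi <- 0.85       # nameplate cos(phi) (invented)
In      <- 28         # nominal full-load current, A (invented)
I_meas  <- 21         # measured current at the actual load, A (invented)
A <- I_meas / In              # load ratio A = I/In, here 0.75
P <- Pn * A                   # P = Pn x A  -> 11250 W
Q <- P * tan(acos(cos_phi))   # Q = P x tan(phi) -> roughly 6970 var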
Take care | {"url":"http://engineering.electrical-equipment.org/forum/general-discussion/how-to-find-apparent-power-and-reactive-power","timestamp":"2014-04-21T08:45:37Z","content_type":null,"content_length":"61115","record_id":"<urn:uuid:02d771ed-5595-4139-bbf6-de849e756564>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
A 240m wide river flows East at 1.40m/s. A swimmer is capable of swimming at 2.40m/s in still water.
a. Determine the velocity of the swimmer if the swimmer followed the shortest distance (in m/s)
b. Determine the time it would take to cross the river if the swimmer followed the shortest distance (in minutes)
c. Determine the time it would take if the swimmer was trying to cross in the shortest time (in minutes)
Velocity of flowing water = 1.40 m/s
Velocity of swimmer in still water = 2.40 m/s (let us assume the swimmer crosses from north to south).
a. To follow the shortest distance (straight across, 240 m), the swimmer must aim partly upstream so that the upstream component of his velocity cancels the 1.40 m/s current. The resultant velocity is then
`sqrt(2.4^2-1.4^2) approx 1.95` m/s, directed straight across the river (he aims about `sin^(-1)(1.4/2.4) approx 35.7` degrees upstream of that direction).
b. time = `240/1.95 approx 123` s `approx 2.05` min (approx.)
c. For the shortest time he simply heads straight across at 2.40 m/s and accepts being carried downstream: time = `240/2.4 = 100` s `approx 1.67` min.
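As a quick numeric check of the three results above (a rough sketch in R):
w  <- 240   # river width, m
vr <- 1.4   # river speed, m/s
vs <- 2.4   # swimmer speed in still water, m/s
sqrt(vs^2 - vr^2)            # (a) about 1.95 m/s straight across
w / sqrt(vs^2 - vr^2) / 60   # (b) about 2.05 minutes
w / vs / 60                  # (c) about 1.67 minutes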
Join eNotes | {"url":"http://www.enotes.com/homework-help/240m-wide-river-flows-east-1-40m-s-swimmer-442732","timestamp":"2014-04-17T18:43:55Z","content_type":null,"content_length":"26232","record_id":"<urn:uuid:698673bc-fedc-4574-9a11-d3984567f38a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
SUSY exists because the number 3/2 can't be missing
Paul Frampton: Off-topic sad news: he gets 4.67 years in prison for drug smuggling; Google News. He may be moved to the U.S. in 2014 if he applies. They found a note, "1grm/200U$S. 2000grms/
400000 U$S", handwritten by Frampton himself on which he calculated the price of the stuff. He admitted he wrote it but only after the stuff was found, weighed, and he was accused. So the curious
Paul calculated what he was supposed to earn, too. When cops accuse you of anything in Argentina, shut up and don't calculate anything! Even more seriously, however, Paul's e-mail to (fake) Ms
Denisa Krajíčková a day before he was caught allegedly "worried about sniffer dogs looking after the small suitcase".
This blog entry elaborates upon a simple point made by Nima Arkani-Hamed in a
recent talk
(and by others). First, look at this IQ test. What is missing in the box?\[
\begin{array}{cccccc} j & 0 & 1/2 & 1 & ??? & 2 \end{array}
\] Those of you who have figured out that \(???=3/2\) earned a ticket and they may continue to read. ;-)
The numbers in the table are the spins of the known elementary particles. The Higgs boson became the first discovered
spinless elementary particle
, one with \(j=0\). Leptons and quarks have \(j=1/2\). The photon, gluon, W-boson, Z-boson – gauge bosons – carry \(j=1\). And a \(j=2\) graviton has to exist because we know that there exist
gravitational waves (see e.g.
1993 Physics Nobel Prize
) and because all energy at the frequency \(\omega\) is inevitably packaged into quanta of energy \(E=\hbar\omega\), because of the most universal laws of quantum mechanics. Why? If all expectation
values etc. \[
\bra\psi L \ket\psi
\] are demanded to be periodic with period \(2\pi / \omega\), it follows that \(\ket\psi\) must be periodic with this period, up to an overall phase. But if \(\ket\psi\) is a linear superposition of
various energy eigenstate terms whose time dependence is \(\exp(Et/ i\hbar)\), it follows that between \(t=0\) and \(t=2\pi/\omega\), the relative phases must return to the original value which means
that \(E_i-E_j=N\hbar\omega\) for any pair of allowed eigenvalues \(E_i,E_j\). If the two states included in the superposition differ by an addition of a particle or particles, the particle(s) must
have \(E=N\hbar\omega\) for \(N\in\ZZ\).
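Restated as a single display (just the algebra of the previous paragraph, with \(T=2\pi/\omega\) the period):\[
\exp\left(\frac{(E_i-E_j)\,T}{i\hbar}\right) = 1
\quad\Rightarrow\quad
\frac{E_i-E_j}{\hbar}\cdot\frac{2\pi}{\omega}\in 2\pi\ZZ
\quad\Rightarrow\quad
E_i-E_j=N\hbar\omega,\qquad N\in\ZZ.
\]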
Again, if you don't understand the argument above sufficiently clearly so that you have eradicated all doubts about the existence of gravitons, I kindly ask you to stop reading because you're not
qualified to study or discuss the allowed spins of elementary particles.
Why gauge symmetries are needed
Now, when we have hopefully gotten rid of 99.9% of would-be readers :-), we may continue with gauge symmetries. This topic has been mentioned many times on this blog but let's make the point once
again. If you deal with fields that manifestly obey the Lorentz symmetry and whose spin is \(j\geq 1\), you critically need a gauge symmetry, otherwise the probabilities would have both signs. But
you don't want the probability of your winning in a lottery to be negative. It's worse than just to lose the money: you would be a ghost, a bad ghost, if your probabilities were negative. And that's
worse than to be a zombie, believe me.
It's not too hard to see why a gauge symmetry is required. Take a creation operator for a particle, \(a^\dagger(k)\), and produce a one-particle state out of the vacuum \(\ket 0\):\[
a^\dagger \ket 0
\] That's nice. We want this state to have a positive (or at least non-negative) squared norm because this squared norm gives you the probability of the projection operator \(1\), the truth, and it
should be 100 percent. For one creation operator, the equality between the norms\[
\braket{0}{0} = \bra 0 a(k)\cdot a^\dagger(k) \ket 0
\] is guaranteed by the commutator below and by the annihilation skill of the annihilation operator:\[
[a(k),a^\dagger(k)]=1,\quad a(k)\ket 0 = 0.
\] Of course, the usual normalization of the creation and annihilation operators involves a \(\delta\)-function but I don't want to go into messy details. Now, the description here is OK for scalars
but what happens when the creation operator has some extra indices related to the spacetime?
Start with the \(j=1/2\) Dirac spinors. The fields have an extra index \(i=1,2,3,4\) and the relevant commutator – well, we need an anticommutator for fermions but it changes nothing about our
discussion – is\[
\{ c_i(k),c^\dagger_j(k) \} = \delta_{ij}
\] The Kronecker delta is actually right but you could make an error – one that would prove that you know something but you don't know it quite well – and write \(\gamma_0\) instead of the identity
matrix. It's because \(\psi\) and the Dirac conjugate \(\bar\psi\) rather than the Hermitian conjugate \(\psi^\dagger=\bar\psi\gamma^0\) are naturally conjugate to produce Kronecker deltas. For
example, \(\bar\psi\psi\) and not \(\psi^\dagger\psi\) is Lorentz-invariant. However, the equal-time commutator has another time index in the game (the time-like vector enters because it's orthogonal
to the slice on which we define the equal-time [anti]commutators), one which translates to \(\gamma^0\), so the two factors of \(\gamma^0\) cancel and you actually do get the Kronecker delta on the
right hand side.
Now, notice that we're kind of lucky that the right hand side is a positively definite matrix \(\delta_{ij}\) rather than an indefinite matrix such as \(\gamma^0_{ij}\) we have mentioned. When you
create particles with \(c^\dagger_i\), they will still have a positive norm. In fact, even when the role of \(c\) and \(c^\dagger\) is interchanged for one-half of the values of the index \(i\),
because you have to fill the Dirac sea and introduce antiparticles as the holes in the Dirac sea, you will still create positive-norm states because the relevant positive definite construct above was
the anticommutator which was symmetric. It just didn't care which of the factors was \(c\) and which of them was \(c^\dagger\).
The percentage of readers who have given up has jumped from 99.9% to 99.99%. The remaining reader, if any, knows this stuff anyway and she or more likely he may notice that we have just seen that \(j
=1/2\) fermionic operators produce nicely positive definite particle states.
But what about \(j=1\)?
The creation and annihilation operators have to carry an extra vector index \(\mu=0,1,2,3\). The commutators or, equivalently, the normalization of one-particle states is\[
\bra 0 a_{\mu}(k)\cdot a_{\nu}^\dagger(k) \ket 0 = C\cdot g_{\mu\nu}.
\] The right hand side has to be proportional to the metric tensor, otherwise the formula would violate the Lorentz symmetry! But the metric tensor has both positive and negative eigenvalues. So some
of the states are positive definite while others are negative definite.
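Written out per polarization (assuming the mostly-plus signature, so \(g_{00}=-1\), and a positive constant \(C\)):\[
\|a_0^\dagger(k)\ket 0\|^2 = C\,g_{00} < 0,
\qquad
\|a_i^\dagger(k)\ket 0\|^2 = C\,g_{ii} > 0,\quad i=1,2,3.
\]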
The latter states would be a problem because they would lead to negative probabilities once again. (Well, it's actually the first time in our story when they're a genuine threat.) In fact, it is such
a big problem that we don't want too much of this problem. It means that the overall sign must be such that the 3 spatial polarizations are positive definite and only the remaining 1 time-like
polarization has a negative squared norm. What do we do with it?
It's simple. We must kill it. But we're physicists, not murderers or terrorists, so we must kill it in an elegant way. An elegant way to kill states (yes, Gazan terrorists, we are able to kill whole
states!) is to make them unphysical. A reason allowing us to delegitimize a state and declare this state unphysical is that this state isn't invariant under a gauge symmetry.
For each value of \(k^\alpha\), we need to kill exactly one polarization. So we must have enough generators of symmetries, one per each \(k^\alpha\) or, equivalently, one per \(x^\beta\). Clearly, we
need a gauge symmetry. We must have a generator \(j^0(x,y,z)\) for each point in the three-dimensional space and we require all the physical states to obey\[
j^0(x,y,z)\ket\psi = 0.
\] Also, states of the form \(j^0(x,y,z)\ket\lambda\) are "pure gauge" and they have a zero norm which is still harmless because probabilities may be zero. The operators \(j^0\) above have to be
symmetry generators (time-like components of a conserved current) and not just some arbitrary operators because we want the labeling "\(\ket\psi\) is unphysical for \(t=0\)" to survive at later times
\(t\), too. That's why the charge – integrated current – has to commute with the Hamiltonian. It has to be a symmetry and because it's localized in space and locally conserved, it's a local or gauge
symmetry. If it were just some random operator, it wouldn't commute with the Hamiltonian and a ghost-free state at \(t=0\) would typically evolve into a state with ghosts at \(t\gt 0\). But we want
to get rid of the zombies permanently!
In this way, not one but two polarizations out of the 4 polarizations of the photon become unphysical and only the transverse two polarizations, \(\ket x\) and \(\ket y\) or, in a different basis, \
(\ket R\) and \(\ket L\) remain physical. Yes, the macro \(\backslash{\rm ket}\) is my most useful \(\rm\LaTeX\) macro I have ever created. It produces lots of cheap yet elegant fun.
Just a comment I won't need: the \(j=1\) field may be given an extra internal index \(j\), an adjoint index, e.g. \(j=1,2,\dots 8\) in QCD, and we will need to get rid of a greater number of ghosts
(8 times in the case of QCD) so the conserved quantities have to possess this extra index, too: the conserved charges become generators of a non-Abelian group, e.g. \(SU(3)\) in QCD. Also, the gauge
bosons may be made massive via the Higgs mechanism. If it is so, they have 3 and not 2 physical polarizations: the meat for the third one is obtained by
eating a Goldstone boson
Because we needed to get rid of one polarization of a photon for each \(k^\alpha\), we needed a current whose total charge is a scalar. What about \(j=2\) fields? If fields have two indices, the
inner products roughly look like\[
\bra 0 a_{\kappa\lambda}(k)\cdot a^\dagger_{\mu\nu}(k)\ket 0 \sim g_{\kappa\mu} g_{\lambda\nu}.
\] There could also be permutations of the indices on the right hand side and/or symmetries, antisymmetries, or other constraints on the tensor. At any rate, we have two copies of the metric tensor.
The one-particle state created by \(a^\dagger_{\mu\nu}\) will have a negative squared norm if exactly one of the two indices \(\mu,\nu\) is timelike. (We avoid the scary scenario in which all the
doubly spatial components have the wrong sign: we couldn't get enough symmetries to kill the beasts.)
Fine, in this case, we need to get rid of the wrong "mixed" components \(a_{i0}\) so roughly speaking (ignoring the single \(a_{00}\) component which actually seems to have the right sign), we need a
whole Lorentz vector of conserved charges. We know what such a vector of conserved charges could be: it could be the energy-momentum vector, associated with the spacetime translations via Noether's
In fact, if your theory is made out of local or nearly point-like particles, the energy-momentum vector is the only conserved vector you may have as long as your theory admits interesting
interactions. For extended objects, you could also consider some kind of a stringy "winding number" but that would require a compactification or infinitely long strings so let's ignore this option.
To summarize, if we want to get rid of the negative-norm polarizations of symmetric \(j=2\) fields, such as the metric tensor, we need a gauge symmetry whose integrated charges transform as spacetime
vectors. We need a conserved energy-momentum generating spacetime translations and we need to make its components local, i.e. make the generators independent for each point \((x,y,z)\) in the space.
In other words, we need the diffeomorphism symmetry!
It's bizarre and cute. Our argument critically depended both on the Lorentz symmetry and quantum mechanics and we derived the existence of the diffeomorphism symmetry in a consistent theory – with
non-negative probabilities – that allows you to create spin-two quanta! Isn't it cool? Gravity and principles of general relativity, perhaps broken in surprising ways, are inevitable parts of any
consistent theory with fields whose spin is \(j=2\).
Why spins higher than two are not fine for elementary particles
What about \(j=3\) or higher? In that case, we would produce an even larger number of wrong-sign polarizations of the one-particle states created by the creation operators transforming as \(j\geq 3\)
tensors. The corresponding conserved charges would have to transform as \(j\geq 2\) tensors. And they have too many components in \(d\geq 4\). In fact, if this high number of tensor components were
conserved, one could prove that interactions are so constrained that they de facto vanish. Any momentum exchange between the lowest-mass scalar particles would violate the conservation laws.
This is the essence of the
Coleman-Mandula theorem
. There can't be conserved charges with spin greater than one. It follows – through our negative-norm-based arguments – that there can't be any fundamental fields with \(j\geq 3\) in your theory.
Well, string theory – and also its currenly fashionable "toy model", the Vasiliev higher-spin theory – circumvents this ban but the ability of these theories to avoid the conclusion critically
depends on their having an infinite number of excitations with arbitrarily high spins \(j\) and their subtle interplay.
Let me mention that fields with spin \(j=5/2\) would have to come with conserved charges with spin \(j=3/2\) which is already too high and prohibits interesting interactions. So \(j=2\) is indeed the
highest spin of "ordinary" fundamental fields.
Why the gauge symmetry for \(j=3/2\) fields is inevitably local supersymmetry
But as we mentioned at the beginning, there actually exists an integer or half-integer non-negative number – an allowed value of the spin – that is smaller than \(j=2\) and that we have omitted: \(j=
Why have we omitted it? Because we were assholes. Some people would like to omit it even when they're reminded about their mistake. Those people are assholes even right now. I won't name all these
Shmoits because I would have to throw up.
So how do our arguments work for \(j=3/2\) fields and what do they tell us about the required symmetries? Well, \(j=3/2\) fields look like \(\psi_{\mu k}\). One index is the Lorentz vector index,
another index is a spinor index. They may be constrained by some extra conditions but those conditions don't kill too many components. Such fields are sometimes called Rarita-Schwinger fields and the
particles they create are gravitinos but this sentence isn't assumed to hold in the following paragraphs.
Combining (and tensoring) our formulae for \(j=1/2\) and \(j=1\), we may determine that the wrong-sign polarizations of the field are created by the components with \(\mu=0\), i.e. \(\psi_{0k}\).
Because the spinor index \(k\) may still take on any value, there is roughly one spinor of the required gauge symmetry generators at each point.
The conserved charges must transform as \(j=1/2\) spinors to allow us to kill the negative-norm states in an elegant way.
Are \(j=1/2\) conserved spinors allowed? Well, they have a smaller spin than the \(j=1\) energy-momentum vector so the answer seems to be Yes and indeed, below 12 spacetime dimensions, the answer is
Yes. There can be conserved spinors. We call them \(Q_k\) where the index is a spinor index. And yes, let's say the word: they're generators of supersymmetry, the supercharges!
They're constructed out of fields that respect the spin-statistics relationship and because they're \(j=1/2\) spinors, they have to be fermionic, Grassmann-valued operators. We may ask what their
anticommutators are. The anticommutator must involve a \(j=1\) conserved object:\[
\{ Q_{m},\bar Q^{n} \} = (\gamma^\mu)_m^n P_\mu+\dots
\] Ignore the bars and the difference between upper and lower indices if you're not experienced in the algebra involving spinors. The anticommutator of two \(j=1/2\) objects may contain a \(j=1\)
object which is conserved as well. And I have already mentioned that the only sensible \(j=1\) "set of charges" is the energy-momentum vector.
(If the anticommutator were zero, the theory would be too trivial once again. Let me avoid this discussion.)
Moreover, we need to make the charges and currents local in spacetime, so we have just derived that a consistent theory with \(j=3/2\) fields has to contain both \(j=3/2\) currents for local
supersymmetry as well as the \(j=2\) current for local coordinate transformations (diffeomorphisms) i.e. the stress-energy tensor. A consistent theory with \(j=3/2\) fields inevitably contains
supergravity – because it contains both Einstein's gravity as well as (local) supersymmetry. Another article (one that has been
written on this blog
, in fact) may be written to prove that string/M-theory is the only consistent completion of supergravity.
Despite the extra restrictions, symmetries, and limitations, \(\NNN=1\) supersymmetric theories (supergravity coupled to super-Yang-Mills etc.) are compatible with all the known experimental
constraints and conditions, assuming a viable choice of parameters. Nature could "discriminate against" the number \(j=3/2\) and omit this number from the list of "low spins" of allowed fundamental
fields. But it would be bizarre, wouldn't it?
There is of course a lot of other circumstantial evidence why supersymmetry exists in Nature:
dark matter
hierarchy problem
gauge coupling unification
its existence implied by string theory
, and so on.
Jesus Off-topic but funny
: The Catholic Church has found a new heretic. Most scholars believe that Jesus was born between 6 years Before Himself and 4 years Before Himself. The new heretic wrote a book in which he claims
that He was actually born several years earlier.
The name of the heretic is Ratzinger and he is employed as the Holy Father in a branch of the church somewhere in Rome, Italy.
This heresy is progress but if Jesus is supposed to have any relationship with Creation whatsoever, there are still 13.73 billion years of adjustments waiting in the pipeline.
15 comments:
1. Ha ha, where is my ticket ... :-D ?!
I look forward to read the rest of the text (from scrolling through I see it should be about my level) .
This article has a nice "feel-good" title which makes me happy :-)
2. Peter F.Nov 23, 2012, 2:31:00 AM
I'm not allowed to read, and in deep shame, by having thought what was missing was 1½. %-((((
3. About Off topic Dr. Frampton: if you ever get caught with a suitcase full of white stuff in Argentina, you must say it was given to you by the secretary of Senator Aníbal Fernández, a guy that 20
years ago had to run from the police hiding inside the trunk of a car when he was mayor in Quilmes, a city near Buenos Aires. They were looking for him on charges of narcotraffic and cocaine
As Fernández is the government unofficial "spokeperson" then the police will tell you "Shut up, keep it quite and you'll be free in a few hours." :-)
4. anna vNov 23, 2012, 6:28:00 AM
I am a great fan of the novels of Jane Austen. In Pride and prejudice there is the character of a Duchess who has a sour faced and sickly daughter sitting by her. Listening to Elizabeth playing
the piano she says the phrase that is relevant to the post :"If my Ann were not sickly she would play the piano perfectly".
So if I were twenty years younger I would understand the argument perfectly :) .
I prefer to trust you. Anyway the whole construct is so tied together that this 3/2 should be inevitable.
5. Can he still do something against this?
I mean, is there some kind of a international court where he can complain?
This stupid verdict was the last data point that was needed to hava a 7 sigma evidence that (at least the legal system of) Argentina corresponds to a banana repuplic. All scientists and other
reasonable educated people should avoid and flee from this country.
6. Marcel van VelzenNov 24, 2012, 10:31:00 AM
Hello Lubos,
I'm a bit confused. If I understand it well, to get a well behaved
spin 3/2 particle theory one has to extend the Poincare-algebra to
the super Poincare algebra. The Q's cannot be constructed if one
sticks to the Poincare-algebra (maybe I'm wrong here). So couldn't
you argue that the apparent absence of a spin 3/2 particle in nature
is a prediction from a theory based only on the Poincare-group, just
like one should not expect interacting j>=3 fundamental particles
in such a theory. Thanks already!
7. Dear Lubos, can you give a comment on a recent paper of Bekenstein on a possible
experimental probe of Planck physics?
8. Right, exactly. The extension of spacetime symmetry from Poincare to super-Poincare is equivalent to peacefully adding a spin-3/2 particle (assuming gravity). Given the fact that it's unnatural
to work without either, because 3/2 would be an unnaturally omitted number, because one would get a huge hierarchy problem, and for other reasons, one may see that Nature fundamentally has
super-Poincare and Nature fundamentally has j=3/2 particles.
9. Hi, please see my (Lumidek) comment at
10. Nice answer you have given over there ;-)
Sabine Hossenfelder seems not to think much of it either :-D
11. Thank you for your answer!
12. Robert RehbockNov 25, 2012, 8:32:00 PM
Their was sufficient evidence of guilt, though. If he were caught similarly in USA he would have likely been sentenced to even more time in a Fed Pen with no chance of early release. Not
disagreeing with your characterization of Argentina. I just think that USA ends far too much to find, prosecute and imprison far too many for drugs, too. Would rather that our resources and views
of criminality were otherwise directed.
13. Hm, maybe you are right somehow.
But nevertheless, I think the a priori probability that drug mafia villains play such bad trick on innocent people, as it happened to Prof. Frampton, are not the same for every country in the
world ...
I still think he has done nothing worse than being too naive and even silly in doing these darn prize calculations.
14. Dear Dilaton,
much of Sabine's description of what's happening in this simple thought experiment is as wrong as Bekenstein's.
In particular, she interprets the "fuzziness at the Planck scale" as the statement that the location of a crystal can't be determined with better-than-Planck-length precision to start with.
But that's a deep misunderstanding. Quantum gravity doesn't imply anything of the sort. There exist arbitrarily high-momentum (boosted) vectors in the Hilbert space describing a crystal, and one
may construct their linear superpositions that are arbitrarily accurately localized.
New "fuzzy effects" near the Planck length only start to occur when one is looking at Planckian proper distances in a rest frame, not when one looks at arbitrary coordinate differences in any
frame. Indeed, new physics that would kick in whenever some coordinate differences get tiny would totally break the Lorentz symmetry and the rules of relativity which doesn't occur in our
Universe (or in our multiverse).
15. Thanks Lumo for these additional explanations.
I just scrolled through Sabine Hossenfelder's post without reading it in detail, since apart from being wrong her reasoning is often too fuzzy and not enough to the point for me to get something
out of it ...
And thanks for pointing out the 2009 TRF article, I've not yet seen this one and it will be a nice lunch time reading for me :-) | {"url":"http://motls.blogspot.com/2012/11/susy-exists-because-number-32-cant-be.html?m=1","timestamp":"2014-04-17T12:29:08Z","content_type":null,"content_length":"128905","record_id":"<urn:uuid:523420a5-4a76-4ead-b96b-51c5c037cb16>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
Indefinite Integrals
June 22nd 2011, 02:45 AM #1
Junior Member
Oct 2010
Johor Bahru
Indefinite Integrals
Given that y= ln cot x, find dy/dx. Hence, find ∫(2-cosec 2x)dx.
Okay basically, I'm stuck on how to differentiate y=ln cot x & then inserting it into the integration equation.
Re: Indefinite Integrals
To get dy/dx, use the chain rule (you will also need to use the quotient rule during this process).
I suggest you review some examples from your class notes or textbook (integration by recognition is the name this 'technique' is often given).
If you need more help, please show what you've done and say where you get stuck.
Re: Indefinite Integrals
Alright, I know how the chain rule works and here goes....
y= ln u ; u=cotx
dy/du= 1/u ; du/dx= -cosec^2 x
but why is d/dx (cot x) = -cosec^2 x? I don't get it.
Re: Indefinite Integrals
Last edited by CaptainBlack; June 22nd 2011 at 04:44 AM.
Re: Indefinite Integrals
Re: Indefinite Integrals
Typo - rather :
And, just in case a picture helps with this part...
Balloon Calculus: standard integrals, derivatives and methods
Re: Indefinite Integrals
Re: Indefinite Integrals
Typo - rather :
And, just in case a picture helps with this part...
Balloon Calculus: standard integrals, derivatives and methods
Perfect. That diagram really sparked my understanding in the derivation of cosec^2 (x).
Well now how to get dy/dx. I know it should be;
dy/dx= (1/cot x)*( -cosec^2 (x) )
= ??
The answer is supposed to be dy/dx = -2 cosec 2x but I don't know how to get it.
Re: Indefinite Integrals
Re: Indefinite Integrals
Re: Indefinite Integrals
sin(2x) = 2 sinx cosx
Re: Indefinite Integrals
Re: Indefinite Integrals
Hi there,
$\frac{1}{\cot(x)}\ (-\csc^2(x))$
$=\ \tan(x)\ \frac{-1}{\sin^2(x)}$
$=\ \frac{- \sin(x)}{\cos(x) \sin^2(x)}$
$=\ \frac{-1}{\sin(x) \cos(x)}$
$=\ \frac{-1}{\frac{1}{2} \sin(2x)}$
$=\ \frac{-2}{\sin(2x)}$
PS you have a typo in your latest (i.e. should be -2 not 2-, maybe that was the bug).
Last edited by tom@ballooncalculus; June 23rd 2011 at 06:46 AM. Reason: ps
Re: Indefinite Integrals
Roger that, thank you.
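For completeness, a short sketch of the remaining step of the original question, using the derivative established above (with an arbitrary constant $C$):
Since $\dfrac{d}{dx}\ln(\cot x) = -2\csc(2x)$, it follows that $\displaystyle\int \csc(2x)\,dx = -\tfrac{1}{2}\ln\left|\cot x\right| + C$, and therefore
$\displaystyle\int\left(2-\csc(2x)\right)dx \;=\; 2x + \tfrac{1}{2}\ln\left|\cot x\right| + C.$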
Johor Bahru | {"url":"http://mathhelpforum.com/calculus/183436-indefinite-integrals.html","timestamp":"2014-04-18T00:28:29Z","content_type":null,"content_length":"72269","record_id":"<urn:uuid:2ab9412b-1a49-451e-ac8c-6f9d11816559>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lessons In Electric Circuits -- Volume V
Chapter 1
NOTE: the symbol "V" ("U" in Europe) is sometimes used to represent voltage instead of "E". In some cases, an author or circuit designer may choose to exclusively use "V" for voltage, never using the
symbol "E." Other times the two symbols are used interchangeably, or "E" is used to represent voltage from a power source while "V" is used to represent voltage across a load (voltage "drop").
"The algebraic sum of all voltages in a loop must equal zero."
Kirchhoff's Voltage Law (KVL)
"The algebraic sum of all currents entering and exiting a node must equal zero."
Kirchhoff's Current Law (KCL)
• Components in a series circuit share the same current. I[total] = I[1] = I[2] = . . . I[n]
• Total resistance in a series circuit is equal to the sum of the individual resistances, making it greater than any of the individual resistances. R[total] = R[1] + R[2] + . . . R[n]
• Total voltage in a series circuit is equal to the sum of the individual voltage drops. E[total] = E[1] + E[2] + . . . E[n]
• Components in a parallel circuit share the same voltage. E[total] = E[1] = E[2] = . . . E[n]
• Total resistance in a parallel circuit is less than any of the individual resistances. R[total] = 1 / (1/R[1] + 1/R[2] + . . . 1/R[n])
• Total current in a parallel circuit is equal to the sum of the individual branch currents. I[total] = I[1] + I[2] + . . . I[n]
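A quick sketch of these series/parallel rules in R, with three arbitrary example resistors:
R <- c(100, 220, 470)   # ohms (arbitrary example values)
sum(R)                  # series total: 790 ohms (greater than any single resistor)
1 / sum(1 / R)          # parallel total: about 60 ohms (less than any single resistor)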
A formula for capacitance in picofarads using practical dimensions:
Wheeler's formulas for inductance of air core coils which follow are useful for radio frequency inductors. The following formula for the inductance of a single layer air core solenoid coil is
accurate to approximately 1% for 2r/l < 3. The thick coil formula is 1% accurate when the denominator terms are approximately equal. Wheeler's spiral formula is 1% accurate for c>0.2r. While this is
a "round wire" formula, it may still be applicable to printed circuit spiral inductors at reduced accuracy.
The inductance in henries of a square printed circuit inductor is given by two formulas where p=q, and p≠q.
The wire table provides "turns per inch" for enamel magnet wire for use with the inductance formulas for coils.
Time constant in seconds = RC
Time constant in seconds = L/R
Z[L] = R + jX[L]
Z[C] = R - jX[C]
NOTE: All impedances must be calculated in complex number form for these equations to work.
f[resonant] = 1/(2π√(LC))
NOTE: This equation applies to a non-resistive LC circuit. In circuits containing resistance as well as inductance and capacitance, this equation applies only to series configurations and to parallel
configurations where R is very small.
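As an illustration of working in complex number form, a short R sketch with arbitrary component values:
f  <- 1000                          # Hz (arbitrary)
R  <- 50; L <- 10e-3; C <- 2.2e-6   # ohms, henries, farads (arbitrary)
XL <- 2 * pi * f * L                # inductive reactance
XC <- 1 / (2 * pi * f * C)          # capacitive reactance
Z  <- complex(real = R, imaginary = XL - XC)   # series impedance Z = R + j(XL - XC)
Mod(Z); Arg(Z) * 180 / pi           # magnitude (ohms) and phase angle (degrees)
1 / (2 * pi * sqrt(L * C))          # resonant frequency of the L-C pair, about 1073 Hz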
• Metric prefixes
• Yotta = 10^24 Symbol: Y
• Zetta = 10^21 Symbol: Z
• Exa = 10^18 Symbol: E
• Peta = 10^15 Symbol: P
• Tera = 10^12 Symbol: T
• Giga = 10^9 Symbol: G
• Mega = 10^6 Symbol: M
• Kilo = 10^3 Symbol: k
• Hecto = 10^2 Symbol: h
• Deca = 10^1 Symbol: da
• Deci = 10^-1 Symbol: d
• Centi = 10^-2 Symbol: c
• Milli = 10^-3 Symbol: m
• Micro = 10^-6 Symbol: µ
• Nano = 10^-9 Symbol: n
• Pico = 10^-12 Symbol: p
• Femto = 10^-15 Symbol: f
• Atto = 10^-18 Symbol: a
• Zepto = 10^-21 Symbol: z
• Yocto = 10^-24 Symbol: y
• Conversion factors for temperature
• ^oF = (^oC)(9/5) + 32
• ^oC = (^oF - 32)(5/9)
• ^oR = ^oF + 459.67
• ^oK = ^oC + 273.15
Conversion equivalencies for volume
1 US gallon (gal) = 231.0 cubic inches (in^3) = 4 quarts (qt) = 8 pints (pt) = 128 fluid ounces (fl. oz.) = 3.7854 liters (l)
1 Imperial gallon (gal) = 160 fluid ounces (fl. oz.) = 4.546 liters (l)
Conversion equivalencies for distance
1 inch (in) = 2.540000 centimeter (cm)
Conversion equivalencies for velocity
1 mile per hour (mi/h) = 88 feet per minute (ft/m) = 1.46667 feet per second (ft/s) = 1.60934 kilometer per hour (km/h) = 0.44704 meter per second (m/s) = 0.868976 knot (knot -- international)
Conversion equivalencies for weight
1 pound (lb) = 16 ounces (oz) = 0.45359 kilogram (kg)
Conversion equivalencies for force
1 pound-force (lbf) = 4.44822 newton (N)
Acceleration of gravity (free fall), Earth standard
9.806650 meters per second per second (m/s^2) = 32.1740 feet per second per second (ft/s^2)
Conversion equivalencies for area
1 acre = 43560 square feet (ft^2) = 4840 square yards (yd^2) = 4046.86 square meters (m^2)
Conversion equivalencies for pressure
1 pound per square inch (psi) = 2.03603 inches of mercury (in. Hg) = 27.6807 inches of water (in. W.C.) = 6894.757 pascals (Pa) = 0.0680460 atmospheres (Atm) = 0.0689476 bar (bar)
Conversion equivalencies for energy or work
1 british thermal unit (BTU -- "International Table") = 251.996 calories (cal -- "International Table") = 1055.06 joules (J) = 1055.06 watt-seconds (W-s) = 0.293071 watt-hour (W-hr) = 1.05506 x
10^10 ergs (erg) = 778.169 foot-pound-force (ft-lbf)
Conversion equivalencies for power
1 horsepower (hp -- 550 ft-lbf/s) = 745.7 watts (W) = 2544.43 british thermal units per hour (BTU/hr) = 0.0760181 boiler horsepower (hp -- boiler)
Conversion equivalencies for motor torque
Locate the row corresponding to known unit of torque along the left of the table. Multiply by the factor under the column for the desired units. For example, to convert 2 oz-in torque to n-m, locate
oz-in row at table left. Locate 7.062 x 10^-3 at intersection of desired n-m units column. Multiply 2 oz-in x (7.062 x 10^-3 ) = 14.12 x 10^-3 n-m.
Converting between units is easy if you have a set of equivalencies to work with. Suppose we wanted to convert an energy quantity of 2500 calories into watt-hours. What we would need to do is find a
set of equivalent figures for those units. In our reference here, we see that 251.996 calories is physically equal to 0.293071 watt hour. To convert from calories into watt-hours, we must form a
"unity fraction" with these physically equal figures (a fraction composed of different figures and different units, the numerator and denominator being physically equal to one another), placing the
desired unit in the numerator and the initial unit in the denominator, and then multiply our initial value of calories by that fraction.
Since both terms of the "unity fraction" are physically equal to one another, the fraction as a whole has a physical value of 1, and so does not change the true value of any figure when multiplied by
it. When units are canceled, however, there will be a change in units. For example, 2500 calories multiplied by the unity fraction of (0.293071 w-hr / 251.996 cal) = 2.9075 watt-hours.
The "unity fraction" approach to unit conversion may be extended beyond single steps. Suppose we wanted to convert a fluid flow measurement of 175 gallons per hour into liters per day. We have two
units to convert here: gallons into liters, and hours into days. Remember that the word "per" in mathematics means "divided by," so our initial figure of 175 gallons per hour means 175 gallons
divided by hours. Expressing our original figure as such a fraction, we multiply it by the necessary unity fractions to convert gallons to liters (3.7854 liters = 1 gallon), and hours to days (1 day
= 24 hours). The units must be arranged in the unity fraction in such a way that undesired units cancel each other out above and below fraction bars. For this problem it means using a
gallons-to-liters unity fraction of (3.7854 liters / 1 gallon) and a hours-to-days unity fraction of (24 hours / 1 day):
Our final (converted) answer is 15898.68 liters per day.
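The same two conversions can be checked numerically; a quick sketch in R:
2500 * (0.293071 / 251.996)     # calories to watt-hours: about 2.9075 W-hr
175 * 3.7854 * 24               # gallons per hour to liters per day: about 15898.68 l/day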
Conversion factors were found in the 78^th edition of the CRC Handbook of Chemistry and Physics, and the 3^rd edition of Bela Liptak's Instrument Engineers' Handbook -- Process Measurement and Analysis.
Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information.
Gerald Gardner (January 2003): Addition of Imperial gallons conversion.
Lessons In Electric Circuits copyright (C) 2000-2014 Tony R. Kuphaldt, under the terms and conditions of the Design Science License. | {"url":"http://www.ibiblio.org/kuphaldt/electricCircuits/Ref/REF_1.html","timestamp":"2014-04-21T09:06:04Z","content_type":null,"content_length":"18446","record_id":"<urn:uuid:c51f0c85-dbb2-4d8e-84fb-f7a843891568>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
(a) Find the solution of the given initial value problem in explicit form.
(b) Determine (at least approximately) the interval in which the solution is defined.
Image text transcribed for accessibility: y' = x(x^2 + 1)/(18y^17), y(0) = -1/2^(1/9)
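A sketch of the solution, assuming the statement reads y' = x(x^2 + 1)/(18y^17) with y(0) = -1/2^(1/9):
Separating variables, 18y^17 dy = x(x^2 + 1) dx, so y^18 = (x^2 + 1)^2/4 + C.
The initial condition gives y(0)^18 = 2^(-2) = 1/4, hence C = 0 and y^18 = [(x^2 + 1)/2]^2.
(a) Taking the negative root to match y(0) < 0: y = -[(x^2 + 1)/2]^(1/9).
(b) This expression is defined (and nonzero) for every real x, so the solution is defined on (-infinity, infinity).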
Advanced Math | {"url":"http://www.chegg.com/homework-help/questions-and-answers/find-solution-given-initial-value-problem-explicit-form-b-determine-least-approximately-in-q4851184","timestamp":"2014-04-20T01:40:16Z","content_type":null,"content_length":"21436","record_id":"<urn:uuid:9f7ae26b-06a8-4ed4-9d7b-305ea428f593>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unit root versus breaking trend: Perron's criticism
I came across an ingenious simulation by Perron during my Time-series lecture which I thought was worth sharing. The idea was to put your model to a further test of breaking trend before accepting
the null of unit root. Let me try and illustrate this in simple language.
A non-stationary time series is one that has its mean changing with time. In other words, if you randomly choose a bunch of values from the series from the middle, you would end up with different
values of the mean for different bunches. In short, there is a trend in the data which needs to be removed to make it stationary before proceeding with our analysis (it's far easier to work with stationary
time series). In order to deal with non-stationary time series one has to be careful about the kind of non-stationarity that is exhibited by the variable. Two corrections for non-stationarity include
(1) Trend stationary (TS) models, which are suitable for models that have a deterministic trend and fluctuations about that deterministic trend. This can be fit by a simple z[t] = a + b*t + e[t],
where e[t] ~ ARMA(p,q)
(2) Difference stationary (DS) models, which are suitable for models having a stochastic trend. The DS models are appropriate for models that have a unit root in the AR polynomial. Unit root in the
AR polynomial means that the trend part in the series cannot be represented by a simple linear trend with time (a + b*t), and the correct representation is (1 – B)z[t] = a + e[t], where e[t] is a stationary ARMA(p,q) process.
The asymptotic properties of the estimates, forecasts and forecast errors vary substantially between the TS and DS models. (For the ones interested in the algebra behind this, lecture notes of Dr.
Krishnan are here.) Therefore it is important for us to be sure that the model belongs to the appropriate class before we fit a TS or DS model. This is the reason why the clash between the two schools of
thought has bred enormous literature and discussion on the methodology to check for unit roots. One could endlessly argue about these discussions, but I want to illustrate the genius of
Perron, who criticized the idea of fitting a DS model to a series that could have structural breaks. He said that you ought to take the structural break into account before you check for unit
roots; if you don't do so, you might end up accepting the null of unit root even when the true data generating process (DGP) is a trend stationary process. He illustrated this using a simple, but
very elegant, simulation exercise. Madhav and I, along with fine-tuning on the codes provided by Utkarsh, replicated this exercise with R.
The steps involved are as follows:
(1) Simulate 1000 series with the DGP as:
z[t] = u[1] + (u[2] – u[1])*DU[t] + b*t + e[t]
where e[t] are i.i.d. innovations and t = 1, 2, 3, ..., 100. For simplicity I have assumed b = 1 and u[1] = 0.
(2) Assume that there is a crash at time T[b] = 50 and the entire series comes down by amount u[2].
## Simulating a trend stationary model with a crash in the intercept ##
t <- c(1:100) # specifying the time
dummy <- as.numeric(ifelse(t <= 50, 0, 1)) # specifying the dummy for trend break at T = 50
z <- ts(t - 15*dummy + rnorm(100, mean = 0, sd = 3))# This is the trend stationary model with break in trend
x <- ts(t - 15*dummy) # This is just the trend line that we see in "red" in the plot below
plot(z, main = "Simulated series with break at T = 50")
lines(x, col = "red") ## Plotting a sample of the model that we have simulated
(3) For these simulations compute the autoregressive coefficient, “rho” in the regression:
z[t] = u + b*t + rho*z[t-1] + e[t]
(4) Plot the cumulative distribution function (c.d.f) of “rho” for different values of u[2] (crash).
## Now we will simulate the sample data above 1000 times and check for unit roots for each of these samples ##
# For simplicity we define a function to generate the "rho's" for each of the simulated series
sim <- function(crash)   ## Function name "sim"; "crash" is the value of u[2] in equation (1)
{
  d <- ts(t - crash*dummy + rnorm(100, mean = 0, sd = 3))   ## saving the simulated series in "d"
  trend <- lm(d ~ t)                                        ## remove the trend from the simulated series
  res <- ar(ts(trend$residuals), order = 1, aic = FALSE)    ## fit an AR(1) model to the residue obtained after detrending the series
  if (length(res$ar) < 1) 0 else res$ar                     ## return the ar coefficient of the fitted AR(1) model above
}
## Generate "rho's" for different magnitude of crash by
simply using the sim() function defined above
temp1 <- replicate(n, sim(10))
temp2 <- replicate(n, sim(15))
temp3 <- replicate(n, sim(20))
temp4 <- replicate(n, sim(35))
## Sort the values of "rho"; we do this to plot the CDF as we will see shortly
temp1.1 <- sort(temp1)
temp2.1 <- sort(temp2)
temp3.1 <- sort(temp3)
temp4.1 <- sort(temp4)
y <- seq(from=0, to=1, length.out=n)   ## This is how I define the y-axis of my CDF, which is basically the cumulative probabilities
## Plotting the CDFs of rho for the different magnitudes of crash in one plot.
plot(c(min(temp1.1), max(temp4.1)), c(0, 1), type='n', xlab = "Rho", ylab= "Probability", main = "CDF of 'Rho' for different magnitudes of crashes")
lines(temp1.1, y, type = 'l', col = 'red')
lines(temp2.1, y, type = 'l', col = 'green')
lines(temp3.1, y, type = 'l', col = 'blue')
lines(temp4.1, y, type = 'l', col = 'black')
b <- c("10 unit crash", "15 unit crash", "20 unit crash", "35 unit crash")
legend("topleft", b , cex=0.5, col=c("red", "green", "blue", "black"), lwd=2, bty="n")
An interesting observation that we make (or rather Perron made) is that the c.d.f. of our autoregressive coefficient "rho" tends more towards unity with an increase in the magnitude of the crash. What this
means is that as the magnitude of the crash increases, the probability of accepting the (false) null of unit root increases. I say the false null because I know the true DGP is a trend
stationary one.
This idea of Perron was criticised on the ground that he was specifying the break point (T[b]) exogenously, that is, from outside the DGP. Frankly speaking I do not understand why this was taken as a
criticism. I think fixing the break point exogenously was a good way of fixing it with economic intuition and not making it a purely statistical exercise. Some researchers (I don't understand why)
termed this (simulation) illustration a "data mining" exercise, and improved it by selecting the break point (T[b]) endogenously (by Zivot and Andrews as mentioned in the lecture notes).
I would hate to impose my opinion here but I feel this was a very elegant and logical way of driving home the point that the null of unit root should be accepted for your sample if and only if your
model stands the test of extreme rigour and not otherwise, and the rigour could be imposed exogenously with economic intuition too.
P.S. Perron did a similar simulation for a breaking-trend model, i.e. where the slope of the model has a structural break. The codes would be quite similar to the ones given above; in fact it would be a
good practice if you could do the similar simulation for a breaking trend. In case you do want to try but face any issues please feel free to post/email your queries.
Criticism and discussions welcome.
4 comments:
1. Shreyes, I wish I could understand this post, but I don't. My level in R does not allow me to understand the code, it is nevertheless imperative for me to understand it.
If I have your email address, I'll give you some suggestions.
Thanks, though :(
1. If you are the student of time series analysis I would urge you to read through the lecture noted of Dr. Krishnan that I have mentioned above in the post. (it also has the algebra behind the
ADF test) It would help you appreciate the background of the discussion apart from the R codes.
Feel free to email me at shreyes.upadhyay@gmail.com for any suggestions.
2. Oh yes, I am currently studying time series analysis and that's why I love your blogspot because I have found so many relevant posts that have greatly helped me.
2. Great work man very helpful. do know wer i can find help with the lm unit root test wit 2 breaks or more?? | {"url":"http://programming-r-pro-bro.blogspot.com/2011/11/unit-root-versus-breaking-trend-perrons.html","timestamp":"2014-04-19T17:32:39Z","content_type":null,"content_length":"132366","record_id":"<urn:uuid:09b0b930-de6c-48ee-93ba-b77f1d634a09>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from March 2009 on On Clojure
Before moving on to the more advanced aspects of monads, let’s recapitulate what defines a monad (see part 1 and part 2 for explanations):
1. A data structure that represents the result of a computation, or the computation itself. We haven’t seen an example of the latter case yet, but it will come soon.
2. A function m-result that converts an arbitrary value to a monadic data structure equivalent to that value.
3. A function m-bind that binds the result of a computation, represented by the monadic data structure, to a name (using a function of one argument) to make it available in the following
computational step.
Taking the sequence monad as an example, the data structure is the sequence, representing the outcome of a non-deterministic computation, m-result is the function list, which converts any value into
a list containing just that value, and m-bind is a function that executes the remaining steps once for each element in a sequence, and removes one level of nesting in the result.
The three ingredients above are what defines a monad, under the condition that the three monad laws are respected. Some monads have two additional definitions that make it possible to perform
additional operations. These two definitions have the names m-zero and m-plus. m-zero represents a special monadic value that corresponds to a computation with no result. One example is nil in the
maybe monad, which typically represents a failure of some kind. Another example is the empty sequence in the sequence monad. The identity monad is an example of a monad that has no m-zero.
m-plus is a function that combines the results of two or more computations into a single one. For the sequence monad, it is the concatenation of several sequences. For the maybe monad, it is a
function that returns the first of its arguments that is not nil.
There is a condition that has to be satisfied by the definitions of m-zero and m-plus for any monad:
(= (m-plus m-zero monadic-expression)
   (m-plus monadic-expression m-zero)
   monadic-expression)
In words, combining m-zero with any monadic expression must yield the same expression. You can easily verify that this is true for the two examples (maybe and sequence) given above.
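A quick REPL check of both operations in the two monads mentioned above (a minimal sketch; it assumes the monad library is loaded as usual):

(with-monad sequence-m
  (m-plus (list 1 2) m-zero (list 3)))   ; => (1 2 3), m-zero being the empty sequence

(with-monad maybe-m
  (m-plus nil 42 7))                     ; => 42, the first non-nil value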
One benefit of having an m-zero in a monad is the possibility to use conditions. In the first part, I promised to return to the :when clauses in Clojure’s for forms, and now the time has come to
discuss them. A simple example is
(for [a (range 5)
:when (odd? a)]
(* 2 a))
The same construction is possible with domonad:
(domonad sequence-m
[a (range 5)
:when (odd? a)]
(* 2 a))
Recall that domonad is a macro that translates a let-like syntax into a chain of calls to m-bind ending in a call to m-result. The clause a (range 5) becomes
(m-bind (range 5) (fn [a] remaining-steps))
where remaining-steps is the transformation of the rest of the domonad form. A :when clause is of course treated specially, it becomes
(if predicate remaining-steps m-zero)
Our small example thus expands to
(m-bind (range 5) (fn [a]
(if (odd? a) (m-result (* 2 a)) m-zero)))
Inserting the definitions of m-bind, m-result, and m-zero, we finally get
(apply concat (map (fn [a]
(if (odd? a) (list (* 2 a)) (list))) (range 5)))
The result of map is a sequence of lists that have zero or one elements: zero for even values (the value of m-zero) and one for odd values (produced by m-result). concat makes a single flat list out
of this, which contains only the elements that satisfy the :when clause.
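Just to confirm, evaluating the original domonad form (or its expansion above) gives the doubled odd values:

(domonad sequence-m
  [a (range 5)
   :when (odd? a)]
  (* 2 a))
;; => (2 6)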
As for m-plus, it is in practice used mostly with the maybe and sequence monads, or with variations of them. A typical use would be a search algorithm (think of a parser, a regular expression search,
a database query) that can succeed (with one or more results) or fail (no results). m-plus would then be used to pursue alternative searches and combine the results into one (sequence monad), or to
continue searching until a result is found (maybe monad). Note that it is perfectly possible in principle to have a monad with an m-zero but no m-plus, though in all common cases an m-plus can be
defined as well if an m-zero is known.
After this bit of theory, let’s get acquainted with more monads. In the beginning of this part, I mentioned that the data structure used in a monad does not always represent the result(s) of a
computational step, but sometimes the computation itself. An example of such a monad is the state monad, whose data structure is a function.
The state monad’s purpose is to facilitate the implementation of stateful algorithms in a purely functional way. Stateful algorithms are algorithms that require updating some variables. They are of
course very common in imperative languages, but not compatible with the basic principle of pure functional programs which should not have mutable data structures. One way to simulate state changes
while remaining purely functional is to have a special data item (in Clojure that would typically be a map) that stores the current values of all mutable variables that the algorithm refers to. A
function that in an imperative program would modify a variable now takes the current state as an additional input argument and returns an updated state along with its usual result. The changing state
thus becomes explicit in the form of a data item that is passed from function to function as the algorithm’s execution progresses. The state monad is a way to hide the state-passing behind the scenes
and write an algorithm in an imperative style that consults and modifies the state.
The state monad differs from the monads that we have seen before in that its data structure is a function. This is thus a case of a monad whose data structure represents not the result of a
computation, but the computation itself. A state monad value is a function that takes a single argument, the current state of the computation, and returns a vector of length two containing the result
of the computation and the updated state after the computation. In practice, these functions are typically closures, and what you use in your program code are functions that create these closures.
Such state-monad-value-generating functions are the equivalent of statements in imperative languages. As you will see, the state monad allows you to compose such functions in a way that makes your
code look perfectly imperative, even though it is still purely functional!
Let’s start with a simple but frequent situation: the state that your code deals with takes the form of a map. You may consider that map to be a namespace in an imperative languages, with each key
defining a variable. Two basic operations are reading the value of a variable, and modifying that value. They are already provided in the Clojure monad library, but I will show them here anyway
because they make nice examples.
First, we look at fetch-val, which retrieves the value of a variable:
(defn fetch-val [key]
(fn [s]
[(key s) s]))
Here we have a simple state-monad-value-generating function. It returns a function of a state variable s which, when executed, returns a vector of the return value and the new state. The return value
is the value corresponding to the key in the map that is the state value. The new state is just the old one – a lookup should not change the state of course.
Next, let’s look at set-val, which modifies the value of a variable and returns the previous value:
(defn set-val [key val]
(fn [s]
(let [old-val (get s key)
new-s (assoc s key val)]
[old-val new-s])))
The pattern is the same again: set-val returns a function of state s that, when executed, returns the old value of the variable plus an updated state map in which the new value is the given one.
With these two ingredients, we can start composing statements. Let’s define a statement that copies the value of one variable into another one and returns the previous value of the modified variable:
(defn copy-val [from to]
  (domonad state-m
    [from-val   (fetch-val from)
     old-to-val (set-val to from-val)]
    old-to-val))
What is the result of copy-val? A state-monad value, of course: a function of a state variable s that, when executed, returns the old value of variable to plus the state in which the copy has taken
place. Let’s try it out:
(let [initial-state {:a 1 :b 2}
      computation (copy-val :b :a)
      [result final-state] (computation initial-state)]
  final-state)
We get {:a 2, :b 2}, as expected. But how does it work? To understand the state monad, we need to look at its definitions for m-result and m-bind, of course.
First, m-result, which does not contain any surprises: it returns a function of a state variable s that, when executed, returns the result value v and the unchanged state s:
(defn m-result [v] (fn [s] [v s]))
The definition of m-bind is more interesting:
(defn m-bind [mv f]
(fn [s]
(let [[v ss] (mv s)]
((f v) ss))))
Obviously, it returns a function of a state variable s. When that function is executed, it first runs the computation described by mv (the first ‘statement’ in the chain set up by m-bind) by applying
it to the state s. The return value is decomposed into result v and new state ss. The result of the first step, v, is injected into the rest of the computation by calling f on it (like for the other
m-bind functions that we have seen). The result of that call is of course another state-monad value, and thus a function of a state variable. When we are inside our (fn [s] ...), we are already at
the execution stage, so we have to call that function on the state ss, the one that resulted from the execution of the first computational step.
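To see the two definitions working together, here is a small check at the REPL (a sketch using the fetch-val function defined above; step is just an illustrative name):

(with-monad state-m
  (let [step (m-bind (fetch-val :a)
                     (fn [v] (m-result (inc v))))]
    (step {:a 41})))
;; => [42 {:a 41}]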
The state monad is one of the most basic monads, of which many variants are in use. Usually such a variant adds something to m-bind that is specific to the kind of state being handled. An example is
the stream monad in clojure.contrib.stream-utils. (NOTE: the stream monad has not been migrated to the new Clojure contrib library set.) Its state describes a stream of data items, and the m-bind
function checks for invalid values and for the end-of-stream condition in addition to what the basic m-bind of the state monad does.
A variant of the state monad that is used so frequently that it has become one of the standard monads is the writer monad. Its state is an accumulator (any type implementing the protocol
writer-monad-protocol, for example strings, lists, vectors, and sets), to which computations can add something by calling the function write. The name comes from a particularly popular application:
logging. Take a basic computation in the identity monad, for example (remember that the identity monad is just Clojure’s built-in let). Now assume you want to add a protocol of the computation in the
form of a list or a string that accumulates information about the progress of the computation. Just change the identity monad to the writer monad, and add calls to write where required!
Here is a concrete example: the well-known Fibonacci function in its most straightforward (and most inefficient) implementation:
(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)]
      (+ (fib n1) (fib n2)))))
Let’s add some protocol of the computation in order to see which calls are made to arrive at the final result. First, we rewrite the above example a bit to make every computational step explicit:
(defn fib [n]
  (if (< n 2)
    n
    (let [n1 (dec n)
          n2 (dec n1)
          f1 (fib n1)
          f2 (fib n2)]
      (+ f1 f2))))
Second, we replace let by domonad and choose the writer monad with a vector accumulator:
(with-monad (writer-m [])
  (defn fib-trace [n]
    (if (< n 2)
      (m-result n)
      (domonad
        [n1 (m-result (dec n))
         n2 (m-result (dec n1))
         f1 (fib-trace n1)
         _  (write [n1 f1])
         f2 (fib-trace n2)
         _  (write [n2 f2])]
        (+ f1 f2)))))
Finally, we run fib-trace and look at the result:
(fib-trace 3)
[2 [[1 1] [0 0] [2 1] [1 1]]]
The first element of the return value, 2, is the result of the function fib. The second element is the protocol vector containing the arguments and results of the recursive calls.
Note that it is sufficient to comment out the lines with the calls to write and change the monad to identity-m to obtain a standard fib function with no protocol – try it out for yourself!
Part 4 will show you how to define your own monads by combining monad building blocks called monad transformers. As an illustration, I will explain the probability monad and how it can be used for
Bayesian estimates when combined with the maybe-transformer.
A monad tutorial for Clojure programmers (part 2)
In the first part of this tutorial, I have introduced the two most basic monads: the identity monad and the maybe monad. In this part, I will continue with the sequence monad, which will be the
occasion to explain the role of the mysterious m-result function. I will also show a couple of useful generic monad operations.
One of the most frequently used monads is the sequence monad (known in the Haskell world as the list monad). It is in fact so common that it is built into Clojure as well, in the form of the for
form. Let’s look at an example:
(for [a (range 5)
b (range a)]
(* a b))
A for form resembles a let form not only syntactically. It has the same structure: a list of binding expressions, in which each expression can use the bindings from the preceding ones, and a final
result expressions that typically depends on all the bindings as well. The difference between let and for is that let binds a single value to each symbol, whereas for binds several values in
sequence. The expressions in the binding list must therefore evaluate to sequences, and the result is a sequence as well. The for form can also contain conditions in the form of :when and :while
clauses, which I will discuss later. From the monad point of view of composable computations, the sequences are seen as the results of non-deterministic computations, i.e. computations that have more
than one result.
Using the monad library, the above loop is written as
(domonad sequence-m
[a (range 5)
b (range a)]
(* a b))
Since we already know that the domonad macro expands into a chain of m-bind calls ending in an expression that calls m-result, all that remains to be explained is how m-bind and m-result are defined
to obtain the desired looping effect.
As we have seen before, m-bind calls a function of one argument that represents the rest of the computation, with the function argument representing the bound variable. To get a loop, we have to call
this function repeatedly. A first attempt at such an m-bind function would be
(defn m-bind-first-try [sequence function]
(map function sequence))
Let’s see what this does for our example:
(m-bind-first-try (range 5) (fn [a]
(m-bind-first-try (range a) (fn [b]
(* a b)))))
This yields (() (0) (0 2) (0 3 6) (0 4 8 12)), whereas the for form given above yields (0 0 2 0 3 6 0 4 8 12). Something is not yet quite right. We want a single flat result sequence, but what we get
is a nested sequence whose nesting level equals the number of m-bind calls. Since m-bind introduces one level of nesting, it must also remove one. That sounds like a job for concat. So let’s try
(defn m-bind-second-try [sequence function]
(apply concat (map function sequence)))
(m-bind-second-try (range 5) (fn [a]
(m-bind-second-try (range a) (fn [b]
(* a b)))))
This is worse: we get an exception. Clojure tells us:
java.lang.IllegalArgumentException: Don't know how to create ISeq from: Integer
Back to thinking!
Our current m-bind introduces a level of sequence nesting and also takes one away. Its result therefore has as many levels of nesting as the return value of the function that is called. The final
result of our expression has as many nesting levels as (* a b) – which means none at all. If we want one level of nesting in the result, no matter how many calls to m-bind we have, the only solution
is to introduce one level of nesting at the end. Let’s try a quick fix:
(m-bind-second-try (range 5) (fn [a]
(m-bind-second-try (range a) (fn [b]
(list (* a b))))))
This works! Our (fn [b] ...) always returns a one-element list. The inner m-bind thus creates a sequence of one-element lists, one for each value of b, and concatenates them to make a flat list. The
outermost m-bind then creates such a list for each value of a and concatenates them to make another flat list. The result of each m-bind thus is a flat list, as it should be. And that illustrates
nicely why we need m-result to make a monad work. The final definition of the sequence monad is thus given by
(defn m-bind [sequence function]
(apply concat (map function sequence)))
(defn m-result [value]
(list value))
The role of m-result is to turn a bare value into the expression that, when appearing on the right-hand side in a monadic binding, binds the symbol to that value. This is one of the conditions that a
pair of m-bind and m-result functions must fulfill in order to define a monad. Expressed as Clojure code, this condition reads
(= (m-bind (m-result value) function)
(function value))
There are two more conditions that complete the three monad laws. One of them is
(= (m-bind monadic-expression m-result)
   monadic-expression)
with monadic-expression standing for any expression valid in the monad under consideration, e.g. a sequence expression for the sequence monad. This condition becomes clearer when expressed using the
domonad macro:
(= (domonad
     [x monadic-expression]
     x)
   monadic-expression)
The final monad law postulates associativity:
(= (m-bind (m-bind monadic-expression
                   function1)
           function2)
   (m-bind monadic-expression
           (fn [x] (m-bind (function1 x)
                           function2))))
Again this becomes a bit clearer using domonad syntax:
(= (domonad
     [y (domonad
          [x monadic-expression]
          (function1 x))]
     (function2 y))
   (domonad
     [x monadic-expression
      y (m-result (function1 x))]
     (function2 y)))
It is not necessary to remember the monad laws for using monads, they are of importance only when you start to define your own monads. What you should remember about m-result is that (m-result x)
represents the monadic computation whose result is x. For the sequence monad, this means a sequence with the single element x. For the identity monad and the maybe monad, which I have presented in
the first part of the tutorial, there is no particular structure to monadic expressions, and therefore m-result is just the identity function.
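As a quick illustration at the REPL (nothing new here, just the three m-result behaviours described above):

(with-monad sequence-m (m-result 42))   ; => (42)
(with-monad maybe-m    (m-result 42))   ; => 42
(with-monad identity-m (m-result 42))   ; => 42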
Now it’s time to relax: the most difficult material has been covered. I will return to monad theory in the next part, where I will tell you more about the :when clauses in for loops. The rest of this
part will be of a more pragmatic nature.
You may have wondered what the point of the identity and sequence monads is, given that Clojure already contains fully equivalent forms. The answer is that there are generic operations on
computations that have an interpretation in any monad. Using the monad library, you can write functions that take a monad as an argument and compose computations in the given monad. I will come back
to this later with a concrete example. The monad library also contains some useful predefined operations for use with any monad, which I will explain now. They all have names starting with the prefix m-.
Perhaps the most frequently used generic monad function is m-lift. It converts a function of n standard value arguments into a function of n monadic expressions that returns a monadic expression. The
new function contains implicit m-bind and m-result calls. As a simple example, take
(def nil-respecting-addition
(with-monad maybe-m
(m-lift 2 +)))
This is a function that returns the sum of two arguments, just like + does, except that it automatically returns nil when either of its arguments is nil. Note that m-lift needs to know the number of
arguments that the function has, as there is no way to obtain this information by inspecting the function itself.
To illustrate how m-lift works, I will show you an equivalent definition in terms of domonad:
(defn nil-respecting-addition
[x y]
(domonad maybe-m
[a x
b y]
(+ a b)))
This shows that m-lift implies one call to m-result and one m-bind call per argument. The same definition using the sequence monad would yield a function that returns a sequence of all possible sums
of pairs from the two input sequences.
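For instance, the sequence-monad version could be sketched as follows (all-pairwise-sums is just an illustrative name):

(def all-pairwise-sums
  (with-monad sequence-m
    (m-lift 2 +)))

(all-pairwise-sums [1 2] [10 20])   ; => (11 21 12 22)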
Exercise: The following function is equivalent to a well-known built-in Clojure function. Which one?
(with-monad sequence-m
(defn mystery
[f xs]
( (m-lift 1 f) xs )))
Another popular monad operation is m-seq. It takes a sequence of monadic expressions, and returns a sequence of their result values. In terms of domonad, the expression (m-seq [a b c]) becomes
(domonad
  [x a
   y b
   z c]
  (list x y z))
Here is an example of how you might want to use it:
(with-monad sequence-m
(defn ntuples [n xs]
(m-seq (replicate n xs))))
Try it out for yourself!
The final monad operation I want to mention is m-chain. It takes a list of one-argument computations, and chains them together by calling each element of this list with the result of the preceding
one. For example, (m-chain [a b c]) is equivalent to
(fn [arg]
  (domonad
    [x (a arg)
     y (b x)
     z (c y)]
    z))
returns its base classes. The following function builds on parents to find the n-th generation ascendants of a class:
(with-monad sequence-m
(defn n-th-generation
[n cls]
( (m-chain (replicate n parents)) cls )))
(n-th-generation 0 (class []))
(n-th-generation 1 (class []))
(n-th-generation 2 (class []))
You may notice that some classes can occur more than once in the result, because they are the base class of more than one class in the generation below. In fact, we ought to use sets instead of
sequences for representing the ascendants at each generation. Well… that’s easy. Just replace sequence-m by set-m and run it again!
In part 3, I will come back to the :when clause in for loops, and show how it is implemented and generalized in terms of monads. I will also explain another monad or two. Stay tuned!
A monad tutorial for Clojure programmers (part 1)
Monads in functional programming are most often associated with the Haskell language, where they play a central role in I/O and have found numerous other uses. Most introductions to monads are
currently written for Haskell programmers. However, monads can be used with any functional language, even languages quite different from Haskell. Here I want to explain monads in the context of
Clojure, a modern Lisp dialect with strong support for functional programming. A monad implementation for Clojure is available in the library clojure.algo.monads. Before trying out the examples given
in this tutorial, type (use 'clojure.algo.monads) into your Clojure REPL. You also have to install the monad library, of course, which you can do by hand, or using build tools such as leiningen.
Monads are about composing computational steps into a bigger multi-step computation. Let’s start with the simplest monad, known as the identity monad in the Haskell world. It’s actually built into
the Clojure language, and you have certainly used it: it’s the let form.
Consider the following piece of code:
(let [a 1
b (inc a)]
(* a b))
This can be seen as a three-step calculation:
1. Compute 1 (a constant), and call the result a.
2. Compute (inc a), and call the result b.
3. Compute (* a b), which is the result of the multi-step computation.
Each step has access to the results of all previous steps through the symbols to which their results have been bound.
Now suppose that Clojure didn’t have a let form. Could you still compose computations by binding intermediate results to symbols? The answer is yes, using functions. The following expression is in
fact equivalent to the previous one:
( (fn [a] ( (fn [b] (* a b)) (inc a) ) ) 1 )
The outermost level defines an anonymous function of a and calls it with the argument 1 – this is how we bind 1 to the symbol a. Inside the function of a, the same construct is used once more: the
body of (fn [a] ...) is a function of b called with argument (inc a). If you don't believe that this somewhat convoluted expression is equivalent to the original let form, just paste both into a Clojure REPL and compare the results.
Of course the functional equivalent of the let form is not something you would want to work with. The computational steps appear in reverse order, and the whole construct is nearly unreadable even
for this very small example. But we can clean it up and put the steps in the right order with a small helper function, bind. We will call it m-bind (for monadic bind) right away, because that’s the
name it has in Clojure’s monad library. First, its definition:
(defn m-bind [value function]
(function value))
As you can see, it does almost nothing, but it permits us to write a value before the function that is applied to it. Using m-bind, we can write our example as
(m-bind 1 (fn [a]
(m-bind (inc a) (fn [b]
(* a b)))))
That’s still not as nice as the let form, but it comes a lot closer. In fact, all it takes to convert a let form into a chain of computations linked by m-bind is a little macro. This macro is called
domonad, and it permits us to write our example as
(domonad identity-m
[a 1
b (inc a)]
(* a b))
This looks exactly like our original let form. Running macroexpand-1 on it yields
(clojure.algo.monads/with-monad identity-m
(m-bind 1 (fn [a] (m-bind (inc a) (fn [b] (m-result (* a b)))))))
This is the expression you have seen above, wrapped in a (with-monad identity-m ...) block (to tell Clojure that you want to evaluate it in the identity monad) and with an additional call to m-result
that I will explain later. For the identity monad, m-result is just identity – hence its name.
As you might guess from all this, monads are generalizations of the let form that replace the simple m-bind function shown above by something more complex. Each monad is defined by an implementation
of m-bind and an associated implementation of m-result. A with-monad block simply binds (using a let form!) these implementations to the names m-bind and m-result, so that you can use a single syntax
for composing computations in any monad. Most frequently, you will use the domonad macro for this.
As our second example, we will look at another very simple monad, but one that adds something useful that you don’t get in a let form. Suppose your computations can fail in some way, and signal
failure by producing nil as a result. Let’s take our example expression again and wrap it into a function:
(defn f [x]
(let [a x
b (inc a)]
(* a b)))
In the new setting of possibly-failing computations, you want this to return nil when x is nil, or when (inc a) yields nil. (Of course (inc a) will never yield nil, but that’s the nature of
examples…) Anyway, the idea is that whenever a computational step yields nil, the final result of the computation is nil, and the remaining steps are never executed. All it takes to get this
behaviour is a small change:
(defn f [x]
(domonad maybe-m
[a x
b (inc a)]
(* a b)))
The maybe monad represents computations whose result is maybe a valid value, but maybe nil. Its m-result function is still identity, so we don’t have to discuss m-result yet (be patient, we will get
there in the second part of this tutorial). All the magic is in the m-bind function:
(defn m-bind [value function]
  (if (nil? value)
    nil
    (function value)))
If its input value is non-nil, it calls the supplied function, just as in the identity monad. Recall that this function represents the rest of the computation, i.e. all following steps. If the value
is nil, then m-bind returns nil and the rest of the computation is never called. You can thus call (f 1), yielding 2 as before, but also (f nil) yielding nil, without having to add nil-detecting code
after every step of your computation, because m-bind does it behind the scenes.
In part 2, I will introduce some more monads, and look at some generic functions that can be used in any monad to aid in composing computations.
dorun, doseq, doall
It’s about time that I started writing some posts on this blog. I’ll start with something small, by talking about the difference between dorun, doseq and doall and how to easily remember what each
one of them does. All of them are supposed to force evaluation of lazy sequences. At least to me it was not obvious from the name of these functions and I had to constantly go back to the docstrings
at first.
doall forces the evaluation of the lazy sequence and it retains the head, causing the entire seq to live in memory. Useful when you want to immediately force evaluation of something like map. I use
the “all” part of doall to help me remember that it keeps all the items of the seq in memory.
Use dorun when you just want the side effects of computing the lazy sequence, but you don’t care about the items in the sequence itself. But be careful when you see yourself using (dorun (map ….. )),
you should probably use doseq instead. I use the “run” part of dorun to help me remember that it only runs over the seq for side effects, without keeping anything.
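A small sketch of the difference; the println side effect is only there to make the evaluation visible, and squares is just an illustrative name:

(defn squares []
  (map #(do (println "computing" %) (* % %)) (range 3)))

(doall (squares))   ; prints three "computing" lines, returns (0 1 4) and retains the whole seq
(dorun (squares))   ; prints the same three lines, but returns nil and retains nothing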
To me doseq has more in common with for than with doall or dorun. Since in Clojure for is used for lazy list-comprehension, doseq fulfills the role of something like “for … in …” or “foreach” from
other languages. It takes the same arguments that for does, but it runs immediately and it doesn’t collect the results. I really feel that doseq needs a better name so I don’t have a good way to
remember what it does just by looking at its name. | {"url":"http://onclojure.com/2009/03/","timestamp":"2014-04-19T12:31:46Z","content_type":null,"content_length":"51839","record_id":"<urn:uuid:c308d07f-a436-421b-be81-3599d29a820c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bulletin 2013 - 2014
College of Science and Engineering
Dean: Sheldon Axler
School of Engineering
SCI 163
Phone: 415-338-1174
E-mail: engineer@sfsu.edu
Director: Wenshen Pong
Graduate Coordinator: Hamid Shahnasser
Professors: D'Orazio, Ganji, Holton, Liou, Pong, Shahnasser, Sinha, Tarakji
Associate Professors: Cheng, Enssani, Jiang, Mahmoodi, Teh
Assistant Professors: Celik, Chen
B.S. in Electrical Engineering
B.S. in Mechanical Engineering
Minor in Electrical Engineering
Minor in Mechanical Engineering
Mission and Goal
The mission of the School of Engineering is to educate students from a diverse and multicultural population to become productive members of the engineering profession and society at large.
Educational objectives in support of this mission depend upon the major program, and are stated below in the description of each program.
Program Scope
The School of Engineering offers Bachelor of Science programs in Civil, Computer, Electrical, and Mechanical Engineering, as well as a minor program in each discipline. Descriptions of the four major
and minor programs follow this general introduction.
Civil engineering is concerned with the building of civil and environmental facilities, which are essential for the commerce of our society. Civil engineers design and construct bridges, buildings,
wastewater treatment plants, water supply facilities, hazardous waste facilities, and transportation systems. The program at San Francisco State University provides a broad and practical education
which prepares students for civil engineering employment and (for those who qualify) for graduate studies.
Computer engineering combines electrical engineering and computer science and deals with the design and application of computer systems. These computer systems can range from super computers to tiny
microprocessors that are embedded in all kinds of apparatus such as automobiles, appliances, cellular phones, medical devices, office equipment, etc. The computer engineering program teaches students
about computer hardware, software, integration, interfacing and applications with a strong emphasis on analysis and design. Hence, students pursuing a computer engineering degree must have a solid
foundation in mathematics and physical sciences. Students develop problem-solving and decision-making skills as well as an appreciation for the impact of technology in society. Graduates of the
program can seek employment immediately, or can continue studies for an advanced degree in computer engineering, computer science, electrical engineering, or other areas such as business, law, or
Electrical engineering is the profession that deals with the design and analysis of electrical and electronic devices and systems. This branch of engineering covers many diverse areas, including
electrical power generation and distribution, the design and fabrication of electronic semiconductor devices, and the creation of components and systems for consumer, medical, telecommunications and
many other applications. Graduates with a B.S. in Electrical Engineering have a number of options available to them. They may engage in the analysis, modeling, simulation, design, testing,
manufacturing, or field services of electrical, electronic, or magnetic equipment. Persons interested in research, development, or college-level teaching may return to universities for advanced
degrees in a specified area of electrical engineering.
Mechanical engineering is the field responsible for the design of machines and devices used throughout society. Industries involved in the generation of electricity; in petroleum production; and in
the design and manufacture of electronics, aircraft, automobiles, consumer and industrial products typically employ large numbers of mechanical engineers. Mechanical engineers are also employed by
companies involved in automated manufacturing as well as robotics and control. The program at San Francisco State University prepares the student to enter into professional employment directly after
graduation in addition to providing the needed foundation for graduate study.
Recognizing the value to certain students majoring in science broadening their education to include applications of their backgrounds in science to real-world physical systems, four minors in
engineering are offered.
The master’s program includes primary curricular areas of specialization in civil/structural, electrical/computer, and mechanical/energy engineering from which the student may choose his/her program
of study upon advisement. The objectives of the program are to provide students with the advanced engineering education necessary for solving complex problems in engineering practice and to provide
opportunities for updating and upgrading the skills of practicing engineers. These objectives are accomplished by a flexible program to meet individual student needs.
Career Outlook
Graduates with a B.S. in Civil Engineering may engage in the design and construction of buildings, bridges, roads, dams, water supply facilities, and environmental facilities for treating wastewater
and hazardous wastes. Civil engineers find employment with industrial firms, government agencies, utilities, and public works departments, as well as engineering firms which consult for these
enterprises. After gaining practical experience, some civil engineers form their own consulting firms.
Graduates with a B.S. in Computing Engineering may engage in the design, integration, interfacing, and application of computer hardware and software. Computer engineering is the fastest growing
engineering profession, and it impacts all aspects of our lives. Since computers are everywhere, from super computers to embedded microprocessors, computer engineers are needed in design,
development, testing, marketing, and technical support of a wide variety of industries. Examples of major industries that employ computer engineers include computers, semiconductors, instrumentation,
communications, networks, medical equipment and manufacturing.
Graduates with a B.S. in Electrical Engineering may engage in the analysis, modeling, simulation, design, testing, manufacturing, or field services of electrical, electronic, or magnetic equipment.
They may also engage in the operation and maintenance of facilities for electrical power generation or telecommunication. High technology companies employ electrical engineers in the fields of
electronic and computer manufacturing, as well as in power generation and communications.
Graduates with a B.S. in Mechanical Engineering may immediately engage in the design, analysis, testing, production, and maintenance of machines and mechanical systems. Most industries, including
aerospace, electronics, manufacturing, automotive, chemical, power generation, agriculture, food processing, textile, and mining, employ mechanical engineers.
Engineers interested in research, development, or college-level teaching return to college for an M.S. or Ph.D. in their specified field. Engineers interested in management and business aspects may
return to college for a Master of Business Administration.
Undergraduate Programs in Engineering
Freshman applicants have completed four years of high school mathematics, one year of high school chemistry, and one year of high school physics. Students are also encouraged to include courses in
mechanical drawing and computer programming.
Community college transfers should complete the sequence of mathematics, chemistry, physics, and engineering courses listed in freshman and sophomore years under the “sample sequence of courses,” if
available at the community college.
The Bachelor of Science degrees in Civil, Computer, Electrical, and Mechanical Engineering require 126, 128, 128, and 128 semester units, respectively. 30 units must be earned in residence at SF State. Of
these units 24 must be upper division courses and 12 of these upper division units must be in the major. Major requirements, including mathematics, chemistry, and physics prerequisites, comprise 93
units for civil engineering, 95 for computer engineering, 95 for electrical engineering and 96 units for mechanical engineering. For civil engineering, 50 of the required units are lower division and
43 units are upper division. For mechanical engineering, 51 of the required units are lower division and 44 units are upper division. For electrical engineering, 50 of the required units are lower
division and 45 units are upper division. For computer engineering, 50 of the required units are lower division and 45 units are upper division. The remaining 33 units satisfy the balance of the
university requirements including communication skills and general education in humanities and social sciences. Students are advised that, except for some general education (G.E.) courses, all
courses which are to be counted toward completion of an engineering degree must be taken for a letter grade; the CR/NC option may not be used in this context.
Recognizing the need of the professional engineer to participate in facets of problem solving that extend beyond technical and economic considerations, the G.E. requirement for engineering students
includes 33 units in courses other than mathematics, natural sciences, and business. Students have the option of following either the university G.E. program or the School of Engineering G.E.
program. The School of Engineering G.E. program permits a student to use courses required for the engineering majors to satisfy some of the G.E. requirements, so that the total number of units
outside of major requirements is reduced. Students should consult with the School of Engineering’s G.E. advisors about G.E. requirements for engineering majors.
Courses are scheduled during the day as well as in the late afternoon and evening. Other information and assistance in selecting courses can be obtained from a major advisor in the School of
Engineering, or by calling 415/338-1174, by e-mail to engineer@sfsu.edu, or by writing to School of Engineering, San Francisco State University, Science Building, 1600 Holloway Avenue, San Francisco,
CA 94132.
Bachelor of Science in Civil Engineering
The curriculum provides a broad-based common core of engineering science and the essential civil engineering subjects. The students conclude with 17 units of electives where primary emphasis is
placed on design, practical applications, and computer solutions in selected areas of civil engineering. The educational objectives of the civil engineering program are to produce graduates who
• Effectively engage their skills and knowledge in analysis, design, communication, teamwork and professional practice to perform competently in engineering enterprises, while being aware of the
economic, environmental, ethical and social factors affecting their work.
• Continue to develop their professional skills through lifelong learning, seek professional certification, and participate in professional societies.
The number of units required for graduation is described in the Undergraduate Education section of this Bulletin. For further information, see Undergraduate Programs in Engineering above. On-line
course descriptions are available.
Students must complete 21 units of upper-division engineering units before registering for ENGR 696.
Civil Engineering
Course Title Units
CHEM 115 General Chemistry I: Essential Concepts of Chemistry 5
MATH 226 Calculus I 4
ENGR 100 Introduction to Engineering 1
ENGR 101 Engineering Graphics 1
MATH 227 Calculus II 4
PHYS 220/222 General Physics with Calculus I/Laboratory (3/1) 4
ENGR 103 Introduction to Computers 1
MATH 228 Calculus III 4
PHYS 230/232 General Physics with Calculus II/Laboratory (3/1) 4
ENGR 102 Statics 3
ENGR 235 Surveying 3
MATH 245 Elementary Differential Equations and Linear Algebra 3
PHYS 240/242 General Physics with Calculus III/Laboratory (3/1) 4
ENGR 200 Materials of Engineering 3
ENGR 201 Dynamics 3
ENGR 205 Electric Circuits 3
ENGR 300 Engineering Experimentation 3
ENGR 304 Mechanics of Fluids 3
ENGR 309 Mechanics of Solids 3
ENGR 434 Principles of Environmental Engineering 3
ENGR 302 Experimental Analysis 1
ENGR 323 Structural Analysis 3
ENGR 429 Construction Management 3
ENGR 430 Soil Mechanics 3
ENGR 436 Transportation Engineering 3
ENGR 696 Engineering Design Project I 1
ENGR 697 GW Engineering Design Project II - GWAR 2
Upper Division Engineering Electives: 15 units.
Total for Civil Engineering: 93 units
Upper Division Electives
Choice of upper division electives must present a clearly identifiable educational objective and ensure that the program requirements in engineering science and design are met by all students.
Distribution of credit units among engineering science and design is given in the Advising Guide. A study plan of intended upper division electives must be approved by the student’s advisor and the
program coordinator prior to the seventh semester of the engineering program.
A total of 15 units from the following list of courses is required, subject to the minimum number of units specified for each group. Students with a GPA of at least 3.0 and the required prerequisites
may take graduate courses (numbered 800 and above) with approval of their advisor or the program coordinator.
Engineering Electives: 15 units
Course Title
ENGR 303 Engineering Thermodynamics (3)
ENGR 421 Structural Engineering Lab (2)
ENGR 425 Reinforced Concrete Structures (3)
ENGR 426 Steel Structures (3)
ENGR 427 Wood Structures (3)
ENGR 428 Applied Stress Analysis (3)
ENGR 431 Foundation Engineering (3)
ENGR 432 Finite Element Methods (3)
ENGR 435 Environmental Engineering Design (3)
ENGR 439 Construction Engineering (3)
ENGR 461 Mechanical and Structural Vibrations (3)
ENGR 468 Applied Fluid Mechanics and Hydraulics (3)
ENGR 469 Renewable Energy Systems (3)
ENGR 610 Engineering Cost Analysis (3)
ENGR 698 Engineering Seminar (1-3)
ENGR 699 Independent Study in Engineering (1-3)
ENGR 825 Bridge Engineering and Prestress Reinforced Concrete Structures (3)
ENGR 826 Seismic Hazard Analysis (3)
ENGR 827 Structural Design for Fire Safety (3)
ENGR 829 Advanced Topics in Structural Engineering (3)
ENGR 830 Finite Element Methods in Structural Continuum Mechanics (3)
ENGR 831 Advanced RC Structures (3)
ENGR 832 Advanced Topics in Seismic Design (3)
ENGR 833 Principles of Earthquake Engineering (3)
ENGR 835 Advanced Steel Structures (3)
ENGR 836 Structural Design for Earthquakes (3)
ENGR 837 Geotechnical Earthquake Engineering (3)
Bachelor of Science in Computer Engineering
Computer engineering is a multidisciplinary field with roots in electrical engineering and computer science that has grown to become a separate discipline in itself. The educational objectives of the
computer engineering program are to produce graduates who:
• Use the analysis and design skills that they have acquired in their computer engineering education to become productive, contributing engineers.
• Demonstrate the ability to work in teams, communicate effectively, act in a professional and ethically responsible manner in computer engineering, and continue to develop their professional skills
through lifelong learning.
The first two years of the program are designed to build a strong background in mathematics and science to provide a basis for understanding the underlying analysis and modeling tools and physical
principles that are common to all engineering. The last two years cover a rich set of hardware and software subjects to give students a broad background in computer engineering. This broad foundation
enables students to adapt and extend their knowledge and skills more easily in the future. The curriculum also stresses problem solving skills and teamwork. Through electives, students can choose to
develop further breadth or in-depth knowledge in one of three areas: embedded systems, network systems, or multimedia systems.
The number of units required for graduation and the G.E. requirements are described in the Undergraduate Education section of this Bulletin. For information for all engineering students, see
Undergraduate Programs in Engineering above. On-line course descriptions are available.
A number of required and elective lecture courses in the computer engineering program have corresponding laboratory courses that students are either required or strongly encouraged to take
concurrently. These course pairs are:
• ENGR 205 (Electric Circuits) and
ENGR 206 (Circuits and Instrumentation Laboratory)
• ENGR 353 (Electronics) and
ENGR 301 (Electronics Laboratory)
• ENGR 356 (Basic Computer Architecture) and
ENGR 357 (Basic Digital Laboratory)
• ENGR 447 (Control Systems) and
ENGR 446 (Control Systems Laboratory)
Students who drop or withdraw from any of these lecture courses must also drop or withdraw from the corresponding laboratory course, or they will be administratively dropped or withdrawn.
Students must complete 21 units of upper-division engineering units before registering for ENGR 696.
Computer Engineering
Course Title Units
CHEM 115 General Chemistry I: Essential Concepts of Chemistry 5
MATH 226 Calculus I 4
ENGR 100 Introduction to Engineering 1
ENGR 121 Gateway to Computer Engineering 1
ENGR 212 Introduction to Unix/Linux for Engineers 2
MATH 227 Calculus II 4
PHYS 220/222 General Physics with Calculus I/Laboratory (3/1) 4
ENGR 213 C Programming for Engineers 3
MATH 228 Calculus III 4
PHYS 230/232 General Physics with Calculus II/Laboratory (3/1) 4
CSC 210 Introduction to Computer Programming 3
MATH 245 Elementary Differential Equations and Linear Algebra 3
CSC 220 Data Structures 3
CSC 230 Discrete Mathematics 3
ENGR 205 Electric Circuits 3
ENGR 206 Circuits and Instrumentation Laboratory 1
ENGR 300 Engineering Experimentation 3
ENGR 301 Electronics Laboratory 1
ENGR 305 Linear Systems Analysis 3
ENGR 353 Electronics 3
ENGR 356 Basic Computer Architecture 3
ENGR 357 Basic Digital Laboratory 1
CSC 340 Programming Methodology 3
ENGR 451 Digital Signal Processing 4
ENGR 476 Computer Communications Networks 3
ENGR 478 Design with Microprocessors 4
CSC 413 Software Development 3
ENGR 378 Digital Systems Design 3
ENGR 456 Computer Systems 3
ENGR 696 Engineering Design Project I 1
ENGR 697 GW Engineering Design Project II - GWAR 2
Engineering Electives. 6 units.
Total for Computer Engineering: 94 units
Upper Division Electives
Choice of upper division electives must demonstrate a clearly identifiable educational objective and have an advisor’s approval. A study plan of intended upper division electives must be approved by
the student’s advisor and the program coordinator prior to registering for ENGR 696.
A total of 6 units from the following list of courses is required. Students with a GPA of at least 3.0 and the required prerequisites may take graduate courses (numbered 800 and above) with approval
of their advisor or the program coordinator.
Course Title
ENGR 306 Electromechanical Systems (3)
ENGR 350 Engineering Electromagnetics (3)
ENGR 415 Mechatronics (3)
ENGR 416 Mechatronics Laboratory (1)
ENGR 442 Operational Amplifiers Systems Design (3)
ENGR 443 Multimedia Systems (3)
ENGR 446 Control Systems Laboratory (1)
ENGR 447 Control Systems (3)
ENGR 449 Communication Systems (3)
ENGR 453 Digital Integrated Circuit Design (4)
ENGR 454 High-speed Circuit Board Design (3)
ENGR 455 Power Electronics (4)
ENGR 479 Real-time Systems (3)
ENGR 610 Engineering Cost Analysis (3)
ENGR 844 Embedded Systems (3)
ENGR 848 Digital VLSI Design (3)
ENGR 849 Advanced Analog IC Design (3)
ENGR 851 Advanced Microprocessor Architecture (3)
ENGR 852 Advanced Digital Design (3)
ENGR 853 Advanced Topics in Computer Communication and Networks (3)
ENGR 854 Wireless Data Communication Standards (3)
ENGR 855 Advanced Wireless Communication Technologies (3)
ENGR 856 Nanoscale Circuits and Systems (3)
ENGR 857 Re-configurable Computing (3)
ENGR 868 Advanced Control Systems (3)
ENGR 869 Robotics and Haptics (3)
CSC 415 Operating System Principles (3)
CSC 510 Analysis of Algorithms I (3)
CSC 620 Natural Language Technologies (3)
CSC 630 Computer Graphics Systems Design (3)
CSC 640 Software Engineering (3)
CSC 642 Human Computer Interaction (3)
CSC 645 Computer Networks (3)
CSC 650 Secured Network Systems (3)
CSC 664 Multimedia Systems (3)
CSC 665 Artificial Intelligence (3)
CSC 667 Internet Application Design and Development (3)
CSC 668 Programming Cafe (3)
Bachelor of Science in Electrical Engineering
The required upper division courses provide a broad and basic understanding of the main fields in electrical engineering. Upon advisement, each student may choose an area of specialization in the
senior year in communications, computers, electronics, control/robotics, or power engineering. The educational objectives of the electrical engineering program are to produce graduates who:
• Use the analysis and design skills that they have acquired in their electrical engineering education to become productive, contributing engineers.
• Demonstrate the ability to work in teams, communicate effectively, act in a professional and ethically responsible manner, and continue to develop their professional skills through lifelong learning.
The number of units required for graduation and the General Education requirements are described in the Undergraduate Education section of this Bulletin. For information for all engineering students,
see Undergraduate Programs in Engineering above. On-line course descriptions are available.
A number of required and elective lecture courses in the electrical engineering program have corresponding laboratory courses that students are either required or strongly encouraged to take
concurrently. These course pairs are:
• ENGR 205 (Electric Circuits) and
ENGR 206 (Circuits and Instrumentation Laboratory)
• ENGR 305 (Linear Systems Analysis) and
ENGR 315 (Linear Systems Analysis Laboratory)
• ENGR 353 (Electronics) and
ENGR 301 (Electronics Laboratory)
• ENGR 356 (Basic Computer Architecture) and
ENGR 357 (Basic Digital Laboratory)
• ENGR 415 (Mechatronics) and
ENGR 416 (Mechatronics Laboratory)
• ENGR 447 (Control Systems) and
ENGR 446 (Control Systems Laboratory)
Students who drop or withdraw from any of these lecture courses must also drop or withdraw from the corresponding laboratory course, or they will be administratively dropped or withdrawn.
Students must complete 21 units of upper-division engineering units before registering for ENGR 696.
Electrical Engineering
Course Title Units
CHEM 115 General Chemistry I: Essential Concepts of Chemistry 5
MATH 226 Calculus I 4
ENGR 100 Introduction to Engineering 1
MATH 227 Calculus II 4
PHYS 220/222 General Physics with Calculus I/Laboratory (3/1) 4
ENGR 213 Introduction to C Programming for Engineers 3
MATH 228 Calculus III 4
PHYS 230/232 General Physics with Calculus II/Laboratory (3/1) 4
MATH 245 Elementary Differential Equations and Linear Algebra 3
PHYS 240/242 General Physics with Calculus III/Laboratory (3/1) 4
ENGR 205 Electric Circuits 3
ENGR 206 Circuits and Instrumentation Laboratory 1
ENGR 290 Modular Elective 1
(consult engineering advisor for approved options)
ENGR 300 Engineering Experimentation 3
ENGR 301 Electronics Laboratory 1
ENGR 305 Linear Systems Analysis 3
ENGR 315 Linear Systems Analysis Laboratory 1
ENGR 353 Electronics 3
ENGR 356 Basic Computer Architecture 3
ENGR 357 Basic Digital Laboratory 1
ENGR 306 Electromechanical Systems 3
ENGR 442 Operational Amplifier System Design 3
ENGR 451 Digital Signal Processing 4
ENGR 478 Design with Microprocessors 4
ENGR 350 Introduction to Engineering Electromagnetics 3
ENGR 446 Control Systems Laboratory 1
ENGR 447 Control Systems 3
ENGR 449 Communication Systems 3
ENGR 696 Engineering Design Project I 1
ENGR 697 GW Engineering Design Project II - GWAR 2
Mechanical Engineering Elective: (3 units)
One of the following mechanical engineering elective courses
Course Title
ENGR 201 Dynamics (3)
ENGR 203 Materials of Electrical and Electronic Engineering (3)
ENGR 204 Engineering Mechanics (3)
ENGR 303 Engineering Thermodynamics (3)
Engineering Electives: 9 units
Total for Electrical Engineering: 95 units
Upper Division Electives
Choice of upper division electives must present a clearly identifiable educational objective and ensure that the program requirements in engineering science and design are met by all students.
Distribution of credit units among engineering science and design is given in the Advising Guide. A study plan of intended upper-division electives must be approved by the student's advisor and the
program coordinator prior to the seventh semester of the engineering program.
A total of 9 units of engineering electives and 3 units of technical electives from the following list of courses is required. Students with a GPA of at least 3.0 and the required prerequisites may
take graduate courses (numbered 800 and above) with approval of their advisor or the program coordinator.
Engineering Electives: (9 units)
Course Title
ENGR 378 Digital Systems Design (4)
ENGR 410 Instrumentation and Process Control (3)
ENGR 411 Instrumentation and Process Control Laboratory (1)
ENGR 415 Mechatronics (3)
ENGR 416 Mechatronics Laboratory (1)
ENGR 445 Analog Integrated Circuit Design (4)
ENGR 448 Electrical Power Systems (3)
ENGR 450 Electromagnetic Waves (3)
ENGR 452 Communications Laboratory (1)
ENGR 453 Digital Integrated Circuit Design (4)
ENGR 455 Power Electronics (4)
ENGR 456 Computer Systems (3)
ENGR 457 Electromagnetics Compatibility (3)
ENGR 458 Industrial and Commercial Power Systems (3)
ENGR 459 Power Engineering Laboratory (1)
ENGR 476 Computer Communications Networks (3)
ENGR 610 Engineering Cost Analysis (3)
ENGR 698 Engineering Seminar (1-3)
ENGR 699 Independent Study in Engineering (1-3)
ENGR 844 Embedded Systems (3)
ENGR 848 Digital VLSI Design (3)
ENGR 849 Advanced Analog IC Design (3)
ENGR 851 Advanced Microprocessor Architecture (3)
ENGR 852 Advanced Digital Design (3)
ENGR 853 Advanced Topics in Computer Communication and Networks (3)
ENGR 854 Wireless Data Communication Standards (3)
ENGR 855 Advanced Wireless Communication Technologies (3)
ENGR 856 Nanoscale Circuits and Systems (3)
ENGR 857 Re-configurable Computing (3)
ENGR 868 Advanced Control Systems (3)
ENGR 869 Robotics and Haptics (3)
Bachelor of Science in Mechanical Engineering
The required courses provide a thorough grounding in the essentials of mechanical engineering. Elective courses taken as part of one of the areas of emphasis allow for specialization. The areas of
emphasis currently offered are mechanical design, thermal-fluid systems, and robotics and controls. The educational objectives of the mechanical engineering program are to produce graduates who:
• Employ their skills in analysis, design, communication and teamwork to advance in the engineering profession, and engage in lifelong learning in order to maintain currency in their field.
• Demonstrate professionalism, ethics and social awareness as they move into positions of increasing responsibility.
The number of units required for graduation and the G.E. requirements are described in the Undergraduate Education section of this Bulletin. For information common to all engineering students, see
Undergraduate Programs in Engineering above. On-line course descriptions are available.
Mechanical Engineering
Course Title Units
CHEM 115 General Chemistry I: Essential Concepts of Chemistry 5
MATH 226 Calculus I 4
ENGR 100 Introduction to Engineering 1
ENGR 101 Engineering Graphics 1
MATH 227 Calculus II 4
PHYS 220/222 General Physics with Calculus I/Laboratory (3/1) 4
ENGR 103 Introduction to Computers 1
MATH 228 Calculus III 4
PHYS 230/232 General Physics with Calculus II/Laboratory (3/1) 4
ENGR 102 Statics 3
ENGR 200 Materials of Engineering 3
MATH 245 Elementary Differential Equations and Linear Algebra 3
PHYS 240/242 General Physics with Calculus III/Laboratory (3/1) 4
ENGR 201 Dynamics 3
ENGR 205 Electric Circuits 3
ENGR 206 Circuits and Instrumentation Laboratory 1
ENGR 290 Modular Electives (selected from approved options) 3
ENGR 300 Engineering Experimentation 3
ENGR 303 Engineering Thermodynamics 3
ENGR 305 Linear Systems Analysis 3
ENGR 309 Mechanics of Solids 3
ENGR 302 Experimental Analysis 1
ENGR 304 Mechanics of Fluids 3
ENGR 364 Materials and Manufacturing Processes 3
ENGR 464 Mechanical Design 3
ENGR 467 Heat Transfer 3
ENGR 696 Engineering Design Project I 1
ENGR 463 Thermal Power Systems 3
ENGR 697 GW Engineering Design Project II - GWAR 2
Emphasis Elective: (4 units)
Units selected from the following, depending on area of emphasis:
Course Title
ENGR 447/446 Control Systems/Lab (3/1)
ENGR 410/411 Process Instrumentation and Control/Lab (3/1)
Engineering Electives: 9 units
Total for Mechanical Engineering: 95 units
Upper Division Electives
Choice of upper division electives must present a clearly identifiable educational objective and ensure that the program requirements in engineering science and design are met by all students.
Distribution of credit units among engineering science and design is given in the Advising Guide. A study plan of intended upper division electives must be approved by the student's advisor and the
program coordinator prior to the seventh semester of the engineering program.
A total of 9 units from the following list of courses is required, subject to the minimum number of units specified for each group.
Engineering Electives: 9 units
Course Title
ENGR 306 Electromechanical Systems (3)
ENGR 410 Instrumentation and Process Control (3)
ENGR 411 Instrumentation and Process Control Laboratory (1)
ENGR 415 Mechatronics (3)
ENGR 416 Mechatronics Laboratory (1)
ENGR 428 Applied Stress Analysis (3)
ENGR 432 Finite Element Methods (3)
ENGR 446 Control Systems Laboratory (1)
ENGR 447 Control Systems (3)
ENGR 461 Mechanical and Structural Vibration (3)
ENGR 465 Principles of HVAC (3)
ENGR 466 Gas Dynamics and Boundary Layer Flow (3)
ENGR 468 Applied Fluid Mechanics and Hydraulics (3)
ENGR 469 Renewable Energy Systems (3)
ENGR 610 Engineering Cost Analysis (3)
ENGR 698 Engineering Seminar (1-3)
ENGR 699 Independent Study in Engineering (1-2)
Minor in Civil Engineering
The purpose of the Minor in Civil Engineering is to give students with sufficient background in mathematics, physics and chemistry, a fundamental understanding of the field of civil engineering. The
minor should be of special interest to students in Geosciences (foundations and earthquake), Environmental Studies, Physics, Mathematics, Computer Science, and other engineering fields. Students
interested in the Civil Engineering minor must meet with the program coordinator and complete the Civil Engineering Minor Program Approval Form. Revision of the form requires the approval of the
program coordinator.
Prerequisite Requirements
The minor is intended for students who have satisfied the following prerequisite requirements:
Course Title
MATH 226 Calculus I (4)
MATH 227 Calculus II (4)
PHYS 220/222 General Physics with Calculus I & Lab (4)
PHYS 240/242 General Physics with Calculus III & Lab (4)
CHEM 115 General Chemistry I: Essential Concepts of Chemistry (5)
The minor may be satisfied by a minimum of 21 units (not including prerequisite units) distributed as follows:
Core Requirements: 15 units
Course Title
ENGR 102 Statics (3)
ENGR 201 Dynamics (3)
ENGR 235 Surveying (3)
ENGR 304 Mechanics of Fluids (3)
ENGR 309 Mechanics of Solids (3)
Electives: 6 units
(approved upper division civil engineering courses, all within one of the civil engineering focus areas. No upper division course from the major can be double-counted towards meeting the elective
requirements of the minor or second major. There must be prior approval from the program coordinator.)
Total (not including prerequisites) 21 units
To earn the Minor in Civil Engineering, a student must complete at least 12 of the required 21 core and elective units at SF State. Each of the courses in the minor must be taken for a letter grade
(CR/NC is not acceptable).
Minor in Computer Engineering
The purpose of the Minor in Computer Engineering is to give students who are interested in computer technology a good basic background in software development, digital electronics, computer
organization, and microprocessor applications. Additional knowledge of computer networks, multimedia systems, real-time systems, etc. may be acquired through electives. Students interested in the
computer engineering minor must meet with the program coordinator and complete the Computer Engineering Minor Program Approval Form. Revision of the form requires the approval of the program coordinator.
Prerequisite Requirements
The minor is intended for students who have satisfied the following prerequisite requirements:
Course Title
MATH 226 Calculus I (4)
MATH 227 Calculus II (4)
MATH 228 Calculus III (4)
MATH 245 Elementary Differential Equations and Linear Algebra (3)
PHYS 220/222 General Physics with Calculus I & Laboratory (4)
PHYS 230/232 General Physics with Calculus II & Laboratory (4)
ENGR 212 Introduction to Unix/Linux for Engineers (2)
The minor may be satisfied by a minimum of 21 units (not including prerequisite units) distributed as follows:
Core Requirements: 15 units
Course Title
ENGR 213 Introduction to C Programming for Engineers (3)
ENGR 205 Electric Circuits (3)
ENGR 206 Circuits and Instrumentation Laboratory (1)
ENGR 356 Basic Computer Architecture (3)
ENGR 357 Basic Digital Laboratory (1)
ENGR 478 Design with Microprocessors (4)
Electives: 6 units
(approved upper division computer engineering courses. No upper division course from the major can be double-counted towards meeting the elective requirements of the minor or second major. There must
be prior approval from the program coordinator.)
Total (not including prerequisites): 21 units
To earn the Minor in Computer Engineering, a student must complete at least 12 of the required 21 core and elective units at SF State. Each of the courses in the minor must be taken for a letter
grade (CR/NC is not acceptable).
Minor in Electrical Engineering
The purpose of the Minor in Electrical Engineering is to give students in other fields of study a good basic background in electrical engineering. The 16-unit core provides an introduction to four
basic areas of modern electrical engineering – basic electrical circuit theory, electronics, linear signals and systems, and digital logic and computer architecture. Elective courses provide
opportunities for additional breadth or depth in a particular area. Students interested in the electrical engineering minor must meet with the program coordinator and complete the Electrical
Engineering Minor Program Approval Form. Revision of the form requires the approval of the program coordinator.
Prerequisite Requirements
The minor is intended for students who have satisfied the following prerequisite requirements:
Course Title
MATH 226 Calculus I (4)
MATH 227 Calculus II (4)
MATH 228 Calculus III (4)
MATH 245 Elementary Differential Equations and Linear Algebra (3)
PHYS 220/222 Physics I with Calculus (4)
PHYS 230/232 Physics II with Calculus (4)
The minor may be satisfied by a minimum of 22 units (not including prerequisite units) distributed as follows:
Core Requirements: 16 units
Course Title
ENGR 205 Electric Circuits (3)
ENGR 206 Circuits and Instrumentation Laboratory (1)
ENGR 305 Linear System Analysis (3)
ENGR 315 System Analysis Laboratory (1)
ENGR 353 Electronics (3)
ENGR 301 Electronics Laboratory (1)
ENGR 356 Basic Computer Architecture (3)
ENGR 357 Basic Digital Laboratory (1)
Electives: 6 units
(approved upper division electrical engineering courses. No upper division course from the major can be double-counted towards meeting the elective requirements of the minor or second major. There
must be prior approval from the program coordinator.)
Total (not including prerequisites): 22 units
To earn the Minor in Electrical Engineering, a student must complete at least 12 of the required 22 core and elective units at SF State. Each of the courses in the minor must be taken for a letter
grade (CR/NC is not acceptable).
Minor in Mechanical Engineering
The purpose of the Minor in Mechanical Engineering is to give students from science and other branches of engineering the opportunity to learn the fundamentals of mechanical engineering, to broaden
their understanding of science and engineering, and to prepare them for new technological developments such as material science and nanotechnology. Additional knowledge in control and robotics,
mechanical design, or thermal-fluids may be acquired through electives. Students interested in the Minor in Mechanical Engineering must meet with the program coordinator and complete the Mechanical
Engineering Minor Program Approval Form. Revision of the form requires the approval of the program coordinator.
Prerequisite Requirements
The minor is intended for students who have satisfied the following prerequisite requirements:
Course Title
MATH 226 Calculus I (4)
MATH 227 Calculus II (4)
MATH 228 Calculus III (4)
MATH 245 Elementary Differential Equations and Linear Algebra (3)
PHYS 220/222 Physics I with Calculus (4)
PHYS 230/232 Physics II with Calculus (4)
PHYS 240/242 Physics III with Calculus (4)
CHEM 115 General Chemistry I: Essential Concepts of Chemistry (5)
The minor may be satisfied by a minimum of 21 units (not including prerequisite units) distributed as follows:
Core Requirements: 15 units
Course Title Units
ENGR 102 Statics 3
ENGR 200 Materials of Engineering 3
ENGR 201 Dynamics 3
ENGR 303 Engineering Thermodynamics 3
ENGR 309 Mechanics of Solids 3
Electives: 6 units
(approved upper division Mechanical Engineering courses, all within one of the Mechanical Engineering Focus areas. No upper division course from the major can be double-counted towards meeting the
elective requirements of the minor or second major. There must be prior approval from the program coordinator.)
Total (not including prerequisites) 21 units
To earn the Minor in Mechanical Engineering, a student must complete at least 12 of the required 21 core and elective units at SF State. Each of the courses in the minor must be taken for a letter
grade (CR/NC is not acceptable).
Master of Science in Engineering
Admission to the Program
Applicants must hold a bachelor's degree in engineering, or a closely related discipline, with a minimum GPA of 3.0 in upper division major classes, in addition to meeting general university
requirements for graduate standing. The School of Engineering also requires two letters of recommendation from persons familiar with the student's previous academic work or professional
accomplishments. Graduate Record Exam (GRE) scores within the last three years are also required. A minimum score of 550 on the paper exam or 213 on the computer-based TOEFL is required for graduate
applicants whose preparatory education was principally in a language other than English.
Advancement to Candidacy
The applicant is advanced to candidacy when the Advancement to Candidacy (ATC) has been signed and approved by the Dean of the Graduate Division.
Written English Proficiency Requirements
Level One: As a preadmission requirement, applicants must have satisfied one of the following: 1) a score of at least 4.0/6.0 on the GRE or GMAT Analytic Writing Assessment; 2) a score of at least
4.5/6.0 on the essay test of the paper-based [PBT] TOEFL (a minimum score of 24/30 on the Writing section of the Internet-based test[iBT] TOEFL); or 3) a score of at least 6.5/9.0 on the IELTS
writing test, or a concordant score on the Pearson Test of English. An applicant that does not meet the above requirement may be conditionally accepted to the program but must complete SCI 614 within
the first year of attendance at SF State in order to meet the Level One requirement. SCI 614 does not count toward the 30 unit MS course work requirement.
Level Two is satisfied by the completion of a written thesis (ENGR 898) or research project (ENGR 895).
The Master of Science in Engineering is based on 30 semester units of which at least 21 units must be earned from graduate level courses. We expect that the graduate coordinator will work closely
with individual students to develop a curriculum plan that ensures academic rigor while at the same time meeting the needs of the student. The curriculum includes 12 units of required engineering
courses and a minimum of 6 units of elective engineering courses. A maximum of 6 units of elective non-engineering courses may be applied to the degree requirements with the consent of the graduate
coordinator, if they are consistent with the student's overall career objectives as provided in the program of study. There are two options for the culminating experience. One option is to first take
a 3-unit research course (ENGR 897), and then a 3-unit thesis course (ENGR 898). The other option is to take a 3-unit applied research project course (ENGR 895).
Master of Science in Engineering: Concentration in Structural/Earthquakes
Required Courses
Course Title Units
ENGR 800 Engineering Communications 3
ENGR 801 Engineering Management 3
ENGR 833 Principles of Earthquake Engineering 3
ENGR 836 Structural Design for Earthquakes 3
Total Units Required Courses: 12
The aggregate of courses that comprise the core of this concentration is designed to give students a broad foundation in general areas of engineering project management and engineering
communications, and in Structural/Earthquake engineering. These courses aim to provide our students with opportunities for career advancement in their profession.
Engineering Electives
Units selected on advisement from the following: 6 - 15
Course Title
ENGR 825 Bridge Engineering and Prestress Reinforced Concrete Structures (3)
ENGR 826 Seismic Hazard Analysis (3)
ENGR 827 Structural Design for Fire Safety (3)
ENGR 829 Advanced Topics in Structural Engineering (3)
ENGR 830 Finite Element Methods in Structural and Continuum Mechanics (3)
ENGR 831 Advanced Concrete Structures (3)
ENGR 832 Advanced Topics in Seismic Design (3)
ENGR 835 Advanced Steel Structures (3)
ENGR 837 Geotechnical Earthquake Engineering (3)
* A program cannot contain more than 9 units of courses with course number below 700. Some upper division engineering courses may also be used as electives if not used in the undergraduate degree
program and approved by the Graduate Coordinator.
Non-Engineering Electives 0 - 6 units
Courses, either graduate or upper division, selected primarily from science, mathematics, social science, or business, upon approval of the graduate coordinator.
Culminating Experience: 3 - 6 units
Units selected from one of the options below
Option A
Course Title
ENGR 897 Research
ENGR 898 Thesis [thesis may not be started until completion of 12 units of graduate course work and ENGR 897]
Option B
Course Title
ENGR 895 Applied Research Project [project may not be started until completion of 12 units of graduate course work]
Minimum total: 30 units
Master of Science in Engineering:
Concentration in Embedded Electrical and Computer Systems
Required Courses
Course Title Units
ENGR 800 Engineering Communications 3
ENGR 801 Engineering Management 3
ENGR 844 Embedded Systems 3
ENGR 852 Advanced Digital Design 3
Total units required courses: 12 units
The aggregate of courses that comprise the core of this concentration is designed to give students a broad foundation in general areas of engineering project management and engineering
communications, and in embedded systems. These courses aim to provide our students with opportunities for career advancement in their profession.
Elective Engineering Courses: 6 - 15 units
Elective technical engineering courses are selected from the following list upon approval of the graduate coordinator.
Course Title
ENGR 848 Digital VLSI Design (3)
ENGR 849 Advanced Analog IC Design (3)
ENGR 851 Advanced Microprocessor Architecture (3)
ENGR 853 Advanced Topics in Computer Communication and Networks (3)
ENGR 854 Wireless Data Communication Standards (3)
ENGR 855 Advanced Wireless Communication Technologies (3)
ENGR 856 Nanoscale Circuits and Systems (3)
ENGR 857 Re-configurable Computing (3)
ENGR 868 Advanced Control Systems (3)
ENGR 869 Robotics and Haptics (3)
* A program cannot contain more than 9 units of courses with course number below 700. Some upper division engineering courses may also be used as electives if not used in the undergraduate degree
program and approved by the graduate coordinator.
Non-Engineering Electives: 0 - 6 units
Courses, either graduate or upper-division, selected primarily from science, mathematics, social science, or business, upon approval of the graduate coordinator.
Culminating Experience: 3 - 6 units
Units selected from one of the options below
Option A
Course Title
ENGR 897 Research
ENGR 898 Thesis [thesis may not be started until completion of 12 units of graduate course work and ENGR 897]
Option B
Course Title
ENGR 895 Applied Research Project [project may not be started until completion of 12 units of graduate course work]
Minimum total: 30 units
Master of Science in Engineering: Concentration in Energy Systems
Required Courses
Course Title Units
ENGR 800 Engineering Communications 3
ENGR 801 Engineering Management 3
ENGR 820 Energy Resources and Sustainability 3
ENGR 866 Advanced Thermal-Fluids 3
Total Units Required Courses: 12
The aggregate of courses that comprise the core of this concentration is designed to give students a broad foundation in general areas of engineering project management and engineering
communications, and in Energy Systems. These courses aim to provide our students with opportunities for career advancement in their profession.
Elective Engineering Courses
Course Title
ENGR 458 Industrial and Commercial Power systems
ENGR 465 Principles of HVAC
ENGR 469 Renewable Energy Systems
ENGR 865 Energy-Efficient Buildings
ENGR 867 Energy Auditing, Measurement, and Verification
ENGR 868 Advanced Control Systems (3)
ENGR 869 Robotics and Haptics (3)
* A program cannot contain more than 9 units of courses with course number below 700. Some upper division engineering courses may be used as electives if not used in the undergraduate degree program
and if approved by the graduate coordinator.
Engineering or non-engineering electives, with the consent of the engineering graduate coordinator and, as necessary, the consent of the non-engineering discipline graduate coordinator/chair: 0 - 6 Units
Culminating Experience 3 - 6 Units
Units selected from one of the options below
Option A
Course Title
ENGR 897 Research
ENGR 898 Thesis [thesis may not be started until completion of 12 units of graduate course work and ENGR 897]
Option B
Course Title
ENGR 895 Applied Research Project [project may not be started until completion of 12 units of graduate course work]
Minimum Total: 30 units | {"url":"http://www.sfsu.edu/bulletin/programs/enginee.htm","timestamp":"2014-04-20T06:08:15Z","content_type":null,"content_length":"87800","record_id":"<urn:uuid:89818377-1e22-4469-8328-b8f90c504a60>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
When do Hochschild homology and cohomology agree? (Ambidexterity?)
Suppose $X$ is a smooth algebraic variety over a field of characteristic $0$. What are the most general conditions under which Hochschild homology and cohomology of $X$ agree? The existence of a
symplectic form will do the job, but is there something more general? I would be interested in particular in a condition along the lines of Lurie's recent approach to isomorphism of homology and
cohomology via the theory of ambidextrous functors (iso between a left and a right adjoint to a given functor, under some specific conditions).
hochschild-cohomology hochschild-homology
If I understand correctly, the ambidexterity condition that Hopkins and Lurie study (which is a property rather than a structure on a variety) would only apply when your variety is not just
Calabi-Yau but zero-dimensional Calabi-Yau (giving the isomorphism in Sasha's answer but with no shift). – David Ben-Zvi Sep 30 '12 at 18:45
1 Answer
If $X$ is Calabi--Yau (that is $K_X = 0$) then Hochschild homology and cohomology agree up to a shift of grading by dimension of $X$.
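[Added gloss, using one common set of HKR conventions — the gradings vary by author. For $X$ smooth over a field of characteristic $0$, Hochschild–Kostant–Rosenberg gives $HH^k(X) \cong \bigoplus_{p+q=k} H^p(X, \wedge^q T_X)$ and $HH_k(X) \cong \bigoplus_{q-p=k} H^p(X, \Omega^q_X)$. If $n = \dim X$ and the canonical bundle $\Omega^n_X$ is trivial, contraction with a trivializing section gives $\wedge^q T_X \cong \Omega^{n-q}_X$, so $HH^k(X) \cong \bigoplus_{p+q=k} H^p(X, \Omega^{n-q}_X) \cong HH_{n-k}(X)$, i.e. the two agree up to a shift by $\dim X$.]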
LOESS: Multivariate Smoothing by Moving Least Squares
Results 1 - 10 of 11
- ARTIFICIAL INTELLIGENCE REVIEW , 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memorybased learning, and focuses on locally weighted linear regression. The survey discusses distance functions,
smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Cited by 448 (52 self)
This paper surveys locally weighted learning, a form of lazy learning and memorybased learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing
parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by
tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally
weighted learning can be used in robot learning and control.
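[Editorial illustration, not code from the paper above: the locally weighted regression these papers build on is simple to sketch. Below is a minimal one-dimensional LOESS-style predictor with a tricube kernel; the function name, parameters, and data are invented for the example.]

import numpy as np

def loess_predict(x_train, y_train, x_query, frac=0.5):
    """Locally weighted linear regression (LOESS-style) at each query point.

    For each query, the nearest `frac` fraction of the training points is
    weighted with a tricube kernel and a degree-1 polynomial is fit by
    weighted least squares.
    """
    n = len(x_train)
    k = max(2, int(np.ceil(frac * n)))            # neighborhood size
    preds = []
    for x0 in np.atleast_1d(x_query):
        d = np.abs(x_train - x0)                  # distances to the query
        idx = np.argsort(d)[:k]                   # k nearest neighbors
        h = d[idx].max()                          # bandwidth = farthest neighbor
        if h == 0:
            h = 1.0
        w = (1 - (d[idx] / h) ** 3) ** 3          # tricube weights
        sw = np.sqrt(w)
        A = sw[:, None] * np.column_stack([np.ones(k), x_train[idx]])
        b = sw * y_train[idx]
        beta, *_ = np.linalg.lstsq(A, b, rcond=None)   # weighted least squares
        preds.append(beta[0] + beta[1] * x0)
    return np.array(preds)

# Example: smooth noisy samples of a sine curve
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.2, 200)
y_smooth = loess_predict(x, y, x, frac=0.3)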
, 1996
"... Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways
in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We ex ..."
Cited by 159 (17 self)
Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in
which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning
paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control.
, 1990
"... This dissertation is about the application of machine learning to robot control. A system which has no initial model of the robot/world dynamics should be able to construct such a model using
data received through its sensors--an approach which is formalized here as the $AB (State-Action-Behaviour) ..."
Cited by 108 (2 self)
This dissertation is about the application of machine learning to robot control. A system which has no initial model of the robot/world dynamics should be able to construct such a model using data
received through its sensors--an approach which is formalized here as the SAB (State-Action-Behaviour) control cycle. A method of learning is presented in which all the experiences in the lifetime of
the robot are explicitly remembered. The experiences are stored in a manner which permits fast recall of the closest previous experience to any new situation, thus permitting very quick predictions
of the effects of proposed actions and, given a goal behaviour, permitting fast generation of a candidate action. The learning can take place in high-dimensional non-linear control spaces with
real-valued ranges of variables. Furthermore, the method avoids a number of shortcomings of earlier learning methods in which the controller can become trapped in inadequate performance which does
not improve. Also considered is how the system is made resistant to noisy inputs and how it adapts to environmental changes. A well founded mechanism for choosing actions is introduced which solves
the experiment/perform dilemma for this domain with adequate computational efficiency, and with fast convergence to the goal behaviour. The dissertation explains in detail how the SAB control cycle
can be integrated into both low and high complexity tasks. The methods and algorithms are evaluated with numerous experiments using both real and simulated robot domains. The final experiment also
illustrates how a compound learning task can be structured into a hierarchy of simple learning tasks.
- In Proceedings of the 1997 International Machine Learning Conference
"... Locally weighted polynomial regression (LWPR) is a popular instance-based algorithm for learning continuous non-linear mappings. For more than two or three inputs and for more than a few
thousand datapoints the computational expense of predictions is daunting. We discuss drawbacks with previous appr ..."
Cited by 79 (11 self)
Locally weighted polynomial regression (LWPR) is a popular instance-based algorithm for learning continuous non-linear mappings. For more than two or three inputs and for more than a few thousand
datapoints the computational expense of predictions is daunting. We discuss drawbacks with previous approaches to dealing with this problem, and present a new algorithm based on a multiresolution
search of a quickly-constructible augmented kd-tree. Without needing to rebuild the tree, we can make fast predictions with arbitrary local weighting functions, arbitrary kernel widths and arbitrary
queries. The paper begins with a new, faster, algorithm for exact LWPR predictions. Next we introduce an approximation that achieves up to a two-orders-of-magnitude speedup with negligible accuracy
losses. Increasing a certain approximation parameter achieves greater speedups still, but with a correspondingly larger accuracy degradation. This is nevertheless useful during operations such as the
early stages...
- Computational Learning Theory and Natural Learning Systems , 1992
"... The generalization error of a function approximator, feature set or smoother can be estimated directly by the leave-one-out cross-validation error. For memory-based methods, this is
computationally feasible. We describe an initial version of a general memory-based learning system (GMBL): a large col ..."
Cited by 42 (10 self)
The generalization error of a function approximator, feature set or smoother can be estimated directly by the leave-one-out cross-validation error. For memory-based methods, this is computationally
feasible. We describe an initial version of a general memory-based learning system (GMBL): a large collection of learners brought into a widely applicable machine-learning family. We present ongoing
investigations into search algorithms which, given a dataset, find the family members and features that generalize best. We also describe GMBL's application to two noisy, difficult
problems---predicting car engine emissions from pressure waves, and controlling a robot billiards player with redundant state variables. 1 Introduction The main engineering benefit of machine
learning is its application to autonomous systems in which human decision making is minimized. Function approximation plays a large and successful role in this process. However, many other human
decisions are needed even for si...
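[Editorial illustration of the idea in this abstract, not code from the paper: for a memory-based learner the leave-one-out error can be computed directly by holding out each stored point in turn. The snippet below does this for a simple k-nearest-neighbor regressor and uses it to choose k; the names and data are invented.]

import numpy as np

def loo_error(x, y, k=3):
    """Leave-one-out cross-validation error of a k-nearest-neighbor regressor."""
    n = len(x)
    err = 0.0
    for i in range(n):
        d = np.abs(x - x[i])
        d[i] = np.inf                        # exclude the held-out point
        nbrs = np.argsort(d)[:k]             # its k nearest neighbors
        err += (y[i] - y[nbrs].mean()) ** 2  # squared prediction error
    return err / n

# Choosing the smoothing parameter k by minimizing the LOO error
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(0, 0.2, 100)
best_k = min(range(1, 11), key=lambda k: loo_error(x, y, k))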
- Neurocomputing , 1995
"... This paper explores a memory-based approach to robot learning, using memorybased neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network
designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their ..."
Cited by 26 (8 self)
This paper explores a memory-based approach to robot learning, using memorybased neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to
explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a
set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted
regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a
robot to learn a difficult juggling task. Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models. 1 Introduction An important problem in motor learning is
- CARNEGIE MELLON UNIVERSITY , 1995
"... The central thesis of this article is that memory-based methods provide natural and powerful mechanisms for high-autonomy learning control. This paper takes the form of a survey of the ways in
which memory-based methods can and have been applied to control tasks, with an emphasis on tasks in robotic ..."
Cited by 25 (3 self)
The central thesis of this article is that memory-based methods provide natural and powerful mechanisms for high-autonomy learning control. This paper takes the form of a survey of the ways in which
memory-based methods can and have been applied to control tasks, with an emphasis on tasks in robotics and manufacturing. We explain the various forms that control tasks can take, and how this
impacts on the choice of learning algorithm. We show a progression of five increasingly more complex algorithms which are applicable to increasingly more complex kinds of control tasks. We examine
their empirical behavior on robotic and industrial tasks. The final section discusses the interesting impact that explicitly remembering all previous experiences has on the problem of learning
- Proceedings of Artificial Intelligence and Statistics (AISTATS), 2007
"... Gaussian process (GP) models are flexible probabilistic nonparametric models for regression, classification and other tasks. Unfortunately they suffer from computational intractability for large
data sets. Over the past decade there have been many different approximations developed to reduce this co ..."
Cited by 21 (0 self)
Gaussian process (GP) models are flexible probabilistic nonparametric models for regression, classification and other tasks. Unfortunately they suffer from computational intractability for large data
sets. Over the past decade there have been many different approximations developed to reduce this cost. Most of these can be termed global approximations, in that they try to summarize all the
training data via a small set of support points. A different approach is that of local regression, where many local experts account for their own part of space. In this paper we start by
investigating the regimes in which these different approaches work well or fail. We then proceed to develop a new sparse GP approximation which is a combination of both the global and local
approaches. Theoretically we show that it is derived as a natural extension of the framework developed by Quiñonero Candela and Rasmussen [2005] for sparse GP approximations. We demonstrate the
benefits of the combined approximation on some 1D examples for illustration, and on some large real-world data sets. 1
, 2002
"... my family-- especially my father, Donald. iv Abstract Many important data analysis tasks can be addressed by formulating them as probability estimation problems. For example, a popular general
approach to automatic classification problems is to learn a probabilistic model of each class from data in ..."
Cited by 3 (1 self)
Many important data analysis tasks can be addressed by formulating them as probability estimation problems. For example, a popular general
approach to automatic classification problems is to learn a probabilistic model of each class from data in which the classes are known, and then use Bayes's rule with these models to predict the
correct classes of other data for which they are not known. Anomaly detection and scientific discovery tasks can often be addressed by learning probability models over possible events and then
looking for events to which these models assign low probabilities. Many data compression algorithms such as Huffman coding and arithmetic coding rely on probabilistic models of the data stream in
order to achieve high compression rates.
"... . This talk will describe the visualization tools used in our scientific computing group to look at data and functions in two and three space variables. Emphasis is given to aspects that differ
from the prevailing style elsewhere, and the points made will be illustrated with a videotape of represent ..."
This talk will describe the visualization tools used in our scientific computing group to look at data and functions in two and three space variables. Emphasis is given to aspects that differ from
the prevailing style elsewhere, and the points made will be illustrated with a videotape of representative example of the tools in use. Aside from a few inherently interactive tools such as brushing
scatterplots and choosing viewpoints, we emphasize images recorded frame-at-a-time onto videotape. Sound works effectively for presenting scalar information in sync with field displays, for adding
tick marks on the time axis, and for more subtle stretched data displays. 1. tensor/scatter tools. We have been involved in the construction of algorithms and software for simulating complex physical
systems for many years. As simulations in two and three (or more) spatial dimensions and time become more commonplace, manipulating and understanding the results have become important aspects of the | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1663113","timestamp":"2014-04-16T17:16:57Z","content_type":null,"content_length":"39509","record_id":"<urn:uuid:23092d20-cb39-408c-b4d6-37ef3a6223b3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Role of Polemics
Harvey Friedman friedman at math.ohio-state.edu
Sat Jan 21 05:10:37 EST 2006
This posting concerns the role of f.o.m. polemics, past and present.
The most common kind of f.o.m. polemic has the following form.
1. There are restrictions placed on allowable methods of proof. This can
either be a restriction on the rules of inference, or a restriction on the
kinds of mathematical objects allowed, or more subtle restrictions. The more
subtle restrictions may involve restrictions on the kinds of statements that
are allowed to be used in proofs, or restrictions on the form of properties
that can be used to form objects.
2. The polemical component consists of assertions of two basic kinds.
a. All of the existing "good" mathematics can be carried out within the
allowable methods.
b. Use of the disallowed methods of proof is "invalid" in some sense.
3. Sometimes one simultaneously strengthens a and weakens b to:
c. All future "good" mathematics can be carried out within the allowable methods.
4. The polemical nature of this setup becomes apparent because of the
absence of any convincing explanation as to what is "wrong" with the
disallowed methods, and the absence of any convincing explanation of what
"good" mathematics is.
There have been many polemics of this kind, including the most recent one in
It appears that polemics of this kind are considered now to be very
unconvincing - but nevertheless these polemics can be channeled along
productive lines.
The most common productive outlet for such polemics is a study of
i. What formal systems appropriately capture the allowable methods of proof.
ii. What existing theorems in the literature can or cannot be proved using
the restricted method of proof. Are they "good" or not?
iii. What "good" theorems, not in the literature, can or cannot be proved
using the restricted method of proof. After all, mathematics continually
evolves, with new "good" theorems created all the time. Moreover, on larger
time scales, new "good" subjects arise with lots of new "good" theorems.
Let me now be more specific.
Let us start with the polemics against the law of excluded middle. This is
usually combined with complaints against certain kinds of mathematical
In the context of the ring of integers, complaints are not normally leveled
against mathematical constructions - the integers are usually accepted at
face value.
The associated polemics have had very productive outlets.
First, Heyting formulated the important and surprisingly robust formalisms
of Heyting Propositional Logic, Heyting Predicate Calculus, and Heyting
Arithmetic (HA).
Godel established deep relationships between these three systems and their
classical counterparts. These results tell us that we can interpret normal
classical reasoning in this intuitionistic framework.
In particular, Godel showed that any A...AE...E sentence provable in PA is
provable in HA, using his Dialectica interpretation. I found a hugely
simplified proof, using a purely syntactic manipulation, and showed that
this method works very generally - even in a broad variety of intuitionistic
set theories.
These results have the effect of "reconciliation" between the classical and
intuitionistic points of view - at least in certain contexts.
But what about irreconcilable differences?
1. There are specific famous theorems in (essentially) the language of PA
which are not known to be provable in HA. In particular, various theorems in
Diophantine equations that assert "with finitely many exceptions", and also
in Diophantine approximation theory, also asserting "with finitely many
exceptions". These statements are Pi03.
2. To my knowledge, there is no famous theorem in (essentially) the language
of PA which is provable in PA, and KNOWN to be not provable in HA.
3. Concerning 2 in a different direction: I have conjectured that every
famous theorem in (essentially) the language of PA, is provable in EFA =
exponential function arithmetic. See http://www.andrew.cmu.edu/user/avigad/
Number theory and elementary arithmetic.
4. Perhaps one can find a genuine "real" theorem of PA that is not a theorem
of HA.
5. There are stabs at 4 that are not fully convincing. At least, it appears
that one can productively attempt to find more and more convincing examples.
6. There is one I have proposed.
THEOREM A. Every polynomial of several variables, with integer coefficients,
assumes a value closest to the origin.
This is provable in PA but not in HA. In fact, it is provably equivalent,
over intuitionistic EFA, to classical Sigma01 induction. (EFA with
intuitionistic logic derives the axioms of EFA, and the law of excluded
middle for bounded formulas).
So Theorem A is in particular not provable in classical EFA.
7. However, one can still PRODUCTIVELY complain about Theorem A - that it is
more of a logical principle than a real theorem.
8. Note that 7 does not ENDORSE the complaint. One could also complain about
the complaint.
9. In any case, it is a significant challenge to meet this complaint. But I
have no doubt that it can be met convincingly.
Let us take another example. There are polemics associated with
predicativity. Associated formal systems were proposed by Schutte and
Feferman, and reworked by Feferman several times.
These formalisms have, to some extent, been criticized in various ways, and
some of these criticisms are based on polemics of their own. Others simply
express skepticism that there is a clear enough notion of predicativity to
support any kind of definitive analysis - thereby (trying to) shifting the
burden onto the analyzers of and supporters of predicativity.
After the formalisms of Schutte and Feferman became standard, basic work by
Feferman and others laid out how various mathematical developments are
normally done predicatively, and how various mathematical developments, not
normally done predicatively, could be done predicatively. Furthermore, there
were some examples of where predicative interpretations were impossible - in
various appropriate senses.
If I recall, these examples, given by Feferman and others, were Pi12, and as
noted, were not directly predicatively meaningful. The point is that the
Pi12 examples failed in the HYP sets, as noted. And, as noted, even if the
universal quantifiers were predicative objects - even recursively defined -
there was no predicative realization of the existential quantifier. EXAMPLE:
Every uncountable closed set of reals has a perfect subset.
But these examples are not at all fatal for the predicative point of view.
These Pi12 statements can be appropriately (for the predicativist)
rejected as false on ontological grounds - certain objects simply don't
exist, or at least cannot be demonstrated to exist, predicatively.
Call the above the ONTOLOGICAL DEFENSE.
There remained the very basic and essential question of whether there were
any "good" examples of mathematical theorems, which could not be proved
predicatively - but where the examples are not subject to the ontological
This set the stage for the key events:
1. The celebrated Kruskal tree theorem is Pi11 and cannot be proved
predicatively (on the Feferman Schutte analysis).
2. The celebrated graph minor theorem is Pi11 and cannot be proved
predicatively (on the Feferman Schutte analysis).
3. In fact, even if one extends the analysis of predicativity to the
substantially stronger system Pi11-CA0, or ID<omega, one also has the
unprovability, for the GMT (2 above).
4. I have started to talk to Neil Robertson here, about what axioms are
required to prove the graph minor theorem. We know that Pi11-CA is enough
for EKT, and Pi11-CA0 is not enough for EKT and GMT. It appears that Pi11-CA
is not enough for the current proof of GMT, but just how much is enough for
the current proof of GMT is unclear at the moment.
Because the statements involved are Pi11, the ontological dodge is not
available. One can still ask for more: what if the statements are restricted
to predicatively pristine objects?
E.g., one can consider
Kruskal's theorem for primitive recursive infinite sequences of trees.
GMT for primitive recursive infinite sequences of graphs.
The same results apply. These statements are provably equivalent to the
1-consistency of the relevant theories.
Note that we have already completely eliminated the ontological aspect of
things with this formulation, without getting into finite forms.
One can complain that "primitive recursive" is not a nice mathematical
concept, and that, for some reason or another, one wants a nice mathematical
concept for this particular issue.
If so, then one can pass to the finite forms. These are directly Pi02 and
are well recognized by many to be "good". E.g., see
Now let us see how we can complain about the previous section.
1. We can introduce modified polemics and complain that KT and GMT above are
now provable in somewhat stronger systems, and hence cease to be
counterexamples to an altered thesis.
There should be no doubt that one can "chase" higher and higher systems, of
the kind that the new polemics put forward, with very similar new results,
even staying nicely in wqo. Of course, this is a significant challenge, but
I have no doubt that it can be met.
E.g., there is Igor Kriz's extension of my EKT. The Kriz proof uses
something like light faced Pi12-CA0, if I recall. We have no idea how strong
Kriz's extension of my EKT is, at this point.
2. We can complain that KT and GMT are celebrated mathematics, but NOT core
mathematics. I.e, KT and GMT are NOT "good enough", in some sense or
another. Again, this is a significant challenge, but again I have no doubt
that it can be met.
Experiences like the following convince me. When Bill Thurston was at
Princeton many years ago, and I was visiting there for a week, we talked
about KT and he became keenly interested in the proof, including the fact
that there were very unusual features of any proof of KT. I gave a lecture
on the spot to Bill and a student or two of his, on one of the convenient
blackboards in the hall.
3. We can complain about using finite sequences in the Pi02 forms of KT and
GMT. As mentioned before on the FOM, I gave Pi02 forms of KT that involve a
single tree only, and this appears in Lecture Notes in Logic, in honor of
Sol Feferman. These involve structural properties of tall thin finite trees.
4. We can complain that the structural properties proved for tall thin
finite trees are too special or not systematic enough or don't have enough
clear geometric meaning, etcetera. Again I have no doubt that one can make
substantial improvements of various kinds.
There are many additional polemics that seem a lot "safer" than the
preceding ones.
It has always been considered VERY SAFE to complain about big set theory
such as ZFC, and more so about the biggest set theory - ZFC with large
In particular, it has been considered to be a SAFE POLEMIC that the
underlying set concept behind ZFC and especially ZFC with large cardinals,
is useless for something "good".
Especially, it is considered VERY SAFE to complain that high powered set
theory is useless for something "good and finite".
BUT now we have http://www.cs.nyu.edu/pipermail/fom/2006-January/009565.html
Of course, one can continue the supposedly safe polemic in many ways.
1. These examples in finite graph theory are not "good".
They are getting better very rapidly. See the next few postings.
2. They are still not good enough.
But they will probably be "endorsed" in some public way by experts in graph
3. Graph theory is not good enough.
A significant challenge is to transfer these examples into the realm of
other mathematical subjects such as applied graph theory, optimization,
combinatorial group theory, semilinear algebra, semialagebraic geometry,
algebraic geometry, real analysis, complex analysis, functional analysis,
topology, geometry, etc.
The examples in finite graph theory are getting fundamental enough to
warrant full blown optimism about this.
4. Here is a complaint that one expects from philosophy and computer
science, and, perhaps from mathematics farther down the road: The concept of
arbitrary integer is the root of this evil. There are, after all,
nonstandard models.
5. Results are coming in that are of the following form.
The Pi01 sentences have positive integer parameters. Some of them have the
following property. When the parameters are set to be reasonable numbers
like, say, 8!!, one obtains a sentence B that involves only finitely many
objects (say at most 8!!! objects), such that
B is provable using large cardinals, in about 20 pages.
B is not provable in ZFC with abbreviations, without using more than 2^1000 symbols.
6. Furthermore, any remotely reasonable standard of naturalness and
endorsements by experts in numerous fields, will be met, even for 5.
7. Some set theorists will complain that the statements involved only use
"small large cardinals". This will be completely overcome, with statements
at the level of VB + j:V into V, and higher.
8. Again, this will be ultimately put into forms that meet any remotely
reasonable standards of naturalness, and endorsements by experts in numerous
fields across mathematics, including applied mathematics.
THANK YOU, polemicists! May your retreats and retractions and reformulations
be eloquent!!
Harvey Friedman
Marvelous Math
Informally, if we want to define a sequence (seq), it is an arrangement of events, elements, terms, or other objects; it is a way of keeping things in order. A sequence may be viewed as an ordered collection of members, and its length is determined by the number of ordered elements. The same element can appear many times at different positions in the same sequence.
Sequence definition:
Formally, a sequence is a function f(x) whose domain is (a subset of) the set of natural numbers; the values of the function are the terms of the sequence. Sequences are of the following types.
(1) Finite
(2) Infinite
Finite seq:- In a finite sequence the number of elements is finite. For example
A = (1, 3, 5, 7, …, 111)
B = (2, 4, 6, 8, …, 112)
A sequence of finite length n is termed an n-tuple, and finite sequences also include the empty sequence ( ), which has no elements.
Infinite seq:- In an infinite sequence the number of elements is infinite. For example
A = (…, -3, -2, -1, 0, 1, 2, 3, …)
A sequence that is infinite in both directions, having neither a first nor a final element, is called bi-infinite or two-way infinite. For example, a function defined on all the integers, such as the sequence of all even integers (…, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 12, …), is bi-infinite.
There are many important integer sequential forms and these are as follows.
(A) The even numbers which can be divided by 2.
(B) The odd numbers which cannot be divided by 2.
(C) The prime numbers that have no divisors except 1 and themselves.
The Fibonacci number sequence:- This is the sequence in which each element is the sum of the previous two elements, with the first two elements being 0 and 1. It begins (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...).
Formula for the Fibonacci sequence:- It can be defined using a recursive rule along with two initial elements:
a_n = a_(n-1) + a_(n-2), with a_0 = 0 and a_1 = 1,
where 0 and 1 are the initial elements of the Fibonacci sequence.
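As a concrete illustration (not part of the original post), here is a minimal Python sketch of that recursive rule; the function name and the choice of 12 terms are arbitrary.

```python
def fibonacci(n):
    """Return the first n elements of the Fibonacci sequence, using the
    recursive rule a_n = a_(n-1) + a_(n-2) with a_0 = 0 and a_1 = 1."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(12))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```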
Special seq:- Some of the special sequence forms are given below.
(1) Arithmetic
(2) Geometric
(3) Squares of numbers
(4) Triangular
(5) Cubes of numbers
(6) Roots of numbers
(7) Cube roots of numbers
(8) A set of vowels
(9) Indexing of documents
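To make a few of the numeric items on this list concrete, here is a small illustrative Python sketch; the starting values and ratios are arbitrary choices, not taken from the post.

```python
n = range(1, 11)

arithmetic = [5 + 2 * (k - 1) for k in n]    # first term 5, common difference 2
geometric  = [3 * 2 ** (k - 1) for k in n]   # first term 3, common ratio 2
squares    = [k ** 2 for k in n]
triangular = [k * (k + 1) // 2 for k in n]
cubes      = [k ** 3 for k in n]

print(squares)     # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
print(triangular)  # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
```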
Examples and notation
A sequence is a list of elements in a particular order. Sequences are useful for studying functions, spaces, and other mathematical structures through their convergence properties, and they form the basis for series. They are used in differential equations and analysis, in finding patterns and solving puzzles, and in the study of prime numbers.
Posted by: Alexandre Borovik | August 15, 2008
Nomination and definition
I move here my old post, with all comments.
Nomination (that is, naming, giving a name to a thing) is important but underestimated stage in development of a mathematical concept. I quote Semen Kutateladze:
Nomination is a principal ingredient of education and transfer of knowledge. Nomination differs from definition. The latter implies the description of something new with the already available
notions. Nomination is the calling of something, which is the starting point of any definition. Of course, the frontiers between nomination and definition are misty and indefinite rather than
rigid and clear-cut.
And here is another mathematician talking about this important, but underrated concept:
Suppose that you want to teach the ‘cat’ concept to a very young child. Do you explain that a cat is a relatively small, primarily carnivorous mammal with retractible claws, a distinctive sonic
output, etc.? I’ll bet not. You probably show the kid a lot of different cats, saying ‘kitty’ each time, until it gets the idea. (R. P. Boas, Can we make mathematics intelligible?, American
Mathematical Monthly 88 (1981) 727-731.)
And back to Kutateladze:
We are rarely aware of the fact that the secondary school arithmetic and geometry are the finest gems of the intellectual legacy of our forefathers. There is no literate who fails to recognize a
triangle. However, just a few know an appropriate formal definition. This is not by chance at all, since the definition of triangle is absent in the Elements.
The list can be continued – most basic concepts of elementary mathematics are the result of nomination not supported by a formal definition: number, set, curve, figure, etc. Basically, mathematics starts with nomination. I have already had a chance to write about Vladimir Radzivilovsky and his method of teaching mathematics to very young children. It involved a systematic use of an idea —
borrowed from 19th century Italian educator Maria Montessori — of teaching children to recognise and name basic geometric shapes: triangle, square, circle, etc. and comparing them by placing shapes
into similarly shaped (but perhaps differently oriented) pockets in a board. Nomenclature is a key component of the Montessori Method.
It is important to emphasise that not only vision, but also locomotor and tactile sensory systems were engaged in this exercise – Radzivilovsky trained his children to recognise and name shapes with
eyes closed.
Montessori’s inset board
Perhaps I would suggest introducing a name for an even more elementary didactic act: pointing, like pointing a finger at a thing before naming it. Julia Rempel's introduction of proportions via tasting mixtures of fruit juices, described in my previous post Proportions by mouth, is an example of pointing — it starts by demonstrating physical manifestations of a mathematical concept.
6 Comments:
John Armstrong said…
Kutateladze: … the definition of triangle is absent in the Elements.
Euclid: Definition 19.
Rectilinear figures are those which are contained by straight lines, trilateral figures being those contained by three, quadrilateral those contained by four, and multilateral those contained by
more than four straight lines.
You know, just because he used a different word doesn’t mean he didn’t define the thing. And I’m supposed to believe what he says?
In fact, most Euclid’s definitions are very straightforward cases of nomination. Look how he introduces concepts used in Definition 19:
Definition 13.
A boundary is that which is an extremity of anything.
Definition 14.
A figure is that which is contained by any boundary or boundaries.
I have a remarkable personal story about definition/nomination of quadrilateral; I’ll try to tell it next time.
I believe a clear distinction needs to be drawn between giving a name to a thing (such as a triangle or a cat) and giving a name to a process (such as the process of creating a set from a
collection of things). Moreover, pedagogically, there is a difference between giving a name to a thing-in-the-world (such as a cat, or even a drawing of a cat) and giving a name to an abstract
concept (such as “catness” or a set). I am sure such distinctions mean that some naming activities are easier for children than are others — but then I am a computer scientist, and we tend to
concern ourselves with abstract concepts and with processes.
Now you’re changing your terms. Definition 13 doesn’t point to any series of things and say, “that’s a boundary”, “that’s a boundary”, “that’s a boundary”. Nothing is being named here.
Yes, the definitions aren’t up to modern standards of rigor, but that’s what Hilbert attempted to fix with his axiomatic approach.
But Hilbert, very famously, did not DEFINE points and lines, John, in his axiomatic treatment of geometry. He took these entities as undefined primitives, and assumed (assumed!) that they satisfy
his axioms. He once told a student in a bar that his theory of geometry could just as well apply to the beer mugs and tables in front of them as to anyone’s traditional conceptions of points and
lines, if beer mugs and tables happened to satisfy the axioms. Whatever Hilbert was doing, it was not defining things.
I do not agree with your assessment of Hilbert's axiomatisation: it was he who defined points and lines by listing relations between them. Euclid did not define lines and points (in our modern understanding of "definition"). He described them.
From Wikipedia, the free encyclopedia
In special relativity, four-momentum is the generalization of the classical three-dimensional momentum to four-dimensional spacetime. Momentum is a vector in three dimensions; similarly four-momentum
is a four-vector in spacetime. The contravariant four-momentum of a particle with three-momentum p = (p_x, p_y, p_z) and energy E is
$\mathbf{P} = \begin{pmatrix} P^0 \\ P^1 \\ P^2 \\ P^3 \end{pmatrix} = \begin{pmatrix} E/c \\ p_x \\ p_y \\ p_z \end{pmatrix}$
The four-momentum is useful in relativistic calculations because it is a Lorentz vector. This means that it is easy to keep track of how it transforms under Lorentz transformations.
The above definition applies under the coordinate convention that x^0 = ct. Some authors use the convention x^0 = t which yields a modified definition with P^0 = E/c^2. It is also possible to define
covariant four-momentum P_μ where the sign of the energy is reversed.
Minkowski norm
Calculating the Minkowski norm of the four-momentum gives a Lorentz invariant quantity equal (up to factors of the speed of light c) to the square of the particle's proper mass:
$-\|\mathbf{P}\|^2 = - P^\mu P_\mu = - \eta_{\mu\nu} P^\mu P^\nu = {E^2 \over c^2} - |\mathbf p|^2 = m^2c^2$
where we use the convention that
$\eta_{\mu\nu} = \begin{pmatrix} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}$
is the metric tensor of special relativity. The magnitude ||P||^2 is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosting into different frames of reference.
Relation to four-velocity
For a massive particle, the four-momentum is given by the particle's invariant mass m multiplied by the particle's four-velocity:
$P^\mu = m \, U^\mu\!$
where the four-velocity is
$\begin{pmatrix} U^0 \\ U^1 \\ U^2 \\ U^3 \end{pmatrix} = \begin{pmatrix} \gamma c \\ \gamma v_x \\ \gamma v_y \\ \gamma v_z \end{pmatrix}$
$\gamma = \frac{1}{\sqrt{1-\left(\frac{v}{c}\right)^2}}$
is the Lorentz factor, c is the speed of light.
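As an informal illustration of this relation (not part of the article), the following Python sketch builds P^μ = m U^μ from a mass and a 3-velocity; SI units and the function name are assumptions made here for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def four_momentum(m, v):
    """Contravariant four-momentum (E/c, p_x, p_y, p_z) of a particle of
    invariant mass m (kg) moving with 3-velocity v (m/s), via P^mu = m * U^mu."""
    v = np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)  # Lorentz factor
    return m * gamma * np.array([C, v[0], v[1], v[2]])

# Example: an electron (about 9.109e-31 kg) moving at 0.6c along x.
print(four_momentum(9.109e-31, [0.6 * C, 0.0, 0.0]))
```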
Conservation of four-momentum
The conservation of the four-momentum yields two conservation laws for "classical" quantities:
1. The total energy E = P^0c is conserved.
2. The classical three-momentum p is conserved.
Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, since kinetic energy in the system center-of-mass frame and potential energy from forces
between the particles contribute to the invariant mass. As an example, two particles with four-momenta (−5 GeV/c, 4 GeV/c, 0, 0) and (−5 GeV/c, −4 GeV/c, 0, 0) each have (rest) mass 3 GeV/c^2
separately, but their total mass (the system mass) is 10 GeV/c^2. If these particles were to collide and stick, the mass of the composite object would be 10 GeV/c^2.
One practical application from particle physics of the conservation of the invariant mass involves combining the four-momenta P(A) and P(B) of two daughter particles produced in the decay of a
heavier particle with four-momentum P(C) to find the mass of the heavier particle. Conservation of four-momentum gives P(C)^μ = P(A)^μ + P(B)^μ, while the mass M of the heavier particle is given by −
||P(C)||^2 = M^2c^2. By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must be equal to M. This technique
is used, e.g., in experimental searches for Z' bosons at high-energy particle colliders, where the Z' boson would show up as a bump in the invariant mass spectrum of electron-positron or muon
-antimuon pairs.
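The reconstruction described above is straightforward to sketch in code. The snippet below is illustrative only (it is not from the article); it uses the metric convention stated earlier and, for simplicity, takes the energy components of the example particles as positive, which leaves the resulting masses unchanged.

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])    # metric signature (-, +, +, +), as above

def invariant_mass(*four_momenta):
    """System mass (in the units of E/c, e.g. GeV/c) from -||P_total||^2 = M^2 c^2.
    Each argument is a four-momentum (E/c, p_x, p_y, p_z)."""
    total = np.sum(four_momenta, axis=0)
    m2 = -(total @ ETA @ total)          # = (E/c)^2 - |p|^2
    return np.sqrt(m2)

p_a = np.array([5.0,  4.0, 0.0, 0.0])   # each particle alone has mass 3 GeV/c^2
p_b = np.array([5.0, -4.0, 0.0, 0.0])
print(invariant_mass(p_a))              # 3.0
print(invariant_mass(p_a, p_b))         # 10.0, the system mass from the text
```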
If an object's mass does not change, the Minkowski inner product of its four-momentum and corresponding four-acceleration A^μ is zero. The four-acceleration is proportional to the proper time
derivative of the four-momentum divided by the particle's mass, so
$P^{\mu} A_\mu = \eta_{\mu\nu} P^{\mu} A^\nu = \eta_{\mu\nu} P^\mu \frac{d}{d\tau} \frac{P^{\nu}}{m} = \frac{1}{2m} \frac{d}{d\tau} \|\mathbf{P}\|^2 = \frac{1}{2m} \frac{d}{d\tau} (-m^2c^2) = 0 .$
Canonical momentum in the presence of an electromagnetic potential
For a charged particle of charge q, moving in an electromagnetic field given by the electromagnetic four-potential:
$\begin{pmatrix} A^0 \\ A^1 \\ A^2 \\ A^3 \end{pmatrix} = \begin{pmatrix} \phi / c \\ A_x \\ A_y \\ A_z \end{pmatrix}$
where φ is the scalar potential and A = (A_x, A_y, A_z) the vector potential, the "canonical" momentum four-vector is
$Q^\mu = P^\mu + q A^\mu. \!$
This allows the potential energy from the charged particle in an electrostatic potential and the Lorentz force on the charged particle moving in a magnetic field to be incorporated in a compact way,
in relativistic quantum mechanics.
Riemann Sums and The Trapezoidal Rule
Riemann Sums, you remember these little guys? The first time you saw them was when you first began doing integrals. A Riemann sum is a way of approximating the area underneath the curve by breaking
it up into sections. Sometimes the sections are rectangles, sometimes they are trapezoids.
So you did a bunch of work on Riemann Sums, you struggled, you fought with them. Then you learnt how to do integrals the quick way, and you completely forgot about Riemann Sums. We are here to patch you back up today. So we've got a lovely joke for you here.
What do you get when you divide the circumference of a jack-o'-lantern by its diameter? Pumpkin pi.
Upper and lower Riemann Sums
Riemann Sums come in a lot of different flavors. Now, you might not actually have to calculate any of them on the AP test, you never know. But if you do, you need to be prepared. You need to know what the different types are, because another kind of question you might be asked is one that just asks you to compare the relative sizes of the sums. First we are going to tackle the upper and lower Riemann Sums.
Remember how a Riemann sum works: what you did was divide the shape up into sections, vertical sections. A lot of times you picked even sections, maybe with a width of 1 or a width of 2. They don't really have to be even, though. I can just pick anything I want. I'll take this point here, I'll take that point, I'll take that point, and I'll do this one right here. And then draw vertically up to the function. If you were actually going to do the calculations, what you'd have to do then is find out how high this is, because you are going to figure the area of the subsections.
For the upper Riemann sum, what you did then was draw your rectangles so they were all above the function. Going from here to there gives you one. Here to there and then down gives you your next rectangle; over here to there, the next rectangle. So we've divided the interval between here and here into three different rectangles. If you find the area of all those rectangles and add them up, you've got a really bad approximation for the area underneath the curve.
That was the upper Riemann sum; it's too big. The lower Riemann sum uses the same spots, but instead of drawing the rectangles above, we draw them below. We are going to have to use this one this time, because for the lower Riemann sum they are going to be drawn on the underside. I'm going to start over here, though, so it makes more sense.
For this one here you would draw underneath, and that's one of the sections we are doing the area on. Then you draw underneath this one; there's the next section. For the next one, notice I'm using the left side this time. I have to use the left side, and my rectangle actually doesn't even have any height, because this is right at 0,0.
But regardless, if you find the areas of those two rectangles and add them up, you've got an approximation for the area underneath the curve. And this time it's a really bad approximation. Left
and right Riemann Sums are just a different way to name the ones that are made by doing rectangles above or below. But this time instead of worrying whether the section is above or below, you just
look to see if you are going to fill in from the left side or the right side.
Notice this time I've got a different function. This one is underneath the x axis instead of above.
Let me take a couple of intervals. Draw in the vertical lines. If I was to do the upper sum, I'd be doing rectangles that all fit above. If I did the lower sum, I'd do rectangles that fit below.
For the left sum, you do your rectangles based on the left side. So based on the left side, for this particular section, its left corner is here. In this section its left corner is here. And in this section its left corner is at 0,0, so it's a rectangle that again has no height.
That one will be called a left sum if you are using this definition. It would be called an upper sum if you are doing this. So don't make the mistake of thinking that the left or right sums are always larger or smaller. You need to actually look at where the function is to see which of these is larger or smaller. In this case, the left sum is going to turn out to be the one that's smaller than the
right sum.
When I do the right sum, you take your rectangles, but this time you draw them from the right-hand side. So you get a bigger rectangle with more area, another bigger rectangle with more area, and another bigger rectangle with more area. So in this case, if you are asked to compare them, you would say that the left Riemann sum is smaller than the right Riemann sum. The left Riemann sum is an underestimate, and the right Riemann sum is an overestimate.
The next way you can do a Riemann sum is as a midpoint sum. Let's take a look at one of those. It's still a Riemann sum: we are still drawing rectangles, we are still going to find their areas, and we are still going to add up those areas to get an approximation for the area underneath the curve.
I'll pick my subsections, and this time I will go and find the midpoints. Instead of drawing from the ends of the interval, you need to go to the midpoint. I'm going to draw this as a dotted line so it doesn't get too confusing, because these are going to be the rectangles. The middle between 0 and 2 is 1. Draw upwards. That is going to be the height. If you are actually going to calculate this, you would have to take the average of these endpoints and then put that result into the formula to find the height of the first rectangle that we are using for the midpoint Riemann sum.
Next one: the middle between 2 and 4 is 3. Draw it straight upwards. If you find that height, it will be the height of the rectangle for the second subsection, and that is the next part of the midpoint Riemann sum. Next, between 4 and 8, take the average of those two. Remember, that's the quick way to find the midpoint: just do the average. 4 plus 8 over 2 is 6. Draw straight up. That location is the height of the next rectangle. Draw across, and we have the next section in the midpoint Riemann sum.
In this case, the upper sum would be too high and the lower sum would be too small. The midpoint sum is more like the Goldilocks of Riemann sums. It's just right... well, not quite just right, but it is a lot closer. Notice this area right here is an overestimate, but it's kind of being offset by this underestimate. Overestimate, underestimate. Overestimate, underestimate.
So this one is probably a fairly good approximation for the area underneath the curve. But don't make this mistake: don't assume that the midpoint sum is the average of the upper and lower sums. It usually isn't.
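If you want to check sums like these by hand, here is a small Python sketch (not part of the original lesson) that computes left, right, and midpoint sums over any partition; the example function is a placeholder, while the partition 0, 2, 4, 8 is the one used for the midpoint sum above.

```python
def riemann_sums(f, xs):
    """Left, right, and midpoint Riemann sums of f over the partition xs,
    e.g. xs = [0, 2, 4, 8]; the subintervals do not have to be equal."""
    left = right = mid = 0.0
    for a, b in zip(xs[:-1], xs[1:]):
        width = b - a
        left += f(a) * width
        right += f(b) * width
        mid += f((a + b) / 2) * width
    return left, right, mid

# Placeholder example: f(x) = x^2 on the partition 0, 2, 4, 8.
print(riemann_sums(lambda x: x**2, [0, 2, 4, 8]))  # (72.0, 296.0, 164.0)
```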
Trapezoidal sums are the next kind of sums that you can do. Now these aren't Riemann Sums. They use a different way of organizing your graph, so that you usually get a pretty good estimate of the area underneath the curve. Trapezoidal sums: it is what it says. You actually do trapezoids, and don't feel like you are stopped, don't feel like you are trapped. It's just a trapezoid; you've been doing them since geometry.
You start by picking out your intervals, the same way I did before. And if you are actually going to do the trapezoidal sum, you would find the coordinates of the endpoints, the x and y coordinates.
Instead of drawing rectangles, we are going to draw trapezoids. Trapezoids also have parallel sides. Here is a set of parallel sides, but the other sides aren't necessarily parallel. There is the next side of the trapezoid, and straight between here and here is the last side of the trapezoid. The area of a trapezoid is ½ the sum of the bases times the height of the trapezoid.
The nice thing here is that we are doing these sums by drawing vertical lines, and vertical lines are always perpendicular to the x axis. That makes a couple of things easy. It guarantees that these sides are parallel, and it also tells you that the width of the interval is really just the height of the trapezoid. Now that can be confusing, because you are so used to thinking height is up and down, height is up and down.
No, it's not. In geometry it's not up and down; height is the distance between the bases. But you know, we could go on from here: if we found this height, we would have base 1. If we found this height, we would have base 2. We could substitute into the formula, and we would have the area of that subsection. Let's draw in the other subsections now.
Between here and here it's not a trapezoid, it's actually a triangle. But you could still just use the trapezoid formula. This base right here will be called base 2, and this location isn't a line, it's just a point. But you could still call it base 1 and say that the length of base 1 is 0.
Now here is something interesting that's going to happen; it will show up later on when we make a combined formula for the trapezoidal sum. Just keep this in mind: this section right here and this section right here share a side.
Base 2 for this section is base 1 for this section. That will let us make a condensed formula later on. The next section looks like this, and the next section looks like this, so you have 4 trapezoids, and in this case it's going to be a really good approximation for the curve. It's still a slight underestimate, because there is the area above all the trapezoids, but not too bad. Trapezoidal sums, again, aren't always underestimates. Sometimes they can be overestimates.
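As a companion sketch (again, not part of the original lesson), the trapezoidal sum over a partition can be written straight from the one-half times the sum of the bases times the height formula:

```python
def trapezoidal_sum(f, xs):
    """Trapezoidal approximation of the area under f over the partition xs.
    Each subinterval [a, b] contributes 0.5 * (f(a) + f(b)) * (b - a):
    half the sum of the two parallel bases times the height (the width)."""
    return sum(0.5 * (f(a) + f(b)) * (b - a)
               for a, b in zip(xs[:-1], xs[1:]))
```

Because adjacent trapezoids share a side, for equal widths h this collapses to the condensed formula mentioned above: (h/2) * (f(x0) + 2 f(x1) + ... + 2 f(x_(n-1)) + f(x_n)).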
Time for a bit of practice. The AP tests very often will ask you to deal with trapezoidal sums, and recent practice tests that have been issued tend to show a lot of trapezoidal problems. We are going to concentrate on those in this section.
So this problem asks you to use a trapezoidal sum with three subintervals to estimate the area underneath the curve between t equals 0 and t equals 3.
This is something that you might see on the free response section. If you do, make sure that you make good notes of what you are dealing with, because they are going to try to check for your understanding. And to show that you understand this in your graph, you need to show some work.
So I'm going to start off with the graph, and that should go between 0 and 3. It doesn't have to be a really pretty graph; as long as they can read it, that's all they really care about. 1, 2, 3, put a scale on the graph, you always want a scale, showing where x and y are; the function values go between 0 and 1.68. There is something missing there, and we'll have to deal with that in a second. Maybe we should deal with that first, because we need to know how tall to make our graph. We need that function value.
They may give you all of them, or they may ask you to calculate some. It's really not that hard. You just put 2.5 into the formula, so you would have 2 sine of 2.5. Calculate your value. I worked that one out in advance: f(2.5) is about 1.20.
Notice all these decimals. If it's got these kinds of decimals, it's probably going to be on a calculator section of the test. You might have a trapezoidal sum on a non-calculator section, but then they're going to make the numbers easy to deal with.
So I have to go as high as 1.68. Only this time, I think what I'll do is make each subsection of the scale worth 0.5; that way we can make the graph a little bit bigger, a little bit easier to read. You don't have to have even intervals or an even scale from top to bottom.
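To close the loop on this problem, here is my own sketch, not the video's worked solution. The lesson quotes f(1) of about 1.68 and f(2.5) of about 1.20 for f(t) = 2 sin t, which is consistent with the partition assumed below, but the exact subintervals are a guess.

```python
from math import sin

def f(t):
    return 2 * sin(t)

ts = [0, 1, 2.5, 3]                    # assumed partition of [0, 3] into 3 subintervals
heights = [f(t) for t in ts]           # approx. [0.00, 1.68, 1.20, 0.28]

area = sum(0.5 * (heights[i] + heights[i + 1]) * (ts[i + 1] - ts[i])
           for i in range(len(ts) - 1))
print(round(area, 2))                  # approx. 3.37
```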