content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Generating Functions help
October 26th 2012, 08:33 PM
Generating Functions help
Hey there everyone, I need some help with this math problem, hoping to get some help on this topic.
Use generating functions to find the number of solutions in integers to the equation a+b+c=30 where each variable is at least 3 and at most 7.
Please explain to me step by step on how to do this so I can understand it better thanks.
October 26th 2012, 09:34 PM
Re: Generating Functions help
Hey gfbrd.
Given your constraints you will never get a solution since your maximum value will be 7+7+7=21 and your minimum will be 3+3+3=9. Did you mean to say something else?
Edit: Can't add up properly.
October 26th 2012, 09:50 PM
Re: Generating Functions help
Nope that is how the question is, so there should be some kind of answer
oh and 7+7+7=21 not 24
October 26th 2012, 09:57 PM
Re: Generating Functions help
Thanks for pointing out the error.
Well it sounds like what they want you to do is have two independent random variables with 3 to 7 and then have another random variable which is 30 minus the sum of those two.
The sum of two random variables' distribution can be found with a PGF and this will be based on two uniform distributions of 5 values with same probability for 3 to 7 inclusive.
Then the other variable will be 30 - (X+Y) but 30 is just a special case of a distribution where you have Z - W where Z has a probability density function of P(Z=30) = 1 and the distribution of
-W just reflects the distribution around the y-axis.
So can you calculate for a start the PGF for the sum of two uniform random variables (discrete uniform) with values going from 3 to 7 inclusive.
October 26th 2012, 10:00 PM
Re: Generating Functions help
lol sorry im not understanding this :(
if you dont mind i hope you can make it more simple for me to understand hahaha sorry im slow at this kind of stuff
October 26th 2012, 10:03 PM
Re: Generating Functions help
Let X = Uniform(3,7) and Y = Uniform(3,7) as well. Use the PGF formula to find the probability generating function for X+Y if X,Y are independent.
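As a concrete illustration of the generating-function computation being discussed (a sketch only, assuming Python): each variable contributes a factor x^3 + x^4 + x^5 + x^6 + x^7, and the number of integer solutions of a + b + c = 30 is the coefficient of x^30 in the product of the three factors.

from collections import defaultdict

def poly_mul(p, q):
    # multiply two polynomials stored as {exponent: coefficient} dictionaries
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] += c1 * c2
    return dict(r)

factor = {e: 1 for e in range(3, 8)}   # x^3 + x^4 + x^5 + x^6 + x^7
product = {0: 1}
for _ in range(3):                     # one factor per variable a, b, c
    product = poly_mul(product, factor)

print(product.get(30, 0))   # coefficient of x^30
print(product.get(15, 0))   # for comparison, the count for a + b + c = 15

The coefficient of x^30 comes out to 0, consistent with the observation above that the largest attainable sum is 7+7+7=21.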
October 26th 2012, 10:17 PM
Re: Generating Functions help
oh alright I get it now thanks for your help | {"url":"http://mathhelpforum.com/advanced-math-topics/206154-generating-functions-help-print.html","timestamp":"2014-04-18T09:06:45Z","content_type":null,"content_length":"6624","record_id":"<urn:uuid:e344558d-d1b3-41fd-9a19-8419a2e4443a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Field of the Invention
The present invention relates to a method for projecting wafer product overlay error and wafer product critical dimension, more specifically to a method utilizing neural network for projecting wafer
product overlay error and wafer product critical dimension.
2. Description of Related Art
Wafer product overlay error and wafer product critical dimension are two important factors in photolithography, so there are some measuring instruments for measuring wafer product overlay error and
wafer product critical dimension in a wafer factory. An engineer reads the measurement results from the measuring instruments so as to judge whether the measured wafer products conform to the wafer
specification or not, and adjust the operating conditions of the relevant wafer manufacturing machine, so that when a new batch of wafers is transported to the wafer manufacturing machine whose operating conditions have been adjusted, the new batch of wafer products has a better chance of conforming to the wafer specification. However, the measuring instruments do not measure every batch of wafer products in real time, so some bad wafer products are not found by the measuring instruments.
Moreover, the measuring instruments take much time to measure a batch of wafer products; as the volume of wafer products required increases, measuring time affects the efficiency and yield of manufacturing wafer products more dramatically.
Hence, the inventors of the present invention believe that the shortcomings described above can be improved upon and finally suggest the present invention which is of a reasonable design and is an
effective improvement.
An object of the present invention is to provide a method for projecting wafer product overlay error and wafer product critical dimension, so that wafer product overlay error and wafer product critical dimension can be forecast in real time, further enhancing the efficiency of manufacturing wafer products.
Steps of a method for projecting wafer product overlay error comprise:
• (a) sample equipment overlay error data, equipment condition data and actual wafer product overlay error data;
• (b) establish a neural network, the equipment overlay error data and the equipment condition data are inputs of the neural network, the generated output of the neural network is projected wafer
product overlay error, and the actual wafer product overlay error data is the target output of the neural network; and
• (c) set a mean square error target, train the neural network continuously until the mean square error of the neural network is no longer bigger than the mean square error target.
Steps of a method for projecting wafer product critical dimension comprise:
• (a) sample equipment critical dimension data, equipment condition data, and wafer product critical dimension data;
• (b) establish a neural network, the equipment critical dimension data and the equipment condition data are inputs of the neural network, the generated output of the neural network is projected
wafer product critical dimension, and the actual wafer product critical dimension data is the target output of the neural network; and
• (c) set a mean square error target, train the neural network continuously until the mean square error of the neural network is no longer bigger than the mean square error target.
Steps of a method for projecting wafer product overlay error and wafer product critical dimension comprise:
• (a) sample equipment overlay error data, equipment critical dimension data, equipment condition data, actual wafer product overlay error data, and actual wafer product critical dimension data;
• (b) establish a first neural network and a second neural network, the equipment overlay error data and the equipment condition data are inputs of the first neural network, the generated output of
the first neural network is projected wafer product overlay error, the actual wafer product overlay error data is the target output of the first neural network, the equipment critical dimension
data and the equipment condition data are inputs of the second neural network, the generated output of the second neural network is projected wafer product critical dimension, and the actual
wafer product critical dimension data is the target output of the second neural network; and
• (c) set a first mean square error target and a second mean square error target, train the first neural network continuously until the mean square error of the first neural network is no longer bigger than the first mean square error target, and train the second neural network continuously until the mean square error of the second neural network is no longer bigger than the second mean square error target.
FIG. 1 is a flow chart of a method for projecting wafer product overlay error according to the present invention.
FIG. 2 is a block diagram of a first neural network according to the present invention.
FIG. 3 is a figure showing the projected wafer product overlay error and the actual wafer product overlay error.
FIG. 4 is a flow chart of a method for projecting wafer product critical dimension.
FIG. 5 is a block diagram of a second neural network according to the present invention.
FIG. 6 is a figure showing the performance of the first neural network as the training continues.
As shown in FIGS. 1 and 2, a method for projecting wafer product overlay error is presented, and the steps of the method comprise:
S101: sample equipment overlay error data 1, equipment condition data 2, and actual wafer product overlay error data 3, wherein the equipment overlay error data 1 indicates the manufacturing ability of the manufacturing machines: if a batch of wafers is transported to a manufacturing machine with better manufacturing ability, the overlay error of that batch of wafer products is smaller;
S102: establish a first neural network 4, the first neural network 4 can be chosen as a back-propagation neural network, the equipment overlay error data 1 and the equipment condition data 2 are
inputs of the first neural network 4, the generated output of the first neural network 4 is projected wafer product overlay error 5, and the actual wafer product overlay error data 3 is the target
output of the first neural network 4. Therein the actual wafer product overlay error data 3 include overlay shift in x direction, overlay rotation in x direction, overlay magnification in x
direction, overlay shift in y direction, overlay rotation in y direction, overlay magnification in y direction, corrected reverse overlay in x direction, corrected reverse overlay in y direction,
potential rework overlay in x direction, potential rework overlay in y direction, reverse overlay in x direction, reverse overlay in y direction, and so on. Wherein the reverse overlay is composed of
the potential rework overlay and the corrected reverse overlay. Furthermore, the number of output neurons of the first neural network 4 must be the same as the number of the kinds of actual wafer
product overlay error data 3; and
S103: set a first mean square error target, train the first neural network 4 by compensating the variance between the projected wafer product overlay error 5 and the actual wafer product overlay
error data 3, train the first neural network 4 continuously until the mean square error of the first neural network 4 is no longer bigger than the first mean square error target (refer to FIG. 6).
When the mean square error of the first neural network 4 is no longer bigger than the first mean square error target, the training process for the first neural network 4 is accomplished.
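For illustration, the training loop of steps S101–S103 can be sketched as follows. This is only a sketch, assuming Python with NumPy and a single-hidden-layer back-propagation network; the array names, network size, learning rate, and stopping threshold are illustrative assumptions rather than part of the disclosed method.

import numpy as np

def train_until_mse_target(inputs, targets, hidden_units=8, mse_target=1e-3,
                           learning_rate=0.01, max_epochs=100000, seed=0):
    # Train a one-hidden-layer back-propagation network until the mean
    # square error is no longer bigger than the mean square error target.
    rng = np.random.default_rng(seed)
    n_in, n_out = inputs.shape[1], targets.shape[1]
    W1 = rng.normal(scale=0.1, size=(n_in, hidden_units))
    b1 = np.zeros(hidden_units)
    W2 = rng.normal(scale=0.1, size=(hidden_units, n_out))
    b2 = np.zeros(n_out)
    mse = float("inf")
    for _ in range(max_epochs):
        hidden = np.tanh(inputs @ W1 + b1)      # hidden layer
        output = hidden @ W2 + b2               # projected wafer product data
        error = output - targets                # variance against the actual data
        mse = np.mean(error ** 2)
        if mse <= mse_target:                   # training accomplished
            break
        grad_out = 2.0 * error / error.size     # gradient of the mean square error
        grad_W2 = hidden.T @ grad_out
        grad_b2 = grad_out.sum(axis=0)
        grad_hidden = (grad_out @ W2.T) * (1.0 - hidden ** 2)
        grad_W1 = inputs.T @ grad_hidden
        grad_b1 = grad_hidden.sum(axis=0)
        W2 -= learning_rate * grad_W2
        b2 -= learning_rate * grad_b2
        W1 -= learning_rate * grad_W1
        b1 -= learning_rate * grad_b1
    return (W1, b1, W2, b2), mse

# Hypothetical usage: each row of X holds the sampled equipment overlay error
# data and equipment condition data for one batch, and the matching row of Y
# holds the actual wafer product overlay error data measured for that batch.
# weights, final_mse = train_until_mse_target(X, Y, mse_target=1e-3)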
As shown in FIG. 3, when a new batch of wafers is transported to a manufacturing machine and the training process for the first neural network 4 has been accomplished, the first neural network 4 can predict the overlay error of this batch of wafers via the equipment overlay error data 1 of this batch of wafers and the equipment condition data 2 of this batch of wafers. The generated output of the first neural network 4 is the projected wafer product overlay error 5; an engineer can compare the projected wafer product overlay error 5 (dashed line) with the actual wafer product overlay error 3 (solid line) measured by measuring instruments or measuring machines so as to estimate the projection accuracy of the first neural network 4. In order to enhance the projection accuracy of the first neural network 4, an engineer can modulate some parameters of the first neural network 4, such as the number of hidden layers, the kind of activation functions, the number of neurons, the kind of input data, or the original sampling frequency. For example, if the sampling action is originally done once every twenty batches of wafers, it can be changed to be done once every five batches of wafers.
As shown in FIG. 4 and FIG. 5, a method for projecting wafer product critical dimension is presented, and the steps of the method comprise:
S201: sample equipment critical dimension data 6, equipment condition data 2, and actual wafer product critical dimension data 7, wherein the equipment critical dimension data 6 shows the manufacturing ability of the manufacturing machines: if a batch of wafers is transported to a manufacturing machine with better manufacturing ability, the critical dimension of this batch of wafer products is more accurate;
S202: establish a second neural network 8, the equipment critical dimension data 6 and the equipment condition data 2 are inputs of the second neural network 8, the generated output of the second
neural network 8 is projected wafer product critical dimension 9, and the actual wafer product critical dimension data 7 is the target output of the second neural network 8. Therein, the actual wafer
product critical dimension data 7 can include critical dimension mean, critical dimension range, and so on. Furthermore, the number of output neurons of the second neural network 8 must be the same as the
number of the actual wafer product critical dimension data 7; and
S203: set a second mean square error target, train the second neural network 8 by compensating the variance between the projected wafer product critical dimension 9 and the actual wafer product
critical dimension data 7, train the second neural network 8 continuously until the mean square error of the second neural network 8 is no longer bigger than the second mean square error target. When
the mean square error of the second neural network 8 is no longer bigger than the second mean square error target, the training process for the second neural network 8 is accomplished.
When a new batch of wafers is transported to a manufacturing machine and the training process for the second neural network 8 has been accomplished, the second neural network 8 can predict the critical dimension of this batch of wafer products via the equipment critical dimension data 6 of this batch of wafers and the equipment condition data 2 of this batch of wafers. An engineer can compare the projected wafer product critical dimension 9 with the actual wafer product critical dimension 7 measured by measuring instruments or measuring machines so as to estimate the projection accuracy of the second neural network 8. In order to enhance the forecast accuracy of the second neural network 8, the engineer can modulate some parameters of the second neural network 8 or the original sampling frequency.
The efficacy of the present invention is as follows: because the first neural network and the second neural network are trained continuously by adjusting according to the variance of the projected data against the actual data, wafer product overlay error and wafer product critical dimension can be predicted accurately. Additionally, bad wafer products can be found by engineers, engineers do not need to waste time waiting for measurement data from measuring machines, and the yield and efficiency of manufacturing wafer products can be enhanced. Furthermore, a proprietor does not need to buy many measuring machines, so the cost of manufacturing wafer products can be reduced.
What are disclosed above are only the specification and the drawings of the preferred embodiments of the present invention and it is therefore not intended that the present invention be limited to
the particular embodiments disclosed. It shall be understood by those skilled in the art that various equivalent changes may be made depending on the specification and the drawings of the present
invention without departing from the scope of the present invention. | {"url":"http://www.freshpatents.com/-dt20100225ptan20100049680.php","timestamp":"2014-04-19T22:14:31Z","content_type":null,"content_length":"53542","record_id":"<urn:uuid:5cd4db58-16f7-485c-828d-5500739d459b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aleksandar Nikolov. Tight hardness results for minimizing discrepancy
- In ACM SoCG, 2011
, 2010
Cited by 5 (2 self)
Iterative rounding and relaxation have arguably become the method of choice in dealing with unconstrained and constrained network design problems. In this paper we extend the scope of the iterative
relaxation method in two directions: (1) by handling more complex degree constraints in the minimum spanning tree problem (namely laminar crossing spanning tree), and (2) by incorporating ‘degree
bounds ’ in other combinatorial optimization problems such as matroid intersection and lattice polyhedra. We give new or improved approximation algorithms, hardness results, and integrality gaps for
these problems. • Our main result is a (1, b+O(log n))-approximation algorithm for the minimum crossing spanning tree (MCST) problem with laminar degree constraints. The laminar MCST problem is a
natural generalization of the well-studied bounded-degree MST, and is a special case of general crossing spanning tree. We give an additive Ω(log^α m) hardness of approximation for general MCST, even in the absence of costs (α > 0 is a fixed constant, and m is the number of degree constraints). This also leads to a multiplicative Ω(log^α m) hardness of approximation for the robust k-median problem [1], improving over the previously known factor 2 hardness. • We then consider the crossing contra-polymatroid intersection problem and obtain a (2, 2b + ∆ − 1)-approximation algorithm, where ∆ is the maximum element frequency. This models for example the degree-bounded spanning-set intersection in two matroids. Finally, we introduce the crossing lattice polyhedra problem, and obtain a (1, b + 2∆ − 1)-approximation under certain conditions. This result provides a unified framework and common generalization of various problems studied previously, such as degree bounded matroids.
Cited by 4 (0 self)
We study the mergeability of data summaries. Informally speaking, mergeability requires that, given two summaries on two data sets, there is a way to merge the two summaries into a single summary on
the union of the two data sets, while preserving the error and size guarantees. This property means that the summaries can be merged in a way like other algebraic operators such as sum and max, which
is especially useful for computing summaries on massive distributed data. Several data summaries are trivially mergeable by construction, most notably all the sketches that are linear functions of
the data sets. But some other fundamental ones like those for heavy hitters and quantiles, are not (known to be) mergeable. In this paper, we demonstrate that these summaries are indeed mergeable or
can be made mergeable after appropriate modifications. Specifically, we show that for ε-approximate heavy hitters, there is a deterministic mergeable summary of size O(1/ε); for ε-approximate
quantiles, there is a deterministic summary of size O((1/ε) log(εn)) that has a restricted form of mergeability, and a randomized one of size O((1/ε) log^{3/2}(1/ε)) with full mergeability. We also extend our results to geometric summaries such as ε-approximations and ε-kernels. We also achieve two results of independent interest: (1) we provide the best known randomized streaming bound for ε-approximate quantiles that depends only on ε, of size O((1/ε) log^{3/2}(1/ε)), and (2) we demonstrate that the MG and the SpaceSaving summaries for heavy hitters are isomorphic. Supported by NSF under
grants CNS-05-40347, IIS-07-
, 2011
To me, 2010 looks as annus mirabilis, a miraculous year, in several areas of my mathematical interests. Below I list seven highlights and breakthroughs, mostly in discrete geometry, hoping to share
some of my wonder and pleasure with the readers. Of course, hardly any of these great results have come out of the blue: usually the paper I refer to adds the last step to earlier ideas. Since this
is an extended abstract (of a nonexistent paper), I will be rather brief, or sometimes completely silent, about the history, with apologies to the unmentioned giants on whose shoulders the authors I
do mention have been standing. 1 A careful reader may notice that together with these great results, I will also advertise some smaller results of mine. • Larry Guth and Nets Hawk Katz [16] completed
a bold project of György Elekes (whose previous stage is reported in [10]) and obtained a neartight bound for the Erdős distinct distances problem: they proved that every n points in the plane
determine at least Ω(n / log n) distinct distances. This almost matches the best known upper bound of O(n / √(log n)), attained for the √n × √n grid. Their proof and some related results and methods
constitute the main topic of this note, and will be discussed later. • János Pach and Gábor Tardos [27] found tight lower bounds for the size of ε-nets for geometric set systems. 2 It has been known
for a long time
A well studied special case of bin packing is the 3-partition problem, where n items of size > 1/4 have to be packed in a minimum number of bins of capacity one. The famous Karmarkar-Karp algorithm transforms a fractional solution of a suitable LP relaxation for this problem into an integral solution that requires at most O(log n) additional bins. The three-permutations-problem of Beck is the following. Given any 3 permutations on n symbols, color the symbols red and blue, such that in any interval of any of those permutations, the number of red and blue symbols is roughly the same. The necessary difference is called the discrepancy. We establish a surprising connection between bin packing and Beck’s problem: The additive integrality gap of the 3-partition linear programming relaxation can be bounded by the discrepancy of 3 permutations. This connection yields an alternative method to establish an O(log n) bound on the additive integrality gap of the 3-partition. Reversely, making use of a recent example of 3 permutations, for which a discrepancy of Ω(log n) is necessary, we prove the following: The O(log^2 n) upper bound on the additive gap for bin packing
with arbitrary item sizes cannot be improved by any technique that isbased on rounding up items. Thislower bound holdsfor a large classof algorithms including the Karmarkar-Karp procedure. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=12175113","timestamp":"2014-04-16T15:09:31Z","content_type":null,"content_length":"25208","record_id":"<urn:uuid:28d39f52-85c4-4386-8250-a2b2b47a5e07>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
Additive tree functionals with small toll functions and subtrees of random trees
Stephan Wagner
Many parameters of trees are additive in the sense that they can be computed recursively from the sum of the branches plus a certain toll function. For instance, such parameters occur very frequently
in the analysis of divide-and-conquer algorithms. Here we are interested in the situation that the toll function is small (the average over all trees of a given size n decreases exponentially with
n). We prove a general central limit theorem for random labelled trees and apply it to a number of examples. The main motivation is the study of the number of subtrees in a random labelled tree, but
it also applies to classical instances such as the number of leaves.
Full Text:
PostScript PDF | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAQ0106","timestamp":"2014-04-20T18:38:29Z","content_type":null,"content_length":"10459","record_id":"<urn:uuid:55f56610-bdd3-44f5-9e6c-c9b3019927d5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
, 1997
Cited by 4 (0 self)
. It has been shown that there is a Hamilton cycle in every connected Cayley graph on each group G whose commutator subgroup is cyclic of prime-power order. This paper considers connected,
vertex-transitive graphs X of order at least 3 where the automorphism group of X contains a transitive subgroup G whose commutator subgroup is cyclic of prime-power order. We show that of these
graphs, only the Petersen graph is not hamiltonian. Key words: graph, vertex-transitive, Hamilton cycle, commutator subgroup 1 Introduction Considerable attention has been devoted to the problem of
determining whether or not a connected, vertex-transitive graph X has a Hamilton cycle [1], [8], [14]. A graph X is vertex-transitive if some group G of automorphisms of X acts transitively on V(X). If G is abelian, then it is easy to see that X has a Hamilton cycle. Thus it is natural to try to prove the same conclusion when G is
"almost abelian." Recalling ...
- J. Europ. Math. Soc
Cited by 3 (0 self)
Following a problem posed by Lovász in 1969, it is believed that every connected vertex-transitive graph has a Hamilton path. This is shown here to be true for cubic Cayley graphs arising from groups
having a (2, s, 3)-presentation, that is, for groups G = 〈a, b | a^2 = 1, b^s = 1, (ab)^3 = 1, etc. 〉 generated by an involution a and an element b of order s ≥ 3 such that their product ab has order 3. More precisely, it is shown that the Cayley graph X = Cay(G, {a, b, b^{-1}}) has a Hamilton cycle when |G| (and thus s) is congruent to 2 modulo 4, and has a long cycle missing only two vertices (and thus necessarily a Hamilton path) when |G| is congruent to 0 modulo 4. 1 Introductory remarks In 1969, Lovász [21] asked whether every connected vertex-transitive graph has a Hamilton path,
thus tying together, through this special case of the Traveling Salesman Problem, two seemingly unrelated concepts: traversability and symmetry of graphs. Lovász problem is, somewhat misleadingly,
usually referred to as the Lovász conjecture, presumably in view of the fact that, after all these years, a connected vertex-transitive graph without a Hamilton path is yet to be produced. Moreover,
only four connected vertex-transitive graphs (having at least three vertices) not possessing a Hamilton cycle are known to exist: the Petersen graph, the Coxeter graph, and the two graphs obtained
from them by replacing each vertex with a triangle. All of these are cubic graphs, suggesting perhaps that no attempt to resolve the above problem can bypass a thorough analysis of cubic
vertex-transitive graphs. Besides, none of these four graphs is a Cayley graph. This has led to a folklore conjecture that every Cayley graph is hamiltonian. This problem has spurred quite a bit of
interest in the mathematical community. In spite of a large number of articles directly and indirectly related to this subject (see | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=980568","timestamp":"2014-04-21T02:46:24Z","content_type":null,"content_length":"16628","record_id":"<urn:uuid:714147ca-732e-4358-b8ac-71868ca69465>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westover Hills, TX Math Tutor
Find a Westover Hills, TX Math Tutor
...I teach question/problem decoding, question types and elimination by the Socratic method, as well as logical essay-question construction. I have made consistently high scores on tests in my own
academic career, thus proving my point: I know how to help my students succeed! I have always made h...
43 Subjects: including logic, algebra 1, prealgebra, writing
...Also, I have developed online study courses in Algebra I, Geometry, Physics, Chemistry, Biology and Grammar. These courses can be made available as well. In addition to teaching, I am active in
special education activities to include ARD meetings, establishing IEP's and intervention for students failing classes.
9 Subjects: including algebra 1, physics, trigonometry, physical science
I have enjoyed being a teacher for the past 15 years. I can help any student be a better thinker, reader and mathematician. Whether your learning needs consist of acquiring English skills or
completing an award-winning science fair project, no job is too big or too small for us to accomplish.
22 Subjects: including prealgebra, English, reading, ESL/ESOL
...While a student at Trinity, I tutored freelance between 7-10 students regularly. I also worked for Huntington Learning Center in San Antonio during my last year in college. I have experience
tutoring most math classes taught in Texas middle and high schools as well as teaching test preparation for the math portions of the ASVAB, SAT and ACT.
14 Subjects: including precalculus, ACT Math, geometry, SAT math
...Thank you for considering me to assist your student. For over ten years I've taught students in both the classroom and homeschool environments (1st grade-college). I provide targeted support in
all core subjects (Language Arts, Math, Science, Soc. Studies, Hist), specializing in helping all stu...
27 Subjects: including prealgebra, English, reading, writing | {"url":"http://www.purplemath.com/Westover_Hills_TX_Math_tutors.php","timestamp":"2014-04-16T22:02:15Z","content_type":null,"content_length":"24287","record_id":"<urn:uuid:8e020e9f-5e7b-4fad-9a61-bfef5baf124a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
This tool generates samples of regularly spaced points within polygons. The spacing in the x and y directions, and the rotation angle for the orientation of the sampling grid, can be adjusted for
each polygon or can be set as constants for every polygon. The x and y distance parameters (xdist, ydist) can, therefore, be set either to a single value, or can reference fields in the input polygon
data source that contain the appropriate spacing values for each polygon. The x spacing refers to the axis perpendicular to the rotation angle, and the y spacing refers to the axis parallel to the
rotation angle. Thus, if there is no rotation (rot=0), the x distance refers to the east-west axis.
Note that the rotation angle must be specified in radians, not degrees. To convert from degrees to radians use the formula: radians = degrees * pi / 180, where pi = 3.141592654.
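For example, a rotation of 45 degrees is 45 * 3.141592654 / 180 ≈ 0.785 radians.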
The 'excl' option can be used to prevent points from being generated within the polygons of this dataset. For example, if you were to generate vegetation sampling points you might use a polygon layer
representing ponds as the exclusion layer to prevent points from occurring in water.
genregularpntsinpolys(in, uidfield, xdist, ydist, out, [rot], [excl], [random], [where]);
in the input reference polygon data source (points are only generated within polygons)
uidfield the input polygon unique ID field
xdist the x-axis sampling distance: either a number representing the distance, or the field name of field containing this value, e.g. 100 or "XDIST"
ydist the y-axis sampling distance: either a number representing the distance, or the field name of field containing this value, e.g. 100 or "YDIST"
out the output point data source
[rot] the rotation angle of the sampling axis: either a number representing the angle in radians, or the field name of field containing this value (default=0)
[excl] the polygon data source containing exclusion polygons: points are prevented from being generated within these polygons; this option can be dangerous - see the help documentation for details
[random] (TRUE/FALSE) randomize the alignment of the grid - if false, always aligns the grid in reference to the upper left corner of the polygon envelope (default=TRUE)
[where] the filter/selection statement that will be applied to the polygon feature class to identify a subset of features to process
genregularpntsinpolys(in="C:\data\stands.shp", uidfield="StandID", xdist=100, ydist=200, rot="Direction", out="C:\data\samplepnts.shp");
genregularpntsinpolys(in="C:\data\lakes.shp", uidfield="LAKEID", xdist="SpacingX", ydist="SpacingY", rot=0, out="C:\data\samplepnts.shp", where="AREA>100000"); | {"url":"http://www.spatialecology.com/gme/genregularpntsinpolys.htm","timestamp":"2014-04-17T15:51:51Z","content_type":null,"content_length":"7840","record_id":"<urn:uuid:597de257-1e45-4da4-a35b-118da302706a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
There are a few things I'll need to explain about the triangulate function...
First of all, you'll notice that this function actually has two definitions, with different patterns to the left of the equal sign ((a:b:c:xs) and _). The way Haskell works is that it will try to match the first pattern if it can. Then, if that fails, it'll go to the next one it finds.
In the first version of the function, it checks to see if the list of points has at least three points in it- As you can imagine, it's hard to triangulate something if you don't have at least three
points. The colons in the (a:b:c:xs) expression let you pick the head item off of a list (or, for that matter, stick something on the head if we used it on the right side of the equation) so this
pattern means we need the next three "heads" of the list to be 3 values a, b, and c. If we don't have three points, the second version of the function will match instead. (anything will match the
underline character)
If we do find that we have three points, the first version of triangulate will make a triangle and will then recursively call itself to build more triangles. In a language like Haskell, which has no
loops, these types of recursive functions that consume lists are a classic design. Most of the time, however, we can avoid explicitly creating recursive functions like this by using list functions
like map and folds, which we'll discuss later.
What's good about this code?
Using Haskell pattern matching and recursion, we can very elegantly express functions that process lists, like triangulate.
What's bad about this code?
Polygon triangulation is actually slightly more complicated than our function suggests, because there are special procedures that would need to be followed to triangulate concave polygons... In this
tutorial, we're skirting this issues by removing convex polygons when we draw our city park maps- That's why there's some oddball extra lines in the original park map. | {"url":"http://www.lisperati.com/haskell/ht5.html","timestamp":"2014-04-21T00:21:39Z","content_type":null,"content_length":"4144","record_id":"<urn:uuid:3590f94d-22b9-4f0c-8dad-f037ca9d7306>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Past, Present and Future of the Avogadro Number
Authors: U.V.S.Seshavatharam, S.Lakshminarayana
The definition of the Avogadro number (N) and the current experiments to estimate it, however, both rely on the precise definition of “one gram”. Hence most scientists consider it an ad-hoc number. But in reality that is not the case. In atomic and nuclear physics, the atomic gravitational constant is the Avogadro number times Newton’s gravitational constant. The key conceptual link that connects the gravitational force and the non-gravitational forces is the classical force limit, (c^4/G). The ratio of the classical force limit to the weak force magnitude is (N^2). Thus in this paper the authors propose many unified methods for estimating the Avogadro number.
Comments: 11 Pages. Searching, collecting, sorting and compiling the cosmic code is an essential part of unification
Download: PDF
Submission history
[v1] 2012-09-28 12:21:32
Unique-IP document downloads: 106 times
comments powered by | {"url":"http://vixra.org/abs/1209.0106","timestamp":"2014-04-16T16:55:13Z","content_type":null,"content_length":"7556","record_id":"<urn:uuid:f7caa90d-5b28-4a7c-bad0-bd5b933162c1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
Demand curve and cost function question! URGENT:)
March 24th 2008, 02:41 PM #1
Junior Member
Sep 2007
Demand curve and cost function question! URGENT:)
A monopolist faces the following demand curve and cost function
P= 50-0.5Q TC=2Q+1000
Find the follwing equations
Marginal cost:
Marginal Revenue:
Average total cost:
Graph the following on the same graph
Marginal cost - Wikipedia, the free encyclopedia
Therefore, MC = DTC/DQ = 2
Marginal Revenue works in a similar fashion, shown here:
Marginal revenue - Wikipedia, the free encyclopedia
Does that get you started?
$Q$: demand, $P$: price, $TC$: total costs
Marginal cost $MC$ is defined as:
$MC = \frac{d(TC)}{dQ} = 2$
Marginal Revenue:
Revenue is demand times price:
$R = Q \times P = Q(50 - 0.5Q) = 50Q - 0.5Q^2$
so marginal revenue is:
$MR = \frac{dR}{dQ} = 50 - Q$
Average total cost:
Average cost is:
$ATC = \frac{TC}{Q} = \frac{2Q + 1000}{Q} = 2 + \frac{1000}{Q}$
Graph the following on the same graph
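For the graphing part, a quick sketch (assuming Python with NumPy and matplotlib) that puts MC, MR and ATC on the same axes:

import numpy as np
import matplotlib.pyplot as plt

Q = np.linspace(1, 60, 300)    # quantity; start at 1 to avoid dividing by zero in ATC

MC = np.full_like(Q, 2.0)      # marginal cost: d(TC)/dQ = 2
MR = 50 - Q                    # marginal revenue: d(50Q - 0.5Q^2)/dQ
ATC = 2 + 1000 / Q             # average total cost: (2Q + 1000)/Q

plt.plot(Q, MC, label="MC")
plt.plot(Q, MR, label="MR")
plt.plot(Q, ATC, label="ATC")
plt.xlabel("Q")
plt.ylabel("price / cost")
plt.legend()
plt.show()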
Pi — Diameter vs. Radius Paragraph (Formerly On Wikipedia)
Diameter vs. radius
Today, the radius of a circle is widely accepted as its most important measure. For example, the “unit circle,” around which radian angles are measured, has a radius of 1. But at the time when the
Greek letter pi was first being associated with the circle’s circumference, it was thought that the diameter was the important measure of a circle. Since the diameter is exactly twice the radius,
this introduces an arbitrary factor of 2 into pi-related phenomena, such as:
One time around the unit circle is 2pi.
1/4 of the way around the unit circle (a right angle) is 2pi/4.
1/n of the way around the unit circle is 2pi/n.
One full cycle of a sine or cosine wave spans a width of 2pi.
A half cycle of a sine or cosine wave spans a width of pi.
Radian angles of 0 and 2pi are equal; 0 and pi are not.
If early mathematicians had measured their circles by radius, they perhaps would have assigned pi the value of 6.283.., and this factor of 2 would not be necessary. | {"url":"http://alienryderflex.com/pi.html","timestamp":"2014-04-18T15:38:46Z","content_type":null,"content_length":"4186","record_id":"<urn:uuid:04f5a814-53b6-4d0e-a3ba-86c941afaf48>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
XCOM: Enemy Within Multiplayer Balance Update is Live on PC and Console
01-08-2014 #1
Hello Commander,
XCOM: Enemy Within has received an update which adjusts the costs of units in multiplayer.
The changes are as follows:
Here are those changes again, in a text format in case you would like to be able to grab them.
Needle Grenade now costs 300, original cost decreased by 200
Heavy Floater now costs 3000, original cost decreased by 750
Sectoid Commander now costs 2700, original cost decreased by 500
Muton now costs 2200, original cost decreased by 650
Elite Muton now costs 2900, original cost decreased by 900
Lurker now costs 1200, original cost increased by 800
Psi Guardian now costs 2200, original cost decreased by 300
Imperator now costs 1700, original cost decreased by 300
Sectoid now costs 500, original cost decreased by 100
Thin Man now costs 1100, original cost decreased by 300
Drone now costs 400, original cost decreased by 300
EXALT Sniper now costs 1400, original cost decreased by 600
Chitin Plating now costs 500, original cost increased by 200
Floater now costs 1000, original cost decreased by 300
Torch now costs 2000, original cost increased by 200
Bruiser now costs 1500, original cost increased by 500
Phoenix now costs 3500, original cost increased by 300
Demolisher now costs 2200, original cost increased by 500
Ifrit now costs 3800, original cost increased by 200
Typhoon now costs 2600, original cost increased by 500
We will be watching, Commander.
Last edited by Ga1Friday; 01-08-2014 at 03:09 PM.
Love the effort that went into balancing MP.
One note on lurker. Although it's strong, it might've been nerfed a little too hard. Tile scanning and nades often easily counter lurkers. I can predict a slight drop in costs maybe 600-1000
range. =)
Great job on everything and thank you for caring about MP.
I love all the point decreases except for the Drone and the Sectoid Commander (already an amazing unit!). Lurker needed a nerf badly, but I agree that 1200 may have been a bit much. Still, it
should cut down on unsatisfying endgame stalemate situations, so I'm basically in favor.
I'm not sure about the logic behind making MECs more expensive while making Drones cheaper. MEC/Drone combo squads should be basically unaffected, but there's now even less incentive to build
squads including MECs without drones. Strange, but I guess we'll see once we begin building squads under the new regime.
Thanks for the comments everyone. Let me know how things play out after you have a chance to give these new prices a try.
Sectoid Commander squads will be a big deal now. Psi Guardian/Sectoid Commander squad will be especially strong.
Already tried out a 4 Muton 1 Floater squad. Very satisfying : )
Next up 3 SC + 3 sectoids or 3 SC + 2 drones + floater.
Please drop the Chrys and CD next. They are out of the game at current prices. Also Psi armour could get a major drop.
Have you tried out:
2x SJ, LPR, Scope, Berzerker
2x Muton
1x Muton Elite
100 Aim Muton Elite with 18 move is pretty awesome.
Mutons are now stronger than soldier equivalents.
I think SCs were fine where they were before, now being able to field an entire squad with them is going to make games really annoying, especially since we still can't hear footsteps.
Also, why weren't the Berserker or Chryssalid adjusted? Why not just remove them completely at this rate?
EDIT: Whoops, thought that was class+soldier cost. Still, 2x SC and Psi Guard is the new meta.
Last edited by V1cT; 01-08-2014 at 03:09 PM.
Quote:
I think SCs were fine where they were before, now being able to field an entire squad with them is going to make games really annoying, especially since we still can't hear footsteps.
Also, why weren't the Berserker or Chryssalid adjusted? Why not just remove them completely at this rate?
EDIT: Whoops, thought that was class+soldier cost. Still, 2x SC and Psi Guard is the new meta.
Yep. You can even afford 2x Sectoids and some gear for the Psi Guard.
MECs were expensive before....
Ouch. They'll need some careful play now, which is a good thing. Sometimes MECs with damage control got a bit too tanky unless you played hit and run for several turns.
Also, the chyssalids are unloved by all and are crying in the corner, but it was the muton buff we needed it to be.
I agree, with Sectoid Commander being cheaper and easily integrated into squads, MEC troopers are more often countered than before. There isn't any need to make them more expensive. Commando Mec
trooper is really the only "hard counter" for 6x nader squads.
On another note, most exalt units are still overpriced. They have weak aim and yet still cost more than soldiers pound for pound. The slight additional will doesn't prevent any psi attacks to a
degree that justifies the extra cost.
Quote:
I agree, with Sectoid Commander being cheaper and easily integrated into squads, MEC troopers are more often countered than before. There isn't any need to make them more expensive. Commando Mec
trooper is really the only "hard counter" for 6x nader squads.
On another note, most exalt units are still overpriced. They have weak aim and yet still cost more than soldiers pound for pound. The slight additional will doesn't prevent any psi attacks to a
degree that justifies the extra cost.
To be fair, the EXALT sniper isn't all that bad now.
The EXALT medics are still both relatively good for what you get: a basic support unit.
The other guys? Well, oh well.
I think the Exalt snipers are all pretty bad even with the buff to the basic one. The Exalt Heavies and Elite Heavies are actually pretty good.
Are they good for their costs though? Can you not make a soldier version that's superior for the same price?
Santa Claws: The Exalt medics are actually great, yes. The sniper is still totally rubbish. What's the point of bring a Exalt sniper that misses more than a soldier rookie, and does less damage
than spitting on the enemy when it does hit that 1 in 1000 chance./
I played quite a few games last night. Most of my favorite builds were no longer viable with the increase to chitin and lurker but a few were salvageable. However, thinking about how the new
price structure makes aliens so much more appealing I believe there is now going to be a dramatic shift in strategy as PSI heavy and an abundance of grenades can easily enter the field.
There is an enormous variety of SC + support builds one can build that should have a high level of success. This concerns me a little, so I ran with 3 HF and 2 sectoids. Let me just say this
about that build.... Holy *^$@ does it own. Having 3 HF's plus enough points left over for some minor support is extremely powerful to the point you feel guilty of doing something wrong. The only
thing you need to concern yourself with is a couple of DE plasma snipers or a plasma agent. Staying out of LOS until you are in grenade range takes care of that problem. Staggering the HFs out a
little makes using a SC a death sentence for your opponent. So I am a little on the fence about the price drop to HFs, 3 of them are just brutal.
The second build i ran was 2 SC and a PSI Guardian, LPR, Skelly, mind shield, bastion. This build was not as deadly as the HFs but still had little problem taking anyone out. In one match I
easily rolled over twin agents.
At this point i am at a loss as to how to counter so much PSI and nades, because you have to be able to counter both at the same time now. Maybe a plethora of Tac officers or exalt heavies? I
guess it will come down to more of a game of discovery hoping your opponent wastes abilities on your sacrificial unit you put on point. My advice, lots of units and spread them out.
Just fix the Ironman breaking save corruptions and I'll play your game again. I tried playing with autosave on and quitting to the dashboard like people suggested. My save file still became
corrupted after it froze on a supply ship yesterday.
I had the same idea! I ran basically the same thing last night, but with a Berserker Hunter instead of two smokejumpers. Having both buffs was pretty rad, but elevation was a problem on a few
maps. Double grenading Sectoid Commanders is just as much fun as it ever was, though, and having all those plasma rifles helps a lot against MECs.
Speaking of, the other squad I ran was
1x MEC Commando, railgun, Typhoon Armor
2x Drone
1x Medic, LPR, Medkit, Watcher
1x Thin Man
Before the changes, I played this same squad without the Thin Man, and with Lurker on the Medic. This is 100% an improvement. Opponent running around your MEC to get at your drones? Drop some
poison on 'em to reduce their aim. Trying to run away from your punchy MEC? Drop some poison on 'em to reduce their movement. Poison is also excellent for finishing off wounded units (after a MEC
grenade, say). Just having an extra gun makes a big difference in the squad's offensive ability as well.
I agree with Mauduke about Heavy Floaters and Sectoid Commanders being the highest impact point reductions, but don't forget about the lowly Thin Man! At 1400, he was overshadowed by Watcher
Smokejumpers, but he is a steal at 1100.
Quote:
Are they good for their costs though? Can you not make a soldier version that's superior for the same price?
Santa Claws: The Exalt medics are actually great, yes. The sniper is still totally rubbish. What's the point of bring a Exalt sniper that misses more than a soldier rookie, and does less damage
than spitting on the enemy when it does hit that 1 in 1000 chance./
The Exalt Heavy for 2900 is more expensive than a Rocketman, but for 300 points gets Holotargeting, Suppression, and a Frag Grenade (trading aim for will). The Elite Heavy costs 3600. A
Machinegunner with a Heavy Laser and Alien Grenade would cost 3950. The Elite Heavy trades Shooting ability for defense and cheap explosives.
In particular, there is no good XCOM soldier build that gets you Holotargeting on the cheap (only the Demolitionist has it, starting at 4950).
Exalt Heavies excel in squads that have a lot of other soldier who can benefit from the cover destruction of the rocket or the aim bonus of Holotargeting.
In many cases if you want Rockets you would be better off going with a Rocketman or Machinegunner, but I am hard pressed to say that Exalt Heavies aren't worth the points.
Quote:
I had the same idea! I ran basically the same thing last night, but with a Berserker Hunter instead of two smokejumpers. Having both buffs was pretty rad, but elevation was a problem on a few
maps. Double grenading Sectoid Commanders is just as much fun as it ever was, though, and having all those plasma rifles helps a lot against MECs.
The thing that makes this build so dynamic is the incredible mobility. All those Mutons can rush people down fast on many maps, and have a lot of direct damage and fire support. In a firefight,
Mutons can trade punches with just about anything.
Quote:
Speaking of, the other squad I ran was
1x MEC Commando, railgun, Typhoon Armor
2x Drone
1x Medic, LPR, Medkit, Watcher
1x Thin Man
Before the changes, I played this same squad without the Thin Man, and with Lurker on the Medic. This is 100% an improvement. Opponent running around your MEC to get at your drones? Drop some
poison on 'em to reduce their aim. Trying to run away from your punchy MEC? Drop some poison on 'em to reduce their movement. Poison is also excellent for finishing off wounded units (after a MEC
grenade, say). Just having an extra gun makes a big difference in the squad's offensive ability as well.
I agree with Mauduke about Heavy Floaters and Sectoid Commanders being the highest impact point reductions, but don't forget about the lowly Thin Man! At 1400, he was overshadowed by Watcher
Smokejumpers, but he is a steal at 1100.
Agreed, Thin Man is now playable again, and probably a much better metagame choice since he couldn't really do much in an environment dominated by Lurkers and MECs. But he is great against
soldiers, Sectoid Commanders, and Mutons.
Quote:
I played quite a few games last night. Most of my favorite builds were no longer viable with the increase to chitin and lurker but a few were salvageable. However, thinking about how the new
price structure makes aliens so much more appealing I believe there is now going to be a dramatic shift in strategy as PSI heavy and an abundance of grenades can easily enter the field.
There is an enormous variety of SC + support builds one can build that should have a high level of success. This concerns me a little, so I ran with 3 HF and 2 sectoids. Let me just say this
about that build.... Holy *^$@ does it own. Having 3 HF's plus enough points left over for some minor support is extremely powerful to the point you feel guilty of doing something wrong. The only
thing you need to concern yourself with is a couple of DE plasma snipers or a plasma agent. Staying out of LOS until you are in grenade range takes care of that problem. Staggering the HFs out a
little makes using a SC a death sentence for your opponent. So I am a little on the fence about the price drop to HFs, 3 of them are just brutal.
The second build i ran was 2 SC and a PSI Guardian, LPR, Skelly, mind shield, bastion. This build was not as deadly as the HFs but still had little problem taking anyone out. In one match I
easily rolled over twin agents.
At this point i am at a loss as to how to counter so much PSI and nades, because you have to be able to counter both at the same time now. Maybe a plethora of Tac officers or exalt heavies? I
guess it will come down to more of a game of discovery hoping your opponent wastes abilities on your sacrificial unit you put on point. My advice, lots of units and spread them out.
I think there are many ways to deal with SC builds. I think there is a good build to make around Medics with Bastion which should hard counter SC builds easily. Not certain it can deal with
Mutons or Snipers though. Many builds will have access to double nades or Rockets to reliably kill post MC, too. I don't think that SC necessarily dominates even with the price reduction.
Quote:
Are they good for their costs though? Can you not make a soldier version that's superior for the same price?
Santa Claws: The Exalt medics are actually great, yes. The sniper is still totally rubbish. What's the point of bring a Exalt sniper that misses more than a soldier rookie, and does less damage
than spitting on the enemy when it does hit that 1 in 1000 chance./
It's not too bad if you get the high ground.
Of course, having no method of getting there is kind of jarring.
Yeah, Chryssalids were barely used in EU, even less in base EW and now with these new changes the tumbleweed is rolling in the
Chryssalid hive (I mean come on, a Chryssalid isn't worth a Heavy Floater, they're just too situational)...
Definitely appreciate the changes to the other alien costs though, as a xenophile and someone who likes using Muton squads the point reductions make me very happy =)
I refuse to play MP with so much ephasize on sectoid commanders and mind control stuff. If they added SHIV to mp it would be interesting as a counter.
Cyber disc, even at the unfortunate current price, does a nice trick on most SC squads. Double rocket or rocket grenade combo plays another trick of the similar nature. Hell, as a radical
measure, you could even run a jumper spam with mind shields.
These multiplayer changes cry for another multiplayer tournament! Either get together the 2k squad for another nice stream, or the community should host another caster tournament, like FWG did in
the past. Please make it happen!!
Also why didn't the Cyberdiscs drop in price, my assumption was that the deathblossom, which is heavily underused and lacks opportunities to use it anyways, is responsible for this.
I was upset when EW first came out and my SC, cyberdisk and 3 sectoids was no longer workable.
but thanks to this, I get to go with the same squad mostly. SC, cyberdisk, 2 sectoids PLUS 2 drones!
should be fun when I get a chance to play it.
SC in no way needed the buff... Chryssalid did however, so I am surprised an already very powerful unit (made even more so by improving Mutons, which SC was declared the official counter for, even
in the beta interviews) got the price reduction instead of a unit used primarily for the satisfaction of its kill animation. Even though devs said that they'd only modify point total costs,
instead of actual stats, I'd say that a better solution would be to increase Chryssalid damage to say 9, something to actually oneshot weak humans in nanofiber/chitin/etc. Or you know, make them
able to implant eggs into sectoids (if they can do it in dead whales and sharks, the relatively humanoid sectoid really should be no problem!)
At least you made some other aliens good, not to mention brought back skeleton armor from the dead (who needs grapple when you can jump up tall buildings?) with lurker nerfs.
Wow, the game has gotten exponentially worse thanks to the SC price decrease.
I have no idea what the hell your testers are doing with their time, but SC point value has destroyed the game now.
You can run SCs, drones, and typhoon mecs in the same god damn squad. That is unbelievable. SCs are now fodder placement units, that is mindblowing. Instead of being forced to run hard counters
to 2 types of units now we have to run hard counters for 3.
Please hotfix this, this is unbelievable. The only thing that needed adjustment was the price of melee MECs (Still stupidly cheap), lurker, and alien foot soldiers.
SCs were too cheap in EU, now they are replacing smokejumpers. Drones were fine at 700.
Can someone also explain to me why flamethrower MECs cost so much more than melee MECs but are inferior in every way? Why isn't the Typhoon 3800?
I'm guessing the theory is in Flamer being AoE and 6dmg plus panic, but considering you only get 2 uses out of it and lose the extra movement range it's not so great a benefit that a tier-2
Flamer MEC should cost the same as a tier-3 Punchy MEC, which gets more hp, more Will and an extra weapon.
I don't think I've ever faced someone using a Flamer MEC in a 10k game...
Not certain if I agree. The SC price decrease was unnecessary, but I prefer SC spam to Lurker spam.
I have no idea what the hell your testers are doing with their time, but SC point value has destroyed the game now.
You can run SCs, drones, and typhoon mecs in the same god damn squad. That is unbelievable. SCs are now fodder placement units, that is mindblowing. Instead of being forced to run hard counters
to 2 types of units now we have to run hard counters for 3.
You could run that before. Not certain what has changed. The main build that got boosted was SC + Psi Guard. Secondarily multiple SC builds got a boost too.
Please hotfix this, this is unbelievable. The only thing that needed adjustment was the price of melee MECs (Still stupidly cheap), lurker, and alien foot soldiers.
SCs were too cheap in EU, now they are replacing smokejumpers. Drones were fine at 700.
One thing that prevents SC from taking over the game is that Muton spam is a viable build now, which means that double nade is a constant threat.
Can someone also explain to me why flamethrower MECs cost so much more than melee MECs but are inferior in every way? Why isn't the Typhoon 3800?
I agree with this. Flamethrower is not as good as melee and needs a point reduction.
You know, the game's become an interesting game of hunting SCs.
Lurkers still make great scouts for SC kill teams.
I wasn't complaining about SC teams, I was complaining about SC teams that mix in high HP high threat enemies.
It's hard to deal with two SCs when you have a 24 hp MEC running around punching your Mutons, much like how it's hard to deal with a MEC as it's being healed by 3 Drones while it is running
around punching all your Mutons and the SC is in the back.
You have to disable the MEC, which requires another MEC... You also need to disable the SC, which requires another SC. The cheapest counter is itself and that's a big problem.
Also a kitted out max level MEC is still cheaper than a damn Cyberdisc.
No, no more reductions. The problem with the Flamethrower is not that it's too expensive, it's that the KS MEC is too cheap. Most games are already looking like 20k matches; we don't need to make
it worse.
Having no footsteps is also compounding the issue of dealing with these types of teams.
At least we can hear MECs.
Hmm, I'm quite curious whether Ethereals would still be considered OP in 10k games with these balance changes, plus all the content in EW...
The current problem is that low will units are useless because every squad has a Commander, and the only units that are half decent at killing MECs are low will units.
Ethereals would not help the issue. MECs have solid will so Ethereal attacks wouldn't be all that devastating against them, but they would make regular soldiers even more worthless.
The current problem is that low will units are useless because every squad has a Commander, and the only units that are half decent at killing MECs are low will units.
Ethereals would not help the issue. MECs have solid will so Ethereal attacks wouldn't be all that devastating against them, but they would make regular soldiers even more worthless.
No soldier has good enough will to resist a commander except MECs (and even they will fall to a Psi Inspired or Mind Merged commander), and Etherials can MC MECs at 80-90% chance.
Commanders are not beaten by mind shields or hard counters. They are beaten by having enough firepower to retaliate after they MC a unit, or by running away properly. The environment has a lot
more explosives than EU, which means that far more squads can kill the commander the first turn after MC now. MEC + SC build gained nothing from the point change to SC because they lost the same
points to the MEC price increase. It's the Drone price change that made it more powerful. Above all the death of Lurkers made this build more powerful since Lurker Snipers were the greatest way
to beat MECs and SCs, and the Chitin increase made punch better (not that anybody was playing chitin).
IMO that build is not so unbeatable. 6 tough units should be able to win by spreading out, nading the SC when it shows, and ignoring the MEC to kill the drones. The MEC can only kill 1 unit a
turn and in practice often not even that since they need to be at point blank range to kill and they have terrible accuracy.
MECs have grenades too, and will grenade your squad while you move up to deal with the Commanders, that's the rough part. They can take a ton more punishment than you can, and when the Commanders
are dead you still need to deal with a 24 hp Damage Control/Shock plating MEC that can run farther than your units can.
I agree though, it's the Drones that make the team tough to deal with without resorting to a six grenade fodder squad.
^ The key then is hunting drones.
Python Decorators: Syntactic Sugar
Python decorators are syntactic sugar—you’ll hear that a lot. Decorators are an extremely powerful tool that, at first, don’t seem to offer any real use. Take, for example:
def decorator_test(function):
    print function.__name__

def foobar():
    print "Hello, world!"

decorator_test(foobar)
That will output foobar—quite clear, and a simple example. To convert this into decorator syntax:
def decorator_test(function):
    print function.__name__

@decorator_test
def foobar():
    print "Hello, world!"
Again, you’ll see the output foobar. So what happened? You’ll notice the @decorator_test line above the function definition of foobar(). This is the syntax for applying a decorator to a function.
Comparing our two examples, you’ll see applying a decorator to an arbitrary function and then calling function, is the equivalent of calling decorator(function).
Why bother doing this at all? There are plenty of real-world examples for decorators, and I would consider it to be “modern” Python—it applies well to object-oriented programming, and properly, in a
Pythonic way, exposes Python’s functions as first-class objects. Here’s a more practical application:
class safe:
    def __init__(self, function):
        self.function = function

    def __call__(self, *args):
        try:
            return self.function(*args)
        except Exception, e:
            print "Error: %s" % (e)
There are some major changes over the last example function. Firstly, I’ve encapsulated the functionality into the class; notice how it doesn’t affect the decorator. Rather than decorating a function
with another function, I’d do it with a class here. To make things work, I’m using the class’ __call__ method, which is going to be how I pass the functionality to my target function. I also need to
__init__ my class so that I can take the target function as a first-class object into my class. The functionality is very simple: I’m going to receive a function (self.function, created at __init__),
and test its execution safely. I use *args to receive all arguments from the target function so that functionality is preserved and completely generalized. Here’s a sample of how to use this:
@safe
def unsafe(x):
    return 1 / x

print "unsafe(1): ", unsafe(1)
print "unsafe(0): ", unsafe(0)
This outputs:
unsafe(1): 1
unsafe(0): Error: integer division or modulo by zero
Python doesn’t like it when you divide by zero, and so safe catches that and cleanly lets us know without killing the application.
This class can be used almost as a template for handling a large proportion of decorator functions; the combination of __init__ and __call__ is a lot more powerful and Pythonic—at least in my
opinion—than declaring a wrapper function with another one inside it to achieve the same functionality.
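For comparison, the nested-function version alluded to here might look something like the following. This is a sketch added for illustration, not code from the original post; the name safe_func is made up, and it keeps the post's Python 2 syntax:

def safe_func(function):
    # Function-based equivalent of the safe class above.
    def wrapper(*args):
        try:
            return function(*args)
        except Exception, e:
            print "Error: %s" % (e)
    return wrapper

For this example both versions behave the same when applied with @; the class version simply keeps the wrapped function (and any extra state) as attributes instead of closure variables.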
Outside of Django, I haven’t really used decorators a whole lot, but spending a lot of time on Project Euler meant I needed to speed up a lot of my recursive algorithms. Decorators really came to the
rescue in the form of memoization.
Let’s take a very simple Fibonacci number generator:
def fibonacci(n):
    if n in (0, 1): return n
    return fibonacci(n - 1) + fibonacci(n - 2)
It’s clear this is a very inefficient algorithm: the number of function calls increases exponentially for increasing values of n—this is because the function calls values that it has already
calculated again and again. The easy way to optimize this would be to cache the values in a dictionary and check to see if that value of n has been computed previously. If it has, return its value
from the dictionary; if not, proceed to call the function. This is memoization. Let’s look at our memoize class:
class memoize:
    def __init__(self, function):
        self.function = function
        self.memoized = {}

    def __call__(self, *args):
        try:
            return self.memoized[args]
        except KeyError:
            self.memoized[args] = self.function(*args)
            return self.memoized[args]
This is very similar to the safe class structurally. There is now a dictionary, self.memoized, that acts as our cache, and a change in the exception handling that looks for KeyError, which Python
raises when a key doesn’t exist in a dictionary. Again, this class is generalized, and will work for any recursive function that could benefit from memoization.
Let’s run a few comparisons. First, the setup:
def fibonacci(n):
    if n in (0, 1): return n
    return fibonacci(n - 1) + fibonacci(n - 2)

@memoize
def fibonacci_memoized(n):
    if n in (0, 1): return n
    return fibonacci_memoized(n - 1) + fibonacci_memoized(n - 2)
Notice how fibonacci_memoized is extremely clean—it’s the exact same function. We don’t have any extraneous cache = {} calls outside the function, and there is nothing in the algorithm that detracts
from the natural flow of the process. That is what I think is the biggest benefit of decorators: it abstracts away functionality that isn’t relevant to the core of the function.
Using a simple home-brewed timer function:
Beginning trial for fibonacci_memoized(30).
fibonacci_memoized(30) = 832040 in 0.000516s.
Beginning trial for fibonacci(30).
fibonacci(30) = 832040 in 1.147118s.
The memoized function is over 2223 times faster. Even better, in this case, it scales very well.
Beginning trial for fibonacci_memoized(40).
fibonacci_memoized(40) = 102334155 in 0.000699s.
Beginning trial for fibonacci(40).
fibonacci(40) = 102334155 in 145.366141s.
The memoized function went up about 35% (an increase of 0.000183s) whereas the vanilla version went up almost 126% (an increase of 144.219023s). While the percentage values might not show a great
deal of improvement, take a look at the actual values: this is effective. In fact, you can easily reach the maximum value Python will accept before you hit maximum recursion depth:
>>> fibonacci_memoized(332)
I’m not going to give you a comparison—I guess I’m just not too big on leaving my laptop on for hours on end with the CPU working on overload. While this essentially just became a post
on the glory and wonders of memoization, note how easy it was to get speed improvements of several orders of magnitude by using decorator functions. I’ve already created memoize, you just
have to use it. No hassle.
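One piece that never appears in the post is the home-brewed timer used for the trials above. Here is a minimal sketch of what such a helper might look like; the name trial and the exact call style are guesses, and the label is passed in explicitly because wrapping with the memoize class hides the original function's __name__ (again in Python 2 style):

import time

def trial(function, name, n):
    # Times a single call and prints it in the format used for the trials above.
    print "Beginning trial for %s(%d)." % (name, n)
    start = time.time()
    result = function(n)
    print "%s(%d) = %d in %fs." % (name, n, result, time.time() - start)

trial(fibonacci_memoized, "fibonacci_memoized", 30)
trial(fibonacci, "fibonacci", 30)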
Here’s a bit of homework to practice your decorator-fu: write a decorator function that rounds off the output of another function down to an arbitrary precision. Email it to me (I have a contact form
). Let me know if you have some cool uses for decorators, again, email me. | {"url":"http://avinashv.net/2008/04/python-decorators-syntactic-sugar/","timestamp":"2014-04-16T19:01:44Z","content_type":null,"content_length":"14451","record_id":"<urn:uuid:d8ab399c-e40f-4fce-bd29-47c456ac09ac>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
STAT W3107y Introduction to Statistical Inference 3 pts. Prerequisites: STAT W3105 or W4105, or the equivalent. Calculus-based introduction to the theory of statistics. Useful distributions, law of
large numbers and central limit theorem, point estimation, hypothesis testing, confidence intervals, maximum likelihood, likelihood ratio tests, nonparametric procedures, theory of least squares and
analysis of variance. | {"url":"http://apps.college.columbia.edu/unify/bulletinSearch.php?toggleView=open&school=CC&courseIdentifierVar=STATW3107&header=www.college.columbia.edu%2Finclude%2Fpopup_header.php&footer=www.college.columbia.edu%2Finclude%2Fpopup_footer.php","timestamp":"2014-04-24T23:55:22Z","content_type":null,"content_length":"2930","record_id":"<urn:uuid:216258ce-ea97-43f2-b7a9-3f17daee1973>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pacific Institute for the Mathematical Sciences - PIMS
Discrete Math Seminar: Dan Archdeacon
Speaker(s): Dan Archdeacon, University of Vermont
University of British Columbia
Embedding complete graphs with every triangle a face
A common problem is to embed the complete graph on a surface so that every face is a triangle. To be perverse, suppose that we require that every triangle is a face. Let K^{(n-2)/2} denote the
complete graph of order n where every pair of vertices are joined by (n-2)/2 parallel edges. For every even n at least 6 we construct a triangular embedding of this multigraph into both orientable
and non-orientable surfaces such that any three vertices form a face. We give many other related results. | {"url":"http://www.pims.math.ca/scientific-event/131119-dmsda","timestamp":"2014-04-18T05:31:50Z","content_type":null,"content_length":"16503","record_id":"<urn:uuid:b089f064-7201-4b5d-b4aa-e9ee4487e85b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
graph theory proof
May 20th 2010, 02:27 AM #1
graph theory proof
Hey everyone, any help with this one would be great. I missed a few classes while my lecturer was going through matching so I'm a bit unsure.
Let M be a matching in a graph G, and let S be the set of vertices matched by M. Prove that there exists a maximum matching in G under which all vertices in S are matched.
thanks heaps.
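A sketch of one standard argument, added here for reference (it is not a reply from the original thread): let $M^*$ be any maximum matching of $G$ and suppose some vertex $v \in S$ is not matched by $M^*$. In the symmetric difference $M \triangle M^*$ every vertex has degree at most two, so its components are paths and cycles, and $v$ is an endpoint of a path $P$ whose first edge lies in $M$; the edges of $P$ alternate between $M$ and $M^*$. If the last edge of $P$ were an $M$-edge, its final vertex would be uncovered by $M^*$ (any $M^*$-edge at it would have to lie on $P$), so $P$ would be an augmenting path for $M^*$, contradicting maximality. Hence $P$ ends with an $M^*$-edge at a vertex $u$ not covered by $M$, i.e. $u \notin S$. Swapping the matching along $P$ (take $M^* \triangle P$) gives another maximum matching that now covers $v$, keeps every previously covered vertex of $S$ covered, and uncovers only $u \notin S$. Repeating this finitely many times produces a maximum matching under which every vertex of $S$ is matched.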
Moeraki Game No.3 - Puzzle
Moeraki Game No.3
In an interesting story, Ivan Moscovich created the same puzzles and patented their design in the US in 1985 (US patent No. 4,509,756);
they were licensed to Meffert Novelties in 1983. Ivan and Kasimir have reached an agreement around the puzzles' sale and distribution,
and I'm pleased to be able to once again restore the review with the full consent of both Mr Moscovich and Mr Landowski!
Moeraki Game No.3, designed by Kasimir Landowski and Ivan Moscovich, is a two dimensional manipulation puzzle.
The puzzle was awarded a gold medal at the International Trade Fair
for "Ideas-Inventions-New Products" in Nuremberg for its innovative design.
The name Moeraki originates from the Moeraki Boulders. Moeraki Boulders are spherical stones, up to two meters
high, which only appear in the south of New Zealand close to Moeraki.
I liked Moeraki Game No.3 at first glance, as its look has turned out well.
The colours fit perfectly together, the beads sparkle and glisten slightly and remind me of candy.
The beads can be shifted very smoothly and it feels very comfortable to hold the puzzle in the hand.
Moreover it makes a solid impression.
The puzzle is accompanied by a game CD with three different difficulty levels.
The puzzle is packed in a sturdy plastic case which protects the CD and the puzzle from any scratches.
The puzzle design captivates through its simplicity.
Thus, the puzzle consists of two closed elliptical rings which intersect at four places.
Both orbits have the same shape and each contains 26 beads.
One orbit lies orthogonal to the other one so that the outer shape resembles an X.
The four crossing points divide each orbit into four sections with different lengths,
i.e. with 8, 3, 8 and 3 beads. The beads are present in 4 different colours (red, blue, green and yellow).
There are 8 beads per colour as well as 16 transparent beads.
All beads of one orbit can be shifted along it.
Each of the 4 (8-bead long) sections on both orbits is given a different colour.
The aim is to order the beads according to their colour on the orbits as indicated on the board.
Many beads can be easily placed at the beginning, in spite of the huge number of possible arrangements, 48! / (8!·8!·8!·8!·16!).
Thus, within a short time I had only two beads left over which were a hard nut to crack.
To develop a solution, I needed to observe where the beads were located after one movement sequence.
I could solve the puzzle within three hours and perceived it as moderately difficult.
There are only a few puzzles whose design is at the same time simple and elegant
and which are nevertheless difficult to solve. This is one of them.
Especially the fact that at the end only two beads are left over spurs one on to master the puzzle.
Therefore I can highly recommend the puzzle to anyone who likes a challenge and does not give up easily.
You can buy the Moeraki Game No.3 on the
official website | {"url":"https://sites.google.com/site/geduldspiele/PuzzleReviewMoerakiGame3","timestamp":"2014-04-20T03:54:38Z","content_type":null,"content_length":"51492","record_id":"<urn:uuid:9032bd9c-c056-4ca1-8635-17c98f63d4f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westover Hills, TX Math Tutor
Find a Westover Hills, TX Math Tutor
...I teach question/problem decoding, question types and elimination by the Socratic method, as well as logical essay-question construction. I have made consistently high scores on tests in my own
academic career, thus proving my point: I know how to help my students succeed! I have always made h...
43 Subjects: including logic, algebra 1, prealgebra, writing
...Also, I have developed online study courses in Algebra I, Geometry, Physics, Chemistry, Biology and Grammar. These courses can be made available as well. In addition to teaching, I am active in
special education activities to include ARD meetings, establishing IEP's and intervention for students failing classes.
9 Subjects: including algebra 1, physics, trigonometry, physical science
I have enjoyed being a teacher for the past 15 years. I can help any student be a better thinker, reader and mathematician. Whether your learning needs consist of acquiring English skills or
completing an award-winning science fair project, no job is too big or too small for us to accomplish.
22 Subjects: including prealgebra, English, reading, ESL/ESOL
...While a student at Trinity, I tutored freelance between 7-10 students regularly. I also worked for Huntington Learning Center in San Antonio during my last year in college. I have experience
tutoring most math classes taught in Texas middle and high schools as well as teaching test preparation for the math portions of the ASVAB, SAT and ACT.
14 Subjects: including precalculus, ACT Math, geometry, SAT math
...Thank you for considering me to assist your student. For over ten years I've taught students in both the classroom and homeschool environments (1st grade-college). I provide targeted support in
all core subjects (Language Arts, Math, Science, Soc. Studies, Hist), specializing in helping all stu...
27 Subjects: including prealgebra, English, reading, writing | {"url":"http://www.purplemath.com/Westover_Hills_TX_Math_tutors.php","timestamp":"2014-04-16T22:02:15Z","content_type":null,"content_length":"24287","record_id":"<urn:uuid:8e020e9f-5e7b-4fad-9a61-bfef5baf124a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frank Steiner's Group
Institute of Theoretical Physics, Ulm University, Albert-Einstein-Allee 11, D-89069 Ulm, Germany
Observatoire de Lyon, Centre de Recherche Astrophysique de Lyon, Ecole Normale Supérieure de Lyon, Université Lyon 1, CNRS, 9, avenue Charles André, F-69230 Saint-Genis Laval, France
Cosmic Microwave Background (CMB) Radiation
The cosmic microwave background (CMB) radiation is the earliest electromagnetic radiation after the Big Bang which can be observed, e.g. by the NASA satellite WMAP and the ESA satellite Planck. The
tiny anisotropies in the CMB radiation are believed to be generated by quantum fluctuations in the early Universe and represent the seeds of large-scale structure formation, which leads to
the formation of galaxy clusters and galaxies. An introduction to the complex physics can be found here.
Our focus is on cosmic topology. It addresses the question whether a non-trivial topological structure of the Universe is betrayed by cosmological observations. The best chances for such a detection
are provided by anisotropies found in the CMB radiation. A non-trivial topology of the Universe can lead to a finite volume which in turn implies a suppression of anisotropies on the largest scales.
Such a suppression is indeed observed in the CMB radiation.
In the case of a spatially flat Universe the simplest non-trivial example is that of a 3-torus which can be interpreted as a 3-dimensional box where three pairs of opposing sides of the box are
identified. In this way a flat space of finite volume without a boundary is constructed. Below a CMB simulation for such a 3-torus topology is shown.
A cosmic microwave background simulation is shown for a 3-torus topology using more than 5.5 million eigenfunctions. The anisotropies of the CMB are encoded as colours whereby red means hotter than
the average temperature and blue cooler temperatures. The average temperature is approximately 2.7 Kelvin.
An example of a non-trivial topology in a space with constant positive curvature is provided by the Poincaré dodecahedron. A cosmic microwave background simulation for this topology is displayed here.
Inhomogeneous Manifolds
A non-trivial topology is specified by a group Γ of transformations which determines how the spatial points are connected. A cosmological observer constructs his fundamental domain in such a way that
the domain does not contain points that can be transformed by the group Γ closer to the observer. This natural construction leads to a fundamental domain which is also called Dirichlet domain or
Voronoi cell. The interesting point is that there are two classes of manifolds. On the one hand, there are those for which all observers obtain the same Voronoi cell, independent of their position. Such spaces are
called homogeneous. On the other hand, there are those for which the shape of the Voronoi cell can depend on the position of the observer. These are inhomogeneous manifolds. The crucial point for cosmic
topology is that the statistical properties of the CMB anisotropies vary in the latter case with the position of the observer. The comparison of such models with the measured CMB anisotropies is then
much more complex.
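To make the role of the identifications concrete, here is a small illustration (not from this page) for the simplest case, a cubic 3-torus with side length L: each coordinate difference is wrapped by the usual minimum-image rule, which is exactly what deciding whether an element of Γ can move a point closer to the observer amounts to for this topology. The function name is chosen for illustration only.

def torus_distance(p, q, L):
    # Distance between two points on a cubic 3-torus with side length L,
    # using the minimum-image rule in each coordinate.
    d2 = 0.0
    for a, b in zip(p, q):
        diff = abs(b - a) % L        # wrap the coordinate difference into [0, L)
        diff = min(diff, L - diff)   # take the shorter way around
        d2 += diff * diff
    return d2 ** 0.5

# Two points far apart in the covering space R^3 but close on the torus:
print(torus_distance((0.1, 0.1, 0.1), (9.9, 0.1, 0.1), 10.0))  # ~0.2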
Such an example in flat space is provided by the so-called half-turn space where, in contrast to the above mentioned 3-torus, one pair of sides is rotated by 180° before the sides are identified. The
animation below displays the Voronoi cell of the half-turn space whereby the observer is moved along a curve parameterised by Δ. This shows the variability of the fundamental domain.
The Voronoi Cell of the Half-Turn Space | {"url":"http://www.uni-ulm.de/nawi/nawi-theophys/frank-steiners-group.html","timestamp":"2014-04-16T04:11:32Z","content_type":null,"content_length":"18832","record_id":"<urn:uuid:7d60cdea-634c-4eb1-b9a8-6c8718dfce05>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
For each component of a vector type, result[i] = if MSB of c[i] is set ? b[i] : a[i]. For scalar type, result = c ? b : a.
gentype select ( gentype a,
gentype b,
igentype c)
gentype select ( gentype a,
gentype b,
ugentype c)
For each component of a vector type, result[i] = if MSB of c[i] is set ? b[i] : a[i].
For scalar type, result = c ? b : a.
igentype and ugentype must have the same number of elements and bits as gentype.
The argument type gentype can be char, charn, uchar, ucharn, short, shortn, ushort, ushortn, int, intn, uint, uintn, long, longn, ulong, ulongn, float, floatn, double, and doublen.
The argument type igentype refers to signed integer types, i.e. char, charn, short, shortn, int, intn, long, and longn.
The argument type ugentype refers to unsigned integer types, i.e. uchar, ucharn, ushort, ushortn, uint, uintn, ulong, and ulongn. n is 2, 3, 4, 8, or 16.
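To make the componentwise rule concrete, here is a small model of the behaviour written in Python rather than OpenCL (an illustration only, not part of this reference page; it assumes 32-bit components, so the MSB test is a check against bit 31):

def select_model(a, b, c, bits=32):
    # Componentwise model of OpenCL select() for integer vectors:
    # pick b[i] when the most significant bit of c[i] is set, otherwise a[i].
    msb = 1 << (bits - 1)
    return [bi if (ci & msb) else ai for ai, bi, ci in zip(a, b, c)]

print(select_model([1, 2, 3, 4], [10, 20, 30, 40],
                   [0xFFFFFFFF, 0x00000000, 0x80000000, 0x7FFFFFFF]))
# -> [10, 2, 30, 4]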
If an implementation extends this specification to support IEEE-754 flags or exceptions, then all built-in relational functions shall proceed without raising the invalid floating-point exception when
one or more of the operands are NaNs.
The built-in relational functions are extended with cl_khr_fp16 to include appropriate versions of functions that take half, and half{2|3|4|8|16} as arguments and return values.
Copyright © 2007-2011 The Khronos Group Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and/or associated documentation files (the "Materials"), to
deal in the Materials without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Materials, and to permit
persons to whom the Materials are furnished to do so, subject to the condition that this copyright notice and permission notice shall be included in all copies or substantial portions of the Materials.
Physics Forums - View Single Post - what is zero
Originally posted by turin
What is i or π in mod2?
i is defined to be the square root of -1 isn't it? well, then -1=1 (mod 2) and the polynomial
x^2-1 = (x+1)(x+1) mod 2
so the answer is i=1
and n is either 0 or 1 depending on n odd or even resp. | {"url":"http://www.physicsforums.com/showpost.php?p=140344&postcount=13","timestamp":"2014-04-16T07:39:17Z","content_type":null,"content_length":"7387","record_id":"<urn:uuid:e882a62d-fe48-44dd-868c-e37714dd9f8f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question about matrices with more details
Let $A,B\in\mathbb{R}^{n\times n}$. Suppose that $B$ is nonsingular and that there exists $m$ reals pairwise distinct $\lambda_{1},\cdots,\lambda_{r},\cdots,\lambda_{m}$ such that $$B^{-1}A=\begin
{pmatrix} \lambda_{1}& 1&&&&&&\cr &\lambda_{1}&\ddots&&&&&\cr &&\ddots&&&\LARGE{0}&&\cr &&&\lambda_{r}&1&&&\cr &&&&\lambda_{r}&&&\cr &&&&&\lambda_{r+1}&&\cr &&\LARGE{0}&&&&\ddots&\cr &&&&&&&\lambda_
{m} \end{pmatrix}$$ (It is the canonical Jordan form) Can we ever find reals numbers $ t_ {1}, \cdots, t_ {p} $ so that the two following assertions are true:
1. $A\Big(\displaystyle\prod_{i=1}^{p}(A+t_{i}B)\Big)B=B\Big(\displaystyle\prod_{i=1}^{p}(A+t_{i}B)\Big)A$
2. $\Big(\displaystyle\prod_{i=1}^{p}(A+t_{i}B)\Big)B\quad\mbox{is nonsingular and diagonalizable }$?
N.B :
1. The integer $p$ is not fixed.
2. This question has arisen when studying the controllability of a real discrete-time nonlinear system. This explains why the matrices are supposed to be real.
Thanks for help.
linear-algebra matrices matrix-theory
4 Perhaps this question will get more attention when you provide a little bit more background. For example, what about the case $n=2$? What makes you think that there are such $t_i$? – Martin
Brandenburg May 29 '12 at 9:23
3 Is $p$ fixed or variable? – Igor Rivin May 29 '12 at 14:15
6 It seems like there are two parts to this question: Understanding the set of $C$ such that $ACB=BCA$, and understanding the set of matrices which can be written as $\prod (A+t_i B)$. The first
part isn't so hard. It is a linear space, of dimension at least $n$, and generically of dimension exactly $n$. For generic $(A,B)$, the space of possible $C$'s has basis $A^{-1} (B A^{-1})^k$, for
$0 \leq k < n$. I haven't had much luck finding a way to think about the second question. – David Speyer May 29 '12 at 15:10
I just ran the following quick experiment: I generated two random $3 \times 3$ matrices (namely, {{61, 82, 81}, {99, 0, 82}, {24, 67, 11}} and {{28, 55, 16}, {63, 59, 68}, {84, 76, 35}}) and
solved the linear equation $A (p A^2 + q AB + r BA + s B^2) B = B (p A^2 + q AB + r BA + s B^2) A$ for $(p,q,r,s)$. There were no nonzero roots. So, if there is a formula like the above, the
formula for $C$ must have degree $>2$. – David Speyer May 29 '12 at 18:22
2 It is not a good idea to edit your original question so that the answers posted before don't make any sense. You should have accepted the correct answer and start a new question, I think. –
Vladimir Dotsenko Jun 1 '12 at 8:51
2 Answers
Let $$A=\pmatrix{1&0\cr 0&0},\quad B=\pmatrix{0&1\cr 0&0}.$$ Then $A^2=A$, $AB=B$, $BA=0$, $B^2=0$. It follows that $$A\prod (A+t_iB)B=B,$$ $$B\prod (A+t_iB)A=0.$$
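A quick numerical sanity check of this example (not part of the original answer; it assumes numpy is available and picks arbitrary values for the $t_i$):

import numpy as np

A = np.array([[1., 0.], [0., 0.]])
B = np.array([[0., 1.], [0., 0.]])

P = np.eye(2)
for t in (0.7, -2.0, 3.5):      # arbitrary choices of t_i
    P = P @ (A + t * B)

print(A @ P @ B)                # equals B, whatever the t_i are
print(B @ P @ A)                # the zero matrix, so the two sides never agree

Note that this $B$ is singular, so the example addresses the question as it stood before the edit mentioned in the comments above (the current statement requires $B$ nonsingular).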
To my taste, it seems more natural to let $A$ and $B$ play a symmetric role, by asking whether there exist non-trivial factors $s_jA+t_jB$ such that $$A\left(\prod_{j=1}^p(s_jA+t_jB)\right)B=B\left(\prod_{j=1}^p(s_jA+t_jB)\right)A.$$
If you pose the question in an algebraically closed field $k$ (say, $k=\mathbb C$), then the answer is yes for the following reason:
There exist $2^n-1$ non-zero factors $s_jA+t_jB$ such that $\prod_{j=1}^n(s_jA+t_jB)=0$.

The proof is by induction over the rank of products $\prod_{j=1}^p(s_jA+t_jB)$. Suppose that there exists such a product $\Pi$, with rank $r\ge1$. Let us write $$\Pi=\sum_{j=1}^rx_ja_j^T.$$ Then
$$\Pi M\Pi=\sum_{i,j=1}^r(a_i^TMx_j)x_ia_j^T.$$ The rank of $\Pi M\Pi$ will be less than or equal to $r-1$ if $\det(a_i^TMx_j)_{1\le i,j\le r}=0$. When $M=sA+tB$, this writes $H(s,t)=0$
where $H$ is a homogeneous polynomial of degree $r$. If $r\ge1$, it does have a non-trivial zero. Then $\Pi':=\Pi(sA+tB)\Pi$ is another product, with rank $\le r-1$. If in addition $\Pi$
has $2^{n-r}-1$ factors, then $\Pi'$ has $2^{n+1-r}-1$ factors. After $n$ steps, one obtains a product of $2^n-1$ factors whose rank is $0$.
taylors series
write the first 4 terms of the taylor's series for f(x)=3^x centered at c=0
Recall that the Taylor series is of the form $\sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n$, where the series is centered at $x_0$. The first four terms of the Taylor series centered at 0 are given by $\sum_{n=0}^{3} \frac{f^{(n)}(0)}{n!} x^n$.

$f(x) = 3^x \Rightarrow f(0) = 1$
$f'(x) = \ln(3) \cdot 3^x \Rightarrow f'(0) = \ln(3)$
$f''(x) = (\ln(3))^2 \cdot 3^x \Rightarrow f''(0) = (\ln(3))^2$
$f'''(x) = (\ln(3))^3 \cdot 3^x \Rightarrow f'''(0) = (\ln(3))^3$

So the first 4 terms of the Taylor series are: $1 + \ln(3)\, x + \frac{(\ln(3))^2}{2!}x^2 + \frac{(\ln(3))^3}{3!}x^3$
When I differentiate 3^x I get 3^x(ln3), but when I differentiate that again I get [3^x]' ln3 + 3^x [ln3]' = 3^x(ln3)(ln3) + 3^x(1/3) = 3^x(ln3)^2 + 3^x(1/3)
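A short note, not from the original thread: the slip in the last post is treating $\ln 3$ as if it depended on $x$. Since $\ln 3$ is a constant, $[\ln 3]' = 0$, so
$$\frac{d}{dx}\left(3^x \ln 3\right) = 3^x (\ln 3)^2,$$
which is exactly the $f''(x)$ used in the worked answer above.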
Solving Polynomials
Solving Polynomials (page 1 of 2)
The general technique for solving bigger-than-quadratic polynomials is pretty straightforward, but the process can be time-consuming.
The first step is to apply the Rational Roots Test to the polynomial to get a list of values that might possibly be solutions to the polynomial equation. You can follow this up with an application of
Descartes' Rule of Signs, if you like, to narrow down which possible zeroes might be best to check. Of course, if you've got a graphing calculator, it's a good idea to do a quick graph, since x-
intercepts of the graph are the same as zeroes of the equation. Seeing where the graph looks like it crosses the axis can quickly narrow down your list of possible zeroes.
Once you've found a value you want to test, you use synthetic division to see if you can get a zero remainder. If you get a zero remainder, you've not only found a zero, but you've also reduced your
polynomial by one degree. Remember that synthetic division is, among other things, division, so checking if x = a is a solution is the same as dividing out the linear factor x − a. This means that
you should not return to the original polynomial for your next computation (for finding the other zeroes); you should instead work with the output of the synthetic division. It's smaller, so it's
easier to work with.
You should not be surprised to see some complicated solutions to your polynomials (that is, solutions containing square roots or complex numbers, or both); these zeroes will come from applying the
Quadratic Formula to the last (quadratic) factor of your polynomial. Here's how the process plays out:
• Find all the zeroes of the following polynomial: 2x^5 + 3x^4 − 30x^3 − 57x^2 − 2x + 24
First, I'll apply the Rational Roots Test
Wait. Actually, the first thing I'll do is check to see if x = 1 or x = −1 is a root, because these are the simplest roots to test for. This isn't an "official" first step, but it can often be a
timesaver, because you can just look at the powers and the numbers. When x = 1, the polynomial evaluates as 2 + 3 − 30 − 57 − 2 + 24 = −60, so x = 1 isn't a root. But when x = −1, I get −2 + 3 +
30 − 57 + 2 + 24 = 0, so x = −1 is a root, and I can take care of it right away:
This leaves me with the smaller polynomial 2x^4 + x^3 − 31x^2 − 26x + 24. (Since I've divided out the factor x + 1, I've reduced the degree of the polynomial by 1. That's how I know this is a
degree-four polynomial.) Now I'll apply the Rational Roots Test to get a list of potential zeroes to try:
From experience, I've learned that most of these exercises have their zeroes near the middle of the list, rather than at the extremes. This isn't always true, of course, but it's usually better
to stay away from the larger numbers. In this case, I won't start off by trying stuff like x = 24 or x = 12. Instead, I'll start out with smaller values like x = 2. And I can narrow down my
options further by "cheating" and looking at the graph:
This is a fourth-degree polynomial, so it has, at most, four x-intercepts, and I can see all four of them on the graph. It looks like one of the zeroes is around −3.5, but −7/2 isn't on the list
that the Rational Roots Test gave me, so this must be an irrational root. I'll leave it until last.
It also looks like there may be zeroes near −1.5 and 0.5. But the clearest solution looks to be at x = 4 and since whole numbers are easier to work with than fractions, x = 4 would probably be a
good value to try:
The zero remainder says that x = 4 is a root. The bottom row of the synthetic division tells me that I'm now left with factoring 2x^3 + 9x^2 + 5x − 6. Looking at the constant term "−6", I can see
that x = ±24, ±12, ±8, and ±4 won't work as rational roots (even if I didn't already know from the graph), so I can cross them off of my list. (Always check the numbers as you go. The Rational
Roots Test can give a very long list of possibilities, and it can be helpful to notice that some of those values can be ignored, especially if you don't have a graphing calculator to "cheat"
with.) Comparing the remaining values on the list with the intercepts on the graph, I'll try x = 1/2:
The remainder isn't zero, so that test root didn't work. This means that the zero close to x = 1/2 on the graph must be irrational; I'll find it when I apply the Quadratic Formula later. For now,
I'll try x = −3/2:
The division came out evenly, leaving me with the polynomial 2x^2 + 6x − 4. Since I'm looking for the zeroes of the polynomial, what I really have here is 2x^2 + 6x − 4 = 0. Dividing through by 2
to get smaller numbers gives me x^2 + 3x − 2 = 0, to which I can apply the Quadratic Formula:
Then the complete solution is: x = −1, x = 4, x = −3/2, and x = (−3 ± √17)/2.
Asking you to find the zeroes of a polynomial means the same thing as asking you to find the solutions to a polynomial equation. The zeroes of a polynomial are the values of x that make the
polynomial equal to zero. So the above problem could have been stated along the lines of "Find the solutions to 2x^5 + 3x^4 − 30x^3 − 57x^2 − 2x + 24 = 0" or "Find the solutions to 2x^5 + 3x^4 − 30x^
3 − 57x^2 − 2x = −24", and the answers would have been the exact same list of x-values.
You can use these same techniques to factor bigger-than-quadratic polynomials....
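The procedure above is mechanical enough to script. Here is a rough sketch, not part of the original lesson (all the names are made up for illustration), that enumerates the Rational Roots Test candidates and peels them off with synthetic division, leaving the final quadratic for the Quadratic Formula:

from fractions import Fraction

def synthetic_division(coeffs, r):
    # Divide the polynomial (coefficients from highest degree down) by (x - r);
    # returns (quotient coefficients, remainder).
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_zeroes(coeffs):
    # Candidates p/q from the Rational Roots Test, taken from the original polynomial.
    candidates = {s * Fraction(p, q)
                  for p in divisors(coeffs[-1])
                  for q in divisors(coeffs[0])
                  for s in (1, -1)}
    zeroes, work = [], [Fraction(c) for c in coeffs]
    for r in sorted(candidates, key=abs):
        while len(work) > 3:            # stop once only a quadratic is left
            quotient, remainder = synthetic_division(work, r)
            if remainder != 0:
                break
            zeroes.append(r)
            work = quotient
    return zeroes, work

zeroes, quadratic = rational_zeroes([2, 3, -30, -57, -2, 24])
print(zeroes)     # [Fraction(-1, 1), Fraction(-3, 2), Fraction(4, 1)]
print(quadratic)  # coefficients of 2x^2 + 6x - 4; finish with the Quadratic Formula

Run on the example above, it reports the rational zeroes −1, −3/2 and 4 and the leftover quadratic 2x^2 + 6x − 4, matching the work shown in the lesson.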
Cite this article as: Stapel, Elizabeth. "Solving Polynomials." Purplemath. Available from
http://www.purplemath.com/modules/solvpoly.htm. Accessed | {"url":"http://www.purplemath.com/modules/solvpoly.htm","timestamp":"2014-04-18T22:08:30Z","content_type":null,"content_length":"35778","record_id":"<urn:uuid:d6e7c9d8-c981-47d8-b315-1d7bf815176a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Date Subject Author
12/23/12 Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/24/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/24/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/24/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/24/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/25/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/25/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? fom
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? fom
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? ross.finlayson@gmail.com
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? ross.finlayson@gmail.com
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? ross.finlayson@gmail.com
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
1/4/13 Re: Distinguishability of paths of the Infinite Binary tree??? ross.finlayson@gmail.com
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? forbisgaryg@gmail.com
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? ross.finlayson@gmail.com
12/30/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? gus gassmann
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/29/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? fom
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? fom
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? forbisgaryg@gmail.com
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/26/12 Re: Distinguishability of paths of the Infinite Binary tree??? fom
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? gus gassmann
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Tanu R.
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? mueckenh@rz.fh-augsburg.de
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Tanu R.
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? Zaljohar@gmail.com
12/28/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? fom
12/27/12 Re: Distinguishability of paths of the Infinite Binary tree??? Virgil
12/24/12 Re: Distinguishability of paths of the Infinite Binary tree??? Ki Song | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2422282&messageID=7944224","timestamp":"2014-04-21T02:49:57Z","content_type":null,"content_length":"100740","record_id":"<urn:uuid:44a1662c-cfd1-49c4-8ad3-37191e5a0f47>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Welcome to hell, my friend.
welcome to the 420th hunger games, kushniss everdank
… Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square donuts
in than circle donuts if the circumference of the circle touched each of the corners of the square donut.
So you might end up with more donuts.
But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole?
A round donut with radius R[1] occupies the same space as a square donut with side 2R[1]. If the center circle of a round donut has a radius R[2] and the hole of a square donut has a side 2R[2], then the area of a round donut is πR[1]^2 - πR[2]^2. The area of a square donut would then be 4R[1]^2 - 4R[2]^2. This doesn't say much, but in general and throwing numbers, a full box of square donuts has more donut per donut than a full box of round donuts.
The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R[2] = R[1]/4) and replacing in the proper expressions, we have 27,3% more donut in the square one (Round: 15πR[1]^2/16 ≃ 2,94R[1]^2, square: 15R[1]^2/4 = 3,75R[1]^2). Now, assuming a large center hole (R[2] = 3R[1]/4) we again have 27,3% more donut in the square one (Round: 7πR[1]^2/16 ≃ 1,37R[1]^2, square: 7R[1]^2/4 = 1,75R[1]^2). In fact the ratio is exactly 4/π ≃ 1,27, independent of the hole size. This tells us that, approximately, we'll have a 27% bigger donut if it's square than if it's round.
tl;dr: Square donuts have 27% more donut per donut in the same space as a round one.
Thank you donut side of Tumblr.
Look at what they thought women would be wearing nowadays.
I love this.
I love how they predicted we’d all turn into Xena Warrior Princess.
well its not wrong
I love living in the future. | {"url":"http://my-unfortunate-events.tumblr.com/","timestamp":"2014-04-18T23:37:45Z","content_type":null,"content_length":"60573","record_id":"<urn:uuid:789af313-ef53-4d7d-8f54-df592b7e5302>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH VERY URRRGENT!!!!!!!!!!!!!
Posted by Gabby on Saturday, March 9, 2013 at 11:47pm.
What is the term for data that are grouped closely together? (1 point)
What association would you expect if you are graphing height and weight? (1 point)
none of these
What association would you expect if you are graphing number of hours worked and money earned? (1 point)
none of these
My Answers:
This is urgent help is needed as sOOOOn as possible!
• MATH VERY URRRGENT!!!!!!!!!!!!! - Gabby, Saturday, March 9, 2013 at 11:50pm
Mrs. Sue could you help??? or Steve maybe???
• MATH VERY URRRGENT!!!!!!!!!!!!! - Gabby, Saturday, March 9, 2013 at 11:57pm
P.S. IT's 8:56 right now so why does your clock thingy say its 11:50??
• MATH VERY URRRGENT!!!!!!!!!!!!! - Gabby, Saturday, March 9, 2013 at 11:57pm
• MATH VERY URRRGENT!!!!!!!!!!!!! - bobpursley, Saturday, March 9, 2013 at 11:57pm
rethink 4 considering a person working an hourly wage.
• MATH VERY URRRGENT!!!!!!!!!!!!! - Gabby, Sunday, March 10, 2013 at 12:07am
ok is it A then?
• MATH VERY URRRGENT!!!!!!!!!!!!! - Gabby, Sunday, March 10, 2013 at 12:18am
nvm i'll go with A since no ones here ...
• MATH VERY URRRGENT!!!!!!!!!!!!! - Jill, Sunday, March 10, 2013 at 12:29am
4 would be linear. As hours worked goes up, so does pay.
• MATH VERY URRRGENT!!!!!!!!!!!!! - Writeacher, Sunday, March 10, 2013 at 8:26am
I thought I explained this to you once before, Maybe not.
The time on these posts is set for the Eastern Time Zone in the US. Many of the tutors (including Ms Sue) live in the Eastern or Central Time Zone. If you want answers from these tutors, then you
need to post your questions no later than 11:00 pm on this website's clock ... and that means no later than 8:00 pm on your clock.
• MATH VERY URRRGENT!!!!!!!!!!!!! - Gracie, Wednesday, March 13, 2013 at 11:29am
Dude! STUDY. It means to do the lesson!!! I have that same homework, and I got 100%. Not that hard!!! JUst try. Whats the point of going to school if you are not going to the assignments that
they give you?!?!?!?! Just TRY!!!
• MATH VERY URRRGENT!!!!!!!!!!!!! - Person, Wednesday, March 20, 2013 at 12:52pm
Jill is correct 4 will be linear.
• MATH VERY URRRGENT!!!!!!!!!!!!! - Sarah, Wednesday, March 20, 2013 at 12:57pm
1. What is the term for data that are grouped closely together? (1 point)
Unselected answer (0 pts) outlier
Unselected answer (0 pts) linear
Unselected answer (0 pts) positive
Correct answer (1 pt) clustering
1 /1 point
2. What association would you expect if graphing height and weight? (1 point)
Correct answer (1 pt) positive
Unselected answer (0 pts) nonlinear
Unselected answer (0 pts) negative
Unselected answer (0 pts) none of these
1 /1 point
What association would you expect if graphing number of hours worked and money earned? (1 point)
Unselected answer (0 pts) negative
Correct answer (1 pt) linear
Unselected answer (0 pts) nonlinear
Unselected answer (0 pts) none of these
1 /1 point
I promise these are totally all correct. Cross my heart!
Hope this helps!
• MATH VERY URRRGENT!!!!!!!!!!!!! - Savannah S, Friday, March 22, 2013 at 11:15am
1. What is the term for data that are grouped closely together? (1 point)outlier
clustering <---
2. What association would you expect if graphing height and weight? (1 point)
positive <--
none of these
3. What association is shown in the given scatter plot?
(1 point)
clustering <--
none of these
4. What association would you expect if graphing number of hours worked and money earned? (1 point)negative
linear <--
none of these
XD LOL Just took this and got a 100%
• MATH VERY URRRGENT!!!!!!!!!!!!! - true, Thursday, March 28, 2013 at 11:07pm
3 is D, just did it.
• MATH VERY URRRGENT!!!!!!!!!!!!! - Whats it to you?, Wednesday, February 19, 2014 at 11:06am
3. Is none of these. Lying Jerks
• MATH VERY URRRGENT!!!!!!!!!!!!! - Connexus, Monday, March 17, 2014 at 7:24pm
Whats it to you is right.
• MATH VERY URRRGENT!!!!!!!!!!!!! - k, Thursday, March 20, 2014 at 9:28am
one of them was wrong
• MATH VERY URRRGENT!!!!!!!!!!!!! - Never!!!!!,!!!!!!m!m!m, Monday, March 24, 2014 at 9:09am
• MATH VERY URRRGENT!!!!!!!!!!!!! - Runtafus, Wednesday, March 26, 2014 at 8:44pm
lol 3 is D XD
• MATH VERY URRRGENT!!!!!!!!!!!!! - Connexus, Friday, March 28, 2014 at 3:23pm
LOL...everyone from connexus is looking at this. xD
• MATH VERY URRRGENT!!!!!!!!!!!!! - M <3 J, Sunday, March 30, 2014 at 6:43pm
Is there an intrinsic way to define the group law on Abelian varieties?
On an elliptic curve given by a degree three equation y^2 = x(x - 1)(x - λ), we can define the group law in the following way (cf. Hartshorne):
1. We note that the map to its Jacobian given by $\mathcal{O}(p - p_0)$ for a fixed point $p_0$ is an isomorphism; ergo it inherits a group structure from the Jacobian.
2. In fact, if we embed it into $\mathbb{P}^2$ via the linear system $|3p_0|$, then three collinear points have $p + q + r \sim 3p_0$ and so this is in fact the group law inherited from $\operatorname{Pic}^0$.
Is there an analogous way to do this for Abelian varieties? In Lange & Birkenhake they simply define an Abelian variety to be $\mathbb{C}^n$ modulo a lattice, and so it automatically comes with a
group structure. Still, this seems unsatisfying in comparison to the way we can do so for elliptic curves.
That being said, the previous method doesn't seem to work for Abelian varieties; divisors no longer correspond to formal sums of points and so any comparison with $\operatorname{Pic}^0$ wouldn't obviously yield a
group structure on the points of X.
To make matters worse, the map that I tend to think of which takes X to Pic[0](X) is given by $p \mapsto t_p^*L \otimes L^{-1}$ for a given line bundle $L$ on X, where $t_p : X \to X$ is the map defined by translation in X. So this map already requires the group structure on X to be defined.
So is there a way of defining the group law analogous to that of an elliptic curve?
NB: I do note that an elliptic curve can be defined as the zero locus of a cubic equation in $\mathbb{P}^2$, where I'm not sure how else we might define an Abelian variety other than as $\mathbb{C}^n$ modulo a lattice, and so perhaps the question is moot.
ag.algebraic-geometry abelian-varieties complex-geometry
I think that the best definition of abelian variety is "a complete variety with algebraic group structure", so in this sense, the question $is$ moot – Victor Protsak May 19 '10 at 19:02
Yeah, I was thinking that, but that just feels so unsatisfying. – Simon Rose May 19 '10 at 19:19
3 Answers
Any torsor $V$ under an abelian variety over any field $K$ is canonically isomorphic to its degree $1$ Albanese variety $\operatorname{Alb}^1(V)$, which is itself a torsor under the degree $0$ Albanese variety $\operatorname{Alb}^0(V)$. Note that the $\operatorname{Alb}^0$ of any smooth projective variety is a complete, geometrically connected group variety.
Assuming there exists a $K$-rational point $O$, one can subtract $O$ to obtain an isomorphism $\operatorname{Alb}^1(V) \stackrel{\sim}{\rightarrow} \operatorname{Alb}^0(V)$. Pulling back via the composite of these isomorphisms puts a group structure on $V$ depending only on the chosen base point $O$.
What do you mean by "degree 1 Albanese variety"? The definition I know of the Albanese variety of a (smooth or normal say) proper variety $X$ is as a universal map $X \to V$ to torsors
over abelian varieties. Hence, in your case the Albanese map is the identity map (which is what you say) but I do not understand what you mean by degree $1$ and degree $0$. – Torsten
Ekedahl May 19 '10 at 19:53
For a smooth projective variety $V$, one has a total Albanese scheme $\operatorname{Alb}(V)$, whose points parameterize all zero-cycles on $V$. This comes with a degree map to $\mathbb{Z}$, and $\operatorname{Alb}^i(V)$ is the component consisting of zero-cycles of degree $i$. Each $\operatorname{Alb}^i(V)$ is -- in an evident way -- a torsor under $\operatorname{Alb}^0(V)$. – Pete L. Clark May 19 '10 at 20:02
For some more details, see Sections 4.1 and 4.3 of math.uga.edu/~pete/wc2.pdf. (Nevertheless these concepts are not due to me.) – Pete L. Clark May 19 '10 at 20:04
I understand, I was thinking about the possibility that you meant zero-cycles but was thrown off by the fact that rational equivalence doesn't work. If I remember correctly the equivalence relation you are talking about is "abelian equivalence", the equivalence relation generated by maps into torsors over abelian varieties. This gives the quite tautological relation with the definition I was referring to. Of course there is a formal definition of the degree $1$ part, in that a torsor over $A$ gives an extension $0\to A \to A' \to \mathbb{Z} \to 0$ with the torsor the inverse image of $1$. – Torsten Ekedahl May 19 '10 at 20:53
In dimension 1, the situation is like this: Every smooth proper genus 1 curve with a "marked" rational point has a unique group structure such that the given rational point becomes the
neutral element.
For abelian varieties it is still true that the group structure, the origin being fixed, is unique: an abelian variety is canonically isomorphic to its Albanese variety, and to construct these you don't need the group structure.
The canonicity of the structure of a group is easier than the Albanese stuff. Any morphism from an Abelian to another sending 0 to 0 is a morphism of groups. So if you thought you had
two group structures on (A,0), well, you only have one and they coincide! – ACL Nov 26 '10 at 19:33
add comment
For algebraically complete integrable systems Abelian varieties usually show up as follows: One has the preimage under n commuting integrals on a complex 2n-dim symplectic manifold. Then, for regular values, assuming that the integrals are proper, one has compact fibers with n commuting holomorphic vectorfields (the symplectic gradients of the integrals) without zeros. This is an algebraic variety, and the group structure comes from its Lie algebra by flowing along the vectorfields.
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry abelian-varieties complex-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/25256/is-there-an-intrinsic-way-to-define-the-group-law-on-abelian-varieties","timestamp":"2014-04-20T18:46:14Z","content_type":null,"content_length":"66900","record_id":"<urn:uuid:5e58c21a-081a-4e2d-a086-412d0007cc1c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
mixing problem
August 9th 2009, 12:49 PM #1
mixing problem
tank contains 1000L of water with 15kg of salt dissolved in it. Pure water enters tank at 10L/min and the mixture leaves at the same rate. How much salt is in the tank after t minutes?
let y(t)=mass of salt in water
dy/dt = rate in - rate out
rate in = 0 because no salt is entering
rate out = $\frac{y(t)}{1000L}$
$\longrightarrow \frac{dy}{dt}=0-\frac{y(t)}{1000L}\times\frac{10L}{min}=\frac{-y(t)}{100min}$
This becomes a seperatable equation so I get $\int\frac{-1}{y(t)}dy=\int\frac{1}{100t} dt = -ln|y| = \frac{ln|t|}{100}$ solving for y I get $\frac{1}{t^{\frac{1}{100}}}$
The answer given is $15e^{\frac{-t}{100}}kg$
$\frac{dy}{dt} = -\frac{1}{100} y$
separate variables ...
$\frac{dy}{y} = -\frac{1}{100} dt$
integrate ...
$\ln{y} = -\frac{t}{100} + C$
change to an exponential equation ...
$y = Ae^{-\frac{t}{100}}$
utilize the given initial condition, at t = 0 ... y = 15
$15 = Ae^0$
$y = 15e^{-\frac{t}{100}}$
I get lost after "change to an exponential equation". Are you doing $e^{\ln y} = e^{\frac{-t}{100}} + e^C$ ? If yes, where does the $A$ come from?
$\ln{y} = -\frac{t}{100} + C$
$\Rightarrow e^{\ln{y}} = e^{-\frac{t}{100} + C} = e^{-\frac{t}{100}} e^C$
using the usual index law
$\Rightarrow y = e^{-\frac{t}{100}} e^C$.
Since C is arbitrary $e^C$ is also arbitrary and so can be given a new symbol, like A. Therefore:
$y = A e^{-\frac{t}{100}}$.
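A quick numerical sanity check of the final answer (not from the original thread; it just integrates $\frac{dy}{dt} = -\frac{y}{100}$ with small Euler steps and compares against $15e^{-t/100}$):

import math

y, dt, t = 15.0, 0.01, 0.0             # 15 kg of salt at t = 0, step of 0.01 min
while t < 50.0:
    y += dt * (-y / 100.0)             # Euler step for dy/dt = -y/100
    t += dt
print(y, 15.0 * math.exp(-t / 100.0))  # Euler value vs exact value at t = 50 min

Both numbers come out near 9.1 kg, consistent with the closed-form solution.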
August 10th 2009, 01:55 AM #4 | {"url":"http://mathhelpforum.com/differential-equations/97491-mixing-problem.html","timestamp":"2014-04-17T01:28:58Z","content_type":null,"content_length":"46581","record_id":"<urn:uuid:6a50ca83-8f3e-4421-91bb-bff87127bdca>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] inverse forcing problem
Monroe Eskew meskew at math.uci.edu
Fri Jul 23 04:53:35 EDT 2010
The answer is no. There are some simple counterexamples:
Let \kappa < \lambda be uncountable regular cardinals, with \kappa >
2^{\omega} in V. Let P = Coll(\omega,\lambda), Q =
Add(\omega,\kappa), R = Add(\omega,\omega). Let G be P-generic over
V. In V[G] there is a bijection f between \kappa and \omega. let H
be Q-generic over V[G], and let H* be the image of the isomorphism
between Q and R induced by f. Then V[G][H] = V[G][H*] = V[H][G] =
V[H*][G]. Clearly V[H] \not= V[H*].
Now is there any kind of forcing from which we can recover V as a
subclass of V[G], given G?
On Thu, Jul 22, 2010 at 11:42 AM, Monroe Eskew <meskew at math.uci.edu> wrote:
> Let N be a transitive model of ZFC. Let P be a partial order and G
> such that for some M, G is (M,P)-generic and N=M[G].
> 1) Is M unique?
> 2) Is M definable in N?
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2010-July/014941.html","timestamp":"2014-04-17T18:27:02Z","content_type":null,"content_length":"3365","record_id":"<urn:uuid:c430cceb-db84-410e-8cd7-be7aceb48038>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differential Equations and Mathematical Biology
Population growth
Administration of drugs
Cell division
Differential equations with separable variables
Equations of homogeneous type
Linear differential equations of the first order
Numerical solution of first-order equations
Symbolic computation in MATLAB
Linear Ordinary Differential Equations with Constant Coefficients
First-order linear differential equations
Linear equations of the second order
Finding the complementary function
Determining a particular integral
Forced oscillations
Differential equations of order n
Systems of Linear Ordinary Differential Equations
First-order systems of equations with constant coefficients
Replacement of one differential equation by a system
The general system
The fundamental system
Matrix notation
Initial and boundary value problems
Solving the inhomogeneous differential equation
Numerical solution of linear boundary value problems
Modelling Biological Phenomena
Nerve impulse transmission
Chemical reactions
Predator–prey models
First-Order Systems of Ordinary Differential Equations
Existence and uniqueness
The phase plane and the Jacobian matrix
Local stability
Limit cycles
Forced oscillations
Numerical solution of systems of equations
Symbolic computation on first-order systems of equations and higher-order equations
Numerical solution of nonlinear boundary value problems
Appendix: existence theory
Mathematics of Heart Physiology
The local model
The threshold effect
The phase plane analysis and the heartbeat model
Physiological considerations of the heartbeat cycle
A model of the cardiac pacemaker
Mathematics of Nerve Impulse Transmission
Excitability and repetitive firing
Travelling waves
Qualitative behavior of travelling waves
Piecewise linear model
Chemical Reactions
Wavefronts for the Belousov–Zhabotinskii reaction
Phase plane analysis of Fisher’s equation
Qualitative behavior in the general case
Spiral waves and λ − ω systems
Predator and Prey
Catching fish
The effect of fishing
The Volterra–Lotka model
Partial Differential Equations
Characteristics for equations of the first order
Another view of characteristics
Linear partial differential equations of the second order
Elliptic partial differential equations
Parabolic partial differential equations
Hyperbolic partial differential equations
The wave equation
Typical problems for the hyperbolic equation
The Euler–Darboux equation
Visualization of solutions
Evolutionary Equations
The heat equation
Separation of variables
Simple evolutionary equations
Comparison theorems
Problems of Diffusion
Diffusion through membranes
Energy and energy estimates
Global behavior of nerve impulse transmissions
Global behavior in chemical reactions
Turing diffusion driven instability and pattern formation
Finite pattern forming domains
Bifurcation and Chaos
Bifurcation of a limit cycle
Discrete bifurcation and period-doubling
Stability of limit cycles
The Poincaré plane
Numerical Bifurcation Analysis
Fixed points and stability
Path-following and bifurcation analysis
Following stable limit cycles
Bifurcation in discrete systems
Strange attractors and chaos
Stability analysis of partial differential equations
Growth of Tumors
Mathematical model I of tumor growth
Spherical tumor growth based on model I
Stability of tumor growth based on model I
Mathematical model II of tumor growth
Spherical tumor growth based on model II
Stability of tumor growth based on model II
The Kermack–McKendrick model
An incubation model
Spreading in space
Answers to Selected Exercises | {"url":"http://www.maa.org/publications/maa-reviews/differential-equations-and-mathematical-biology","timestamp":"2014-04-21T03:48:06Z","content_type":null,"content_length":"102946","record_id":"<urn:uuid:946b3a66-424c-4f0d-bc48-a86764c035e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
BioMed Research International
Volume 2014 (2014), Article ID 684014, 13 pages
Research Article
Exact and Heuristic Methods for Network Completion for Time-Varying Genetic Networks
Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan
Received 13 August 2013; Revised 9 January 2014; Accepted 22 January 2014; Published 9 March 2014
Academic Editor: Nasimul Noman
Copyright © 2014 Natsu Nakajima and Tatsuya Akutsu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Robustness in biological networks can be regarded as an important feature of living systems. A system maintains its functions against internal and external perturbations, leading to topological
changes in the network with varying delays. To understand the flexibility of biological networks, we propose a novel approach to analyze time-dependent networks, based on the framework of network
completion, which aims to make the minimum amount of modifications to a given network so that the resulting network is most consistent with the observed data. We have developed a novel network
completion method for time-varying networks by extending our previous method for the completion of stationary networks. In particular, we introduce a double dynamic programming technique to identify
change time points and required modifications. Although this extended method allows us to guarantee the optimality of the solution, this method has relatively low computational efficiency. In order
to resolve this difficulty, we developed a heuristic method for speeding up the calculation of minimum least squares errors. We demonstrate the effectiveness of our proposed methods through
computational experiments using synthetic data and real microarray gene expression data. The results indicate that our methods exhibit good performance in terms of completing and inferring gene
association networks with time-varying structures.
1. Introduction
Computational analysis of gene regulatory networks is an important topic in systems biology. A gene regulatory network is a collection of genes and their correlations and causal interactions. It is
often represented as a directed graph in which the nodes correspond to genes and the edges correspond to regulatory relationships between two genes. Gene regulatory networks play important roles in
cells. For example, gene regulatory networks maintain organisms through protein production, response to the external environment, and control of cell division processes. Therefore, deciphering gene
regulatory network structures is important for understanding cellular systems, which might also be useful for the prediction of adverse effects of new drugs and the detection of target genes for the
development of new drugs. In order to infer gene regulatory networks, various kinds of data have been used, such as gene expression profiles (particularly mRNA expression profiles), CHromatin
ImmunoPrecipitation (ChIP)-chip data for transcription binding information, DNA-protein interaction data, and protein-protein interaction data [1–3]. However, many existing studies have focused on
the use of gene expression profiles, because expression data from a large number of genes can be simultaneously observed due to developments in DNA microarray technology [1–3]. Various mathematical
models and computational methods have been applied and/or developed to infer gene regulatory networks from gene expression profiles, which include Boolean networks [4, 5], Bayesian networks [6, 7],
dynamic Bayesian networks [8], differential equations [9, 10], and graphical Gaussian models [11]. In Boolean networks, the state of each gene is simplified into 0 or 1 and the gene regulation rules
are given as Boolean functions, where 0 and 1 mean that a gene is active (in high expression) and inactive (in low expression), respectively. In the most widely used Boolean network model, it is
assumed that the states of genes change synchronously according to discrete time steps. In Bayesian networks, the states of genes are usually classified into discrete values and the gene regulation
rules are given in the form of conditional probabilities. Although standard Bayesian networks can only handle static data and acyclic networks, dynamic Bayesian networks can handle time series data
and cyclic networks. In differential equation models, the dynamics of gene expression are represented by a set of linear or nonlinear equations (one equation per gene). In graphical Gaussian models,
partial correlations are used as a measure of independence of any two genes, by which direct interactions are distinguished from indirect interactions. For details of these models and methods, see
review/comparison papers [1–3].
These network models assume that the topology of the network does not change through time, whereas the real gene regulatory network in the cell might dynamically change its structure depending on
time, the effects of certain shocks, and so forth. Therefore, many reverse engineering tools have recently been proposed, which can reconstruct time-varying biological networks based on time-series
gene expression data. Yoshida et al. [12] developed a dynamic linear model with Markov switching that represents change points in regimes that evolve according to a first-order Markov process. Fujita
et al. [13] proposed a method based on the dynamic autoregressive model. This model extends the vector autoregression (VAR) model, which can be applied to the inference of nonlinear time-dependent
biological correlations such as dynamic gene regulatory networks. Robinson and Hartemink [14] proposed a model called a nonstationary dynamic Bayesian network, based on dynamic Bayesian networks,
which allows inference from data generated by nonstationary processes in a time-dependent manner. Lèbre et al. [15] also introduced the autoregressive time-varying (ARTIVA) algorithm for the analysis
of time-varying network topologies from time course data, which is generated from different processes. This model adopts a combination of reversible jump Markov chain Monte Carlo (RJMCMC) and dynamic
Bayesian networks (DBN), in which RJMCMC is used for the identification of change time points and the resulting networks, and DBN is used to represent causal interactions among genes. Thorne and
Stumpf [8] presented a method to model the regulatory network structure between distinct segments with a set of hidden states by applying the hierarchical Dirichlet process hidden Markov model [16],
including a potentially infinite number of states and a Bayesian network model for estimating relationships between genes. Rassol and Bouaynaya [17] presented a new method based on constrained and
smoothed Kalman filtering, which is capable of estimating time-varying networks from time-series data, including unobserved and noisy measurements. The dynamics of genetic modules are represented as
a linear-state space equation and the observability of linear time-varying systems is defined by imposing sparse constraints in Kalman filters. Ahmed et al. [18] proposed an algorithm called Tesla
with machine learning, which can be cast in the form of a convex optimization problem. The basic assumption in this method is that networks at close time points do not have significant topological
differences but have common edges with high probability; in contrast, networks at distant time points are markedly different. The regulatory networks are represented by Markov random fields at
arbitrary time intervals.
As mentioned above, there have been many studies and attempts to analyze both time-independent and time-dependent networks from time-series expression data; however, gene regulatory systems in living
organisms are so complicated that any mathematical model has limitations and there is not yet a standard or established method for inference, even for time-independent networks. One of the possible
reasons is that there exists an insufficient number of high-quality time-series datasets to reconstruct the dynamic behavior of the network. In other words, it is difficult to reveal a correct or
nearly correct network based on a small amount of data that includes some noise. Hence, in our recent study, we proposed a new approach for the analysis of time-independent networks, called network
completion [19, 20], in which the minimum amount of modifications are made to a given network so that the resulting network is most consistent with the observed data. Similar concepts have been
independently proposed [21–24]. In addition, network completion can be applied to inference of networks by starting with the null network.
In this paper, we present two novel methods for the completion and inference of time-varying networks using dynamic programming and least squares fitting (DPLSQ): DPLSQ-TV (DPLSQ-TV was presented in
a preliminary version of this paper [25]; however, in this paper, more detailed computational experiments are performed and DPLSQ-HS is newly introduced) and DPLSQ-HS, where TV and HS stand for time
varying and heuristics. DPLSQ-TV is an extension of DPLSQ [20] such that it can identify the time points at which the structure of the gene regulatory network changes. Since the additions and
deletions of edges are basic modifications in network completion, we need to extend DPLSQ so that these operations can be performed at several time points. In DPLSQ-TV, these edges and time points
are identified by a novel double dynamic programming method in which the inner loop is used to identify static network structures and the outer loop is used to determine change points. It is to be
noted that a single dynamic programming (DP) method was used in our previous work on the completion and inference of time-independent networks [20], whereas a double DP method is employed here in
order to cope with time-varying networks. Our proposed methods also allow us to find an optimal solution in polynomial time if the maximum indegree (i.e., the maximum number of input genes to a gene)
is bounded by a constant. Although DPLSQ-TV is guaranteed to find an optimal solution in polynomial time, the degree of the polynomial is not low, which prevents the method from being applied to the
completion of large networks. Therefore, we further propose a heuristic method, called DPLSQ-HS, to speed up the calculation of the minimum least squares error by applying restriction constraints
that limit the number of combinations of incoming nodes.
We evaluate the efficiency of our methods through computational experiments using synthetic data and microarray gene expression data from the life cycle of D. melanogaster and the cell cycle of S.
cerevisiae. We also demonstrate the effectiveness of the proposed methods by comparing our results with those of ARTIVA [15].
2. Method
In this section, we present DPLSQ-TV, a DP-based method for the completion of a time-varying network. We assume that there exist time points , which are divided into intervals: , , where indicates
the number of change points. A different network is associated with each interval. We assume that the set of genes does not change; therefore, only the edge set changes according to the time
interval. Let be the set of genes. Let be the initial set of directed edges (i.e., initial set of gene regulation relationships), and let be the sets of directed edges (i.e., the output), where
denotes the edge set for the th interval.
Then, the problem is defined as follows: given an initial network consisting of genes, time series datasets, each of which consists of time points for genes and the positive integers , , and , infer
change points (i.e., ) and complete the initial network by adding edges and deleting edges in total such that the total least-squares error is minimized. This results in the set of edges at the
corresponding time intervals (see Figure 1). It is to be noted that if we start with an empty set of edges (i.e., ), the problem corresponds to the inference of a time-varying network.
2.1. Model of Differential Equation and Estimation of Parameters
We assume that the dynamics of each node are determined by the following differential equation: where corresponds to the expression value of node , denotes random noise, and are incoming nodes to .
The second and third terms of the right-hand side of the equation represent the linear and nonlinear effects to node , respectively (see Figure 2), where a positive value for or corresponds to an
activation effect, and a negative value for or corresponds to an inhibition effect. This model is an extension of the linear differential equation model [3]. It is also a variant of the recurrent
neural network model [27], although the sigmoid function is replaced here by an identify function and nonlinear terms representing cooperating regulations are added instead.
In practice, we replace the derivative of (1) by the difference and ignore the noise term as follows: where denotes the unit time. This kind of discretization is also employed for linear and
recurrent neural network models [3, 27].
In our previous method DPLSQ [20], we assume that time series data s, which correspond to s in (2), are given for , where we distinguish an observed expression value from an expression value in the
mathematical model equation (2). Then, the parameters s and s are estimated from these time series data by minimizing the following objective function (i.e., the sum of the least squares errors) for
each node : It should be noted that is the observed expression value of gene at time , and are tentative incoming nodes to node . Incoming nodes to each node are determined so that the sum of these
values for all nodes is minimized under the constraint that the total number of edges is equal to the specified number. In order to minimize the sum of least squares errors for all genes along with
determining the incoming nodes and corresponding parameters, DP is applied. Readers are referred to [20] for the details of DPLSQ.
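Since the displayed forms of (1)-(3) and of the objective function are described above only in words (linear terms in the inputs, nonlinear cooperative terms, forward-difference discretization, per-node sum of squared errors), the following sketch shows how the per-node fit reduces to ordinary linear least squares. The variable names, the use of pairwise products of inputs for the nonlinear terms, and the NumPy-based implementation are our illustrative assumptions, not the paper's code or notation:

import numpy as np
from itertools import combinations

def node_lsq_error(X, i, inputs, dt=1.0):
    # X: (T, n) array of observed expression values; inputs: candidate regulators of gene i.
    # Target is the forward difference (x_i(t+1) - x_i(t)) / dt; regressors are the inputs'
    # values and their pairwise products (our reading of the linear and cooperative terms).
    target = (X[1:, i] - X[:-1, i]) / dt
    cols = [X[:-1, j] for j in inputs]
    cols += [X[:-1, j] * X[:-1, k] for j, k in combinations(inputs, 2)]
    if not cols:
        return float(np.sum(target ** 2))
    A = np.column_stack(cols)
    coef, _, _, _ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.sum((target - A @ coef) ** 2))

def best_inputs(X, i, candidates, max_indegree=2, dt=1.0):
    # Exhaustive search over input sets of size at most the maximum indegree, as the DP assumes.
    best = (node_lsq_error(X, i, (), dt), ())
    for d in range(1, max_indegree + 1):
        for J in combinations(candidates, d):
            err = node_lsq_error(X, i, J, dt)
            if err < best[0]:
                best = (err, J)
    return best

With the maximum indegree bounded by a small constant, best_inputs enumerates only polynomially many candidate sets per node, which is what keeps the overall dynamic programme polynomial.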
2.2. Completion by Addition of Edges
In this subsection, we present our proposed method for network completion of time-varying networks by the addition of edges and extend this to a general case (i.e., network completion by the addition
and deletion of edges) in the following subsection. For simplicity, we assume , where we can extend the method to the case of by changing the definition of only.
We assume that the set of nodes (i.e., the set of genes) and the set of initial edges are given. Let the current set of incoming nodes to be . We define the least squares error for during the time
period between and as where denotes the observed expression value of gene at time . The parameters (i.e., , , ) needed to attain this minimum value can be computed by a standard least squares fitting
Because network completion is considered to involve the addition of edges, let be the set of initial incoming nodes to . Let denote the minimum least squares error when adding edges to the th node
during the time from to , which is formally defined as where each must be selected from . In order to avoid combinatorial explosion, we constrain the maximum to be a small constant, , and let , for
or .
Then, the problem is stated as where and .
Here, we define as
The entries of can be computed by the following DP algorithm:
It is to be noted that is determined uniquely regardless of the ordering of nodes in the network. The correctness of this DP algorithm can be seen as follows:
Next, we define as where . can be computed by the following DP algorithm:
The introduction of and the corresponding DP procedure are the methodologically novel points of this work, compared with our previous work [20].
The correctness of this DP algorithm can be seen as follows:
2.3. Completion by Addition and Deletion of Edges
The above DP procedure can be modified for the deletion of edges and for the addition and deletion of edges as in DPLSQ [20]. Since the former case is a subcase of the latter one, we describe only
the latter one (addition and deletion of edges) here.
Let denote the minimum least squares error for the time period between and when adding edges to and deleting edges from , where the added and deleted edges must be disjointed. We constrain the
maximum and to the small constants and . We let if , , , or hold. Then, the problem is stated as Here, we define as
As in the previous subsection, can be computed by
Next, we define as
can be computed by the following DP algorithm:
2.4. Time Complexity Analysis
In this subsection, we analyze the time complexity of DPLSQ-TV. Since completion by the addition of edges and completion by the deletion of edges are special cases of completion by the addition and
deletion of edges, we focus on completion by the addition and deletion of edges.
First, we analyze the time complexity required per least squares fitting. It is known that least squares fitting for a linear system can be done in time where is the number of data points and is the
number of parameters. In our proposed method, we assume that the maximum indegree is bounded by a constant, and the numbers of addition and deletion edges in a given network are bounded by the
constants and , respectively. In this case, the time complexity for least squares fitting can be estimated as .
Next, we analyze the time complexity required for computing . The total time required to compute is [20], where we assume that and are . Therefore, the time complexity for s is , because and are .
Next, we analyze the time complexity required for computing s. In this computation, we note that the size of table is . Furthermore, in order to compute the minimum value for each entry in the DP
procedure, we need to examine combinations, which is . Hence, the time complexity for s is .
Finally, we analyze the time complexity required for computing s. We note that the size of table is , where we assume that is a constant. Since the number of combinations for computing the minimum
value using DP is per entry, the computation time required for computing s is . Hence, the total time complexity is
It is to be noted that if we use time series datasets, each of which consists of points, the time complexity becomes . Although this complexity is not small, it is allowable in practice if and and
are not too large. Indeed, as shown in Section 4.2, DPLSQ-TV works for the completion and inference of time-varying networks with a few tens of genes if .
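To make the outer level of the double DP concrete, the sketch below segments the time axis into the required number of intervals, treating the per-interval completion of Sections 2.2-2.3 as a black-box function min_error(s, e) that returns the best least-squares error on the half-open interval [s, e). The edge-budget dimension of the paper's tables is omitted and all names are ours; this illustrates the recurrence structure rather than transcribing the paper's formulas:

def segment_changepoints(m, B, min_error):
    # m: number of time points (indexed 0..m-1); B: number of intervals (so B-1 change points).
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(B + 1)]    # D[b][e]: best error covering [0, e) with b intervals
    back = [[-1] * (m + 1) for _ in range(B + 1)]
    D[0][0] = 0.0
    for b in range(1, B + 1):
        for e in range(b, m + 1):
            for s in range(b - 1, e):
                cand = D[b - 1][s] + min_error(s, e)
                if cand < D[b][e]:
                    D[b][e] = cand
                    back[b][e] = s
    bounds, e = [], m
    for b in range(B, 0, -1):                      # walk the table backwards
        bounds.append(back[b][e])
        e = back[b][e]
    return D[B][m], sorted(bounds)[1:]             # the internal boundaries are the change points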
3. Heuristic Method
Although our previous algorithm, DPLSQ-TV, is guaranteed to find an optimal solution in polynomial time, the degree of the polynomial is not low, preventing the method from being applied to the
completion of large-scale networks. Therefore, we propose a heuristic algorithm, DPLSQ-HS, to significantly improve the computational efficiency by relaxing the optimality condition. The reason why
DPLSQ-TV requires a large amount of CPU time is that the least squares errors are calculated for each node by considering all possible combinations of incoming nodes and taking the minimum value of
these. In order to significantly improve the computational efficiency, we introduce an upper limit on the number of combinations of incoming nodes. Although DPLSQ-HS does not guarantee an optimal
solution, it allows us to speed up the calculation of the minimum least squares in the case of adding edges. A schematic illustration of least squares computation is given in Section 3.1. The
DPLSQ-HS algorithm is described in Section 3.2, and we analyze the time complexity of DPLSQ-HS in Section 3.3.
3.1. Schematic Illustrations of DPLSQ-HS
Although DPLSQ-HS can be applied to the addition and deletion of edges, we consider only additions of edges as modification operations in this subsection. We have developed DPLSQ-HS, which
contributes to reducing the time complexity, by imposing restrictions on the number of combinations of incoming nodes to each node. In Figure 3, the diagram indicates that, for each node , we
maintain combinations of incoming nodes with lowest errors at the th step. Let denote the set of combinations computed at the th step. At the th step, for each combination where , we calculate the
least squares error for each such that is the th incoming node to . The calculated least squares errors are sorted in descending order, the top values are selected, and the corresponding combinations
are stored in .
3.2. Algorithm
The following is the description of the algorithm to compute in DPLSQ-HS, where does not necessarily mean the minimum value and the meaning of “step” is different from that in Section 3.1.
Step 1. For each period , repeat Steps 2–6.
Step 2. Let for all .
Step 3. For to do Steps 4–7.
Step 4. Repeat Steps 5–7 for node from to .
Step 5. For each combination and each node such that ( if ), calculate the least squares error for the edge set .
Step 6. Sort the obtained least squares errors in descending order and select the top combinations, which are stored in .
Step 7. Let be the minimum least squares error among these top combinations.
The other parts of the algorithm are the same as in DPLSQ-TV.
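A minimal sketch of Steps 4-7, in which only a fixed number W of lowest-error input combinations survives each round. Here error(i, J) stands in for the least-squares error of node i with input set J (for instance, the node_lsq_error routine sketched in Section 2.1); W and the variable names are our own, not the paper's:

def heuristic_best_error(i, candidates, max_indegree, W, error):
    kept = [()]                            # combinations retained from the previous round
    best = error(i, ())
    for _ in range(max_indegree):
        seen, scored = set(), []
        for J in kept:
            for v in candidates:
                if v in J:
                    continue
                JJ = tuple(sorted(J + (v,)))
                if JJ in seen:
                    continue
                seen.add(JJ)
                scored.append((error(i, JJ), JJ))
        if not scored:
            break
        scored.sort()                      # lowest errors first
        kept = [J for _, J in scored[:W]]  # only the top W combinations survive
        best = min(best, scored[0][0])
    return best

Because each round evaluates at most W times the number of candidate regulators, the per-node cost grows linearly rather than combinatorially in the network size, which is where the speed-up over DPLSQ-TV comes from.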
3.3. Time Complexity Analysis
In this subsection, we analyze the time complexity of DPLSQ-HS. Since DPLSQ-HS can be applied to additions and deletions of edges, we consider the time complexity of completion for adding and
deleting edges.
In our proposed method, we assume that the numbers of adding and deleting edges in a given network are, respectively, bounded by constants and . In this case, the time complexity for least squares
fitting can be estimated as .
As for the time complexity of computing , we assume that the addition of edges is operated only in the case of adding edges to the nodes with respect to the top of the sorted list. Therefore, the
number of combinations of addition of edges, which is bounded by a constant , is . It is well known that the sorting of data can be done in time. Based on such an assumption, the total time required
for the computation of is [20], since the factor can be regarded as a constant. Therefore, the time complexity for is , because and are .
Furthermore, for the time complexity required for computing s and s, the calculation process is the same as that in DPLSQ-TV. Therefore, the computation time for both s and s are as described in
Section 2.4. Hence, the total time complexity of DPLSQ-HS is
If we use time series datasets, each of which consists of points, the time complexity becomes . DPLSQ-HS requires less time complexity than DPLSQ-TV, because is much smaller than . Indeed, as shown
in Section 4.2, DPLSQ-HS is much faster than DPLSQ-TV in practice.
4. Results
We performed computational experiments using both artificial data and real data. All experiments were performed on a PC with an Intel Core(TM)2 Quad CPU (3.0GHz). We employed the liblsq library (
http://www2.nict.go.jp/aeri/sts/stmg/K5/VSSP/install_lsq.html) for the least squares fitting method.
4.1. Completion Using Artificial Data
In order to evaluate the potential effectiveness of DPLSQ-TV and DPLSQ-HS, we begin with network completion for time-varying networks using artificial data. We demonstrate that our proposed methods
can determine change time points quite accurately when the network structure changes. We employed the structure of the real biological network WNT5A (Figure 4) [26] as the initial network and those
of three different networks , , and generated by randomly adding and deleting edges from the initial network. In this method, for each node with input nodes, we considered the following model: where
s and s are constants selected uniformly at random from and , respectively. The reason why the domain of s is smaller than that for s is that nonlinear terms are not considered as strong as linear
terms. It should also be noted that is a stochastic term, where is a constant (we used ) and is random noise taken uniformly at random from . For the artificial generation of the observed data , we
used where is a constant denoting the level of observation errors and is random noise taken uniformly at random from .
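Because the numeric ranges of the generation model are not legible in this copy, the sketch below uses placeholder ranges (linear coefficients in [-1, 1], weaker pairwise coefficients in [-0.5, 0.5], and a single 5% uniform observation-noise term standing in for both noise sources); these values are illustrative assumptions only:

import numpy as np

rng = np.random.default_rng(0)

def simulate(adj, T, dt=0.1, sigma=0.05):
    # adj[i]: list of input nodes of gene i; coefficient ranges below are placeholders.
    n = len(adj)
    a = {i: rng.uniform(-1.0, 1.0, size=len(adj[i])) for i in range(n)}
    b = {i: rng.uniform(-0.5, 0.5, size=(len(adj[i]), len(adj[i]))) for i in range(n)}
    x = rng.uniform(0.5, 1.5, size=n)              # initial expression levels
    traj = [x.copy()]
    for _ in range(T - 1):
        new = x.copy()
        for i in range(n):
            u = x[list(adj[i])]
            drift = a[i] @ u + u @ b[i] @ u if len(u) else 0.0
            new[i] = x[i] + dt * drift
        x = new
        traj.append(x.copy())
    traj = np.asarray(traj)
    return traj + sigma * rng.uniform(-0.5, 0.5, size=traj.shape)   # observation noise

# A time-varying series is then obtained by concatenating runs from the different networks,
# e.g. series = np.vstack([simulate(adj1, 10), simulate(adj2, 10), simulate(adj3, 10)]).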
As for the time series data, we generated an original dataset with 30 time points including two change points , , which is generated by merging three datasets for , , and . Since the use of time
series data beginning from only one set of initial values easily resulted in numerical calculation errors, we generated additional time series data beginning from 200 sets of initial values that were
obtained by slightly perturbing the original data. Under the above model, we conducted computational experiments by DPLSQ-TV in which the initial network was modified by randomly adding edges and
deleting edges per node, resulting in , , and ; additionally, we also conducted DPLSQ-HS experiments in which the initial network was modified by randomly adding edges per node, using the default
values of . We evaluated the performance of this method by measuring the accuracy of modified edges, the time point errors for time intervals, and the computational time for completion (CPU time).
Furthermore, in order to examine how CPU time changes as the size of the network grows, we generated networks with 20 genes, 30 genes, and 40 genes by making 2, 3, and 4 copies of the original
networks. We took the average time point errors, accuracies, and CPU time over 10 random modifications with several s. In addition, we performed computational experiments on DPLSQ-TV and DPLSQ-HS
using 60 genes, where additional time series data beginning from 100 sets (in place of 200 sets) of initial values were used, and , , and were obtained by addition and deletion of edges. However,
DPLSQ-TV took too long (more than 1000 sec. per execution), and thus the result could not be included in Table 1.
The accuracy is defined as follows: where and are, respectively, the sets of edges in the original network and the completed network in each time interval. This value is 1 if all the added and
deleted edges are correct and 0 if none of the added and deleted edges are correct. If we regard a correctly (resp., incorrectly) added or deleted edge as a true (resp., false) positive, corresponds
to the number of false positives and corresponds to the number of true positives. The time point error is the average difference between the original and estimated values for change time points and
is defined as where are the estimated change points. As for the computation time, we show the average CPU time.
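The displayed definitions of the two measures do not survive in this copy, so the helpers below implement them as described in words. The accuracy is written as a Jaccard-style ratio over the modified edge sets, which reproduces the stated boundary behaviour (1 when all modifications are correct, 0 when none are) but may differ in detail from the paper's exact formula; the time point error is the average absolute difference between true and estimated change points:

def accuracy(true_edges, est_edges):
    # Jaccard-style agreement between the true and estimated sets of modified edges.
    true_edges, est_edges = set(true_edges), set(est_edges)
    if not true_edges and not est_edges:
        return 1.0
    return len(true_edges & est_edges) / len(true_edges | est_edges)

def time_point_error(true_points, est_points):
    # Average |true - estimated| over change points; assumes equal-length, sorted lists.
    return sum(abs(t - e) for t, e in zip(true_points, est_points)) / len(true_points)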
The results of the two methods are shown in Table 1. It can be seen from this table that the change time point errors are quite small regardless of the size of the network with a low level of
observation errors. In addition, it is also seen that the time point errors with DPLSQ-TV are close to those with DPLSQ-HS with the exception of high levels of observation errors. We observe that CPU
time using DPLSQ-TV increases rapidly as the size of the network grows. On the other hand, CPU time by DPLSQ-HS increases gradually as the size of the network grows. It is also observed that the
DPLSQ-HS algorithm is about 4 times faster than the DPLSQ-TV algorithm in the case of 40 genes, while maintaining good accuracy. Hence, these results suggest that DPLSQ-TV and DPLSQ-HS can correctly identify the change time points if the error levels are not very large and that they can complete the initial network by modifying the edges with relatively good accuracy if the observation error is not large.
It is also observed that DPLSQ-HS worked reasonably fast even for , although DPLSQ-TV took more than 1000 seconds per execution and thus the result could not be included in Table 1. However, the
accuracy on DPLSQ-HS became around 0.4 even if the observation error level was low (i.e., ). Therefore, the applicability of DPLSQ-HS is also limited in terms of the accuracy, although it may still
be useful for networks with if the purpose is to identify change time points.
Since DPLSQ-HS is a heuristic method, the results may be greatly influenced by data. Therefore, we evaluated the stability of DPLSQ-HS by comparing the variance of the accuracy with that for
DPLSQ-TV, where . The variances for DPLSQ-TV were 0.00602 and 0.00446 for and , respectively. The variances for DPLSQ-HS were 0.01188 and 0.00732 for and , respectively. This result suggests that
DPLSQ-HS is less stable than DPLSQ-TV. However, the variances of DPLSQ-HS were less than twice those of DPLSQ-TV. Therefore, this result also suggests that DPLSQ-HS has some stability.
In order to examine the effect of the number of change points and the maximum number of added and deleted edges per nodes and on the least squares error, we performed computational experiments with
varying these parameters (one experiment per parameter). Then, the resulting least squares errors (i.e., s) for DPLSQ-TV are 5.495, 7.016, 7.875, 3.886, and 3.799 for , , , , and , respectively. It
is seen that use of larger , resulted in smaller least squares errors. It is reasonable that more parameters resulted in better least squares fitting. However, use of larger did not result in smaller
least squares errors. This may be because the addition of unnecessary change points increases the error if a sufficient number of edges is not added. It is to be noted that although the least squares errors
are reduced, use of larger , is not always appropriate because it needs much longer CPU time and may cause overfitting.
We also compared our results with those obtained by the ARTIVA algorithm [15]. It is to be noted that most of the other tools for the inference of time-varying networks are unavailable. This model is
based on a combination of DBN and RJMCMC sampling, where RJMCMC is used for approximating the posterior distribution and DBN is used for inferring simultaneously the change points and resulting
network structures. We applied ARTIVA to the synthetic datasets that were generated in the same way as for our proposed methods. We used the default parameter settings for ARTIVA and evaluated the
results by inferring the change points. As the result of the comparative experiment, there are two change time points in the synthetic datasets, but ARTIVA can only infer one change point regardless
of the observation error level, as shown in Figure 5, where ARTIVA does not uniquely determine change points but output probabilities of change points.
4.2. Inference Using Real Data
We examined two types of proposed methods for the inference of change time points using gene expression microarray data and also compared our results with those obtained using the ARTIVA algorithm.
We applied our methods to two real gene expression datasets, measured during the life cycle of D. melanogaster and the cell cycle of S. cerevisiae.
The first microarray dataset is the gene expression time series collected by Spellman et al. [28]. We employed part of the cell cycle network of S. cerevisiae extracted from the KEGG database [29]
shown in Figure 6. As for time series data, we combined and employed four sets of time series data (alpha, cdc15, cdc28, and elu) in [28] that were obtained in four different experiments. We adopted
the datasets of 10 genes with 71 time points including three change time points. Since there were several expression values that were far from the average in the cdc15 dataset, these values were
discarded. As a result, the alpha, cdc15, cdc28, and elu datasets consist of 18, 23, 17, and 13 time points of gene expression data, respectively.
The second microarray dataset is the gene expression time series from experiments by Arbeitman et al. [30]. This data set includes the expression levels of 4028 genes with 67 time points spanning
four distinct stages: embryonic (31 time points), larval (10 time points), pupal (18 time points), and adulthood (8 time points) in the D. melanogaster life cycle. We used the expression datasets of
30 genes selected from this microarray data with 67 time points, which include three change time points.
In this computational analysis, with regard to applying the two different types of microarray datasets, we generated 200 datasets that were obtained by slightly perturbing the original data in order
to avoid numerical calculation errors. Since the correct time-varying networks are not known, we only evaluated the time point errors and the average CPU time, where and were used with the S.
cerevisiae dataset and and were used with the D. melanogaster dataset.
The results are shown in Tables 2 and 3. s are the values of the change point in the original data and s are the estimated values. In the experimental analysis with S. cerevisiae data, as for the
change time points, there seems to be almost no difference between the results of DPLSQ-TV and DPLSQ-HS, which can correctly identify the time points where the network topology changes. It is also
observed from Table 2 that the CPU time required for DPLSQ-HS is about 15 times faster than that needed for DPLSQ-TV. In the experiments using data from D. melanogaster, it is seen from Table 3 that
both methods can determine exactly the same three change points. At first glance, readers may think that the errors are large at all change point positions. However, both methods could precisely
identify two time points when topology of the network changes, excluding the case of . From the point of view of computational time, DPLSQ-HS performs significantly better than DPLSQ-TV; DPLSQ-HS
runs about 46 times faster than DPLSQ-TV. Therefore, DPLSQ-HS allows us to significantly decrease the computational time. These results suggest that, in many cases, we can expect DPLSQ-HS to find a
near-optimal solution, at least for change time points, while also speeding up the calculation.
Furthermore, for the ARTIVA analysis, we employed both the above-mentioned S. cerevisiae and D. melanogaster microarray datasets, which consist of 71 measurements of 10 genes and 67 measurements of
30 genes, respectively, and tried to identify the change time points. Computational experiments on ARTIVA were performed under the same computational environment as that used in our methods.
The results from the yeast microarray data are shown in Table 2. There are three change time points, as described in this table. It is seen from this table that two of them, 24 and 60, can be
determined precisely by ARTIVA, but the third is not. In contrast, our proposed methods demonstrate good performance for inferring the change points at which the network topology changes. Lèbre et
al. [15] demonstrated the number of identified change points with D. melanogaster data using the ARTIVA algorithm. According to this validation, it has been observed that the time intervals 18-19,
31–33, 41–43, and 59–61 contain more than 40% of all change points. In order to compare with the ARTIVA results, we attempted to identify four change points using our proposed methods. The results of
the comparative experiment using D. melanogaster microarray data are shown in Table 4. s are three change time points in original data. Although DPLSQ-HS identified change time points similar to
those identified by ARTIVA, the results of ARTIVA appear to be slightly better. This suggests that the ARTIVA algorithm shows slightly better performance with respect to the inference of change
points than our proposed methods. However, ARTIVA does not determine change time positions but determines time intervals at which the network topology might change. Therefore, DPLSQ-HS is more suited
for identifying change time positions at exact time points. (Since the comparative experiment by DPLSQ-TV did not finish within 3 weeks, the results of DPLSQ-TV are not given in Table 4.)
5. Conclusion
In this paper, we have proposed two novel network completion methods for time-varying networks by extending our previous method, DPLSQ [20]. In order to identify the change time points and sets of
modified edges in network completion, we developed two different types of double DP algorithms. The first algorithm, DPLSQ-TV, is intended to complete and precisely infer time-varying networks.
Although DPLSQ-TV allows us to guarantee the optimality of its solution, it requires a large amount of computational time as the size of the network grows.
To improve the computational efficiency of DPLSQ-TV, we developed an effective heuristic method, DPLSQ-HS, by speeding up the calculation of the minimum least squares error by posing restrictions to
the number of combinations of incoming nodes. We showed that each of these two methods works in polynomial time if the maximum indegree is bounded by a constant.
The results of computational experiments reveal that the two proposed methods can identify change time points rather accurately and can infer edges to be deleted and added with good accuracy.
DPLSQ-TV provided a wide range of applications, not only in network completion but also in network inference, with good accuracy. Additionally, DPLSQ-HS allowed us to identify change time points
rather precisely, while reducing the computational time for both synthetic data and microarray data. This result suggests that, in many cases, DPLSQ-HS can be expected to find near-optimal solutions,
while speeding up the calculation.
Although DPLSQ-HS is much faster than DPLSQ-TV, it has a drawback: the accuracy and time point error were worse than those of DPLSQ-TV, especially when the observation error level was large.
Therefore, we need to improve the accuracy of DPLSQ-HS without significantly undermining its efficiency. In our experiments, we specified the number of change time points and the number of edges to
be added and deleted. In real use, we may examine several values and select the best one (e.g., the values with the minimum least squares errors). However, as discussed in Section 4.1, it may lead to
overfitting. In order to avoid overfitting, use of AIC (Akaike's Information Criterion) or other information criteria is useful as demonstrated in [27] for network inference. However, since network
completion is more complex than network inference, the method in [27] cannot be directly applied. Therefore, incorporation of an appropriate information criterion into network completion is important
future work. Another issue to be tackled is to take into account the relationship between and . Although and are inferred independently from the original network by the proposed method, there should
be some strong relationship between them. Therefore, such an extension is also important future work.
Conflict of Interests
The authors declare that they have no conflict of interests.
The authors would like to thank Professor Hideo Matsuda in Osaka University and Takanori Hasegawa in Kyoto University for helpful discussions. This work was partially supported by JSPS, Japan,
(Grants-in-Aid 22240009 and 22650045).
1. K.-H. Cho, S.-M. Choo, S. H. Jung, J.-R. Kim, H.-S. Choi, and J. Kim, “Reverse engineering of gene regulatory networks,” IET Systems Biology, vol. 1, no. 3, pp. 149–163, 2007.
2. H. Hache, H. Lehrach, and R. Herwig, “Reverse engineering of gene regulatory networks: a comparative study,” Eurasip Journal on Bioinformatics and Systems Biology, vol. 2009, Article ID 617281,
2009.
3. M. Hecker, S. Lambecka, S. Toepferb, E. van Somerenc, and R. Guthkea, “Gene regulatory network inference: data integration in dynamic models: a review,” BioSystems, vol. 96, pp. 86–103, 2009.
4. S. Liang, S. Fuhrman, and R. Somogyi, “Reveal, a general reverse engineering algorithm for inference of genetic network architectures,” Proceedings of the Pacific Symposium on Biocomputing, vol.
3, pp. 18–29, 1998.
5. T. Akutsu, S. Miyano, and S. Kuhara, “Inferring qualitative relations in genetic networks and metabolic pathways,” Bioinformatics, vol. 16, no. 8, pp. 727–734, 2000.
6. N. Friedman, M. Linial, I. Nachman, and D. Pe'er, “Using Bayesian networks to analyze expression data,” Journal of Computational Biology, vol. 7, no. 3-4, pp. 601–620, 2000.
7. S. Imoto, S. Kim, T. Goto et al., “Bayesian network and nonparametric heteroscedastic regression for nonlinear modeling of genetic network,” Journal of Bioinformatics and Computational Biology,
vol. 1, no. 2, pp. 231–252, 2003.
8. T. Thorne and M. P. H. Stumpf, “Inference of temporally varying Bayesian networks,” Bioinformatics, vol. 28, pp. 3298–3305, 2012.
9. P. D'Haeseleer, S. Liang, and R. Somogyi, “Genetic network inference: from co-expression clustering to reverse engineering,” Bioinformatics, vol. 16, no. 8, pp. 707–726, 2000.
10. Y. Wang, T. Joshi, X. Zhang, D. Xu, and L. Chen, “Inferring gene regulatory networks from multiple microarray datasets,” Bioinformatics, vol. 22, no. 19, pp. 2413–2420, 2006.
11. H. Toh and K. Horimoto, “Inference of a genetic network by a combined approach of cluster analysis and graphical Gaussian modeling,” Bioinformatics, vol. 18, no. 2, pp. 287–297, 2002.
12. R. Yoshida, S. Imoto, and T. Higuchi, “Estimating time-dependent gene networks from time series microarray data by dynamic linear models with Markov switching,” in Proceedings of the 2005 IEEE
Computational Systems Bioinformatics Conference (CSB '05), pp. 289–298, August 2005.
13. A. Fujita, J. R. Sato, H. M. Garay-Malpartida, P. A. Morettin, M. C. Sogayar, and C. E. Ferreira, “Time-varying modeling of gene expression regulatory networks using the wavelet dynamic vector
autoregressive method,” Bioinformatics, vol. 23, no. 13, pp. 1623–1630, 2007.
14. J. W. Robinson and A. J. Hartemink, “Non-stationary dynamic Bayesian networks,” in Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS '08), pp. 1369–1376,
December 2008.
15. S. Lèbre, J. Becq, F. Devaux, M. P. H. Stumpf, and G. Lelandais, “Statistical inference of the time-varying structure of gene-regulation networks,” BMC Systems Biology, vol. 4, p. 130, 2010.
16. Y. W. Teh and M. I. Jordan, “Hierarchical Bayesian nonparametric models with applications,” in Bayesian Nonparametrics, pp. 158–207, Cambridge University Press, Cambridge, UK, 2010.
17. G. Rassol and N. Bouaynaya, “Inference of time-varying gene networks using constrained and smoothed Kalman filtering,” in Proceedings of the International Workshop on Genomic Signal Processing
and Statistics, pp. 172–175, 2012.
18. A. Ahmed, L. Song, and E. P. Xing, “Time-varying networks: recovering temporally rewiring genetic networks during the life cycle of Drosophila,” SCS Technical Report Collection CMU-ML-08-118, 2008.
19. T. Akutsu, T. Tamura, and K. Horimoto, “Completing networks using observed data,” in Proceedings of the 20th International Conference on Algorithmic Learning Theory, pp. 126–140, 2009.
20. N. Nakajima, T. Tamura, Y. Yamanishi, K. Horimoto, and T. Akutsu, “Network completion using dynamic programming and least-squares fitting,” The Scientific World Journal, vol. 2012, Article ID 957620, 8 pages, 2012.
21. A. Clauset, C. Moore, and M. E. J. Newman, “Hierarchical structure and the prediction of missing links in networks,” Nature, vol. 453, no. 7191, pp. 98–101, 2008.
22. R. Guimerà and M. Sales-Pardo, “Missing and spurious interactions and the reconstruction of complex networks,” Proceedings of the National Academy of Sciences of the United States of America,
vol. 106, no. 52, pp. 22073–22078, 2009. View at Publisher · View at Google Scholar · View at Scopus
23. M. Kim and J. Leskovec, “The network completion problem: inferring missing nodes and edges in networks,” in Proceedings of the 2011 SIAM International Conference on Data Mining, pp. 47–58, 2011.
24. S. Hanneke and E. P. Xing, “Network completion and survey sampling,” Journal of Machine Learning Research, vol. 5, pp. 209–215, 2009.
25. N. Nakajima and T. Akutsu, “Network completion for time-varying genetic networks,” in Proceedings of the 7th International Conference on Complex, Intelligent, and Software Intensive Systems, vol.
2013, pp. 553–558, 2013.
26. S. Kim, H. Li, E. R. Dougherty et al., “Can Markov chain models mimic biological regulation?” Journal of Biological Systems, vol. 10, no. 4, pp. 337–357, 2002. View at Publisher · View at Google
Scholar · View at Scopus
27. N. Noman, L. Palafox, and H. Iba, “On model selection criteria in reverse engineering gene networks using RNN model,” in Proceedings of International Conference on Convergence and Hybrid
Information Technology, pp. 155–164, 2012.
28. P. T. Spellman, G. Sherlock, M. Q. Zhang et al., “Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization,” Molecular Biology
of the Cell, vol. 9, no. 12, pp. 3273–3297, 1998. View at Scopus
29. M. Kanehisa, S. Goto, M. Furumichi, M. Tanabe, and M. Hirakawa, “KEGG for representation and analysis of molecular networks involving diseases and drugs,” Nucleic Acids Research, vol. 38, no. 1,
Article ID gkp896, pp. D355–D360, 2009. View at Publisher · View at Google Scholar · View at Scopus
30. M. N. Arbeitman, E. E. M. Furlong, F. Imam et al., “Gene expression during the life cycle of Drosophila melanogaster,” Science, vol. 297, no. 5590, pp. 2270–2275, 2002. View at Publisher · View
at Google Scholar · View at Scopus | {"url":"http://www.hindawi.com/journals/bmri/2014/684014/","timestamp":"2014-04-17T19:11:47Z","content_type":null,"content_length":"488956","record_id":"<urn:uuid:40a9cfac-57a2-495f-ad03-f202e3f14d8d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Significant Figures
Key Concepts
•   Significant figures, or significant digits, establish the value of a number.
•   Zeros shown merely to locate a decimal point are NOT significant figures
•   Zeros located to the right of another number after a decimal point are significant
•   The last significant figure on the right is the one which is somewhat uncertain
•   An exact number, such as the number of objects counted, can be considered to have an infinite number of zeros after the decimal point, all of which are significant
•   It is impossible to tell how many significant figures are in a large number with zeros to the left of the decimal point without converting the number to scientific notation
•   To find the number of significant figures in a given number:
1. count all the digits starting at the first non-zero digit on the left
2. for a number written in scientific notation count only the digits in the coefficient
•   When adding or subtracting numbers, the number of digits to the right of the decimal point in the result should be the same as the number of digits to the right of the decimal point in the
number with the fewest digits to the right of the decimal point
•   When multiplying or dividing numbers, the number of significant figures in the result is the same as the least number of significant figures in any of the multiplied or divided terms
Finding the Number of Significant Figures in:
(a) 5 mL
    Count all the digits starting at the first non-zero digit on the left.
    1 significant figure
(b) 5.2 g
    Count all the digits starting at the first non-zero digit on the left.
    2 significant figures
(c) 5.0 kg
    Count all the digits starting at the first non-zero digit on the left.
    2 significant figures
(d) 5.000 L
    Count all the digits starting at the first non-zero digit on the left.
    4 significant figures
(e) 0.005 m
    Count all the digits starting at the first non-zero digit on the left.
    1 significant figure
(f) 5 football players
    An exact number, such as the number of objects counted, can be considered to have an infinite number of zeros after the decimal point, all of which are significant.
    infinite number of significant figures
(g) 500 mm
    It is impossible to tell how many significant figures are in a large number with zeros to the left of the decimal point without converting the number to scientific notation.
    unknown number of significant figures
(h) 5.00 x 10^3 g
    For a number written in scientific notation count only the digits in the coefficient.
    3 significant figures
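The counting rules above are mechanical enough to put into a short script. Below is a rough sketch in Python (not part of this tutorial; the function name is just illustrative) that follows the same rules: start at the first non-zero digit, count only the coefficient for scientific notation, and treat bare trailing zeros as ambiguous.

def count_sig_figs(number_string):
    # Count significant figures in a number written as a string, e.g. "5.000" or "5.00e3".
    s = number_string.strip().lower().lstrip('+-')
    coefficient = s.split('e')[0]          # scientific notation: only the coefficient counts
    if '.' in coefficient:
        digits = coefficient.replace('.', '').lstrip('0')
        return len(digits)                 # zeros to the right of other digits after the decimal point count
    digits = coefficient.lstrip('0')
    if digits.endswith('0'):
        return None                        # e.g. "500": can't tell without converting to scientific notation
    return len(digits)

# count_sig_figs("5") -> 1, count_sig_figs("5.000") -> 4, count_sig_figs("0.005") -> 1,
# count_sig_figs("500") -> None (unknown), count_sig_figs("5.00e3") -> 3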
Finding the number of Significant Figures in the Result of Calculations:
(a) 12.47g + 7g
    When adding or subtracting numbers, the number of digits to the right of the decimal point in the result should be the same as the number of digits to the right of the decimal point in
the number with the fewest digits to the right of the decimal point.
    "7" has no numbers to the right of the decimal point so the final result will also have no numbers to the right of the decimal point
    12.47 + 7 = 19 (rounded down to 19 from 19.47 because the number after the decimal point is less than 5)
(b) 32.56mm - 4.9mm
    When adding or subtracting numbers, the number of digits to the right of the decimal point in the result should be the same as the number of digits to the right of the decimal point in
the number with the fewest digits to the right of the decimal point.
    "4.1" has one number to the right of the decimal point so the final result will also have one number to the right of the decimal point
    32.56 - 4.9 = 27.7 (rounded up to 27.7 from 27.66 because the number to the right of the last significant figure was greater than 5)
(c) 1.473 ÷ 2.6
    When multiplying or dividing numbers, the number of significant figures in the result is the same as the least number of significant figures in any of the multiplied or divided terms.
    1.473 has 4 significant figures, 2.6 has only 2 significant figures, the result will have 2 significant figures.
    1.473 ÷ 2.6 = 0.57 (rounded up to 0.57 from 0.5665 because the number to the right of the last significant figure was greater than 5)
(d) 4.1 x 10^3 x 8.635 x 10^2
    When multiplying or dividing numbers, the number of significant figures in the result is the same as the least number of significant figures in any of the multiplied or divided terms.
    4.1 x 10^3 has 2 significant figures, 8.635 x 10^2 has 4 significant figures, the result will have 2 significant figures.
    3.5 x 10^6 (rounded down to 3.5 x 10^6 from 3.54 x 10^6 because the number to the right of the last significant figure is less than 5)
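If you want to apply the multiplication/division rule in code, Python's "g" format does the significant-figure rounding for you. A small illustrative helper (my own, not part of the tutorial's interactive program):

def round_sig(x, n):
    # Round x to n significant figures.
    return float(f"{x:.{n}g}")

# round_sig(1.473 / 2.6, 2)      -> 0.57
# round_sig(4.1e3 * 8.635e2, 2)  -> 3.5e6 (3500000.0)
# For addition/subtraction the rule uses decimal places instead, e.g. round(12.47 + 7, 0) -> 19.0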
Kutomba - 28th March 2011, 11:50 AM - permalink
The math we learn is totally inapplicable in all areas of life, with the exception of a few jobs that I have no interest in whatsoever.
Alex800 - 29th March 2011, 08:57 AM - permalink
*sigh* Math... you know, 1-10th grade math isn't so hard. I was even half a year before anyone so I got some high school math (you go to high school at 16-17 yr old here). it. is. fucking.
hard. I'm actually behind now and it's never happened before.
I'm taking high school science and english as well but no problems with that, just the math.
and it doesn't help that I'm all alone in a classroom filled with kids that are learning something completely different and ten times easier than I am. Curse you Icelandic school system, curse
you -_-
AlexandraTheZoroark - 17th April 2011, 08:07 AM - permalink
Oh yeah, and guys, don't ask me how many math tests I'VE failed
Arc Blader - 11th August 2011, 06:07 PM - permalink
I don't like math because it's so objective. There's no room for interpretation, like in English and History; 2 + 2 has always and forever equaled 4. There's no metaphor behind that, and
there's no hypothetical alternate reality where the square root of -1 is a real number. It's just dull, artless fact.
Sir Red - 20th January 2012, 08:11 AM - permalink
I was put into pre-Algebra this year. Right now I have a C-. DAMN YOU, ALGEBRA!
Captain_Kaos - 12th November 2012, 10:35 PM - permalink
MMMAAATHHHHH ITS EXTRA HERETIICAL *BLAM* *BLAM* EAT MY LEADD ARISTOTLE AND AUSTRALIAN CURRICULUM
Kutomba - 17th November 2012, 12:31 PM - permalink
It's really not that hard for me yet because I'm only in Algebra I, but I hate it anyway. I don't care in the slightest.
Niji - 4th December 2012, 08:47 PM - permalink
Math is such a turd
Now my math homework is taking forever...DSLJFLKSJFLKESJFLEWJFLKJLSJSLKFLJSJKLSFJKLSFJLKSJLKSFJLSKFNLSKNFLKSFNLKZMFLZFLKSAJFDJFOEJR IJOIJDAJFDAPJFPAJAPJFPAJFAHFOIQHROIUWEIREIOAGJ. | {"url":"http://bmgf.bulbagarden.net/groups/i-strongly-dislike-subject-know-math/grrrrrrr-math-5009/index3.html","timestamp":"2014-04-20T12:21:26Z","content_type":null,"content_length":"45063","record_id":"<urn:uuid:55c6e824-f5b8-4592-a806-a81030027a29>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reply to comment
Submitted by Anonymous on September 14, 2012.
I implied that polygons can't have holes, but most mathematicians define them so that they can have holes. Anyway, I think you defined them the former way, so I was going with that. In the latter
case, my example of a non-convex polyhedron with Euler characteristic 3 is a pretty useful one.
Conjecture 4. Fermat primes are finite.
A Fermat number F[m] is defined this way: F[m]=2^(2^m)+1. The first 5 Fermat numbers are: F[0]=3, F[1]=5, F[2]=17, F[3]=257 & F[4]=65537, all primes!
It is known that Pierre Fermat believed and tried to prove that all the Fermat numbers are primes, but he didn't succeed.
Soon Euler not only factorised F[5]=641x6700417, but proved that all the factors of the composite Fermat numbers are of the form k.2^n+1.
After that we know that some Fermat numbers are primes (five) and others are composites.
In 1877 Pepin devised a rigorous proof for testing the primality of the Fermat numbers, but it hasn't been used with a positive result for any other Fermat number greater than F[4], that is to say,
the Pepin test has never proved that another Fermat number F[m]>F[4] is prime. The largest F[m] that has been tested using Pepin's test is F[24], as recently as 1999 by Mayer, Papadopoulos & Crandall
(see the whole story, according to Leonid Durman, below).
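For reference, Pepin's criterion is short enough to state as code. A minimal sketch in Python (the function name is mine), usable only for modest m because F[m] grows doubly exponentially:

def pepin_is_prime(m):
    # Pepin's test: for m >= 1, F[m] = 2^(2^m) + 1 is prime
    # if and only if 3^((F[m]-1)/2) = -1 (mod F[m]).
    F = 2**(2**m) + 1
    return pow(3, (F - 1) // 2, F) == F - 1

# pepin_is_prime(m) returns True for m = 1..4 and False for m = 5, 6, 7, ...;
# beyond small m the test quickly becomes a massive computation (F[24] took until 1999).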
On the other hand nobody has proved that all the F[n]>F[4] are composites and different prime hunters have found one or several factors of many Fermat numbers
(See: http://www.prothsearch.net/index.html )
Then we have an open question: "are F[0] to F[4] the only Fermat prime numbers?" or "are all the Fermat numbers F[m]>F[4] composite?"
The following argument to the finiteness of the Fermat prime numbers has been sent by Leonid Durman (18/8/01):
"Certainly strict proofs are necessary, But...
As it is not revealed of anomalies Mersenne prime and Fermat from others prime numbers yet. Therefore it is possible to tell, that they submit to the law of allocation prime numbers. function pi
(x), the number of primes below x. Is proved, that at x ==> infinity, pi(x)==>x/ln x -known function.
Then for enough large x to find anyone prime makes ==> 1/ln x Then to receive an amount expected prime numbers we should summarize for all numbers C*sum (1/ln x, x=x1.. x2). Where C coefficient,
I shall take 1.57, but I read the large theories on this coefficient. Sense not in him. C:=1.57:
The sums which we obtain: For Mersenne x= 2^n:
sum(C/ln((2.^x)),x=1..1e7) = 37.81540138
sum(C/ln((2.^x)),x=1..infinity) = infinity
As you can see up to x < 10^7 is obtained equally 38 known Mersenne prime. But for x= infinity amount Mersenne prime - is infinite!
Further. For Fermat number x=2^2^m
sum(C/ln(2^(2.^x)),x=0..4) =4.4
sum(C/ln(2^(2.^x)),x=33..infinity) = 0.0000000005273686755
From F0-F4 all Fermat prime them 5. The formula is not mistaken. For all remaining numbers with unknown character F33... We obtain very small number. Infinite series converges also it gives the
basis to speak, that probability to detect another Fermat prime practically is not present."
Needless to say, this is not a proof but an acceptable argument. Does anybody have another approach or argument, while the real proof comes?
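Durman's sums are easy to reproduce. A few lines of Python (my own transcription of his expressions, with the infinite Fermat tail truncated since its terms decay geometrically):

from math import log

C = 1.57
# Expected count of Mersenne-form primes with exponent up to 10^7 (diverges as the limit grows; a few seconds to run):
mersenne = sum(C / (n * log(2)) for n in range(1, 10**7 + 1))        # about 37.8
# Expected count of Fermat primes among F[0]..F[4], and in the tail F[33], F[34], ...:
fermat_small = sum(C / (2**m * log(2)) for m in range(0, 5))          # about 4.4
fermat_tail  = sum(C / (2**m * log(2)) for m in range(33, 200))       # about 5.3e-10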
Leonid Durman has sent a follow up to this Conjecture 4 stating a stronger conjecture about the Fermat numbers:
"All Fermat numbers m > 4 have divisor and anyone first smaller divisor k*2^n+1 < 2^(16*n)"
Here is how he arrives to this statement:
"I would like to continue reviewing Conjecture 4. I am independent has offered the formula for quantifying divisors in some range. The similar formula has output Paul Jobling integrating
probability of a divisor offered Yves Gallot: probability 1/k.
Here obtained formula by the various people: C • ln (nMax / (nMin-1)) • ln (kMax / kMin) Where C some coefficient. I would tell, that he is equal C~1 for very large n, which we can consider on
But at smaller ranges, n's and k's he changes 0.7<C<1.6, that is characterizes some discontinuities of allocation. Therefore we admit having taken the obviously underestimated value C=0.1, we
shall try to estimate limits, what maximum value k's to us is necessary for taking to find even one divisor:
at: kMin=3, C=0.1, nMax=nMin+1
n=16, F_14, k<10^72 !!!
So far should be checked k to be as much as possible sure 99.99999% :-)) but is not absolute, that we shall detect out a divider for F14.
n=22, F_20, k<10^98
n=24, F_22, k<10^107
n=26, F_24, k<10^115
n=35, F_33, k<10^154
n=402, F_400, k<10^1750
n=1000002, F_1000000, k<10^4340000
Or very roughly, but it is possible to tell. k<10^(4.5*n) or divisor Fermat k*2^n+1=> ~10^(4.5*n)*2^n =>~2^(15*n)*2^n=> ~2^(16*n)
This value most not favorable, which is improbable, is more probable, that we shall discover a divider much earlier. Thus, I give new Conjecture for Conjecture 4:
"All Fermat numbers m > 4 have divisor and anyone first smaller divisor k*2^n+1 < 2^(16*n)"
But Leonid is well aware that Emil Artin has a conjecture in the opposite direction, that is to say, conjecturing that there must be more Fermat primes, and asks us to find and publish that conjecture:
"I would be very pleased, if you could find in libraries or on Web, "proof" Artine, (is written for me on russian), where there are serious arguments about others prime Fermat. It would be the
good counterbalance to my guesses."
Does anybody have that conjecture in his books?
It was the same Leonid that found a very interesting argument elaborated by John B. Cosgrave ( Mathematics Department, St. Patrick’s College, Drumcondra, Dublin 9, IRELAND) in a document titled "
Could there exist a sixth Fermat prime? I believe it is not impossible", available in his own site.
In short (and with all the risks of the shortening): Cosgrave proceeds the following way:
1. He reformulate the compositeness conjecture for the Fermat numbers larger than F4 in the following elegant statement: "if F[m] is composite then F[m+1] is also composite"
2. He defines a new numbers class (unhappily also named Generalised Fermat Numbers, name that has been used for a very different class of other numbers) that contains the classical Fermat numbers as
one of its members
3. He shows that the compositeness conjectural statement is not valid for these new class of numbers in certain domain ( that he calls "rank")
4. He hopes then that then probably that statement is also not valid in the rank where these new class of numbers coincides with the classical Fermat numbers.
In his own (Cosgrave) words:
"What I have done is to place - in, I believe, a very natural way - the Fermat numbers in a larger setting, and point out that in that larger setting-almost certainly at the 17^th rank-the
corresponding behavior is different. If that can happen at the 17^th rank, then surely it is fair to note that it could happen at any rank, and therefore that it is not impossible (until proven
otherwise) for a sixth Fermat prime to exist."
Pepin tests story, according to Leonid Durman
Morehead & Western (independently) using Pepin's test with a=3 verified that F7 is composite. J. C. Morehead, Note on Fermat's numbers, Bull. Amer. Math. Soc. vol. 11 (1905) pp. 543-545.
Morehead & Western (by a very long computation) verified that F8 is composite. (Pepin test a=3) J. C. Morehead and A. E. Western, Note on Fermat's numbers, Bull. Amer. Math. Soc. vol. 16 (1909) pp.
F_13 Paxson 6 hour on IBM 7090 G. A. PAXSON, "The compositeness of the thirteenth Fermat number," Math. Comp., v. 15, 1961, p. 420
F_14 Selfridge & Hurwitz on IBM 7090; Paxson's outcome for F13 was checked, and an attempt was made to check F17, but only 20 of the necessary 131071 modular operations were completed. The complete
computation would then have occupied about 128 weeks ~ 2 years. see: A. HURWITZ & J. L. SELFRIDGE, "Fermat numbers and perfect numbers," Notices Amer. Math. Soc., v. 8, 1961, p. 601, abstract 587-104. and
J.L.Selfridge and A.Hurwitz, "Fermat numbers and Mersenne numbers," Math. Comp. 18 (1964), 146-148
F_20 Buell & Young see: J.Young and D.Buell, "The Twentieth Fermat Number is Composite" Math. Comp 50 (1988), 261-263
F_22 Crandall, Doenias, Norrie & Young see: V.Trevisan and J.B. Carvalho, "The composite character of the twenty- second Fermat number" J.Supercomputing 9 (1995), 179-182
F_24 Mayer, Papadopoulos & Crandal see http://www.perfsci.com/F24
and letters :-) | {"url":"http://www.primepuzzles.net/conjectures/conj_004.htm","timestamp":"2014-04-16T18:56:58Z","content_type":null,"content_length":"20331","record_id":"<urn:uuid:6c199dc1-f155-4155-b265-6f1f900c1334>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course notes
• Algebraic number theory (ps pdf) Course notes from a tutorial I gave at Harvard in 1999. Not brilliant, and I really dislike some parts, but you could do a lot worse. A couple of somewhat more
recent supplementary articles:
□ An introduction to local fields (ps pdf) A quick introduction to local fields and their Galois theory.
□ The idelic approach to number theory (ps pdf) An introduction to adeles and ideles, working towards a proof of the Dirichlet unit theorem and the finiteness of the class group. Similar to the
relevant portion of Lang's Algebraic number theory, but with more details and more of an emphasis on the underlying topology.
• Euler systems in arithmetic geometry (ps pdf) My course notes from Barry Mazur's 1998 course on Euler systems. Any errors in the notes are, of course, probably my fault.
• Barry Mazur's 1998 notes on Euler systems (ps pdf) Mazur's own notes on that course. Much less extensive than mine, but somehow much more charming.
• Barry Mazur's 1999 notes on Euler systems (ps pdf) Mazur's notes from his 1999 course on Euler systems. | {"url":"http://www.math.umass.edu/~weston/cn.html","timestamp":"2014-04-16T13:02:06Z","content_type":null,"content_length":"1883","record_id":"<urn:uuid:601b938e-a031-46b3-b121-3bd3567549c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to do company valuation
Pre-money valuation is hard. It’s a total guess. And, if you’ve been paying attention to this blog lately, you probably know that I’ve been helping to create another company. This one has the most
promise of any other I’ve built yet. Although I’m not going to reveal the details of the product/service quite yet (which is generally against my current business principles, but hey, I’ve got
partners this time and some heavy logistics to work out), I do want to describe some of the things I’ve been learning along the way.
Now, I’ve founded companies in the past and sold them successfully. However, I’ve always used my own money to build them, which works out pretty well because you get to sell it however, whenever, and
for whatever you think it’s worth.
When you start looking at pulling in outside investment, though, things change a bit. So, here’s the formula we worked with, for better or worse.
Side note: if you know anything about this arena and find fault in how we did things, feel free to say so. I’d love to learn more, and I know my readers would respect the info as well. Although I
talked to some smart folks to come up with this, I don’t have it all figured out.
Here’s a few known variables regarding expected ROI:
1. VCs expect 5-10x their money within 3 years.
2. Angels expect 5x their money within 5 years.
We changed ours a bit because we’re bashful. We wanted our angel investor to get 5x in 3 years. So…
3. Our growth rate for revenue at year 3 is about 130% annually.
4. Let’s assume our angel wants 5% of our company.
□ Start with an exit year (we took year 3).
□ Take the expected revenue that year ($20M) and multiply it by what percentage ownership our angel wants (5%). $20m x 5% = $1m
□ Our angel expects a 5x return within these 3 years, so let’s back that out to today by: $1m / 5 = $200k.
□ That $200k is what we would expect they would invest today to get 5x their money in 3 years. So, the rest is pretty easy if they want to own 5% of the company. $200k / 5% = $4m.
□ So, today our company would be valued at $4 million.
Now, these numbers are pretty fictional. So, let’s just say we we didn’t want to estimate quite so high. We thought that was a bit presumptuous, just as a hunch. So, we recalculated the numbers based
on profit instead of revenue. For our model, let’s just say that got us closer to a $1.2m valuation, which seems a bit closer to expectations.
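The back-of-the-envelope arithmetic above fits in a few lines. Here is a small sketch in Python (the names are mine, and the exit-year revenue or profit is used as a 1x proxy for exit value, just as the post does); note that the ownership percentage actually cancels out, so the implied valuation is simply the exit-year number divided by the target multiple:

def pre_money_valuation(exit_year_value, ownership, target_multiple):
    # exit_year_value: projected revenue or profit in the exit year (used as a 1x proxy for value)
    # ownership: fraction of the company the angel takes, e.g. 0.05
    # target_multiple: return the angel expects by the exit year, e.g. 5
    angel_stake_at_exit = exit_year_value * ownership
    investment_today = angel_stake_at_exit / target_multiple
    return investment_today / ownership        # equals exit_year_value / target_multiple

# The example above: pre_money_valuation(20e6, 0.05, 5) -> 4,000,000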
Anyway, that’s how we did our pre-money valuation. If you’re doing anything else, then forget everything I just said and go listen to these guys chat about cap tables. They know what they’re talking
[tags]finance, money, corporate, valuation, company, pre-money, estimate, angel, vc, venture capital[/tags]
Here’s another great write-up regarding Startup Funding by my friend Ben Yoskovitz. You should read it.
Just a couple of points:
1) Angels invest earlier than VC’s and as such should expect a higher multiple than VC’s. Why would they want or except less ?
2) As a startup, all revenue estimates are wishful thinking – particularly revenues that are 3 years away. Valuations are therefore wishful thinking and should not be relevant in any calculation.
What I have found to be a more appropriate “valuation” can be calculated by looking at a) a Founders estimate of how much total investment is needed before break-even / exit and b) how much of the
company the Founder would like at break-even/exit and how much the Founder is willing to let the Angel own (or the Angel demands to own) at break-even / exit. These figures will allow the Founder to
work backwards to an appropriate ownership position for all parties at the time of the Angel investment (and thus a theoretical valuation.
Note that my comments are really aimed at very early stage startups who have achieved very few milestones and are dealing with Angel investors.
As far as I know, VCs use much simpler formula. They figure out how much money will it take to get to the first meaningful milestone, know that founders will not work hard if more than 20-40% of the
equity is taken in the first round, and calculate the “valuation” from these numbers.
You have confused Revenue with Valuation. In year three, is the company worth $20M or does it have revenue of $20M? It is very strange that they are identical.
If you change $20M Revenue in year 3 to $20M Valuation in year 3 then it is correct.
The company is worth $4M now and five times more in three years: $20M. However much they invest, it is 5x growth. The question is still how you go from year 3 revenue to year 3 value.
John, I don’t believe the angels expect higher multiples on their money because they generally get a good deal in terms of the % of the company they get for the money they shell out. Other than that,
I’m not so sure, but those are the numbers I’ve been told by multiple VCs and angels.
Nick, as John said, valuation is a complete guess. Therefore, I can’t say with any certainty that the company will have a 2x or 3x multiple of revenue to equal a $40-60M valuation. So, I use revenue.
Obviously valuation of a company which has been standing for a few years has a very different calculation for valuation. But this early, we based our valuation off a 1x multiple of revenue.
Thank you! As a first-time investment seeker (not first time entrepreneur), this is the first article I’ve found written in plain English about valuation. Very, very helpful.
1. Bill Payne has survey results of average pre-money valuations for every geographic region in the country. Silicon Valley is the highest.
2. Dave McClure’s 5 “million dollar points”. Basically you get $1M valuation credit for having any of the following:
1. Market
2. Product
3. Team
4. Customers
5. Revenue
3. If you have Revenue, you can use your run rate, and some revenue multiple comparable to your industry, and stage. I wrote an answer giving the revenue multiples of venture backed companies here: | {"url":"http://blog.perfectspace.com/2007/10/16/how-to-do-company-valuation/","timestamp":"2014-04-19T23:16:27Z","content_type":null,"content_length":"32818","record_id":"<urn:uuid:e3c42944-c4b8-46af-ac94-b5ff0aa99b7b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
By Kardi Teknomo, PhD.
Vector cross product is also called vector product because the result of the vector multiplication is a vector. It can only be performed for two vectors of the same size.
Geometrically, when you have two vectors on a plane, the cross product will produce another vector perpendicular to the plane spanned by the two input vectors.
The direction of the cross product vector follows the direction of the thumb of your right hand when the other four fingers curl from the first vector to the second vector (the right-hand rule), as shown in the following figures.
Computation of cross product
For 1- or 2-dimensional vectors, the cross product produces the zero vector because they do not make a plane yet. Starting from 3 dimensions, the cross product can be computed algebraically using the simple arrangement and simple rule below.
The arrangement to compute the vector cross product
1. Arrange the vectors as rows, with the first input vector in the second row and the second input vector in the third row
2. Put the notation of the vector elements (the unit vectors) in the first row
3. Repeat the arrangement on the right
Simple rule to compute the cross product
After the arrangement above, multiply the elements of the vectors along each diagonal and then subtract the products of the elements along the counter-diagonals.
Now we put them together. For 3-dimensional vectors a and b this gives
c = a x b = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
or, written with the unit vectors i, j, k,
a x b = (a2*b3 - a3*b2) i + (a3*b1 - a1*b3) j + (a1*b2 - a2*b1) k
For higher dimensions we use the same rule, but programmatically it is easier to use a formula.
Let d = dimension of the vector (that is equal to the length of the vector)
Assume the index of the array starts from 0; then the pseudocode below produces the vector cross product
Input: vector a, vector b both have equal length
Output: vector c
d = vector length
For r = 0 to vector length -1
c[r] = a[mod(r+1, d)] * b[mod(r+2, d)] - a[mod(r+d-1, d)] * b[mod(r+d-2, d)]
Next r
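The same loop written as real code, in Python with zero-based indexing as above (for d = 3 this is the standard cross product; for other d it is the generalized rule of this page):

def cross(a, b):
    d = len(a)
    assert len(b) == d, "both vectors must have the same length"
    return [a[(r + 1) % d] * b[(r + 2) % d] - a[(r + d - 1) % d] * b[(r + d - 2) % d]
            for r in range(d)]

# cross([1, 0, 0], [0, 1, 0]) -> [0, 0, 1]     (i x j = k)
# cross([1, 2, 3], [4, 5, 6]) -> [-3, 6, -3]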
The interactive program of cross product below shows the cross product of two vectors of the same dimension. The program will also show you the internal computation so that you can check your own
manual computation. If you click “Random Example” button, the program will generate random input vectors in the right format.
In applications, the cross product is useful for constructing coordinate systems, mostly in 3-dimensional space.
Some important properties of the vector cross product are
• The vector cross product is not a commutative operation. If you reverse the order you will get the same magnitude but the opposite direction: a x b = -(b x a)
• The vector cross product is a distributive operation. You can distribute (and group) the vectors with respect to addition or subtraction: a x (b + c) = a x b + a x c
• The vector cross product is associative with respect to a scalar multiple of a vector. Moving the scalar (operations inside parentheses are computed first) does not change the result: (k a) x b = a x (k b) = k (a x b)
• The vector cross product of a vector with itself always produces the zero vector: a x a = 0
• The magnitude of the vector cross product is equal to the product of the norms and the sine of the angle between the two vectors: |a x b| = |a| |b| sin(theta)
• The cross product of a standard unit vector with itself is zero: i x i = j x j = k x k = 0
• The cross products of the perpendicular standard unit vectors form a cycle: i x j = k, j x k = i, k x i = j
• The relationship between the norm of the cross product and the dot product is |a x b|^2 = |a|^2 |b|^2 - (a . b)^2
See also: triple dot product, triple cross product, scalar triple product, inner product
CMS Winter 2007 Meeting
I will discuss the stability of solitary waves on an infinite elastic curve moving in three-dimensional space. While slow-moving waves are unstable, higher wave speeds stabilize the solitary
wave; I will try to explain the mechanism.
This is joint work with Erin Valenti.
I will discuss the construction of CMC surfaces by gluing techniques in general Riemannian manifolds. These are found by assembling an approximate solution consisting of small geodesic spheres
connected by embedded catenoids, which can then be perturbed to an actual solution under certain conditions on the geometry of the approximate solution. The resulting surfaces have very large
mean curvature. An interesting phenomenon is that in the case where the ratio of the norm of the second fundamental form to the size of the mean curvature is very large, then it is possible to
have solutions which exhibit behaviour that is very different from the `classical' behaviour (e.g. no one-ended CMC surfaces, CMC surfaces are cylindrically bounded) that occurs in Euclidean
Introduced in 1982 by Richard Hamilton, the Ricci flow is one of the most important equations in differential geometry. It is a geometric evolution equation providing a powerful analytic tool
used to deform a Riemannian metric on a Riemannian manifold. The Ricci flow has found fundamental application in topology, Riemannian geometry and complex/Kahler geometry. In this talk I will
discuss the Kahler-Ricci flow on complete non-compact Kahler manifolds. I will then discuss the application of the Kahler-Ricci flow to the uniformization of complete non-compact Kahler manifolds
and to Yau's uniformization conjecture. Yau's conjecture states: a complete non-compact Kahler manifold with positive holomorphic bisectional curvature is biholomorphic to complex Euclidean
In this talk I will show how one can prove a sharp quantitative version of the anisotropic isoperimetric inequality by exploiting mass transportation theory, especially Gromov's proof of the
isoperimetric inequality and the Brenier-McCann Theorem.
This is a joint work with F. Maggi and A. Pratelli.
Given a couple of smooth positive measures of same total mass on a compact Riemannian manifold M, we look for a smooth optimal transportation map G, pushing one measure to the other at a least
total squared distance cost. The recent local C^2 estimate of Ma-Trudinger-Wang enabled G. Loeper to treat the standard sphere case. In this talk, we discuss this topic on manifolds with
curvature sufficiently close to 1 in C^2 norm.
This is a joint work with P. Delanoe.
A general framework is given to analyze the falsifiability of economic models based on a sample of their observable components. It is shown that, when the restrictions implied by the economic
model are insufficient to identify the unknown quantities of the model, the duality of optimal transportation with zero-one cost function delivers interpretable and operational formulations of
the hypothesis of model correctness from which tests can be constructed to falsify the model.
We will discuss continuity of optimal transport maps, in view of a pseudo-Riemannian structure which we have formulated recently. A necessary condition for the continuity is given as some
non-negativity condition on the curvature of this pseudo-Riemannian metric. This result gives a natural geometric frame work and new perspectives for the regularity theory of Ma, Trudinger, Wang
and Loeper on the potential functions of optimal transport; it also yields some extensions of previous results and new examples.
This is joint work with Robert McCann (University of Toronto).
In joint work with Samia Challal, we show an optimal Hölder continuity for bounded solutions of the equation −Δ_A u = μ provided that μ(B_r(x)) ≤ C r^(n−1) for any ball B_r(x) ⊂ Ω. The
A-Laplace operator is defined by Δ_A u = div( (a(|∇u|)/|∇u|) ∇u ), where A(t) = ∫_0^t a(s) ds, a is an increasing C^1 function from [0,+∞) into [0,+∞) which satisfies a(0)=0 and
a_0 ≤ t a′(t)/a(t) ≤ a_1 for all t > 0, with a_0, a_1 positive constants.
In this talk we prove the existence and qualitative properties of positive bound state solutions for a class of quasilinear Schrödinger equations in dimension N ³ 3. We rely on a penalization
technique, in a nonstandard Orlicz space context, to build up a one parameter family of classical solutions which have finite energy and exhibit, as the parameter goes to zero, a concentrating
behavior around some point which we localize.
We study a Principal-Agent model of optimal derivative design where the agents' preferences are of mean-variance type and their types characterize their risk aversion. The set of contracts traded
exposes the principal to additional risk, as measured by a convex risk measure, in exchange for a known revenue.
The principal's aim is to minimize her risk exposure by trading with the agents subject to the standard incentive compatibility and individual rationality conditions on the agents' choices. In
order to prove that the principal's risk minimization problem has a solution, we first follow the seminal idea of Rochet and Choné and characterize incentive compatible catalogues in terms of U
-convex functions. When the impact of a single trade on the principal's revenues is linear as in the recent paper by Carlier, Ekeland and Touzi, the link between incentive compatibility and U
-convexity is key to establish the existence of an optimal solution. In our model the impact is non-linear as a single trade has a non-linear impact on the principal's risk assessment. Due to
this non-linearity we face a non-standard variational problem where the objective cannot be written as the integral of a given Lagrangian. Instead, our problem can be decomposed into a standard
variational part representing the aggregate income of the principal, plus the minimization of the principal's risk evaluation, which depends on the aggregate of the derivatives traded.
We study subsystems of the N-body problem, constructing minimizing noncollision periodic orbits using a symmetric variational method with a finite order symmetry group. The solution of the
variational problem gives existence of periodic orbits which realize certain symbolic sequences of rotations and oscillations for any choice of the mass ratio.
The Maslov index of the periodic orbits is then investigated and used to prove the main result which states that the minimizing curves in the three dimensional reduced energy momentum surface
are naturally extended to periodic integral curves which are generically hyperbolic.
We consider 2-dimensional flows of ideal incompressible fluid, i.e., vector fields in a 2-d domain which are divergence-free and tangent to the boundary. The flows have an intrinsic partial
order. Minimal elements of this order are stationary and stable solutions of the Euler equations. This is a non-classical variational principle which may be regarded as dual to the Arnold
variational principle for stationary flows, while having very different meaning.
The talk is devoted to the connections of this principle and the problem of the long time asymptotics of solutions of the Euler equations, and the numerical solution of this nonclassical
variational problem.
Perhaps the most known characterization of ellipsoids defines them as the convex bodies maximizing the affine isoperimetric ratio. In this talk, we will present some new characterizations
involving floating bodies and, respectively, illumination bodies.
With respect to the action of a symmetry group G, the principle of symmetric criticality (PSC) roughly states that "critical symmetric points" are "symmetric critical points". PSC is well known
to hold if G is compact. After reviewing its formulation due to Palais and more recently Anderson, Fels, and Torre, we:
(1) establish that PSC holds if the orbits are Riemannian symmetric spaces, and
(2) discuss PSC in the context of G-invariant Lagrangians defined on the bundle of connections over homogeneous spaces G/K.
In particular, we examine the non-reductive pseudo-Riemannian homogeneous spaces of dimension 4 recently classified by Fels and Renner (2006). These provide a class of examples where PSC
generally fails. There is one interesting exception in this class where PSC holds-in this case, there is a unique G-invariant connection which is "universal" in the sense that it is necessarily a
solution of the Euler-Lagrange equations of any G-invariant Lagrangian defined on the bundle of connections (in particular, the Yang-Mills Lagrangian).
I will talk about nonlinear parabolic systems that are generalizations of scalar diffusion equations. More precisely, I consider systems of the form
where F(z) is a strictly convex function. I will show that when F is a function only of the norm of u, then bounded weak solutions of these parabolic systems are everywhere Hölder continuous and
thus everywhere smooth. I will also show that the method used to prove this result can be easily adopted to simplify the proof of the result due to Wiegner on everywhere regularity of bounded
weak solutions of strongly coupled parabolic systems. | {"url":"http://cms.math.ca/Events/winter07/abs/cv.html","timestamp":"2014-04-16T10:20:48Z","content_type":null,"content_length":"19093","record_id":"<urn:uuid:1bb45582-9388-4349-a20b-d51b5c694467>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reticle Perpendicularity
March 9, 2012, 09:33 AM
I recently purchased a new long range scope, and while I was shopping around online I came across some incorrect information regarding reticle perpendicularity. Many people were experiencing slightly
canted reticles (in relation to turrets) and were frightened that this would somehow put them 3 feet or more off target at the coveted range of 1000 yards!
So, I just wanted to do a little math and clear things up.
First, let me just say that it is not all that difficult to notice as little as a 1 degree variation in angle by eye at the reticle. Some people were stating as much as 5 degrees of variance between
reticle and turret! That's an extreme case, and most were talking well under that, so I will use 2.5 degrees for my calculations. It seems that canted reticles are not all that uncommon, but I will
show you that it is nothing to worry about. All of this also holds true if the reticle and turret are aligned but the scope is installed slightly tilted one way or the other. So, those scope leveling
systems are just another bit of marketing BS in the gun world...
Imagine a right triangle in the same plane as your reticle while looking through the scope, pointing downward. If you align the reticle squarely, and your turret is angled 2.5 degrees off to the
right, the bottom angle of your triangle is obviously 2.5 degrees. For this example, I will use MOA measurements. At 100 yards, 1 MOA is about 1.047 inches. We will figure out how many inches of
horizontal variance result per click of a 1/4 MOA per click scope with a turret that is offset by 2.5 degrees from the reticle.
1/4 x 1.047 = 0.26175" This is the hypotenuse of your triangle, since this is how much each click of the turret raises the reticle, but the turret is offset, not the reticle. Now we have the simple
equation: sin(2.5 degrees) = x/0.26175, where x = the horizontal leg of your triangle. This X value is how much horizontal variance will result for each click of your scope. Doing the math, we find
that X = about 0.0114 inches per click at 100 yards. So, just making up a number of 30 MOA of correction (120 clicks) needed to get a large caliber rifle to 1000 yards while zeroed at 100 yards, this comes to about 1.37 inches of horizontal variance at 100 yards, or roughly 1.3 MOA.
In simple terms, if you zeroed your rifle at 100 yards, drew a perfectly vertical line through the bullseye, cranked up 30 MOA and fired another group at 100 yards while aiming at the same bullseye, the group would be less than 1.5 inches off to the right of the vertical line. Because the error is angular, that same 1.3 MOA works out to roughly 14 inches at 1000 yards.
Therefore, the only people who need to worry about a few degrees of cant between reticle and turret (which is evidently pretty common) are those who are worried about a foot or so at 1000 yards, which is still nowhere near the 3 feet that had people frightened...
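If you want to plug in your own cant angle and dial-up, the whole calculation is one line of trigonometry. A quick Python sketch of the math above (nothing scope-specific, the function name is mine):

from math import sin, radians

def cant_offset_inches(cant_degrees, dialed_moa, range_yards):
    # Horizontal offset caused by a cant between reticle and turret when dialing up elevation.
    horizontal_moa = dialed_moa * sin(radians(cant_degrees))
    return horizontal_moa * 1.047 * range_yards / 100.0   # 1 MOA is about 1.047" per 100 yards

# cant_offset_inches(2.5, 30, 100)  -> about 1.37 inches
# cant_offset_inches(2.5, 30, 1000) -> about 13.7 inches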
NAG Library
NAG Library Routine Document
1 Purpose
E04GYF is an easy-to-use quasi-Newton algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. First derivatives are
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
2 Specification
SUBROUTINE E04GYF ( M, N, LSFUN2, X, FSUMSQ, W, LW, IUSER, RUSER, IFAIL)
INTEGER M, N, LW, IUSER(*), IFAIL
REAL (KIND=nag_wp) X(N), FSUMSQ, W(LW), RUSER(*)
EXTERNAL LSFUN2
3 Description
E04GYF is similar to the subroutine LSFDQ2 in the NPL Algorithms Library. It is applicable to problems of the form
$\mathrm{Minimize}\;F\left(x\right)=\sum_{i=1}^{m}{\left[{f}_{i}\left(x\right)\right]}^{2}$
where $x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.) You must supply a subroutine to evaluate the residuals and their first derivatives at any point $x$.
Before attempting to minimize the sum of squares, the algorithm checks the subroutine for consistency. Then, from a starting point supplied by you, a sequence of points is generated which is intended
to converge to a local minimum of the sum of squares. These points are generated using estimates of the curvature of $F\left(x\right)$.
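To make the problem class concrete, here is an illustrative, deliberately bare Gauss-Newton iteration in Python for minimizing a sum of squared residuals given their first derivatives. It is not the algorithm used by E04GYF, which is a safeguarded quasi-Newton method (Gill and Murray, 1978) with consistency checks, workspace management and error reporting; the sketch only shows the kind of computation involved.

import numpy as np

def gauss_newton(residuals, jacobian, x0, iterations=20):
    # Minimize F(x) = sum_i f_i(x)^2, given f (residuals) and its Jacobian (first derivatives).
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        f = residuals(x)                                   # shape (m,)
        J = jacobian(x)                                    # shape (m, n)
        step, *_ = np.linalg.lstsq(J, -f, rcond=None)      # solve the linearized least squares problem
        x = x + step
    return x, float(np.sum(residuals(x) ** 2))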
4 References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least-squares problem SIAM J. Numer. Anal. 15 977–992
5 Parameters
1: M – INTEGER (Input)
2: N – INTEGER (Input)
3: LSFUN2 – SUBROUTINE, supplied by the user (External Procedure)
4: X(N) – REAL (KIND=nag_wp) array (Input/Output)
5: FSUMSQ – REAL (KIND=nag_wp) (Output)
6: W(LW) – REAL (KIND=nag_wp) array (Workspace)
7: LW – INTEGER (Input)
8: IUSER($*$) – INTEGER array (User Workspace)
9: RUSER($*$) – REAL (KIND=nag_wp) array (User Workspace)
10: IFAIL – INTEGER (Input/Output)
6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Note: E04GYF may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
On entry, ${\mathbf{N}}<1$,
or ${\mathbf{M}}<{\mathbf{N}}$,
or ${\mathbf{LW}}<8×{\mathbf{N}}+2×{\mathbf{N}}×{\mathbf{N}}+2×{\mathbf{M}}×{\mathbf{N}}+3×{\mathbf{M}}$, when ${\mathbf{N}}>1$,
or ${\mathbf{LW}}<11+5×{\mathbf{M}}$, when ${\mathbf{N}}=1$.
There have been an excessive number of calls of LSFUN2, yet the algorithm does not seem to have converged. This may be due to an awkward function or to a poor starting point, so it is worth restarting E04GYF from the final point held in X.
The final point does not satisfy the conditions for acceptance as a minimum, but no lower point could be found.
An auxiliary routine has been unable to complete a singular value decomposition in a reasonable number of sub-iterations.
There is some doubt about whether the point $x$ found by E04GYF is a minimum of $F\left(x\right)$. The degree of confidence in the result decreases as IFAIL increases. Thus, for the smallest of these IFAIL values it is probable that the final $x$ gives a good estimate of the position of a minimum, but for the largest it is very unlikely that the routine has found a minimum.
It is very likely that you have made an error in forming the derivatives
$\frac{\partial {f}_{i}}{\partial {x}_{j}}$
If you are not satisfied with the result (e.g., because IFAIL lies in the range of values indicating doubt about the minimum), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. Repeated failure may
indicate some defect in the formulation of the problem.
7 Accuracy
If the problem is reasonably well scaled and a successful exit is made then, for a computer with a mantissa of $t$ decimals, one would expect to get $t/2-1$ decimals accuracy in the components of $x$
and between $t-1$ (if $F\left(x\right)$ is of order $1$ at the minimum) and $2t-2$ (if $F\left(x\right)$ is close to zero at the minimum) decimals accuracy in $F\left(x\right)$.
The number of iterations required depends on the number of variables, the number of residuals and their behaviour, and the distance of the starting point from the solution. The number of
multiplications performed per iteration of E04GYF varies, but for
$m\gg n$
is approximately
. In addition, each iteration makes at least one call of
. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in
Ideally the problem should be scaled so that the minimum value of the sum of squares is in the range $\left(0,1\right)$ and so that at points a unit distance away from the solution the sum of squares
is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible
scaling will reduce the difficulty of the minimization problem, so that E04GYF will take less computer time.
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to the appropriate NAG covariance routine, using information returned in segments of the workspace array W; see that routine's document for further details.
9 Example
This example finds the least squares estimates of the unknown parameters in a nonlinear model, using the 15 sets of data given in the following table.
y      t1     t2     t3
0.14    1.0   15.0    1.0
0.18    2.0   14.0    2.0
0.22    3.0   13.0    3.0
0.25    4.0   12.0    4.0
0.29    5.0   11.0    5.0
0.32    6.0   10.0    6.0
0.35    7.0    9.0    7.0
0.39    8.0    8.0    8.0
0.37    9.0    7.0    7.0
0.58   10.0    6.0    6.0
0.73   11.0    5.0    5.0
0.96   12.0    4.0    4.0
1.34   13.0    3.0    3.0
2.10   14.0    2.0    2.0
4.39   15.0    1.0    1.0
The program uses
as the initial guess at the position of the minimum.
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/E04/e04gyf.html","timestamp":"2014-04-19T15:57:05Z","content_type":null,"content_length":"34841","record_id":"<urn:uuid:b294d00f-3357-45bd-90cf-9489fc93e8d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the perimeter of this rhombus
August 23rd 2009, 04:04 PM #1
"In rhombus RHOM, the length of diagonal MH is 18 cm, and the measure of angle RHO = 60 degrees."
thank you in advance.
first, make a sketch.
you need to use the following facts ...
1. all sides of a rhombus are equal in length.
2. the diagonals of a rhombus form four congruent right triangles within the rhombus.
3. the ratio of the sides of a 30-60-90 triangle is
$1 : 2 : \sqrt{3}$
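A quick numeric check once you've followed those hints (Python; spoiler for the final answer):

from math import cos, radians

half_diagonal = 18 / 2                            # the diagonals of a rhombus bisect each other
half_angle = 60 / 2                               # diagonal MH bisects the 60 degree angle RHO
side = half_diagonal / cos(radians(half_angle))   # hypotenuse of the 30-60-90 triangle
perimeter = 4 * side
# side = 6*sqrt(3) ~ 10.39 cm, so perimeter = 24*sqrt(3) ~ 41.57 cm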
Math Related Drawings
My primary research area is geometric topology. Lately I have been looking at knotted surfaces in 4-dimensional space. Here is a picture of a knotted surface. This is constructed from Fox's example
10 in the article A Quick Trip Through Knot Theory. You can make the image bigger by clicking on it. A similar picture appears on the cover of my book Knotted Surfaces and Their Diagrams with
Masahico Saito
I have been playing with symmetry a little.
From the fifth dimension. I am sorry for the extreme file size here.
The Crosscap. This is a generic version of the standard cross cap with associated movie.
The cone on the cube is the square of a triangle. I learned of this from Dylan Thurston. It is a geometric interpretation of the fact that 1*1*1 + 2*2*2+ ... + n*n*n = (n*n)*((n+1)(n+1))/4.
A few sketches that I drew while in Korea. These are more representational than usual.
Spinning 1 The imagery associated to spinning the braid s1.s1.s1.s1
Spinning 2 The braid picture of the spun trefoil, after Kamada.
A knotted 3-manifold in 5-space.
The Rhombic Dodecahedron As the projection of a hypercube.
A knotted 3-manifold in 5-space alternate view.
A knotted 3-sphere in 5-space draft version.
Paper Constructing a 2-fold branched covering.
Coin Flip: Simple branch points.
Work inspired by recent museum visit
And He Built a Crooked House This one goes to eleven!
Counting laps while swimming. The title says it all: I use eggs as markers for progress.
Improvisations on a tiling of a hexagon based on a triangle times triangle.
Pdf's together book of pdfs.
the model for the next two pieces.
Works from 2010
improvisation #1 playing with a weird projection of the triangle times triangle
improvisation #2 again.
improvisation #3 and again.
improvisation #4 and once more.
tiles 1: I was doing a calculation on a nine-element strict 2-quandle that I encoded here.
tiles 2: Same calculation, a little improvisation.
tiles 3: Same calculation, a little more improvisation.
tiles 4: Same calculation, a little more improvisation.
tiles 5: Same calculation, a little more improvisation.
Works from 2009
Velvet Elvis # 1. A late night improvisation on the double point set of the sphere eversion near the quadruple point.
One Quarter An improvisation on the 4 copies of the triangle times the triangle that fill a hypercube.
Waving windows to the 4th dimension.
I had the same dream a week later.
My dream under the intense scrutiny of psychoanalyis.
Cirque de Soleil series: # 1, # 2, # 3, # 4, #5, and the contortionists: #6.
Triangle 3 v2 c The ocean of fish in the net is how one of the above should look. The computer needs to free its mind, No?
Triangle 3 v2 b Corrected version.
Triangle 3 v2 d Corrected version.
Triangle # 4 A work developed for an up coming math art conference.
An Ocean in New Mexico An improvisation on the same data. Appropriate for a Doctor's office, no?
My computer is taking LSD. File transfer problems for Triangle 3 version 2a .
My computer is still tripping. File transfer problems for Triangle 3 version 2b .
Triangle 3 v2 d1 Maybe the computer is starting to come down.
The main idea of the geometric topology of knotted surfaces is that surfaces in 4-space when they are projected to 3-space have self-intersection. To learn more read the book Knotted Surfaces and
Their Diagrams Sometimes when you take an intersecting surface in 3-space, it won't lift to 4-space.
An immersed sphere in 3-space that doesn't lift to four dimensions is illustrated here. See if you can see that this represents a sphere by cutting and regluing the pieces along the double curves.
Koschorke's example is a surface formed from the connected sum of 3 Mobius bands that has one triple point. It has a double cover that does lift to 4-space even though the surface itself does not lift to 4-space.
A knotted surface in 4-dimensional space is illustrated on the page The Seifert Algorithm.
Some Art
Three peices that I did in an effort to understand the cartesean product of a pair of triangles. Prints of these can be ordered from me.
Seminar sketches
Boy's surface in movie form and the 3-sphere covering the quaternions.
trefoil. The handle structure in the trefoil complement (rough sketch)
Studies in Binomial and Multinomial
These are some completed things. There are some more in the works that help illustrate Pascal's recursion, and the multinomial recursion from a geomatric point of view.
Pascal cube. Observe that the 1,4,6,4,1 is represented by vertex,tetrahedon, octohedron, terahedron, vertex.
Multinomial. These are the structures for the trinomial theorem.
trinomial. A short improv on the same theme.
Don't forget to write.
J. Scott Carter
Professor of Mathematics
Department of Mathematics and Statistics
ILB 325
University of South Alabama
Mobile, AL 36688-0002
(251)-460-6264 /(251)-460-7969 FAX
click here for e-mail.
The views and opinions expressed in these web page(s) are strictly those of the author. The contents of these page(s) have not been reviewed or approved by the University of South Alabama. I am not
responsible for content in linked material. Come to think of it, I am not responsible for much!
Representing logic program schemata in * Prolog
, 1999
"... C.A.R. Hoare's Unified Theory of Programming gives a single framework for describing the algebraic semantics of different programming paradigms. The work presented in this thesis is aimed
towards incorporating the lacking parts of logic programming paradigm into this Unified Theory. As a first step ..."
C.A.R. Hoare's Unified Theory of Programming gives a single framework for describing the algebraic semantics of different programming paradigms. The work presented in this thesis is aimed towards
incorporating the lacking parts of logic programming paradigm into this Unified Theory. As a first step in my algebraic study of logic programming, I propose a shallow embedding of logic programs
into Gofer programs. This embedding translates each Prolog predicate into a Gofer function such that both the declarative and the procedural reading of the Prolog predicate are preserved. In the
standard approach to mapping logic programs to functional ones the declarative reading is lost. The shallow embedding computes by means of operations on lazy lists. The state of each step in
computation is passed on as a list of substitutions, and all the implicit logic operators in Prolog are replaced by explicit Gofer operators on lists. I express a set of algebraic laws for these
operators and discuss how...
"... SCHEMA-BASED LOGIC PROGRAM TRANSFORMATION Halime Buyukyildiz M.S. in Computer Engineering and Information Science Supervisor: Ass't Prof. Pierre Flener August 1997 In traditional programming
methodology, developing a correct and efficient program is divided into two phases: in the first phase, c ..."
Add to MetaCart
SCHEMA-BASED LOGIC PROGRAM TRANSFORMATION Halime Buyukyildiz M.S. in Computer Engineering and Information Science Supervisor: Ass't Prof. Pierre Flener August 1997 In traditional programming
methodology, developing a correct and efficient program is divided into two phases: in the first phase, called the synthesis phase, a correct, but maybe inefficient program is constructed, and in the
second phase, called the transformation phase, the constructed program is transformed into a more efficient equivalent program. If the synthesis phase is guided by a schema that embodies the
algorithm design knowledge abstracting the construction of a particular family of programs, then the transformation phase can also be done in a schema-guided fashion using transformation schemas,
which encode the transformation techniques from input program schemas to output program schemas by defining the conditions that have to be verified to have a more efficient equivalent program. Seven
program schemas ar... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=919320&sort=cite&start=10","timestamp":"2014-04-19T06:30:23Z","content_type":null,"content_length":"15222","record_id":"<urn:uuid:889d053f-135d-47e3-8063-9ffa23a3448d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Connes on Spectral Geometry of the Standard Model, I
Posted by Urs Schreiber
Alain Connes has a new report on recent progress in his old program of identifying the spectral geometry of the standard model coupled to gravity.
Alain Connes
Noncommutative Geometry and the standard model with neutrino mixing
Similar results have simultaneously found in
John W. Barrett
A Lorentzian version of the non-commutative geometry of the standard model of particle physics
In this first entry I’ll provide some background material. A followup will look at some of the details of the recent paper.
Recall from our discussion of the syntax of quantum mechanics that we can think of quantum particles, like those that appear in the standard model of particle physics, as being described by certain
smooth functors
(1)$\text{SM}_\mathrm{WL} : 1\mathrm{Cob} \to \mathrm{Hilb} \,,$
where the domain is supposed to be some realization of the idea of the category of 1-dimensional Riemannian manifolds. (There is a sublety concerning the distinction between non-relativistic and
relativistic QM, which, like many other subtleties, I shall ignore here. More discussion of this functorial way of looking at QM is going on here.)
My funny symbol $\text{SM}_\mathrm{WL}$ is short for standard model in worldline formulation. This formulation of quantum field theories like QED or the entire standard model, described for instance
Christian Schubert
QED in the Worldline Formalism
is particularly well suited for the ($n$-)categorical point of view on the world of particle physics.
A functor as above is specified by two pieces data:
• a Hilbert space $H$, which is the image of the single object $\bullet$$H = \text{SM}_\mathrm{WL}(\bullet) \,,$ known as the space of states;
• an operator $\Delta$ on $H$, usually addressed as the Hamiltonian, such that $\text{SM}_\mathrm{WL}(\bullet \stackrel{t}{\to} \bullet) \;\;=\;\; H \stackrel{\exp(i t \Delta)}{\to} H \,.$
In the applications that we are concerned with in physics, there is also usually a third datum, namely
• a $C^*$-algebra $A$ represented by bounded operators on H, usually (vaguely) called (a sub-)algebra of observables
Back in the old days, Alain Connes noticed that this triple of data provided by quantum mechanics is a nice algebraic way to talk about Riemannian geometry.
To see this, notice that the nonrelativistic spinless boson propagating freely on a compact Riemannian manifold $X$ is described by a functor of the above sort such that
• $H = L^2(X)$;
• $\Delta$ is the Laplace operator on $X$ ;
• $A$ is the algebra of smooth (real/complex) functions on $X$ .
By analogy, any quantum system more complicated than the free spinless boson on a compact space can be regarded as defining a generalized notion of Riemannian geometry. Since the metric data is
entirely encoded in the spectrum of the Hamiltonian, this approach is called spectral geometry.
(It is, slightly unfortunately, in fact often just addressed instead as noncommutative geometry.)
But in fact, both from the physic side as well as from the functional analytic side, we are lead to consider a slight refinement of this setup.
On the one hand, spinless bosons are rate in nature. In a way, spinning fermions are more “natural”.
On the other hand, Laplace operators are second order differential operators, hence not quite as elementary as first order operators, in a sense.
Both considerations lead us to the same conclusion.
Functorially, what happens is that we pass from the domain category of 1-dimensional Riemannian manifolds to (1,n)-dimensional super-Riemannian manifolds and pass from Hilbert spaces to graded
Hilbert spaces.
(2)$\text{SM}_\mathrm{WL} : 1\mathrm{SCob} \to \mathrm{SHilb} \,.$
You can find a review of work by Stolz, Teichner & Markert on what this means in detail at the end of this entry.
It turns out that such functors are no longer characterized by a Hamiltonian but by a (generalized) Dirac operator $D$ on $H$, an odd-graded operator satisfying a few obvious algebraic conditions.
(3)$\text{SM}_\mathrm{WL}( \bullet \stackrel{(t,\theta)}{\to} \bullet) \;\; = \;\; H \stackrel{\exp(it D^2)(1 + i\theta D)}{\to} H \,.$
So our refined notion of a spectral triple $(H,D,A)$ involves a graded Hilbert space $H$, an operator $D$ of odd-degree and a representation on $H$ of the $C^*$-algebra $A$.
While it can be made plausible along the above lines why this notion of a spectral triple is useful, it is still amazing to me how very useful it is indeed.
It is hard to give a comprehensive idea of the available literature. Maybe I just point out the recent review
Alain Connes & Matilde Marcolli
A walk in the noncommutative garden
math.QA/0601054 .
On the other hand, while the noncommutative aspect of not-necessarily commutative spectral geometry has risen to immense popularity in the physics community, having given rise to the entire fields of
noncommutative field theory (in field theory) - see for instance
Michael R. Douglas, Nikita A. Nekrasov
Noncommutative Field Theory
hep-th/0106048 -
and noncommutative D-brane configurations (in string theory), there is a remarkable scarcity of practitioners who take the spectral aspect seriously.
So far at least. Maybe Connes’ latest insights into the standard model help to change that.
Some notable exceptions from this rule that I am aware of are
• work on algebraic reformulations of central parts of string theory by Mathai Varghese and several others, mostly in the context of topological T-duality but more recently also, and more to our
point here, addressing spectral reformulations of the nature of D-branes and RR-charges;
• work by Soibelman, Kontsevich, Roggenkamp, Wendland and others, which prominently involves spectral triples obtained from some sort of categorified version of what I was talking about above,
namely the quantum mechanics not of point particles, but of 2-particles (= strings) as well as its “decategorification” obtained by taking the point particle limit.
The most farsighted application of these ideas to physics, however, has been followed by Connes and collaborators. Namely the idea of a spectral action principle.
It is known generally, that worldline theories of the kind I have discussed so far give rise to respective “effective” theories on target space ($\to$).
Connes proposed that, since all the information is encoded in the spectral triple, there must be a way to define that theory on target space (which, in phenomenologically viable applications, is
nothing but the spacetime (parts of which) we observe) entirely in terms of natural operations on our spectral triple.
This idea is in fact well motivated by standard results obtained in heat kernel expansion
(4)$\mathrm{Tr}(\exp(-i t \Delta)) = \cdots \,,$
which is well known to yield terms that look very similar to various terms that appear in the action functionals for physical theories involving gravity and other forces.
Similar expansion formulas can be found for the cases where instead of a generalized Laplace operator we have a generalized Dirac operator sitting in a spectral triple. Instead of the above heat
kernel we use
(5)$\mathrm{Tr}(f(D/ \Lambda)) \,,$
where $f$ is some regularizing function whose properties mostly drop out, $D$ is the Dirac operator and $\Lambda$ is some scale that we want to keep track of.
When $D$ is the ordinary Dirac operator on sections of a spinor bundle on some compact Riemannian space, the first order terms of the above expression reproduce the Einstein-Hilbert action functional
describing general relativity.
This is in itself interesting, if maybe not shocking. What makes this approach really interesting, though, is that it admits a neat unification of the actions functionals for gravity and the other
gauge forces.
Namely if we let $D$ be a Dirac operator as before, but now with respect to an associated spinor bundle on which we have an associated $U(N)$-connection $A$
(6)$D \mapsto D_A \,,$
then the above “heat kernel expansion” produces to lowest order not just the action principle of general relativity, but in fact that of general relativity coupled to the correct Yang-Mills action
functional describing the gauge bosons given by $A$.
So this provides a neat way to encode all the forces encountered in the world entirely in the algebraic data provided by a spectral triple.
If this works for forces (bosons), it should also work for matter (fermions). And indeed it does - if we add one more term to our spectral action, one of the rough form
(7)$\langle \psi ,\; D_A \psi \rangle \,,$
for $\psi$ certain elements of $H$ (our generalized spinors).
In summary, the spectral action principle says that we should build action functionals $S$ for physical theories by picking spectral triples $(H,D,A)$ and writing
(8)$S(D,\psi) := \mathrm{Tr}(f(D/\Lambda)) + \frac{1}{2}\langle J \psi,\; D\psi \rangle \;.$
You can find details on this technique for instance in this review:
Ali H. Chamseddine, Alain Connes
The Spectral Action Principle
Once this idea was out in the world, an obvious quest was opened:
What is the spectral triple whose associated spectral action is that describing our world, i.e. that giving rise to the standard model action of particle physics coupled to the Einstein-Hilbert
action of gravity?
It is not clear a priori what finding this spectral triple implies for our view of the world. If you are not impressed by games involving algebraic reformulations of otherwise well-understood
concepts, you might not see more in it than a curious way to repackage information in a weird form.
On the other hand, it may happen that what looks weird afterwards is not the spectral triple, but the formerly so familiar standard formulation of the standard model it encodes…
More on that in a followup to this entry.
Posted at September 6, 2006 10:45 AM UTC
Re: Connes on Spectral Geometry of the Standard Model, I
Date: Mon, 23 Oct 2006 11:16:35 GMT
Gravity and the standard model with neutrino mixing
Authors: Ali H. Chamseddine, Alain Connes, Matilde Marcolli
Posted by: Alejandro Rivero on October 24, 2006 3:02 PM | Permalink | Reply to this
Re: Connes on Spectral Geometry of the Standard Model, I
Thisis, as promised, a discussion on the expected Chamseddine-Connes-Marcolli paper, hep-th/0610241, which is the consolidated version of the one we discussed before. It will is being presented in
some detail during the NCG Semester in Newton Institute, and probably another talks around the word. You can get slides and audio of one of these talks, perhaps the central one, here.
The paper does a detailed travel from the newest formalism, based on an algebra $\mathbb{C}\oplus\mathbb{H}\oplus\mathbb{H}\oplus M_3(\mathbb{C})$ down to the pair of algebras $\mathbb{C}\oplus\
mathbb{H}$ and $\mathbb{C}\oplus M_3(\mathbb{C})$ of the Red Book. This includes to consider (to introduce) the notion of Reality in the spectral triple (I suspect that it can be related to having
quaternions around) and the notion of “unimodular” gauge potentials. Moreover, the explicit built of a curvature for the action, that in the Book forced a painful task of “junk removal”, is
substituted with the concept of “Spectral Action” (plus Reality to do the full trick, perhaps). And in turn, having an Spectral Action allows us to put gravity in the same bag.
The Spectral Action is a regulated trace of the square of the Dirac operator (or of the Dirac operator with an even regulation function) plus a term for fermions, the later a variant of the old work
on noncommutative geometry and reality, the former a new coming of the Chamseddine-Connes work. Fedele Lizzi has a conjecture that the term with the fermions comes from the trace action too but it is
kept apart due to some irregularity in the density of eigenvalues of the Dirac Operator.
With respect to hep-th/0608226, the more striking addition is a remark (in section 2.7 and in the introduction, p. 4) about the “Moduli Space of Dirac Operators”, ie the possible values of Yukawa
parameters. I can not avoid to think that this is related to a geometrical interpretation of the CP violating phases. This is because the more puzzling parameter just now is the number of
generations, and CP violation is a way to ask them to be $\geq 3$, the fact of being 3 and not a greater number could then be argued from considerations on the functional integration. There are some
remark about this in the audio of the talk, I believe.
Another puzzling remark appears in section 2.1 where complex and $3\times 3$ matrices are said to correspond to “integer spin” and quaternions to “half-integer spin”. This is in the spirit of
considering that the algebra comes from a truncation at spin one of $\sum_n M_n(\mathbb{C})$, from $SU_q(2)$. Again, see the audio (???) of the talk. At some point Connes stops, chalks the summation,
and asks the public “what is this”?.
Of the old axioms relating Spectral Triples to Non-Commutative Manifolds, Poincaré duality remains, but orientability is now under trial and perhaps discarded. This is a pitty, because a way to
decide how to choose an algebra for the model was to check if the axioms for manifolds were working. On the other hand, it leaves room to try to discuss where this algebra comes from.
I find myself uneasy about the take on the predictions, supposed to be at GUT scale. I would very much prefer to aim for predictions at electroweak scale, because the more relevant physics for this
geometry is the electroweak one. The work limps about this, on one hand the finite geometry supposed to be an approximation of some other object at high energy, on other hand the predictions been
taken at GUT because, well, they fit when the renormalisation group runs them downwards. I’d like to point our that having the Weinberg angle of GUT is not by itself a strong indication of GUT, as
its formula (the sum of hypercharges etc) can appear in other contexts. For instance, as the value of the Weinberg angle that minimizes $Z^0$ decay. On the other hand, while it is very encouraging
that the paper gets a non susy formula relating the masses of bosons and fermions, I had enjoyed a lot more if the low energy, but 0.01 exact, relationship between Top mass and Higgs Vacuum had been
derived from it.
Currently I am still trying to understand how asymptotical the asymptotic formula for the spectral action is. Ideally we should be able to switch off Newton Constant and then get the standard model,
and to switch off the standard model couplings and get [a sort of] General Relativity. But this zone of the paper is a jungle of coefficients. Moreover, with thh standard model switched off, I would
expect Gravity to come with the same proof that the theorem for commutative manifolds uses (see for instance the book of Varilly, figueroa and Gracia-Bondia). In fact the technique is the same,
basically it amounts to ingenious use of the Lichnerowiz formula.
Posted by: Alejandro Rivero on October 26, 2006 8:46 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2006/09/connes_on_spectral_geometry_of.html","timestamp":"2014-04-16T18:58:32Z","content_type":null,"content_length":"43930","record_id":"<urn:uuid:ca342623-bb5b-4b62-93f1-6e759fee7d87>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
How can you tell if a series is absolutely convergent or conditionally convergent?
May 1st 2010, 07:43 PM
How can you tell if a series is absolutely convergent or conditionally convergent?
How can you tell if a series is absolutely convergent or conditionally convergent? I am only able to tell if it's convergent and that's it. I read my book and noticed that if you rearrange terms
with negative signs, then the sum could change but I don't see how that works in practice so it would be great if someone can compare and contrast the two cases for me.
Any input would be greatly appreciated!
Thanks in advance!
May 1st 2010, 08:01 PM
a good explanation ...
YouTube - Absolute Convergence, Conditional Convergence and Divergence
May 2nd 2010, 03:25 AM
The most general method is to use the definition.
A series, $\sum a_n$ is "absolutely convergent" if $\sum|a_n|$ converges and "conditionally convergent" if $\sum a_n$ converges but $\sum|a_n|$ does not.
Obviously, any series consisting of only positive numbers has $|a_n|= a_n$ and so converges absolutely if and only if it converges. Since, if a series consists only of negative numbers, we could
factor out "-1" and have a series of positive numbers, so, similarly, a sequence of negative numbers is absolutely convergent if and only if it converge.
Note that many of the basic "convergence" tests require positive values and so really test for absolute convergence. That's why we always take the absolute value of a power series to test for
convergence and why a power series converges absolutely inside it radius of convergence.
The simplest example of a conditionally convergent sequence is $\sum_{n=1}^\infty \frac{(-1)^n}{n}$. That converges by the "alternating sequence test" but $\sum_{n=1}^\infty \left|\frac{(1)^n}{n}
= \sum_{n=1}^\infty \frac{1}{n}$ does not converge by the integral test. | {"url":"http://mathhelpforum.com/calculus/142559-how-can-you-tell-if-series-absolutely-convergent-conditionally-convergent-print.html","timestamp":"2014-04-19T10:30:25Z","content_type":null,"content_length":"6773","record_id":"<urn:uuid:b59ffa42-fb17-4ee4-8234-e7066a692164>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math: The Black Diamond Trail of Science Writing (#scio11)
The comment thread for my post about good writing has turned into a fascinatingly well-focused discussion on writing about math. A mathematician arrived, rending his garments in despair, and now
others–both writers and readers–are responding. I’ve always considered math the toughest subject a science writer can tackle, so I find the conversation especially interesting. Check it out.
Comments (10)
1. I had a quick browse of the comments. It seems to me that the topic of semi-simplicial objects is a secondary level development in algebraization of topology. If I were to explain that sort of
program, I would first start with examples of topological spaces like cylinder, Mobious Band, how both are formed from two squares (modulo bending and stetching) by different identifications and
then how spaces like projective spaces which cannot be visualized can be seen by glueing a cell to a Mobius Band. Then I would go on with more examples of such spaces obtained by glueing or
identification (identification spaces) and how many of them occur natuarally; one of the first examples of Poincare is the product of three dimensional projective space with three dimensional
euclidean space in his study of celestial mechanices. After a while such spaces are difficult to distinguish visually and one of the first mechanisms found by Poincare was a set of algebraic
invariants called Betti numbers which are sort of the number of various dimensional holes in the topological space. The technique is to decompose the space in to familiar ones called polyhedra
and finally simplexes which are higher dimensional versions of triangles. These are prefered to squares and general polyhedra since it is easier write doen boundaries. The more formal association
of associating algebraic invariants to topological spaces took several decades with one of the crucial suggestions coming from Emmy Noether ; that the Betti numbers are the invariants some groups
associated with the spaces. The result is associating groups with spaces so that topological problems associated with spaces can be converted to problem in the algebraic objects groups. The
advantage is that equivalence problems in groups can be done mechanically without the pitfalls of trying to visulaize complicated spaces. However it is not a complete equivalence problem which is
in some sense better as one has hopes of getting in to more tractable problems. After this one can go on to explain how semisimplicial objects come in as a part of this algebrization program.
I think that the whole program is heavily influenced by Galois Theory about solving equations by radicals. Galois showed how this can be seen as a problem in extensions of fields, a topic which
he created. He showed how the problems in these extensions of fields ( which are infinite) can be related to finite groups and whether a particular, called the Galois group of the equation has a
simple structure which he called sovable. In the process, he created two topics, field theory and group theory and how in this case, the problems in fields are completely equivalent to related
problems in finite groups ( in the case of topology from spaces to groups and other algebraic objects.)
I have not been keeping track of popular or technical writing in mathematics for a while, but off and on I have seen some fine writing by Julie Rehmeyer, Erica Klarreich in Science News while
browing about other topics.
2. Dear gaddeswarup, the area that I work in uses the combinatorial/categorical theory of simplicial sets as a substrate for building up other structures (in particular, weak higher categories). The
extent to which this is involved in the classical applications of simplicial sets is somewhat minimal, but for instance, given a simplicial set that is _not_ a Kan complex, its “fundamental
groupoid” (the left adjoint of the nerve functor) is not necessarily a groupoid, but a category. This gives us some of the motivation to rebuild category theory out of the theory of simplicial
3. I write about math for a living, so naturally this discussion is fascinating to me.
The first question you have to answer, always, is “Why should my reader care?” If you can’t answer that, then stop. Whatever else you say, it’s going to be a failure.
It’s true that answering that question with current pure math research can very challenging, because it’s so very far from removed from ordinary experience. To some extent, I solve that problem
by picking my subjects carefully. For example, I’ve written about a computer that plays poker using game theory (http://bit.ly/f9ozKs); how mathematicians restored the only known live recording
of Woody Guthrie (http://bit.ly/grammy_in_math); and how baseball players might be able to round the bases faster by following a bizarre path mathematically shown to be faster (http://bit.ly/
i4VTQD ). Those are all inviting topics where the math is connected to everyday experience.
But recently, I was asked to write summaries of the work of the Fields Medalists. The Fields Medals are one of the highest honors in math and are often called the mathematicians’ equivalent of
the Nobel Prize. I accepted the assignment with some trepidation — after all, this was extremely abstract stuff, chosen not for its potential interest to laypeople but for its inherent
mathematical significance. Who the winners were was still a secret when they hired me, so I didn’t even know what the topics were until I said yes. Would I be able to find reasons for my readers
to care about these topics?
I got lucky in that some of the topics had applications, which made things much easier. But not all of them — take, for example, the fundamental lemma. I’ll be honest with you: I don’t know what
the fundamental lemma is. Heck, very few mathematicians know what the fundamental lemma is! It would take me years of hard work to really know that, and when I was done, I’d have to ask my
readers to spend just as long in order to share what I’d learned. Hopeless!
And here’s the truth: Almost no one cares what the fundamental lemma is. Really. Even mathematicians (except for the very few working in that immediate field). It’s a technical tool, a very
powerful theoretical one that was extremely hard to build. What the tool does exactly, how it’s built — doesn’t matter.
But of course, mathematicians *do* care about the result even if they don’t know quite what it is, and here’s why: The lack of a proof has been a huge stumbling block to a mind-blowing theory
that aims to unify mathematical fields that appear to be only distantly related, and the development of this theory is what’s led to huge breakthroughs like the proof of Fermat’s Last Theorem.
And the importance of the fundamental lemma is particularly amazing because the result itself is so boring and seems so much like a small technical problem that it got named a “lemma” (what
mathematicians call a small, boring, technical result). Then, when mathematicians banged their head on this thing for decades and couldn’t prove it, the title got elevated to a kind of oxymoron:
The Fundamental Lemma, or, translated, “The Really Important Little Boring Thing.” The lack of a proof was such a logjam that many mathematicians responded by simply assuming the thing was true
and working out the consequences — building a huge edifice of theory that would come crashing down if it turned out to be false.
So my readers care (I hope) about the fundamental lemma because its amazing that something that appeared easy and boring could turn out to be so hard and so vital. They care because they can get
a glimpse of the beauty of the field that it’s a part of. They care because they can enter the emotional world of the mathematicians whose life work had teetered on top of this unproven theorem.
My write-up of this is here, if anyone would like the bigger story: http://bit.ly/i5uDo7
And, should you really want your math fix, here are my write-ups of the other Fields Medalists: bit.ly/lindenstrauss_fields, bit.ly/smirnov_fields, and bit.ly/villani_fields.
5. In my student days I used to read popular science writing by scientists like Jame Jeans, A.S. Eddington, George Gamow and E.T. Bell. It is a puzzle to me how non-scientists can pick up
interesting topics, somehow get the gist of the arguments and convey in an interesting way to lay men. Thanks to Julie Rehmeyer for explaing it to some extent. There is also a slightly technical
piece by her which may be of interest to bloggers on evolution:
6. Dear Julie, I think that giving the drama of the discovery without discussing the meaning is seen by mathematicians as fluff journalism. The fundamental lemma has a moral meaning, even if you
don’t want to give the exact technical statement. For instance, take a look at this post from Bill Lawvere on the categories mailing list: http://rfcwalters.blogspot.com/2010/10/
old-post-why-are-we-concerned-fw.html .
Instead of filling up pages with “mathematics news!”, why not actually include some mathematical content? If you’re just publishing about the drama, you might as well publish about fake
breakthroughs. Mathematicians don’t care if the news about the mathematical community is celebrated in newspapers. That time rated the fundamental lemma among the top scientific discoveries of
the year cheapens the accomplishment. Mathematicians don’t want recognition from the general public. They want people to understand what they do.
I don’t know if you’ve ever read Hardy’s _A Mathematician’s Apology_, but we would love to see expository articles giving fun and interesting little proofs of things like the infinitude of
primes. I’ve shown many people that proof, and they always get a huge kick out of it. “Is that what math is about?” they ask with interest.
Write an article about the two-dimensional crystallographic restriction theorem, which states that wallpaper patterns can only have certain symmetries of 1,2,3,4, and 6. The formal proof may be
hard, but the general idea of how it works isn’t. Draw a lattice and show that a fivefold symmetry does not fit on the lattice.
You’d be doing the mathematical community and the general public a huge favor. If they want to read about drama they can read the sports section. Science should teach and enrich, not convince
people how impossible it is to think scientifically.
Sorry if I came off a bit harsh, it wasn’t directed specifically at you.
7. Different articles call for different treatments. If you take a look at the other Fields Medals write-ups, you’ll see that they have significantly more mathematical content, because I was able to
find ways of connecting the mathematical content to things people understand and care about. My point is that one way or another, you’ve always got to give people a reason to care.
And drama is certainly a part of science and math! God forbid that we ban it to the sports page.
“Fun and interesting little proofs of things like the infinitude of primes” are only one small bit of what mathematics is really about. I do that from time to time, when there’s something new
along those lines. But if that’s all I did, I’d be misrepresenting what math is really about. It’s also about art and stopping genocides and saving lives and how many ways you can tie your
shoelaces. It’s a huge, fabulous, rich thing, and the only way non-mathematicians have a chance of getting a glimpse into it is if those of us who know something about it learn to present it
invitingly, to stop using jargon, and to connect it to the lives of non-mathematicians. I think there’s always a way of doing it, though the connection may be more or less mathematically precise
depending on the subject.
8. Dear Julie,
When you teach people about math in ways that are relevant to their lives, you make it seem passé and pedestrian. Have you ever read this: http://www.maa.org/devlin/LockhartsLament.pdf ?
Or how about Hardy’s apology? I refer you to a very nice paragraph from that book (reproduced below), which discusses the value of the theorems of Euclid and Pythagoras, on the infinitude of
primes, and on the irrationality of 2^{1/2}:
There is no doubt at all, then, of the ‘seriousness’ of either theorem. It is therefore the better worth remarking that neither theorem has the slightest ‘practical’ importance. In practical
application we are concerned only with comparatively small numbers; only stellar astronomy and atomic physics deal with ‘large’ numbers, and they have very little more practical importance, as
yet, than the most abstract pure mathematics. I do
not know what is the highest degree of accuracy ever useful to an engineer-we shall be very generous if we say ten significant figures. Then
(the value of pi to eight places of decimals) is the ratio
of two numbers of ten digits. The number of primes less than 1,000,000,000 is 50,847,478: that is enough for an engineer, and he can be perfectly happy without the rest. So much for Euclid’s
theorem; and, as regards Pythagoras’s, it is obvious that irration-
als are uninteresting to an engineer, since he is concerned only with approximations, and all approximations are rational.
The fact is, ordinary people can do without mathematics and mathematical proof just fine, but throughout the ages, we have seen that people from every culture and every continent have delighted
in the puzzles of mathematics proper. They were not inspired by problems of engineering but simply because math by itself is enjoyable.
You are doing your readers a disservice by making your mathematics articles about saving lives or stopping genocides or how many ways you can tie your shoelaces.
Here are some more passages from Hardy (a book that I very much suggest that you read (at least §20-30).
It will probably be plain by now to what conclusions I am coming; so I will state them at once dogmatically and then elaborate them a little. It is undeniable that a good deal of
elementary mathematics – and I use the word ‘elementary’ in the sense in which professional mathematicians use it, in which it includes, for example, a fair working knowledge of the differential
and integral calculus – has considerable practical utility. These parts of mathematics are, on the whole, rather dull; they are just the parts which have the least aesthetic value. The ‘real’
mathematics of the ‘real’ mathematicians, the mathematics of Fermat and Euler and Gauss and Abel and Riemann, is almost wholly ‘useless’ (and this is as true of ‘applied’ as of ‘pure’
mathematics). It is not possible to justify the life of any genuine professional mathematician on the ground of the ‘utility’ of his
I can remember Eddington giving a happy example of the unattractiveness of ‘useful’ science. The British Association held a meeting in Leeds, and it was thought that the members might like to
hear something of the applications of science to the ‘heavy woollen’ industry. But the lectures and demonstrations arranged for this purpose were rather a fiasco. It appeared that the members
(whether citizens of Leeds or not) wanted to be
entertained, and the ‘heavy wool’ is not at all an entertaining subject. So the attendance at these lectures was very disappointing; but those who lectured on the excavations at Knossos, or on
relativity, or on the theory or prime numbers, were delighted by the audiences that they drew.
What parts of mathematics are useful?
First, the bulk of school mathematics, arithmetic, elementary algebra, elementary Euclidean geometry, elementary differential and integral calculus. We must except a certain amount of what is
taught to ‘specialists’, such as projective geometry. In applied
mathematics, the elements of mechanics (electricity, as taught in schools, must be classified as physics).
Next, a fair proportion of university mathematics is also useful, that part of it which is really a development of school mathematics with a more finished technique, and a certain amount of the
more physical subjects such as electricity and hydromechanics. We must also remember that a reserve of knowledge is always an advantage, and that the most practical of mathematicians may be
seriously handicapped if his knowledge is the bare minimum which is essential to him; and for this reason we must add a little under every heading. But our general conclusion must be that such
mathematics is useful as is wanted by a superior engineer or a moderate physicist; and that is roughly the same thing as to say, such mathematics as has no particular aesthetic merit. Euclidean
geometry, for example, is useful in so far as it is dull-we do not want the axiomatics of parallels, or the theory of proportion, or the construction of the regular pentagon.
One rather curious conclusion emerges, that pure mathematics is on the whole distinctly more useful than applied. A pure mathematician seems to have the advantage on the practical as well as on
the aesthetic side. For what is useful above all is technique, and mathematical technique is taught mainly through pure mathematics.
I hope that I need not say that I am trying to decry mathematical physics, a splendid subject with tremendous problems where the finest imaginations have run riot. But is not the position of an
ordinary applied mathematician in some ways a little pathetic? If he wants to be useful, he must work in a humdrum way, and he cannot give full play to his fancy even when he wishes to rise to
the heights. “Imaginary” universes are so much more beautiful than this stupidly constructed ‘real’ one; and most of the finest products of an applied mathematician’s fancy must be rejected, as
soon as they have been created, for the brutal but sufficient reason that they do not fit the facts.
I hope you will at least consider what I’m trying to say.
9. I’ve read “A Mathematician’s Apology” and both enjoyed it and found it short-sighted.
The article on the number of ways to tie your shoelaces, by the way, is about cutting-edge, pure math. I love writing about pure math — but I still believe that you have to find a way to connect
it to things readers start off knowing they care about. Otherwise, they’re not going to keep reading. | {"url":"http://blogs.discovermagazine.com/loom/2011/01/18/math-the-black-diamond-trail-of-science-writing-scio11/","timestamp":"2014-04-19T21:29:16Z","content_type":null,"content_length":"117218","record_id":"<urn:uuid:a0f9de4c-fbbf-4ce3-a2a3-5a4fed702eb6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
You are not logged in.
• Index
• » Help Me !
• » Scientific Notation
Post a reply
Topic review (newest first)
2010-09-10 22:22:27
Hi Bob
Sorry for the delay, been a while since my last post.
Love it :---- houseboat
bob bundy
2010-09-08 06:08:06
Hi Dave
Your log fire reminded me of the following:
2010-09-07 21:31:42
Scientific notation, also known as standard form or in exponential notation, is a way of writing numbers, which can contain values for large or small to be easily written in
standard notation decimal notation.Scientific was developed to easily represent numbers are either very large or very small.Sometimes, especially when you use a calculator, you
can come up with a very wide range.
2010-05-31 19:23:56
2010-05-31 19:21:07
2010-05-31 19:20:35
Nope, no trips planned. So you had logs and logarithms. An extra week, sounds great.
2010-05-31 19:17:42
It was great, until I was literally handing my passport over to the baggage handler when a woman leant over and wispered in her ear. She then said that all flights from Corsica
to UK we cancelled for at least a week. There was a huge panic and a run on the airline desk. Luckily we have a house over there, so we just went home and had an extended
Took Emma to the beach everyday and made sand castles, and studied logarithms in the afternoons. Log fires in the evenings. Very nice
You off anywhere this year?
2010-05-31 18:32:09
DERIVED UNITS!!!!
that's Exactly what you call them!!
I had forgotten he he . .
2010-05-31 18:26:10
How was Corsica!
2010-05-31 18:25:05
I see now.
Unit for Acceleration (m/s)/s
Derived units.
Cool, happy now
2010-05-31 18:03:50
Hi Dave;
That's the general way with experimental data. Although you can ask for decimal points too.
2010-05-31 18:02:12
Hi Bobby
Thanks for that, I didn't know about the accuracy. That's handy to know and makes sense too, kind of like Amdahls's Law for functions, although none of my tutors have ever
mentioned it.
Just shows you that you should always consult others rather than taking one persons word for it.
2010-05-31 17:49:29
DaveRobinsonUK wrote:
How is it that s is -2 though kg is -1. Does it just denote that m/s is a ratio and kg is a definite unit?
s is -2 coz its meter per second squared (unit of acceleration).
understand it like this..
As you can see, it appears to be a ratio but is rather a "Rate of Change w.r.t Time"!
2010-05-31 17:08:52
Hi David;
Did you get the same units I did? 5.962 is 4 significant digits. Question is slightly off. You can never get more significant digits that than the least accurate input. In this
case 9.8 is the least precise input with 2 significant digits. Therefore all rounding if I remember correctly is to 2 significant digits.
m/s is read meters per second.
2010-05-31 17:05:45
Hi Bobby and ZHero
Yes, I got
, it asked for the answer to 3d.p.
How is it that s is -2 though kg is -1. Does it just denote that m/s is a ratio and kg is a definite unit?
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
Hi BobSorry for the delay, been a while since my last post.Love it :---- houseboat
Scientific notation, also known as standard form or in exponential notation, is a way of writing numbers, which can contain values for large or small to be easily written in standard notation decimal
notation.Scientific was developed to easily represent numbers are either very large or very small.Sometimes, especially when you use a calculator, you can come up with a very wide range.
Nope, no trips planned. So you had logs and logarithms. An extra week, sounds great.
It was great, until I was literally handing my passport over to the baggage handler when a woman leant over and wispered in her ear. She then said that all flights from Corsica to UK we cancelled for
at least a week. There was a huge panic and a run on the airline desk. Luckily we have a house over there, so we just went home and had an extended holiday.Took Emma to the beach everyday and made
sand castles, and studied logarithms in the afternoons. Log fires in the evenings. Very nice You off anywhere this year?
DERIVED UNITS!!!!that's Exactly what you call them!!I had forgotten he he . .
Hi Dave;That's the general way with experimental data. Although you can ask for decimal points too.
Hi BobbyThanks for that, I didn't know about the accuracy. That's handy to know and makes sense too, kind of like Amdahls's Law for functions, although none of my tutors have ever mentioned it. Just
shows you that you should always consult others rather than taking one persons word for it.
DaveRobinsonUK wrote:How is it that s is -2 though kg is -1. Does it just denote that m/s is a ratio and kg is a definite unit?
How is it that s is -2 though kg is -1. Does it just denote that m/s is a ratio and kg is a definite unit?
s is -2 coz its meter per second squared (unit of acceleration).understand it like this..
Hi David;Did you get the same units I did? 5.962 is 4 significant digits. Question is slightly off. You can never get more significant digits that than the least accurate input. In this case 9.8 is
the least precise input with 2 significant digits. Therefore all rounding if I remember correctly is to 2 significant digits.m/s is read meters per second. | {"url":"http://www.mathisfunforum.com/post.php?tid=14111&qid=142362","timestamp":"2014-04-20T13:35:17Z","content_type":null,"content_length":"26996","record_id":"<urn:uuid:19cbe3c8-3d00-4f67-80b4-1c61d5bf3826>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advances in Tribology
Volume 2012 (2012), Article ID 163575, 9 pages
Research Article
Some Experimental and Simulation Results on the Dynamic Behaviour of Spur and Helical Geared Transmissions with Journal Bearings
^1Département Dynamique des Structures, DCNS Research, Centre d'Expertise des Structures & Matériaux Navals, 44 620 La Montagne, France
^2LaMCoS, UMR CNRS 5259, INSA Lyon, Université de Lyon, Bâtiment Jean d'Alembert, 20 avenue Albert Einstein, 69 621 Villeurbanne Cédex, France
Received 11 July 2012; Accepted 15 November 2012
Academic Editor: Benyebka Bou-Saïd
Copyright © 2012 R. Fargère and P. Velex. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Some interactions between the dynamic and tribological behaviour of geared transmissions are examined, and a number of experimental and simulation results are compared. A model is introduced which
incorporates most of the possible interactions between gears, shafts and hydrodynamic journal bearings. It combines (i) a specific element for wide-faced gears that includes the normal contact
conditions between actual mating teeth, that is, with tooth shape deviations and mounting errors, (ii) shaft finite elements, and (iii) the external forces generated by journal bearings determined by
directly solving Reynolds' equation. The simulation results are compared with the measurements obtained on a high-precision test rig with single-stage spur and helical gears supported by hydrodynamic journal bearings. The experimental and simulation results compare well, thus validating the simulation strategy both at the global and local scales.
1. Introduction
Despite their inherent drawbacks such as the generation of noise, vibrations, and contact failures, geared systems are commonly used in mechanical transmissions for their high efficiency and power
transmission capacity. In some applications for which noise can be a critical issue, such as marine propulsion, journal bearings offer a viable alternative to rolling element bearings because of
their interesting damping properties which, however, can be counterbalanced by instabilities and nonlinear phenomena. From a modelling point of view, the coupling of all these mechanical parts
requires simultaneous treatment of the structural problem associated with the shaft lines and the contact problems not only between the mating teeth but also at the shaft/bearing interfaces. Each of
these individual mechanical topics has generated a vast body of literature over the years. Concerning journal bearings, the first seminal papers date back to the second half of the 20th century [1–4
], and a valuable synthesis can be found in [5] for the basic phenomena. The influences of thermal effects [6], oil injection properties, lubricant rheology, shaft misalignments [7], and the local elastic deflections on shafts and bearings [8] have been studied over the past 40 years and are, today, correctly mastered; however, the computational costs can be prohibitive, particularly in an
industrial context. On the other hand, gear dynamics has been extensively analyzed in recent decades based on increasingly refined models [9, 10] which usually combine rigid gears, discrete
stiffness, and damping elements [11, 12]. Later, time-varying mesh stiffness along with mounting errors and tooth shape modifications have been considered [13, 14], and gear body deflections have
been introduced via shaft [15] or 3-dimensional solid finite elements.
Several models dealing with the interactions between gears and bearings can be found in the literature, but most of them do not consider either gear mesh or bearing nonlinearities. Theodossiades
and Natsiavas [17] and Chen et al. [18] used a simplified mesh interface coupled with the nonlinear properties of journal bearings. Baud and Velex [16] simulated journal bearings via stiffness and
damping coefficients while employing the sophisticated gear model presented in [14]; a number of comparisons with the evidence from a test rig were presented. More recently, journal bearing-gear
nonlinear interactions have been studied by Baguet and Jacquenot [19] who coupled gear and shaft elements along with bearing forces calculated using a multigrid method.
The present work is a continuation of [19], but a more refined simulation of the interactions is proposed which relies on the original mesh model of [15], an efficient bearing approximation [6,
20], and introduces updated centre-distance, pressure angle, misalignments, and mesh characteristics in relation to the positions of the shafts in the bearings. Bearing and meshing models have been
chosen to deal with most of the possible parameters and phenomena met in real ship transmissions: wide-faced gear bodies with profile modifications (necessary for high power transmission and silent
running) and finite length journal bearing model with oil injection area, cavitation, and thermal effects. The model is fully configurable in terms of geometry and running conditions, and its results
are compared with the experimental findings of Baud and Velex [16]. The comparisons deal with tooth contacts (dynamic amplification) and the bearing behaviour (steady-state position) for various
geometries and running conditions.
2. Mechanical Model
A hybrid model has been developed which incorporates the gear simulation presented in [15], in which a pinion and a gear are assimilated to two deformable shafts linked by a time-varying series of
nonlinear stiffness elements distributed along the potential contact lines on the base plane. The shafts are modelled by two-node Timoshenko beam elements which account for traction, torsion, and
bending, whereas the other components, such as couplings and load machines, are represented by lumped stiffness and/or inertia elements. Following [21], the bearings contribute via external force
vectors calculated by solving Reynolds’ equation in relation to the instant shaft positions and velocities in each bearing [5].
The mesh stiffness elements are evaluated from the bidimensional results of Weber and Banaschek [22] for structural deflections (tooth bending, base) and Lundberg’s formula for contact compliance [23
]. As tooth flanks move relative to each other, the contact geometries and global mesh stiffness are updated based on rigid-body displacements. Tooth friction is neglected, and only the normal
compressive forces are considered. Any given stiffness element which is not in compression is set to zero (e.g., in the case of partial contacts on tooth flanks) and the gear stiffness matrix and
forcing terms are recalculated until convergence is achieved. It is also checked that there is no compression outside the contact area. Further details about the mathematical developments can be
found in [14, 15].
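The unilateral character of the tooth contact described above can be illustrated with the small fragment below. It is a simplified sketch (a single rigid-body approach across the discretised contact lines, with made-up stiffness and deviation values), not the algorithm as implemented by the authors, but it reproduces the key idea: cells whose computed force would be tensile are deactivated and the problem is re-solved until every active cell is in compression.

```python
import numpy as np

def solve_mesh_contact(k, e, F, tol=1e-9, max_iter=200):
    """Find the mesh approach delta such that sum_i k_i * max(delta - e_i, 0) = F.

    k : stiffness of each discretised cell on the potential contact lines (N/m)
    e : initial separation of each cell (m), e.g. from profile errors/modifications
    F : total transmitted normal force (N)
    Returns the approach delta and the per-cell contact forces (zero where the
    cell separates, mimicking the 'no tension' condition of the text).
    """
    delta = e.min() + F / k.sum()              # initial guess with all cells active
    for _ in range(max_iter):
        active = delta > e                      # cells currently in compression
        residual = k[active] @ (delta - e[active]) - F
        if abs(residual) < tol * max(abs(F), 1.0):
            break
        delta -= residual / k[active].sum()     # Newton step on the piecewise-linear law
    forces = np.where(delta > e, k * (delta - e), 0.0)
    return delta, forces

# Illustrative data: 20 cells, one end of the contact line relieved by 15 micrometres
k = np.full(20, 4.0e8)                          # N/m per cell (assumed)
e = np.linspace(0.0, 15e-6, 20)                 # linearly increasing separation (assumed)
delta, forces = solve_mesh_contact(k, e, F=2.0e4)
print(f"approach = {delta*1e6:.2f} um, active cells = {(forces > 0).sum()}")
```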
Bearing reactions are considered as external lumped forces acting at one shaft node which are calculated by integrating the pressure field over the bearing area. The classic method of Rhode and Li [
20] (also known as the generalised short-bearing theory) is used which relies on the hypothesis of a parabolic pressure variation in the axial direction so that the remaining unknown is the
circumferential pressure distribution. By so doing, the size of the problem is considerably reduced, and systematic parameter analyses are possible. A finite difference scheme combined with a
Gauss-Seidel method is employed to find the angular pressure distribution. This method is very accurate for bearings with a moderate length-to-diameter ratio $L/D$ (with $L$, the bearing length, and $D$, the shaft diameter) and can deal with realistic boundary conditions for oil injection and cavitation (Reynolds' conditions, i.e., zero pressure and zero circumferential pressure gradient at the rupture abscissa) when using Christopherson's algorithm [24]. The bearing model is coupled with
(i) a global thermal model [5] in which the temperature increase is calculated by equating a percentage of the heat generated by the fluid shearing with the heat ejected at the bearing edges, along
with (ii) a fluid circulation model [6] so that the lubricant density and viscosity can be updated separately in each bearing using the empirical laws given in [25].
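To make the bearing part more concrete, the fragment below sketches the kind of one-dimensional circumferential solve that results from the parabolic axial pressure assumption. It is only a schematic stand-in for the method of [20] and for the authors' implementation: the film is iso-viscous, the shaft position is prescribed through a fixed eccentricity ratio instead of being coupled to the structure, a simple zero-pressure condition is imposed at the maximum film thickness in place of a detailed oil injection area, and cavitation is handled by clamping negative pressures to zero during the Gauss-Seidel sweeps (Christopherson's algorithm). All numerical values are illustrative.

```python
import numpy as np

def journal_bearing_pressure(R=0.05, L=0.05, c=100e-6, eps=0.6,
                             omega=300.0, mu=0.02, n=360, sweeps=3000):
    """Collocated generalised short-bearing solve: p(theta, z) ~ P(theta) * (1 - (2z/L)^2).

    Central-difference / Gauss-Seidel solution of
        (1/R^2) d/dtheta( h^3 dP/dtheta ) - (8 h^3 / L^2) P = 6 mu omega dh/dtheta
    with P >= 0 enforced at every sweep (Christopherson) and P = 0 at theta = 0, 2*pi,
    where the film thickness h = c (1 + eps cos theta) is maximum.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n + 1)
    dtheta = theta[1] - theta[0]
    h = c * (1.0 + eps * np.cos(theta))                   # film thickness
    h3 = h**3
    h3_half = 0.5 * (h3[:-1] + h3[1:])                    # h^3 at mid-nodes i+1/2
    rhs = 6.0 * mu * omega * (-c * eps * np.sin(theta))   # 6*mu*omega*dh/dtheta
    P = np.zeros(n + 1)
    a = 1.0 / (R**2 * dtheta**2)
    for _ in range(sweeps):
        for i in range(1, n):
            west, east = h3_half[i - 1], h3_half[i]
            num = a * (east * P[i + 1] + west * P[i - 1]) - rhs[i]
            den = a * (east + west) + 8.0 * h3[i] / L**2
            P[i] = max(num / den, 0.0)                    # cavitation: no negative pressure
    return theta, P

theta, P = journal_bearing_pressure()
print(f"peak film pressure = {P.max()/1e6:.2f} MPa")
```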
The coupling of all the system components leads to the parametrically excited nonlinear equations of motion of unknown $\mathbf{X}$, which take the general form
(1) $\bigl([M_s]+[M_a]\bigr)\ddot{\mathbf{X}} + [C]\dot{\mathbf{X}} + \bigl([K_s]+[K_g(t,\mathbf{X})]+[K_c]\bigr)\mathbf{X} = \mathbf{F}_0 + \mathbf{F}_g(t,\mathbf{X},\mathbf{X}_R) + \mathbf{F}_b(\mathbf{X},\dot{\mathbf{X}})$
where $[M_s]$ and $[M_a]$ are, respectively, the mass matrix of the shafts and additional inertial elements and $[C]$ is the damping matrix.
Index "$j$" in stiffness matrices $[K_j]$ and external force vectors $\mathbf{F}_j$, respectively, refers to shafts for "$j=s$," gear mesh for "$j=g$," external couplings for "$j=c$," and bearings for "$j=b$." $\mathbf{F}_0$ contains the equivalent nodal forces corresponding to the external torques, mass imbalance, and weight of the parts.
$\mathbf{X}_R$ represents the rigid-body displacement field which is used as the datum for the DOFs, mesh geometry, and shaft misalignments (deviation and inclination).
The nonlinear system (1) is directly solved by combining a Newmark scheme, a Newton-Raphson algorithm, and an iterative process aimed at updating the dynamic characteristics of the meshing process.
At each time step, the bearing reaction forces are calculated by integrating the pressure distribution as opposed to the classic linear theory which relies on first order expansions in the vicinity
of the static solution and leads to stiffness and damping dynamic coefficients.
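The time-marching procedure can be summarised by the schematic fragment below. It is not the authors' solver: the model is reduced to a generic second-order system with placeholder force functions, the tangent used in the Newton-Raphson corrections is kept constant (mean stiffness only), and constant-average-acceleration Newmark parameters are assumed. Its purpose is only to show where, at every time step and every corrector iteration, the nonlinear mesh and bearing forces are re-evaluated from the current displacements and velocities.

```python
import numpy as np

def newmark_newton(M, C, K_mean, f_ext, f_nl, x0, v0, dt, n_steps,
                   beta=0.25, gamma=0.5, tol=1e-8, max_newton=20):
    """Implicit Newmark scheme with Newton-Raphson corrections.

    f_ext(t)      : external nodal force vector (torques, weight, ...)
    f_nl(t, x, v) : nonlinear forces re-evaluated at each iteration
                    (stand-in for the mesh and bearing contributions)
    K_mean        : constant tangent used to build the effective stiffness
    """
    x, v = x0.copy(), v0.copy()
    a = np.linalg.solve(M, f_ext(0.0) + f_nl(0.0, x, v) - C @ v - K_mean @ x)
    S = M / (beta * dt**2) + gamma * C / (beta * dt) + K_mean   # effective tangent
    history = [x.copy()]
    for step in range(1, n_steps + 1):
        t = step * dt
        x_new = x.copy()                        # predictor
        for _ in range(max_newton):
            a_new = (x_new - x - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            res = (M @ a_new + C @ v_new + K_mean @ x_new
                   - f_ext(t) - f_nl(t, x_new, v_new))
            if np.linalg.norm(res) < tol:
                break
            x_new -= np.linalg.solve(S, res)    # Newton-Raphson correction
        x, v, a = x_new, v_new, a_new
        history.append(x.copy())
    return np.array(history)

# Tiny 1-DOF demonstration with an artificial nonlinear restoring force
M = np.array([[1.0]]); C = np.array([[0.5]]); K = np.array([[1.0e4]])
traj = newmark_newton(M, C, K,
                      lambda t: np.array([10.0 * np.sin(50.0 * t)]),
                      lambda t, x, v: np.array([-1.0e8 * x[0]**3]),
                      np.zeros(1), np.zeros(1), dt=1e-3, n_steps=500)
print(traj.shape)
```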
The initial conditions are $\mathbf{X}(0)=\mathbf{X}_0$ and $\dot{\mathbf{X}}(0)=\mathbf{0}$, with $\mathbf{X}_0$ the solution to a static problem of the form
(2) $\bigl([K_s]+[\overline{K}_g]+[K_c]\bigr)\mathbf{X}_0 = \mathbf{F}_0 + \mathbf{F}_b(\mathbf{X}_0,\mathbf{0})$
where $[\overline{K}_g]$ is an averaged mesh stiffness matrix.
The static equilibrium is found by iterating with updated values of oil viscosity, density, and conductivity in each bearing until the running temperature of each bearing has converged.
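The fixed-point character of this static/thermal initialisation can be sketched as below. The viscosity-temperature law, the heat-balance function, and all numerical values are placeholders (the actual model uses the empirical laws of [25] and the global thermal balance of [5]); the fragment only illustrates the loop: solve the static problem with the current viscosities, deduce new bearing temperatures, update the viscosities, and repeat until the temperatures stop changing.

```python
import math

def static_equilibrium(solve_static, bearing_temperature, T_init, T_tol=0.1, max_iter=50):
    """Iterate the static solution until every bearing temperature has converged.

    solve_static(mu)            -> static displacement vector X0 for given viscosities
    bearing_temperature(X0, mu) -> updated temperature of each bearing (deg C)
    Both callables are placeholders standing in for the structural solve and the
    global thermal balance described in the text.
    """
    def viscosity(T):
        # Assumed exponential viscosity-temperature law (stand-in for the laws of [25])
        return 0.09 * math.exp(-0.03 * (T - 45.0))

    T = list(T_init)
    X0 = None
    for _ in range(max_iter):
        mu = [viscosity(t) for t in T]
        X0 = solve_static(mu)
        T_new = bearing_temperature(X0, mu)
        if max(abs(a - b) for a, b in zip(T_new, T)) < T_tol:
            return X0, T_new
        T = T_new
    return X0, T

# Dummy stand-ins just to exercise the loop
X0, T = static_equilibrium(
    solve_static=lambda mu: [1.0 / m for m in mu],
    bearing_temperature=lambda X0, mu: [50.0 + 0.1 * x for x in X0],
    T_init=[45.0, 45.0])
print(T)
```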
3. Test Rig and Simulation
The test rig represented in Figure 1 and Figures 2(a) and 2(b) consists of a single-stage spur or helical geared system with parallel shafts resting on four hydrodynamic journal bearings which are
fixed to the pedestal. The reduction unit is mounted on a cast iron base which is fixed to a reinforced concrete block lying on springs and dampers. The shafts were made to close tolerances, and
particular care was taken in the manufacture of the test rig in order to be consistent with the accuracy of the gears (ISO precision grade 4, close to those used in ship reducers). The gears are jet
lubricated from the oil circulating system common to the gears and the bearings (ISO VG 100). Thermostatic control is provided to keep the unit temperature as constant as possible, and the
temperature of the oil in the sump is 45°C for all tests. Prior to recording data, the transmission was heated until oil temperatures and displacements of the base were stabilized. The pinion speed
varies between 50 and 700 rad/s, and the maximum output torque is 4200 N·m. The spur and helical gear tooth profiles are modified by short linear tip relief of amplitude 20 μm (spur gears) and 13 μm (helical gears) over 20% of the nominal active profile on the pinion and the gear teeth. The peak-to-peak of cumulative pitch errors is within 10 μm for the pinion and 20 μm for the gear.
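For completeness, the tooth-shape deviation corresponding to such a short linear tip relief can be written as a small function of the position along the active profile. The parameterisation below (abscissa normalised between 0 at the root end of the active profile and 1 at the tip, relief rising linearly over the relieved extent) is an assumption for illustration only, not the exact definition used for the test gears.

```python
def linear_tip_relief(s, amplitude, extent=0.2):
    """Profile deviation (same unit as `amplitude`) at normalised abscissa s.

    s         : position along the active profile, 0 = root end, 1 = tip
    amplitude : relief at the tip (e.g. 20e-6 m for the spur gears, 13e-6 m helical)
    extent    : relieved fraction of the active profile (20% in the text)
    """
    s_start = 1.0 - extent
    if s <= s_start:
        return 0.0
    return amplitude * (s - s_start) / extent   # linear ramp up to the tip

# Example: deviation at mid-relief for the 20 micrometre spur-gear modification
print(linear_tip_relief(0.9, 20e-6))   # -> 1e-05
```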
The instrumentation comprises (i) torque measurements, (ii) displacement probes which are positioned in pairs 90 degrees apart at four locations on each shaft, and (iii) strain gauges at the root of
several teeth. Three successive teeth on the pinion are strain-gauged at the tensile side as shown in Figures 3(a) and 3(b) with four active gauges across the face width. Output signals are
transmitted by leads cemented to the pinion/gear faces and inside the hollow shafts to two slip rings which transfer this data from the rotary to the stationary system.
From a modelling point of view, the profiles of all the teeth have been discretized to account for profile modifications and also pitch errors. Each shaft is decomposed into 5 finite elements (Figure
3) whose dimensions are specified in Tables 1, 2, and 3. A unique modal damping factor of 4% has been used to simulate the dissipation within the shaft-gear mesh subsystem whereas the damping
provided by the journal bearings directly stems from the reaction forces determined from Reynolds' equation. In order to consider steady-state solutions, the simulations were launched over 128 mesh periods with 64 time steps per mesh period, and results were considered over the last 64 mesh periods only.
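A common way to turn such a single modal damping factor into a damping matrix for the shaft-gear subsystem is sketched below. This is a standard modal-superposition construction and an assumption about how the 4% factor is applied, not a statement of the authors' exact procedure.

```python
import numpy as np
from scipy.linalg import eigh

def modal_damping_matrix(K, M, zeta=0.04):
    """Build [C] = M * Phi * diag(2*zeta*omega_i) * Phi^T * M from mass-normalised
    mode shapes, so every retained mode gets the same damping ratio `zeta`."""
    eigvals, Phi = eigh(K, M)                  # generalised eigenproblem K x = w^2 M x
    omega = np.sqrt(np.clip(eigvals, 0.0, None))
    return M @ Phi @ np.diag(2.0 * zeta * omega) @ Phi.T @ M

# Tiny 2-DOF check (illustrative matrices)
K = np.array([[2.0e6, -1.0e6], [-1.0e6, 1.0e6]])
M = np.diag([1.0, 2.0])
print(modal_damping_matrix(K, M))
```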
4. Comparisons between Experimental and Numerical Results
Using a classic beam approach for calculating root stresses and a thin-slice model for the teeth, the following approximate expression for slice $k$ is introduced (Figure 3):
(3) $\sigma_k = \dfrac{M_k\,(s_f/2)}{I_k}$
where $M_k$ is the bending moment due to the tooth load when isolating a slice $k$, $I_k$ is the moment of inertia of the tooth section, and $s_f$ is the root tooth thickness at the location where the stress is calculated/measured.
Note that for spur gears, index $k$ can be omitted since it is supposed that there is no axial variation for perfectly aligned gears.
Denoting by $\sigma_0$ the reference fillet stress calculated for the total static normal load passing by the pitch circle at the tooth centre line, a dimensionless tooth root stress is defined as
(4) $\widetilde{\sigma}_k = \sigma_k / \sigma_0$
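As a quick numerical illustration of (3)-(4), the fragment below evaluates the slice stress and its dimensionless form. The slice data are made up, and a rectangular root section is assumed for the slice so that $I_k = b_k s_f^3/12$; neither assumption comes from the test rig.

```python
def slice_root_stress(M_k, I_k, s_f):
    """Equation (3): bending stress at the root of one slice."""
    return M_k * (s_f / 2.0) / I_k

def dimensionless_root_stress(sigma_k, sigma_0):
    """Equation (4): slice stress normalised by the static reference stress."""
    return sigma_k / sigma_0

# Illustrative numbers only (not data from the test rig)
b_k, s_f = 5e-3, 8e-3                       # slice width and root thickness (m)
I_k = b_k * s_f**3 / 12.0                   # rectangular-section assumption
sigma_k = slice_root_stress(M_k=12.0, I_k=I_k, s_f=s_f)   # 12 N*m bending moment
print(sigma_k / 1e6, "MPa")
print(dimensionless_root_stress(sigma_k, sigma_0=150e6))
```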
Considering the spur gear example, Figures 4(a) and 4(b) display a number of comparisons between the measured dimensionless root stresses [16] and the simulation results derived from the dynamic
model and (3)-(4) for two bearing centre distances. A very good agreement is reported particularly when the distance between the bearings is maximal (640 mm). Contrary to the results in [16] where a simplified bearing model based on dynamic coefficients was used, the three major response peaks are correctly simulated (Figure 4(a)) suggesting that the bearings are influential on dynamic tooth root stresses or loads. Both the experimental and numerical results reveal that moving the bearings to the minimal centre distance of 320 mm significantly alters the dynamic load pattern on the teeth. The highest critical speed is shifted from 550 rad/s to more than 600 rad/s for the minimum bearing spacing which logically renders the system stiffer. In this configuration, the two secondary peaks do not clearly emerge any longer in the response curve but the speed range 300–350 rad/s exhibits significantly higher stress levels than the other speeds. This effect is correctly reproduced
by the simulations even if the amplitudes are slightly larger than the experimental ones. It has to be noticed that the simulation curves are generally smoother, probably because of the limited
number of shaft elements used in this model which, especially for the minimum bearing centre distance, cannot properly integrate the influence of the highest modes. The comparisons have been extended
to two different nominal load levels, that is, a gear torque of and 770N·m. It can be observed in Figure 5 that the experimental and numerical dynamic responses are only slightly affected, and, in
particular, the tooth critical speeds are unchanged. Here again, the measurements and the simulation results compare very well.
The corresponding results for the helical gear pair are shown in Figure 6 where the dimensionless maximum root stresses for two different gauges on the same tooth (PA2 and PA4 in Figure 3) are
plotted against the pinion speed. As opposed to the spur gear example, the dynamic stress distribution appears as nonuniform across the tooth face width. The agreement between the experimental
evidence and the simulations is still acceptable, and the variations with the gauge axial position are actually captured by the model. Generally speaking, the dynamic amplifications of the tooth
fillet stress are less marked than for spur gears (a maximum of 1.3 versus 1.6 for the spur gear), and the main critical speed, around 540rad/s, is only visible on the signals delivered by gauge 2.
Focusing on the bearing behaviour, Figure 7 shows the evolutions of the shaft centre positions with speed within the bearing clearance (black circle in the figure) for the spur gear case. Although
only relative measurements have been performed, the experimental and numerical curves are similar both in terms of orientation and magnitude. In particular, a slight difference is reported between
bearings 1 and 2 on the pinion shaft caused by the shaft asymmetry. The following conclusions can be drawn.(i)The lubricant viscosity at each bearing is different depending on speed (Figure 8) since
the actual temperature is modified by the running conditions (higher temperatures in bearings 1 and 2 where the rotational speed is higher than that at bearings 3 and 4).(ii)The bearing reaction
forces vary with speed in relation to the bearing position with respect to the couplings. However, their amplitudes remain close to half the mesh force, and their orientations correspond
approximately to that of the base plane.
Similar results for the helical gear are presented in Figure 9 which prove that, in this case too, the experimental and numerical trajectories are in good agreement. It can be clearly observed that
the shaft positions vary from one bearing to the next, even on the same shaft, and differ from what is found for the spur gear arrangement. The bearing reaction forces are found to have different
orientations, probably caused by the rocking moments brought by the helical teeth (it can be noticed that the differences are more significant on bearings 3 and 4 on the gear shaft because the gear
radius, hence the moment of rocking, is larger).
5. Conclusion
In this paper, a model has been presented which couples the global behaviour of the system (shaft vibrations) and the contacts between the gear teeth and those in the journal bearings. Tooth
microgeometry including shape modifications and errors is taken into consideration, and the influence of temperature on the properties of the bearing lubricant is also considered. The resulting state
equations are solved by a multi-iterative numerical process which simultaneously converges on the DOFs, instant load distributions, and geometries along with the bearing temperatures. The
experimental evidence from a sophisticated single-stage test rig with spur and helical gears has been used to assess the validity and the precision of the model. Despite the limited number of DOFs,
the experimental and numerical root stresses compare very well for both spur and helical gears and for various bearing centre distances. The tooth critical speeds are correctly positioned, and the
associated amplitudes are satisfactory thus validating the proposed model. The average running positions of the shafts within the bearings seem to be correctly simulated, and some differences between
the spur and helical gear cases are pointed out probably because of the rocking moments induced by the helix angle. From a general viewpoint, it is confirmed that gears, shafts, and journal bearings
are dynamically coupled and that accurate bearing models are required in order to predict tooth critical speeds.
1. D. Dowson, “A generalized Reynolds equation for film lubrication,” International Journal of Mechanical Science, vol. 4, no. 2, pp. 159–170, 1962.
2. J. W. Lund, “The stability of an elastic rotor in journal bearings with flexible damped supports,” Journal of Applied Mechanics, vol. 32, no. 4, pp. 911–918, 1965.
3. A. Cameron, The Principles of Lubrication, Longmans, New York, NY, USA, 1966.
4. O. Pinkus and B. Sternlight, Theory of Hydrodynamic Lubrication, McGraw-Hill, New York, NY, USA, 1971.
5. J. Frêne, D. Nicolas, B. Degueurce, D. Berthe, and M. Godet, Lubrification Hydrodynamique—Paliers Et Butées, Éditions Eyrolles, 1998.
6. O. Pinkus and D. J. Wilcock, “Thermal effects in fluid film bearings,” in Proceeding of the 6th Leeds-Lyon Symposium on Tribology: Thermal effects on tribology, pp. 3–23, Mechanical Engineering
Publications, 1980.
7. D. Nicolas, Les paliers hydrodynamiques soumis à un torseur de forces quelconque [Ph.D. thesis], INSA de Lyon, Lyon, France, 1972.
8. B. Fantino, J. Frene, and J. Du Parquet, “Elastic connecting-rod bearing with piezoviscous lubricant: analysis of the steady-state characteristics,” Journal of lubrication technology, vol. 101,
no. 2, pp. 190–200, 1979.
9. H. N. Özgüven and D. R. Houser, “Mathematical models used in gear dynamics. A review,” Journal of Sound and Vibration, vol. 121, no. 3, pp. 383–411, 1988.
10. P. Velex, “Modélisation du comportement dynamique des transmissions par engrenages,” in Comportement Dynamique Et Acoustique Des Transmissions Par Engrenages, CETIM, Ed., Chapter 2, pp. 39–95,
11. G. W. Blankenship and R. Singh, “A new gear mesh interface dynamic model to predict multi-dimensional force coupling and excitation,” Mechanism and Machine Theory, vol. 30, no. 1, pp. 43–57,
1995.
12. F. Küçükay, “Dynamic behaviour of high speed gears,” in Proceedings of the 3rd International Conference “Vibrations on Rotating Machinery”, pp. 81–90, The Institution of Mechanical Engineers,
York, UK, September 1984.
13. R. W. Munro, The dynamic behaviour of spur gears [Ph.D. thesis], Cambridge University, Cambridge, UK, 1962.
14. P. Velex and M. Maatar, “A mathematical model for analyzing the influence of shape deviations and mounting errors on gear dynamic behaviour,” Journal of Sound and Vibration, vol. 191, no. 5, pp.
629–660, 1996.
15. M. Ajmi and P. Velex, “A model for simulating the quasi-static and dynamic behaviour of solid wide-faced spur and helical gears,” Mechanism and Machine Theory, vol. 40, no. 2, pp. 173–190, 2005.
16. S. Baud and P. Velex, “Static and dynamic tooth loading in spur and helical geared systems-experiments and model validation,” Journal of Mechanical Design, vol. 124, no. 2, pp. 334–346, 2002.
17. S. Theodossiades and S. Natsiavas, “On geared rotordynamic systems with oil journal bearings,” Journal of Sound and Vibration, vol. 243, no. 4, pp. 721–745, 2001.
18. C. S. Chen, S. Natsiavas, and H. D. Nelson, “Coupled lateral-torsional vibration of a gear-pair system supported by a squeeze film damper,” Journal of Vibration and Acoustics, vol. 120, no. 4,
pp. 860–867, 1998.
19. S. Baguet and G. Jacquenot, “Nonlinear couplings in a gear-shaft-bearing system,” Mechanism and Machine Theory, vol. 45, no. 12, pp. 1777–1796, 2010.
20. S. M. Rhode and D. F. Li, “A generalized short bearing theory,” Journal of Lubrication Technology, vol. 102, no. 3, pp. 278–282, 1980.
21. N. Abdul-Wahed, Comportement dynamique des paliers fluides - Etude linéaire et non linéaire [Ph.D. thesis], INSA Lyon, Lyon, France, 1982.
22. C. Weber and K. Banaschek, “The deformation of loaded gears and the effect on their load-carrying capacity. Part 5,” Tech. Rep. 6, Department of Scientific and Industrial Research, London, UK,
23. G. Lundberg, “Elastische Berührung zweier Halbräume,” Forschung auf dem Gebiete des Ingenieurwesens, vol. 10, no. 5, pp. 201–211, 1939.
24. D. G. Christopherson, “A new mathematical method for the solution of oil film lubrication problems,” Proceedings of the Institution of Mechanical Engineers, vol. 146, pp. 126–135, 1941.
25. Norme et standard, ISO/TR, 15144-1, 2009. | {"url":"http://www.hindawi.com/journals/at/2012/163575/","timestamp":"2014-04-19T05:44:54Z","content_type":null,"content_length":"108954","record_id":"<urn:uuid:4e8aaf8d-c5ed-4161-b2f6-acdd490ac22b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Given that x1*x2 = - 3 calculate t if x^2 - 2x +t = 0
Being a quadratic, the equation has 2 roots.
We know, from the problem statement, that the product of the roots is -3.
We know also, using Viete's relations, that the product of the roots of the quadratic is the ratio: c/a.
We'll identify the coefficients c and a.
c = t
a = 1
So, x1*x2 = c/a
We'll substitute the product by the value -3 and the ratio by the identified coefficients.
-3 = t/1
So, t = -3.
The quadratic equation is:
x^2 - 2x -3= 0
x1 = {-b - [b^2 - 4ac]^(1/2)}/(2a) = [2 - (2^2 - 4*1*t)^(1/2)]/(2*1), and x2 = [2 + (4 - 4t)^(1/2)]/2
x1*x2 = -3, so {[2 - (4 - 4t)^(1/2)]*[2 + (4 - 4t)^(1/2)]}*(1/4) = -3
[4 - (4 - 4t)]*(1/4) = -3, so (4 - 4 + 4t)*(1/4) = -3
(0 + 4t)*(1/4) = -3, so t = -3
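A quick way to double-check the result (a minimal sketch using Python's sympy library; the variable names are only for illustration):

import sympy as sp

x, t = sp.symbols('x t')
r1, r2 = sp.solve(sp.Eq(x**2 - 2*x + t, 0), x)      # the two roots of x^2 - 2x + t = 0
print(sp.solve(sp.Eq(sp.simplify(r1*r2), -3), t))   # impose x1*x2 = -3  ->  [-3]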
geometry question
August 14th 2011, 08:46 AM
geometry question
hey everybody,
im trying to solve this question for a long time without much success...
i succeeded only with trigonometry but this is not helping because i need to solve the question with geometry only!
so here the question:
The center of the squares are Q and P
I Need to prove OP=OQ
Now, i found this question helpful:
on each side of Parallelogram there is a square
I need to prove that:
If the centers of the squares are connectd
a square forms in the middle
if you can prove that the square in the middle has right angles
then you can prove the first question (look at the green line, it's a symmetry line and also the hypotenuse)
thank you for the help
August 14th 2011, 11:22 AM
Re: geometry question
It looks as though you have made a very good start on this problem. You do not actually need to show that the "square in the middle" (which I assume is the thing with the blue diagonals) is a
square. It will be sufficient to show that it is a rhombus, because the diagonals of a rhombus intersect at right angles. To do that, prove that the red triangles are congruent (two sides,
included angle).
In fact, it certainly seems to be true that the "square in the middle" really is a square, but I don't offhand see how to prove that.
August 15th 2011, 01:42 AM
Re: geometry question
thank you for the help
i found a way to show that this is a square:
if you can prove that the triangles are congruent (I couldn't find how to prove that one pair of angles is equal)
you can see that there is a right angle in the middle, and then by subtracting an angle and adding it again you can find that the square has a right angle, and then the question is solved!
but i can't find the angle, so i'll be glad if you write the angle you were talking about and how to prove that, of course
thank you!!!
August 15th 2011, 01:54 AM
Re: geometry question
i found a way to show that this is a square:
if you can prove that the triangles are congruent (I couldnt find how to prove that one of the angles are even) you can see that there is a right angle in the middle and then by reducing angle
and adding it again you can find that the square has a right angle and then the question is solved!
Very nice! (Clapping)
Look at the obtuse angle in the two triangles. In each case it consists of three parts. Two of these parts are 45º angles (the angle between the side of a square and the diagonal). The part in
the middle, in the left red triangle, is the acute angle of the parallelogram. In the right red triangle, you can see (from the fact that all the angles at that point add up to 360º) that the
part in the middle is the supplement of the obtuse angle of the parallelogram, and is therefore equal to the acute angle of the parallelogram.
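To put the same argument in symbols (just a restatement of the above): write $\alpha$ for the acute angle and $\beta = 180^\circ - \alpha$ for the obtuse angle of the parallelogram. The obtuse angle of the left red triangle is $45^\circ + \alpha + 45^\circ$. At the other vertex the angles add up to $360^\circ$, so the middle part there is $360^\circ - 90^\circ - 90^\circ - \beta = 180^\circ - \beta = \alpha$, and the obtuse angle of the right red triangle is again $45^\circ + \alpha + 45^\circ$. Since the sides enclosing these angles are half-diagonals of the squares, the two red triangles are congruent (two sides, included angle).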
August 15th 2011, 01:58 AM
Re: geometry question
thank you very much problem solved!!!
August 15th 2011, 02:55 AM
Re: geometry question
Here's another solution which i think is simpler.
$K$ is mid-point of $BP$. $L$ is mid point of $CQ$. $D$ is mid-point of $BC$.
this gives $|CP|=2|DK|, \, |BQ|=2|DL|.$.
It's easy to see that $\Delta PAC \cong \Delta BAQ \Rightarrow |BQ|=|CP| \Rightarrow |DK|=|DL|$.
Patent application title: MODELING CIRCUIT OF HIGH-FREQUENCY DEVICE AND MODELING METHOD THEREOF
There are provided a modeling circuit of a high-frequency device capable of providing a more accurate modeling circuit having a higher-order resonance by dividedly modeling an overlap zone and a
non-overlap zone of the high-frequency device, and a modeling method thereof. The modeling circuit of a high-frequency device, which comprises an overlap zone where the two electrodes are overlapped
with each other, a non-overlap zone where the overlap zone is absent between the two electrodes, the overlap and non-overlap zones being formed by stacking two or more electrodes on top of each other
in a constant distance, and terminations electrically coupled with some parts of the two electrodes, comprises a first circuit block comprising a first capacitor and a first conductor that model the
overlap zone of the high-frequency device on the basis of coupled transmission line theory; and a second circuit block comprising a first inductor and a first register that model the overlap zone of
the high-frequency device on the basis of coupled transmission line theory and model the non-overlap zone and the terminations of the high-frequency device on the basis of a Series RL model.
A modeling circuit of a high-frequency device that comprises an overlap zone where the two electrodes are overlapped with each other, a non-overlap zone where the overlap zone is absent between the
two electrodes, the overlap and non-overlap zones being formed by stacking two or more electrodes on top of each other in a constant distance, and terminations electrically coupled with some parts of
the two electrodes, comprising:a first circuit block comprising a first capacitor and a first conductor that model the overlap zone of the high-frequency device on the basis of coupled transmission
line theory; anda second circuit block comprising a first inductor and a first register that model the overlap zone of the high-frequency device on the basis of coupled transmission line theory and
model the non-overlap zone and the terminations of the high-frequency device on the basis of a Series RL model,wherein the first and second circuit blocks are combined to form a primary self
resonance of the high-frequency device.
The modeling circuit of claim 1, wherein the first and second circuit blocks are arranged between first and second ports for inputting/outputting external signals and coupled in series with the first
and second ports,the first capacitor and the first conductor of the first circuit block are arranged between the first port and the second circuit block and coupled in parallel with each other,
andthe first inductor and the first register of the second circuit block are arranged between the first circuit block and the second port and coupled in series with each other.
The modeling circuit of claim 2, wherein the first capacitor is formed on the basis of Equation: C_1st = C_m·l·N, the first conductor is formed on the basis of Equation: G_1st = G_m·l·N, the first inductor is formed on the basis of Equation: L_1st = (l/(2N))·(L_self + L_m) + ((4l'/N)·L_self + 2·L_T), and the first register is formed on the basis of Equation: R_1st = (l/(2N))·(R_self + R_m) + ((4l'/N)·R_self + 2·R_T), wherein, C_1st represents a first capacitor, C_m represents a capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, L_1st represents a first inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, R_1st represents a first register, R_self represents self resistance per unit distance, R_m represents resistance per unit distance, and R_T represents equivalent resistance of the terminations.
The modeling circuit of claim 2, wherein the high-frequency device is mounted on a printed circuit board,the printed circuit board comprises a signal transmission line having the high-frequency
device mounted on one surface thereof and a ground pattern formed in the other surface that is opposite to the one surface thereof, andthe modeling circuit further comprises first and second
substrate circuit blocks that model a parasitic admittance between the high-frequency device and the printed circuit board.
The modeling circuit of claim 4, wherein the first substrate circuit block is arranged between the first port and a ground and coupled in series with the first port and the ground, and the second
substrate circuit block is arranged between the second port and a ground and coupled in series with the second port and the ground, andeach of the first and second substrate circuit blocks
comprises:a parasitic conductor arranged between the first and second ports and the ground and coupled in series with the first and second ports and the ground; anda parasitic register and a
parasitic capacitor arranged between the first and second ports and the ground and coupled in series with the first and second ports and the ground, and coupled in parallel with the parasitic
The modeling circuit of claim 2, wherein the modeling circuit further comprises a higher order resonant circuit block having impedance that models the overlap zone of the high-frequency device on the
basis of coupled transmission line theory and forms a second or higher self resonance of the high-frequency device.
The modeling circuit of claim 6, wherein the higher order resonant circuit block comprises: a second capacitor formed on the basis of an equation expressed in terms of the first capacitor C_1st; a second conductor formed on the basis of an equation expressed in terms of the first conductor G_1st; a second inductor formed on the basis of Equation: L_2nd = (l/(6N))·(L_self − L_m); and a second register formed on the basis of Equation: R_2nd = (l/(6N))·(R_self − R_m), wherein, C_2nd represents a second capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, C_1st represents a first capacitor, G_2nd represents a second conductor, G_m represents conductance per unit distance, G_1st represents a first conductor, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, R_2nd represents a second register, R_self represents self resistance per unit distance, and R_m represents resistance per unit distance.
The modeling circuit of claim 7, wherein the higher order resonant circuit block is arranged between the second circuit block and the second port and coupled in series with the second circuit block
and the second port,the second inductor and the second register are coupled in series with each other, andthe second inductor and the second register, the second capacitor, and the second conductor
are arranged between the second circuit block and the second port and coupled in parallel with each other.
The modeling circuit of claim 8, wherein the capacitance per unit distance is calculated on the basis of Equation: C_m = C_1st/(l·N), the conductance per unit distance is calculated on the basis of Equation: G_m = G_1st/(l·N), the self resistance per unit distance is calculated on the basis of Equation: R_self = (6N/l)·R_2nd, the inductance per unit distance is calculated on the basis of Equation: L_m = L_self − (6N/l)·L_2nd, the equivalent resistance of the terminations is calculated on the basis of Equation: R_T = (1/2)·{R_1st − (l/(2N) + 4l'/N)·R_self}, and the equivalent inductance of the terminations is calculated on the basis of Equation: L_T = L_1st/2 + (3/2)·L_2nd − ((l + 4l')/(2N))·L_self, wherein, C_1st represents a first capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, R_self represents self resistance per unit distance, R_2nd represents a second register, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, and R_T represents equivalent resistance of the terminations.
A method for modeling a high-frequency device that comprises an overlap zone where the two electrodes are overlapped with each other, a non-overlap zone where the overlap zone is absent between the
two electrodes, the overlap and non-overlap zones being formed by stacking two or more electrodes on top of each other in a constant distance, and terminations electrically coupled with some parts of
the two electrodes, comprising:modeling the overlap zone of the high-frequency device on the basis of coupled transmission line theory, and the non-overlap zone and the terminations of the
high-frequency device on the basis of a Series RL model; andextracting each parameter of the modeled circuits from actually measured self resonance frequency of the high-frequency device to
substitute the each parameter to the modeled circuits.
The method of claim 10, wherein the step of modeling the overlap zone, the non-overlap zone and the terminations of the high-frequency device comprises:modeling, at a first circuit block comprising a
first capacitor and a first conductor, the overlap zone of the high-frequency device on the basis of coupled transmission line theory; andmodeling, at a second circuit block comprising a first
inductor and a first register, the overlap zone of the high-frequency device on the basis of coupled transmission line theory and the non-overlap zone and the terminations on the basis of a Series RL model.
The method of claim 11, wherein the first capacitor is formed on the basis of Equation: C_1st = C_m·l·N, the first conductor is formed on the basis of Equation: G_1st = G_m·l·N, the first inductor is formed on the basis of Equation: L_1st = (l/(2N))·(L_self + L_m) + ((4l'/N)·L_self + 2·L_T), and the first register is formed on the basis of Equation: R_1st = (l/(2N))·(R_self + R_m) + ((4l'/N)·R_self + 2·R_T), wherein, C_1st represents a first capacitor, C_m represents a capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, L_1st represents a first inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, R_1st represents a first register, R_self represents self resistance per unit distance, R_m represents resistance per unit distance, and R_T represents equivalent resistance of the terminations.
The method of claim 11, wherein the high-frequency device is mounted on a printed circuit board, the printed circuit board comprises a signal transmission line having the high-frequency device mounted on one surface thereof and a ground pattern formed in the other surface that is opposite to the one surface thereof, the modeling circuit further comprises first and second substrate circuit blocks that model a parasitic admittance between the high-frequency device and the printed circuit board, and the step of modeling the overlap zone, the non-overlap zone and the terminations of the high-frequency device further comprises: modeling first and second substrate circuit blocks each having a parasitic admittance between the high-frequency device and the printed circuit board.
The method of claim 13, wherein the step of modeling first and second substrate circuit blocks further comprises: modeling a higher order resonant circuit block having impedance that models the
overlap zone of the high-frequency device on the basis of coupled transmission line theory and forms a second or higher self resonance of the high-frequency device.
The method of claim 14, wherein the higher order resonant circuit block comprises: a second capacitor formed on the basis of an equation expressed in terms of the first capacitor C_1st; a second conductor formed on the basis of an equation expressed in terms of the first conductor G_1st; a second inductor formed on the basis of Equation: L_2nd = (l/(6N))·(L_self − L_m); and a second register formed on the basis of Equation: R_2nd = (l/(6N))·(R_self − R_m), wherein, C_2nd represents a second capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, C_1st represents a first capacitor, G_2nd represents a second conductor, G_m represents conductance per unit distance, G_1st represents a first conductor, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, R_2nd represents a second register, R_self represents self resistance per unit distance, and R_m represents resistance per unit distance.
The method of claim 15, wherein the capacitance per unit distance is calculated on the basis of Equation: C_m = C_1st/(l·N), the conductance per unit distance is calculated on the basis of Equation: G_m = G_1st/(l·N), the self resistance per unit distance is calculated on the basis of Equation: R_self = (6N/l)·R_2nd, the inductance per unit distance is calculated on the basis of Equation: L_m = L_self − (6N/l)·L_2nd, the equivalent resistance of the terminations is calculated on the basis of Equation: R_T = (1/2)·{R_1st − (l/(2N) + 4l'/N)·R_self}, and the equivalent inductance of the terminations is calculated on the basis of Equation: L_T = L_1st/2 + (3/2)·L_2nd − ((l + 4l')/(2N))·L_self, wherein, C_1st represents a first capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, R_self represents self resistance per unit distance, R_2nd represents a second register, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, and R_T represents equivalent resistance of the terminations.
This application claims the priority of Korean Patent Application No. 2008-132664 filed on Dec. 23, 2008, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
BACKGROUND OF THE INVENTION [0002]
1. Field of the Invention
The present invention relates to a modeling circuit and a modeling method thereof, and more particularly, to a modeling circuit of a high-frequency device capable of providing a more accurate
modeling circuit having a higher-order resonance by dividedly modeling an overlap zone and a non-overlap zone of the high-frequency device, and a modeling method thereof.
2. Description of the Related Art
In recent years, wireless communication systems have been widely used with their advantages such as portability and accessibility.
These wireless communication systems use a radio frequency signal to process information. For this purpose, a high frequency circuit for processing a radio frequency signal is used in the wireless
communication systems.
Devices determining electrical characteristics of circuits are used in the above-mentioned high frequency circuit, and an inductor, a capacitor, a transmission line and the like are used as these
For example, a multi-layer ceramic capacitor (MLCC) used in processes such as impedance matching and filtering is used in the above-mentioned high frequency circuit. When high-frequency devices such
as a multi-layer chip capacitor are driven at a high frequency bandwidth, an accurate and reliable modeling circuit is required due to a variety of electrical characteristics such as parasitic
capacitance, parasitic inductance, etc.
Also, when high-frequency devices are driven at high frequencies, they exhibit characteristics such as a self resonance frequency. When the high-frequency devices are driven in a frequency band above the self resonance frequency, they show a second or higher resonance at a higher-order resonance frequency. Therefore, a modeling circuit which can accurately reproduce characteristics such as higher-order frequency responses is required.
SUMMARY OF THE INVENTION [0010]
An aspect of the present invention provides a modeling circuit of a high-frequency device capable of providing a more accurate modeling circuit having a higher-order resonance by dividedly modeling
an overlap zone and a non-overlap zone of the high-frequency device.
Another aspect of the present invention provides a modeling method of the modeling circuit.
According to an aspect of the present invention, there is provided a modeling circuit of a high-frequency device that includes an overlap zone where the two electrodes are overlapped with each other,
a non-overlap zone where the overlap zone is absent between the two electrodes, the overlap and non-overlap zones being formed by stacking two or more electrodes on top of each other in a constant
distance, and terminations electrically coupled with some parts of the electrodes, the modeling circuit including a first circuit block including a first capacitor and a first conductor that model
the overlap zone of the high-frequency device on the basis of coupled transmission line theory; and a second circuit block including a first inductor and a first register that model the overlap zone
of the high-frequency device on the basis of coupled transmission line theory and model the non-overlap zone and the terminations of the high-frequency device on the basis of a Series RL model,
wherein the first and second circuit blocks are combined to form a primary self resonance of the high-frequency device.
In this case, the first and second circuit blocks may be arranged between first and second ports for inputting/outputting external signals and be coupled in series with the first and second ports,
the first capacitor and the first conductor of the first circuit block may be arranged between the first port and the second circuit block and be coupled in parallel with each other, and the first
inductor and the first register of the second circuit block may be arranged between the first circuit block and the second port and be coupled in series with each other.
Also, the first capacitor may be formed on the basis of Equation: C_1st = C_m·l·N, the first conductor may be formed on the basis of Equation: G_1st = G_m·l·N, the first inductor may be formed on the basis of Equation: L_1st = (l/(2N))·(L_self + L_m) + ((4l'/N)·L_self + 2·L_T), and the first register may be formed on the basis of Equation: R_1st = (l/(2N))·(R_self + R_m) + ((4l'/N)·R_self + 2·R_T), wherein, C_1st represents a first capacitor, C_m represents a capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, L_1st represents a first inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, R_1st represents a first register, R_self represents self resistance per unit distance, R_m represents resistance per unit distance, and R_T represents equivalent resistance of the terminations.
In addition, the high-frequency device may be mounted on a printed circuit board, the printed circuit board may include a signal transmission line having the high-frequency device mounted on one
surface thereof and a ground pattern formed in the other surface that is opposite to the one surface thereof, and the modeling circuit may further include first and second substrate circuit blocks
that model a parasitic admittance between the high-frequency device and the printed circuit board.
Additionally, the first substrate circuit block may be arranged between the first port and a ground and be coupled in series with the first port and the ground, and the second substrate circuit block
is arranged between the second port and a ground and coupled in series with the second port and the ground, and each of the first and second substrate circuit blocks may include a parasitic conductor
arranged between the first and second ports and the ground and coupled in series with the first and second ports and the ground; and a parasitic register and a parasitic capacitor arranged between
the first and second ports and the ground and coupled in series with the first and second ports and the ground, and coupled in parallel with the parasitic conductor.
Also, the modeling circuit may further include a higher order resonant circuit block having an impedance that models the overlap zone of the high-frequency device on the basis of coupled transmission
line theory and forms a second or higher self resonance of the high-frequency device.
In addition, the higher order resonant circuit block may include a second capacitor formed on the basis of an equation expressed in terms of the first capacitor C_1st; a second conductor formed on the basis of an equation expressed in terms of the first conductor G_1st; a second inductor formed on the basis of Equation: L_2nd = (l/(6N))·(L_self − L_m); and a second register formed on the basis of Equation: R_2nd = (l/(6N))·(R_self − R_m), wherein, C_2nd represents a second capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, C_1st represents a first capacitor, G_2nd represents a second conductor, G_m represents conductance per unit distance, G_1st represents a first conductor, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, R_2nd represents a second register, R_self represents self resistance per unit distance, and R_m represents resistance per unit distance.
Additionally, the higher order resonant circuit block may be arranged between the second circuit block and the second port and be coupled in series with the second circuit block and the second port,
the second inductor and the second register may be coupled in series with each other, and the second inductor and the second register, the second capacitor, and the second conductor may be arranged
between the second circuit block and the second port and be coupled in parallel with each other.
Furthermore, the capacitance per unit distance may be calculated on the basis of Equation: C_m = C_1st/(l·N), the conductance per unit distance may be calculated on the basis of Equation: G_m = G_1st/(l·N), the self resistance per unit distance may be calculated on the basis of Equation: R_self = (6N/l)·R_2nd, the inductance per unit distance may be calculated on the basis of Equation: L_m = L_self − (6N/l)·L_2nd, the equivalent resistance of the terminations may be calculated on the basis of Equation: R_T = (1/2)·{R_1st − (l/(2N) + 4l'/N)·R_self}, and the equivalent inductance of the terminations may be calculated on the basis of Equation: L_T = L_1st/2 + (3/2)·L_2nd − ((l + 4l')/(2N))·L_self, wherein, C_1st represents a first capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, R_self represents self resistance per unit distance, R_2nd represents a second register, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, and R_T represents equivalent resistance of the terminations.
According to another aspect of the present invention, there is provided a method for modeling a high-frequency device that includes an overlap zone where the two electrodes are overlapped with each
other, a non-overlap zone where the overlap zone is absent between the two electrodes, the overlap and non-overlap zones being formed by stacking two or more electrodes on top of each other in a
constant distance, and terminations electrically coupled with some parts of the two electrodes. Here, the method includes: modeling the overlap zone of the high-frequency device on the basis of
coupled transmission line theory, and the non-overlap zone and the terminations of the high-frequency device on the basis of a Series RL model; and extracting each parameter of the modeled circuits
from actually measured self resonance frequency of the high-frequency device to substitute the each parameter to the modeled circuits.
BRIEF DESCRIPTION OF THE DRAWINGS [0023]
The above and other aspects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying
drawings, in which:
[0024]FIG. 1
is a cross cut diagram illustrating a conventional high-frequency device.
[0025]FIG. 2
is a cross-sectional view illustrating a high-frequency device having divided zones according to one exemplary embodiment of the present invention.
[0026]FIG. 3
is a flowchart illustrating a modeling method according to one exemplary embodiment of the present invention.
FIGS. 4A to 4C are diagrams illustrating modeling circuits of a high-frequency device using coupled transmission line theory.
FIGS. 5A and 5B are diagrams illustrating exploded models of modeling circuits of a high-frequency device according to one exemplary embodiment of the present invention.
FIGS. 6A and 6B are diagrams illustrating modeling circuits of a high-frequency device according to one exemplary embodiment of the present invention.
FIGS. 7A and 7B are diagrams illustrating finally assembled modeling circuits of a high-frequency device according to one exemplary embodiment of the present invention.
FIGS. 8A to 8D are diagrams illustrating a modeling circuit of a high-frequency device according to one exemplary embodiment of the present invention, and actually measured electrical characteristics
of the high-frequency device.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0032]
Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
[0033]FIG. 1
is a cross cut diagram illustrating a conventional high-frequency device.
Referring to
FIG. 1
, a conventional high-frequency device, particularly a multi-layer ceramic capacitor (MLCC) is formed of dielectric, and has a plurality of electrodes stacked on top thereof. Here, the electrodes are
electrically coupled with terminations formed outside the dielectric, respectively.
[0035]FIG. 2
is a cross-sectional view illustrating a high-frequency device having divided zones according to one exemplary embodiment of the present invention.
Referring to FIG. 2, the high-frequency device thus configured may have a plurality of electrodes (a first electrode to an (N+1)-th electrode) stacked on top thereof. Here, the first and second electrodes may be defined as a first layer, and, thus, the N-th and (N+1)-th electrodes may be defined as an N-th layer.
When electrodes are stacked on top of each other, the above-mentioned layers may be divided into an overlap zone (l) and a non-overlap zone (l'). Here, each of the layers may be electrically coupled
with one of the terminations.
Also, the above-mentioned high-frequency device may be mounted on a transmission line formed on a printed circuit board (PCB). In this case, a parasitic admittance may be present between the
high-frequency device and the printed circuit board.
That is to say, the high-frequency device according to one exemplary embodiment of the present invention may be divided into an overlap zone, a non-overlap zone, terminations and a parasitic
admittance, depending on the electrical characteristics.
Therefore, respective parts of the high-frequency device may be modeled to form a modeling circuit as described later.
[0041]FIG. 3
is a flowchart illustrating a modeling method according to one exemplary embodiment of the present invention.
Referring to
FIG. 3
, for the modeling method according to one exemplary embodiment of the present invention, an overlap zone of the high-frequency device is first modeled on the basis of coupled transmission line
theory, and a non-overlap zone of the high-frequency device is modeled on the basis of a Series RL model (S10). In addition, it is possible to model a parasitic admittance between the high-frequency
device and the printed circuit board.
Next, a modeling circuit applied to a first layer is extended into the whole layers (S20), a self resonance frequency (SRF) of the high-frequency device is measured (S30), and each of the parameters
of the extended modeling circuit are extracted to complete a modeling circuit (S40).
The above-mentioned modeling method is described in more detail with reference to the accompanying drawings.
FIGS. 4A to 4C are diagram illustrating modeling circuits of a high-frequency device using coupled transmission line theory.
When the coupled transmission line theory applies to the first layer, a modeling circuit may be shown as in
FIG. 4A
. That is, when it is assumed that each of the first and second electrodes is referred to as one transmission line, the first and second electrodes are arranged electrically close to each other,
which makes it possible to apply the coupled transmission line theory to the first layer.
Each of the first and second electrodes has resistances (R1 and R2) and inductances (L1 and L2) within a predetermined unit length (Δx). Here, coupling inductance (L12), coupling resistance (R12),
conductance (G12) and capacitance (C12) are present between the first and the second electrodes. Also, conductance (G1) and capacitance (C1) are present between the first electrodes and a ground, and
conductance (G2) and capacitance (C2) are present between the second electrodes and the ground.
The above-mentioned electrical parameters in a time domain may be defined as currents (I1 and I2) and voltages (V1 and V2) according to the Telegrapher's Equation, as follows.
d/dx [V1(x); V2(x)] = −[R1, R12; R12, R2]·[I1(x); I2(x)] − jω·[L1, L12; L12, L2]·[I1(x); I2(x)]   (Equation 1)
d/dx [I1(x); I2(x)] = [−(G1 + G12), G12; G12, −(G2 + G12)]·[V1(x); V2(x)] − jω·[C1 + C12, −C12; −C12, C2 + C12]·[V1(x); V2(x)]   (Equation 2)
Since the grounds are arranged in a more remote distance than a gap between the first and second electrodes in the case of the modeling circuit as shown in
FIG. 4A
, it may be considered that the electrical effect of the grounds on each electrode is small compared to that between the first and second electrodes. Therefore, since the conductances (G1
and G2) and capacitances (C1 and C2) are ignorably low, the modeling circuit where there is no electrical effect of the grounds may be shown as in
FIG. 4B
. On the basis of the modeling circuit according to the coupled transmission line theory as shown in
FIG. 4B
, a modeling circuit of the first layer may be shown as in
FIG. 4c
That is, when a voltage (Vo) is applied to a capacitor of the first layer, an electrical circuit of the capacitor is shown in
FIG. 4c
. Currents (I1 and I2) flow when a voltage (Vo) is applied to the capacitor. As a length (x) increases, the current (I1) in the first electrode gradually decreases from Io to 0, and the current (I2)
in the second electrode gradually increases from 0 to Io.
Also, it may be defined that the current at the starting point of the first electrode and the current at the end point of the second electrode both have the same magnitude Io, and that the sum of the currents (I1 and I2) at each position along the length also equals Io.
Therefore, an impedance (Z_overlap) in the overlap zone may be calculated as represented by the following Equation 3.
Z_overlap = (l/2)·(Z + Z_M) + ((Z − Z_M)/γ)·(1 + cosh γl)/(sinh γl)   (Equation 3)
where Z = R_self + jω·L_self = R1 + jω·L1 = R2 + jω·L2, Z_M = R_m + jω·L_m = R12 + jω·L12, γ² = 2·Y_M·(Z − Z_M), and Y_M = G_m + jω·C_m, wherein l represents a length of an overlap zone. Here, sinh(γl) and cosh(γl) may be interpreted according to the Maclaurin series, as follows.
sinh(γl) = (γl) + (1/6)·(γl)³ + (1/120)·(γl)⁵ + ...
cosh(γl) = 1 + (1/2)·(γl)² + (1/24)·(γl)⁴ + ...
Also, an impedance (Z_non-overlap) in the non-overlap zone may be calculated as represented by the following Equation 4.
Z_non-overlap = Z·l'   (Equation 4)
wherein, l' represents a length of a non-overlap zone.
Therefore, an impedance (Z_cap_1-layer) of the first layer is calculated as represented by the following Equation 5.
Z_cap_1-layer = 2·Z_T + (Z_overlap + 2·Z_non-overlap)   (Equation 5)
wherein, Z_T represents impedance of the terminations, and may be represented by Equation: Z_T = R_T + jω·L_T, by using the loss of the terminations and the impedance.
The modeling circuit in the above-mentioned first layer may be extended into the first layer to an N
layer, as shown in
FIG. 5A
, and may have a parallel electrical configuration as shown in
FIG. 5B
when the overlap zone (Zoverlap) and the non-overlap zone (Znon-overlap) have the same electric potential at their junction.
The electrical configuration as shown in
FIG. 5B
may be represented by the following Equation 6.
Z_cap_N-layer = 2·Z_T + [(Z_overlap + 3·Z_non-overlap)/2] // [(Z_overlap + 4·Z_non-overlap)/(N − 2)] ≈ Z_overlap/N + 4·Z_non-overlap/N + 2·Z_T   (Equation 6)
The following Equation 7 is presented by substituting the above-mentioned Equations 3 and 4 into Equation 6.
Z_cap_N-layer = (l/(2N))·(Z + Z_M) + ((Z − Z_M)/(γN))·(1 + cosh γl)/(sinh γl) + 4·Z·l'/N + 2·Z_T   (Equation 7)
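Equation 7 is straightforward to evaluate numerically; the short Python sketch below does so for a generic set of per-unit-length parameters (all numeric values, and the function name, are invented placeholders rather than values of the embodiment).

import numpy as np

def z_cap_n_layer(freq_hz, l=1.0e-3, l_prime=0.1e-3, N=100,
                  R_self=50.0, L_self=4.0e-7, R_m=0.0, L_m=3.5e-7,
                  G_m=1.0e-6, C_m=1.0e-5, R_T=5.0e-3, L_T=5.0e-11):
    # Per-unit-length impedances/admittance of the coupled electrode pair
    w = 2.0 * np.pi * freq_hz
    Z = R_self + 1j * w * L_self
    ZM = R_m + 1j * w * L_m
    YM = G_m + 1j * w * C_m
    gamma = np.sqrt(2.0 * YM * (Z - ZM))
    gl = gamma * l
    # Equation 7: N-layer impedance of the stacked structure
    return ((l / (2.0 * N)) * (Z + ZM)
            + (Z - ZM) / (gamma * N) * (1.0 + np.cosh(gl)) / np.sinh(gl)
            + 4.0 * Z * l_prime / N
            + 2.0 * (R_T + 1j * w * L_T))

f = np.logspace(6, 10, 5)   # 1 MHz to 10 GHz
print(np.abs(z_cap_n_layer(f)))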
As shown in
FIG. 6A
, a first-order modeling circuit of the high-frequency device according to one exemplary embodiment of the present invention may be obtained by substituting the linear terms of sinh(γl) and cosh(γl)
for the above-mentioned Equation 7, and, as shown in
FIG. 6B
, a second-order modeling circuit of the high-frequency device according to one exemplary embodiment of the present invention may be obtained by substituting the linear and quadratic terms of sinh(γl)
and cosh(γl) for the above-mentioned Equation 7.
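A convenient way to read these substitutions: since (1 + cosh γl)/(sinh γl) = coth(γl/2) and coth(x) ≈ 1/x + x/3 for small x, the leading term of Equation 7 becomes (Z − Z_M)/(γN)·2/(γl) = 1/(Y_M·l·N), which is exactly the parallel C_1st/G_1st block with C_1st = C_m·l·N and G_1st = G_m·l·N; the remaining first-order terms l/(2N)·(Z + Z_M) + 4·Z·l'/N + 2·Z_T supply R_1st and L_1st; and the next term of the expansion, (Z − Z_M)·l/(6N), supplies the higher-order elements R_2nd = (l/(6N))·(R_self − R_m) and L_2nd = (l/(6N))·(L_self − L_m).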
Here, modeled circuits 110 and 210 of the overlap zone and modeled circuits 121, 122, 221 and 222 of the non-overlap zone, a first-order self resonance frequency modeling circuit 211 of the overlap
zone, a second-order self resonance frequency modeling circuit 212 of the overlap zone, termination-modeled circuits 131, 132, 231 and 232, and substrate-modeled circuits 141, 142, 241 and 242 may be
defined to obtain a finally assembled modeling circuits, as shown in FIGS. 7A and 7B.
Here, a first capacitor (C_1st) of a first circuit block 310 may be formed on the basis of Equation: C_1st = C_m·l·N, and a first conductor (G_1st) may be formed on the basis of Equation: G_1st = G_m·l·N. Also, a first inductor (L_1st) of a second circuit block 320 may be formed on the basis of Equation: L_1st = (l/(2N))·(L_self + L_m) + ((4l'/N)·L_self + 2·L_T), and a first register (R_1st) may be formed on the basis of Equation: R_1st = (l/(2N))·(R_self + R_m) + ((4l'/N)·R_self + 2·R_T), wherein, C_1st represents a first capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, G_1st represents a first conductor, G_m represents conductance per unit distance, L_1st represents a first inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, L_T represents equivalent inductance of the terminations, l' represents a length of a non-overlap zone, R_1st represents a first register, R_self represents self resistance per unit distance, R_m represents resistance per unit distance, and R_T represents equivalent resistance of the terminations.
Each of the above-mentioned parameters of the modeling circuit may be extracted by measurement of the first-order self resonance of the high-frequency device.
That is, the above-mentioned modeling circuit may be calculated as represented by the following Equation 8.
Z_1(ω) = (G_1st/(G_1st² + ω²·C_1st²) + R_1st) + jω·(L_1st − C_1st/(G_1st² + ω²·C_1st²))   (Equation 8)
First of all, the difference between theoretical capacitance and actually measured capacitance of products is not so high with the development of technologies of manufacturing a capacitor. In this case, the theoretical capacitance of the products may apply to the first capacitor (C_1st).
Next, when the frequency is set to 0, the conductance of the first conductor (G_1st) may be extracted from the real component of the impedance (Z_1(ω)). That is, when the frequency is set to a very low range, a current unavoidably flows through the conductance of the first conductor (G_1st), and the resistance of the first register (R_1st) is not as high as the reciprocal value of the first conductor (G_1st), so that R_1st may be ignored when the frequency is set to 0.
Also, since the imaginary component of the impedance (Z_1(ω)) at the primary resonance frequency is 0, the resistance of the first register (R_1st) at the primary resonance frequency may be extracted from the real component of the impedance (Z_1(ω)). On the assumption that the imaginary component of the impedance (Z_1(ω)) at the primary resonance frequency is set to 0, the inductance of the first inductor (L_1st) may be extracted in the same manner as described above.
Each of the parameters thus extracted may be presented, as follows.
C_1st ≈ Capacitance of Product
R_1st ≈ Re{Z_1(ω)} at ω = ω_1st
L_1st = C_1st/(G_1st² + ω_1st²·C_1st²) ≈ 1/(ω_1st²·C_1st)
G_1st ≈ [Re{Z_1(ω)}]⁻¹ at ω = 0
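The extraction rules above translate directly into a short script; the sketch below assumes a measured (or here, synthetic) impedance trace is available as arrays, and all names and numeric values are illustrative rather than taken from the embodiment.

import numpy as np

def extract_first_order(omega, z, c_nominal):
    # C_1st: taken as the nominal (theoretical) capacitance of the product
    c1 = c_nominal
    # G_1st: inverse of the real part of Z at the lowest measured frequency (~DC)
    g1 = 1.0 / z.real[0]
    # First self resonance: where the imaginary part of Z crosses zero
    idx = np.argmin(np.abs(z.imag))
    w1 = omega[idx]
    # R_1st: real part of Z at the first self resonance
    r1 = z.real[idx]
    # L_1st from the zero-imaginary-part condition at the resonance
    l1 = c1 / (g1**2 + w1**2 * c1**2)
    return c1, g1, r1, l1, w1

# Synthetic impedance of a parallel G-C branch in series with R and L (test data only)
omega = 2.0 * np.pi * np.logspace(-1, 9, 2001)
C, G, R, L = 1.0e-6, 1.0e-4, 20.0e-3, 1.0e-9
z = 1.0 / (G + 1j * omega * C) + R + 1j * omega * L
print(extract_first_order(omega, z, C))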
Each of the first and second substrate circuit blocks 331 and 332 has a parasitic admittance between the high-frequency device and the printed circuit board. Here, each of the first and second
substrate circuit blocks 331 and 332 may include a parasitic conductor (Gsub) arranged between the first and second ports (port1 and port2) and the ground and coupled in series with the first and
second ports (port1 and port2) and the ground, and a parasitic register (Rsub) and a parasitic capacitor (Csub) arranged between the first and second ports (port1 and port2) and the ground and
coupled in series with each other, and coupled in parallel with the parasitic conductor (Gsub). The above-mentioned parasitic admittance extraction method is known to those skilled in the art, and
therefore its detailed description is omitted for clarity.
Also, a second capacitor (C_2nd) of the higher order resonant circuit block 340 as shown in FIG. 7B may be formed on the basis of an equation expressed in terms of the first capacitor C_1st; a second conductor (G_2nd) may be formed on the basis of an equation expressed in terms of the first conductor G_1st; a second inductor (L_2nd) may be formed on the basis of Equation: L_2nd = (l/(6N))·(L_self − L_m); and a second register (R_2nd) may be formed on the basis of Equation: R_2nd = (l/(6N))·(R_self − R_m), wherein, C_2nd represents a second capacitor, C_m represents capacitance per unit distance, l represents a length of an overlap zone, N represents the layer number of stacked electrodes, C_1st represents a first capacitor, G_2nd represents a second conductor, G_m represents conductance per unit distance, G_1st represents a first conductor, L_2nd represents a second inductor, L_self represents self inductance per unit distance, L_m represents inductance per unit distance, R_2nd represents a second register, R_self represents self resistance per unit distance, and R_m represents resistance per unit distance.
On the basis of the above-mentioned equations, the capacitance (C_m) per unit distance, the conductance (G_m) per unit distance, the self resistance (R_self) per unit distance, the inductance (L_m) per unit distance, the equivalent resistance (R_T) of the terminations, and the equivalent inductance (L_T) of the terminations may be calculated, as follows.
That is, the capacitance (C_m) per unit distance may be calculated on the basis of Equation: C_m = C_1st/(l·N), the conductance (G_m) per unit distance may be calculated on the basis of Equation: G_m = G_1st/(l·N), the self resistance (R_self) per unit distance may be calculated on the basis of Equation: R_self = (6N/l)·R_2nd, and the inductance (L_m) per unit distance may be calculated on the basis of Equation: L_m = L_self − (6N/l)·L_2nd. Also, the equivalent resistance (R_T) of the terminations may be calculated on the basis of Equation: R_T = (1/2)·{R_1st − (l/(2N) + 4l'/N)·R_self}, and the equivalent inductance (L_T) of the terminations may be calculated on the basis of Equation: L_T = L_1st/2 + (3/2)·L_2nd − ((l + 4l')/(2N))·L_self.
Here, the resistance (R_m) per unit distance may be low enough to be ignored, and the self inductance (L_self) per unit distance may be calculated on the basis of Ruehli's Self-Inductance Formula, as follows.
L_self = (μ/(6π))·[3·ln(u + √(u² + 1)) + u² + 1/u + 3u·ln(1/u + √(1/u² + 1)) − (u^(4/3) + (1/u)^(2/3))^(3/2)]
Therefore, the parameters (C_m, G_m, L_m and R_m) per unit distance and the parameters (R_T and L_T) of the terminations may be calculated from the parameters (C_1st, C_2nd, G_1st, G_2nd, R_1st, R_2nd, L_1st and L_2nd) of the modeling circuit, the inner information (l and N) of the high-frequency device and the self inductance (L_self) obtained by Ruehli's Formula.
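Working backwards from the extracted lumped elements to the per-unit-distance and termination parameters is then a direct transcription of the equations above; in the Python sketch below the numeric inputs are placeholders, and l_ov, l_non and N would come from the capacitor's internal geometry.

def per_unit_parameters(c1, g1, r1, l1, r2, l2, l_ov, l_non, N, L_self):
    C_m = c1 / (l_ov * N)
    G_m = g1 / (l_ov * N)
    R_self = 6.0 * N * r2 / l_ov            # with R_m neglected, as stated above
    L_m = L_self - 6.0 * N * l2 / l_ov
    R_T = 0.5 * (r1 - (l_ov / (2.0 * N) + 4.0 * l_non / N) * R_self)
    L_T = l1 / 2.0 + 1.5 * l2 - (l_ov + 4.0 * l_non) / (2.0 * N) * L_self
    return C_m, G_m, R_self, L_m, R_T, L_T

print(per_unit_parameters(c1=1.0e-6, g1=1.0e-4, r1=20.0e-3, l1=1.0e-9,
                          r2=1.0e-3, l2=5.0e-11, l_ov=1.0e-3, l_non=0.1e-3,
                          N=100, L_self=4.0e-7))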
The measured electrical characteristics of the high-frequency device are compared with those of the modeling circuit to which the extracted parameters are applied according to one exemplary
embodiment of the present invention, and the comparison results are shown in FIGS. 8A to 8D.
FIGS. 8A to 8D are diagrams illustrating a modeling circuit of a high-frequency device according to one exemplary embodiment of the present invention, and actually measured electrical characteristics
of the high-frequency device.
In accordance with the modeling circuit of the high-frequency device according to one exemplary embodiment of the present invention, it may be revealed that a measured S-parameter of the
high-frequency device is similar to an S-parameter of the modeled circuit, as shown in
FIG. 8A
, and that the measured values of parasitic admittance between the high-frequency device and the PCB substrate are similar to the values of the modeled circuit, as shown in
FIG. 8B.
Also, it may be confirmed that the measured second-order self resonance frequency of the high-frequency devices having various capacitances is similar to the values of the modeled circuit, as shown
in FIGS. 8C and 8D. Therefore, it may be seen that the modeling circuit according to one exemplary embodiment of the present invention may accurately model the high-frequency device.
As described above, the modeling circuit of a high-frequency device according to one exemplary embodiment of the present invention may be useful to provide a more accurate modeling circuit having a
higher-order resonance by dividedly modeling an overlap zone and a non-overlap zone of the high-frequency device.
While the present invention has been shown and described in connection with the exemplary embodiments, it will be apparent to those skilled in the art that modifications and variations can be made
without departing from the spirit and scope of the invention as defined by the appended claims.
Patent applications by Ill Kyoo Park, Seoul KR
Patent applications by Myoung Gyun Kim, Seoul KR
Patent applications by Tae Yeoul Yun, Seoul KR
Patent applications by Industry-University Cooperation Foundation, Hanyang-University
Patent applications by Samsung Electro-Mechanics Co., Ltd.
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20100161291","timestamp":"2014-04-24T19:51:58Z","content_type":null,"content_length":"86688","record_id":"<urn:uuid:52f7c278-c75f-4cab-936c-5e92bbf324f1>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Checking the divergence theorem on a sphere
I have not been able to solve this problem. Can someone please help me find the error?
I think your problem is that $\bigtriangledown \cdot r^2 \hat r$ is not 2r, but rather is 4r. In spherical coordinates you have: $\bigtriangledown \cdot \vec v = \frac 1 {r^2} \frac {\partial (r^2
v_r)}{\partial r} + \frac 1 {r \sin \theta} \frac {\partial (v_\theta \sin \theta)}{\partial \theta} + \frac 1 {r \sin \theta} \frac {\partial v_\phi}{\partial \phi}$ For the case of $\vec v=r^2 \hat
r$ this becomes: $\bigtriangledown \cdot \vec v = \frac 1 {r^2} \frac {\partial (r^4)}{\partial r} = 4r$ Use this to determine $\int _{Vol} ( \bigtriangledown \cdot \vec v ) dV$ and it turns out to
be $4 \pi R^4$.
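For readers who want to double-check the computation, both sides of the divergence theorem for $\vec v = r^2 \hat r$ on a ball of radius $R$ can be verified symbolically. This is only a verification sketch added here; it is not part of the original thread.

```python
import sympy as sp

r, theta, phi, R = sp.symbols('r theta phi R', positive=True)

# Divergence of v = r^2 r_hat in spherical coordinates: (1/r^2) d/dr (r^2 * r^2) = 4r
div_v = sp.simplify(sp.diff(r**2 * r**2, r) / r**2)
print(div_v)                                                    # 4*r

# Left side: volume integral of div(v) over the ball of radius R
lhs = sp.integrate(div_v * r**2 * sp.sin(theta),
                   (r, 0, R), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))

# Right side: flux of v through the sphere r = R, where v.n = R^2 on the surface
rhs = sp.integrate(R**2 * R**2 * sp.sin(theta),
                   (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))

print(sp.simplify(lhs - rhs))                                   # 0, both sides equal 4*pi*R**4
```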
Last edited by ebaines; January 14th 2014 at 10:31 AM. | {"url":"http://mathhelpforum.com/advanced-math-topics/225403-checking-divergence-theorem-sphere.html","timestamp":"2014-04-18T19:08:21Z","content_type":null,"content_length":"39133","record_id":"<urn:uuid:70131987-0d40-425c-abb3-dc6b15925c62>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
slope and line
October 7th 2008, 04:17 PM #1
Oct 2008
A rectangle ABCD, where A(3,2) and B(1,6).
(i) Find the equation of BC.
(ii)Given that the equation of AC is y=x-1, Find the coordinates of C.
(iii) the perimeter of the rectangle ABCD. PLEASE HELP!!!
It might be helpful to start by plotting the points and the line you are given.
Can you find the equation of a line from two points? Do you know how to find the gradient of a line perpendicular to another line?
The point C must lie on the line which is perpendicular to AB and passes through point B (1,6).
Using the points A and B you can find the equation of AB and then BC, see above.
AC is a diagonal of the rectangle and you can find the point C by finding the intersection of the lines AC and BC.
If you plot the co-ordinates of A, B and C you might be able to see how to find the perimeter.
Can you proceed from here. Please ask if you need more help.
The equation of BC can be found by the point-slope form of the equation of the line.
(y -y1) = m(x -x1)
You have the point (x1,y1) = B(1,6)
So find the m.
m is the negative reciprocal of the slope of side AB.
You should get the equation of BC as.
(y -6) = (1/2)(x -1)
y = (1/2)x +(11/2) ------------answer.
(ii)Given that the equation of AC is y=x-1, Find the coordinates of C.
Get the intersection of
y = (1/2)x +11/2 ----------(1)
y = x -1 ------------------(2)
to find the coordinates of point C
You should get C(13,12) ----------answer.
(iii) the perimeter of the rectangle ABCD.
The perimeter of ABCD is twice the sum of AB +BC,
P = 2(AB + BC)
Distance, d, between two points is
d = sqrt[(x2 -x1)^2 +(y2 -y1)^2]
So for AB,
AB = sqrt[(1 -3)^2 +(6 -2)^2] = sqrt(20) = 2sqrt(5)
You should get perimeter of ABCD = 16sqrt(5) ------answer.
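The numbers above are easy to confirm with a short script (added here purely as a check; it is not part of the original thread):

```python
from math import isclose, sqrt

A, B = (3, 2), (1, 6)

m_AB = (B[1] - A[1]) / (B[0] - A[0])     # slope of AB = -2
m_BC = -1 / m_AB                         # perpendicular slope = 1/2

# C is the intersection of BC: y = m_BC*(x - 1) + 6 with the diagonal AC: y = x - 1
x_C = (6 - m_BC * B[0] + 1) / (1 - m_BC)
C = (x_C, x_C - 1)
print(C)                                 # (13.0, 12.0)

AB = sqrt((B[0] - A[0])**2 + (B[1] - A[1])**2)   # 2*sqrt(5)
BC = sqrt((C[0] - B[0])**2 + (C[1] - B[1])**2)   # 6*sqrt(5)
print(isclose(2 * (AB + BC), 16 * sqrt(5)))      # True
```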
October 7th 2008, 05:07 PM #2
Junior Member
Mar 2008
October 7th 2008, 05:08 PM #3
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/pre-calculus/52510-slope-line.html","timestamp":"2014-04-17T23:06:26Z","content_type":null,"content_length":"35108","record_id":"<urn:uuid:09436962-7dbf-4535-ab8f-448d839886c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Report in Wirtschaftsmathematik (WIMA Report)
We consider the problem of locating a line or a line segment in three- dimensional space, such that the sum of distances from the linear facility to a given set of points is minimized. An example
is planning the drilling of a mine shaft, with access to ore deposits through horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed,
and effcient solution methods are given. | {"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/series/id/16168/start/0/rows/10/yearfq/2000/author_facetfq/Anita+Sch%C3%B6bel","timestamp":"2014-04-16T04:33:39Z","content_type":null,"content_length":"15631","record_id":"<urn:uuid:55b71338-0542-4645-a0e7-ff7f58e04b7d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
• Use "?" for one missing letter: pu?zle. Use "*" for any number of letters: p*zle. Or combine: cros?w*d
• Select number of letters in the word, enter letters you have, and find words!
asymptotic's examples
• The asymptotic properties of estimators are their properties as the number of observations in a sample becomes very large and tends to infinity. — “Asymptotic properties of estimators: plim and consistency”,
• (ii) Γ has finite asymptotic dimension and admits a cocompact Γ-CW -model Let X be a proper metric space of finite asymptotic dimension n. Then for every α there exists a locally finite. — “ON THE
K-THEORY OF GROUPS WITH FINITE ASYMPTOTIC DIMENSION”, facpub.stjohns.edu
• Asymptotic definition, of or pertaining to an asymptote. See more. — “Asymptotic | Define Asymptotic at ”,
• Asymptotic Defined - A Dictionary Definition of Asymptotic Definition: Asymptotic is an adjective meaning 'of a probability distribution as some variable or parameter of it (usually, the size of
the sample from another distribution) goes to. — “Asymptotic - Dictionary Definition of Asymptotic”,
• Asymptotic analysis is based on the idea that as the problem size The asymptotic complexity will be . You may object that the list is not long until all. — “Asymptotic analysis”, userpages.umbc.edu
• Then by using a suitable rescaling, find the first three terms of an asymptotic This nicely demonstrates the difference between convergent and asymptotic series. — “Asymptotic analysis notes”,
• A function f(x) is asymptotic to the straight line y = mx + n (m ≠ 0) if order to get better approximations of the curve, asymptotes that are general curves have also been used [12] although the
term asymptotic curve seems to be preferred.[13]. — “Asymptote - Wikipedia, the free encyclopedia”,
• Asymptotic cycles and winding numbers. If we are given a flow on we say is quasi-regular provided for any continuous real valued function on , exists. If for any quasi-regular point we let be the
function from into sending into then will determine an asymptotic cycle, which we will denote by. — “Asymptotic cycles - Scholarpedia”,
• Definition of asymptotic in the Online Dictionary. Meaning of asymptotic. Pronunciation of asymptotic. Translations of asymptotic. asymptotic synonyms, asymptotic antonyms. Information about
asymptotic in the free online English dictionary and. — “asymptotic - definition of asymptotic by the Free Online”,
• Most operations on asymptotic expansions can be carried out in exactly the same manner as for convergent power series. Asymptotic expansions of the forms (2.1.14), (2.1.16) are unique. But for
any given set of coefficients , and suitably restricted. — “DLMF: 2.1 Definitions and Elementary Properties”, dlmf.nist.gov
• More precisely, the series is said to represent asymptotically, that is Furthermore, it is often the case that different asymptotic series are used to represent the same single. — “Asymptotic
series: A mathematical aside”, farside.ph.utexas.edu
• CHAPTER 2. ASYMPTOTIC NOTATIONS. called "big oh" (O) and "small-oh" (o) notations, and asymptotic expressions. 2.1.1. Definition of "big oh", special case. We consider first the. — “Asymptotic
notations”, math.uiuc.edu
• Here we look to provide an introduction to the theory of asymptotic ***ysis, starting from approximation to the solution using asymptotic expansions. — “ASYMPTOTIC METHODS”,
• Asymptotic enumeration methods provide quantitative information about the rate of Asymptotic enumeration methods are a subfield of the huge area of general asymptotic analysis. — “Asymptotic
Enumeration Methods”, dtc.umn.edu
• asymptotic normality are proved under general conditions of independent but not Formulas for the asymptotic covariance. are derived for the case where the data is both. — “Asymptotic Properties
of Extended Least Squares Estimators”,
• Unless otherwise specified, asymptotic estimates are typically valid An example of an asymptotic estimate that is different from those above in this aspect is. — “PlanetMath: asymptotic estimate”
• Using asymptotic morphisms between graded C*-algebras, we construct for every open m-dimensional spin manifold M a fundamental class in the m-th analytic K-homology group of M. This class is
associated to the not necessarily essentially. — “Asymptotic morphisms, K-homology and Dirac operators”, math.arizona.edu
• Those lines are asymptotic lines because they mark the transition between cross sections with positive There are many asymptotic lines on a surface, and the rulers are the asymptotic lines in
this case. — “Volatile”,
• Encyclopedia article about asymptotic. Information about asymptotic in the Columbia Encyclopedia, Computer Desktop Encyclopedia, computing dictionary. — “asymptotic definition of asymptotic in
the Free Online”, encyclopedia2
• These are the lecture notes from the Workshop in Asymptotic Dimension paper [5]: a Hurewicz-type theorem for asymptotic dimension. Although the proof of the theorem is omitted (it is quite
technical) the idea of the proof is given in the proof of a much simpler version of the theorem that does not. — “ASYMPTOTIC DIMENSION IN BĘDLEWO”, math.ufl.edu
• This paper describes asymptotic properties of solutions of some linear difference systems. First we consider system of a general form and estimate its solutions by use of a solution of an
auxiliary scalar difference inequality assuming that this. — “Asymptotic Bounds for Linear Difference Systems”,
• asymptotic as'ymp·tot'ic (-tŏt'ĭk) or as'ymp·tot'i·cal adj. of the curve, asymptotes that are general curves have also been used [12] although the term asymptotic curve seems to be preferred.
[13]. — “asymptote: Definition from ”,
related videos for asymptotic
Using asymptotic notation for terms within equations How should one interpret equations or inequalities in which asymptotic notations make an appearance as one of the terms on the LHS or RHS?
Asymptotic Behavior- Student Film Project Directed/Shot by: Josh Spires Cast: Eric Orman, Kayla Hergenrother
Asymptotic Freedom @ Third Annual Physics Rock Concert (2012) Part 2 Second half of the performance by Asymptotic Freedom at the Physic Rock Concert hosted by MIT Society of Physics Students
lec-43 Asymptotic DB Gain Lecture Series on Control Engineering by Prof. SD Agashe, Department of Electrical Engineering,IIT Bombay. For more details on NPTEL visit nptel.iitm.ac.in
Problem: Asymptotic notation properties 1 Prove or disprove the given property. Problem 3-4a from CLRS 3rd edition.
Binary Heaps: Asymptotics of Heap Building 1. Pseudocode for turning a 1D array into a binary heap 2. Asymptotic analysis of heap building 3. Python source code available at 4. Video narration:
Vladimir Kulyukin
Clarification about semantics of asymptotic notations One needs to be careful when extrapolating the asymptotic notations for a particular type of running time T(n) to the running times of all
inputs of size n.
Problem: Asymptotic notation properties 4 Prove or disprove the given property. Problem 3-4d from CLRS 3rd edition.
Asymptotic Freedom covers Ironic by Alanis Morissette The MIT band Asymptotic Freedom performs at the MIT Physics Rocks! 2010 concert
Setting the stage for beginning a formal treatment of asymptotic notation ***ysis of space efficiency of merge sort, and background for a formal treatment of asymptotic notation
Lec-20 Asymptotic Analysis of Algorithms Lecture Series on Programming and Data Structure by Dr.PPChakraborty, Department of Computer Science and Engineering, IIT Kharagpur. For more details on
NPTEL visit nptel.iitm.ac.in
IB Math Section GH Rational Functions Asymptotic Behavior Rational functions and asymptotic behaviors
Algorithms - Asymptotic Notation- Part1(Introduction) Algorithms- Asymptotic Notation by Dr. Ankush Mittal For more videos, contact: ramanclasses11@
CS 61B Lecture 19: Asymptotic Analysis CS61B: Data Structures - Fall 2006 Instructor Jonathan Shewchuk Fundamental dynamic data structures, including linear lists, queues, trees, and other linked
structures; arrays strings, and hash tables. Storage management. Elementary principles of software engineering. Abstract data types. Algorithms for sorting and searching. Introduction to the Java
programming language. www.cs.berkeley.edu
Asymptotic notation is independent of type of running time The asymptotic notations are independent of the type of running time T(n) on which they're applied - worst case, average case or best
case. Any notation can be applied on any kind of running time.
Lecture 02: Asymptotic Notation/Recurrences/Substitution, Master Method 2 of 23 This course teaches techniques for the design and analysis of efficient algorithms, emphasizing methods useful in
practice. Topics covered include: sorting; search trees, heaps, and hashing; divide-and-conquer; dynamic programming; amortized analysis; graph algorithms; shortest paths; network flow;
computational geometry; number-theoretic algorithms; polynomial and matrix calculations; caching; and parallel computing. ocw.mit.edu; Creative Commons Attribution-NonCommercial-ShareAlike 3.0;
http Image courtesy of MIT Press.
Asymptotic expansion of the difference of two Mahler measures Condon, John D. Department of Mathematics Amherst College Amherst, MA 01002 USA Email: jconecker@ Manuscript Number: JNT-D-11-00433
Lec-19 Asymptotic Growth Functions Lecture Series on Programming and Data Structure by Dr.PPChakraborty, Department of Computer Science and Engineering, IIT Kharagpur. For more details on NPTEL
visit nptel.iitm.ac.in
Problem: Asymptotic notation properties 5 Prove or disprove the given property. Problem 3-4e from CLRS 3rd edition.
Problem: Asymptotic notation properties 3 Prove or disprove the given property. Problem 3-4c from CLRS 3rd edition.
AISTATS 2012: Factorized Asymptotic Bayesian Inference for Mixture Modeling Factorized Asymptotic Bayesian Inference for Mixture Modeling, by Ryohei Fujimaki and Satoshi Morinaga
20 - 3 - BIC and Asymptotic Consistency-PGM-Daphne Koller If you are interest on more free online course info, welcome to: Professor Daphne Koller is offering a free online course on
Probabilistic Graphical Models starting in March 19, 2012. www.pgm- Offered by Coursera:
Can the initial conditions affect asymptotic complexity? Is it possible for the value of the initial conditions to affect the asymptotic complexity of T(n)? Theoretically, yes, but practically
speaking, no.
The Lord's Lair (Asymptotic Edition) - Uroshnor A preliminary cut of the fourth song by the Connecticut-based progressive rock band. (The final version will also feature female vocals and
acoustic guitar and should be ready in a couple weeks.) Lyrics: Our father which aren't in heaven Hollow rings thy name Thy never come Thy soon be done When earth deals defeat unto heaven Give us
this day our lives instead Of suspicion of trespassing As we forgive thee who trespass against us And cleave us not from our sensations But deliver us from your will For swine rule the kingdom
With power, and with glory Better they never Amen [Lawrence Krauss] "Is our morality determined externally by the society in which we live? Those are key questions, but I want to try and take a
different tack, which is to argue that we're also constrained by reality, and that reality is only determinable by science. So I would argue, in fact, that not only can science help us tell right
from wrong, it's impossible to tell right from wrong without science, because science informs us of what the real world is. And until we know that, we can't make valuable, consistent statements
about the world." [Lawrence Krauss] "Which is now one of my favorite quotes, it's by one of my favorite authors, Philip K. Dick, he said 'Reality is that which, when you stop believing in it, is
still there.'" [Richard Dawkins] "I'm trying to raise consciousness, I'm not trying to change the law/ I'm trying to say, when you hear a child labeled/ as a Christian child, simply ...
Lecture - 4 Asymptotic Notation Lecture Series on Design & Analysis of Algorithms by Prof.Abhiram Ranade, Department of Computer Science Engineering,IIT Bombay. For more details on NPTEL visit
Using limits in asymptotic analysis Taking limits of ratios of functions can simplify calculations when comparing their relative rates of growth.
Mod-01 Lec-06 Asymptotic Properties of Entropy and Problem Solving in Entropy Information Theory and Coding by Prof. SNMerchant, Department of Electrical Engineering, IIT Bombay. For more details
on NPTEL visit nptel.iitm.ac.in
The Asymptotic Performance of AdaBoost Google Tech Talks May 24, 2007 ABSTRACT Many popular classification algorithms, including AdaBoost and the support vector machine, minimize a cost function
that can be viewed as a convex surrogate of the 0-1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences
that must be balanced against the computational virtues of convexity. In this talk, we consider the universal consistency of such methods: does the risk, or expectation of the 0-1 loss, approach
its optimal value, no matter what iid process generates. Credits: Speaker:Peter Bartlett
Asymptotic Freedom @ Third Annual Physics Rock Concert (2012) Part 1 First half of the performance by Asymptotic Freedom at the Physic Rock Concert hosted by MIT Society of Physics Students
The Asymptotic Curve of the Subtle Sphere Phenomena, new age beliefs, meditation experiences, insights, visions, astral projection, channeling, chakras etc. such things are not the path. The path
does not lie in the known. It lies in 180 degree opposite direction: towards the mystery that is both the essence and center of your being.
Problem: Asymptotic notation properties 2 Prove or disprove the given property. Problem 3-4b from CLRS 3rd edition.
Efim Zelmanov - Asymptotic properties of finite groups and finite dimensional algebras (Part 3 of 4) CRM-uOttawa Distinguished Lecture (Part 3 of 4) Asymptotic properties of finite groups and
finite dimensional algebras Efim Zelmanov September 19, 2008
Design ***ysis & Algorithm: Lecture 2: Asymptotic Notation by Dr. Suhail from ASU Design ***ysis & Algorithm: Lecture 2: Asymptotic Notation by Dr. Suhail from Applied Science University from
Faculty of Information Technology
Inorganic1 v 1.0 asymptotic valley virtual synthesizer based on subtractive synthesis like an analog synthesizer Specification with pwm : - 2 Oscillators, each with subtractive option - 2 Oscillator
Phase Offset Modulations - Adjustable Oscilloscope with input selection - State Variable Filter with ADSR Envelope - Amplitude Envelope ADSR - Delay Effect - Reverb Effect for more information
visit for more information about pwm pulse width modulation visit it too early i the morning to write anthing so i have just copied the specs and a bit from wikipedia about pwm, you should try
wikpedia its great for info try this and follow
Blogs & Forum
blogs and forums about asymptotic
• “A hodge podge of writings such as brief descriptive scenes, opinions, inspirational articles, bizarr Blog: The Asymptotic Faery - The Writings of Allyson N. Jason. Topics: writing, art,
creative. Follow my blog”
— The Asymptotic Faery - The Writings of Allyson N. Jason,
• “In this post, I describe a couple of items taken from Prof. de Bruijn's book Asymptotic Methods in Analysis. A bit of background is also given”
— Ashutosh Mehra's Blog " On Asymptotic Methods in Analysis,
• “83151 - asymptotic freedom. Reload Page | Topic Thread | Post a Followup | The Hawking Forum | Search | FAQ asymptotic freedom - keith 9/14/2010 (83151-1-1) Re: asymptotic freedom - Stephen A”
— asymptotic freedom,
• “Does anyone have good examples of calculated fields used to create an asymptotic curve where you can control for either/both the vertical and horizontal asymptote? Examples would be very
helpful. Thanks, Clint”
— Asymptotic Curves - Looking for good formulae | Tableau Software,
• “[Archive] Kuiper Asymptotic Distribution Methods: All Chapters in NR3 Numerical Recipes Forum > Numerical Recipes Third Edition Forum > Methods: All Chapters in NR3 > Kuiper Asymptotic
Distribution. PDA. View Full Version : Kuiper Asymptotic Distribution. ichbin. 07-29-2009, 01:44 PM”
— Kuiper Asymptotic Distribution [Archive] - Numerical Recipes,
• “The Asymptotic Twitter Curve. 0 replies on 1 page. Welcome Guest. Sign In. Back to Topic List. Reply to this Original Post: The Asymptotic Twitter Curve. Feed Title: Creating Passionate Users.
— Java Buzz Forum - The Asymptotic Twitter Curve,
• “The blog of John D. Cook. Two useful asymptotic series. by John on September 7, 2010. This post will present a Even though the gamma function is more common, we'll start with the asymptotic
series for the error function because it is a little simpler”
— Two useful asymptotic series — The Endeavour,
• “A blog about software development, featuring book reviews as well as tutorials, tips, tricks and hacks for C#”
— Tim Martin's blog,
• “Energy From the Asymptotic Future Not only does it happens quickly, but I don't check the forum as often as I used to, so it is imperative that you report spam when it is found”
— Energy From the Asymptotic Future, theidea.bz
similar for asymptotic | {"url":"http://wordsdomination.com/asymptotic.html","timestamp":"2014-04-16T04:14:49Z","content_type":null,"content_length":"66444","record_id":"<urn:uuid:7c88c942-11c4-4339-ba28-bb6b42005a09>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simplify. Express the product as a radical expression. cubed root of x squared times fourth root of x
Okay you know what? You're gonna freaking kill me because I'm really dyslexic tonight :-(. I accidentally got the first term wrong. It's (x^2)^(1/3) and I accidentally had it as x^(3/2) at first. So sorry about that :-(((
x^(2/3) * x^(1/4)
Now make it so they have a common denominator:
x^(8/12) * x^(3/12) = x^(11/12)
OR: the 12th root of x to the 11th :-)
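A quick symbolic check of that simplification (added for the reader, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = sp.cbrt(x**2) * sp.root(x, 4)   # cube root of x squared times fourth root of x
print(sp.simplify(expr))               # x**(11/12)
```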
sorry about that :-((( x^(2/3)*x^(1/4) Now make it so they have a common denominator. x^(8/12)*x^(3/12) =x^(11/12) OR: the 12 root of x to the 11th :-)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f768620e4b0ddcbb89d6dbd","timestamp":"2014-04-20T03:20:31Z","content_type":null,"content_length":"27945","record_id":"<urn:uuid:dea38bbf-0f60-4568-be54-4a76913b3246>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
A simple problem in markov chains
I'm trying to understand a 1954 paper of Kubo entitled "Note on the stochastic theory of resonance absorption". The specific problem can be stated mathematically as follows: let $X(t)$ be a random
process taking $n$ positive real values $\{\omega_1,...,\omega_n\}$. Suppose that $X(t)$ is Markov and its probability transitions $P_{ij}(t) := P(X(t)= \omega_j | X(0) = \omega_i)$ satisfy
$P'_{jk}(t)= -c_kP_{jk}(t)+\sum_m P_{jm}(t)c_m P_{mk}$,
where $p_{ii}=0$ and $\sum_j p_{ij} = 0$.
We want to find the expectation value of $M(t):=\exp(i\int_0^t X(t')dt')$ in terms of parameters $\omega_i$, $c_i$ and $p_{ij}$.
Kubo's strategy seems to condition on $X(0) = \omega_i \wedge X(t) = \omega_j$ so he can find a link to the $P_{ij}(t)$, but I don't understand this too much... specifically he introduce a funtion
$Q_{ij}(t)$ which is the average of $M(t)$ on the condition that the process is in $\omega_i$ at time $t=0$ and is found in the state $\omega_k$ at time $t$. He concludes that
$Q'_{jk}(t) = (i\omega_k-c_k)Q_{jk} (t) + \sum_m Q _{jm}(t)c_m p_{mk}$
I can't figure how to get to this conclusion.
pr.probability stochastic-processes markov-chains mp.mathematical-physics
There should be an answer! The problem is so easily formulated: given the Markov process $X(t)$ taking $n$ positive real values, find the expectation value of $M(t):=exp(i\int_0^t X(t') dt')$. –
The man in the box Nov 9 '10 at 9:06
Does the answer below correspond to what you were asking for? – Did Apr 11 '11 at 17:10
1 Answer
Indeed, these formulas are standard. Their derivation in a slightly more general setting than yours is as follows.
For every $t\ge0$, let $$ M_t=\displaystyle\exp\left(\int_0^tv(X_s)\mathrm{d}s\right), $$ for a given function $v$ defined on the state space of the process $(X_t)$. (In your setting, every
$X_t$ is real valued and $v(x)=\mathrm{i}x$ for every $x$ but these details are irrelevant.) For every states $x$ and $y$, let $$A_t(y)=[X_t=y],\qquad Q_t(x,y)=E(M_t1_{A_t(y)}\vert X_0=x).
$$ Let $Q_t$ denote the associated square matrix (indexed by the state space, possibly infinite). For instance, $Q_0$ is the identity matrix. Note also that in the expression of $Q_t(x,y)$,
$[X_0=x]$ appears as a conditioning while $A_t(y)=[X_t=y]$ is the event to which the expectation is restricted and that these are different operations hence your interpretation of Kubo's
method should be rephrased.
The dynamics of $(Q_t)$ is driven by a linear differential equation $Q'_t=GQ_t$, where $G$ is a deformation of the infinitesimal generator of the process $(X_t)$. To identify $G$, one can
compute $Q_{t+s}$ at the order $s$, for $s > 0$, when $s$ is small.
To do so, call $r(x,y)$ the transition rate of $(X_t)$ from $x$ to $y\ne x$, and $c(x)$ the sum over $y\ne x$ of $r(x,y)$. (In Kubo's setting as reproduced in your post, $c(x)$ is your $c_x$ and $r(x,y)$ is your $c_xp_{xy}$. By the way, the sum over $y\ne x$ of your $p_{xy}$ should be $1$ instead of $0$ and you should make up your mind between the notations $p_{xy}$ and $P_{xy}$.)
Then, conditioning on $[X_0=x]$, one can decompose the expectation which defines $Q_{t+s}(x,y)$ along the values of $X_s$. This decomposition goes as follows. For every $z\ne x$, $X_s=z$
with probability $r(x,z)s+o(s)$, and $X_s=x$ with probability $1-c(x)s+o(s)$. Furthermore, for every $z\ne x$, $M_{t+s}=(1+o(1))M_t$ on $[X_0=x,X_s=z]$. And on $[X_0=X_s=x]$, the probability
of a double transition in the time interval $[0,s]$ is $o(s)$, hence $M_{t+s}=(1+v(x)s+o(s))M_{t+s}/M_s$ where $M_{t+s}/M_s$ is distributed like $M_t$ conditional on $[X_0=x]$.
All this leads to $$ Q_{t+s}(x,y)=Q_t(x,y)(1+v(x)s)(1-c(x)s)+\sum_zQ_t(z,y)r(x,z)s+o(s). $$ When $s\to0$, one gets $$ Q'_t(x,y)=(v(x)-c(x))Q_t(x,y)+\sum_zr(x,z)Q_t(z,y). $$ In other words,
$G(x,x)=v(x)-c(x)$ for every $x$ and $G(x,y)=r(x,y)$ for every $y\ne x$. These are the equations in Kubo's paper.
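A small numerical experiment makes it easy to check the equation $Q'_t = GQ_t$ against a direct Monte Carlo estimate of $Q_t(x,y)=E(M_t1_{A_t(y)}\vert X_0=x)$. The sketch below is added for illustration only; the state values, the rates $r(x,y)$ and the choice $v(x)=\mathrm{i}x$ are arbitrary placeholders consistent with the setting above.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

omega = np.array([1.0, 2.0, 3.0])              # state values
v = 1j * omega                                 # v(x) = i*x, as in Kubo's setting
r = np.array([[0.0, 0.4, 0.6],                 # jump rates r(x, y), zero on the diagonal
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
c = r.sum(axis=1)                              # total jump rate out of each state

# Deformed generator: G(x, x) = v(x) - c(x), G(x, y) = r(x, y) for y != x
G = r + np.diag(v - c)
t = 1.0
Q_exact = expm(G * t)                          # Q_t = exp(tG), since Q_0 is the identity

def monte_carlo_row(x0, n_paths=100_000):
    """Estimate Q_t(x0, .) by simulating the jump process and accumulating exp(integral of v)."""
    est = np.zeros(len(omega), dtype=complex)
    for _ in range(n_paths):
        x, s, phase = x0, 0.0, 0.0 + 0.0j
        while True:
            dt = rng.exponential(1.0 / c[x])
            if s + dt >= t:
                phase += v[x] * (t - s)
                break
            phase += v[x] * dt
            s += dt
            x = rng.choice(len(omega), p=r[x] / c[x])
        est[x] += np.exp(phase)
    return est / n_paths

print(np.round(Q_exact[0], 3))
print(np.round(monte_carlo_row(0), 3))         # agrees with the first row up to Monte Carlo error
```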
Not the answer you're looking for? Browse other questions tagged pr.probability stochastic-processes markov-chains mp.mathematical-physics or ask your own question. | {"url":"http://mathoverflow.net/questions/44838/a-simple-problem-in-markov-chains?sort=votes","timestamp":"2014-04-16T11:01:05Z","content_type":null,"content_length":"54743","record_id":"<urn:uuid:a4c1b4ae-7bf5-4ac4-9e83-96224708c3c8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
This seemed easy...at first
November 20th 2013, 09:40 AM #1
Nov 2013
This seemed easy...at first
Can someone please help me with a problem?
I'm studying op-amps and can't quite figure this one out. It's analyzing a noninverting op-amp circuit to find the gain. You shouldn't need any electronic knowledge to figure this out - it's just algebra.
Here goes:
V[in] = V[out]R[1]/(R[1] + R[2])
gain = V[out]/V[in] = 1 + R[2]/R[1]
I cannot figure out, no matter what I try, how to get from the first equation to the second. Can someone please give me a step-by-step solution?
Attached is a copy of the entire text for context.
Re: This seemed easy...at first
$\frac{V_{\text{out}}}{V_{\text{in}}} =\frac{V_{\text{out}}}{V_{\text{out}}R_1/(R_1+R_2)} =\frac{1}{R_1/(R_1+R_2)} =\frac{R_1+R_2}{R_1} =\frac{R_1}{R_1}+\frac{R_2}{R_1}=1+\frac{R_2}{R_1}$
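A quick numerical check of that algebra, with arbitrary example values:

```python
R1, R2, Vout = 1_000.0, 9_000.0, 5.0      # arbitrary resistor values (ohms) and output voltage
Vin = Vout * R1 / (R1 + R2)               # the first equation
print(Vout / Vin, 1 + R2 / R1)            # both print 10.0
```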
November 20th 2013, 11:19 AM #2
MHF Contributor
Oct 2009 | {"url":"http://mathhelpforum.com/algebra/224472-seemed-easy-first.html","timestamp":"2014-04-19T19:44:39Z","content_type":null,"content_length":"34326","record_id":"<urn:uuid:e9c50be5-6eb1-4486-81ae-f270f3ac25ba>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advanced Calculator
Hi there,
I want to program an advanced calculator and need your help. I'd like to enter some more complex expressions like
and I want the mathematical operations to be done in the correct order, so that 4-sqrt(4) is calculated first, then 3*4*2, and then 17 is subtracted.
Problem 1: Convert a string into a mathematical calculation
Problem 2: Calculate in the correct order
How would I do that? (I don't expect perfectly precoded calculators from you, just the way to do it.)
Google search just delivers primitive calculations with entry methods like
1 Enter first number 1
2 Enter operator +
3 Enter second number 2
But that's not what I want.
Thanks in advance,
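One standard way to handle both problems at once is a small recursive-descent parser: tokenize the string, then encode operator precedence directly in the grammar (an expression is a sum of terms, a term is a product of factors, and a factor is a number, a parenthesized expression, a unary minus, or a function call such as sqrt). The sketch below uses Python purely for brevity; the same structure translates directly to C++. The test expression at the end is just an arbitrary example, not the one from the original post.

```python
import math, re

def tokenize(s):
    return re.findall(r'\d+\.?\d*|[A-Za-z]+|[()+\-*/]', s)

def evaluate(s):
    tokens, pos = tokenize(s), 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def expression():                       # lowest precedence: + and -
        value = term()
        while peek() in ('+', '-'):
            value = value + term() if eat() == '+' else value - term()
        return value

    def term():                             # higher precedence: * and /
        value = factor()
        while peek() in ('*', '/'):
            value = value * factor() if eat() == '*' else value / factor()
        return value

    def factor():                           # numbers, parentheses, unary minus, sqrt(...)
        tok = eat()
        if tok == '-':
            return -factor()
        if tok == '(':
            value = expression()
            eat()                            # consume the closing ')'
            return value
        if tok == 'sqrt':
            return math.sqrt(factor())
        return float(tok)

    return expression()

print(evaluate("2*(4-sqrt(4))-1"))           # 3.0
```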
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/101315/","timestamp":"2014-04-18T08:06:05Z","content_type":null,"content_length":"8203","record_id":"<urn:uuid:bd44b798-7624-451e-84cf-3289ced3529e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 20 of 23 matches
Effect of Proportionality Constant on Exponential Graph (k>0) part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the effect of the proportionality constant, k, on the y-intercept and
position of an exponential graph where k>0 and C is an arbitrarily fixed value in f(x)=Ce^(kx).
Effect of Initial Value on Graph of Exponential Function (C < 0) part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the effect of the initial value, C, on the y-intercept and position of an
exponential function where C<0 and k is an arbitrarily fixed value in f(x)=Ce^(kx
Effect of Proportionality Constant on Exponential Graph (k < 0) part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the effect of the proportionality constant, k, on the y-intercept and
position of an exponential graph where k<0 and C is an arbitrarily fixed value in f(x)=Ce^(kx).
Effect of Initial Value on Graph of Exponential Function (C>0) part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest and a Question of the Day activity concerning the effect of the initial value, C, on the y-intercept and position of an
exponential function where C>0 and k is an arbitrarily fixed value in f(x)=Ce^(kx).
Effect of Coefficient of x^0 on Parabola Vertex part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest, a Question of the Day, and a Write-pair-share activity concerning the effect of the coefficient of x^0 (i.e., the
constant, c) on the vertex of a parabola where a and b are arbitrarily fixed values in f(x)=ax^2+bx+c.
Effect of Coefficient of x^2 on Parabola Shape part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest, a Question of the Day, and a Write-pair-share activity concerning the effect of the coefficient of x^2 on the shape of a
parabola where b and c are arbitrarily fixed values in f(x)=ax^2+bx+c.
Effect of Coefficient of x on Parabola Vertex (b < 0) part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents College Algebra students with a ConcepTest, a Question of the Day, and a Write-pair-share activity concerning the effect of the coefficient of x on the vertex of a
parabola where a>0, b<0 and a and c are fixed values in f(x)=ax^2+bx+c.
Histogram Sorting Using Cooperative Learning part of Pedagogy in Action:Library:Cooperative Learning:Examples
Intended as an early lesson in an introductory statistics course, this lesson uses cooperative learning methods to introduce distributions. Students develop awareness of the different versions of
particular shapes (e.g., different types of skewed distributions, or different types of normal distributions), and that there is a difference between models (normal, uniform) and characteristics
(skewness, symmetry, etc.).
Body Measures: Exploring Distributions and Graphs Using Cooperative Learning part of Pedagogy in Action:Library:Cooperative Learning:Examples
This lesson is intended as an early lesson in an introductory statistics course. The lesson introduces distributions, and the idea that distributions help us understand central tendencies and
variability. Cooperative learning methods, real data, and structured interaction emphasize an active approach to teaching statistical concepts and thinking.
Understanding the standard deviation: What makes it larger or smaller? part of Pedagogy in Action:Library:Cooperative Learning:Examples
Using cooperative learning methods, this activity helps students develop a better intuitive understanding of what is meant by variability in statistics.
How well can hand size predict height? part of Pedagogy in Action:Library:Cooperative Learning:Examples
This activity is designed to introduce the concepts of bivariate relationships. It is one of the hands-on activities of the ‘real-time online hands-on activities’. Students collect their
own data, enter and retrieve the data in real time. Data are stored in the web database and are shared on the net.
Nature of the chi-square distribution part of Pedagogy in Action:Library:Cooperative Learning:Examples
Explaining the chi-square and F distributions in terms of the behavior of variables constructed by generating random samples of normal variates and summing the squares of the values.
Just Sort It! An Activity for Algorithm Development part of MnSCU Partnership:PKAL-MnSCU Activities
This activity is designed to give students the opportunity to develop an algorithm than can be executed by others from the development team's written description of the algorithm.
The Crusty Loaf of Bread: An Exploration of Area of a Surface of Revolution part of Pedagogy in Action:Library:Interactive Lectures:Examples
This write-pair-share activity for Calculus II students involves a hypothetical hemispherical loaf of bread with a 12-inch diameter that has been sliced into twelve one-inch-thick slices. The
objective is to determine which slice contains the most upper crust (i.e., most area of its surface of revolution).
Volumes of Solids of Revolution part of Pedagogy in Action:Library:Interactive Lectures:Examples
This write-pair-share activity presents Calculus II students with a worksheet containing several exercises that require them to find the volume of solids of revolution using disk, washer and shell
methods and to sketch three-dimensional representations of the resulting solids.
How Much Work is Required: Intuition vs. Mathematical Calculation part of Pedagogy in Action:Library:Interactive Lectures:Examples
This classroom activity presents Calculus II students with some Flash tutorials involving work and pumping liquids and a simple question concerning the amount of work involved in pumping water out of
two full containers having the same shape and size but different spatial orientations.
Partial Derivatives: Geometric Visualization part of Pedagogy in Action:Library:Interactive Lectures:Examples
This write-pair-share activity presents Calculus III students with a worksheet containing several exercises that require them to find partial derivatives of functions of two variables. Afterwards, a
series of Web-based animations are used to illustrate the surface of each function, the path of the indicated partial derivative for a specified value of the variable and the value of the derivative
at each point along the path.
Mathematical Curve Conjectures part of Pedagogy in Action:Library:Interactive Lectures:Examples
In this activity, a six-foot length of nylon rope is suspended at both ends to model a mathematical curve known as the hyperbolic cosine. In a write-pair-share activity, students are asked to make a
conjecture concerning the nature of the curve and then embark on a guided discovery in which they attempt to determine a precise mathematical description of the curve using function notation. | {"url":"http://serc.carleton.edu/sp/library/earthhistory/examples.html?q1=sercvocabs__43%3A8","timestamp":"2014-04-20T13:34:51Z","content_type":null,"content_length":"38521","record_id":"<urn:uuid:37ddac55-e4ec-4ed1-b452-b553623e022c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/darthissis/asked","timestamp":"2014-04-19T22:40:50Z","content_type":null,"content_length":"102720","record_id":"<urn:uuid:ef055b04-2060-432c-9e5d-6b7a5dc54996>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Radiance Caching and Local Geometry Correction
Okan Arikan, David A. Forsyth and James F. O'Brien
EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-04-1318
April 2004
We present a final gather algorithm which splits the irradiance integral into two components. One component captures the incident radiance due to distant surfaces. This incident radiance is
represented as a spatially varying field of spherical harmonic coefficients. Since distant surfaces do not cause rapid changes in incident radiance, this field is smooth and slowly varying and can be
computed quickly and represented efficiently.
On the other hand, nearby surfaces may create drastic changes in irradiance, because their position on the visible hemisphere changes quickly. We correct the irradiance we obtain from spherical
harmonics using an explicit representation of nearby geometry. By assuming nearby geometry is always visible, we can efficiently restore the high frequency detail missing from the irradiance.
Current techniques need to sample the nearby surfaces densely to approximate this rapid change of irradiance. This creates unnecessary visibility tests (or raytraces) that slow down the final gather.
We demonstrate that by assuming nearby surfaces are always visible, we obtain very fast final gather results whose quality compares well with standard techniques but is computed much faster. We also
demonstrate the feasibility of using nearby surfaces on scenes without global illumination to restore the high frequency shading detail due to geometric detail.
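As a rough structural illustration of the split described in the abstract (this is not the authors' code; the spherical-harmonic lookup, the cosine-weighted basis, and the nearby-patch records below are all hypothetical stand-ins for the machinery the report describes):

```python
import numpy as np

def final_gather_irradiance(point, normal, sh_coeffs, sh_cosine_basis, nearby_patches):
    # Distant component: evaluate the cached, slowly varying incident-radiance field
    # (stored as spherical-harmonic coefficients) against a cosine-weighted basis.
    distant = float(np.dot(sh_coeffs(point), sh_cosine_basis(normal)))

    # Nearby component: loop over explicit nearby surface patches, assumed visible,
    # and swap the distant-field estimate over each patch's solid angle for the
    # radiance actually arriving from that patch.
    correction = 0.0
    for patch in nearby_patches(point):
        cos_term = max(float(np.dot(normal, patch["direction"])), 0.0)
        correction += (patch["radiance"] - patch["distant_estimate"]) * cos_term * patch["solid_angle"]

    return distant + correction
```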
BibTeX citation:
Author = {Arikan, Okan and Forsyth, David A. and O'Brien, James F.},
Title = {Radiance Caching and Local Geometry Correction},
Institution = {EECS Department, University of California, Berkeley},
Year = {2004},
Month = {Apr},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2004/5217.html},
Number = {UCB/CSD-04-1318},
Abstract = {We present a final gather algorithm which splits the irradiance integral into two components. One component captures the incident radiance due to distant surfaces. This incident radiance is represented as a spatially varying field of spherical harmonic coefficients. Since distant surfaces do not cause rapid changes in incident radiance, this field is smooth and slowly varying and can be computed quickly and represented efficiently. <p> On the other hand, nearby surfaces may create drastic changes in irradiance, because their position on the visible hemisphere change quickly. We correct the irradiance we obtain from spherical harmonics using an explicit representation of nearby geometry. By assuming nearby geometry is always visible, we can efficiently restore the high frequency detail missing from the irradiance. <p> Current techniques need to sample the nearby surfaces densely to approximate this rapid change of irradiance. This creates unnecessary visibility tests (or raytraces) that slow down the final gather. We demonstrate that by assuming nearby surfaces are always visible, we obtain very fast final gather results whose quality compares well with standard techniques but is computed much faster. We also demonstrate the feasibility of using nearby surfaces on scenes without global illumination to restore the high frequency shading detail due to geometric detail.}
EndNote citation:
%0 Report
%A Arikan, Okan
%A Forsyth, David A.
%A O'Brien, James F.
%T Radiance Caching and Local Geometry Correction
%I EECS Department, University of California, Berkeley
%D 2004
%@ UCB/CSD-04-1318
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2004/5217.html
%F Arikan:CSD-04-1318 | {"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2004/5217.html","timestamp":"2014-04-20T08:19:07Z","content_type":null,"content_length":"7422","record_id":"<urn:uuid:6316a005-9b49-4ccd-9583-6eb380450003>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hallandale Algebra 2 Tutor
Find a Hallandale Algebra 2 Tutor
...I came to the USA without speaking English at age 16. I was able to graduate from the University of Miami at age 20 and taught a graduate course at 22. I can also converse in other languages
and have taught English to Spanish people as a group and vice versa.
21 Subjects: including algebra 2, reading, Spanish, physics
...Allow me to prove it to you! Chemistry is amazing! It's also not difficult at all when you understand the main, underlying principles that guide chemistry problem solving.
17 Subjects: including algebra 2, English, chemistry, Spanish
...From 1974 to 1980 I taught Hydraulics, Mechanics of Fluids, and Thermodynamics as an Assistant Professor and Graduate Instructor. During the last period of my studies, I took all lectures required
for a Master's degree in Thermal Energy at Havana University Polytechnical Institute (CUJAE). Many st...
8 Subjects: including algebra 2, calculus, prealgebra, geometry
...We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid. Trigonometric functions and angle derivation will be
explained and applied. Geometric proofs are an important aspect of geometry and so these will be extensively explained.
46 Subjects: including algebra 2, Spanish, reading, writing
...During my time as an undergraduate student and later as a Physics Professor, I was formally responsible for teaching Physics and Math courses to University students. Personally, I believe that
such experience is very much in phase with my personality and has given me a very particular perspectiv...
11 Subjects: including algebra 2, Spanish, physics, calculus | {"url":"http://www.purplemath.com/hallandale_fl_algebra_2_tutors.php","timestamp":"2014-04-19T04:54:12Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:c527c9c0-8581-47e4-8caf-e6811414062b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the 5th Day of Christmas… A Keurig Giveaway! (winner announced)
UPDATE: The winner of the set of Keurig Platinum Edition Brewer and Barista Prima Italian Roast K-Cup pack is:
#2,752 – Allison (Spontaneous Tomato): “I follow you on twitter.”
Congratulations, Allison! Be sure to reply to the email you’ve been sent, and your new Keurig package will be shipped out to you!
Whether you love coffee, tea or hot chocolate (me!), a Keurig is one of the most fabulous inventions to come along in quite awhile. Long gone are the days when my grandma would make a whole pot of
coffee, pour it into a warmer, and then drink it throughout the day. Thanks to the emergence of coffee shops on every corner, everyone seems to have different tastes when it comes to coffee – some
like decaf, some like flavored, some like a more robust flavor, some more mild. It’s become increasingly harder to please everyone, plus brewing coffee in a pot isn’t the most efficient when just one
person wants one cup of coffee. Enter Keurig. Brew whatever you want, whenever you want. Not a big coffee drinker? Make some tea, or hot chocolate! It’s the perfect machine for hot drinks, and I’m
giving one away to a lucky reader! Read below for details on how to enter!
(The 12 Days of Christmas Giveaways will resume on Monday with Day #6!)
One lucky winner will receive a Keurig Platinum Edition Brewing System, along with one (1) Barista Prima Italian Roast K-Cup pack.
To enter to win, simply leave a comment on this post and answer the question:
“What’s your hot beverage of choice? Coffee? Tea? Hot Chocolate? Something else?”
You can receive up to FIVE additional entries to win by doing the following:
1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment on this post.
2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment on this post.
3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment on this post.
4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment on this post.
5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment on this post.
Deadline: Saturday, December 8, 2012 at 11:59pm EST.
Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected.
Disclaimer: This giveaway is sponsored by Green Mountain Coffee; all opinions are my own.
Good Luck!!
4,850 Responses to “On the 5th Day of Christmas… A Keurig Giveaway! (winner announced)”
3. My hot beverage of choice is hot chocolate!
5. I can’t help it. I looooooove coffee. Especially flavored coffees around the holidays bc all the yummy flavors emerge!
6. I’m subscribed to BEB via email!
7. My favorite hot drink is a vanilla latte, but I also love just plain coffee!
10. I also follow you on Facebook!
11. I follow you on pinterest.
12. I follow you on Pinterest
13. Diet coke. Hangs head in shame…
14. I follow you on Pinterest!
15. My beverage of choice is a mocha. Best of both worlds- chocolate and coffee!
16. AND I already subscribe to your feed!
17. I suscribe through e-mail.
19. I follow you on Pinterest.
20. Definitely hot chocolate. Except when it BRUTALLY cold, then there’s nothing like coffee to warm up, body and soul!
24. AND I follow you on pinterest, bc let’s face it- it’s the easiest way to see and keep track of amazing recipes!
25. I follow you on Twitter (@girlgetsaway)
26. My hot beverage of choice is definitely milk hot chocolate. Yumm!
28. I have subscribed to your RSS feed
31. i love coffee coffee coffee!
32. i follow you in instagram
is that your golden, so sweet!
33. I follow you on Instagram!
34. I also opt for a latte from Starbucks. At home it’s coffee with half &half. Thanks for a great giveaway.
35. And I follow on Pinterest
37. Tea has totally become my go to hot beverage now. Madagascar vanilla is quite possible the best I have ever tasted.
38. I follow you on Twitter! @breebabii
40. I love having a mocha in the morning! Perks me right up.
41. I follow you on Pinterest
43. Definitely hot chocolate!
47. My favorite drink is half coffee half hot chocolate. It is the best!
48. I subscribe to your website via e-mail
51. already subscribed to rss
53. I follow you on pinterest.
54. Follow you via email. Thanks for the giveaway.
56. I follow you on facebook!
61. I follow you on facebook!
63. Hot Tea – but only when I am under the weather.
64. I follow you on Pinterest
65. I follow you on Instagram.
66. Coffee is a must for the mornings! Though I enjoy tea in the evenings
68. My favorite hot beverage is chai tea.
69. I follow you on Facebook.
71. I follow you on pinterest.
72. I follow BEB on Pinterest
74. Used to LOVE coffee but gave up caffeine and cannot stand decaf…..so….now it is hot chocolate!!
77. Oh I gotta have my coffee in the morning, preferably with coconut cream!
78. My favorite is chai latte.
79. Favorite hot beverage has definitely got to be some peppermint green tea. I drink like 5 mugs of it a day in the winter
83. I subscribe to your emails!
87. I love hot tea and hot chocolate. I also love the K cups of mocha cocoa – a hybrid of hot chocolate and coffee.. Plus I really need a Keurig for my new house!
90. Coffee!!! Can’t live without it!
91. Nothing beats a good hot cup of coffee in the morning!
93. coffeeee although there must be a liiittle cream and sugar
95. I am subscribed to Brown Eyed Baker by email updates
96. I am subscribed via email | {"url":"http://www.browneyedbaker.com/2012/12/07/on-the-5th-day-of-christmas-a-keurig-giveaway/comment-page-9/","timestamp":"2014-04-19T04:21:16Z","content_type":null,"content_length":"119482","record_id":"<urn:uuid:ada54bb6-8909-491a-aed3-6754b312c4d3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wave Packet Dynamics
Return to "phonon squeezing" main web page
a) A displaced ground state of a simple harmonic oscillator of the correct width (i.e., a coherent state) oscillates back and forth with constant width.
b) If the wave packet width or variance in a) is initially squeezed, it will spread for a quarter of a cycle, then return to the squeezed value at the half-cycle, spread for another quarter of a
cycle, and so on.
c) Squeezed vacuum state: the wave packet width or variance is oscillating in a simple harmonic oscillator potential well. The packet initially had a squeezed variance.
The paths traced by the oscillating packets as a function of time are also available as animations.
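As a short illustrative formula (the notation here is an added sketch and does not appear on the original page): for a packet that starts as a minimum-uncertainty state with squeezed position variance $(\Delta x)_0^2$, the variance in a harmonic well of frequency $\omega$ evolves as
$$(\Delta x)^2(t) = (\Delta x)_0^2 \cos^2(\omega t) + \left(\frac{\hbar}{2 m \omega (\Delta x)_0}\right)^2 \sin^2(\omega t),$$
so the width spreads for a quarter of the oscillator period, returns to the squeezed value at the half period, and then repeats, which is the behavior described in b) and c) above.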
Expanded/Modified from: P. Meystre and M. Sargent III, Elements of Quantum Optics, Springer-Verlag, 1991.
Return to Franco Nori's main web page | {"url":"http://www-personal.umich.edu/~nori/squeezed/animations2.htm","timestamp":"2014-04-17T12:56:22Z","content_type":null,"content_length":"1912","record_id":"<urn:uuid:c7c90e4f-315f-4bc1-9d33-326391c0aa40>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Healing Natural Oils
General Questions
Q. Do you ship to Australia, Canada, NZ, Europe and other countries?
A. We ship all over the world, select "Global Shipping" when you checkout of our store.
Q. I do not have a credit card, can I mail in my order?
A. To order by mail.. Select your products and proceed to the checkout page. Under -Payment Information - select "Pay by mail" as your payment method. This enables you to complete the transaction,
print out the invoice which you enclose together with your check, postal order or cash.
Q. Why use your formulas when there are many prescription and over the counter drugs?
A. Our formulas are all natural and contain established homeopathic ingredients. Many of these homeopathic ingredients used in the production at Amoils.com have properties that have been applied
since ancient times and the success of these same properties have been upheld in modern times.
Q. Can your products be used on children?
A. Our H-Eczema Formula may be used on children ages 2 and up. All other formulas: ages 4 and up.
Q. I am pregnant, can I use your products?
A. Please consult your Physician before using any products if you are pregnant or nursing. H-Stretch Marks Formula, H-Varicose Veins Formula, H-Fissures Formula, H-Hemorrhoids Formula and H-Eczema
Formula are formulated for pregnant and nursing mothers. Unfortunately we do not advise using the rest of our product range when pregnant or nursing.
Q. Are your products used in any hospitals or by any doctors or medical professionals?
A. Yes, our products are used in medical practices throughout Europe and North America. We have many doctors, pharmacists and other medical professionals who recommend our formulas.
Order Now | Return To The Top
H-Acne Formula
Q. How soon will I see results?
A. Results vary from person to person, it may take anywhere from two weeks to a number of weeks for the product to work.
Q. Will I need more than one bottle of your formula?
A. This depends on your condition. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Acne Formula product in one bottle, sufficient for over 120
applications. If you have the condition in numerous places on your body or a severe case of acne, we suggest you get at least two bottles of formula or the economy large size.
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically.
Q. Does diet affect the condition?
A. It has been shown that fried foods and sugars may contribute to the skin condition. It is important to drink a lot of water in order to cleanse your system.
Q. What if I have scars from Acne?
A. Our H-Scars Formula is excellent for any type of scar and can be used in conjunction with H-Acne Formula if you have scars as well as a current acne outbreak.
Q. I am pregnant, can I use the acne product?
A. H-Acne may not be used while pregnant or breastfeeding.
Order Now | Return To The Top
H-Arthritis Formula
Q. How soon will I see results?
A. The process differs from person to person. With regular applications, the formula will help treat your arthritis symptoms.
Q. Will I need more than one bottle of your formula?
A. This depends on the severity of the condition. More severe cases may require additional formula. We offer two size bottles: An 11ml and a 33ml size. The 11ml size is sufficient for a small area,
if you are using the product on a large area, you will need the 33ml size bottle. Save 23% when you purchase the 33ml over the price and volume of the 11ml bottle!
Q. How do I apply the product?
A. Use your fingers to apply a few drops to the affected areas and massage for 10-20 seconds to allow the formula to be fully absorbed.
Q. I am 75 years old, can I still use this product ?
A. Yes, this formula is safe to use from the age of 4 and up.
Q. I currently take high blood pressure medication, can I use the product with my medication?
A. All our formulas contain natural established homeopathic ingredients and natural essential oils, no chemicals or additives. Our formulas do not interfere with other treatment or medication. If you
are unsure, we suggest you consult your medical professional.
Q. I am in constant pain, can I use painkillers with this product ?
A. Yes, you can use painkillers with this formula, it will not interfere with the process.
Q. What about Psoriatic Arthritis?
A. For Psoriatic Arthritis, use in conjunction with H-Psoriasis Formula.
Order Now | Return To The Top
H-Athlete's Foot Formula
Q. Will your product address the symptoms of athlete's foot?
A. H-Athlete's Foot Formula will address the symptoms of athlete's foot, including redness, itching and odor. However, the fungus reproduces itself by spores which are kept in ideal conditions. It is
advisable that you replace your socks as the dead skin can contain the fungus and cause it to spread to new areas on the foot. We also suggest that you wrap your shoes in a plastic bag and place them
in the freezer for 12 hours.
Q. How soon will I see results?
A. Results vary from person to person, the process may take anywhere from one week to a few weeks. However, most people experience instant relief from the symptoms when using H-Athlete's Foot
Q. Will I need more than one bottle of your formula?
A. This depends on your condition. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Athlete's Foot Formula in one bottle, sufficient for over 100
applications. If you have the condition in numerous places on your feet and elsewhere, we suggest you get the large, 33ml bottle (Save 23%).
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically.
Q. Can I use your formula for nail fungus.
A. H-Athlete's Foot Formula is formulated specifically for foot fungus however our H-Nail Fungus Formula is formulated specifically for nail fungus.
Q. Can I apply the formula to my groin area?
A. In severe cases of athlete's foot the fungus may find its way to the groin area. Our H-Jock Itch Formula is formulated specifically for this condition.
Q. Is athlete's foot contagious?
A. The fungus is very contagious. We shed skin all the time which usually ends up on the floor. If someone walks on the dead skin, they could be infected with the fungus.
Order Now | Return To The Top
H-Cellulite Formula
Q. I have used other products for cellulite, why will your product help me?
A. Constituents in our formula help reduce the appearance of cellulite by smoothing cellulite dimples and skin.
Q. How soon will I see results??
A. The process varies from person to person. The appearance of cellulite starts to diminish after a few weeks of application.
Q. How do I apply the product?
A. Place a few drops on your fingers and gently massage onto the affected areas. Full instructions will be included.
Q. How often do I apply the product?
A. The formula is applied three times per day to the condition. The easiest times to apply are early morning, early evening and before bedtime.
Q. How much formula will I need?
A. This will depend on the area and severity of your condition. It is important that you do not run out of formula and interrupt the program. There is 33ml of H-Cellulite Formula in one bottle,
sufficient for over 200 applications. If you have a large area of cellulite, we suggest you start with at least two bottles of formula.
Q. What about diet and exercise?
A. A healthy diet and exercise is very important to a healthy lifestyle and will assist with H-Cellulite Formula in improving the appearance of cellulite. We also recommend drinking plenty of water.
Q. I also have stretch marks, Will this product work?
A. Our H-Stretch Marks Formula is formulated specifically to improve stretch marks by smoothing skin and is excellent for stretch marks anywhere on the body. H-Stretch Marks Formula can be used in
conjunction with H-Cellulite Formula if you have stretch marks as well as cellulite.
Order Now | Return To The Top
H-Cold Sores Formula
Q. How soon will I see results?
A. Results vary from person to person. Typical cold sores symptoms start to dissapear after a few days.
Q. How many applications will I get out of one bottle?
A. We have two size bottles: 11 ml bottles contains 100 applications OR you can save 23% with our large 33 ml bottles. H-Cold Sore Formula is used for outbreaks only as it is very concentrated. Only
a few drops of formula are used for the overall application.
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically. Full instructions will be included.
Q. Is it possible to spread the virus to my partner when I do not have an outbreak?
A. Certain reports state that this could be the case. However, there is no solid documented proof at this time.
Q. My cold sore is on the inside of my lip, is the product okay to use there ?
A. H-Cold Sore Formula should be applied directly to the cold sore. The formula is okay to use on the inside of your lip, simply use a small amount of formula, and apply to the affected area.
Order Now | Return To The Top
H-Cracked Heels Formula
Q. Will I need more than one bottle of your formula?
A. This will depend on the severity of your condition. During the process it is important you do not run out of formula or the program will be interrupted. One bottle is 11ml and contains
approximately 120 applications.
Q. How soon will I see results?
A. Time varies from person to person, it generally takes anywhere from a few days to a few weeks to significantly smooth the skin of cracked heels.
Q. How do I apply the product?
A. Place a drop or two on a Q-Tip or use your fingers to apply topically directly to your affected heels. Full instructions will be included.
Order Now | Return To The Top
H-Eczema Formula
Q. I have used a number of medications including cortisone, why will your product help me?
A. Our H-Eczema Formula is all natural and contains established homeopathic ingredients which work directly on the eczema symptoms.
Q. I have dry / weeping eczema, can I use your product?
A. Our formula is specifically formulated for all eczema symptoms including those for dry and weeping eczema
Q. How soon will I see results?
A. The process varies from person to person. Typical results take about 2 weeks but generally relief from the symptoms such as itching will be immediate.
Q. How do I apply the product?
A. Place a few drops of formula on your fingertips and apply topically.
Q. Can your formula be used on children
A. H-Eczema Formula may be used on children ages 2 and up.
Order Now | Return To The Top
H-Fissures Formula (anal fissures, minor rectal bleeding)
Q. How soon will I see results?
A. The process varies from person to person. The majority of people who use our products experience quick relief of symptoms. The overall process can take anywhere from 2 to 6 weeks. Please be patient
and be consistent with applications.
It is important to apply three times a day without missing applications.
Q. How do I apply the product?
A. Place a few drops on your fingers to apply topically.
The easiest way to apply would be as follows: allow one drop to fall onto your finger, gently apply and repeat with another drop. Repeat this process three times a day.
Q. How many bottles of formula will I need?
A. Symptoms of fissures are generally reduced within 2 to 6 weeks. One 11ml size bottle of H-Fissures Formula is sufficient for 3-4 weeks of application. If there is still discomfort after 3 weeks,
you will require a second bottle, or you can save 23% on our large 33ml size bottle. SAVE 23%! when selecting the large bottle versus the smaller 11ml bottle.
Fissures may recur, many of our customers keep a bottle handy in order to use the formula immediately when the symptoms appear.
Q. Do I need to change my diet?
A. Increase your raw fruit and vegetables intake during the program. It is also advisable to eliminate dairy, wheat and sugar. Drink lots of water and consume natural yogurt. (yogurt is not
classified as dairy, the molecular structure has changed)
Q. What about hemorrhoids?
A. We have a specially formulated product called H-Hemorrhoids Formula. If you have fissures and hemorrhoids, use H-Fissures Formula first, followed by H-Hemorrhoids Formula.
Order Now | Return To The Top
H-Glow Formula
Q. How soon will I see results?
A. Results vary from person to person, expect several weeks for results, however early signs that the formula is beginning to take effect can be noticed after about four weeks.
Q. Will I need more than one bottle of your formula?
A. This depends on your needs. It is important that you do not run out of formula and keep applying. There is 11ml of H-Glow formula in one small bottle, sufficient for over 100 applications, and one
drop of formula is enough for a relatively large area of wrinkles. Should you be applying to a very large area you may need more formula. Save when purchasing our 33ml size bottle. This formula can
be used as part of your regular skin care program.
Q. How do I apply the product?
A. Place a few drops on your fingers to apply topically, take care to avoid direct contact with eyes.
Q. Can I use your formula for wrinkles around the eye and mouth areas.
A. H-Glow is gentle yet effective and may be used for wrinkles around the eye or mouth areas, take care to avoid direct contact with the eyes and do not ingest.
Order Now | Return To The Top
H-Headaches Formula (headaches, migraines)
Q. How soon will I see results?
A. Common headache symptoms will dissipate immediately. The entire headache will be significantly reduced within a short period of time depending on the severity.
Q. How much formula do I need?
A. This depends on how often you get a headache as the formula is only applied when a headache exists. A small amount of formula is used per application.
Q. How do I apply the product?
A. Use a few drops of formula and massage a small amount to your temples, and back of neck.
Q. I suffer from migraines, will this product help me?
A. Yes, H-Headaches Formula is effective in reducing the pain associated with all types of headaches, including migraines, cluster headaches, stress headaches, etc.
Q. I get headaches quite frequently, will this help get rid of headaches in the future?
A. H-Headaches Formula works to reduce the pain associated with headaches very quickly. The formula delivers a broad spectrum healing effect.
Order Now | Return To The Top
H-Hemorrhoids Formula / H-Bl Hemorrhoids Formula
Q. You have two products, I am not sure which one to use?
A. H-Bl Hemorrhoids Formula is formulated specifically for hemorrhoids that have ruptured causing minor bleeding. If you have hemorrhoids that have ruptured, you will need both our H-Hemorrhoids
Formula and H-Bl Hemorrhoids Formula. If you do not have any bleeding, you will only need the H-Hemorrhoids Formula.
Q. I have both conditions, which product should I use first?
A. Apply H-Bl Hemorrhoids Formula and when the bleeding has stopped, continue with the formula for two to three days and then switch to the H-Hemorrhoids Formula.
Q. How soon will I see results?
A. The process varies from person to person. The majority of people who use our products experience immediate relief. It can take anywhere from 2 to 6 weeks to complete the process. At first you
may not notice any change in the condition but, be patient! Usually the first signs will be when the hemorrhoids shrink in size in the early morning and come up again later in the day. Continue with
regular applications.
It is important to apply three times a day without missing applications.
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically.
The easiest way to apply would be as follows: allow one drop to fall onto your finger, gently apply and repeat with another drop.
Q. How many bottles of formula will I need?
A. Hemorrhoids shrinkage may take up to 6 weeks. One bottle of 11 ml H-Hemorrhoids Formula will last +- 3 to 4 weeks. If shrinkage does not occur after 3 weeks, you will require a second bottle, or
save 23% with our large 33 ml bottle.
Hemorrhoids may recur, many of our customers keep a bottle handy because the second time you use the formula, it works much faster, often in 10 days.
Q. Do I need to change my diet?
A. Increase your raw fruit and vegetables intake during the program. It is also advisable to eliminate dairy, wheat and sugar, this will aid your body in healing the condition. Drink lots of water
and consume natural yogurt. (yogurt is not classified as dairy, the molecular structure has changed)
Q. What about anal fissures?
A. We have a specially formulated product for anal fissures called H-Fissures Formula. If you have hemorrhoids as well as fissures, use H-Fissures Formula first, followed by H-Hemorrhoids Formula
once the fissures have cleared.
Order Now | Return To The Top
H-Insomnia Formula
Q. I've used other products, how does yours differ?
A. Many products contain harmful chemicals that are foreign to the body and may have side effects. H-Insomnia Formula is all natural and contains established homeopathic ingredients. There are no
harmful chemicals used in our products. H-Insomnia Formula works to relax your body and soothes your mind to promote deep sleep.
Q. How do I use H-Insomnia Formula?
A. Massage a few drops into your temples and back of neck 30 minutes before bedtime. You may also use a few drops of formula in your bath water, shortly before bedtime. This will help to relax you
and prepare for sleep.
Q. What can I expect to happen?
A. H-Insomnia Formula will relax you and aid in sleeping without the harmful side effects of sleeping pills. The formula naturally assists with sleeping.
Q. How else can I prevent insomnia?
A. We strongly recommend avoiding any type of stimulant in the evening, especially tea or coffee after 4pm in order to develop a relaxed evening routine. It is also recommended to not eat or exercise
a few hours before bedtime. During the day, be sure to exercise and get plenty of fresh air to help get a better night's sleep.
Order Now | Return To The Top
H-Jock Itch Formula
Q. How do I apply the product?
A. Using a few drops, apply the formula onto the affected area, three times per day.
Q. How soon will I see results?
A. H-Jock Itch Formula will immediately assist in relieving the symptoms of fungal infection.
Q. I have Tinea Cruris - will this product work?
A. Yes, Tinea Cruris is Jock Itch and the formula is specifically formulated to work on the symptoms of tinea cruris including itching and burning.
Q. I also have Athlete's Foot - Can I still use this product?
A. H-Jock Itch Formula works on the symptoms of Jock Itch in the groin and leg areas, if you have also have athlete's foot, we offer H-Athlete's Foot Formula
Order Now | Return To The Top
H-Moles Formula
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically directly to the moles.
Q. How soon will I see results?
A. The time varies from person to person, it may take anywhere from two weeks to six weeks to treat the symptoms of skin moles.
Q. Will I need more than one bottle of your formula?
A. This depends on how many moles you have and the size of the moles. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Moles Formula product in one
bottle, sufficient for over 120 applications. If you have the condition in numerous places on your body, we suggest you get at least two bottles of formula or save 23% with our large size - 33ml with
over 360 applications per bottle.
Q. What if I have scars from old moles?
A. Our H-Scars Formula is excellent for any type of scar and can be used in conjunction with H-Moles Formula if you have scars as well as a current mole.
Order Now | Return To The Top
H-Molluscum Formula
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically directly to the condition. Be sure to wash your hands before and after each application.
Q. How soon will I see results?
A. The time varies from person to person, but typically you will see a change in the molluscum symptoms within a few days. The entire process usually takes 2-6 weeks.
Q. Will I need more than one bottle of your formula?
A. This depends on the surface area of the condition. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Molluscum Formula in one bottle, sufficient for
over 120 applications. If you have the condition in numerous places on your body or a severe case of molluscum, we suggest you get at least two bottles of formula or save 23% with our large size -
33ml with over 360 applications per bottle.
Order Now | Return To The Top
H-Nail Fungus Formula
Q. Will your product get rid of nail fungus symptoms permanently?
A. H-Nail Fungus Formula will significantly reduce all symptoms of nail fungus. However, the fungus reproduces itself by spores which are kept in ideal conditions. Dead skin is constantly being shed
and some of that dead skin will be present in the socks or gloves you have worn and washing does not get rid of them. It is advisable to get rid of your socks and/or gloves. We also suggest that you
wrap your shoes in a plastic bag and place them in the freezer for 12 hours.
Q. Will I need more than one bottle of your formula?
A. This will depend on the number of fingernails / toenails that are affected. This is a process, so it is important you do not run out of formula or the program will be interrupted. One regular size
bottle is 11ml and contains over 120 applications. Or save 23% with our large size - 33ml with over 360 applications per bottle.
Q. How soon will I see results?
A. Time varies from person to person; the process takes anywhere from a week to a few weeks. Most people will experience quick improvement of the symptoms of nail fungus.
Q. How do I apply the product?
A. Place a drop or two on a Q-Tip or use your fingers to apply topically directly to your affected nails.
Q. Can I use your formula for athlete's foot.
A. H-Nail Fungus formula is formulated specifically for fingernail and toenail fungus. We suggest using our H-Athlete's Foot Formula to treat the symptoms of athlete's foot and foot fungus.
Q. What about greenies - will this product work ?
A. Yes, H-Nail Fungus Formula is effective on greenies, otherwise known as green nail beds, which are caused by excess moisture stuck under false nails.
Q. Is my nail fungus contagious?
A. Nail fungus can be very contagious. We shed skin all the time which usually ends up on the floor. If someone walks on the dead skin, they could be infected with the fungus.
Order Now | Return To The Top
H-Psoriasis Formula
Q. How soon will I see results?
A. Psoriasis is a non-contagious skin condition that varies enormously in severity. Therefore, application will differ. Symptoms such as the discomfort and itching are generally relieved with topical
applications, followed by the flaking and lesions. The formula will then work on repairing the damaged skin.
Q. Will I need more than one bottle of your formula?
A. This depends on the severity of your psoriasis. More severe cases may require additional formula. We offer two size bottles: An 11ml and a 33ml size. The 11ml size is sufficient for a small area,
if you are applying to a larger area, you will need the 33ml size bottle.
Q. How do I apply the product?
A. Use your fingers or a Q-Tip to apply a small amount of formula to any affected areas. Two drops can also be added to your bath water when bathing.
Q. What about Psoriatic Arthritis?
A. For Psoriatic Arthritis, use in conjunction with H-Arthritis Formula.
Click here for our Psoriasis and Arthritis bundle packs
Order Now | Return To The Top
H-Rosacea Formula
Q. How soon will I see results?
A. Results differ from person to person. Visible outbreaks will diminish over time and it may take anywhere from a few days to a number of weeks to control symptoms of Rosacea including reduction of
redness and sensitivity.
Q. Will I need more than one bottle of your formula?
A. This will depend on the severity. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Rosacea formula in one regular size bottle, sufficient for over
100 applications and 33ml in our large size, sufficient for over 300 applications. You save 23% with our large 33ml size.
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically directly to the affected areas.
Q. Does diet affect the condition?
A. A healthy diet is always important. A healthy diet and exercise will increase lymphatic flow and along with H-Rosacea can assist in reduction of those symptoms. It is also important to drink a lot
of water in order to cleanse your system.
Q. What if I have scars from Rosacea?
A. Our H-Scars Formula is excellent for any type of scar and can be used in conjunction with H-Rosacea Formula if you have scars as well as a current rosacea outbreak.
Q. What if I have also have acne?
A. Our H-Acne Formula is formulated specifically to reduce the symptoms of acne outbreaks and is perfect for acne anywhere on the body and can also be used in conjunction with H-Rosacea Formula if
you have acne as well as a current rosacea outbreak.
Order Now | Return To The Top
H-Scars Formula
Q. I’ve had a scar for 30 years, will your product help?
A. Yes, the formula is effective in reducing the symptoms of any scar, no matter how old it is.
Q. How much formula do I need?
A. This depends on how many scars you are addressing. One small 11ml bottle is sufficient for a small area.
Q. How do I apply the product?
A. Use 1-2 drops on your finger or on a Q-Tip and gently massage into the affected areas three times per day.
Q. Can I use this product if I am pregnant?
A. H-Scars Formula may not be used when pregnant or breastfeeding.
Q. Can I use this on my child?
A. H-Scars Formula may be used for children from age 4 years.
Order Now | Return To The Top
H-Skin Tags Formula
Q. How soon will I see results?
A. Results will vary from person to person. The process takes anywhere from 2 to 6 weeks, stubborn skin tags may take a little longer for results. Please be patient, the formula works naturally to
treat skin tag symptoms, leaving no scarring.
It is important to apply three times a day and do not miss any applications. If you miss one day it will push back the process by 2 to 4 days. Do not pick or scrape the skin tags, allow the formula
to do the work.
Q. Do I need more than one bottle of H-Skin Tags Formula?
A. This depends on the amount and size of your skin tags. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Skin Tags Formula, sufficient for over 120
applications. This will work for a few skin tags. If you have many skin tags, we suggest starting with at least two 11ml bottles of formula or save 23% with our large 33 ml bottle.
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically. Simply dab each skin tag, do not saturate them.
Q. My son is 3 years old, can I use H-Skin Tags Formula?
A. We do not recommend using our formula on children under the age of 4.
Q. Do I need to place a band aid onto the skin tags after application?
A. If you choose, you may use a band aid on hands or fingers. However, it is not a necessity.
Order Now | Return To The Top
H-Shingles Formula
Q. How soon will I see results?
A. Results vary from person to person. Please be patient and be consistent with applications. Regular use of H-Shingles Formula will help treat the symptoms of Shingles and future outbreaks.
Q. How many applications will I get out of one bottle?
A. H-Shingles Formula is very concentrated, used for outbreaks and Neuralgia. Only a few drops of formula are used for each application. If you have an outbreak on a large area of your body we
suggest getting the large 33ml bottle.
Q. How do I apply the product?
A. Place a few drops of formula on your fingertips and apply topically to the condition.
Q. What about PHN (Post Herpetic Neuralgia) ?
A. PHN can occur after an outbreak and can be extremely painful. If shingles symptoms are treated early, PHN can be prevented completely. If, however, you suffer PHN after the outbreak, H-Shingles
Formula is one of the few products on the market which will successfully treat PHN symptoms too!
Order Now | Return To The Top
H-Stretch Marks Formula
Q. I have used other products, why will your product help me?
A. Our natural product works to gently reduce the appearance of stretch marks using pure natural essential oils. The formula is gentle and extremely effective.
Q. Can I use your product during nursing?
A. Our formula has been specifically formulated for pregnant and nursing mothers. Please consult your Physician before using this product if pregnant or nursing.
Q. When do I begin applying the formula during pregnancy?
A. Start applying the formula on your abdomen and breasts from your fourth month.
Q. How soon will I see results?
A. The process varies from person to person. You will start to see changes after 6-8 weeks of use.
Q. How do I apply the product?
A. Place a few drops on your fingers and gently massage onto the affected areas.
Q. How often do I apply the product?
A. A minimum of two times per day, morning and evening. If you wish and it is convenient, you may also apply the formula in the middle of the day.
Order Now | Return To The Top
H-Varicose Veins Formula
Q. How do I apply the product?
A. The formula is applied topically to the condition. With a few drops of the formula on your fingers, apply above and below the veins (not directly onto them) gently working upwards from ankle to
thigh. Apply 3 times a day.
Q. How soon will I see results?
A. The process differs from person to person. The formula will help to reduce the symptoms and discomfort of varicose veins. Continue to use throughout your pregnancy and for a few months after
giving birth. Please consult a physician before using this product if you are pregnant or nursing.
Q. How can I prevent varicose veins?
A. Walking is recommended to stimulate blood flow. Once a day lie in your bed for 15 minutes with your feet propped up higher than your head, this will help to stimulate blood flow through your
entire body.
Order Now | Return To The Top
H-Warts Formula (common warts, plantar warts and facial warts)
Q. How soon will I see results?
A. Results will vary from person to person. The process takes anywhere from 2 to 6 weeks, stubborn warts may take a little longer. Please be patient, the formula works daily to reduce the symptoms of
plantar, facial, common and flat warts with no scarring.
It is important to apply three times a day and do not miss any applications. If you miss one day it will push back the process by 2 to 4 days. Do not pick or scrape the warts, allow the formula to do
the work.
Q. Do I need more than one bottle of H-Warts Formula?
A. This depends on the amount and size of your warts. It is important that you do not run out of formula and interrupt the program. There is 11ml of H-Warts Formula, sufficient for over 120
applications. This will work for a few warts. If you have many warts, we suggest starting with at least two 11ml bottles of formula or save 23% with our large 33 ml bottle.
Q. How do I apply the product?
A. Place a few drops on a Q-Tip or use your fingers to apply topically. Simply dab each wart, it is not necessary to saturate them.
Q. My son is 3 years old, can I apply H-Warts Formula to his fingers?
A. We do not recommend using our formula on children under the age of 4.
Q. Do I need to place a band aid onto the warts after application?
A. If you choose, you may use a band aid on hands or fingers. However, it is not a necessity.
Order Now | Return To The Top
Msensual Formula (For male sexual enhancement)
Q. It says the formula is not to be used orally; can I still have oral sex?
A. You may still participate in oral sex.
Q. How long does Msensual Formula take to work?
A. For most people Msensual Formula begins to take effect soon after applying. A tingling feeling and warmth will enhance arousal.
Q. How do I apply the product?
A. Apply a few drops of Msensual Formula to the shaft and head of the penis, as well as to the base of the scrotum, which is the start of the perineum (area of skin between the scrotum/testicles and
the anus).
Q. So Msensual Formula improves my erection?
A. Msensual Formula will help to increase pleasure from sex and may also increase the strength of your erection.
Order Now | Return To The Top
Fsensual Formula (For female sexual enhancement, increased libido)
Q. The formula is not to be used orally; can I still have oral sex?
A. You may still participate in oral sex.
Q. How long does Fsensual Formula take to work?
A. For most people Fsensual Formula begins to take effect soon after applying. A tingling feeling and warmth will enhance arousal.
Q. How do I apply the product?
A. Apply a few drops of Fsensual Formula to the inner labia and clitoris before sexual interaction for rapidly heightened arousal and an enhanced sexual experience. To increase libido, and improve
your sex drive, try Fsensual Formula every day.
Order Now | Return To The Top | {"url":"http://www.amoils.com/faq.html","timestamp":"2014-04-17T10:01:09Z","content_type":null,"content_length":"73788","record_id":"<urn:uuid:118ee047-3bf9-425a-be49-f676a79b6160>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
New York Prealgebra Tutor
Find a New York Prealgebra Tutor
Hello! My name is Jessika and I am a recent graduate of Carnegie Mellon University, with a Bachelors of Science in Biological Sciences and a Bachelors of Science in Policy & Management. I hope to
become a veterinarian one day and I currently work as a veterinary assistant at an emergency animal hospital.
18 Subjects: including prealgebra, chemistry, geometry, biology
...I have prepared students for Global and American History, English, Integrated Algebra and Spanish. I have been a proctor in many other Regents subjects. As a former Social Studies teacher, I
often prepare students for regents in American History and Global History.
27 Subjects: including prealgebra, reading, Spanish, SAT math
...I have taught low- and high-level students, have been co-teaching students with different learning modes, and collaborating with professionals since the first day I stepped into the classroom.
In addition, I have tutored many students not just in science and in high school, but also middle schoo...
18 Subjects: including prealgebra, reading, biology, algebra 1
...I'm patient and try to make math fun, and as a teacher I understand that students have different learning styles - I can teach to those. I've taught math across grades 6-8 in Manhattan, the
Bronx and currently Queens. I can help your child with any/all middle school math concepts/skills and hel...
4 Subjects: including prealgebra, algebra 1, algebra 2, linear algebra
I am one of the 100 or so people (out of nearly 300,000 each year) who scored 790 or above. But more than that, I have been tutoring for more than 25 years, and I know how to get the most out of a
student. I have had great success tutoring GMAT both independently and for GMAT prep companies.
11 Subjects: including prealgebra, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/New_York_Prealgebra_tutors.php","timestamp":"2014-04-17T13:43:14Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:5892027f-8cc9-4008-b829-54a0127dcc99>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Variable Torque Control of Offshore Wind Turbine on Spar Floating Platform Using Advanced RBF Neural Network
Abstract and Applied Analysis
Volume 2014 (2014), Article ID 903493, 7 pages
Research Article
Variable Torque Control of Offshore Wind Turbine on Spar Floating Platform Using Advanced RBF Neural Network
^1Intelligent Systems and New Energy Technology Research Institute, Chongqing University, Chongqing 400044, China
^2Institute of Intelligent System and Renewable Energy Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
^3Web Science Center, University of Electronic Science and Technology of China, Chengdu 611731, China
Received 2 January 2014; Revised 14 January 2014; Accepted 15 January 2014; Published 6 March 2014
Academic Editor: Xiaojie Su
Copyright © 2014 Lei Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Offshore floating wind turbine (OFWT) has been a challenging research spot because of the high-quality wind power and complex load environment. This paper focuses on the research of variable torque
control of offshore wind turbine on Spar floating platform. The control objective in below-rated wind speed region is to optimize the output power by tracking the optimal tip-speed ratio and ideal
power curve. Aiming at the external disturbances and nonlinear uncertain dynamic systems of OFWT because of the proximity to load centers and strong wave coupling, this paper proposes an advanced
radial basis function (RBF) neural network approach for torque control of OFWT system at speeds lower than rated wind speed. The robust RBF neural network weight adaptive rules are acquired based on
the Lyapunov stability analysis. The proposed control approach is tested and compared with the NREL baseline controller using the “NREL offshore 5MW wind turbine” model mounted on a Spar floating
platform run on FAST and Matlab/Simulink, operating in the below-rated wind speed condition. The simulation results show a better performance in tracking the optimal output power curve, therefore,
completing the maximum wind energy utilization.
1. Introduction
Wind energy has become an important part of renewable energy. Actively developing wind energy is significantly meaningful for optimizing the energy system structure, easing the energy crisis, and protecting the environment. With the rapid development of wind energy all over the world, promising and reliable wind turbine concepts have been developed. Offshore wind turbines make it possible to go into water deeper than 60m [1]; therefore, they have become a key research focus in the field of renewable energy.
The floating offshore wind turbine (OFWT) concept provides a groundbreaking strategy to fully utilize the high-quality wind power in deep waters. The design concept of “large floating offshore wind
turbine” was first proposed by Heronemus from the Massachusetts Institute of Technology (MIT) in 1972 [2, 3]. The National Renewable Energy Laboratory (NREL) and MIT have completed the dynamic system
modeling of OFWT and the three types of floating platform: tension leg platform with suction pile anchors, Spar-buoy with catenary mooring, drag-embedded anchors and barge with catenary mooring lines
through OC3 projects [4]. Figure 1 shows the three primary types of floating offshore wind turbine concepts.
Previous research results show that, compared to onshore wind turbines, OFWTs with six degrees of freedom are prone to pitching motion and produce complex dynamic loads because of their proximity to load centers and strong wave coupling [5]. Meanwhile, with the larger scale (the capacity of OFWTs reaches up to 10MW and the blade diameter approaches 200 meters), the blades of an OFWT produce larger uneven loads due to the effect of turbulence, wind shear, tower shadow, and spindle tilt. The accumulation of the above two types of loads will have a devastating impact on the fatigue life and output power quality of the OFWT system. Therefore, it is urgently necessary to reduce fatigue loads and improve output power quality for the OFWT system by utilizing advanced control strategies.
Control of OFWT is a relatively new yet challenging research area. There have been a large number of recent achievements in the research of blade pitch control for OFWT in the above-rated wind speed
region [6–13]. In our previous work [6], we propose a computationally inexpensive robust adaptive control approach with memory-based compensation for blade pitch control. However, works on the
variable speed control for OFWT system in below-rated wind speed region are relatively few.
In this study, to address the challenge that the system parameters of OFWT are varying and uncertain due to the complex external wind and wave disturbances, an adaptive radial basis function (RBF)
neural network approach is proposed for torque control of OFWT system at speeds lower than rated wind speed. The robust RBF neural network weight adaptive rules are acquired based on the Lyapunov
stability analysis. The proposed torque controller based on RBF neural network is presented and mounted on a Spar floating platform for performance comparison with the baseline torque controller in
the below-rated wind speed region.
Section 2 briefly presents the wind turbine model and the Spar floating platform utilized in this paper. Section 3 describes the two implemented controllers: the baseline torque controller and the
proposed variable torque controller based on RBF neural network. Section 4 shows the simulation and results, in which performances of the above two controllers are compared with each other on Spar
floating platform. Finally, conclusions are reported in Section 5.
2. Wind Turbine and Platform Models
2.1. 5MW Offshore Wind Turbine Model
The basic properties of future offshore turbines can be estimated by considering the amount of kinetic energy density in the wind, which can be converted into kinetic energy of the turbine shaft. The expression for the power produced by the wind is simply given by
P = (1/2) ρ A v³ C_p(λ, β),
where ρ is the air density and A is the swept area of the turbine rotor with a radius R, giving A = πR². v is the wind speed passing the rotor. C_p denotes the power coefficient of the wind turbine, which is a nonlinear function of the tip-speed ratio λ and the pitch angle β [14]. Figure 2 depicts the curve of power coefficients for a variable speed and variable pitch wind turbine. It indicates that, for a different β, there will be a different curve for C_p, while, for a fixed β, there will be an optimal λ at which the power output is maximum. In addition, for any tip-speed ratio λ, the power coefficient is largest when the blade pitch angle β is at its smallest value. When β increases, C_p decreases simultaneously.
Note that the tip-speed ratio is defined as λ = ω_r R / v, where ω_r R is the tip speed and ω_r is the rotor speed.
For a constant value of β, the mathematical model of C_p is expressed as an empirical function of the tip-speed ratio λ whose coefficients depend on the aerodynamic design of the blade and the operating conditions of the wind turbine; the coefficient values used in this paper are taken from [15]. For the “NREL 5MW reference offshore wind turbine” model simulated in this paper, the peak power coefficient of 0.482 occurred at a tip-speed ratio of 7.55 and a rotor-collective blade-pitch angle of 0° [16].
In the case of the variable speed wind power generation system, maximum power point control of the wind turbine can be adopted. The maximum power of the wind turbine is then given by P_max = (1/2) ρ π R² C_p,max (ω_r R / λ_opt)³, that is, the power captured when the rotor speed keeps the tip-speed ratio at its optimal value λ_opt.
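A minimal numerical sketch of these relations (not part of the paper; the air density and rotor radius below are assumed values for the NREL 5MW turbine, since Table 1 is not reproduced here, while the peak power coefficient and optimal tip-speed ratio are the values quoted above):

import math

RHO = 1.225          # air density, kg/m^3 (assumed standard value)
R = 63.0             # rotor radius, m (assumed for the NREL 5MW turbine)
CP_MAX = 0.482       # peak power coefficient quoted in the text
LAMBDA_OPT = 7.55    # optimal tip-speed ratio quoted in the text

def aero_power(v, cp):
    # P = 0.5 * rho * A * v^3 * Cp, with swept area A = pi * R^2
    return 0.5 * RHO * math.pi * R ** 2 * v ** 3 * cp

def tip_speed_ratio(omega_r, v):
    # lambda = omega_r * R / v
    return omega_r * R / v

def optimal_power(omega_r):
    # P_max = 0.5 * rho * pi * R^2 * Cp_max * (omega_r * R / lambda_opt)^3
    v_at_opt = omega_r * R / LAMBDA_OPT   # wind speed for which this rotor speed is optimal
    return aero_power(v_at_opt, CP_MAX)

For example, optimal_power(1.0) returns the power target that keeps the tip-speed ratio at 7.55 for a rotor speed of 1 rad/s.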
The physical properties of the specified wind turbine model used for analysis, the “NREL 5MW reference offshore wind turbine,” are listed in Table 1 [16]. This wind turbine is mounted on a Spar
floating platform.
2.2. Floating Platform
The Spar-buoy platform is modeled for the support structure. The NREL 5 MW offshore floating platform input properties for the OC3-Hywind Spar-buoy used in this paper are briefly summarized in Table
2 [4].
3. Implemented Controllers
This section gives the detailed information about the two controllers simulated in the analysis.
3.1. The Baseline Generator Torque Controller
The baseline generator torque controller is built on the best performance presented by Jonkman in his previous research on the Spar-buoy platform [17].
In the below rated wind speed region, the purpose is to optimize power capture. The generator torque is proportional to the square of the filtered generator speed to maintain a constant optimal
tip-speed ratio.
The generator torque for this region is expressed as T_g = k ω_f², where ω_f is the filtered rotor speed and k is the optimal-torque gain; T_0 denotes the generator torque at the rotor speed ω_0 at which this region starts, T_rated is the rated torque, and ω_rated is the rotor speed at which the rated torque is reached.
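A minimal sketch of this baseline law (illustrative only and reusing the constants defined in the sketch above; the gain expression is the standard region-2 formula and the rated-torque value is a placeholder, neither of which is stated explicitly in the paper):

K_OPT = 0.5 * RHO * math.pi * R ** 5 * CP_MAX / LAMBDA_OPT ** 3   # assumed standard optimal-torque gain
T_RATED = 4.18e6   # placeholder rated torque at the rotor side, N*m (assumed)

def baseline_generator_torque(omega_f):
    # Torque proportional to the square of the filtered rotor speed,
    # saturated once the rated torque is reached.
    return min(K_OPT * omega_f ** 2, T_RATED)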
3.2. Advanced Generator Torque Controller Based on RBF Neural Network
We propose a RBF neural network for variable torque control of the OFWT system. The total number of input signals in the OFWT torque control system is no more than 4. Consequently, it is a
computationally inexpensive approach to utilize the RBF neural network for linearization and approximation.
In this paper, the RBF neural network is a three-layer forward network, including an input layer, a hidden layer with a Gaussian activation function, and a linear output layer. The mapping from input
to output is nonlinear, while the mapping from hidden layer to output layer is linear, therefore speeding up the process of study obviously and avoiding local minimum problem. The topological
structure of RBF network is presented in Figure 3.
The control block diagram of RBF neural network is illustrated in Figure 4.
In the RBF network, x is the input vector and h_j(x) is a nonlinear RBF activation function, which is given by
h_j(x) = exp( −‖x − c_j‖² / (2 b_j²) ),  j = 1, …, m,
where m is the number of neurons in the hidden layer and c_j is the central vector of the jth hidden neuron. B = [b_1, …, b_m]ᵀ is the basis-width vector, b_j is the base-width constant of the jth node, and the weight vector of the linear output neurons is W = [w_1, …, w_m]ᵀ.
The output of the neural network is defined as y(x) = Wᵀ h(x) = Σ_{j=1}^{m} w_j h_j(x).
From previous research results [13, 18–25], we could learn that an RBF neural network with enough hidden neurons can approximate any nonlinear continuous function with arbitrary precision. In this
paper, in order to train the RBF neural network, we utilize the Lyapunov stability to get the weights updating rules of the RBF neural network.
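A small sketch of such a Gaussian RBF network (illustrative only; the NumPy implementation and the class name are assumptions, not from the paper):

import numpy as np

class GaussianRBF:
    def __init__(self, centers, widths):
        self.c = np.asarray(centers, dtype=float)   # shape (m, n): one center vector per hidden neuron
        self.b = np.asarray(widths, dtype=float)    # shape (m,): base-width constant of each neuron
        self.w = np.zeros(len(self.b))              # shape (m,): linear output-layer weights

    def hidden(self, x):
        # h_j(x) = exp(-||x - c_j||^2 / (2 * b_j^2))
        d2 = np.sum((self.c - np.asarray(x, dtype=float)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.b ** 2))

    def output(self, x):
        # y = w^T h(x): linear map from the hidden layer to the scalar output
        return float(self.w @ self.hidden(x))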
In the first mode of operating at variable torque control, where the wind speed is less than the rated speed region, the electrical torque of the wind turbine must be adjusted to make the rotor speed
track the desired speed that is specified according to the optimal tip-speed ratio. The drive train dynamics are depicted in Figure 5. The mechanical motion equations are those of the two-mass drive-train model of Figure 5, in which J_r and J_g are the moments of inertia of the rotor and the generator, B_r and B_g are the coefficients of viscous friction of the rotor and the generator, respectively, K_r and K_g are the damping coefficient and stiffness of the rotor and generator shafts, respectively, T_r, T_g, T_ls, and T_hs are the shaft torques at the wind turbine end, at the generator end, and before and after the gearbox, respectively, x_t is the tower displacement, n_g is the gearbox ratio, and θ_r and θ_g are the mechanical angular positions of the rotor and the generator.
We rewrite the above mechanical motion equations in a compact form in terms of the rotor speed, with lumped parameters that combine the inertia, friction, and stiffness terms defined above.
The affine form of the rotor speed equation can then be characterized as ω̇_r = f(x) + b·u, where b is a constant negative value, u is the control input signal (the generator torque command), and f(x) collects the remaining lumped rotor dynamics.
Construct a nonlinear approximation of f(x) through the RBF neural network, f(x) = W*ᵀ h(x) + ε, where ε represents the lumped RBF neural network approximation error.
To design the rotor speed tracking controller, define the rotor tracking error as e = ω_r − ω_opt, where ω_opt is the optimal rotor speed, which is defined as ω_opt = λ_opt v / R, where the optimum tip-speed ratio λ_opt is given in Table 1.
The control system can be justified by considering the Lyapunov function candidate V = (1/2) e² + (1/(2γ)) W̃ᵀ W̃, where γ is the positive adaptation gain and W̃ = W* − Ŵ is the weight error; W* and Ŵ are the ideal weight and estimated weight of the network, respectively. The Lyapunov function candidate is a positive definite function, and a nonpositive derivative V̇ is the sufficient condition for the robust stability of the nonlinear system. Deriving the approximation of f(x) through the neural network as Ŵᵀ h(x), consider, for the stability of the nonlinear system, the following controller: u = (1/b)( −Ŵᵀ h(x) + ω̇_opt − K_e e ), where K_e is the rotor speed tracking error feedback gain.
Proof. Based on (18) and (19), we can obtain the derivative of V along the closed-loop trajectories.
The weight updating rule of the network can be obtained through the e-modification method, Ŵ̇ = γ( e h(x) − κ |e| Ŵ ), where κ is a constant positive value. Combining (20) and (21) gives an upper bound on V̇ in terms of e and W̃.
It is assumed that the ideal weight W* and the approximation error ε are bounded, so the bound on V̇ depends only on the sizes of e and W̃.
If |e| or ‖W̃‖ grows beyond a fixed threshold, we could get V̇ < 0.
Therefore, the overall dynamic system is uniformly ultimately bounded.
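A rough discrete-time sketch of the resulting control and adaptation laws (illustrative only and reusing the GaussianRBF class above; the specific expressions, signs, and gains follow the generic e-modification scheme and are assumptions rather than the paper's exact equations, and the derivative of the optimal speed is dropped for brevity):

def adaptive_torque_step(net, omega_r, omega_opt, b_gain, k_e, gamma, kappa, dt):
    # One step of the rotor-speed tracking controller with RBF weight adaptation.
    e = omega_r - omega_opt                      # rotor speed tracking error
    h = net.hidden([omega_r])                    # hidden-layer activations (1-D input here)
    f_hat = float(net.w @ h)                     # RBF estimate of the lumped dynamics f(x)
    u = (-f_hat - k_e * e) / b_gain              # feedback-linearising torque command
    # e-modification weight update: adaptation plus a damping term proportional to |e|
    net.w = net.w + dt * gamma * (e * h - kappa * abs(e) * net.w)
    return u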
From the above equations, we can see that the estimated wind speed input enables the generator to track the optimal output power curve by generating a reference rotor speed. There are many previous
research efforts on estimating wind speed without directly measuring it. In this paper, we utilize the sensorless scheme presented in [26] to estimate the wind speed with a neural network. Then we could get the reference rotor speed from the estimated wind speed v̂ as ω_ref = λ_opt v̂ / R.
The block diagram of the RBF neural network variable speed control scheme of the OFWT system is depicted in Figure 6.
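The reference-speed generation and one step of the overall loop could then be sketched as follows (again illustrative; estimate_wind_speed is a hypothetical stand-in for the sensorless estimator of [26], and the gain values are placeholders):

def estimate_wind_speed(measurements):
    # Placeholder for the neural-network sensorless estimator of [26];
    # here it simply returns a pre-computed estimate supplied by the caller.
    return measurements["v_hat"]

def reference_rotor_speed(v_hat):
    # omega_ref = lambda_opt * v_hat / R
    return LAMBDA_OPT * v_hat / R

def control_loop_step(net, omega_r, measurements, dt):
    v_hat = estimate_wind_speed(measurements)
    omega_ref = reference_rotor_speed(v_hat)
    return adaptive_torque_step(net, omega_r, omega_ref,
                                b_gain=-1.0, k_e=5.0, gamma=0.1, kappa=0.01, dt=dt)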
4. Simulation and Results
In this section, the “NREL 5MW reference offshore wind turbine” installed on an OC3-Hywind Spar-buoy floating platform is tested and simulated with FAST and MATLAB/Simulink under a turbulent wind with a mean speed of 8 m/s, which is below the rated wind speed.
To verify the robustness and self-adaptation of the proposed variable torque controller based on RBF neural network, compared simulations of two types of controllers, the baseline torque controller
and the proposed torque controller, have been performed on the same offshore wind turbine system. Two comparison performances are simulated based on power tracking: generator output power and torque.
Figure 7 shows the turbulence wind and wave conditions.
Figure 8 compares the average generator output power tracking for the proposed torque controller based on RBF neural network and the baseline torque controller with the optimal output power
trajectory. It can be observed that the proposed adaptive torque controller is able to follow the optimal output power curve with better tracking accuracy than the baseline torque controller, thereby achieving maximum offshore wind energy utilization.
Figure 9 presents the corresponding comparison of the generator torque curves.
5. Conclusions
This paper mainly focuses on the variable torque control of the OFWT system for power tracking in the below-rated wind speed region on a Spar-buoy floating platform. To address the external disturbances and uncertain system parameters of the OFWT due to the much more complicated external load environment and strong wave coupling compared to an onshore wind turbine, a robust adaptive torque controller
based on RBF neural network is proposed and tested. Two types of controllers are implemented on the OC3-Hywind Spar-buoy floating platform for performance comparison: the baseline torque controller
and the proposed torque controller.
According to the average simulation results, the proposed torque controller based on RBF neural network is not only robust to complex wind and wave disturbances but also adaptive to varying and
uncertain system parameters. As a result, the advanced controller shows better performance in tracking the optimal generator output power curve, thereby achieving maximum wind energy utilization.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
This work was supported in part by the National High Technology Research and Development Program of China (SS2012AA052302), the National Natural Science Foundation of China (no. 51205046), and the
Fundamental Research Funds for the Central Universities (no. CDJZR170008).
1. F. G. Nielsen, T. D. Hanson, and B. Skaare, “Integrated dynamic analysis of floating offshore wind turbines,” in Proceedings of the 25TH International Conference on Offshore Mechanics and Arctic
Engineering (OMAE '06), pp. 671–679, Hamburg, Germany, June 2006.
2. W. E. Heronemus, “Pollution-free energy from offshore wind,” in Proceedings of the 8th Annual Conference and Exposition Marine Technology Society, Washington, DC, USA, 1972.
3. W. Musial and S. Butterfield, “Future for offshore wind energy in the United States,” Tech. Rep. 36313, National Renewable Energy Laboratory, Golden, Colo, USA, 2004.
4. J. M. Jonkman, “Dynamics modeling and loads analysis of an offshore floating wind turbine,” Tech. Rep. 41958, National Renewable Energy Laboratory, Golden, Colo, USA, 2007.
5. J. M. Jonkman and D. Matha, “Dynamics of offshore floating wind turbines-analysis of three concepts,” Wind Energy, vol. 14, no. 4, pp. 557–569, 2011.
6. S. Zuo, Y. D. Song, L. Wang, and Q.-W. Song, “Computationally inexpensive approach for pitch control of offshore wind turbine on barge floating platform,” The Scientific World Journal, vol. 2013, Article ID 357849, 9 pages, 2013.
7. W. Lei, Y.-L. He, X. Jin, J. Du, and S. Ma, “Dynamic simulation analysis of floating wind turbine,” Journal of Central South University: Science and Technology, vol. 43, no. 4, pp. 1309–1314,
8. L. Wang, B. Wang, Y. Song, et al., “Fatigue loads alleviation of floating offshore wind turbine using individual pitch control,” Advances in Vibration Engineering, vol. 12, no. 4, pp. 377–390,
9. H. Namik and K. Stol, “Individual blade pitch control of floating offshore wind turbines,” Wind Energy, vol. 13, no. 1, pp. 74–85, 2010.
10. M. A. Lackner, “An investigation of variable power collective pitch control for load mitigation of floating offshore wind turbines,” Wind Energy, vol. 16, no. 3, pp. 435–444, 2012.
11. Y. D. Song, “Control of wind turbines using memory-based method,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 85, no. 3, pp. 263–275, 2000.
12. Y. D. Song, B. Dhinakaran, and X. Bao, “Control of wind turbines using nonlinear adaptive field excitation algorithms,” in Proceedings of the IEEE American Control Conference, vol. 3, pp. 1551–1555, Chicago, Ill, USA, 2000.
13. L. Wu, W. X. Zheng, and H. Gao, “Dissipativity-based sliding mode control of switched stochastic systems,” IEEE Transactions on Automatic Control, vol. 58, no. 3, pp. 785–793, 2013.
14. J. F. Conroy and R. Watson, “Frequency response capability of full converter wind turbine generators in comparison to conventional generation,” IEEE Transactions on Power Systems, vol. 23, no. 2,
pp. 649–656, 2008. View at Publisher · View at Google Scholar · View at Scopus
15. J. Zaragoza, J. Pou, A. Arias, C. Spiteri, E. Robles, and S. Ceballos, “Study and experimental verification of control tuning strategies in a variable speed wind energy conversion system,”
Renewable Energy, vol. 36, no. 5, pp. 1421–1430, 2011. View at Publisher · View at Google Scholar · View at Scopus
16. J. Jonkman, S. Butterfield, W. Musial, and G. Scott, “Definition of a 5-MW reference wind turbine for offshore system development,” Tech. Rep. TP 500-38060, National Renewable Energy Laboratory,
Golden, Colo, USA, 2009.
17. J. M. Jonkman, “Influence of control on the pitch damping of a floating wind turbine,” in Proceedings of the 46th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nev, USA, January 2008. View
at Scopus
18. R. M. Sanner and J.-J. E. Slotine, “Gaussian networks for direct adaptive control,” IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 837–863, 1992. View at Publisher · View at Google
Scholar · View at Scopus
19. Y. Kourd, D. Lefebvre, and N. Guersi, “Fault diagnosis based on neural networks and decision trees: application to DAMADICS,” International Journal of Innovative Computing, Information and
Control, vol. 9, no. 8, pp. 3185–3196, 2013.
20. C. K. Ahn and M. K. Song, “New sets of criteria for exponential ${L}_{2}-{L}_{\infty }$ stability of Takagi-Sugeno fuzzy systems combined with Hopfield neural networks,” International Journal of
Innovative Computing, Information and Control, vol. 9, no. 7, pp. 2979–2986, 2013.
21. S. Sefriti, J. Boumhidi, M. Benyakhlef, and I. Boumhidi, “Adaptive decentralized sliding mode neural network control of a class of nonlinear interconnected systems,” International Journal of
Innovative Computing, Information and Control, vol. 9, no. 7, pp. 2941–2947, 2013.
22. K. S. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990. View at
Publisher · View at Google Scholar · View at Scopus
23. L. Wu, X. Su, P. Shi, and J. Qiu, “Model approximation for discrete-time state-delay systems in the T-S fuzzy framework,” IEEE Transactions on Fuzzy Systems, vol. 19, no. 2, pp. 366–378, 2011.
View at Publisher · View at Google Scholar · View at Scopus
24. X. Su, Z. Li, Y. Feng, and L. Wu, “New global exponential stability criteria for interval-delayed neural networks,” Journal of Systems and Control Engineering, vol. 225, Proceedings of the
Institution of Mechanical Engineers, no. 1, pp. 125–136, 2011. View at Publisher · View at Google Scholar · View at Scopus
25. X. Su, P. Shi, L. Wu, and Y.-D. Song, “A novel control design on discrete-time Takagi-Sugeno fuzzy systems with time-varying delays,” IEEE Transactions on Fuzzy Systems, vol. 20, no. 6, pp.
655–671, 2013.
26. H. Li, K. L. Shi, and P. G. McLaren, “Neural-network-based sensorless maximum wind energy capture with compensated power coefficient,” IEEE Transactions on Industry Applications, vol. 41, no. 6,
pp. 1548–1556, 2005. View at Publisher · View at Google Scholar · View at Scopus | {"url":"http://www.hindawi.com/journals/aaa/2014/903493/","timestamp":"2014-04-20T11:52:03Z","content_type":null,"content_length":"231780","record_id":"<urn:uuid:14bbe53a-1e4e-408b-a73f-7ee959b5c9b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus: Finding Tangent Line Slope in Polar Form Video | MindBites
Calculus: Finding Tangent Line Slope in Polar Form
About this Lesson
• Type: Video Tutorial
• Length: 7:35
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 81 MB
• Posted: 06/26/2009
This lesson is part of the following series:
Calculus (279 lessons, $198.00)
Calculus: Parametric Equations, Polar Coordinates (18 lessons, $27.72)
Calculus: Polar Functions and Slope (2 lessons, $3.96)
Taught by Professor Edward Burger, this lesson comes from a comprehensive Calculus course. This course and others are available from Thinkwell, Inc. The full course can be found at http://
www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hopital's Rule, functions and their inverses,
improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other AP Calculus,
College Calculus and Calculus II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, "Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas" and of the textbook "The
Heart of Mathematics: An Invitation to Effective Thinking". He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math
journals, including The "Journal of Number Theory" and "American Mathematical Monthly". His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of
numbers, and the theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Parametric Equations and Polar Coordinates
Polar Functions and Slope
Finding the Slopes of Tangent Lines in Polar Form Page [1 of 2]
When you're thinking about these polar functions that have these beautiful sweeping-type curves, one natural question that we tend to ask in calculus is how would you find the slopes of tangent lines? Well, that requires us to find dy/dx - change in y over change in x. So what does dy/dx mean when you're thinking about θ's and r's? Well, the answer is not much. What we have to do is remember the conversion - how to go from polar form back to Cartesian form. So what I'd like to do is show you how to find dy/dx when you're given a polar form.
Let's just pretend that we're given some function: r = f(θ). So this is a polar form. What I'd like to do now is figure out - find dy/dx - because dy/dx always represents slope - the slope of a line. How do I do that? What I've got to do is somehow bring this back to y's and x's with the conversion method. Let me remind you that in the conversion method, what we see is that the y is always r sin θ, and the x is defined to be r cos θ, by a little application of a right triangle - right triangle action.
So these two things allow us to find the y and the x. What I want to find is dy/dx. So how do I do that? Well, the answer is, "It's too hard." So when something's too hard, don't do it! Instead, let's do something easier. Now what would be easy to do? Well, since I know that r is really a function of θ in disguise, I could actually insert that right into r. And if I were to do that, I would see the following: I would see y = f(θ) sin θ, and I would see x = f(θ) cos θ. Just plugging this fact into this general formula.
Now if I do this, this is great news because what I see here is that y now just depends upon θ. It doesn't matter what anything else in life is, if I just know θ, then I can plug it in and find out what y is. Similarly with x, if I just know θ, I can plug it in and find out what x is. That means that it would be really easy for me to take the derivative of y with respect to θ; I can just differentiate. And it would be really easy for me to take the derivative of x with respect to θ; I could just differentiate. So could I somehow use those two pieces of information to get what I want, and the answer is yes, by using the chain rule. If I use the chain rule, what would I see?
Let's take a look at the chain rule here. Let's take a look at something like dy/dθ, because that's an easy thing to compute - I just take a derivative here. In fact, all I would have to do is use the product rule. How could I think about taking dy/dx? What I could do is say, "Well, first what I'll do is I'll take dy/dθ, and now I'm going to do some fantasy math." This is something you should never do at home. I'm going to say, "Okay, what would I have to write in here in order to make it dy/dx?" Well, I already have the dy on top, so that's looking great. I want a dx on the bottom. I don't have that, so I'd better put a dx right here. And I certainly don't want that dθ in here, so I'd better get rid of it by putting it on top. Well, look, this is fantastic because this says take the derivative of y with respect to θ, and we just determined that's pretty easy. And this says take the derivative of x with respect to θ, and we've already said that's really easy. So, in fact, these both are easy to compute, and I can solve for dy/dx very easily and find what I'm after. So, in fact, what I see here is that dy/dx equals what? Well, it equals dy/dθ divided by dx/dθ. So if I just take the derivative of y with respect to θ and divide it by the derivative of x with respect to θ, I actually have dy/dx, which is what I was after.
Okay, now what would that actually work out to be? Let's actually figure it out. If I'm given the function f(θ), I plug it in here; I just have to take some derivatives. So let's take some derivatives right now and see what we get. Where should I take the derivatives, over here or over here? All right, so let's now compute the derivative. What is dy/dθ? I have to use the product rule. So it's the first times the derivative of the second. So that's f(θ) cos θ - the first times the derivative of the second - plus the second times the derivative of the first. So that would be plus f prime (θ) sin θ. So that's dy/dθ. There it is. Now what is dx/dθ? Well, first times derivative of second, that's going to give me a -f(θ) sin θ - that's the first - then plus the second times the derivative of the first, so that's going to be f prime (θ) times cos θ.
So, if you put all of this together, what I see is the following fact: dy/dx is going to equal dy/dθ, which is up here, f(θ) cos θ + f prime (θ) sin θ, divided by dx/dθ, which is -f(θ) sin θ + f prime (θ) cos θ. So, in fact, there is the formula for finding dy/dx. And in fact if you try it - if you actually work this out with the example of the rose curve here - what you would actually see is that if you were to plug in the angle θ at which the curve reaches the origin, this number would turn out to be tan θ, because this expression, if you work it out, will just become tan θ, just by taking the derivative and plugging in. So what you see here is that this slope should be tan θ, and that's exactly right. Since the angle is θ, its slope - rise over run - is the tangent of that angle, so the slope should indeed be tan θ. So this actually verifies once again the fact that right here this is going to graze the origin along this tangent line. So this, indeed, is the tangent line of this function. The function, remember, is r = cos(3θ), and it grazes it right along there. Similarly, if you plug in the other angles where the curve passes through the origin, you'll see the same effect. So in fact this will always give you the slope of a tangent line of any polar form of this type just by knowing θ and the function.
See you at the next lesson.
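As a quick check of the formula in the lecture (this sketch is an addition, not part of the transcript), one can let Python's sympy library do the differentiation; pi/6 is one of the angles at which r = cos(3θ) passes through the origin, and the computed slope there comes out to tan(pi/6).

import sympy as sp

theta = sp.symbols('theta')
f = sp.cos(3 * theta)            # the rose curve r = cos(3*theta) from the lecture

x = f * sp.cos(theta)            # x = f(theta) cos(theta)
y = f * sp.sin(theta)            # y = f(theta) sin(theta)

dydx = sp.diff(y, theta) / sp.diff(x, theta)   # dy/dx = (dy/dtheta)/(dx/dtheta)

# At theta = pi/6 the curve passes through the origin (cos(3*theta) = 0),
# and the slope of the tangent line there should be tan(pi/6).
slope = sp.simplify(dydx.subs(theta, sp.pi / 6))
print(slope, sp.tan(sp.pi / 6))   # both print sqrt(3)/3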
Ancient Math Symbols
Date: 09/07/97 at 10:43:21
From: Carly Siegel
Subject: Ancient math symbols
I need to know the numerals for 1, 10, 100, 1000 in Arab, Sumerian,
Greek, Roman, and Hindu. We have looked in Encarta and we need to
confirm our findings.
Date: 09/13/97 at 18:31:53
From: Doctor Mike
Subject: Re: Ancient math symbols
Hello Carly,
You are wise to ask for confirmation. One reputable source that I
checked had an error in the Sumerian (Babylonian) 900. They changed
the incorrect example in a later edition, and it still was wrong!
So, it's best to understand things yourself, to be sure.
I have not seen Encarta but some friends of mine like it. If my
answers differ from what you found there, it does not necessarily
mean that they are wrong or I am wrong. History is a "long time";
there have been several versions and writing styles. Even modern
numerals are written somewhat differently in Europe than in the U.S.
ROMAN numerals for 1, 10, 100, and 1000 are I, X, C, and M.
ARABIC and HINDU are very similar to modern numerals except that a
dot may be used for the place-holder zero, ie, "1..." for 1000.
There's another older Arabic system that's significantly different.
GREEK has an older system, and a newer one (still over 2000 years
old!) The newer system uses Greek letters for 1 to 9, 10 to 90, and
100 to 900. 1 is written as A (alpha), 10 as I (iota), and 100 as P
(rho). They did use a limited place system, so 111 was written as PIA.
For 1000 and above they used a mark such as "," or "/" before the
number of thousands. So, 1000 is ,A or /A , and ten thousand is
,I or /I.
Now, for something completely different, SUMERIAN (Babylonian), which
is sometimes called cuneiform writing. They used a symbol sort of
like a "Y" for one, and a symbol sort of like "<" for ten. There have
been several ways of writing these, and I won't get into those
differences. These 2 symbols were combined in pretty obvious ways,
such as:
<YYY and <<<
YYY <
for 16 and 40. The left arrangement has one 10-symbol and six
1-symbols for a total of 16, and the right arrangement has four
10-symbols for a total of 40. So far you can see that these numbers
take up a lot of space, but otherwise this system SEEMS fairly
predictable. But hold on to your hat; it's going to be a bumpy ride
from here on!
Instead of using powers of 10 as we do for place values, they used
powers of 60. This is similar to the way we count seconds and minutes
of time. We count 14 min. 58 sec., 14 min. 59 sec., 15 min., 15 min.
1 sec., etc. Also we count minutes up to 59 minutes and then add on
another hour. They did this for all their numbers for counting just
anything. You asked about 100, which equals 60 + 40. I wrote it this
way because for cuneiform numerals, we put 1 in the 60's place and
40 in the 1's or units place. We have seen 40 above, so 100 is:
Y <<<
To figure out 1000 we first need to re-write it as 60*16 + 40, which
you should verify yourself. So, 16 in the 60's place and 40 in the
units place makes 1000 comes out as:
<YYY <<<
YYY <
If that seems strange, keep in mind that it is not much stranger than
saying "16 minutes 40 seconds" instead of "1000 seconds".
At this point, I'm wondering how close Dr. Math is to Encarta. (grin)
For learning some more about this I would suggest looking in a major
encyclopedia for an article on "Number," "Numeration," or "History of
Math." Also look in your public library catalog for books whose
title is "Number ...." or includes the word "number." Have fun.
Good question. I enjoyed explaining it, and not just giving you the
"bottom line" answers. I hope this helps.
-Doctor Mike, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/57039.html","timestamp":"2014-04-20T06:31:04Z","content_type":null,"content_length":"9244","record_id":"<urn:uuid:35697481-a653-4eb7-a97b-5e7a19e4cd67>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Each part of the spatial reference has a number of properties, especially the coordinate system, which defines what map projection options are used to define horizontal coordinates.
A SpatialReference can be easily created from existing datasets and PRJ files:
1. Use a PRJ file as an argument to the SpatialReference class.
import arcpy, os
prjFile = os.path.join(arcpy.GetInstallInfo()["InstallDir"],
"Coordinate Systems/Geographic Coordinate Systems/North America/NAD 1983.prj")
spatialRef = arcpy.SpatialReference(prjFile)
2. Describe a dataset and access its spatialReference property.
dataset = "C:/Data/Landbase.gdb/Wetlands"
spatialRef = arcpy.Describe(dataset).spatialReference
Which spatial reference properties are available depends on the coordinate system used. In the properties list below, those properties only available with a Projected Coordinate system are denoted
with a ^1; properties available only with a Geographic Coordinate system are denoted with a ^2.
SpatialReference (prjFile)
Parameter Explanation Data Type
prjFile The projection file used to populate the spatial reference object. String
Property Explanation Data Type
The extent of the measure domain. String
(Read Only)
The measure false origin and units. String
(Read Only)
The measure resolution. Double
(Read and Write)
The measure tolerance. Double
(Read and Write)
The xy resolution. Double
(Read and Write)
The xy tolerance. Double
(Read and Write)
The extent of the Z domain. String
(Read Only)
The Z false origin and units. String
(Read Only)
The Z resolution property. Double
(Read and Write)
The Z tolerance property. Double
(Read and Write)
The abbreviated name of the spatial reference. String
(Read and Write)
The alias of the spatial reference. String
(Read and Write)
The extent of the xy domain. String
(Read Only)
The factory code of the spatial reference. Integer
(Read and Write)
The false origin and units. String
(Read Only)
Indicates whether or not m-value precision information has been defined. Boolean
(Read Only)
Indicates whether or not xy precision information has been defined. Boolean
(Read Only)
Indicates whether or not z-value precision information has been defined. Boolean
(Read Only)
Indicates whether or not the spatial reference has high precision set. Boolean
(Read and Write)
The name of the spatial reference. String
(Read and Write)
The comment string of the spatial reference. String
(Read and Write)
The type of the spatial reference. String
(Read and Write)
The usage notes. String
(Read Only)
The projected coordinate system code.^1 Integer
(Read and Write)
The projected coordinate system name.^1 String
(Read and Write)
The azimuth of a projected coordinate system.^1 Double
(Read and Write)
The central meridian of a projected coordinate system.^1 Double
(Read and Write)
The central meridian (Lambda0) of a projected coordinate system in degrees.^1 Double
(Read and Write)
The central parallel of a projected coordinate system.^1 Double
(Read and Write)
The classification of a map projection.^1 String
(Read Only)
The false easting of a projected coordinate system.^1 Double
(Read and Write)
The false northing of a projected coordinate system.^1 Double
(Read and Write)
The latitude of the first point of a projected coordinate system.^1 Double
(Read and Write)
The latitude of the second point of a projected coordinate system.^1 Double
(Read and Write)
The linear unit code.^1 Integer
(Read and Write)
The linear unit name.^1 String
(Read and Write)
The longitude of the first point of a projected coordinate system.^1 Double
(Read and Write)
The longitude of the second point of a projected coordinate system.^1 Double
(Read and Write)
The longitude of origin of a projected coordinate system.^1 Double
(Read and Write)
The projection code.^1 Integer
(Read and Write)
The projection name.^1 String
(Read and Write)
The scale factor of a projected coordinate system.^1 Double
(Read and Write)
The first parallel of a projected coordinate system.^1 Double
(Read and Write)
The second parallel of a projected coordinate system.^1 Double
(Read and Write)
The geographic coordinate system code.^2 Integer
(Read and Write)
The geographic coordinate system name.^2 String
(Read and Write)
The angular unit code.^2 Integer
(Read and Write)
The angular unit name.^2 String
(Read and Write)
The datum code.^2 Integer
(Read and Write)
The datum name.^2 String
(Read and Write)
The flattening ratio of this spheroid.^2 Double
(Read and Write)
The longitude value of this prime meridian.^2 Double
(Read and Write)
The prime meridian code.^2 Integer
(Read and Write)
The prime meridian name.^2 String
(Read and Write)
The radians per angular unit.^2 Double
(Read Only)
The semi-major axis length of this spheroid.^2 Double
(Read and Write)
The semi-minor axis length of this spheroid.^2 Double
(Read and Write)
The spheroid code.^2 Integer
(Read and Write)
The spheroid name.^2 String
(Read and Write)
Method Overview
Method Explanation
create () Creates the spatial reference object using properties.
createFromFile (prj_file) Creates the spatial reference object from a projection file.
exportToString () Exports the object to its string representation.
loadFromString (string) Restore the object using its string representation. The exportToString method can be used to create a string representation.
setDomain (x_min, x_max, y_min, y_max) Sets the XY domain.
setFalseOriginAndUnits (false_x, false_y, xy_units) Sets the XY false origin and units.
setMDomain (m_min, m_max) Sets the M domain.
setZDomain (z_min, z_max) Sets the Z domain.
setMFalseOriginAndUnits (false_m, m_units) Sets the M false origin and units.
setZFalseOriginAndUnits (false_z, z_units) Sets the Z false origin and units.
createFromFile (prj_file)
Parameter Explanation Data Type
prj_file The projection file used to populate the spatial reference object. String
Return Value
Data Type Explanation
String The string representation of the object.
Parameter Explanation Data Type
string The string representation of the object. String
setDomain (x_min, x_max, y_min, y_max)
Parameter Explanation Data Type
x_min The minimum x-value. Double
x_max The maximum x-value. Double
y_min The minimum y-value. Double
y_max The maximum y-value. Double
setFalseOriginAndUnits (false_x, false_y, xy_units)
Parameter Explanation Data Type
false_x The false x value. Double
false_y The false y value. Double
xy_units The xy units. String
setMDomain (m_min, m_max)
Parameter Explanation Data Type
m_min The minimum m-value. Double
m_max The maximum m-value. Double
setZDomain (z_min, z_max)
Parameter Explanation Data Type
z_min The minimum z-value. Double
z_max The maximum z-value. Double
setMFalseOriginAndUnits (false_m, m_units)
Parameter Explanation Data Type
false_m The false m-value. Double
m_units The m units. Double
setZFalseOriginAndUnits (false_z, z_units)
Parameter Explanation Data Type
false_z The false z-value. Double
z_units The false z units. Double
Code Sample
SpatialReference example
For each feature class in a workspace, print the name of its spatial reference.
import arcpy
from arcpy import env
# Set the workspace environment
env.workspace = "C:/base/base.gdb"
# Get a list of the feature classes in the input folder
fcs = arcpy.ListFeatureClasses()
# Loop through the list
for fc in fcs:
    # Create the spatial reference object
    sr = arcpy.Describe(fc).spatialReference

    # If the spatial reference is unknown
    if sr.name == "Unknown":
        print fc + " has an unknown spatial reference\n"

    # Otherwise, print out the feature class name and
    # spatial reference
    else:
        print fc + ": " + sr.name + "\n"
SpatialReference example 2
Create a SpatialReference using a .prj file.
import arcpy
prjFile = "c:/Program Files/ArcGIS/Desktop10.0/Coordinate Systems/Projected Coordinate Systems" + \
"/Continental/North America/USA Contiguous Equidistant Conic.prj"
# Create a spatial reference object using a projection file
sr = arcpy.SpatialReference(prjFile)
SpatialReference example 3
Create a SpatialReference from a factory code.
import arcpy
# Create a spatial reference object using a factory code
sr = arcpy.SpatialReference()
sr.factoryCode = 3857 | {"url":"http://help.arcgis.com/en/arcgisdesktop/10.0/help/000v/000v000000p6000000.htm","timestamp":"2014-04-20T10:50:47Z","content_type":null,"content_length":"35877","record_id":"<urn:uuid:7e28745d-8e4a-4194-8882-85b9f8cb6d20>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
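As a further sketch (not part of the original help page), the following combines several of the methods listed above: a spatial reference is built from a factory code, round-tripped through exportToString and loadFromString, and given an XY domain with setDomain. The factory code 4326 (WGS 1984) and the call to create() after setting it are illustrative assumptions based on the property and method descriptions above.

import arcpy

# Build a spatial reference from a factory code, round-trip it through its
# string representation, then set an XY domain on the copy.
sr = arcpy.SpatialReference()
sr.factoryCode = 4326          # assumed EPSG code for GCS WGS 1984
sr.create()

srString = sr.exportToString()

sr2 = arcpy.SpatialReference()
sr2.loadFromString(srString)
sr2.setDomain(-180, 180, -90, 90)

print sr2.name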
The world of Pi - Simon Plouffe / David Bailey
Warning! Genius!!!
Srinivasa Ramanujan
(1887 - 1920)
A few formulae (out of the many possible ones)
By denoting (x)[n] the value :
Slices of his life
With Ramanujan, we reach the quintessence of the study of Pi. He is the master of all 20th-century research in this domain, let's not be afraid to proclaim it!
I could spend days talking about this under-appreciated genius, who nevertheless left a masterful body of work, parts of which are still not fully understood. His life, moreover, reads like a novel....
But let us start at the beginning:
Srinivasa Ramanujan was born on 22 December 1887 in the town of Erode in the south of India, into a poor family. His father was an accountant. His mathematical precocity was quickly noticed, and at seven he received a scholarship to the secondary school of Kumbakonam (!). It is said that he would recite mathematical formulae to his classmates and that he knew a surprising number of the decimals of Pi.
From the start of his study of trigonometry, he rediscovered cos and sin, found all the relations linking the two, and was very disappointed when he learned that they were already known!!!
When he was 12, Ramanujan mastered a huge and dense book: Plane Trigonometry by Loney. When he was 15, he got hold of Synopsis of Elementary Results in Pure and Applied Mathematics by G. S. Carr, a list of 6165 theorems stated mostly without proofs. We suspect it is this book that inspired his bad habit of not giving proofs with his results!
In fact, at that period he was so obsessed with his research that he failed all his exams!
Luckily, after his wedding in 1909, he received a monthly allowance from a rich patron passionate about mathematics (R. Rao), on the recommendation of the Indian mathematicians who appreciated the discoveries already written down in what we commonly call his notebooks.
Having obtained a stable job in 1912 as a clerk at the Madras Port Trust, he was encouraged by his managers to send his results to three distinguished British mathematicians, of whom only G. Hardy replied to his letter dated 16/01/1913.
In fact, when Hardy and his colleague Littlewood examined a few of the 120 formulae and theorems sent by Ramanujan, their conviction was formed within a few hours: they were looking at a genius!
(Hardy had built a "scale of pure talent" on which he placed himself at 25, gave 30 to Littlewood, and 80 to Hilbert, the shining figure of German mathematics at the start of the century. Ramanujan was immediately rated at 100!!!)
Hardy later described the intellectual discovery of Ramanujan and its consequences as the only "romantic" event of his life....
When he looked at Ramanujan's formulae, he was disconcerted and had no idea how to prove them. But, he maintained, "they must be true, because if they were not, nobody in the world would have enough imagination to invent them!"
He had Ramanujan come to England and work with him for the next five fruitful years on the properties of several arithmetic functions. Srinivasa became a Fellow of the Royal Society in 1918 and the first Indian elected a Fellow of Trinity College (Cambridge).
Unfortunately, and it is such a shame, Ramanujan was strictly vegetarian (owing to a promise made to his mother!), and in England in the middle of the war his needs were hard to fulfil... After the war, in 1919, he returned to India seriously ill with tuberculosis and a vitamin deficiency (we all know how wet Britain is!).
His work remained of great quality despite his suffering, but he drew his last breath on 26 April 1920; he was only 32.
A well-known little anecdote, told by Hardy:
I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen.
"No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways." !!
I then asked him if he knew what the next one was. He thought about it for a minute and told me that he could not see an obvious one... In fact, the next one comes several thousand later!! (4104)
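For anyone who wants to check the anecdote by machine, here is a small brute-force search (my own addition, not part of the original page) for numbers that are a sum of two distinct positive cubes in more than one way:

from itertools import combinations
from collections import defaultdict

# Count the representations of each number as a sum of two distinct positive cubes.
ways = defaultdict(int)
for a, b in combinations(range(1, 30), 2):
    ways[a**3 + b**3] += 1

print(sorted(n for n, count in ways.items() if count >= 2)[:2])   # [1729, 4104]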
Ramanujan was passionate about pi. Many of his results involve our favourite constant...
Ramanujan wrote down his work in his notebooks, as I stated above. Unfortunately, many formulae are written in non-standard notation and without any proofs. For more than 80 years, several mathematicians (Bruce Berndt at the moment) have been trying to decipher his coded notebooks, to the delight of Science!
This is because Ramanujan worked on modular equations. But what exactly are they?
I will 'borrow' a very clear example from the book Les mathématiciens to clarify the definition.
A modular equation is an equation satisfied by a modular function of q in which the variable q appears at various powers, for example q^n; here n denotes the order of the modular equation.
Consider for example the modular equation of 7th order (n = 7):
We then look for the solution to this equation. For our case, we have:
(don't ask me how it was found...)
Up to here, no hint of Pi...
Well, we then bring in singular values: values of the modular function which satisfy some extra properties. For example, if we define, for p an integer:
Straight away we see that the bigger p is, the smaller the exponential, and the product tends toward 16q.
So, by taking a logarithm, Pi appears!
Of course, the number of decimals increases with p. We can see the whole advantage of having a relation between the values at q and at q^p, the latter giving something closer to Pi than the former (because the exponential is even smaller!).
The amazing thing about this theory is that the singular values do not depend on Pi at all, despite their definition.
Ramanujan was a great specialist of these values and calculated them in a remarkable way. In his letter to Hardy, he gave:
which allows us to get 20 decimals of Pi.
K[240] allowed him to get the first million decimals of Pi!!
But he never did any research on the algorithms that one could build from them. The Borwein brothers took up that idea; here is how they proceeded.
Principle of the Borweins' proof
If we look at the Borweins' proof for the algorithm of second order, we can see clearly that it rests on the modular equation of second degree:
with a(q) = θ[3]^4(q) + θ[2]^4(q), b(q) = θ[4]^4(q), c(q) = 2 θ[2]^2(q) θ[3]^2(q), and the theta functions:
This equation leads us to set
whose solution is well known from the definition of s(q) (in fact we started here from the solution to get to the modular equation).
The initial value s[1] corresponds to a first singular value; the sequence s[n] hence allows us to calculate a sequence of singular values.
The rest, as above, consists of finding a way to come back to Pi, since the logarithm did the job in the first example. The role here is played by the θ[p](p^2r), and from it we obtain an algorithm; the justification is given by the appearance of Legendre's relation.
In all, a lovely little theory....
While we're talking about Ramanujan's formulae :
This page is a little bit special in that it does not contain any proofs. I merely reproduce the intermediate steps of the detailed calculation from the Borweins' Canadian page. Apart from the general formula for building Ramanujan-type series, found by the Borweins and written up on my page dedicated to them, I prefer to let you look at their site....
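To give a flavour of what such a series does, here is a small numerical sketch (my own addition, not taken from the Borweins' page) of the best-known Ramanujan-type series, from his 1914 paper: 1/Pi = (2*sqrt(2)/9801) * sum over k >= 0 of (4k)!(1103 + 26390k) / ((k!)^4 * 396^(4k)). Each extra term contributes roughly eight more correct digits.

from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50

def ramanujan_pi(terms):
    """Evaluate Pi from Ramanujan's 1914 series, using `terms` terms."""
    s = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        s += num / den
    return 1 / (Decimal(2) * Decimal(2).sqrt() / 9801 * s)

print(ramanujan_pi(3))   # roughly 24 correct decimals of Pi from only three terms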
Other results by Ramanujan :
So, what else ?
There are first of all the numerous approximations of Pi that he found in a prodigious way! They are written up on the page on approximations of Pi, as well as the famous result showing that
Otherwise, there unfortunately does not exist much information on Ramanujan's results. Nevertheless, here is what I could find:
Prime numbers (Ramanujan's Tau function):
Let GCD(n,n') = 1,
This first property allows us to calculate the function Tau for any product of coprime numbers, knowing the values Tau(p).
And since the natural numbers can be expressed as products of prime numbers, the function Tau is completely determined on N if we can evaluate it on the powers of prime numbers, which is what the following theorem does by recurrence:
This function Tau has quite a few more properties:
There exist congruence relations for 7, 23 and 691. For instance,
Moreover, Ramanujan conjectured that
Note that the congruence relation is only true if k is a quadratic non-residue modulo 23, that is, if there exists no integer x such that x^2 = k [23]. Since there are (p-1)/2 quadratic residues for p > 2, we can deduce that, on average, one natural number N in two is such that tau(N) is divisible by 23.
Pierre Deligne (Belgian, not French) showed in 1971 that the conjecture above was one of the consequences of the Weil conjectures. And since he obviously didn't want to leave it at that, he proved them in 1973, which was responsible in part for his Fields Medal in 1978.
For more information on sigma, cf.A013959
The Landau-Ramanujan constant:
The proof is straightforward if you make the change of variable t = x - 1, then write out the series expansion of Ln(1-t), and justify the interchange of sum and integral. I did not look for a[2], but it seems completely doable!
Distribution of prime numbers:
For more information on the function pi(x), cf. A000720; on the Möbius function, cf. A008683; and for the relation between Möbius and Pi, cf. the arithmetic functions page.
Equations from number theory:
Ramanujan set himself the problem of finding all the integer solutions of the equation 2^N - 7 = x^2.
Computers have searched up to N = 10^40, but only the solutions N = 3, 4, 5, 7, 15 were found. It has in fact been proved that these are the only ones!
For more information on this equation, cf. A060728.
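Here is a quick computational check of this equation (my own addition, not from the original page):

from math import isqrt

# Which exponents N make 2**N - 7 a perfect square?
hits = []
for N in range(3, 1000):
    v = 2**N - 7
    r = isqrt(v)
    if r * r == v:
        hits.append(N)

print(hits)   # [3, 4, 5, 7, 15]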
Another problem from Ramanujan? But of course:
Find, for example, all the solutions of n! + 1 = x^2
(I unfortunately don't have the list...)
Thanks to Christian Radoux for his clarifications on the Tau function
[–]TheEveningStar[S]
Well mostly I'm just curious if it's a common (though perhaps somewhat private) practice among mathematicians to have some view on the foundations of maths, analogous to how many physicists will
adopt instrumentalism, realism, positivism, etc., while acknowledging that their philosophical views should be kept quite separate from their day-to-day work.
I can't really think of an interesting, relevant question in the philosophy of math. Can you give an example?
So just to throw one out there, some philosophers have insisted that because existential quantifiers are employed in set theoretic systems as well as other axiomatic systems, we should accept realism
with regard to certain mathematical objects and relations. So to broaden the subject to metaphysics, how do mathematicians think about the objectivity of their claims? I imagine that based on the
comments I've received so far they don't see the objectivity being based on a correspondence with mind-independent objects, but on the meanings of the terms. Call me biased, but this to me seems like
an extremely interesting question! What are talking about when we're talking mathematics?
[–]yesmanapple
With regards to your first point, I'd say that most mathematicians are probably formalists. As in, everything is about the definitions, but you make the definitions to pertain to the object you want
to study.
some philosophers have insisted that because existential quantifiers are employed in set theoretic systems as well as other axiomatic systems, we should accept realism with regard to certain
mathematical objects and relations.
Can you explain that? I'm not sure what it means.
Call me biased, but this to me seems like an extremely interesting question! What are talking about when we're talking mathematics?
I personally feel like this question is really meaningless without defining mathematics. And, once you define mathematics, the question is answered.
[–]antonfire
It's an interesting question but it's not a mathematical question. That is, such question are not usually relevant to a mathematician while he's doing mathematics. If you ask such a question to
someone who's just given a talk, the speaker will be justifiably annoyed, whether that talk was about applied math or pure.
A pure mathematician might be thinking about certain topological spaces, but thinking about whether those topological spaces "have objective existence" doesn't get him anywhere. He might think about
philosophical-sounding questions like "what is a sphere, really?", but answers like "it's a bunch of neurons going off in my mind" are useless to him. He's looking for things like "it's a
simply-connected closed 3-manifold".
Basically, for mathematicians other than set theorists and logicians and others working close to the foundations of mathematics, such questions don't lead to interesting mathematical insights. You
seem to think that pure mathematics always lives very close to foundations, and if so you are mistaken. Topologists, number theorists, combinatorialists, analysts, algebraists, and so on have no
mathematical reason to care, for the most part.
With that said, let me try to answer a special case of your question. When a topologist thinks about topological spaces, she is thinking about shapes. She can give you lots of examples of topological
spaces, from mathematics and elsewhere. For instance, did you know that the space of lines in a plane is shaped like a mobius strip? She'd be happy to explain to you exactly what she means by "shaped
like". Then you ask her, "wait, this mobius strip?", holding a paper one up in your hand. Well no, that one has thickness and is made up of atoms, and if you zoom in far enough you can't even tell
one point from another, so the one she's talking about is an abstraction of that; or maybe that is a poor depiction of the one she's talking about. "Well which of those two is it? Is the one you're
talking about real? Is it more real than this one?", you ask. At this point, I don't know what she'll say, or if she'll care about the question at all. At this point, it's clear to her that you don't
really care about the fact she started with. So, yes, in that sense I think mathematicians are like the physicists you describe, in that some have different philosophical positions on the matter,
some just don't care, and for practically all of them it has nothing to do with their work.
Finally, if you're interested in a living mathematician who actually does take a position, whose position is pretty controversial, and who actually likes to talk about it, check out Doron Zeilberger's opinions, and particularly this article[pdf].
FPT algorithms and kernels for the directed k-leaf problem
- STACS 2009 , 2009
"... The k-Leaf Out-Branching problem is to find an out-branching, that is a rooted oriented spanning tree, with at least k leaves in a given digraph. The problem has recently received much attention
from the viewpoint of parameterized algorithms. Here, we take a kernelization based approach to the k-Lea ..."
Cited by 9 (4 self)
The k-Leaf Out-Branching problem is to find an out-branching, that is a rooted oriented spanning tree, with at least k leaves in a given digraph. The problem has recently received much attention from
the viewpoint of parameterized algorithms. Here, we take a kernelization based approach to the k-Leaf-Out-Branching problem. We give the first polynomial kernel for Rooted k-Leaf-Out-Branching, a
variant of k-Leaf-Out-Branching where the root of the tree searched for is also a part of the input. Our kernel has cubic size and is obtained using extremal combinatorics. For the
k-Leaf-Out-Branching problem, we show that no polynomial kernel is possible unless the polynomial hierarchy collapses to third level by applying a recent breakthrough result by Bodlaender et al.
(ICALP 2008) in a non-trivial fashion. However, our positive results for Rooted k-Leaf-Out-Branching immediately imply that the seemingly intractable k-Leaf-Out-Branching problem admits a data
reduction to n independent O(k³) kernels. These two results, tractability and intractability side by side, are the first ones separating many-to-one kernelization from Turing kernelization. This
answers affirmatively an open problem regarding “cheat kernelization” raised by Mike Fellows and Jiong Guo independently.
"... In 2000 Alber et al. [SWAT 2000] obtained the first parameterized subexponential algorithm on undirected planar graphs by showing that k-DOMINATING SET is solvable in time 2 O( √ k) ..."
Cited by 6 (5 self)
In 2000 Alber et al. [SWAT 2000] obtained the first parameterized subexponential algorithm on undirected planar graphs by showing that k-DOMINATING SET is solvable in time 2^O(√k)
"... Abstract. In this paper we initiate a systematic study of the Reduced Degree Spanning Tree problem, where given a digraph D and a nonnegative integer k, the goal is to construct a spanning
out-tree with at most k vertices of reduced out-degree. This problem is a directed analog of the wellstudied Mi ..."
Cited by 1 (1 self)
Abstract. In this paper we initiate a systematic study of the Reduced Degree Spanning Tree problem, where given a digraph D and a nonnegative integer k, the goal is to construct a spanning out-tree
with at most k vertices of reduced out-degree. This problem is a directed analog of the wellstudied Minimum-Vertex Feedback Edge Set problem. We show that this problem is fixed-parameter tractable
and admits a problem kernel with at most 8k vertices on strongly connected digraphs and O(k^2) vertices on general digraphs. We also give an algorithm for this problem on general digraphs with runtime O*(5.942^k). This adds the Reduced Degree Spanning Tree problem to the small list of directed graph problems for which fixed-parameter tractable algorithms are known. Finally, we consider
the dual of Reduced Degree Spanning Tree, that is, given a digraph D and a nonnegative integer k, the goal is to construct a spanning out-tree of D with at least k vertices of full out-degree. We
show that this problem is W[1]-hard on two important digraph classes: directed acyclic graphs and strongly connected digraphs. 1
"... Given an undirected graph with n vertices, the Maximum Leaf Spanning Tree problem is to find a spanning tree with as many leaves as possible. When parameterized in the number of leaves k, this
problem can be solved in time O(4^k poly(n)) using a simple branching algorithm introduced by a subset of t ..."
Given an undirected graph with n vertices, the Maximum Leaf Spanning Tree problem is to find a spanning tree with as many leaves as possible. When parameterized in the number of leaves k, this
problem can be solved in time O(4^k poly(n)) using a simple branching algorithm introduced by a subset of the authors [16]. Daligault, Gutin, Kim, and Yeo [6] improved the branching and obtained a running time of O(3.72^k poly(n)). In this paper, we study the problem from an exponential time viewpoint, where it is equivalent to the Connected Dominating Set problem. Here, Fomin, Grandoni, and Kratsch showed how to break the Ω(2^n) barrier and proposed an O(1.9407^n)-time algorithm [11]. Based on some useful properties of [16] and [6], we present a branching algorithm whose running time of O(1.8966^n) has been analyzed using the Measure-and-Conquer technique. Finally we provide a lower bound of Ω(1.4422^n) for the worst case running time of our algorithm.
, 2011
"... The k-LEAF OUT-BRANCHING problem is to find an out-branching, that is a rooted oriented spanning tree, with at least k leaves in a given digraph. The problem has recently received much attention
from the viewpoint of parameterized algorithms. Here, we take a kernelization based approach to the k-LEA ..."
The k-LEAF OUT-BRANCHING problem is to find an out-branching, that is a rooted oriented spanning tree, with at least k leaves in a given digraph. The problem has recently received much attention from
the viewpoint of parameterized algorithms. Here, we take a kernelization based approach to the k-LEAF-OUT-BRANCHING problem. We give the first polynomial kernel for ROOTED k-LEAF-OUT-BRANCHING, a
variant of k-LEAF-OUT-BRANCHING where the root of the tree searched for is also a part of the input. Our kernel with O(k^3) vertices is obtained using extremal combinatorics. For the
k-LEAF-OUT-BRANCHING problem, we show that no polynomial-sized kernel is possible unless coNP is in NP/poly. However, our positive results for ROOTED k-LEAF-OUT-BRANCHING immediately imply that the
seemingly intractable k-LEAF-OUT-BRANCHING problem admits a data reduction to n independent polynomial-sized kernels. These two results, tractability and intractability side by side, are the first
ones separating Karp kernelization from Turing kernelization. This answers affirmatively an open problem
, 2010
"... Abstract. In this paper we make the first step beyond bidimensionality by obtaining subexponential time algorithms for problems on directed graphs. We develop two different methods to achieve
subexponential time parameterized algorithms for problems on sparse directed graphs. We exemplify our approa ..."
Abstract. In this paper we make the first step beyond bidimensionality by obtaining subexponential time algorithms for problems on directed graphs. We develop two different methods to achieve
subexponential time parameterized algorithms for problems on sparse directed graphs. We exemplify our approaches with two well studied problems. For the first problem, k-Leaf Out-Branching, which is
to find an oriented spanning tree with at least k leaves, we obtain an algorithm solving the problem in time 2^O(√k log k) n + n^O(1) on directed graphs whose underlying undirected graph excludes some fixed graph H as a minor. For the special case when the input directed graph is planar, the running time can be improved to 2^O(√k) n + n^O(1). The second example is a generalization of the Directed Hamiltonian Path problem, namely k-Internal Out-Branching, which is to find an oriented spanning tree with at least k internal vertices. We obtain an algorithm solving the problem in time 2^O(√k log k) + n^O(1) on directed graphs whose underlying undirected graph excludes some fixed apex graph H as a minor. Finally, we observe that for any ε > 0, the k-Directed Path problem is solvable in time O((1+ε)^k n^f(ε)), where f is some function of ε. Our methods are based on non-trivial combinations of obstruction theorems for undirected graphs, kernelization, problem specific
combinatorial structures and a layering technique similar to the one employed by Baker to obtain PTAS for planar graphs. 1.
"... Abstract. In this paper we make the first step beyond bidimensionality by obtaining subexponential time algorithms for problems on directed graphs. We develop two different methods to achieve
subexponential time parameterized algorithms for problems on sparse directed graphs. We exemplify our approa ..."
Add to MetaCart
Abstract. In this paper we make the first step beyond bidimensionality by obtaining subexponential time algorithms for problems on directed graphs. We develop two different methods to achieve
subexponential time parameterized algorithms for problems on sparse directed graphs. We exemplify our approaches with two well studied problems. For the first problem, k-Leaf Out-Branching, which is
to find an oriented spanning tree with at least k leaves, we obtain an algorithm solving the problem in time 2 O( √ k log k) n + n O(1) on directed graphs whose underlying undirected graph excludes
some fixed graph H as a minor. For the special case when the input directed graph is planar, the running time can be improved to 2 O( √ k) n+n O(1). The second example is a generalization of the
Directed Hamiltonian Path problem, namely k-Internal Out-Branching, which is to find an oriented spanning tree with at least k internal vertices. We obtain an algorithm solving the problem in time 2
O( √ k log k) + n O(1) on directed graphs whose underlying undirected graph excludes some fixed apex graph H as a minor. Finally, we observe that for any ε> 0, the k-Directed Path problem is solvable
in time O((1+ε) k n f(ε)), where f is some function of ε. Our methods are based on non-trivial combinations of obstruction theorems for undirected graphs, kernelization, problem specific
combinatorial structures and a layering technique similar to the one employed by Baker to obtain PTAS for planar graphs. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=7836431","timestamp":"2014-04-20T18:02:53Z","content_type":null,"content_length":"30821","record_id":"<urn:uuid:a5a31c57-00c2-4c52-8e13-b4f241dc0178>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Steve Kass
25 Sep 2011 22:46
The possible existence of heteroscedasticity is a major concern in the application of regression analysis, including the analysis of variance, because the presence of heteroscedasticity can
invalidate statistical tests of significance that assume the effect and residual (error) variances are uncorrelated and normally distributed. —Wikipedia
Perhaps I’m overeager to use one of my favorite words, but the more I look at Figure 11 of The Neutrino Preprint, the more I think I see a hint of heteroscedasticity in the residuals. If present, it
would support the possibility that the model used for the best fit analysis (a one-parameter family of time-shifted scaled copies of the summed proton waveform) was not appropriate. See my previous
post for some background.
The figure above (which is the bottom half of Figure 11) shows the best fit of the complete summed proton waveform (red) vs. the observed neutrino counts (black), summarized using 150 nanosecond
bins. For both extractions (left and right), the residuals of the fit (the distances from the red curve to each black dot) appear possibly heteroscedastic in two ways.
First, they seem to be slightly (negatively) correlated with the time scale — positive residuals are more likely towards the beginning of the pulse, negative residuals towards the end. Second, there
may be a slight negative correlation of the variance of the residuals with the time scale as well. The residuals seem to become more consistent — vary less in either direction from zero — from left
to right. [I didn’t pull out a ruler and calculate any real statistics.]
To be fair, there is little evidence of heteroscedastic residuals in Figure 12 (below), which shows a zoomed-in detail of the beginning and end of each extraction, summarized into 50 nanosecond bins.
In all, only about a sixth of the waveform is shown at this resolution. (A data point appears to have been omitted from this figure; between the first two displayed bins in the the second extraction,
there should probably be a black point to indicate that zero neutrinos were observed in that 50 ns interval.)
The authors report some tests of robustness; for example, they analyzed daytime and nighttime data separately and found no discrepancy. They also calculated and report a reduced chi-square statistic
that indicates a good model fit. They may also have measured the heteroscedasticity of the residuals, but they don’t mention it.
They do say a fair bit about how they obtained the summed proton waveform (the red line) used for the fit, but so far I don’t see any indication that they considered the possibility of a systematic
process occurring over the length of each proton pulse that caused the ratio of protons to observed neutrinos to vary.
Then again, I don’t understand every sentence in the paper that might be relevant, such as this one: “The way the PDF [the probability density functions for the proton waveform] are built
automatically accounts for the beam conditions corresponding to the neutrino interactions detected by OPERA.” And I’m not a physicist or a statistician.
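For concreteness, here is the kind of quick check that could make the eyeball impression above quantitative. The sketch below is mine, and the data in it are placeholders (the per-bin residuals themselves are not published in the preprint): a rank correlation of the residuals with time, and a crude Breusch-Pagan-style regression of the squared residuals on time.

import numpy as np
from scipy import stats

# Placeholder data only: t would be the bin centers (ns) and residuals the
# observed counts minus the fitted, shifted proton waveform in each bin.
rng = np.random.default_rng(0)
t = np.arange(0.0, 10500.0, 150.0)
residuals = rng.normal(size=t.size)

# 1. Are the residuals correlated with time?
print(stats.spearmanr(t, residuals))

# 2. Does their spread change with time?  A crude Breusch-Pagan-style check:
#    regress the squared residuals on time and look at the slope's p-value.
print(stats.linregress(t, residuals ** 2))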
One Response to “Heteroscedasticity in the Residuals?”
1. Eric Jones Says:
November 18th, 2011 at 12:01 pm
Here’s an update on the original results, http://news.sciencemag.org/scienceinsider/2011/11/faster-than-light-neutrinos-opera.html, which appears to rule out the statistical argument (which I
really liked).
Steve Kass » Heteroscedasticity in the Residuals? Says:
September 25th, 2011 at 10:46 pm
[...] family of time-shifted scaled copies of the summed proton waveform) was not appropriate. See my previous post for some [...]
Joe Says:
September 27th, 2011 at 7:25 am
You kind of shoot yourself in the leg with your speculations.
You go on about how uncertainty about the neutrino creation process could have distorted the resulting measurements.
But if you look at the graph you posted it seems clear that there are multiple peaks within the graph that are shifted by exactly the same amount as the whole graph.
The red line is a computer prediction based on neutrinos traveling -at- the speed of light.
Notice that the shape of the red graph pretty much has exactly the same shape as the data points, just shifted.
This means that the simulation used for the prediction has a very precise understanding of the neutrino generation process and what the resulting measurement amplitude series will be.
The only discrepancy is the detection time.
If what you say were true then the arriving data points would have had distorted rise and fall but would otherwise have its peaks match the predicted graph to at least fall on the speed of light
instead of faster than the speed of light.
So based on that graph i think you are thinking in the wrong direction to find the flaw (if there is one).
Steve Says:
September 27th, 2011 at 9:35 am
You’ve missed my point.
“Pretty much exactly the same shape’ is not a statistical or mathematical statement. The data (black points) do not fit the red curve exactly when shifted. They come close, and among all possible
horizontal shifts, 1048.5 ns gives the closest fit. But the six-sigma statistical claim assumes that the distribution from which the black data points were a random sample is a copy of the shifted
red line and not any similar but different shape.
This assumption is not addressed in the paper. The shifted red line used for the statistics is the shape of the proton waveform hundreds of miles upstream of the detector at Gran Sasso. The data is
not a random sample of protons from that waveform. The data is a sample (presumably random) of neutrinos hundreds of miles away, produced from the precisely-understood waveform of protons by several
intermediate processes (including pion/kaon production when the proton beam strikes the graphite target and subsequent decay of the particles produced at the target into neutrinos later on). The
arrival waveform clearly has a similar shape, but the authors give no theoretical or statistical evidence to suggest it must have an identical shape.
If the intermediate processes systematically change the shape of the proton waveform even slightly (as it becomes a pion/kaon waveform and then a neutrino waveform), the statistics reported are not valid.
In addition, the data in the paper is only a summary of the actual data into bins (150 ns wide for Figure 11, and 50 ns wide for Figure 12). The experimental result yields a neutrino speed only 60 ns
faster than light-speed, so it’s impossible to “notice” the best fit to such high precision only from the paper’s graphs. In Figure 11, where the multiple peaks are visible, “exactly the same amount”
can’t be determined to 60 ns accuracy. Even if the black data points, when shifted by 1048.5 ns, all lay exactly on the red line (and they do not at all), one cannot conclude that the actual data
(not given in the paper, which summarizes it into bins) fits just as perfectly.
Philip Meadowcroft Says:
September 28th, 2011 at 4:23 am
Is the same true at the detection end? If the first detection in any way compromises the likelihood of another detection in the same burst.
May be insignificant due to the low number detected per burst.
Gareth Williams Says:
September 28th, 2011 at 4:49 am
OK, so add an extra parameter. Scale the red line from 1 at the leading edge to a fraction k at the trailing edge (to crudely model the hypothesis that the later protons, for whatever unknown reason,
are less efficient at producing detectable neutrinos), and find what combination of translation and k produces the best fit.
If there is no such effect we should get the same speed as before and k=1. But if we get speed = c and k = 0.998 (say) then we have an indication where the problem is.
It would be interesting in any case to just try a few different constant values of k and see how sensitive the result is to that.
(It also occurs to me that k could arise from a problem with the proton detector, if the sensitivity changes very slightly from the beginning to the end of the pulse you would get the same effect).
This does not look too hard. I would do it myself but I am busy today [/bluff]
Steve Says:
September 28th, 2011 at 7:50 am
Philip: I think there was a similar question at the news conference given by OPERA, and it was answered to the satisfaction of the person who asked.
Gareth: Yes, absolutely. If the complete neutrino arrival data is posted, I might try this. But I would be happy to see you do it for me!
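In the meantime, here is roughly the two-parameter fit Gareth describes, as a sketch. Everything in it (the waveform shape, the binning, the numbers) is a placeholder rather than the real data, and it uses plain least squares where the actual analysis would use a proper likelihood:

# Fit a time shift plus a linear "efficiency taper" k to a binned arrival histogram.
import numpy as np
from scipy.optimize import minimize

def model(t, shift, k, amp, waveform):
    # taper runs linearly from 1 at the start of the window to k at the end
    taper = 1 + (k - 1) * (t - t[0]) / (t[-1] - t[0])
    return amp * taper * waveform(t - shift)

def loss(params, t, counts, waveform):
    shift, k, amp = params
    return np.sum((counts - model(t, shift, k, amp, waveform)) ** 2)

waveform = lambda t: np.exp(-0.5 * ((t - 5000.0) / 2000.0) ** 2)   # placeholder pulse shape
t = np.arange(0.0, 10500.0, 150.0)                                 # placeholder bins (ns)
counts = model(t, 1048.5, 0.95, 1.0, waveform)                     # fake "observations"

fit = minimize(loss, x0=[1000.0, 1.0, 1.0], args=(t, counts, waveform), method="Nelder-Mead")
shift_hat, k_hat, amp_hat = fit.x
print(shift_hat, k_hat, amp_hat)   # a best-fit k far from 1 would cast doubt on the same-shape assumption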
Gareth Williams Says:
October 27th, 2011 at 10:43 am
What you said, I think:
[I’ve posted a follow-up here: Heteroscedasticity in the Residuals?]
When applying statistics to find a “best fit” between your observation and reality, always ask yourself “best among what?”
The CERN result about faster-than-light neutrinos is based on a best fit. If the authors were too restrictive in their meaning of “among what,” they might have missed figuring out what really
happened. And what might have really happened was that the neutrinos they detected had not traveled faster than light.
The data for this experiment was, as usual, a bunch of numbers. These numbers were precisely-measured (by portable atomic clocks and other very cool techniques) arrival times of neutrinos at a
detector. The neutrinos were created by shooting a beam of protons into a long tube of graphite. This produced neutrinos, some of which were subsequently observed by a detector hundreds of miles
Over the course of a few years, the folks at CERN shot a total of about 100,000,000,000,000,000,000 protons into the tube; they observed about 15,000 neutrinos. The protons were fired in pulses, each
pulse lasting about 10 microseconds.
A careful statistical analysis of the data, the authors report, indicates that the neutrinos traveled about 0.0025% faster than the speed of light. Whooooooosh! Furthermore, because the experiment
looked at a lot of neutrinos and the results were consistent, the experiment indicates that in all likelihood the true speed of neutrinos was very close to 0.0025% faster than the speed of light, and
it was almost without doubt at least faster.
If the experimental design and statistical analysis are correct (and the authors are aware they might not be, though they worked hard to make them correct), this is one of the great experiments of
all time.
So far, I haven’t read much scrutiny of the statistical analysis pertaining to the question of “among what?” But Jon Butterworth of The Guardian raised one issue, and I have a similar one.
Look at the graph below, from the preprint.
The statistical analysis of the data was designed to measure how far to slide the red curve (the summed proton waveform) left or right so that the black data points (the neutrino observation data) fit
it most closely.
The experiment didn’t detect individual neutrinos at the beginning of the trip. The neutrinos were produced by 10-microsecond proton bursts, and neutrinos were expected to appear in 10-microsecond
bursts at the other end. The time between the bursts, then, should indicate how fast the individual neutrinos traveled.
To get the time between the bursts, slide the graphs back and forth until they align as closely as they can, and then compare the (atomic) clock times at the beginnings and ends of the bursts.
For this to give the right travel time, and more importantly, to be able to evaluate the statistical uncertainty, the researchers appear to have assumed that the shape of the proton burst upstream of
the graphite rod exactly matched the shape of the neutrino burst at the detector (once adjusted for the fact that the detector sees about one neutrino for each 10 million billion or so protons in the
initial burst).
Why should the shapes match exactly? If God jiggled the detector right when the neutrinos arrived, for example, the shapes might not match. More scientifically plausibly, though, at least to this
somewhat-naïve-about-particle-physics mathematician, what if the protons at the beginning of the burst were more likely to create detectable neutrinos than those at the end of the burst? Maybe the
graphite changes properties slightly during the burst. [Update: It does, but whether that might affect the result, I don’t know.] Or maybe the protons are less energetic at the end of the bursts
because there’s more proton traffic.
The authors don’t tell us why they assume the shapes match exactly. There might be good theory and previous experimental results to support the assumption, but if so, it’s not mentioned in the paper.
The authors do remark that a given “neutrino detected by OPERA” might have been produced by “any proton in the 10.5 microsecond extraction time.” But they don’t say “equally likely by any proton.”
If protons generated early in the burst were slightly more likely to yield detectable neutrinos, then the data points at the left of the figure should be scaled down and those at the right scaled up,
if the observational data is expected to indicate the actual proton count across the burst.
If that’s the case, then the adjusted data might not have to be shifted quite so far to best match the red curve. And the calculated speed would be different.
Whether this would make enough of a difference to bring the speed below light-speed, I don’t know and can’t guess from what’s in the preprint. And of course, there may be good reasons for same-shape
bursts to be a sound assumption.
[Disclaimer: I’m a mathematician, not a statistician or a physicist.]
I’m a serial comma guy, and so is my good friend Andy. Unfortunately for Andy, serial commas are verboten at his workplace, and this requires him “to violate a fundamental law of that which is right
and good.” (I might have said “right, good, and just.”)
Hoping to assuage his hardship, I whipped up a batch of cereal commas for him as a birthday gift. He’ll have to decide whether or not he can risk sneaking some into work.
Shown: eight cereal commas in various sizes. Four were made with Rice Krispies and Fruity Pebbles, and four were made with Rice Krispies, Cocoa Krispies, and Alpha Bits. Also shown are two pieces of
the Ateco Plain Comma Cutter Set with which they were cut [full set below].
Please note that the Ateco cutters are backwards. Instead of cutting comma shapes, they cut reversed comma shapes. Although their rolled edges prevented me from using them upside-down without injury,
it was not difficult to turn the treats over after cutting. The treat at center left in the photo is unturned.
Janet Napolitano isn’t making the official announcement until tomorrow, but this is 2011, folks. There are no secrets any more.
DHS to end color-coded terror alert system.
It will be called the National Terror Advisory System. DHS Secretary Janet Napolitano will officially make the announcement tomorrow at a “State of America’s Homeland Security” speech at George
Washington University.
Brilliant, Janet. Brilliant! | {"url":"http://www.stevekass.com/category/scoops/","timestamp":"2014-04-19T06:58:19Z","content_type":null,"content_length":"50551","record_id":"<urn:uuid:19a5be05-0acb-4773-95d8-2e25ff3ad3d3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
Installment loans
September 15th 2009, 03:54 PM
Installment loans
I can't figure this problem out at all:
Ned plans to donate $50 per week to his church for the next 60 years. Assuming an annual interest rate of 5.2% compounded weekly, find the present value of this annuity to Ned's church.
The formula we are to use is as follows:
P = Fq [(q^T - 1) / (q - 1)]
with an installment loan of P dollars paid off in T payments of F dollars at a periodic interest rate of p (written in decimal form), and q = 1/(1+p).
My prof also let us know the following:
For all installment loan problems, assume that the payments are made at the end of the period, so that the current value of the first payment is worth P*q.
For #75 (this problem), assume there are 52 weeks/year.
September 15th 2009, 09:00 PM
50[1 - 1/(1 + .052/52)^(60*52)] / (.052/52) = 47788.69693....
He'll go to heaven for sure (Giggle)
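For what it's worth, plugging the course's formula (with q = 1/(1+p)) gives the same number; a quick check in Python:

# Present value of $50/week for 60 years at 5.2% compounded weekly,
# using P = F*q*(q^T - 1)/(q - 1) with q = 1/(1+p).
F = 50.0
p = 0.052 / 52          # weekly periodic rate
T = 60 * 52             # number of weekly payments
q = 1.0 / (1.0 + p)
P = F * q * (q**T - 1.0) / (q - 1.0)
print(P)                # about 47788.70, same as 50*[1 - (1+p)**-T]/p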
September 20th 2013, 04:41 PM
Re: Installment loans
Always be cautious about individuals selling stuff door to door, as a number are rip-off artists. Aside from Girl Scouts attempting to get individuals hooked on diabolically addicting cookies,
there are a number of door-to-door scams out there. Source for this article:Personal Finance | {"url":"http://mathhelpforum.com/business-math/102465-installment-loans-print.html","timestamp":"2014-04-18T19:43:09Z","content_type":null,"content_length":"4827","record_id":"<urn:uuid:7c8d5d8a-2cab-4f56-b6ca-ca05b6fa7660>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
proving that two subspaces are direct sum
January 6th 2013, 02:19 PM
proving that two subspaces are direct sum
Let $W,U\subset V$ be subspaces.
i need to prove that if $dim(V)=dim(U)+dim(W)$ and $W\cap U=\left \{ \vec0 \right \}$ then $W\oplus U=V$.
here's the thing.
i proved it, but without considering $W\cap U=\left \{ \vec0 \right \}$.
(i proved that if $dim(V)=dim(U)+dim(W)$ then $W\oplus U=V$)
and that obviously can't be, I mean there must be something I'm doing wrong, I just can't figure out what.
here's what i did:
to prove that U and W are the direct sum of V, i just need to show that for every $v\in V$: there's only one linear combination of $u\in U$ and $w\in W$.
Let $\left \{ u_1, u_2... u_n \right \}$ be the basis for U.
Let $\left \{ w_1, w_2... w_m \right \}$ be the basis for W.
now, since $dim(V)=dim(U)+dim(W)$, i can conclude that the number of vectors in V's basis must be $n+m$.
so, the basis for V can now be:
$\left \{ u_1, u_2... u_n,w_1, w_2... w_m \right \}$
now, let's presume v can be presented in two different ways, and show that it's actually the same presentation, so:
let's presume there are scalars $a_1,a_2... a_{n+m}\in \mathbb{R}$, not all 0, and $b_1,b_2... b_{n+m}\in \mathbb{R}$, not all 0, such that:
$a_1u_1+a_2u_2+...+a_nu_n+a_{n+1}w_1+a_{n+2}w_2+...+a_{n+m}w_m=v$
$b_1u_1+b_2u_2+...+b_nu_n+b_{n+1}w_1+b_{n+2}w_2+...+b_{n+m}w_m=v$
if we subtract one from the other we get:
$(a_1-b_1)u_1+(a_2-b_2)u_2+...+(a_n-b_n)u_n+(a_{n+1}-b_{n+1})w_1+...+(a_{n+m}-b_{n+m})w_m=\vec0$
and since $\left \{ u_1, u_2... u_n,w_1, w_2... w_m \right \}$ are linearly independent (a basis for V), then $a_1=b_1$, $a_2=b_2$ and so on...
so that's all.
my question now is: where exactly does $W\cap U=\left \{ \vec0 \right \}$ fit in? why do i even need it here?
it's quite a task to use the math terminology when it's not your native language, so i hope i used it right and everything is clear enough... (Happy)
thanks in advanced!
January 6th 2013, 04:21 PM
Re: proving that two subspaces are direct sum
Hi Stormey,
The argument uses $U\cap W=\{0\}$ when you conclude that $\{u_{1},\ldots, u_{n},w_{1},\ldots,w_{m}\}$ is a basis for $V.$ Just because $\{u_{1},\ldots, u_{n}\}$ and $\{w_{1},\ldots, w_{m}\}$ are
linearly independent sets of vectors on their own does not always mean $\{u_{1},\ldots, u_{n},w_{1},\ldots, w_{m}\}$ must be a linearly independent collection too. For example, the sets $\
{[1,0,0], [0,1,0]\}$ and $\{[0,1,0], [0,0,1]\}$ are each linearly independent sets of vectors on their own, but $\{[1,0,0], [0,1,0], [0,1,0], [0,0,1]\}$ is not a linearly independent set of vectors.
Does this answer your question? Let me know if anything is unclear. Good luck!
January 6th 2013, 10:09 PM
Re: proving that two subspaces are direct sum
Hi GJA, and thanks for your help.
I'm aware that if two subspaces' bases are linearly independent that doesn't necessarily mean their union is also linearly independent,
but I can draw this conclusion (that these two together are linearly independent and form a basis for V) from $dim(V)=dim(U)+dim(W)$; I don't need $W\cap U=\left \{ \vec0 \right \}$ for that.
so actually, my question is:
if $dim(V)=dim(U)+dim(W)$, why doesn't it mean that U and W's basis are *disjoint sets?
*of course, disjoint except their common $\vec0$, but that goes without saying, since U and W are subspaces.
January 6th 2013, 10:51 PM
Re: proving that two subspaces are direct sum
Hi Stormey,
In the previous post you said
I'm aware that if two subspaces' bases are linearly independent that doesn't necessarily mean their union is also linearly independent,
but I can draw this conclusion (that these two together are linearly independent and form a basis for V) from $dim(V)= dim(U)+dim(W)$
However, knowing $dim(V) = dim(U) + dim(W)$ does not imply the bases of $U$ and $W$ together form a linearly independent set. For example, take $U=W=span([1,0])$ and $V=\mathbb{R}^{2}.$
Then $dim(V) = dim(U)+dim(W)$ holds, but the combined bases for $U$ and $W$ are not linearly independent.
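In case it helps to see it spelled out, here is where the hypothesis gets used when it does hold. Suppose $a_1u_1+...+a_nu_n+b_1w_1+...+b_mw_m=\vec0$. Then $a_1u_1+...+a_nu_n=-(b_1w_1+...+b_mw_m)$ lies in both $U$ and $W$, so if $U\cap W=\{\vec0\}$ both sides must equal $\vec0$; the linear independence of each basis separately then forces every $a_i=0$ and every $b_j=0$. So the combined list is linearly independent, and since it has $dim(V)$ vectors it is a basis for $V$, which is exactly the step your proof needed.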
Does this clear things up? Good luck!
January 6th 2013, 11:04 PM
Re: proving that two subspaces are direct sum
Brilliant, thanks man!
I forgot that U can be equal to W.
It all makes sense now. | {"url":"http://mathhelpforum.com/advanced-algebra/210871-proving-two-subspaces-direct-sum-print.html","timestamp":"2014-04-16T14:07:51Z","content_type":null,"content_length":"17049","record_id":"<urn:uuid:b3ffcd94-6b0e-4426-98ed-f9692562d0a3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem of the Week
Here I will be posting curious physics and math problems: one new each week. Some of them will be standard while others less standard, but none of the problems will require knowledge beyond typical
undergraduate curriculum.
You are welcome to stop by my office (EPS 214) on Thursdays 4-5 pm to discuss them.
You may ask your instructor if your solutions can be used for extra credit.
Anton Vorontsov
For theoretically minded students - here is a list of problems, a good portion of which you must be capable of doing by the end of your graduate program.
Arnold’s Mathematical Trivium.
Working on those problems is a great way to learn something new and refresh what you have forgotten. | {"url":"http://www.physics.montana.edu/faculty/avorontsov/Personal/Teaching/Entries/2012/9/17_Problem_of_the_Week.html","timestamp":"2014-04-20T13:44:19Z","content_type":null,"content_length":"21891","record_id":"<urn:uuid:f738bfae-541d-4104-973b-ae58d7a94964>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Collisional Ring Galaxies - P.N. Appleton &
C. Struck-Marcell
4.2. Varieties of Symmetric Rings
To begin to appreciate the range of symmetric ring morphologies we show a number of different radius-time plots in Figure 16. The first panel has a rising rotation curve and a perturbation amplitude A that declines with radius (cf. Figure 3 of Struck-Marcell and Lotan 1990).
In this case both first and second waves are caustic rings that do not pinch off. The third panel shows a case whose rotation curve rises somewhat more rapidly than a solid body curve. This is not a
physically very realistic case, but the inward propagating waves are an interesting result, perhaps relevant to galaxy formation.
Figure 16. Examples of kinematic models of ring waves for different values of primary and companion structural parameters m and n defined in the text. Radius vs. time is plotted for a number of
collisionless particles as in figure 7. In all the cases shown the amplitude A (see equation (4.6) and discussion following) has a value of 0.3. For the effects of varying amplitudes see Figure 3 of
Struck-Marcell and Lotan 1990. In plot a) (the upper left), n = 10, m = 1, i.e., a quite flat primary rotation curve, with a perturbation amplitude that declines with radius. In plot b), (upper
right), n = - 2, m = 0, a constant perturbation amplitude and a declining rotation curve. In c) (lower left) n = 0.5, m = 1, a declining perturbation amplitude and a primary rotation curve that rises
more steeply than a solid body curve. In d) (lower right) n = 10, m = - 0.2, i.e. perturbation amplitude that rises with radius, corresponding to a very extended companion.
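The flavour of these radius-time diagrams is easy to reproduce numerically. The short script below is only schematic: the power-law forms chosen for the oscillation frequency and the perturbation amplitude are illustrative stand-ins for Equations (4.5) and (4.6), which are not reproduced here, but they show how purely kinematic radial oscillations produce the orbit crowding and caustics of Figure 16.

# Schematic radius-time diagram: collisionless particles at initial radii q perform
# radial oscillations after an impulsive perturbation. The functional forms below are
# placeholders chosen for illustration only; they are not Equations (4.5)-(4.6).
import numpy as np
import matplotlib.pyplot as plt

A, m, n = 0.3, 1.0, 10.0                 # amplitude, amplitude index, rotation-curve index
q = np.linspace(0.5, 3.0, 40)            # initial (Lagrangian) radii
t = np.linspace(0.0, 12.0, 600)          # time

omega = q ** (1.0 / n - 1.0)             # oscillation frequency vs radius (n = 1: solid body)
amp = A * q ** (-m)                      # perturbation amplitude vs radius

r = q[:, None] * (1.0 - amp[:, None] * np.sin(omega[:, None] * t[None, :]))

for curve in r:                          # one curve per particle; crowded curves mark ring waves
    plt.plot(t, curve, lw=0.5, color="k")
plt.xlabel("time (arbitrary units)")
plt.ylabel("radius (arbitrary units)")
plt.show()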
The success of the kinematic caustics theory is that the morphological variety of Figure 16 can be accounted for by the caustics Equation (4.13). This is demonstrated by Figure 17, which consists of
a montage of the solutions of Equation (4.13) as a function of the parameters m and n, in the low amplitude case (A = 0.3, the value used throughout Figure 17). In the m = 0 case the right-hand-side of Equation (4.13) is
constant, so the phases t must also be independent of radius. Thus, one only has to solve Equation (4.13) once for the phases of the inner and outer ring edges. At any given time the Lagrangian
radius of the edges can be determined from the equations t = constant, and then the Eulerian radii r(t) are given by Equation (4.5). This case was considered in detail in Struck-Marcell and Lotan (1990).
Figure 17. Schematic showing caustic borders of the first and second ring waves in radius-time plots as a function of the structural parameters m, n defined in the text. Specifically, thick lines or
curves showing an inner and outer edge at any time represent caustic edges. A thinner, single line or curve represents an orbit-crowding region, without the orbit crossings that define a caustic
region. The amplitude A = 0.3 is constant for all the figures in this assemblage. Individual cases are discussed in the text.
When n = 1 the primary has a solid-body rotation curve, so that following the (impulsive) collision all stars in the disk should execute synchronous radial oscillations. Thus, there should be no
rings, for rings are generally the result of orbital dispersion, which leads to orbit crowding. However, the caustic rings can result from an amplitude gradient in this case. Stars within two
different annuli reach their minimum at the same time, but only if the amplitude of the stars in the outer annulus is greater than those in the inner annulus will there be orbit crossings.
Next consider rings at relatively large radii (q >> 1) in disks with n > 1 (i.e., the rotation curve rises less steeply with radius than for solid body rotation). In this case the second term on the left-hand-side of
Equation (4.13) will usually dominate, since the first term is less than or of the order of unity. Then we also require that cos(t) < 1, or (2p - 1)π < t < (2p + 1)π, p = 1, 2, 3, .... In general, the
first few rings will pinch off at radii that are not too large. In fact, Figure 17 shows that first ring caustics are rare, the first rings are usually weak.
If n < 0, the primary disk has a declining rotation curve. In this case the absolute value of the (1/n - 1) term is greater than in the rising rotation curve case, and rings tend to be more robust
and broader. Physically this is simply because this case is still farther from the solid body case, and the orbital dispersion is correspondingly greater. When 0 < n < 1, the rotation curve rises
more steeply than the solid body case and the rings propagate inward. To achieve this, the mass distribution would have to rise extremely rapidly with radius, for example if the galaxy contained a
central deficiency of matter.
If m < 0, then the perturbation amplitude increases outward. Normally this is not physically realistic unless the "companion" is larger than the ring galaxy. In this case outward propagating rings
become very broad with time.
Figure 17 also provides information on ring propagation speeds through the disk. Rings in disks with flat rotation curves tend to have a nearly constant propagation velocity. This is often assumed in
estimates of ring ages from measured expansion velocities.
To summarize, several generalizations can be derived from Figure 17 and Equation (4.13).
1. Broad stellar rings result from declining rotation curves, or large amplitude perturbations. Thus, in cases with modest companions we suspect that the former is usually true. This seems to be the
case in the "Sacred Mushroom", AM 1724-622 (see Wallin and Struck-Marcell 1994).
2. Widely spaced narrow rings occur in disks with flat or rising rotation curves. This is confirmed in the case of the Cartwheel (Higdon 1993; Struck-Marcell and Higdon 1993).
3. The variation of the velocity perturbation with radius determines (in part) the annular zone where caustic rings can occur, but this variation generally cannot be determined observationally. | {"url":"http://ned.ipac.caltech.edu/level5/Sept01/Appleton/Appleton4_2.html","timestamp":"2014-04-17T18:58:24Z","content_type":null,"content_length":"10007","record_id":"<urn:uuid:c45e40f3-9a31-4dd3-8b11-b0ad4b8d91bd>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
C) Where Is The Center Of Mass Of All Four Balls? ... | Chegg.com
C) Where is the center of mass of all four balls? Determine this two ways: (1) find the center of mass of all four balls treated individually and then (2) find the center of mass acting as if the
system was made up of two bodies located at the two centers of masses you found above (e.g. a total mass of 1 kg + 2 kg at the location of the first center of mass you found above). The two results
should agree.
(Answer in ihat + jhat+ khat form) | {"url":"http://www.chegg.com/homework-help/questions-and-answers/c-center-mass-four-balls-determine-two-ways-1-find-center-mass-four-balls-treated-individu-q1254731","timestamp":"2014-04-21T16:59:41Z","content_type":null,"content_length":"18529","record_id":"<urn:uuid:9b521cd6-d623-44ab-a67b-f7a237ea1680>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
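The actual masses and positions come from the earlier parts of the question, which are not shown here, so the numbers below are invented purely to illustrate that the two computations must agree (only the 1 kg and 2 kg masses are taken from the hint in the question):

# Center of mass computed two ways; positions (in meters) are made up for illustration.
import numpy as np

masses = np.array([1.0, 2.0, 3.0, 4.0])            # kg (3.0 and 4.0 are placeholders)
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])

# (1) all four balls treated individually
com_all = (masses[:, None] * positions).sum(axis=0) / masses.sum()

# (2) combine the two pairwise centers of mass as if they were two bodies
m12, m34 = masses[:2].sum(), masses[2:].sum()
com12 = (masses[:2, None] * positions[:2]).sum(axis=0) / m12
com34 = (masses[2:, None] * positions[2:]).sum(axis=0) / m34
com_two = (m12 * com12 + m34 * com34) / (m12 + m34)

print(com_all)   # ihat, jhat, khat components
print(com_two)   # identical to the first result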
Invisibility and Cloaking Based on Scattering Cancellation
Advances in material synthesis and in metamaterial technology offer new venues to tailor the electromagnetic properties of devices, which may go beyond conventional limits in a variety of fields and
applications. Invisibility and cloaking are perhaps one of the most thought-provoking possibilities offered by these new classes of advanced materials. Here, recently proposed solutions for
invisibility and cloaking using metamaterials, metasurfaces, graphene and/or plasmonic materials in different spectral ranges are reviewed and highlighted. The focus is primarily on
scattering-cancellation approaches, describing material challenges, venues and opportunities for the plasmonic and the mantle cloaking techniques, applied to various frequency windows and devices.
Analogies, potentials and relevant opportunities of these concepts are discussed, their potential realization and the underlying technology required to verify these phenomena are reviewed with an
emphasis on the material aspects involved. Finally, these solutions are compared with other popular cloaking techniques. | {"url":"http://onlinelibrary.wiley.com/doi/10.1002/adma.201202624/full?globalMessage=0","timestamp":"2014-04-20T06:41:28Z","content_type":null,"content_length":"339581","record_id":"<urn:uuid:d2f6d5a1-24c0-46f4-8f4c-75f917d6727f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
spockz / xp.memo - 222757d
Initial commit with the plan :)
+ \item Set up the library with |memo :: ((a -> b) -> a -> b) -> IO (a -> b)| first so that it accepts functions in fix-point-notation
+ \item Check whether the checking of equality is done breadth-first. (Would this work with just having a map where the key is the curried version of the arguments?)
+ \item Fun with type functions, Oleg Kiselyov Simon Peyton Jones Chung-chieh Shan: \url{http://research.microsoft.com/en-us/um/people/simonpj/papers/assoc-types/fun-with-type-funs/typefun.pdf}
+ \item Stretching the storage manager: weak pointers and stable names in Haskell, SPJ, SM and Conal Elliot \url{http://community.haskell.org/~simonmar/papers/weak.pdf}
A logic programming system with first-class relations and the complete decision procedure (e.g., Kanren) can define the pure Hindley-Milner typechecking relation for a language with polymorphic
let, sums and products. The typechecking relation relates a term and its type: given a term we obtain its type. It can also work in reverse: given a type, we can obtain terms that have this type.
Or, we can give a term with blanks and a type with blanks, and ask the relation to fill in the blanks.
The code below implements this approach. We use Scheme notation for the source language (we could just as well supported ML or Haskell-like notations). The notation for type terms is infix, with
the right-associative arrow. As an example, the end of the file type-inference.scm shows the derivation for call/cc, shift and reset from their types in the continuation monad. Given the type:
(define (cont a r) `((,a -> . ,r) -> . ,r))
`(((a -> . ,(cont 'b 'r)) -> . ,(cont 'b 'b)) -> . ,(cont 'a 'b))
within 2 milli-seconds, we obtain the term for shift:
(lambda (_.0) (lambda (_.1)
((_.0 (lambda (_.2) (lambda (_.3) (_.3 (_.1 _.2)))))
(lambda (_.4) _.4))))
From Curry-Howard correspondence, determining a term for a type is tantamount to proving a theorem -- in intuitionistic logic as far as our language is concerned. We formulate the proposition in
types, for example:
(define (neg x) `(,x -> . F))
`(,(neg '(a * b)) -> . ,(neg (neg `(,(neg 'a) + ,(neg 'b)))))
This is one direction of the deMorgan law. In intuitionistic logic, deMorgan law is more involved:
NOT (A & B) == NOTNOT (NOT A | NOT B)
The system gives us the corresponding term, the proof:
(lambda (_.0)
(lambda (_.1)
(_.1 (inl (lambda (_.2)
(_.1 (inr (lambda (_.3) (_.0 (cons _.2 _.3))))))))))
The de-typechecker can also prove theorems in classical logic, via the double-negation (aka CPS) translation. We formulate a proposition:
(neg (neg `(,(neg 'a) + ,(neg (neg 'a)))))
and, within 403 ms, obtain its proof:
(lambda (_.0) (_.0 (inr (lambda (_.1) (_.0 (inl _.1))))))
The proposition is the statement of the Law of Excluded Middle, in the double-negation translation.
Programming languages can help in the study of logic. | {"url":"http://www.okmij.org/ftp/Computation/types.html","timestamp":"2014-04-20T23:28:01Z","content_type":null,"content_length":"13247","record_id":"<urn:uuid:42916203-0405-4525-a27f-ce54b60a6e6a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 40
Graph the function. Compare the graph with the graph y= x² 1. y=x²+5
Age Probs
A candy company packages mixed candies. How much white chocolate selling at $2.75 a pound must be added to 4 pounds of dark chocolate selling at $3.95 a pound to produce a mixture selling for $3.30 a
Age Probs
Lisa is 16 years younger than Andy. In 6 years, the sum of their ages is 42. How old is Lisa now?
Jemma read 16 pages of her book in 40 minutes. If she continues at that pace, how log will it take her to read the next 24 pages?
Language Art
A) Most friendly
us history
Virtuous means having/showing high moral standards
4002-2153=1849 how is regrouping thousand shown in the problem above
If an object is moving along the curve y=x^3, at what points is the y-coordinate changing 3 times more rapidly then the x-coordinate?
Consider f(x)=x^3-x over the interval [0,2]. Find all the values of C that satisfy the Mean Value Theorem (MVT)
Find the absolute maximum & minimum of the function f (x)=e^x for any closed interval [a,b] Justify your answer.
Find the Optimum value of the function f(x)=2x^2+6x-10 and also state if the function attains a maximum or minimum.
I get stuck at this and dont know how to from here.... can you help... Q1a) h(x)=√(x+1 ) [3,8] MVT=[h(b)-h(a)]/(b-a)=h'(c) To find h(b) and h(a), we just plug endpoints into original function h(b)=h
(8)=√(x+1 ) h(b)=h(8)=√(8+1 ) = 3 h(a)=h(3)= √(3+1 ...
Verify the hypothesis of the mean value theorem for each function below defined on the indicated interval. Then find the value C referred to by the theorem. Q1a) h(x)=√(x+1 ) [3,8] Q1b) K(x)=(x-1)/
(x+1) [0,4] Q1c) Explain the difference between the Mean Value...
Find the area between the following functions 2y=2x^2; y=1-x^2 & x=1
Find the Optimum value of the function f(x)=2x^2+6x-10 and also state if the function attains a maximum or minimum.
A plane flying at a constant rate of 200 km/hr. What is the distance travelled by the plane 3 minutes after it passes the observation point.
Water flowing into a hemispherical tank of 5m radius at a rate of ((3m^3)/(hr)) Determine the rate at which the depth of water increases.
A Construction Company uses the function below to determine the cost C in dollars for risk management. C(x)={(35 if 0<x<3 AND 35+10(x-2) if x≥3)
Determine the point on the curve 2y=2x^2 which is nearest to the point (2,0) Please include step by step calculations if possible... as that will help mem understand the problem better.
ok...thanks a lot.....
ohh sorry, in my first question... A 15 kg box sits on a horizontal FORCE. the coefficient of static fricition between the box and the surface is 0.70. if a force P is exerted on the box at an angle
directed 37 degrees below the horizontal, what must be the magnitude of P to g...
i mean horizontal surface not horizontal force..sorry..
A 15 kg box sits on a horizontal force. the coefficient of static fricition between the box and the surface is 0.70. if a force P is exerted on the box at an angle directed 37 degrees below the
horizontal, what must be the magnitude of P to get the box moving??
is the area of a paperback book cover closer to 28 square inches or 28 square centimeters
alegebra II
N= -2x^2+76x+430
Augustine continues to run around. He gets to the park and has the overwhelming urge to jump off a cliff and try to fly. He does so at an angle of 50 degrees to the horizontal while running at 14.5m/
s. With his arms flapping, he rises in the air. a) what is the highest point f...
7th grade Science
Explain the role of energy in the carbon cycle.
7th grade Science
Early Earth was constantly being bombarded by meteorites, comets, and asteroids. Was early Earth an open system or a closed system? Explain your answer.
Three points are on a coordinate plane: A(1, 5), B(-2, -4), and C(6, -4). 1. Write an equation in point-slope form of the line with slope -1 that contains point C. 2. Write an equation in point-slope
form of the line that contains points A an B. 3. Write an equation of the lin...
algebra 2
What is the third degree polynomial function such that f(0) = 18 and whose zeros are 1, 2, and 3
The magnitude multiplied by the direction should be the same on both sides. So for the boy, it would be (65 kg x 0.6)= 39. Therefore the mass of the girl multiplied by the distance should equal 39 as
well. So (39=X x 40). x = 0.975
Hamaira has just started to work for Graphic Services and has been asked to use a graphics package to prepare a logo for arrow computers. Grahic Services decided against choosing a graphic package
with a command driven Human Computer inter face (HCI). why do you think they did...
How do you change fractions to percents HELP
6th grade Math
What is 1/6 as a percent? Can you help me?
7th grade
a small, intracellular, membrane-enclosed sac that stores or transports substances.
thank you!!!!!!
Indicate the type of intermolecular forces expected with each of the following compounds. H2 Li2CO3 LiOH C3H7OH choices include: Hydrogen Bonding,Ion-Molecule and vanDerWaals. help me! please!!!! im
so stuck! | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Daniella","timestamp":"2014-04-16T08:42:32Z","content_type":null,"content_length":"14176","record_id":"<urn:uuid:2bdff345-b334-4f70-b548-b7d70bcf340b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clearance (medicine)
In medicine, the clearance is a measurement of the renal excretion ability. Although clearance may also involve other organs than the kidney, it is almost synonymous with renal clearance or renal
plasma clearance. Each substance has a specific clearance that depends on its filtration characteristics. Clearance is a function of glomerular filtration, secretion from the peritubular capillaries
to the nephron, and reabsorption from the nephron back to the peritubular capillaries.
When referring to the function of the kidney, clearance is considered to be the amount of liquid filtered out of the blood that gets processed by the kidneys or the amount of blood cleaned per time
because it has the units of a volumetric flow rate [ volume / time ]. However, it does not refer to a real value; "[t]he kidney does not completely remove a substance from the total renal plasma
flow."^[1] From a mass transfer perspective^[2] and physiologically, volumetric blood flow (to the dialysis machine and/or kidney) is only one of several factors that determine blood concentration
and removal of a substance from the body. Other factors include the mass transfer coefficient, dialysate flow and dialysate recirculation flow for hemodialysis, and the glomerular filtration rate and
the tubular reabsorption rate, for the kidney. A physiologic interpretation of clearance (at steady-state) is that clearance is a ratio of the mass generation and blood (or plasma) concentration.
Its definition follows from the differential equation that describes exponential decay and is used to model kidney function and hemodialysis machine function:
$V \frac{dC}{dt} = -K \cdot C + \dot{m} \qquad (1)$
• $\dot{m}$ is the mass generation rate of the substance - assumed to be a constant, i.e. not a function of time (equal to zero for foreign substances/drugs) [mmol/min] or [mol/s]
• t is dialysis time or time since injection of the substance/drug [min] or [s]
• V is the volume of distribution or total body water [L] or [m³]
• K is the clearance [mL/min] or [m³/s]
• C is the concentration [mmol/L] or [mol/m³] (in the USA often [mg/mL])
From the above definitions it follows that $\frac{dC}{dt}$ is the first derivative of concentration with respect to time, i.e. the change in concentration with time.
It is derived from a mass balance.
Clearance of a substance is sometimes expressed as the inverse of the time constant that describes its removal rate from the body divided by its volume of distribution (or total body water).
In steady-state, it is defined as the mass generation rate of a substance (which equals the mass removal rate) divided by its concentration in the blood.
Effect of plasma protein binding
For substances that exhibit substantial plasma protein binding, clearance is generally defined as the total concentration (free + protein-bound) and not the free concentration.^[3]
Most plasma substances have primarily their free concentrations regulated, and the free concentration thus remains the same, so extensive protein binding increases the total plasma concentration (free + protein-bound). This
gives a lower clearance than would have been the case with no protein binding.^[3] However, the mass removal rate is the same^[3], because it depends only on the concentration of free substance and is independent of
plasma protein binding. This holds even though plasma proteins increase in concentration in the distal renal glomerulus as plasma is filtered into Bowman's capsule: the relative increases in the
concentrations of substance-bound protein and unoccupied protein are equal, so there is no net binding or dissociation of substances from plasma proteins, and the plasma concentration of free substance
stays constant throughout the glomerulus, just as it would without any plasma protein binding.
In other sites than the kidneys, however, where clearance is made by membrane transport proteins rather than filtration, extensive plasma protein binding may increase clearance by keeping
concentration of free substance fairly constant throughout the capillary bed, inhibiting a decrease in clearance caused by decreased concentration of free substance through the capillary.
Derivation of equation
Equation 1 is derived from a mass balance:
$\Delta m_{body}=(-\dot m_{out}+ \dot m_{in} +\dot m_{gen.})\Delta t \qquad (2)$
• $\Delta t$ is a period of time
• $\Delta m_{body}$ the change in mass of the toxin in the body during $\Delta t$
• $\dot m_{in}$ is the toxin intake rate
• $\dot m_{out}$ is the toxin removal rate
• $\dot m_{gen.}$ is the toxin generation rate
In words, the above equation states:
The change in the mass of a toxin within the body ($\Delta m$) during some time $\Delta t$ is equal to the toxin intake plus the toxin generation minus the toxin removal.
$m_{body} = C \cdot V \qquad (3)$
$\dot m_{out}=K \cdot C \qquad (4)$
Equation (2) can be rewritten as:
$\Delta (C \cdot V)=(-K \cdot C+ \dot m_{in} +\dot m_{gen.})\Delta t \qquad (5)$
If one lumps the in and gen. terms together, i.e. $\dot m=\dot m_{in} +\dot m_{gen.}$ and divides by $\Delta t$ the result is a difference equation:
$\frac{\Delta (C \cdot V)}{\Delta t} = -K \cdot C + \dot{m} \qquad(6)$
If one applies the limit $\Delta t \rightarrow 0$ one obtains a differential equation:
$\frac{d(C \cdot V)}{dt}= -K \cdot C + \dot{m} \qquad(7)$
Using the Product Rule this can be rewritten as:
$C \frac{dV}{dt}+V \frac{dC}{dt} = -K \cdot C + \dot{m} \qquad(8)$
If one assumes that the volume change is not significant, i.e. $C \frac{dV}{dt}=0$, the result is Equation 1:
$V \frac{dC}{dt} = -K \cdot C + \dot{m} \qquad(1)$
Solution to the differential equation
The general solution of the above differential equation (1) is:
$C = \frac{\dot{m}}{K} + (C_{o}-\frac{\dot{m}}{K}) e^{-\frac{K \cdot t}{V}} \qquad (9)$^[4]^[5]
• C[o] is the concentration at the beginning of dialysis or the initial concentration of the substance/drug (after it has distributed) [mmol/L] or [mol/m³]
• e is the base of the natural logarithm
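As a quick sanity check, equation (1) can be integrated numerically and compared with the closed form (9); the parameter values below are arbitrary:

# Numerically integrate V dC/dt = -K*C + m_dot and compare with equation (9).
import numpy as np
from scipy.integrate import solve_ivp

V, K, m_dot, C0 = 40.0, 0.1, 0.05, 2.0      # arbitrary: L, L/min, mmol/min, mmol/L

def dCdt(t, C):
    return (-K * C + m_dot) / V

t = np.linspace(0.0, 600.0, 601)            # minutes
numeric = solve_ivp(dCdt, (t[0], t[-1]), [C0], t_eval=t).y[0]
closed = m_dot / K + (C0 - m_dot / K) * np.exp(-K * t / V)

print(np.max(np.abs(numeric - closed)))     # small (solver tolerance): the two curves agree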
Steady-state solution
The solution to the above differential equation (9) at time infinity (steady state) is:
$C_{\infty} = \frac {\dot{m}}{K} \qquad (10a)$
The above equation (10a) can be rewritten as:
$K = \frac {\dot{m}}{C_{\infty}} \qquad (10b)$
The above equation (10b) makes clear the relationship between mass removal and clearance. It states that (with a constant mass generation) the concentration and clearance vary inversely with one
another. If applied to creatinine (i.e. creatinine clearance), it follows from the equation that if the serum creatinine doubles the clearance halves and that if the serum creatinine quadruples the
clearance is quartered.
Measurement of renal clearance
Renal clearance can be measured with a timed collection of urine and an analysis of its composition with the aid of the following equation (which follows directly from the derivation of (10b)):
$K = \frac {C_U \cdot Q}{C_B} \qquad (11)$
• K is the clearance [mL/min]
• C[U] is the urine concentration [mmol/L] (in the USA often [mg/mL])
• Q is the urine flow (volume/time) [mL/min] (often [mL/24 hours])
• C[B] is the plasma concentration [mmol/L] (in the USA often [mg/mL])
When the substance "C" is creatinine, an endogenous chemical that is excreted only by filtration, the calculated clearance is equivalent to the glomerular filtration rate. Inulin clearance is also
used to estimate glomerular filtration rate.
Note - the above equation (11) is valid only for the steady-state condition. If the substance being cleared is not at a constant plasma concentration (i.e. not at steady-state) K must be obtained
from the (full) solution of the differential equation (9).
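For example, a creatinine clearance computed from a 24-hour urine collection with equation (11), using illustrative numbers only:

# Creatinine clearance from a timed urine collection, per equation (11).
C_U = 1.0               # urine creatinine concentration, mg/mL (illustrative)
Q = 1440.0 / (24 * 60)  # 1440 mL collected over 24 h -> 1.0 mL/min
C_B = 0.01              # plasma creatinine, mg/mL (i.e. 1 mg/dL)

K = C_U * Q / C_B
print(K, "mL/min")      # 100 mL/min, a typical normal value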
See also | {"url":"http://psychology.wikia.com/wiki/Clearance_(medicine)?oldid=109266","timestamp":"2014-04-23T23:00:13Z","content_type":null,"content_length":"85377","record_id":"<urn:uuid:512f38e2-f0e9-43a9-a244-60d84d72844a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Some lower bounds for a class of frequency assignment problems
Results 1 - 10 of 71
- IEEE Personal Communications , 1996
"... This paper provides a detailed discussion of wireless resource and channel allocation schemes. We provide a survey of a large number of published papers in the area of fixed, dynamic and hybrid
allocation schemes and compare their trade-offs in terms of complexity and performance. We also investigat ..."
Cited by 267 (1 self)
This paper provides a detailed discussion of wireless resource and channel allocation schemes. We provide a survey of a large number of published papers in the area of fixed, dynamic and hybrid
allocation schemes and compare their trade-offs in terms of complexity and performance. We also investigate these channel allocation schemes based on other factors such as distributed/centralized
control and adaptability to traffic conditions. Moreover, we provide a detailed discussion on reuse partitioning schemes, effect of hand-offs and prioritization schemes. Finally, we discuss other
important issues in resource allocation such as overlay cells, frequency planning, and power control. 1 Introduction Technological advances and rapid development of handheld wireless terminals have
facilitated the rapid growth of wireless communications and mobile computing. Taking ergonomics and economics factors into account, and considering the new trends in the telecommunications industry
to provide ubiqui...
, 1998
"... A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for
the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators ..."
Cited by 105 (14 self)
A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the
graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out
on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on
ways to further improvement. Keywords: Graph coloring, solution recombination, tabu search, combinatorial optimization. 1 Introduction A recent and very promising approach for combinatorial
optimization is to embed local search into the framework of population based evolutionary algorithms, leading to hybrid evolutionary algorithms (HEA). Such an algorithm is essentially based on two
key elements: an eff...
- INFORMS Journal on Computing , 1995
"... We present a method for solving the independent set formulation of the graph coloring problem (where there is one variable for each independent set in the graph). We use a column generation
method for implicit optimization of the linear program at each node of the branch-and-bound tree. This approac ..."
Cited by 73 (2 self)
We present a method for solving the independent set formulation of the graph coloring problem (where there is one variable for each independent set in the graph). We use a column generation method
for implicit optimization of the linear program at each node of the branch-and-bound tree. This approach, while requiring the solution of a difficult subproblem as well as needing sophisticated
branching rules, solves small to moderate size problems quickly. We have also implemented an exact graph coloring algorithm based on DSATUR for comparison. Implementation details and computational
experience are presented. 1 INTRODUCTION The graph coloring problem is one of the most useful models in graph theory. This problem has been used to solve problems in school timetabling [10], computer
register allocation [7, 8], electronic bandwidth allocation [11], and many other areas. These applications suggest that effective algorithms for solving the graph coloring problem would be of great
importance. D...
- SIAM REV , 2005
"... Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring
problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specific ..."
Cited by 41 (7 self)
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems
occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex coloring problems
here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a
unifying framework for the graph models of the variant matrix estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns
corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms
for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our
claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these
criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
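The column-partitioning idea in this abstract is easy to illustrate: two columns may share a group (a "color") when they have no nonzero in a common row, i.e. they are structurally orthogonal. The greedy sketch below is only a toy version of that idea, not one of the algorithms from the paper:

# Greedy partition of matrix columns into structurally orthogonal groups
# (a toy illustration of the coloring idea, not the surveyed algorithms).
import numpy as np

def greedy_column_groups(S):
    # S: boolean sparsity pattern (rows x cols); returns a group index for each column.
    n_rows, n_cols = S.shape
    group_of = [-1] * n_cols
    occupied = []                                   # occupied[g]: rows already used by group g
    for j in range(n_cols):
        for g, rows in enumerate(occupied):
            if not np.any(rows & S[:, j]):          # column j is structurally orthogonal to group g
                group_of[j] = g
                occupied[g] = rows | S[:, j]
                break
        else:
            group_of[j] = len(occupied)             # start a new group
            occupied.append(S[:, j].copy())
    return group_of

S = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 0]], dtype=bool)
print(greedy_column_groups(S))                      # [0, 1, 0, 2]: 3 Jacobian products instead of 4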
- HANDBOOK OF COMBINATORIAL OPTIMIZATION , 1999
"... The ever growing number of wireless communications systems deployed around the globe have made the optimal assignment of a limited radio frequency spectrum a problem of primary importance. At
issue are planning models for permanent spectrum allocation, licensing, regulation, and network design. Furt ..."
Cited by 30 (3 self)
The ever growing number of wireless communications systems deployed around the globe have made the optimal assignment of a limited radio frequency spectrum a problem of primary importance. At issue
are planning models for permanent spectrum allocation, licensing, regulation, and network design. Further at issue are on-line algorithms for dynamically assigning frequencies to users within an
established network. Applications include aeronautical mobile, land mobile, maritime mobile, broadcast, land fixed (point-to-point), and satellite systems. This paper surveys research conducted by
theoreticians, engineers, and computer scientists regarding the frequency assignment problem (FAP) in all of its guises. The paper begins by defining some of the more common types of FAPs. It
continues with a discussion on measures of optimality relating to the use of spectrum, models of interference, and mathematical representations of the many FAPs, both in graph theoretic terms, and as
mathematical pro...
, 1998
"... In this paper, a generic tabu search is presented for three coloring problems: graph coloring, T-colorings and set T-colorings. This algorithm integrates important features such as greedy
initialization, solution re-generation, dynamic tabu tenure, incremental evaluation of solutions and constraint ..."
Cited by 27 (8 self)
In this paper, a generic tabu search is presented for three coloring problems: graph coloring, T-colorings and set T-colorings. This algorithm integrates important features such as greedy
initialization, solution re-generation, dynamic tabu tenure, incremental evaluation of solutions and constraint handling techniques. Empirical comparisons show that this algorithm approaches the best
coloring algorithms and outperforms some hybrid algorithms on a wide range of benchmarks. Experiments on large random instances of T-colorings and set T-colorings show encouraging results.
- Wireless Networks , 1996
"... this paper, we introduce two such metrics: the worst-case number of channels required to accommodate all possible configurations of N calls in a cell cluster, and the set of cell states that can
be accommodated with M channels. We first measure two extreme policies, fixed channel allocation and maxi ..."
Cited by 21 (2 self)
this paper, we introduce two such metrics: the worst-case number of channels required to accommodate all possible configurations of N calls in a cell cluster, and the set of cell states that can be
accommodated with M channels. We first measure two extreme policies, fixed channel allocation and maximum packing, under these metrics. We then prove a new lower bound, under the first metric, on any
channel assignment policy. Next, we introduce three intermediate channel assignment policies, based on commonly used ideas of channel ordering, hybrid assignment, and partitioning. Finally, these
policies are used to demonstrate the tradeoff between the performance and the complexity of a channel allocation policy. 1 Introduction
, 2000
"... Finding a good graph coloring quickly is often a crucial phase in the development of efficient, parallel algorithms for many scientific and engineering applications. In this paper we consider
the problem of solving the graph coloring problem itself in parallel. We present a simple and fast paral ..."
Cited by 21 (7 self)
Finding a good graph coloring quickly is often a crucial phase in the development of efficient, parallel algorithms for many scientific and engineering applications. In this paper we consider the
problem of solving the graph coloring problem itself in parallel. We present a simple and fast parallel graph coloring heuristic that is well suited for shared memory programming and yields an almost
linear speedup on the PRAM model. We also present a second heuristic that improves on the number of colors used. The heuristics have been implemented using OpenMP. Experiments conducted on an SGI
Cray Origin 2000 super computer using very large graphs from finite element methods and eigenvalue computations validate the theoretical run-time analysis.
- Future Generation Computer Systems , 1999
"... The problem considered in this paper consists in defining an assignment of frequencies to radio links, to be established between base stations and mobile transmitters, which minimizes the global
interference over a given region. This problem is NP-hard and few results have been reported on techni ..."
Cited by 18 (2 self)
The problem considered in this paper consists in defining an assignment of frequencies to radio links, to be established between base stations and mobile transmitters, which minimizes the global
interference over a given region. This problem is NP-hard and few results have been reported on techniques for solving it to optimality. We have applied to this version of the frequency assignment
problem an ANTS metaheuristic, that is an approach following the ACO optimization paradigm. Computational results, obtained on a number of standard problem instances, testify the effectiveness of the
proposed approach. 1. Introduction The introduction of mobile communication, such as portable phones, has a tremendous impact on everyday life. Mobility raises a number of research questions: for
many of them discrete models and algorithms are required in order to solve the underlying mathematical problem. The Ant Colony Optimization paradigm (ACO) [Dorigo and Di Caro, 1999], [Maniezzo and
, 2002
"... Cellular data and communication networks are usually modeled as graphs with each node representing a base station in a cell in the network, and edges representing geographical adjacency of
cells. The problem of channel assignment in such networks can be seen as a graph multicoloring problem. We surv ..."
Cited by 18 (0 self)
Add to MetaCart
Cellular data and communication networks are usually modeled as graphs with each node representing a base station in a cell in the network, and edges representing geographical adjacency of cells. The
problem of channel assignment in such networks can be seen as a graph multicoloring problem. We survey the models, algorithms, and lower bounds for this problem. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=212799","timestamp":"2014-04-17T01:18:18Z","content_type":null,"content_length":"39193","record_id":"<urn:uuid:a43cbc6c-4d33-42ad-b1c6-529f0cbd04ba>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00051-ip-10-147-4-33.ec2.internal.warc.gz"} |
18.013A Calculus with Applications, Fall 2001, Online Textbook
Infinite series are useful mathematical tools. We discuss their convergence, properties of power series, and methods for determining their sums.
30.1 Introduction
30.2 Conditions for Convergence of an Alternating Sequence
30.3 Conditions for Absolute Convergence
30.4 Power Series and Radius of Convergence
30.5 Manipulating Absolutely Convergent Series
30.6 Computing Series Partial Sums
30.7 Expressions for Coefficients of a Power Series
30.8 Fourier Series | {"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/chapter30/contents.html","timestamp":"2014-04-21T04:33:41Z","content_type":null,"content_length":"6508","record_id":"<urn:uuid:aa2d7abd-8f50-4545-a77d-7112fc5489a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Model choice: a minimum posterior predictive loss approach
Results 1 - 10 of 47
, 2007
"... Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is
proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he ..."
Cited by 143 (17 self)
Add to MetaCart
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper
if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ̸= F. It is strictly proper if the
maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide
attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and
discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we
prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical,
and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error
and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite
functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper
scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation. A case study on probabilistic weather forecasts in the
North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile
- Journal of the Royal Statistical Society, Series B , 2002
"... [Read before The Royal Statistical Society at a meeting organized by the Research ..."
- Journal of the American Statistical Association , 2000
"... this paper we review several of these methods, and subsequently compare them in the context of two examples, the first a simple regression example, and the second a much more challenging
hierarchical longitudinal model of the kind often encountered in biostatistical practice. We find that the joint ..."
Cited by 31 (1 self)
Add to MetaCart
this paper we review several of these methods, and subsequently compare them in the context of two examples, the first a simple regression example, and the second a much more challenging hierarchical
longitudinal model of the kind often encountered in biostatistical practice. We find that the joint model-parameter space search methods perform adequately but can be difficult to program and tune,
while the marginal likelihood methods are often less troublesome and require less in the way of additional coding. Our results suggest that the latter methods may be most appropriate for
practitioners working in many standard model choice settings, while the former remain important for comparing large numbers of models, or models whose parameters cannot be easily updated in
relatively few blocks. We caution however that all of the methods we compare require significant human and computer effort, suggesting that less formal Bayesian model choice methods may offer a more
realistic alternative in many cases.
, 1998
"... We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the
log-likelihood under each model, from which we derive measures of fit and complexity (the effective number of p ..."
Cited by 28 (7 self)
Add to MetaCart
We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the
log-likelihood under each model, from which we derive measures of fit and complexity (the effective number of parameters). These may be combined into a Deviance Information Criterion (DIC), which is
shown to have an approximate decision-theoretic justification. Analytic and asymptotic identities reveal the measure of complexity to be a generalisation of a wide range of previous suggestions, with
particular reference to the neural network literature. The contributions of individual observations to fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages.
The procedure is illustrated in a number of examples, and throughout it is emphasised that the required quantities are trivial to compute in a Markov chain Monte Carlo analysis, and require no
analytic work for new...
- Journal of Agricultural, Biological and Environmental Statistics , 1997
"... The variogram is a basic tool in geostatistics. In the case of an assumed isotropic process, it is used to compare variability of the difference between pairs of observations as a function of
their distance. Customary approaches to variogram modeling create an empirical variogram and then fit a vali ..."
Cited by 26 (5 self)
Add to MetaCart
The variogram is a basic tool in geostatistics. In the case of an assumed isotropic process, it is used to compare variability of the difference between pairs of observations as a function of their
distance. Customary approaches to variogram modeling create an empirical variogram and then fit a valid parametric or nonparametric variogram model to it. Here we adopt a Bayesian approach to
variogram modeling. In particular, we seek to analyze a recent data set of scallop catches. We have the results of the analysis of an earlier data set from the region to supply useful prior
information. In addition, the Bayesian approach enables inference about any aspect of spatial dependence of interest rather than merely providing a fitted variogram. We utilize discrete mixtures of
Bessel functions which allow a rich and flexible class of variogram models. To differentiate between models, we introduce a utility based model choice criterion that encourages parsimony. We conclude
with a fully Bayesian ...
- Journal of the American Statistical Association , 1996
"... this paper are motivated and aimed at analyzing some common types of survival data from different medical studies. We will center our attention to the following topics. ..."
Cited by 23 (0 self)
Add to MetaCart
this paper are motivated and aimed at analyzing some common types of survival data from different medical studies. We will center our attention to the following topics.
- STATIST. SCI , 2004
"... ..."
- COMPUTATIONAL STATISTICS & DATA ANALYSIS 42 (2003) 513 -- 533 , 2003
"... Space-varying regression models are generalizations of standard linear model where the regression coefficients areal/fkz to change in space. Thespatial structure is specified by a mul#TE/bhEf
extension of pairwise difference priors, thusenablEk incorporation of neighboring structures and easysamplTk ..."
Cited by 13 (2 self)
Add to MetaCart
Space-varying regression models are generalizations of standard linear models where the regression coefficients are allowed to change in space. The spatial structure is specified by a multivariate
extension of pairwise difference priors, thus enabling incorporation of neighboring structures and easy sampling schemes. Bayesian inference is performed by incorporation of a prior distribution for the
hyperparameters. This approach leads to an untractable posterior distribution. Inference is approximated by drawing samples from the posterior distribution. Different sampling schemes are available and may
be used in an MCMC algorithm. They basically differ in the way they handle blocks of regression coefficients. Approaches vary from sampling each block of the vector of coefficients to complete elimination of
all regression coefficients by analytical integration. These schemes are compared in terms of their computation, chain autocorrelation and resulting inference. Results are illustrated with simulated data
and applied to a real dataset. Related prior specifications that can accommodate the spatial structure in different forms are also discussed. The paper concludes with a few general remarks.
"... The deviance information criterion (DIC) is widely used for Bayesian model comparison, despite the lack of a clear theoretical foundation. DIC is shown to be an approximation to a penalized loss
function based on the deviance, with a penalty derived from a cross-validation argument. This approximati ..."
Cited by 10 (0 self)
Add to MetaCart
The deviance information criterion (DIC) is widely used for Bayesian model comparison, despite the lack of a clear theoretical foundation. DIC is shown to be an approximation to a penalized loss
function based on the deviance, with a penalty derived from a cross-validation argument. This approximation is valid only when the effective number of parameters in the model is much smaller than the
number of independent observations. In disease mapping, a typical application of DIC, this assumption does not hold and DIC under-penalizes more complex models. Another deviance-based loss function,
derived from the same decision-theoretic framework, is applied to mixture models, which have previously been considered an unsuitable application for DIC.
, 2007
"... Summary. We discuss tools for the evaluation of probabilistic forecasts and the critique of statistical models for ordered discrete data. Our proposals include a non-randomized version of the
probability integral transform, marginal calibration diagrams and proper scoring rules, such as the predicti ..."
Cited by 9 (1 self)
Add to MetaCart
Summary. We discuss tools for the evaluation of probabilistic forecasts and the critique of statistical models for ordered discrete data. Our proposals include a non-randomized version of the
probability integral transform, marginal calibration diagrams and proper scoring rules, such as the predictive deviance. In case studies, we critique count regression models for patent data, and
assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=763012","timestamp":"2014-04-20T05:02:52Z","content_type":null,"content_length":"37646","record_id":"<urn:uuid:7fead9f7-e105-4421-abc7-897a91235c0b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Isomorphism type of fibered products of groups
This question is, in a way, a follow-up of this earlier question of mine.
Let $A$, $B$ and $F$ be finite groups and let $\alpha: A \to F$ and $\beta: B \to F$ be surjective homomorphisms.
Let $A \times_F B$ denote the fibered product of $A$ and $B$ over $F$, defined as the subgroup of $A\times B$ consisting of those elements $(a,b)$ such that $\alpha(a) = \beta(b)$. It is the
categorical pullback of the following diagram $$\begin{matrix} & & A \cr & & \downarrow^\alpha \cr B & \stackrel{\beta}{\longrightarrow} & F \cr \end{matrix}$$
Now let $\tau \in \operatorname{Aut}(F)$ be an automorphism and consider the "twisted" fibered product $$A\times_{(F,\tau)} B = \lbrace (a,b) \in A \times B \mid \alpha(a) = \tau(\beta(b))\rbrace.$$
In other words, it is the pullback of the diagram $$\begin{matrix} & & A \cr & & \downarrow^\alpha \cr B & \stackrel{\tau \circ\beta}{\longrightarrow} & F \cr \end{matrix}$$
It follows from Robin Chapman's answer to my earlier question, that in the case where $A$ and $B$ are cyclic groups, $A\times_{(F,\tau)} B$ and $A\times_F B$ are abstractly isomorphic. In other
words, the isomorphism type of the fibered product is impervious to twisting by automorphisms of $F$.
This situation is not exclusive to cyclic groups. In fact, in a paper I am writing at the moment, a large number of fibered products of ADE subgroups of $\operatorname{Sp}(1)$ arise and in all cases
the fibered products do not see the automorphism of $F$, up to isomorphism. The key observation in all cases is that one can lift the automorphism $\tau$ of $F$ to an automorphism of either $A$ or
$B$. This is trivial for inner automorphisms, since they lift via surjections, but $F$ often admits automorphisms which are not inner and they too happen to lift.
Naturally, one is always suspicious that something which can be shown to hold by a case-by-case analysis might in fact follow from some general result. Hence my question:
How general is this?
More precisely, let me ask two questions.
(1) Do automorphisms always lift via surjections?
If true, this would explain what I have observed, but I suspect this is not true: although inner automorphisms do indeed lift, normal subgroups (which define surjections) need not be preserved under
outer automorphisms. And at any rate, this would perhaps be too strong a result. What I really want to know is the answer to this next question:
(2) Are twisted fibered products $A\times_{(F,\tau)} B$ corresponding to different automorphisms $\tau$ always abstractly isomorphic?
Thanks in advance!
gr.group-theory finite-groups fibre-products
2 Answers
(1) No. Let $C_n$ denote the cyclic group of order $n$, and since I will only consider abelian groups, I will write all groups additively. There is a surjection $C_9 \times C_3 \to C_3
\times C_3$ given on generators by $(1,0) \mapsto (1,0)$ and $(0,1) \mapsto (0,1)$. Then all preimages of $(1,0) \in C_3 \times C_3$ have order $9$ in $C_9 \times C_3$, whereas all
preimages of $(0,1)$ have order $3$ in $C_9 \times C_3$. So the involution of $C_3 \times C_3$ given by switching the two generators does not lift to an automorphism of $C_9 \times
C_3$.
(2) is the interesting question, and I don't see the answer immediately. My suspicion is "no".
... or follow your nose from Theo's example ... – Tom Goodwillie Jul 14 '10 at 13:15
Yeah, my thought was that this would give a counterexample to the claim, but I didn't take the time to think it through. – Theo Johnson-Freyd Jul 14 '10 at 17:44
(I do not understand the comments here -- did I miss something?) Thanks -- this is a very nice example. – José Figueroa-O'Farrill Jul 15 '10 at 0:17
Jose - Originally there was another comment on which I was commenting, but then it was deleted. – Steve D Jul 20 '10 at 18:45
Theo's example works for question (2) as well. Let $p$ be a prime (it doesn't have to be $3$). Let $\alpha:Z_{p^2}\times Z_p\to Z_p\times Z_p$ be the projection map and $\beta=\alpha$.
Then the fibre product is isomorphic to $Z_{p^2}\times Z_p\times Z_p$.
Now let $\tau:(x,y)\to(y,x)$ be an automorphism of $Z_p\times Z_p$. Then the twisted fibre product is isomorphic to the fibre product of $\alpha$ and the projection map $\beta':Z_p\times
Z_{p^2}\to Z_p\times Z_p$ and this is isomorphic to $Z_{p^2}\times Z_{p^2}$. So these fibre products are not isomorphic.
This doesn't even need $p$ prime, $p>1$ will suffice :-)
+1. Thanks! I wish I could accept both your answers! – José Figueroa-O'Farrill Jul 15 '10 at 17:15
1 +1. I just like the number "3". – Theo Johnson-Freyd Jul 20 '10 at 22:52
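A brute-force check of this example for $p=2$ (an illustrative sketch only; the encoding of the groups below is an assumption, not part of the answers above) confirms that the two fibre products are not isomorphic: both have $16$ elements, but they have $8$ and $12$ elements of order $4$, respectively, matching $Z_4\times Z_2\times Z_2$ and $Z_4\times Z_4$.

    from itertools import product
    from collections import Counter

    p = 2
    A = list(product(range(p * p), range(p)))        # Z_{p^2} x Z_p, written additively

    def proj(a):                                     # alpha = beta: reduce the first factor mod p
        return (a[0] % p, a[1] % p)

    def swap(f):                                     # the automorphism tau of Z_p x Z_p
        return (f[1], f[0])

    def add_A(a, c):                                 # addition in Z_{p^2} x Z_p
        return ((a[0] + c[0]) % (p * p), (a[1] + c[1]) % p)

    def fibre_product(twisted):
        return [(a, b) for a, b in product(A, A)
                if proj(a) == (swap(proj(b)) if twisted else proj(b))]

    def order_profile(elems):
        zero = ((0, 0), (0, 0))
        plus = lambda u, v: (add_A(u[0], v[0]), add_A(u[1], v[1]))
        def order(g):
            n, cur = 1, g
            while cur != zero:
                cur, n = plus(cur, g), n + 1
            return n
        return Counter(order(g) for g in elems)

    print(order_profile(fibre_product(False)))   # Z_4 x Z_2 x Z_2: 8 elements of order 4
    print(order_profile(fibre_product(True)))    # Z_4 x Z_4: 12 elements of order 4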
Not the answer you're looking for? Browse other questions tagged gr.group-theory finite-groups fibre-products or ask your own question. | {"url":"http://mathoverflow.net/questions/31783/isomorphism-type-of-fibered-products-of-groups/32040","timestamp":"2014-04-16T13:54:56Z","content_type":null,"content_length":"62514","record_id":"<urn:uuid:85359fe4-0b2d-4d66-98cc-7a82ca578aca>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
Investigation of the Stability of the Laminar Boundary Layer in a Compressible Fluid
Lees, Lester and Lin, Chia Chiao (1946) Investigation of the Stability of the Laminar Boundary Layer in a Compressible Fluid. National Advisory Committee for Aeronautics , Washington, D. C.. http://
See Usage Policy.
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:LEEnacatn1115
In the present report the stability of two-dimensional laminar flows of a gas is investigated by the method of small perturbations. The chief emphasis is placed on the case of the laminar boundary
layer. Part I of the present report deals with the general mathematical theory. The general equations governing one normal mode of the small velocity and temperature disturbances are derived and
studied in great detail. It is found that for Reynolds numbers of the order of those encountered in most aerodynamic problems, the temperature disturbances have only a negligible effect on those
particular velocity solutions which depend primarily on the viscosity coefficient ("viscous solutions"). Indeed, the latter are actually of the same form in the compressible fluid as in the
incompressible fluid, at least to the first approximation. Because of this fact, the mathematical analysis is greatly simplified. The final equation determining the characteristic values of the
stability problem depends on the "inviscid solutions" and the function of Tietjens in a manner very similar to the case of the incompressible fluid. The second viscosity coefficient and the
coefficient of heat conductivity do not enter the problem; only the ordinary coefficient of viscosity near the solid surface is involved. Part II deals with the limiting case of infinite Reynolds
numbers. The study of energy relations is very much emphasized. It is shown that the disturbance will gain energy from the main flow if the gradient of the product of mean density and mean vorticity
near the solid surface has a sign opposite to that near the outer edge of the boundary layer. A general stability criterion has been obtained in terms of the gradient of the product of density and
vorticity, analogous to the Rayleigh-Tollmien criterion for the case of an incompressible fluid. If this gradient vanishes for some value of the velocity ratio of the main flow exceeding 1 - 1/M
(where M is the free stream Mach number), then neutral and self-excited "subsonic" disturbances exist in the inviscid fluid. (The subsonic disturbances die out rapidly with distance from the solid
surface.) The conditions for the existence of other types of disturbance have not yet been established to this extent of exactness. A formula has been worked out to give the amplitude ratio of
incoming and reflected sound waves. It is found in the present investigation that when the solid boundary is heated, the boundary layer flow is destabilized through the change in the distribution of
the product of density and vorticity, but stabilized through the increase of kinematic viscosity near the solid boundary. When the solid boundary is cooled, the situation is just the reverse. The
actual extent to which these two effects counteract each other can only be settled by actual computation or some approximate estimates of the minimum critical Reynolds number. This question will be
investigated in a subsequent report. Part III deals with the stability of laminar flows in a perfect gas with the effect of viscosity included. The method for the numerical computation of the
stability limit is outlined; detailed numerical calculations will be carried out in a subsequent report.
Item Type: Report or Paper (Technical Report)
Record Number: CaltechAUTHORS:LEEnacatn1115
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:LEEnacatn1115
Alternative URL: http://naca.larc.nasa.gov/reports/1946/naca-tn-1115/naca-tn-1115.pdf
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 457
Collection: CaltechAUTHORS
Deposited By: Archive Administrator
Deposited On: 21 Jun 2005
Last Modified: 26 Dec 2012 08:40
Repository Staff Only: item control page | {"url":"http://authors.library.caltech.edu/457/","timestamp":"2014-04-19T20:18:34Z","content_type":null,"content_length":"26182","record_id":"<urn:uuid:fc4ce32e-9a42-4a3a-b7f2-1c22369cddfd>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 11. Configuration Interaction
An AMPAC N-electron wavefunction Ψ is always initially defined to be a single-determinant SCF wavefunction Ψ[SCF], i.e., a determinant ψ of orthonormal, variationally determined spin orbitals (SOs)
χ. For RHF, the spatial components (MOs) φ of the SOs are restricted to be identical in pairs of alpha and beta electrons (or “half-electrons” of fractional charge for open-shell RHF) during the SCF,
whereas this is not the case in UHF. In some RHF cases, it is desirable or necessary (e.g., open-shell RHF, calculation of UV/visible spectra, etc.) to go beyond SCF to a more sophisticated
Configuration Interaction (CI) method (CI cannot be used with UHF), where Ψ = Ψ[CI] is one of many possible variationally determined linear combinations of determinants ψ. The set of determinants ψ
combined to form Ψ[CI] is called an N-electron basis (in contrast to the one-electron basis of Slater orbitals used to expand the SOs) and includes a reference determinant Ψ[Ref] (usually Ψ[SCF]) as
well as a set of “excited” determinants obtained by moving one or more of the electrons from the occupied SOs of Ψ[Ref] to corresponding virtual SOs. For open-shell RHF, the reference wavefunction Ψ
[Ref] is not Ψ[SCF] since it contains fractionally occupied SOs, but rather a determinant obtained by filling the SOs of Ψ[SCF] in the standard way using the Aufbau principle. In AMPAC, a generic
single determinant ψ is referred to as a “microstate”. A general expression for Ψ[CI] can be given by:
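In conventional CI notation, and consistent with the definitions that follow, this expansion has the standard form (a reconstruction in the notation of this chapter; the typeset Equation (11.1) may differ in detail):

    Ψ[CI] = C[0] Ψ[Ref] + Σ_{i,a} C[i,a] ψ^ia + Σ_{i<j, a<b} C[ij,ab] ψ^ia,jb + Σ_{i<j<k, a<b<c} C[ijk,abc] ψ^ia,jb,kc + ...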
where the “C”s are the linear coefficients to be determined, ψ^ia is the determinant resulting from moving an electron from occupied SO χ[i] to virtual SO χ[a], ψ^ia,jb is the determinant resulting
from moving electrons from occupied SOs χ[i] and χ[j] to virtual SOs χ[a] and χ[b] respectively, etc. The sums involving i, j, k, ... are over some subset of occupied SOs in ψ[Ref] while the sums
involving a,b,c,... are over some subset of virtual SOs in ψ[Ref]. Together, these SOs define the CI-active MOs, or the “active space”. A set of CI wavefunctions and corresponding energies can be
variationally determined (coefficients C optimized, orbitals fixed) by solving the matrix eigenvalue equation resulting from differentiating the standard Hamiltonian energy expression with respect to
the elements of the CI coefficient vector C and setting the result to zero:
where H is a semi-empirical Hamiltonian matrix over microstates (H[pq] = <ψ[p]|H|ψ[q] >) and V is the overlap matrix over microstates (V[pq] = <ψ[p]|ψ[q]>).
Members of the set of CI wavefunctions satisfying Equation (11.2) are called “CI eigenstates” and are labeled here by their root number in order of increasing energy as Ψ^[R], starting with R = 1 for
the lowest energy CI eigenstate Ψ^[1].
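As a minimal numerical illustration of Equation (11.2) for the common case of orthonormal microstates (so that V is the identity), the CI eigenstates are simply the eigenvectors of H ordered by energy. The sketch below is illustrative only and is not AMPAC code; the Hamiltonian matrix elements are invented.

    import numpy as np

    # Toy CI Hamiltonian over three orthonormal microstates (V = identity), in eV.
    H = np.array([[-5.0,  0.3,  0.1],
                  [ 0.3, -2.0,  0.4],
                  [ 0.1,  0.4, -1.5]])

    E, C = np.linalg.eigh(H)   # eigenvalues in ascending order; columns of C are coefficient vectors
    for R, (energy, coeffs) in enumerate(zip(E, C.T), start=1):
        print("root R=%d   E=%8.4f eV   coefficients=%s" % (R, energy, np.round(coeffs, 3)))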
A microstate ψ with N[α] alpha electrons and N[β] beta electrons is an eigenfunction of the operator Ŝ[z] (the z-component of the total electron spin angular momentum operator) with eigenvalue S[z]:
For an N-electron system, the set {S[z]} of possible values for S[z] is:
A microstate ψ is not an eigenfunction of the operator Ŝ^2 (the square of the total electron spin angular momentum operator) unless it has a closed-shell configuration (all MOs doubly occupied or
empty) or a high-spin open-shell configuration (all singly-occupied MOs have parallel spin):
A “spin adapted microstate” η^[S,Sz] is a linear combination of microstates that is defined to be an eigenfunction of both Ŝ[z] and Ŝ^2:
For example, a microstate with two singly-occupied MOs with opposite spin (S[z] = 0) is not an eigenfunction of Ŝ^2, but the combination of this microstate with the corresponding spin-flipped
microstate is an eigenfunction of Ŝ^2, with quantum number S = 0.
For an N-electron system, the set of possible values for S are:
For a given value of S, the set of possible values of S[z] are:
The “spin multiplicity” S[M] corresponding to S is given by:
The spin multiplicity indicates the number of possible values for S[z], and therefore the degeneracy of a spin adapted microstate with total spin quantum number S. For “singlets” (S[M] = 1, S = 0),
there is only one possible value for S[z]: S[z] = 0. For "doublets" (S[M] = 2, S = 1/2), there are two possible values for S[z]: S[z] = -1/2 and S[z] = 1/2. For "triplets" (S[M] = 3, S = 1), there
are three possible values for S[z]: S[z] = -1, S[z] = 0 and S[z] = 1.
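For reference, the standard relations described above can be written compactly (in the notation of this chapter):

    S[z] = (N[α] - N[β]) / 2
    S ∈ { N/2, N/2 - 1, N/2 - 2, ... }, ending at 1/2 for odd N and at 0 for even N
    S[z] ∈ { -S, -S + 1, ..., S - 1, S } for a given value of S
    S[M] = 2S + 1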
Since exact eigenstates of the non-relativistic Hamiltonian (modeled by the semi-empirical Hamiltonian H) are pure spin states, i.e., eigenfunctions of both Ŝ[z] and Ŝ^2, it is useful to constrain the
CI eigenstates to be as well. This can be achieved if, instead of using just the “raw” microstates ψ as the N-electron basis prior to solving Equation (11.2), the N-electron basis is defined in terms
of spin adapted microstates η.
In terms of a set of spin adapted microstates η^[S,Sz] with spin quantum numbers S and S[z], a corresponding pure spin state CI eigenstate Ψ^[R,S,Sz][CI] is given by:
In AMPAC, the CI eigenstates generated are always expanded according to Equation (11.13) and so they are pure spin states. In addition, for efficiency only one member of a degenerate set of spin
adapted microstates is ever used. By default, this is the one with the smallest non-negative value of S[z] (0 for even-electron systems, 1/2 for odd-electron systems), but this is modifiable using
the keywords SZ=n or MICROS=n. Thus, the CI matrix equations to be solved are:
where H^[R,S,Sz][pq] = <η[p]^[S,Sz]|H|η[q]^[S,Sz]> and V[pq] = <η[p]^[S,Sz]|η[q]^[S,Sz]>.
Combining Equations (11.7) and (11.13), Ψ^[R,S,Sz][CI] can be expressed directly in terms of the microstates ψ[m]^[Sz] with coefficients D[m]^[R,S,Sz] as:
In the AMPAC output files, it is always the microstate coefficients D[m]^[R,S,Sz] which are printed.
While there are many CI eigenstates which can be calculated (the number can be specified using the keyword CISTATE=n), AMPAC considers one of them to be the “primary” CI eigenstate whose energy
hypersurface will be followed during geometry optimizations and which will be used as the reference for all property calculations. The other n - 1 CI eigenstates requested by CISTATE=n are considered
“secondary” CI eigenstates, for which some properties are calculated and printed, typically at one or more optimized geometries of the primary eigenstate, so their transition properties are
non-adiabatic. By default, the primary CI eigenstate is the ground state, of any spin multiplicity, and the secondary eigenstates are all excited states. To specify a different primary CI
eigenstate, use one of the spin multiplicity keywords SINGLET, DOUBLET, TRIPLET, etc. and/or ROOT=n, where n = 1 refers to the ground state. For example, to use the second‑lowest energy triplet CI
eigenstate ("T2") as the primary one, specify TRIPLET and ROOT=2. To use the second-lowest energy CI eigenstate of any spin multiplicity as the primary one, specify ROOT=2 without a spin multiplicity keyword.
For the CI eigenstate Ψ^[R,S,Sz], the total electron density function ρ^[R,S,Sz](r) is expressed in terms of the 2M occupied and virtual SOs χ of ψ[Ref] by:
where γ[mi] is the occupancy (0 or 1) of the i^th SO for the m^th microstate, s[z,i] = 1/2 for alpha SOs and -1/2 for beta SOs. In terms of the corresponding M MOs, ρ^[R,S,Sz](r) is given by:
where γ[a,mi] is the occupancy (0 or 1) of the alpha SO of the i^th MO for the m^th microstate and P[MO]^[R,S,Sz] is the total one-electron density matrix in the MO basis, with alpha and beta
contributions P[MO,α]^[R,S,Sz] and P[MO,β]^[R,S,Sz], respectively. In terms of a basis of L atomic orbitals (AOs) symbolized by ξ, ρ^[R,S,Sz](r) is given by:
where P[AO]^[R,S,Sz] is the total one-electron density matrix in the AO basis, with alpha and beta contributions P[AO,α]^[R,S,Sz] and P[AO,β]^[R,S,Sz], respectively.
In AMPAC, when the keyword CIDIP is specified, the dipole moment and Mulliken atomic charges are calculated for both the primary and secondary CI eigenstates from the corresponding density matrices P
[AO]^[R,S,Sz]. In general, other one-electron properties which are also available without CI, such as ESP charges, are calculated in CI calculations from P[AO]^[R,S,Sz], but only for the primary CI eigenstate.
The “electron spin density” ρ[σ]^[R,S,Sz](r) corresponding to ρ^[R,S,Sz](r) is simply the alpha electron density ρ[α]^[R,S,Sz](r) minus the beta electron density ρ[β]^[R,S,Sz](r), which, along with
the corresponding spin density matrices P[MO,σ]^[R,S,Sz] and P[AO,σ]^[R,S,Sz], is given by:
In AMPAC, when the ESR keyword is specified, the spin density matrices P[MO,σ]^[R,S,Sz] and P[AO,σ]^[R,S,Sz] are printed for the primary CI eigenstate along with the net Mulliken atomic electron
spins for both primary and secondary CI eigenstates. The net Mulliken electron spin for the A^th atom, σ[A], is calculated like the corresponding Mulliken atomic electron population except that P
[AO,σ]^[R,S,Sz] is used instead of P[AO]^[R,S,Sz].
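A minimal sketch of this bookkeeping (illustrative only, not AMPAC code; in the semi-empirical ZDO framework the AO overlap is taken as the identity, so the Mulliken population reduces to the diagonal of the density matrix):

    import numpy as np

    def mulliken_spins(P_alpha, P_beta, ao_to_atom):
        # Net Mulliken electron spin per atom from AO-basis alpha/beta density matrices.
        # ao_to_atom[i] gives the index of the atom that AO i belongs to.
        spin_diag = np.diag(P_alpha - P_beta)          # diagonal of P[AO,sigma]
        sigma = np.zeros(max(ao_to_atom) + 1)
        for ao, atom in enumerate(ao_to_atom):
            sigma[atom] += spin_diag[ao]
        return sigma

    # Invented example: two atoms with two AOs each; one unpaired alpha electron
    # shared 80/20 between the atoms.
    P_a = np.diag([1.0, 0.8, 1.0, 0.2])
    P_b = np.diag([1.0, 0.0, 1.0, 0.0])
    print(mulliken_spins(P_a, P_b, [0, 0, 1, 1]))      # -> [0.8  0.2]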
The transition dipole moment μ^[R→n,S,Sz] between CI eigenstates Ψ^[R,S,Sz] and Ψ^[n,S,Sz] is an important result:
For example, contributions from all available transition dipole moments appear in the “sum-over-states” (SOS) expression for the dynamic polarizability tensor α^[R→n,S,Sz](ω), given by Equation
(11.26). Individual transition dipole moments are also of interest because they yield information about the UV / visible spectrum of a molecule. The oscillator strength f^[R→n,S,Sz] between states Ψ^
[R,S,Sz] and Ψ^[n,S,Sz] is proportional to the absorptivity of light at a wavelength λ^[R→n,S,Sz]:
where K is a constant. By default, AMPAC writes the transition dipole moments μ^[R→n,S,Sz], transition wavelengths λ^[R→n,S,Sz] and oscillator strengths f^[R→n,S,Sz] between the primary eigenstate Ψ^
[R,S,Sz] and all of the secondary CI eigenstates Ψ^[n,S,Sz]. In AMPAC, the number of CI eigenstates to calculate, including the primary CI eigenstate, can be specified using the CISTATE=n keyword
(some of these will have a different total spin quantum number S than the primary eigenstate and so their corresponding transition dipole moments vanish).
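The constant K is not reproduced above; in atomic units the standard length-gauge relation is f = (2/3) ΔE |μ|², and that form is assumed in the following illustrative sketch (not AMPAC code; the transition energy and dipole are invented):

    HARTREE_TO_EV = 27.2114
    EV_TO_NM = 1239.84                      # lambda [nm] = 1239.84 / (delta E [eV])

    def oscillator_strength(delta_e_hartree, mu_au):
        mu2 = sum(m * m for m in mu_au)     # |mu|^2 with mu in e*bohr
        return (2.0 / 3.0) * delta_e_hartree * mu2

    delta_e = 0.15                          # transition energy in hartree (about 4.1 eV)
    mu = (0.9, 0.0, 0.3)                    # transition dipole moment in e*bohr
    f = oscillator_strength(delta_e, mu)
    wavelength_nm = EV_TO_NM / (delta_e * HARTREE_TO_EV)
    print(f, wavelength_nm)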
The “sum-over-states” (SOS) expression for the dynamic polarizability tensor α^[R→n,S,Sz](ω) for the CI eigenstate ψ^[R,S,Sz] is given by:
where ω is the external electric field frequency (in energy units) and the sum is over all possible CI eigenstates different from the primary eigenstate, but having the same S and S[z] quantum
numbers. In AMPAC, α^[R→n,S,Sz](ω) will be calculated and written to the AMPAC output file when the keywords DYNPOL or DYNPOL=n.nnnn are specified. Note that the keyword CISTATE=n has no influence on
the calculation of dynamic polarizabilities, and vice versa, but the number of possible CI eigenstates (determined by the active space and hence the number of final microstates) does.
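A minimal sketch of such a sum-over-states evaluation (illustrative only, not AMPAC code; the standard SOS form α_ab(ω) = Σ_n 2 ΔE_n μ_a(R→n) μ_b(n→R) / (ΔE_n² − ω²), in atomic units, is assumed here, and the excitation energies and transition dipoles are invented):

    import numpy as np

    def sos_polarizability(omega, delta_e, trans_dipoles):
        # alpha(omega) from excitation energies delta_e[n] and transition dipole
        # vectors trans_dipoles[n], all in atomic units.
        alpha = np.zeros((3, 3))
        for dE, mu in zip(delta_e, trans_dipoles):
            alpha += 2.0 * dE * np.outer(mu, mu) / (dE * dE - omega * omega)
        return alpha

    delta_e = [0.15, 0.22]                                   # hartree
    mus = [np.array([0.9, 0.0, 0.3]), np.array([0.1, 0.5, 0.0])]
    print(sos_polarizability(0.0, delta_e, mus))             # static limit
    print(sos_polarizability(0.05, delta_e, mus))            # at a finite field frequency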
The set of occupied and virtual MOs whose corresponding SOs are allowed to exchange electrons in Ψ[Ref] to form new microstates ψ are called the CI-active MOs or the “active space”. The choice of
active space is one of the most crucial, and sometimes difficult, steps in a CI calculation, both computationally and in terms of physical results. Given this importance, the CI-active MOs are
usually specified along with the keywords which invoke CI, possibly together with the RECLAS(n,m) keyword and its associated MO permutation data. For example, C.I.(5,8) means “do a CAS-CI using MOs
5,6,7 and 8 as the CI-active MOs”. It is essential that all or none of the members of a degenerate set of MOs be included in the active space. By default, AMPAC will abort if this is not the case.
The keywords CIGAP=n.nnnn and CI-OK can be used to alter the definition of MO degeneracy and to allow the active space to contain an incomplete set of degenerate MOs. By default, all of the MO
energies are printed to the AMPAC output file. The keywords VECTORS and ALLVEC can be used to print both the MO energies and AO coefficients to the AMPAC output file for inspection. This information
is also present in an AMPAC visualization file so that MOs can be visualized with AMPAC’s GUI. It is important to know the order in which the SCF MOs occur and their corresponding labeling. For a
system with M MOs, the MOs are ordered from 1 to M by decreasing occupancy, i.e., first doubly-occupied MOs, then partially occupied MOs and finally unoccupied (virtual) MOs. This order usually
coincides with increasing MO energy for the entire list from 1 to M, but within each subset of the same occupancy the order always coincides with increasing energy.
For RHF open-shell calculations, the SCF calculations in AMPAC are done using the “Half-Electron” method instead of the ROHF (Restricted Open-Shell Hartree-Fock) method used by others. In the “
Half-Electron” method, the usual “spin-less” closed-shell RHF SCF formalism is used to calculate Ψ[SCF], except that instead of N / 2 doubly occupied spatial MOs there are assumed to be N / 2 - n (N
even) or N / 2 + 1 - n (N odd) doubly occupied MOs and m MOs with fractional occupancies which sum to n, where n is the number of open-shell electrons . When the OPEN(n,m) keyword is specified, the m
open MOs have an equal occupancy of (n/m). When the SCFCI(n,m[1],m[2],r) keyword is specified, the set of m = m[1] + m[2] open MOs consists of a group of m[1] MOs each with an occupancy of (nm[1])/(m
[1]+rm[2]) and a group of m[2] MOs each with an occupancy of (nrm[2])/(m[1]+rm[2]). A fractionally occupied MO in the “Half-Electron” method may be thought of as being occupied by two “half-electrons
” of opposite spin and with a charge equal to half the occupancy of the MO, e.g., (n / 2m) when OPEN(n,m) is used. This leads to an energy expression which is similar to Roothan’s multiconfiguration
open-shell SCF energy expression after spurious coulomb and exchange energies arising from the interaction between “half-electrons” are subtracted out. In AMPAC, however, the energy calculated using
the “Half-Electron” method is never used, since it is non-variational, but the corresponding set of SCF MOs are, either in a “minimal” CAS-CI calculation involving all of the partially occupied MOs
of Ψ[SCF] as the active space if CI is not otherwise invoked, or more generally in any specified type of CI calculation. While the fractionally occupied SOs of Ψ[SCF] determine the active space of
corresponding MOs, the reference wavefunction Ψ[Ref] used from an open-shell RHF calculation is not Ψ[SCF] but rather a determinant obtained by filling the SOs of Ψ[SCF] in the standard way using the
Aufbau principle. It is important to note that, in general, the number of open-shell electrons to assume for the SCF should be specified explicitly using one of the keywords OPEN(n,m), BIRADICAL,
EXCITED or SCFCI, otherwise AMPAC will assume the minimum number of open-shell electrons (0 for even-electron systems and 1 for odd-electron systems) for the SCF. The spin-multiplicity keywords
(e.g., SINGLET, DOUBLET, TRIPLET, etc.) are not used in RHF until the CI portion of the calculation. Thus, for the oxygen molecule, OPEN(2,2) should be specified even if TRIPLET is also specified.
Given ψ[Ref] and a corresponding active space, a definition of which microstates to generate and potentially use for the expansion of the CI eigenstates is necessary. In the “Complete Active Space”
method (CAS-CI), specified by C.I.=n or C.I.(n,m) and the default when CI is only implied by OPEN(n,m), all possible microstates which can be generated by permutations of the electrons among the SOs
within the active space are potentially used. In the "CI Singles" method (S-CI), specified by SC.I.=n or SC.I.(n,m), all possible singly-excited microstates ψ^ia are potentially used. In the "CI
Singles and Doubles” method (SD-CI), specified by SDC.I.=n or SDC.I.(n,m), all possible singly-excited microstates ψ^ia and doubly-excited microstates ψ^ia,jb are potentially used. In the “CI
Singles, Doubles and Triples” method (SDT-CI), specified by SDTC.I.=n or SDTC.I.(n,m), all possible singly-excited microstates ψ^ia, doubly-excited microstates ψ^ia,jb and triply-excited microstates
ψ^ia,jb,kc are potentially used. The initial set of microstates is referred to here as {I}[MS].
The size of {I}[MS] grows very rapidly (combinatorially) as the size of the active space increases, especially when CAS-CI is used. (For a CAS-CI involving 10 electrons and 10 CI-active MOs, the
number of possible microstates is over 60000, after spin degeneracies are excluded.) In some cases, all of {I}[MS] should be used, if possible. If this is not the case, whether due to resource
limitations and / or to avoid “over-correlating” the already partially correlated, semi-empirically calculated ground state energy, then some means of efficiently selecting the most important “final”
set of microstates, referred to here as {F}[MS], from {I}[MS] is necessary. Typically, only a relatively small “target” set of the possible CI eigenstates, {R}[ES], are of interest. For example, {R}
[ES] might be composed of the singlet ground CI eigenstate and the first excited singlet and triplet CI eigenstates. {R}[ES] can usually be characterized in terms of relatively large contributions
from a small subset of "germ" microstates {G}[MS] = {G[0], G[1], G[2], ...}[MS], where G[0] ≡ ψ[Ref] roughly corresponds to the ground CI eigenstate R0, G[1] to a first excited CI eigenstate R1, etc.
While much of the information relevant to {R}[ES] is included in {G}[MS], the CI eigenstates of {R}[ES] constructed from {G}[MS] alone would generally have two significant deficiencies. First, there
is generally a lack of specific correlation within the set {G}[MS]. Second, the excited members {G}[MS] are lacking in “repolarization” because the SCF orbitals from which {G}[MS] is generated are
obtained from a ground state wavefunction optimization. The objective of the microstate selection procedure used in AMPAC to produce {F}[MS] is to extract from the enormous list of initial
microstates in {I}[MS] and not in {G}[MS], the ones which should contribute most to specific correlation and repolarization. This microstate selection consists of four major steps:
I. From the initial microstate space {I}[MS], keep those J[1] (≈ 10 × J[4], J[4] defined below) microstates ψ with the lowest Møller-Plesset zero-order energy E^0[MP][ψ] (sum of occupied SO energies):
where the sums over i and j are over all alpha and beta SOs, respectively, while λ and ε represent SO occupancies and energies, respectively.
II. From the J[1] microstates of step I, choose the J[2] (default 100) microstates ψ with the lowest Epstein‑Nesbet (EN) energy E[EN][ψ] (semi‑empirical Hamiltonian expectation value).
This set of J[2] microstates is the “germ” set {G}[MS] referred to above.
III. From {G}[MS] of step II, determine the J[3] (default 30) eigenvectors of the corresponding CI matrix.
IV. From the J[1] - J[2] “non-germ” microstates ψ which are in {I}[MS] but not {G}[MS], choose the J[4] (default 1200) - J[2] microstates which make the largest contribution to the following quantity
At each stage of this microstate selection procedure, the sets of microstates selected are required to preserve spatial degeneracy, i.e., all members of a degenerate set of microstates are kept if
there is space available in the target list, or not kept if there is not space available in the target list. This is achieved by simple inspection of the Møller-Plesset zero-order energies, using a
degeneracy threshold of 1.0 × 10^-4 eV, which is adjustable by the keyword CIGAP=n,n. Of course, this procedure will not cover the case of an active space containing only a partial set of degenerate
MOs. It is important to remember that either all or none of the members of a degenerate set of MOs should be included in the active space.
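A minimal sketch of step I of this selection (illustrative only, not AMPAC code; microstates are represented simply as tuples of occupied spin-orbital indices and the orbital energies are invented):

    so_energies = [-1.20, -0.95, -0.40, 0.10, 0.35, 0.80]   # SO energies (eV), SOs 0..5

    def mp0_energy(occupied_sos):
        # Moller-Plesset zero-order energy: sum of occupied spin-orbital energies.
        return sum(so_energies[i] for i in occupied_sos)

    microstates = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (0, 1, 4), (1, 2, 5)]

    J1 = 3
    selected = sorted(microstates, key=mp0_energy)[:J1]     # keep the J1 lowest-energy microstates
    for ms in selected:
        print(ms, round(mp0_energy(ms), 3))

A real implementation would additionally keep or drop degenerate sets of microstates together, as described above, before proceeding with steps II-IV.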
In AMPAC, the above microstate selection procedure can be partially customized by specifying the parameters J[2], J[3] and J[4] using the keywords CIMAX=J[4], PERTU=J[2] and PERTU(J[2],J[3]). | {"url":"http://www.semichem.com/ampacmanual/ci.html","timestamp":"2014-04-17T04:50:27Z","content_type":null,"content_length":"124825","record_id":"<urn:uuid:1c4eb803-4541-4c26-9f73-a5e6dcadc0e1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
Design of a Mathematical Unit in FPGA for the Implementation of the Control of a Magnetic Levitation System
International Journal of Reconfigurable Computing
Volume 2008 (2008), Article ID 634306, 9 pages
Research Article
Design of a Mathematical Unit in FPGA for the Implementation of the Control of a Magnetic Levitation System
Departamento de Electrónica, Centro Universitario de Ciencias Exactas e Ingenierías (CUCEI), Universidad de Guadalajara, Boulevard Marcelino García Barragan 1421, Guadalajara, Jal. 44430, Mexico
Received 2 July 2008; Revised 9 October 2008; Accepted 30 October 2008
Academic Editor: Gustavo Sutter
Copyright © 2008 Juan José Raygoza-Panduro et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
This paper presents the design and implementation of an automatically generated mathematical unit, from a program developed in Java that describes the VHDL circuit, ready to be synthesized with the
Xilinx ISE tool. The core contains diverse complex operations such as mathematical functions including sine and cosine, among others. The proposed unit is used to synthesize a sliding mode controller
for a magnetic levitation system. This kind of system is used in industrial applications requiring a high level of mathematical calculation in small time periods. The core is designed to calculate
trigonometric and arithmetic operations in such a way that each function is performed in a clock cycle. In this paper, the results of the mathematical core are shown in terms of implementation,
utilization, and application to control a magnetic levitation system.
1. Introduction
Implementing mathematical control equations in an FPGA reconfigurable device is an important aspect of the design of arithmetic blocks for control algorithms [1]. A well-known method utilized in the
implementation of arithmetic operations in FPGAs is based upon the coordinate rotation digital computer (CORDIC) algorithm [2–6] which has become the standard solution for the implementation of
complex operations in FPGAs.
This paper proposes the design of a mathematical unit dedicated to the implementation of control algorithms that involve several sequences of complex mathematical functions calculations.
Traditionally, the development of complex arithmetic functions in FPGA devices has resulted in difficulties to implement such operations. Therefore, the elaboration of mathematical operations in
Xilinx FPGAs is proposed through the core generator [7]. The objective of this paper is to explain the development of a core capable of performing mathematical operations such as trigonometric
functions in a clock cycle, using an alternative method of the core generator suggested by the manufacturer.
In order to construct such cores, the architecture of the mathematical unit is established by the user with Java software, in which the input and output parameters are defined as well as the
functions needed to perform the desired control algorithm. This tool facilitates the users' implementation of mathematical blocks in FPGAs, simplifying the flow design to the adjustment of the
interconnection of the required blocks in the main program described in VHDL. This reduces the designer's workload during the implementation stage of control algorithms. The tool is capable of
implementing 16 different types of mathematical functions which may be described according to the required algorithm. The maximum number of functions that can be implemented depends on the available
resources of the FPGA.
When the VHDL code generator is activated, a window initially appears, asking for the characteristics of the input-output variables. The word length of the input data, indicating integer and decimal bits,
must be specified. At this point, selection of the functions to be implemented according to the control algorithm is made, and finally the code generator creates a file containing the description of
each block in VHDL language ready to be synthesized by the Xilinx ISE tool [8]. Each function might be an independent module that can be interconnected with the rest of the blocks in order to
represent the equations that describe the desired algorithm. Trigonometric functions are implemented in the embedded memory of the FPGA. The advantage of solving complex functions with preloaded
tables can be clearly seen in computing time, simplifying the execution of a mathematical function to the transfer of data from memory to the accumulator register.
The control algorithm of a magnetic levitation system is presented in order to provide an implementation study for the proposed mathematical unit. This system deals with the levitation of steel
objects aided by a controlled electromagnetic force that is equal and opposed to the gravitational force acting on the steel object. This type of control is actually applied in commercial magnetic
levitation (MAGLEV) trains [9].
2. Description of the Mathematical Unit
The mathematical unit has been developed with a Java program that generates blocks of mathematical functions in VHDL. The complete system is composed of 5 main modules, as shown in Figure 1, (1) VHDL
code generator, (2) RAM or ROM memory block for mathematical operations, (3) control unit for instructions, (4) accumulator registers for results, and (5) magnetic levitation system.
The mathematical unit was functionally designed in VHDL code with instantiation of RAM or ROM memories that were created through the program generator functions, elaborated in Java language,
especially for this purpose, as described in Section 2.1. The memories were programmed with input parameters assigned by the user, allowing the data input to have a suitable format according to the
designer's needs.
2.1. VHDL Code Generator
As previously mentioned, the proposed mathematical unit is capable of solving trigonometric functions in a clock cycle by using preestablished data tables. To accomplish this, a program was developed
in Java language that calculates the values of the trigonometric or mathematical functions within the range of values defined by the user, followed by the creation of tables with the calculated
values, and uses a RAM or ROM memory to store these data values and then translates them into the hardware description language VHDL. The program defines the architecture, entity, and process, and
automatically adds the necessary libraries, reducing user time and limiting the definition of each block to the ranges and precision of the input and output data in its integer and decimal parts.
The software function generator reduces the computational burden to the FPGA by using a standard computer to calculate the possible results of mathematical functions that require only one parameter
in the instantiation of a RAM or ROM memory.
The program creates the desired function as an entity in VHDL with an input and an output of the selected size. The VHDL has syntax standards, which are contained in the libraries. The program
generates the necessary lines for use by the corresponding libraries. The entity block is also created at the same time, along with the input data, ready to be synthesized by the Xilinx ISE
simulator. The list of mathematical function values is calculated with the program code generator.
An example is shown in Table 1. This table corresponds to the calculation of the cosine function, which is implemented in a ROM memory of 16 bits 1024 lines. The address bus is identified with the
letters a9 to a0, where a0 is the least significant bit. Before executing the program generating code, the data format specifies the required bits for the integer part and the decimal part.
The value of the angle is defined in radians at an interval from 0 to 3.99. In the example in Table 2, this quantity may be defined by the user in the program generator. The calculation of the cosine
function is made considering the bits from a6 to a0 as the decimals of the parameter and the bits from a9 to a7 as the integer part. The result of the function is located in the data bus where d0 is
the sign bit, d1 to d13 is the decimal part, and from d14 to d16 is the integer part of the data.
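A small sketch of how such a table can be generated outside the FPGA (this is not the authors' Java generator; the packing below uses sign, then integer, then fractional bits, so the exact field order relative to the d0/d1-d13/d14-d16 layout above, as well as the angle range, are assumptions):

    import math

    def to_fixed_word(value, int_bits=3, frac_bits=13):
        # Pack a value into sign + integer + fractional bits (17 bits in total).
        sign = "1" if value < 0 else "0"
        mag = int(round(abs(value) * (1 << frac_bits)))
        return sign + format(mag, "0%db" % (int_bits + frac_bits))

    rom_lines = []
    for addr in range(1024):                    # address bits a9..a0
        angle = addr / 128.0                    # a6..a0 treated as fractional bits of the angle
        rom_lines.append('B"%s",' % to_fixed_word(math.cos(angle)))

    print("\n".join(rom_lines[:4]))             # first entries of the VHDL constant array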
The following program code fragment is an example of the result of the VHDL mathematical functions, where numbers 9 and 16 are the defining entrance parameters that were programmed:

    library IEEE;
    use IEEE.STD_LOGIC_1164.ALL;
    use IEEE.std_logic_arith.all;
    use IEEE.std_Logic_UNSIGNED.ALL;

    entity block is
      Port(angle  : in  std_logic_vector(9 downto 0);
           result : out std_logic_vector(16 downto 0));
    end block;

    architecture behavior of block is
      type func is array (0 to 1024) of std_logic_vector(16 downto 0);
      constant Content : func := (B"00000000000100000", B"00000000000100000",
                                  B"00000000000100000", B"00000000000100000",

The critical functions programmed in C language turned a floating value into a chain of bits, as well as performing the same operation in inverse form. An example of the code follows:

    acadena(Number_to_turning, chain_of_exit, decimal_of_exit, size_of_exit)
    adouble(Chain_to_turning, Number_of_decimal)

    acadena(15.25, chain_of_exit, 2, 6)   // chain_of_exit will have the value of 111101
    acadena(15.5, chain_of_exit, 2, 6)    // chain_of_exit will have the value of 111110
    acadena(10.5, chain_of_exit, 2, 6)    // chain_of_exit will have the value of 101010
    acadena(8.75, chain_of_exit, 2, 10)   // chain_of_exit will have the value of 0000100011
    adouble("100011", 2);  // The result is 8.75
    adouble("100011", 1);  // The result is 17.5
    adouble("100011", 0);  // The result is 35

In the program, the "acadena" function transformed a floating value into a chain of bits and the "adouble" function converted a chain of bits back into a floating value. A part of the second version, which was generated in Java language, follows:

    import java.io.*;

    class seno {
      public static String acadena(double X, int enteros, int longitud) {
        double Y = 0.0;
        if (X < 0)
          Y = Math.abs(Math.ceil(X));
        else
          Y = Math.abs(Math.floor(X));

In order to complete the conversion of the floating value to a chain of bits, we followed a 2-stage process; firstly, the whole part becomes a chain of bits, and afterwards the decimal part is turned into a chain of bits. Later they are united in a single number in binary code.
The conversion process starts with the whole part of the number: a positive value is rounded down and a negative value is rounded up, i.e., toward zero. The "floor" function provides the rounding for
positive numbers and the "ceil" function for negative numbers, as in the code above. Since the conversion algorithm works with positive numbers, the "abs" function is used to take the
absolute value of the rounded number. The variable "res" keeps the final result of the conversion.
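For comparison, the behaviour documented by the examples above can be sketched in a few lines of Python (this mirrors only the non-negative case; the sign handling that the Java version performs with ceil/floor/abs is omitted, and the function names are reused purely for illustration):

    def acadena(x, decimals, length):
        # Scale by 2**decimals, truncate, and format as a bit string of the given length.
        value = int(abs(x) * (2 ** decimals))
        return format(value, "0%db" % length)

    def adouble(bits, decimals):
        # Interpret a bit string as an unsigned number with 'decimals' fractional bits.
        return int(bits, 2) / float(2 ** decimals)

    assert acadena(15.25, 2, 6) == "111101"
    assert acadena(15.5, 2, 6) == "111110"
    assert acadena(10.5, 2, 6) == "101010"
    assert acadena(8.75, 2, 10) == "0000100011"
    assert adouble("100011", 2) == 8.75
    assert adouble("100011", 1) == 17.5
    assert adouble("100011", 0) == 35.0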
The code generator program allows the usage of RAM or ROM memories and selection of these will depend on the application required. For example, when using ROM memories, these are implemented with the
internal resources of the FPGA augmenting the utilization of the circuit; the flexibility of using these memories is their facility to adjust the size of the word and required address for the precise
calculations that will be stored in them. When RAM memories are selected, as these are embedded, they do not impact the available resources in the FPGA, allowing a huge logic capacity for other
circuit implementations; the disadvantage is that they are limited in the word length and address bus width of the arrays that can be implemented.
With the objective of observing the unit's behavior during the calculation of different trigonometric functions, a sequence of operations was established for the resolution of the functions with
different angles. The obtained results are shown in Table 2. The first column corresponds to the evaluation angles; the second column is equivalent to the first column in radians; the third column
shows the results of the cosine function obtained with the mathematical unit presented; the fourth column has the results obtained with Matlab; the last column presents the difference between the
value calculated with Matlab and the value obtained with the mathematical unit.
It is important to emphasize that the mathematical function sequence can be carried out to form complete equations which are calculated and stored in a ROM or RAM memory, to be used later in the
implementation of individual block control equations, that are capable of being calculated in a clock pulse, optimizing the calculation time.
2.2. Description of the Mathematical Unit Operation
The mathematical unit was implemented in a FPGA Virtex II. The results of the utilization are shown in Table 3. The utilization of slices, LUTs, and total equivalent gate (TEG) is presented in
independent columns. The column "sel" refers to the instruction code that the mathematical unit executes; a control word of 4 bits therefore selects among 16 trigonometric and mathematical functions.
The total circuit utilization is 95% of the available slices in the FPGA and 74% of LUTs, being equivalent in TEG to 58157 out of 1000000 of the available total on the Xilinx Virtex II.
3. Application to the Control of a Magnetic Levitation System
In order to demonstrate the capabilities of the mathematical unit, a sliding mode controller [10] was used to regulate a magnetic levitation system. This type of system is used in several applications such as frictionless bearings [11], high-speed MAGLEV passenger trains [12], wind tunnel levitation models [13], molten metal levitation [14], and the levitation of metal slabs during industrial manufacturing processes [15]. These systems have naturally unstable nonlinear dynamics and require closed-loop control designs for stabilization. Several control techniques have been applied to the stabilization of MAGLEV systems, such as I/O linearization [16, 17], backstepping [18], and sliding mode control [19], among others. Sliding mode control [10] has been extensively used in electromechanical systems due to its robustness to unknown bounded perturbations. Another characteristic of sliding mode control is the discontinuous nature of its control signal, which switches between two states. This is an advantage because it avoids using pulse width modulation (PWM). The drawback of sliding mode control is that the switching signal has an infinite frequency; when implemented with common switching power devices operating around 20 kHz, this produces an output phenomenon called chattering: small oscillations around the set-point. Nowadays, power devices are available with switching frequencies of at least 150 kHz, which common digital signal processor boards cannot support. To take full advantage of such switching devices, one needs high-speed digital media such as FPGAs that can support and match high switching frequencies. In this case, the chattering problem is considerably reduced.
3.1. Mathematical Model and Problem Formulation for the MAGLEV System
Figure 2 shows a schematic diagram of a MAGLEV system.
The mathematical model of the MAGLEV system is given in the following equations [17]: with state vector defined as , where represents the position of the steel ball of mass which is positively
increasing in the downward position, is the velocity of the steel ball, is the current through the coil, is the input voltage applied to the coil, and is the output of the system. The constant
parameters are the resistance of the coil denoted by , the inductance denoted by , which is the gravitational constant and is considered as a known perturbation term, finally which is the magnetic
constant of the electromagnet.
The control problem is based upon forcing the output to track a reference signal . Therefore, one can consider the following output tracking error:
3.2. Sliding Mode Output Regulation for the MAGLEV System
The applied control design methodology is a combination of two important control techniques: output regulation theory (ORT) [20] and sliding mode control (SMC) [10]. The advantage of using ORT is that it plays an important role in trajectory output tracking and in the rejection of known disturbances. ORT deals with the problem of finding a control law such that the output of the controlled system asymptotically tracks a signal generated by an exosystem while at the same time rejecting perturbations possibly generated by the same exosystem. The resulting control signal is continuous or smooth, and in that case PWM is required for implementation. When ORT is combined with SMC one obtains a control methodology commonly known as sliding mode output regulation (SMOR) [10], which provides robustness against unknown perturbations and avoids the use of PWM, as mentioned before.
The exosystem is proposed as follows: with initial conditions , , and , such that, the exosystem generates a reference output tracking signal for an MAGLEV system, which is chosen as , that is, a
sinusoidal shape signal with frequency , peak value of , and a dc bias value . The reference signal is chosen in this way in order to test some trigonometric functions of the mathematical unit. In
this case, the steel ball will move upward and downward as dictated by the amplitude and frequency of the reference signal.
What follows is the ideal steady state of operation for the MAGLEV system, that is, ; this state is such that, if the original states of the MAGLEV, , are driven to the ideal steady-state, then the
output tracking error will asymptotically decay to zero, accomplishing the control objective. In order to find the steady state of operation one must solve the well-known Francis-Isidori-Byrnes [20]
equations. In the case of the MAGLEV system results are as follows: with . Note that the ideal steady-state value for is obviously zero. Using this fact, one easily calculates from (6) , replacing in
(4) one finds that . Substituting in (5), one obtains the expression for as . The variable represents the steady-state value for the control input , but it is not necessary to calculate such
expression when using SMC actions. Let us define the steady-state error as The dynamic equation for (7) with tracking error (2) can be obtained from (1) as Now, one defines the sliding function and
control as where sign is the typical signum function, with and .
Making use of a rigorous stability analysis by means of a Lyapunov function [10], one finds a stability condition for gain : where is a solution of , namely, If condition (12) is satisfied then is
guaranteed, implying that can be calculated from (11) as That is, the differential equation (10) is unnecessary as its solution (14) is now known. The remaining differential equations for and are
obtained by replacing (14) in (8) and (9). This residual dynamic is known as the sliding mode dynamic. This dynamic is made stable by the proper choice of . An easy way to stabilize the sliding mode
dynamic is by using its linear approximation at the origin as shown here: with and being matrices of appropriate dimensions obtained from the linear approximation, and where H.O.T. stands for higher-order terms that vanish at the origin. Now, is chosen so that the matrix is Hurwitz, that is, all of its poles have negative real parts. In this case and as a consequence by (14) tends to zero too. By continuity, using one finally finds that the output tracking error e asymptotically tends to zero, satisfying the control objective. Finally, a closed-loop block diagram is presented in Figure 3.
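The following Python sketch assembles the pieces described in Sections 3.1 and 3.2 so the overall structure is easier to follow. It uses the common textbook MAGLEV model and an illustrative sliding function; the model form, the parameter names and values, the gains, and the particular choice of s below are assumptions made for illustration and are not taken verbatim from the article:

    import numpy as np

    # Assumed plant parameters (illustrative values only)
    m, C, R, L, g = 0.05, 1e-4, 10.0, 0.5, 9.81

    def plant_rhs(x, u):
        # textbook MAGLEV model: x = [position (down positive), velocity, coil current]
        x1, x2, x3 = x
        dx1 = x2
        dx2 = g - (C / m) * (x3 / x1) ** 2     # gravity versus magnetic pull
        dx3 = (u - R * x3) / L
        return np.array([dx1, dx2, dx3])

    # Reference generated by the exosystem: a sinusoid with a dc bias, as in the text
    A, w0, bias = 0.002, 2 * np.pi, 0.01
    yref   = lambda t: bias + A * np.sin(w0 * t)
    dyref  = lambda t: A * w0 * np.cos(w0 * t)
    ddyref = lambda t: -A * w0 ** 2 * np.sin(w0 * t)

    def steady_state(t):
        # ideal steady state: pi2 is the derivative of pi1, and pi3 follows from the
        # force balance, mirroring the Francis-Isidori-Byrnes solution described above
        p1 = yref(t)
        p2 = dyref(t)
        p3 = p1 * np.sqrt(m * (g - ddyref(t)) / C)
        return p1, p2, p3

    def control(x, t, k=30.0, c1=200.0, c2=20.0):
        # illustrative sliding-mode control: s combines the current error with the
        # position and velocity tracking errors; u switches between two values via sign(s)
        p1, p2, p3 = steady_state(t)
        s = (x[2] - p3) + c1 * (x[0] - p1) + c2 * (x[1] - p2)
        return -k * np.sign(s)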
4. Control Algorithm Implementation Results
The control algorithm was tested using an FPGA virtexII XE2V1000-4fg256, and the plant dynamics were simulated using the DSP board DS1104 from dSPACE. This type of simulation is known as hardware-in-the-loop (HIL) simulation [21]. HIL simulation is a form of real-time simulation; it differs from pure real-time simulation by the addition of a hardware component, such as an FPGA, in the loop. This technique is increasingly being used in the development and testing of complex real-time embedded systems. Moreover, the plant dynamics under control are commonly simulated in a graphical environment such as SIMULINK from Matlab. In our case the plant dynamics were created in SIMULINK and then downloaded to the DSP board DS1104 in order to arrange the I/O ports. Figure 4 shows a simple diagram of the HIL simulation that was performed.
4.1. FPGA Implementation Results
The system is declared as an entity with three inputs, representing the position of the ball , the velocity of the ball , and the current through the coil , and with an output voltage that closes the loop with the MAGLEV system. The internal variables used for the calculation of the equations use a word length of 32 bits: 15 bits to represent the integer part, 16 bits for the decimal part, and 1 bit to represent the sign. The variable υ corresponds to the final calculation of the system and has a word length of 64 bits (4 for the integer part, 1 for the sign, and 59 for the decimal part), providing the necessary accuracy for the stability of the system. The total processing time of the calculations of one cycle in the FPGA is 202 nanoseconds, representing a maximum processing speed of up to 21 nanoseconds.
Figure 5 shows the utilization of the components in the FPGA virtexII XE2V1000-4fg256, with 3% of slices, 3% of LUTs, 7% of RAMs, and 20% of multipliers. The device has sufficient resources available
to implement additional circuits.
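A rough Python illustration of what those word formats imply for resolution; the helper below is a sketch, not the VHDL that the generator actually produces:

    def quantize(x, int_bits, frac_bits):
        # round x onto a signed fixed-point grid: 1 sign bit + int_bits + frac_bits
        step = 2.0 ** (-frac_bits)
        limit = 2.0 ** int_bits - step
        return max(-limit, min(limit, round(x / step) * step))

    # 32-bit internal format: 1 sign + 15 integer + 16 fractional bits
    print(quantize(3.14159265, 15, 16))   # resolution is 2**-16, about 1.5e-5
    # 64-bit output word: 1 sign + 4 integer + 59 fractional bits
    print(2.0 ** -59)                     # resolution of the control output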
4.2. Closed-Loop System in Implementation Results
The nominal parameters of the MAGLEV system are , , , , . The constant values of the exogenous signals (2) are , , and . Taking the nominal parameters of the MAGLEV system, the following pairs of
matrices are calculated: The control parameters that appear in (11) are as follows: The matrix in (11) is calculated using the LQR function provided in Matlab.
To verify the robustness properties, some plant parameter variations are introduced, which can be seen in Figure 6, where and may change up to 100% from their nominal values. It is worth mentioning that the perturbation term generated by the variation of satisfies the matching condition [10], but the variations on do not.
Figure 7 shows the tracking of the output signal , where good performance can be appreciated for . For , where the perturbation term due to the variation in is present, the output still performs well thanks to the matching condition. Finally, the unmatched perturbation term due to the variation of appearing at adversely affects the MAGLEV system, but the output still performs well.
Figure 8 shows the output tracking error , where the transient and steady-state responses can be observed.
Figure 9 shows , which represents the ideal steady-state behavior of the current. It can be seen that the current becomes different from for due to the unmatched perturbation.
Finally, Figure 10 shows the voltage input signals where the discontinuous nature of the control signal can be appreciated. The main advantage of having discontinuous control signals is that it
avoids the use of PWM as mentioned in [10], therefore, facilitating a straightforward implementation of the control action.
5. Conclusion
This work has presented the results of a program generator for VHDL code, developed in the Java language and designed to implement a mathematical unit prototyped on reconfigurable FPGA circuits from Xilinx. The mathematical unit was used to implement the control algorithm of a magnetic levitation system, meeting the requirements of speed and precision necessary to operate under nominal conditions. The code generator tool allows the implementation of blocks containing complex operations which may be grouped in the same memory, letting operations run in a single clock pulse based on the calculation of functions through preestablished tables. Moreover, the HIL simulation test platform has facilitated the verification of the results obtained when the physical plant is not available.
1. S. Ortega-Cisneros, J. J. Raygoza-Panduro, J. Suardíaz Muro, and E. Boemo, "Rapid prototyping of a self-timed ALU with FPGAs," in Proceedings of the International Conference on Reconfigurable Computing and FPGAs (ReConFig '05), p. 8, Puebla City, Mexico, September 2005.
2. J. S. Walther, "A unified algorithm for elementary functions," in Proceedings of the AFIPS Spring Joint Computer Conference (SJCC '71), vol. 38, pp. 379–385, AFIPS Press, Montvale, NJ, USA, 1971.
3. F. Cardells-Tormo and J. Valls-Coquillat, "Optimisation of direct digital frequency synthesisers based on CORDIC," Electronics Letters, vol. 37, no. 21, pp. 1278–1280, 2001.
4. T. C. Chen, "Automatic computation of exponentials, logarithms, ratios and square roots," IBM Journal of Research and Development, vol. 16, no. 4, pp. 380–388, 1972.
5. C.-S. Wu, A.-Y. Wu, and C.-H. Lin, "A high-performance/low-latency vector rotational CORDIC architecture based on extended elementary angle set and trellis-based searching schemes," IEEE Transactions on Circuits and Systems II, vol. 50, no. 9, pp. 589–601, 2003.
6. H. Dawid and H. Meyr, "VLSI implementation of the CORDIC algorithm using redundant arithmetic," in Proceedings of IEEE International Symposium on Circuits and Systems (ISCAS '92), vol. 3, pp. 1089–1092, San Diego, Calif, USA, May 1992.
7. J. E. Volder, "The CORDIC trigonometric computing technique," IRE Transactions on Electronic Computers, vol. EC-8, no. 3, pp. 330–334, 1959.
8. Xilinx, 2007, http://www.xilinx.com.
9. M. Ono, Y. Sakuma, H. Adachi, et al., "Train control characteristic and the function of the position detecting system at the Yamanasi Maglev test line," in Proceedings of the 15th International Conference on Magnetically Levitated Systems and Linear Drives (MAGLEV '98), pp. 184–189, Mt. Fuji, Yamanashi, Japan, April 1998.
10. V. I. Utkin, A. G. Loukianov, B. Castillo-Toledo, and J. Rivera, "Sliding mode regulator design," in Variable Structure Systems: From Principles to Implementation, A. Sabanovic, L. Fridman, and S. Spurgeon, Eds., p. 1943, IEE, London, UK, 2004.
11. P. Allaire and A. Sinha, "Robust sliding mode control of a planar rigid rotor system on magnetic bearings," in Proceedings of the 6th International Symposium on Magnetic Bearings (ISMB '98), pp. 577–586, Boston, Mass, USA, August 1998.
12. H.-W. Lee, K.-C. Kim, and J. Lee, "Review of maglev train technologies," IEEE Transactions on Magnetics, vol. 42, no. 7, pp. 1917–1925, 2006.
13. R. J. M. Muscroft, D. B. Sims-Williams, and D. A. Cardwell, "The development of a passive magnetic levitation system for wind tunnel models," SAE Transactions: Journal of Passenger Cars: Mechanical Systems, vol. 115, no. 6, pp. 415–419, 2006.
14. K. Im and Y. Mochimaru, "Numerical analysis on magnetic levitation of liquid metals, using a spectral finite difference scheme," Journal of Computational Physics, vol. 203, no. 1, pp. 112–128, 2005.
15. B. V. Jayawant and D. P. Rea, "New electromagnetic suspension and its stabilization," Proceedings of the Institution of Electrical Engineers, vol. 115, no. 4, pp. 549–554, 1965.
16. A. El Hajjaji and M. Ouladsine, "Modeling and nonlinear control of magnetic levitation systems," IEEE Transactions on Industrial Electronics, vol. 48, no. 4, pp. 831–838, 2001.
17. W. Barie and J. Chiasson, "Linear and nonlinear state-space controllers for magnetic levitation," International Journal of Systems Science, vol. 27, no. 11, pp. 1153–1163, 1996.
18. Z.-J. Yang, K. Kunitoshi, S. Kanae, and K. Wada, "Adaptive robust output-feedback control of a magnetic levitation system by K-filter approach," IEEE Transactions on Industrial Electronics, vol. 55, no. 1, pp. 390–399, 2008.
19. F.-J. Lin, L.-T. Teng, and P.-H. Shieh, "Intelligent sliding-mode control using RBFN for magnetic levitation system," IEEE Transactions on Industrial Electronics, vol. 54, no. 3, pp. 1752–1762, 2007.
20. A. Isidori and C. I. Byrnes, "Output regulation of nonlinear systems," IEEE Transactions on Automatic Control, vol. 35, no. 2, pp. 131–140, 1990.
21. B. Lu, X. Wu, H. Figueroa, and A. Monti, "A low-cost real-time hardware-in-the-loop testing approach of power electronics controls," IEEE Transactions on Industrial Electronics, vol. 54, no. 2, pp. 919–931, 2007.
Guess-Check-Generalize and the Scrubbing Calculator
Several other blogs have been talking about Bret Victor’s Kill Math website, including its Scrubbing Calculator. I’d like to talk about how the Scrubbing Calculator is both very similar to and very
different from an approach to solving word problems we call “Guess-Check-Generalize”. Here’s a graphic from a sample problem solved Scrubbingly.
The challenge is to find the height of each bar, given the information about other heights. When I first taught Algebra 1, my approach to this would be to get students to “translate” the problem
into algebra, trying to get them to write an equation that would be true for the right height. And the results were a mixed bag, for a lot of reasons that might be good for a different post. I
think there’s something inherently challenging about trying to write a fully symbolic statement immediately from a problem situation.
The concept of guess-check-generalize starts by changing the nature of the problem. The question to start with changes:
from What is the correct bar height? …
to Is 100 the correct bar height?
Here, 100 could have been any number at all, it’s a total guess. (Some teachers using this method ask students to write down their first guess before even presenting the problem, since students may
be afraid to guess incorrectly.)
Now we see if the guess is right. Up until now, I agree completely with the philosophy of the Scrubbing Calculator: make a guess at the bar height, then see if it’s right. This is where things get
interesting, because there’s more than one way to check the guess. The most conventional way is to add up the heights on the right side, and a student might do this:
60 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 140 = nope
It doesn’t actually matter what that equals, as long as it doesn’t equal 768. Guess-check-generalize is about determining a process you can use to check any guess; then, the process you’ve described
becomes an equation to solve. And the process can evolve from one guess to another, as students realize they’ve used the same number 8 times or that this thing is twice that thing.
So 100 was wrong; take a second guess. It doesn’t have to be a better guess, because you’re not trying to nail the numeric answer, you’re trying to nail the process of checking a guess. Let’s guess
36. Checking this guess a student might notice they could combine some terms from before:
$60 + (9 \cdot 36) + (8 \cdot 20) + 140 = 684, \text{nope}$
No more guessing. The third guess is $h$, a variable. (Students may need more guesses, especially at first; eventually some only need one or zero guesses.) Take all the places the guess was found
and replace them with the variable, noting that the correct guess yields 768:
$60 + (9 \cdot h) + (8 \cdot 20) + 140 = 768$
Solving that equation and bringing the answer back into context are still issues, but I always found the largest difficulty with the dreaded “word problem” is an inability to take the situation and
make a mathematical statement about it. When almost every real mathematical situation an adult encounters is a “word problem”, this is a major issue that needs to be addressed.
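To make the "check a guess" recipe concrete, here is a small sketch in Python using the bar problem above; the function name is mine, and the numbers come straight from the guesses in the post:

    def check(h):
        # the recipe from above: total screen height when every bar has height h
        return 60 + 9 * h + 8 * 20 + 140

    print(check(100))   # 1260, nope
    print(check(36))    # 684, still not 768
    # Generalize: 60 + 9h + 160 + 140 = 768 is the same recipe with a variable,
    # and once it is an equation it can be solved: h = (768 - check(0)) / 9
    print((768 - check(0)) / 9)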
Here’s why I think guess-check-generalize is a good way of dealing with word problems.
• The method is general in nature. The method presented here works equally well for linear and nonlinear situations, for problems with a variable on each side of the equation, for rates, coin
values, painting houses, counting beans, whatever. This is a general-purpose tool that is useful over many years, including some surprising topics like generating the equations of lines and
circles. (More on this some other time.)
• This is what people do with problems. When a problem is new or overly complicated, picking a few cases and following them through leads to an understanding of what happens in general.
Traditional word problem methods expect students to have the generalization at the ready, and it just doesn’t work that way in reality. The concept of generalizing from repeated example is a
fundamental one that all students should learn, not just those heading into STEM careers.
• Students have a simple place to start from. By asking students to guess at the answer, the difficulty level of word problems can be reduced by 2 or 3 grade levels immediately. Students with
language difficulty can learn what is happening by calculating with numbers, connecting the new language to the calculations they know, then advancing to symbols when appropriate.
• There are no black boxes. Students construct equations and can understand where they come from. Multiple equations with the same answer can be found from different techniques used on the same
problem, leading to good discussions about the basic moves of algebra and how different equations and formulas are related.
• Connections between arithmetic and algebra are reinforced. Bret Victor says this: “We are accustomed to assuming that variables must be symbols. But this isn’t true — a variable is simply a
number that varies.” I’d like this to change. Too many students only see variables as symbols for manipulation, and not as numbers that vary. Students make mistakes with variables they would
never make with numbers. When this happens, it is because they don’t see that the symbol represents a number. Since arithmetic is at the heart of guess-check-generalize, students are asked to
solidify their number skill and sense. Students begin to guess “nice” numbers, like a multiple of 3 when they see that dividing by 3 will be part of the process.
It is on this last point that I disagree deeply with the philosophy of the Scrubbing Calculator; students don’t really do any of the calculating. In the end, a student might see that the answer
produced by Scrubbing works, but if there is more than one answer, there’s no way for a student to discern this. If the problem changes slightly from its original form (say, to a 1024-high screen),
the Scrubbing solution method is to start from scratch, which doesn't help students generalize toward functions and formulas (in this case, a relationship between the screen height and the bar height).
What if the correct answer to the equation is $\sqrt 2$ or even $\frac 2 3$? I don’t see how the Scrubbing Calculator could get these answers. I agree that too many students don’t see the real
meaning of a variable, but this is no reason to ditch symbolic algebra, this is a reason to make the connections between arithmetic and algebra as strong as possible, as often as possible.
The Scrubbing Calculator’s method is an opportunity for students to make deep connections between arithmetic and algebra, between real problems and symbolic algebra. I’m disappointed that its
intended purpose is to remove symbolic algebra altogether, because it could be pretty cool. What do you think?
For homework, solve this problem using guess-check-generalize or come up with a better one. No scrubbing, please!
Nancy takes a long car trip from Boston. In one direction she drives at an average speed of 60 miles per hour, and in the other direction she drives at an average speed of 50 miles per hour. She’s
in the car a total of 38 hours for the round trip. How far from Boston was her destination? (Bonus: what city did she drive to?)
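If you want to see the same method applied here, one possible sketch (the guesses are arbitrary, and the bonus city is left alone):

    def hours(d):
        # check a guess d, in miles one way: time out at 60 mph plus time back at 50 mph
        return d / 60 + d / 50

    print(hours(900))    # 33 hours, too short
    print(hours(1100))   # about 40.3 hours, too long
    # Generalize: d/60 + d/50 = 38, so 11d/300 = 38 and d = 38 * 300 / 11, about 1036.4 miles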
11 Responses to Guess-Check-Generalize and the Scrubbing Calculator
1. I say Atlanta at 1045 miles, but I’m just guessing.
2. Great stuff here.
Some questions:
1. How do the majority of students get from guess 2 to step 3 without explicit guidance? Most students- when guessing and checking- continue doing so, unaware that there is something to
generalize. When it’s modeled like step 3, some students check out. Their intuition and number sense has been preyed upon. Now it becomes a ‘math class’ problem.
2. Why does the first guess have to be wrong? When the first guess is right – through coincidence or refined number sense- a whole new set of questions come out like: ‘Is that the only answer?’
‘Can we prove that it’s the only answer?’ ‘Can you provide answers that are wrong?’ ‘Is there a visual representation for why it’s the only answer?’
3. Thanks for the questions!
1. The reality is that it will take more than two guesses for the first such problem. Ask students to continue focusing on keeping track of their steps, getting the “rhythm of the calculations”
(as Al would say). What I’d want to hear is a student who gets sick of guessing and says “Stop it already! Whatever number you use, it’s just going to be…” That kid is ready to generalize. I have
seen it taught explicitly by asking students to take three guesses, then the fourth guess is a variable. I feel it should be up to the student to decide when to generalize, which may at first
take many guesses.
You can also "preload" this behavior by using the same tactics when building expressions for things like "3 less than a number" — do "3 less than 20" then "3 less than 50" until "3 less than n"
makes sense.
2. These are great, great questions, and well worth asking. I also feel that if a student can determine the correct answer by some means, they shouldn’t be required to also create an equation to
solve the same problem. It’s math for no purpose at that point. It’s good to present a mix of problems that have ‘nice’ answers and ones that deliberately do NOT have nice answers. The first
problem we present using this method is one from Benjamin Banneker, we call the “four fours” problem:
There’s this number. When you add 4 to it, subtract 4 from it, multiply it by 4, or divide it by 4, you get four different answers. That’s not too interesting, but the four different answers add
up to exactly 60. What is the number?
(Banneker’s original phrasing flips the question: Divide 60 into four such parts that the first being increased by 4, the second decreased by 4, the third multiplied by 4, the fourth part divided
by 4, that the sum, the difference, the product and the quotient shall be one and the same number.)
I like this problem because it’s simple to take a guess, students should eventually be generalizing, and the equation is relatively necessary to find the answer. Plus, the problem is over 200
years old!
□ As another bonus, the writer of the problem wasn’t a white guy…
4. Another advantage of this method is that it puts metacognition right into the process from the beginning: because students start by asking the question “is this actually right?” and maybe “how
far off am I?” I think they would be more likely to ask those questions at the end, too. Good stuff!
5. Chattanooga. She wanted to ride the Choo-Choo.
Here’s my concern with the scrubbing calculator-different from yours, I think.
I don’t get how the scrubbing calculator helps the student who is struggling with the set up. In the original post I read about it, Bret writes:
This is a simple problem, but it’s not obvious. It typically would require either writing out and solving an equation:
or recognizing the “trick” that we have to split the difference.
Then he sets up two equations, 2910-1000=1910 and 426+1000=1426 and “scrubs” until they are equal. I’m fine with this, but I find that for this problem, the same insight is required for either
technique. Namely, that we need to add to one person’s total and subtract from the other one.
This seems like a fundamental structural insight. In my experience with community college developmental mathematics, this is the insight students struggle with and not so much the solving of the
equation. So let’s just be clear about what the scrubbing calculator does and what it does not.
When Bret alludes to the calculator being an alternative to using division and subtraction to solve an equation, I agree. And I’m curious about the consequences of the tool from that perspective.
But he seems also to imply that the scrubbing calculator is an alternative to setting up an equation. And I disagree. I think we need the same structural insights into the problem to set up a
scrubbing solution as a symbolic algebra one. Not that there’s anything wrong with that. I’m just not sure the scrubbing calculator really solves the pedagogical problem it claims to.
6. @bowen “If the problem changes slightly from its original form (say, to a 1024-high screen), the Scrubbing solution method is to start from scratch, which doesn’t help students generalize toward
functions and formulas (in this case, a relationship between the screen height and the bar height).”
You may have missed the section on unlocked numbers, which addresses this very issue. If scrubbing bothers you, take a closer look at unlocking to see how it allows you to turn any number into a
variable, and solve for it without scrubbing at all.
@christopher “I’m just not sure the scrubbing calculator really solves the pedagogical problem it claims to.”
Believe me, I have never claimed to address any pedagogical problems whatsoever. My interest in these tools is purely practical. Most people I know (adults, solving problems they care about) have
no trouble with the insight that we have to add to one person’s total and subtract from the other — that’s what it means to pay for something. But these people won’t go near anything with an “x”,
and don’t know or care about “moving terms to the other side of the equation”.
7. Thanks for the comments, Bret. My first thought is that philosophically we are a lot closer than I thought! I agree that there is a disservice to a large number of students when they are taught
only symbolic algebra. I feel that many of the real purposes of learning algebra are ones that can and should apply to adults solving problems they care about: generalizing from examples, looking
for structure and similarities between different kinds of problems, reasoning about and picturing calculations, and more. These are what an algebra course should really be about.
I feel that symbolic algebra, and the connections between arithmetic and algebra, are critical to a deep understanding of algebraic habits of mind — and that these habits of mind are what school
mathematics should be about. You posted a link to a paper by William Thurston, and this paper contains a quote we (at CME Project) frequently cite when we introduce the philosophy of the program:
“What mathematicians most wanted and needed from me was to learn my ways of thinking, and not in fact to learn my proof of the geometrization conjecture for Haken manifolds.”
I feel this should still be true if you replace that last part with “the quadratic formula” or even “the basic moves of solving equations”. Mathematics education should be preparing students for
their adult lives, and courses that are purely about symbol manipulation do not accomplish this.
I was unclear in my comment about relationships between variables. I meant that there doesn’t seem to be a way to find the overall relationship between two variables when using the Scrubbing
Calculator. For example, if a proportional relationship emerged between two variables, it can be observed through several specific cases but not generalized. Similarly it would be difficult to
identify when variables were in an inverse, quadratic, or exponential relationship.
There’s more to say about Guess-Check-Generalize (more posts some other time) but I think there is a lot to talk about, educationally, as a result of these types of tools. I am hopeful that
school mathematics courses can better serve students by targeting high-level thinking goals, the habits of mind that Thurston talks about, instead of just being about content goals and “mindless
manipulation” of symbols.
But I still think the symbols of algebra are necessary to accomplish those higher goals… | {"url":"http://patternsinpractice.wordpress.com/2011/06/02/guess-check-generalize-and-the-scrubbing-calculator/","timestamp":"2014-04-21T12:19:19Z","content_type":null,"content_length":"80391","record_id":"<urn:uuid:992734b3-7bac-49e2-949c-c8b3d6313850>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Understanding Type 1 and Type 2 Errors from the Feline Perspective: All Mistakes Are Not Equal! - The Stats Cat | Minitab
Serving cat food? I sure hope you've set your alpha
level high enough.
"Bad kitty!" That's a phrase you almost never hear, but even we cats make the occasional mistake. I was reminded of this recently as I watched my human trying to analyze some data. People frequently
make mistakes when they test a hypothesis with data analysis. Specifically, they can make either Type I or Type II errors.
When I first started reading my human's statistics textbooks a few years ago, this idea seemed awfully silly to me. We cats appreciate being direct, and you either get the answer correct or you
don't. I mean, a mistake is a mistake, right? Does it really matter how you reached the wrong conclusion?
Then I recalled how, as a kitten, I once made a mistake near my litter box. Later that week, I made a mistake on my human's bed. While these two mistakes had certain similarities, the responses they
elicited from my human couldn't have been more different. So even as a kitten, I learned that not all mistakes have the same impact.
If you're talking about statistical questions, the way you make an error certainly does matter.
Type I and Type II Errors: A Vital Example of Why It Matters
Textbook authors throw around lots of examples about Type I and Type II errors and why they're important. They'll cite allegedly life-threatening examples that involve, say, testing the effectiveness
of different medicines. Or the reliability of airplane parts. Or the stopping distances for different brands of car tires.
Whatever. I guess humans care about that sort of thing, but let's be honest--none of those examples are worth a mummified mouse tail to any self-respecting cat.
So let's talk about Type I and Type II errors as they apply to a situation that actually matters, one where lives really hang in the balance: the taste of cat food.
Assume that the Puma Gourmet cat food company wants to compare two formulations for a new food. The null and alternative hypotheses are:
Null hypothesis (Ho ): m1 = m2, or "Both types of cat food taste the same."
Alternative hypothesis (H1 ): m1 ≠ m2, or "Both types of cat food do not taste the same."
The company's brainiest science guys are assigned to conduct some taste tests, gather data about how well a representative sample of cats likes each formulation, and then analyze the data.
If the conclusion they reach matches the reality of the situation, wonderful. Can't wait to try the new food. But what if they've made a mistake, even inadvertently?
To Reject, or Fail to Reject, The Null Hypothesis
As in any hypothesis test, the Puma Gourmet researchers must decide whether or not to reject the null hypothesis based on the data they've collected.
So when we talk about Type I and Type II errors, we're really talking about the two different ways in which you can botch the decision whether or not to reject the assumption that the null hypothesis
(Ho) is correct. Rejecting the null hypothesis when it is true is a Type I error.
Failing to reject the null hypothesis
when it is false is a Type II error.
It's a little easier to understand if you look at it in tabular form:
│The Reality │We do not reject the null (Ho) │We reject the null (Ho) │
│Null (Ho) is true. │Correct decision. Well done! │Type I error │
│Null (Ho) is false.│Type II error │Correct decision. Sweet!│
If you make a Type I error, you reject the null hypothesis when, in fact, it's true. In our example, the company would conclude that the two cat food formulations taste different when they really
don't. Since the cat foods taste the same, this error is not a complete disaster, because at least the cats will experience the same great Puma Gourmet taste regardless of which formulation they get.
If you commit a Type II error, you do not reject the null hypothesis even though it is false. In this case, the company would conclude that the cat foods taste the same when, in fact, they taste
different. Imagine how devastating this error would be if, as a result, cats got a less delicious version of their Puma Gourmet food.
Choosing the Right Amount of Risk for Your Statistical Analysis
Now you can see why it's important to understand the difference between Type I and Type II errors as you conduct your own hypothesis tests. For instance, if you're working on a Six Sigma project that
could save your company millions, is the probability of committing one type of error more serious or costly than committing the other type?
This ties in to the statistical concepts of risk, significance, and power. Statisticians refer to
the probability of making a Type I error as "alpha," or the "significance level"
you set for your hypothesis test. A common default value for alpha is 0.05, which means you have a 5 percent chance of rejecting the null hypothesis when it is true. A lower alpha value gives you a
lower risk of incorrectly rejecting the null hypothesis. When it's really important, like in our cat food example, researchers will select an alpha value of 0.01, which would reduce the chances of a
Type I error to just 1 percent.
The concept of "power" is related to the probability of making a Type II error. Statisticians refer to this as "beta," and it's a value that statisticians typically cannot know. But researchers can
lessen the risk of Type II errors by making sure their tests have enough power -- in other words, by making sure their sample size is large enough to detect a difference when one truly exists.
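Here's a quick sketch of both ideas in Python; the taste scores and the effect size are invented for illustration, so focus on the behavior of alpha and power rather than the particular numbers:

    import numpy as np
    from scipy import stats
    from statsmodels.stats.power import TTestIndPower

    rng = np.random.default_rng(1)
    alpha = 0.05

    # Type I error in action: both formulations truly taste the same (same mean),
    # yet about 5 percent of taste tests will still reject the null by chance.
    false_alarms = 0
    for _ in range(10000):
        a = rng.normal(70, 10, size=20)   # taste scores, formulation A
        b = rng.normal(70, 10, size=20)   # taste scores, formulation B (identical)
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_alarms += 1
    print(false_alarms / 10000)           # close to alpha = 0.05

    # Power and Type II errors: how many cats per group are needed to detect a modest
    # difference (half a standard deviation) with 90 percent power?
    n = TTestIndPower().solve_power(effect_size=0.5, alpha=alpha, power=0.9)
    print(n)                              # roughly 85 cats per formulation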
I hope this explanation has helped you appreciate the importance of avoiding both types of errors. And if you're one of those Puma Gourmet taste researchers, I hope you'll take this to heart!
Comments for Understanding Type 1 and Type 2 Errors from the Feline Perspective: All Mistakes Are Not Equal!
Name: Jim Taylor, CRE, CPE, CPMMTime: Friday, January 25, 2013One of the best explanations I've seen. Now explain how to find out how big the sample is to have enough power.
Name: Carmen FrostTime: Wednesday, January 30, 2013Dear Stat Cat,
We all make mistakes... Thank goodness you know how to set your alpha levels high enough!
I hope in the near future you can write a stat article about mice. My favourite topic!
Have you ever had MICE-TEA?
Name: Cathy EdwardsTime: Wednesday, February 12, 2014That was awesome! I feel so much smarter! ;) | {"url":"http://blog.minitab.com/blog/the-stats-cat/understanding-type-1-and-type-2-errors-from-the-feline-perspective-all-mistakes-are-not-equal","timestamp":"2014-04-21T15:15:55Z","content_type":null,"content_length":"51088","record_id":"<urn:uuid:eede6c96-9fd9-4f7d-a116-efabea111f8f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magnetic field inside a cylinder which is rotating in a non-constant angular velocity
1. The problem statement, all variables and given/known data
A hollow cylinder of length L and radius R is made out of a non-conducting material, is charged with a constant surface charge density σ, and is rotating about its axis of symmetry with an angular velocity
w(t) = αt.
Q:What is the magnetic field inside the cylinder?
2. Relevant equations
Maxwell correction for Ampere law.
3. The attempt at a solution
The answer in the manual is B = μαtRσ
Where μ is of course μ_0 (the magnetic constant).
The manual's solution would make perfect sense if I knew that the circular electric field, which is induced by the fact that the magnetic field is changing in time, is itself constant in time,
because then I could say that the displacement current density is zero.
Q: How can I show that the circular electric field, induced by the time-varying magnetic field, does not change with time?
Thanks in advance | {"url":"http://www.physicsforums.com/showthread.php?t=507575","timestamp":"2014-04-19T12:43:27Z","content_type":null,"content_length":"32928","record_id":"<urn:uuid:b4af8f85-4081-47aa-a313-4ba8450453cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
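A sketch of one possible argument (this is not from the manual, and it assumes the cylinder is long enough that end effects can be neglected):
The rotating surface charge is a surface current density $K(t) = \sigma v = \sigma \omega(t) R = \sigma \alpha t R$, so treating the cylinder like a solenoid gives $B(t) = \mu_0 K(t) = \mu_0 \sigma \alpha t R$ inside, directed along the axis. To check the displacement-current term, apply Faraday's law on a circle of radius $r < R$: $E_\phi \cdot 2\pi r = -\frac{d}{dt}(B \pi r^2) = -\mu_0 \sigma \alpha R \pi r^2$, which gives $E_\phi = -\frac{1}{2}\mu_0 \sigma \alpha R r$. This induced field depends on $r$ but not on $t$, so $\partial E/\partial t = 0$, the displacement current density vanishes, and the quasi-static answer is self-consistent.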
MATH 410
The first exam is September 25th. Here is a Review Sheet.
Exam 1 with Solutions. Note: There were typos in 1e (should have had n to infinity) and 2c (the sum should have started at 2). These solutions still have the typos in the problem set, but the
solution is to the problem as it was stated on the board. | {"url":"http://www2.math.umd.edu/~bmw12/Math410.html","timestamp":"2014-04-21T01:59:33Z","content_type":null,"content_length":"8282","record_id":"<urn:uuid:f5cab3e1-87c4-4334-8564-964134a4c931>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability Word Problem homework help
November 20th 2012, 01:54 AM #1
Senior Member
Nov 2012
Probability Word Problem homework help
A group of students was recently polled about the technology they own.
69 own a cell phone
45 own a computer
23 own an ipod
4 do not own any of the above three items
34 own a cell phone but not a computer nor an ipod
6 own all three, a computer, an iPod, and a cell phone
8 own a cell phone and an ipod but not a computer
2 own only an ipod
a.) How many students were polled?
b.) What is the probability a randomly selected polled student owns a computer?
c.) What is the probability a computer owner also owns an iPod?
d.) What is the probability a randomly selected polled student owns an iPod or a cell phone?
e.) What is the probability a randomly selected polled student owns an ipod and a cell phone?
Re: Probability Word Problem homework help
A Venn diagram will be very helpful to solve this. Have you tried drawing one?
Re: Probability Word Problem homework help
I'll try
Re: Probability Word Problem homework help
Once you do, post your answer to part a) and I will tell you if it agrees with what I got.
Re: Probability Word Problem homework help
i got 89 for a.)
Re: Probability Word Problem homework help
I got 45/89 for b.)
Re: Probability Word Problem homework help
I get a different number for part a). Can you post your diagram?
edit: Did you leave out those who do not own any of the 3?
Last edited by MarkFL; November 20th 2012 at 02:48 AM.
Re: Probability Word Problem homework help
Yes so is 93?
Re: Probability Word Problem homework help
Yes, that's what I have.
Re: Probability Word Problem homework help
I got 13/93 for c
78/93 for d
8/93 for e
Is that what you got??
Last edited by asilvester635; November 20th 2012 at 03:14 AM.
Re: Probability Word Problem homework help
c) no
d) yes, but reduce the fraction.
e) no
Re: Probability Word Problem homework help
C) 7/93
E) How is it not 8/93? It asks us the probability that a student owns an iPod and a cell phone, and the example tells us that 8 owns a cellphone and an ipod??????
Re: Probability Word Problem homework help
c) No. You know there are 45 computer owners, so the denominator will be 45.
e) You are neglecting those who own all 3.
Re: Probability Word Problem homework help
Right, but there are 6 others that have all three. They qualify don't they?
Re: Probability Word Problem homework help