Tree Drawing
TreePlot lays out the vertices of a graph in a tree of successive layers, or a collection of trees. If the graph g is not a tree, TreePlot lays out its vertices on the basis of a spanning tree of
each component of the graph.
By default, TreePlot places each tree root at the top. TreePlot[g, pos] places the roots at position pos. Possible positions are: Top, Bottom, Left, Right, and Center.
Options for TreePlot
In addition to options for Graphics, the following options are accepted for TreePlot.
The option DirectedEdges specifies whether to draw edges as directed arrows. Possible values for this option are True or False. The default value for this option is False.
The option EdgeLabeling specifies whether and how to display labels given for the edges. Possible values for this option are True, False, or Automatic. The default value for this option is True,
which displays the supplied edge labels on the graph. With EdgeLabeling->Automatic, the labels are shown as tooltips.
The option EdgeRenderingFunction specifies graphical representation of the graph edges. Possible values for this option are Automatic, None, or a function that gives a proper combination of graphics
primitives and directives. With the default setting of Automatic, a dark red line is drawn for each edge. With EdgeRenderingFunction->None, edges are not drawn.
With EdgeRenderingFunction->g, each edge is rendered with the graphics primitives and directives given by the function g, which can take three or more arguments of the form g[{r_i, r_j}, {v_i, v_j}, lbl], where r_i and r_j are the coordinates
of the beginning and ending points of the edge, v_i and v_j are the beginning and ending vertices, and lbl is any label specified for the edge (or None). Explicit settings for EdgeRenderingFunction->g override
settings for EdgeLabeling and DirectedEdges.
The LayerSizeFunction option specifies the relative height to allow for each layer. By default the height is 1. Possible values include a function that gives real machine numbers.
The option MultiedgeStyle specifies whether to draw multiple edges between two vertices. Possible values for MultiedgeStyle are Automatic (the default), True, False, or a positive real number. With
the default setting MultiedgeStyle->Automatic, multiple edges are shown for a graph specified by a list of rules, but not shown if specified by an adjacency matrix. With MultiedgeStyle->s, the
multiedges are spread out to a scaled distance of s.
The option PackingMethod specifies the method used for packing disconnected components. Possible values for the option are Automatic (the default), "ClosestPacking", "ClosestPackingCenter", "Layered", "LayeredLeft", "LayeredTop", and "NestedGrid". With PackingMethod->
"ClosestPacking", components are packed as close together as possible using a polyomino method, starting from the top left. With PackingMethod->"ClosestPackingCenter", components are packed starting
from the center. With PackingMethod->"Layered", components are packed in layers starting from the top left. With PackingMethod->"LayeredLeft" or PackingMethod->"LayeredTop", components are packed in
layers starting from the left or top, respectively. With PackingMethod->"NestedGrid", components are arranged in a nested grid. The typical effective default setting is PackingMethod->"Layered", and
the packing starts with components of the largest bounding box area.
PlotRangePadding is a common option for graphics functions inherited by TreePlot.
PlotStyle is a common option for graphics functions inherited by TreePlot. The option PlotStyle specifies the style in which objects are drawn.
The option SelfLoopStyle specifies whether and how to draw loops for vertices that are linked to themselves. Possible values for the option are Automatic (the default), True, False, or a positive
real number. With SelfLoopStyle->Automatic, self-loops are shown if the graph is specified by a list of rules, but not if it is specified by an adjacency matrix. With SelfLoopStyle->s, the self-loops
are drawn with a diameter of s (relative to the average edge length).
The option VertexCoordinateRules specifies the coordinates of the vertices. Possible values are None or a list of coordinates. Coordinates specified by a list of rules are not supported by TreePlot.
The option VertexLabeling specifies whether to show vertex names as labels. Possible values for this option are True, False, Automatic (the default) and Tooltip. VertexLabeling->True shows the
labels. For graphs specified by an adjacency matrix, vertex labels are taken to be successive integers 1, 2, …, n, where n is the size of the matrix. For graphs specified by a list of rules, labels are the
expressions used in the rules. VertexLabeling->False displays each vertex as a point. VertexLabeling->Tooltip displays each vertex as a point, but gives its name in a tooltip. VertexLabeling->
Automatic displays each vertex as a point, giving its name in a tooltip if the number of vertices is not too large. You can also use Tooltip[v, lbl] anywhere in the list of rules to specify an
alternative tooltip lbl for a vertex v.
The option VertexRenderingFunction specifies graphical representation of the graph vertices. Possible values for this option are Automatic, None, or a function that gives a proper combination of
graphics primitives and directives. With the default setting of Automatic, vertices are displayed as points, with their names given in tooltips.
With VertexRenderingFunction->g, each vertex is rendered with the graphics primitives given by g[r, v], where r is the coordinate of the vertex and v is the label of the vertex. Explicit settings for
VertexRenderingFunction->g override settings for VertexLabeling.
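As a brief illustration of how several of these options combine, a call along the following lines can be used (a minimal sketch; the example graph is arbitrary and not taken from this tutorial):

```mathematica
(* a small tree given as a list of rules, with roots placed at the left *)
g = {1 -> 2, 1 -> 3, 2 -> 4, 2 -> 5, 3 -> 6};

TreePlot[g, Left,
 DirectedEdges -> True,       (* draw edges as arrows *)
 VertexLabeling -> True,      (* show vertex names instead of plain points *)
 LayerSizeFunction -> (1 &)]  (* uniform relative height for each layer *)
```

Since the graph is specified by a list of rules, the vertex labels shown are the expressions used in the rules, and any multiple edges or self-loops would be drawn by default.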
Example Gallery
k-ary tree
finding the mean
October 11th 2010, 05:25 PM #1
Suppose that the percentage of college students who engage in binge drinking, which is defined as having five drinks for male students and four drinks for female students at one "drinking occasion"
during the previous two weeks, is approximately 40%. Let X equal the number of students in a random sample of size n=12 who binge drink.
So it asks me to find the mean, the variance, and the standard deviation; I'm just not sure where to start. If I can find the p.m.f. I can do the rest, but I tried 4/10 (40%) and that didn't work out, so I
need help now.
for p = 0.4 and n = 12:
Mean = np
Var = np(1 - p)
Std = sqrt(Var)
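The reply above is the standard result for a binomial random variable; a quick sanity check in Python (not part of the original thread):

```python
import math

n, p = 12, 0.4

mean = n * p             # np        = 4.8
var = n * p * (1 - p)    # np(1 - p) = 2.88
std = math.sqrt(var)     # sqrt(Var) ≈ 1.697

print(mean, var, round(std, 3))
```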
October 11th 2010, 05:30 PM #2
I got my hands on a session pass at JavaOne's "CommunityOne" event, and decided to attend two sessions on topics for which I am a newbie: JRuby and Matisse - both of which were under the "NetBeans Track".
Both of the sessions were essentially a demo, which consisted of someone very well versed with both the technology and Netbeans quickly clicking and typing to produce some interesting application in
an unreasonably short amount of time. Wow! What really gave me goose bumps was when the audience applauded the results. Not goose bumps because I was impressed, but the kind of goose bumps you get
when you are embarrassed for someone else.
For any GUI-based development, whether it's portlets, EJBs, SWT applications, or whatever, there will come a point very quickly when you need to do more than the "hello world" component. At that
point, you will need to have an intimate knowledge of the underlying technology. Now, the problem is that the GUI lets you get from 0 to 5 without any real knowledge. To do anything more, you need
to go back and learn what the GUI did to get you from 0 to 5, which is more difficult than just going from 0 to 5 the "hard" way, by actually learning.
Rapid GUI-based development tools make good demos. They impress product managers and sales folks. They even sell products. But they are not solutions for real developers.
Phys 401 - Fall 2011
University of Maryland - Physics Department
Physics 401 - Quantum Physics - Fall 2011
Prof. Hall
Monday/Wednesday/Friday 10:00-11:00 am
Wednesday 11:00-11:50 am
• Course info:
• Video:
• Applets:
• Final Exam :
□ The final exam will be held in Chemistry 0115 from 8:00 am to 10:00 am on Wednesday, December 21st.
□ The exam will consist of quantitative and qualitative questions, similar to the homework and the two mid-term exams.
□ You will not be allowed to use any books, notes, study guides, cheat sheets, or calculators. Your exam will include the following three formula sheets:
□ Exam Review (Monday December 12th): Tom Langford will go over the final exam from last year, posted here.
□ Suggestions for studying for the exam:
☆ Review the homework and the homework solutions, posted below.
☆ Review the final exam posted above.
☆ Review the formula sheets posted above. Make sure that you understand the meaning of all the equations which appear there.
☆ Review the suggested reading in Griffiths and/or Liboff. Note: the suggested reading assigned at the beginning of the semester did not include the hydrogen atom, but this topic will be
included on the final exam. This material is covered in Griffiths Section 4.2 and Liboff Sections 10.5 and 10.6.
☆ Review your class notes and my lecture notes (posted below).
• Homework:
• Lecture Notes:
Question in group theory.
December 3rd 2008, 10:44 PM #1
Hi all,
A student I'm helping gave me a question.
Q: Give an example of a group G with a normal subgroup N such that N and G/N are abelian, but G is not abelian.
I'm fairly sure there is a dihedral group with this property but i don't have time to find an explicit example.
Can someone help please?
Side note: If we weaken the conditions slightly and replace abelian by solvable no example exists.
every dihedral group $D_n, \ n \geq 3,$ satisfies the condition, because $D_n=<a,b: \ a^2=b^n=1, \ ab=b^{-1}a>,$ and so you just choose $N=<b>.$
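The claim can also be checked by brute force for the smallest case, since $D_3 \cong S_3$ (a quick computational sketch, not part of the original thread):

```python
from itertools import permutations

# D_3 (the symmetries of a triangle) is isomorphic to S_3:
# all permutations of {0, 1, 2}, composed right-to-left.
G = list(permutations(range(3)))

def mul(a, b):  # (a ∘ b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inv(a):     # inverse permutation: inv(a)[k] = i with a[i] = k
    return tuple(sorted(range(3), key=lambda i: a[i]))

# N = <b> is the rotation subgroup: the identity and the two 3-cycles.
N = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

abelian = lambda H: all(mul(x, y) == mul(y, x) for x in H for y in H)
normal = all(mul(mul(g, n), inv(g)) in N for g in G for n in N)

print(abelian(G))          # False: G itself is not abelian
print(abelian(N), normal)  # True True: N is an abelian normal subgroup
# G/N has order |G|/|N| = 2, and every group of order 2 is abelian.
```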
December 3rd 2008, 11:05 PM #2
Was Archimedes really the greatest scientist ever? Let us have a look at his potential competitors. Some say Gauss was not only the most influential mathematician since antiquity but also of all
time. Of course it is hard to compare ancient and more recent breakthroughs, but even if the fans of Gauss were right, Archimedes' work would have more total impact as he also laid the foundations
of physics and mathematical engineering. Euclid's achievements also do not quite match those of Archimedes. Although his book on geometry has been called the most influential scientific book ever, it
is a compendium of results by numerous researchers, not just one. What about Galileo, Newton, and Einstein, frequently called the three greatest physicists since antiquity? Galileo is known as the
"father of modern physics", Newton brought the field to a culmination point through his Principia Mathematica (often called the most influential book in the history of physics), and Einstein provided
the next such peak in form of his Theory of General Relativity (the "greatest scientific discovery ever", according to Dirac; Einstein also was voted greatest physicist ever in a poll for Physics
World magazine; source: BBC News, 29 Nov 1999). But it was Archimedes who provided the basic tools that made the later discoveries possible. Thus his work had more leverage and was more
all-encompassing, combining major theoretical and practical advances in a way unmatched by his successors. His achievements also compare favorably to those of other great pioneers such as Leibniz,
who laid the foundations of computer science and (with Newton) extended Archimedes' and Madhava of Sangamagrama's (14th century) work on infinitesimals and calculus.
While the work of Archimedes was essential for all later mathematicians and physicists, it was less relevant to biologists such as Mendel (father of genetics), Darwin & Wallace (evolution theory),
and especially Pasteur, whose work on the germ theory of disease (with Koch) has earned him the title "greatest benefactor of mankind" in the eyes of some commentators. However, biology and other
relatively young, "soft" scientific disciplines do not yet have the same general standing as the hard sciences, notably math and physics. History will show whether they will eventually receive the
same respect.
Without diminishing the enormous contributions of the science heroes mentioned above, it is fair to say that those of Archimedes embody an even greater conceptual jump size, given his lower starting
point defined by the more limited prior knowledge of his era. Of course the work of early pioneers tends to have more time to unfold its impact; Archimedes was lucky to live at a time when a single
person could still make world-changing discoveries in quite diverse areas, with little competition by peers, as there weren't many scientists and inventors back then. But that is also the very reason
why Archimedes was so unique and outstanding.
Formal Science was born in Ancient Greece, and Archimedes was its prophet. Give me a lever and a place to stand on, he said, and I can move the earth. And he did - today we still feel his impact
through a lever spanning 2200+ years of Archimedes-inspired science.
Jürgen Schmidhuber, August 2006
Shape based assignment tests suggest transgressive phenotypes in natural sculpin hybrids (Teleostei, Scorpaeniformes, Cottidae)
Hybridization receives attention because of the potential role that it may play in generating evolutionary novelty. An explanation for the emergence of novel phenotypes is given by transgressive
segregation, which, if frequent, would imply an important evolutionary role for hybridization. This process is still rarely studied in natural populations as samples of recent hybrids and their
parental populations are needed. Further, the detection of transgressive segregation requires phenotypes that can be easily quantified and analysed. We analyse variability in body shape of divergent
populations of European sculpins (Cottus gobio complex) as well as natural hybrids among them.
A distance-based method is developed to assign unknown specimens to known groups based on morphometric data. Apparently, body shape represents a highly informative set of characters that parallels
the discriminatory power of microsatellite markers in our study system. Populations of sculpins are distinct and "unknown" specimens can be correctly assigned to their source population based on body
shape. Recent hybrids are intermediate along the axes separating their parental groups but display additional differentiation that is unique and coupled with the hybrid genetic background.
There is a specific hybrid shape component in natural sculpin hybrids that can be best explained by transgressive segregation. This inference of how hybrids differ from their ancestors provides basic
information for future evolutionary studies. Furthermore, our approach may serve to assign candidate specimens to their source populations based on morphometric data and help in the interpretation of
population differentiation.
Although hybridization has long been considered important in the diversification of plants, zoologists often considered it detrimental and thus unimportant [1]. The debate over the relative importance
of hybridization has received recent attention because the advance of molecular techniques has resulted in a surge of data suggesting that hybridization is taking place rather frequently in the
animal kingdom as well. This in turn has revived questions surrounding the potential role that hybridization may play in the penetration of evolutionary novelty in animals [2,3]. A simple explanation
for novel phenotypes of hybrids is available through the process of transgressive segregation. Briefly, transgressive segregation is a phenomenon specific to segregating hybrid generations and refers
to individuals that exceed parental phenotypic values in any direction. This could be caused by heterosis, which is most pronounced in first generation hybrids, or alternatively by the complementary
action of parental alleles dispersed among divergent parental lineages. If this is frequent, then an important evolutionary role for hybridization is more easily explained [4].
In fact, there is abundant evidence that transgressive segregation is common in both plants and animals and that the genetic architecture for it is commonplace rather than exceptional [5]. Given
these findings, it is astonishing that relatively few studies have evaluated transgressive segregation in natural systems [4]. On the one hand this results from the paucity of study systems where
sufficiently large samples are readily available and from the simple fact that quantitative genetics experiments are usually conducted in controlled environments in order to separate environmental
from genetic effects. If one searches for transgressive segregation one would ideally study traits, which are determined by several genes and that display a hidden divergence of the underlying
genetic network [5]. Finally, one has to study direct hybrids and not lineages of hybrid origin because otherwise secondary evolutionary processes will have reshaped any hybrid lineage and
secondarily modified characters cannot be easily distinguished from transgressive traits. Despite these difficulties many evolutionary studies will ultimately have to incorporate natural populations
in real ecosystems if the effects and outcome of hybridization are to be analysed.
As an example, hybrid zones among divergent lineages are viewed as natural laboratories and offer interesting study systems [6]. In these, the fitness of hybrids is a key component to understand the
dynamics of the hybrid zone as a whole [7]. Transgressive segregation may affect hybrid fitness as it is a mechanism that would make hybrids different and thus produces the raw material upon which
selection can act.
We have recently identified hybrid zones of European sculpins belonging to the Cottus gobio complex (Scorpaeniformes, Cottidae) that fulfill the above requirements. Sculpins are small, benthic
freshwater fishes that occur in streams throughout Europe, with closely related species distributed throughout the northern hemisphere. Previous studies have revealed a high cryptic diversity of this
group across the entire distribution range [8]. Our focus area is the River Rhine System, where divergent lineages of sculpins are known to occur in parapatry and have come into secondary contact [8,
9]. Small tributaries to the Lower Rhine drainage are inhabited by isolated populations of 'stream' sculpins, a lineage endemic to the River Rhine [8,10]. These stream populations correspond to
Cottus rhenanus [11]. Intriguingly, a new 'invasive' lineage has recently appeared within the main channels of larger rivers that were previously free of Cottus [10]. The invasive sculpins
represent a different species, Cottus perifretum [11] that differs from sculpins in streams of the Rhine area (C. rhenanus) in body shape and in that its lateral body is largely covered by modified
scales vs. an almost complete absence of such modified scales [10]. Invasive sculpins come into secondary contact with populations of stream sculpins where small tributaries disembogue into the main
channel of larger rivers. In these areas individuals belonging to both parental populations as well as hybrids among them occur syntopically. With respect to transgressive segregation the above
prerequisites are fulfilled. First, sufficiently divergent lineages come into contact and produce recent hybrids. Secondly, these hybrids can be readily identified using genetic data. Finally,
variation in body shape provides a well-suited character complex since sophisticated methods are available to study shape [12]. Furthermore, previous studies on body shape show that this character
complex is usually determined by multiple genes [13-15]. Below we combine genetic and phenotypic approaches to study body shape in sculpin hybrid zones and present data suggesting that transgressive
body shape phenotypes occur in natural sculpin hybrids.
In order to study variation in shape, parental groups and their hybrids were classified to establish how their phenotypes and genotypes were related. Since shape was of key interest, we relied on
model based population genetic approaches [16] to independently cluster and assign specimen to their populations of origin or to determine their hybrid status. Such model based clustering is not
possible in the analysis of shape because a powerful "theory of population shape" comparable to population genetic theory is lacking that would allow to independently infer population affinity.
However, simple assignment methods can contribute much in the sense of the first assignment approaches in genetics that were employed for much more basic questions [17], namely the problem that
distances alone are biologically and conceptually hard to interpret. As an alternative to an abstract distance one may ask whether a given character is sufficiently informative to be diagnostic at
the individual, population or at higher levels to help in assessing the significance of results. This general problem also applies to quantitative morphometric studies, especially when multivariate
analyses are used. We have developed a distance based assignment approach with statistical tests that parallel population genetic approaches [18]. The method is not intended to give a measure of the
absolute distance among groups but may help to interpret the differences among groups. One purpose of this paper is to introduce shape based assignment as a multivariate measure of distinctness and
to employ this approach to study the relationships of genotypes and phenotypes at natural hybrid zones.
Morphometric differentiation
One population of invasive sculpins (C. perifretum) and the two populations of stream sculpins (C. rhenanus) each confined to a separate stream were sampled and independently confirmed with genotypic
data (see methods; see 1). These served as basic groups for the following analyses. All of them form distinct clusters in a canonical variates analysis (CVA) along the first two axes, which display
the greatest separation of the groups relative to within group variance (Figure 1). Both populations of stream sculpins separate from the invasive sculpins along the first CV axis (Lambda = 0.0678
chisq. = 777.6114 df = 72 p < 0.001). The stream Naaf and the stream Broel populations are further separated along the second CV axis (Lambda = 0.2792 chisq. = 368.6997 df = 46 p < 0.001).
Table 1. Assignment success under alternative CVA models
Figure 1. Differentiation of ancestral populations and hybrid intermediacy. Invasive sculpins separate from all stream sculpins along the first CV axis. Sculpin populations from Stream Broel and
Stream Naaf separate along the second CV axis. BI hybrids form an intermediate group between their parental populations. Distance based assignment based on these two axes correctly identifies pure
candidates while a majority of BI hybrids are wrongly assigned to one of the parental groups with which they overlap.
A fourth group comprised recent hybrids among invasive and stream Broel sculpins that were sampled from natural hybrid zones and identified based on genetic data (BI hybrids). When these are
introduced into the CVA as 'unknowns', where the CVA model does not consider them as a separate group but determines their scores along the CV axes separating the parental groups, BI hybrids overlap
with their ancestral populations and take somewhat intermediate positions along the first CV axis (Figure 1). If in contrast BI hybrids are used as a predefined group in the CVA, hybrids are further
characterized by a third CV axis (Lambda = 0.7369 chisq. = 88.2487 df = 22 p < 0.001). They separate partially from invasive sculpins and stream sculpins along the third CV axis (Figure 2) and take,
on average, more extreme phenotypic values than both parental populations along this axis.
Figure 2. Extreme phenotypic values indicate a hybrid shape component. BI Hybrids are, on average, not intermediate along the third CV axis and may occupy extreme values relative to their parental
populations. The parental populations as well as stream Naaf sculpins display little differentiation along the third CV axis. An inclusion of this hybrid specific shape component in distance based
assignment increases the power to correctly identify hybrids more than two fold.
The differentiation in shape as captured by CV axes can be visualized as displacement vector for each landmark on a deformation grid relative to a reference (Figure 3). Invasive sculpins differ from
both populations of stream sculpins in that they have a larger head and anterior trunk as well as a shorter tail (Figure 3; first CV axis). The two populations of stream sculpins differ most in their
head length and the positions of their anal and dorsal fin landmarks (Figure 3; second CV axis). While the deformation implied by the first two axes can be expressed in terms of inflation or
compression of body parts, the hybrid specific shape change appears to be less balanced although this is hard to objectify (Figure 3; third CV axis).
Figure 3. Landmark configuration and displacement vectors that distinguish groups of sculpins. Fourteen Landmarks were chosen to analyse variability in sculpin body shape (top). CVA was used to
identify axes along which different groups can be discriminated based on the relative position of landmarks to a reference. The shape change captured by these axes can be visualized as relative
displacement vectors for each landmark on a deformation grid. CV axis 1 separates invasive sculpins from all stream sculpins and axis 2 further separates two populations of stream sculpins. CV axis 3
captures the shape component that is unique to recent hybrids. While the deformation along CV axes 1 and 2 can be expressed in terms of inflation or compression of body parts, the hybrid specific
shape change appears to be less balanced.
In order to evaluate whether the observed differentiation was biased due to imbalanced sampling, CVA scores were regressed on centroid size and sex for all specimens. A linear regression reveals that
correlation coefficients were low at most for all axes (CV axis 1 vs. size, r^2 = 0.03; CV axis 1 vs. sex, r^2 < 0.01; CV axis 2 vs. size, r^2 < 0.01; CV axis 2 vs. sex, r^2 < 0.01; CV axis 3 vs. size, r^2 =
0.11; CV axis 3 vs. sex, r^2 = 0.019). Therefore, neither size nor gender can explain the considerable amount of variance of the CV axes that distinguish the groups.
Assignment and cross validation
To evaluate the utility of the derived axes to discriminate among groups and to determine a given specimens group affinity, distance based assignment tests were performed. In a first approach the CVA
model was based on the differentiation as observed among the pure populations but hybrids were not included as a known group. The clear differentiation of pure populations facilitates that single
"unknown" specimens removed from the complete dataset can be correctly assigned to their population of origin with high success (Table 1), and the resubstitution rate of correct assignment (the
assignment of the known specimens) is high also, although this resubstitution rate is known to be biased upward. Approx. 92 % of pure sculpins were assigned correctly. This number is slightly lower
than the expected 95% due to false positive assignments because of a slight overlap of parental phenotypic values. The number of outliers corresponds well to the amount expected from the significance
criterion. In this approach hybrids were used as "unknowns" and could only be identified as outliers relative to the pure sculpins (non significant assignment test). Only 37.1% of the BI hybrids were
correctly classified while the majority was misassigned to one of the pure populations.
In an alternative approach assignment was based on a CVA model that includes the differentiation among pure populations but also takes into account the shape component specific to hybrids as captured
by the third CV axis (Figures 2 and 3). The assignment success of pure populations was decreased to 85.4–89.5% because of the partial overlap with the group of hybrids. In sharp contrast to the above,
83.9% of the BI hybrids were now classified correctly with only relatively few false positive assignments to the parental groups (Table 1).
A jackknife test of assignment was performed for both assignment approaches to evaluate the robustness of the CV axes and assignment model (Table 2). The cross validation procedure revealed a very
consistent signal, inherent to even small partitions of the whole dataset. Roughly half of the specimens can be removed from the data without much loss of information for the CV axes. Even when 80%
of the whole dataset are left out in the CVA procedure (see methods) the general outcome remains unchanged although the number of correct assignments decreases. As evident from the individual
assignment tests (Table 1) the overall assignment success is lower when the more complex model including hybrids is employed.
Table 2. Jackknife estimates of assignment performance.
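A jackknife of assignment performance of the kind described above can be sketched as follows. This is again a hypothetical Python illustration on synthetic data, not the study's code — the real procedure recomputes the CVA on each subsample, whereas this sketch only recomputes nearest-centroid assignment after leaving out a fraction of the specimens:

```python
import numpy as np

rng = np.random.default_rng(1)
pop_a = rng.normal(0.0, 0.5, size=(30, 4))
pop_b = rng.normal(2.0, 0.5, size=(30, 4))
X = np.vstack([pop_a, pop_b])
y = np.array([0] * 30 + [1] * 30)

def jackknife_rate(frac_left_out, reps=50):
    """Delete-d jackknife: refit group centroids on a random subsample
    and assign the held-out specimens to the nearest centroid."""
    rates = []
    for _ in range(reps):
        held = rng.random(len(X)) < frac_left_out
        # skip draws that leave a group with no training specimens
        if not held.any() or any(((~held) & (y == k)).sum() == 0 for k in (0, 1)):
            continue
        centroids = [X[(~held) & (y == k)].mean(axis=0) for k in (0, 1)]
        d = [np.linalg.norm(X[held] - c, axis=1) for c in centroids]
        pred = np.argmin(d, axis=0)
        rates.append(float((pred == y[held]).mean()))
    return float(np.mean(rates))

for frac in (0.2, 0.5, 0.8):
    print(frac, round(jackknife_rate(frac), 2))
```

For well-separated groups the rate stays high even when most of the data are left out, which mirrors the robustness reported in Table 2.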
Transgressive phenotypes in natural hybrids
We were able to use a microsatellite dataset with surpassing information content to classify sculpins into distinct lineages (see methods for details). With the genotypic data it is possible to
unambiguously identify invasive sculpins (Cottus perifretum), which are genetically distinct from populations of stream sculpins (Cottus rhenanus). The latter are further represented here by two
separate populations from the streams Naaf and Broel. In agreement with the genetic data, the CVA based on morphometric data recovers significant differentiation that separates all studied
populations of sculpins with a higher amount of variance between species and a lesser amount between two conspecific samples of stream sculpins. Cross validation confirms that these results are based
on a signal inherent to the whole dataset as a removal of a large fraction of the specimens will not notably alter the axes as determined in the CVA (Table 2). This differentiation is sufficient to
assign unknowns to either one of the known groups with high confidence. Thus the groups are distinctly different in their multivariate signal even though no single diagnostic morphometric character
can be found.
The genotypic data served to identify a set of hybrids between the invasive and stream Broel populations (BI hybrids). In contrast to the ancestral populations, hybrids cannot be distinguished
completely from all of the ancestral phenotypes (Figure 1) and are more or less intermediate in the characters that discriminate their parental populations. This is expected for a character like body
shape that is most likely determined by multiple genes. Yet, there are properties of the hybrids that could not be attributed to hybrid intermediacy. The group of hybrids displays a unique shape
component that distinguishes them from their parental populations (Figure 2). Altogether this is not a strong effect; thus, additional evidence to evaluate the biological significance of this result
is desirable. To address sampling artifacts, we have tested for possible effects of typical confounding variables in morphometric studies. Regressions show that the amount of total variance of
individual CV axis scores that can be explained by size or sex is small. Therefore the influence of allometry or sexual dimorphism is most likely not important for the differentiation we observe.
Despite the large overlap of the BI hybrids with the parental populations, the hybrid shape component constitutes a considerable amount of variation, which results in an increased overall assignment
success when hybrid shape is considered specifically (Table 1). Moreover, cross validation has shown that all axes are robust to removal of specimens, which suggests that the signal is inherent to a
majority of the recent hybrids (Table 2).
Two alternative explanations remain. One assumes an involvement of genetic factors that interact to produce novel phenotypes; the second, in contrast, proposes that the genetic background is not
important. According to the latter hypothesis extreme hybrid phenotypes should be determined by the environment. Our genetic data demonstrate that hybrids occur syntopically with the parental
populations within the hybrid zones (Table 3). This excludes the possibility that hybrids would be exclusively subjected to environmental factors that could induce the observed phenotypes. Phenotypic
plasticity cannot be fully excluded in heterogeneous environments but this process alone is not likely to explain our results. After possible confounding variables were found to play a minor role, it
seems reasonable to assume the differentiation is real. In contrast to the above explanations, differentiation due to the underlying genetic background is strongly supported. This implies that the
specific hybrid shape effect is coupled with the hybrid genetic background.
Table 3. Sampling sites and number of specimens in the morphometric study.
Although there is considerable overlap of parental groups and BI hybrids along the CV axis that captures the hybrid shape component, hybrids are on average more extreme than both parental
populations. Given that genetic data verify a recent hybrid status of the BI hybrids, these results can be best explained by transgressive segregation in shape traits. This is also in agreement with
other studies suggesting that transgressive segregation occurs in morphometric traits [4,5]. However, to assess the evolutionary implications of the hybrid phenotypes will require functional studies
and measurements of fitness to complement the mere observation of possible transgressive effects.
Information content of shape markers
A drawback as compared to population genetic model-based assignment is that our shape distance-based method needs a priori defined groups as input. If such groups can be provided, hypotheses
regarding their differentiation and distinctness can be tested. For example, an attractive application of genotype based assignment procedures is to detect outliers that belong to source populations
that were not sampled [19,16]. Unfortunately this is not straightforward in our implementation of phenotype-based assignment. If a candidate does not belong to one of the expected groups, the
exclusion of source populations is not predictable because assignment based on discriminant axes is conditioned on the specific case being studied. We find this for the hybrids between stream Broel and
invasive sculpins if they are used as unknowns and not as a separate group in the CVA. Hybrids take more or less intermediate phenotypic values but largely overlap with the parental groups (Figure 1).
Similar results were already obtained by Strauss [20] in a study of phenotypic variation in hybridising North American sculpins. However, we have a sufficiently large sample of verified hybrids that
could be used as an extra group in the CVA. Only this approach revealed significant differentiation along an additional axis that is specific to the hybrids. The differentiation specific to hybrids adds
information to the group assignment. As a result, the assignment success of hybrid specimens was raised notably despite the tremendous overlap of the hybrids with both parental populations (Table 1).
The assignment procedure based on morphometric data as implemented here allows sculpins to be unambiguously assigned to their population of origin. Morphometric differentiation of European sculpins was
studied before [21] using a set of landmarks that was largely identical to the ones used here (note that these authors [21] did not study the same evolutionary lineages, see [8,11]). Groups of
sculpins as defined by different tributaries to the Rhine were found to differ significantly in shape but formed largely overlapping clusters. Our system differs in that we have not compared
assemblages of populations but separate more or less panmictic populations as defined by a currently shared gene pool. These form distinct clusters in the CVA (Figure 1). Such differentiation would
have escaped the approach of Riffel and Schreiber [21] as they pooled specimens from different subpopulations for their analysis. This by no means negates their results but demonstrates an even
higher information content of shape data, namely the power to discriminate separate populations. Although a comparison between genetic markers and body shape may seem arbitrary, the resolution relative to
population genetic markers goes beyond recognition of ancient lineages as resolved by mitochondrial haplotypes [8] and species [11] and parallels that of microsatellites in that genetically well
separated populations are also distinctly differentiated in body shape. Apparently, shape represents a character with fast evolutionary divergence that occurs and becomes fixed even among closely
related populations. Thus, in our example morphometric data resolve to the lowest possible level above the individual.
Implementation of shape based assignment
Landmark-based geometric morphometric methods were used to capture information about shape, by obtaining the x and y coordinates of homologous landmarks in the configuration shown in Figure 3.
Differences among specimens in the sets of coordinates due to scaling, rotation and translation were removed using the typical geometric morphometric approach [22-25,12] of placing the specimens in
Partial Procrustes Superimposition [24-26] on the iteratively estimated mean reference form, using the Generalized Procrustes Analysis procedure. This procedure places the shapes of specimens in a
linear tangent space to Kendall's shape space [27], allowing the use of linear multivariate statistical methods [23,28,24,12]. After superimposition, the data were converted from Cartesian coordinate
form into components along the eigenvectors of the bending energy matrix (Principal Warp axes) of the thin-plate spline model of deformations of the reference [29,22] and along the uniform axes of
deformation due to shear and dilation [30]. Use of these linearly transformed variables (referred to as Partial Warp plus Uniform Component scores), produces a convenient set of variables (using a
basis set called the Principal Warp axes) for use with standard multivariate statistical methods, since the Partial Warp and Uniform component scores have the same number of variables per specimen as
degrees of freedom. No information is lost during this linear transformation of variables.
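The superimposition step can be illustrated with a short sketch (in Python; this is not the IMP implementation, and the function name is ours). It performs a partial Procrustes fit of one landmark configuration onto a fixed reference, removing translation by centering, scale by dividing out centroid size, and rotation by the closed-form optimal 2D angle; in the full Generalized Procrustes Analysis the reference itself would be re-estimated iteratively over all specimens.

```python
import math

def procrustes_align(ref, shape):
    """Partial Procrustes fit of `shape` onto `ref` (lists of (x, y) landmarks).
    Returns (fitted_shape, normalized_ref), both centered and at unit centroid size."""
    def center(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        return [(x - cx, y - cy) for x, y in pts]

    def unit_size(pts):
        # Centroid size: square root of summed squared distances to the centroid.
        s = math.sqrt(sum(x * x + y * y for x, y in pts))
        return [(x / s, y / s) for x, y in pts]

    a = unit_size(center(ref))
    b = unit_size(center(shape))
    # Optimal rotation angle minimizing the summed squared landmark distances.
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(a, b))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    t = math.atan2(num, den)
    fitted = [(bx * math.cos(t) - by * math.sin(t),
               bx * math.sin(t) + by * math.cos(t)) for bx, by in b]
    return fitted, a
```

After this step, any rotated, translated and rescaled copy of a configuration maps back onto the reference, so only shape differences remain.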
A canonical variates analysis (CVA) is then used to determine the set of axes which best discriminate among pre-defined groups of specimens, by determining the linear combinations of the original
variables which display the greatest variance between groups relative to the variance within the groups [31,12]. Fisher's linear discriminant function was used, which makes no particular assumption
about the parametric form of the distribution of the data used, but simply determines the linear combination of the original variables that results in the greatest ratio of the between groups sum of
squares to the within groups sum of squares [32]. A simple distance-based approach is then used to determine which group each specimen belongs to, based on the canonical variate scores. The predicted
group membership of each specimen based on the CVA scores is determined by assigning each specimen to the group whose mean is closest (measured as the square root of summed squared distances along
the CV axes, see [32]) to the specimen. To obtain a measure of the quality of the assignment of each specimen to a group, an assignment test was developed. The CVA axes can always be used to assign
any given specimen to some group, since a minimum distance can always be found, but a measure of whether the quality of the assignment is similar to that expected for specimens known to be in that
particular group is desirable. The assignment test presented here is modeled on the genetic distance-based assignment test [33,18]. The distribution of distances produced by a Monte Carlo simulation
(see discussion in [34]) is used to determine if the observed distance of a given specimen is consistent with the null model of random variation around the mean of the group to which the specimen is
assigned. The distance from a specimen to a group mean can then be assigned a p-score which describes how likely it would be for a specimen from the original population to be as far from the mean
specimen as the observed specimen is (under the null model used in the Monte-Carlo simulation). If the p score is smaller than 5%, then we can assert that there is a less than 5% chance that random
variation could have produced a distance as large as that of the particular specimen from the group mean, and hence that the assignment of that specimen to the group is in doubt.
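As a sketch, the distance rule described above reduces to a nearest-group-mean classifier in CV-score space. The function name and the example group means below are hypothetical, for illustration only:

```python
import math

def assign_nearest_mean(scores, group_means):
    """Assign a specimen's CV scores to the group whose mean is closest,
    measured as the square root of summed squared distances along the CV axes."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    best = min(group_means, key=lambda g: dist(scores, group_means[g]))
    return best, dist(scores, group_means[best])

# Hypothetical two-axis group means, not taken from the study's data:
means = {"Broel": (-2.0, 0.5), "Naaf": (-1.5, -1.0),
         "Invasive": (3.0, 0.0), "BI hybrid": (0.5, 2.0)}
group, d = assign_nearest_mean((0.4, 1.6), means)
```

The returned distance `d` is the quantity whose plausibility the Monte Carlo assignment test below evaluates.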
It should be noted that in a study with many specimens, a number of them will have low p-scores by chance, and so to assess the validity of the assignments of the set as a whole, the researcher
should assess the number of specimens expected to have p values less than 5%. It will then be possible to determine if the observed number of low p values exceeds that expected by chance. The model
used in the Monte Carlo simulation of the distribution of distances of specimens within a group around the group mean (the average specimen within the group) is based on a normal model of the
distribution of the CVA scores of each group about the mean of that group. For a given group, it appears probable that the CVA scores along each CVA axis for the specimens within the group are
correlated, thus there exists within each group a covariance structure to the CVA scores of specimens within the group. In carrying out the Monte-Carlo simulation of the distribution of specimens
within the group about the mean, it is necessary to preserve this covariance structure in order to produce a valid model of the distribution. An eigenvalue decomposition of the variance-covariance
matrix of within group CVA scores is used to find the principal component axes of the within group variance. This yields the same number of variables as the CVA scores, but now with uncorrelated axes
(the eigenvectors), each of which has a variance given by the corresponding eigenvalue. The model used for the distribution of the CVA scores of the specimens assumes the group has an independent
random normal distribution along each of these eigenvectors (principal component axes), with amplitudes given by the square roots of the eigenvalues (the eigenvalues are the variances of the group
along the corresponding eigenvectors), so that the square root of the eigenvalue is the standard deviation of the population along that eigenvector.
An independent, normal distribution with known amplitude (the square root of the eigenvalue) is assumed along each eigenvector. This allows generation of a Monte Carlo population of specimens,
assuming the independent normal distribution along these principal component axes. Each simulated specimen is generated using a random number generator to compute locations along the eigenvectors,
which are then translated back into CVA axes scores (a simple linear translation of basis vectors). The Monte Carlo generated CVA axis scores will have the same mean and variance-covariance structure
as the original population did. Using an independent normal model of the CVA axis scores in the Monte Carlo simulation (by using the random number generator to directly generate the
CVA scores) would not preserve any covariation structure in the data. If there is no significant covariation structure to the CVA scores, the use of the PCA axes is not necessary, but will not introduce any error either.
The distance from the group mean is then calculated for each simulated specimen. The Monte Carlo distribution of distances of specimens about the group mean under the null model of random variation
can then be used as an estimate of the distribution of distances about the group mean in the original data. Based on the estimated distributions of distances about the group means expected for
specimens in the group, p-values may be determined for the assignment of specimens, with either initially known or unknown group affinities. Based on an alpha level of p = 0.05, all assignments of
specimens to groups can be scored as either statistically significant or not.
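For concreteness, the null model can be sketched for the two-axis case, where the eigendecomposition of the 2×2 within-group covariance matrix has a closed form (the function names are ours, not part of IMP). Simulated specimens receive independent normal amplitudes along the within-group principal axes, are rotated back into CV-score coordinates, and the p-value is the fraction of simulated distances at least as large as the observed one:

```python
import math
import random

def eig2x2_sym(a, b, c):
    """Eigenvalues and orthonormal eigenvectors of the symmetric matrix [[a, b], [b, c]]."""
    half_tr = (a + c) / 2.0
    disc = math.sqrt(max(half_tr ** 2 - (a * c - b * b), 0.0))
    l1, l2 = half_tr + disc, half_tr - disc
    if abs(b) > 1e-12:
        vx, vy = l1 - c, b
    else:
        vx, vy = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(vx, vy)
    v1 = (vx / n, vy / n)
    v2 = (-v1[1], v1[0])  # orthogonal complement
    return (l1, l2), (v1, v2)

def mc_assignment_p(observed_dist, cov, n_sim=10000, rng=None):
    """P-value for an observed distance to a group mean: simulate specimens with
    independent normal amplitudes along the within-group PC axes (std = sqrt of
    eigenvalue), preserving the group's covariance structure. Two CV axes assumed."""
    rng = rng or random.Random(0)
    (l1, l2), (v1, v2) = eig2x2_sym(cov[0][0], cov[0][1], cov[1][1])
    s1, s2 = math.sqrt(max(l1, 0.0)), math.sqrt(max(l2, 0.0))
    count = 0
    for _ in range(n_sim):
        z1, z2 = rng.gauss(0, s1), rng.gauss(0, s2)
        # Translate PC-axis amplitudes back into CV-score coordinates.
        x = z1 * v1[0] + z2 * v2[0]
        y = z1 * v1[1] + z2 * v2[1]
        if math.hypot(x, y) >= observed_dist:
            count += 1
    return count / n_sim
```

With more CV axes, the same logic applies with a general symmetric eigendecomposition in place of the 2×2 closed form.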
As a test of the performance of the assignment test, a cross validation or jackknife procedure [34,35] was implemented. Unlike a standard jackknife where only one specimen at a time is omitted, a
"delete-d" jackknife [35] was used in which d specimens at a time were omitted. Under the delete-d jackknife, a variable percentage of individuals from a dataset are left out during the CVA
procedure, and then assigned to groups as "unknown" specimens. The specimens treated as unknowns are also assigned an assignment p-value during this procedure. High
success rates under the delete-d jackknife resampling indicate that the differentiation among the involved groups is sufficient to be diagnostic. This implies that the discriminant axes capture
enough information to assign individuals of the given groups, and form a reasonable estimate of the distribution of distances based on the Monte Carlo procedure. The jackknife procedure also allows
estimation of the number of individuals needed to obtain meaningful CVA axes, and distance distribution estimates.
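A minimal version of the resampling loop might look as follows, using a nearest-group-mean classifier as a stand-in for the full CVA-plus-Monte-Carlo pipeline (which, in the real procedure, is refit on each retained subset); the function name and toy data are ours:

```python
import math
import random

def delete_d_jackknife(data, labels, d, n_rounds=200, rng=None):
    """Delete-d jackknife of nearest-group-mean assignment: in each round,
    hold out d specimens, recompute group means from the rest, and score how
    often the held-out specimens are re-assigned to their true group."""
    rng = rng or random.Random(0)
    idx = list(range(len(data)))
    correct = total = 0
    for _ in range(n_rounds):
        held = set(rng.sample(idx, d))
        means = {}
        for g in set(labels):
            pts = [data[i] for i in idx if i not in held and labels[i] == g]
            if not pts:
                continue  # group entirely held out this round
            means[g] = tuple(sum(c) / len(pts) for c in zip(*pts))
        for i in held:
            best = min(means, key=lambda g: math.dist(data[i], means[g]))
            correct += best == labels[i]
            total += 1
    return correct / total
```

Increasing `d` mimics leaving out a larger fraction of the dataset, as in the 80% deletion reported above; a success rate that stays high under large `d` indicates the discriminant signal is inherent to the whole dataset.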
The sculpin data set
Here we employ the methods outlined above to study the differentiation in shape that occurs among divergent populations and their naturally occurring hybrids. Population affinity and hybrid status
are independently derived from genetic markers. Note that the specimens are taken from natural populations and occur syntopically in the studied hybrid zones (Table 3). Sculpins were sampled across an
area of secondary contact of invasive and stream sculpins (C. perifretum and C. rhenanus) in the Lower Rhine basin, which is situated at the confluence of the Stream Broel with the River Sieg (Table
3). An extra population of stream sculpins was sampled from the stream Naaf (also a tributary to the River Sieg drainage).
All specimens were genotyped for 45 microsatellite loci [36]. The loci were chosen for their particularly high information content for our study system following the approach of Shriver et al. [37]
using Whichloci [38]. We have used a preliminary genetic map of Cottus [39], to verify that our set of microsatellite markers does not contain pairs of loci that are tightly physically linked. The
genotypic data allow individuals to be unambiguously classified as belonging to pure populations or to be identified as hybrids with mixed ancestry using the methods outlined in Falush et al. [16]. The
program Structure 2.1 [40] yielded consistent results in independent runs (burn-in: 20,000; sampling iterations: 100,000; correlated allele frequency model allowing for an individual alpha and a different
F_ST for each subpopulation), according to which the genetic ancestry of each individual could be determined (see Additional File 1 for genotypic data of the specimens included here). The classification based on
genotypic data was highly congruent with data from distribution and morphology.
Two independent populations of stream sculpins, confined to separate streams (Stream Naaf, Stream Broel) and both devoid of skin prickling, were recovered. A third population can be identified, which
represents the invasive sculpin. Invasive sculpins generally occur within the main channel of the River Sieg and display pronounced skin prickling [10]. Hybrids among Stream Broel sculpins with the
invasive sculpins were only found at the confluence where the Broel merges with the River Sieg (Table 3). A detailed study of these hybrid zones, particularly on the geographic extension, is
currently in preparation (Nolte et al. in prep). Of particular relevance in this context is the fact that hybrid sculpins occur syntopically with their parental populations within the hybrid zones
(Table 3; Sites 2, 3, 4). For the morphometric analysis we grouped specimens that were found to belong to pure populations from the genotypic data into those corresponding to the invasive sculpins
(invasive) and to the two stream sculpin populations (Streams Naaf and Broel). To allow for some uncertainty in the estimates, we used those specimens that were found to be at least 97% pure in the
Structure analysis. These populations are represented here by 117 Stream Broel sculpins, 76 Stream Naaf sculpins and 40 invasive sculpins (Table 3). In contrast, hybrids represent a somewhat
inhomogeneous group consisting of various degrees of ancestry. To restrict this analysis to those specimens that have a pronounced hybrid genotype and to exclude later generation backcrosses that
might have been subject to repeated rounds of natural selection (as this could blur possible transgressive effects) we decided to exclude hybrids with less than 25% ancestry in one of the ancestral
populations. Based on genotypic data, we were able to identify 62 BI hybrids (mixed Stream Broel/Invasive ancestries, less than 75% pure ancestry).
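The grouping rule can be summarized as a small filter over Structure ancestry proportions; the function name and signature are ours, but the thresholds mirror the description above:

```python
def classify_by_ancestry(q, pure_threshold=0.97, hybrid_max_pure=0.75):
    """Group a specimen from its Structure ancestry proportions q (one value per
    cluster, summing to ~1): >= 97% ancestry in one cluster -> pure member of that
    cluster; at most 75% pure ancestry (i.e. >= 25% from each of two parental
    clusters) -> recent hybrid; anything in between is excluded as a probable
    later-generation backcross."""
    top = max(range(len(q)), key=lambda i: q[i])
    if q[top] >= pure_threshold:
        return ("pure", top)
    if q[top] <= hybrid_max_pure:
        return ("hybrid", None)
    return ("excluded", None)
```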
Images of specimens were taken with a digital camera fixed on a stage so that the midsagittal body plane was as much as possible perpendicular to the image plane. Fourteen morphological landmarks
were used to capture the shape of each individual (suppl. Table 2) using TPSdig [41]. The positions of the tip of the nasal (#1), nares (#2), interorbital pores (#3), dorsal fin I origin (#4), dorsal
fin II origin (#5), dorsal fin II end (#6), upper caudal fin origin (#7), lower caudal fin origin (#8), anal fin end (#9), anal fin origin (#10), ventral fin origin (#11), upper origin of the gill
opening (#12), opercular spine (#13) and posterior end of the maxilla (#14) were used as landmarks (Figure 3). The morphometric analysis was conducted using the IMP package according to the methods
outlined above [42]. Shape based assignment tests were conducted with CVAgen6N (part of IMP). In order to estimate possible confounding effects of allometric growth and sexual dimorphism these
variables were determined individually. A scale bar was photographed beside each specimen as a size reference, and sex was determined for individuals larger than 45 mm by examination of the genital
papilla (see Additional File 2).
Authors' contributions
AN identified the sculpin hybrid zones and carried out the molecular genetic studies, morphometric analyses and drafted the manuscript. HDS participated in the morphometric analysis, developed the
assignment procedure based on morphometric data and helped to draft the manuscript. All authors read and approved the final manuscript.
Additional File 1. Inferred group affinity and individual genotypic data. Genotypes of all specimens for 45 microsatellite loci (0 = missing data, alleles numbered according to size, but not
necessarily repeat size) with group affinity and sampling site as of Table 3.
Format: XLS. Size: 195 KB.
Additional File 2. Individual landmark data, centroid size and sex. Cartesian coordinates (X – Y format) for fourteen landmarks, with individual group affinities and sampling site as of Table 3 as
well as sex (0 = female; 1 = male) and centroid size.
Format: XLS. Size: 189 KB.
We thank D. Tautz for critical comments on the manuscript. AN thanks D. Tautz for his support of the Cottus project. The Cottus project was funded by the DFG (Ta99/20). We have obtained permissions
to sample and take specimens from A. Mellin, T. Heilbronner and W. Fettweis, whom we thank for their benevolent support.
1. Burke JM, Arnold ML: Genetics and the fitness of hybrids.
Annu Rev Genet 2001, 35:31-52. PubMed Abstract | Publisher Full Text
2. Seehausen O: Hybridization and adaptive radiation.
Trends Ecol Evol 2004, 19(4):198-207. Publisher Full Text
3. Schliewen UK, Klee B: Reticulate sympatric speciation in Cameroonian crater lake cichlids.
Frontiers in Zoology 2004, 1:5. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
4. Rieseberg LH, Archer MA, Wayne RK: Transgressive segregation, adaptation, and speciation.
Heredity 1999, 83:363-372. PubMed Abstract | Publisher Full Text
5. Rieseberg LH, Widmer A, Arntz AM, Burke JM: The genetic architecture necessary for transgressive segregation is common in both natural and domesticated populations.
Phil Trans R Soc Lond B 2003, 58:1141-1147. Publisher Full Text
6. Harrison RG: Hybrids and hybrid zones. In Hybrid zones and the Evolutionary process. Edited by Harrison RG. Oxford University Press New York; 1993.
7. Barton NH: The role of hybridization in evolution.
Mol Ecol 2001, 10:551-568. PubMed Abstract | Publisher Full Text
8. Englbrecht CC, Freyhof J, Nolte A, Rassmann K, Schliewen U, Tautz D: Phylogeography of the bullhead Cottus gobio (Pisces: Teleostei: Cottidae) suggests a pre-Pleistocene origin of the major
central European populations.
Mol Ecol 2000, 9:709-722. PubMed Abstract | Publisher Full Text
9. Volckaert FAM, Hänfling B, Hellemans B, Carvalho GR: Timing of the population dynamics of bullhead Cottus gobio (Teleostei: Cottidae) during the Pleistocene.
J Evol Biol 2002, 15:930-944. Publisher Full Text
10. Nolte AW, Freyhof J, Stemshorn KC, Tautz D: An invasive lineage of sculpins, Cottus sp. (Pisces, Teleostei) in the Rhine with new habitat adaptations has originated from hybridization between old
phylogeographic groups.
11. Freyhof J, Kottelat M, Nolte A: Taxonomic diversity of European Cottus with description of eight new species (Teleostei: Cottidae).
Ichthyological Exploration of Freshwaters 2005, 16(2):107-172.
12. Zelditch ML, Swiderski DL, Sheets HD, Fink WL: Geometric Morphometrics: A primer. Academic Press, London; 2004.
13. Klingenberg CP, Leamy LJ: Quantitative genetics of geometric shape in the mouse mandible.
14. Albertson RC, Streelman JT, Kocher TD: Genetic Basis of Adaptive Shape Differences in the Cichlid Head.
Journal of Heredity 2003, 94(4):291-301. Publisher Full Text
15. Moraes EM, Manfrin MH, Laus AC, Rosada RS, Bomfin SC, Sene FM: Wing shape heritability and morphological divergence of the sibling species Drosophila mercatorum and Drosophila paranaensis.
Heredity 2004, 92:466-473. PubMed Abstract | Publisher Full Text
16. Falush D, Stephens M, Pritchard JK: Inference of Population Structure Using Multilocus Genotype Data: Linked Loci and Correlated Allele Frequencies.
Genetics 2003, 164:1567-1587. PubMed Abstract | Publisher Full Text
17. Paetkau D, Calvert W, Stirling I, Strobeck C: Microsatellite analysis of population structure in Canadian polar bears.
Mol Ecol 1995, 4:347-354. PubMed Abstract
18. Cornuet J-M, Piry S, Luikart G, Estoup A, Solignac M: New Methods Employing Multilocus Genotype to Select or Exclude Populations as Origins of Individuals.
Genetics 1999, 153:1989-2000. PubMed Abstract | Publisher Full Text
19. Primmer CR, Koskinen MT, Piironen J: The one that did not get away: individual assignment using microsatellite data detects a case of fishing competition fraud.
Proceedings of the Royal Society of London Series B Biological Sciences 2000, 267(1453):1699-1704.
20. Strauss RE: Natural hybrids of the freshwater sculpins Cottus bairdi and Cottus cognatus (Pisces: Cottidae): Electrophoretic and morphometric evidence.
21. Riffel M, Schreiber A: Morphometric differentiation of sculpin (Cottus gobio), a fish with deeply divergent genetic lineages.
Can J Zool 1998, 76:876-885. Publisher Full Text
22. Bookstein FL: Morphometric Tools for Landmark Data: Geometry and Biology. Cambridge University Press; 1991.
23. Rohlf FJ, Marcus LF: A Revolution in Morphometrics.
Trends in Ecology and Evolution 1993, 8:129-132. Publisher Full Text
24. Rohlf FJ: Shape Statistics: Procrustes Superimpositions and Tangent Spaces.
Journal of Classification 1999, 16:197-223. Publisher Full Text
25. Slice DE: Landmark coordinates aligned by Procrustes analysis do not lie in Kendall's shape space.
Systematic Biology 2001, 50:141-149. PubMed Abstract
26. Bookstein FL: Combining the tools of geometric morphometrics. In Advances in Morphometrics. Edited by Marcus LF, Corti M, Loy A. Plenum Press; 1996:131-152.
27. Bookstein FL: Principal warps: thin-plate splines and the decomposition of deformations.
IEEE Transactions on Pattern Analysis and Machine Intelligence 1989, 11:567-585. Publisher Full Text
28. Bookstein FL: Standard formula for the uniform shape component in Landmark data. In Advances in Morphometrics. Edited by Marcus LF, Corti M, Loy A. Plenum Press; 1996:153-168.
29. Piry S, Cornuet J-M: Gene-Class Users Manual. [http://www.montpellier.inra.fr/CBGP/softwares/geneclass/geneclass.html] webcite
30. Nolte AW, Stemshorn KC, Tautz D: Direct cloning of microsatellite loci from Cottus gobio through a simplified enrichment procedure.
31. Shriver MD, Smith MW, Jin L, Marcini A, Akey JM, Deka R, Ferrell RE: Ethnic-affiliation estimation by use of population-specific DNA markers.
American Journal of Human Genetics 1997, 60(4):957-964. PubMed Abstract
32. Whichloci [http://www.bml.ucdavis.edu/whichloci.htm] webcite
33. Stemshorn KC, Nolte AW, Tautz D: A Genetic Map of Cottus gobio (Pisces, Teleostei) based on microsatellites can be linked to the Physical Map of Tetraodon nigroviridis.
34. Structure 2.1 [http://pritch.bsd.uchicago.edu/structure.html] webcite
35. Rohlf FJ: TpsDig, digitize landmarks and outlines, version 1.39. Department of Ecology and Evolution, State University of New York at Stony Brook; 2003.
Biology Society , 397-398.
Damper, R. I., Gore, M. O. and Harnad, S. (1996) Acoustic and auditory representations of the voicing contrast. Journal of the Acoustical Society of America, 100, (4/2), 2682.
Damper, R. I., Harnad, S. and Gore, M. O. (1996) The auditory basis of the perception of voicing. ESCA Tutorial and Research Workshop on the Auditory Basis of Speech Perception , 69--74.
Damper, R. I., Tranchant, M. A. and Lewis, S. M. (1996) Speech versus keying in command and control: effect of concurrent tasking. International Journal of Human-Computer Studies, 45, (3), 337--348.
Dasmahapatra, Srinandan and Martin, Paul (1996) On an Algebraic Approach to Cubic Lattice Potts Models. Journal of Physics A, 29, 263.
Davies, C. J. (1996) Three Dimensional Sensing via Coloured Spots. Doctoral Thesis .
Davies, C.J. and Nixon, M.S. (1996) Sensing Surface Discontinuities via Coloured Spots. Proc. International Workshop on Image and Signal Processing , 573--576.
Doyle, R.S. and Harris, C.J. (1996) Multi-Sensor Data Fusion for Helicopter Guidance using Neuro-Fuzzy Estimation Algorithms. The Royal Aeronautical Society Journal, 241--251.
Evans, A.N. and Nixon, M.S. (1996) Biased Motion-Adaptive Temporal Filtering for Speckle Reduction in Echocardiography. IEEE Transactions on Medical Imaging, 15, (1), 39--50.
Finan, R. A., Sapeluk, A. T. and Damper, R. I. (1996) Comparison of multilayer and radial basis function neural networks for text-dependent speaker recognition. International Conference on Neural
Networks (ICNN'96) IEEE, 1992--1997.
French, M. and Rogers, E. (1996) A stability analysis of Direct Neuro-Fuzzy Controllers. Proceedings of IEE Control '96 , 182--187.
Galkowski, K, Rogers, E and Owens, D H (1996) A new state space model for linear discrete multipass processes. Bulletin of the Polish Academy of Sciences (Technical Sciences), 44, (1), 87-98.
Galkowski, K, Rogers, E. and Owens, D H (1996) Computing the Trajectories Generated by Discrete Linear Repetitive Processes. Zeitschrift fur Angewandte Mathematik un Mechanik (SAMM), 76, (2), 159-160
Galkowski, K., Rogers, E. and Owens, D.H. (1996) 1D Representations and the Control of 2D Systems. Int. Conference Control 96
Galkowski, K., Rogers, E. and Owens, D.H. (1996) Geometric Control Theory for Differential Linear Repetitive Processes. Internationa Symposium on The Mathematical Theory of Networks and Systems (MTNS
Grant, P. M., Chen, S. and Mulgrew, B. (1996) Nonlinear adaptive filter performance in typical applications. Annual Reviews in Control, 20, (0), 107-118.
Gunn, S.R. (1996) Dual Active Contour Models for Image Feature Extraction. University of Southampton, Electronics and Computer Science : University of Southampton, Doctoral Thesis .
Gunn, S.R. and Nixon, M.S. (1996) Snake Head Boundary Extraction using Local and Global Energy Minimisation. IEEE Int. Conf. on Pattern Recognition, Vienna, Austria, , 581-585.
Hannah, M. I. (1996) Prospects for Applying Speaker Verification to Unattended Secure Banking. University of Abertay Dundee, Electrical and Electronic Engineering : Department of Electonic and
Electrical Engineering, Doctoral Thesis .
Hanzo, L (1996) Bandwidth-efficient Multi-level Communications.
Hanzo, L (1996) Bandwidth-efficient Tetherless Multimedia Communications.
Hanzo, L. (1996) The British Cordless Telephone Standard CT2. In, Gibson, J. (ed.) The Mobile Communications Handbook. , IEEE Press-CRC Press, 462-477.
Hanzo, L. (1996) The Pan-European Mobile Radio System. In, Gibson, J. (ed.) The Mobile Communications Handbook. , IEEE Press-CRC Press, 399-418.
Hanzo, L. (1996) Wireless Audio-Visual Communications. , ---.
Harris, C.J. (1996) Collision Avoidance in Helicopters via Neurofuzzy Multisensor Data Fusion.
Harris, C.J. (1996) Development and Application of Adaptive Neurofuzzy Modelling and Estimation in Drug Assays.
Harris, C.J. (1996) Intelligent Estimatiors with Applications in Multi-Data Fusion.
Harris, C.J. (1996) Intelligent Modelling, Control and Navigation for AUVs. Colloq. on Autonomous Underwater Vehicles and their Systems
Harris, C.J., Brown, M., Bossley, K.M., Mills, D.J. and Feng, M. (1996) Advances in Neurofuzzy Algorithms for Real-time Modelling and Control. J. Engineering Application of AI, 9, (1), 1--16.
Harris, C.J. and Doyle, R.S. (1996) Multi Sensor Data Fusion for Real Time Aircraft Collision Avoidance. Colloquium on Target Tracking and Data Fusion IEE, 12/1.
He, Z., Qiu, G. and Chen, S. (1996) Visual vector quantization for image compressing based on Laplacian pyramid structure. Proceedings of 3rd IEE International Workshop on Image and Signal
Processing: Advances in Computational Intelligence , 7-10.
Hertz, John and Prügel-Bennett, Adam (1996) Learning short synfire chains by self-organization. Network: Computatioon in Neural Systems, 7, (2), 357-363.
Hertz, John and Prügel-Bennett, Adam (1996) Learning synfire chains: Turning noise into signal. International Journal of Neural Systems, 7, (4), 445-451.
Johntston, D.S., Pugh, A C, Rogers, E, Hayton, G.E. and Owens, D.H. (1996) A Polynomial Matrix Theory for a Class of 2-D Linear Systems. Linear Algebra and its Applications, 241-3, 669-703.
Jones, Mark P. (1996) A Low Frequency Acoustic Method for Detecting Abnormalities in the Human Thorax. Doctoral Thesis .
Keller, T and Hanzo, L (1996) Orthogonal Frequency Division Multiplex Synchronisation Techniques for Wireless Local Area Networks. of PIMRC'96, Taipei, Taiwan, 15 - 18 Oct 1996. , 963-967.
Lawrence, A.J. and Harris, C.J. (1996) Auto-tuning of Low Order Controllers by Direct Manipulation of Closed Loop Time Domain Measures. I Mech E Proc. Part I, J. Systems and Control Engineering, 210,
Luk, R. W. P. and Damper, R. I. (1996) Stochastic phonographic transduction for English. Computer Speech and Language, 10, (2), 133--153.
Matthews, N.D., An, P.E., Charnley, D. and Harris, C.J. (1996) Vehicle Detection and Recognition in Greyscale Imagery. Control Eng. Practice, 4, (4), 473--479.
Middleton, I. and Damper, R. I. (1996) Identification of boundaries in MRI medical images using artificial neural networks. IEE Colloquium on Artificial Intelligence Methods for Biomedical Data
Processing IEE, 6/1-6/6.
Middleton, I. and Damper, R. I. (1996) Identification of the lung boundary in MR images using neural networks. 18th Annual International Conference of IEEE Engineering in Medicine and Biology Society
, 1085-1086.
Moore, C.G., Rogers, E. and Harris, C.J. (1996) Fuzzy Logic Based Estimators and Predictors for Agile Target Tracking Applications. 1996 IFAC World Congress
Rayner, N.J.W and Harris, C.J (1996) Mission Management System for Multiple Autonomous Vehicles. Fuzzy Logic John Wiley and Sons, 77--100.
Rocha, P., Rogers, E. and Owens, D.H. (1996) Stability of Discrete Non-Unit Memory Linear Repetitive Processes - A 2D Systems Interpretation. International Journal of Control., 63, (3), 457-482.
Rogers, E. and Owens, D.H. (1996) Lyapunov stability theory and performance bounds for a class of 2D linear systems. Multidimensional Systems and Signal Processing, 7, (2), 179-194. (doi:10.1007/
Runnacles, B.S. and Nixon, M.S. (1996) Texture extraction, Classification and Segmentation using Statistical Geometric Features. Proc. IEEE International Conference on Image Processing ICIP '96 IEEE,
Schilhabel, T.E. and Harris, C.J. (1996) Understanding chaotic dissipative dynamics in the State Space with Fuzzy Systems. Int. Conference on Adaptive Computing in Engineering Design and Control '96
Scutt, T. W. (1996) Synthetic Neural Networks: A Situated Systems Approach. University of Southampton, Electronics and Computer Science : Faculty of Engineering and Applied Science, Doctoral Thesis .
Shadle, C.H., Mair, S.J. and Carter, J.N. (1996) Acoustic characteristics of the front fricatives [f, v, th, eth]. ETRW - 4th Speech Prod. Seminar , 193--196.
Streit, J and Hanzo, L (1996) Fixed-rate Video Codecs for Wireless Systems. of First Wireless Image/Video Communications Workshop, Loughborough, UK, 04 - 05 Sep 1996. , 91-98.
Streit, J and Hanzo, L (1996) Quadtree-Based Reconfigurable Cordless Videophone Systems. IEEE Transactions on Circuits and Systems for Video Technology, 6, (2), 225-237.
Streit, J and Hanzo, L (1996) Vector-quantised Cordless Videophone Transceivers. of Globecom'96, QE II Conf. Centre, London, UK, 18 - 22 Nov 1996. , 808-812.
Streit, J, Hanzo, L and Bradshaw, B (1996) Eye and Lip Enhancement for Low-rate Videophony. of PIMRC'96, Taipei, Taiwan, 15 - 18 Oct 1996. , 1049-1053.
Streit, J. and Hanzo, L. (1996) Quadtree-Decomposed Cordless Videophones. of VTC'96, Atlanta, USA, Georgia, 28 Apr - 01 May 1996. , 218-222.
Torrance, J and Hanzo, L (1996) Adaptive Modulation in a Slow Rayleigh Fading Channel. of Personal, Indoor and Mobile Radio Communications, Taipei, Taiwan, 15 - 18 Oct 1996. , 497-501.
Torrance, J M, Didascalou, D and Hanzo, L (1996) The Potential and Limitations of Adaptive Modulation over Slow Rayleigh Fading Channels. IEE Colloquium, Savoy Place, London, UK, , 10/1-10/6.
Torrance, J M and Hanzo, L (1996) Demodulation level selection in adaptive modulation. IEE Electronics Letters, 32, (19), 1751-1752.
Torrance, J M and Hanzo, L (1996) Optimisation of Switching Levels for Adaptive Modulation in Slow Rayleigh Fading. IEE Electronics Letters, 32, (13), 1167-1169.
Torrance, J M and Hanzo, L (1996) Performance Upper Bound of Adaptive QAM in Slow Rayleigh-Fading Environments. In, ICCS'96/ISPAC'96, 25 - 29 Nov 1996. , 1653-1657.
Torrance, J M and Hanzo, L (1996) Upper bound performance of adaptive modulation in a slow Rayleigh-fading channel. IEE Electronics Letters, 32, (8), 718-719.
Torrance, J.M., Keller, T. and Hanzo, L. (1996) Multi-Level Modulation in the Indoors Leaky Feeder Environment. of VTC'96, Atlanta, USA, Georgia, 28 Apr - 01 May 1996. , 1554-1558.
Wang, H., Brown, M. and Harris, C.J. (1996) Modelling and Control of Nonlinear, Operating Point dependent Systems via Associative Memory Networks. J. Dynamics and Control, 6, (2), 199-218.
Wang, H., Wang, A.P., Brown, M. and Harris, C.J. (1996) One to one mapping and its application to neural network based control system design. Int. J. Systems Science, 27, (2), 161--170.
Woodard, J P and Hanzo, L (1996) Performance and Error Sensitivity Comparison of Low and High Delay CELP Codecs Between 8 and 4 kbits/s. of PIMRC'96, Taipei, Taiwan, 15 - 18 Oct 1996. , 1000-1004.
Woodard, J.P., Torrance, J.M. and Hanzo, L. (1996) A Low-Delay Multimode Speech Terminal. of VTC'96, Atlanta, USA, Georgia, 28 Apr - 01 May 1996. , 213-217.
Wu, Z.Q. and Harris, C.J. (1996) Adaptive Neurofuzzy Kalman Filter. FUZZ-IEEE '96 - Proceedings of the fifth IEEE International Conference on Fuzzy Systems , 1344-1350.
Wu, Z.Q. and Harris, C.J. (1996) Indirect Adaptive Neurofuzzy Estimation of Nonlinear Time Series. Neural Network World, 6, (3), 407--416.
Wu, Z.Q. and Harris, C.J. (1996) Neurofuzzy Modelling and State Estimation. IEEE Medit. Symp. on Control and Automation: Circuits, Systems and Computers '96 , 603-610.
Yang, L-L and Li, C. S. (1996) DS-CDMA Performance of Random Orthogonal Codes over Nakagami multipath fading channels. Proceedings of ISSSTA'96 , 68--72.
Yang, L-L and Li, C. S. (1996) Extreme Performance evaluation for cellular DS-SFH Systems. Proceedings of ISSSTA'96 , 1282--1286.
Yang, L-L, Li, C. S. and Nie, T. (1996) A fault-tolerant data-transmission model based on RRNS. Proceedings of SPIE , 517-522.
Yang, L. L. and Li, C. S. (1996) RNS combinatory orthogonal DS-SSMA performance over multipath fading channels. Proceedings of Int. Conf. on Personal, Mobile and Spread-Spectrum Commun.
Yang, Lie-Liang and Li, C.S. (1996) Direct-sequence spread-spectrum multiple-access communications mode based on RNS and its performance. ACTA Electronica Sinica, 24, (7), 17-21.
Yang, Lie-Liang and Li, C.S. (1996) Performance of DS-CDMA systems using M-ary orthogonal codes with selection diversity combining. Journal of Railway Society of China, 18, (5), 52-57.
Yang, Lie-Liang and Li, C.S. (1996) Performance of hybrid DS-SFH spread-spectrum random access communications. Mobile Communications, 20, (2), 23-26.
Yang, Lie-Liang and Nie, T. (1996) Fault-tolerant interface of CPU-memory based on 1-sEC/AUED codes. Microelectronics and Computers, (4), 7-10. | {"url":"http://eprints.soton.ac.uk/view/divisions/uos2011-F7-FP-CS/1996.html","timestamp":"2014-04-17T09:50:46Z","content_type":null,"content_length":"71212","record_id":"<urn:uuid:aa445ea6-c15a-4517-b905-31ebf3563d19>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rightagon Inc.- a math/geometry puzzle
Now, not everyone enjoys these math problems, so please be sure to check out all the other puzzle types we’ve just put up tonight.
Created by Michaelc!
Rightagon Inc. has a lot in downtown Metropolis on which they wish to build. The area on which they are allowed to build a structure is a 10,000 sq ft right triangle with two equal sides.
If they want to build an octagonal building with all sides equal, what is the maximum area they can build?
Can you solve by Friday?
On a side note, it looks like I've gotten into the habit this past year of posting just really hard math puzzles and brain teasers; however, from here on out, I'll try to put up a nicer mixture of hard and medium ones.
Tags: geometry, math
Hypothesis Testing
June 2nd 2010, 09:15 AM #1
May 2010
I'm not sure how to work out this question:
The Australian Medical Association believed that the Health Minister's recent statement claiming that at least 80% of doctors supported the reforms to Medicare was incorrect. The Association's
President suggested the best way to test this was to survey 200 members, selected through a random sample, on the issue. She indicated that the Association would be prepared to accept a Type I
error probability of 0.02.
1. State the direction of the alternative hypothesis for the test.
2. Using the tables, state the absolute value of the test statistic
and determine the lower boundary 3._____, and the upper boundary 4______.
of the region of non-rejection in terms of the sample proportion of respondents (as %) in favour of the reforms. If there is no (theoretical) lower bound, type lt in box 3, and if there is no
(theoretical) upper bound, type gt in box 4. State the numerical value(s) correct to one decimal place.
5. If 137 of the survey participants indicated support for the reforms, is the null hypothesis rejected for this test? Type yes or no.
6. If the null hypothesis is rejected, can the Association claim that the Health Minister's assertion is incorrect at the 2% level of significance on the basis of this test?
Thank-you very much for your help!!
Not everyone does these the same way.
I look at hypothesis testing as a proof by contradiction.
I assume the null hypothesis to be correct and see if the data tells me it's wrong.
Hence I want to prove the alternative hypothesis.
In this case we want to prove that .8 is too high, hence
$H_0: p = 0.8$, $H_a: p < 0.8$
This absolute value seems to hint at a two-sided test, which I don't agree with.
My test stat would be
$Z^*={\hat p-p_0\over \sqrt{p_0q_0/n}}$
which is close to a normal since n=200.
Here $\hat p=137/200$ and $p_0=.8$
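As a quick numerical check of the test above (Python, standard library only; the variable names are mine):

```python
from math import sqrt
from statistics import NormalDist

n, successes = 200, 137
p_hat = successes / n                 # sample proportion = 0.685
p0, alpha = 0.80, 0.02                # hypothesised proportion, Type I error rate

se = sqrt(p0 * (1 - p0) / n)          # standard error under H0
z = (p_hat - p0) / se                 # test statistic, about -4.07
z_crit = NormalDist().inv_cdf(alpha)  # left-tail critical value, about -2.05

reject = z < z_crit                   # left-tailed: reject H0 if z falls below z_crit
lower_bound_pct = (p0 + z_crit * se) * 100  # non-rejection boundary, about 74.2%
```

So with 137 of 200 in favour, the null hypothesis is rejected at the 2% level, and the lower boundary of the non-rejection region works out to roughly 74.2% of respondents.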
This would be a one-tailed test, specifically left-tailed. A two-tailed test would apply if the statement were "the proportion of so-and-so is EXPLICITLY equal to some number"; then the null hypothesis would fail if the statistic fell beyond a critical value on either side. For example, if some machine process had to hit some specific number, no more no less, a test on that process would be two-tailed, since it could be rejected in either the greater-than or the less-than region. Here we are saying the population proportion is at LEAST some value, which means the null hypothesis only fails if we get values less than that value. Two tails would allow the null hypothesis to fail for a value greater than 0.8, which doesn't make sense in the context of the problem.
They say absolute value because your critical value will have a z-score to the left of the population proportion and thus would be negative.
I agree with the rest, except my null hypothesis would be $H_0: p \ge 0.8$
The two procedures...
$H_0: p = 0.8$, $H_a: p < 0.8$
$H_0: p \ge 0.8$, $H_a: p < 0.8$
are exactly the same.
In the second one, $\alpha$ is the largest (sup) probability
of committing a type one error over the region $p\ge .8$
which occurs at the boundary of $p= .8$
I stick with simple hypotheses to make it easier on the students.
The only reason I suggest people use the less-than or greater-than signs in their null hypothesis is that they may look at it and think "two-tailed test, since the opposite of equals is 'not equals.'" As you said, it will all lead to the same answer, so no harm no foul.
A VLSI Implementation of Rank-Order Searching Circuit Employing a Time-Domain Technique
Journal of Engineering
Volume 2013 (2013), Article ID 759761, 8 pages
Research Article
^1Faculty of Electronics and Telecommunications, University of Science, VNU-HCM, Ho Chi Minh City, Vietnam
^2Department of Electrical Engineering and Information Systems, The University of Tokyo, Tokyo 113-8656, Japan
Received 28 August 2012; Revised 3 December 2012; Accepted 11 December 2012
Academic Editor: Jan Van der Spiegel
Copyright © 2013 Trong-Tu Bui and Tadashi Shibata. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We present a compact and low-power rank-order searching (ROS) circuit that can be used for building associative memories and rank-order filters (ROFs) by employing time-domain computation and
floating-gate MOS techniques. The architecture inherits the accuracy and programmability of digital implementations as well as the compactness and low power consumption of analog ones. We aim to implement the identification function as the first-priority objective; the filtering function can be implemented once the location identification has been carried out. The prototype circuit was designed and fabricated in a 0.18 μm CMOS technology. It consumes only 132.3 μW for an eight-input demonstration case.
1. Introduction
Searching operation is an important function in recognition systems. In conventional recognition systems, only the nearest matched template data among a vast number of template data can be retrieved.
However, in some applications, such as $k$-neighbor selectors or Internet routers, finding the $m$th nearest matched data is necessary. Although this kind of operation can be carried out by employing a sorting processor, sorting is computationally expensive and time consuming, making it unsuitable for building low-power systems.
In image and speech processing, data compression, communication, neural network, and so forth, nonlinear filters can find a lot of applications such as attenuating impulsive noise while preserving
sudden changes in the signal. Among the many types of nonlinear filters, MIN, MAX, and MEDIAN are the most popular. These filters can be implemented using rank-order filters (ROFs) with appropriate rank-order setting values. Several ROFs have been implemented in fully digital [1, 2], mixed-signal [3], as well as analog approaches [4–6]. When considering the problem of saving circuit area so that the structure can serve as a basic block for building a parallel processing array, analog implementations of ROFs are preferred. Although they can achieve low power consumption and a small chip area, the main drawback of analog implementations is that they suffer from accuracy problems, such as mismatches between transconductance amplifiers [4].
In this paper, we have developed a compact and low-power rank-order searching (ROS) circuit by employing a time-domain computation technique. Here, a time interval, or delay time, is used to represent a value. The circuitry in this study is the core that can be used for building associative memories and ROFs. Since it employs a time-domain technique, the architecture not only achieves the small chip area and low power consumption of analog implementations, but also improves on the accuracy of such approaches. In this design we aim to identify the location of the candidate as the first-priority objective. Retrieving its content, or the filtering function, can be implemented easily once the location is found.
In the rest of the paper, system organization and major circuitries utilized in the prototype chip design are described in Sections 2 and 3. Section 4 shows the experimental results from the test
chips fabricated in a 0.18μm CMOS technology. And the conclusion of the paper is given in Section 5.
2. System Organization
A ROF employing pulse-width-modulation (PWM) signals was proposed in [7] in a fully digital architecture. That architecture works well for the filtering function, but it suffers from narrow-pulse signals (glitches) that can occur at the outputs of the XOR gates in the address encoder circuit, leading to errors in the location identification function. The problem becomes more pronounced as the number of inputs grows. To overcome this problem, and to achieve a compact architecture that can handle the large number of inputs found in many applications, an analog rank-order searching engine employing time-domain computation techniques is proposed in Figure 1. It is the basic core for building ROFs and associative memories. The identification function is the first objective that we aim at with this architecture. Basically, the engine consists of analog-to-delay-time converters (ATCs), a rank-order setting circuitry, a comparator
based on floating-gate MOS technology, a binary encoder, and a binary counter. The ATCs convert the analog input values to delay-time signals; the rank-order searching circuit then uses these as input data. The final output is a binary code representing the location of the $m$th smallest value in the analog voltage domain or, equivalently, the $m$th risen-up signal in the time domain. In addition, to establish a smooth interface to the following digital processing, a binary counter can be added to the system. The value of the $m$th smallest input in a digital format is given at the output of this counter. The filtering/searching operation is carried out within a period called the operation slot, which is determined by the SLOT signal. The value of SLOT is selected mainly depending on the desired resolution of computation. For example, SLOT is set to 256 ($2^8$) clock cycles for 8-bit resolution. From now on in this study, a rank of $m$ is represented by a binary number RANK equivalent to ($m-1$). For example, a rank of four is represented by the rank value of 011.
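At the behavioral level, this pipeline can be sketched in a few lines of Python (an illustrative model of my own, not from the paper; ties between inputs and all circuit nonidealities are ignored):

```python
def atc(voltages, ramp_slope=1.0):
    """Ideal analog-to-delay-time conversion: each comparator output rises
    when the common ramp (ramp_slope * t) crosses its input voltage, so the
    delay is proportional to the voltage."""
    return [v / ramp_slope for v in voltages]

def rank_order_search(voltages, rank):
    """Return the index of the m-th smallest input, with RANK = m - 1:
    i.e., the (rank + 1)-th earliest rising delay-time signal."""
    delays = atc(voltages)
    order = sorted(range(len(delays)), key=delays.__getitem__)
    return order[rank]
```

For eight inputs and RANK = 100 (binary) = 4, the function returns the location of the fifth smallest input, which is what the binary encoder reports.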
3. Circuit Implementations
3.1. Analog-to-Delay-Time Converter (ATC)
Input voltages are converted to delay-time signals by the ATCs, as shown in Figure 1. The input analog voltages are applied to the negative nodes of voltage comparators, while a common ramp voltage signal is applied to the positive nodes. Each comparator compares its input analog voltage with this ramp voltage. The output of the comparator remains at the "0" level until the ramp exceeds the input voltage; at that moment, the comparator output is inverted to the "1" level. In this manner, an analog voltage is converted to a delay-time signal, and a smaller delay time corresponds to a smaller analog voltage.
3.2. Floating-Gate-MOS-Based Comparator and Rank-Order Setting Circuit
In order to reduce the circuit area compared with [7], the carry save adder (CSA) and the subtractor in [7] are replaced by a simple floating-gate-MOS-based comparator and a rank-order setting
circuit. A simplified schematic of the floating-gate-MOS-based comparator utilized is shown in Figure 2. The voltages at the floating gates are determined as linear weighted summations of multiple input signals, calculated by [8]:

$V_F = \frac{\sum_{i=1}^{n} C_i V_i}{C_0 + \sum_{i=1}^{n} C_i}$,

where each input voltage $V_i$ takes one of two levels, "0" or "$V_{DD}$"; the $C_i$ are the capacitive coupling coefficients between the floating gate and each of the input gates; and $C_0$ is the capacitive coupling coefficient between the floating gate and the substrate. The coupling coefficients are chosen to guarantee the required ordering of the reference and floating-gate voltages at each given rank and to fit the range of both voltages inside the input range of the comparator. The smallest MIM capacitances of the fabrication process, 16 fF, were chosen to save chip area.
For a given rank-order value, the rank-order setting circuit connects some of its capacitors to $V_{DD}$ and the others to ground, so that it sets a reference voltage proportional to the rank value. The floating-gate voltage rises proportionally to the number of "1" inputs. When it exceeds the reference voltage, the comparator output COUT becomes "high," as shown in Figure 2, after a small delay due to the response of the comparator.
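The thresholding behavior of the comparator can be summarized by a small behavioral model (illustrative Python of my own; equal coupling capacitors and an ideal comparator are assumed):

```python
def cout(delays, rank, t):
    """Comparator output at time t: the floating-gate voltage steps up each
    time another input rises, so COUT goes high once more than `rank`
    inputs -- i.e., rank + 1 of them -- have risen."""
    risen = sum(1 for d in delays if d <= t)
    return 1 if risen > rank else 0
```

With RANK = m - 1, COUT therefore rises exactly when the m-th earliest input rises, which is the event the address encoder looks for.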
Figure 3 demonstrates the case of a 5-input system with a required rank order of four. As can be seen in this example, the filtered input, that is, the candidate, is identified, and a binary code for its address will be generated by the address encoder.
3.3. Address Encoding
Figure 4 illustrates the schematic diagram of the address encoding circuit. It identifies the location of the filtered input and represents this location as a binary code. It consists of simple XOR
gates, narrow-pulse filters, domino buffers, and a binary encoder. The location identification function is carried out by taking the XOR of COUT with each delay-time signal and then searching for the "0" that remains at the output of the domino buffers. Unfortunately, due to the response time of the comparator, a narrow-pulse signal (i.e., a glitch) will occur at the output of the XOR gate corresponding to the candidate signal. Theoretically, such a glitch would disappear if the response time were zero. This pulse is narrower than those occurring at the other XOR outputs. Narrow-pulse filters are placed at the outputs of the XOR gates to remove such a pulse. The domino buffers following the filters detect "0"-to-"1" transitions. As a result, only one output of these buffers remains at the "0" level at the end of the operation slot, and the others become "1." A following binary priority encoder senses the "0" input and generates the address corresponding to it. As can be seen in the example of Figure 3, the delay-time signal of the fourth smallest data is nearly identical to the signal COUT, and the glitch occurring at its XOR output is removed by the narrow-pulse filter.
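A behavioral sketch of this encoding path (illustrative Python with my own variable names; it assumes the inputs are spaced further apart than the time accuracy, as the circuit requires):

```python
def winner_address(delays, rank, response=0.5, glitch_limit=1.0):
    """Model of the address-encoding path: XOR pulse widths, narrow-pulse
    filtering, and the priority encoder's search for the one remaining 0."""
    t_cout = sorted(delays)[rank] + response        # COUT rises slightly late
    widths = [abs(t_cout - d) for d in delays]      # XOR(COUT, input_i) pulse widths
    survives = [w >= glitch_limit for w in widths]  # narrow-pulse filters
    return survives.index(False)                    # domino output still at "0"
```

Only the candidate's XOR pulse is as narrow as the comparator response, so it alone is filtered, and its domino output stays at "0" for the encoder to find.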
3.4. Narrow-Pulse Filter (Glitch Filter)
A well-known delay cell [9] has been employed as the narrow-pulse filter in this design. Figure 5 shows the basic schematic of the filter. At the beginning of operation, the output of the filter is reset to "0." The output changes its state when the voltage on the parasitic capacitor becomes smaller than the threshold voltage of the inverter. Therefore, if the discharge time, namely the time required to discharge the parasitic capacitance from its reset level down to the inverter threshold, is larger than the input pulse width, the pulse is filtered. The advantage of this filter over conventional RC filters is that the filtered pulse width is programmable by changing a bias voltage. In this manner, the filter is programmed to remove only the narrowest pulse, which corresponds to the candidate signal, while other pulses are merely delayed.
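Under the usual constant-current assumption for such a current-starved delay cell (a sketch of my own, not the paper's derivation), the programmable filtered width follows from $C\,\Delta V = I\,t$:

```python
def discharge_time(c_par, v_reset, v_th, i_bias):
    """Time for the bias current to discharge the parasitic capacitance from
    its reset level down to the inverter threshold; any input pulse narrower
    than this is swallowed by the filter."""
    return c_par * (v_reset - v_th) / i_bias
```

For example, with a hypothetical 100 fF node, a 1.8 V reset level, a 0.9 V inverter threshold, and a 1 μA bias current, pulses narrower than 90 ns would be filtered; raising the bias current shortens this window.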
4. Experimental Results
4.1. Chip Fabrication
The proof-of-concept chip was designed and fabricated using a 0.18 μm standard CMOS technology. The chip includes eight inputs for demonstration. Time-domain signals are directly applied as input data; this means that the ATCs shown in Figure 1 were not implemented in the test chip. For simplicity, the binary counter that counts the number of clocks representing the digital value of the ranked input was not implemented either.
A photomicrograph of the test chip is shown in Figure 6, and the specifications are summarized in Table 1. The core size is 0.006 mm^2, the power dissipation is 132.3 μW, and the time accuracy is 9.5 ns. Assuming that the system has 8-bit resolution, it takes 256 clock cycles in each operation slot. As a result, the filtering latency is 2.432 μs.
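The latency figure follows directly from the slot length and the time resolution; a quick check:

```python
resolution_bits = 8
cycles = 2 ** resolution_bits   # operation slot: 256 clock cycles
t_accuracy = 9.5e-9             # measured time resolution per cycle, in seconds
latency = cycles * t_accuracy   # filtering latency: 2.432e-6 s, i.e., 2.432 us
```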
4.2. Measurement Results and Discussion
In order to verify the operation of the prototype chip, an arbitrary pattern of 8 time-domain signals, as shown in Figure 7, was applied as input data. In this example, a rank value of 100 (binary), corresponding to searching for the 5th smallest signal, is applied to the rank-order setting circuit. Figure 8 shows the measurement results of the searching operation. The searched (filtered) input was correctly identified, and its winner address code was generated by the encoder. These waveforms were captured at the maximum time accuracy (i.e., time resolution) of 9.5 ns.
Figure 9 shows the response of the comparator. The comparator has a response time of 3.8 ns at a bias voltage of 0.9 V. Reducing will shorten the response time at the expense of more power dissipation. The time accuracy of the system is thus the minimum separation between two successive delay-time signals that the system can distinguish correctly. In this design, the time accuracy is at least twice the response time of the comparator because of the XOR function and the filtering operation. If two successive signals violate the time accuracy, they both generate narrow pulses at the outputs of the XORs, and both pulses will be filtered; consequently, the binary encoder may give a wrong decision. For the test chip, a time accuracy as small as 9.5 ns is achieved. As a matter of fact, the response time can be reduced by employing faster comparators, but usually with the tradeoff of more power consumption. High-speed synchronous comparators such as the one in [10] can be implemented in the system, since time-domain signals can be synchronized with the system clock. The time accuracy is not a critical issue, because it can be satisfied by changing the slope of the ramp voltage signal in the ATC; it mainly affects the latency required for a given resolution.
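The claimed relation between time accuracy and comparator speed can be checked against the measured numbers:

```python
# Consistency check of the measured figures quoted above.
comparator_response_ns = 3.8   # measured response time at a 0.9 V bias
time_accuracy_ns = 9.5         # accuracy achieved by the test chip

# The text argues the accuracy must be at least twice the comparator
# response time because of the XOR and filtering stages:
required_minimum_ns = 2 * comparator_response_ns   # 7.6 ns
print(time_accuracy_ns >= required_minimum_ns)     # True
```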
The performance of the test chip is summarized in Table 2 along with some ROF implementations from the literature. As can be seen from the table, the analog design in [5] gives the best performance in terms of compactness. It is also quite fast, but its precision is not good. The digital implementation in [2] is very fast, but it occupies a large area. The architecture in this study achieves small core size and low power consumption. Although this architecture still suffers from some sources of error, such as the offset of the comparators and process variations, most of these errors can be eliminated by increasing the time resolution of the system to a certain value, via changing the slope of the ramp voltage in the ATCs, so that the system can distinguish successive time signals correctly. As can be seen, the tradeoff is a larger latency. The problem of glitches in [7], which may occur at the outputs of the XOR gates, is solved by using programmable narrow-pulse filters.
The rank-order setting circuit can be removed in applications where the rank is fixed, making the architecture simpler and saving chip area. In addition, the chip real estate becomes smaller if a high- MIM capacitance technology is available; the area required for the capacitors in the floating-gate-MOS-based comparator would then be reduced significantly. In terms of computation accuracy, the proposed approach can preserve the precision of digital approaches, which is difficult to achieve with pure analog implementations.
The narrow-pulse filter employing delay elements described in Section 3.4 can be replaced by a better version shown in Figure 10. The power-hungry current source caused by in Figure 5 is removed,
thus reducing the total DC power consumption. As a result, an estimated power consumption as small as 77μW is achieved.
Once the location of the desired input is identified, the ROF function can be implemented by either an additional counter, as shown in Figure 1, to receive the filtered value in a binary code or an
additional multiplexer to select the filtered analog signal.
5. Conclusions
A low-power analog implementation of a rank-order searching circuit for building ROFs and associative memories has been developed using a time-domain computation scheme. The architecture preserves the accuracy of digital implementations while achieving the advantages of analog implementations in terms of low power dissipation and small chip real estate. It is also a promising solution when a large number of inputs is required, because it does not need many additional circuits. The circuit operation has been verified by experimental results obtained from the fabricated proof-of-concept chip.
The authors would like to thank Mr. Liem T. Nguyen for his original idea of the rank-order filter using the time-domain digital computation technique presented in [7]. The VLSI chip was fabricated through the fabrication program of the VLSI Design and Education Center, The University of Tokyo, in collaboration with Rohm Corporation and Toppan Printing Corporation.
It is the best simple calculator for everyday use.
Unlike an ordinary office calculator, Easy Calculator lets you evaluate complicated expressions while always keeping an eye on the input data.
* Expression calculation
* Calculation history
* Advanced percent calculation (Discounts, Tax, Tips)
* Automatic bracket closing. Just press )
* Advanced result formatting
* Round a number to a specified number of digits
* Sound and vibration effects on touch
* Advanced memory operations: M+, M- with unlimited number of memory cells
Try EasyCalcPro (Easy Calculator Pro) version for more features.
Les: Like that I can do complicated math on it. I also like having the ready history so I can check my work.
Great app Does everything well, great color just the right amount of options. Easy to use. Only thing I'm not "wild" about is that when you bring up the history it reverts back to the top of the
entries, very annoying if you use this a lot. Any way to fix this ?
Easy calculator! A plain & simple calculator. I love it. So simple. None of all that extra crap so many calculators come with. Good job. So far app runs great.
One of the best simple calculators Love the calculator. The stock one doesn't have a percentage button. This one has nice features, but I don't use it due to its gradient buttons, which are very hard
on the eyes. Am hoping the developer will change the color scheme soon.
- I haven't really played with this yet. Just felt compelled to explain "order of operations" to one of the below comments. 5+5+5+5*10 does indeed equal 65. Multiply/divide comes before addition/
subtraction in the order of operations. I feel bad the person left a 1 star review because of her own lack of knowledge of simple mathematics.
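The precedence point in this review is easy to verify; any language with conventional operator precedence agrees:

```python
# Multiplication binds tighter than addition, so only the final 5 is
# multiplied by 10: 5 + 5 + 5 + (5 * 10) = 65, not (5 + 5 + 5 + 5) * 10.
print(5 + 5 + 5 + 5 * 10)    # 65
print((5 + 5 + 5 + 5) * 10)  # 200, what a strictly left-to-right calculator shows
```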
Very good functions. I also liked the history. Color and design not very good, more work can be done.
What's New
* Show the result even if an expression contains unclosed parentheses.
* Icon changed.
* Increased speed of calculation.
Android's #1 Scientific Calculator. A fully featured scientific calculator which looks and operates like the real thing.
Looking for fractions? Degrees/minutes/seconds? Landscape mode? You need RealCalc Plus. See elsewhere on this page for the link, or select 'Upgrade' from the RealCalc menu.
RealCalc includes the following features:
* Traditional algebraic or RPN operation
* Result history
* Unit conversions
* Physical constants table
* Percentages
* 10 memories
* Binary, octal, and hexadecimal (can be enabled in Settings)
* Trig functions in degrees, radians or grads
* Scientific, engineering and fixed-point display modes
* Configurable digit grouping and decimal point
* Full built-in help
* A complete lack of advertising
RealCalc Plus contains all these features, plus:
* Fraction calculations and conversion to/from decimal
* Degrees/minutes/seconds calculations and conversion
* Landscape mode
* User-customizable unit conversions
* User-customizable constants
If you find RealCalc useful, please consider purchasing RealCalc Plus to support further development. Thank you.
* If you want data size conversions in multiples of 1024, use kibibytes, mebibytes, gibibytes, etc - see en.wikipedia.org/wiki/Kibibyte.
* If the percent key appears to give wrong answers, make sure you are pressing '=' at the end, e.g. '25 + 10 % =' will give 27.5.
* If sin/cos/tan functions don't give the answer you are expecting, make sure you are in the correct angle mode. Degrees, radians and grads are supported, indicated by DEG, RAD, GRAD in the display.
Use the DRG key to change mode.
* If any of the digit keys are disabled, or the decimal point doesn't work, or you have answers with letters in, or basic arithmetic appears to be wrong, then you are in binary, octal or hexadecimal
mode. Press DEC to return to decimal operation. If you don't need these modes, please make sure that 'Enable Radix Modes' is disabled in the settings.
* If you can't find HEX, BIN or OCT modes, go to the settings and make sure that 'Enable Radix Modes' is checked.
Please read the help for more information.
USA TODAY named Calculator Plus among its "25 Essential Apps", calling it the "handy calculator app that's garnered great user ratings"
I'm Calculator Plus - the perfect calculator for Android. I'm easy to use and beautifully designed to do things better than your phone or handheld calculator ever did.
I love saving you time and effort. I remember everything you calculate, and let you review it anytime, making me perfect for shopping, doing homework, balancing checkbooks, or even calculating taxes.
And if you quit the calculator and go do something else, it's all still here when you come back. You'll never need to type the same calculation twice again.
I'm attractive and effective and I make great use of your big, beautiful display:
- You'll never forget where you are in a calculation - I show you exactly what's happening at all times
- I remember everything, so you can take a break, then come back later and pick up where you left off
- I show your calculations in clear, elegant type that's easy to read, with commas just where they should be
- You can use backspace anytime to correct a simple mistake, instead of starting over
- Use memory to keep a running total you can actually see
- My percentage key shows exactly what it did, so you're not left confused
- Swipe memory keys aside for advanced math functions!
- NEW! Full support for Samsung Multi-Window - true multitasking for your Galaxy device.
- My intuitive, lovable design makes it simple to do everyday calculations on your phone or tablet
Let Calculator Plus and your phone or tablet finally put that handheld calculator to rest!
This is an ad supported version - our ad-free version is also available.
Calculator Plus (C) 2013 Digitalchemy, LLC
A universal, free, everyday-use calculator with scientific features. One of the top.
Good for simple and advanced calculations!
* Math expression calculation (built on an RPN algorithm, but with no RPN-calculator-style UI!)
* Percentages (calculation discount, tax, tip and other)
* Radix mode (HEX/BIN/OCT)
* Time calculation (two modes)
* Trigonometric functions. Radians and degrees with DMS feature (Degree-Minute-Second)
* Logarithmic and other functions
* Calculation history and memory
* Digit grouping
* Cool color themes (skins)
* Large buttons
* Modern, easy and very user friendly UI
* Very customizable!
* NO AD!
* Very small apk
* More features will be added. Stay in touch! :)
OLD NAME is Cube Calculator.
PRO-version is currently available on Google Play.
KW: mobicalc, mobicalculator, mobi, calc, cubecalc, mobicalcfree, android calculator, percentage, percent, science, scientific calculator, advanced, sine, simple, best, kalkulator, algebra, basic
A fully featured scientific calculator with a neat twist. It calculates when you shake your phone. For your everyday calculations it features an easy to use basic view with direct access to all the
most used functions. For complex calculations you can switch to an advanced view with a simple swipe of your thumb. Both views offer large buttons for easy usage on small and large devices.
- Entering entire expressions
- Calculation history with support for copying old results to calculator memory or clipboard
- Full support for Percentages (50 + 10% = 55!)
- Calculation memory support
- Trigonometric and hyperbolic functions in radians, degrees and grads
- Predefined list of most used physical, chemical and mathematical constants
- Possibility to define custom constants for often used values
- Advanced result formatting with digit grouping etc. (customizable)
- Supports devices with large screens like Galaxy Tab or Xoom with optimized layout and large keys
- Supports different Themes (Classic, elegant & colorful)
- To access advanced memory functions (M-, MC) and hyperbolic functions use the “2nd” key
- A short touch of the “C” key deletes the last digit/function, a long press clears the entire expression
With MyScript© Calculator, perform mathematical operations naturally using your handwriting.
Specially designed for Android devices.
Easy, simple and intuitive, just write the mathematical expression on the screen then let MyScript technology perform its magic converting symbols and numbers to digital text and delivering the
result in real time.
The same experience as writing on paper with the advantages of a digital device (Scratch-outs, results in real time, …).
Solve mathematical equations by hand without actually having to crunch the numbers yourself
- Works on your smartphone (Use your finger or a capacitive stylus with your android phone (Samsung Galaxy phones, HTC, Motorola, Sony Xperia, LG Optimus and others)
- Works on your tablet (Take advantage of the S-Pen with your Galaxy Note, or use any capacitive pen with your Galaxy Tab family of tablets, HTC flyer, Lenovo Thinkpad, Asus transformer and others)
- Use your handwriting to write any arithmetic formula
- Write and calculate mathematical expressions in an intuitive and natural way with no keyboard
- Scratch-out gestures to easily delete symbols and numbers
- Portrait and landscape operation
- Redo and undo functions.
Basic operations: +, -, x, ÷, +/‒, 1/x
Misc. Operations: %, √, x!, |x|
Powers/Exponentials: ℯx, xy , x2
Brackets: ( )
Trigonometry: cos, sin, tan
Inverse trigonometry: acos, asin, atan
Logarithms: ln , log
Constants: π, ℯ, Phi.
*** IMPORTANT: If you wish to report a problem encountered with MyScript Calculator, or simply ask a question to Vision Objects, don’t use the application feedback space as it’s impossible for us to
answer there.
Support website: http://myscriptsupport.visionobjects.com
Note about permissions:
We are asking permission to access Internet connection; this is to provide users the possibility to watch the video tutorial.
Future releases:
We are taking note of all your remarks and comments, many of you are asking us for equations, you can visit our web demo page, a web equation demo will show to you what we are planning to add in the
future, visit:
Equation web demo:
Handwriting recognition web demos:
A powerful simulator of the classic calculators, with advanced features and easy to use. The same ones we all know, now on your smartphone.
* Percentages
* Memories
* Trig functions in degrees, radians or grads
* Scientific, engineering and fixed-point display modes
* Configurable digit grouping and decimal point
Calculator simple and free.
Calculator includes basic everyday use arithmetic and percentage operations.
Memory functions and rotation of display support are fused with modern and pleasant design.
Calculator++ is an advanced, modern and easy to use scientific calculator #1.
Calculator++ helps you to do basic and advanced calculations on your mobile device.
Discuss Calculator++ on Facebook: http://facebook.com/calculatorpp
1. Always check angle units and numeral bases: trigonometric functions, integration and complex number computation work only for RAD!!!
2. Application contains ads! If you want to remove them purchase special option from application settings. Internet access permission is needed only for showing the ads. ADS ARE ONLY SHOWN ON THE
SECONDARY SCREENS! If internet is off - there are no ads!
++ easy to use
++ home screen widget
+ no need to press equals button any more - the result is calculated automatically
+ smart cursor positioning
+ copy/paste in one button
+ landscape/portrait orientations
++ drag buttons up or down to use special functions, operators etc
++ modern interface with possibility to choose themes
+ highlighting of expressions
+ history with all previous calculations and undo/redo buttons
++ variables and constants support (build-in and user defined)
++ complex number computations
+ support for a huge variety of functions
++ expression simplification: use 'identical to' sign (≡) to simplify current expression (2/3+5/9≡11/9, √(8)≡2√(2))
+ support for Android 1.6 and higher
+ open source
NOTE ABOUT INTERNET ACCESS: Calculator++ (version 1.2.24) contains advertisements which require internet access. To get rid of them, purchase the ad-free option (this can be done from the application's settings).
How can I get rid of the ads?
You can do it by purchasing the special option in the main application preferences.
Why Calculator++ needs INTERNET permission?
Currently application needs such permission only for one purpose - to show ads. If you buy the special option C++ will never use your internet connection.
How can I use functions written in the top right and bottom right corners of the button?
Press the button and slide lightly up or down. Depending on the value shown on that part of the button, the corresponding action will occur.
How can I toggle between radians and degrees?
To toggle between different angle units you can either change appropriate option in application settings or use the toggle switch located on the 6 button (current value is lighted with yellow color).
Also you can use deg() and rad() functions and ° operator to convert degrees to radians and vice versa.
268° = 4.67748
30.21° = 0.52726
rad(30, 21, 0) = 0.52726
deg(4.67748) = 268
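The first three conversions above match the standard library functions (values rounded to five decimals; the degree-minute-second form `rad(d, m, s)` has no direct stdlib equivalent):

```python
import math

# Degree/radian conversions matching the examples listed above.
print(round(math.radians(268), 5))    # 4.67748
print(round(math.radians(30.21), 5))  # 0.52726
print(round(math.degrees(4.67748)))   # 268
```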
Does C++ support %?
Yes, % function can be found in the top right corner of / button.
100 + 50% = 150
100 * 50% = 50
100 + 100 * 50% * 50% = 125
100 + (100 * 50% * (25 + 25)% + 100%) = 150
100 + (20 + 20)% = 140, but 100+ (20% + 20%) = 124.0
100 + 50% ^ 2 = 2600, but 100 + 50 ^ 2% = 101.08
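The additive and multiplicative percent conventions in the first two examples can be sketched as follows. The function names are illustrative, not the app's actual API, and the chained and exponent cases above follow more involved rules that this sketch does not cover:

```python
def add_percent(a: float, b: float) -> float:
    """a + b%  ->  a plus b percent OF a (the convention used above)."""
    return a + a * b / 100

def mul_percent(a: float, b: float) -> float:
    """a * b%  ->  plain multiplication by b/100."""
    return a * b / 100

print(add_percent(100, 50))  # 150.0
print(mul_percent(100, 50))  # 50.0
print(add_percent(25, 10))   # 27.5, matching RealCalc's '25 + 10 % =' tip above
```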
Does C++ support fractional calculations?
Yes, you can type your fractional expression in the editor and use ≡ (in the top right corner of = button). Also you can use ≡ to simplify expression.
2/3 + 5/9 ≡ 11/9
2/9 + 3/123 ≡ 91/369
(6-t) ^ 3 ≡ 216 - 108t + 18t ^ 2 - t ^ 3
Does C++ support complex calculations?
Yes, just enter complex expression (using i or √(-1) as imaginary number). ONLY IN RAD MODE!
(2i + 1) ^ 2 = -3 + 4i
e ^ i = 0.5403 + 0.84147i
Can C++ plot graph of the function?
Yes, type expression which contains 1 undefined variable (e.g. cos(t)) and click on the result. In the context menu choose 'Plot graph'.
Does C++ support matrix calculations?
No, it doesn't
Keywords: calculator++ calculator ++ engineer calculator, scientific calculator, integration, differentiation, derivative, mathematica, math, maths, mathematics, matlab, mathcad, percent, percentage,
complex numbers, plotting graphs, graph plot, plotter, calculation, symbolic calculations, widget
PowerCalc is a powerful Android scientific calculator with real look. It is one of the few Android calculators with complex number equations support. Features:
* Real equation view editor with brackets and operator priority support
* Component or polar complex entry/view mode
* Equation and result history
* 7 easy to use memories
* Large universal/physical/mathematical/chemical constant table
* Degrees, radians and grads mode for trigonometric functions
* Fixed, scientific and engineering view mode
* Easy to use with real look
* Advertisement free!
Would you like a multiline equation editor with equation syntax highlighting, actual bracket highlighting and support for trigonometric functions of complex arguments? Upgrade to PowerCalc Pro.
* Multiline equation editor
* Equation syntax highlighting
* Actual bracket highlighting
* Trigonometric functions with complex argument support
Stay tuned! We are preparing new functionalities:
* Unit conversions
* Radix modes
* Help
Found a bug? Please contact us so we can fix it.
If you find PowerCalc useful please upgrade to PowerCalc Pro to support further development. Thank you!
1. Calculate Percents & Percentages
2. Percent Discounts (sale price)
3. Percent Markups (increase by)
4. Percent Margin (selling price)
5. Calculate Tips.
6. Percentage Difference (Change)
7. Percentage (what % of) i.e. x is what percentage of y
Enter any two values and the third is computed, e.g. [23]% of [x] = 115: x will be computed (500).
Customize background colors.
A calculator with 10 computing modes in one application + a handy scientific reference facility - different modes allow: 1) basic arithmetic (both decimals and fractions), 2) scientific calculations,
3) hex, oct & bin format calculations, 4) graphing applications, 5) matrices, 6) complex numbers, 7) quick formulas (including the ability to create custom formulas), 8) quick conversions, 9) solving
algebraic equations & 10) time calculations.
Functions include:
* General Arithmetic Functions
* Trigonometric Functions - radians, degrees & gradients - including hyperbolic option
* Power & Root Functions
* Log Functions
* Modulus Function
* Random Number Functions
* Permutations (nPr) & Combinations (nCr)
* Highest Common Factor & Lowest Common Multiple
* Statistics Functions - Statistics Summary (returns the count (n), sum, product, sum of squares, minimum, maximum, median, mean, geometric mean, variance, coefficient of variation & standard
deviation of a series of numbers), Bessel Functions, Beta Function, Beta Probability Density, Binomial Distribution, Chi-Squared Distribution, Confidence Interval, Digamma Function, Error Function,
Exponential Density, Fisher F Density, Gamma Function, Gamma Probability Density, Hypergeometric Distribution, Normal Distribution, Poisson Distribution, Student T-Density & Weibull Distribution
* Conversion Functions - covers all common units for distance, area, volume, weight, density, speed, pressure, energy, power, frequency, magnetic flux density, dynamic viscosity, temperature, heat
transfer coefficient, time, angles, data size, fuel efficiency & exchange rates
* Constants - a wide range of inbuilt constants listed in 4 categories:
1) Physical & Astronomical Constants - press to include into a calculation or long press for more information on the constant and its relationship to other constants
2) Periodic Table - a full listing of the periodic table - press to input an element's atomic mass into a calculation or long press for more information on the chosen element - the app also includes
a clickable, pictorial representation of the periodic table
3) Solar System - press to input a planet's orbit distance into a calculation or long press for more information on the chosen planet
4) My Constants - a set of personal constants that can be added via the History
* Convert between hex, oct, bin & dec
* AND, OR, XOR, NOT, NAND, NOR & XNOR Functions
* Left Hand & Right Hand Shift
* Plotter with a table also available together with the graph
* Complex numbers in Cartesian, Polar or Euler Identity format
* Fractions Mode for general arithmetic functions including use of parentheses, squares, cubes and their roots
* 20 Memory Registers in each of the calculation modes
* A complete record of each calculation is stored in the calculation history, the result of which can be used in future calculations
An extensive help facility is available which also includes some useful scientific reference sections covering names in the metric system, useful mathematical formulas and a detailed listing of
physical laws containing a brief description of each law.
A default screen layout is available for each function showing all buttons on one screen or, alternatively, all the functions are also available on a range of scrollable layouts which are more
suitable for small screens - output can be set to scroll either vertically (the default) or horizontally as preferred – output font size can be increased or decreased by long pressing the + or - keys
A full range of settings allow easy customisation - move to SD for 2.2+ users
Please email any questions that are not answered in the help section or any requests for bug fixes, changes or extensions regarding the functions of the calculator - glad to help wherever possible.
This is an ad-supported app - an ad-free paid version is also available for a nominal US$ 0.99 - please search for Scientific Calculator (adfree)
A simple calculator with big buttons, large display and four basic functions: addition, subtraction, multiplication and division. All you really need in a calculator. No wasted space for functions
you never use.
Simplify your life with this calculator app. Save time doing math homework, paying bills, adding up tips, resolving finances, and calculating your loan and mortgage payments.
Calculator features:
-Basic arithmetic (add, subtract, multiply, divide)
-Large buttons
-Supports tablets and smart phones
-Clear plus All Clear functionality
-Saves last calculation upon reopening calculator
-Calculates in order of input
-Free app
-Small app size
-Small ads at the top
-Formatted results (i.e. 5,152,225.32)
Download now and see the greatness of this basic calculator for free. No glasses needed.
Calculator Kalkylator 計算器 계산기 電卓 калькулятор Calcolatore Regnemaskine Calcolatore
A simple, beautiful calculator inspired by the iPhone calculator but with added functionality such as skins and layouts. It is designed to replace the rather lame stock calculator with a calculator
that looks great while retaining ease of use and intuitive design. Calculator is designed to meet all of your calculating needs!
***Calculator Features***
*Simple, accurate calculator: Leave the calculator in portrait view to see just a simple calculator, or turn the calculator on its side to reveal a scientific calculator
*Calculator Skins: Calculator v.3.0 now supports changing skins to alter the look and feel of the calculator. Choose from an array of different skins to match your favorite phone OS including
iOS, HTC Sense and Samsung Galaxy. Also if there is a certain skin you would like to see made for you, don't hesitate to send an email!
*Different calculator layouts: Choose between 3 different calculator layouts. 1) Default Layout for a standard 10-key calculator in portrait mode and scientific calculator in landscape. 2) Business
Layout adds a '%' key in portrait mode and a '00' key. 3) Simple Layout keeps the 10-key calculator in both portrait and landscape mode when all you need is a simple calculator.
*Memory Buttons- Fully functional 'M+', 'M-', 'MR' and 'MC' calculator buttons. Calculator will remember the value stored in it even if you leave the app.
*Landscape calculator: Turn your phone on its side to reveal a scientific calculator with advanced functions like square root, exponents, factorials and more.
*Trig calculator: The landscape calculator even features full support for trigonometric functions. Calculator can take Sin, Cos, Tan, inverse trig, and hyperbolic functions and can calculate in both
radian and degree modes. Great for use in algebra and other math classes.
*Calculator looks great from the smallest to the largest screens as well as low and high density screens. Calculator also works on the Galaxy Tab!
*App2SD- You can install Calculator to your SD card if using Android 2.2+
Check my website below for a full Calculator changelog.
Any suggestions for improvements or new features for the calculator are greatly appreciated. Just send me an email!
Website: www.stormindorman.com
Twitter: @stormindorman
I hope you enjoy using Calculator!
Keywords: calculator calculate calc icalc calculation utility handy tool tools math trig business percent iphone htc samsung skins algebra trigonometry construction number add subtract multiply
divide skins themes layout stormin dorman productions
★ MyCalc is a fully featured All-In-One calculator for your everyday calculations. ★
★★★★★ "Nice Calculator! Handles everyday calculations and then some with ease and speed!" - Michael Christy (Nov 9, 2013)
★★★★★ "Quite Nice. I am always on the lookout for tools like this app. It is very good." - Leland Whitlock (Nov 9, 2013)
★★★★★ "Nice app. Works great helps me very much in the office and on the go. Keep up the great work." - Randy Salazar (Nov 7, 2013)
MyCalc includes: Scientific calculator, Standard calculator, Currency calculator, Tip and Percent calculator which can be accessed from the menu.
● My Calc Features:
✓ Result history
✓ Unit conversions
✓ Physical constants table
✓ Traditional algebraic operation
✓ Permutations (nPr) & Combinations (nCr)
✓ Trigonometric and hyperbolic functions in radians, degrees and grads
✓ Scientific, engineering and fixed-point display modes
✓ Calculation memory support
✓ Full support for percentages (20 + 10% = 22)
✓ Decimal degrees into degrees, minutes, and seconds converter
✓ Tip calc - calculates tip quickly and easily and split the bill between any number of people.
✓ Percent calc (calculation discount, tax, margin and other)
✓ Currency Converter - Track Currencies from around the world with live currency rates. Easily convert between your favorite currencies.
Calculator done the right way!
Calculator Pro is designed for everyone looking for simplicity and functionality. You can enjoy using a standard calculator for basic operations or extend it into a scientific one for more complex
calculations. Just tilt your Android device into the landscape mode!
Whether you’re a diligent student, an accountant, a banking manager, a housewife in charge of the family finances or even a maths genius, this calc will save both your effort and time and let you
calculate anything you need!
• Two modes are available: do basic calculations in the Portrait Mode or tilt your smartphone or tablet and go advanced in the Landscape Mode
• Degrees and Radians calculations
• Memory buttons to help you out with complex calculations
• Choose the skin that suits you (additional skins available through in-app purchase)
• Accidentally input the wrong number? Just swipe with your finger to edit it!
• Copy and paste results and expressions directly into the current calculation
• History Bar: see your full calculation history directly on the screen
• Feel free to use it as an office calculator for all kinds of financial and statistical computations
• Use Calculator Pro for college studies, accounting, algebra and geometry lessons, engineering calculations and much more
• Work with decimal fractions, algebraic formulas, solve equations or just master your math skills
Now you can do any calculations on the go seamlessly!
I believe every student in the world should have free access to a graphing calculator and be able to take math and science to the next level.
Thanks for your support, Graphing Calculator recently got 1M+ downloads.
A Simple Graphing Calculator
- Added Basic Calculator
- Added Scientific Calculator
- Added Graphing Calculator
- Pinch Zoom for Graphing Calculator
- Take Screenshot of Graph
- Added Tablet Screen Size Support
- Improved User Interaction for Graphing Calculator
- Scientific Notation
- Radian vs Degree
- InApp purchase to remove ads banner
- copy and paste using clipboard
Please press Shift on the top left corner, if you don't find the operation key.
- buttons like ln, sqrt, log, 1/x, sinh, sinh-1, cosh, cosh-1, tanh, tanh-1
- abs means absolute value |x|
In Jefferson School, 300 students study French or Spanish or both
Posted 13 Dec 2006, 13:03
In Jefferson School, 300 students study French or Spanish or both. If 100 of these students do not study French, how many of these students study both French and Spanish?
(1) Of the 300 students, 60 do not study Spanish.
(2) A total of 240 of the students study Spanish
Explanations please!
ncprasad replied:
S = 100 (those who do not study French). So the equation is now
F + 100 - (F&S) = 300
(1) Of the 300 students, 60 do not study Spanish ==> F = 60. Substituting this in the equation, we get F&S = 140. So sufficient.
(2) A total of 240 of the students study Spanish. This means that 140 students study Spanish and also French. So sufficient.
Answer is D.
MBAlad asked:
If 100 do not study French then doesn't that mean that 200 do study French?
ncprasad replied:
Yes, but also consider that some of the 200 can also study Spanish. If you want to identify the students who study only French, then you will need to exclude the Spanish students from this group.
MBAlad replied:
Thanks! Was having a mental block.
Another reply:
Straightforward question. S U F = S + F - SnF.
The stem gave: LHS = 300 and only S = 100. To find: SnF.
(1) only F = 60 -> S = 300 - 60 = 240; from the stem, only S = 100 ->> SnF = 140. Sufficient.
(2) S = 240; from the stem, only S = 100 ->> intersection = 140. Sufficient.
Check this: given that it's either S or F or both, you can see the picture mentally and pick D in 5-10 seconds.
jimmyjamesdonkey asked:
Is there any way to solve this using a table instead of a Venn diagram? For example:

...................French..............No French............Total
Spanish
No Spanish......................................................60
Total

Using a table I got E. It is obviously wrong, but can you explain why a table of this sort won't work in this situation and a Venn diagram is needed?

Witchiegrlie replied:
You can't use 300 as the total, since 300 is the number of people who study French, Spanish, or BOTH, while the chart has a NEITHER column; so you can't use 300 as the total.

Crow asked:
One question for ncprasad: starting from F + 100 - (F&S) = 300, statement (1) says that 60 do not study Spanish, giving F = 60. When I plug in the 60, I get 160 - (F&S) = 300 and a negative 140. Maybe it's late and I can't think, but it shouldn't be a negative number. Can someone clarify this? I get a negative 140 with this formula as well (F + S - F&S = 300). I do get a +140 by treating S as S = 100 + F&S (from the stem saying 100 = S - F&S).
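The inclusion-exclusion arithmetic in this thread can be sketched in a few lines (the variable names are mine):

```python
total = 300          # every student studies French, Spanish, or both
not_french = 100     # since everyone studies at least one, these study only Spanish
only_spanish = not_french
french_total = total - only_spanish          # 200 study French (possibly with Spanish)

# Statement (1): 60 do not study Spanish, i.e. they study only French
both_from_1 = french_total - 60              # 140

# Statement (2): 240 students study Spanish in total
both_from_2 = 240 - only_spanish             # 140

print(both_from_1, both_from_2)              # 140 140, so each statement suffices: D
```

Both statements give the same overlap, which is why the answer is D and why the "negative 140" above comes only from treating S = 100 as the total Spanish count instead of the Spanish-only count.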
Hybrid model of Conditional Random Field and Support Vector Machine
Qinfeng Shi, Mark Reid and Tiberio Caetano
In: Workshop at the 23rd Annual Conference on Neural Information Processing Systems, 11-12 Dec 2009, Whistler, Canada.
Conditional Random Fields (CRFs) are semi-generative (despite often being classified as discriminative models) in the sense that they estimate the conditional probability $D(y|x)$ (given any observation $x$) of any label $y$, which is {\bf generated} from $D(y|x)$. Estimating $D(y|x)$ is usually more efficient than estimating $D(x|y)$ when there are not sufficient observations $x$ per class or there are too many labels (e.g., there are exponentially many $y$ for a chain-like $x$). Unlike CRFs, the Support Vector Machine (SVM) seeks a predicting function without modeling the underlying distribution. It is Fisher inconsistent in the multiclass and structured-label case; however, it does provide a PAC bound on the true error. In particular, its PAC-Bayes margin bound is rather tight: knowing the training sample size $m$, hypothesis space $\mathcal{H}$ and margin threshold $\gamma$, with overwhelming probability at least $1-\delta$, the true error is upper bounded by the empirical error plus \[ O\left(\sqrt{\frac{\gamma^{-2}\log|\mathcal{H}|\log m+\log {\delta^{-1}}}{m}}\right).\] Is there a model that is Fisher consistent for classification and has a generalization bound? We use a naive combination of the two models, simply summing up their losses with weights. This yields a surprising theoretical result: the hybrid loss can be Fisher consistent in some circumstances, and it has a PAC-Bayes bound on its true error.
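As a rough illustration of the weighted-sum-of-losses idea, here is a toy sketch for a single multiclass example (this is my own sketch, not the paper's actual formulation; the function names and the weight `alpha` are assumptions):

```python
import numpy as np

def crf_log_loss(scores, true_idx):
    # CRF-style negative conditional log-likelihood over candidate labels
    log_z = np.log(np.sum(np.exp(scores)))
    return log_z - scores[true_idx]

def structured_hinge_loss(scores, true_idx, margin=1.0):
    # SVM-style margin violation against the best competing label
    others = np.delete(scores, true_idx)
    return max(0.0, margin - (scores[true_idx] - others.max()))

def hybrid_loss(scores, true_idx, alpha=0.5):
    # Naive weighted sum of the two surrogate losses
    return (alpha * crf_log_loss(scores, true_idx)
            + (1.0 - alpha) * structured_hinge_loss(scores, true_idx))
```

With a confident score vector like `[2.0, 0.0, -1.0]` and true label 0, the hinge term is zero while the log-loss term stays positive, so the hybrid interpolates between the two behaviors.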
Rotated Quadratic Equations
April 14th 2010, 10:53 PM #1
Rotated Quadratic Equations
$x^2 + 2\sqrt{3}xy + 3y^2 + 8\sqrt{3}x - 8y + 32 = 0$
A) Use the Discriminant test to determine whether the graph of the equation is a parabola, an ellipse, or a hyperbola.
B) Find a suitable rotation of axis
C) Find an equation for the graph in a $x'y'$ - plane
D) Sketch the graph over top both coordinate systems with your angle of rotation displayed.
My Work:
A) $B^2 - 4AC \rightarrow (2\sqrt{3})^2 - 4(1)(3) = 0$ So, The equation is a parabola.
B) Here's where I get somewhat confused, so bear with my work.
$tan 2\alpha = \frac{B}{A-C} \rightarrow \frac{2 \sqrt{3}}{1-3} \rightarrow \frac{\sqrt{3}}{-1}$
$2 \alpha = tan^{-1}\left(-\sqrt{3}\right) = \frac{-\pi}{3}$
$\alpha = \frac{-\pi}{6}$
From here on I'm not really quite sure what to do. The notes I have on this don't really give me enough help to continue on. I know later on though I have to utilize these:
$A'=Acos^2 \alpha + Bcos\alpha sin\alpha + Csin^2 \alpha$
$B'=Bcos2\alpha + (C-A)sin2 \alpha$
$C'=Asin^2 \alpha - Bcos\alpha sin\alpha + Ccos^2 \alpha$
$D'=Dcos\alpha + Esin\alpha$
$E'=-Dsin\alpha + Ecos\alpha$
Since you know that $\alpha= -\frac{\pi}{6}$, you can immediately calculate $\cos(\alpha)=\frac{\sqrt{3}}{2}$ and $\sin(\alpha)= -\frac{1}{2}$ and put them into those equations. What is stopping you?
So I just plug in those values into the equations I stated above and come up with the values for A',B',C',D',E' and come up with the new "shifted" equation?
That's what the formulas say isn't it?
$A'=1\left(\frac{\sqrt{3}}{2}\right)^2 + 2\sqrt{3}\left(\frac{\sqrt{3}}{2}\right)\left(\frac{-1}{2}\right) + 3\left(\frac{-1}{2}\right)^2 \rightarrow A'=\frac{3}{4}$
$B'=2 \sqrt{3}\left(\frac{1}{2}\right) + (3-1)\left(\frac{-\sqrt{3}}{2}\right) \rightarrow B'=0$
$C'=1\left(\frac{-1}{2}\right)^2 - 2\sqrt{3}\left(\frac{\sqrt{3}}{2}\right)\left(\frac{-1}{2}\right) + 3\left(\frac{\sqrt{3}}{2}\right)^2 \rightarrow C'=\frac{19}{4}$
$D'=8\sqrt{3}\left(\frac{\sqrt{3}}{2}\right) + (-8)\left(\frac{-1}{2}\right) \rightarrow D'=16$
$E'=-8\sqrt{3}\left(\frac{-1}{2}\right) + (-8)\left(\frac{\sqrt{3}}{2}\right) \rightarrow E'=0$
$\frac{3}{4}(x')^2 + \frac{19}{4}(y')^2 + 16x' + 32 = 0$
Is this equation correct for the graph in the $x'y'$ - plane? I'm having a little trouble putting this in the correct form of a parabola (if this equation is correct that is), if someone could
point me in the right direction that would be great.
Last edited by VitaX; April 15th 2010 at 02:05 PM.
Am I supposed to utilize $x=x'cos\alpha - y'sin\alpha$ and $y=x'sin\alpha + y'cos\alpha$
I'm so confused right now.
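Since no worked reply appears above, here is one way to verify the rotation independently with sympy (a sketch; the symbols x1, y1 stand for $x'$, $y'$, and the angle $\alpha = -\pi/6$ is taken from the work above):

```python
import sympy as sp

x, y, x1, y1 = sp.symbols("x y x1 y1")
a = -sp.pi / 6  # rotation angle found from tan(2a) = B/(A - C)

# Original conic
expr = x**2 + 2*sp.sqrt(3)*x*y + 3*y**2 + 8*sp.sqrt(3)*x - 8*y + 32

# Substitute x = x'cos(a) - y'sin(a), y = x'sin(a) + y'cos(a)
rotated = sp.expand(expr.subs([(x, x1*sp.cos(a) - y1*sp.sin(a)),
                               (y, x1*sp.sin(a) + y1*sp.cos(a))]))
print(rotated)  # the x'y' cross term and the y' linear term should vanish
```

Expanding gives $4(y')^2 + 16x' + 32 = 0$, i.e. $(y')^2 = -4(x' + 2)$: a parabola in the $x'y'$-plane with vertex $(-2, 0)$ opening in the negative $x'$ direction. This suggests rechecking the values of $A'$ and $C'$ computed above.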
CCSS.Math.Content.HSS-MD.A.4 - Wolfram Demonstrations Project
US Common Core State Standard Math HSS-MD.A.4
Demonstrations 1 - 5 of 5
Description of Standard: (+) Develop a probability distribution for a random variable defined for a sample space in which probabilities are assigned empirically; find the expected value. For example,
find a current data distribution on the number of TV sets per household in the United States, and calculate the expected number of sets per household. How many TV sets would you expect to find in 100
randomly selected households?
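The expected-value computation in the example can be sketched as follows; the distribution below is invented for illustration, not real survey data:

```python
# Hypothetical empirical distribution of TV sets per household
dist = {0: 0.03, 1: 0.30, 2: 0.35, 3: 0.20, 4: 0.12}
assert abs(sum(dist.values()) - 1.0) < 1e-9   # probabilities must sum to 1

expected_sets = sum(k * p for k, p in dist.items())   # E[X] = sum of k * P(X = k)
sets_in_100_households = 100 * expected_sets
print(expected_sets, sets_in_100_households)          # ~2.08 sets/household, ~208 in 100
```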
What is the definition of EIRP?

Question (21st January 2005):
Hello, everybody. I have a question: what is the definition of EIRP, and what is its relationship with antenna gain? Thanks in advance.
Reply 1:
Well, one theory could say that EIRP is one of our moderator buddies, but here http://www.csgnetwork.com/antennaecalc.html you can find a more comprehensive definition of Effective Isotropic Radiated Power, and some tools to calculate it from the antenna gain and the actual transmitted output power.
Reply 2:
EIRP is a measure of the approximate power radiated from an antenna. The approximation works well as it considers almost all relevant factors. Say an antenna is capable of transmitting a power P (watts); this P is not what is actually radiated. Accounting for the various losses gives the actual power radiated. For simplicity, EIRP is calculated in the log (dB) system, where losses can be directly subtracted from the net power:

EIRP = P - Losses

EIRP is directly proportional to antenna gain, and it is a most important parameter when designing various systems (say, a satellite link).
Reply 3 (re: the calculator link above):
The power should be the actual transmitter output power. I think it's a typo :)
Reply 4:
A famous cowboy from the American west, Wyatt Eirp.
Reply 5:
You can refer to this document: "Antenna Pattern Measurement: Theory and Equations".
Reply 6:
The Effective Isotropic Radiated Power (EIRP) of a transmitter is the power that the transmitter appears to have if the transmitter were an isotropic radiator, i.e. if the antenna radiated equally in all directions.

By virtue of the gain of a radio antenna (or dish), a beam is formed that preferentially transmits the energy in one direction. The EIRP is estimated by adding the gain (of the antenna) and the transmitter power (of the radio):

EIRP = transmitter power + antenna gain - cable loss

(all expressed in dB terms; 100 mW <=> 20 dBm)
Follow-up question:
Thank you for answering my question. Besides the cable loss, is the path loss also an important factor in measuring the EIRP?

Reply 7:
Path loss is not included in measuring EIRP, because EIRP is the effective power radiated by the antenna: the transmitter's power (dB) plus the antenna gain (dB). Path loss is experienced by the radiated power after it leaves the antenna, so it is an external loss factor.
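The dB bookkeeping described in the replies above can be sketched in a few lines (the function and variable names are mine):

```python
def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float, cable_loss_db: float = 0.0) -> float:
    """EIRP in dBm = transmitter power (dBm) + antenna gain (dBi) - cable loss (dB)."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

def dbm_to_mw(p_dbm: float) -> float:
    """Convert dBm back to milliwatts (100 mW <=> 20 dBm)."""
    return 10.0 ** (p_dbm / 10.0)

# Example: a 100 mW (20 dBm) radio, a 6 dBi antenna, and 1 dB of cable loss
print(eirp_dbm(20.0, 6.0, 1.0))   # 25.0 dBm
print(dbm_to_mw(20.0))            # 100.0 mW
```

Note that path loss does not appear anywhere: it affects the received power at a distance, not the EIRP.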
Error of Gaussian quadrature rule
Prove that the error of the Gaussian quadrature rule is
$$\int_a^b f(x)\,dx \;-\; \sum_{i=1}^{n} c_i f(x_i) \;=\; \frac{f^{(2n)}(z)}{(2n)!}\int_a^b \prod_{i=1}^{n}(x - x_i)^2\,dx$$
for some $z \in (a, b)$. Hint: Consider some kind of interpolation of $f$.
It would be nice if you could provide a clear proof so I could understand the error of the Gaussian quadrature rule.
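A quick numerical check of the formula (not a proof), using numpy's Gauss-Legendre rule with $n = 2$ on $[-1, 1]$ and $f(x) = x^4$, so that $f^{(2n)} = f'''' \equiv 24$ and both sides can be evaluated exactly:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(2)   # x_i = ±1/√3, c_i = 1
f = lambda x: x**4
approx = float(np.dot(weights, f(nodes)))             # quadrature value, 2/9
exact = 2.0 / 5.0                                     # integral of x^4 over [-1, 1]

# Right-hand side: f''''(z)/4! * ∫ (x² - 1/3)² dx = (24/24) * (8/45)
predicted_error = 8.0 / 45.0
print(exact - approx, predicted_error)                # both are 8/45 ≈ 0.17778
```

For this degree-4 integrand the error term is constant in $z$, so the two sides agree to machine precision, which is consistent with the claimed formula.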
The European Mathematical Society
The first president of the European Mathematical Society, Friedrich Hirzebruch, has passed away. Fritz Hirzebruch studied mathematics at Münster. In 1956 he became professor at Bonn, where he remained until his retirement. In 1980 he founded the Max-Planck-Institut für Mathematik, serving as Director until 1995. His profound contributions to mathematics – in applications of topology to algebraic manifolds and number theory – and to German mathematical life were honoured by many prizes, including that of the Wolf Foundation in 1988 and the Georg Cantor medal of the DMV in 2004. He was honorary president of the 1998 International Congress of Mathematicians in Berlin.
European mathematics has lost a great scholar and the EMS has lost one of its architects.
Detailed biographies: http://www-history.mcs.st-and.ac.uk/history/Biographies/Hirzebruch.html
June 11, 2012 - 23:25 — ems_site
Obituary in the New York Times
May 30, 2012 - 09:44 — ems_site
Fritz Hirzebruch (1927-2012)
Slides of an Oberwolfach talk on Hirzebruch's work and influence by Andrew Ranicki (Edinburgh):
"Hands on" activity
Students will (1) measure the density of water, (2) measure and compare the density of salt water, (3) demonstrate changes in density by adding marbles to a floating plastic container until it sinks,
and (4) compare their result with calculated predictions.
• A useful definition of a gram is the mass of one milliliter of pure water.
• Whether an object will float depends on the amount of water that object displaces.
• Clear plastic containers about 15 cm (6 in) square and 4 cm (1.6 in) deep (for example those used for take-home food such as salads). One per group.
• Graduated cylinders, 50 ml or 100 ml
• Scales, 0 to 200 grams
• Glass marbles, 125 per group (pennies will also work and are less likely to roll around)
• Glass or plastic bowls, large enough to hold one of the clear plastic containers listed above
• Table salt, about 4 gm per group
• Paper towels
Clear plastic containers made with thin plastic will give the best results because the thin plastic will have little effect on volumes and densities. If you cannot find these, however, you can try
similar containers such as Tupperware, but results may not be as accurate. Separate lids and bottoms of the plastic containers. Each group should have 1 lid or 1 bottom. Use the graduated cylinders
to pour water into the lid or bottom to find its liquid capacity (volume). They should be in the 500 to 700 ml range. These containers will be the boats used in Part II. Measure the mass of 10 or 20
marbles together, and then calculate the mass per marble. Students will need this value for Part II of the activity. Alternatively, before Part II each group of students can weigh 10 or 20 marbles,
and calculate the mass per marble themselves, rather than having the teacher provide them with this number.
One of the most important molecules on Earth is water. Water is commonly used as a reference for physical properties. One such physical property, density, is defined as a material's mass (e.g., in grams) divided by its volume (e.g., in milliliters): d = m/v. The density of water, 1 g/ml, is also used as a means of comparison called specific gravity. Water is defined to have a
specific gravity of 1 (no units). Objects with a specific gravity of less than one will float, while objects with a specific gravity of more than one will sink. Seawater has an average specific
gravity of 1.028 with 3.5 g of dissolved salts for every 100 g of pure water. Ship designs and carrying capacity are based upon the known density of water. The human body is about 70% water and has
about the same average density as water.
Part I
Determining the density of tap water:
1. Measure the mass of the empty graduated cylinder. Record the weight.
2. Fill the cylinder with water to the 100 ml line. This is the volume.
3. Measure the mass of the cylinder with water.
4. Subtract the mass of the empty cylinder from the mass of the filled cylinder.
5. Divide the mass of the water by its volume. This will yield the density of the tap water. Record your result.
Determining the density of tap water with salt:
1. Use an eyedropper to remove 2 g (2 ml) of water from the cylinder.
2. While the cylinder is on the scales, add 2 g of salt.
3. Read the new water level inside the cylinder. This is the new volume.
4. Divide the mass of the water inside the cylinder by its new volume. This is the density of the salt water. Record your result.
5. Compare the densities of the salt water and the fresh water.
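The Part I arithmetic boils down to d = m/v; here is a minimal sketch with hypothetical readings (the 98.8 ml salt-water volume is an invented example, not an expected result):

```python
def density(mass_g: float, volume_ml: float) -> float:
    # d = m / v, in g/ml
    return mass_g / volume_ml

# Hypothetical readings from the procedure above
fresh = density(100.0, 100.0)   # 100 g of tap water in 100 ml -> 1.0 g/ml
salty = density(100.0, 98.8)    # 2 g of water swapped for 2 g of salt; volume read as 98.8 ml
print(fresh, salty)             # salty > fresh: the salt water is denser
```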
Part II
1. Measure the volume of the plastic container (boat). Fill a graduated cylinder with 100 ml of water and pour it into the hull of your boat. Do this as many times as necessary until the boat is
full. Be sure to keep track of how many times you re-filled the cylinder. On the last cylinder of water, any water left over in the graduated cylinder must be subtracted from the 100 ml
originally in the cylinder. Multiply the number of times you refilled your cylinder by 100, then subtract the amount of water left over in the last cylinder. This is the total volume, TOTAL(ml).
Record your answer.
2. Find the mass your boat will carry. Since one milliliter of water is equal to one gram, the volume in ml of your boat also equals the mass it can carry in grams. Write your total mass, TOTAL(g).
3. Calculate the number of marbles your boat will hold. Divide your TOTAL(g) by the mass of the marble (from the "Preparation" section). This equals the number of marbles your boat should be able to
carry. Record this number. Calculate 90% of that number by multiplying by 0.9.
4. Count out 90% of the calculated number of marbles and place them into your boat. Be sure the marbles are distributed evenly to avoid tipping of the boat.
5. Carefully place the boat, with the number of marbles calculated in step 4 inside the boat, into the bowl of water.
6. Add more marbles to your boat, one at a time, counting and adding these to the previous number of marbles. Continue this until the boat sinks. Remember to place the marbles carefully to maintain
a level boat. Record the number of marbles it took to sink the boat.
7. Compare the calculated number of marbles to the actual number of marbles held afloat by your boat before it sank. If the numbers are different, what factors may have contributed to that difference?
8. To repeat the experiment, be sure to first dry the marbles and the inside of your boat.
9. Optional: Add a significant amount (e.g., 20 grams or more) of salt to the water, then repeat the experiment. Do you find a difference? Why?
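Steps 1-3 of the prediction in Part II can be sketched as follows (the boat volume and marble mass are made-up example values; your measurements will differ):

```python
boat_volume_ml = 550       # TOTAL(ml), measured with the graduated cylinder
marble_mass_g = 5.2        # e.g. 20 marbles weighed together at 104 g

max_load_g = boat_volume_ml * 1.0           # 1 ml of displaced water supports 1 g
predicted_marbles = int(max_load_g // marble_mass_g)
start_count = int(0.9 * predicted_marbles)  # step 3: load 90% before floating the boat

print(predicted_marbles, start_count)       # 105 94
```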
Part I: A useful definition of a gram is the mass of one cubic centimeter (cm 3 ), also called a milliliter (ml), of pure water. The density of pure water varies with temperature: water contracts
until almost freezing and expands into a gas when boiling. The density of pure water is 1 g/ml at 4°C (39°F); however this changes by less than 0.2% at room temperature. Adding salt increases the
density of the water.
Part II: For any floating object, the buoyant force equals the weight of the liquid displaced (Archimedes' Principle). A plastic boat which holds 500 ml of water will support 500 g of any denser
material. A less dense load of the same mass will have a higher center of gravity and will cause the boat to tip.
Discuss whether it will be easier for a person to float in salt water or fresh water. Why? Have any of the students noticed this difference? For stability, the center of gravity of a boat must be
below the center of buoyancy as in Figure 1 (right). The boat in Figure 1 (left) will tip over. Standing up in a canoe shifts the center of gravity and can cause it to flip over. What other types of
boats are designed to be more stable than canoes? What are the advantages of the canoe design over more stable boats?
• buoyant (buoyancy): 1) the tendency of an object to float or rise when submerged in a fluid. 2) the power of a fluid to exert an upward force on a body placed in it.
• density: mass per unit volume of a substance. Usually expressed as grams per cubic centimeter. For ocean water with a salinity of 35 at 0 °C, the density is 1.028 g/cm³.
• gram: 1/1000 of a kilogram. Abbreviated g or gm.
• specific gravity: the ratio of the density of a given substance to that of pure water at 4 °C and at a pressure of one atmosphere.
• volume: the amount of space occupied by a three-dimensional object.
SOURCE: San Juan Institute Activity Series.
• Science Standard 1, Grades 6-8 Knows the properties that make water an essential component of Earth system (e.g., its ability to act as a solvent, its ability to remain a liquid at most Earth
• Science Standard 1, Grades 3-5 Knows that water can change from one state to another (solid, liquid, gas) through various processes (e.g., freezing, condensation, precipitation, evaporation)
• Science Standard 10, Grades K-2 Knows that different objects are made up of many different types of materials (e.g., cloth, paper, wood, metal) and have many different observable properties
(e.g., color, size, shape, weight)
• Science Standard 10, Grades K-2 Knows that things can be done to materials to change some of their properties (e.g., heating, freezing, mixing, cutting, dissolving, bending), but not all
materials respond the same way to what is done to them
• Science Standard 10, Grades 3-5 Knows that objects can be classified according to their properties (e.g., magnetism, conductivity, density, solubility)
• Science Standard 10, Grades 3-5 Knows that properties such as length, weight, temperature, and volume can be measured using appropriate tools (e.g., rulers, balances, thermometers, graduated
• Science Standard 10, Grades 3-5 Knows that materials have different states (solid, liquid, gas), and some common materials such as water can be changed from one state to another by heating or
• Science Standard 10, Grades 6-8 Knows that atoms often combine to form a molecule (or crystal), the smallest particle of a substance that retains its properties
• Science Standard 10, Grades 6-8 Knows that atoms are in constant, random motion (atoms in solids are close together and don't move about easily; atoms in liquids are close together and stick to
each other, but move about easily; atoms in gas are quite far apart and move about freely)
• Math Standard 4, Grades 3-5 Understands the basic measures perimeter, area, volume, capacity, mass, angle, and circumference Math Standard 4, Grades 3-5 Knows approximate size of basic standard
units (e.g., centimeters, feet, grams) and relationships between them (e.g., between inches and feet)
• Math Standard 4, Grades 3-5 Understands relationships between measures (e.g., between length, perimeter, and area)
• Math Standard 3, Grades 3-5 Adds, subtracts, multiplies, and divides whole numbers and decimals
• Math Standard 3, Grades 6-8 Adds, subtracts, multiplies, and divides whole numbers, fractions, decimals, integers, and rational numbers
• Math Standard 3, Grades K-2 Adds and subtracts whole numbers
• Math Standard 3, Grades 6-8 Uses proportional reasoning to solve mathematical and real-world problems (e.g., involving equivalent fractions, equal ratios, constant rate of change, proportions,
Q: Do the “laws” of physics and math exist? If so, where? Are they discovered or invented/created by humans?
The original question was:
Mathematicians sometimes say, “There exists a number such that . . .” Which provokes me to ask, Where does it exist? For how long has it existed? Did numbers exist before people did? Or did people
somehow create (instead of discover) them?
In her “Incompleteness” quasi-biography of Godel (not a bad mathematician), Rebecca Goldstein emphasizes that he was a Platonist about math. What’s the current state of Platonism in math?
And such questions can be extended to the “laws” of physics: Do they exist? If so, where? And for how long? Are they discovered (implying prior existence) or invented/created by humans?
Some comments relating to such issues would be interesting!
Physicist: Discovered. Although most of the laws can be re-arranged and expressed in different ways. For example, you can express “conservation of total momentum” as “the velocity of the center of mass never changes”.
A good physicist (one who picks their words carefully) will avoid saying that one thing or another is “true”. Physics, and the laws we come up with, don’t exist “out there somewhere”. Boiled down to
its most basic, what we study is “what has worked before, and still seems to work” as opposed to “what is true”.
For example: Einstein showed that Newtonian physics is wrong (so wrong), but it still “works”. If you learn Newton’s stuff you’ll notice that it’s fairly intuitive (compared to some other sciences
at least), and seems to be true. It was taught as fact for over 200 years, but again: wrong. Taking this, and dozens of other similar stories as a warning, physicists try to talk only about what
works and not what’s true.
That being said, some of the laws that have been found may actually be true, written into the nature of the universe. I’d like to say that we know at least a few of them for sure, and that if what
we know is wrong then the universe is entirely fucked. However, that has been exactly the case before (I’m looking at you wave-particle duality), so who knows?
The laws we have could easily be special cases of the true laws (like Newtonian mechanics in relativistic mechanics), or could be merely the descriptions of the behavior created by those laws.
As far as the physical laws of the universe actually, physically existing in some form somewhere (this is the total extent of my understanding of Platonism): no, I don’t think there are very many
scientists who think that.
8 Responses to Q: Do the “laws” of physics and math exist? If so, where? Are they discovered or invented/created by humans?
1. In the quantum sense, things exist because we try to measure them. Do the laws of physics exist because we choose to see them? Our labels are used to measure the universe, yes? We give it a name
so that we can communicate and share the idea. So just by calling it something are we creating it? What if we aren’t actually discovering anything, rather, we are creating it and it appears
before our very eyes? If so, are we collectively engaged in this construction of reality? What would it take for the whole world to agree to think up a completely eutopian planet? Is such a thing
possible ever / in our lifetime?
I think about gravity. Why does it exist? more mass = more gravitational pull but why is that constant throughout the universe? as if it were programmed or written?
2. In quantum theory things still exist whether or not we measure them. The only difference is that unmeasured things have the option of being in more than one state at once (there’s presently a
handsome youtube video that covers this fairly well).
If the laws of physics were observer dependent, then you’d expect them to have changed throughout history, and that extremely unpopular theories wouldn’t work (I’m looking at you, Relativity).
But that doesn’t seem to be the case.
There’s obviously a correlation between the laws of the universe and the laws we know and observe. However, the cause and effect arrow only points in the one direction.
That being said, convincing the entire world to “think peace” seems pretty worthwhile anyway.
3. The Truth is eternal. The Truth is God. God exist throughout the universe in every law and in every sub atomic particle. We discovered the laws of nature (Physics, chemistry, math, etc … Love
also) and we named the laws after the people that discovered them, but those laws had been in the universe before solid was created. Every Truth in the universe makes up the perfect mind of God
so God is the answer to every logical question. God does not have a solid body, he is not Jesus Christ (Jesus is his son) God is a spirit. His universe is in a spiritual realm just like the laws
of nature are now. We can not get into the spiritual realm (Thank God) so we have no control over the laws. 1+1=2 long before any two solid object were there to count. Distance was also present
but nothing solid to measure between. The laws of Truth are eternal, the laws are God.
4. Before the first physicist or mathematician was born, Nature existed already. There was gravity, energy … all these phenomena were behaving to some point inside this universe of ours. When we
came to this universe, we saw these behaviours and asked why. We dug deeper and found only a couple of explanations that are not complete, and even now we continue to be doubtful of the latest
explanation we know. If we question how these behaviours came to be, I don’t know, but what I do know is that it was not the result of a mathematician’s or physicist’s thinking. But, in the back of
my head, I do believe that Man was not the first mathematician and physicist in this universe … whom Paul Dirac attributed to be the Mathematician who created the universe (“One could perhaps
describe the situation by saying that God is a mathematician of a very high order, and He used very high-order advanced mathematics in constructing the universe.” - The Evolution of the
Physicist’s Picture of Nature, Scientific American). Though Dirac’s idea about a higher mathematician is doubtful to many, what is so far true is that we don’t yet have the GUT and will never
be able to know and provide a complete explanation of Nature (cf. Ecclesiastes 3:11).
5. The physicist’s reply was interesting, but notably only answered the question about physical laws. No answer was given to the corresponding question about mathematics. For example, the law of
non-contradiction (with its refinements due to the intuitionists and the paraconsistent theories), or more exactly that which corresponds to the law (in model theoretic terms, the law is the
theory, but the model is what we are asking about here) seems to be observer independent (unless you practice Zen, I guess). Gödel’s Platonism was about the “reality” of pure mathematics which,
he said, was different from physical reality but, in a sense never made clear (despite the fact that he wrote about it at length), just as valid. Hence a question about physical reality does not
address the question about a mathematical Platonism. I would be interested in an answer (that left out references to deities) to this question. Thanks.
6. We can never know reality – we can only feel it. All laws – whether they be platonic, mathematical, physical, legal or moral, are our consciously constructed descriptions of reality, so exist
only in our minds. They may seem to approximate to reality but their degree of approximation is a measure of the poverty of our minds. And because, chronologically, we felt/sensed before we
thought (and still do most of the time) all these so-called laws must originally have been premised exclusively upon our sensually induced feelings.
7. Well, ignore mathematics I say, let’s just stay with ‘logic’. If we assume that no logic exists we get a magical universe. A magical universe is not ‘a very advanced scientific’ universe, it’s a
universe without logic.
So, either one thinks this universe will be found to answer to some logic(s), or one doesn’t. What we have found so far is logics, not the absence of it.
8. The original question was about “existence” in the physical and mathematical realms. (The question whether logic is part of mathematics or vice versa is an ongoing debate among mathematicians,
and I won’t go into that here.) The physicist dealt implicitly with the idea of physical “existence”, although the answer did not cover the debate, still ongoing among physicists, as to whether
one can say that the wave function (which contains the information about the probabilities of a particle’s properties when measured) is as “real” as measurements;
another debate is whether one can talk about “existence” for elements of a parallel universe that cannot be measured. But for logic, thankfully the definition of existence is much more precise:
logical existence has to do with the role of the existential quantifier in an interpretation in a model (where “interpretation” and “model” in logic have very specific definitions). This is not
just a formalist definition; a Platonist would agree. The difference between a Platonist and a Formalist is more subtle, but I won’t go into that right now because there are very few logicians
who are either pure Platonists or pure Formalists. Hilbert’s Formalism bit the dust with Gödel’s results, but ironically Physics in the twentieth century made pure Platonism, as it used to be
expressed, more difficult a standpoint to hold to. So most logicians are somewhere in the middle.
This entry was posted in Philosophical by The Physicist.
January 3rd 2012, 06:39 AM #2
Re: Eigenvalues
You have no idea how to solve this? Even if you are a beginner, if you are given a problem like this, you should know how to find eigenvalues- at least the definition of "eigenvalue".
A number, $\lambda$, is said to be an "eigenvalue" of linear transformation A if and only if there exists a non-zero ("non-trivial") vector, v, satisfying $Av= \lambda v$. Obviously, v= 0 always
satisfies that- the key here is "non-zero".
That equation is the same as $Av- \lambda v= (A- \lambda I)v= 0$. Now, in terms of matrices, if $A- \lambda I$ had an inverse, we could multiply on both sides by that inverse, $(A- \lambda I)^{-1}(A- \lambda I)v= v= (A- \lambda I)^{-1}0= 0$, showing that v= 0, the "trivial" solution, is the only solution.
So $\lambda$ is an eigenvalue of A if and only if $A- \lambda I$ does not have an inverse- and that is true if and only if the determinant of $A- \lambda I$ (written as a matrix) is 0. That is,
any eigenvalue, $\lambda$ must satisfy the "characteristic equation" $\left|A- \lambda I\right|= 0$
Here, $A= \begin{pmatrix}3 & 1 \\ -1 & \mu\end{pmatrix}$ so $A- \lambda I= \begin{pmatrix}3 & 1 \\ -1 & \mu\end{pmatrix}- \lambda\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}= \begin{pmatrix}3-\lambda & 1 \\ -1 & \mu-\lambda\end{pmatrix}$
The "characteristic equation" is
$\left|\begin{array}{cc}3-\lambda & 1 \\ -1 & \mu-\lambda\end{array}\right|= (3- \lambda)(\mu- \lambda)+ 1= \lambda^2- (\mu+ 3)\lambda+ 1+ 3\mu= 0$.
That is a quadratic equation to be solved for $\lambda$. Using the quadratic formula to solve it will let you determine what values of $\mu$ give two real, one real, or two complex solutions.
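As a quick numerical check (my own addition, in Python; mu = 0 and mu = 2 are just sample values), the roots of that quadratic really do make the determinant vanish, and the sign of the discriminant tells you when the eigenvalues go complex:

```python
import math

def char_roots(mu):
    """Real roots of lambda^2 - (mu + 3)*lambda + (1 + 3*mu) = 0,
    or None when the discriminant is negative (complex pair)."""
    b, c = -(mu + 3.0), 1.0 + 3.0 * mu
    disc = b * b - 4.0 * c  # = mu^2 - 6*mu + 5 = (mu - 1)*(mu - 5)
    if disc < 0:
        return None
    r = math.sqrt(disc)
    return ((-b - r) / 2.0, (-b + r) / 2.0)

def det_shifted(mu, lam):
    # determinant of A - lambda*I for A = [[3, 1], [-1, mu]]
    return (3.0 - lam) * (mu - lam) + 1.0

for lam in char_roots(0.0):
    print(det_shifted(0.0, lam))  # both essentially 0
print(char_roots(2.0))  # None: this mu gives a complex pair
```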
Last edited by HallsofIvy; January 3rd 2012 at 11:30 AM.
Batterham and Hopkins have proposed a new approach for reporting the statistical findings from research studies. Their technique combines information on the magnitude of the estimate of the effect
(e.g., mean difference), the degree of imprecision about that effect (e.g., the confidence interval), and the smallest difference that has real-world (or clinical) meaning. This information is
combined into an overall set of likelihood statistics, and a set of short descriptors (likely beneficial, etc.) is proposed. In this commentary, I address three issues: Is this approach better than
using p-values? Is this approach useful with observational data? What are the drawbacks of this approach? I particularly comment on the usefulness of their approach for epidemiologists and other
researchers who work with observational data.
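To make the idea concrete, here is a minimal sketch (my own, assuming a normal sampling distribution for the estimate; the function and variable names are invented, and this is not Batterham and Hopkins' actual implementation) of how magnitude, imprecision, and the smallest worthwhile effect combine into likelihood statistics:

```python
from math import erf, sqrt

def effect_probabilities(estimate, se, smallest_worthwhile):
    """Chances that the true effect is beneficial, trivial, or harmful,
    given a point estimate, its standard error, and the smallest
    worthwhile effect, under a normal sampling distribution."""
    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))
    p_beneficial = 1.0 - norm_cdf((smallest_worthwhile - estimate) / se)
    p_harmful = norm_cdf((-smallest_worthwhile - estimate) / se)
    p_trivial = 1.0 - p_beneficial - p_harmful
    return p_beneficial, p_trivial, p_harmful

# A mean difference of 2.0 (SE 1.0) judged against a smallest worthwhile
# difference of 0.5: about 93% beneficial, 6% trivial, under 1% harmful.
print(effect_probabilities(2.0, 1.0, 0.5))
```

The three probabilities are then mapped onto the short descriptors ("likely beneficial", and so on).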
Is This Approach Better Than Using P-Values?
Batterham and Hopkins correctly object to the painful reductionism associated with formal tests of statistical significance (the null-hypothesis test). As they point out, the statistical theory
underlying this approach is counter-intuitive and mysterious to almost all scientists who use it. Most researchers fail to comprehend that failure to reject the null is not the same as accepting the
alternative. More importantly, researchers frequently ignore important information in their data purely because the magical p-value of 0.05 has not been obtained. An approach based on the magnitude
of the estimate is vastly preferable to the unfortunate binary mindset of accept/reject that null-hypothesis testing engenders (Rothman, 1978; Poole, 1987; Poole, 2001; Wolf and Cumming, 2004). The
Batterham and Hopkins approach has the advantage of being a sophisticated quantitative alternative. It incorporates a Bayesian approach but circumvents the issues involved in defining priors. As
such, it is highly attractive and researchers should be encouraged to adopt it.
Is This Approach Useful With Observational Data?
Batterham and Hopkins largely develop their approach from within the paradigm of experimental statistics. Observational studies, however, have additional complexities associated with the use of
non-random samples and independent variables (risk factors, or exposures) that are not randomized, and perhaps can never be randomized (Greenland, 1990). Randomization may be impossible for ethical
reasons (e.g. if the effect of interest is cigarette smoking) or logistical reasons (e.g. if the effect of interest is air pollution), or both. In observational data, issues associated with
confounding, selection factors, and misclassification of effects pose at least as great a source of uncertainty as the imprecision of estimates (Greenland, 1990; Greenland, 1998). Thus, the
Batterham and Hopkins approach, to have maximum utility for epidemiologists, should incorporate quantitative information on the likely effect of these non-random sources of error (bias). Recent work
has investigated the use of simulation techniques for the quantitative assessment of bias (Lash and Fink, 2003; Greenland, 2004; Greenland, 2005; Fox et al., 2005), and these methods can be used to
produce an uncertainty interval: an interval that is based not just on imprecision (random error) but also includes information on the effect of bias (systematic error).
The beauty of the Batterham and Hopkins approach in this regard is its flexibility. Although Batterham and Hopkins demonstrate the method using the confidence interval (random error only), their
approach will work equally well using a simulated uncertainty interval (random and systematic error). Use of an uncertainty interval, rather than a confidence interval, will allow epidemiologists to
include non-random sources of error into the Batterham and Hopkins approach. This makes the use of the method a very attractive tool for epidemiologists. As a minor point, epidemiologists applying
the Batterham and Hopkins approach should note that, for measures of effect that are based on ratios (such as odds ratios, rate ratios, and hazard ratios), the X-axis in Figure 3 should be plotted on
the logarithmic scale.
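As a toy illustration of that flexibility (the additive bias model, parameter names, and numbers below are my own assumptions, not taken from the cited bias-modeling papers), an uncertainty interval on a log ratio measure can be simulated by drawing both random and systematic error and reading off percentiles:

```python
import random

def uncertainty_interval(log_effect, se, bias_sd, n=200_000, seed=42):
    """Monte Carlo sketch of a 95% uncertainty interval: sampling error
    (se) plus an assumed additive systematic-error term (bias_sd), both
    on the log scale. Real multiple-bias models are far more elaborate."""
    rng = random.Random(seed)
    draws = sorted(log_effect + rng.gauss(0.0, se) + rng.gauss(0.0, bias_sd)
                   for _ in range(n))
    return draws[int(0.025 * n)], draws[int(0.975 * n)]

# With bias_sd = 0 this reproduces an ordinary 95% confidence interval;
# adding systematic error widens the interval.
print(uncertainty_interval(0.3, 0.1, 0.0))
print(uncertainty_interval(0.3, 0.1, 0.1))
```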
What are the Drawbacks of This Approach?
Beyond pointing to Cohen's scales of magnitudes, Batterham and Hopkins do not provide any guidance about how to determine the smallest worthwhile effect. Does this need to be defined before the data
analysis is conducted? Can one change one’s mind about the smallest worthwhile effect after reviewing the data analysis? If so, what is the effect on the validity of the conclusions? The answers
to these and other questions about determining the smallest worthwhile effect await further research and guidance. One thing seems clear: two groups of researchers who use different criteria for
selecting the smallest worthwhile effect will, even given the same data, arrive at different conclusions. Although this sounds like a weakness, it could be seen as a strength of the method, since it
requires that researchers make explicit what number they consider to be the smallest worthwhile effect.
The great strength of the Batterham and Hopkins method is that it does incorporate information on the smallest worthwhile effect into the formal presentation of data. The great drawback is, in many
cases, there may be little data and limited consensus on what the smallest worthwhile effect should be. The “ballpark” estimates often used to motivate power calculations in a research grant
proposal are unlikely to be sufficiently refined to fulfill a useful function in the analysis phase of a study.
In summary, Batterham and Hopkins have proposed a simple yet powerful method for presenting the findings of research studies. Their presentation combines information on the magnitude of the
estimate, the degree of imprecision, and the smallest difference that has “real-world” (or clinical) meaning. For epidemiologists, their method can readily be extended to include sources of
uncertainty other than random error using multiple bias models. However, I suspect that some clarification, guidance, and resolution of issues around selecting the numbers to be used as smallest
worthwhile effects will be required if the Batterham and Hopkins method is to achieve its full potential. Despite this possible limitation, the technique provides a useful tool for discouraging the
mindless dependence on null-hypothesis tests that pervades science. Use of the Batterham and Hopkins method will encourage a move towards less null-hypothesis testing and more estimation of effects.
It is also expected to promote a thoughtful analysis of, and reflection upon, study data and findings.
Greenland S (1990). Randomization, statistics, and causal inference. Epidemiology 1, 421-429
Greenland S (1998). Basic methods for sensitivity analysis and external adjustment. In: Modern Epidemiology (2nd Edition), Rothman K, Greenland S (Eds). Lippincott-Raven: New York, NY, pp.343-358
Greenland S (2005). Multiple-bias modelling for analysis of observational data. Journal of the Royal Statistical Society A 168, 267-306
Greenland S (2004). Interval estimation by simulation as an alternative to and extension of confidence intervals. International Journal of Epidemiology 33, 1389-1397
Fox MP, Lash TL, Greenland S (2005). A method to automate probabilistic sensitivity analyses of misclassified binary variables. International Journal of Epidemiology, Advance Access published September 19
Lash TL, Fink AK (2003). Semi-automated sensitivity analysis to assess systematic errors in observational data. Epidemiology 14, 451-458
Phillips CV (2003). Quantifying and reporting uncertainty from systematic errors. Epidemiology 14, 459-466
Poole C (1987). Beyond the confidence interval. American Journal of Public Health 77, 195-199
Poole C (2001). Low p-values or narrow confidence intervals: which are more durable? Epidemiology 12, 291-294
Rothman KJ (1978). A show of confidence. New England Journal of Medicine 299, 1362-1363
Published Dec 2005
Trivial vector bundle problem
April 1st 2010, 04:41 AM
I have a problem showing that any vector bundle $E \to [0,1]$ is trivial.
The hint given with the exercise is considering the biggest number $t \in [0,1]$ so that $\pi^{-1}([0,t]) \to [0,t]$ is trivial, but I don't get how to do that; in fact I'm not even sure what
that last statement means.
So far I think I've shown that for some $t \in (0,1)$, we have a homeomorphism $\pi^{-1}([0,t]) \to [0,t]$, since there must exist a homeomorphism to an open neighbourhood of 0, and we can
restrict that. Also, $[0,t]$ is homeomorphic to $[0,1]$. But where do I go from there?
April 1st 2010, 05:56 AM
I think it's going to go something like
1) Show such a t exists.
2) Show t=1 (assume <1 and get a contradiction).
I suspect that connectedness and/or compactness will come into play, but I can't see the answer immediately.
Re: st: Factor variables and mim
From ymarchenko@stata.com (Yulia Marchenko, StataCorp LP)
To statalist@hsphsun2.harvard.edu
Subject Re: st: Factor variables and mim
Date Tue, 11 Aug 2009 10:20:53 -0500
Fred Wolfe mentioned that -xtmixed- could not be used with -mi estimate- and I
replied that they could be used together if one specified -mi estimate-'s
-cmdok- option:
. mi estimate, cmdok: xtmixed ...
The full post can be found at
Alan Taylor <Alan.Taylor@psy.mq.edu.au> then wrote,
> It might be helpful to know, roughly speaking, what the distinction is
> between a command which officially supports mi and one, like xtmixed, which
> can be used with cmdok.
Commands are officially supported by -mi estimate- if they
1. produce the numerical results that -mi estimate- needs and store them
where -mi estimate- expects (explained below)
2. work properly with available -mi estimate- postestimation tools, and
3. produce good-looking -mi estimate- output.
When I wrote my reply, I happened to know that -xtmixed- meets the first
requirement. -xtmixed- will be added to the list of supported commands once
we verify that it meets the other requirements, too. Most Stata estimation
commands meet requirement (1). To meet requirement (1), the command must
1.1 save its name in global macro -e(cmd)-,
1.2 save the estimated coefficients in -e(b)-
1.3 save the full variance-covariance matrix estimate in -e(V)-,
1.4 save the residual degrees of freedom in -e(df_r)- or, if there is
no such concept as residual degrees, leave -e(df_r)- undefined.
-xtmixed- meets the above requirements. As an example, Stata's -exlogistic-
estimation command violates (1.3) in that it does not even estimate the full
variance-covariance matrix, and therefore should not be used with -mi
estimate, cmdok-.
In any case, when you specify -cmdok-, you are asserting (1.1)-(1.4) are true.
The only reason that -xtmixed- is not on the list of supported commands is
because we have not yet performed adequate testing of (2) and (3). The issues
here are not concerns about inaccurate or inappropriate results, but merely
that the output might look ugly.
-xtmixed- will eventually be listed as officially supported along with other
estimation commands.
P.S. Programmers:
If you want to write an estimation command that works with -mi estimate-
even without the user specifying -cmdok-, see "Writing programs for use
with mi" in -help program properties-.
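For concreteness, a bare-bones eclass command satisfying (1.1)-(1.4) might be sketched like this (the command name, coefficient values, and degrees of freedom are invented for illustration; a real command would of course compute its results rather than hard-code them):

```stata
program define mycmd, eclass
    version 11
    syntax varlist [if] [in]
    marksample touse
    // ... fit the model here; below we just fake a two-parameter fit ...
    tempname b V
    matrix `b' = (1.5, 0.7)              // point estimates -> e(b)
    matrix `V' = (0.04, 0 \ 0, 0.09)     // full VCE        -> e(V)
    matrix colnames `b' = x1 _cons
    matrix colnames `V' = x1 _cons
    matrix rownames `V' = x1 _cons
    ereturn post `b' `V', esample(`touse')
    ereturn scalar df_r = 98             // residual d.o.f. -> e(df_r)
    ereturn local cmd "mycmd"            // command name    -> e(cmd)
end
```

With those four pieces in place, -mi estimate, cmdok: mycmd ...- has what it needs to pool results across imputations.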
Browse by Committee Member
Number of items: 64.
Saito, Namiko (2014) Large-eddy simulations of fully developed turbulent channel and pipe flows with smooth and rough walls. Dissertation (Ph.D.), California Institute of Technology. http://
Bourguignon, Jean-Loup (2013) Models of turbulent pipe flow. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:11272012-130849053
Cheng, Mulin (2013) Adaptive methods exploring intrinsic sparse structures of stochastic partial differential equations. Dissertation (Ph.D.), California Institute of Technology. http://
Damazo, Jason Scott (2013) Planar reflection of gaseous detonations. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:06112013-153305610
DeLorimier, Michael John (2013) GRAph parallel actor language : a programming language for parallel graph algorithms. Dissertation (Ph.D.), California Institute of Technology. http://
Lopez Ortega, Alejandro (2013) Simulation of Richtmyer-Meshkov flows for elastic-plastic solids in planar and converging geometries using an Eulerian framework. Dissertation (Ph.D.), California
Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:02202013-185004693
Moeller, Robert Carlos (2013) Current transport and onset-related phenomena in an MPD thruster modified by applied magnetic fields. Dissertation (Ph.D.), California Institute of Technology. http://
Hu, Xin (2012) Multiscale modeling and computation of 3D incompressible turbulent flows. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Lintner, Stéphane Karl (2012) High-order integral equation methods for diffraction problems involving screens and apertures. Dissertation (Ph.D.), California Institute of Technology. http://
Ziegler, Jack L. (2012) Simulations of compressible, diffusive, reactive flows with detailed chemistry using a high-order hybrid WENO-CD scheme. Dissertation (Ph.D.), California Institute of
Technology. http://resolver.caltech.edu/CaltechTHESIS:12302011-185742249
Brown, Justin Lee (2011) High pressure Hugoniot measurements in solids using Mach reflections. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Ward, Geoffrey M. (2011) The simulation of shock- and impact-driven flows with Mie-Grüneisen equations of state. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Yang, Yue (2011) Lagrangian and vortex-surface fields in turbulence. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:02212011-233246689
Karnesky, James Alan (2010) Detonation induced strain in tubes. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:05142010-174001426
Choi, Inki (2010) Catalytic modification of flammable atmosphere in aircraft fuel tanks. Engineer's thesis, California Institute of Technology. http://resolver.caltech.edu/
Kapre, Nachiket Ganesh (2010) SPICE2 -- a spatial parallel architecture for accelerating the spice circuit simulator. Dissertation (Ph.D.), California Institute of Technology. http://
Kramer, Richard (2009) Stable high-order finite-difference interface schemes with application to the Richtmyer-Meshkov instability. Dissertation (Ph.D.), California Institute of Technology. http://
Othmer, Jonathan Andrew (2009) Algorithms for mapping nucleic acid free energy landscapes. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Benezech, Laurent Jean-Michel (2008) Premixed hydrocarbon stagnation flames : experiments and simulations to validate combustion chemical-kinetic models. Engineer's thesis, California Institute of
Technology. http://resolver.caltech.edu/CaltechETD:etd-05302008-113043
Bermejo-Moreno, Ivan (2008) On the non-local geometry of turbulence. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-05092008-173614
Hoch, David (2008) Nonreflecting boundary conditions obtained from equivalent sources for time-dependent scattering problems. Dissertation (Ph.D.), California Institute of Technology. http://
Kao, Shannon (2008) Detonation stability with reversible kinetics. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-06022008-170629
Lombardini, Manuel (2008) Richtmyer-Meshkov instability in converging geometries. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-05302008-140331
Matheou, Georgios (2008) Large-eddy simulation of molecular mixing in a recirculating shear flow. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Monro, John Anderson (2008) A super-algebraically convergent, windowing-based approach to the evaluation of scattering from periodic rough surfaces. Dissertation (Ph.D.), California Institute of
Technology. http://resolver.caltech.edu/CaltechETD:etd-01032008-222910
Sweatlock, Sarah L. (2008) Asymptotic weight analysis of low-density parity check (LDPC) code ensembles. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Tian, Lixiu (2008) Effective behavior of dielectric elastomer composites. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-08272007-145455
Wang, Ke (2008) A subdivision approach to the construction of smooth differential forms. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Latini, Marco (2007) Simulations and analysis of two- and three-dimensional single-mode Richtmyer-Meshkov instability using weighted essentially non-oscillatory and vortex methods. Dissertation
(Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-12082006-124547
Scheel, Janet D (2007) Rotating Rayleigh-Benard convection. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-08252006-154116
Sone, Kazuo (2007) Modeling and simulation of axisymmetric stagnation flames. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-04252007-170838
Goulet, David Michael (2006) Mathematical models of the developing C. elegans hermaphrodite gonad. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Bergthorson, Jeffrey Myles (2005) Experiments and modeling of impinging jets and premixed hydrocarbon stagnation flames. Dissertation (Ph.D.), California Institute of Technology. http://
Stredie, Valentin Gabriel (2005) Mathematical modeling and simulation of aquatic and aerial animal locomotion. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Yu, Xinwei (2005) Localized non-blowup conditions for 3D incompressible Euler flows and related equations. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Chaubell, M. Julian (2004) Low-coherence interferometric imaging: solution of the one-dimensional inverse scattering problem. Dissertation (Ph.D.), California Institute of Technology. http://
Chiam, Keng-Hwee (2004) Spatiotemporal chaos in Rayleigh-Benard convection. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-08062003-162208
O'Reilly, Gerard Kieran (2004) Compressible vortices and shock-vortex interactions. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Arienti, Marco (2003) A numerical and analytical study of detonation diffraction. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-02122003-152525
Greenberg, Andrei (2003) Chebyshev spectral method for singular moving boundary problems with application to finance. Dissertation (Ph.D.), California Institute of Technology. http://
Howard, Elizabeth (2003) A front tracking method for modelling thermal growth. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-03042003-115138
Hyde, E. McKay (2003) Fast, high-order methods for scattering by inhomogeneous media. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Kastner, Jason (2003) Modeling a Hox gene metwork: stochastic simulation with experimental perturbation. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Koslowski, Marisol (2003) A phase-field model of dislocations in ductile single crystals. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Mauch, Sean Patrick (2003) Efficient algorithms for solving static Hamilton-Jacobi equations. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Tokman, Mayya (2001) Magnetohydrodynamic modeling of solar magnetic arcades using exponential propagation methods. Dissertation (Ph.D.), California Institute of Technology. http://
Efendiev, Yalchin R. (1999) The multiscale finite element method (MsFEM) and its applications. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Gallagher, Donal A. (1999) Saffman-Taylor fingers in deformed Hele-Shaw cells. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-02062008-103933
Kang, Sung Phill (1999) A study of viscous flow past axisymmetric and two-dimensional bodies. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Lahey, Patrick M. (1999) A fixed-grid numerical method for dendritic solidification with natural convection. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Leyva, Ivett A. (1999) Shock detachment process on cones in hypervelocity flows. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-02082008-162753
Hill, David J. (1998) Part I. Vortex dynamics in wake models. Part II. Wave generation. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Massingill, Berna Linda (1998) A structured approach to parallel programming. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-01242008-074143
Meloon, Mark Robert (1998) Models of Richtmyer-Meshkov instability in continuously stratified fluids. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Shiels, Doug (1998) Simulation of controlled bluff body flow with a viscous vortex method. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Papalexandris, Miltiadis Vassilios (1997) Unsplit numerical schemes for hyperbolic systems of conservation laws with source terms. Dissertation (Ph.D.), California Institute of Technology. http://
Regelson, Moira Ellen (1997) Protein structure/function classification using hidden Markov models. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Conley, Andrew (1994) New plane shear flows. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-10182005-102648
Dumas, Guy (1991) Study of spherical couette flow via 3-D spectral simulations: large and narrow-gap flows and their transitions. Dissertation (Ph.D.), California Institute of Technology. http://
Mudkavi, Vidyadhar Yogeshwar (1991) Numerical studies of nonlinear axisymmetric waves on vortex filaments. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Pham, Thu (1991) Numerical studies of incompressible Richtmyer-Meshkov instability in a stratified fluid. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Teng, Michelle Hsiao Tsing (1990) Forced emissions of nonlinear water waves in channels of arbitrary shape. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/
Brouillette, Martin (1989) On the interaction of shock waves with contact surfaces between gases of different densities. Dissertation (Ph.D.), California Institute of Technology. http://
Winckelmans, Gregoire Stephane (1989) Topics in vortex methods for the computation of three- and two-dimensional incompressible unsteady flows. Dissertation (Ph.D.), California Institute of
Technology. http://resolver.caltech.edu/CaltechETD:etd-11032003-112216 | {"url":"http://thesis.library.caltech.edu/view/committee/Meiron-D-I.default.html","timestamp":"2014-04-19T10:58:53Z","content_type":null,"content_length":"36004","record_id":"<urn:uuid:ddaac212-bf2e-4043-98f3-c9dd3de901cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove that if n > 2, then U_n has a subgroup of order 2.
April 9th 2012, 03:18 AM #1
Junior Member
Mar 2011
Prove that if n > 2, then U_n has a subgroup of order 2.
Prove that if $n > 2$, then $\mathbb{U}_n$ has a subgroup of order $2$.
There must exist a $g \in \mathbb{U}_n$ such that $g^2 \equiv 1 \pmod{n}$. How do I know there exists such an element in $\mathbb{U}_n$?
Re: Prove that if n > 2, then U_n has a subgroup of order 2.
how about n-1 (mod n)?
(there's one other element that always works, to make up "the rest of the subgroup". i'll leave it to you to discover. oh, and you'll want to show that gcd(n-1,n) = 1, as well).
Last edited by Deveno; April 9th 2012 at 03:37 AM.
Re: Prove that if n > 2, then U_n has a subgroup of order 2.
Thanks! Can I prove $gcd(n-1,n) = 1$ by the Euclidean Algorithm?
$n = (n-1) + 1$
Hence $gcd(n,n-1) = 1$.
Is the other element $1$ that makes up the rest of the subgroup $\langle n-1 \rangle$?
I cannot find another generator for a subgroup of order $2$ for $\mathbb{U}_5 = \{1,2,3,4\}$.
Re: Prove that if n > 2, then U_n has a subgroup of order 2.
well, of course 1 has the property that 1^2 = 1 (mod n) (for any n, in fact).
4 has order 2 mod 5:
4^2 = 16 = 1 (mod 5), so 4 is a generator (of a group of order 2).
if p is a prime, then U(p) will be cyclic (this is an involved proof, so i won't write it here), of order p-1.
as long as p > 2, then p will be odd, so U(p) (being cyclic) will have ONLY ONE element of order 2.
for a general n, there may be several elements of order 2 in U(n) (you get at least one for each prime factor of n).
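Not part of the thread itself, but the claims above are easy to check by brute force. A short Python sketch (the function name is my own) that lists the elements of order 2 in U(n):

```python
from math import gcd

def order_two_elements(n):
    """Elements g of U(n) (the units mod n) with g*g = 1 (mod n), excluding 1."""
    return [g for g in range(2, n) if gcd(g, n) == 1 and (g * g) % n == 1]

# n - 1 is -1 mod n, so it squares to 1 and {1, n-1} is a subgroup of order 2
# for every n > 2; composite n can contribute further elements of order 2.
print(order_two_elements(5))  # [4]
print(order_two_elements(8))  # [3, 5, 7]
```

For a prime such as 5 the list is the single element n - 1 = 4, matching the statement that a cyclic group of even order has only one element of order 2.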
Re: Prove that if n > 2, then U_n has a subgroup of order 2.
Thanks. I am having trouble understanding some of your points at the moment.
>> if p is a prime, then U(p) will be cyclic (this is an involved proof, so i won't write it here), of order p-1.
I found a theorem in my notes saying that if $n \geq 2$ then $\mathbb{U}_n$ is cyclic if and only if $n = 2,n=4,n=p^\alpha$ or $n = 2p^\alpha$, where $p$ is a prime other than $2$ and $\alpha$ is
a positive integer. The proof of this should cover the case when $p$ is prime.
I need to think more about the other two points. I will come back later.
>> as long as p > 2, then p will be odd, so U(p) (being cyclic) will have ONLY ONE element of order 2.
>> (you get at least one for each prime factor of n).
Mar 2011 | {"url":"http://mathhelpforum.com/number-theory/196981-prove-if-n-2-then-u_n-has-subgroup-order-2-a.html","timestamp":"2014-04-20T16:30:00Z","content_type":null,"content_length":"45744","record_id":"<urn:uuid:dbf8e6c9-bfc9-4d83-9688-a50bca2fe05d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
conjugate representation
June 6th 2010, 09:12 AM #1
Jun 2010
conjugate representation
For $n\in\mathbb{Z}$ and a fixed $t\in\mathbb{T}$, the unit circle in the complex plane, how can I show that $\overline{t}^n$ can be written as a linear combination of powers (in $\mathbb{Z}$) of $t$?
Last edited by Plato; June 6th 2010 at 12:19 PM. Reason: LaTex fix
Oh of course! I feel silly for asking now :S
Thank you!
Jun 2010 | {"url":"http://mathhelpforum.com/differential-geometry/147971-conjugate-representation.html","timestamp":"2014-04-17T16:22:06Z","content_type":null,"content_length":"35851","record_id":"<urn:uuid:9be1edf2-e4b7-4943-88df-98c927d002ee>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kripke's Quantified Modal Logic (KQML)
The system here (minus identity) is equivalent to the S5-based system found in Kripke (1963), but the presentation is designed to highlight its similarities to, and differences from, SQML. (The
addition of identity poses no problems for the system.)
Definition: Let L[K] be just like the first-order modal language L except that it contains no constants. Let φ be a formula of L[K]. Define a closure of φ to be a formula of L[K] without free
variables obtained by prefixing any string of boxes and universal quantifiers, in any order, to φ.
Let φ, ψ, and θ be formulas, and α and β variables, of L[K]; any closure of any (instance) of the following is an axiom of KQML:
1. φ → (ψ → φ)
2. φ → (ψ → θ)) → ((φ → ψ) → (φ → θ))
3. (¬φ → ψ) → ((¬φ → ¬ψ) → φ)
4. ∀β(∀αφ → φ[α/β]), if β is substitutable for α in φ
5. ∀α(φ → ψ) → (φ → ∀αψ), if α does not occur free in φ
6. x = x
7. x = y → (φ → φ′), where φ′ is the result of substituting y for some, but not necessarily all, occurrences of x in φ, provided that y is substitutable for x at those occurrences.
8. □(φ → ψ)→ (□φ → □ψ)
9. □φ → φ
10. ◊φ → □◊φ
Modus Ponens (MP): ψ follows from φ → ψ and φ
Definition: φ is a theorem of KQML if it is an axiom of KQML or follows from other theorems of KQML by Modus Ponens. | {"url":"http://plato.stanford.edu/entries/actualism/logic-Kripke.html","timestamp":"2014-04-20T00:46:32Z","content_type":null,"content_length":"14774","record_id":"<urn:uuid:2dc4ffaf-5851-4fe1-86e5-9cf279da8651>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
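As a side check that is not part of the encyclopedia entry: schemas 1-3 above are the purely propositional axioms, and each is a classical tautology, which a brute-force truth table confirms. A Python sketch (all function names are mine):

```python
from itertools import product

def implies(p, q):
    """Material implication."""
    return (not p) or q

def axiom1(p, q):
    return implies(p, implies(q, p))

def axiom2(p, q, r):
    return implies(implies(p, implies(q, r)),
                   implies(implies(p, q), implies(p, r)))

def axiom3(p, q):
    return implies(implies(not p, q), implies(implies(not p, not q), p))

# Every valuation of the variables satisfies each schema.
assert all(axiom1(p, q) for p, q in product([True, False], repeat=2))
assert all(axiom2(p, q, r) for p, q, r in product([True, False], repeat=3))
assert all(axiom3(p, q) for p, q in product([True, False], repeat=2))
```

The modal schemas 8-10 and the quantifier axioms cannot be verified this way; they call for the Kripke-model semantics discussed in the entry.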
Statistics: Power from data! Graph types: Using graphs
Using graphs
What is a graph?
A graph is a visual representation of a relationship, most commonly between two variables. A graph generally takes the form of a one- or two-dimensional figure such as a scatterplot. Although three-dimensional graphs are available, they are usually considered too complex to understand easily.
A graph commonly consists of two axes called the x-axis (horizontal) and y-axis (vertical). Each axis corresponds to one variable and is labelled with the name of that variable.
The place where the two axes intersect is called the origin. The origin is also identified as the point (0,0).
A point on a graph represents a relationship. Each point is defined by a pair of numbers containing two co-ordinates (x and y). A co-ordinate is one of a set of numbers used to identify the location
of a point on a graph.
In the following section, you will learn how to determine both co-ordinates for any given point, and to correctly label the co-ordinates of a point.
Identifying the x-co-ordinate
The x-co-ordinate of a point is the value that tells you how far the point is from the origin on the (horizontal) x-axis. In order to find the x-co-ordinate of a point on any graph, draw a straight
line from the point to intersect at a right angle with the x-axis. The number where the line intersects with the x-axis is the value of the x-co-ordinate.
Figure 2 is a graph with two points, A and B. Identify the x-co-ordinate of points A and B.
Answer: The x-co-ordinate of point A is 50, and the x-co-ordinate of point B is 200.
Identifying the y-co-ordinate
The y-co-ordinate of a point is the value that tells you how far away the point is from the origin on the vertical or y-axis. To find the y-co-ordinate of a point on a graph, draw a straight line
from the point to intersect at a right angle with the y-axis. The number where the line intersects the y-axis is the value of the y-co-ordinate.
Identify the y-co-ordinate for point A and point B on Figure 3.
Answer: The y-co-ordinate of point A is 200, and the y-co-ordinate of point B is 50.
Once you have determined the co-ordinates of a point, you can label the points using ordered pair notation. This notation is simple—points are identified by stating their co-ordinates in the form of
(x, y). Note that you must plot the x-co-ordinate first as in Figure 2. The x- and y-co-ordinates for each of points A and B are identified in Figure 4 below.
• The x-co-ordinate of point A is 50 and the y-co-ordinate of point A is 200. The co-ordinates of point A are therefore (50, 200).
• The x-co-ordinate of point B is 200 and the y-co-ordinate of point B is 50. The co-ordinates of point B are therefore (200, 50).
Points on the axes
If a point falls on an axis, you do not need to draw lines to determine the co-ordinates of the point. In Figure 5 below, point C lies on the y-axis and point D lies on the x-axis. When a point lies
on an axis, one of its co-ordinates must be 0.
• Point C lies on the y-axis and has an x-co-ordinate of 0. When you move along the y-axis to find the y-co-ordinate, the point is 200 from the origin. The co-ordinates of point C are therefore (0, 200).
• Point D lies on the x-axis and has a y-co-ordinate of 0. If you move along the x-axis to find the co-ordinate, the point is 100 from the origin. The co-ordinates of point D are therefore (100, 0).
Quick quiz!
Answer the following questions using Figure 6 below.
• Which points intersect with the y-axis?
• Which point would be labelled with the ordered pair notation of (100, 200)?
• Which points have a y-co-ordinate of 100?
Answers: 1. Point A; 2. Point B; 3. Point C.
There are times when you will be given the coordinates of a point and will need to find its location on a graph. This process is often referred to as plotting a point. The process for plotting a
point is shown below.
Plot the point (200, 150) using the following step-by-step approach.
Step 1
First, draw a perpendicular line extending out from the x-axis at the x-co-ordinate of the point. In the example, the x-co-ordinate is at 200.
Step 2
Then, draw a perpendicular line extending out from the y-axis at the y-co-ordinate of the point, the y-co-ordinate is at 150.
Step 3
Finally, draw a dot where the two lines intersect. This is the point we are plotting (200, 150).
The scale of a graph is very important. It is determined by the data for each axis, and should be measured accordingly.
A survey was conducted of the Grade 9 students at Elm High. The students were asked which of the following four team sports they preferred.
The results were:
1. Soccer – 45 students
2. Football – 55 students
3. Hockey – 75 students
4. Baseball – 25 students
In Figure 10, these four preference categories have been placed on the x-axis, each representing the grouped data collected. Because the categories are nominal (names, not numbers) and describe
qualitative (not quantitative) distinctions, the groups can be placed in any order on the axis.
On the y-axis, the data values range from 0 to 80 students. As mentioned earlier, your origin should be located at 0 where the x-axis and y-axis meet. Since the largest group of students by sport
preference is 75, it would be appropriate to end the scale at 80, resulting in a scale that ranges from 0 to 80. Depending on how the scale is arranged, the underlying data will not change, but the graph's visual appearance might be altered.
The interval of the scale is the amount of space along the axis from one mark to the next. If the range of the scale is small, the general rule is to take the range of the scale and divide it by 10.
Make this your interval. For ranges that are larger, the interval is typically 5, 10, 100, 500, 1,000, etc. Use numbers that divide evenly into 100, 1,000 or their multiples in order to provide a
graph that is easy to understand.
In this case, if you take 80 and divide it by 5, you will get 16. However, it might be better to use 10 because it is easier to analyse. This provides a scale that is smaller, but still easy to use.
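As an aside that is not in the original text, the rule of thumb above can be sketched in code; the function name and the list of "nice" step sizes are my own choices, so treat this as one possible encoding rather than a fixed recipe:

```python
def scale_interval(value_range):
    """Pick a tick interval: roughly range / 10, rounded up to a 'nice' step."""
    raw = value_range / 10
    for nice in (1, 2, 5, 10, 20, 25, 50, 100, 500, 1000):
        if nice >= raw:
            return nice
    return round(raw)

# For the sports-survey scale of 0 to 80: 80 / 10 = 8, rounded up to 10.
print(scale_interval(80))  # 10
```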
Knowing how to convey information graphically is important in presenting statistics. The following is a list of general rules to keep in mind when preparing graphs.
A good graph
• accurately shows the facts
• grabs the reader's attention
• complements or demonstrates arguments presented in the text
• has a title and labels
• is simple and uncluttered
• shows data without altering the message of the data
• clearly shows any trends or differences in the data
• is visually accurate (i.e., if one chart value is 15 and another 30, then 30 should appear to be twice the size of 15).
Why use graphs to present data?
Because they...
• are quick and direct
• highlight the most important facts
• facilitate understanding of the data
• can convince readers
• can be easily remembered
There are many different types of graphs that can be used to convey information, including:
Knowing what type of graph to use with what type of information is crucial. Depending on the nature of the data some graphs are more appropriate than others. For example, categorical data like
favorite school subjects are best displayed in a bar graph or circle graph, while continuous numeric data such as height are illustrated by a line graph or histogram. For more information on
appropriate graph types, see "Types of data" in Teacher’s Guide to Data Discovery.
A graph is not always the most appropriate tool to present information. Sometimes text or a data table can provide a better explanation to the readers—and save you considerable time and effort.
You might want to reconsider the use of a graph when
• the data are very dispersed
• there are too few data (only one, two or three data points)
• the data are very numerous
• the data show little or no variations
If you have decided that using a graph is the best method to relay your message, then there are four guidelines to remember:
1. Define your target audience.
Ask yourself the following questions to help you understand more about your audience and what their needs are:
1. Who is your target audience?
2. What do they know about the issue?
3. What do they expect to see?
4. What do they want to know?
5. What will they do with the information?
2. Determine the message(s) to be transmitted.
Ask yourself the following questions to figure out what your message is and why it is important:
1. What do the data show?
2. Is there more than one main message?
3. What aspect of the message(s) should be highlighted?
4. Can all the messages be displayed in the same graphic?
3. Use appropriate terms to describe your graph.
Consider the following appropriate terms when labelling the graph or describing features of it in accompanying text:
Use appropriate terms to describe your graph:
• describes components: share of, percent of, the smallest, the majority of
• compares items: ranking, larger than, smaller than, equal to
• establishes a time series: change, rise, growth, increase, decrease, decline, fluctuation
• determines a frequency: range, concentration, most of, distribution of x and y by age
• analyses relationships in data: increase with, decrease with, vary with, despite, correspond to, relate to
4. Experiment with different types of graphs and select the most appropriate.
1. circle graph/pie chart (description of components)
2. horizontal bar graph (comparison of items and relationships, time series)
3. vertical bar graph (comparison of items and relationships, time series, frequency distribution)
4. line graph (time series and frequency distribution)
5. scatterplot (analysis of relationships) | {"url":"http://www.statcan.gc.ca/edu/power-pouvoir/ch9/using-utilisation/5214829-eng.htm","timestamp":"2014-04-17T00:49:48Z","content_type":null,"content_length":"34441","record_id":"<urn:uuid:adf225c9-9b95-427b-944e-7e93f407f833>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
PhD Thesis
Kernels for the F-Deletion Problem
My PhD thesis written under the joint guidance of Prof. Venkatesh Raman and Prof. Saket Saurabh. The thesis is largely concerned with the Planar F-deletion problem, which seeks the smallest number of
vertices that we must delete from a graph G to ensure that it contains no minor models of graphs from F (F is a family of connected graphs containing at least one planar graph). The thesis explores
the kernelization complexity and explores some closely related structural questions. En route, we devise an approximation algorithm for the problem which can be useful in other contexts. An
interesting variety of techniques are called upon as we journey through variants of the general question, ranging from applications of Courcelle's theorem to using protrusion-based techniques.
Remark: There has been substantial progress in this line of work since the writing of the thesis. In particular, the question of whether Planar F-deletion admits a polynomial kernel is fully resolved
and the answer is affirmative. The most recent work makes no assumption about the connectivity of the graphs in $F$. The status of the general F-deletion problem, where we drop the assumption that F
contains at least one planar graph, continues to be an intriguing open problem.
In this thesis, we use the parameterized framework for the design and analysis of algorithms for NP-complete problems. This amounts to studying the parameterized version of the classical decision
version. Herein, the classical language is appended with a secondary measure called a "parameter". The central notion in parameterized complexity is that of fixed-parameter tractability, which means that, given an instance $(x, k)$ of a parameterized language $L$, one can decide whether $(x, k) \in L$ in time $f(k) \cdot p(|x|)$, where $f$ is an arbitrary function of $k$ alone and $p$ is a polynomial
function. The notion of kernelization formalizes preprocessing or data reduction, and refers to polynomial time algorithms that transform any given input into an equivalent instance whose size is
bounded as a function of the parameter alone.
The center of our attention in this thesis is the F-Deletion problem, a vastly general question that encompasses many fundamental optimization problems as special cases. In particular, we provide
evidence supporting a conjecture about the kernelization complexity of the problem, and this work branches off in a number of directions, leading to results of independent interest. We also study the
Colorful Motifs problem, a well-known question that arises frequently in practice. Our investigation demonstrates the hardness of the problem even when restricted to very simple graph classes.
(Ideally viewed in full screen - use the button at the bottom-right corner if you are not using a mobile device.) | {"url":"http://neeldhara.com/thesis/phd","timestamp":"2014-04-19T01:48:52Z","content_type":null,"content_length":"16258","record_id":"<urn:uuid:29905de7-8017-471e-bb47-e57a26ae60d4>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
drawing hexagon [Archive] - OpenGL Discussion and Help Forums
01-31-2009, 07:29 AM
I want to draw a hexagon. I am limited to open gl es, but i can translate for the most part.
My real question. What should my approach be to thinking about the polygon. I have some graph paper and a pencil, and I started drawing it out. I am not sure whether I am drawing several squares and
triangles that all make up the hexagon? or whether I am drawing one big shape. I'm guessing there are tons of approaches, but any suggestions on where to start?
Eventually, I want to draw a 3D hexagon that looks similar to a pencil. | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-166613.html","timestamp":"2014-04-17T18:51:01Z","content_type":null,"content_length":"4276","record_id":"<urn:uuid:d00de7b3-3f7e-48a4-b4b9-c5a0783551c3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
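One common starting point for the question above (my own sketch, not taken from the thread): a regular hexagon is just six points evenly spaced on a circle, which can then be sent to OpenGL ES as a triangle fan or line loop. Shown in Python for clarity:

```python
import math

def hexagon_vertices(cx=0.0, cy=0.0, r=1.0):
    """Six vertices of a regular hexagon, counter-clockwise from (cx + r, cy)."""
    return [(cx + r * math.cos(2 * math.pi * k / 6),
             cy + r * math.sin(2 * math.pi * k / 6))
            for k in range(6)]

verts = hexagon_vertices()
# A handy check: in a regular hexagon the side length equals the radius.
side = math.dist(verts[0], verts[1])
print(round(side, 6))  # 1.0
```

Extruding the same six vertices along the z-axis gives the hexagonal prism (the "pencil" shape) mentioned at the end of the post.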
User Jim Conant
bio website math.utk.edu/~jconant
location University of Tennessee
age 39
visits member for 3 years, 7 months
seen 16 hours ago
stats profile views 2,730
I like low-dimensional topology and geometric group theory. I'm particularly drawn to problems that involve algebraic spaces of graphs, such as graph homology. | {"url":"http://mathoverflow.net/users/9417/jim-conant?tab=favorites","timestamp":"2014-04-19T19:57:50Z","content_type":null,"content_length":"50302","record_id":"<urn:uuid:e11f1924-8e17-49d0-a53b-c0772b02438f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Schools
In Alpena, Michigan
There is only one math school in Alpena for faculty to choose from. The graphs, statistics and analysis below outline the current state and the future direction of academia in math in the city of
Alpena, which encompasses math training at the math associates degree level.
Alpena vs. Michigan math employment: Alpena: no data; Michigan: 2,640.
Overall employment in Alpena has increased.
Alpena vs. Michigan math employment growth over the five reported years:
• Michigan: 2,710; 3,120; 3,080; 2,730; 2,640
• Alpena: 40; 40; 40; no data; no data
Math vs. all professions salaries in Alpena over the five reported years:
• Math in Alpena: $49,890; $40,780; $41,990; no data; no data
• All professions in Alpena: $31,310; $32,500; $32,980; no data; $33,260
Salary percentiles for Math professionals in Alpena
Average Salaries for Math professionals and related professions in Alpena
Math $0
Accounting $35,415
Many of Alpena's math professionals are graduates of the only accredited math school in the city.
Math Programs Offered In Alpena
associate 1
bachelor 0
master 0
doctor 0
Certificate 0
Total 1
Alpena's math school offers one math degree and certificate program.
Student Completed Math Degrees In Alpena
associate 4
bachelor 0
master 0
doctor 0
Certificate 0
Total 4
Approximately 4 students completed math courses in 2010.
Math Faculty Salaries in Alpena, Michigan | {"url":"http://www.educationnews.org/career-index/math-schools-in-alpena-mi/","timestamp":"2014-04-17T04:49:28Z","content_type":null,"content_length":"29086","record_id":"<urn:uuid:bf603b7a-146a-4714-8f6f-dd1d3b65cb1c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many silicon chips are there on a 300 mm wafer? | EE Times
Programmable Logic DesignLine Blog
As I mentioned in my previous blog, a few days ago I received an email from "EDA Analyst to the Stars" Gary Smith posing a number of essentially unanswerable questions, including the following:
1. How big is a bacterium compared to a transistor on a modern silicon chip?
2. How many die do you get on a 300 mm diameter wafer?
3. What is the total length of the interconnect (poly-silicon and metal tracks) on a high-end silicon chip?
Well, I think we answered the first question to everyone's satisfaction yesterday (Click Here to discover the secrets of the ancients that have – until now – been hidden in the mists of time). So now
let's proceed to question #2. . .
#2 How many die do you get on a 300 mm diameter wafer?
How many die are on a wafer? What sort of a question is this? It's like asking: "How long is a piece of string?" (Actually, there's an answer to this latter question, which is: "Twice as long as from
the middle to the end!")
Hmmm, this obviously depends on the size of the die (the silicon chips themselves). As a starting point, the area of a circle is Pi × r^2 (that's Pi times the radius of the circle squared). So if we
have a 300 mm diameter wafer, its area will be 3.142 × 150^2 = 3.142 × 22,500 = 70,695 square millimeters.
Now, let's assume that we can use the wafer right up to its edge (in reality there's a small border around the edge that we don't use). Let's also assume we are creating very small chips – not great
big hairy System-on-Chip (SoC) devices, just itty-bitty little rascals intended to perform some relatively simple function. Suppose our die are each 1 mm × 1 mm = 1 square millimeter; some of the die
will be cut short by the curve of the circle, so we might round things down to say 70,000 die (each 1mm × 1mm) on our 300 mm diameter wafer.
By comparison, what if our die were great big hairy beasts, each 20 mm × 20 mm = 400 square millimeters. In this case, we will only get 148 of these little scamps on a 300 mm wafer as shown below.
A 300 mm diameter wafer can hold 148 die
(assuming they are each 20 mm × 20 mm)
Of course, we are also going to lose some die to random defects, which will affect our yield (the number of good die compared to the total number we fabricate). Purely for the sake of these
discussions, let's assume that we have 50 tiny inclusions (defects) in the waver itself – that these defects are randomly scattered across the surface of the wafer – that each defect will "kill" one
of our die – and that these defects will be the only source of any failures.
In the case of our 1 mm × 1 mm die, this means that our failure rate will be 50 / 70,000 = ~0.0007 = 0.07%, so our yield will be 99.93% (hurray!). By comparison, in the case of our 20 mm × 20 mm die,
our failure rate will be 50 / 148 = ~0.34 = 34%, so our yield falls to only 66% (yah, boo, hiss).
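Sticking with the article's deliberately simple model (each defect kills exactly one die, and edge loss is ignored), the arithmetic is easy to script; the function names are mine:

```python
import math

def die_per_wafer_by_area(wafer_diameter_mm, die_area_mm2):
    """Optimistic count: usable wafer area divided by die area (ignores edge loss)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def yield_fraction(die_count, defects):
    """Toy model from the text: each of `defects` random flaws kills one die."""
    return 1.0 - defects / die_count

print(die_per_wafer_by_area(300, 1))        # 70685 -- the article rounds to ~70,000
print(round(yield_fraction(70000, 50), 4))  # 0.9993
print(round(yield_fraction(148, 50), 2))    # 0.66
```

Note that for the big 400 square-millimeter die the area-only estimate (about 176) overshoots the article's figure of 148, because whole die lost to the curved wafer edge are not modeled here.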
But just how big are typical die these days? In order to answer this poser, I called my old buddy Adam Traidman from Chip Estimate, whose InCyte chip estimation systems provide IC design teams,
system architects, and management with the ability to visualize tradeoffs throughout the chip design flow.
Based on data culled from 50,000 chip estimations over the last 18 months, Adam was able to provide me with the following data:
Smallest die size:
0.683 mm × 0.683 mm at the 90 nm technology node
1.533 mm × 1.533 mm at the 65 nm technology node
Average die size
7.020 mm × 7.020 mm at the 90 nm technology node
2.130 mm × 2.130 mm at the 65 nm technology node
Biggest die size:
20.253 mm × 20.253 mm at the 65 nm technology node
This means that our 20 mm × 20 mm "guestimate" wasn't too far from the truth (honest, I did that part before I called Adam – just call me "Lucky Max"). Of course, that made me wonder as to the size
of the largest die anyone has created since the dawn of history...
All I've been able to come up with so far is that the 2005 ITRS report says the maximum die back then was 23 mm × 26 mm (Click Here for more details). Also, from the Wikipedia, the die size of
Intel's Montecito processor is 27.72 mm × 21.5 mm, which equates to a whopping 596 square millimeters (Click Here for more details).
Wow – that's pretty darn big! But is it the biggest die out there? Do you know something I don't? If so, please email me and I'll add it to my pile of "interesting facts" that always seem to come in
useful when I least expect it.
OK, that's all for now, but don't forget to come back tomorrow when we will consider the answer to Gary's third question: "What is the total length of the interconnect (poly-silicon and metal tracks)
on a high-end silicon chip?"
Questions? Comments? Feel free to email me – Clive "Max" Maxfield – at max@techbites.com. And, of course, if you haven't already done so, don't forget to Sign Up for our weekly Programmable Logic
DesignLine Newsletter. | {"url":"http://www.eetimes.com/author.asp?section_id=14&doc_id=1282825&piddl_msgorder=thrd","timestamp":"2014-04-20T06:37:16Z","content_type":null,"content_length":"129534","record_id":"<urn:uuid:05a9b2ef-61f9-444e-9328-d448584a6dda>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
WyzAnt Resources
Please help simplify these expressions: 1. √a + √b √a - √b 2. √5 + 1 √5 - 1 3...
1, 4, 9, 16, 25, … n = 10. I asked this question before but I am still confused.
geometry shapes
need to know what is left in the bag after some poured into canister which gives the canister 2 times the mass over the bag
Altitude. The angles of elevation to an airplane from two points A and B on level ground are 71° and 88°, respectively. The points A and B are 5.5 miles apart and the airplane is east of both
This is the problem 12x2 + 14x=0
does the slope stay the same steeper less steep
you must put it in standard form
Each side of the polygon measures 6 feet. Whats the name of that polygon?
a frog is sitting on a stump 24 ft above the ground. it hops off the stump and lands on the ground 8 ft away. during its leap, its height h is given by the equation h=-0.5x^2+x+24 where x...
Hi! I am having some trouble doing a combinations problem, which is stated below: How many committees of five people can be selected from 8 men and 8 women if the committee must have...
(negation,p,disjunction(variable r,implication,negation variable q
1. Listed below are the number of years it took for a random sample of college students to earn a bachelor's degree. At the .05 significance level, test the claim that it takes the average student...
C=75°20,a=30,c=40 2)Find the area of thr triangles .Given B=82°30',a=110,c=75
f(x)=x-1/x+1 at c=-1 What 3 conditions of continuity can I use to justify the answer
Find the 9 year growth an the annual growth factor
express this statement using quantifiers (every student in this class has taken some course in every department in the school of mathmatical sciences
express this statement using quantifiers (there is a building on the campus of some college in the united states in which every room is painted whit
There grows in the middle of a circular pond 10 ft. in diameter a reed which projects 1 ft. out of the water. When it is drawn down it just reaches the edge of the pond. How deep is the water.
Sketch the graph on the axis y=e^2x+1 and y=e^x | {"url":"http://www.wyzant.com/resources/answers","timestamp":"2014-04-16T23:56:00Z","content_type":null,"content_length":"57083","record_id":"<urn:uuid:90160cb5-4aa3-475d-aa43-41e5d587059d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Successive Approximations
Successive Approximations ADC
From the time of their invention in the 1940s until the turn of the 21st Century, Successive Approximation ADCs were the most common choice for high resolution, low cost, intermediate speed
digitization. They remain popular except for the very highest resolution digitization.
It is CRITICAL that a sample and hold stage precede a successive approximations converter! As we will see, the sampled signal must be stable for the entire duration of a conversion.
A successive approximations ADC has much in common with the children's classic, "The Story of the Three Bears." At each stage of the story, results are too hot, too cold, or just right, too big, too
small, or just right, etc. Because measurements are hardly ever "just right" (due to noise), a guess at an answer to any question will likely be too high or too low. So if one starts making large
amplitude guesses, then takes progressively smaller steps, one will always be too high or too low, but will iterate to "very close to just right."
The key here is that we synthesize a voltage using a DAC, compare the DAC's output voltage to the signal input voltage, then increase or decrease the DAC's output until the code feeding the DAC gives
the closest possible match to the potential of the input signal. For the simplest possible example, let's use a 1 bit DAC. The full scale signal will be +5 V, and 0 will correspond to 0. Thus, if the
input is over +2.5 V, the output should be with the DAC's bit set. Otherwise, the DAC output should be 0. How do we proceed?
1) Trigger the sample and hold to hold the input value.
2) Guess that the digitized value is 1. Feed that logical 1 to a 1 bit DAC which puts out 1/2 of the ADC's full range, or +2.5 V.
3) Use a comparator to look at the DAC output and the held signal from the sample and hold. If the sampled value is ≥ 2.5 V, the comparator output will be high. If the sampled value is < 2.5 V, the
comparator output will be low.
4) If the comparator output is high, the digitized value is 1; otherwise, the single bit is too big to represent the input signal, so we set it back to 0 and we're done.
A picture may help:
Note that if the DAC output is greater than the sample and hold output, the comparator output is high, while if the DAC output is lower than the sampled value, the comparator output is low.
Why stop at 1 bit? Why not 8 bits or 12 bits or 16 bits? In fact, we only need one more insight and all of these possibilities snap into focus. We need to set the most significant bit first. Let's
work that out using a 2 bit converter. Again, use +5 volts full scale and have an input that we're trying to digitize of + 3 V. The values for a 2 bit DAC are -- oh, go ahead. Fill in the table:
│DAC bits │DAC Output V │
│ 00 │ │
│ 01 │ │
│ 10 │ │
│ 11 │ │
So now we can see how the ADC would arrive at the digitized value for output.
1) Set DAC to 10. DAC puts out 2.5 V
2) Comparator determines that sampled voltage is at least as great as the voltage produced by code 10. The 1 stays set.
3) Set the next DAC bit to 1, for a coding of 11. DAC outputs 3.75 V.
4) Comparator determines that the DAC output is greater than the sampled voltage. The second bit gets turned off.
5) Final encoding: 10.
The digitization error in this case is 0.5 V; the resolution of the measurement is only 1.25 V, so the closest representation of a 3 V input we can have is 2.5 V.
Now let's make the leap to a 12 bit converter. Resolution is now 1 part in 2^12. Let's keep the conversion unipolar, full scale +5 V, and see how closely we can digitize +3 V.
Fill in the second column in the table, check your work, then work out the third and fourth columns. "MSB" is the most significant bit (what would correspond to the 1 bit example), while "LSB" is the
12^th bit. In the third column, you will have a 1 or a 0 for each bit. In the fourth column, sum the values implied by the 1's and 0's in the second and third columns (Thus, if the voltages for the 3
most significant bits were 2, 1, and 0.5 V and the binary values were 101, that would be 2*1 + 1*0 + 0.5*1 =2.5 V).
│Bit│Voltage│Value for +3V │Cumulative Approximate V │
│MSB│ │ │ │
│2 │ │ │ │
│3 │ │ │ │
│4 │ │ │ │
│5 │ │ │ │
│6 │ │ │ │
│7 │ │ │ │
│8 │ │ │ │
│9 │ │ │ │
│10 │ │ │ │
│11 │ │ │ │
│LSB│ │ │ │
There are several interesting things to see in the encoding. First, look at the pattern of the digitized voltage: 100110011001. In hexadecimal, that's 999. Why is there the repeating pattern? 3 V =
0.6 * 5 V in base 10. But 0.6*16 = 9.6, not an integer. 0.6 times ANY power of 2 (and thus for ANY grouping of bits into nybbles, bytes, or words) does not give an integer, and thus 3 V cannot be
exactly represented in binary. Rather, it is represented as a repeating "decimal" (heximal?) fraction.
Second, if you didn't carry enough significant figures with you along the way, you might have rounded off the final result with fewer non-zero bits. For example, if you round off voltage values to
1.0 mV or finer, the coding is as shown above. At 10 mV resolution, the 4 most significant bits, 1001, rounds to 2.810 V. At 10011001, the summed voltage is 2.99 V. One could then get 3.00 V by
setting the next bit, for a 9 bit approximation of 100110011. Implicitly, this gives a 12 bit encoding of 100110011000. Inadequate resolution (or noise) during digitization limits the precision of
the final encoding.
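The bit-by-bit conversion loop described above is easy to simulate in software. The following minimal sketch (added here as an illustration; it assumes an ideal DAC and comparator, and the function name is my own, not from this page) reproduces both worked examples — the 2-bit encoding of binary 10 and the 12-bit encoding 100110011001 (hex 999) for a +3 V input with a +5 V full scale:

```python
def sar_adc(vin, vref=5.0, nbits=12):
    """Ideal successive-approximations conversion: try each bit from the
    MSB down, and keep it only if the DAC output does not exceed the input."""
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)             # guess: turn the next bit on
        if trial * vref / 2**nbits <= vin:    # comparator: DAC output <= sampled input?
            code = trial                      # keep the bit; otherwise drop it
    return code

# 2-bit example: +3 V encodes as binary 10 (represents 2.5 V, error 0.5 V)
assert sar_adc(3.0, nbits=2) == 0b10

# 12-bit example: +3 V encodes as 100110011001 = 0x999
assert sar_adc(3.0, nbits=12) == 0b100110011001 == 0x999
```

Note how the loop mirrors the text: the most significant bit is decided first, and each comparator decision either keeps or clears one bit.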
Hybrid Converters
Suppose there's a signal that is always between 2.8 and 3.2 volts. The first 4 bits of the digitized word will always be 1001. Doesn't it seem wasteful (4 comparator operations!) to start fresh every
time to convert these bits when it is only the less significant bits that are changing? We could probably speed up the conversion if we didn't waste time on digitizing the slowly-varying, large
amplitude part of the potential. Engineers, being clever, have reached the same conclusion and have designed hybrid ADCs that use flash converters for the most significant bits, then successive
approximations (or sigma-delta) for less significant bits. The first 8 bits are digitized in a single cycle and feed the 8 most significant bits of the DAC. As long as the output precision of the DAC
is good enough, one can then do an analog subtraction of the DAC output from the original (sampled) signal to provide the input for the successive approximations part of the circuit. Here's a sketch:
Why digitize the 8 most-significant bits and then reconvert them to analog before subtracting? Recall that the comparators in the flash ADC are subject to error and thus they only approximately carry
out digitization. If the output of the 8 bit DAC is PRECISE to 12-18 bits, then a precision offset can be introduced before the least-significant bits are digitized. The subtraction amplifier (the
bottom operational amplifier in the figure) can be designed with gain so that differences of a few millivolts between the sampled voltage and the DAC output are presented to the second ADC at 2×,
10×, or 100× the actual difference, so the second ADC can use higher, less noise-susceptible potentials. Are there additional "gotcha"s? Yes -- and we'll discuss those under Bits, Noise, and | {"url":"http://www.asdlib.org/onlineArticles/elabware/Scheeline_ADC/ADC_ADC_SucAprrox.html","timestamp":"2014-04-17T12:31:24Z","content_type":null,"content_length":"21148","record_id":"<urn:uuid:e7dc5d3c-a641-4024-a291-b25a27f2f701>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electronics Failure Analysis for Pb- and Pb-Free Solder Joints
The Weibull distribution is arguably the most important distribution in failure analysis of leaded and lead-free solder joints. It is the first thought of someone trying to model thermal cycle, drop
shock, or other failure modes associated with through-hole and SMT assembly.
Figure 1. The Likelihood of Getting Heads in 60 Coin Tosses is Described by The Binomial Distribution
The Weibull distribution was invented by Waloddi Weibull in 1931. This invention fact was recounted by Dr. Robert Abernethy in his famous textbook on Weibull analysis, The New Weibull Handbook. This
statement may not seem unusual, until we ponder that all common distributions in statistics were discovered, not invented. The three most common statistical distributions are the Normal, Poisson and
Binomial distributions. As an example of a discovered statistical distribution, let’s consider the Binomial distribution. This distribution describes, among other things, the odds in flipping a
coin. If you flip a fair coin 60 times, you are most likely to obtain 30 heads (H) and 30 tails (T), but getting 29 H and 31 T or 32 H and 28 T would not be all that uncommon. Mathematical analysis
shows that the curve below results. If a coin flipping experiment is performed many times, this curve will faithfully predict the results. The curve is not invented it is discovered from the deep
theoretical underpinnings of the Binomial Distribution.
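As a quick numerical sketch of the coin-toss claim (my illustration, assuming a fair coin as in the text), the Binomial probabilities for 60 flips can be computed directly:

```python
from math import comb

n = 60
# P(k heads in n fair tosses) = C(n, k) / 2**n
p = lambda k: comb(n, k) / 2**n

# 30 heads is the single most likely outcome...
assert max(range(n + 1), key=p) == 30
# ...but 29 or 31 heads are nearly as likely, as the bell-shaped curve suggests
assert p(29) / p(30) > 0.96
```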
The fact that the Weibull distribution was invented suggests that Weibull selected it because it fit many types of failure data. He defined the cumulative Weibull distribution as:
F(t) = 1 − exp[−(t/η)^β]
where eta (η) is the characteristic life or the scale parameter, beta (β) is the slope, and F(t) is the cumulative fraction of failures. Weibull proposed this function because for beta less than 1, F(t) describes "infant" mortality failures. In this situation the failure rate is decreasing with time. For beta greater than 1, it describes "wear out" failures, where the failure rate is increasing with time. In electronics, we typically try to weed out infant mortality by using "burn in." For beta equal to 1, the failure rate is constant. These three scenarios are shown in the figure below.
So typically, in electronics failure analysis, we are plotting failure data versus time to determine beta and eta, typically with software like Minitab.
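A minimal numerical sketch (added here, not from the original post; the formulas are the standard two-parameter Weibull forms, and the characteristic life value is hypothetical) confirms the three failure-rate scenarios:

```python
import math

def weibull_cdf(t, beta, eta):
    # F(t) = 1 - exp(-(t/eta)**beta): cumulative fraction failed by time t
    return 1 - math.exp(-(t / eta)**beta)

def weibull_hazard(t, beta, eta):
    # h(t) = (beta/eta) * (t/eta)**(beta-1): instantaneous failure rate
    return (beta / eta) * (t / eta)**(beta - 1)

eta = 1000.0  # hypothetical characteristic life, e.g. in thermal cycles
# beta < 1: failure rate decreases with time (infant mortality)
assert weibull_hazard(200, 0.5, eta) > weibull_hazard(800, 0.5, eta)
# beta == 1: failure rate is constant
assert weibull_hazard(200, 1.0, eta) == weibull_hazard(800, 1.0, eta)
# beta > 1: failure rate increases with time (wear-out)
assert weibull_hazard(200, 3.0, eta) < weibull_hazard(800, 3.0, eta)
# at t == eta, about 63.2% of the population has failed, for any beta
assert abs(weibull_cdf(eta, 2.0, eta) - (1 - math.exp(-1))) < 1e-12
```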
In the next posting we will analyze some failure data to determine eta and beta and discuss their significance.
Weibull himself was a curious character and much of the available information on him is chronicled by Abernethy.
For sure Weibull was a vigorous man. His second wife was almost 50 years his junior and he fathered a daughter at about 80 years of age!
Dr. Ron
2 Responses to Electronics Failure Analysis for Pb- and Pb-Free Solder Joints
This entry was posted in Dr. Ron and tagged failure analysis, lead-free, Reliability, solder joint. Bookmark the permalink. | {"url":"http://circuitsassembly.com/blog/?p=3311","timestamp":"2014-04-17T19:30:09Z","content_type":null,"content_length":"31480","record_id":"<urn:uuid:f902f970-8bff-4161-9015-8e01de91e3e5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Jerald on Thursday, November 8, 2012 at 9:26pm.
Everett is reading a book for his language arts class. He read 1/3 of the book on Saturday, 3/8 of the book on Sunday and 1/4 of the book on Monday. Which procedure can Everett use to find the total
fraction of the book he has read?
A.Write the equivalent fractions using a common denominator then subtract the fractions from 1
B.Write equivalent fractions using a common denominator then find the sum of the fractions
C.Find the sum of 1/3 and 3/8 then subtract 1/4
D. Add the numerators of all 3 fractions then add the denominators
• Math - Ms. Sue, Thursday, November 8, 2012 at 9:28pm
• Math - Jerald, Thursday, November 8, 2012 at 9:33pm
Robert is cutting wooden strips from a foot long piece of wood for his woodworking class. He needs a strip of wood that is 3/8 foot long and another strip that is 1/2 foot long. Which of the
following strips is shaded to show the total amount of wood Robert needs for his class?
Amanda made 3 hair bows using 2/3 yard ribbon each. She made 2 bows using 1/2 yard each. Which shows the amount of ribbon Amanda used to make the bows?
• Math - Ms. Sue, Thursday, November 8, 2012 at 9:39pm
Both are right! :-)
• Math - Jerald, Thursday, November 8, 2012 at 9:40pm
Thank you
• Math - Ms. Sue, Thursday, November 8, 2012 at 9:46pm
You're welcome. | {"url":"http://www.jiskha.com/display.cgi?id=1352428008","timestamp":"2014-04-21T00:41:00Z","content_type":null,"content_length":"9899","record_id":"<urn:uuid:9d359a60-246e-4de8-95ac-3a174ee33df5>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2010 [00017]
[Date Index] [Thread Index] [Author Index]
Re: How to assume that a function is positive?
• To: mathgroup at smc.vnet.net
• Subject: [mg114333] Re: How to assume that a function is positive?
• From: ADL <alberto.dilullo at tiscali.it>
• Date: Wed, 1 Dec 2010 02:12:29 -0500 (EST)
• References: <id2emr$d88$1@smc.vnet.net>
On 30 Nov, 10:04, Sam Takoy <sam.ta... at yahoo.com> wrote:
> Hi,
> How do I let Mathematica know that a function f is positive for all
> arguments? For example, how do I make the following work (I think my
> intention is clear):
> Assuming[f[x] > 0, (f[x + y]^2)^(1/2) // Simplify]
> Many thanks in advance,
> Sam
This fact is even more puzzling after observing the following results:
Simplify[Sqrt[f[x]^2], f[x] > 0]
Out[]= f[x]
Simplify[Sqrt[f[x]^2], !f[x] < 0]
Out[]= f[x]
Simplify[Sqrt[f[x]^2], f[x]>0]
Out[]= Sqrt[f[x] ^2]
Simplify[Sqrt[f[x]^2], !f[x] < 0]
Out[]= Sqrt[f[x] ^2]
In other terms, not only any way to impose the positivity of f appears
to fail, but it also *breaks* Mathematica (8) capability to perform
the simplifications!
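As an aside (added here, not part of the original thread), the behavior the poster wants can be illustrated in SymPy, where the assumption is attached to the symbol itself rather than passed to the simplifier:

```python
import sympy as sp

# stand-in for f[x]: a symbol declared positive, as the poster intends
u = sp.Symbol('u', positive=True)
assert sp.sqrt(u**2) == u    # simplifies automatically under the assumption

# without any sign information, the square root must stay unevaluated
v = sp.Symbol('v')
assert sp.sqrt(v**2) != v
```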
Can anyone explain this behavior? | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00017.html","timestamp":"2014-04-17T01:01:52Z","content_type":null,"content_length":"25980","record_id":"<urn:uuid:822ae1c7-47b4-4017-862c-e996c1f4053a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eliminate A from each pair of parametric equations x = 3sinA y= 6sin2A
is it like that or is it sin^2A
its sin2A 2sinAcosA
from x= 3sin(a), what would sin(a) be?
.. right... ... is x= 3sin(a), what you have or x = 3cos(a)?
the question is x = 3sinA
Btw, what do you mean by eliminate? Are you to convert this in to a rectangular equation with y in terms of x or are you to have both x and y written in terms of something else that isn't A?
write the equation simply without A
@genius12 pretty much, is just conversion to rectangular
So y in terms of x and not A right?
\[\bf x=3\sin(A) \implies \frac{x}{3}=\sin(A) \implies \sin^{-1} \left( \frac{x}{3}\right)=A\]Plug this value of A in y = 6sin(2A) and you're done. @AonZ
is that readable
well he shudnt get the asnwer too easily so its all good :)
Both @dan815 and mine rectangular forms work. Except his makes it more obvious that cosine and sine can be used parametrically to give an ellipse as dan's rectangular form is the equation of an
^ true
right :)
|dw:1369437893899:dw| i dont it get when u went into that part
Rearrange x and y so that:\[\bf \frac{y}{4x}=\cos(A) \ and \ \frac{x}{3}=\sin(A)\]Squaring both sides of both equations gives:\[\bf \left( \frac{y}{4x} \right)^2=\cos^2(A) \ and \ \left( \frac{x}{3} \right)^2=\sin^2(A)\]Adding both equations and using the identity cos^2(A) + sin^2(A) = 1 gives u the rectangular form. @AonZ
I just realised, @dan815 rectangular form actually won't be an ellipse even thought i looks like it will be lol. There is an x in the denominator under y which means it can't be the equation of an ellipse.
not sure you can get rid of the "x" though, I got the same :S
u dont need to they just want an equation without A
right, so I notice
thank you so much :D understood @genius12 way much better
was hard to read dan's writting :P
if u wanna see a nice graph :)
last question :)
just post in the channel, so we can all see it and thus help :)
got a link :P http://openstudy.com/study#/updates/519ff981e4b04449b221f091 but Question is Eliminate A from each pair of parametric equations x = 2tan( A/2) y = cosA
2 ways to go from cosa to a/2 or other way, which trig u know
theres also an identity u can use straight from tan to a double angle
see if that helps
look at the formula for Cos(2a) and that tan^2a either one of those will help you simplify and eliminate a | {"url":"http://openstudy.com/updates/519fef85e4b04449b221ebea","timestamp":"2014-04-20T06:28:35Z","content_type":null,"content_length":"517597","record_id":"<urn:uuid:5412d227-9754-4ef6-970c-4eee08373e91>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
evaluating indefinite integral | {"url":"http://openstudy.com/updates/5040bd1fe4b0dc8c69e7e2b5","timestamp":"2014-04-16T07:52:29Z","content_type":null,"content_length":"102760","record_id":"<urn:uuid:c708719f-6869-478a-8378-daa85af2b769>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Medinah Math Tutor
Find a Medinah Math Tutor
...I plan to get my certificate in Special Education as well. I have been exposed to people with special needs throughout most of my life. They are some of the most inspiring people I know and
they deserve to be treated with respect and deserve the same type of treatment as everyone else.
19 Subjects: including geometry, ACT Math, trigonometry, dyslexia
My name is Aaron. I am a resident of Roselle with my wife and newborn. I am a former mathematics teacher (and Mathletes Coach) and currently working as an Actuarial Analyst in downtown Chicago.
10 Subjects: including statistics, algebra 1, algebra 2, calculus
...For photography, I have been a lifelong photographer as a hobby. I learned on manual rangefinder cameras in the 1970's. I was a yearbook photographer in middle and high school; and involved in
dark room work.
37 Subjects: including algebra 2, precalculus, GED, SAT math
...I am quite flexible as a teacher, depending on what the student requires since every student is different! I can be laid back or strict, but I am always well organized and focused on the
subject at hand. In my experiences, I have learned how to gauge what a student needs to succeed and I always strive to provide him or her with just that.
26 Subjects: including trigonometry, ACT Math, discrete math, ASVAB
Hello, my name is Lei Sun. I grow up in Beijing, China. I have 5 years of teaching experience in Asia and 3 years of tutoring experience in university of Minnesota.
4 Subjects: including algebra 1, algebra 2, ACT Math, Chinese
Nearby Cities With Math Tutor
Addison, IL Math Tutors
Bloomingdale, IL Math Tutors
Dundee, IL Math Tutors
Elk Grove Village Math Tutors
Eola, IL Math Tutors
Fox Valley Math Tutors
Fox Valley Facility, IL Math Tutors
Golf, IL Math Tutors
Hines, IL Math Tutors
Itasca, IL Math Tutors
Mooseheart Math Tutors
Roselle, IL Math Tutors
Sleepy Hollow, IL Math Tutors
Wayne, IL Math Tutors
Western, IL Math Tutors | {"url":"http://www.purplemath.com/medinah_math_tutors.php","timestamp":"2014-04-20T19:20:03Z","content_type":null,"content_length":"23394","record_id":"<urn:uuid:1d33bf36-d8ad-410f-befd-9ebfb554b64d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ramsey theory
A branch of mathematics that asks questions such as: Can order always be found in what appears to be disorder? If so, how much can be found and how big a chunk of disorder is needed to find a
particular amount of order in it? Ramsey theory is named after the English mathematician Frank P. Ramsey (1904–1926) who started the field in 1928 while wrestling with a problem in logic. (Frank's
one-year-younger brother, Arthur, served as Archbishop of Canterbury from 1961 to 1974.) His life was cut short in 1930, at the age of 26, following a bout of jaundice. Ramsey suspected that if a
system was big enough, even if it seemed to be disorderly to an arbitrary degree, it was bound to contain pockets of order from which information about the system could be gleaned.
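The flavor of these questions can be made concrete with the classic "party problem," R(3,3) = 6 (an illustration added here, not part of the original entry): among any six people, some three are mutual acquaintances or mutual strangers, while five people are not enough to force this. A brute-force check over all 2-colorings of the complete graphs K_6 and K_5:

```python
from itertools import combinations, product

def has_mono_triangle(n, colors):
    """colors assigns 0/1 (say, acquainted/strangers) to each edge of K_n."""
    edge = dict(zip(combinations(range(n), 2), colors))
    return any(edge[(a, b)] == edge[(a, c)] == edge[(b, c)]
               for a, b, c in combinations(range(n), 3))

# every 2-coloring of the 15 edges of K_6 contains a one-color triangle...
assert all(has_mono_triangle(6, c) for c in product((0, 1), repeat=15))
# ...but K_5 can be colored with no one-color triangle, so R(3,3) = 6
assert any(not has_mono_triangle(5, c) for c in product((0, 1), repeat=10))
```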
Related category | {"url":"http://www.daviddarling.info/encyclopedia/R/Ramsey_theory.html","timestamp":"2014-04-16T10:12:40Z","content_type":null,"content_length":"6630","record_id":"<urn:uuid:8dc88b02-361f-47d3-885f-d65d50385e8b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Hyperbola 3
April 16th 2010, 05:06 PM #1
Junior Member
Apr 2010
[SOLVED] Hyperbola 3
Prove that the locus of midpoints of parallel chords of xy = c^2 is a diameter.
Is anyone able to give me some hints on how to do this question?
Hello UltraGirl
Consider the chord joining $P\;\Big(cp,\frac cp\Big)$ to $Q\;\Big(cq,\frac cq\Big)$.
You can easily show that its gradient is $-\frac1{pq}$.
A set of parallel chords will all have the same gradient, $m$, say. So:
$-\frac1{pq}=m$, where $m$ is a constant. Call this equation (1).
The mid-point $(x,y)$ of $PQ$ satisfies:
$x = \tfrac12c(p+q)$
$y = \tfrac12(\frac cp+\frac cq\Big)$
Now, using equation (1), find an equation connecting $x, y$ and $m$, and show that this, the locus of the mid-point of $PQ$, is a straight line through the origin.
Can you complete it now?
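A quick numerical sanity check of the hint above (my addition, with arbitrary sample values for c and the common gradient m): generate several parallel chords of xy = c² and confirm each midpoint lies on the line y = −mx through the origin, i.e. a diameter:

```python
# a chord of xy = c^2 through P = (cp, c/p) and Q = (cq, c/q) has gradient -1/(pq),
# so fixing the gradient m forces q = -1/(m p)
c, m = 2.0, -0.25              # arbitrary illustrative values
for p in (0.5, 1.3, 2.7):
    q = -1.0 / (m * p)
    mid_x = (c*p + c*q) / 2
    mid_y = (c/p + c/q) / 2
    # the midpoint satisfies y = -m x: a straight line through the origin
    assert abs(mid_y - (-m) * mid_x) < 1e-9
```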
April 17th 2010, 10:23 AM #2 | {"url":"http://mathhelpforum.com/geometry/139563-solved-hyperbola-3-a.html","timestamp":"2014-04-19T07:34:04Z","content_type":null,"content_length":"36856","record_id":"<urn:uuid:f1f6a301-6c71-40d1-b926-aca8fb5afd57>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: July 2010 [00667]
[Date Index] [Thread Index] [Author Index]
Re: Help with an ODE
• To: mathgroup at smc.vnet.net
• Subject: [mg111304] Re: Help with an ODE
• From: Leonid Shifrin <lshifr at gmail.com>
• Date: Tue, 27 Jul 2010 04:15:59 -0400 (EDT)
My guess is that your equation does not have a solution with your boundary
condition. If you use NDSolve, it will bark on derivative at zero being
What I can do is to show you how for example your equation can be solved
for y[0]=1, y'[0]=1 and a=1. We will get an analytical solution for x =
x[y], but
will have to solve it numerically for y = y[x].
There are a number of steps. This is your equation
In[77]:= eq = y''[x]==(1+y'[x]^2+ a y[x]^4)/y[x]
Out[77]= (y^\[Prime]\[Prime])[x]==(1+a y[x]^4+(y^\[Prime])[x]^2)/y[x]
1. First, equation does not depend on x explicitly, so you
can make a substitution y'[x] = f[y]. Then, y''[x] = 1/2 d/dy f[y]^2.
Making another
substitution f[y]^2 = phi[y], you get:
In[81]:= eq1=eq/.{y'[x]->f[y],y''[x]:> 1/2D[f[y]^2,y],y[x]-> y}
Out[81]= f[y] (f^\[Prime])[y]==(1+a y^4+f[y]^2)/y
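As an editorial aside (not in the original message), the chain-rule identity behind step 1 is:

```latex
y''(x) \;=\; \frac{d}{dx}\,f\bigl(y(x)\bigr)
       \;=\; f'(y)\,y'(x)
       \;=\; f(y)\,f'(y)
       \;=\; \frac12\,\frac{d}{dy}\bigl[f(y)^2\bigr],
```

which is exactly why the substitution f[y]^2 = g[y] below turns the second-order ODE into a first-order linear equation in g.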
2. Setting now g[y] == f^2[y] you will have:
In[88]:= eq2 =Apart/@(eq1/.{f[y]
Out[88]= (g^\[Prime])[y]/2==(1+a y^4)/y+g[y]/y
3. It is easy to solve, with the solution being:
In[91]:= soly = a y^4 + C[1] y^2 - 1
Out[91]= -1 + a y^4 + y^2 C[1]
where C[1] is some integration constant. You can check that it is indeed a solution:
In[93]:= eq2 /. {g[y] -> soly, g'[y] -> D[soly, y]} // Simplify
Out[93]= True
4. Now you recall that g[y] = (dy/dx)^2, and therefore:
In[95]:= eq3 =y'[x]==Sqrt[soly]
Out[95]= (y^\[Prime])[x]==Sqrt[-1+a y^4+y^2 C[1]]
(actually plus or minus Sqrt, and later I will use the minus sign).
5. This is the equation which gives then x as a function of y.
In[97]:= HoldForm[-x+C[2]==-Integrate[1/Sqrt[a t^4+C[1]t^2-1],{t,y,Infinity}]]
Out[97]= (* Suppressed *)
The reason that I did it in this way rather than from 0 to y with a plus
sign is
that y[x] will never be zero.
6. The integral can be computed:
In[107]:= int = Integrate[1/Sqrt[a t^4+C[1] t^2-1],t]
Out[107]= -((I Sqrt[1+(2 a t^2)/(C[1]-Sqrt[4 a+C[1]^2])] Sqrt[1+(2 a
t^2)/(C[1]+Sqrt[4 a+C[1]^2])] EllipticF[I ArcSinh[Sqrt[2] t
Sqrt[a/(C[1]+Sqrt[4 a+C[1]^2])]],(C[1]+Sqrt[4 a+C[1]^2])/(C[1]-Sqrt[4
a+C[1]^2])])/(Sqrt[2] Sqrt[-1+a t^4+t^2 C[1]] Sqrt[a/(C[1]+Sqrt[4 a+C[1]^2])]))
7. Integral at infinity can be computed as a series expansion around y = 1/q
when q->0:
intinf = Normal[
Assuming[q > 0, Simplify[Series[int /. t -> 1/q, {q, 0, 0}]]]]
Out[109]= (Sqrt[2] Sqrt[a/(
C[1] - Sqrt[
4 a + C[1]^2])] (EllipticK[(
2 Sqrt[4 a + C[1]^2])/(-C[1] + Sqrt[
4 a + C[1]^2])] + ((-I a +
Sqrt[-(a^2/(C[1] + Sqrt[4 a + C[1]^2])^2)] (C[1] + Sqrt[
4 a + C[1]^2])) (-1 +
Sqrt[(C[1] - Sqrt[4 a + C[1]^2])/(C[1] + Sqrt[4 a + C[1]^2])]
Sqrt[(C[1] + Sqrt[4 a + C[1]^2])/(
C[1] - Sqrt[4 a + C[1]^2])]) EllipticK[(
C[1] + Sqrt[4 a + C[1]^2])/(C[1] - Sqrt[4 a + C[1]^2])])/(
2 a)))/Sqrt[a]
8. Integral on the lower limit you get just substituting t->y:
In[110]:= inty = int /. t -> y
Out[110]= -((
I Sqrt[1 + (2 a y^2)/(C[1] - Sqrt[4 a + C[1]^2])] Sqrt[
1 + (2 a y^2)/(C[1] + Sqrt[4 a + C[1]^2])]
EllipticF[I ArcSinh[Sqrt[2] y Sqrt[a/(C[1] + Sqrt[4 a + C[1]^2])]], (
C[1] + Sqrt[4 a + C[1]^2])/(C[1] - Sqrt[4 a + C[1]^2])])/(
Sqrt[2] Sqrt[-1 + a y^4 + y^2 C[1]] Sqrt[a/(
C[1] + Sqrt[4 a + C[1]^2])]))
9. The final result for the integral then:
In[111]:= intfull = intinf - inty
Out[111]= (I Sqrt[1+(2 a y^2)/(C[1]-Sqrt[4 a+C[1]^2])] Sqrt[1+(2 a
y^2)/(C[1]+Sqrt[4 a+C[1]^2])] EllipticF[I ArcSinh[Sqrt[2] y
Sqrt[a/(C[1]+Sqrt[4 a+C[1]^2])]],(C[1]+Sqrt[4 a+C[1]^2])/(C[1]-Sqrt[4
a+C[1]^2])])/(Sqrt[2] Sqrt[-1+a y^4+y^2 C[1]] Sqrt[a/(C[1]+Sqrt[4
a+C[1]^2])])+(Sqrt[2] Sqrt[a/(C[1]-Sqrt[4 a+C[1]^2])] (EllipticK[(2 Sqrt[4
a+C[1]^2])/(-C[1]+Sqrt[4 a+C[1]^2])]+((-I a+Sqrt[-(a^2/(C[1]+Sqrt[4
a+C[1]^2])^2)] (C[1]+Sqrt[4 a+C[1]^2])) (-1+Sqrt[(C[1]-Sqrt[4
a+C[1]^2])/(C[1]+Sqrt[4 a+C[1]^2])] Sqrt[(C[1]+Sqrt[4
a+C[1]^2])/(C[1]-Sqrt[4 a+C[1]^2])]) EllipticK[(C[1]+Sqrt[4
a+C[1]^2])/(C[1]-Sqrt[4 a+C[1]^2])])/(2 a)))/Sqrt[a]
10. For a = 1, condition y'[0] = 1 gives (since y[0] = 1) :
In[105]:= 1 == Sqrt[-1 + a + C[1]]
Out[105]= 1 == Sqrt[-1 + a + C[1]]
which gives C[1] -> 1 for a == 1.
11. We have then :
In[119]:= -x+C[2]==intfull
Out[119]= -x+C[2]==(I Sqrt[1+(2 a y^2)/(C[1]-Sqrt[4 a+C[1]^2])] Sqrt[1+(2 a
y^2)/(C[1]+Sqrt[4 a+C[1]^2])] EllipticF[I ArcSinh[Sqrt[2] y
Sqrt[a/(C[1]+Sqrt[4 a+C[1]^2])]],(C[1]+Sqrt[4 a+C[1]^2])/(C[1]-Sqrt[4
a+C[1]^2])])/(Sqrt[2] Sqrt[-1+a y^4+y^2 C[1]] Sqrt[a/(C[1]+Sqrt[4
a+C[1]^2])])+(Sqrt[2] Sqrt[a/(C[1]-Sqrt[4 a+C[1]^2])] (EllipticK[(2 Sqrt[4
a+C[1]^2])/(-C[1]+Sqrt[4 a+C[1]^2])]+((-I a+Sqrt[-(a^2/(C[1]+Sqrt[4
a+C[1]^2])^2)] (C[1]+Sqrt[4 a+C[1]^2])) (-1+Sqrt[(C[1]-Sqrt[4
a+C[1]^2])/(C[1]+Sqrt[4 a+C[1]^2])] Sqrt[(C[1]+Sqrt[4
a+C[1]^2])/(C[1]-Sqrt[4 a+C[1]^2])]) EllipticK[(C[1]+Sqrt[4
a+C[1]^2])/(C[1]-Sqrt[4 a+C[1]^2])])/(2 a)))/Sqrt[a]
12. At x == 0, and a == 1, y[0] == 1, and we get
c2rule = C[2]-> intfull/.{y->1,C[1]->1,a->1}
C[2]->-Sqrt[1/2 (1+Sqrt[5]) (-1-2/(1-Sqrt[5])) (1+2/(1+Sqrt[5]))]
EllipticF[I ArcSinh[Sqrt[2/(1+Sqrt[5])]],(1+Sqrt[5])/(1-Sqrt[5])]+I
Sqrt[2/(-1+Sqrt[5])] EllipticK[(2 Sqrt[5])/(-1+Sqrt[5])]
13. The numerical value of this constant:
In[118]:= N[C[2]/.c2rule]
Out[118]= 0.941458+6.66134*10^-16 I
14. Finally, we define our equation which we want to reverse:
In[133]:= finaleq[x]
Out[133]= Sqrt[1/2 (1+Sqrt[5]) (-1-2/(1-Sqrt[5])) (1+2/(1+Sqrt[5]))]
EllipticF[I ArcSinh[Sqrt[2/(1+Sqrt[5])]],(1+Sqrt[5])/(1-Sqrt[5])]+(I
Sqrt[1/2 (1+Sqrt[5])] Sqrt[1+(2 y^2)/(1-Sqrt[5])] Sqrt[1+(2
y^2)/(1+Sqrt[5])] EllipticF[I ArcSinh[Sqrt[2/(1+Sqrt[5])]
15. Here is the function which solves this equation numerically, to get y =
y[x] (as we only have x = x[y]):
sol[x_?NumericQ] := Re[y /. FindRoot[Evaluate[finaleq[x]], {y, 1.5}]]
16. To compare it with something, let us solve the equation numerically by NDSolve:
res = NDSolve[{y''[x]==(1+y'[x]^2+ a y[x]^4)/y[x], y[0]==1, y'[0]==1}/.a->1, y, {x, 0, 1}]
During evaluation of In[139]:= NDSolve::ndsz: At x == 0.941458301631319`,
step size is effectively zero; singularity or stiff system suspected. >>
Out[140]= {{y->InterpolatingFunction[{{0.,0.941458}},<>]}}
We see again the same constant value as our C[2] in the analytical solution
as a value where something interesting is happening (a singularity). Here we
define a function representing a solution:
In[141]:= fn = First[y/.res]
Out[141]= InterpolatingFunction[{{0.,0.941458}},<>]
17. Now we can plot together our two solutions:
Plot[{sol[x], fn[x]}, {x, 0, 1}]
(* Output suppressed *)
Plot[sol[x] - fn[x], {x, 0, 1}]
(* Output suppressed *)
You can see that these are the same curves.
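For readers without Mathematica, the same numerical check can be sketched with SciPy (this code is not part of the original thread; it uses a = 1 and y(0) = y'(0) = 1 as above, and stops short of the singularity the thread locates near x ~ 0.94 so the solution stays finite):

```python
from scipy.integrate import solve_ivp

a = 1.0

def rhs(x, u):
    """u = (y, y'); the ODE y'' = (1 + y'^2 + a*y^4)/y as a first-order system."""
    y, yp = u
    return [yp, (1.0 + yp**2 + a * y**4) / y]

# Integrate with y(0) = y'(0) = 1 on [0, 0.9]; since y'' > 0 the solution
# grows monotonically and blows up shortly after x = 0.9.
sol = solve_ivp(rhs, (0.0, 0.9), [1.0, 1.0], rtol=1e-9, atol=1e-9)
```

Plotting `sol.y[0]` against `sol.t` reproduces the steep rise toward the singularity seen in the InterpolatingFunction above.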
This approach can be generalized to different values of <a> and initial
conditions. While
I did not do the detailed analysis for your conditions, it is quite clear
that there is no simple solution
there because we have an equation like
C[2]-x == h[y,C[1]]
(by h I denoted the integral over y), and since y[0]==y[1], you have
C[2] == h[y[0],C[1]]
C[2]-1 == h[y[1],C[1]],
which clearly has no solution since y[0]=y[1].
Now, it could be that at x = 0 we take one solution (with plus sign), and at
x =1 - another one (or vice versa),
so that you will have
C[2] == h[y[0],C[1]]
C[2]-1 == - h[y[1],C[1]],
This case I did not look into.
Anyways, you can use the above considerations to perform further analysis of
your equation.
Hope this helps.
On Fri, Jul 23, 2010 at 3:13 PM, Sam Takoy <sam.takoy at yahoo.com> wrote:
> I would GREATLY appreciate help symbolically solving the following ODE
> system.
> DSolve[y''[x] == (1 + y'[x]^2 + a y[x]^4)/y[x], y, x],
> with y[0] = y[1] = 1
> and "a" is a constant that satisfies
> Integrate[R[x]^2, {x, 0, 1}] = 1/(10*a);
> I can't quite pull it off with my current Mathematica skill level.
> Many many thanks in advance!
> PS: The solution is a smooth function that looks like a parabola.
> Thanks again!
> Sam
Non-Negative Matrix Factorization
16 Non-Negative Matrix Factorization
This chapter describes Non-Negative Matrix Factorization, the unsupervised algorithm used by Oracle Data Mining for feature extraction.
Non-Negative Matrix Factorization (NMF) is described in the paper "Learning the Parts of Objects by Non-Negative Matrix Factorization" by D. D. Lee and H. S. Seung in Nature (401, pages 788-791, 1999).
This chapter contains the following topics:
About NMF
Non-negative Matrix Factorization (NMF) is a state of the art feature extraction algorithm. NMF is useful when there are many attributes and the attributes are ambiguous or have weak predictability.
By combining attributes, NMF can produce meaningful patterns, topics, or themes.
NMF is often useful in text mining. In a text document, the same word can occur in different places with different meanings. For example, "hike" can be applied to the outdoors or to interest rates.
By combining attributes, NMF introduces context, which is essential for predictive power:
"hike" + "mountain" -> "outdoor sports"
"hike" + "interest" -> "interest rates"
How Does it Work?
NMF decomposes multivariate data by creating a user-defined number of features. Each feature is a linear combination of the original attribute set; the coefficients of these linear combinations are non-negative.
NMF decomposes a data matrix V into the product of two lower rank matrices W and H so that V is approximately equal to W times H. NMF uses an iterative procedure to modify the initial values of W and
H so that the product approaches V. The procedure terminates when the approximation error converges or the specified number of iterations is reached.
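As an illustration of the iterative procedure described above, here is a minimal NumPy sketch of the classical Lee-Seung multiplicative updates. This is illustrative only; Oracle Data Mining's internal implementation is not shown here and may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=200, eps=1e-9):
    """Factor a non-negative matrix V (m x n) into W (m x k) and H (k x n)
    with Lee-Seung multiplicative updates, so that V is approximately W @ H."""
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; entries stay >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; entries stay >= 0
    return W, H

V = rng.random((20, 10))          # toy non-negative data matrix
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H)   # approximation error after the iterations
```

Because the updates are multiplicative, W and H remain non-negative throughout, and the approximation error decreases as the iterations proceed.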
During model apply, an NMF model maps the original data into the new set of attributes (features) discovered by the model.
Data Preparation for NMF
Automatic Data Preparation normalizes numerical attributes for NMF.
When there are missing values in columns with simple data types (not nested), NMF interprets them as missing at random. The algorithm replaces missing categorical values with the mode and missing
numerical values with the mean.
When there are missing values in nested columns, NMF interprets them as sparse. The algorithm replaces sparse numerical data with zeros and sparse categorical data with zero vectors.
If you choose to manage your own data preparation, keep in mind that outliers can significantly impact NMF. Use a clipping transformation before binning or normalizing. NMF typically benefits from
normalization. However, outliers with min-max normalization cause poor matrix factorization. To improve the matrix factorization, you need to decrease the error tolerance. This in turn leads to
longer build times.
NMF has been found to provide superior text retrieval when compared to SVD and other traditional decomposition methods. NMF takes as input a term-document matrix and generates a set of topics that represent weighted sets of co-occurring terms. The discovered topics form a basis that provides an efficient representation of the original documents.
27 Apr 21:33 2013
partially applied data constructor and corresponding type
TP <paratribulations <at> free.fr>
2013-04-27 19:33:17 GMT
I wonder if there is a way to do the following.
Imagine that I have a dummy type:
data Tensor = TensorVar Int String
where the integer is the order, and the string is the name of the tensor.
I would like to make with the constructor TensorVar a type "Vector", such
that, in "pseudo-language":
data Vector = TensorVar 1 String
Because a vector is a tensor of order 1.
Is this possible? I have tried type synonyms and newtypes without any success.
Thanks a lot,
Science's AIDS Prevention and Vaccine Research Site
AIDScience Vol. 3, No. 21, 2003
Calculating the potential epidemic-level impact of therapeutic vaccination on the San Francisco HIV epidemic
Sally Blower,^1 Ronald B. Moss,^2 Eduardo Fernandez-Cruz^3
^1Department of Biomathematics, David Geffen School of Medicine, University of California at Los Angeles, California, United States
^2The Immune Response Corporation, Carlsbad, California, United States
^3Comunidad de Madrid, Hospital General Universitario Gregorio Marañón, Madrid, Spain
Address correspondence to: sblower@mednet.ucla.edu
The level of plasma HIV-1 RNA has been shown to predict the rate of clinical disease progression in HIV-1 infected individuals (1, 2). Furthermore, the level of HIV-1 RNA predicts the probability of
transmission (3). Therefore, therapies that decrease HIV-1 RNA can potentially benefit both individual patients as well as reduce epidemic severity. Mathematical modeling analyses have shown that
combination antiretroviral (ARV) therapies can significantly reduce the number of new infections, and the AIDS death rate (4-6). Mathematical modeling has also been coupled with cost-effectiveness
analysis to estimate the effectiveness and efficiency of U.S. HIV prevention efforts (7). Here, we developed a new mathematical model to examine the potential epidemic-level impact of the addition of
an HIV-1 therapeutic vaccine to ARV therapies. Specifically, we calculated the potential epidemic-level impact of a therapeutic vaccine when used with ARV in terms of (i) the number of HIV infections
that could be prevented per year, and (ii) the number of AIDS deaths that could be prevented per year in the gay community in San Francisco.
In order to make these calculations we developed a new mathematical model consisting of six ordinary differential equations. The model allows HIV-infected individuals to receive ARV at any time, but
they would only be eligible to receive the therapeutic vaccine in combination with ARV in the first stage of infection. We defined this vaccination eligibility period to last from 0 to 5 years
post-infection. We modeled the effects of ARV and the therapeutic vaccine in terms of both increasing survival and reducing the probability of transmission (by reducing infectiousness). We
parameterized our model to reflect the HIV epidemic in the gay community in San Francisco, where HIV prevalence is currently 30%. We assumed a high ARV usage rate of 90%. We modeled two types of
vaccine: (i) a vaccine that increased survival slightly but had no effect on transmission, and (ii) a vaccine that increased survival slightly and also reduced infectiousness. We used parameter
values for the vaccines that reflected the results from a recently reported clinical trial STIR 2102 of a therapeutic vaccine used in combination with ARV (8). In this trial the therapeutic vaccine Remune appeared to add one additional year to survival, and decreased viral load by 37% (8).
Figure 1. Predicted potential epidemic-level impact of an HIV-1 therapeutic vaccine, when combined with a 90% rate of ARV therapy, on the HIV epidemic in the gay community in San Francisco.
A mathematical model (see Supporting Online Material) was used to predict the effect of the therapeutic vaccine over time on (A) the cumulative number of HIV infections prevented, and (B) the
cumulative number of AIDS deaths prevented. The epidemic-level effects of two types of vaccine are shown: (i) a vaccine that increased survival by one year but had no effect on transmission (data in
blue), and (ii) a vaccine that increased survival by one year and also reduced transmission (data in red). We assumed that patients treated only with ARV would survive on average 15 years; all other
parameter values are given in the text.
The model tracked the transmission dynamics of HIV in the presence of ARV and a therapeutic vaccine. We used the model to calculate the number of AIDS deaths prevented per year and the number of HIV
infections prevented per year. The model consisted of 6 ordinary differential equations (see Supporting Online Material). The potential treatment strategies were stratified on the time since
infection: HIV-infected individuals could receive ARV at any time, early or late stage of infection, but they could only receive the therapeutic vaccine in combination with ARV in the first stage of
infection, set to be 5 years post-infection in the current analysis but which could be of any length. Thus, two treatment strategies were modeled: ARV alone or ARV plus the therapeutic vaccine. The
model allowed ARV and the therapeutic vaccine to independently alter both the disease progression rate and the probability of treated HIV-infected individuals transmitting HIV by reducing their viral load.
The model tracked the temporal dynamics of the number of individuals in each of six states that are specified by the six state variables: the number of susceptible uninfected individuals (X), the
number of untreated individuals in the early stage of HIV-infection (Y[E]^U), the number of treated (with ARV only) individuals in the early stage of HIV-infection (Y[E]^A), the number of treated
(with ARV and therapeutic vaccine) individuals in the early stage of HIV-infection (Y[E]^AV), the number of untreated individuals in late stage of HIV-infection (Y[L]^U), and the number of treated
(with ARV only) individuals in late stage of HIV-infection (Y[L]^A). Individuals in the five different HIV-infection states can progress to AIDS at different rates and transmit HIV with different
probabilities, due to differences in their viral load (See Supporting Online Material for further details).
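The paper's six equations appear in its Supporting Online Material and are not reproduced here. As a purely illustrative sketch of how such compartment models are integrated numerically, consider a reduced three-state system with only susceptible (X), untreated infected (Y_U), and treated infected (Y_A) classes; every parameter value below is a made-up round number, not one of the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only -- NOT the paper's values.
c, beta_U, beta_A = 2.0, 0.10, 0.05   # contact rate; untreated/treated transmission prob.
F = 0.9                               # fraction of new infections entering treatment
mu, v_U, v_A = 1/30, 1/10, 1/15       # background exit rate; untreated/treated death rates
pi = 200.0                            # entry rate into the susceptible pool

def rhs(t, s):
    X, YU, YA = s
    N = X + YU + YA
    lam = c * (beta_U * YU + beta_A * YA) / N   # per-susceptible infection rate
    return [pi - (lam + mu) * X,
            (1 - F) * lam * X - (mu + v_U) * YU,
            F * lam * X - (mu + v_A) * YA]

sol = solve_ivp(rhs, (0.0, 20.0), [6000.0, 500.0, 1500.0], max_step=0.1)
```

Quantities such as cumulative infections or deaths are then obtained by integrating the corresponding flow terms over the simulation horizon, exactly as the authors do for their six-state model.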
We parameterized the model to reflect the transmission dynamics of HIV in the gay community in San Francisco (4), which has a 30% prevalence of HIV infection. To reflect the situation in that city we
specified high treatment rates: the fraction of early HIV infections treated only with ARV (F[E]^A) to be 0.45 (i.e., 45%), the fraction of early HIV infections treated with ARV plus vaccine (F[E]^
AV) to be 0.45 (i.e., 45%) and the fraction of late HIV infections treated only with ARV (F[L]^A) to be 0.8 (80%). We assumed that 15-18% of treated patients would give up treatment per year; hence,
we set g[E]^A = 0.15 per year, g[E]^AV = 0.18 per year, and g[L]^A = 0.15 per year. We set the early stage of infection when individuals would be eligible for the therapeutic vaccine to be 5 years
(hence 1/v[E]^U = 5 years and 1/v[L]^U = 5 years), the average survival time if on ARV only and received treatment early (1/v[E]^A) to be 15-20 years, the average survival time if on ARV and the
therapeutic vaccine and received treatment early (1/v[E]^AV) to be 16-21 years, and the average survival time if on ARV only and received treatment late (1/v[L]^A) to be 10 years. We assumed that the
transmission probability of untreated individuals would be 0.1 (hence, β[E]^U = β[L]^U = 0.1), that ARV would reduce transmissibility by 50% (hence β[E]^A = β[L]^A = 0.05), and that the vaccine, if
it reduced infectiousness by reducing viral load, would reduce transmissibility by an additional 37% (hence, β[E]^AV = 0.013).
Figure 2. Predicted potential epidemic-level impact of an HIV-1 therapeutic vaccine, when combined with a 90% rate of ARV therapy, on the HIV epidemic in the gay community in San Francisco.
A mathematical model (see Supporting Online Material) was used to predict the effect of the therapeutic vaccine over time on (A) the cumulative number of HIV infections prevented, and (B) the
cumulative number of AIDS deaths prevented. The epidemic-level effects of two types of vaccine are shown: (i) a vaccine that increased survival by one year but had no effect on transmission (data in
blue), and (ii) a vaccine that increased survival by one year and also reduced transmission (data in red). Parameter values are all the same as used to generate Figure 1 except we assumed that
patients treated only with ARV would survive on average 20 years.
We modeled the potential effect that a therapeutic vaccine used in combination with ARV would have on the HIV epidemic in the gay community in San Francisco. We modeled two types of therapeutic
vaccine: (i) a vaccine that increased survival by one year but had no effect on transmission, and (ii) a vaccine that increased survival by one year and also reduced transmission. For each of the two
types of vaccines, we compared their potential effects with two baseline simulations, where we assumed that only ARV was available and treated individuals average survival time was either 15 years or
20 years. We predicted the epidemic-level effects of these therapeutic vaccines for a 20 year period.
Our simulations showed that if only ARV was used in the gay community, over a 20 year time period 12,559-13,441 cumulative new infections and 10,915-11,845 cumulative AIDS deaths would occur. We
calculated the effects of the two types of therapeutic vaccines when used in addition to ARV in terms of (i) the cumulative number of additional HIV infections prevented per year, and (ii) the
cumulative number of additional AIDS deaths prevented per year. Results are shown in Figure 1, assuming individuals treated with ARV live an average of 15 years, and Figure 2, assuming individuals
treated with ARV live an average of 20 years. The data in blue show the impact of the therapeutic vaccine that only increases survival by one year, and the data in red show the impact of the
therapeutic vaccine that increases survival by one year and also decreases transmission.
Over a 20 year time period it can be seen that the therapeutic vaccine that would only increases survival by one additional year had an almost negligible epidemic-level impact: only a few (44-75)
additional AIDS deaths were prevented (Figure 1B and 2B; data in blue), and the number of new HIV infections slightly (137-178) increased over this time period (Figure 1A and 2A; data in blue).
However, the therapeutic vaccine that would increase survival by one additional year and also would decrease transmission could have a substantial epidemic-level impact: over a 20 year period this
vaccine would prevent 3,261-3,599 HIV infections (Figures 1A and 2A; data in red), and 1,091-1,229 AIDS deaths (Figure 1B and 2B; data in red). Therefore this type of therapeutic vaccine would
prevent 26%-27% of HIV infections, and 9%-10% of AIDS deaths over a 20 year period.
Moderately effective imperfect HIV vaccines could substantially reduce the HIV epidemic, particularly if coupled with changes in risk behavior (9-11). Here, we have calculated the potential effects
at the epidemic-level of modestly effective therapeutic vaccines. Our calculations showed that if a therapeutic vaccine used together with ARV prolonged survival and also reduced infectiousness, if
used at high coverage levels, it could potentially have a fairly significant impact at the population level. We have specifically evaluated the potential impact of such a therapeutic vaccine if used
with high levels of ARV in the gay community in San Francisco. It is possible that such therapeutic vaccines could also be beneficial in other communities in the developed and developing world, as
access to ARV increases. Our model could be used to investigate this possibility, and hence the applicability of our results for other communities. However, our current results suggests that safe and
modestly effective immune-based therapies that work in combination with ARV could potentially result in substantial public health benefits in San Francisco.
References and notes
1. J.W. Mellors, et al., Ann. Intern. Med. 126, 946 (1997). PubMed
2. W.A. O'Brien, P.M. Hartigan, E.S. Daar, M.S. Simberkoff, J.D. Hamilton, Ann. Intern. Med. 126, 939 (1997). PubMed
3. T.C. Quinn, et al., N. Engl. J. Med. 342, 921 (2000). PubMed
4. S.M. Blower, H. Gershengorn, R. Grant, Science 287, 650 (2000). PubMed
5. S.M. Blower, A.N. Aschenbach, H.B. Gershengorn, J.O. Kahn, Nat. Med. 7, 1016 (2001). PubMed
6. J.X. Velasco-Hernandez, H.B. Gershengorn, S.M. Blower, Lancet Infect. Dis. 2, 487 (2002). PubMed
7. D.R. Holtgrave, AIDS 16, 2347 (2002). PubMed
8. E. Fernandez-Cruz, et al., paper presented at the 10th Conference on Retroviruses and Opportunistic Infections, Boston, MA, February 2003. Available online
9. S. Blower, A.R. McLean, Science 265, 1451 (1994). PubMed
10. S. Blower, K. Koelle, D.E. Kirschner, J. Mills, Proc. Natl. Acad. Sci. USA 98, 3618 (2001). PubMed
11. S. Blower, E.J. Schwartz, J. Mills, AIDS Rev. 5, 113 (2003). PubMed
12. This work was funded by The Immune Response Corporation.
WyzAnt Resources
Please help simplify these expressions: 1. (√a + √b)/(√a - √b) 2. (√5 + 1)/(√5 - 1) 3...
1, 4, 9, 16, 25, … n = 10. I asked this question before but I am still confused.
geometry shapes
need to know what is left in the bag after some poured into canister which gives the canister 2 times the mass over the bag
Altitude: The angles of elevation to an airplane from two points A and B on level ground are 71° and 88°, respectively. The points A and B are 5.5 miles apart and the airplane is east of both
This is the problem: 12x^2 + 14x = 0
does the slope stay the same steeper less steep
you must put it in standard form
Each side of the polygon measures 6 feet. Whats the name of that polygon?
a frog is sitting on a stump 24 ft above the ground. it hops off the stump and lands on the ground 8 ft away. during its leap, its height h is given by the equation h=-0.5x^2+x+24 where x...
Hi! I am having some trouble doing a combinations problem, which is stated below: How many committees of five people can be selected from 8 men and 8 women if the committee must have...
(negation,p,disjunction(variable r,implication,negation variable q
1. Listed below are the number of years it took for a random sample of college students to earn a bachelor's degree. At the .05 significance level, test the claim that it takes the average student...
C=75°20,a=30,c=40 2)Find the area of thr triangles .Given B=82°30',a=110,c=75
f(x)=x-1/x+1 at c=-1 What 3 conditions of continuity can I use to justify the answer
Find the 9 year growth an the annual growth factor
express this statement using quantifiers (every student in this class has taken some course in every department in the school of mathmatical sciences
express this statement using quantifiers (there is a building on the campus of some college in the United States in which every room is painted white)
There grows in the middle of a circular pond 10 ft. in diameter a reed which projects 1 ft. out of the water. When it is drawn down it just reaches the edge of the pond. How deep is the water.
Sketch the graph on the axis y=e^2x+1 and y=e^x | {"url":"http://www.wyzant.com/resources/answers","timestamp":"2014-04-16T23:56:00Z","content_type":null,"content_length":"57083","record_id":"<urn:uuid:90160cb5-4aa3-475d-aa43-41e5d587059d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contour maps, turn based gaming, and space travel
I'm working on a 2D turn based space combat game, and I'd like to nail the mathematics of how a ship would truly move in space. Without going too much into my control scheme, here's the problem I'd
like to solve:
"Given a ship at point (x_0, y_0), with initial velocity V, along with a fixed length of time, T, and a maximum amount of energy/thrust/fuel a player can spend during this turn (which lasts T seconds
long), what is the function f(x, y) that describes how much fuel will remain should the player decide to go to point (x, y) this turn?"
Now, this may seem like an ambiguous question (and, I guess, it could be), but here are some other constraints:
-During this time interval T, only a single, constant force can be applied for the duration of T
-There are no external forces at play
Now, intuitively, I'd guess that the set of (x, y) points where f(x, y) = 0 would form an ellipse of some kind. This is backed up by some preliminary work I've done, but I'm pretty well lost at this
point. Here's my attempt at solving the problem, thus far:
since x_f = x_0 + v_x * t + (1/2) * a_x * t^2, and likewise for the y component, I can say that:
a_x = (2/t^2) * (x_f - x_0 - v_x * t).
From this, I can find the magnitude of a:
||a|| = (2/t^2) * sqrt((x - (x_0 + v_x * t))^2 + (y - (y_0 + v_y * t))^2)
needed to bring the ship to some point (x, y) in time t. However, beyond this, I'm not sure how to model my function f(x,y) so as to draw where a player could go. Initial ideas have included creating
some maximum amount of acceleration, A, that a player can use this turn, but that doesn't really make any physical sense. I've tried working in the Tsiolkovsky rocket equation, but I don't quite know how. Any ideas?
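One observation that falls straight out of the ||a|| formula above: at a fixed turn length T, ||a|| depends only on the distance between the target (x, y) and the coasting point (x_0 + v_x*T, y_0 + v_y*T), so any fuel cost that is a monotone function of ||a|| has circular level sets centered on the coasting point, not elliptical ones. A quick numerical check follows; the linear fuel model `cost = k * ||a|| * T` is an assumption for illustration, not the only sensible choice.

```python
import math

def accel_mag(x, y, x0, y0, vx, vy, T):
    """Magnitude of the constant acceleration needed to reach (x, y) in time T."""
    cx, cy = x0 + vx * T, y0 + vy * T          # coasting (zero-thrust) point
    return (2.0 / T**2) * math.hypot(x - cx, y - cy)

def fuel_left(x, y, x0, y0, vx, vy, T, budget, k=1.0):
    """Assumed fuel model: cost proportional to ||a|| * T (an assumption)."""
    return budget - k * accel_mag(x, y, x0, y0, vx, vy, T) * T

x0, y0, vx, vy, T = 0.0, 0.0, 3.0, 1.0, 2.0
cx, cy = x0 + vx * T, y0 + vy * T              # coasting point (6, 2)
a1 = accel_mag(cx + 5.0, cy, x0, y0, vx, vy, T)
a2 = accel_mag(cx, cy + 5.0, x0, y0, vx, vy, T)  # equal -> circular contours
```

So under this model the reachable set {(x, y) : f(x, y) >= 0} for one turn is a disc centered on the coasting point; an ellipse would only appear if the cost depended on the thrust direction as well as its magnitude.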
Prediction of Missing Flow Records Using Multilayer Perceptron and Coactive Neurofuzzy Inference System
The Scientific World Journal
Volume 2013 (2013), Article ID 584516, 7 pages
Research Article
Department of Civil Engineering, National Pingtung University of Science and Technology, Neipu Hsiang, Pingtung 91201, Taiwan
Received 25 August 2013; Accepted 2 October 2013
Academic Editors: R. Beale and R.-J. Dzeng
Copyright © 2013 Samkele S. Tfwala et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Hydrological data are often missing due to natural disasters, improper operation, limited equipment life, and other factors, which limit hydrological analysis. Therefore, missing data recovery is an
essential process in hydrology. This paper investigates the accuracy of artificial neural networks (ANN) in estimating missing flow records. The purpose is to develop and apply neural networks models
to estimate missing flow records in a station when data from adjacent stations is available. Multilayer perceptron neural networks model (MLP) and coactive neurofuzzy inference system model (CANFISM)
are used to estimate daily flow records for Li-Lin station using daily flow data for the period 1997 to 2009 from three adjacent stations (Nan-Feng, Lao-Nung and San-Lin) in southern Taiwan. The
performance of MLP is slightly better than CANFISM, having R^2 values of 0.98 and 0.97, respectively. We conclude that accurate estimations of missing flow records under the complex hydrological conditions of
Taiwan could be attained by intelligent methods such as MLP and CANFISM.
1. Introduction
Taiwan is situated on typhoon tracks with high temperatures and heavy rainfalls. There are over 350 typhoons and about 1000 storms that have attacked Taiwan over the past century and led to severe
flood disasters. These events concentrate in the summer and autumn season (June to August), resulting in average annual precipitation of about 2500 mm, reaching 3000–5000 mm in the mountain
regions. In addition, rivers in Taiwan are short with small drainage basins and steep slopes. During this period, their peak flows are enormous; for example, a catchment area of about
2000–3000 km^2 often receives peak flows of up to 10000 m^3/s [1]. Consequently, measurement instruments installed in some stations are damaged, resulting in data gaps. Field personnel may also
attribute the data gaps to a number of factors such as malfunctioning of monitoring instrument, absence of observer, natural phenomena (e.g., earthquakes and landslides), and human induced factors
like mishandling of observed records. These gaps and discontinuities lead to problems in planning of water development schemes, design of hydraulic structures, and management of water resources. In
addition, challenges in the future may surface when a modelling system or a decision support system requires making use of this measured data. This necessitates filling the gaps.
Regression techniques have long been used for the generation of stream flow [2]. The idea is to model flow at one gauge as a function of flow at another gauge or gauges. Reference [3] compared
regression and time-series techniques to synthesize and predict stream flow at a downstream gauge from an upstream gauge in California. Reference [4] successfully filled in missing data by extending
single-output Box-Jenkins transfer/noise models for several groundwater head series to multiple-output transfer/noise models. However, such methods may not be suitable in Taiwan because of its
complex hydrological system.
Artificial neural networks (ANN) have been gaining popularity, especially over the last few years, in hydrological applications. Since the early nineties, they have been successfully applied in
hydrology-related areas such as rainfall-runoff modelling [5, 6], stream flow forecasting [7, 8], groundwater modelling [9], and reservoir operation and modelling [1, 10]. Reference [11] applied
ANN and adaptive neurofuzzy inference system (ANFIS) models to model and predict precipitation 12 months in advance. Reference [12] employed a distributed support vector regression model (D-SVR)
equipped with a genetic-algorithm-based artificial neural network (ANN-GA) as part of flood control measures. ANN has also been used successfully in water quality, water management policy,
evapotranspiration, precipitation forecasting, and hydrological time series. Most hydrological processes exhibit temporal and spatial variability and are often plagued by nonlinearity of the
physical processes and uncertainty in spatial estimates. The time and effort required to develop and implement complicated physically based models may not be justified. Simpler neural network
forecasts may therefore be attractive as an alternative tool.
Reference [13] compared six different types of ANN, namely, the multilayer perceptron network and its variation (the time-lagged feedforward network), the radial basis function network, the recurrent
neural network and its variation (the time-delay recurrent neural network), and the counterpropagation fuzzy neural network, for infilling missing daily total precipitation. The results of their
experiment revealed that the multilayer perceptron network provided the most accurate estimates of the missing precipitation. In recent years, much attention has been given to deriving effective
data-driven neurofuzzy models due to their numerous advantages [14]. Reference [15] modeled inflow forecasting of the Nile River using a neurofuzzy model. Reference [16] applied a neurofuzzy model to
evapotranspiration modelling.
To the knowledge of the authors, no work has been reported in the literature that investigates the accuracy of the multilayer perceptron (MLP) neural network model and the coactive neurofuzzy
inference system model (CANFISM) in estimating missing flow records. Hence, in this study, MLP and CANFISM are used to estimate daily flow records for the Li-Lin station using daily flow data for the
period 1997 to 2009 from three adjacent stations (Nan-Feng, Lao-Nung, and San-Lin). These stations are located in the Kaoping river basin in southern Taiwan.
2. Materials and Methods
2.1. Study Area Characteristics
The Kaoping River basin is located in the southern part of Taiwan at 22°12′30′′ North latitude and 120°12′0′′ East longitude, as shown in Figure 1. In this basin, four flow observation stations were
selected: Nan-Feng Bridge, San-Lin Bridge, Lao-Nung, and Li-Lin Bridge. This river basin is the largest and most intensively used in Taiwan. The Kaoping is Taiwan's second longest river, with a
length of 171 km, and drains a catchment covering 3,257 km^2, roughly 9% of the island's total area.
2.2. Neural Networks Model
An ANN is an information-processing paradigm inspired by biological nervous systems such as the brain [17]. Neural networks are composed of neurons as basic units. Each neuron receives input data,
processes them, and transforms them into output form. The input may be raw data or the output of other neurons, and the output may in turn feed other neurons [18]. The neural networks used in this
study (MLP and CANFISM) were implemented using the NeuroSolutions software, version 5.07, from NeuroDimension; further descriptions are given below.
2.2.1. Multilayer Perceptron Neural Network
An MLP distinguishes itself by the presence of one or more hidden layers, with computation nodes called hidden neurons, whose function is to intervene between the external inputs and the network
output in a useful manner. By adding hidden layers, the network is able to extract higher-order statistics. The network acquires a global perspective despite its local connectivity, owing to the
extra synaptic connections and the extra dimension of interconnections. An MLP can have more than one hidden layer; however, studies have shown that a single hidden layer is enough for an ANN to
approximate any complex nonlinear function [19, 20]. Therefore, in this study a one-hidden-layer MLP is used. An MLP can be trained using many variants of the backpropagation algorithm.
Training is the process of adjusting the connection weights and biases so that the network output best matches the desired output. Specifically, at each setting of the connection weights, the error
committed by the network can be calculated by taking the difference between the desired and actual responses [21, 22]. In this study, we use the Quickprop backpropagation algorithm (BPA). The
advantage of this algorithm is that it operates much faster in batch mode than conventional BPA. In addition, it is not sensitive to the learning rate or the momentum [22]. Throughout all the
simulations, the number of hidden-layer neurons (processing elements, PE) was found by trial and error. In total, there were 1283 patterns of data, of which 70% were used for training, 20% for cross
validation, and 10% for testing.
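As a hedged illustration of this setup (not the NeuroSolutions implementation), the sketch below trains a one-hidden-layer MLP with 3 inputs and 8 hidden neurons on synthetic stand-in data, using the paper's 70/20/10 split of 1283 patterns; it uses plain batch gradient descent rather than Quickprop, and all data values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 1283 flow patterns: 3 station inputs, 1 target.
X = rng.random((1283, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

# 70% training, 20% cross validation (held out here), 10% testing.
n_train, n_cv = int(0.7 * len(X)), int(0.2 * len(X))
X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train + n_cv:], y[n_train + n_cv:]

# One hidden layer with 8 neurons (the optimum reported in Section 3.1).
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(Xs):
    h = np.tanh(Xs @ W1 + b1)      # hidden-layer activations
    return h, h @ W2 + b2          # linear output neuron

def rmse_on(Xs, ys):
    return float(np.sqrt(np.mean((forward(Xs)[1] - ys) ** 2)))

rmse_before = rmse_on(X_te, y_te)
lr = 0.1
for epoch in range(500):           # one iteration = one full pass over the data
    h, out = forward(X_tr)
    err = out - y_tr               # difference between actual and desired response
    # Plain batch gradient descent (the paper uses the faster Quickprop variant).
    gW2 = h.T @ err / len(X_tr); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X_tr.T @ dh / len(X_tr); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse_after = rmse_on(X_te, y_te)
print(rmse_before, rmse_after)     # test-set error drops as training proceeds
```

In practice, the number of hidden neurons would itself be varied by trial and error and the cross-validation split used to decide when to stop training.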
Table 1 shows the training performance variables for the MLP, and Figure 2 shows the structure of the MLP with 3 inputs from the 3 adjacent stations (Figure 1) from which missing flow records are
estimated. Training of the neural network is iterated until the training error falls within the training tolerance. An iteration refers to one complete pass through the set of input and target data.
2.2.2. Coactive Neurofuzzy Inference System Model
The coactive neurofuzzy inference system model (CANFISM) belongs to a more general class of adaptive neurofuzzy inference system models (ANFISM). It may be used as a universal approximator of any
nonlinear function. In addition, it integrates adaptable fuzzy inputs with a modular neural network to rapidly and accurately approximate complex functions. CANFISM is characterized by the advantages
of integrating a neural network with fuzzy inference in the same topology. Its powerful capability stems from pattern-dependent weights between the consequent layer and the fuzzy association layer
[23]. The fundamental component of CANFISM is a fuzzy node that applies membership functions to the input nodes. Two commonly used membership functions are the generalized bell and the Gaussian. The
network also contains a normalization axon to scale the output into the range 0-1. The second major component of CANFISM is a modular network that applies functional rules to the inputs. The number
of modular networks matches the number of network outputs, and the number of processing elements in each network corresponds to the number of membership functions. CANFISM also has a combiner layer
that applies the membership-function outputs to the modular network outputs. Table 2 shows the training performance variables of the CANFISM.
In this study, the CANFISM architecture used had three inputs and one output. The flow data from Nan-Feng Bridge, San-Lin Bridge, and Lao-Nung were used as inputs to the model, with Li-Lin Bridge as
the output (Figure 3). Of the 1283 patterns of data, 70% were used for training, 20% for cross validation, and 10% for testing the CANFISM model. Of the two membership functions available in the
model (bell and Gaussian), the Gaussian fuzzy axon type was used in this study; it applies a Gaussian-shaped curve as the membership function of each neuron. The advantage of this function is that
the fuzzy synapses help in characterizing inputs that are not easily discretized [13]. The number of membership functions assigned to each network input was varied between 1 and 10. Among the various
algorithms available (i.e., Levenberg-Marquardt, Delta-Bar-Delta, Step, Momentum, Conjugate Gradient, and Quickprop), we used Quickprop due to the advantages stated in [21]. In addition, different
transfer functions (i.e., Sigmoid, Linear Sigmoid, Tanh, Linear Tanh, Linear, and Bias) were tried to identify the one giving the best results in depicting the nonlinearity of the modeled natural
system. The best network architecture for each function was determined by trial and error and selected as the one that resulted in minimum error and best correlation.
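For the Gaussian fuzzy axon described above, a membership function can be sketched as follows; the fuzzy-set centers and widths here are hypothetical, and the final normalization step mirrors the normalization axon that scales outputs into the range 0-1.

```python
import math

def gaussian_mf(x, center, sigma):
    """Gaussian membership function: degree of membership of x in a fuzzy set."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Three hypothetical fuzzy sets ("low", "medium", "high" flow) in normalized units.
sets = {"low": (0.0, 0.2), "medium": (0.5, 0.2), "high": (1.0, 0.2)}
x = 0.45
memberships = {name: gaussian_mf(x, c, s) for name, (c, s) in sets.items()}

# Normalize so the membership degrees sum to 1, as a normalization layer would.
total = sum(memberships.values())
normalized = {name: m / total for name, m in memberships.items()}
print(normalized)
```

A bell-shaped membership function would replace only `gaussian_mf`; the rest of the structure stays the same.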
2.3. Data Normalization
Preprocessing of the data is usually required before presenting the data samples to the neural network [6]. Hence, the stream flow data of the stations used were normalized to prevent problems
associated with extreme values. In this study, the data are scaled to the range (0-1) using the equation x_s = (x - x_min) / (x_max - x_min), where x_s is the scaled input value, x is the actual
unscaled observed flow input, and x_min and x_max are the minimum and maximum values of the data, respectively. In addition, some of the data were identical on some days across the different
stations; these data were assumed incorrect and were therefore discarded.
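The scaling equation can be sketched in code as follows; the sample flow values are hypothetical and only illustrate the transformation.

```python
def min_max_scale(values):
    """Scale a sequence of flow values linearly into the range [0, 1]."""
    v_min, v_max = min(values), max(values)
    span = v_max - v_min
    if span == 0:  # all values identical; avoid division by zero
        return [0.0 for _ in values]
    return [(v - v_min) / span for v in values]

# Hypothetical daily flows (m^3/s) for illustration.
flows = [120.0, 450.0, 80.0, 9900.0, 310.0]
scaled = min_max_scale(flows)
print(scaled)  # 0.0 at the minimum flow, 1.0 at the maximum
```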
2.4. Models Performance Evaluation
The performance of the neural network models is evaluated using a variety of standard statistical indexes. In our study, we evaluated the models using three indexes: root mean square error (RMSE),
mean absolute error (MAE), and the coefficient of correlation (R). The RMSE is a measure of the residual variance, and MAE measures how close forecasts or predictions are to the eventual outcomes;
for observed flow records Q_o, estimated flow values Q_e, and n data points, RMSE = sqrt((1/n) Σ (Q_o - Q_e)^2) and MAE = (1/n) Σ |Q_o - Q_e|. R is a measure of the accuracy of a hydrological model
and is generally used for comparison of alternative models; it is computed from the deviations of Q_o and Q_e about their respective averages. Additionally, a linear regression y = a x + b is applied
to evaluate the models' performance statistically, where y is the dependent variable (the alternative method's estimate), x the independent variable (observed), a the slope, and b the intercept.
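The three evaluation indexes can be sketched as follows; the observed and estimated flow series here are hypothetical.

```python
import math

def rmse(obs, est):
    """Root mean square error between observed and estimated flows."""
    n = len(obs)
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / n)

def mae(obs, est):
    """Mean absolute error."""
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def corr(obs, est):
    """Pearson coefficient of correlation R."""
    n = len(obs)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    return cov / (so * se)

# Hypothetical observed vs. estimated flows (m^3/s) for illustration.
obs = [100.0, 200.0, 300.0, 400.0]
est = [110.0, 190.0, 320.0, 390.0]
print(rmse(obs, est), mae(obs, est), corr(obs, est))
```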
3. Results and Discussion
3.1. Processing Elements Determination
The determination of the number of processing elements (PE) is one of the difficult tasks in neural network modelling [10, 21, 23]. In addition, it is an important factor affecting the performance of
the trained network [24]. Hence, determination of the PEs was the initial step of the learning procedure. The number of PEs in the hidden layer was varied between 1 and 10 for the MLP, and the data
set aside for testing was used to find the optimal number. In this study, the optimum number of PEs was found to be 8, based on the minimum RMSE and maximum R, as illustrated in Figure 4.
In CANFISM, however, there is no hidden layer of processing elements in the structure; membership functions are used instead. The ability of the CANFISM model to achieve the performance goal depends
on internal parameters such as the number and shape of the membership functions [25]. In this study, the number of membership functions was varied between 1 and 10. The optimum number was found to be
3, with the Quickprop algorithm and the Bias transfer function, as established by trial and error. This was again based on the minimum RMSE and maximum R (Figure 5).
3.2. Comparison of the Different Models
In the present study, flow records for one station are estimated using MLP and CANFISM from three adjacent stations located in the same catchment. The data used to develop these models were obtained
from the annual reports of the Taiwan Water Resources Agency. The prediction capabilities of these models were analysed by comparison with observed data. A summary of the models' statistical
performance during the training, cross validation, and testing stages is shown in Table 3. From the evaluation of these results, MLP showed better statistics than CANFISM in the cross validation and
testing stages. The RMSE of MLP for the cross validation and testing stages was 382.98 m^3/s and 150.36 m^3/s, respectively, while that of CANFISM was 388.97 m^3/s and 404.49 m^3/s. Moreover, the R
of MLP in cross validation and testing was 0.83 and 0.98, respectively, while that of CANFISM was 0.81 and 0.97. Reference [23] made a similar observation in the prediction of pan evaporation, where
the MLP model was better than CANFISM.
CANFISM showed better results only in the training stage, having RMSE and R of 388.97 m^3/s and 0.69, compared to MLP's RMSE of 401.84 m^3/s and R of 0.67. Figures 6 and 8 show the observed and
estimated flows using MLP and CANFISM, respectively. The trends of the estimated flows are similar to the observed data, although slight differences are seen in places. The corresponding scatter
plots for MLP and CANFISM in the testing stage are shown in Figures 7 and 9. The high accuracy attained by these models emphasizes the applicability of ANNs in estimating missing flow records.
4. Conclusion
Accurate estimation of missing flow records is an essential component of decision support systems for efficient water management and future planning of water resources systems. The objective of this
paper was to investigate the accuracy of artificial neural networks (ANN) in estimating missing flow records. The flow data of three stations were used to estimate the flow data of one station. The
potential of ANNs for estimating missing flow records has been demonstrated in this study, with MLP and CANFISM attaining high R values of 0.98 and 0.97, respectively. In general, the findings of
this study indicate that accurate estimation of missing flow records under the complex hydrological conditions of Taiwan can be attained using the MLP and CANFISM methods.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors gratefully acknowledge the financial support of NSC Taiwan under Grant NSC101-2625-M-020-003.
References
1. C.-T. Cheng, W.-C. Wang, D.-M. Xu, and K. W. Chau, "Optimizing hydropower reservoir operation using hybrid genetic algorithm and chaos," Water Resources Management, vol. 22, no. 7, pp. 895–909, 2008.
2. C. T. Haan, Statistical Models in Hydrology, John Wiley & Sons, New York, NY, USA, 1977.
3. J. J. Beauchamp, D. J. Downing, and S. F. Railsback, "Comparison of regression and time-series methods for synthesizing missing streamflow records," Water Resources Bulletin, vol. 25, no. 5, pp. 961–975, 1989.
4. F. C. Van Geer and A. F. Zuur, "An extension of Box-Jenkins transfer/noise models for spatial interpolation of groundwater head series," Journal of Hydrology, vol. 192, no. 1–4, pp. 65–80, 1997.
5. A. S. Tokar and P. A. Johnson, "Rainfall-runoff modeling using artificial neural networks," Journal of Hydrologic Engineering, vol. 4, no. 3, pp. 232–239, 1999.
6. Y. M. Wang, S. M. Chen, and I. Tsou, "Using artificial neural network approach for modelling rainfall-runoff," Journal of Earth System Science, vol. 122, no. 2, pp. 399–405, 2013.
7. L. E. Besaw, D. M. Rizzo, P. R. Bierman, and W. R. Hackett, "Advances in ungauged streamflow prediction using artificial neural networks," Journal of Hydrology, vol. 386, no. 1-4, pp. 27–37, 2010.
8. M. T. Dastorani and N. G. Wright, "A hydrodynamic/neural network approach for enhanced river flow prediction," International Journal of Civil Engineering, vol. 2, no. 3, pp. 141–148, 2004.
9. F. Szidarovszky, E. A. Coppola Jr., J. Long, A. D. Hall, and M. M. Poulton, "A hybrid artificial neural network-numerical model for ground water problems," Ground Water, vol. 45, no. 5, pp. 590–600, 2007.
10. Y.-M. Wang and S. Traore, "Time-lagged recurrent network for forecasting episodic event suspended sediment load in typhoon prone area," International Journal of Physical Sciences, vol. 4, no. 9, pp. 519–528, 2009.
11. M. T. Dastorani, A. Moghadamnia, J. Piri, and M. Rico-Ramirez, "Application of ANN and ANFIS models for reconstructing missing flow data," Environmental Monitoring and Assessment, vol. 166, no. 1–4, pp. 421–434, 2010.
12. C. L. Wu, K. W. Chau, and Y. S. Li, "River stage prediction based on a distributed support vector regression," Journal of Hydrology, vol. 358, no. 1-2, pp. 96–111, 2008.
13. P. Coulibaly and N. D. Evora, "Comparison of neural network methods for infilling missing daily weather records," Journal of Hydrology, vol. 341, no. 1-2, pp. 27–41, 2007.
14. A. Aytek, "Co-active neurofuzzy inference system for evapotranspiration modeling," Soft Computing, vol. 13, no. 7, pp. 691–700, 2009.
15. A. El-Shafie, M. R. Taha, and A. Noureldin, "A neuro-fuzzy model for inflow forecasting of the Nile river at Aswan high dam," Water Resources Management, vol. 21, no. 3, pp. 533–556, 2007.
16. Ö. Kişi and Ö. Öztürk, "Adaptive neurofuzzy computing technique for evapotranspiration estimation," Journal of Irrigation and Drainage Engineering, vol. 133, no. 4, pp. 368–379, 2007.
17. J.-Y. Lin, C.-T. Cheng, and K.-W. Chau, "Using support vector machines for long-term discharge prediction," Hydrological Sciences Journal, vol. 51, no. 4, pp. 599–612, 2006.
18. S. Haykin, Neural Networks and Learning Machines, Prentice Hall, Upper Saddle River, NJ, USA, 3rd edition, 2009.
19. Y.-M. Wang, S. Traore, T. Kerh, and J.-M. Leu, "Modelling reference evapotranspiration using feed forward backpropagation algorithm in arid regions of Africa," Irrigation and Drainage, vol. 60, no. 3, pp. 404–417, 2011.
20. M. Feyzolahpour, M. Rajabi, and S. Roostaei, "Estimating suspended sediment concentration using neural differential evolution (NDE), multilayer perceptron (MLP) and radial basis function (RBF) models," International Journal of Physical Sciences, vol. 7, no. 29, pp. 5106–5117, 2012.
21. C.-C. Lin, "Partitioning capabilities of multi-layer perceptrons on nested rectangular decision regions part I: algorithm," WSEAS Transactions on Information Science and Applications, vol. 3, no. 9, pp. 1674–1680, 2006.
22. S. Kim, K. B. Park, and Y. M. Seo, "Estimation of Pan Evaporation using neural networks and climate based models," Disaster Advances, vol. 5, no. 3, pp. 34–43, 2012.
23. H. Tabari, P. H. Talaee, and H. Abghari, "Utility of coactive neuro-fuzzy inference system for pan evaporation modeling in comparison with multilayer perceptron," Meteorology and Atmospheric Physics, vol. 116, no. 3-4, pp. 147–154, 2012.
24. N. Muttil and K.-W. Chau, "Neural network and genetic programming for modelling coastal algal blooms," International Journal of Environment and Pollution, vol. 28, no. 3-4, pp. 223–238, 2006.
25. M. Heydari and P. H. Talaee, "Prediction of flow through rockfill dams using a neuro-fuzzy computing technique," Journal of Mathematics and Computer Science, vol. 2, no. 3, pp. 515–528, 2011.
Distance measures as prior probabilities
Thomas P. Minka
Many learning algorithms, especially nonparametric ones, use distance measures as a source of prior knowledge about the domain. This paper shows how the work of Baxter and Yianilos provides a formal
equivalence between distance measures and prior probability distributions in Bayesian inference. The prior distribution applies either to how the data was generated or to the shape of the
discrimination boundary. This perspective is useful for extending distance-based algorithms to new feature spaces and especially for learning distance measures on those spaces.
Also see Learning distance measures from labeled data -- An overview.
Thomas P. Minka. Last modified: Thu Apr 22 12:19:33 GMT 2004
Spatial Data Structures: Quadtree, Octrees and Other Hierarchical Methods
Results 1 - 10 of 22
- Proc. ACM SIGGRAPH, 171–180, 1996
Cited by 658 (43 self)
{gottscha,lin,manocha}©cs. unc.edu We present a data structure and an algorithm for efficient and exact interference detection amongst complex models undergoing rigid motion. The algorithm is
applicable to all general polygonal and curved models. It pre-computes a hierarchical representation of models using tight-fitting oriented bounding box trees. At runtime, the algorithm traverses the
tree and tests for overlaps between oriented bounding boxes based on a new separating axis theorem, which takes less than 200 operations in practice. It has been implemented and we compare its
performance with other hierarchical data structures. In particular, it can accurately detect all the contacts between large complex geometries composed of hundreds of thousands of polygons at
interactive rates, almost one order of magnitude faster than earlier methods.
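The overlap test above is specialized to oriented bounding boxes via a separating axis theorem; as a simpler sketch of the same idea, the axis-aligned case below reports two boxes disjoint exactly when some coordinate axis separates their projections (a generic illustration, not the paper's implementation).

```python
def aabb_overlap(box_a, box_b):
    """Boxes given as ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    Two axis-aligned boxes are disjoint iff some coordinate axis separates them."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    for lo_a, hi_a, lo_b, hi_b in zip(a_min, a_max, b_min, b_max):
        if hi_a < lo_b or hi_b < lo_a:
            return False  # found a separating axis
    return True

# An overlapping pair and a disjoint pair.
print(aabb_overlap(((0, 0, 0), (2, 2, 2)), ((1, 1, 1), (3, 3, 3))))  # True
print(aabb_overlap(((0, 0, 0), (1, 1, 1)), ((2, 0, 0), (3, 1, 1))))  # False
```

For oriented boxes, the same test must be repeated over up to 15 candidate axes (face normals and edge cross products), which is where the constant-operation bound cited above comes from.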
- In Proc. of IMA Conference on Mathematics of Surfaces, 1998
Cited by 184 (15 self)
In this paper, we survey the state of the art in collision detection between general geometric models. The set of models include polygonal objects, spline or algebraic surfaces, CSG models, and
deformable bodies. We present a number of techniques and systems available for contact determination. We also describe several N-body algorithms to reduce the number of pairwise intersection tests. 1
Introduction The goal of collision detection (also known as interference detection or contact determination) is to automatically report a geometric contact when it is about to occur or has actually
occurred. The geometric models may be polygonal objects, splines, or algebraic surfaces. The problem is encountered in computer-aided design and machining (CAD/CAM), robotics and automation,
manufacturing, computer graphics, animation and computer simulated environments. Collision detection enables simulationbased design, tolerance verification, engineering analysis, assembly and
dis-assembly, motion pla...
- In Proc. 10th ACM-SIAM Sympos. Discrete Algorithms, 2001
Cited by 77 (12 self)
We present an efficient O(n + 1/ε^4.5)-time algorithm for computing a (1 + 1/ε)-approximation of the minimum-volume bounding box of n points in R³. We also present a simpler
algorithm (for the same purpose) whose running time is O(n log n+n/ε³). We give some experimental results with implementations of various variants of the second algorithm. The
implementation of the algorithm described in this paper is available online [Har00].
, 2003
Cited by 74 (15 self)
In a geometric context, a collision or proximity query reports information about the relative configuration or placement of two objects. Some of the common examples of such queries include
checking whether two objects overlap in space, or whether their boundaries intersect, or computing the minimum Euclidean separation distance between their boundaries. Hundreds of papers have been
published on different aspects of these queries in computational geometry and related areas such as robotics, computer graphics, virtual environments, and computer-aided design. These queries arise
in different applications including robot motion planning, dynamic simulation, haptic rendering, virtual prototyping, interactive walkthroughs, computer gaming, and molecular modeling. For example,
a large-scale virtual environment, e.g., a walkthrough, creates a model of the environment with virtual objects. Such an environment is used to give the user a sense of presence in a synthetic world
and it s
- In Proc. of Third International Workshop on Algorithmic Foundations of Robotics
Cited by 46 (9 self)
Hierarchical data structures have been widely used to design efficient algorithms for interference detection for robot motion planning and physically-based modeling applications. Most of the
hierarchies involve use of bounding volumes which enclose the underlying geometry. These bounding volumes are used to test for interference or compute distance bounds between the underlying geometry.
The efficiency of a hierarchy is directly proportional to the choice of a bounding volume. In this paper, we introduce spherical shells, a higher order bounding volume for fast proximity queries.
Each shell corresponds to a portion of the volume between two concentric spheres. We present algorithms to compute tight-fitting shells and fast overlap tests between two shells. Moreover, we show
that spherical shells provide local cubic convergence to the underlying geometry. As a result, in many cases they provide faster algorithms for interference detection and distance computation as
compared to earlier methods. We also describe an implementation and compare it with other hierarchies.
, 1996
Cited by 44 (7 self)
We introduce the boxtree, a versatile data structure for representing triangulated or meshed surfaces in 3D. A boxtree is a hierarchical structure of nested boxes that supports efficient ray tracing
and collision detection. It is simple and robust, and requires minimal space. In situations where storage is at a premium, boxtrees are effective alternatives to octrees and BSP trees. They are also
more flexible and efficient than R-trees, and nearly as simple to implement. Keywords: collision detection, hierarchical data structures, ray shooting. 1. Introduction In 1981 Ballard [1] presented a
simple data structure for representing digitized curves by means of nested strips. This work is an attempt to generalize his strip tree structure to the case of surfaces in 3D. As is well known,
curves can seem quite tame when compared to surfaces. For example, collision detection in 3D is orders of magnitude more difficult than in 2D. Expectedly, generalizing a strip tree into a boxtree
raises a ...
- Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2000
Cited by 41 (14 self)
We present an accelerated proximity query algorithm between moving convex polyhedra. The algorithm combines Voronoi-based feature tracking with a multi-level-of-detail representation, in order to
adapt to the variation in levels of coherence and speed up the computation. It provides a progressive refinement framework for collision detection and distance queries. We have implemented our
algorithm and have observed significant performance improvements in our experiments, especially on scenarios where the coherence is low. 1 Introduction Proximity queries, i.e. distance computations
and the closely related collision detection problems, are ubiquitous in robotics, design automation, manufacturing, assembly and virtual prototyping. The set of tasks includes motion planning,
sensor-based manipulation, assembly and disassembly, dynamic simulation, maintainability study, simulation-based design, tolerance verification, and ergonomics analysis. Proximity queries have been
extensively stud...
- In Proceedings of Virtual Reality Conference
Cited by 39 (0 self)
We present a fast and accurate collision detection algorithm for haptic interaction with polygonal models. Given a model, we pre-compute a hybrid hierarchical representation, consisting of uniform
grids (represented using a hash table) and trees of tight-fitting oriented bounding box trees (OBBTrees). At run time, we use hybrid hierarchical representations and exploit frame-to-frame coherence
for fast proximity queries. We describe a new overlap test, which is specialized for intersection of a line segment with an oriented bounding box for haptic simulation and takes 42-72 operations
including transformation costs. The algorithms have been implemented as part of H-COLLIDE and interfaced with a PHANToM arm and its haptic toolkit, GHOST, and applied to a number of models. As
compared to the commercial implementation, we are able to achieve up to 20 times speedup in our experiments and sustain update rates over 1000Hz on a 400MHz Pentium II. In practice, our prototype
implementation can a...
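The abstract does not spell out its specialized 42-72-operation segment/OBB test, but the generic slab-clipping idea such tests build on can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: it tests a segment against a box centered at the origin in the box's own frame (for an OBB you would first transform the segment endpoints into that frame), and the function name is made up.

```python
def segment_hits_box(p0, p1, half_extents):
    """Slab test: does the segment p0->p1 intersect the axis-aligned box
    |x_i| <= half_extents[i]? Parametrize the segment as p0 + t*(p1-p0),
    clip the parameter interval [0, 1] against each pair of slab planes."""
    tmin, tmax = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        e = half_extents[axis]
        if abs(d) < 1e-12:
            # Segment parallel to this slab: reject if it lies outside it.
            if abs(p0[axis]) > e:
                return False
            continue
        t1 = (-e - p0[axis]) / d
        t2 = (e - p0[axis]) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin > tmax:  # the clipped interval became empty
            return False
    return True
```

The test runs in a handful of arithmetic operations per axis, which is why specialized variants of it are attractive for the kilohertz update rates haptics requires.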
- In Proc. 21st ACM Symposium on Computational Geometry, 2005
"... We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R 2) or the skip octree (for point data in R d, with constant d> 2). Our data structure
combines the best features of two well-known data structures, in that it has the well-defined “box”-shaped reg ..."
Cited by 36 (5 self)
Add to MetaCart
We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d > 2). Our data structure combines
the best features of two well-known data structures, in that it has the well-defined “box”-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of
skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a
skip quadtree, as well as fast methods for performing point location and approximate range queries.
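The skip-list layering is not shown here, but the bottom level the abstract mentions — a plain region quadtree — can be sketched in a few lines. This is an illustrative sketch only (2-D, unit square, one point per leaf, distinct points assumed; class and method names are made up, not from the paper).

```python
class Node:
    """A region quadtree node: a square cell that either stores at most
    one point (a leaf) or is split into four equal quadrants."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # lower-left corner + side
        self.point = None
        self.children = None

    def insert(self, p):
        if self.children is None:
            if self.point is None:      # empty leaf: just store the point
                self.point = p
                return
            # Occupied leaf: split into quadrants and push the old point down.
            old, self.point = self.point, None
            half = self.size / 2
            self.children = [Node(self.x + dx * half, self.y + dy * half, half)
                             for dy in (0, 1) for dx in (0, 1)]
            self._child(old).insert(old)
        self._child(p).insert(p)

    def _child(self, p):
        """Index of the quadrant containing p: bit 0 for x, bit 1 for y."""
        half = self.size / 2
        i = (p[0] >= self.x + half) + 2 * (p[1] >= self.y + half)
        return self.children[i]

root = Node(0.0, 0.0, 1.0)
for pt in [(0.1, 0.1), (0.9, 0.9), (0.6, 0.2)]:
    root.insert(pt)
```

The skip quadtree then keeps a hierarchy of progressively sparser copies of such a tree, analogous to skip-list levels, to get logarithmic search and update.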
- In Data Visualization (2001), 2000
"... . We propose a hardware-assisted visibility ordering algorithm. From a given viewpoint, a (back-to-front) visibility ordering of a set of objects is a partial order on the objects such that if
object obstructs object , then precedes in the ordering. Such orderings are useful because the ..."
Cited by 14 (3 self)
Add to MetaCart
We propose a hardware-assisted visibility ordering algorithm. From a given viewpoint, a (back-to-front) visibility ordering of a set of objects is a partial order on the objects such that if object a obstructs object b, then b precedes a in the ordering. Such orderings are useful because they are the building blocks of other rendering algorithms such as direct volume rendering of unstructured grids.
The traditional way to compute the visibility order is to build a set of pairwise visibility relations, and then run a topological sort on the set of relations to actually get the partial ordering.
Our technique instead works by assigning a layer number to each primitive, which directly determines the visibility ordering. Objects that have the same layer number are independent, and can be
placed anywhere with respect to each other. We use a simple technique which exploits a combination of the z- and stencil buffers to compute the layer number of each primitive...
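The layer numbers the abstract describes can be illustrated on the CPU as longest-path depths in the DAG of visibility relations (the paper computes the same numbers with graphics hardware instead; the graph below is a made-up example, and edge (a, b) means "a must be drawn before b").

```python
from functools import lru_cache

def layers(nodes, edges):
    """Assign each node its longest-path depth in the DAG of relations.
    Nodes sharing a layer number have no ordering constraint between them."""
    preds = {n: [] for n in nodes}
    for a, b in edges:
        preds[b].append(a)

    @lru_cache(maxsize=None)
    def layer(n):
        # A node with no predecessors sits in layer 0.
        return 1 + max((layer(p) for p in preds[n]), default=-1)

    return {n: layer(n) for n in nodes}

# Example: A is behind B and C; B and C are both behind D.
order = layers("ABCD", [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
```

Here B and C both land in layer 1, matching the abstract's point that objects with the same layer number are independent and can be drawn in either order.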
Castle Pines, CO ACT Tutor
Find a Castle Pines, CO ACT Tutor
...I then develop a custom plan to fill in the gaps and shore up the weaknesses. I can provide challenge to elementary (and high school) students who are not being challenged, introducing them
to Calculus, Statistics, Probability, etc. in a way that they can understand given their current unders...
30 Subjects: including ACT Math, chemistry, geometry, physics
...There are few things more gratifying to me than the look of satisfaction in the eyes of a student for whom the math mystery has been revealed! I have over 70 hours on Wyzant and each new
student, each new challenge, brings me tremendous satisfaction! I am passionate about learning and about sha...
43 Subjects: including ACT Math, chemistry, English, Spanish
...These methods are usable not just for solving math problems you're dealing with right now, but can have an impact on your future in a variety of professions. Some say that algebra 1 is all
about identifying and practicing techniques for setting up and solving problems that are "harder" than what y...
18 Subjects: including ACT Math, calculus, statistics, geometry
...My name is Randy and I am an experienced and effective multi-subject tutor. I have extensive tutoring experience working with students of all levels from Middle School through College. I
specialize in teaching Guitar (Electric, Acoustic, & Bass) Mathematics (Algebra, Geometry, etc.) and English...
57 Subjects: including ACT Math, reading, writing, English
...While I was a student in college, I ran a successful math tutoring company and tutored high school and college students in Algebra 1/2, Geometry, Statistics, Probability, Calculus, and SAT
Math 1/2C. Greater than 95% of my students increased their scores by at least 1 letter grade. I achieved t...
21 Subjects: including ACT Math, chemistry, calculus, geometry
Related Castle Pines, CO Tutors
Castle Pines, CO Accounting Tutors
Castle Pines, CO ACT Tutors
Castle Pines, CO Algebra Tutors
Castle Pines, CO Algebra 2 Tutors
Castle Pines, CO Calculus Tutors
Castle Pines, CO Geometry Tutors
Castle Pines, CO Math Tutors
Castle Pines, CO Prealgebra Tutors
Castle Pines, CO Precalculus Tutors
Castle Pines, CO SAT Tutors
Castle Pines, CO SAT Math Tutors
Castle Pines, CO Science Tutors
Castle Pines, CO Statistics Tutors
Castle Pines, CO Trigonometry Tutors
Nearby Cities With ACT Tutor
Cadet Sta, CO ACT Tutors
Crystola, CO ACT Tutors
Deckers, CO ACT Tutors
Dupont, CO ACT Tutors
Fort Logan, CO ACT Tutors
Foxton, CO ACT Tutors
Lowry, CO ACT Tutors
Montbello, CO ACT Tutors
Montclair, CO ACT Tutors
Roxborough, CO ACT Tutors
Sedalia, CO ACT Tutors
Tarryall, CO ACT Tutors
Welby, CO ACT Tutors
Western Area, CO ACT Tutors
Woodmoor, CO ACT Tutors
On this site you'll find lots of logic puzzles and riddles, as well as science puzzles and math puzzles.
Answers and detailed solutions are included.
Do you know other nice brain teasers, riddles or puzzles that should be on the site?
If that's the case, send a message to pzzls!
Boiling an egg - logic puzzle of the day
Suppose you would like to boil an egg for exactly 15 minutes, but you only have two hourglasses, one of seven and one of eleven minutes. What is the fastest way to do this?
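For what it's worth, the classic schedule for this puzzle (start both glasses and the egg together; flip the 7-minute glass when it empties at t = 7, and flip it again when the 11-minute glass empties at t = 11) can be checked with a few lines of arithmetic. This is a verification sketch of that known schedule, not a derivation:

```python
def boil_time():
    """Trace the classic 7/11 hourglass schedule and return when the egg
    has boiled for 15 minutes, counting from t = 0."""
    # t = 0: start both glasses and put the egg in.
    top7, top11 = 7, 11
    # t = 7: the 7-minute glass empties; flip it immediately.
    t = 7
    top11 -= 7            # 4 minutes of sand remain in the 11-glass
    top7 = 7              # flipped, so it is full again
    # t = 11: the 11-minute glass empties; flip the 7-minute glass.
    t = 11
    fallen = 4            # sand that ran through the 7-glass since t = 7
    top7 = fallen         # flipping puts those 4 minutes back on top
    # The 7-glass empties 4 minutes later: take the egg out.
    t += top7
    return t
```

Since the egg goes in at t = 0 and comes out when the function returns, the schedule boils it for exactly that many minutes with no waiting beforehand.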
Pzzls top 10
logic puzzle
math puzzle
3. Lockers
math puzzle
logic puzzle
5. Sky theory
science puzzle
logic puzzle
7. The superstitious president
logic puzzle
math puzzle
logic puzzle
10. Planting trees
logic puzzle
Redan Math Tutor
Find a Redan Math Tutor
...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom
to tutor from home so that I can be a stay at home mom.
10 Subjects: including linear algebra, logic, algebra 1, algebra 2
...All children have the ability to learn the material. It is important to address the learning style of the student. I also provide learning style assessments in order to provide the most
effective use of my tutoring.
10 Subjects: including algebra 1, grammar, Microsoft Word, Microsoft PowerPoint
...I am patient and caring. I believe that every child can achieve success.I am a Georgia certified teacher. I am certified in Early Childhood Education P-5 as well as Middle Grades Education
11 Subjects: including algebra 1, prealgebra, reading, ESL/ESOL
...My specialty was income tax and I prepared returns for individuals, small businesses, farmers, students, and retirees, among others. In short, I love both teaching and learning and I'm excited
to be of help to anyone who needs assistance in one of my fields of experience!I've been using American...
30 Subjects: including algebra 1, algebra 2, reading, SAT math
...I am able to do this successfully because of my experience and patience. I have a BS and Master's degree in mathematics. I have tutored and taught on the high school and college level.
19 Subjects: including geometry, precalculus, trigonometry, SAT math
Nearby Cities With Math Tutor
Avondale Estates Math Tutors
Between, GA Math Tutors
Clarkston, GA Math Tutors
Conley Math Tutors
Ellenwood Math Tutors
Grayson, GA Math Tutors
Hapeville, GA Math Tutors
Jersey, GA Math Tutors
Mansfield, GA Math Tutors
Oxford, GA Math Tutors
Porterdale Math Tutors
Red Oak, GA Math Tutors
Rex, GA Math Tutors
Scottdale, GA Math Tutors
Walnut Grove, GA Math Tutors
Mathematica Labs for MATH 121
These Labs refer to the textbook
Calculus/Early Transcendentals, 7th Ed., by J. Stewart.
Lab 1 (goes with Secs. 12.1 -- 12.4)
Part 1: Review, and Parametric plots in 2D and 3D (no exercises)
Part 2: Parametric plots of ellipses and hyperbolas (this part is to be submitted for grading)
Lab 2 (goes with Sec. 12.5)
Basic vector operations, and equations of planes
Lab 3 (goes with Sec. 12.6)
Trace plots of quadric surfaces
Lab 4 (goes with Secs. 14.1--14.3)
Contour plots, and discontinuous mixed partial derivatives
Lab 5 (goes with Secs. 14.6, 14.7)
Finding extrema of a function
Lab 6 (goes with Sec. 14.8, which was not covered in class)
Introduction to Lagrange multipliers
Lab 7 (goes with Secs. 15.10 and 16.6)
Parametric surfaces and their animation
Last updated: November 2013
Spectral clustering by recursive partitioning
"... Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then
design algorithms to (approximately) optimize various graph-based objective functions. However, in most appli ..."
Cited by 24 (9 self)
Add to MetaCart
Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then design
algorithms to (approximately) optimize various graph-based objective functions. However, in most applications, this similarity information is merely based on some heuristic; the ground truth is
really the unknown correct clustering of the data points and the real goal is to achieve low error on the data. In this work, we develop a theoretical approach to clustering from this perspective. In
particular, motivated by recent work in learning theory that asks “what natural properties of a similarity (or kernel) function are sufficient to be able to learn well? ” we ask “what natural
properties of a similarity function are sufficient to be able to cluster well?” To study this question we develop a theoretical framework that
"... Abstract. In this paper, we initiate a theoretical study of the problem of clustering data under interactive feedback. We introduce a query-based model in which users can provide feedback to a
clustering algorithm in a natural way via split and merge requests. We then analyze the “clusterability” of ..."
Cited by 5 (1 self)
Add to MetaCart
Abstract. In this paper, we initiate a theoretical study of the problem of clustering data under interactive feedback. We introduce a query-based model in which users can provide feedback to a
clustering algorithm in a natural way via split and merge requests. We then analyze the “clusterability” of different concept classes in this framework — the ability to cluster correctly with a
bounded number of requests under only the assumption that each cluster can be described by a concept in the class — and provide efficient algorithms as well as information-theoretic upper and lower
bounds.
, 2007
"... Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then
design algorithms to (approximately) optimize various graph-based objective functions. However, in most appli ..."
Cited by 3 (3 self)
Add to MetaCart
Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then design
algorithms to (approximately) optimize various graph-based objective functions. However, in most applications, this similarity information is merely based on some heuristic: the true goal is to
cluster the points correctly rather than to optimize any specific graph property. In this work, we initiate a theoretical study of the design of similarity functions for clustering from this
perspective. In particular, motivated by recent work in learning theory that asks “what natural properties of a similarity function are sufficient to be able to learn well? ” we ask “what natural
properties of a similarity function are sufficient to be able to cluster well?” We develop a notion of the clustering complexity of a given property (analogous to notions of capacity in learning
theory), that characterizes its information-theoretic usefulness for clustering. We then analyze this complexity for several natural game-theoretic and learning-theoretic properties, as well as
design efficient algorithms that are able to take advantage of them. We consider two natural clustering objectives: (a) list clustering: analogous to the notion of list-decoding, the algorithm can
produce a small list of clusterings (which a user can select from) and (b) hierarchical clustering: the desired clustering is some
, 2007
"... This thesis develops and analyzes theoretical frameworks for new emerging paradigms of Machine Learning including Semi-supervised, Active, and Similarity-based Learning. These are areas of
significant practical importance and significant activity in Machine Learning, and a number of different algori ..."
Cited by 3 (0 self)
Add to MetaCart
This thesis develops and analyzes theoretical frameworks for new emerging paradigms of Machine Learning including Semi-supervised, Active, and Similarity-based Learning. These are areas of
significant practical importance and significant activity in Machine Learning, and a number of different algorithmic approaches have been developed for each of them. Standard Learning Theory
frameworks such as PAC or Statistical Learning Theory models tend to not capture these learning approaches, hence developing sound and rigorous models that provide a thorough understanding of these
new paradigms is desirable. The purpose of this thesis is to propose and to study new theoretical frameworks and algorithms for better understanding and extending some of these learning approaches.
In addition, this dissertation also presents new applications of techniques from Machine Learning Theory to new emerging areas of Computer Science at large, such as Auction and Mechanism Design. In
Machine Learning, there has been growing interest in using unlabeled data together with labeled data due to the availability of large amounts of unlabeled data in many applications. As a result, a
number of different algorithmic approaches have been developed for this
, 2009
"... We consider the problem of clustering with feedback. We study a recently proposed framework for the problem and present new results on clustering geometric concept classes in that model. In this
model the clustering algorithm interacts with the user via “split ” and “merge ” requests to figure out t ..."
Cited by 1 (0 self)
Add to MetaCart
We consider the problem of clustering with feedback. We study a recently proposed framework for the problem and present new results on clustering geometric concept classes in that model. In this
model the clustering algorithm interacts with the user via “split ” and “merge ” requests to figure out the target clustering. We also give a simple generic algorithm to cluster any concept class in
the model. Our algorithm is query-efficient in the sense that it involves only a small amount of interaction with the user. We also present and study two natural generalization of the original model.
The original model assumes that the user response to the algorithm is perfect. We eliminate this limitation by proposing a noisy model for interactive clustering and give an algorithm for learning
the class of intervals in that model. We also propose a dynamic model considering the fact that the user might see a random subset of the space of all points at every step. Finally, for datasets
satisfying a spectrum of weak to strong properties, we give query bounds, and show that a class of clustering functions containing Single-Linkage will find the target clustering under the strongest
property.
"... Problems of clustering data from pairwise similarity information arise in many different fields. Yet the question of which algorithm is best to use under what conditions, and how good a notion
of similarity does one need in order to cluster accurately remains poorly understood. In this work we propo ..."
Add to MetaCart
Problems of clustering data from pairwise similarity information arise in many different fields. Yet the question of which algorithm is best to use under what conditions, and how good a notion of
similarity does one need in order to cluster accurately remains poorly understood. In this work we propose a new general framework for analyzing clustering from similarity information that directly
addresses this question of what properties of a similarity measure are sufficient to cluster accurately and by what kinds of algorithms. We show that in our framework a wide variety of interesting
learning-theoretic and game-theoretic properties, including properties motivated by mathematical biology, can be used to cluster well, and we design new efficient algorithms that are able to take
advantage of them. We consider two natural clustering objectives: (a) list clustering, where the algorithm’s goal is to produce a small list of clusterings such that at least one of them is
approximately correct, and (b) hierarchical clustering, where the algorithm’s goal is to produce a hierarchy such that desired clustering is some pruning of this tree (which a user could navigate).
We develop a notion of the clustering complexity of a given property, analogous to notions of capacity in learning theory, that characterizes information-theoretic usefulness for clustering. We
analyze this quantity for a wide range of properties, giving tight upper and lower
"... Abstract. The high dimensionality of the data generated by social networks has been a big challenge for researchers. In order to solve the problems associated with this phenomenon, a number of
methods and techniques were developed. Spectral clustering is a data mining method used in many application ..."
Add to MetaCart
Abstract. The high dimensionality of the data generated by social networks has been a big challenge for researchers. In order to solve the problems associated with this phenomenon, a number of methods and techniques were developed. Spectral clustering is a data mining method used in many applications; in this paper we used this method to find students' behavioral patterns in an e-learning system. In addition, a software tool was introduced to allow the user (tutor or researcher) to define the data dimensions and input values and obtain appropriate graphs with behavioral patterns that meet his/her needs. Behavioral patterns were compared with students' study performance and evaluation in relation to their possible usage in collaborative learning.
Jacobi algorithm problem... could wikipedia be wrong?
01-17-2010 #1
Registered User
Join Date
Dec 2009
Jacobi algorithm problem... could wikipedia be wrong?
I am working on a C implementation of the Jacobi algorithm, using Wikipedia as an example.
I was trying to follow along with the given example on the page:
Jacobi method - Wikipedia, the free encyclopedia
The starting matrix A = [2 3,5 7]
then the lower is said to be [0 0,-5 0], which makes sense except that the 5 is negative.
The same is true for the upper.
The page on strictly lower and upper parts of a matrix says nothing about negating them. Am I misunderstanding something?
Could wikipedia be wrong?
Of course! It's useful for quick research pointers, but it should not form the basis of your work because of potentially not-so-correct articles. Of course, a lot of articles are very authoritative, but seek alternatives if in doubt.
I would use MATLAB or other equivalent software (one free alternative is "octave" for Linux). Download a Jacobi algorithm for this language, and try some input. You can add simple print
statements to the MATLAB code that shows the values at each iteration, and compare each iteration with your own C algorithm.
And as mentioned, it is very possible Wikipedia is incorrect. For Jacobi or other math stuff, I would use a math website/reference instead.
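On the sign question specifically: with the splitting A = D + L + U, the Jacobi update is x(k+1) = D^{-1}(b - (L+U) x(k)), while some write-ups (including, at times, the Wikipedia article) instead use A = D - L - U, which negates the off-diagonal entries — hence the -5 — but yields the same iteration either way. A minimal Python sketch to compare against a C implementation (not the poster's code; the 2x2 system below is a made-up diagonally dominant example, since Jacobi need not converge otherwise):

```python
def jacobi(A, b, iters=50):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (L+U) x_k).

    Sign note: texts that split A = D - L - U negate the off-diagonal
    parts (explaining the -5 in the Wikipedia example), but the update
    below is identical under either convention."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            # Sum of off-diagonal terms using the previous iterate only.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

# Hypothetical diagonally dominant system; exact solution x = 1/6, y = 1/3.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
x = jacobi(A, b)
```

Printing x after each sweep and comparing with the C program's iterates is a quick way to localize a sign bug.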
Choosing tau for elliptic curves over the rational numbers with prescribed ramification data
Let $r>2$ and let $b_1,b_2,\ldots,b_r$ be in $\mathbf{P}^1(\mathbf{Q})$. Let $B$ be the divisor $$B:= \sum [b_i].$$ We consider this data to be fixed. For $d>1$, we define $\textrm{Ell}(b_1,b_2,\
ldots,b_r,d)$ as the set of (isomorphism classes of) elliptic curves $E$ over $\mathbf{Q}$ that admit a finite morphism $f:E\longrightarrow \mathbf{P}^1_\mathbf{Q}$ of degree $d$ which is etale
outside $\{b_1,b_2,\ldots,b_r\} \subset \mathbf{P}^1(\mathbf{Q})$.
Question 1. Let $E$ be in $\textrm{Ell}(b_1,b_2,\ldots,b_r,d)$ and choose a finite morphism $f:E\longrightarrow \mathbf{P}^1_\mathbf{Q}$ of degree $d$ which is etale outside $\{b_1,b_2,\ldots,b_r\} \
subset \mathbf{P}^1(\mathbf{Q})$. Let $X$ be the analytification of $E_\mathbf{C}$. There exists a $\tau$ in the complex upper half plane such that $X = \mathbf{C}/\mathbf{Z}+\tau\mathbf{Z}$. Can we
choose $\tau$ (or $q=e^{2\pi i \tau}$) using the data $(b_1,b_2,\ldots,b_r,d,f)$?
Question 2. It follows from Faltings's theorem that the set $\textrm{Ell}(b_1,b_2,\ldots,b_r,d)$ is finite. Is there a more elementary proof of this?
EDIT: Let me describe how the elliptic curve is given (in the set-up I have in mind).
Let $U$ be an open subscheme of $\mathbf{P}^1_\mathbf{Z}$ with complement $D$. We suppose that the closed subscheme $D$ is a horizontal divisor on $\mathbf{P}^1_\mathbf{Z}$ such that the base change
$D_\mathbf{Q}$ equals $B$ defined above. Let $V\longrightarrow U$ be a finite etale morphism, with $V$ connected. Let $g:Y\longrightarrow \mathbf{P}^1_\mathbf{Q}$ be the normalization of $\mathbf{P}^
1_\mathbf{Q}$ in the function field of $V$. We make the following extra assumptions:
1. $Y$ has a $\mathbf{Q}$-rational point.
2. The genus of $Y$ equals 1.
So the morphism $f$ arises like this.
I'm actually more interested in the set-up described above without assumptions 1 and 2. I just figured it would be an easy case to start with because it could/should be handled more directly.
1 Answer
I am having difficulty making sense of question 1. If you already know $f$, then you have $E$, which gives you an $SL_2(\mathbb{Z})$-orbit of values of $\tau$. This seems to be the best
you can do.
For question 2, I think you can bound the number of degree $d$ field extensions of $\mathbb{C}(z)$ whose discriminant divides a certain polynomial (e.g., $\prod_{i=1}^r (z-b_i)^d$). This puts a bound on the number of curves of any genus with a degree $d$ map to the line ramified at the chosen points, and hence on the genus one curves.
I want to know what this orbit you mention looks like, and how to choose a "small" representative of it. This could mean something like finding a tau such that its absolute value is bounded by d times r, say. I really hope I'm making sense here. – Ari Jan 3 '11 at 14:58
For any orbit, there are infinitely many points within any ball of positive radius around zero. If instead you want $e^{2 \pi i \tau}$ to be small, then you should get a well-defined
answer in most cases (namely, inside the injectivity radius). Given the $j$-invariant of $E$, you should be able to approximate $q$ in various absolute values by power series methods
(but I haven't tried this). – S. Carnahan♦ Jan 3 '11 at 15:24
What is the injectivity radius? What does it depend on? I really don't know anything about X as a compact Riemann surface besides its genus and the given morphism f. So for example I
"dont know" the j-invariant. So I ask, can one "explicitly" write down the j-invariant in terms of the data given (branch points and degree of f)? – Ari Jan 3 '11 at 16:43
Sorry, I shouldn't have called it the injectivity radius, but I meant the maximum radius in the $q$-disk where $j$ is injective, which is $e^{-2\pi}$. Again, when you say that you know
the morphism $f$, it sounds like you have some datum that gives you the isomorphism type of the source $E$. How is the map $f$ described to you otherwise? A set of branch points in the
line is not sufficient to characterize $E$. – S. Carnahan♦ Jan 4 '11 at 5:55
For the sake of brevity, I left out the exact set-up of my problem. I now see that this wasn't a good idea. I edited the question. Hopefully it'll be a bit clearer now. – Ari Jan 4 '11
at 11:07
Some problems I just don't get.
26 January 2004 - 6:15am | by Anonymous
What mass of C6H13BR will be produced by a reaction giving 65% yield if 12.5mL of liquid C6H12 (density = 0.673g/mL) is treated with 2.70L HBr (g) at STP?
C6H12 + HBr ----> C6H13Br
I don't understand what it means by density. Should I just work the problem by finding the limiting reactant, calculating the theoretical yield, and multiplying it by the percentage?
by Anonymous | 26 January 2004 - 6:42am
Well you need to calculate the limiting reagent. The reason they gave you density and a volume was so you can find the mass of C6H12 and from that figure out the moles.
by Anonymous | 30 January 2004 - 5:28pm
[quote]What mass of C6H13BR will be produced by a reaction giving 65% yield if 12.5mL of liquid C6H12 (density = 0.673g/mL) is treated with 2.70L HBr (g) at STP?
C6H12 + HBr ----> C6H13Br
I don't understand what it means by density. Should I just work the problem by finiding the limiting reactant and caculate the theoretical yield and multiply it by a percentage?[/quote]
Density is defined as mass/volume.
You can find the limiting reagent by converting to moles. Remember, whenever you see a chemical equation you will always need to convert to moles (with few exceptions).
To find the moles of C6H12, multiply the volume by density. Notice that the volume cancells out to give you the grams. Find the molar mass of the above compound and thus find the number moles (since
you have the actual grams).
Notice that the HBr is a gas at standard temperature and pressure. One mole of any ideal gas at STP occupies a volume of 22.4 L, so you can use 1 mol/22.4 L as a conversion factor to find the moles, e.g. 2.70 L x (1 mol/22.4 L) = ?
I am sure you are familiar with finding the limiting reagent. If you need help, just tell me. Find the limiting reagent, find the number of moles of the product yielded. This is the theoretical
yield. Multiply by .65 (65%).
Hope this helps.
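Putting the steps above together as a quick check (a sketch: atomic masses are the usual rounded values C = 12.011, H = 1.008, Br = 79.904, the 1:1 stoichiometry is read off the given equation, and 22.4 L/mol is the molar volume at STP):

```python
# Molar masses from rounded atomic weights.
M_C6H12 = 6 * 12.011 + 12 * 1.008             # ~84.16 g/mol
M_C6H13Br = 6 * 12.011 + 13 * 1.008 + 79.904  # ~165.07 g/mol

mol_c6h12 = 12.5 * 0.673 / M_C6H12  # mass from density x volume, then moles
mol_hbr = 2.70 / 22.4               # ideal gas at STP: 22.4 L per mole

mol_product = min(mol_c6h12, mol_hbr)          # 1:1 stoichiometry
mass_product = 0.65 * mol_product * M_C6H13Br  # apply the 65% yield
```

C6H12 turns out to be limiting (about 0.100 mol versus 0.121 mol of HBr), giving roughly 10.7 g of product.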
pH question
20 May 2007 - 5:41am | by Anonymous
I am having difficulty deciding which equation to use to calculate the volume. Can someone please give me some help?
What is the volume of 6M HCl required to acidify the mixture of 1g benzaldehyde and 2mL 10M KOH to a pH ~2?
MW of benzaldehyde is 106.1g/mol
cannizzaro reaction: benzaldehyde + KOH + H+ <=> benzoate + benzoic acid
pH = pKa + logQ <- I don't know if this is right to use....
Here is what i think. The volume of HCl needed to acidify should be more than the volume needed to neutralize the solution, so I've calculated the amount needed for neutralization, which is according
to my calculation:
Moles of Benzaldehyde
= 1g Benzaldehyde x (1 mol/106.1g)
= 0.009425 mol benzaldehyde
Mole of KOH
= (2 ml x 10 mmol/1ml) x (1 mol/1000 mmol)
= 0.02000 mol KOH
Mole of OH- in excess
= 0.02000 mol KOH – 0.009425 mol Benzaldehyde
= 0.01058 mol OH- in excess
To neutralize, volume of HCl needed
= 0.01058 mol HCl x (1 L HCl/ 6mol HCl) x (1000mL/1L)
= 1.76 mL HCl
But the problem is that I don't know how to relate the equation to the PH equation since KOH and H+ is both on the same side of the chemical equation..... | {"url":"http://www.webelements.com/forum/node/1204","timestamp":"2014-04-17T20:02:23Z","content_type":null,"content_length":"15901","record_id":"<urn:uuid:cec4595a-5d26-408a-b3f2-4bf1aca76bbb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
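Just to make the neutralization arithmetic in the question concrete (a sketch that keeps the poster's 1:1 benzaldehyde:KOH assumption as-is, and ignores the extra acid consumed protonating the benzoate and pushing the pH down to ~2, which would add somewhat to the volume):

```python
# Reproduces the neutralization estimate from the question above.
mol_benzaldehyde = 1.0 / 106.1            # 1 g at M = 106.1 g/mol
mol_koh = 2.0 * 10.0 / 1000.0             # 2 mL of 10 M KOH
mol_oh_excess = mol_koh - mol_benzaldehyde  # OH- left after the reaction
v_hcl_mL = mol_oh_excess / 6.0 * 1000.0   # volume of 6 M HCl, in mL
```

This reproduces the ~1.76 mL figure; the answer to the full question is a bit larger once the acid needed beyond neutralization is included.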
Re: Series
But Cassini's identity is F(n-1)*F(n+1) - F(n)^2 = (-1)^n.
Making use of this may be difficult here.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
Hi anonimnystefy;
gAr is right. I did not get around to helping you with yours but that identity you chose is not true.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Series
hi guys
I don't know how I got to it then.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Series
Hi anonimnystefy;
That is okay. As long as you saw the method we used.
Re: Series
hi bobbym
If I'm not mistaken, this kind of computation is called telescoping.
Re: Series
Yes, that is correct. The whole idea has been expanded into what has been called "Creative Telescoping."
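The telescoping trick can be seen in a classic example: since 1/(n(n+1)) = 1/n - 1/(n+1), the interior terms of the partial sum cancel, leaving 1 - 1/(N+1) = N/(N+1). A quick sketch verifying this:

```python
# Classic telescoping sum: 1/(n(n+1)) = 1/n - 1/(n+1), so the partial sum
# collapses to N/(N+1). Exact rational arithmetic avoids float round-off.
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

for N in (1, 5, 100):
    assert partial_sum(N) == Fraction(N, N + 1)
print(partial_sum(100))  # -> 100/101
```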
Re: Series
Nice. I would just like the rest of the sums to be solved, so that gAr could give out some slightly easier sums. Not too easy, just slightly easier.
Re: Series
Hi anonimnystefy;
Slightly easier sums?
Hi gAr;
Re: Series
Hi anonimnystefy,
We feel the problem's easy after we solve it!
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
Hi gAr;
Re: Series
Hi bobbym,
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
Did you publish what you wanted?
Re: Series
Also submitted a few generating functions...
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
Very good! Can I see?
Re: Series
Yeah, sure! You want the link?
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
Yes, then you can delete it for privacy.
Re: Series
[link deleted by moderator for privacy]
You may edit the link yourself after you read.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
Congratulations and many more!
Re: Series
Other g.f's I submitted were relatively easy ones.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
I will look for them.
Re: Series
I hope to find some more in the future.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Series
I found them. Of course you will.
Re: Series
But not for now, again a bit busy!
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=16080&p=9","timestamp":"2014-04-20T11:40:18Z","content_type":null,"content_length":"38087","record_id":"<urn:uuid:b2d53aeb-8443-4501-815f-020a4b64ff18>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
How tall is that UFO?
You know I wander around the intertubes, right? Who doesn’t? Anyway, I saw this collection of strange google Earth images. Yeah, it is kind of dumb, but this one made me think:
That article said the image was from TechEBlog, so there is that.
I have no idea what this thing is, but it is clearly tall. How tall? Instead of searching online for info about this structure (that wouldn’t be any fun), I figured I could do a quick analysis of the
shadow. Here we go. First, I need to make some measurements. It turns out that the Tracker video analysis tool is also quite excellent for image analysis.
Before I look at the image, let me draw a diagram of two objects with shadows.
Both of these objects cast shadows. But, there are two important ideas. First, the Sun is so far away that the shadows cast from the objects are at the same angle ( θ ). These shadows make triangle
shapes with the ground. Both objects are vertical which means that the triangles are right-triangles. Since they have two of the same angles, the triangles are similar. The ratio of height to length
of the shadow is the same for both objects.
The plan is to measure the lengths of the two shadows and estimate the height of the building. This way I can solve for the height of the unknown object as h_unknown = h_building * (l_unknown / l_building), where the l's are the shadow lengths.
And here are my measurements – note that I don’t really need to set a scale for the picture. The key is that I need the ratio of the lengths of shadows. Here is the building – I am setting the length
of its shadow as 1 unit.
This gives a shadow length ratio of about 10 to 1. The UFO thingy is 10 times taller than the house structure. I am guessing that is some type of single story structure (just a guess) with a height
of around 10 feet. This would make the UFO 100 feet tall.
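The similar-triangles estimate above amounts to a one-line calculation; here is a sketch of it (the 10 ft reference height is the post's guess, as stated):

```python
# Height from shadow lengths via similar triangles: with the sun effectively
# at infinity, height / shadow length is the same for every vertical object.
def height_from_shadow(h_ref, shadow_ref, shadow_unknown):
    return h_ref * (shadow_unknown / shadow_ref)

# The post's numbers: shadow ratio about 10:1, reference structure ~10 ft tall.
print(height_from_shadow(10.0, 1.0, 10.0))  # -> 100.0
```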
1. #1 dean August 25, 2010
Neat little demonstration – nice and straightforward. I do have to admit that speaking about how “tall” the “UFO” is sounds odd.
2. #2 Blaise Pascal August 25, 2010
Since the UFO has a round shadow, I suspect it is a water tower, many of which consist of a ball-shaped tank on top of a narrow column.
3. #3 NoAstronomer August 25, 2010
“I have no idea what this thing is.”
It’s clearly a water tower. Taken from almost directly overhead. Here’s a water tower in NJ not directly from above…
4. #4 Eric Lund August 25, 2010
The links don’t give any information as to where this UFO is, but I can narrow it down a bit:
(1) To get that sun angle, you have to be within 30 degrees of the equator.
(2) It’s not in the US, because the center line on that road is white, not yellow. (The TechE blog shows a little more than Rhett; in particular, the road widens to four lanes a bit to the right
of the edge of what Rhett shows.)
(3) It’s surrounded by a substantial parcel of farmland, but trees line both sides of the side road as well as the buildings, so we are in flat, non-desert terrain with a low population density.
At this resolution, I’m not sure whether the vehicle on the road is a truck headed down in the right-hand lane or a car headed up in the left-hand lane. The former would suggest a location in
Latin America (perhaps Brazil), while the latter would imply Australia or southern Africa.
And wow, that thing is big. Assuming the road width (including shoulders) is about 8 m (consistent with 11 ft lanes and 2 ft shoulders) and the altitude from which this perspective is taken is
much larger than the height of the UFO, I come up with a 25 m diameter for that disk. It might be a water tower, as commenter #2 suggests, though it would be a relatively large one. It’s
definitely too big to be a cell phone tower, but about the right size (though an unusual shape) to be a shortwave transmitter (25 m is the length of a 12 MHz wave).
5. #5 Art August 25, 2010
Looks like someone stobbed a Dremel tool cutoff wheel and arbor into a Google maps screen capture printout. The blurring disguises the slot in the screw, the round object in the middle of the
The difference in the color of the shadows, deep black for the houses, and more gray for the ‘UFO’ shadow had me wondering and then it dawned on me that I’ve seen that object before.
But assuming it represented actuality the analysis is good on height.
6. #6 Flavin August 25, 2010
I think this calculation might benefit from some error bars. The ratio of the uncertainty in l_1 to its value looks like it could be pretty high, maybe around 1/3 or 1/2.
I might invest some more time on this after dinner, unless someone else jumps in before me.
7. #7 6EQUJ5 August 25, 2010
That may be a deep-space antenna (a ‘big dish’) parked at the stow position (aimed toward zenith). Seen from directly above, it will not be obvious that the surface is parabolic in cross-section.
The small central circle would be the subreflector, which you can see is casting a shadow on the main reflector. The shadow of the main reflector does not seem to be fully black like the other
shadows. Such antennas have panels with pinholes in them to drain off water (which would otherwise cause phase noise in transmission and reception) and they happen to let light through. (Think of
the holes in the screens inside your microwave oven: these let air and light through, but the holes are invisible to S-band beams.)
8. #8 Badger3k August 25, 2010
I was going to say a silo or tower, but I’d agree that a water tower makes more sense.
Either that, or the lizard-people-aliens are getting really, really lazy.
9. #9 stephenk August 25, 2010
I’m curious what it is.
Don’t know much about water towers (though I have seen some in Thailand that were inverted conical from memory) the object looks flat on top not spherical.
Compare to NoAstronomer’s photo and you can see the gradiation in shade (barely) from sunward to back side. The round object colouration in this pic looks really consistent across the top, hence
Also curious on the perimeter from about 3 to 5 oclock appears to be something sticking just slightly out (advertising hoarding pointing at the road?)
I think it’s cylindrical, are US watertanks that shape too?
10. #10 Rob (no, the other Rob) August 26, 2010
It couldn’t be a cylinder and cast such a shadow, the vertical sides of would cast a straight line.
11. #11 stephenk August 26, 2010
shallow cylinder (like a puck) on top of a stick,
Sorry I wasn’t clearer
12. #12 stephenk August 26, 2010
Got ‘im…
55°44’5.92″N 9°33’48.98″E
Inverted shallow conical water tower (Denmark)
There about 3 Panoramio pix of it too.
Also measured off Google Earth says it’s 34m across
13. #13 Ashley Moore August 26, 2010
It’s obviously a base from the arcade game Xevious.
14. #14 Rhett Allain August 26, 2010
Good job with the detective work. I am impressed. Most impressed.
15. #15 Thomas August 26, 2010
stephenk, can you explain how you found that location? Are there special tools or did you find the source for the original image?
16. #16 Robert LaRue August 26, 2010
@10 Rob gets it totally correct. Round objects cast straight sided shadows. Also, the round shadow is offset to the left slightly, while the shadows of the two buildings are not offset at all. If
the sun is the light source, this is not possible. Length of the shadow from the round object is totally impossible. The proportion of the height of objects to the length of their shadows should
be the same… shouldn’t they? Isn’t that the basis of the math above? The round shadow extends way beyond the object, unlike the shadows of what are assumed to be buildings. What am I missing
17. #17 Sam August 26, 2010
A water tower with a cylinder-shaped tank that has a flat top and flat bottom and a diameter that is grotesquely wide compared to its height? (And by “height” I mean the cylinder tank itself, not
its elevation)
I mean, I haven’t done the calcs but I would imagine that the same types of analyses that show that this thing is 100 feet up could also be used to figure its shape, and would presumably show
that the object casting the shadow is flat on the top and bottom and is quite thin (or, if it’s an elevated cylinder, “short”?) in proportion to its diameter. I say this because the shadow
appears to be nearly a perfect circle despite the fact that the sun is hardly directly overhead. At that angle, one would expect more distortion in the shadow if it were rounded on the top +/or
bottom, or if the cylinder were significantly deep — no? And if the top is not flat, wouldn’t there be a ladder to climb to the center/top?
Also, the diameter of this water tower seems exceptionally gigantic compared to the area of the other structures and the roadway.
18. #18 stephenk August 26, 2010
no special tools, just a bit of clue following.
I did start off trying to find the original image source (but couldn’t). However, did find a reference to the town. Searched around on Google earth (it only took a couple of minutes) assuming
that the image hadn’t been reoriented and I was looking for a paddock just to the west of an almost north/south road and also it wouldn’t be too far away from a population centre to use it.
19. #19 Ricardo Sgrillo August 26, 2010
These are the coordinates of the UFO:
lat 55.735057
lon 9.563656
It is a place near Vejle in Denmark
20. #20 stephen August 26, 2010
Coming from a place with not that much water, water towers are not that common where I live; although there are some, they tend to be small. That's what piqued my interest.
Anyway, this one is not exceptionally gigantic. In my following of links for water towers trying to track this one down there seem to be plenty around. There is one in Finland 75m diameter.
The shape of towers varies widely. Like I said, The ones I saw in Thailand were inverted cones and were metal (I think) the sphere on a stick shape seems to be a US common arrangement. The UK
seems to prefer a comparatively deep and narrow cylinder (much like the old abandoned railway ones I remember here in Australia, that probably isn’t a coincidence). Europe seems to have the big
concrete ones like in this article. But these are generalisations, lots of variation too.
21. #21 Steve August 27, 2010
Way to go, Ricardo. A view from the side is available on Google Street View if you go to those coordinates.
24. #24 LUS August 28, 2010
this picture just doesn’t look real…
but it is, and that's why it's awesome
25. #25 upgradeto3d August 29, 2010
There always seems to be a logical explanation for most strange sightings, but wouldn't it be boring if we didn't have them to try and debunk? This water tower one was a good find.
26. #26 Bret Jones August 29, 2010
I was going to say that the object was from the game Xevious, but someone beat me to it! Nice write up and what a fun bit of harmless speculation.
28. #28 Albert Tarpey August 29, 2010
I also think that straight shadows can be seen from those objects that are rounded. The light is coming from the sun, so is it still a possibility? From the look of this and the shadow length, I doubt it. It's too tall. The shadow of the circle is far too wide, while the other shadows in the picture seem about right.
29. #29 LordSwarovski August 29, 2010
Could you give us the location of this object? I am interested in viewing it.
Beside that, an interesting article and a clever way of using math and computers, quite inspiring. But how accurate can this way of calculating be?
30. #30 Samuel August 29, 2010
At first it really scared me!! lol.! Good post.
31. #31 Rabbit Forum August 29, 2010
Looks more like an angle grinder bit!!
Is this not some sort of water tower as opposed to a UFO especially given its proximity to what looks like some outbuildings next to the road?
33. #33 Eva August 29, 2010
Is it a real UFO picture? yep , it’s about 100 feet tall. I agree
35. #35 Simon Birch August 29, 2010
I must agree with this statement
“I have no idea what this thing is.”
It’s clearly a water tower. Taken from almost directly overhead. Here’s a water tower in NJ not directly from above…
apart from that must be fake
36. #36 ps3magic August 29, 2010
It looks like a water tower to me, since the place looks like a farm. And another thing: if it was a real UFO, it wouldn't be flying at such a low height.
37. #37 Casey Pearson August 29, 2010
This has a conspicuous resemblance to the community threshing devices used on Indian farms in remote villages. Those are also popularly known as 'mini-twisters' there.
38. #38 Cristiano Ronaldo August 29, 2010
Well done Rhett! Remind me of the good ol geometry class days. I agree with the guy above who said its a water tower. But what do I know?
39. #39 Jimmy Alfaro August 30, 2010
We cannot say that UFO are real, but think the whole outer space are so big and there’s a lot of galaxies anyways this post reminds me what I saw on youtube yesterday about UFO.
40. #40 Phoenix Psychiatrist August 30, 2010
Good work on the mathematics. I love looking at the strange google earth images, you always find crazy stuff. There is no denying this UFO has raised some eyebrows
41. #41 Jerry August 30, 2010
Awesome stuff! I’m currently teaching my children how to calculate the height of a tower without going to the tower to measure how far away they are. The estimating is a good technique! It’s
amazing how different things look from straight up. This is a link to the picture of the tower from the street: http://tinyurl.com/292c9pr | {"url":"http://scienceblogs.com/dotphysics/2010/08/25/how-tall-is-that-ufo/","timestamp":"2014-04-17T12:48:28Z","content_type":null,"content_length":"101969","record_id":"<urn:uuid:d7a89150-1e0b-4587-8353-61392d373954>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Appendix C: Homework Problems
The following “homework” problems were assigned at the end of the first evening and at the end of the next full day. Note that the problems are slightly different: The first is a task of teaching,
the second a task of teacher preparation.
Day 1 Homework
A Task of Teaching: Preparing to Teach a Mathematics Problem
Several digits “8” are written, and some “+” signs are inserted to get the sum 1000. Figure out how it is done.
TASK: Prepare to teach this problem to a class. Do whatever you think you need to do to be ready to teach it to a specific class you have in mind.
Stand back and reflect: What mathematics did you use to do this task of preparing to teach? What sorts of mathematical understanding and sense did you draw on? How did you use what you did?
Day 2 Homework
A Task in Teacher Preparation: Comparing Two Versions of a Mathematics Problem
a. Several digits “8” are written, and some “+” signs are inserted to get the sum 1000. Figure out how it is done.
b. Write down as many 8's and + signs as you want in a row so that you write a statement that equals 1000. Find all the possible solutions to this problem.
TASK: A teacher took the original 8's problem and revised it for her class. How does problem B compare with the version in A?
Stand back and reflect: What opportunities for learning mathematics could be drawn from teachers' engagement with this task?
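The 8's problem (Part A) can also be checked by brute force; a short sketch a teacher could use to enumerate how many 8s, 88s, and 888s sum to 1000 (the classic answer, 888 + 88 + 8 + 8 + 8, uses eight 8s):

```python
# Brute-force the 8's problem: choose how many 8s, 88s, and 888s to add so
# the terms sum to 1000. Each solution uses a + 2b + 3c copies of the digit
# 8, where a, b, c count the 8s, 88s, and 888s respectively.
solutions = []
for c in range(1000 // 888 + 1):
    for b in range(1000 // 88 + 1):
        rem = 1000 - 888 * c - 88 * b
        if rem >= 0 and rem % 8 == 0:
            solutions.append((rem // 8, b, c))

for a, b, c in solutions:
    terms = ["888"] * c + ["88"] * b + ["8"] * a
    print(" + ".join(terms), f"= 1000  ({a + 2*b + 3*c} eights)")
```

The fewest-eights solution this finds is (a, b, c) = (3, 1, 1), i.e. 888 + 88 + 8 + 8 + 8 with eight 8s, matching the classic answer.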
Gelfand, I. M., & Shen, A ( 1993). Algebra. Boston, MA: Birkhäuser. | {"url":"http://www.nap.edu/openbook.php?record_id=10050&page=169","timestamp":"2014-04-20T14:32:11Z","content_type":null,"content_length":"41162","record_id":"<urn:uuid:6883d010-81ab-435d-8d31-2cfcf414f745>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Single Speed Gearing
From Pvdwiki
Often when talking about gearing a singlespeed, the expression "two to one" comes up. Besides the fact that this is a meaningless statement, it implies that the choice of gears is based on easy math rather than real decisions. Let's look at this.
Any gearing choice is conditional on actually finding a gear and chain combination that is possible to use. This is covered on the Chain Length Calculation page.
Simple Math
32 divided by 2 is 16. So the ratio of 32 to 16 is 2:1.
34 divided by 2 is 17. So the ratio of 34 to 17 is 2:1.
We call this simple math. Any moron can do this math in his/her head with these numbers. What connection does this have with choosing a gear for a bicycle with a specific rider on a specific
terrain? Absolutely none. It is just simple math. To base a gearing choice on the fact that the number of teeth on two chosen gears can be divided into the whole numbers of 1 and 2 is nuts.
Here is a table of gear ratios with the more common gearing choices highlighted for off road riding, single gear, 26" wheel. It is a quick and simple way of seeing a facet of gearing choice. It
doesn't tell the whole story, but it tells us something.
As can easily be seen, a great variety of gears exist within the 2:1 and 1.7:1 range.
'Real' Gearing
Real gearing is probably not the best term. Typically, this is referred to as Rollout or Development. What makes this a better system to use is that rather than comparing one gear ratio to another, a
gear is described by what it actually does. This is the amount of forward distance moved on the bike by each rotation of the crank arms.
Now we can quickly compare 26" MTB wheel gearing to 29" MTB wheel gearing.
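Development is just wheel circumference times the gear ratio. A sketch (the 26.4" effective diameter is an assumption back-solved from the article's 148.4" figure for 34/19; real tire diameters vary, so treat the numbers as approximate):

```python
import math

# Development (rollout): forward distance per crank revolution, in inches.
def development(chainring, cog, wheel_diameter_in):
    return math.pi * wheel_diameter_in * chainring / cog

print(round(development(34, 19, 26.4), 1))  # ~148.4" on an assumed 26" setup
print(round(development(31, 19, 29.0), 1))  # rough 29er comparison, ~148.6"
```

This also illustrates the article's closing point: 31/19 on a 29" wheel develops nearly the same distance per crank revolution as 34/19 on a 26" wheel.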
Other Factors
The chosen gear must fit on the bike. Bikes are produced with differing ranges of adjustment.
This chart shows how many gear choices exist, given a specific chainstay length.
Let's say you have a bike with a 16.750" chainstay length and 0.250" of adjustment in either direction.
Then we separate out the gears that are actually within the range of gearing that we want.
You then end up with about 150 different usable gear combinations that produce between 135 and 175 inches of development.
Now, reduce the range of inches of development to a finer range that reflects what is actually desired. For me, that means between 145 and 155 inches.
We now have 36 different gear configurations available.
(12 21) (12 22) (13 23) (13 24) (14 25) (14 26) (15 27) (15 28) (16 28) (16 29) (17 30) (17 31) (18 32) (18 33) (19 34) (19 35) (20 35) (20 36) (20 37) (21 37) (21 38) (21 39) (22 39) (22 40) (22 41)
Many of these gears are not desirable for different reasons.
• The chainring size may not be available.
• The cog size may not be available.
• A lighter, more efficient gear is a better option.
Since cogs are generally available (or best used) between 16 and 20 teeth, and chainrings usually available in even numbers of teeth, we end up with:
(16 28)
(17 30)
(18 32)
(19 34)
(20 36)
Five choices within the specified range. These then get checked against the physical capabilities of the system.
For example, (16 28) and (20 36) will not fit on my Zion 660EBB. See: Chain Length Calculation page. So, now I am down to just 3 good choices for gearing within the chosen range.
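The filtering described above can be sketched in a few lines. The 26.4" effective diameter is an assumption back-solved from the article's development figures, and chain-length feasibility (the Zion 660EBB check) is not modeled here:

```python
import math

# Enumerate even chainrings with 16-20t cogs and keep combinations whose
# development lands in the desired 145-155 inch window.
def development(chainring, cog, wheel_diameter_in=26.4):
    return math.pi * wheel_diameter_in * chainring / cog

candidates = [
    (ring, cog)
    for ring in range(26, 41, 2)   # even chainrings, as commonly sold
    for cog in range(16, 21)       # cogs between 16 and 20 teeth
    if 145 <= development(ring, cog) <= 155
]
print(candidates)  # -> [(28, 16), (30, 17), (32, 18), (34, 19), (36, 20)]
```

Under these assumptions the sketch reproduces the same five chainring/cog options listed above.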
Bigger Really Is Better
Larger diameter gears work much more efficiently than smaller gears.
see: http://www.bhpc.org.uk/HParchive/PDF/hp50-2000.pdf
Personal Selection
Personally, I am running a 34/19 gear (148.4"). We do a lot of climbing where I live, and I'm overweight and not the strongest climber. I would love to go up to 34/18 (156.7") or 32/17 (156.1"). That
would help my downhill speeds a little if I could handle the gear on the climbs, but with mud season coming, I may even have to drop to 34/20 (141.0) depending on power losses. We will see.
Interesting to note is that if I was running 29" wheels, I would be running a 31/19 as a close to equivelant gear. | {"url":"http://www.peterverdone.com/wiki/index.php?title=Single_Speed_Gearing","timestamp":"2014-04-16T07:50:26Z","content_type":null,"content_length":"19081","record_id":"<urn:uuid:264f7cfc-69a4-47f0-9658-166994b4be1b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] A New Ordinal Notation
Dmytro Taranovsky dmytro at MIT.EDU
Mon Aug 8 13:33:45 EDT 2005
I have discovered an ordinal notation system for rudimentary set theory
+ "for every ordinal alpha, there is recursively alpha-inaccessible
ordinal," which may be simpler and more natural than the standard
I have also discovered what may be an ordinal notation system for full
second order arithmetic; however, I do not have a proof of its
correctness. A key idea is assignment of reflection degrees to
ordinals. An ordinal has degree alpha+1 iff it has degree alpha and is
below a certain ordinal (that is dependent on alpha) or is a limit of
ordinals of degree alpha. An ordinal has degree gamma where gamma is
limit iff it has every degree less than gamma. The notation is built
from constants and a function C such that C(alpha, beta) is the least
ordinal above beta that has degree alpha.
Details are in my paper
Dmytro Taranovsky
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2005-August/009030.html","timestamp":"2014-04-19T07:01:33Z","content_type":null,"content_length":"3192","record_id":"<urn:uuid:adb07cd2-ec03-4996-a0ab-2fabe1c49c97>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
The DistanceTransform package
An n-D distance transform that computes the Euclidean distance between each element in a discrete field and the nearest cell containing a zero.
The algorithm implemented is based on Meijster et al., "A general algorithm for computing distance transforms in linear time." Parallel versions of both the Euclidean distance transform and the squared Euclidean distance transform are also provided.
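The package's Haskell API is not reproduced here, but the underlying idea can be sketched: in one dimension a distance transform needs only a forward and a backward sweep, and Meijster's algorithm builds the exact n-D Euclidean transform by processing one dimension at a time. An illustrative Python sketch (not the package's API):

```python
# One-dimensional distance transform via two sweeps: dist[i] is the distance
# to the nearest zero cell. This is the 1-D building block; Meijster et al.
# extend the idea dimension by dimension for the exact n-D Euclidean case.
def distance_transform_1d(cells):
    INF = float("inf")
    dist = [0 if c == 0 else INF for c in cells]
    for i in range(1, len(dist)):                 # forward pass
        dist[i] = min(dist[i], dist[i - 1] + 1)
    for i in range(len(dist) - 2, -1, -1):        # backward pass
        dist[i] = min(dist[i], dist[i + 1] + 1)
    return dist

print(distance_transform_1d([1, 1, 0, 1, 1, 1, 0, 1]))
# -> [2, 1, 0, 1, 2, 1, 0, 1]
```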
Version 0.1.2
Dependencies base (>=4.5 && <5), primitive, vector (>=0.9)
License BSD3
Copyright (c) Anthony Cowley 2012,2013
Author Anthony Cowley
Maintainer acowley@gmail.com
Category Math
Source repository head: git clone git://github.com/acowley/DistanceTransform.git
Upload date Sat Feb 16 02:35:56 UTC 2013
Uploaded by AnthonyCowley
Downloads 90 total (13 in last 30 days)
Re: st: Random effects probit
Re: st: Random effects probit
From "Wiji Arulampalam" <Wiji.Arulampalam@warwick.ac.uk>
To <statalist@hsphsun2.harvard.edu>, <ddrukker@stata.com>
Subject Re: st: Random effects probit
Date Fri, 20 Sep 2002 13:28:59 +0100
Dear David,
Am I right in thinking that one could actually do the test for rho=0 using the coefficient est and its std error? The statistic coeff/se will be asymp distributed sort of like a std. normal. The actual distribution has a mass of 0.5 at 0 and the rest is Normal on the positive side due to the fact that the null is on the boundary of the parameter space. Essentially a 5% test requires one to use a 10% critical level (same in the LR chisq test as specified below). But Likelihood ratio is better since it is invariant to the way one specifies the parameters where as the above Wald type test is not because of the non-linear transformations involved.
Is this wrong?
many thanks
Professor Wiji Arulampalam,
Department of Economics,
University of Warwick,
CV4 7AL,
Tel: +44 (24) 7652 3471
Sec. Tel: +44 (24) 7652 3202
Fax: +44 (24) 7652 3032
email: wiji.arulampalam@warwick.ac.uk
RES2003: http://www.warwick.ac.uk/res2003/
>>> ddrukker@stata.com 09/19/02 12:57AM >>>
Now, let's look into question iv). The test of the null that there is no
heterogeneity is a test of the null hypothesis that sigma_u = 0. This is a
test on the boundary of the parameter space for sigma_u. (See help j_chibar
for more on this topic and a reference.)
The computation of the test statistic remains the same, but its asymptotic
distribution is different from that of a standard likelihood ratio test.
Instead of converging to chi-squared with one degree of freedom, it converges
to another distribution which is basically .5*(chi-squared with one degree
of freedom).
Let's compute the value of the likelihood ratio test and its p-value.
. scalar lr = 2*(ll1-ll0)
. scalar p = .5*chi2tail(1,lr)
And display the results:
. di "hand lr is " lr " with p-value " p
hand lr is .22917373 with p-value .31606859
This result indicates that we fail to reject the null hypothesis of no
heterogeneity, or equivalently, that sigma_u = 0.
I hope that this helps.
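The hand computation above can be reproduced outside Stata. A sketch in Python (not part of the original thread), using the identity that the chi-squared(1) upper tail equals erfc(sqrt(x/2)):

```python
import math

def chibar01_pvalue(lr):
    """p-value for an LR statistic whose null distribution is the
    boundary mixture 0.5*chi2(0) + 0.5*chi2(1)."""
    # chi-squared(1 df) upper tail via the complementary error function
    chi2_1_tail = math.erfc(math.sqrt(lr / 2.0))
    return 0.5 * chi2_1_tail

lr = 0.22917373             # the statistic computed by hand above
print(chibar01_pvalue(lr))  # approx 0.3160686, matching Stata's output
```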
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
The TeX Catalogue OnLine, Entry for circle, Ctan Edition
Provides circles in math mode that can be used for the next-step operator of temporal logic, in conjunction with \Box and \Diamond (latexsym) or \square and \lozenge (amssymb). The LaTeX circles \circ and
\bigcirc are not of the right size. The circles are taken from the font lcircle10. The package contains some hacks to approximate the right size, and this solution is definitely not sufficient to give
high-quality output.
The author is Klaus Georg Barthelmann.
License: noinfo Version dated: 1998-07-15 Catalogued: 2010-02-23
Logic: finding valid conclusions
This is not a general post about all methods of extracting valid conclusions, but I will be looking at more than just syllogistic reasoning.
I want to talk about the four logic questions I asked in my most recent happenings post.
You may, but don’t need to, go back to that post… here are four pairs of premises. (And because we have two premises for each question, we may consider syllogistic reasoning, as well as other methods.)
1. some A are B
no B are C

2. no B are A
some C are B

3. no A are B
some B are C

4. some B are A
no C are B
What valid conclusion or conclusions, if any, may we draw?
Follow me….
Question 1: intuitive and modern approaches
I will spend most of my time on question one. Here is some, but not quite all, of the information which Dodd & White provided. I have shown the answer and noted that 17/20 people got it right, I have
labeled the premises and the conclusion as IEO, and I have written out the symbolic logic.
1. 17/20
I: some A are B $\exists x Ax \land Bx$
E: no B are C $\forall x Bx \Rightarrow \neg Cx$
O: $\therefore$ some A are not C. $\therefore \exists x Ax \land \neg Cx$
Let’s not worry about Aristotle yet.
First of all, let’s just think about it. I believe that my thought process went:
some A are B, no B are C…
so those A’s that are B’s are not C’s…
so some A are not C.
So, I got the answer they gave. That’s nice.
But what if we want to try other things?
We could draw some Venn diagrams. We have that B and C do not intersect (their intersection is the null set); and that A and B do intersect. I would not limit myself to just one possibility.
As you can see, in fact, we can draw the disc for the set A in ways such that
no C is A
some C is A
all A are B
all C are A.
All we need to do is make sure that A and B intersect. A variety of drawings gives us some idea of all the things that might or might not be true — some of the potential conclusions which are not
valid inferences.
For example, from the fourth drawing I see that I cannot say that “some C are not A”.
I have to say that the Venn diagrams alone would almost convince me that the only valid conclusion is “some A are not C” — and its ugly commutative version, “some not-C are A” (which is not the same
as “some C are not A”).
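The Venn-diagram survey can also be checked mechanically. The sketch below (not part of the original post) enumerates all set models over a three-element universe — three elements suffice here, since a countermodel never needs more than two witnesses — and tests the eight candidate conclusions against the premises of question 1:

```python
from itertools import product

# every way of placing 3 elements in/out of A, B, C (512 small models)
U = range(3)
MODELS = [({x for x in U if b[x] & 1},
           {x for x in U if b[x] & 2},
           {x for x in U if b[x] & 4})
          for b in product(range(8), repeat=3)]

# question 1: some A are B, and no B are C
premises = lambda A, B, C: bool(A & B) and not (B & C)

candidates = {
    "all A are C":      lambda A, B, C: A <= C,
    "no A are C":       lambda A, B, C: not (A & C),
    "some A are C":     lambda A, B, C: bool(A & C),
    "some A are not C": lambda A, B, C: bool(A - C),
    "all C are A":      lambda A, B, C: C <= A,
    "no C are A":       lambda A, B, C: not (C & A),
    "some C are A":     lambda A, B, C: bool(C & A),
    "some C are not A": lambda A, B, C: bool(C - A),
}
valid = [name for name, f in candidates.items()
         if all(f(A, B, C) for A, B, C in MODELS if premises(A, B, C))]
print(valid)   # ['some A are not C']
```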
OK, let’s look at the symbolic logic. I can get Mathematica to confirm the conclusion, but not very nicely. I can define two premises and the conclusion…
Fine, why not just prove it?
In what follows, ES stands for existential specification; US for universal specification; EG for existential generalization: I have included a more detailed comment with each one. At some level, this
is just about all we need to know about dropping quantifiers (“specialization”) and inserting them (“generalization”). I will be talking about this more, but not in this post. Think of them as fancy
names for such statements as “let y be such an x”.
That is, “some A are not C”. QED. Great.
Question 1: syllogism
Finally, let’s look at it as a syllogism. More importantly, let’s start by looking only at the two premises:
I: some A are B
E: no B are C
The middle term is “B”, the one that is common to both premises. The subject S is the other term in the 2nd premise, so we seek a valid conclusion of the form
{all or some} C {are or are not} A.
Isn’t that interesting? First off, we already know that the answer is not of that form. But that’s the only form of conclusion allowed by these two premises in this order. Hmm.
Second, the Venn diagrams showed cases where no C was A, and where some C was A, and some C was not A, and all C was A. Those are the four possibilities, and we cannot assert that any one of them is
true. Each of them can be true — but none is guaranteed to be true.
It would appear that — as written — this is not one of the 15 (or 19) valid syllogisms. “Hmm”, indeed.
Let’s recall the four figures and the inversion:

Figure 1: M-P, S-M; Figure 2: P-M, S-M; Figure 3: M-P, M-S; Figure 4: P-M, M-S (major premise first; the conclusion is S-P in each case).
And, also for reference, here’s the medieval pseudo-Latin:
Barbara, Celarent, Darii, Ferioque, prioris;
Cesare, Camestres, Festino, Baroco, secundae;
tertia, Darapti, Disamis, Datisi, Felapton,
Bocardo, Ferison, habet; quarta insuper addit
Bramantip, Camenes, Dimaris, Fesapo, Fresison.
The two premises are in figure 4, so we extract the pseudo-Latin for figure 4: Bramantip, Camenes, Dimaris, Fesapo, Fresison.
“Fresison” has the right vowels, but not in the right order. Gee, IE is not part of any valid syllogism in figure 4. Sure enough, Aristotle has it right: there is no valid conclusion with C as the
subject and A as the predicate.
The point is simply that we are looking at one of the other 256 – 15 = 241 syllogisms — one of the 241 invalid syllogisms. That reckoning of them by figures 1-4 (not inversions) always got the
subject from the second (the minor) premise.
Aristotle (modified by the medievalists) didn’t organize it as: what valid conclusion(s) may I draw? He organized it by: having defined S and P, when can I reach a conclusion of the form S P?
And we can’t get one of those here.
To put that another way, Aristotle didn’t lay out sets containing two premises; he laid out ordered pairs of two premises. We have the right set but the wrong ordered pair.
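As a side check (a sketch, not part of the original post), one can enumerate all 256 mood/figure combinations over small models and count the unconditionally valid ones; note that the "all" statement here carries no existential import, which is what yields the count of 15 rather than 19 or 24:

```python
from itertools import product

STMT = {
    "A": lambda X, Y: X <= Y,        # all X are Y (no existential import)
    "E": lambda X, Y: not (X & Y),   # no X are Y
    "I": lambda X, Y: bool(X & Y),   # some X are Y
    "O": lambda X, Y: bool(X - Y),   # some X are not Y
}

# term order (major premise, minor premise) in each figure;
# the conclusion is always S-P
FIGURES = {
    1: (("M", "P"), ("S", "M")),
    2: (("P", "M"), ("S", "M")),
    3: (("M", "P"), ("M", "S")),
    4: (("P", "M"), ("M", "S")),
}

# every interpretation of S, P, M over a 3-element universe
# (enough elements: a countermodel needs at most 3 witnesses)
U = range(3)
MODELS = [{"S": {x for x in U if b[x] & 1},
           "P": {x for x in U if b[x] & 2},
           "M": {x for x in U if b[x] & 4}}
          for b in product(range(8), repeat=3)]

def valid(fig, mood):
    maj, mnr = FIGURES[fig]
    return all(STMT[mood[2]](m["S"], m["P"])
               for m in MODELS
               if STMT[mood[0]](m[maj[0]], m[maj[1]])
               and STMT[mood[1]](m[mnr[0]], m[mnr[1]]))

hits = [(fig, "".join(mood))
        for fig in FIGURES for mood in product("AEIO", repeat=3)
        if valid(fig, "".join(mood))]
print(len(hits))                 # 15 unconditionally valid forms
print((1, "EIO") in hits,        # Ferio, i.e. rewritten question 1: True
      (4, "IEO") in hits)        # question 1 as given: False
```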
OK, now it’s time to interchange the premises — to check the other order.
E: no B are C $\forall x Bx \Rightarrow \neg Cx$
I: some A are B $\exists x Ax \land Bx$
Now we are in figure 1 instead of figure 4, and the subject is A instead of C. Is there a valid syllogism? We certainly hope so, since we’ve worked out a proof that some A are not C.
We extract the pseudo-Latin for figure 1: Barbara, Celarent, Darii, Ferioque… and we see that Ferio works (we can ignore the -que, which just means “and”), and it says we have a valid conclusion in
some A are not C.
That’s it.
There is a valid conclusion, but it’s for a different syllogism than we were given.
That’s one reason I asked what valid conclusions we could draw — not whether the syllogisms were valid. On the other hand, perhaps I should not have said that all four questions were syllogisms. (I
knew there was a problem, but I hadn’t sorted it all out.)
And yet, given what we just saw, we now know that if we want to use Aristotle, we must consider both of the possible orderings of the two premises.
For reference, let me also write this, rewritten example 1, as

E: no B are C
I: some A are B
O: $\therefore$ some A are not C.

What we are seeing is that the organization is not as transparent as we might have thought at the beginning. If we look for valid syllogisms in figures 1-4, then we are restricted to conclusions of
the form S P.
Yet another way to look at this is: the idea of extracting all possible conclusions from a set of premises is not the same as organizing syllogisms. The valid conclusion “some A is not C” is a
possible conclusion from the set of given premises, or from the re-ordered pair of premises, but it is not even a conceivable syllogistic inference from the given ordered pair.
That said, 17 out of 20 people got it right. I did, too, in my head.
Johnson-Laird and Steedman, who did the test, decided on the basis of many additional questions that what seems to matter is that the given forms
A, B
B, C
encourage us to look for a conclusion of the form
A, C
and that’s right (albeit not Aristotelian).
Question 2
Here is the second question all laid out with additional information.
2. 14/20
Figure 1 “Ferioque”
E: no B are A $\forall x Bx \Rightarrow \neg Ax$
I: some C are B $\exists x Cx \land Bx$
O: $\therefore$ some C are not A. $\therefore \exists x Cx \land \neg Ax$
I’m not going to punch through the Venn diagram, or the symbolic logic proof: using modern techniques, this is the same as the rewritten question one… There we rewrote the question as:

E: no B are C
I: some A are B

and here we have

E: no B are A
I: some C are B

which is of the very same form, with C and A interchanged.
Consequently, there is no problem with Aristotle: this question is posed as Ferio in figure 1 — just as the rewritten question 1 was.
Incidentally, we already know that if we interchange the premises, we will be in figure 4 … but this is the same as question 1 (with A and C interchanged), so there is no valid figure 4 syllogism.
That is, we have the only right answer. (Again, except for the equivalent, but ugly, “some not-A are C”.)
Note that 14 out of 20 got it right; and I did, too, in my head.
Note also that we have the sequences
B A
C B
in the premises, and the investigators decided that “C A” is the kind of conclusion people look for, and in this case also, there is a valid conclusion of that form.
Question 3
3. 8/20
Figure 4 “Fresison”
E: no A are B $\forall x Ax \Rightarrow \neg Bx$
I: some B are C $\exists x Bx \land Cx$
O: $\therefore$ some C are not A. $\therefore \exists x Cx \land \neg Ax$
This time we have a valid figure 4 syllogism. Recall the pseudo-Latin: Bramantip, Camenes, Dimaris, Fesapo, Fresison.
This is Fresison. The subject S is C, from the second premise.
I could do Venn diagrams and symbolic logic… but I’m convinced by Aristotle.
Well, we know there is one thing we must check: what if we interchange the premises?
I: some B are C $\exists x Bx \land Cx$
E: no A are B $\forall x Ax \Rightarrow \neg Bx$
$\therefore$ {some or all} A {are or are not} C.
Look back at the figures… this is figure 1. Is there a valid figure 1 syllogism whose first two vowels are IE?
Barbara, Celarent, Darii, Ferioque…
No. We already found our answer, and it’s unique.
Note, however, that only 8 out of 20 people got this one right.
Even I got it wrong — in my head. A Venn diagram showed me that my answer was wrong, and then I located it in Aristotle (well, in the medievalists, since Aristotle would have used the inversion
instead of figure 4).
Why did most people get it wrong?
The investigators observed that we have the sequences
A B
B C
in the premises, so people tend to look for, and state, a conclusion with the sequence
A C.
Ding! Thanks for playing.
Question 4
As I did for questions 2 and 3, let me present all the data I was given or inferred.
4. 5/20
Figure 1?
I: some B are A $\exists x Bx \land Ax$
E: no C are B $\forall x Cx \Rightarrow \neg Bx$
O: $\therefore$ some A are not C. $\therefore \exists x Ax \land \neg Cx$
Once again, Dodd & White have made the very same mistake as in question 1: they said this syllogism was figure 1. Let me recall the figures.
No. The subject is C — it always comes from the second premise — so “some A are not C” is not even a possible conclusion, never mind valid or invalid.
Interchange the premises. Then we have made A the subject…
E: no C are B $\forall x Cx \Rightarrow \neg Bx$
I: some B are A $\exists x Bx \land Ax$
O: $\therefore$ some A are not C. $\therefore \exists x Ax \land \neg Cx$
and the rewritten syllogism is figure 4, and we’re looking at Fresison again. It is a legitimate syllogism, and a valid syllogism.
(And because it’s Fresison, as was question 3, we’ve already confirmed that there is no valid conclusion if we interchange the premises again, thereby using them in their original order.)
Note that only 5 out of 20 got this right — and I’m on the losing side again, at least until I draw a Venn diagram. My reasoning was faulty — oops, I hope you didn’t think I was perfect — but I knew
how to check it, and catch the mistake.
And this too follows the pattern which the investigators conjectured: the sequences of terms in the premises are
C B
B A
so most people look for, and often state, a conclusion with the sequence C A.
Wrong again, because the valid conclusion has the sequence A C.
What the investigators had was the four sequences of term-pairs (first premise, second premise, valid conclusion), one per question:

Question 1:
A B
B C
A C

Question 2:
B A
C B
C A

Question 3:
A B
B C
C A

Question 4:
B A
C B
A C
and they decided that people had trouble reaching conclusions when the A and C in the conclusion were not in the same columns as in the premises. (That’s my phrasing of it.) This is actually the
key point about the four questions, that we have these four distinct and exhaustive sequences.
As a set of sequences, those four are exemplary. Furthermore, because one of the premises was a universal negative (“no B is A”), the conclusions were best phrased in one way (“some A is not C”); the
equivalent valid “some not C is A” is rather awkward to champion.
(If the conclusion had been positive (“some A is B”) then I would immediately add “and some B is A”. And the experiment would have been all mixed up, with some people saying one or the other, and
some people saying both — as well as, perhaps, some people getting none of the right answers.)
I was rather shocked to see that Dodd & White had messed up. The modern approach to logic (“extract all possible valid conclusions”) works fine — it’s what we did back in question 1. But if you want
to phrase those 4 questions as syllogisms, you can’t phrase the conclusions the way they did. (Given, for example, question 1 as a syllogism to be investigated, there is no valid conclusion.)
All this forced me to clarify the Aristotelian and medieval organization of syllogisms. It’s crucial that the subject is in the second premise, and that we look for conclusions of the form S P. To
find the valid “some A are not C”, we had to interchange the premises — thus making A the subject — and then all conclusions of the form
{all or some} A {are or are not} C
are open to consideration, and one of them is valid.
If we want to use the list of valid syllogisms, we have to consider both possible orderings of the two premises. To put that another way, we need to consider the premises as a set.
As I said, this was an unscheduled post, but it gave me more insight into the use of syllogistic reasoning. I had never said to myself that interchanging the premises could move me between valid &
invalid syllogisms. Oh, of course, it’s blindingly obvious once I imagine doing it.
But it’s only when I try to work out the details that I imagine doing a lot of things.
Comprehending ultrafilters
This is the first of a projected two-part series of articles about ultrafilters. The former introduces ultrafilters and establishes various properties about them, so that the latter article can
concentrate entirely on applications of ultrafilters.
Much of this material is from Professor Imre Leader’s lecture course on Ramsey theory, notes of which can be obtained from Paul Russell’s website.
I’ve been intending to write about ultrafilters for some time, but, whilst simple to define and manipulate, they are rather abstract objects. I suppose I’ll first give a formal definition, and then
attempt to provide a more intuitive interpretation of what an ultrafilter is.
So, an ultrafilter U on a set S is a collection of subsets of S with the following properties:
• If A is in U, then every superset of A is also in U (U is an upset);
• If A and B are in U, then the intersection A ∩ B is also in U;
• Precisely one of A and S\A belongs to U.
The correct way to view ultrafilters is a way to assign a finitely-additive measure to S, where every set has either measure 0 (almost nothing) or measure 1 (almost everything). Then, U is the
collection of big (measure-1) subsets of S. The axioms then have entirely intuitive interpretations:
• If A is a big set (has measure 1), then so is every superset of A;
• The union of two measure-0 sets also has measure 0;
• If A has measure 1, its complement has measure 0, and vice-versa.
Very often, we take S to be the set of positive integers, $\mathbb{N}$. From the ultrafilter axioms, we can deduce the following:
• If we partition a measure-1 set into finitely many sets, then precisely one of those has measure 1.
There is a notion of a stronger version of an ultrafilter (on necessarily uncountable sets) where ‘finitely’ is replaced with ‘countably’ (thus is a genuine measure in the usual sense), but their
existence is equivalent to that of a certain large cardinal called a measurable cardinal for obvious reasons. Anyway, in the remainder of the article we will only consider ultrafilters on the set N
of positive integers.
Examples of ultrafilters
When trying to explain ultrafilters, I am compelled to provide a few examples. The easiest examples to provide are the principal (boring) ultrafilters, such as the following:
A set has measure 1 if and only if it contains the number 7.
There is a principal ultrafilter for each natural number n, by replacing 7 with n. Principal ultrafilters are somewhat silly, since they only care about a single element of S. Note that if a finite
set has measure 1 with respect to some ultrafilter, then one of its elements must have measure 1 and the ultrafilter must be principal. So, in a non-principal ultrafilter, all measure-1 sets are
infinite (as they should be).
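This finite fact can be checked exhaustively (a toy sketch, not from the post): enumerate every family of subsets of a 3-element set, keep those satisfying the three ultrafilter axioms, and confirm that each one is principal.

```python
from itertools import chain, combinations

S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(S, r)]

def is_ultrafilter(U):
    upset = all(B in U for A in U for B in subsets if A <= B)
    meets = all((A & B) in U for A in U for B in U)
    comp = all((A in U) != ((S - A) in U) for A in subsets)
    return upset and meets and comp

# all 2^8 = 256 families of subsets of S
families = chain.from_iterable(combinations(subsets, r)
                               for r in range(len(subsets) + 1))
ultras = [set(F) for F in families if is_ultrafilter(set(F))]

def principal(U):
    return any(U == {A for A in subsets if x in A} for x in S)

print(len(ultras), all(principal(U) for U in ultras))   # 3 True
```

On a 3-element set there are exactly three ultrafilters, one per element, and all are principal — exactly as the paragraph above says must happen on any finite set.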
Anyway, what’s an example of a non-principal ultrafilter?
It transpires that non-principal (or free) ultrafilters cannot be explicitly constructed, relying heavily on the axiom of choice. We create an ultrafilter U by extending a filter F, which is a
strictly weaker notion than that of an ultrafilter. A filter F need only satisfy the following properties:
• The empty set does not belong to F;
• The entire set S does belong to F;
• If A and B belong to F, then so does the intersection;
• If A belongs to F, then so does every superset of A.
Unlike ultrafilters, there are lots of interesting examples of filters. For instance, we can take the filter of cofinite sets, where a set belongs to F if and only if its complement is finite.
There is an obvious partial order on the set of filters, namely inclusion. If a filter is not an ultrafilter, then we can extend it by throwing in another set. If we have a chain (totally-ordered set
of filters), then the union is a filter which is an upper bound. Hence, by Zorn’s lemma, we can extend any filter to a maximal filter (an ultrafilter).
Ultrafilters as quantifiers
You are probably aware of the universal quantifier ∀ and the existential quantifier ∃. Given your favourite ultrafilter U, we can define the quantifier ∀U (pronounced ‘for U-most’) as follows:
• ‘∀U x : P(x)’ means ‘P(x) is satisfied by a measure-1 set of x in S’
It has the beautiful property that it distributes over all Boolean operators:
• ∀U x : (P(x) or Q(x)) ⇔ (∀U x : P(x)) or (∀U x : Q(x))
• ∀U x : (P(x) and Q(x)) ⇔ (∀U x : P(x)) and (∀U x : Q(x))
• ∀U x : (not P(x)) ⇔ not (∀U x : P(x))
The first two of these also apply to ∃ and ∀, respectively, but the third applies to neither (∀ and ∃ are interchanged, rather than preserved, when conjugated by logical negation).
Warning: it does not commute with itself (∀U x ∀U y is not the same as ∀U y ∀U x). This again contrasts with the more familiar logical quantifiers.
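For a principal ultrafilter the three distribution laws are easy to check by brute force (a toy sketch on a finite set; the interesting content of the claim is of course that the laws hold for every ultrafilter):

```python
from itertools import product

S = range(5)
x0 = 3   # principal ultrafilter concentrated at x0
# for the principal ultrafilter at x0, "for U-most x: P(x)" is just P(x0)
forall_U = lambda P: P(x0)

# all 2^5 predicates on S, encoded as 0/1 tuples
preds = [lambda x, b=b: bool(b[x]) for b in product((0, 1), repeat=5)]

ok = all(
    forall_U(lambda x: P(x) or Q(x)) == (forall_U(P) or forall_U(Q)) and
    forall_U(lambda x: P(x) and Q(x)) == (forall_U(P) and forall_U(Q)) and
    forall_U(lambda x: not P(x)) == (not forall_U(P))
    for P in preds for Q in preds)
print(ok)   # True
```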
The Stone-Cech compactification of N
The space of ultrafilters on the positive integers is denoted βN and admits a natural topology. Specifically, we define a base of open sets:
• For every set A of positive integers, let C(A) = {U in βN | A in U} be an open set.
C(N) and C(Ø) are βN and Ø, respectively. The intersection of two base sets, C(A) and C(B), is easily verified to be C(A ∩ B), so this is indeed a base of open sets. A set is open if and only if it
is some arbitrary union of base sets. Note that again we have C distributing over everything:
• C(A ∩ B) = C(A) ∩ C(B)
• C(A ∪ B) = C(A) ∪ C(B)
• C(N \ A) = C(N) \ C(A) = βN \ C(A)
In particular, the base sets C(A) are both open and closed.
So, why are we interested in the topology on βN? It transpires that it has several very nice properties:
• Hausdorffness: For two distinct ultrafilters, U and V, we can find a set A such that A belongs to U but not to V. Then, C(A) contains U and C(N\A) contains V. Since C(A) and C(N\A) are disjoint
open sets, this means that βN is a Hausdorff space.
• Compactness: Suppose we have a collection P of closed sets, such that the intersection of any finite subset of P is non-empty. It suffices to consider the case where the elements of P are all
base sets. Indeed, let P = {C(A) : A in Q}, where Q is a collection of subsets of N, such that any intersection of finitely many elements of Q is non-empty. Let F be the filter of all supersets
of elements of Q, and extend it to an ultrafilter U. U must necessarily belong to all elements of P, and therefore their intersection. This establishes the finite intersection property, which is
trivially equivalent to topological compactness.
• N is dense in βN: We can embed N in βN by associating the number n with the principal ultrafilter {A in N : n in A}. Then N is a dense subset of βN, since for every non-empty C(A) in the base of
open sets, we can choose an arbitrary a in A and observe that the principal ultrafilter on a belongs to C(A). βN is actually the largest compact Hausdorff space in which N is dense, and is called
the Stone-Cech compactification of N. It is a surprising fact — which is not too difficult but still non-trivial to prove – that after removing N from βN, we are left with an inseparable space
(one with no countable dense subset).
Even though it is very nice from a topological point of view, little is known about βN. Assuming the continuum hypothesis, Parovicenko proved that several topological assumptions are sufficient to
characterise βN; conversely, in the absence of the continuum hypothesis, there exist non-homeomorphic spaces satisfying the same properties. Also, if we assume certain large cardinal axioms, then βN
has completely different properties than under the (incompatible) assumption of the continuum hypothesis.
Ultrafilter addition and idempotent ultrafilters
Recall that N admits a commutative addition operation. This can be extended to a non-commutative addition on βN, where we define the sum of two ultrafilters as follows:
• U + V = {A ⊆ N | ∀U x ∀V y : x + y in A}
Note that this can easily be verified to satisfy the defining properties of an ultrafilter. This is associative, since (U + V) + W = U + (V + W) = {A ⊆ N | ∀U x ∀V y ∀W z : x + y + z in A}. We can
also prove that it is left-continuous, that is to say that U + V is a continuous function of U. Of course, working in a general topological space, there is only one definition of continuity that
we’re allowed to use: the pre-image of an open set is open.
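Before the proof, it may help to see the definition in action on principal ultrafilters, where U + V behaves like ordinary addition of the base points. A toy verification on a finite window (illustrative only; genuine ultrafilter addition lives on all of N):

```python
from itertools import combinations

N = 10   # work inside the finite window {0, ..., N-1}
subsets = [frozenset(c) for r in range(N + 1)
           for c in combinations(range(N), r)]

def principal(n):
    return {A for A in subsets if n in A}

def add(U, V):
    """U + V = {A | for U-most x: (for V-most y: x + y in A)}."""
    return {A for A in subsets
            if frozenset(x for x in range(N)
                         if frozenset(y for y in range(N)
                                      if x + y in A) in V) in U}

print(add(principal(2), principal(5)) == principal(7))   # True
```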
It suffices to prove that, for an arbitrary fixed ultrafilter V, the pre-image of C(A) is always open, for every A ⊆ N. So, let’s unpack the definitions:
Note that U + V = {A ⊆ N | ∀U x : (∀V y : x + y in A)}, so the pre-image of C(B) is the set:
• {U in βN | {A ⊆ N | ∀U x : (∀V y : x + y in A)} in C(B)}
• = {U in βN | B in {A ⊆ N | ∀U x : (∀V y : x + y in A)}}
• = {U in βN | ∀U x : (∀V y : x + y in B)}
• = {U in βN | {x in N | (∀V y : x + y in B)} in U}
• = C({x in N | (∀V y : x + y in B)})
which is open, by definition. To summarise, the operation + is associative and left-continuous, and βN is a compact Hausdorff space. We’re now going to forget absolutely everything else about
ultrafilters, and prove the following lemma:
The idempotent lemma: If X is a non-empty compact Hausdorff space and + is a left-continuous associative operation, then there exists x in X such that x + x = x.
The proof requires extensive use of Zorn’s lemma.
We firstly consider the collection S of non-empty closed sets Y with the property that Y + Y is a subset of Y. The entire space X is trivially one such example, so our collection S is non-empty.
Endow S with the partial order of inclusion.
If we have a chain of such sets, then the intersection is non-empty (since all finite intersections are non-empty, and we are working in a compact Hausdorff space where compact sets are closed and
vice-versa) and closed (by virtue of being an intersection of closed sets). So, every chain has a ‘lower bound’ in S and thus we can find a minimal Y by Zorn’s lemma.
Choose an arbitrary x in Y, which we can do by the fact that Y is non-empty. Y + x is the continuous image of a non-empty compact set, thus non-empty and compact. Also, (Y + x) + (Y + x) = Y + Y + x
+ x, by associativity, which is clearly a subset of Y + x. Hence, Y + x is also in S and is a subset of Y, so Y + x = Y (as otherwise Y + x would be more minimal than Y).
Now define Z = {y in Y | y + x = x}, which is non-empty since Y + x = Y, and Y contains x. It is the pre-image of a singleton set (which must be compact and thus closed) under a continuous function,
so is itself closed. If y and y‘ both belong to Z, then (y + y‘) + x = y + (y‘ + x) = y + x = x, so Z + Z is a subset of Z and therefore belongs to S. So Z = Y by minimality of Y.
Hence, since x is in Y, x is in Z and therefore x + x = x, concluding our claim.
Consequently, there must exist an ultrafilter U with U + U = U. It is clear that such an ultrafilter must necessarily be non-principal. U is known as an idempotent ultrafilter. Non-principal
ultrafilters themselves already have many applications (as we shall see in the follow-up post), but idempotent ultrafilters are even more useful, powerful and generally awesome. I can imagine that
you’re all bursting with excitement in anticipation of the follow-up article, so I shall endeavour to write it as soon as feasibly possible.
3 Responses to Comprehending ultrafilters
1. Thank you for this article. I hadn’t asked for it, but it seems to clarify everything about ultrafilters I wished to know :) I also want to ask a few things about the article:
You mentioned that “There is a principal ultrafilter for each natural number n”, but you seem to have forgotten to mention that these are all of them, i.e. that the principal ultrafilters are exactly those defined by single elements.
When mentioning measurable cardinals, you said they have to be countably additive. On Wikipedia, it’s said that the measure must be κ-additive. Does the former imply the latter?
Other than that, I wish you Merry Christmas! (P.S. I love the new logo ^^)
□ Thank you, and Merry Christmas!
Yes, I promised that I would have a Christmas-themed banner over the festive period (and I’m going to re-use last winter’s New Year banner).
Stanislaw Ulam (who deserves to be better known than he currently is) showed that if κ is the minimal cardinal admitting a countably-additive measure, then κ must also admit a κ-additive
measure. Hence, the existence of a countably-additive ultrafilter is indeed equivalent to the axiom that a measurable cardinal exists.
Graphs on Surfaces - Surfaces
Chapter 3   Surfaces
In this chapter we establish some basic tools for graphs embedded on surfaces. We restrict ourselves to compact 2-dimensional surfaces without boundary. These are compact connected Hausdorff
topological spaces in which every point has a neighborhood that is homeomorphic to the plane R^2. In Section 3.1 we present a simple rigorous proof (due to Thomassen [Th92]) of the fact that every
surface is homeomorphic to a triangulated surface. Other available proofs of this result are complicated and appeal to geometric intuition. Perhaps it is difficult to follow some of the details, but
the proof is conceptually very simple since it merely consists of repeated use of the Jordan-Schoenflies Theorem. The surface classification theorem, which we prove next, says that every surface is
homeomorphic to a space obtained from the sphere by adding handles and crosscaps. One of the first complete proofs of this fundamental result was given by Kerekjarto [Ke23]. The proof presented in
Section 3.1 is very short and follows [Th92]. It differs from other proofs in the literature in that it uses no topological results, not even the Jordan Curve Theorem (except for polygonal simple
closed curves in the plane). In particular, it does not use Euler's formula (which includes the Jordan Curve Theorem). Instead, Euler's formula is a by-product of the proof.
After obtaining the surface classification theorem, it is shown that every orientable graph embedding, whose faces are all homeomorphic to an open disc in the plane, can be described combinatorially,
up to homeomorphism, by the local rotations at each vertex. A similar description exists also for embeddings of graphs into nonorientable surfaces. Again, this is proved rigorously without appealing
to geometric intuition.
Inspired by the possibility of describing embeddings of graphs combinatorially, we define in Chapter 4 an embedding as a collection of local rotations in the orientable case and, more generally, an
embedding scheme in the nonorientable case. In the remaining part of the book we treat embeddings in this purely combinatorial framework.
Tangent bundle
From Encyclopedia of Mathematics
of a differentiable manifold
The vector bundle (cf. Vector field on a manifold). An atlas on the manifold
Associated with the tangent bundle is the frame bundle of the manifold (cf. Pfaffian form).
A differentiable mapping (cf. Immersion of a manifold), then submersion, then the quotient bundle
[1] C. Godbillon, "Géométrie différentielle et mécanique analytique" , Hermann (1969)
The tangent mapping (also called differential)
tangent vector is seen as a special kind of
In terms of local coordinates and the "ξ-notation" (cf. Tangent vector), the matrix of
There are many notations in use for the differential (cf. Differential; Differential form). Using the "ξ and dx^i" notation (cf. Tangent vector), the differential
The differential
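The displayed formulas of the original article were lost in this copy. For the reader's convenience, the standard definitions (supplied here, not recovered from the source) read:

```latex
% Tangent bundle: the disjoint union of the tangent spaces of M,
TM \;=\; \bigsqcup_{p \in M} T_p M ,
\qquad \pi \colon TM \to M , \quad \pi(T_p M) = p .

% Differential (tangent mapping) of f : M \to N at a point p:
df_p \colon T_p M \to T_{f(p)} N ,
\qquad (df_p)_{ij} \;=\; \frac{\partial f_i}{\partial x_j} \quad
\text{in local coordinates.}
```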
[a1] M. Spivak, "Calculus on manifolds" , Benjamin/Cummings (1965)
[a2] M.W. Hirsch, "Differential topology" , Springer (1976) pp. Chapt. 5, Sect. 3
[a3] F. Brickell, R.S. Clark, "Differentiable manifolds" , v. Nostrand-Reinhold (1970)
[a4] L. Auslander, R.E. MacKenzie, "Introduction to differentiable manifolds" , Dover, reprint (1977)
[a5] R. Hermann, "Geometry, physics, and systems" , M. Dekker (1973)
[a6] Yu. Borisovich, N. Bliznyakov, Ya. Izrailevich, T. Fomenko, "Introduction to topology" , Kluwer (1993) (Translated from Russian)
How to Cite This Entry:
Tangent bundle. M.I. Voitsekhovskii (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Tangent_bundle&oldid=12080
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
Installing SECOND and DSECND
Next: Testing IEEE arithmetic and Up: Test and Install the Previous: Installing SLAMCH and DLAMCH   Contents
Both the timing routines and the test routines call SECOND (DSECND), a real function with no arguments that returns the time in seconds from some fixed starting time. Our version of this routine
returns only ``user time'', and not ``user time + system time''. The version of SECOND in second.f calls ETIME, a Fortran library routine available on some computer systems. If ETIME is not
available or a better local timing function exists, you will have to provide the correct interface to SECOND and DSECND on your machine.
On some IBM architectures such as IBM RS/6000s, the timing function ETIME is instead called ETIME_, and therefore the routines LAPACK/INSTALL/second.f and LAPACK/INSTALL/dsecnd.f should be modified.
Usually on HPPA architectures, the compiler and loader flag +U77 should be included to access the function ETIME.
The test program in secondtst.f performs a million operations using 5000 iterations of the SAXPY operation y := y + alpha*x on a vector of length 100. The total time and megaflops for this test is reported,
then the operation is repeated including a call to SECOND on each of the 5000 iterations to determine the overhead due to calling SECOND. The test program executable is called testsecond (or
testdsecnd). There is no single right answer, but the times in seconds should be positive and the megaflop ratios should be appropriate for your machine. The files second.f and dsecnd.f are
automatically copied to LAPACK/SRC/ for inclusion in the LAPACK library.
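The routine's contract (user time only, in seconds, from a fixed start) can be illustrated outside Fortran as well. The following Python sketch is my own illustration, not part of LAPACK; it mimics both SECOND and the overhead measurement that secondtst.f performs (the `resource` module is Unix-only):

```python
import resource

def second():
    """Return elapsed *user* CPU time in seconds.  Like LAPACK's SECOND,
    this deliberately excludes system time."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_utime

def saxpy(alpha, x, y):
    """The SAXPY operation y := y + alpha*x on plain Python lists."""
    return [yi + alpha * xi for xi, yi in zip(x, y)]

x = [1.0] * 100
y = [2.0] * 100

# Time the workload once...
t0 = second()
for _ in range(5000):
    y = saxpy(2.0, x, y)
t1 = second()

# ...then repeat it with an extra timer call on every iteration, so the
# difference between the two runs estimates the cost of calling second().
t2 = second()
for _ in range(5000):
    y = saxpy(2.0, x, y)
    second()
t3 = second()

overhead = (t3 - t2) - (t1 - t0)  # may be slightly negative due to noise
print("workload: %.4fs, timing overhead: %.4fs" % (t1 - t0, overhead))
```

As the text notes for the Fortran test, there is no single right answer here; the point is only that the reported times are nonnegative and plausible for the machine.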
Susan Blackford 2001-08-13
MathGroup Archive: November 1998 [00244]
Re: Re: using Upset for defining positive real values (Re: Can I get ComplexExpand to really work?)
• To: mathgroup at smc.vnet.net
• Subject: [mg14810] Re: [mg14756] Re: using Upset for defining positive real values (Re: Can I get ComplexExpand to really work?)
• From: Jurgen Tischer <jtischer at col2.telecom.com.co>
• Date: Wed, 18 Nov 1998 01:29:12 -0500
• Organization: Universidad del Valle
• References: <199811140807.DAA03408@smc.vnet.net.>
• Sender: owner-wri-mathgroup at wolfram.com
Hi Maarten,
I'm afraid you are asking for the philosopher's stone. Or maybe just the
wrong question. If it would be of any help, one answer at least for
very easy problems could be to use interval arithmetic (which could be
and has been extended to standard functions). Something like
Here is an example how it works:
In[2]:= positiveSolve[x^2 == b, x, b -> Interval[{0, Infinity}]]
Out[2]= {{x -> Sqrt[b]}}
and here one how it doesn't
In[2]:= positiveSolve[{x^2 + y^2 == a, y == b^2}, {x, y},
{a -> Interval[{0, Infinity}], b -> Interval[{0, Infinity}]}]
Out[2]= {}
I could find positive solutions using other conditions (intervals), like
In[3]:= positiveSolve[{x^2 + y^2 == a, y == b^2}, {x, y},
{a -> Interval[{1, Infinity}], b -> Interval[{-1, 1}]}]
Out[3]= {{x -> Sqrt[a - b^4], y -> b^2}}
but of course this is not at all satisfying, the real condition for
positive solutions (and real a, b) being b^2 < a. So it amounts to an
analysis of conditions on the parameters, with possibly extra
information about the parameters for physical reasons. So I'm back to
the philosopher's stone. And maybe even worse: I think it should not be
too difficult to find examples in physics where non standard values of
the parameters led to surprising solutions.
Maarten.vanderBurgt at icos.be wrote:
> Rolf,
> Thanks for the tips.
> PowerExpand comibed with Simplify works fine
> Instead of your PosSolve function, which only works in the particular
> case of PosSolve[a^2 -1==0, a], I am in fact looking for a more general
> solution of the following problem:
> Solve[{f(a,b,c,d, x,y,z)==0}, {x,y,z}], where {f( )==0} is a well
> defined set of equations for some physical problem, and retain e.g.
> only positive solutions for x, y,z while a,b are positive reals and
> c,d are negative. If your set of solutions is quite complicated it is
> often difficult to see which one(s) actually make(s) sense in the
> particular problem. Is there a way to do this, with some 'Solve-like'
> function, without having to calculate numerical values for the
> solutions with some typical parameter values for a,b,c,d?
> thanks
> Maarten van der Burgt
> rolf at mertig.com on 09-11-98 02:49:58 AM
> cc:
> Subject: [mg14810] [mg14756] Re: using Upset for defining positive real values (Re: Can I
> get
> ComplexExpand to really work?)
> Maarten.vanderBurgt at icos.be wrote:
> >
> > Hello,
> >
> > In functions like Solve and Simplify there is no option like the
> > Assumptions option in Integrate.
> > In a recent message ([mg14634]) Kevin McCann(?) suggested usign Upset as
> > an alternative to the Assumptions option in Integrate. I thought this
> > might work as well for Solve, Simplify etc.
> >
> > In the example below I want A to be positive real number. I use Upset to
> > give A the right properties.
> > I was hoping Solve[A^2-1 == 0, A] would come up with the only possible
> > solution given that A is a positive real: {A -> 1}. Same for
> > Simplify[Sqrt[A^2]]: I would expect the result to be simply A (instead
> > of Sqrt[A^2]) when A is set to be positive and real.
> >
> > Upset does not seem to work here.
> >
> > 1st question: why?
> Because Simplify and Solve are obviously not written to recognize Upset
> values.
> >
> > 2nd question: is there a way you can introduce simple assumptions about
> > variables in order to rule out some solutions or to reduce the number
> > of solutions from functions like Solve, or to get a more simple answer
> > from manipulation fuctions like Simplify.
> >...
> > In[3]:= Solve[a^2-1 == 0, a]
> > Out[4]= {{a -> -1},{a -> 1}}
> > In[5] := Simplify[Sqrt[a^2]]
> > Out[5]= Sqrt[a^2]
> >
> Some possibilities are:
> In[1]:= PosSolve[eqs_, vars_] := Select[Solve[eqs, vars], Last[Last[#]]
> > 0&]
> In[2]:= PosSolve[a^2-1 == 0, a]
> Out[2]= {{a -> 1}}
> In[3]:= PowerExpand[Sqrt[a^2]]
> Out[3]= a
> --
> Dr. Rolf Mertig
> Mertig Research & Consulting
> Mathematica training and programming Development and distribution of
> FeynCalc Amsterdam, The Netherlands
> http://www.mertig.com
[R] Nice reference to R
jim holtman jholtman at gmail.com
Thu Oct 18 15:04:52 CEST 2007
Just saw this in the Windows Secret blog:
Get official and unofficial fixes for Excel
By Brian Livingston
Despite the hotfix that Microsoft recently released for Excel 2007, as
I described on Oct. 11, some math errors that you should know about
still lurk in both Excel 2007 and Excel 2003.
I'll bring you up to date and explain how you can get better results from Excel.
Baier and Neuwirth offer Excel math add-ins
In a nutshell, this month's patch for Excel 2007 corrects a bug that
treats numbers close to 65,535 as if they were 100,000. To get the
fix, see the Oct. 9 entry in Microsoft's official Excel blog.
Even with the hotfix, however, both Excel 2007 and Excel 2003 give
slightly wrong — and, in some cases, extremely wrong — answers to some
floating-point calculations. I'll give you some examples below. First,
let's discuss an independent solution to the problem.
Those who want more accurate floating-point math than any version of
Excel supports should download a statistics program called R. This is
open-source software that was originally written by Robert Gentleman
and Ross Ihaka ("R & R"), who now work with about 20 researchers
around the world to maintain the code.
The R program, in turn, can be used with Excel if you install various
add-ins by Thomas Baier and Erich Neuwirth called RExcel, rcom, and
R(D)COM. Windows Secrets contributing editor Woody Leonhard
recommended this in his Oct. 4 column on the Excel problem.
In last week's article, I rounded off R(D)COM to R, which resulted in
me mistakenly saying R was authored by Baier and Neuwirth. Ouch! This
floating-point stuff really is hard!
Erich Neuwirth kindly e-mailed me the following explanation:
"Thomas Baier wrote rcom and R(D)COM, both of which allow you to use R
as an embedded library in any Windows program supporting the COM
(Component Object Model, not the serial port) interface. I wrote
RExcel, which embeds R into Excel and allows you to use R functions as
if they were native Excel worksheet functions.
"So, yes, R can be used as a floating point library for Excel, but it
is much, much more. Most computational statistics research nowadays is
done using R."
For more information about R, or to download it free from
R-Project.org, visit the R-Project site.
For more information about the Excel add-ins, see Baier and Neuwirth's
R(D)COM page and the RExcel installation instructions.
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem you are trying to solve?
More information about the R-help mailing list
Pre Algebra Cramer's rule problem
September 16th 2012, 08:44 PM #1
Sep 2012
Pre Algebra Cramer's rule problem
I have done several other equations of this type with no problem. I understand this method but I just cannot seem to come up with the correct answer. According to my homework assignment there IS
a correct answer. I even tried using a different method and came up with the same incorrect answer as when using this method. Any help would be greatly appreciated!
Use Cramer's rule to solve the system of equations.
Last edited by AngelaM; September 16th 2012 at 08:52 PM.
Re: Pre Algebra Cramer's rule problem
I have done several other equations of this type with no problem. I understand this method but I just cannot seem to come up with the correct answer. According to my homework assignment there IS
a correct answer. I even tried using a different method and came up with the same incorrect answer as when using this method. Any help would be greatly appreciated!
Use Cramer's rule to solve the system of equations.
Since you say you've made an attempt, please post your working here so that we can guide you through where you have made your mistake.
Re: Pre Algebra Cramer's rule problem
Honestly I have no idea how to type in math formats or whatever they are called. I've never been on one of these things before
Re: Pre Algebra Cramer's rule problem
The answers i came up with were
Re: Pre Algebra Cramer's rule problem
There are ways around that. You could write out what you have done, scan it and post it as an image. You could attempt to type what you have done, but the formatting might be difficult. You could
download MathType, or look for a similar freeware online program. Or you could learn some LaTeX, of which the forum has an inbuilt compiler, to learn how to code and compile the mathematics
properly. We have a LaTeX help subforum on this site.
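The original system of equations did not survive in this archive. For anyone landing here, though, the mechanics of Cramer's rule for a generic 2x2 system can be sketched in a few lines of Python (an illustration of the method, not taken from the thread):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve  a1*x + b1*y = c1
              a2*x + b2*y = c2   by Cramer's rule."""
    d = det2(a1, b1, a2, b2)       # coefficient determinant
    if d == 0:
        raise ValueError("determinant is 0: no unique solution")
    dx = det2(c1, b1, c2, b2)      # x-column replaced by the constants
    dy = det2(a1, c1, a2, c2)      # y-column replaced by the constants
    return dx / d, dy / d

# Example: 2x + y = 5, x - y = 1  ->  x = 2, y = 1
print(cramer_2x2(2, 1, 5, 1, -1, 1))  # (2.0, 1.0)
```

A good way to find the mistake in hand work is to check each of the three determinants separately against a calculation like this one.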
CGTalk - Converting rotation values for BVH
I wrote a BVH exporter for learning purposes. My exporter's "hierarchy parser" part is already done and is working OK; root positions and rotations are written into the BVH file, and I can import it.
However, the problem is that Biovision stores rotations in Euler angles. IMO it was relatively straightforward to read angles into object rotations (I also wrote the importer part).
But for exporting, the problem seems to be how to convert rotations to angles in such a manner that I can get exactly the same result back (max -> bvh -> max).
create a point helper
rotate point 45 degrees along it's local x
quatToEuler ($point.rotation) shows: (eulerAngles -45 0 0)
rotate point -45 degrees along it's local z
quatToEuler ($point.rotation) shows: (eulerAngles -45 -1.02453e-005 45)
So far good...
rotate point -45 degrees along it's local y
Now, this is my problem:
quatToEuler ($point.rotation) shows: (eulerAngles -9.73564 30 54.7356)
I no longer can just read then write the actual x,y and z values and then expect those to work later, when importing bvh file back to max.
Result is an "almost there" walkcycle for example, when i compare it to original bvh file imported with my importer.
I have tested results of my importer compared to other importers and it matches, so i think i'm quite sure that the problem is in how i write the rotation values.
And what comes to from where i read the values; i use world oriented points, linked into joints of hierarchy, in setup frame. I read the rotations in "parent space" of each point, which is point
linked to bone one level up in hierarchy.
If someone has enlightening thoughts about rotations and how to solve this i would be delighted. I have tried different methods for getting rotations, not just converting quat rotation value to
euler. But I seem to be stuck with this... Maybe I'm going in the wrong direction.
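For what it's worth, the specific numbers in the point-helper experiment fall out of plain rotation-matrix algebra. The sketch below (Python rather than MAXScript; my own illustration) composes intrinsic rotations of 45 deg about x, -45 deg about z and -45 deg about y, then extracts X-Y-Z Euler angles. Up to the sign conventions Max uses, the magnitudes 9.7356, 30 and 54.7356 reappear:

```python
import math

def rx(deg):
    a = math.radians(deg); c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(deg):
    a = math.radians(deg); c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(deg):
    a = math.radians(deg); c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz(R):
    """Angles (a, b, c) in degrees such that R = rx(a) * ry(b) * rz(c)."""
    b = math.asin(max(-1.0, min(1.0, R[0][2])))
    a = math.atan2(-R[1][2], R[2][2])
    c = math.atan2(-R[0][1], R[0][0])
    return tuple(map(math.degrees, (a, b, c)))

# Rotating about successive *local* axes corresponds to right-multiplication:
R = mul(mul(rx(45), rz(-45)), ry(-45))
print(euler_xyz(R))  # approx (9.7356, -30.0, -54.7356)
```

Note that the applied amounts (45, -45, -45) appear nowhere in the output: once a rotation about a second local axis is applied, the fixed X-Y-Z decomposition redistributes them. That is why reading the raw x, y, z values out of quatToEuler and writing them straight into the BVH channels cannot round-trip; the exporter has to decompose each joint's parent-space rotation in the exact axis order declared in the BVH hierarchy.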
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 878.01017
Autor: Pach, János
Title: Two places at once: A remembrance of Paul Erdös. (In English)
Source: Math. Intell. 19, No.2, 38-48 (1997).
Review: This is a story about Paul Erdös as a mathematician and as a man. The author knew him personally for many years (in fact since his, i.e. the author's, childhood). One finds there a lot of
interesting facts, observations, even anecdotes.
Reviewer: R.Murawski (Poznan)
Classif.: * 01A70 Biographies, obituaries, personalia, bibliographies
01A60 Mathematics in the 20th century
Biogr.Ref.: Erdös, P.
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
a chart ordering fractions from least to greatest
jrienjeeh Posted: Friday 29th of Dec 09:28
I have this test coming up and I would really appreciate it if anyone can guide me through a chart ordering fractions from least to greatest, on which I'm stuck and don't know how to start. Can
you give me a helping hand with triangle similarity, perfect square trinomials and complex fractions? I would rather get help from you than hire an algebra tutor, as they are very pricey.
Any pointer will be much appreciated.
nxu Posted: Saturday 30th of Dec 10:40
You don’t need to ask anyone to solve any sample questions for you; as a matter of fact all you need is Algebrator. I’ve tried quite a few such algebra simulation software but
Algebrator is a lot better than most of them. It’ll solve all the questions that you have and it’ll even explain each and every step involved in reaching that solution . You can
try out as many examples as you want to , and unlike us human beings, it won’t ever say, Oh! I’ve had enough for the day! ;) I used to have some problems in solving questions on
function range and y-intercept, but this software really helped me get over those.
TC Posted: Sunday 31st of Dec 09:04
Yeah, I think so too . Algebrator explains everything in such great detail that even a beginner can learn the tricks of the trade, and solve some of the most tough mathematical
problems. It explains each and every intermediate step that it took to reach a certain solution with such perfection that you’ll learn a lot from it.
ohuarelli Posted: Monday 01st of Jan 15:35
Is it really true that a program can do that? I don’t really know much anything about this Algebrator but I am really looking for some assistance so would you mind sharing me where
could I find that program? Is it downloadable over the net ? I’m hoping for your fast reply because I really need assistance badly .
TihBoasten Posted: Tuesday 02nd of Jan 09:26
I bought my copy from : http://www.algebra-expression.com/solving-equations-i-expressions-involving-power-functions.html . They even offer an unconditional money back guarantee, so
you have nothing to lose, just go ahead and solve all those algebra problems that you thought you'll never be able to solve.
MATH 350
MATH 350, Review for Midterm Test 2
Test topics
III. Topology of the real numbers
1. Open and closed sets (3.1)
2. Compactness (3.1)
IV. Limits and continuity
1. Limit of a function (4.1)
2. Definition of continuity. Continuity on a set (4.1)
3. Properties of continuous functions (4.1)
4. Uniform continuity (4.1)
V. Differentiation
1. The derivative of a function (5.1)
2. Mean value theorems (5.2)
3. Taylor's formula (5.2)
Important definitions/facts
• Open and closed sets; compact sets
• Limit points, boundary points, interior points, closure
• The Heine-Borel theorem
• Limits, continuity
• Global and local extrema
• Intermediate value property
• Uniformly continuous functions
• Definitions of the derivative
• Generalized mean value theorem
• Taylor's formula and Lagrange's form of the remainder
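For quick reference, the last item in its standard form (as covered in Section 5.2) is Taylor's formula with Lagrange's form of the remainder:

```latex
f(x) \;=\; \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k
\;+\; \frac{f^{(n+1)}(c)}{(n+1)!}\,(x-a)^{n+1}
\quad\text{for some } c \text{ between } a \text{ and } x .
```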
Theorems that you need to know how to prove
• Basic properties of open and closed sets (Theorems 3.1-3.3)
• Properties of compact sets (Theorems 3.12 and 3.13)
• Sequential definition of limits (Theorem 4.1)
• Algebraic operations on continuous functions (Theorem 4.3)
• Extreme value theorem (Corollary 4.7(b))
• Intermediate value theorem (Theorem 4.9 and Corollary 4.9)
• Differentialble function is continuous (Theorem 5.2)
• Derivatives and algebraic operations (Theorem 5.3)
• Condition on relative extremum (Theorem 5.6)
• Rolle's theorem, Mean-value theorem (Theorems 5.7, 5.8)
General rules
• All tests are closed books/notes; graphing calculators, cell phones or laptops are not permitted. A basic scientific calculator is OK (example: TI 30X IIS or similar) but it is not required to
answer the questions.
• The length of the test is 1 hr 15 minutes.
• Problems will come in the same format as on the quizzes: the expectation is that you write a clear and concise proof for the statement given in the question.
annoying relativity
February 13th 2006, 12:37 PM #1
Feb 2006
I am having problems with an equation involving relativity, but my question is directly math related. Currently the equation is in the form e=mc^2(blah blah blah), but I need it in the form e=
mv^2(blah blah blah).
the current equation is "E = mc^2((1/sqrt(1-v^2/c^2)) - 1)"
where sqrt(x) signifies the square root of x. Please help.
what I really need is the answer, but I would love to know the steps taken.
I am having problems with an equation involving relativity, but my question is directly math related currently the equation is in the form of e=mc^2(blah blah blah), but I need it in the form e=
mv^2(blah blah blah).
the current equation is "E = mc^2((1/sqrt(1-v^2/c^2)) - 1)"
where sqrt(x) signifys the square root of x. please help.
what I really need is the answer, but I would love to know the steps taken.
I do not understand. Do you want to solve for $v$?
I am having problems with an equation involving relativity, but my question is directly math related currently the equation is in the form of e=mc^2(blah blah blah), but I need it in the form e=
mv^2(blah blah blah).
the current equation is "E = mc^2((1/sqrt(1-v^2/c^2)) - 1)"
where sqrt(x) signifys the square root of x. please help.
what I really need is the answer, but I would love to know the steps taken.
What you want to do is expand the Lorentz factor $\gamma = 1/\sqrt{1-v^2/c^2}$ in a Taylor series about $v=0$: $\gamma = 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots$. You should get $E = mc^2(\gamma - 1) = \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots$, whose leading term is the classical kinetic energy.
Last edited by topsquark; February 14th 2006 at 05:22 AM.
Posts about mazur on Quomodocumque
Those of us outside Silicon Valley tend to think of it as a single entity — but venture capitalists and developers are not the same people and don’t have the same goals. I learned about this from
David Carlton’s blog post. Cathy O’Neil reposted it this morning. It’s kind of cool that the three of us, who started grad school together and worked with Barry Mazur, are all actively blogging!
We just need to get Matt Emerton in on it and then we’ll have the complete set. Maybe we could even launch a new blogging platform and call it mazr. You want startup culture, I’ll give you startup
“Deforming Galois Representations” is online, too
The hits just keep on coming, as Barry Mazur has now posted a scan of his paper, “Deforming Galois Representations,” from the long-unavailable Galois Groups over Q proceedings, on his homepage. I
didn’t link directly to the .pdf because there’s tons of other interesting stuff on Barry’s homepage to look at!
Tagged algebraic geometry, barry mazur, deformation theory, Galois representations, mazur, number theory
Diophantineness: Mazur-Rubin and Kollar
Last year I blogged about an argument of Bjorn Poonen, which shows that Hilbert’s tenth problem has a negative solution over the ring of integers O_K of a number field K whenever there exists an
elliptic curve E/Q such that E(Q) and E(K) both have rank 1. That is: there’s no algorithm that tells you whether a given polynomial equation over O_K is solvable. The idea is that under these
circumstances one can construct a Diophantine model for Z inside O_K; one already knows (by Matijasevic, Robinson, etc.) that no algorithm can determine whether a polynomial equation over Z has a
solution, and the same property is now inherited by the ring O_K.
The necessary fact about existence of low-rank elliptic curves over number fields (actually, not quite the fact Poonen asked for but something weaker that suffices) has now been proven, subject to a
hypothesis on the finiteness of Sha, by Mazur and Rubin: see Theorems 1.11 and 1.12. So, if you believe Sha to be finite, you believe that Hilbert’s tenth problem has a negative answer for the ring
of integers of every number field.
The result of Mazur and Rubin is actually much more substantial than the corollary I mention here, giving for instance quite strong lower bounds on the number of twists of an elliptic curve E with
specified 2-Selmer rank. But I haven’t studied the argument sufficiently to say anything serious about what’s inside.
I recently returned from the “Spaces of curves and their interaction with Diophantine problems” conference at Columbia, where Janos Kollar discussed the question: Which subsets S of C(t) are
Diophantine? That is, which have the property that they can be written as the set of s in C(t) such that $\exists x_1, x_2, ..., x_k: f(s,x_1, ... , x_k) = 0$ for some polynomial f in k+1 variables
with coefficients in C(t). Kollar explained how to prove that the polynomial ring C[t] is not Diophantine in C(t). The idea is to show that any “sufficiently large” Diophantine subset S of C(t)
contains functions whose denominators are essentially arbitrary; more precisely (but not completely precisely!) if X in Sym^d P^1 is the locus of degree-d denominators of elements of S, the Zariski
closure of X needs to be — well, it doesn’t have to contain all degree-d polynomials, but it has to contain a set of the form $\{ FG^r\}$ as F,G range over polynomials of degrees s,t with s+rt = d.
In particular, it’s not possible for the denominator to be identically 1, as would be the case if S were C[t]. In fact, this argument shows that no finitely generated C-subalgebra of C(t) is
Diophantine over C[t].
Open question: is the localization of C(t) at t Diophantine over C(t)?
Update: When I first posted this I didn’t notice that Kollar’s result is already out, in the new journal Algebra and Number Theory, so you can go to the source for more details. ANT, by the way, is
a free electronically distributed journal with a terrific editorial board, and I highly recommend submitting there.
Tagged algebraic geometry, diophantine, hilbert problems, kollar, logic, mazur, poonen, rubin
Non-simple abelian varieties in a family
Here’s a funny question. Let f in C[x] be a squarefree polynomial of degree at least 6. Let S be the set of complex numbers t such that the Jacobian of the hyperelliptic curve
$y^2 = f(x)(x-t)$
is not simple. Is S always finite? Even more, is there a bound on |S| which doesn’t depend on f, or depends only on the degree of f?
This question comes from the introduction to “Non-simple abelian varieties in a family: geometric and analytic approaches” , a new paper by me, Christian Elsholtz, Chris Hall, and Emmanuel Kowalski.
In its original form this was a four-author, six-page paper — fortunately we’ve now added enough material to make the ratio a bit more respectable!
The paper isn’t about complex algebraic geometry at all — it explains how to get bounds on S when f has rational coefficients and t ranges over rational numbers, which is quite a different story. The
point of the paper is partly to prove some theorems and partly to make a metamathematical point — that problems of this kind can be approached via either arithmetic geometry or analytic number
theory, and that the two approaches have complementary strengths and weaknesses. Bounds from arithmetic geometry are stronger but less uniform; bounds from analytic number theory are weaker but have
better uniformity.
Here’s my favorite example of this phenomenon. Let X be a smooth plane curve over Q of degree d at least 4. Then by Faltings’ Theorem we know that X has only finitely many rational points.
On the other hand, a beautiful theorem of Heath-Brown tells us that the number of rational points on X with coordinates of height at most B is at most C B^(2/d), for some constant C depending only on
d. At first, this seems to give much less than Faltings. After all, as B gets larger and larger, the upper bound given by Heath-Brown gets arbitrarily large — whereas we know by Faltings that there
are only finitely many points on the whole curve, no matter how large we allow the coordinates to be.
But note that the constant in Heath-Brown’s result doesn’t depend on the curve X. It is what’s called a uniform bound. Faltings’ theorem, by contrast, gives an upper bound on the number of points
which depends very badly on the choice of X. Depending on what you’re trying to accomplish, you might be willing to sacrifice uniformity to get finiteness — or the reverse. But it’s best to have both
options at hand.
Is it possible to have uniformity and finiteness simultaneously? Conjecturally, yes. Caporaso, Harris, and Mazur showed that, conditional on Lang’s conjecture, there is a constant B(g) such that
every genus-g curve X/Q has at most B(g) rational points. The Caporaso-Harris-Mazur paper came out when I was in graduate school, and the idea of such a uniform bound was considered so wacky that CHM
was thought of as evidence against Lang’s conjecture. Joe Harris used to wander around the department, buttonholing graduate students and encouraging us to cook up examples of genus-g curves with
arbitrarily many points, thus disproving Lang. We all tried, and we all failed — as did many more experienced people. And nowadays, the idea that there might be a uniform bound for the number of
rational points on a genus-g curve is considered fairly reputable, even among people who have their doubts about Lang’s conjecture. As far as I know, the world record for the number of rational
points on a genus-2 curve is 588, due to Kulesz. Can you beat it?
Tagged algebraic geometry, analytic number theory, caporaso, faltings, harris, heath-brown, lang, mazur
How to Interpret SPSS Regression Results | The Classroom | Synonym
Regression is a complex statistical technique that tries to predict the value of an outcome or dependent variable, such as annual income, economic output or student test scores, based on one or more
predictor variables, such as years of experience, national unemployment rates or student course grades. Researchers in education and social sciences use regression to study a wide range of phenomena,
using statistical software programs such as SPSS to conduct their analyses. SPSS generates regression output that may appear intimidating to beginners, but a sound understanding of regression
procedures and an understanding of what to look for can help the student or novice researcher interpret the results.
Step 1
Conduct your regression procedure in SPSS and open the output file to review the results. The output file will appear on your screen, usually with the file name "Output 1." Print this file and
highlight important sections and make handwritten notes as you review the results.
Step 2
Begin your interpretation by examining the "Descriptive Statistics" table. This table often appears first in your output, depending on your version of SPSS. The descriptive statistics will give you
the values of the means and standard deviations of the variables in your regression model. For example, a regression that studies the effect of years of education and years of experience on average
annual income will have the means and standard deviations in your data for these three variables.
Step 3
Turn your attention to the correlations table, which follows the descriptive statistics. Correlations measure the degree to which these variables are related. Correlations range in value from
-1 to +1. The closer the absolute value is to one, the stronger the correlation. The sign can be positive or negative, signifying positive or negative correlation.
Step 4
Review the model summary, paying particular attention to the value of R-square. This statistic tells you how much of the variation in the value of the dependent variable is explained by your
regression model. For example, regressing average income on years of education and years of experience may produce an R-square of 0.36, which indicates that 36 percent of the variation in average
incomes can be explained by variability in a person's education and experience.
Step 5
Determine the linear relationship among the variables in your regression by examining the Analysis of Variance (ANOVA) table in your SPSS output. Note the value of the F statistic and its
significance level (denoted by the value of "Sig."). If the value of F is statistically significant at a level of 0.05 or less, this suggests a linear relationship among the variables. Statistical
significance at a .05 level means that, if there were no real relationship among the variables, a result this strong would arise by chance less than 5 percent of the time. This has become the accepted significance level in most research fields.
Step 6
Study the coefficients table to determine the value of the constant. This table summarizes the results of your regression equation. Column B in the table gives the values of your regression
coefficients and the constant, which is the expected value of the dependent variable when the values of the independent variables equal zero.
Step 7
Study the values of the independent variables in the coefficients table. The values in column B represent the extent to which the value of that independent variable contributes to the value of the
dependent variable. For example, a B of 800 for years of education suggests that each additional year of education raises average income by an average of $800 a year. The t-values in the coefficients
table indicate each variable's statistical significance. In general, a t-value of about 2 or more in absolute value indicates statistical significance.
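For readers who want to see where these numbers come from, the sketch below computes the same quantities — the coefficients (column B), R-square, and the t-value of the predictor — for a toy one-predictor regression in plain Python. The data and variable names are made up for illustration; SPSS reports the same statistics for this simple case.

```python
import math

# Toy single-predictor regression on made-up data.
x = [1, 2, 3, 4, 5]                      # e.g. years of education
y = [3.1, 4.9, 7.2, 9.0, 10.8]           # e.g. income in $1000s
n = len(x)

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

slope = sxy / sxx                        # the "B" for the predictor
intercept = my - slope * mx              # the constant (Step 6)
r_square = sxy ** 2 / (sxx * syy)        # variance explained (Step 4)

sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
se_slope = math.sqrt(sse / (n - 2) / sxx)
t_value = slope / se_slope               # compare with the ~2 rule of thumb

print(slope, intercept, r_square, t_value)   # ~1.95, ~1.15, ~0.998, ~39
```

The same formulas generalize to several predictors via matrix least squares, which is what SPSS does behind the scenes.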
When is an Albanese variety principally polarized?
Let (X,x) be a pointed projective variety. Then there exists an abelian variety V which is universal for maps of pointed varieties $(X,x) \to (A,e_A)$, called the Albanese variety. When X is a curve,
the variety V is isomorphic to the Jacobian of X (in higher dimensions this fails), which is a principally polarized abelian variety.
Question: When is an albanese variety principally polarized? If it is not always principally polarized, can one describe the degree of the polarization in terms of data intrinsic to X?
ag.algebraic-geometry abelian-varieties
1 Answer
In general it could happen that the Albanese variety does not admit a principal polarization at all. For instance the Albanese variety of an abelian variety is the Abelian variety itself.
So choose $X$ to be some abelian variety that has no principal polarization and you will get an example.
On the other hand it can happen that the Albanese variety is principally polarized. For instance you can take the Albanese of the $n$-th symmetric product of a curve. It is equal to the
Jacobian of the curve and so admits a principal polarization. Or if you want to be fancier you can take a hyperplane section in the symmetric product of a curve. It will also have the
Jacobian of the curve as its Albanese variety.
Another useful comment is that the Albanese of $X$ is the dual of $Pic^{0}(X)$ and so $Alb(X)$ admits a principal polarization if and only if $Pic^{0}(X)$ does. If you fix an ample line
bundle $L$ on an $n$-dimensional complex projective variety $X$, then $L$ induces a natural polarization on $Pic^{0}(X)$: the universal cover of $Pic^{0}(X)$ is naturally identified with
$H^{1}(X,O_{X}) = H^{1,0}(X)$, the integral $(1,1)$ form $c_{1}(L)$ then induces a Hermitian pairing on $H^{1,0}(X)$ by the formula $$ h(\alpha,\beta) := -2i \int_{X} \alpha\wedge \bar{\beta} \wedge c_{1}(L)^{\wedge (n-1)}. $$ This $h$ defines a polarization on $Pic^{0}(X)$. The construction of $h$ is purely cohomological and so it is straightforward to check if it
defines a principal polarization by computing the divisors of this polarization.
Precalculus: Functions
Calculus is the mathematical study of change, and real-life things that change are modeled by functions. Precalculus is essentially the study of functions, with a few other related topics that
supplement the study of functions and prepare students for calculus. In this text we'll concern ourselves first with learning about general functions, and later with certain types of common functions
with special properties, like polynomial, exponential, logarithmic, and trigonometric functions. First, it is of critical importance to understand exactly what a function is. In the following
lessons, we'll discuss what makes a function a function, some general properties of functions, and a few basic categories of functions. In this text we'll assume a general knowledge of algebraic
principles of solving equations, working with the real numbers, and working with sets. In the last of the upcoming sections, we'll learn how functions behave under operations like addition,
subtraction, etc.
Gas Mixtures and Partial Pressures
How do we deal with gases composed of a mixture of two or more different substances?
John Dalton (1766-1844) - (gave us Dalton's atomic theory)
The total pressure of a mixture of gases equals the sum of the pressures that each would exert if it were present alone
The partial pressure of a gas:
• The pressure exerted by a particular component of a mixture of gases
Dalton's Law of Partial Pressures:
• P[t] is the total pressure of a sample which contains a mixture of gases
• P[1], P[2], P[3], etc. are the partial pressures of the gases in the mixture
P[t] = P[1] + P[2 ]+ P[3] + ...
If each of the gases behaves independently of the others then we can apply the ideal gas law to each gas component in the sample:
• For the first component, n[1] = the number of moles of component #1 in the sample
• The pressure due to component #1 would be:
P[1] = n[1]RT/V
• For the second component, n[2] = the number of moles of component #2 in the sample
• The pressure due to component #2 would be:
P[2] = n[2]RT/V
And so on for all components. Therefore, the total pressure P[t] will be equal to:
P[t] = P[1] + P[2] + P[3] + ... = n[1]RT/V + n[2]RT/V + n[3]RT/V + ...
• All components share the same temperature, T, and volume V, therefore, the total pressure P[t] will be:
P[t] = (n[1] + n[2] + n[3] + ...)(RT/V)
• Since the sum of the number of moles of each component gas equals the total number of moles of gas molecules in the sample:
P[t] = n[t](RT/V)
At constant temperature and volume, the total pressure of a gas sample is determined by the total number of moles of gas present, whether this represents a single substance, or a mixture
A gaseous mixture made from 10 g of oxygen and 5 g of methane is placed in a 10 L vessel at 25°C. What is the partial pressure of each gas, and what is the total pressure in the vessel?
(10 g O[2])(1 mol/32 g) = 0.313 mol O[2]
(5 g CH[4])(1 mol/16 g) = 0.313 mol CH[4]
V = 10 L, T = (273+25) K = 298 K
P[O2] = n[O2]RT/V = (0.313 mol)(0.0821 L atm/mol K)(298 K)/(10 L) = 0.765 atm
P[CH4] = n[CH4]RT/V = (0.313 mol)(0.0821 L atm/mol K)(298 K)/(10 L) = 0.765 atm
P[t] = P[O2] + P[CH4] = 0.765 atm + 0.765 atm = 1.53 atm
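As a cross-check, the example can be recomputed directly from PV = nRT. The sketch below follows the numbers in the problem statement (10 g O[2], 5 g CH[4], 10 L, 25°C):

```python
# Recomputing the worked example from the ideal gas law, PV = nRT.
R, T, V = 0.0821, 298.0, 10.0   # L atm / (mol K), K, L

n_O2 = 10.0 / 32.0              # 10 g O2, molar mass 32 g/mol
n_CH4 = 5.0 / 16.0              # 5 g CH4, molar mass 16 g/mol

P_O2 = n_O2 * R * T / V         # partial pressure of O2
P_CH4 = n_CH4 * R * T / V       # partial pressure of CH4
P_total = P_O2 + P_CH4          # Dalton's law of partial pressures

print(round(P_O2, 3), round(P_CH4, 3), round(P_total, 2))
```

Note that the two gases happen to contribute the same number of moles here, so their partial pressures are equal.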
Partial Pressures and Mole Fractions
The ratio of the partial pressure of one component of a gas to the total pressure is:
P[1]/P[t] = (n[1]RT/V)/(n[t]RT/V) = n[1]/n[t]
• The value (n[1]/n[t]) is termed the mole fraction of the component gas
• The mole fraction (X) of a component gas is a dimensionless number, which expresses the ratio of the number of moles of one component to the total number of moles of gas in the sample
The ratio of the partial pressure to the total pressure is equal to the mole fraction of the component gas:
P[1]/P[t] = X[1]
• The above equation can be rearranged to give:
P[1] = X[1] P[t]
The partial pressure of a gas is equal to its mole fraction times the total pressure
a) A synthetic atmosphere is created by blending 2 mol percent CO[2], 20 mol percent O[2] and 78 mol percent N[2]. If the total pressure is 750 torr, calculate the partial pressure of the oxygen
Mole fraction of oxygen is (20/100) = 0.2
Therefore, partial pressure of oxygen = (0.2)(750 torr) = 150 torr
b) If 25 liters of this atmosphere, at 37°C, have to be produced, how many moles of O[2] are needed?
P[O2] = 150 torr (1 atm/760 torr) = 0.197 atm
V = 25 L
T = (273+37K)=310K
R=0.0821 L atm/mol K
PV = nRT
n = (PV)/(RT) = (0.197 atm * 25 L)/(0.0821 L atm/mol K * 310 K) = 0.19 mol O[2]
1996 Michael Blaber
Re: [SI-LIST] : Oscillation in lumped circuits and transmission lines
Steve Corey (steve@tdasystems.com)
Fri, 05 Feb 1999 09:05:19 -0800
This is an interesting question, if you have a stomach for semantics... In my opinion, although the two oscillations to which you refer are distinct, they are definitely related. System/circuit/
network theory people use
"oscillation" to refer to an underdamped sinusoidal oscillation, whereas signal integrity designers use the term to refer to any nonmonotonic signal transition. In fact, the oscillation seen on an
unmatched distributed
transmission line excited by a step is a sum of an infinite number of sinusoidal oscillations.
An earlier reply alluded to the fact that the time constant of an LC circuit is sqrt(LC), which is true for the single LC lump. In fact, this value will coincide with the delay of the transmission
line. However, when such a
circuit is divided into n lumped segments, there will now be n modes of oscillation, each at its own frequency. (caveat -- a related point -- modes of oscillation of a network are distinct from, but
related to, modes of
transmission of a T-line system.) As you allow n to approach infinity, the sum of the sinusoidal oscillations converges into a distributed transmission line response, in the spirit of Fourier theory.
In the awkward in-between stage, for n less than infinity, you get non-physical ringing, the same way you get it from a truncated fourier series, and in JPEG compression, for that matter, because the
higher order modes of
oscillation are not present. However, this brings up the key point as it applies to modeling interconnects with lumped elements: in some cases the system is excited with a low enough frequency input
that the higher order modes
are not excited anyway. The nonphysical ringing is not seen, and the distributed and lumped models behave the same, so either model is valid. Returning to the compressed image analogy, if the
compression-induced errors are too
small to resolve with your eye, then as far as you're concerned there are no errors.
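The convergence Steve describes can be illustrated numerically. The sketch below is a minimal leapfrog (FDTD-style) simulation of an n-section lossless LC ladder driven by an ideal unit step; the element values, section count, and the open-end boundary treatment are arbitrary choices for illustration, not a production line model. The disturbance needs n time steps to cross n sections, and the unmatched (open) far end reflects, so the far-end voltage rings up toward twice the drive.

```python
import math

L = C = 1.0
n = 40                                  # number of LC sections
dt = math.sqrt(L * C)                   # "magic" step: one section per step

V = [0.0] * (n + 1)                     # node voltages
I = [0.0] * n                           # inductor currents
V[0] = 1.0                              # ideal step source at the near end

arrival, peak = None, 0.0
for step in range(1, 6 * n):
    for j in range(n):
        I[j] += dt / L * (V[j] - V[j + 1])
    for j in range(1, n):
        V[j] += dt / C * (I[j - 1] - I[j])
    V[n] += dt / C * I[n - 1]           # open far end: no outgoing current
    V[0] = 1.0                          # hold the source
    if arrival is None and V[n] != 0.0:
        arrival = step                  # first nonzero far-end voltage
    peak = max(peak, V[n])

print(arrival, round(peak, 2))          # arrival == n; peak near 2.0
```

Being lossless, the discrete line rings indefinitely between 0 and twice the drive, which is the undamped oscillation the system-theory view predicts.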
Hope this helps,
-- Steve
Steven D. Corey, Ph.D.
Time Domain Analysis Systems, Inc.
"The Interconnect Modeling Company."
email: steve@tdasystems.com
phone/fax: (206) 527-1849
Arani Sinha wrote:
> Hi,
> I have the following question.
> We can model an interconnect as either a lumped circuit or a
> transmission line. By means of lumped modeling, we can say that
> it has an oscillatory response if its damping factor is less
> than 1. By means of transmission line modeling, we can say that
> it has an oscillatory response if the signal reflection
> co-efficients at source and load satisfy certain conditions.
> My question is whether oscillation in a lumped circuit and
> signal reflection in a transmission line are actually the same
> phenomenon. If so, there should be a correlation between
> conditions for oscillation in a lumped circuit and those for
> oscillation in a transmission line.
> After many discussions and much thought, I have not been able
> to determine a correlation. I am also ambivalent about whether
> they are the same phenomenon.
> I understand that the damping factor in a lumped circuit is
> equivalent to the attenuation constant in a transmission line
> and that condition of no reflection is equivalent to the
> maximum power transfer theorem.
> I will really appreciate help in this regard.
> Thanks,
> Arani
> **** To unsubscribe from si-list: send e-mail to majordomo@silab.eng.sun.com. In the BODY of message put: UNSUBSCRIBE si-list, for more help, put HELP. si-list archives are accessible at http://
www.qsl.net/wb6tpu/si-list ****
Magic trick based on deep mathematics
I am interested in magic tricks whose explanation requires deep mathematics. The trick should be one that would actually appeal to a layman. An example is the following: the magician asks Alice to
choose two integers between 1 and 50 and add them. Then add the largest two of the three integers at hand. Then add the largest two again. Repeat this around ten times. Alice tells the magician her
final number $n$. The magician then tells Alice the next number. This is done by computing $(1.61803398\cdots) n$ and rounding to the nearest integer. The explanation is beyond the comprehension of a
random mathematical layman, but for a mathematician it is not very deep. Can anyone do better?
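The convergence here is fast enough to be checked exhaustively; the sketch below simulates both sides of the trick with the iteration count from the description.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio, 1.61803398...

def alice(a, b, rounds=10):
    """Alice's side: repeatedly append the sum of the two largest
    integers at hand (a Fibonacci-style recurrence after the start)."""
    nums = [a, b]
    for _ in range(rounds):
        nums.sort()
        nums.append(nums[-1] + nums[-2])
    return nums

def magician(n):
    """Consecutive terms approach the golden ratio fast enough that
    rounding PHI * n recovers the next term exactly."""
    return round(PHI * n)

# exhaustive check over every starting pair in 1..50
for a in range(1, 51):
    for b in range(1, 51):
        nums = alice(a, b)
        assert magician(nums[-1]) == nums[-1] + nums[-2]
print("guess is exact for all 2500 starting pairs")
```

The error of the golden-ratio estimate shrinks geometrically with each addition, so ten rounds leave it well under a half, and rounding lands on the exact next term.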
popularization soft-question
6 Please make this community wiki? – Theo Johnson-Freyd Dec 25 '09 at 22:47
29 I am informed that Persi Diaconis is the correct person to answer this question. – Sam Nead Dec 26 '09 at 0:09
15 I have discussed this question with Persi. He could not come up with anything significant (though he did not think about it very long). – Richard Stanley Dec 26 '09 at 16:30
11 I've also heard Persi talk about this subject, and my guess is that he would say that the requirements of "deep mathematics" and "would actually appeal to a layman" are nearly incompatible in
practice. – Mark Meckes Dec 27 '09 at 13:54
2 I don't think they should be incompatible: the deep mathematics are the reason the trick works; you don't have to understand them to be stunned by the trick! – Sam Derbyshire Jan 17 '10 at 17:06
41 Answers
"The best card trick", an article by Michael Kleber. Here is the opening paragraph:
"You, my friend, are about to witness the best card trick there is. Here, take this ordinary deck of cards, and draw a hand of five cards from it. Choose them deliberately or randomly,
whichever you prefer--but do not show them to me! Show them instead to my lovely assistant, who will now give me four of them: the 7 of spades, then the Q of hearts, the 8 of clubs, the 3
of diamonds. There is one card left in your hand, known only to you and my assistant. And the hidden card, my friend, is the K of clubs."
16 Martin Gardner gave an interesting variant of this in one of his books, where the volunteer also gets to choose which 4 of the 5 cards the assistant hands to the mathematician. This
seems like only 4!=24 pieces of information to convey one of 48 cards: the extra bit is whether the assistant passes the cards right side up or upside down. – David Speyer Feb 2 '10 at
4 That's a really nifty trick... I wonder if I can find an "assistant" to help me run this soon! – Gwyn Whieldon May 16 '10 at 7:48
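Kleber's article is about the Fitch Cheney five-card trick, and the sketch below implements one standard version of the scheme. The conventions here (which card of the same-suit pair is hidden, the total order on cards, how the six permutations encode a distance) are arbitrary choices, as long as magician and assistant agree on them.

```python
import random
from itertools import permutations

# Cards are (rank 1..13, suit 0..3).  Among five cards two share a suit
# (pigeonhole); the assistant hides one of that pair so that
# hidden_rank = shown_rank + d (mod 13) with 1 <= d <= 6, passes the
# shown card first, and encodes d by the order of the other three cards.
PERMS = list(permutations(range(3)))       # six orders encode d = 1..6

def key(card):
    rank, suit = card
    return suit * 13 + rank                # a fixed total order on cards

def assistant(hand):
    for shown in hand:
        for hidden in hand:
            if shown != hidden and shown[1] == hidden[1]:
                d = (hidden[0] - shown[0]) % 13
                if 1 <= d <= 6:            # one direction always works
                    rest = sorted((c for c in hand
                                   if c not in (shown, hidden)), key=key)
                    order = PERMS[d - 1]
                    return [shown] + [rest[i] for i in order], hidden

def magician(four):
    shown, rest = four[0], four[1:]
    ranked = sorted(rest, key=key)
    d = PERMS.index(tuple(ranked.index(c) for c in rest)) + 1
    return ((shown[0] - 1 + d) % 13 + 1, shown[1])

DECK = [(r, s) for s in range(4) for r in range(1, 14)]
rng = random.Random(1)
for _ in range(1000):
    hand = rng.sample(DECK, 5)
    four, hidden = assistant(hand)
    assert magician(four) == hidden
print("hidden card recovered in all 1000 trials")
```

The inner condition always succeeds for some ordered pair because the two cyclic rank distances of a same-suit pair sum to 13, so one of them is at most 6.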
This was fascinating for me. Somehow the man takes a bagel and with one cut arrives with two pieces that are interlocked. Whether this qualifies as "magic" I dunno (it's hard to say
once the trick's been explained), but it sure seems like it to me.
It doesn't hurt that I love bagels, and have the opportunity to perform this with friends/family/non-math people and can teach a little about problems/topology/counter-intuitive facts about the universe.
11 I was amused by the connected bagel that resulted when my friend cut along a Mobius strip instead of a full-twisted strip. – Elizabeth S. Q. Goodman Feb 11 '10 at 6:50
1 As soon as I saw the first image in the link I thought "this must be a toral knot of type $(1,1)$ !" (or how knot theorists call those beasts..) – Qfwfq Nov 8 '11 at 2:26
Five unrelated items:
Mobius strip
One of the best mathematical tricks is what happens when you cut a Mobius strip in the middle. (Look here) (And what happens when you cut it again, and when you cut it not in the middle.)
This is truly mind boggling and magicians use it in their acts. And it reflects deep mathematics.
Diaconis mind reading trick
I also heard from Mark Goresky this description of a mathematics-based card trick:
"Mark described a card trick of Diaconis where he takes a deck of cards, gives it to a person at the end of the room, lets this person “cut” the deck and replace the two parts, then asks
many other people do the same and then asks people to take one card each from the deck. Next Diaconis is trying to read the mind of the five people with the last cards by asking them to
concentrate on the cards they have. To help him a little against noise coming from other minds he asks those with black cards to step forward. Then he guesses the cards each of the five
people have.
Mark said that Diaconis likes to perform this trick with a crowd of magicians since it violates the basic rule: "never let the cards out of your control". This trick is performed (with a
reduced deck of 32 cards) based on a simple linear feedback shift register. Since all the operations of cutting and pasting amount to cyclic permutations, the 5 red/black bits are enough to tell the cyclic shift and no genuine mind reading is required."

I think there is a paper by Goresky and Klapper about a version of this trick and its relations to shift registers.
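The mechanism can be sketched concretely. A maximal-length sequence from a degree-5 LFSR (here the primitive polynomial x^5 + x^2 + 1, one standard choice) has period 31, and every window of 5 consecutive bits occurs exactly once per period — so five red/black answers determine the cyclic position, which cutting the deck cannot change. The 32-card version presumably splices in the all-zero window to get a de Bruijn cycle; the details are in the Goresky-Klapper line of work mentioned above.

```python
# m-sequence from the primitive polynomial x^5 + x^2 + 1 over GF(2):
# b[n+5] = b[n] XOR b[n+2], period 31 from any nonzero seed.
bits = [0, 0, 0, 0, 1]                    # a nonzero 5-bit seed
for _ in range(31 - 5):
    bits.append(bits[-5] ^ bits[-3])

# every cyclic window of 5 bits is distinct, so 5 red/black answers
# pin down the cut position of the deck
windows = {tuple(bits[(i + j) % 31] for j in range(5)) for i in range(31)}
print(len(windows))                        # 31 distinct windows
```

Since the 31 windows are exactly the 31 nonzero 5-bit patterns, asking only "who holds a black card" recovers the position of all five spectators at once.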
The Link Illusion
I heard a wonderful trick from Nahva De Shalit. You tie a string between the two hands of two people and link the two strings. The task is to get unlinked.
This ties in with what I heard from Eric Demaine about the main principle behind many puzzles (some of which he manufactured with his father when he was six!)
Symmetry Illusion
Sometimes things are not as symmetric as they may look.
commutators-based magic
(I heard this from Eric Demaine and from Shahar Mozes.) If we hang a picture (or boxing gloves) with one nail, once the nail falls so does the picture. If we use two nails then ordinarily
if one nail falls the picture still hangs there. Mathematics can come to the rescue for the following important task: use five nails so that if any one nail falls, so does the picture.
4 They stand up if they have a red card and stay seated if they have a black card. That'll tell him (for example) that he's looking at the sequence of cards corresponding to 01100 or
11001, etc. He has the cyclic order of the cards he handed out memorized, and just reads of the corresponding card names. – Gwyn Whieldon May 16 '10 at 0:31
The Kruskal count.
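A quick Monte-Carlo sketch of the principle behind the Kruskal count (the face-card convention and trial count below are arbitrary choices): start anywhere near the top, repeatedly jump forward by the value of the card you are on, and note the last card you can land on. Chains started at different positions usually coalesce, which is the whole trick.

```python
import random

def step_value(rank):
    return 5 if rank > 10 else rank        # J, Q, K count 5; ace counts 1

def last_card(deck, start):
    """Follow the Kruskal chain from `start` to its final key card."""
    i = start
    while i + step_value(deck[i]) < len(deck):
        i += step_value(deck[i])
    return i

rng = random.Random(0)
trials, hits = 3000, 0
for _ in range(trials):
    deck = [r for r in range(1, 14) for _ in range(4)]
    rng.shuffle(deck)
    secret = rng.randrange(10)             # spectator's secret start
    if last_card(deck, secret) == last_card(deck, 0):
        hits += 1                          # magician's chain coalesced

print(round(hits / trials, 2))             # typically around 0.85
```

The magician fails some of the time, but a success rate in the mid-80s percent is plenty for a one-off performance.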
I saw this trick demonstrated at a math camp once. When it works, it is extremely impressive to non-mathematicians and mathematicians alike.
Have a volunteer shuffle a deck of cards, select a card, show it to the audience, and shuffle it back into the deck. Take the deck from him, and fling all of the cards into the air.
Grab one as it falls, and ask the volunteer if it is his card.
1 in 52 times (this is the deep mathematics part), the card you grab will be the card the volunteer selected. Even most statisticians should be amazed at this feat. Just make sure you
never perform this trick twice to the same audience.
1 This is brilliant! – Qiaochu Yuan Jun 15 '10 at 10:15
13 xkcd.com/628 – Ryan Reich Nov 7 '11 at 20:34
Persi Diaconis and Ron Graham just published Magical Mathematics. The book contains a plethora of magic tricks rooted in deep mathematics.
2 Dear Sami, Welcome to MO – Gil Kalai Nov 7 '11 at 19:42
The following trick uses some relatively deep mathematics, namely cluster algebras. It will probably impress (some) mathematicians, but not very many laypeople.
Draw a triangular grid and place 1s in some two rows, like the following except you may vary the distance between the 1s:
. . . . . .
. . . . . . .
. . . . . .
. . . . . . .
. . . . . .
Now choose some path from the top row of 1s to the bottom row and fill it in with 1s also, like so:
1 . . . . .
. 1 . . . . .
1 . . . . .
. 1 . . . . .
. 1 . . . .
Finally, fill in all of the entries of the grid with a number such that for every 2 by 2 "subsquare"

  b
a   d
  c

the condition $ad-bc=1$ is satisfied, or equivalently, that $d=\frac{bc+1}{a}$. You can easily do this locally, filling in one forced entry after another. For example, one might get the following:
. 1 5 5 3 1 .
1 2 8 7 1 .
. 1 3 11 2 1 .
. 1 4 3 1 .
The "trick" is that every entry is an integer, and that the pattern of 1s quickly repeats, except upside-down. If you were to continue to the right (and left), then you would have an
infinite repeating pattern.
This should seem at least a bit surprising at first because you sometimes divide some fairly large numbers, e.g. $\frac{5\cdot 11+1}{8} = 7$ or $\frac{7\cdot 3+1}{11} = 2$ in the above
picture. Of course, the larger the grid you made initially, the larger the numbers will be, and the more surprising the exact division will be.
Incidentally, if anyone can provide a reference as to why this all works, I'd love to see it. I managed to prove that all of the entries are integers, and that they're bounded, and so
there will eventually be repetition. However, the repetition distance is actually a simple function of the distance between the two rows of 1, which I can't prove.
2 The fact you are looking for is that, in a finite type cluster algebra, if you mutate each vertex once in the order given by the orientation of the Dynkin diagram, the resulting
operation has period h+2, where h is the Coxeter number. See front.math.ucdavis.edu/0111.3053 – David Speyer Feb 2 '10 at 13:28
7 I'm not sure what level you want an answer on. My preferred proof for this particular case is to notice that the numbers you are getting are the Plucker coordinates of a point in G
(2,n), and that presentation makes it obvious that they will be periodic modulo n. – David Speyer Feb 2 '10 at 13:31
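The phenomenon can be checked numerically through the Conway-Coxeter description behind the comments above: the diagonals of such a frieze satisfy a three-term recurrence driven by a "quiddity" sequence, which for integer friezes comes from a triangulated polygon. The sketch below is just an illustration — it uses a fan triangulation as one convenient example and verifies integrality, positivity, the determinant-1 rule, and the upside-down (glide) repetition.

```python
def fan_quiddity(n):
    # quiddity of a fan triangulation of an n-gon: vertex 0 lies in n-2
    # triangles, its two neighbours in one, every other vertex in two
    q = [2] * n
    q[0], q[1], q[-1] = n - 2, 1, 1
    return q

def entry(q, i, j):
    # m(i,i) = 0, m(i,i+1) = 1, m(i,j+1) = q[j mod n]*m(i,j) - m(i,j-1)
    if j == i:
        return 0
    n = len(q)
    prev, cur = 0, 1
    for k in range(i + 1, j):
        prev, cur = cur, q[k % n] * cur - prev
    return cur

n = 7
q = fan_quiddity(n)
for i in range(n):
    for j in range(i + 1, i + n):
        m = entry(q, i, j)
        assert m > 0                         # positive integers throughout
        assert m == entry(q, j, i + n)       # glide symmetry: upside-down repeat
    for j in range(i + 1, i + n - 1):        # every 2x2 diamond has det 1
        assert (entry(q, i, j) * entry(q, i + 1, j + 1)
                - entry(q, i, j + 1) * entry(q, i + 1, j) == 1)
print("frieze pattern checks pass for a", n, "-gon")
```

The glide symmetry is exactly the "repeats, except upside-down" observation, with the repetition distance governed by n — consistent with the h+2 periodicity David Speyer cites in the comments.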
A late addition: The Fold and One-Cut Theorem. Any straight-line drawing on a sheet of paper may be folded flat so that, with one straight scissors cut right through the paper, exactly
the drawing falls out, and nothing else. Houdini's 1922 book Paper Magic includes instructions on how to cut out a 5-point star with one cut. Martin Gardner posed the general question in
his Scientific American column in 1960.
For the proof, see Chapter 17 of Geometric Folding Algorithms: Linkages, Origami, Polyhedra. We include instructions for cutting out a turtle, which, in my experience, draws a gasp from
the audience. :-)
The coffee mug trick
Give a coffee mug (full if you're brave) to someone and ask them to rotate it 360 degrees without spilling the (real or imaginary) coffee, so that their hand ends up in the same position.
This is impossible, so you get to smirk while they contort themselves and become more and more baffled (this works better with more than one person since it turns into a kind of "competition").
Finally, take the cup and show that while it's impossible to turn it once (as has been "proven"), it's possible to turn it twice (!) and end up in the same position.
Has to do with the fundamental group of SO(3) being $\mathbb{Z}/2\mathbb{Z}$; when we require the cup to stay upright we end up with a non-trivial loop.
1 Sometimes called the "plate trick" or the "belt trick". – Sam Nead Oct 11 '10 at 8:54
1 And then, you pretend that the mug is an electron and your arm tracks its spin. – Elizabeth S. Q. Goodman Nov 8 '11 at 7:04
1 An easier solution than the plate trick: Stand up, pick up the mug, and walk in a circle around it. – Sam Nead Apr 7 '12 at 14:44
This trick exploits the thinness of coins.
http://www.howtodotricks.com/easy-coin-magic-trick.html
1 I remember this - a nice trick! Very similar to betting someone "I can walk through this piece of paper." [Holding up a piece of letter paper]. So this is a real mathematical trick;
it uses the interaction between length and area. Is there a way to make a deeper version? – Sam Nead Jan 17 '10 at 14:38
You can use Hamming codes to guess a number with lying allowed. For example, here is a way to guess a number 0-15 with 7 yes-or-no questions, where the person being questioned is allowed to lie once. (The full cards are here).
Here is a card trick from Edwin Connell's Elements of Abstract and Linear Algebra, page 18 (it can be found online). I always do this trick to my undergraduate number theory class in the
first minutes of the first day. A few weeks later, after they've learned some modular arithmetic, we come back to the trick to see why it works. I quote from Connell:
"Ask friends to pick out seven cards from a deck and then to select one to look at without showing it to you. Take the six cards face down in your left hand and the selected card in your
right hand, and announce you will place the selected card in with the other six, but they are not to know where. Put your hands behind your back and place the selected card on top, and bring
the seven cards in front in your left hand. Ask your friends to give you a number between one and seven (not allowing one). Suppose they say three. You move the top card to the bottom, then
the second card to the bottom, and then you turn over the third card, leaving it face up on top. Then repeat the process, moving the top two cards to the bottom and turning the third card
face up on top. Continue until there is only one card face down, and this will be the selected card."
When I do this trick, I always use big magician's cards (much easier for an audience to see), but a regular deck works too. To get to the trick faster, I skip the first part and just pick 7
cards myself, showing them all the cards so they see nothing is funny (like two ace of spades or something). I then spread the cards in one hand face-down and let a student pick one and show
it to everyone else but me before I take it back face down. When the student is showing the cards to the class I move the rest of the cards behind me so that before I get the card back I already have the rest behind my back.
You need to make sure students at the side of the room won't be able to see what you're doing behind your back (namely, putting the mystery card on the top of the deck), so stand close to the
board. Practice this with yourself many times first to be sure you can do it without screwing up. The hard part is remembering to keep the last card you reached in the count on the top of the
deck; that same card will be used when you start the count in the next round. If you stick it on the bottom before counting off cards again then you'll mess everything up. For instance, if
someone picks the number 3 then I start counting from the top of the deck and say (with hand movements in brackets) "One [put it under], two [put it under], three [turn it over, put it on top
FACE UP and stop]. This [show face-up card to everyone] is not your card. [Put it back face-up on top] One [now put it under], two [put it under], three [turn over and put on top FACE UP and
stop]. This etc. etc."
Connell advises telling people to pick a number from 1 to 7 but not allow 1. In practice there's no need to tell people not to pick 1. They never do (it's never happened to me). They don't
pick 7 either. And if they did pick 1, well, just turn over the top card and you're done! Again, that never really happens.
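The counting procedure is easy to sanity-check in a few lines. A minimal Python sketch (the card labels and the convention that the face-up card starts the next count are my reading of the description above):

```python
def last_face_down(n, cards=7):
    # deck[0] is the top card; card 0 is the selected card, starting on top
    deck = list(range(cards))
    face_up = set()
    while len(face_up) < cards - 1:
        for _ in range(n - 1):          # "one [put it under], two [put it under], ..."
            deck.append(deck.pop(0))
        face_up.add(deck[0])            # "... turn it over, leave it face up on top"
    return next(c for c in deck if c not in face_up)

# for every allowed number, the last face-down card is the selected one
for n in range(2, 8):
    assert last_face_down(n) == 0
```

Why it works: since 7 is prime, the flipped positions are the distinct nonzero multiples of $n-1$ modulo 7, so the top (selected) card is never flipped.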
The audience is asked to choose an integer from 0 to 1000, and to give its remainders when divided by 7, 11, and 13 respectively.
The magician recovers the original integer by the Chinese Remainder Theorem.
Works because 7×11×13=1001.
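Since 7, 11 and 13 are pairwise coprime, the three remainders determine the integer uniquely modulo 1001. A minimal sketch (715, 364 and 924 are the standard CRT reconstruction coefficients for this modulus):

```python
def recover(r7, r11, r13):
    # 715 ≡ 1 (mod 7) and ≡ 0 (mod 11 and 13); similarly 364 for 11 and 924 for 13
    return (715 * r7 + 364 * r11 + 924 * r13) % 1001

# every integer from 0 to 1000 is recovered from its three remainders
for n in range(1001):
    assert recover(n % 7, n % 11, n % 13) == n
```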
1 You mean 13, not 3. 13x7x11 = 1001. – Jason DeVito Dec 26 '09 at 3:41
2 I've changed the 3 to a 13. @Jason: since the post is Community Wiki, you could have changed the 3 to a 13 as well. – Anton Geraschenko Dec 26 '09 at 6:56
Here's an example of a magic trick that works with high probability, based on a careful analysis of the riffle shuffle, in which an audience member performs a number of riffle shuffles
and then moves a single card, and the magician guesses which card has been moved.
Here is a general trick that you can use to make yourself look like you have an amazing memory.
Start with a finite abelian group $(G,+)$ in which you are comfortable doing arithmetic. Be sure to know the sum $$g^* = \sum_{g \in G} g.$$
Take a set $S$ of $|G|$ physical objects with an easily computable set isomorphism $$ \varphi : S \longrightarrow G.$$ Allow your audience to remove one random element $a \in S$
and then shuffle $S$ without telling you what $a$ is. [Shuffling means we need $G$ to be abelian.]
Now inform your audience that you are going to look briefly at each remaining element of $S$ and remember exactly which elements you saw, and determine by process of elimination which
element of $S$ was removed.
Now glance through all the remaining elements of $S$ one by one and keep a "running total" to compute $$ \varphi(a) = g^* - \sum_{s \in S-\{a\}} \varphi(s).$$
Finally apply $\varphi^{-1}$ and obtain $a.$
Note that $\varphi$ is not "canonical" in the sense there are definitely choices to be made. On the other hand it should be "natural" in the sense that you should be very comfortable
saying $s = \varphi(s).$
The prototypical example is to take $G$ to be $\Bbb Z / 13 \Bbb Z \times V_4,$ $S$ to be a standard deck of 52 cards, and $\varphi(s)$ to be $( \text{rank}(s) , \text{suit}(s) )$.
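For the prototypical example, here is a short sketch (assuming the encoding $\varphi(s) = (\text{rank}, \text{suit})$ with ranks in $\Bbb Z/13$ and suits as the two bits of $V_4$; note $g^*$ is then the identity, since each rank residue appears four times and each suit code an odd number of times):

```python
import random

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
random.shuffle(deck)
removed, rest = deck[0], deck[1:]
random.shuffle(rest)                 # audience shuffles; order no longer matters

# running total in G = Z/13 x V4: add ranks mod 13, XOR the 2-bit suit codes
rank_sum, suit_xor = 0, 0
for rank, suit in rest:
    rank_sum = (rank_sum + rank) % 13
    suit_xor ^= suit

# g* = identity, so phi(a) = -(running total); every element of V4 is its own inverse
assert ((-rank_sum) % 13, suit_xor) == removed
```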
I gave a talk about card shuffling to a general audience recently and wanted to memorise a "random-looking" deck so as to motivate a correct definition of what it means for a deck to be
random. Most magicians actually use memory tricks to learn off the deck but I thought it would be much cleverer to order the cards in the obvious way, and then find a recursive sequence of
length 52 containing all of 1 to 52. In the end, caught for time I settled on using the Collatz recursive relation with seed 18 --- this allowed me to name off 21 distinct cards effortlessly
and when I held up the deck prior to the demonstration, the audience voted that the deck was random. Can anyone think of a suitable recursive sequence with the desired property? We could
take a random-looking order and a "regular" recursive sequence, but I think it would be much better to find an easy-to-compute recursive sequence that "looks random" when using a more
canonical order, simply because if we can remember a "random looking order" we're pretty much going to have to remember the whole deck --- exactly the problem I'm trying to avoid.

PS: I did one of the simpler Diaconis tricks. A deck is riffle shuffled three times, the top card shown to the audience, inserted into the deck, and after laying the cards out on the table
the top card can be easily recovered by looking at the descents. The key is that the order of the deck is known beforehand --- a simple demonstration that three shuffles does not suffice to
mix up a deck of cards (with respect to variation distance).
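For what it's worth, the Collatz choice checks out: the orbit of 18 passes through 21 distinct values, all of which fit in the range 1-52 (under the obvious numbering of the deck as 1-52). A quick verification:

```python
seq = [18]
while seq[-1] != 1:
    n = seq[-1]
    seq.append(3 * n + 1 if n % 2 else n // 2)   # the Collatz recursion

# 18, 9, 28, 14, ..., 2, 1: 21 distinct values, all valid card numbers
assert len(seq) == 21 == len(set(seq))
assert all(1 <= n <= 52 for n in seq)
```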
Peter Suber writes:
By the way, the single best knot trick I've ever found is at pp. 98-99 of Louis Kauffman's On Knots, a mathematical treatise listed below with the books on knot theory. I'm sure you've seen
the trick in which someone ties an overhand knot by crossing their arms before picking up the cord, and then uncrossing them. Kauffman shows you how to do the same trick without crossing
your arms first. The version of this trick in Ashley #2576 and Budworth 1977 [p. 151] is not nearly as good.
Work out how it is possible for yourselves! A link to the book is here.
[Edit: This magic trick does not rely on mathematics -- instead it violates an important mathematical fact, that the trefoil is knotted! The Chinese rings have a similar feel, but the
mathematics violated (linking number) is less deep.]
3 Here's a video of that trick: math.toronto.edu/~drorbn/Gallery/KnottedObjects/WaistbandTrick/… Actually, I've tried to reproduce this trick many many times, but I've never succeeded. The
trick also seems to imply that the trefoil knot is trivial, which is weird... – Kevin H. Lin Dec 26 '09 at 17:26
Magician: "Here is a deck of 27 cards. Select one, memorize it, put it back and shuffle ad libitum. Now name a number between 1 and 27 inclusive (=: N)." Then the magician deals the cards
face up into three heaps. You have to tell him in which heap the selected card lies, and he quickly gathers up the three heaps. This is done three times, then he hands you the deck, and you
have to count N cards from its back. The N'th card is flipped over, and it turns out to be the card you have originally selected.
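This is the classic Gergonne three-pile trick: writing $N-1$ in base 3 tells the magician how many heaps to stack above the selected heap at each gathering. A simulation sketch (one fixed gathering convention, counting positions from the top of the deck; real handling conventions vary):

```python
def deal(deck):
    # deal one card at a time into three heaps: heap h receives cards h, h+3, h+6, ...
    return [deck[h::3] for h in range(3)]

def gather(heaps, target, above):
    # stack the heaps so that `above` of the other two heaps sit on top of the target heap
    others = [h for i, h in enumerate(heaps) if i != target]
    order = others[:above] + [heaps[target]] + others[above:]
    return [card for heap in order for card in heap]

for N in range(1, 28):
    digits = [(N - 1) % 3, ((N - 1) // 3) % 3, (N - 1) // 9]  # base-3 digits of N-1
    for start in range(27):
        deck = list(range(27))        # the spectator's card is `start`, at position `start`
        for above in digits:
            heaps = deal(deck)
            target = next(i for i, h in enumerate(heaps) if start in h)
            deck = gather(heaps, target, above)
        assert deck[N - 1] == start   # the chosen card ends up N-th from the top
```

Each deal-and-gather fixes one more base-3 digit of the card's position, which is why three rounds pin it down exactly.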
Not so much a magic trick as a math trick, in that I can prove it works in theory but I have never tried it in practice.
Take a very long one-dimensional frictionless billiard table, with a wall at one end. Away from the wall, place a billiard ball with mass $10^{2n}$ for $n$ positive. Between that ball and
the wall, place another billiard ball with mass $1$. Then start the heavy ball rolling slowly towards the light one. Of course, they bounce, setting the light one traveling quickly towards
the wall, which it bounces off, and then it hits the heavy ball, etc., until all the momentum from the heavy ball has been transferred and it starts rolling away.
Assume that all collisions are perfectly elastic. Then at the end of the day, there will be finitely many collisions. Indeed, the number of collisions will calculate the digits of $\pi$, in
the sense that there will be $\lfloor \pi \times 10^n \rfloor$ collisions.
I much prefer this method of calculating $\pi$ to the probabilistic one.
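This is Galperin's "playing pool with $\pi$". Since positions don't affect the collision count in this idealized setting, the tally can be kept from velocities alone; a sketch for $n=1$ (mass ratio $10^2$, wall to the left of both balls):

```python
M, m = 100.0, 1.0       # heavy and light masses, ratio 10^(2n) with n = 1
V, v = -1.0, 0.0        # heavy ball rolls toward the wall; light ball at rest
count = 0
while not (0 <= v <= V):            # stop once both move away and light can't catch heavy
    if v > V:                       # balls approach: elastic ball-ball collision
        V, v = (((M - m) * V + 2 * m * v) / (M + m),
                ((m - M) * v + 2 * M * V) / (M + m))
    else:                           # light ball hits the wall and reverses
        v = -v
    count += 1

assert count == 31                  # floor(pi * 10^1) collisions
```

After a ball-ball collision the balls separate, so the events strictly alternate ball-ball, ball-wall, and no positional bookkeeping is needed.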
4 This reminds me of a joke that ends in a mathematician explaining his solution to a real world problem starting with "let $C$ be a spherical chicken..." – Mariano Suárez-Alvarez♦ Jan 4
'10 at 4:07
Another trick for calculating $\pi$, observed by David Boll (home.comcast.net/~davejanelle/mandel.html) and proven by Aaron Klebanoff (home.comcast.net/~davejanelle/mandel.pdf), is the
following. Let $z_0 = 0$; then let $z_j = z_{j-1}^2 + c$ where $c = -.75 + \epsilon i$ for some small number $\epsilon$. Then for $k \lt \pi/\epsilon + O(1)$, $z_k$ is in a circle of
radius 2 around the origin; for larger $k$ it's not. (Boll came across this while investigating the Mandelbrot set. There are other points near the boundary of the set that behave
similarly.) – Michael Lugo Jan 4 '10 at 16:39
Ask someone to lay out the 52 cards in a deck, face up, in 4 rows of 13 cards each, in any order the person wants. Then you can always pick 13 cards, one from each column, in such a way
as to get exactly one card of each denomination (that is, one ace, one deuce, ..., one king).
As a trick, it's not up there with sawing a woman in half, but its explanation does require Hall's Marriage Theorem.
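A sketch of the constructive side: shuffle a deck into 4 rows of 13, read off the 13 columns, and find the required system of distinct ranks with the standard augmenting-path matching. The matching always completes because each rank occupies only 4 cards, so any $k$ columns, holding $4k$ cards, meet at least $k$ distinct ranks (Hall's condition):

```python
import random

def pick_representatives(columns):
    # columns: 13 columns of 4 (rank, suit) cards; match ranks to columns
    match = {}                        # rank -> column index
    def augment(c, seen):
        for rank, _ in columns[c]:
            if rank not in seen:
                seen.add(rank)
                if rank not in match or augment(match[rank], seen):
                    match[rank] = c
                    return True
        return False
    assert all(augment(c, set()) for c in range(13))
    return match                      # 13 ranks, one column each

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
for _ in range(100):
    random.shuffle(deck)
    columns = [deck[c::13] for c in range(13)]   # 4 rows of 13, read by column
    match = pick_representatives(columns)
    assert sorted(match) == list(range(13))      # one card of each denomination
```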
3 Actually, Hall's Marriage Theorem has a constructive version: the augmenting-paths algorithm for finding a perfect matching in a bipartite graph, which runs in polynomial time. The
existence of this algorithm might help explain why the problem isn't so hard in practice... – Scott Aaronson Jul 13 '10 at 4:35
Two persons, A and B, perform this trick. The public (or one from the public) chooses two natural numbers and gives A the sum and B the product. A and B will ask each other, alternately,
the single question "Do you know the numbers?", answering only yes or no, until both find the numbers. There is a strategy such that for any input, and doing only this, A and B will
manage to find the original numbers.

I have never seen magicians actually performing this, but it is perfectly doable.
This was a problem on the shortlist of proposed problems for some international mathematical olympiad. Unfortunately I don't remember which. If someone remembers or finds it, tell us
please; I would also like to know.
1 If "the public" consists of mathematicians/wiseguys, the two natural numbers might have a few million digits, and it might take A and B a while to complete the trick. – Gerry Myerson
Jul 13 '10 at 4:52
I forgot the historical name for this and I'm pretty sure this is classical and well-known.
Consider a circular disk and remove an interior circular region, not necessarily concentric. In this annulus we play the following game. Start at any point $p_{1}$ of the outer boundary and
draw a line through this point which is tangent to the inner circle. This line intersects the outer circle at another point $p_2$. Now repeat the same procedure with $p_2$ and get $p_3$.
Iterating this procedure ad infinitum, we find that this sequence of points is either periodic or not. What's true is that the periodicity or lack of it is independent of the starting
point $p_1$.
I believe there is a proof involving the Lefschetz fixed point theorem on the torus, but any details on this and its history are more than welcome.
1 I believe you are referring to Poncelet's theorem. mathworld.wolfram.com/PonceletsPorism.html – Gjergji Zaimi Dec 26 '09 at 0:50
11 It is hard to see how this idea can be turned into an actual trick. It requires either infinite accuracy (if the procedure is done by drawing on paper) or lots of complicated
computation. Moreover, generically there is no periodicity, and the audience will not be very impressed by the prediction of non-periodicity. – Richard Stanley Dec 26 '09 at 16:37
I'm not sure I agree. Choosing the circles so that the periodicity is very low, say 3 or 4, and letting a computer do the calculation at the touch of a button for audience-member-chosen
initial points (a bit of a pain but definitely something you can accomplish nowadays), you can definitely turn this into something pretty interactive and fun. – Emilio Pisanty Jun 22
'12 at 11:27
I hope this contribution is appropriate; I think that a nice puzzle based on Hamming codes, discussed a little here: http://ocfnash.wordpress.com/2009/10/31/yet-another-prisoner-puzzle/
is the following:
A room contains a normal 8×8 chess board together with 64 identical coins, each with one “heads” side and one “tails” side. Two prisoners are at the mercy of a typically eccentric jailer who
has decided to play a game with them for their freedom. The rules of the game are as follows.

The jailer will take one of the prisoners (let us call him the “first” prisoner) with him into the aforementioned room, leaving the second prisoner outside. Inside the room the jailer will
place exactly one coin on each square of the chess board, choosing to show heads or tails as he sees fit (e.g. randomly). Having done this he will then choose one square of the chess board
and declare to the first prisoner that this is the “magic” square. The first prisoner must then turn over exactly one of the coins and exit the room. After the first prisoner has left the
room, the second prisoner is admitted. The jailer will ask him to identify the magic square. If he is able to do this, both prisoners will be granted their freedom.
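The standard solution is the XOR/Hamming-code trick: label the squares 0-63, let the first prisoner compute the XOR of the labels of all heads-up coins, and flip the coin whose label is that value XOR the magic square's label. A sketch:

```python
import random

def heads_xor(board):
    # XOR of the labels (0..63) of all coins currently showing heads
    x = 0
    for i, coin in enumerate(board):
        if coin:
            x ^= i
    return x

random.seed(0)
for _ in range(1000):
    board = [random.randint(0, 1) for _ in range(64)]
    magic = random.randrange(64)
    board[heads_xor(board) ^ magic] ^= 1   # first prisoner flips exactly one coin
    assert heads_xor(board) == magic       # second prisoner reads off the magic square
```

Flipping the coin at label $j$ changes the running XOR by exactly $j$, so the new XOR is $x \oplus (x \oplus m) = m$.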
So two points of note.
I did not read all the posts above in detail but did do a search for the Faro Shuffle and got no results... So:
This is a shuffle where all the cards interweave absolutely perfectly (so a perfect riffle shuffle). There's quite a lot of maths behind this. For instance, 8 shuffles takes you back to the
order you started shuffling the cards in. Martin Gardner talked about this a bit in at least one of his SA columns. The problem with the faro shuffle is it takes a long long time to learn...
personally well over a year, and that was with the benefit of having been a practicing amateur magician for a long time. Still, if interested, the book to look for is The Collected Works of
Alex Elmsley, which really lays the foundations for mathematical faro work...

Another trick I came across whilst working towards an Ergodic Theory exam uses the Birkhoff Ergodic Theorem at its core. You can read about it in these notes: http://
Here is a simple trick based on group theory. Ask a person to choose four numbers from 1 to 9 and write them in a row on a piece of paper. Pause for a moment and then write a number on a
piece of paper without letting the other person see what it is. Turn the paper over and place it on the table.
Now ask the person to choose two of the numbers from the list and put a line through them. Ask the person to compute a*b + a + b and put the result in the list to replace the two chosen numbers.
Continue to do this until there is only one remaining number. Turn over the paper and show that the numbers match.
The simplest way of explaining this is to show that the operation a*b + a + b is equivalent to multiplication under the transform T(x) = x + 1: (a*b + a + b) + 1 = (a + 1)(b + 1). If we
denote the operation a*b + a + b as a & b, this means that a & b is commutative and associative, just as multiplication is. For any list of numbers a1, ..., an, the final number can be
computed as (a1 + 1)(a2 + 1)...(an + 1) - 1.
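A sketch of the bookkeeping: the prediction is $(a_1+1)(a_2+1)\cdots(a_n+1)-1$, and since the operation is commutative and associative, any order of combining pairs gives the same result:

```python
import random

random.seed(0)
for _ in range(100):
    nums = [random.randint(1, 9) for _ in range(4)]
    prediction = 1
    for a in nums:
        prediction *= a + 1
    prediction -= 1                     # written down before the combining starts

    vals = nums[:]
    while len(vals) > 1:                # combine two numbers in an arbitrary order
        a = vals.pop(random.randrange(len(vals)))
        b = vals.pop(random.randrange(len(vals)))
        vals.append(a * b + a + b)
    assert vals[0] == prediction
```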
Here's another Fibonacci trick, from Benjamin & Quinn's "Proofs that really count".
The magician hands a volunteer a sheet of paper with a table whose rows are numbered from one to ten, plus a final row for the total. She asks him to fill in the first two rows with his
favorite two positive integers. She then asks him to fill in row three with the sum of the first two rows, row four with the sum of row two and row three, etcetera... She then hands him a
calculator and asks him to add up all ten numbers together. Before he's able to finish that, the magician has a quick look at the sheet of paper and announces the total. The magician then
asks the volunteer to divide row 10 by row 9 and truncate the answer at the second decimal digit. The volunteer performs the division and says: 1.61. And the magician: "Now turn over the
paper and look what I've written". The paper says: "I predict the number 1.61".
The first part of the trick uses the following well-known Fibonacci identity: $$\sum_{i=1}^{n} F_i = F_{n+2}-1.$$

Indeed, call $x$ the number in row 1 and $y$ the number in row 2. Then for $n \geq 3$, the number in row $n$ is $F_{n-2} x+F_{n-1} y$, where $F_n$ is the $n$-th Fibonacci number. So the
number in row 7 is $F_5 x + F_6 y=5x+8y$ and the total is $$x+y+\sum_{i=3}^{10} (F_{i-2} x+F_{i-1} y)= F_{10} x + (F_{11}-1) y=55x+88y$$ by the Fibonacci identity mentioned at the beginning.
Therefore all the magician has to do to find the total is multiply row 7 by the number 11.
The second part of the trick uses an inequality for the freshman sum ;-) of two fractions. That is, given positive fractions $\frac{a}{b}$ and $\frac{c}{d}$ such that $\frac{a}{b}<\frac{c}{d}
$ we have:
$$\frac{a}{b} < \frac{a+c}{b+d} < \frac{c}{d}$$
Just note that the number in row 9 is $13x+21y$ while the number in row 10 is $21x+34y$. Hence:
$$ 1.615 \dots =\frac{21x}{13x} < \frac{21x+34y}{13x+21y} < \frac{34y}{21y}=1.619 \dots $$
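Both halves of the trick are easy to check numerically: the ten-row total is always 11 times row 7, and the row-10/row-9 ratio always truncates to 1.61:

```python
for x in range(1, 50):
    for y in range(1, 50):
        rows = [x, y]
        for _ in range(8):
            rows.append(rows[-1] + rows[-2])         # rows 3..10
        assert sum(rows) == 11 * rows[6]             # total = 11 * (row 7)
        assert int(100 * rows[9] / rows[8]) == 161   # ratio truncates to 1.61
```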
I would like to thank all the contributors on this page. I have been putting together a new Math-a-Magic show for the 9-12 grade level and have found some fantastic material here. If I get a
decent video of the show I'll be sure to post a link here so you can play the "What concept is behind this trick?" game.
I have modified some of your ideas severely. For example: Craig Feinstein's suggestion was a commercial effect that asks the volunteer to pick one of a hundred different cities typed out on
ten cards. The volunteer finds the city's name on two different cards which the magician looks at casually. You can then instantly tell him the name of the city he has mentally picked.
In my version, I instruct him not to ever let me see the city's name on the cards and yet I can still easily predict his choice!
Here is my favorite trick based on a deep principle of set theory. Ok, maybe it is not too deep, but the results are astounding!

Taking a deck of cards, you mention you have a prediction about these cards. That means it is very important to give the cards a really random shuffle. You then give your volunteer half the
deck, and you both shuffle your half decks thoroughly. Tell your volunteer to take a small number (5-15) of cards from his half of the deck, turn them upside down and give them to you. You
do the same to him (it doesn't matter how many cards you turn upside down as long as there are some left in your hand). You then both shuffle the cards you have received into the deck in
your hands in their upside-down state. So at this point both people will have some cards right side up and some cards upside down. You will follow the same procedure two more times. It
doesn't matter how many or what cards he or you are turning upside down and giving away. After all this is done, put both half decks of cards together again (IMPORTANT: turn your entire
half deck over when you place it on top of his). Now you spread the cards out across a table top. They should be a seemingly random mix of upside-down and right-side-up cards. You then
unfold your prediction slip, which says something like: 11 cards will be black, 15 cards will be red, 6 will be clubs and 5 will be hearts, and the hearts will also form a royal flush!
They will be astounded by your amazingly detailed prediction. What happens is that all the face-up cards are the ones that were originally in your half deck. This trick is self-working. All
you do is pick out which cards you want in your half of the deck and place them at the top of the deck to start. Then just give him the random bottom half of the deck and keep the
pre-set ones.
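The invariant is easy to simulate: in each hand, a card is face up exactly when it started in the other half, and turning your half over at the end makes the face-up cards exactly your pre-set half. A sketch (packet sizes and the number of exchanges are arbitrary choices):

```python
import random

random.seed(0)
mine = [(card, False) for card in range(26)]         # your pre-set half, face down
his  = [(card, False) for card in range(26, 52)]

def exchange(src, dst, k):
    random.shuffle(src)                              # shuffling never flips a card
    packet = [src.pop() for _ in range(k)]
    dst.extend((card, not up) for card, up in packet)  # handed over turned upside down

for _ in range(3):
    exchange(his, mine, random.randint(5, 10))
    exchange(mine, his, random.randint(5, 10))

mine = [(card, not up) for card, up in mine]         # turn your entire half over at the end
face_up = {card for card, up in mine + his if up}
assert face_up == set(range(26))                     # exactly your original half is face up
```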
Any questions? Just email me at kevin@hallsofmagic.com
Apart from tricks based on numbers, there are topological objects whose properties can seem quite magical, like the Möbius strip or the unknot.
E.g. take a standard page of paper, show that it has two sides (number them with a pen, show that any straight pen path meets a boundary). Next, cut out a long strip from it (not needed of
course, but it adds to the drama), and ask the audience "and how many sides does this have?". They reply "two". Then you put the two small ends of the strip together to form a ring and you
ask "and now, how many sides?", they still reply "two!". At this point do a little diversion, like putting a pair of scissors on the table saying out loud "I'll use this in a minute". Now do
a half-twist with the strip before putting the small ends together and ask again "for the last time people, how many sides?". They answer "two!!", and you say "the magic has worked people,
there's only one side!" (you show that now the pen paths along the long direction never meet a boundary and come back). Most laymen are quite bemused. Now do two half-twists and ask again;
some won't dare an answer...
1 Have an assistant cut a cylinder in half along its median circle. Then cut a Mobius strip along its median. – Douglas Zare Jan 17 '10 at 23:08
You may ask the person to encode something by RSA; then you decode it (you have the private key).
To divide two 40-digit integers and give you the decimal result to 100 digits, you then use continued fractions to find the original fraction (reduced)
To compute pq and pr where p, q, r are prime; you then find p, q, r by the Euclidean algorithm (not very deep, but it's the best I've got)
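The second trick is continued-fraction rational reconstruction: 100 decimal digits pin the fraction down to within $10^{-100}$, far closer than any other fraction with a 40-digit denominator can get, so the convergent search recovers it exactly. A sketch with made-up 38-digit values (Python's `Fraction.limit_denominator` does the continued-fraction work):

```python
from fractions import Fraction

a = 31415926535897932384626433832795028841   # arbitrary 38-digit numerator
b = 27182818284590452353602874713526624977   # arbitrary 38-digit denominator

digits = 100
decimal = a * 10**digits // b                # the division result, to 100 decimal digits
approx = Fraction(decimal, 10**digits)

recovered = approx.limit_denominator(10**38) # search the continued-fraction convergents
assert recovered == Fraction(a, b)           # the original fraction, in lowest terms
```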
The coffee mug trick is also called the Philippine Wine Trick and should be related to the Dirac String Trick, which you can find by a web search, for example here and also in my
presentation Out of Line, where rotations in 3-space are related to the Projective Plane.
A knot trick, I am not sure you would call it magic, has been shown to children and academics in many places. It requires a pentoil knot of width 20" made of copper tubing, about 7mm in
diameter (made by a university workshop), shown in the following diagram:
It also needs some nice flexible boating rope. The rope is wrapped round the $x,y$ pieces according to the rule $$R=xyxyxy^{-1}x^{-1}y^{-1}x^{-1}y^{-1} $$ and the ends tied together, as in
the following picture:
A member of the audience is then invited to come up and manipulate the loop of rope off the knot, starting by turning it upside down. This justifies the rule $R=1$. Of course the rule is
the relation for the fundamental group of the complement of the pentoil, which can, for the right audience, be deduced from the relations at each crossing given by the diagram
and can be easily demonstrated with the knot and rope.
It is also of interest to have a copper trefoil around to compare the relations. One warning: the use of rope does not really model the fundamental group, so be careful with a demo for the
figure eight knot!
I did the demo for one teenager and he said: "Where did you get that formula?" This demo knot has been well travelled, for many different types of audience; on one occasion the airline lost
my luggage with the rope and I had to ask the taxi from the airport to stop at a hardware store for me to buy some clothesline. I devised this trick for an undergraduate course in
knot theory in the late 1970s.
Bounded Gaps Between Primes
Posted by Tom Leinster
Guest post by Emily Riehl
Whether we grow up to become category theorists or applied mathematicians, one thing that I suspect unites us all is that we were once enchanted by prime numbers. It comes as no surprise then that a
seminar given yesterday afternoon at Harvard by Yitang Zhang of the University of New Hampshire reporting on his new paper “Bounded gaps between primes” attracted a diverse audience. I don’t believe
the paper is publicly available yet, but word on the street is that the referees at the Annals say it all checks out.
What follows is a summary of his presentation. Any errors should be ascribed to the ignorance of the transcriber (a category theorist, not an analytic number theorist) rather than to the author or
his talk, which was lovely.
Prime gaps
Let us write $p_1, p_2, \ldots$ for the primes in increasing order. We know of course that this list is countably infinite. A prime gap is an integer $p_{n+1}-p_n$. The Prime Number Theorem
tells us that $p_{n+1}-p_n$ is approximately $\log(p_n)$ as $n$ approaches infinity.
The twin primes conjecture, on the other hand, asserts that
$\liminf_{n \to \infty} (p_{n+1}-p_n) =2$
i.e., that there are infinitely many pairs of twin primes for which the prime gap is just two. A generalization, attributed to Alphonse de Polignac, states that for any positive even integer, there
are infinitely many prime gaps of that size. This conjecture has been neither proven nor disproven in any case. These conjectures are related to the Hardy-Littlewood conjecture about the distribution
of prime constellations.
The strategy
The basic question is whether there exists some constant $C$ so that $p_{n+1}-p_n \lt C$ infinitely often. Now, for the first time, we know that the answer is yes…when $C = 7 \times 10^7$.
Here is the basic proof strategy, supposedly familiar in analytic number theory. A subset $H = \{ h_1,\ldots, h_k \}$ of distinct natural numbers is admissible if for all primes $p$ the number of
distinct residue classes modulo $p$ occupied by these numbers is less than $p$. (For instance, taking $p=2$, we see that the gaps between the $h_j$ must all be even.) If this condition were not
satisfied, then it would not be possible for each element in a collection $\{ n + h_1,\ldots, n +h_k\}$ to be prime. Conversely, the Hardy-Littlewood conjecture contains the statement that for every
admissible $H$, there are infinitely many $n$ so that every element of the set $\{ n + h_1,\ldots, n +h_k\}$ is prime.
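Admissibility is cheap to test: for a $k$-element set, only primes $p \le k$ can possibly occupy all residue classes, so a finite check suffices. A sketch:

```python
def is_admissible(H):
    k = len(H)
    for p in range(2, k + 1):
        if all(p % d for d in range(2, p)):          # p is prime (trial division)
            if len({h % p for h in H}) == p:         # every residue class mod p is hit
                return False
    return True

assert is_admissible([0, 2])          # the twin-prime pattern
assert is_admissible([0, 2, 6])       # a prime-triple pattern
assert not is_admissible([0, 2, 4])   # occupies every residue class mod 3
```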
Let $\theta(n)$ denote the function that is $\log(n)$ when $n$ is prime and 0 otherwise. Fixing a large integer $x$, let us write $n \sim x$ to mean $x \le n \lt 2x$. Suppose we have a positive real
valued function $f$—to be specified later—and consider two sums:
$S_1 = \sum_{n \sim x} f(n)$
$S_2 = \sum_{n \sim x} \biggl( \sum_{j=1}^k \theta(n+h_j)\biggr) f(n)$
Then if $S_2 \gt (\log 3x) S_1$ for some function $f$ it follows that $\sum_{j=1}^k \theta(n+h_j) \gt \log 3x$ for some $n \sim x$ (for any $x$ sufficiently large) which means that at least two terms
in this sum are non-zero, i.e., that there are two indices $i$ and $j$ so that $n+h_i$ and $n+h_j$ are both prime. In this way we can identify bounded prime gaps.
Some details
The trick is to find an appropriate function $f$. Previous work of Daniel Goldston, János Pintz, and Cem Yıldırım suggests defining $f(n) = \lambda(n)^2$ where
$\lambda(n) = \sum_{d \mid P(n), d \lt D} \mu(d) \Bigl(\log \Bigl(\frac{D}{d}\Bigr)\Bigr)^{k+\ell} \quad\quad P(n) = \prod_{j=1}^k(n+h_j)$
where $\ell \gt 0$ and $D$ is a power of $x$.
Now think of the sum $S_2 - (\log 3x) S_1$ as a main term plus an error term. Taking $D = x^\vartheta$ with $\vartheta \lt \frac{1}{4}$, the main term is negative, which won’t do. When
$\vartheta = \frac{1}{4} + \omega$ the main term is okay, but the question remains how to bound the error term.
Zhang’s work
Zhang’s idea is related to work of Enrico Bombieri, John Friedlander, and Henryk Iwaniec. Let $\vartheta = \frac{1}{4} + \omega$ where $\omega = \frac{1}{1168}$ (which is “small but bigger
than $\epsilon$”). Then define $\lambda(n)$ using the same formula as before but with an additional condition on the index $d$, namely that $d$ divides the product of the primes less than
$x^{\omega}$. In other words, we only sum over square-free $d$ with small prime factors.
The point is that when $d$ is not too small (say $d \gt x^{1/3}$) then $d$ has lots of factors. If $d = p_1\cdots p_b$ and $R \lt d$ there is some $a$ so that $r= p_1\cdots p_a \lt R$ and $p_1\cdots
p_{a+1} \gt R$. This gives a factorization $d = r q$ with $R/ x^\omega \lt r \lt R$ which we can use to break the sum over $d$ into two sums (over $r$ and over $q$) which are then handled using
techniques whose names I didn’t recognize.
On the size of the bound
You might be wondering where the number 70 million comes from. This is related to the $k$ in the admissible set. (My notes say $k = 3.5 \times 10^6$ but maybe it should be $k = 3.5 \times 10^7$.) The
point is that $k$ needs to be large enough so that the change brought about by the extra condition that $d$ is square free with small prime factors is negligible. But Zhang believes that his
techniques have not yet been optimized and that smaller bounds will soon be possible.
Posted at May 14, 2013 8:44 PM UTC
Re: Bounded Gaps Between Primes
Nice summary!
The Prime Number Theorem tells us that $p_{n+1}−p_n$ is approximately $\log(p_n)$ as $n$ approaches infinity.
Of course, it’s better to say $p_{n+1}−p_n$ is $\log(p_n)$ on average as $n$ approaches infinity.
By the way, in 2004, Daniel Goldston, János Pintz and Cem Yıldırım were able to show that there are infinitely many pairs of primes at most 16 apart… if something called the Elliott–Halberstam
conjecture is true.
This is a really nice expository article about the whole issue:
• K. Soundararajan, Small gaps between prime numbers: the work of Goldston-Pintz-Yıldırım.
Posted by: John Baez on May 15, 2013 3:38 PM
Re: Bounded Gaps Between Primes
Posted by: Emily Riehl on May 15, 2013 4:09 PM
Re: Bounded Gaps Between Primes
Thanks so much for this nice summary! It makes me feel like I can almost understand it.
In other news I see this arXiv paper, claiming that the ternary Goldbach conjecture has been proven (by H. A. Helfgott, École Normale Supérieure): Major arcs for Goldbach’s theorem.
If anyone can summarize those 131 pages that would be awesome!
Posted by: stefan on May 15, 2013 4:24 PM
Re: Bounded Gaps Between Primes
Stefan wrote:
If anyone can summarize those 131 pages that would be awesome!
I can’t do that, but here’s some chat. On Google+, Terence Tao wrote:
Busy day in analytic number theory; Harald Helfgott has complemented his previous paper (obtaining minor arc estimates for the odd Goldbach problem) with major arc estimates, thus finally
obtaining an unconditional proof of the odd Goldbach conjecture that every odd number greater than five is the sum of three primes. (This improves upon a result of mine from last year showing
that such numbers are the sum of five or fewer primes, though at the cost of a significantly lengthier argument.) As with virtually all successful partial results on the Goldbach problem, the
argument proceeds by the Hardy-Littlewood-Vinogradov circle method; the challenge is to make all the estimates completely effective and to optimise all parameters (which, among other things,
requires a certain amount of computer-assisted computation).
I wrote:
Today +Harald Helfgott publicized his proof of the odd Goldbach conjecture:
Every odd number greater than 5 can be expressed as the sum of 3 primes.
I’m optimistic that it’s correct, not because I understand it, but because Helfgott has a good track record and +Terence Tao, an expert on these matters, sounds optimistic.
Actually Helfgott’s proof only works for odd numbers greater than 10^30. But the result has already been checked by computer for odd numbers smaller than this!
Before Helfgott proved it for odd numbers greater than $10^{30}$, the odd Goldbach conjecture was known to be true for numbers greater than $e^{3100}$. This meant it could in principle be checked
by a computer… but not in practice now, because that number was too big.
Interestingly, Helfgott’s proof for odd numbers greater than $10^{30}$ also relies on computer calculations! As part of the work, David J. Platt needed to check hundreds of thousands of facts
that would be true if the Generalized Riemann Hypothesis holds. I don’t think I want to explain this, since you can see the basic idea here:
• Generalized Riemann Hypothesis, Wikipedia.
But, briefly, this hypothesis says that all the zeros of certain functions called Dirichlet L-functions that lie in a certain strip of the complex plane actually lie on a certain line.
In a conversation last May here on G+, Helfgott said:
Very informally and off the strictest record […] friends say […] “this is now done modulo 2-years-and-£100000 worth of computer time”. I think this is very roughly right. It is of course
possible that the result will be improved or complemented, either by myself or by others, before anybody puts in very serious computer resources to the task.
It seems that Platt did the calculations sooner and more cleverly than expected!
In case you’re wondering, the odd Goldbach conjecture has long been considered much easier than the original Goldbach conjecture: every even number greater than 2 can be expressed as the sum of 2
primes. Tao says brand-new insights would be needed to crack this one.
Harald Helfgott replied:
Hi John -
Actually, what happened is that I improved both my minor arcs results (hence the new version of that paper posted yesterday) and my ideas towards major arc results (included in the new paper,
Major Arcs…, also posted yesterday). This meant that the computations that had to be done went further by only a factor of 2 or so than those that Platt had already done for his thesis. (This
translates into a factor of 8 or so in computer time.)
I should also say that Platt is, of course (as you say), clever, and a pleasure to work with. Also, our joint efforts to get people to give us more computer time bore fruit thanks to the
generosity of several institutions and individuals.
Let me add that my proof really works starting at $10^{27}$ or so, or, with some modifications, perhaps even a little below that. I’ve assumed $n \ge 10^{30}$, which is both more than I need and less
than what has been checked, so as to give me a wide berth in case of minor slips.
Posted by: John Baez on May 15, 2013 8:28 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Thank you, John! This is great stuff, and I’ll check back at the Google+ conversation for any future comments.
Posted by: stefan on May 22, 2013 6:58 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
If we follow Elkies
The Harvard math curriculum leans heavily towards the systematic, theory-building style; analytic number theory as usually practiced falls in the problem-solving camp. This is probably why,
despite its illustrious history (Euclid, Euler, Riemann, Selberg, … ) and present-day vitality, analytic number theory has rarely been taught here. … Now we shall see that there is more to
analytic number theory than a bag of unrelated ad-hoc tricks, but it is true that partisans of contravariant functors, adèlic tangent sheaves, and étale cohomology will not find them in the
present course. Still, even ardent structuralists can benefit from this course…. An ambitious theory-builder should regard the absence thus far of a Grand Unified Theory of analytic number theory
not as an insult but as a challenge. Both machinery- and problem-motivated mathematicians should note that some of the more exciting recent work in number theory depends critically on symbiosis
between the two styles of mathematics.
Is there any whiff of such symbiosis in this latest work?
Posted by: David Corfield on May 16, 2013 8:27 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I have a question. Maybe it’s foolish, but everyone is sometimes foolish in front of God, so please don’t laugh at me. If p(n+1) - p(n) < C, then 1 - p(n)/p(n+1) < C/p(n+1). We know that
lim(C/p(n+1)) = 0, but lim(1 - p(n)/p(n+1)) = 1 - lim(p(n)/p(n+1)) > 0, so the left side is > 0 and the right side is 0. The inequality (left < right) then fails, so p(n+1) - p(n) < C cannot be correct.
Posted by: freepublic on May 16, 2013 11:08 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Let me pretend for a moment that $p_n$ is shorthand for the function $n \times (C-\epsilon)$. Then $p_{n+1} - p_n \lt C$ and as you observe $1 - \frac{p_n}{p_{n+1}} \lt \frac{C}{p_{n+1}}$. The limit
as $n \to \infty$ of the right hand side is zero but so is the left! Note that $\frac{p_n}{p_{n+1}} = \frac{n}{n+1}$.
It is a little more complicated to phrase limiting statements in our context, when the $p_n$ are primes. To say that there are infinitely many prime gaps less than $C$ is to say that the liminf of
$p_{n+1}-p_n$ is less than $C$ as $n \to \infty$.
The prime number theorem, which I slightly misquoted above, says that the function $\pi(n)$ which counts the number of primes less than $n$ is asymptotically equal to $\frac{n}{\log(n)}$, meaning if
you take the limit as $n \to \infty$ of the ratio of these functions you get one. As John points out above, the correct way to express this colloquially is to say that the prime gaps $p_{n+1}-p_n$
are $\log(p_n)$ on average as $n$ gets large.
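The asymptotics quoted here are easy to eyeball numerically; a small sketch of my own (a plain sieve, nothing from the thread) comparing $\pi(N)$ with $N/\log N$ and the average gap with $\log N$:

```python
import math

def primes_below(N):
    """Sieve of Eratosthenes: list of primes below N."""
    sieve = bytearray([1]) * N
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(N**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, N, i)))
    return [i for i in range(N) if sieve[i]]

N = 1_000_000
ps = primes_below(N)
pi_N = len(ps)
print(pi_N * math.log(N) / N)    # ratio pi(N) / (N / log N), slowly tending to 1
avg_gap = (ps[-1] - ps[0]) / (pi_N - 1)
print(avg_gap / math.log(N))     # average gap vs log N, also near 1
```

At $N = 10^6$ both ratios are already within about 10% of 1, though the convergence is famously slow.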
Posted by: Emily Riehl on May 16, 2013 3:55 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Thanks for your explanation. I still have some confusion. If p(n) = n*(C-e), then lim(p(n)/p(n+1)) = lim(n/(n+1)) = 1, so left = 1 - 1 = 0 and right = 0. When p(n) > 10^C, log(p(n)) > C. Yes, if it’s
on average, there are some gaps less than C, but more gaps above C. The ratio becomes smaller and smaller; when p(n) goes to infinity, the ratio becomes zero, and no gap is less than C.
Posted by: freepublic on May 16, 2013 5:27 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Sorry, I accidentally deleted some comments here. Freepublic posted two identical copies of his/her “thanks for your explanation…” comment, one of which I deleted — but an unintended side-effect was
that replies to that comment also got removed.
Posted by: Tom Leinster on May 17, 2013 1:50 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Never mind; I made some typing errors and then posted twice.
My opinion is: if we construct a regular sequence X with x(n+1) - x(n) = log(n), the total of x whose gaps
Another way: we can divide the axis like this: 0____10^C______infinity. We can move all p(n) whose gaps are less than C to [0, 10^C], and all p(n) whose gaps are bigger than C to
[10^C, infinity), because p(n+1) - p(n) = log(n) on average. Then the total of primes in [0, 10^C] equals the total of primes in [10^C, infinity). But how could [0, 10^C] contain infinitely many primes?
Posted by: freepublic on May 18, 2013 3:23 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
If we construct a regular sequence X with x(n+1) - x(n) = log(n), the total of x whose gaps are less than C is finite. X can be transformed into the prime sequence by changing order and values.
Posted by: freepublic on May 18, 2013 3:38 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
You have mentioned that “A subset H={h1,…,hk} of distinct natural numbers is admissible if for all primes p the number of distinct residue classes modulo p occupied by these numbers is less than p.
(For instance, taking p=2, we see that the gaps between the hj must all be even.) If this condition were not satisfied, then it would not be possible for each element in a collection {n+h1,…,n+hk} to
be prime.” Could you give me an example to understand? Does it mean that for p=2, H={1,3,5,7,9,…}, hk=?
Posted by: freepublic on May 25, 2013 6:33 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Hi! No, {1,3,5,etc.} would not be admissible, as for p=3 all modulo classes are occupied: 1 is 1 mod 3, 3 is 0 mod 3, 5 is 2 mod 3. What this means is that whatever number you add {1,3,5,etc.} to,
one of the resulting numbers is going to be 0 mod 3, i.e. a multiple of 3. If you try 10, you get 11, 13, 15 (boom!). So if the set H is not admissible, you know that (at least for large enough n
that you add) you will get a non-prime number. I find it quite remarkable that this modest necessary condition should also be a sufficient one for getting infinitely many prime constellations, should
the mentioned conjecture be true.
Posted by: Edwin Steiner on May 27, 2013 7:36 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Does it mean that H={1,2} and H={1,2,3,4} are admissible, because 3>2 and 5>3? And if p=2 and H={1}, are there infinitely many n such that {2+1}, {4+1}, {6+1}, {10+1}… are primes?
Posted by: freepublic on May 28, 2013 11:58 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Posted by: freepublic on May 28, 2013 11:59 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
It does not work like “pick a prime p, then find an admissible set H for p”. To be admissible, H must satisfy the condition for all primes p. (Obviously, if you have k elements in H you need only
check p <= k, because you cannot occupy >k modulo classes with k numbers.) So {1,2} is not admissible as it is {0,1} mod 2. {1} is admissible and yields the trivial constellations of a single prime
each. See http://mathworld.wolfram.com/PrimeConstellation.html for valid examples.
Posted by: Edwin Steiner on May 28, 2013 6:58 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
This is another explanation including references: http://primes.utm.edu/glossary/xpage/PrimeConstellation.html
Posted by: Edwin Steiner on May 28, 2013 7:09 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Posted by: freepublic on May 29, 2013 4:29 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
“Whether we grow up to become category theorists or applied mathematicians, one thing that I suspect unites us all is that we were once enchanted by prime numbers.”
I highly disagree with this. I am interested in about 90% of all mathematics but was never interested in elementary features (chanting?) of prime numbers (though I nowadays appreciate large parts of
advanced number theory, because I learned of its relation to other things in mathematics, like algebraic geometry), partly because the community of followers likes to emphasize tricks. In fact, as
a schoolboy, although I was very good at mathematics, I did not even consider becoming a mathematician until I learned about areas of mathematics which do not involve calculation with numbers (or
with indeterminates obeying the same rules as numbers). I find it quite annoying when people argue (without any arguments) that the usual numbers are the basic, natural, God-given subject, unlike
other areas of mathematics, which are supposedly artificial and so on. This kind of highbrow aristocracy within mathematics is nothing for any mathematician to be proud of.
Posted by: Zoran Skoda on May 16, 2013 2:54 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I agree with Zoran that the natural god-given virtue of prime numbers is greatly exaggerated. It is true we all spent a few pleasant hours in 5th or 6th grade factoring numbers like 1250 and 210 but
never numbers like 91 or 391. After that it gets boring pretty quickly.
Probably the interest of any area of mathematics lies in how hard the unsolved problems are. Twin primes for example are only interesting because it is so very hard to prove things about them. The
proofs such as Zhang’s proof are amazingly intricate machines made out of parts with their own history and connections to other areas of mathematics. All of this effort is more interesting than the
object it is directed toward.
Posted by: Daniel Goldston on May 18, 2013 3:15 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
It’s very interesting to read such a statement by a researcher. What also puzzles me in popular texts is when they go on about the amazing apparent randomness of the primes. In my (non-expert) view,
the sieve of Eratosthenes is basically a systematic way of removing all regularity (I realize the tension in this statement), so I do not find it surprising that the primes show pseudo-random
properties. What I find fascinating about primes is the mysterious connections between addition and multiplication. In my intuition, primes are natural inhabitants of the “multiplicative world”,
where they are simple, and it is striking that they turn out to be so exceedingly complicated when looked at in the “additive world” (regarding sums of primes, gaps, etc.).
Posted by: Edwin Steiner on June 1, 2013 11:37 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Posted by: Tom Ellis on June 1, 2013 6:34 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I see what you are getting at, but in fact my statement was not meant as something that could be made precise. It is just my reaction on a naive level to the usual popular exposition of the primes,
where they first explain how primes are defined and then they show the pattern of the first 100 or so and say “Look how randomly they are scattered! Isn’t that amazing?”, and I think to myself “Well,
in the step before you just removed any obvious regularity, so what’s the big deal?”. There may well be some truly amazing pseudo-random properties that are just not mentioned in these texts. The
Chebyshev bias is an example that there are at least interesting statistical deviations from pseudo-randomness.
Posted by: Edwin Steiner on June 2, 2013 10:28 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Posted by: timur on May 23, 2013 4:26 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
The preprint is available from the Annals. You or your institution need to be a subscriber.
Oh, and Emily, $k=3.5\times 10^6$ is correct.
Posted by: David Roberts on May 21, 2013 11:39 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
The k=3500000 is correct: http://www.wolframalpha.com/input/?i=PrimePi%28n%29-+PrimePi%283500000%29++%3D+%283500000%29
Posted by: Ali on May 28, 2013 7:10 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I can’t say I saw the real point of primes until I came across algebraic number theory and algebraic geometry. Their importance then became much more vivid.
Certainly it’s true that the idea is profound enough, but also simple enough, that most educated laymen will know vaguely what it is about. Which is why the media seem inordinately keen on them.
Posted by: mozibur ullah on May 28, 2013 10:30 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
In case anyone isn’t following, there’s discussion at the Secret Blogging Seminar where, by reasonably simple reasoning, the bound is down to 57 554 086.
However, the key number to reduce is $k_0 = 3.5\times 10^6$, which hasn’t been touched; all reductions so far have been in how the bound $k_0$ gets applied.
Posted by: David Roberts on May 31, 2013 9:37 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I do not quite see that there are two primes n+hi and n+hj. I could only see that there are two primes n1+hi and n2+hj with |n1 - n2| < x.
Also, how do we know that $\pi(70000000) - \pi(3500000) \gt 3500000$?
Posted by: Daniel on May 31, 2013 6:46 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I think that I got the first question. And am still trying to find the answer of the second. Thanks.
Posted by: Daniel on May 31, 2013 11:38 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Also, how do we know that $\pi (70000000)-\pi (3500000) \gt 3500000$?
There are probably a bunch of ways. Looking to Google and Wikipedia for help, one can look up bounds on the prime-counting function, where one finds inequalities such as
$\frac{x}{\ln x}\left(1+\frac{1}{\ln x}\right) \lt \pi(x) \lt \frac{x}{\ln x}\left(1+\frac{1}{\ln x}+\frac{2.51}{(\ln x)^2}\right).$
Call the analytic function on the left $F_1(x)$ and the analytic function on the right $F_2(x)$. According to these inequalities, we have
$\pi(70000000) - \pi(3500000) \gt F_1(70000000) - F_2(3500000) \gt 4089630 - 250259 = 3839371$
and probably cruder inequalities would work as well, but this gets the job done.
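For anyone who wants to see the arithmetic, here is a quick sketch (my own; it just evaluates the two bounds quoted above):

```python
import math

def F1(x):
    """Lower bound for pi(x): x/ln x * (1 + 1/ln x)."""
    L = math.log(x)
    return x / L * (1 + 1 / L)

def F2(x):
    """Upper bound for pi(x): x/ln x * (1 + 1/ln x + 2.51/(ln x)^2)."""
    L = math.log(x)
    return x / L * (1 + 1 / L + 2.51 / L**2)

gap_lower = F1(70_000_000) - F2(3_500_000)
print(int(gap_lower))   # about 3.84 million, comfortably above 3.5 million
```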
Posted by: Todd Trimble on June 1, 2013 2:54 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
In the zone [x, 3x], when x tends to infinity, the zone becomes [infinity, 3*infinity]; then p(n+1) - p(n) = infinity, and it satisfies the strategy condition. We think there are 2 primes in the zone
[x, 3x], but p(n+1) - p(n) = infinity, which is equivalent to saying that p(n+1) does not exist. Actually there is only 1 prime in [x, 3x], but we mistakenly think there are 2 primes, so p(n) - p(n) = 0.
Posted by: freepublic on July 12, 2013 8:15 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
I see they’ve got the bound down from 70 million to 285,232.
Posted by: David Corfield on June 10, 2013 12:09 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
And now the gap is down to 60 744, using $k_0=6329$ (down from 3.5 million in Emily’s post).
Posted by: David Roberts on June 17, 2013 3:44 AM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
This morning, I went to an excellent talk on this by Ben Green. One aspect that particularly caught my attention was admissible sets.
Emily already said this in her post above, but I’ll repeat it in Ben’s vivid way. He said: a set $\{h_1, \ldots, h_k\}$ is admissible if there is a chance that there might be infinitely many numbers
$n$ such that $n + h_1, \ldots, n + h_k$ are all prime.
The conjecture is then: for any admissible set, there really are infinitely many numbers $n$ such that $n + h_1, \ldots, n + h_k$ are all prime.
(Of course, he made the phrase “there is a chance” precise: as Emily said, it means that for any prime $p$, the image of $\{h_1, \ldots, h_k\}$ in $\mathbb{Z}/p\mathbb{Z}$ is a proper subset.
Actually, it doesn’t matter whether we say “prime $p$” or “number $p \geq 2$” here.)
The twin prime conjecture is the case $\{h_1, h_2\} = \{0, 2\}$ of the conjecture above. One thing neither Emily nor Ben pointed out is that the conjecture above would also imply the Green–Tao
theorem, that there are arbitrarily long arithmetic progressions of primes.
To see this, just note that for any $n \geq 1$, the sequence $1\cdot n!,\quad 2\cdot n!,\quad \ldots,\quad n\cdot n!$ is admissible. Indeed, for a prime $p$ greater than $n$, obviously these can’t
represent all the residue classes mod $p$; and for a prime $p$ less than or equal to $n$, these are all zero mod $p$, so again don’t represent all the residue classes. So the conjecture above implies
that there are infinitely many arithmetic progressions of length $n$ and step size $n!$.
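This is easy to check mechanically for small $n$; a throwaway sketch of my own, using the shortcut that only primes $p \le k$ need checking:

```python
import math

def is_admissible(H):
    """For every prime p <= len(H), H must miss a residue class mod p;
    larger primes cannot be fully covered by len(H) numbers anyway."""
    small_primes = [p for p in range(2, len(H) + 1)
                    if all(p % d for d in range(2, int(p**0.5) + 1))]
    return all(len({h % p for h in H}) < p for p in small_primes)

for n in range(1, 9):
    H = [k * math.factorial(n) for k in range(1, n + 1)]
    # primes p <= n divide n!, so every element is 0 mod p; primes
    # p > n face more residue classes than H has elements
    assert is_admissible(H)
print("the sets {k * n! : 1 <= k <= n} are admissible for n = 1..8")
```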
But I’m most interested in something more general. The conjecture above (does it have a name?) is of the form “if there’s no obvious reason for there not to be infinitely many primes satisfying
such-and-such, then there really are infinitely many primes satisfying such-and-such”. Here “obvious reason” refers to very simple considerations mod $p$.
Are there more general statements of this type? I mean, some precise conjecture of the form “if it’s not obviously false that there are infinitely many primes satisfying such-and-such, it’s true”?
Posted by: Tom Leinster on June 20, 2013 1:43 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
The most general conjecture which I know along these lines is Schinzel’s hypothesis H: For any polynomials $f_1(n)$, $f_2(n)$, … $f_r(n)$, if there is no “obvious” obstacle to all the $f_i$ taking
prime values simultaneously, then they do so infinitely often.
You want to be a little careful with guesses like this. I would guess that there are only finitely many primes of the form $2^{n^2}+3$, even though there is no obvious obstruction, because the
“probability that $t$ is prime” is $1/\log t$ and $\sum 1/\log(2^{n^2}+3)$ converges.
A little more subtly, there is no obvious obstruction to $2^n+1$ being prime, and $\sum 1/\log(2^n+1)$ diverges. However, $2^n+1$ being prime implies that $n$ is a power of $2$, and
$\sum 1/\log(2^{2^k}+1)$ converges. So I would guess only finitely many primes of the form $2^n+1$, but for a reason which seems hard to make into a general condition.
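The convergence heuristic is easy to see numerically; an illustrative sketch of my own comparing partial sums of the “probability that $t$ is prime” weight $1/\log t$:

```python
import math

def sum_sq(N):
    # terms 1/log(2^{n^2}+3) ~ 1/(n^2 log 2): a convergent series
    return sum(1 / math.log(2**(n * n) + 3) for n in range(1, N + 1))

def sum_lin(N):
    # terms 1/log(2^n+1) ~ 1/(n log 2): harmonic-like, divergent
    return sum(1 / math.log(2**n + 1) for n in range(1, N + 1))

print(sum_sq(200) - sum_sq(100))   # tiny tail, roughly 0.007
print(sum_lin(200) - sum_lin(100)) # roughly log(2)/log(2) = 1, and it keeps growing
```

Doubling $N$ adds almost nothing to the first sum but a fixed amount to the second, which is exactly the harmonic-series behaviour behind the guesses above.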
Posted by: David Speyer on July 12, 2013 6:29 PM | Permalink | Reply to this
Re: Bounded Gaps Between Primes
Thanks so much for this nice summary! It makes me feel like I can almost understand it. I’ll check back at the Google+ conversation for any future comments. I can understand Zhang’s strategy now and
investigate more details. Good job!
Posted by: Melissa on April 14, 2014 8:49 AM | Permalink | Reply to this
math problem-simplify
December 6th 2008, 04:06 PM
math problem-simplify
I can multiply this easily, but the problem says to simplify it. I have the answer and NO idea how to get there. Can someone help to simplify:
Please show steps, I'm lost! Thanks!
December 6th 2008, 05:48 PM
Are you sure you didn't type it wrong? Maybe it is $(3x^2-2x-7)(8x^2-8x+7)$.
December 6th 2008, 05:50 PM
It was meant to be $(3x^3-2x^2-7)(2x^2-8x+7)$ I think.
December 6th 2008, 06:48 PM
Are you sure you typed it right? Is it $(3x^2-2x-7)(8x^2-8x+7)$ or $(3x^3-2x^2-7)(8x^2-8x+7)\;\;?$
Is this $(3x^2-2x^2-7)(8x^2-8x+7)$ the right question in your book? If it is, then
$(3x^2-2x^2-7)(8x^2-8x+7) = (x^2-7)(8x^2-8x+7);$
now finish it.
The answer in your book is wrong then.
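Since the original expression never made it into the post, here is a generic sketch (my own helper; polynomials as coefficient lists, lowest degree first) that expands the product suggested above:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (3x^2 - 2x^2 - 7) simplifies to (x^2 - 7); multiplied by (8x^2 - 8x + 7):
print(poly_mul([-7, 0, 1], [7, -8, 8]))
# [-49, 56, -49, -8, 8], i.e. 8x^4 - 8x^3 - 49x^2 + 56x - 49
```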
December 6th 2008, 07:12 PM
Robin Howard
find the GCF
Stochastic Population Models in Ecology and Epidemiology
Results 1 - 10 of 19
- SIAM Review , 2000
Cited by 200 (1 self)
Abstract. Many models for the spread of infectious diseases in populations have been analyzed mathematically and applied to specific diseases. Threshold theorems involving the basic reproduction
number R0, the contact number σ, and the replacement number R are reviewed for the classic SIR epidemic and endemic models. Similar results with new expressions for R0 are obtained for MSEIR and SEIR
endemic models with either continuous age or age groups. Values of R0 and σ are estimated for various diseases including measles in Niger and pertussis in the United States. Previous models with age
structure, heterogeneity, and spatial structure are surveyed.
- Supplement U. S. National Report IUGG , 1987
Cited by 19 (10 self)
Abstract Stochastic processes with multiplicative noise have been studied independently in several different contexts over the past decades. We focus on the regime, found for a generic set of control
parameters, in which stochastic processes with multiplicative noise produce intermittency of a special kind, characterized by a power law probability density distribution. We present a review of
applications, highlight the common physical mechanism and summarize the main known results. The distribution and statistical properties of the duration of intermittent bursts are also characterized
in details. 1 1
, 2008
Cited by 13 (5 self)
The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for
constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems
which are specified by a simulation algorithm to generate sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently
developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its
applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second
example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae. 1. Introduction. A
, 1981
Cited by 8 (3 self)
This paper studies population models which have the following three ingredients: populations are divided into local subpopulations, local population dynamics are noniinear and random events occur
locally in space. In this setting local stochastic phenomena have a systematic effect on average population density and this effect does not disappear in large populations. This result is an outcome
of the interaction of the three ingredients in the models and it says that stochastic models of systems of patches can be expected to give results for average population density that differ
systematically from those of deterministic models. The magnitude of these differences is related to the degree of nonlinearity of local dynamics and the magnitude of local variability. These results
explain those obtained from a number of previously published models which give conclusions that differ from those of deterministic models. Results are also obtained that show how stochastic models of
systems of patches may be simplified to facilitate their study. 1. INTR~OUCTI~N The chances of survival and reproduction for an individual organism
- In (Ed. Fred Ghassemi) Proceedings of the International Congress on Modelling and Simulation, 2001
Cited by 7 (3 self)
Abstract: Diffusion models are widely used in ecology, and in more general population biology contexts, for predicting population-size distributions and extinction times. They are often used because
they are particularly simple to analyse and give rise to explicit formulae for most of the quantities of interest. However, whilst diffusion models are ubiquitous in the literature on population
models, their use is frequently inappropriate and often leads to inaccurate predictions of critical quantities such as persistence times. This paper examines diffusion models in the context in which
they most naturally arise: as approximations to discrete-state Markovian models, which themselves are often more appropriate in describing the behaviour of the populations in question, yet are
difficult to analyse from both an analytical and a computational point of view. We will identify a class of Markovian models (called asymptotically density dependent models) that permit a diffusion
approximation through a simple limiting procedure. This procedure allows us to immediately identify the most appropriate approximating diffusion and to decide whether the diffusion approximation, and
hence a diffusion model, is appropriate for describing the population in question. This will be made possible through the remarkable work of Tom Kurtz and Andrew Barbour, which is frequently cited in
the applied probability literature, but is apparently not widely accessible to practitioners. Their results will be presented here in a form that most easily allows their direct application to
population models. We will also present results that allow one to assess the accuracy of diffusion approximations by specifying for how long and over what ranges the underlying Markovian model is
faithfully approximated. We will explain why diffusion models are not generally useful for estimating extinction times, a serious shortcoming that has been identified by other authors using empirical
, 2007
Cited by 2 (0 self)
The well–known logistic model has been extensively investigated in deterministic theory. There are numerous case studies where such type of nonlinearities occur in Ecology, Biology and Environmental
Sciences. Due to the presence of environmental fluctuations and a lack of precision of measurements, one has to deal with effects of randomness on such models. As a more realistic modeling, we
suggest nonlinear stochastic differential equations (SDEs) dX(t) = [(ρ + λX(t))(K − X(t)) − µX(t)]dt + σX(t)^α |K − X(t)|^β dW(t) of Itô type to model the growth of populations or innovations X,
driven by a Wiener process W and positive real constants ρ, λ, K, µ, α, β ≥ 0. We discuss well-posedness, regularity (boundedness) and uniqueness of their solutions. However, explicit expressions for
analytical solution of such random logistic equations are rarely known. Therefore one has to resort to numerical solution of SDEs for studying various aspects like the time–evolution of growth
patterns, exit frequencies, mean passage times and impact of fluctuating growth parameters. We present some basic aspects of adequate numerical analysis of these random extensions of these models
such as numerical regularity and mean square convergence. The problem of keeping reasonable boundaries for analytic solutions under discretization plays an essential role for practically meaningful
models, in particular the preservation of intervals with reflecting or absorbing barriers. A discretization of the continuous state space can be circumvented by appropriate methods. Balanced implicit
methods (see Schurz, IJNAM 2 (2), p. 197-220, 2005) are used to construct strongly converging approximations with the desired monotone properties. Numerical studies can bring out salient features of
the stochastic logistic models (e.g. almost sure monotonicity, almost sure uniform boundedness, delayed initial evolution, or earlier points of inflection compared to the deterministic model).
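To make the kind of simulation the abstract discusses concrete, here is a minimal explicit Euler-Maruyama sketch of the quoted SDE. The parameter values are my own illustrative choices, and the crude positivity clamp stands in for the balanced implicit methods the paper actually advocates:

```python
import math
import random

def euler_maruyama(x0, T, steps, rho, lam, K, mu, sigma, alpha, beta, seed=0):
    """Explicit Euler-Maruyama path for
    dX = [(rho + lam*X)(K - X) - mu*X] dt + sigma * X^alpha * |K - X|^beta dW."""
    rng = random.Random(seed)
    dt = T / steps
    x = x0
    for _ in range(steps):
        drift = (rho + lam * x) * (K - x) - mu * x
        diffusion = sigma * x**alpha * abs(K - x)**beta
        x += drift * dt + diffusion * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)  # crude positivity guard, not a boundary-preserving scheme
    return x

# Illustrative parameters (mine, not the paper's): pure logistic drift toward
# the carrying capacity K = 1 with small multiplicative noise.
x_end = euler_maruyama(x0=0.1, T=10.0, steps=1000,
                       rho=0.0, lam=1.0, K=1.0, mu=0.0,
                       sigma=0.1, alpha=1.0, beta=1.0)
print(x_end)   # settles near K, since the noise vanishes at the boundary
```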
, 810
Cited by 1 (0 self)
We show that the simplest stochastic epidemiological models with spatial correlations exhibit two types of oscillatory behaviour in the endemic phase. In a large parameter range, the oscillations are
due to resonant amplification of stochastic fluctuations, a general mechanism first reported for predator-prey dynamics. In a narrow range of parameters, which includes many infectious diseases that confer long-lasting immunity, the oscillations persist for infinite populations. This effect is apparent in simulations of the stochastic process in systems of variable size, and can be understood
from the phase diagram of the deterministic pair approximation equations. The two mechanisms combined play a central role in explaining the ubiquity of oscillatory behaviour in real data and in
simulation results of epidemic and other related models. PACS numbers: 87.10.Mn; 87.19.ln; 05.10.Gg
Cycles are a very striking behaviour of prey-predator systems also seen in a variety of other
host-enemy systems — a case in point is the pattern of recurrent epidemics of many endemic infectious diseases [1]. The controversy in the literature over the driving mechanisms
, 2011
Inference for partially observed Markov process models has been a longstanding methodological challenge with many scientific and engineering applications. Iterated filtering algorithms maximize the
likelihood function for partially observed Markov process models by solving a recursive sequence of filtering problems. We present new theoretical results pertaining to the convergence of iterated
filtering algorithms implemented via sequential Monte Carlo filters. This theory complements the growing body of empirical evidence that iterated filtering algorithms provide an effective inference
strategy for scientific models of nonlinear dynamic systems. The first step in our theory involves studying a new recursive approach for maximizing the likelihood function of a latent variable model,
when this likelihood is evaluated via importance sampling. This leads to the consideration of an iterated importance sampling algorithm which serves as a simple special case of iterated filtering,
and may have applicability in its own right.
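The likelihood-via-importance-sampling step mentioned above can be illustrated on a toy latent-variable model. Everything here — the Gaussian model, the proposal, the tolerance — is an assumed example for illustration, not the paper's setting:

```python
import math
import random

# Toy model: x ~ N(0, 1), y | x ~ N(x, 1), so the exact likelihood of y is N(y; 0, 2).
# We estimate it by importance sampling with the (assumed) proposal q = N(y/2, 1).
def is_likelihood(y, n=200000, seed=1):
    rng = random.Random(seed)

    def npdf(z, m, v):  # Gaussian density with mean m, variance v
        return math.exp(-(z - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    total = 0.0
    for _ in range(n):
        x = rng.gauss(y / 2, 1.0)                      # proposal draw
        w = npdf(x, 0.0, 1.0) / npdf(x, y / 2, 1.0)    # importance weight: prior / proposal
        total += w * npdf(y, x, 1.0)                   # weighted observation density
    return total / n

y = 1.3
exact = math.exp(-y * y / 4) / math.sqrt(4 * math.pi)  # density of N(0, 2) at y
assert abs(is_likelihood(y) - exact) < 0.01
```

Iterated filtering builds on exactly this kind of estimator, replacing the one-shot importance sampler with a recursive sequence of filtering problems.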
, 2010
Time series analysis for nonlinear dynamical systems with applications to modeling of infectious diseases by
Willingboro Prealgebra Tutor
Find a Willingboro Prealgebra Tutor
Thank you for taking the time to get to know me a little bit better. I am a 2013 graduate of Bloomsburg University, with a degree in Interpersonal Communication. During my time at Bloomsburg
University, I was an ACT 101 student mentor.
10 Subjects: including prealgebra, ASVAB, public speaking, elementary (k-6th)
I graduated from West Point with a Bachelor of Science degree in Engineering Management, and I currently teach mathematics, physics and engineering at an independent school in the Philadelphia
suburbs. I have tutored middle and high school students in the areas of PSAT/SAT/ACT preparation, math (Al...
19 Subjects: including prealgebra, English, calculus, GRE
...The emphasis for a theory student, though, would be on writing and analysis. Learn to compose in your own way, at your own pace, from a classically trained composer. I graduated in May 2010 from West Chester University of Pennsylvania with a Bachelor of Music in composition.
8 Subjects: including prealgebra, algebra 1, algebra 2, Java
...I enjoy explaining math problems and take great satisfaction when a student of mine understands the material. I have an ability to teach complex problems in varied ways in order to obtain full
comprehension. I tailor my teaching style to the student's learning style.
19 Subjects: including prealgebra, geometry, algebra 1, SAT math
...That, coupled with the techniques I have for explaining and the patience I have with my students combine to produce success. I have been an adjunct Professor in Mathematics at University of
Phoenix's Phila. campus for almost 10 years, teaching Algebra 1 & 2. The info above and my detailed credentials vouch for my grasp of the subject matter.
16 Subjects: including prealgebra, physics, algebra 1, algebra 2
Modelling temperatures
October 15th 2010, 07:44 PM #1
Junior Member
Nov 2009
Modelling temperatures
the low temp in chicago is 25 degrees, occurs in january. x = 1(january)
average high is 75 degrees which occurs in july. x=7(july)
determine the constants so that the function $f(x)=asin(b(x-c))+d$ models the data, where f(x) gives the temperature as a function of x.
this is what I have:
$f(x) = -25\sin(\frac{\pi}{6}x-1)+50$
it just doesn't look right to me. though, when i go to 2nd => calc => value, it gives me close to the exact values, off only by a hundredth or less.
October 16th 2010, 03:30 AM #2
try this ...
$\displaystyle f(x) = -25\cos\left[\frac{\pi}{6}(x-1)\right]+50$
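As a quick check of the suggested answer (a hypothetical Python snippet, not part of the thread): the cosine form hits both data points exactly, since cos(0) = 1 at the January minimum and cos(π) = −1 at the July maximum.

```python
import math

def f(x):
    # Suggested model: amplitude 25, period 12 months, minimum shifted to January (x = 1)
    return -25 * math.cos(math.pi / 6 * (x - 1)) + 50

assert abs(f(1) - 25) < 1e-9   # January low: -25*cos(0) + 50 = 25
assert abs(f(7) - 75) < 1e-9   # July high:  -25*cos(pi) + 50 = 75
```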
The molecule problem: Determining conformation from pairwise distances
Results 1 - 10 of 20
- SIAM J. OPTIMIZATION , 1995
Cited by 70 (7 self)
Distance geometry problems arise in the interpretation of NMR data and in the determination of protein structure. We formulate the distance geometry problem as a global minimization problem with
special structure, and show that global smoothing techniques and a continuation approach for global optimization can be used to determine solutions of distance geometry problems with a nearly 100%
probability of success.
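For intuition on why sparsity and noise are the real difficulty, note that with a *complete* and exact distance matrix the configuration is recovered (up to rigid motion) by classical multidimensional scaling. The sketch below assumes NumPy and is an illustration of that easy case, not the algorithm of the cited paper:

```python
import numpy as np

# Classical MDS: recover a point configuration from a complete, exact distance matrix.
def classical_mds(D, dim):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered points
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the `dim` largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, 2)
D2 = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
assert np.allclose(D, D2, atol=1e-8)             # all pairwise distances reproduced
```

Once most entries of D are unknown or corrupted, this closed-form route is unavailable, which is what forces the global-optimization and continuation machinery of the papers above.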
, 2004
Cited by 61 (9 self)
A d-dimensional framework is a straight line realization of a graph G in R^d. We shall only consider generic frameworks, in which the co-ordinates of all the vertices of G are algebraically independent. Two frameworks for G are equivalent if corresponding edges in the two frameworks have the same length. A framework is a unique realization of G in R^d if every equivalent framework can be obtained from it by an isometry of R^d. Bruce Hendrickson proved that if G has a unique realization in R^d then G is (d + 1)-connected and redundantly rigid. He conjectured that every realization of a (d + 1)-connected and redundantly rigid graph in R^d is unique. This conjecture is true for d = 1 but was disproved by Robert Connelly for d ≥ 3. We resolve the remaining open case by showing that Hendrickson's conjecture is true for d = 2. As a corollary we deduce that every realization of a 6-connected graph as a 2-dimensional generic framework is a unique realization. Our proof is based on a new inductive characterization of 3-connected graphs whose rigidity matroid is connected.
- SIAM Journal on Optimization , 1995
Cited by 60 (0 self)
The molecule problem is that of determining the relative locations of a set of objects in Euclidean space relying only upon a sparse set of pairwise distance measurements. This NP-hard problem has applications in the determination of molecular conformation. The molecule problem can be naturally expressed as a continuous, global optimization problem, but it also has a rich combinatorial structure. This paper investigates how that structure can be exploited to simplify the optimization problem. In particular, we present a novel divide-and-conquer algorithm in which a large global
optimization problem is replaced by a sequence of smaller ones. Since the cost of the optimization can grow exponentially with problem size, this approach holds the promise of a substantial
improvement in performance. Our algorithmic development relies upon some recently published results in graph theory. We describe an implementation of this algorithm and report some results of its
performance on a sample ...
- SIAM Journal on Scientific Computing , 1999
Cited by 35 (1 self)
A subspace adaptation of the Coleman-Li trust region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse
Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust region method are as strong as those of its full-space version.
, 1995
Cited by 28 (6 self)
We show that a continuation approach to global optimization with global smoothing techniques can be used to obtain ε-optimal solutions to distance geometry problems. We show that determining an ε-optimal solution is still an NP-hard problem when ε is small. A discrete form of the Gaussian transform is proposed based on the Hermite form of Gaussian quadrature. We show that the modified transform can be used whenever the transformed functions cannot be computed analytically. Our numerical results show that the discrete Gauss transform can be used to obtain ε-optimal solutions for general distance geometry problems, and in particular, to determine the three-dimensional structure of protein fragments.
- Journal of the ACM , 1996
Cited by 26 (0 self)
A number of current technologies allow for the determination of inter-atomic distance information in structures such as proteins and RNA. Thus, the reconstruction of a three-dimensional set of points
using information about its inter-point distances has become a task of basic importance in determining molecular structure. The distance measurements one obtains from techniques such as NMR are
typically sparse and error-prone, greatly complicating the reconstruction task. Many of these errors result in distance measurements that can be safely assumed to lie within certain fixed tolerances.
But a number of sources of systematic error in these experiments lead to inaccuracies in the data that are very hard to quantify; in effect, one must treat certain entries of the measured distance
matrix as being arbitrarily "corrupted." The existence of arbitrary errors leads to an interesting sort of error--correction problem --- how many corrupted entries in a distance matrix can be
efficiently corre...
- Applied Mathematics Division, Argonne National Labs , 1997
Cited by 26 (3 self)
Abstract. We study the performance of the dgsol code for the solution of distance geometry problems with lower and upper bounds on distance constraints. The dgsol code uses only a sparse set of
distance constraints, while other algorithms tend to work with a dense set of constraints either by imposing additional bounds or by deducing bounds from the given bounds. Our computational results
show that protein structures can be determined by solving a distance geometry problem with dgsol and that the approach based on dgsol is significantly more reliable and efficient than multi-starts
with an optimization code.
- Department of Combinatorics and Optimization, University of Waterloo , 2009
Cited by 20 (10 self)
AMS Subject Classification: The sensor network localization, SNL, problem in embedding dimension r, consists of locating the positions of wireless sensors, given only the distances between sensors
that are within radio range and the positions of a subset of the sensors (called anchors). Current solution techniques relax this problem to a weighted, nearest, (positive) semidefinite programming,
SDP, completion problem, by using the linear mapping between Euclidean distance matrices, EDM, and semidefinite matrices. The resulting SDP is solved using primal-dual interior point solvers, yielding
an expensive and inexact solution. This relaxation is highly degenerate in the sense that the feasible set is restricted to a low dimensional face of the SDP cone, implying that the Slater constraint
qualification fails. Cliques in the graph of the SNL problem give rise to this degeneracy in the SDP relaxation. In this paper, we take advantage of the absence of the Slater constraint qualification
and derive a technique for the SNL problem, with exact data, that explicitly solves the corresponding rank restricted SDP problem. No SDP solvers are used. For randomly generated instances,
, 2007
Cited by 18 (5 self)
A multi-graph G on n vertices is (k,ℓ)-sparse if every subset of n ′ ≤ n vertices spans at most kn ′ − ℓ edges. G is tight if, in addition, it has exactly kn − ℓ edges. For integer values k and ℓ ∈
[0,2k), we characterize the (k,ℓ)-sparse graphs via a family of simple, elegant and efficient algorithms called the (k,ℓ)-pebble games.
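The counting condition in this abstract can be checked directly by brute force over vertex subsets. The sketch below is a hypothetical illustration, exponential in n — the whole point of the cited (k, ℓ)-pebble games is to decide the same property efficiently. Single-vertex subsets are skipped, since k·1 − ℓ can be negative there and conventions vary on that case.

```python
from itertools import combinations

# Brute-force (k, l)-sparsity check on a (multi)graph given as an edge list
# over vertices 0..n-1. A subset S may span at most k*|S| - l edges.
def is_sparse(n, edges, k, l):
    return all(
        sum(1 for (u, v) in edges if u in S and v in S) <= k * len(S) - l
        for size in range(2, n + 1)
        for S in map(set, combinations(range(n), size))
    )

def is_tight(n, edges, k, l):
    return is_sparse(n, edges, k, l) and len(edges) == k * n - l

triangle = [(0, 1), (1, 2), (0, 2)]
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
assert is_tight(3, triangle, 2, 3)      # the triangle is (2,3)-tight: 3 = 2*3 - 3
assert not is_sparse(4, k4, 2, 3)       # K4 spans 6 > 2*4 - 3 = 5 edges
```

The (k, ℓ) = (2, 3) case checked here is the Laman count governing generic rigidity in the plane, which ties this entry back to the framework-uniqueness results above.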
- THE ENCYCLOPEDIA OF OPTIMIZATION , 2001
Cited by 17 (1 self)
... In this article we survey some results and provide references about these problems for the following matrix properties: positive semidefinite matrices, Euclidean distance matrices, completely
positive matrices, contraction matrices, and matrices of given rank. We treat mainly optimization and combinatorial aspects.
Treatise On Analysis Vol-Ii
2 PARTICULAR CASES AND EXAMPLES 253
We shall use this local definition of a Haar measure in Chapter XIX to construct a left Haar measure on a Lie group. Here we note the following consequence of (14.2.4):
(14.2.5) Let G be a locally compact group, H a discrete normal subgroup of G, and π : G → G/H the canonical homomorphism. Also let V be an open neighborhood of the neutral element of G such that the restriction of π to V is a homeomorphism of V onto the neighborhood π(V) of the neutral element of G/H (12.11.2). Let λ be a left Haar measure on G. If μ is the image under π|V of the restriction λ|V of λ to V, then μ is the restriction to π(V) of a left Haar measure on G/H.
Every open set in π(V) is of the form π(U), where U ⊂ V is open, and the relation π(s)π(U) ⊂ π(V) is equivalent to sU ⊂ V; hence it follows immediately from the definitions that μ satisfies the condition of (14.2.4).
(14.2.6) The mapping φ : t ↦ e^{2πit} is a strict morphism (12.12.7) of R onto the compact group U of complex numbers of absolute value 1, by virtue of (9.5.2) and (9.5.7). The kernel of φ is the discrete subgroup Z consisting of the integers, and U may therefore be canonically identified with the quotient group R/Z = T (also called the 1-dimensional torus or the additive group of real numbers modulo 1). Apply (14.2.5) to the case where V = ]−1/2, 1/2[; bearing in mind that a Haar measure μ on U must be diffuse (14.2.3) and that the complement of φ(V) in U consists of a single point, we see that a function f on U is μ-integrable if and only if the function t ↦ f(e^{2πit}) is Lebesgue-integrable on ]−1/2, 1/2[, and that we then have ∫ f dμ = ∫_{−1/2}^{1/2} f(e^{2πit}) dt.
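The formula in (14.2.6) invites a quick numerical sanity check (a hypothetical Python snippet, not from the text): integrating over U through the parametrization t ↦ e^{2πit} on ]−1/2, 1/2[ gives an integral of total mass 1 that is invariant under translation by any element of U.

```python
import cmath
import math

# Haar integral on U = {z in C : |z| = 1} via the parametrization t -> e^{2*pi*i*t}
# on ]-1/2, 1/2[, approximated by the midpoint rule (illustrative only).
def haar_integral(f, n=100000):
    h = 1.0 / n
    return sum(f(cmath.exp(2j * math.pi * (-0.5 + (k + 0.5) * h))) * h
               for k in range(n))

# Left invariance: the integral of z -> f(u*z) equals the integral of f for any u in U.
f = lambda z: z.real ** 2
u = cmath.exp(0.7j)
a = haar_integral(f)
b = haar_integral(lambda z: f(u * z))
assert abs(a - b) < 1e-6
assert abs(a - 0.5) < 1e-6  # integral of (Re z)^2 over U is 1/2
```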
(14.2.7) Let G₁, G₂ be two locally compact groups, and μ₁ (resp. μ₂) a left Haar measure on G₁ (resp. G₂). Then μ₁ ⊗ μ₂ is a left Haar measure on G₁ × G₂.
For each function f ∈ 𝒦(G₁ × G₂) and each (s₁, s₂) ∈ G₁ × G₂ we have ∫ f(s₁x₁, s₂x₂) d(μ₁ ⊗ μ₂)(x₁, x₂) = ∫ dμ₁(x₁) ∫ f(s₁x₁, s₂x₂) dμ₂(x₂) = ∫ f d(μ₁ ⊗ μ₂), by the left invariance of μ₂ and then of μ₁.