content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Emergence of a Small-World Functional Network in Cultured Neurons
The functional networks of cultured neurons exhibit complex network properties similar to those found in vivo. Starting from random seeding, cultures undergo significant reorganization during the
initial period in vitro, yet despite providing an ideal platform for observing developmental changes in neuronal connectivity, little is known about how a complex functional network evolves from
isolated neurons. In the present study, evolution of functional connectivity was estimated from correlations of spontaneous activity. Network properties were quantified using complex measures from
graph theory and used to compare cultures at different stages of development during the first 5 weeks in vitro. Networks obtained from young cultures (14 days in vitro) exhibited a random topology,
which evolved to a small-world topology during maturation. The topology change was accompanied by an increased presence of highly connected areas (hubs) and network efficiency increased with age. The
small-world topology balances integration of network areas with segregation of specialized processing units. The emergence of such network structure in cultured neurons, despite a lack of external
input, points to complex intrinsic biological mechanisms. Moreover, the functional network of cultures at mature ages is efficient and highly suited to complex processing tasks.
Author Summary
Many social, technological and biological networks exhibit properties that are neither completely random, nor fully regular. They are known as complex networks and statistics exist to characterize
their structure. Until recently, such networks have primarily been analyzed as fixed structures, which enable interaction between their components (nodes). The present work is one of the first
empirical studies investigating the adaptation of complex networks [1]. Network evolution is particularly important for applying complex network analysis to biological systems, where the evolution of
the network reflects the biological processes that drive it. Here, we characterize the functional networks obtained from neurons grown in vitro. Network properties are described at seven day
intervals during the neurons' maturation period. Initially, neurons formed random networks, which spontaneously reorganized to a ‘small-world’ architecture. The ‘small-world’ concept derives from the
study of social networks, where it is referred to as ‘six-degrees of separation’: the connection of any two individuals by as few as six acquaintances. In brain networks, this translates to rapid
interaction between neurons, mediated by a few links between locally connected clusters (cliques) of neurons. This architecture is considered optimal for efficient information processing and its
spontaneous emergence in cultured neurons is remarkable.
Citation: Downes JH, Hammond MW, Xydas D, Spencer MC, Becerra VM, et al. (2012) Emergence of a Small-World Functional Network in Cultured Neurons. PLoS Comput Biol 8(5): e1002522. doi:10.1371/
Editor: Olaf Sporns, Indiana University, United States of America
Received: December 1, 2011; Accepted: April 1, 2012; Published: May 17, 2012
Copyright: © 2012 Downes et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work was funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant No. EP/D080134/1. http://www.epsrc.ac.uk/funding/Pages/default.aspx. The funders had no
role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The organizational properties of biological, technological and social systems are increasingly being characterized by representing them as abstract networks of interacting components and quantifying
non-random features of their structure [2], [3], [4], [5], [6]. Many real-world networks have an organization (topology) that is neither completely random, nor completely regular. Termed complex
networks, these typically afford excellent integration between their constituent parts yet they also provide tightly interconnected subnetworks that segregate efficient within-group interaction. An
example is social networks, for which a seminal study [7] revealed that any two individuals in the world could communicate via only a small number (~6) of mutual acquaintances. Such networks are
sparse – only a tiny proportion of the world's population are associated, yet they are incredibly well-connected. The phenomenon has been termed ‘small-world’ – hence the concept of a small-world
For neuronal connectivity, the abstract network (graph-theoretic) approach to analysis has allowed common organizational principles to be identified at both the macroscale level of whole brain
imaging [8], [9], [10], [11], and the microscale level of connections between individual neurons [12], [13]. Importantly, this form of analysis enables the relationship between neuronal network
organization and (whole or partial) brain function to be investigated. There are numerous complex network statistics for assessing the non-random properties of these abstract networks (for review see
[14]), each of these statistics enable direct comparison of results from diverse experiment modalities and over a range of species and scales [2], [5]. Moreover, properties may also be compared with
those of networks from other domains [15]. Two important metrics are the level of integration and segregation; high levels of which are found in random and lattice networks, respectively. Since
small-world networks have high levels of both properties, the extent to which a given network approximates or deviates from small-worldness may be evaluated by considering the balance between
integration and segregation [16], [17]. This balance has become an important benchmark for the assessment of neuronal networks and the small-world topology has been found at multiple scales over a
range of species in both structural [16], [18] and functional [9], networks. Moreover, its influence on network efficiency [8] and robustness [9] has been demonstrated, and deviation from the
small-world topology has been associated with abnormal or decreased brain function [8], [19], [20], [22], [23].
The focus of the present study is the development of complex network properties within cultures of neurons, grown in vitro. Unlike in-vivo brain networks, where the range of experimental conditions
is typically constrained by the availability of subjects with a given condition, or strict regulation regarding experimental manipulation, cultures of dissociated neurons grown on multi-electrode
arrays (MEAs) provide an experimental platform for the long-term investigation and manipulation of neuronal cells. Such neurons spontaneously form connections [24], [25] and non-random properties
have been found in the resulting structural network [13]. Moreover, cultures share several important characteristics with their in vivo counterparts [26], [27], [28], for review see [25].
Consequently, these preparations are increasingly being used in investigations of cellular and network processes that underlie complex cognitive functions [29], [30], [31], [32], [33] and as models
of pathophysiological states (e.g. epilepsy and stroke [34]). Importantly, since the cultures have no pre-built infrastructure, they allow the network formation to be observed - making them
well-suited to investigating neuronal network development in a living biological system.
Cultured neurons and investigating network function
Two aspects of the cultures that are of particular interest are their structural (anatomical) circuitry and the interactions which take place over this circuitry, both determining the computational
capacity of the underlying network. Whilst cultures are typically too dense for accurate observation of their structural connectivity, analysis of functional connectivity provides a probabilistic
estimation of the relationship between distributed neuronal units [2], thereby enabling spatio-temporal interactions between areas of the network to be measured throughout experiments. This provides
a useful means to investigate the network properties of cultures, particularly since functional connectivity estimated over certain timescales may contain information about the underlying structural
network [35].
Existing literature indicates that the functional network properties of cortical cultures change during maturation [36] and following stimulation [29], [37], [38]. However, such studies have focused
on changes in the expected link-level properties such as the mean strength and metric distance of connections [36], or the proportion of links which are strengthened or weakened following stimulation
[37], [38]. These aggregate measures capture gross changes in global connectivity, but they do not reflect the organizational features of the network, e.g. the distribution of properties amongst the
neural units, or whether there are groups of neural units that are more densely connected than others. Analysis of such organizational features would reveal the architecture of the network, enabling
investigation into which interactions the network could support and how the network organization changes under different experimental conditions. Importantly, by assessing the complex network
properties, the relevance of results from cultures to investigations of whole-brain networks would be increased.
Reports that rigorously compare culture's complex network properties under different experimental conditions are very sparse. Mature cultures were assessed in [39] and networks from cultures subject
to an in vitro glutamate injury model of epileptiform activity were assessed in [34]. The utility of cultures for investigating changes in cognitive function, characterizing drug effects and modeling
disease states, could be greatly extended by applying complex network statistics to quantify the influence of experimental manipulation on the network architecture. Moreover, comparison with results
from in vivo networks may reveal basic organizational principles common to both.
Experiments utilizing cultures can be undertaken across a range of ages, yet little is known about whether developmental changes occur in culture's complex network properties. Questions such as when
and which non-random properties are present, their stability over time and the variability between cultures and their ages remain largely unanswered. The nature of such spontaneously occurring
changes in a culture's functional network are important a priori knowledge for assessing experimental outcomes using complex network statistics. Moreover, by analyzing these ‘known’ conditions, a
framework can be established for evaluating a variety of experimental conditions, including those resulting from embodying a culture in a closed loop system. [40], [41], [42], [43].
The density at which cultures are seeded exerts an important influence on the rate of maturation. Dense cultures mature faster than their sparse equivalents, and they demonstrate bursting activity
earlier in development [44]. For the purpose of the present paper, dense cultures were deemed preferable, since their use enabled network properties to be measured earlier in development than would
have been possible on much sparser cultures. Additionally, to investigate changes in the functional network properties during culture maturation, maintaining consistency in plating parameters was
important to minimize differences in cultures structural properties. Such differences would have complicated the analysis and interpretation of results. Therefore, cultures at a fixed density were
used (those described in [44] as ‘dense’). At ~1,500 to 6,500 cells within the ~1.6 mm^2 recording area of the MEA, the cells in such cultures form a monolayer. Moreover, they can be maintained for
many months [45] and the density is comparable to that used by other groups (typically ~2,500–3,000 cells per mm^2 [24], [30], [36], [42], [46], [47]).
The present study establishes the baseline network statistics for cultures at specified stages of development and uses them to characterize culture maturation. The topological, spatial and
performance properties of functional networks captured every 7 days (7 to 35 days in vitro [DIV]) were compared using a population of 10 cultures. The study is one of the first to investigate
functional connectivity in an evolving complex system. Here, the evolution of network properties is a counterpart of biological processes shaping the culture's development.
Methodological considerations
Since the graph-theoretic approach and use of complex network statistics is a relatively novel method for investigating functional connectivity in cultures, the key methodological decisions are
described next.
Applying network connectivity analysis to multi-electrode array data.
Although both structural and functional neuronal networks can be explored using graph theory [5], [14], the present study concentrates on functional networks. The main steps involved in a
graph-theoretic analysis of neuronal networks are described in [5]. Figure 1 illustrates their application to neuron cultures (or other in-vitro preparations utilizing multi-electrode recordings). At
step 1, the nodes of the network are defined: for the present study, potential nodes were the 59 electrodes (channels) of the MEA. At step 2, functional links are defined, for example using activity
recorded from the electrodes: a computationally straightforward technique estimating pair-wise correlation of spike-times recorded via MEA electrodes [34] was used for the links herein. Many
techniques exist to estimate dependence between time-series [48], [49] and, whilst the choice is in the hands of the experimenter, the decision may influence the interpretation of results. Regardless
of the chosen technique, it is important to consider the time-period over which links are estimated, particularly with respect to the form of activity that will be used to define inter-node
connections. Due the non-stationary mixture of high frequency bursting and low-frequency ‘tonic’ activity found in cultures [44], the links herein were defined over two time-scales:
Figure 1. Steps in functional connectivity analysis of multi-electrode array data.
Steps 1–3 and step 5 are based on recommendations from Bullmore & Sporns (Nature Reviews Neuroscience, 2009). Steps 4, 6 and 7 refer to techniques specific to analysis of culture activity recorded
from multi-electrode arrays (MEAs, example pictured top right). The 8×8 grid indicates the recording area of the MEA (inset: close-up of two electrodes with visible neurons in their vicinity).
Firstly, at short time-scales (hundreds of milliseconds), connectivity was assessed during each network-wide burst, a threshold was then applied to include only the links between highly related nodes
[5] (Step 3). Secondly, to filter out inter-burst fluctuations in activity levels, the ‘persistent’ network infrastructure was estimated over a longer time- scale (20 minutes) based on the frequency
with which links were identified over the set of burst-based (‘transient’) networks (step 4). The application of a threshold at step 3 reduces the complexity of the analysis, and is useful if link
‘strength’ is not the focus of the study. However, selection of an appropriate threshold is important for the interpretation of results.
To compare networks from a sequence of experimental ‘conditions’, the development of the network itself may be of equal importance to the development of its topology. However, methods for
characterizing functional connectivity principally focus upon static networks. Analysis of networks evolving over time is more challenging; network evolution involves the birth and death of links and
in some cases, nodes themselves. Consequently, it is not desirable to fix the number of nodes, or adjust the link definition threshold to achieve a pre-determined connection density (c.f. [9], [20]).
Thus, for the present study, a relative threshold (based on the specificity of the cross-covariance peak) determined whether a given link was included in the network or regarded as ‘noise’. Once all
potential links had been assessed, only those nodes with a connection to at least one other node were included in the network.
The dual time-scale approach to network definition (Figure 1, Step 4) results in two types of network graph, which enables assessment of a culture's network activity over different timescales. For
the work herein, analysis of the topological and non-topological properties of the longer time-scale persistent networks allowed structural and spatial properties to be investigated every 7 DIV,
thereby characterizing the network development. Additionally, at short time-scales, the set of each cultures' transient networks allowed the activity that took place over the networks to be analyzed.
This enabled performance and reliability metrics to be estimated.
A number of topological metrics may be calculated and from these the complex network statistics [2], [5], [14], [16], [18] may be derived (see Materials and Methods). In order to compare the
persistent network properties at each age, both local (node related) and network-wide statistics were calculated. Table S1 provides definitions of all complex network measures used, many of which
were described in [14]. The magnitude of the topological properties from a given network are dependent on the number of nodes (n, referred to herein as network ‘size’), the number of links (m) and
the resultant edge density (ξ). To calculate complex network statistics, empirical network properties are compared against those expected in equivalent random (or lattice) null hypothesis networks
[14]. These comparison networks have the same number of nodes and links, thus the same connectivity density. However, it is important to verify certain assumptions regarding the size and density of
networks that may be compared (see Materials and Methods). Whilst the number of nodes (n) and the average number of connections per node (K) are often used to specify a graph's basic properties, this
does not allow instant evaluation of edge density. Therefore, for the analysis herein, edge density was used instead of mean node degree. The property equates to the mean node degree normalized to
the maximum possible, which provides a density measure (ξ) that is independent of the network size (n). The measure reflects the sparseness or ‘cost’ of the network [8] and most importantly, it can
be directly compared between networks with different numbers of nodes.
Asides from the level of integration and segregation, an important aspect to characterizing a network's ‘class’ is the form of the degree distribution. This may be measured by determining the best
fitting model: A fast-decaying (exponential or Gaussian) model provides a good fit for networks with a homogeneous population of nodes, whereby most nodes have a comparable number of connections and
few nodes deviate from this number significantly. Such networks are classed as ‘single-scale’ and their scale is equal to their mean node degree. Conversely, networks that have no characteristic
scale are termed ‘scale-free’, these are identified by a degree distribution that decays progressively more slowly towards infinity – hence there is no characteristic mean node degree. These are
typically represented by a power-law model.
Random and lattice networks both have a single-scale degree distribution, conversely, many real-world networks have been found to possess a power-law degree distribution [50]. Since both single-scale
[51] and power-law [52] degree distributions have been reported for neuronal networks, to ascertain the degree distribution of the networks herein, both exponential and power-law models were fitted
to the data (see Materials and Methods). The ratio of goodness-to-fit values at each age was used to determine whether the distribution changed during the stages of maturation.
Since cultured neuronal networks are embedded in physical space, spatial and temporal characteristics of interaction, such as inter-node distance and signal propagation speed, can also be informative
about changes in the activity patterns. Therefore, physical link length (derived from inter-electrode distance), and network-wide signal propagation efficiency (via mean burst propagation time) were
also assessed. Additionally, the frequency with which individual links are activated can provide information on the influence of a given link in the various network interactions. Therefore, the
reliability of link activation was calculated from the analog (weighted) persistence adjacency matrix. Tables 1 & 2 provide an overview of the main measures used for the present study, along with
their range and interpretation.
Table 1. Topological & non-topological network properties for the present study (part 1).
Table 2. Topological and non-topological network properties for the present study (part 2).
Results are split into two sections. The first presents topological, then spatial network statistics from persistent networks. The second presents statistics on the propagation of activity over the
network (from the transient networks). Network statistics were obtained for each culture at each age (DIV 7, 14, 21, 28, 35). NOTE, at DIV 7 only one culture was found to have a persistent network,
therefore this age was not considered for the significance testing.
Culture's persistent networks acquire non-random properties during development
The number of nodes and links for a given culture was used to calculate the edge density of its network. Figure 2 shows the expected values for each property. The mean number of nodes was relatively
constant and independent of age (P = 0.272). In contrast, the mean number of links measured at DIVs 14 and 21 was lower than at DIVs 28 and 35, with a strong trend towards a significant increase
between the younger and older ages (P = 0.074). Edge density increased significantly between DIVs 14 and 21 (P = 0.012) and showed no significant change thereafter. Statistics quoted are for the n =
5–8 cultures valid for complex network analysis (see Materials and Methods). However, results were comparable when all cultures were used. Numbers of nodes and links followed a comparable trend for
two different persistence thresholds (see Figure S1), indicating their robustness to threshold selection. Edge density followed different trends for the different link persistence thresholds; this
was due to small differences in the numbers of nodes at each age, resulting in larger differences in edge density (Figure S1).
Figure 2. Basic topological properties of the persistent networks as a function of culture age.
Number of nodes, links and edge density; calculated for 10 cultures at each age (DIV). Left: mean number of nodes and links found in the persistent networks. Note, although the number of nodes is a
very different magnitude from the number of links, number of nodes was not found to change significantly (P = 0.272). Results for numbers of links at each age suggested an increase between younger
(DIV 14 and DIV 21) and older (DIV 28 and 35) ages, however the increase was not significant (P = 0.074). Right: mean edge density of the persistent networks. Edge density (i.e. link density)
quantifies the ‘cost’ of the network in terms of the number of links (m)/the maximum possible number of links ((n*(n−1)), given the number of nodes (n). Edge density was first calculated for each
culture and then averaged over all cultures. Mean edge density at DIVs 21 to 35 was significantly higher than at DIV 14 (P = 0.012). In cases where no links were found the data were excluded from the
analysis. All statistics quoted are for the n = 5–8 cultures valid for complex network analysis. Error bars represent ± standard error of mean (s.e.m, n = 5 to 8).
Complex topological properties.
Complex network statistics from each culture's persistent network were used to assess changes in network topology as the cultures matured. Figure 3 shows the progression of network-wide statistics as
a function of age: there was a significant (P = 0.018) increase in the mean clustering coefficient between DIVs 14 and 28, whilst mean path length was relatively stable across ages (P = 0.6). The
combination of increased clustering coefficient and stable mean path length resulted in a significant increase in the small-worldness property (P≤0.001) and networks were classified as small-world at
DIVs 28 and 35. Homogeneous subsets were DIV 14 and 21, and DIV 28 and 35, indicating a change in the small-worldness between the third and fourth week in vitro.
Figure 3. Complex topological properties of the persistent networks as a function of culture age.
Mean path length, clustering coefficient and conservative small-worldness; averages (n = 5–6), were normalized as follows: mean path length (L) and clustering coefficient (C) were normalized against
the expected value from an equivalent population of random networks (n = 50) with the same number of nodes and links. Small worldness was calculated conservatively as (C[real]/C[lattice])/(L[real]/L
[rand]). Error bars represent ± s.e.m. The average shortest path length and clustering coefficient at DIV 14 were both close to the value expected for a random network. A statistically significant
increase in the clustering coefficient was found between DIV 14 and DIV 28. The combination of a short mean path length and high clustering at DIVs 28 and 35 lead to a network classified as
To assess the relative influence of nodes in the network, the form of the node degree distributions were compared between ages. As the cultures matured, the number of nodes with a high degree
increased, leading to a fatter tailed node degree distribution (Figure 4). To quantify this change, both slow decaying (power law) and fast decaying (exponential) statistical models were fitted to
the data (see Materials and Methods). There was a significant increase in the goodness of fit ratio (power law/exponential) as the cultures aged (P = 0.024). The few data points at DIV 14 meant that
goodness of fit could not be reliably distinguished between models. However, at DIV 21 the ratio was <1 indicating a closer fit by an exponential model, whilst at DIV 35 the ratio was >1 indicating a
closer fit by a power law model. Post hoc tests showed that the DIV 21 ratio was significantly smaller than the DIV 35 ratio (P = 0.017, P = 0.021, Tukey HSD and Bonferroni post hoc tests
Figure 4. Change in the node degree distribution with culture development.
Node degree distributions, obtained from all the nodes of the persistent networks of all cultures using a bin size of 10%. Panel A: bar graphs represent node degree distributions on a linear scale.
Solid lines show the best fitting model at each age, broken lines represent 95^th percent confidence interval. Top left: DIV 14, bottom left: DIV 21, top right DIV 28, bottom right: DIV 35. DIVs 14
and 21 show exponential fit on a linear scale, DIVs 28 and 35 show power law fit on a linear scale. Panel B: scatter plots represent node degree distributions on a log-log scale, DIVs 28 and 35 are
shown with a linear fit. The fat tailed node degree distribution found at DIVs 28 and 35 is indicative of nodes with a high degree (hubs).
Spatial network properties.
The spatial organization of nodes and links also changed as cultures matured. At DIV 14, the proportion of links between distant nodes was significantly higher than the proportion of links between
nearby nodes (P = 0.028), whilst at subsequent ages there was no significant difference (P = 0.27, 0.83, 0.5, for DIV 21, 28, and 35, respectively), Figure 5 panel A. The distribution of link lengths
(Figure 5, panel B) is characteristically Gaussian at DIV 14, but appeared bimodal at DIV 21. Notably at DIVs 28 and 35, the distribution was slightly longer tailed and positively skewed (skewness =
0.000, 0.085, 0.352 and 0.572, at DIVs 14, 21, 28 and 35, respectively). Skewness followed a linearly increasing trend between DIVs 14 and 35 (R^2 = 0.964), reaching significance at DIV 35 (P =
Figure 5. Change in the link lengths with culture development.
Panel A: Each bar represents the median proportion of links between nodes up to (and including) two electrodes apart (classified as ‘nearby’) and links between nodes greater than two electrodes apart
(classified as ‘distant’), diagonal neighbors were included; values were calculated from all cultures at each age. Upwards error bars represent the 75^th percentile and downwards bars the 25^th
percentile. Notably, at DIV 14 there was a significantly higher number of connections between distant nodes. Panel B: Normalized histograms of link lengths at each culture age, constructed from the
link lengths of all cultures, measured as the proportion of each culture's links at each length. Median values from all cultures were used for each bin in the histogram. Bin size was based on spacing
between electrodes of MEA, with one bin for each electrode distance (i.e. bin 1 is all links between neighboring electrodes - including diagonal neighbors, bin 2 is all links between nodes up to two
electrodes distance, and so forth until seven electrodes distance which is the maximum between any two nodes on the MEA). Bin edges (X-axis) specify the start of each bin, measured as the distance
between electrodes on the MEA (micrometers). Y-axis is the same for all histograms in panel, only DIV 14 Y-axis is labeled to avoid overcrowding.
Network graphs were generated to depict the spatial arrangement of each culture's network components. Figure 6, panel A shows the persistent network of a representative culture at DIVs 14, 21, 28 and
35. At DIV 14, the graph was a sparse collection of links between often distant nodes. In some cases regions were disconnected from the main graph (as can happen when high thresholds are applied to
correlation matrices [8]). At DIV 21, there were fewer nodes and links in some (but not all) cultures. From this age onwards, there was a more even distribution of links between nearby vs distant
nodes. At DIVs 28 and 35 some cultures had more nodes and links, and there was a trend towards an increase in the number of links between DIVs 14/21 and DIVs 28/35 (see Figure 2). Figure 6, panel B
shows the persistent networks from the same culture at a lower link persistence threshold. As expected, there were more nodes and links at each age, nonetheless changes in the numbers of nodes and
links followed a comparable trend to the main results (see also Figure S1).
Figure 6. The persistent network of a representative culture at DIV 14, 21, 28 and 35.
Graphs illustrate the spatial organization of network components at each culture age: the 8 by 8 grid corresponds to positions of the electrodes on the multi-electrode array (MEA). Nodes that are
part of the network (i.e. for which a link was identified) are numbered according to their MEA hardware numbers, and the lines between electrodes represent un-directed links between nodes. Panel A:
graphs from the networks thresholded at 25% link persistence. Panel B: graphs from the networks thresholded at 15% link persistence, this lower threshold results in more nodes and links.
Figure 7 shows a close up of one culture's network at DIVs 28 and 35, highlighting the position of high degree nodes (hubs) [18] in the networks. Since cultures have no pre-built infrastructure, and
the cells are randomly distributed over the MEA, the absolute position of the hubs in the culture dish is not of particular interest. However, the relative position of the hubs (with respect to the
nodes that they connected to) may reveal patterns such as whether hubs are located in close proximity to one another, or have a higher proportion of links to distant vs nearby nodes. There are
numerous potential patterns and it was not possible to evaluate them for the present study. However, the figure is intended to illustrate some of the possibilities for future research.
Figure 7. Visualization of hubs in a representative culture at DIVs 28 and 35.
Graphs illustrate the location of hubs in the persistent network of a representative culture at two separate ages. The 8 by 8 grid corresponds to positions of the electrodes on the multi-electrode
array (MEA). Nodes that are part of the network (i.e. for which a link was identified) are numbered according to their MEA hardware numbers, and the lines between electrodes represent un-directed
links between nodes. At DIV 28 (left hand graph), nodes 4 and 38 were classified as hubs in the network, whilst at DIV 35 (right hand graph), nodes 34, 38, 40, 48, 49 and 53 were hubs. Hubs were
classified as nodes having a high degree (degree greater than mean node degree plus one standard deviation) and are highlighted with blue circles.
Network properties and activity propagation
Results presented thus far have focused on identifying changes in the network infrastructure (via the persistent interactions between different areas [nodes] in the cultures). Here, the results focus
upon the activity that takes place over this infrastructure. Each transient network is considered as a ‘snapshot’ of network activity, measured over a short time-scale (duration of a network-wide
burst) and reflects interactions between different areas of the culture in this period.
As per the persistent networks, the basic properties relating to network size were compared. Additionally, since there were multiple transient networks for each culture, the coefficient of variation
was also analyzed (see Materials and Methods). Figure 8 panel A shows the expected number of transient links as a function of culture age, panel B shows the equivalent data for number of nodes. There
was a strong trend towards an increase in the mean number of transient network links (P = 0.087), and a strong trend towards an increase in the number of nodes (P = 0.089). Figure 8 panel C shows the
expected coefficient of variation for the number of transient links. This was largest at DIV 21 and there was a significant increase in coefficient of variation between DIV 14 and DIV 21 (P = 0.021).
This demonstrated that transient networks at DIV 21 varied considerably in their numbers of links, more so than at any other age. Panel D shows the equivalent data for number of nodes (no significant
Figure 8. Basic topological properties of the transient networks as a function of culture age.
Panels A, B: mean number of links and nodes (respectively) in transient networks, averaged over all cultures at a particular age (solid black lines). Error bars represent ± s.e.m. The mean numbers of
links at each culture age suggested an increasing trend in the number of links between DIVs 14 and 28, however the trend was not significant (P = 0.087). Likewise the mean numbers of nodes suggested
an increasing trend (P = 0.089). The mean numbers of persistent network links and nodes are shown for reference (dotted red lines). Panels C, D: expected coefficient of variation for the numbers of
links and nodes (respectively) in each culture's set of transient networks. Error bars represent ± s.e.m. Coefficient of variation for number of links was significantly higher at DIV 21 than DIV 14
(P = 0.021).
Influence of functional network properties on efficiency of activity propagation.
To assess whether network properties influenced the transfer of information across the culture, burst propagation time was compared at each age (Figure 9). There was a significant difference in the
median burst propagation times (P = 0.002), with DIV 14 significantly different to DIVs 28 and 35 (P<0.05). At DIV 14, median burst propagation time was highest (389 ms), and although it reduced to
275 ms at DIV 21, variability was highest at this age. Burst propagation time further decreased between the remaining ages (to 108 ms at DIV 28, and 112 ms at DIV 35). Variability of the burst
propagation times showed a large reduction between DIV 21 and DIV 28 (inter quartile ranges 577 ms, 36 ms, respectively).
Figure 9. Network-wide burst propagation time as a function of culture age.
Bar chart shows the median burst propagation time (from all transient networks of all cultures at each age), values outside the 5^th to 95^th percentiles were removed as outliers, giving n = 6–8 for
each age). Error bars show 25^th and 75^th percentiles. A (network-wide) burst was defined as a near-simultaneous (within 250 ms) occurrence of channel bursts on multiple (≥4) channels. A channel was
considered to display bursting activity if ≥4 spikes were detected in 100 ms. For each channel included in the burst, recruitment time was the timestamp of the first spike in the ≥4 spikes in 100 ms
sequence. Burst propagation time was calculated as the time to recruit all channels in a network-wide burst. At DIVs 28 and 35, this time was significantly lower than at DIV 14.
Influence of functional network properties on reliability of activity propagation.
To investigate whether links became more reliable (persistent) as the cultures matured a histogram of link persistence values was generated. The fat tailed link persistence distributions at DIVs 28
and 35 reflected the fact that persistent links became more numerous and were activated more frequently (Figure 10). There was a significant increase in the contribution of the persistent links as
the cultures aged, P = 0.044, (mean ranks: 1.50, 1.75, 3.00, 3.75). On closer inspection, the increase was for the contribution of links in the 50 to 75% link persistence categories (P = 0.010).
Figure 10. An increase in the number of links with high persistence as cultures aged.
Each histogram shows the percentage of links found at each link persistence level for all cultures at each age (normalized count of links found at the persistence value, expressed as the percentage
of transient networks (bursts) in which the link was found, bin size 5%, bin edges specify the end of each bin). Top left: DIV 14, bottom left: DIV 21, top right: DIV 28, bottom right: DIV 35. Red
(solid) line is the link persistence threshold (link presence in at least 25% of the transient networks). The histograms are cropped to show the detailed distribution, inset histograms show the full
scale. At DIV 21, many links were found infrequently (i.e. below the link persistence threshold). The more pronounced tail of the distribution as the cultures matured, reflected a significantly
higher contribution of persistent links in the network's of mature cultures.
The present study characterizes the evolution of functional networks observed in cortical cultures and extends previous work where network properties of cultures were investigated at a single
developmental stage [34], [39]. Analysis of activity from multiple bursts allowed the identification of frequently activated links - the persistent network, which was robust to inter-burst
fluctuations in activity and suitable for analysis of complex network statistics. Results demonstrated that cortical cultures exhibit developmentally-dependent structured interactions, which are
reflected in their persistent patterns of activity. These data suggest the evolution of a complex network of links that supports increasingly efficient information flow and specialized processing.
Given the absence of external chemical or electrical stimulation applied to the cultures, these findings support the assertion that such complex network evolution is an intrinsic property of neuronal
maturation. Moreover, the characterization of age-dependent network properties enables appropriate selection of culture development stages for specific experiments [24], [37], [38], [42].
Unstructured interactions in the spontaneous activity of immature cultures
Immature cultures (DIV 14) exhibited limited interactions between neuronal units, resulting in a network of few nodes and links. The observation that at DIV 14 activity could spread rapidly between
any two neuronal units (short mean path length in Figure 3, reflects high integration), but was slow to propagate network-wide (Figure 9) indicated an absence of functional organization. The
homogeneous node degree distribution and low clustering coefficient exemplified the poor functional differentiation between nodes, with no evidence of densely interconnected areas that could support
segregation of neural processing. Together, these network properties implied a disordered spread of activity, across a random network topology, whilst the long burst propagation time indicated an
inefficient structure for widespread information transfer. Since dissociated neurons were seeded randomly onto the MEA and received no external stimulation, it could be expected that their initial
connectivity resulted in a random topology. Moreover, since neuron-synapse maturation is incomplete at DIV 14 [24], [53], it is unsurprising that the complex network properties found in mature
cultures [39] were not present at this age. However, the prevalence of long-distance connections at DIV 14 (Figure 5 and [36]) is counter to the economy of wiring principle [54] and suggests that
units are not simply making spatially convenient connections. In in vivo and ex vivo preparations the cell type and neurochemical identity have been proposed as guiding influences for connectivity
[55] and there is evidence that the variety and proportions of neuron types in cortical cultures are similar to those found in vivo [25], [27], [56], therefore connectivity in cultures could be
similarly guided by these influences.
Development of a small-world network during culture maturation
Whilst interactions at DIV 14 were clearly unstructured, the subsequent 14 days of development represented a critical window, during which functional complexity increased (Figure 3), leading to the
emergence of the small-world topology at DIVs 28 and 35. Figures S2, S3 and S4 demonstrate the robustness of the small-world result.
We consider the possible driving forces behind this topology change to include the level of synchronization, the ratio of excitation-inhibition and the mechanism of Hebbian learning.
Synchronization of culture activity can be defined over a range of timescales – from ‘synchronous busting’ [57], where areas of the network are synchronously active (usually to within ~100
milliseconds), to precise synchronization between the spike times of two or more neural units [36] (usually to within ~10 ms or less). For the present study, the network links were derived from
firing-pattern correlations and thus represent synchronization levels between neural units (nodes); the low number of nodes and links at DIV 14 reflects a low level of synchronization (i.e. between
only a few units), compared to a high level of synchronization (i.e. between many units) at DIVs 28 and 35. Literature indicates that a low level of synchronization at DIV 14 may be due to an
excitatory-inhibitory imbalance [53]. Conversely, evidence suggests that a high level of network-synchronization found in older cultures (whereby many neural units are activated within a short
time-window [24]) is supported by a balanced excitatory-inhibitory subsystem [53], with tight synchronization between pairs of neural units (as observed in [36]) arising from the activity-based
refinement of synaptic connection strengths [24], [58].
In a previous study of functional connectivity during development [36] culture properties at DIV 14 and DIVs 28–35 are in accordance with those of the present study. However, at DIV 21 [36] reported
an increased level of synchronization and a dramatic change in burst properties (compared to those at DIV 14). In contrast, the present study revealed no such increase in synchronization at DIV 21,
yet burst properties were highly variable - as reflected by a highly variable number of transient links (Figures 2,8), and there was a highly variable burst propagation time (Figure 9). Results
herein suggest a network with an uneven balance between highly and poorly interconnected areas, whereby bursts initiated from different sites (as reported in [59]) propagate at different rates, with
little link activation regularity (as reflected by the low link persistence at this age). We posit that the highly variable burst properties reported herein and in [24], [36] point to itinerant
rather than persistent synchronization at DIV 21. Such transient synchronization effects may be averaged out by requiring multiple occurrences of correlated activity over long time-scales [58].
Therefore, our persistent networks at this age may not reflect the increased synchronization found in [36] (where links required only a period of correlated activity during the entire recording).
Crucially, the combination of varied burst properties and transient synchronization at DIV 21 indicates a mixture of regular and irregular activity. Modeling studies have suggested that such mixed
activity constitutes optimal conditions for the emergence of a small-world topology via Hebbian learning rules and activity driven plasticity [60]. Thus, a change in the culture's spontaneous
activity patterns could drive the topology transformation. Results herein and in [61] suggest that once the topology of the network has emerged, equilibrium states may exist at different time scales
- from transient synchronization between subgroups of neural units at the short time-scale to regular occurrence of such transiently activated subgroups over longer time-scales. Modeling studies may
provide further insight into the role of synchronization and the evolution of such equilibrium states [62], whilst pharmacological manipulation of specific neuron sub-types could verify biological
mechanisms behind activity modulation.
Increased clustering of connections.
Our results demonstrate that functional clustering increases from DIV 14. Moreover, this increased clustering (rather than a reduced mean path length) was the cause of increased small-worldness,
which continued until cultures reached a state of semi-maturity at DIV 28. We note that the increased clustering was accompanied by a change in the distribution of link lengths, from a clear
dominance of long-range links at DIV 14 to an increased proportion of short links thereafter, which suggests an increase in localized lattice-like clustering. The change of the link length
distributions from Gaussian to bimodal, to long-tailed at DIVs 14, 21 and 28–35 respectively (Figure 5 panel B), coincides with the network topology shift from random, to mixed, to small-world.
Moreover, the small proportion of long-range links at DIVs 28 and 35 suggests connections between distant areas – perhaps between clusters. Together, these findings suggest that spatial
considerations may also play a role in the topology change.
Increasing presence of hubs.
During culture maturation, the distribution of node connections changed from a rapidly decaying and homogeneous degree distribution to one with a longer tail, indicating a small but non-negligible
proportion of highly connected nodes (hubs). The small-world topology does not require hubs [51] and both random and lattice networks have a single-scale degree distribution. Nonetheless, hubs have
been identified in various small-world networks [9], [18], [35]. Moreover, modeling studies imply that presence of a non-Gaussian degree distribution is more likely when networks of neurons evolve
from irregular firing [60], thus it is plausible that the irregular burst properties at DIV 21 may be related to the formation of hubs.
Mature cultures and the influence of network topology on activity
Networks at DIVs 28 and 35 were classified as small-world, exhibiting several highly connected areas (clusters of highly inter-connected neural units), alongside the ability for any two areas to
interact via few intermediary connections (short mean path length). Interestingly, when the network properties at DIVs 28 and 35 were compared, smaller differences were found than between earlier
ages, suggesting a state of maturity [24], [32], [36], [63]. The non-trivial network structure demonstrated at DIVs 28 and 35 corresponds well with previous work [39], which concluded that mature
cultures had complex network properties similar to those found in vivo.
Small-world networks have an architecture which supports efficient information transfer [8], [64]. Accordingly, our results showed a developmental reduction in burst propagation time that accompanied
the emergence of cultures' small-world properties (Figure 9). Furthermore, variability of burst propagation time was lower at DIVs 28 and 35 than at younger ages. Since burst events are typically
initiated from a number of sites [59], this reduced variability suggests that burst propagation times in mature cultures are not influenced by burst source; information propagates efficiently from
all parts of the network. Interestingly, a small-proportion of links at DIVs 28 and 35 were activated extremely frequently (Figure 10), suggesting that they facilitate many of the interactions; it is
possible that they represent activation of the small-world ‘short cuts’ between clusters.
The increasing prevalence of highly connected nodes in older cultures suggests that such hubs play a greater role in network activity as the cultures mature, perhaps indicating sources [65], sinks,
or bridges [18], [33] for network activity. Interestingly, structural and functional hubs have recently been identified in the developing hippocampus where GABAergic interneuron hubs were found to
orchestrate network synchrony [52], firing immediately prior to network bursts. Similarities between connectivity of GABAergic interneurons in the hippocampus and neocortex [66] and suggestions that
cortical cultures develop subsystems akin to those found in vivo [25], [53], [56], imply that similar functional hubs may be present in the primary cortical cultures employed herein.
Conclusions and future work
The present study has demonstrated that networks derived from the spontaneous activity of cultures develop non-random properties despite a lack of external input. Based on these results, we draw four
main conclusions. Firstly, to mitigate fluctuations in spontaneous activity, multiple network bursts should be assessed to obtain the persistent network. Secondly, the functional network of a
cortical culture evolves from an initial random topology to a small-world topology; we propose this is due to a change in the culture's spontaneous activity patterns that is driven by the maturing
excitatory-inhibitory balance and an increase in network-wide synchronization. Thirdly, the reduction in burst propagation time with culture maturation that accompanies the evolution of a small-world
topology supports the efficient network-wide flow of information afforded by a small-world network. Lastly, the presence of hubs and increasing contribution of links with high persistence suggests a
proportion of highly influential nodes and links.
To the authors' knowledge, this is the first demonstration of small-world properties evolving in the functional networks of cortical neurons grown in vitro. This further supports work suggesting
maturation of in vitro networks around the age of DIV 28 to 35; importantly, our results indicate that experiments which require complex network features should be undertaken from DIV 28 onwards,
whilst those aiming to shape network maturation should be undertaken before DIV 28. Moreover, the work herein further supports the use of complex network statistics to quantify network level changes
resulting from different experimental conditions, and importantly it provides a benchmark against which to assess the influence of closed loop stimulation on shaping cultures network properties - a
fundamental question for the work on closed loop culture embodiment.
An important area for future work is to investigate the role of frequently activated nodes (hubs) in cultured neurons; including whether the presence of network-synchrony controlling hubs in the
underlying substrate could mediate the timing and extent of functional interactions between otherwise segregated clusters, perhaps coordinating synchronous network-wide bursting. Additionally, the
use of staining to identify the location and proportion of the different neuron types and sub-types, and the use of pharmacological manipulation to verify their effect on activity may help elucidate
mechanisms behind the different network properties.
Materials and Methods
Cell cultures and sample population selection
Data used for the present study was collected for [44], from cultures of pre-natal (E18) rat dissociated cortical neurons and glia cells, seeded onto multi-electrode arrays (MEAs, Multi Channel
Systems, Reutlingen, Germany). Cultures were maintained in Teflon sealed MEAs in an incubator at 5% CO[2], 9% O[2], 35°C and 65% relative humidity [44]. For the present study, ‘dense’ cultures
(estimated cell density of 2,500±1,500 per mm^2) were used.
Culture's electrical activity was recorded daily during their first 5 weeks of development. For the present study, a sample population was selected from the large number of cultures recorded,
specifically, 10 cultures from 4 preparations (plating batches). Cultures were arbitrarily selected from those that had recordings every 7 DIV, i.e. those which survived for the full 5 weeks and for
whom none of the weekly recordings were missed. The use of multiple preparations is important as bursting patterns across preparations vary considerably [44]. Additionally, since the variation in
burst properties measured at the same age (DIV) from different cultures (of the same plating), can exceed day-to-day differences in their properties (and inter-plating differences are significantly
larger) [44], network properties were compared at weekly intervals. This also allows easy comparison with results from other studies [32], [36].
Electrophysiological recording
Data were recorded from cultures for 30 minutes daily in the incubator used for culture's maintenance. Unit and multi-unit spontaneous spike firing was recorded from the MEA (8×8 array of 59 planar
electrodes, each 30 µm diameter with 200 µm inter-electrode spacing [centre to centre]). The pre-amplifier was from Multi Channel Systems (MCS), excess heat was removed using a custom Peltier-cooled
platform. Data acquisition and online spike detection was performed using MEABench [67]. According to the MEA user manual (MCS) spike detection is reliable up to ~100 µm from the electrode centre,
beyond which spikes become indistinguishable from the background noise. Therefore, each MEA provides a grid of 59 non-overlapping 100 µm recording horizons (once the four analogue channels and single
ground electrode are removed). It should be noted that data recorded on each channel may be from multi-neuron activity, no attempt was made to spike sort the data as overlapping waveforms found
during a burst can present problems [44]. Lastly, as recording began immediately after the cultures were transferred to the pre-amplifier, the first 10 minutes were discarded from the analysis in
order to mitigate any movement induced changes in culture activity [44], [68].
Data pre-processing and burst detection
Spikes were detected online (using MEABench), positive or negative excursions beyond a threshold of 4.5× estimated RMS noise, were classed as spikes. Their peak amplitude timestamp (µs), plus
electrode number were stored. For the present study, all positive amplitude spikes were removed to avoid counting spikes on both upwards and downwards phases.
In cortical cultures, global bursts (population bursts), characterized by an increase in culture activity across the entire MEA, are typically present from DIV 4–6 onwards [44], but sometimes as late
as DIV 14 onwards [36]. Such bursts provide a time window during which many culture interactions take place and thus a useful opportunity to assess network-wide connectivity. For the present study,
global bursts were identified as an increase in the number of spikes detected per unit time, summed over all electrodes in the array: specifically ≥4 spikes per channel in 100 ms, on ≥4 channels
within 250 ms; based on the SIMMUX algorithm, included as Matlab (The MathWorks, Natick, MA, USA) code with MEABench. Burst start was determined by the timestamp of the first spike included in the
global burst, and burst end taken as the timestamp of the last spike included. To assess interactions between neural units underlying all the electrodes, global bursts in which at least 25% (15/59)
electrodes registered channel bursts (≥4 spikes in 100 ms) were selected. These were termed ‘network-wide’ bursts and ensured that many neural units participated in the burst (increasing the
probability that the resultant networks would have sufficient numbers of nodes for the analysis of network properties). Additionally, since there were typically 10 to 150 such bursts in the 20 minute
recording segment used, it provided a good balance between having sufficient numbers of bursts for analysis, whilst avoiding the inclusion of ‘tiny’ bursts [44] since these may have biased results.
All activity occurring from the first spike in the nw-burst to the last spike in the nw-burst (including tonic activity from electrodes not included in the nw-burst) was used for assessing the
relationships between channel pairs. Spike occurrences were counted in 1 ms bins, this allowed a certain amount of jitter in the spike arrival times (which could otherwise decrease the likelihood of
identifying correlated activity). Bin size was selected based on experimentation with 1, 5 and 10 ms bins. The 1 ms bins provided a greater separation between correlated and un-correlated channels,
data not shown.
Link definition: Cross covariance
Functional connectivity was assessed by correlating spike times recorded on pairs of electrodes during a network-wide burst (as per [34]). This linear link analysis method assesses the probability of
a spike at time t on one electrode being accompanied by a spike arriving at t±k on another electrode, where k is the allowable lag time. Spike times arriving within ±13 ms of each other were
considered to be related (under the assumption that a linear relationship between spike arrival times on pairs of electrodes indicates their coupling). The maximum lag time was based on speed of
axonal propagation, time for synaptic transmission and the maximum distance between 2 points on the MEA. Since the firing rates recorded on each channel may be different, cross-covariance was used,
this correlates deviations in firing rates from their respective means as a function of lag.
Channels that had fewer than 8 spikes recorded during the burst were excluded from the cross-covariance analysis, as results from synthetic data testing showed that performing cross-covariance on
vectors with fewer than 8 spikes was poor at distinguishing related vectors from independent ones (data not shown).
The cross-covariance function calculates the covariance of two random vectors:(1)
In the case where X and Y are time-series the cross-covariance may depend on the time when it is estimated and on the lag between the time series:(2)
For wide-sense stationary time series, covariance is a function of the lag only:(3)
Cross covariance was calculated using the built in Matlab function xcov; specifically, each pair of channels with at least 1 ms overlap in their activity were compared from the time of the first
spike on either channel to the time of the last spike on either channel. The tightness of the correlation window (X or Y channel recording spikes), and requirement for overlapping activity was to
mitigate the effects of long periods of quiescence and to ensure that the data were as wide-sense stationary as possible.
Calculation of the cross-covariance at each lag resulted in a cross-covariance plot for each channel pair. The maximum cross-covariance value (peak of the plot) was used to determine whether a link
between nodes was present by comparing it to a threshold as detailed next.
Transient network link definition threshold.
Under the assumption that a peak in the cross-covariance (XCov) plot indicates a relationship between the channel pairs [34], the link definition threshold was set at 4 times the expected value of
uniformly distributed cross-covariance bin counts (the sum of the counts from all 26 bins excluding 0 lag, divided by the number of bins). It was decided not to use a fixed threshold, since the mean
XCov value increased as the cultures matured. Moreover, it varied between cultures and the age related increase did not occur in the same way for each culture. Thus, when a fixed threshold was used a
proportion of the networks obtained were either too small/sparse, or too dense for analysis of their complex network properties (empirical). By setting a threshold that identified peaks in the plot,
the results were not influenced by variations in the mean cross covariance level. Figure S5 shows the ability of the threshold to identify true positives and reject false positives. The threshold
calculated for each channel pair was applied to the weighted adjacency matrix, providing a binary adjacency matrix (transient network) for each nw-burst. The matrices were symmetrized (i.e. if a link
was found in one direction, a corresponding link was added in the opposite direction).
The transient networks obtained over the duration of a recording were found to be highly variable (see Results), therefore to obtain a more robust estimate of the network infrastructure, a persistent
network was calculated as the set of most frequently activated links over all transient networks.
Persistent network.
To compute the persistent network a weighted adjacency matrix comprising the count of each link's occurrence over all transient networks was obtained by summing the binary adjacency matrices of all
transient networks. A link persistence threshold was applied to this ‘adjacency frequency matrix’ to obtain a binary adjacency matrix representing the persistent network. At a threshold of 1, the
persistent network is simply the superset of all transient networks (and thus, not strictly speaking, ‘persistent’); conversely a threshold set equal to the total number of transient networks,
requires link presence in every transient network. Setting the threshold equal to link presence in 25% of transient networks provided a good balance between minimizing the number of overly dense and
overly sparse networks (see complex network analysis).
Complex network analysis: Topological properties
Table S1 provides the mathematical definitions for the topological properties and complex network statistics. Basic topological properties (related to network size), and complex network statistics,
were calculated from the adjacency matrices (using Matlab, with additional scripts from the Brain Connectivity Toolbox [14]). For each transient network, only basic topological properties were
measured, complex network statistics were not calculated due to the highly variable network size and edge density (see verification of network size and edge density). Instead, the mean numbers of
nodes and links were calculated over all transient networks in the recording. Additionally, the coefficient of variation for number nodes and for number of links was calculated over all transient
networks in the recording. The expected numbers of nodes, and links and the expected coefficients of variation were calculated over all 10 cultures.
Verification of network size and edge density for complex network analysis.
Since some of the complex network statistics are defined only for certain ranges of network size, it was important to ensure that each persistent network was within the size range suitable for
complex network analysis. Specifically, an assumption made when assessing small-world properties is that the networks are sparse: n>>K [16] (where n is number of nodes and K is mean node degree).
Since the maximum K is constrained by the number of nodes, this verifies that the average number of connections per node is much lower than the total number of nodes in the network (i.e. the graph is
far from being fully connected and is thus ‘low-cost’). To check for the required sparseness, an edge density (cost; number of links/number of possible links) in the range 0.05 to 0.34 was sought
(following [8]). To ensure that sufficient numbers of networks met the criterion at each age, whilst avoiding too many overly sparse networks (those with K<log(n)) [16], three different persistent
link definition thresholds were tested: 0.15, 0.25, 0.35. These resulted in 3 sets of networks with increasing numbers of nodes and links (and a variety of edge densities). The selected threshold
(0.25) provided the best balance between minimizing the number of overly dense and overly sparse networks, and provided a set of networks where 83% had an edge density in the range 0.05 to 0.34.
Additionally, this threshold resulted in the least variation of edge density between ages; this was useful for comparing statistics influenced by edge density, such as clustering coefficient.
Calculation of network statistics.
The expected persistent network statistics for each age (DIV) were obtained from all 10 cultures. For the numbers of nodes, links and edge density, data outside the 5^th to 95^th percentile were
removed as outliers (maximum removed = data from 4 cultures, leaving minimum n = 5 at all times, once cultures failing to meet the edge density criterion had been removed). The power of statistical
tests was verified to ensure that n numbers for each network statistic were sufficient (see Significance testing subsection).
For each persistent network, the network-wide statistics (mean path length [average shortest path length] (L), global efficiency (E), mean clustering coefficient (C), small-worldness (S^ws), and mean
node degree (K)) were calculated over all nodes that had at least one link (and in the case of mean path length, over all node-pairs with non-infinite distances [3]). C, L and E were normalized
against expected values from a population of equivalent random networks with the same number of nodes and links (see section on generation of equivalent null hypothesis networks).
Small-worldness of the network [17] was calculated using the Watts and Strogatz [16] definition of clustering coefficient, by taking the ratio of normalized mean clustering coefficient to normalized
mean path length. Here, clustering coefficient was normalized to the value expected for an equivalent lattice network, and mean path length to the value expected for an equivalent random network,
this provided a conservative estimate of small-worldness (see Figure S6).
To check that the small-world metric was not influenced by network disconnectedness (as mean path length is only defined for connected graphs and not all graphs were connected), the ratio of global
efficiency to the clustering coefficient [8] was also compared (Figure S4). Global efficiency (E) is inversely related to mean path length and is suitable for use on connected or disconnected
networks. Thus, replacing mean path length (in the small-world calculation), with 1/E, enabled calculation of the small-world metric based on global efficiency.
Assessment of node degree distributions.
Node degree distribution was calculated using all nodes of the network by counting the number of nodes with each degree in bins of size 2. Bin size was selected to provide a sufficient number of data
points, whilst minimizing the number of empty bins (sizes 1, 2 and 3 were tested, data not shown). Hubs were identified as nodes with a degree greater than mean node degree plus one standard
deviation [18]. To assess if the node degree distribution followed exponential or power law trend, both of these distributions were fitted to the node degree data using Graph Pad Prism 4 (GraphPad
Software, Inc., La Jolla, CA, USA). The goodness of fit ratio of power law to exponential model was calculated for each culture at each age (to test the null hypothesis that data would not differ
from an exponential fit). The degree distributions P(k) were of the following form: exponential, P(K)~e^−αK; and power law, P(K) = k^−α.
Generation of equivalent null hypothesis networks.
Network size and density may influence the magnitude of complex network statistics [69]. To counter this, empirical network properties were compared to both random and lattice null hypothesis
networks. Firstly, as per other studies the significance of empirical network statistics was assessed using random networks with the same number of nodes and links to generate a null distribution of
the network statistics. Thus for each persistent network (from one culture at a particular age), a set of 50 equivalent random networks was generated (using a script from the Brain Connectivity
Toolbox), providing 500 (50×10 cultures) equivalent random networks for each age. As per the real networks, statistics were calculated for the random networks of each culture. The expected random
network statistics were then calculated for each culture. Secondly, to assess the significance of the clustering coefficient for a conservative estimate of small-worldness, the expected clustering
coefficient from an equivalent lattice network was used. Lastly, comparison of the raw empirical network measurements against those of both equivalent random and lattice networks allowed results to
be validated against the upper and lower limits expected (see Figure S3). NOTE: For the alternative link persistence thresholds, the mean path length and clustering coefficient values expected from a
population of equivalent random networks were approximated using: L[Rand]~ = ln(n)/ln(K−1), and C[Rand]~ = K/n [64].
Calculation of non-topological properties
In addition to the networks' topological properties, the spatial and temporal features of the networks were also assessed; link distance was calculated as the Euclidean distance between the
electrodes on the MEA, based on 200 µm centre-to-centre spacing of the electrodes. For the present study, connections between nodes up to 566 µm (2 electrodes) apart were considered as ‘nearby’ and
those greater than 566 µm as ‘distant’. Link persistence was calculated using the weighted persistent network adjacency matrix (i.e. prior to thresholding), normalized so that the persistence value
was the percentage of transient networks in which the link was found.
For both link length (derived from the distance between connected nodes) and link persistence, histograms were obtained over all links from all cultures at each age. Thus, for link length, a count of
the number of links in each bin (bin size = 1 electrode spacing) was calculated for each network, this was normalized to the total number of links in the network. For link persistence, a count of the
links at each persistence level (bin size 5%) was calculated for each network. In both cases, median bin values were obtained over all 10 cultures, therefore the histogram proportions may not always
sum to 1.
To quantify the changes in link length and persistence, two further measures were assessed: for link length, the proportion of links between spatially nearby vs distant nodes was calculated for each
culture, and the median of these values was used to compare results between ages; for link persistence, the contribution of persistent links was measured as the number of links in each 5% persistence
category multiplied by the category persistence value (e.g. if 20% of links were found in the 10% persistence category, the contribution was 200). The link contribution counts were further binned
into transient (<25%) and persistent (≥25%).
The efficiency of information broadcast was measured as burst propagation time (time to recruit all channels in a network-wide burst). This was calculated in milliseconds from the time of the first
spike in the burst, until the time at which all channels participating in the burst had been recruited. Channels could be recruited to the burst whilst the burst was in progress (i.e. sufficient
channels displayed the required activity) but once the number of channels bursting dropped below the threshold, channels could no longer be recruited. For each channel included in the burst,
recruitment time was the timestamp of the first spike in the burst activity sequence. Burst propagation times were calculated for all bursts of a culture at each age and the median of these was
calculated for each age. Outliers (values <5^th and >95^th percentile) were removed from the data.
Visualization of network graphs.
Network graphs were visualized using a freely available script [70]. The script was modified to display the node numbers as their corresponding MEA hardware numbers (0 to 59), with Gnd indicating the
ground electrode. A further modification was made to highlight the nodes which had high numbers of links (defined as mean number of links plus one standard deviation), these were considered to be
‘hubs’ in the network [2].
Significance testing
All statistics were obtained using SPSS version 17.0 (SPSS Inc., Chicago, USA). Unless otherwise specified P<0.05 was set as the significance level. Statistical tests for each network property were
selected based on the experiment design and form of the resultant data; Checks were performed to ensure that the assumptions of each test were met. Following test selection, statistical power was
verified at the 80% level (checking that the proposed test statistic had sufficient power to detect a genuine effect [71] [typically set to a difference of 1–2 times standard deviation of the mean],
given the n numbers and variability of the data). For the present study, where some of the tests were applied to data with relatively low n numbers it was important to ensure that the power of each
test was sufficient [72]. It was also important to ensure that the assumptions of the statistical tests were not violated (Text S1 describes the selection and validation of statistical tests used in
the present study). The selected tests were as follows:
To check for a significant increasing or decreasing linear trend of the network properties as a function of the culture age, results for each network property were compared using a one-way ANOVA.
Culture age (DIV) was the factor, and the network property was the dependent variable. The following properties were assessed in this manner: number of nodes, number of links, edge density,
normalized mean path length, normalized clustering coefficient, small-worldness, goodness of fit ratio. In cases where a significant trend was found, Bonferroni and Tukey post-hoc tests were
performed to check for significant differences between each pair of conditions, where found, the homogeneous subsets are mentioned in the results. Homogeneity of variances was tested using the Levene
Normality was tested using the Shapiro-Wilk normality test. In cases where the sample means were not normally distributed, non-parametric tests were used. For the burst propagation times a
Kruskal-Wallis test was performed on the median burst propagation times for each culture at each age, with culture age as the grouping factor and median burst propagation time as the dependent
variable. For the proportion of links to nearby vs distant nodes at each culture age, a 2-tailed Wilcoxon signed rank sum test was used. To compare the contribution of persistent links at each age,
Friedman's rank test was used. Lastly, for the skewness of the link length distributions, a z-test was calculated based on the skewness estimate taken over the standard error of the skewness
estimate. The P value was then calculated using the online statistics analysis tool (http://www.quantitativeskills.com/sisa/calculations/signhlp.htm, accessed November, 2010).
Supporting Information
Robustness of results to changes in link persistence threshold: Basic topological properties. To check the influence of the persistent link definition threshold on the basic network statistics,
results were calculated over a range of thresholds. The plots show results calculated from 10 trials (10 cultures), at a lower and higher link persistence threshold than the main results. Graphs on
the left are from networks thresholded at 15% link-persistence (i.e. link presence required in at least 15% of network-wide bursts), and graphs on the right are from networks thresholded at 35%
link-persistence. As for the main results, in cases where no links were found the data were excluded from the analysis, resulting in n of 6 to 10 for each age. At both 15% and 35% link-persistence
thresholds there was a slight dip in the number of links between DIVs 14 and 21 (consistent with the 25% threshold results) and there was an increase in the number of links from DIV 21 onwards (again
consistent with the main results). For all three link-persistence thresholds, the number of nodes fluctuated slightly between the ages. Edge density of the networks (second row) varied differently
for each of the alternative link persistence thresholds. Moreover, at the 15% and 35% threshold levels some networks were overly dense –breaking the assumption of ‘sparseness’ required to assess
Robustness of results to changes in link persistence threshold: Complex topological properties. To check influence of the persistent link definition threshold on the complex network statistics,
results were calculated over a range of thresholds. The graphs show mean path length, clustering coefficient and small-worldness (top, middle and bottom rows respectively) for the networks
thresholded at 15%, 25% and 35% link-persistence. Results are from 10 trials (10 cultures), as for the main results, in cases where no links were found the data were excluded from the analysis,
resulting in n of 6 to 10 for each age. Mean path length and clustering coefficient were normalized to the value expected for a random network. Small-worldness was calculated conservatively as (C
[real]/C[lattice])/(L[real]/L[rand]). Mean path length (top row) was relatively stable for all three thresholds, although at the 15% link-persistence threshold it increased slightly between DIV 28
and 35, this increase was not found to be significant (ANOVA P = 0.511). Clustering coefficient (second row) followed an increasing trend at all three thresholds. Small-worldness (bottom row) showed
the same trend of increasing small-worldness between DIVs 14 and 28 at the 15% and 25% persistent link definition thresholds, however at the 35% threshold the edge density of the networks precluded
accurate assessment of small-worldness.
Robustness of small-world result: Validation of empirical results against those from random and lattice networks. To check the robustness of the small-world result, complex network statistics from
all three link-persistence thresholds were compared against the values expected for an equivalent lattice as well as those for an equivalent random network. Low, medium and high thresholds required
link persistence in 15%, 25% and 35% of network-wide bursts respectively. Each graph shows the mean network statistic obtained from the real networks, against the value expected from an equivalent
lattice network and the value expected from a population of equivalent random networks (same number of nodes and links in all cases). For all three thresholds the mean path length (first page of
graphs) is close to that of a random network and less than that of a lattice. Likewise, for all three thresholds the clustering coefficient increased from close to the value expected from a random
network, to close to the value expected for a lattice (second page of graphs).
Global efficiency and conservative global-efficiency based ‘Small-Worldness’ of persistent networks. To check the influence of disconnected networks on the measure of network integration, the global
efficiency was tested (since mean path length is designed for connected networks and some of the networks were disconnected). Global efficiency is a measure of integration that is not affected by
network dis-connectedness. The global efficiency was close to 1 at all ages, indicating a high level of integration and confirming the mean path length result. Moreover, the global efficiency-based
small-worldness increased with culture age, consistent with the main results.
Ability of the link definition threshold to identify genuine peaks. A: To check whether the defined links (i.e. above link definition threshold) appeared to be genuine peaks in the cross covariance
(XCov) plots, mean XCov plots were obtained on a per-channel basis. Plots are shown from a representative channel during a representative burst. The left hand plot shows the mean XCov value at each
lag, from all channel pairs with a peak above the threshold (i.e. those that were considered to be related). There are two well-defined peaks and no obvious false positives. To check that genuine
peaks were not missed, a mean XCov plot was obtained from all channel pairs with a peak≤threshold (right hand plot). There are no clear peaks. In addition to checking the mean plots, a number of
individual XCov plots, over a range of cultures at each age, were manually inspected (i.e. checking for false positives or negatives). In all cases, those with a maximum peak above the threshold,
appeared to contain a genuine peak in the plots, whilst those that did not cross the threshold showed no sign of peaks. B: To check the actual link definition thresholds used and see how they
compared to the mean cross-covariance peak over all links, the mean XCov threshold for each age, was compared to the mean XCov peak for each age. Results are depicted in a bar chart, as can be seen,
the mean XCov threshold is well above the mean XCov peak plus one standard deviation.
Conservative small-worldness: guarding against high small-worldness values when clustering coefficient is low. The first graph (Panel A) shows the mean path length, clustering coefficient and
small-worldness values normalized against the expected values from a population of equivalent random networks. The second graph (Panel B) shows the raw network properties alongside those expected
from equivalent random and lattice null hypothesis networks. At DIVs 14 and 21, small-worldness is met (Panel A) despite the clustering coefficient being far from the value expected for a lattice
network (Panel B). It is not until after DIV 21 that the clustering coefficient approaches the value obtained for a lattice. Small-worldness is defined as L≥L[random] and C>>C[random] [73], and the
small-world metric has been defined as: (C/C[rand])/(L/L[rand])>1 [74]. Therefore an overly optimistic small-world result can be obtained if the clustering coefficient of the random equivalent
networks is very low, since normalized values of >>1 can be achieved despite a very low absolute clustering coefficient. Thus, whilst the cultures had a small-world metric >1 at DIVs 14 and 21 (left
hand graph), it was not considered that these networks met the small-world criterion, (since their clustering coefficient was so low compared to a lattice). To address this issue, a conservative
estimate of small-worldness based on: (C/C[Lattice])/(L/L[random]) was used for the present study.
Mathematical definitions of the complex network measures used in this study.
Selection and validation of statistical tests.
Author Contributions
Analyzed the data: JHD MWH DX MCS BJW SJN. Wrote the paper: JHD. Conceived and designed the analysis of existing data: JHD BJW SJN. Edited the manuscript: JHD MWH DX MCS VMB KW BJW SJN. | {"url":"http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002522?imageURI=info:doi/10.1371/journal.pcbi.1002522.g008","timestamp":"2014-04-17T10:08:08Z","content_type":null,"content_length":"295547","record_id":"<urn:uuid:00d945dc-e94e-4f1e-a722-069f1a4673ae>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: nlsur error cannot have fewer observations than parameters
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: nlsur error cannot have fewer observations than parameters
From "Nelson, Carl" <chnelson@illinois.edu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: nlsur error cannot have fewer observations than parameters
Date Fri, 5 Feb 2010 14:27:22 -0600
In general I know what this error means about a rank condition for identification.
I looked at the nlsur.ado file and found the following code that generates the error:
qui count if `touse'
if r(N) < `np' {
di as err "cannot have fewer observations than parameters"
exit 2001
The question comes from a 1996 Journal of Applied Econometric article (Coelli, JAE, 11(1996):77-91) estimating an 8 netput profit function with 35 time series observations.
The functional form is a generalized McFadden profit function with convexity imposed by expressing the matrix of quadratic terms as the product of cholesky factors and estimating the elements of the Cholesky matrix nonlinearly.
Nick and Maarten helped me fix tempname and scalar syntax errors caused by my limited understanding. After fixing them and some other errors I am know encountering:
- replace `q7' = `b7' + `b17'*(`p1'/`p8') + `b27'*(`p2'/`p8') + `b37'*(`p3'/`p8
> ') + `b47'*(`p4'/`p8') + `b57'*(`p5'/`p8') + `b67'*(`p6'/`p8') + `b77'*(`p7'/`p8')
> + `bt7'*`t' + `btt'*`g7'*`t'^2 `if'
= replace __00000J = __00001I + __00001X*(p1/p8) + __000023*(p2/p8) + __000028*
> (p3/p8) + __00002C*(p4/p8) + __00002F*(p5/p8) + __00002H*(p6/p8) + __00002I*(p7/p8)
> + __00001P*t + __00001Q*g7*t^2 if __000002
(obs = 35)
cannot have fewer observations than parameters
It is true that the full system contains 45 parameters and there are 35 observations. But the data is stacked into 8 equations. reg3 estimation of the constrained system, without convexity imposed, runs without problem. I don't think the r(2001) is generated because of the 45 to 35 comparison.
I am guessing that the error is being caused by the first stage single equation estimation to generate the first estimate of the residuals. The higher the index of the equation the more elements of the Cholesky factor enter into the bij parameters that you see in the above specification. So my main question is ``Is the error being generated because the single equation that needs to be estimated to obtain first stage residuals has more parameters than observations."
I would never choose such a specification for research. I am working on this for a problem set for a PhD class using the JAE data archive. Coelli reports that estimation of the model with convexity imposed was accomplished by maximum likelihood estimation of the system in Shazam. I don's want to make to big a point of it, but the results that I am seeing causes me to think that the parameter estimates are very fragile.
I'm hoping someone can give me more insight into the cannot have fewer observations than parameters
r(2001); error. Thanks.
Carl Nelson
University of Illinois
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2010-02/msg00294.html","timestamp":"2014-04-19T22:33:18Z","content_type":null,"content_length":"8515","record_id":"<urn:uuid:4787ef37-4320-4c5b-aefd-3e8eabd5e7e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
I really need help solving this! (x^4+7x^2-5x+3) / (x^2+4)
• one year ago
• one year ago | {"url":"http://openstudy.com/updates/50687f33e4b0e3061a1d5d9b","timestamp":"2014-04-16T13:09:00Z","content_type":null,"content_length":"1042579","record_id":"<urn:uuid:c73f91d1-f5cc-4a81-bb88-687e3b61772d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using Function Terminology for Functors?
up vote 1 down vote favorite
The analogy between functions from a set to set, and functors from a category to a category is obvious. But transferring terminology from functions to functors can be a tricky business.
I would like to ask if the following is acceptable: For $B$ a subcategory of $A$, and $D$ a subcategory of $C$, and $F:A \to C$ a functor, is the following statement clear in meaning, and
It holds that the image of $B$ under $F$ lies in $D$.
If not, then how should it be phrased?
ct.category-theory terminology
Presumably you mean that the objects $F(b)$ for $b$ an object of $B$ are all objects of $D$. Do you mean the same for the morphisms? – Jon Beardsley Sep 19 '12 at 13:54
Yes in both cases. – Mihail Matrix Sep 19 '12 at 13:59
1 Analogy? A function is a functor (between discrete categories). – Qiaochu Yuan Sep 19 '12 at 16:15
It seems still unclear. @Mihail: May I ask you to clarify if the intended meaning of your statement is the one explained by Andreas and Todd, or the one suggested by me? – Fred Rohrer Sep 19 '12
at 20:40
1 @Theo: Of course. The point is the distinction between "is" and "is equivalent to". For example, the last "are" in your comment is of the latter type (supposing our definitions of "functor"
coincide, and supposing we have a similar set theory). And in that case it is indeed appropriate to speak of analogy. – Fred Rohrer Sep 20 '12 at 5:13
show 2 more comments
4 Answers
active oldest votes
Andreas's answer is very much to the point. I'd like to add to it and give some accepted terminology.
As he was saying, the naive image doesn't work. Consider the arrow category $2 = \{0 \to 1 \}$, and take the functor $2 \to Set$ which maps both objects to the natural numbers $\mathbb
up vote 5 {N}$ and the arrow to the successor function $s: \mathbb{N} \to \mathbb{N}$. Now ask yourself if the "image" of this functor is actually a subcategory!
down vote
The notion you really want is called the essential image.
add comment
If you use James Cranch's suggestion, namely to say that the image is a subcategory of $D$, you should make sure that it's actually true in your situation, because it might not be in
general. $B$ might contain morphisms $f:x\to y$ and $g:z\to w$, and your functor might have $F(y)=F(z)$, so that $F(f)$ and $F(g)$ are composable morphisms in $D$. Yet, if $y\neq z$ and so
up vote 4 $f$ and $g$ are not composable in $A$, then the composite $F(g)F(f)$ might not be in the image of $F$.
down vote
add comment
As proven by Jon's comment, the statement is not clear in meaning. A precise statement is the following, but it is of course unclear whether it is equivalent to the statement you
had in mind:
up vote 1 down $F$ induces by restriction and coastriction a functor from $B$ to $D$.
(The very useful word "coastriction" seems to be not so well-known, but it is explained in this answer.)
add comment
I think this is well-defined. You mean that the images of objects and morphisms in $B$ under $F$ all lie within $D$.
up vote 0 down It might be slightly clearer to say, "... is a subcategory of $D$" rather than "... lies in $D$" (to avoid a reader thinking that the image has to be a single point for some stupid
vote reason) but this is almost pedantic.
add comment
Not the answer you're looking for? Browse other questions tagged ct.category-theory terminology or ask your own question. | {"url":"http://mathoverflow.net/questions/107565/using-function-terminology-for-functors/107571","timestamp":"2014-04-19T17:27:19Z","content_type":null,"content_length":"66922","record_id":"<urn:uuid:c1415069-496e-4a3a-a91f-8d79ec8f3b2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Apr 10, 2014
Johan drinks milk in slow motion to piano pop music
Apr 9, 2014
• Nautilus is a very interesting magazine.
• Street photography in Tokyo is fun. Nighttime film photography in Tokyo is also fun because street lights give off a teal, Fujifilm-like colour. That’s kind of also why I’m excited about all
these LED lights in Mississauga; it’ll make nighttime photography look more colour-balanced.
(Source: wnycradiolab)
Apr 9, 2014
There is an interesting theorem in set theory and geometry, called the Banach-Tarski Paradox, which proposes the following:
Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of non-overlapping pieces (i.e. disjoint subsets), which can then be put back together in a
different way to yield two identical copies of the original ball. Since I ripped this off from Wikipedia, I might as well continue to do so and post an illustration here, also from Wiki:
But wait… What? Two identical balls of the same size as the single ball they originated from? Then that means I could, in theory, somehow produce TWO slices of steak equivalent to the one they were
‘reassembled’ from. How heavenly would that be? If possible, with my grocery bills cut down, I’d blow even more money on other useless life ‘necessities’ than I already do.
Personal planning aside, the reassembly of the pieces only involves moving and rotating them, without any stretching or bending. So how does this work?
I don’t know. Hah, I am not completely familiar nor comfortable with the set theory and geometric proofs behind this interesting theorem. I do understand however that the pieces themselves are an
infinite scattering of points as opposed to your conventional solid pieces.
A stronger form of this theorem states that given any two “reasonable” solid objects, you can reassemble one into the other. Take for example a pea and a sun - based on this form of the theorem, you
can make a ball the size of a pea into a ball the size of the sun and vice versa.
If you couldn’t tell, all of this is a paradox because the operations being performed (translations and rotations) preserve volume, yet somehow the volume is ultimately doubled.
So this only works if you use the axiom of choice, which I will not go into here, but seems pretty self-evident (doesn’t everything these days?) when you read it. This axiom allows for “the
construction of nonmeasurable sets, i.e. collections of points that do not have a volume in the ordinary sense and that for their construction would require performing an uncountable infinite number
of choices.”
So there are two things to note, for me at least: the concept of infinity, and the axiom upon which this works. I really question how well-equipped we as humans are to understand the concept of
infinity, or eternity. The idea itself seems simple enough - something that doesn’t have an end, or a beginning for that matter. It just extends out into… space or something. Okay, alright. That’s a
straightforward enough definition, but when you begin to visualize or even attempt to grasp this extremely alien notion, your brain will asphyxiate itself with the contortions it’s done. The idea
itself is breathtaking though (hence the asphyxiation) and utterly astounding the more I think about it; how insignificant and small we are in a place and time containing both ephemeral traits and
unfathomable infinities (because yes, there is a hierarchy of infinities).
Coming back to my second point, it’s interesting that this theorem can only be proven with the axiom of choice. It just goes to show that anything, again in theory, is possible based on where you
begin. I’m going to perhaps make a far-fetched connection and refer back to Poincaré’s description of chaos, and the unpredictability of where an event will lead you based on your initial conditions.
It all ultimately comes down to how you define different ideas and concepts in your life. The disputes and such that happen in society all lie in discrepancies between how people define things. Gee,
even just looking back at the past couple of years, I’ve had to rework my own “life dictionary”; I think I basically destroyed whatever I had before and started from scratch (there’s definitely no
conservation of energy here).
It’s enlightening looking back and seeing how far (? Can one even measure progress like this anymore?) I’ve come; how much I (or so I think) have learned. I wonder, with this new foundation, where
I’ll end up next…
Maybe I’ll be able to make two steaks from one someday.
Apr 8, 2014
Growing up, I’ve never quite envied anyone.
In our early lives, we are only occasionally a part of life-altering choices, and many of those were oftentimes already decided on our behalf. The playing field was level. My world was small, and
everything seemed fair.
We went to school, we had similar classes, and we all had the weekends off. We chose which subjects to enrol and decided which instrument to learn, but these were roughly equal choices and I was
picking my favourite. Choices seemed superficial and inconsequential.
But then somewhat suddenly, I realized that the playing field was no longer level. In fact, it hadn’t ever been; I just couldn’t see far enough to see the curvature of the rolling hills. It was only
recently that I saw how our small choices shaped our day-to-day routines, dictate the types of freedom we enjoy, and leave us with particular responsibilities to bear. I gradually realized that I
wasn’t able to do whatever I wanted to do, unlike how I felt I could when my world was small.
Nowadays, I catch myself feeling envious of others. Envious of people that can start their days with a cup of tea while still in bed, or of people who are in close-distance relationships, or of
people who get to work outside. People who are sad and thus are able to create beautiful art. People who are hungry and thus are able to appreciate food so much more than I appreciate my weekly
Whopper Wednesday meal.
I’m envious of others, but I think that’s okay. I don’t wish I were someone else, or that I had something else. I merely wish that I could feel what others feel.
I have a choice now, in the present, of whether to have my future self will envy my past self more, or to have my present self envy my future self more.
Apr 8, 2014
When The Sun Comes Out, This Synthetic Cloud Self-Inflates
The shape of the umbrella was also inspired by nature. “The power of solar energy inspired us to design this synthetic cloud,” Widdershoven says. The form is aerodynamic, so the umbrella won’t
blow over in strong winds, and the fabric is waterproof, so it can last as long as possible.
More> Co.Exist
(via fastcodesign)
Apr 5, 2014
Guys I want to introduce you to Dinosaur Comics. It is totally awesome because every single comic has the same art and it’s really just a daily blog post by a rambling nerd but OH MAN IT IS SO FUNNY
and its topics are sometimes enlightening.
Apr 3, 2014
봄이와 봄이와 그대와 함께라 좋아라
김현철 / 봄이와 (feat.롤러코스터)
(via chri5ungeun)
Apr 1, 2014
Today I discovered that Logic Pro X comes with pop music loops galore. Mixed them into a song.
Apr 1, 2014
I can’t write on Tumblr because it doesn’t save my posts as I go.
Lost a post that I was writing on the subway this morning. | {"url":"http://josephers.tumblr.com/","timestamp":"2014-04-20T13:29:53Z","content_type":null,"content_length":"41014","record_id":"<urn:uuid:d03e3441-db88-469b-b0cf-c5b48a066408>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Encryption Pseudorandomness Explorer
Encryption Pseudorandomness Explorer
(Summary of the Original Written Report)
Exploratory Object Designer: Neil Marshall
Date: April 2007
There are many encryption methods students can implement with the programming skills learned from the first year MICA course. In class we have implemented encrypting numbers with RSA using a simple
case of 3 digit primes. The question then becomes, “How good is this encryption we have implemented?” Obviously a mathematician with a computer could break such encryption using brute force methods,
but what if that mathematician didn’t have a computer or knowledge of the cryptosystem?
In the early days of code breaking, mathematicians used the distributions of alphabetical (and other) characters in text as a means of breaking the code. The uneven distribution of letters in any
language gives mathematicians a tool to tease a signal out of what is apparently noise. Thus it is appropriate to ask, “How random would encrypted texts appear using methods we can be expected to
implement having taken the course?”
I selected three ciphers: RSA, Hill Cipher and Substitution Cipher. In the case of RSA, I implemented the ability to vary the size of the primes used to create the public (and private) encryption key
(2-digit, 3-digit or 4-digit). I predicted that as you increase the size of the primes, the apparent “randomness” will increase. For Hill Cipher I implemented a choice of either a 2x2 Matrix or a 3x3
Matrix as part of the private encryption key, and I again predicted that the larger 3x3 will be more “random” than the 2x2. In the case of Substitution Cipher, I expect it will not seem random at
all. I predict that RSA, being a “modern” encryption system will seem more “random” than Hill Cipher.
I predict that changing the text encrypted will have no impact on the results of the study as long as the text is sufficiently long (say about 40 words).
RSA encryption using 2-digit primes is “more random” than plain English text but only marginally so. Increasing the size of the primes does increase the “randomness” of the ciphertext.
Hill Cipher in a 2x2 matrix already appears quite “random.” Though in some cases a 3x3 matrix does appear more “random,” other times using the tool it is simply too difficult to determine if this is
the case. This is certainly a limitation of this approach.
Changing the encrypted text between different excerpts of several paragraphs of written English, as expected, did not influence the results.
As expected, Substitution Cipher appears to have the same distribution as plain English text. Surprisingly though, RSA encryption (at least in the form we implement in class) is less “random” than
Hill Cipher. However this may be due to the fact that RSA is a public key encryption system whereas Hill cipher is a private key encryption system based on matrices. Perhaps there is a trade-off in
terms of accessibility versus security.
Written by Neil Marshall, July 2012
Return to Student Learning Objects | {"url":"http://www.brocku.ca/mathematics-science/departments-and-centres/mathematics/undergraduate-programs/mica/student-learning-objects/encryption-pseudora","timestamp":"2014-04-20T06:27:33Z","content_type":null,"content_length":"49705","record_id":"<urn:uuid:6ec207a7-59b8-4de3-aeb5-9804b7f762af>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jacinto City, TX Algebra 2 Tutor
Find a Jacinto City, TX Algebra 2 Tutor
I am currently a CRLA certified level 3. I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for
groups of students to give them extra practice on their course material and help to answer any question...
7 Subjects: including algebra 2, calculus, statistics, algebra 1
...I am also available to assist with math and reading comprehension when preparing for standardized tests including the ACT, SAT, and GRE. I provide fundamental skill training and practice that
will ensure that students grasp the concepts, not just memorize the material. I also perform assessments before and after tutoring to measure the progress of the student.
18 Subjects: including algebra 2, chemistry, reading, biology
...In addition, I tutored the Air Force Academy football team in calculus. I have also tutored high school students in various locations. My reputation at the Air Force Academy was the top
calculus instructor.
11 Subjects: including algebra 2, calculus, geometry, statistics
...I can help in the math related fields. I am interested in helping in AP and SAT related math topics. I am qualified (PhD Rice engineering. 800 Math SAT 1, 800 Math SAT2, 800 Math GRE). I know
how to tackle these tests & can pass this information on to you.
17 Subjects: including algebra 2, calculus, physics, geometry
...I enjoy teaching a great deal and am well versed in teaching in different styles to fit the student. I like to focus on building a student's confidence and understanding over memorization. I
taught algebra and geometry as a high school teacher.
34 Subjects: including algebra 2, chemistry, reading, English
Related Jacinto City, TX Tutors
Jacinto City, TX Accounting Tutors
Jacinto City, TX ACT Tutors
Jacinto City, TX Algebra Tutors
Jacinto City, TX Algebra 2 Tutors
Jacinto City, TX Calculus Tutors
Jacinto City, TX Geometry Tutors
Jacinto City, TX Math Tutors
Jacinto City, TX Prealgebra Tutors
Jacinto City, TX Precalculus Tutors
Jacinto City, TX SAT Tutors
Jacinto City, TX SAT Math Tutors
Jacinto City, TX Science Tutors
Jacinto City, TX Statistics Tutors
Jacinto City, TX Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Bellaire, TX algebra 2 Tutors
Bunker Hill Village, TX algebra 2 Tutors
Clear Lake Shores, TX algebra 2 Tutors
Crosby, TX algebra 2 Tutors
Dayton, TX algebra 2 Tutors
El Lago, TX algebra 2 Tutors
Galena Park algebra 2 Tutors
Greens Bayou, TX algebra 2 Tutors
Hedwig Village, TX algebra 2 Tutors
Highlands, TX algebra 2 Tutors
Hilshire Village, TX algebra 2 Tutors
Morgans Point, TX algebra 2 Tutors
Nassau Bay, TX algebra 2 Tutors
North Houston algebra 2 Tutors
Shoreacres, TX algebra 2 Tutors | {"url":"http://www.purplemath.com/Jacinto_City_TX_algebra_2_tutors.php","timestamp":"2014-04-19T02:29:16Z","content_type":null,"content_length":"24369","record_id":"<urn:uuid:7cac492d-a008-4ae2-9f61-1d9186cda746>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
can any one teach me the numbering?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/512094afe4b06821731cd342","timestamp":"2014-04-21T07:56:48Z","content_type":null,"content_length":"55511","record_id":"<urn:uuid:f148df5e-69fb-42b3-b277-8e143f35e959>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstracts: the pcf theory
Authors: Thomas Jech and Saharon Shelah
Title: Possible pcf algebras
J. Symb. Logic 61 (1996)
Abstract: We prove the existence of a structure on countable ordinals that is relevant to the singular cardinals problem.
Author: Thomas Jech
Title: Singular cardinals and the pcf theory
Bull. Symb. Logic 1 (1995)
Abstract: An expository article on the singular cardinals problem.
Author: Thomas Jech
Title: A variation on a theorem of Galvin and Hajnal
Bull. London Math. Soc. 25 (1993)
Abstract: Using methods of Shelah's pcf theory, we prove an upper bound on the length of cofinal scales in reduced products, using the Galvin-Hajnal norm.
Author: Thomas Jech
Title: Singular cardinal problem: Shelah's theorem on 2 to aleph_omega
Bull. London Math. Soc. 24 (1992)
Abstract: This is an expository paper giving a complete proof of a celebrated theorem of Saharon Shelah.
Authors: Thomas Jech and Saharon Shelah
Title: On a conjecture of Tarski on products of cardinals
Proceedings AMS 112 (1991)
Abstract: We look at an old conjecture of A. Tarski on cardinal arithmetic and show that if a counterexample exists, then there exists one of length omega_1 + omega. | {"url":"http://www.math.psu.edu/jech/preprints/abs-pcf.html","timestamp":"2014-04-20T04:07:24Z","content_type":null,"content_length":"1663","record_id":"<urn:uuid:503bc63a-4139-4600-b28c-f385b78919fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
limit refresher
May 12th 2009, 06:09 AM
limit refresher
Could some please explain how to take this limit?
lim x ->2/3 1/(3x-2)^3
and a similar one
lim x ->2/3 7/(3x-2)^2
May 12th 2009, 06:23 AM
lim x ->2/3 1/(3x-2)^3 yields 1/0 so the limit could be inf,-inf or DNE
So consider onesided limits
lim x ->2/3+ ( 1/(3x-2)^3) = infinity since if x > 2/3 3x-2 > 0 and
(3x-2)^3 > 0
lim x ->2/3- (1/(3x-2)^3) = neg infinity since if x < 2/3 3x-2 < 0 and
(3x-2)^3 < 0
Therefore lim x ->2/3 1/(3x-2)^3 DNE
Note if we had (3x-2)^2 then both limits would be infinity and the limit would be infinity
May 12th 2009, 06:25 AM
And the second one is infinity from both sides so the limit is positive infinity correct?
Ooops just saw the last line of your post. Thanks.
May 12th 2009, 06:26 AM
you got it-- | {"url":"http://mathhelpforum.com/calculus/88683-limit-refresher-print.html","timestamp":"2014-04-19T23:38:48Z","content_type":null,"content_length":"4506","record_id":"<urn:uuid:c1591e9e-f706-4399-b08b-c1dbe3639333>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
es over finite fields
Results 1 - 10 of 37
, 1998
"... Since the group of an elliptic curve defined over a finite field F_q... The purpose of this paper is to describe how one can search for suitable elliptic curves with random coefficients using
Schoof's algorithm. We treat the important special case of characteristic 2, where one has certain simplific ..."
Cited by 17 (1 self)
Add to MetaCart
Since the group of an elliptic curve defined over a finite field F_q... The purpose of this paper is to describe how one can search for suitable elliptic curves with random coefficients using
Schoof's algorithm. We treat the important special case of characteristic 2, where one has certain simplifications in some of the algorithms.
- in ASIACRYPT ’98 Springer LNCS 1514 , 2007
"... Abstract. Classicaly, the Hilbert class polynomial P ∆ ∈ Z[X] of an imaginary quadratic discriminant ∆ is computed using complex analytic techniques. In 2002, Couveignes and Henocq [5] suggested
a p-adic algorithm to compute P∆. Unlike the complex analytic method, it does not suffer from problems c ..."
Cited by 14 (4 self)
Add to MetaCart
Abstract. Classicaly, the Hilbert class polynomial P ∆ ∈ Z[X] of an imaginary quadratic discriminant ∆ is computed using complex analytic techniques. In 2002, Couveignes and Henocq [5] suggested a
p-adic algorithm to compute P∆. Unlike the complex analytic method, it does not suffer from problems caused by rounding errors. In this paper we complete the outline given in [5] and we prove that,
if the Generalized Riemann Hypothesis holds true, the expected runtime of the p-adic algorithm is eO(|∆|). We illustrate the algorithm by computing the polynomial P−639 using a 643-adic algorithm. 1.
, 2005
"... The Sato-Tate conjecture asserts that given an elliptic curve without complex multiplication, the primes whose Frobenius elements have their trace in a given interval (2α √ p, 2β √ p) 1 − t2 dt.
We prove that this conjecture is true on average in a have density given by 2 π more general setting. ..."
Cited by 13 (4 self)
Add to MetaCart
The Sato-Tate conjecture asserts that given an elliptic curve without complex multiplication, the primes whose Frobenius elements have their trace in a given interval (2α √ p, 2β √ p) 1 − t2 dt. We
prove that this conjecture is true on average in a have density given by 2 π more general setting.
- Ann. Inst. Fourier (Grenoble
"... Abstract. We give a complete answer to the question of which polynomials occur as the characteristic polynomials of Frobenius for genus-2 curves over finite fields. 1. ..."
Cited by 9 (2 self)
Add to MetaCart
Abstract. We give a complete answer to the question of which polynomials occur as the characteristic polynomials of Frobenius for genus-2 curves over finite fields. 1.
- London Math. Soc., Journal of Computational Mathematics , 2005
"... The ℓ th modular polynomial, φℓ(x,y), parameterizes pairs of elliptic curves with an isogeny of degree ℓ between them. Modular polynomials provide the defining equations for modular curves, and
are useful in many different aspects of computational number theory and cryptography. For example, computa ..."
Cited by 7 (3 self)
Add to MetaCart
The ℓ th modular polynomial, φℓ(x,y), parameterizes pairs of elliptic curves with an isogeny of degree ℓ between them. Modular polynomials provide the defining equations for modular curves, and are
useful in many different aspects of computational number theory and cryptography. For example, computations with modular polynomials have been used to speed elliptic curve point-counting
- J. Number Theory , 2000
"... Let A be a supersingular abelian variety over a finite field k which is k-isogenous to a power of a simple abelian variety over k. Write the characteristic polynomial of the Frobenius
endomorphism of A relative to k as f = g e for a monic irreducible polynomial g and a positive integer e. We show th ..."
Cited by 7 (1 self)
Add to MetaCart
Let A be a supersingular abelian variety over a finite field k which is k-isogenous to a power of a simple abelian variety over k. Write the characteristic polynomial of the Frobenius endomorphism of
A relative to k as f = g e for a monic irreducible polynomial g and a positive integer e. We show that the group of k-rational points A(k) onAis isomorphic to (Z g(1) Z) e unless A's simple component
is of dimension 1 or 2, in which case we prove that A(k) is isomorphic to (Z g(1) Z) a _ (Z (g(1) 2) Z_Z 2Z) b for some non-negative integers a, b with a+b=e. In particular, if the characteristic of
k is 2 or A is simple of dimension greater than 2, then A(k)$(Z g(1) Z) e.
, 2005
"... Abstract. We consider some problems of analytic number theory for elliptic curves which can be considered as analogues of classical questions around the distribution of primes in arithmetic
progressions to large moduli, and to the question of twin primes. This leads to some local results on the dist ..."
Cited by 6 (0 self)
Add to MetaCart
Abstract. We consider some problems of analytic number theory for elliptic curves which can be considered as analogues of classical questions around the distribution of primes in arithmetic
progressions to large moduli, and to the question of twin primes. This leads to some local results on the distribution of the group structures of elliptic curves defined over a prime finite field,
exhibiting an interesting dichotomy for the occurence of the possible groups. (This paper was initially written in 2000/01, but after a four year wait for a referee report, it is now withdrawn and
deposited in the arXiv). Contents
- Bull. AMS , 1995
"... Abstract. In this expository paper we show how one can, in a uniform way, calculate the weight distributions of some well-known binary cyclic codes. The codes are related to certain families of
curves, and the weight distributions are related to the distribution of the number of rational points on t ..."
Cited by 5 (0 self)
Add to MetaCart
Abstract. In this expository paper we show how one can, in a uniform way, calculate the weight distributions of some well-known binary cyclic codes. The codes are related to certain families of
curves, and the weight distributions are related to the distribution of the number of rational points on the curves. 1.
- Acta Arith
"... Abstract. Let Fq (q = pr) be a finite field. In this note the number of irreducible polynomials of degree m in Fq[x] with prescribed trace and norm coefficients is calculated in certain special
cases and general bounds for that number are obtained. As a corollary, sharp bounds are obtained for the n ..."
Cited by 4 (2 self)
Add to MetaCart
Abstract. Let Fq (q = pr) be a finite field. In this note the number of irreducible polynomials of degree m in Fq[x] with prescribed trace and norm coefficients is calculated in certain special cases
and general bounds for that number are obtained. As a corollary, sharp bounds are obtained for the number of elements in Fq3 with prescribed trace and norm over Fq improving the estimates by Katz.
Next, simple necessary and sufficient conditions are given when a Kloosterman sum over F2r is divisible by three. This result generalizes the earlier result by Charpin, Helleseth, and Zinoviev
obtained only in the case r odd. Finally, a new elementary proof for the value distribution of a Kloosterman sum over the field F3r, obtained by Katz and Livne, is given. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=808836","timestamp":"2014-04-20T19:20:13Z","content_type":null,"content_length":"33536","record_id":"<urn:uuid:8ce7a474-0fbe-4efe-a2ff-de8aeefb6c6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portability GHC only
Stability experimental
Maintainer ekmett@gmail.com
Allows the choice of AD Mode to be specified at the term level for benchmarking or more complicated usage patterns.
Exposed Types
data Direction Source
Bounded Direction
Enum Direction
Eq Direction
Ord Direction
Read Direction
Show Direction
Ix Direction
class Lifted t => Mode t whereSource
Mode Id
Lifted Forward => Mode Forward
Lifted Reverse => Mode Reverse
Lifted Tower => Mode Tower
Mode f => Mode (AD f)
(Mode f, Mode g) => Mode (ComposeMode f g)
newtype AD f a Source
AD serves as a common wrapper for different Mode instances, exposing a traditional numerical tower. Universal quantification is used to limit the actions in user code to machinery that will return
the same answers under all AD modes, allowing us to use modes interchangeably as both the type level "brand" and dictionary, providing a common API.
Primal f => Primal (AD f)
Mode f => Mode (AD f)
Lifted f => Lifted (AD f)
Var (AD Reverse)
Iso (f a) (AD f a)
(Num a, Lifted f, Bounded a) => Bounded (AD f a)
(Num a, Lifted f, Enum a) => Enum (AD f a)
(Num a, Lifted f, Eq a) => Eq (AD f a)
(Lifted f, Floating a) => Floating (AD f a)
(Lifted f, Fractional a) => Fractional (AD f a)
(Lifted f, Num a) => Num (AD f a)
(Num a, Lifted f, Ord a) => Ord (AD f a)
(Lifted f, Real a) => Real (AD f a)
(Lifted f, RealFloat a) => RealFloat (AD f a)
(Lifted f, RealFrac a) => RealFrac (AD f a)
(Lifted f, Show a) => Show (AD f a) | {"url":"http://hackage.haskell.org/package/ad-0.22/docs/Numeric-AD-Directed.html","timestamp":"2014-04-20T22:10:33Z","content_type":null,"content_length":"23731","record_id":"<urn:uuid:ef0fe020-1c29-456f-a5ee-bca8b2a6a088>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Physical Oceanography : Chapter 10 - Geostrophic Currents - Comments on Geostrophic Currents
Chapter 10 - Geostrophic Currents
10.6 Comments on Geostrophic Currents
Now that we know how to calculate geostrophic currents from hydrographic data, let's consider some of the limitations of the theory and techniques.
Converting Relative Velocity to Velocity
Hydrographic data give geostrophic currents relative to geostrophic currents at some reference level. How can we convert the relative geostrophic velocities to velocities relative to the Earth?
1. Assume a Level of no Motion: Traditionally, oceanographers assume there is a level of no motion, sometimes called a reference surface, roughly 2,000m below the surface. This is the assumption
used to derive the currents in Table 10.4. Currents are assumed to be zero at this depth, and relative currents are integrated up to the surface and down to the bottom to obtain current velocity
as a function of depth. There is some experimental evidence that such a level exists on average for mean currents (see for example, Defant, 1961: 492).
Defant recommends choosing a reference level where the current shear in the vertical is smallest. This is usually near 2 km. This leads to useful maps of surface currents because surface currents
tend to be faster than deeper currents. Figure 10.9 shows the geopotential anomaly and surface currents in the Pacific relative to the 1,000 decibar pressure level. Compare this with Figure 10.5.
Figure 10.9. Mean depthgeopotential anomaly of the Pacific Ocean relative to the 1,000dbar surface based on 36,356 observations. Height is in geopotential centimeters. If the velocity at 1,000dbar
were zero, the map would be the surface topography of the Pacific. From Wyrtki (1974).
2. Use known currents: The known currents could be measured by current meters or by satellite altimetry. Problems arise if the currents are not measured at the same time as the hydrographic data.
For example, the hydrographic data may have been collected over a period of months to decades, while the currents may have been measured over a period of only a few months. Hence, the hydrography
may not be consistent with the current measurements. Sometimes currents and hydrographic data are measured at nearly the same time (Figure 10.10). In this example, currents were measured
continuously by moored current meters (points) in a deep western boundary currents and from CTD data taken just after the current meters were deployed and just before they were recovered (smooth
curves). The solid line is the current assuming a level of no motion at 2,000 m, the dotted line is the current adjusted using the current meter observations smoothed for various intervals before
or after the CTD casts.
Figure 10.10 Current meter measurements can be used with CTD measurements to determine current as a function of depth avoiding the need for assuming a depth of no motion. Solid line: profile
assuming a depth of no motion at 2000 decibars. Dashed line: profile adjusted to agree with currents measured by current meters 1|7 days before the CTD measurements. (Plots from Tom Whitworth,
Texas A&M University)
3. Use Conservation Equations: Lines of hydrographic stations across a strait or an ocean basin may be used with conservation of mass and salt to calculate currents. This is an example of an
inverse problem (see Wunsch, 1996 on how inverse methods are used in oceanography). See Mercier et al. (2003) for a description of how they determined the circulation in the upper layers of the
eastern basins of the South Atlantic using hydrographic data from the World Ocean Circulation Experiment and direct measurments of current in a box model constrained by inverse theory.
Disadvantage of Calculating Currents from Hydrographic Data
Currents calculated from hydrographic data have provided important insights into the circulation of the ocean over the decades from the turn of the 20th century to the present. Nevertheless, it is
important to review the limitations of the technique.
1. Hydrographic data can be used to calculate only the current relative a current at another level.
2. The assumption of a level of no motion may be suitable in the deep ocean, but it is usually not a useful assumption when the water is shallow such as over the continental shelf.
3. Geostrophic currents cannot be calculated from hydrographic stations that are close together. Stations must be tens of kilometers apart.
Limitations of the Geostrophic Equations
We began this section by showing that the geostrophic balance applies with good accuracy to flows that exceed a few tens of kilometers in extent and with periods greater than a few days. The balance
cannot, however, be perfect. If it were, the flow in the ocean would never change because the balance ignores any acceleration of the flow. The important limitations of the geostrophic assumption
1. Geostrophic currents cannot evolve with time because the balance ingnores acceleration of the flow. Acceleration dominates if the horizontal dimensions are less than roughly 50 km and times are
les than a few days. Acceleration in negligible, but not zero, over longer distances and times.
2. The geostrophic balance does not apply near the equator where the Coriolis force goes to zero because sin j → 0.
3. The geostrophic balance ignores the influence of friction.
Strub et al. (1997) showed that currents calculated from satellite altimeter measurements of seasurface slope have an accuracy of ±3–5 cm/s. Uchida, Imawaki, and Hu (1998) compared currents measured
by drifters in the Kuroshio with currents calculated from satellite altimeter measurements of sea-surface slope assuming geostrophic balance. Using slopes over distances of 12.5 km, they found the
difference between the two measurements was ±16 cm/s for currents up to 150 cm/s, or about 10%. Johns, Watts, and Rossby (1989) measured the velocity of the Gulf Stream northeast of Cape Hatteras and
compared the measurements with velocity calculated from hydrographic data assuming geostrophic balance. They found that the measured velocity in the core of the stream, at depths less than 500 m, was
10–25 cm/s faster than the velocity calculated from the geostrophic equations using measured velocities at a depth of 2000 m. The maximum velocity in the core was greater than 150 cm/s, so the error
was ≈10%. When they added the influence of the curvature of the Gulf Stream, which adds an acceleration term to the geostrophic equations, the difference in the calculated and observed velocity
dropped to less than 5–10 cm/s (≈5%).
chapter contents | {"url":"http://oceanworld.tamu.edu/resources/ocng_textbook/chapter10/chapter10_06.htm","timestamp":"2014-04-21T14:40:11Z","content_type":null,"content_length":"12172","record_id":"<urn:uuid:4227519a-471e-4b25-b71b-cbef35c8a2bc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - finding the sum of a series
so your sayin that it is possible in some cases to derive a formula for a series besides a geometric series but in most cases its not possible?
Yes. saltydog's given numerous examples. Telescoping series is another handy one, so is recognizing your sum as a known power series, rearranging terms sometimes helps, and more .
, and about that series it was suppose to be
[tex]\sum_{n=1}^{\infty}\frac {(-2)^{-n}}{n+1}[/tex]
like would i be able to derive a formula for this series? or how would i be able to find the sum of such a series?
Relate it to the power series for log(1+x). | {"url":"http://www.physicsforums.com/showpost.php?p=600830&postcount=5","timestamp":"2014-04-17T03:50:52Z","content_type":null,"content_length":"8350","record_id":"<urn:uuid:bcd28358-4bd6-4412-847b-ce47c31c3da4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
ecimal point
In Greek version of Office "," is decimal point and not ".". I would like to change this setting through code! Any ideas?
I've searched in vba excel help but the only thing I found was
If you like these VB formatting tags please consider sponsoring the author in support of injured Royal Marines
and it is referred as read-only!
Thank you in advance for your time and suggestions! | {"url":"http://www.knowexcel.com/view/347546-decimal-points.html","timestamp":"2014-04-17T03:49:29Z","content_type":null,"content_length":"53954","record_id":"<urn:uuid:3c5118ed-2667-4802-9f5f-6cd37e0a469e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implicit Differentiation
October 29th 2009, 06:26 PM #1
Jul 2007
Implicit Differentiation
Find dydx in terms of x and y if arcsin((x^5)(y))=(x)(y^5)
I tried to do this problem, and I got (y^5((1-(x^5y)^2))^(1/2)-(5yx^4))/(x^5-5xy^4(((1-x^5y)^2))^(1/2)), which is incorrect.
Help, please???
I supose you mean to find dy/dx in...etc.
$Arcsin\, (x^5y)=xy^5 \Longrightarrow \frac{5x^4y}{\sqrt{1-x^{10}y^2}}\,dx+\frac{x^5}{\sqrt{1-x^{10}y^2}}\,dy=y^5\,dx+5xy^4\,dy$
I bet you'll be able to continue from here.
Yeah, I got it. I actually had the right answer. It was an online assignment, and I put one parenthesis in the wrong spot when I was entering it, thus changing the entire answer. Thanks!
October 29th 2009, 08:10 PM #2
Oct 2009
October 30th 2009, 12:27 PM #3
Jul 2007 | {"url":"http://mathhelpforum.com/calculus/111280-implicit-differentiation.html","timestamp":"2014-04-21T05:56:30Z","content_type":null,"content_length":"34695","record_id":"<urn:uuid:38ae8fac-852c-42dd-8e41-f0ffe7b75fe8>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
6.6 The Gradient and Directional Derivatives
Home | 18.013A | Chapter 6 Tools Glossary Index Up Previous Next
6.6 The Gradient and Directional Derivatives
We have seen above that the 2-vector
$( ∂ f ∂ x , ∂ f ∂ y ) ( x 0 , y 0 )$
is called the gradient of $f$ at argument $( x 0 , y 0 )$ and that it is generally written as $grad ⟶ f$ or $∇ ⟶ f$ .
The equation for the tangent plane to the surface defined by $f$ at $( x 0 , y 0 )$ can be described in terms of the gradient as
$f L ( x , y ) = f ( x 0 , y 0 ) + ( grad ⟶ f ) · ( r ⟶ − r 0 ⟶ )$
From this equation we can deduce that a normal to this tangent plane is in the direction in the three dimensional space whose coordinates represent $( x , y f L )$ of $( grad ⟶ f , − 1
)$ .
The projection of this normal into the $( x , y )$ plane is the vector $grad ⟶ f$ itself.
Thus, $grad ⟶ f$ is in the direction of the projection of the normal to the tangent plane to $f$ at $( x 0 , y 0 )$ into the $( x , y )$ plane.
This relationship can be seen in the applet below.
The symbol $∇ ⟶$ is called "del". It is a strange thing called a vector operator. By itself it makes about as much sense as the noise of one hand clapping. But put next to something
that the derivatives in it can act on, it makes perfect sense.
The equation for the linear approximation $f L$ to $f$ at $( x 0 , y 0 )$ allows us to compute the directional derivatives of $f$ at that point.
Suppose we seek the directional derivative of $f$ in a direction defined by unit vector $u ^$ . Then if $r ⟶ − r 0 ⟶ = s u ^$ , the directional derivative of $f$ (which
is close to $f L$ near $( x 0 , y 0 )$ ) in that direction is the derivative of $f L$ with respect to $s$ .
But we have
$f L ( x , y ) = f ( x 0 , y 0 ) + ( grad ⟶ f ) · ( r ⟶ − r 0 ⟶ ) = f ( x 0 , y 0 ) + ( grad ⟶ f ) · s u ^$
so that $f L$ 's derivative with respect to $s$ , the directional derivative of $f$ in the direction of $u ^$, is given by $( grad ⟶ f ) · u ^$ .
If $f$ were a function of more variables, say $x , y , z , t , …$ we can use exactly the same approach to describe changes, the only difference being that the tangent plane becomes the tangent
hyper-plane, and there are partial derivatives in more directions. The conclusions are exactly the same:
1. The gradient vector is in the direction of the projection of the normal to the tangent hyper-plane into the hyper-plane of coordinates.
2. The directional derivative in any direction is given by the dot product of a unit vector in that direction with the gradient vector.
3. The component of the gradient vector in the direction of any axis is the partial derivative of $f$ with respect to the corresponding distance variable in that direction.
4. That partial derivative is the ordinary derivative with respect to that variable assuming all the other variables remain constant.
The upshot of all this is that the gradient vector, whose components can be computed by ordinary one dimensional differentiation for a field in any number of dimensions, is all you need to compute
its directional derivative in any direction.
If these concepts seem strange to you, play around with the applet below until you feel comfortable with them. The graph on the left shows the field represented on the right restricted to the cutting
half plane pictured on the right. Its linear approximation at the edge of that half plane is also shown. The slope of that linear approximation is the directional derivative of the field at that
edge, in the direction of the cutting half plane.
(Of course what is shown is not exactly a half plane, being rather a rectangle. The edge of it of interest here is the one that is the axis of rotation in the third slider.) | {"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/MathML/chapter06/section06.xhtml","timestamp":"2014-04-18T18:15:22Z","content_type":null,"content_length":"18621","record_id":"<urn:uuid:ed0ba8f2-ffea-4a00-9403-cbbb2197c866>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Index of a differential operator between trivial bundles.
up vote 5 down vote favorite
Let $M$ be a closed parallelizable manifold and $D: \Gamma(E) \to \Gamma(F)$ an elliptic differential operator between trivial vector bundles $E,F \to M$. The Atiyah Singer index theorem implies that
the index of $D$ is zero. Is there a way to prove this with less machinery?
By the way, this question is a cross-post from math.SE.
Actually I think I made a mistake in my reasoning, which was that the symbol class $[\sigma_D] \in K(TM)$ is zero (I actually didn't need triviality of $TM$). Viewing $K$-theory as sequences of
bundles the symbol class is $$ 0 \to \pi^* E \stackrel{\sigma_D}{\to} \pi^* F\to 0 $$ where $\pi: TM \to M$. Now if $TM^+$ is the one-point compactification of $TM$ then the isomorphism $K(TM) \to \
tilde K(TM^+)$ is given by extending the sequence to $TM^+$. I thought that the extension would have to involve trivial bundles as well, from which it will follow that $\sigma_D = 0$ since for a
compact space any sequences involving trivial bundles is zero in $\tilde K$. But now I think this extension need not involve trivial bundles: $K(\mathbb R^2) \simeq \tilde K(S^2) = \mathbb Z$. But
every bundle over $\mathbb R^2$ is trivial so my argument would give $K(\mathbb R^2) = 0$.
index-theory dg.differential-geometry differential-operators
1 I don't think you need $E$ or $F$ to be trivial: if $M$ is parallelizable then the Todd class is $0$, and that's enough to kill the index. – Paul Siegel Nov 16 '12 at 5:24
@Paul: that is not true; the Todd class is $1$, and there is enough room for nontrivial index. – Johannes Ebert Nov 16 '12 at 9:22
How does this follow from Atiyah-Singer? I am aware of the statement that a differential operator on an odd-dimensional manifold has zero index (without triviality of the bundle). – Johannes Ebert
Nov 16 '12 at 9:28
I looked it up in Atiyah-Singer IoEO III. There is Proposition 2.17, which has the effect that the index of a geometric operator (associated with a reduction of the sructure group of the tangent
bundle) only depends on $TM$ and the bundles (and it then zero in your situation). I section 9, they prove that any pseudodifferential operator acting on trivial bundles of rank at most half the
dimension of $M$ has trivial index (this is false if the rank becomes larger). – Johannes Ebert Nov 16 '12 at 9:59
@Johannes, Paul: Now I think I was actually wrong-- I edited the question with my reasoning and will probably delete it soon. – Eric O. Korman Nov 16 '12 at 14:10
show 2 more comments
1 Answer
active oldest votes
The result is wrong; the case of a point as base manifold creates counterexamples. Here is a less trivial construction in dimension $2$:
Let $M$ be a manifold and $V \to M$ be any vector bundle. There is an elliptic differential operator $D$ of order $2$ on $V$, which is self-adjoint and has thus index $0$: take a
connection $\nabla$ on $V$ and put $D=\nabla^{\ast} \nabla$ (this is a Laplace type operator).
Now let $M= T^2$ and let $W \to T^2 $ be a holomorphic line bundle of degree $1$. By Riemann-Roch, the operator $\bar{\partial}_W$ has index $1$; and it goes from sections of $W$ to
sections of $W$, since the canonical line bundle of a torus is trivial. Therefore one can form the composite $P:=(\bar{\partial}_W)^2$, and $P$ has index $2$.
up vote 3 down Now let $V$ be a complex vector bundle such that $V \oplus W$ is trivial; with the operator $D$ constructed above. Consider the operator $D \oplus P$; this is an order $2$ elliptic
vote accepted operator on the trivial vector bundle over a parallelizable manifold and has index $2$.
I do not see how to produce an order $1$ operator of index $1$, though.
The vanishing theorems in Aityah-Singer, IoEO III, are quite optimal. My construction does not work in odd dimensions; and it is clear that the resulting trivial vector bundle has
dimension at least $2$. If the dimension of the trivial vector bundle is too small, each (pseudo)differerential operator will have index $0$, as proven by Atiyah-Singer.
Thanks for the nice example! – Eric O. Korman Nov 16 '12 at 19:51
add comment
Not the answer you're looking for? Browse other questions tagged index-theory dg.differential-geometry differential-operators or ask your own question. | {"url":"http://mathoverflow.net/questions/112546/index-of-a-differential-operator-between-trivial-bundles?answertab=active","timestamp":"2014-04-16T22:09:38Z","content_type":null,"content_length":"60153","record_id":"<urn:uuid:08e202e2-68d5-4f9e-9b2b-ad910a9b1f8e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hammond, IN Algebra 2 Tutor
Find a Hammond, IN Algebra 2 Tutor
...I have worked in Special Education as an instructional assistant for the past 10 years. In that time I have had the opportunity to work with approximately 10-15 students with Asperger's or
Autism. I have worked in Special Education as an instructional assistant for the past 10 years.
24 Subjects: including algebra 2, chemistry, calculus, geometry
My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home.
My passion for education comes through in my teaching methods, as I believe that all students have the a...
34 Subjects: including algebra 2, reading, writing, statistics
...I've read that everything in the universe can be measured geometrically, which is not the case for any other type of math. So I guess you can say it is very important. Geometry is all about
understanding space, shapes, and structures using different math concepts to measure and problem solve.
22 Subjects: including algebra 2, chemistry, reading, physics
...I fell only a few credits short of a major in English literature and language, and also had the basic medical sciences (physics, chemistry, organic chemistry, etc.). Originally from Detroit,
MI, I attended public school through primary and secondary schooling. Since then, I have been a resident ...
45 Subjects: including algebra 2, English, chemistry, writing
...In my current role I work in a problem solving environment where I pick up issues and deliver solutions by working with different groups and communicating to management level. This type of
work helped me to communicate better and to make people understand at all levels. My goal in teaching is to provide the nurturing environment that allows children to learn and grow
16 Subjects: including algebra 2, chemistry, physics, calculus
Related Hammond, IN Tutors
Hammond, IN Accounting Tutors
Hammond, IN ACT Tutors
Hammond, IN Algebra Tutors
Hammond, IN Algebra 2 Tutors
Hammond, IN Calculus Tutors
Hammond, IN Geometry Tutors
Hammond, IN Math Tutors
Hammond, IN Prealgebra Tutors
Hammond, IN Precalculus Tutors
Hammond, IN SAT Tutors
Hammond, IN SAT Math Tutors
Hammond, IN Science Tutors
Hammond, IN Statistics Tutors
Hammond, IN Trigonometry Tutors | {"url":"http://www.purplemath.com/hammond_in_algebra_2_tutors.php","timestamp":"2014-04-17T13:10:23Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:d5872f11-6862-4253-8590-55a1fc4e78a4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
June 3rd 2008, 02:27 PM #1
surface area
A quick question:
Cans of soup are often packed in boxes with spaces in between. Calculate the area that is wasted in between all the cans.
The length is 24 cm and the width is 18 cm.
You didn't mention what the radius of the circle of each can was so let's just call it r. I'm assuming we're looking from an overhead point of view and we're just focusing on the area of the 2-D
plane created by the circles of the cans and the rectangle of the box.
If the cans are positioned and aligned perfectly, the length of the rectangle should equal to the sums of the diameters of each can running along the length. So, if the radius of the circles is
equal to r, then the diamater of each circle is 2r.
The number of cans that can fit along the length is $\frac{24 cm}{2r cm} = \frac{12}{r}$ (for example, if the diameter of each circle was 12 cm, then you can only fit 24/12 = 2 cans across the
length of the box).
Similar reasoning shows that there'll be $\frac{18}{2r} = \frac{9}{r}$ cans along the width of the box.
So, looking at the entire box, you'll see that there'll be $\frac{9}{r} \times \frac{12}{r}$ circles, each with an area of $\pi r^{2}$. So you can find the summed area made by the cans and
subtract it from the area of the rectangle created by the box.
June 3rd 2008, 03:17 PM #2 | {"url":"http://mathhelpforum.com/geometry/40533-surface-area.html","timestamp":"2014-04-21T08:14:29Z","content_type":null,"content_length":"33698","record_id":"<urn:uuid:ec266177-507c-4ef7-b98f-18c8546dfaf3>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
simple geometry..
October 26th 2008, 03:28 AM
simple geometry..
Isn't it true
A line can be normal and tangent to a curve
but not to a function ,
If it isn't
can you give an example ?
Thank U
October 29th 2008, 08:07 AM
A tangent is a line and a normal is a line. At a specific point on a curve (or function), the normal and the tangent to that point are perpendicular to each other. So at a point on a curve (or
function), the normal and tangent are not the same line. For example, for two lines to be the same, they need to be at the very least parallel to each other.
October 29th 2008, 08:41 AM
(Shake) I meant the points are different | {"url":"http://mathhelpforum.com/math-topics/55745-simple-geometry-print.html","timestamp":"2014-04-20T23:27:58Z","content_type":null,"content_length":"4533","record_id":"<urn:uuid:84ca4a80-a8c1-4c7f-bb1e-44149db8f563>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
Whats it cost to fill a Model S 85 "tank" and what is the MPGe?
I'm trying to calculate for a Model S P85 the MPG equivalent (MPGe), the cost per mile of driving and how much it costs to fill the battery 100%. The goal here is to compare it to the cost using gas.
My calculations are like this:
Current Max distance with full charge = 265 miles (there seems to be different values for this in the forum)
Maximum charge = 85 Watts
Cost of electricity in SF Bay area, California (using PG&E cost calculator http://www.pge.com/myhome/myaccount/charges/ and Tier 4@34 cents per kWh and 84 kWh) = $29
$29 / 265 = .109 = 11 cents per mile
265 / $29 = 9.13 = 9 miles per dollar
I thought about using this link to figure out the Tesla Model S MPG3 but its too complicated
Anyone have some simple way to do this calculation or a web site that will do it?
SCCRENDO | 16. September 2013
It depends on your driving efficiency and of course cost of electricity. Calculate you ICE cost as above, using miles driven and mpg and cost per gallon. With an EV you need to know how many wH or
kWh per mile (for me I average 340 wh/m, not the most efficient) and multiply that by the miles you drive. I believe recharging those miles back is about 85 % effiecient, so you need to multiply by
eddiemoy | 16. September 2013
rough calculation is based on the energy in a gallon of gasoline, ~33kwh.
so if you use the following fomula, you get ~104mpge, but on the sales sheet it says 89mpge combined city highway.
x 265
--- = ---
33.4kwh 85kwh
Brant | 16. September 2013
The rate used in your calculation is E9 summer partial peak tier4
Most charging will be overnight at off peak
Tier 1: about $0.04/KwH
Tier 2: About $0.06
Tier 3: about $0.16
Tier 4: about $0.20
jai9001 | 16. September 2013
I prefer a different method.
For examle, if I drive 40 miles in a day and use 12Kwh in my model S.
My energy cost is 10cents/Kwh. So 12Kwh costs $1.20.
Charging efficiency is approximately .85.
So total cost to drive 40 miles is 1.20/.85 or $1.41.
If the avg gallon of gas where I live is $3.75, then I used/spent the equivalent of 1.41/3.75 or 0.376 of a gallon of gas to drive 40 miles.
My personal miles per gallon equivalent is 40 miles/ 0.376 gallons or 106 miles/gallon.
ahm | 16. September 2013
I have 8315 miles on my P85, which has used 2702.6 kWh. That comes out to 104 MPGe. The formula I use is: (33.7 kWh/1 gal gas) * (miles driven/kWh used) = MPGe.
earlyretirement | 16. September 2013
You can use Tesla's estimate on their website:
Just keep in mind that it's on the low side. Your car needs more energy vs. what Tesla calculates as you lose some energy when you charge the car.
For example, I charged 165 miles and Tesla's website says it would use 55 kWh and it really used 66 kWh. So just remember that it's not as efficient as what the website shows. You will lose more than
Then just calculate what you pay per kWh with your electricity company.
Bighorn | 16. September 2013
For today's energy rates vs gasoline, I'm getting 151 mpg equivilent.
earlyretirement | 16. September 2013
Bighorn, what is your rate of electricity that you are paying? That estimate seems high to me. So how did you calculate that? What's your energy per kWh rate? Where are you? Are you using solar, etc?
That estimate seems way high to me.
Also, what are you averaging in the car? What is your average burn rate in the car?
O EMSHN | 16. September 2013
PG&E just announced new "EV" electric rates that are "time of use" with NO tiers. Under those rates, between 11pm and 7am, electricity costs $.09712 / $.09930 in summer / winter.
Also, charging isn't 100% efficient so it takes more electricity out of the wall than shows up in your battery (heat loss, etc) but Tesla has a hidden reserve so from empty to full is only about 82
kWh. Assuming an 85% charging efficiency, you will need 96.5 (82 / 0.85) kWh to "fill" your tank. My experience is that I get about 245 miles from a full tank (I know Tesla says 265 but I don't drive
65 mph, more like 72). 96.5 kWh will cost about $9.50. Assuming $4 for a gallon of gas (Bay Area), that is about 2.4 equivalent gallons or 102MPGe for my 245 miles of driving per "tank".
Call PG&E "Building and Renovation" at 1 (877)743-7782 and ask to be switched to the new EV rates. You will need your VIN to prove you have a qualifying EV.
Bighorn | 16. September 2013
Gas here costs ~3.69/gal and my cost per kWh is about 7.5 cents, so for the cost of a gallon of gas I get 49.2 kWH. With conventional driving ie not showing off to friends, I would actually get
better than 151, but I think I used 325w/m to arrive at 151 miles. This does not factor for charging inefficiencies, though.
If I put in a more realistic 300w/m for sedate use and assume a 12% loss from outlet to output, I'd be at 144 mpge.
Am I missing something or did you not anticipate such cheap power?
Sudre_ | 16. September 2013
Once my solar panels are paid off in 5 to 6 years doesn't my cost drop drastically? :-)
2050project | 18. September 2013
Best way to do this is plug in your info into an online calculator/analysis...
The best calculator to figure this out is here:
Other cool calculator-type analysis is here:
mario.kadastik | 18. September 2013
Simple enough math:
Electricity cost: $0.17 / kWh
charging efficiency: 0.9
Fuel cost: $6.7 / gallon (EU)
85 kWh = 265 miles
Full tank cost: 85 * 0.17 / 0.9 = $16
Equivalent amount of gallons: $16 / $6.7 = 2.4 gallons
Miles per gallon = 265 / 2.4 = 110MPG
There's no need to compute actual Wh / mile etc. Just take what you expect to get from the MS. If you think the rated miles are overestimated to your driving style account for it. If you drive at 45
km/h constantly, then use 420 miles instead of 265 etc :)
Now if you want to compute for your situation just swap the initial numbers.
earlyretirement | 18. September 2013
@ Bighorn. Thanks for posting back. Ah, I see how your costs are so low . Wow, your rates are low ! Here the lowest with special EV TOU super off peak rates are lowest is 0.16 c per kWh. That
explains it.
jdesmo | 18. September 2013
Here on Long Island it does not look so rosy at all, since no superchargers around, and assuming no other free charging opportunities:
Electricity cost: $0.22/kWh
Assumed total charging efficiency:85%
Premium fuel cost: $4.10/gal (US)
85 kWh is good for 265 mi.nominal (not the way I drive..)
True local charging cost:85*0.22/0.85= $22
Equivalent fuel amount: 22/4.10=5.37 gal (US)
Equivalent MPG: 265/5.37 = 49.4MPG
earlyretirement | 18. September 2013
@ jdesmo. Yep. Your numbers look fairly accurate except I'd say that not many people in the real world will get 265 miles. To be conservative, I'd lower that #. Like you, I'm not getting 265 miles so
that # is really not realistic.
earlyretirement | 18. September 2013
Also, the point about the loss of efficiency is totally true. I've been tracking it each night as I have a dedicated meter for my EV. It looks to lose at least 15% to 18% based on what Tesla's
website says it should take to charge it.
I think the 10% efficiency is too conservative as well. I'd use at least 15% to 18% for the charging efficiency. At least that's how I see it in the real world.
AmpedRealtor | 18. September 2013
I've been driving quite conservatively, yet the range shown on the dash is consistently about 20% higher than the range I get in actual driving. I drive about 20,000 miles annually. Let me also add
about 15% in charge efficiency loss plus another 8 miles per day of vampire loss... After taking my electricity rates and gasoline costs into account, my adjusted mileage is 133 MPG. And that's using
$3.70/gallon for regular unleaded.
Not too shabby...
Bighorn | 18. September 2013
I'd agree that most don't get 265 miles on a charge, but that's because the full 85 kWh don't get consumed. Some kWhs are held in reserve, so using 265 miles to simplify the equation doesn't really
work. I think using Wh/m and guesstimating efficiency will be more accurate.
jbunn@hotmail.com | 18. September 2013
MPGe is just over 100 miles to the gallon (Electrical equivalent). Cost is dependent on your local rate.
jdesmo | 19. September 2013
If I view total ownership costs, compared to say jaguar XF-R (I averaged 16.5MPG over 3 years/45,000 mi), it would take the SP85 about 10 years to break even in Long Island/NYC metro area!! Certainly
if you lease like I do, or otherwise replace the car every 3-4 years, the MSP does not make sense.
Not cool at all.
(not even considering the cost of money, as the P85 is ~$25K more to purchase).
Thomas N. | 19. September 2013
I think it's very difficult to obtain any kind of useful ROI from a Model S. Maybe the lowest base model with zero options and a 100 mile one-way commute with free charging at work would price out ok
but for the most part it's a losing proposition.
I bought it because it's fast. And big. And tons of storage. And I never have to fill it up with gas. And it is all technology.
The savings over gas on a monthly basis is neglible in my case, and downright a losing proposition against purchasing say, an Infiniti Q50 at half the price.
reitmanr | 19. September 2013
We have had solar for about 10 years and recently wound up with a free upgrade to higher efficiency panels. We are in the SF Bay area. Our bottom line is we only pay about $5 per month for
electricity service and PGE keeps our excess generation to sell to someone else. We thought with our MS delivered 12/30/12 we would start paying something for the added load. So far after 8500 miles,
it still looks like we continue to drive for "free". On longer trips we do use SC, but not for daily mileage.
We are pleasantly surprised!We average about 320wh/mile. We are also on the old time of use rates, 6pm to noon are off peak.
jdesmo | 19. September 2013
In many locals, solar can make sense. Not so on most of the north shore of Long Island......
I also forgot that with my 2011 XF-R, Jaguar covers all maintenance and wear items at N/C for 4 years (I only pay for tires!)That's additional savings, certainly great bonus if you lease.
I think BMW and others may also have a similar feature already baked into the MSRP.
earlyretirement | 19. September 2013
@ Amped. "Vampire loss"??? Didn't your new car ship with version 5.0? Don't tell me you got stuck on 4.5?
Benz | 21. September 2013
Is it possible to see on the 17" screen how many kWh have been added to the battery pack in the past week/month/year? | {"url":"http://www.teslamotors.com/no_NO/forum/forums/whats-it-cost-fill-model-s-85-tank-and-what-mpge","timestamp":"2014-04-17T17:33:28Z","content_type":null,"content_length":"51116","record_id":"<urn:uuid:5bfce20c-1a62-4cd6-9393-612cd7c42ddc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
ES 105
NumericalMethods for Partial Differential Equations
Keith D. Paulsen
136 Cummings (Thayer School), 646-2695
email: Keith.D.Paulsen@dartmouth.edu
Secretary: Roxanne Campagna, 137 Cummings, 646-3860
email: H.Roxanne.Campagna@Dartmouth.edu
Meeting Rooms:
Regular Class: MWF 11:15-12:20, 202 Cummings
X-hour: Tues Noon-12:50, 202 Cummings
The Linux Lab: 218 Cummings
TA: Matthew McGarry
Weekly Schedule
Week 1: on Friday we will meet in the Linux Lab.
After Week 1, we will begin a regular weekly schedule
Mon, Tues, Wed in class.
Wed: Office hrs 12:30-2pm
Fri: HW due and individual meetings: 11am-2pm, 136 Cummings; no formal class
Linux System Administrator (Matt Dailey): 227 MacLean
TSCC Support: 126 MacLean
TSCC Public Help: Mon, Thurs 3:00-4:00, MacLean Atrium
Numerical Partial Differential Equations for Environmental Scientists and Engineers -- A First Practical Course, D.R. Lynch -- Springer, 2004.
Weekly readings from the text will be assigned.
Course Web Page
Most Useful Supplementary Texts
The ENGS91 prerequisite:
Burden R.L. and J.D. Faires, Numerical Analysis Brooks Cole; 9th edition (August 9, 2010).
A large body of work is available in Feldberg Library; see the online reference list. There are a few especially relevant and simple volumes re PDE's:
Smith, Numerical Solution of Partial Differential Equations, Oxford Univ. Press, 1st ed. (1965) [3rd ed, 1985 etc. not as good as 1st ed. for learners]. [Roughly equivalent to Morton and Mayers.]
Morton, K.W. and D.F. Mayers. Numerical Solution of Partial Differential Equations. Cambridge University Press, 1994.[Roughly equivalent to Smith.]
Segerlind, Applied Finite Element Analysis, Wiley, latest edition
Lapidus and Pinder, Numerical Solution of Partial Differential Equations in Science and Engineering, Wiley, latest edition.
Some standard Linear Algebra works are valuable:
Trefethen, L.N. and D. Bau, III: Numerical Linear Algebra. SIAM, 1997.
Demmel, J. W. , Applied Numerical Linear Algebra. SIAM, 1997.
The LAPACK Subroutine Library and Users Guide:
E. Anderson et al, LUG -- LAPACK Users' Guide. SIAM, 1999; online LUG; LAPACK source code
The Numerical Recipes is all-around useful and practical:
Press, W.H., B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes: The Art of Scientific Computing. Cambridge Univ. Press, 1986 or latest.
There will be weekly graded problem sets, due in class on Friday afternoons beginning January 13. Each student will be required to solve the problem, post solutions on the www, and present it to
Prof. Paulsen individually. These sessions with Prof.Paulsen will be 20 minutes, beginning with 10 minutes for presentation of your work; and the balance for discussion.
The first week (Homework 0) is a warmup/review exercise; it is due on Monday Jan 9. Do not ignore it! The rest of the HW's will use the simple things covered.
Late homework will not be accepted. Each student may have two exceptions to this rule; they must be claimed at least 2 days in advance.
Nearly all homework will require computer programming, graphics, and verbal report generation backed up with www-posted results. Schedule your time accordingly during heavy load periods.
There will be a mid-term and a final exam. Homework will count 60%; Midterm 20%; Final 20%.
Computer Languages: Fortran will be the primary compuational language. It supports the library LAPACK and related software, which we will use. Matlab and TekPlot will be used as the primary graphical
packages. Homeworks will be submitted electronically as .html documents as described above. All of these languages are supported within the Dartmouth/Thayer Linux system. Each student has a special
directory which is web-served; use it responsibly!
Honor Principle: applies to all homework and exams. All work is to be attributed to its author(s). Being an author indicates that the student has mastered the content of the homework and that he/she
has cited all individuals who have contributed. If you recieve assistance other than routine help from faculty, students, or staff, that should be cited. Copying computer code or files without
citation is plagiarism. The work you turn in must be the product of its authors or cited sources.
Special obligations are connected to the web-accessible directories. We will use these for strictly professional, course-related communications among the class. Abuses of this privilege will be
considered Honor Code violations.
Dartmouth will ensure that every student has meaningful and physical access to all activities of the College. Students requiring disability-related accommodations must register with the Student
Accessibility Services office. Once SAS has authorized accommodations, students must show the originally signed SAS Services and Consent Form and/or a letter on SAS letterhead to their professor. As
a first step, if you have questions about whether you qualify to receive accommodations, contact the SAS office. All inquiries and discussions about accommodations will remain confidential.
Laptop Policy in Class Meetings
No laptops, phones, ipods, or other electronic devices in class, unless otherwise stated. | {"url":"http://engineering.dartmouth.edu/courses/12winter/engs105/Organizational_12.html","timestamp":"2014-04-16T19:16:32Z","content_type":null,"content_length":"6855","record_id":"<urn:uuid:46733f73-8495-495e-af9d-66bb07e1fd77>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
More of Self Inductance and Mutual Inductance
Andrew's Blog :
Return to Blog
More of Self Inductance and Mutual Inductance
Posted Dec 07, 2011 at 12:00 am
The principle of action and reaction in Lenz’s Law states that an induced electromotive force (emf) generates a current in a direction which opposes the change in flux that caused the emf in the
first place. The basic unit of inductance is called the Henry (H) and a coil has an inductance of 1H when the current flowing through it changes at a rate of one ampere/second inducing a voltage of
one volt in it and is given by the equation below:
L = 1H = 1V\frac{dI}{dt} = 1 \frac{volt}{ampere/second}
Self Inductance
Some inductance may also be present even in a perfectly straight length of conductor because the current can produce a magnetic field surrounding the conductor. Having inductive effects do not really
require two circuits. Because all circuits have conductors in them, we can assume that all circuits have inductance. Sometimes, induction can occur when only a single device is present. A changing
magnetic field can be created by the changing current through the device.
Figure 1 pertains to the left-hand rule which determines the direction of the induced current. The rule states that if you point the thumb of your left hand in the direction of relative motion of the
conductor and your index finger in the direction of the magnetic field, the middle finger will now indicate the direction of the induced current which will generate the induced voltage. A coil with
many turns will have a higher inductance value than one of only a few turns because the inductance is greater with stronger magnetic flux for a given value of current. The relationship between self
inductance (L) and the number of turns (N) is given by the following equation:
L = N\frac{\phi }{I}
This equation applies to linear magnetic materials and can also be defined as the flux linkage divided by the current flowing through each turn.
Mutual Inductance
The basic operating principle of transformers, motors, generators, and any other electrical component that interacts with another field is the mutual inductance. The relative positioning for the two
coils determines the amount of mutual inductance that links one coil to another. The mutual inductance of two circuits depends on several factors such as orientation of circuits, distance between
circuits, number of turns in each circuit, size of circuits, and shape of circuits.
In Figure 2, the mutual inductance between the two coils can be greatly increased by positioning them on a common soft iron core or by increasing the number of turns of either coil. Unity coupling is
said to exist between the two coils due to the leakage of flux if they are tightly wound with one on top of the other over a common soft iron core.
If one coil is positioned next to the other coil so that the physical distance apart from each other is small, then nearly all of the magnetic flux generated by the first coil will interact with the
coil turns of the second coil inducing a relatively large emf and therefore producing a large mutual inductance value.
Comments on this post:
There are currently no comments. | {"url":"http://www.eeweb.com/blog/andrew_carter/more-of-self-inductance-and-mutual-inductance","timestamp":"2014-04-18T15:39:20Z","content_type":null,"content_length":"54235","record_id":"<urn:uuid:69768b6f-792a-4c29-9d98-760579a935c4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Doron Zeilberger
Opinion 40: Akalu Tefera: A Truly 3rd-Millennium Mathematics Ph.D.
By Doron Zeilberger
Written: March 10, 2000
Physical Science, at least for the last 200 years, had two kinds of research modes: theoretical and experimental. Now one can also add: computational. By contrast, mathematics, whether it was "pure"
or "applied" only had the theoretical mode. No one got a Ph.D. for writing a computer program. Even a "conjectures-only" thesis was not acceptable. One had to prove theorems. If the theorems were too
hard, one found easier versions, but proving theorems was the sine-qua-non for a math Ph.D.
Akalu Tefera, who expects to earn his Ph.D. this coming May, is a harbinger of a new kind of math thesis. It is not "experimental" in the semantic sense, since he did not compute zeros of the Riemann
zeta function and study their distribution, or found new Mersenne primes. His results are rigorous theorems. But syntactically, it is analogous to an experimental thesis in physics or chemistry.
There you spend the first two years building the apparatus, the next year or two in taking data, and then in the last year you analyze the data.
Akalu did a masterful job in implementing the Continuous mutli-WZ method, that was described in very broad outline by Herb Wilf and myself. After about two years, of successive improvements and
enhancements, he "built the apparatus", his versatile packages Mint, and qMint (forthcomig).
After Akalu "built" (i.e. wrote) the "equipment" (i.e. software) he "took data" (i.e. used it to DISCOVER) the beautiful MULTI-VARIATE TEFERA INTEGRAL. This is a GENUINE theorem, even today, since,
like the Selberg and Mehta integrals, it states something for an arbitrary dimension. Even Tefera's package, Mint, can't prove it in that generality. But for k=1,2,..., 6, Mint can find immediately,
BEAUTIFUL WZ-Certificates. In the "analyzing the data" stage, Akalu detected a general pattern and proved, humanly, the general case. But without the equipment (the Maple package Mint), this would
not have been possible. However, what's nice about Tefera's "equipment" is that it can be used on many other problems.
Two other 3rd-millennium Ph.D. thesis , that were completed ahead of schedule, way in the last millennium (c. 1998), are Frederic Chyzak's and Axel Riese's futuristic theses. I am sure that there are
others, but we need many more, and in fifty years, this mode will be the norm.
Doron Zeilberger's Opinion's Table of Content | {"url":"http://www.math.rutgers.edu/~zeilberg/Opinion40.html","timestamp":"2014-04-16T16:17:15Z","content_type":null,"content_length":"3435","record_id":"<urn:uuid:58f23d91-8216-451f-8407-abc93ddc5749>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is impedance and resistance the same thing in a simple resistor?
Thinking of just this case using an AC current with a frequency of 1k Hz, thru a simple non-inductive resistor.
Is the impedance of the resistor the same thing as the resistance measured using DC? Is it equal to the resistance in value or different altogether? (sentence edited to change inductance to
I think it's the same thing and so equal in value. Or maybe I've still got a bunch of reading to do.
You're mixing terms a little bit here. Inductance isn't impedance.
Impedance is basically "resistance to AC", so it takes into account other effects like inductance and capacitance. And impedance changes with frequency (and is dynamic, not static, when an AC current
is applied).
Resistors (assuming they aren't wire wound) have nether inductance nor capacitance, so their DC and AC resistance is the same. I.E., the resistance equals the impedance--for the resistor alone.
gmoon answered first but all great answers and I'm satisfied.
You caught my mistake. I did not mean to use Inductance in the second sentence, I meant to use impedance. It was early in my morning when I typed this and had not had my caffeine yet.
Yes. I mean that's the quick answer. I am assuming the resistor you're writing about is the ideal kind, not the real kind.
The impedance of an ideal resistor is:
X[R] = R, where R is resistance in ohms (Ω)
The impedance of an ideal inductor is:
X[L] = j*ω*L, where ω is angular frequency in rad/s and ω=2*π*f, L is self-inductance in henrys (H)
The impedance of an ideal capacitor is:
X[C] = (1/(j*ω*C)), where ω is angular frequency in rad/s and ω=2*π*f, C is capacitance in farads (F)
I also claim that impedors "add" the same way that resistors do. For example if you've got X1 and X2 in series, the total impedance is just their sum:
X[1and2series] = X1+X2.
If X1 and X2 are in parallel, it is:
X[1and2parallel] = (X1^-1 + X2^-1)^-1
The only thing that makes it more complicated is all those j*omegas, which in general will make the answer a complex number, and dependent on omega.
Some easy examples:
Q:What's the impedance of an ideal 100 Ω resistor, at f=60 Hz.
A: (100.00 + 0*j) Ω = 100 Ω
Q:What's the total impedance of an ideal 100 Ω resistor in series with an ideal 100 μF capacitor, again at f=60 Hz.
A: (100.00 - 26.53*j) Ω
(Since X[R]= 100 , and X[C]= 1/(j*2*pi*60*100e-6), and the total impedance is X[R]+X[C])
Ideally, a perfect resistor resists DC and AC with the same impedance. Realistically, every electronic element has some kind of parasitic capacitance and/or inductance that would affect AC
differently than DC. That is usually so small that it only really becomes a problem when you have like really really high frequencies (megahertz and such).
For one kilohertz, I wouldn't worry about that. The AC impedance of the resistor will be so close to the DC resistance that you might as well call them the same.
If you have a specifically NON-inductive resistor, as you said, then the DC resistance dominates any other effects, up to some frequency limit.
You can't measure the inductance of a non-inductive resistor, because it doesn't have any, by definition... | {"url":"http://www.instructables.com/answers/Is-impedance-and-resistance-the-same-thing-in-a-si/","timestamp":"2014-04-19T18:17:16Z","content_type":null,"content_length":"129364","record_id":"<urn:uuid:816fb068-8589-4bbc-8fb9-8e28cd2bb88c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
In any numerical calculation, it is best to reduce quantities to dimensionless numbers.
This way we avoid round-off errors etc.
For the Lennard-Jones system, we have
natural scales from the potential:
V(r) = V[0] [ (r[0]/r)^12 - 2(r[0]/r)^6 ]
The length can be measured in terms of the parameter r[0]. The scale for energy is V[0].
We measure temperature in unit of V[0 ]/k[B] so that the combination k[B]T/V[0] is a dimensionless number.
We use the combination sqrt{ m r[0]^2/ 48 V[0]} as the unit of time, which is proportional to
the vibrational period of a two-body Lennard-Jones particle.
In terms of the dimensionless variables, the Hamilton's equation for the Lennard-Jones system is:
dp'[i]/dt' = sum [j =/= i] (r'[i]-r'[j]) [r'[ij]^-14 - 1/2 r'[ij]^-8]
dr'[i]/dt' = p'[i] = v'[i]
where the reduced coordinates and time are defined as:
r'[i ]= r[i] / r[0]
t' = t [*] sqrt{ m r[0]^2/ 48 V[0]}
r'[ij ]= | r'[i]-r'[j] |
For each particle we need to calculate the total force acting on it by other particles. In the following particular implementation,
we will not use periodic boundary condition. Rather, we put the particles inside a box with repulsive hard wall. Each particle, besides interacting with every other particles, also interacts with
the walls, by the form of wall potential:
V[wall]( x ) = a/x^9
where x is the perpendicular distance to the wall.
Clearly, the force quickly decreases to zero away from the wall.
In two dimensions, we have four terms for each of the four walls.
This is purely for a better, more realistic graphic display. It is not done in usual molecular dynamics simulation which typically use periodic boundary conditions. It is not difficult to modify
the program to adopt a periodic boundary condition.
After we calculated the forces due to walls for each particle, we go through each pair of particles (i,j) exactly once. There are
two forces associated with each pair of particles:
□ the force acting on particle i from particle j
□ the other way around.
They are equal in magnitude and opposite in sign. We accumulate this force for particle i (due to particle j) and for particle j (due to particle i).
To make the program very concise, we use a one-dimensional arrays for positions, momenta (same as velocity when mass is taken to be 1), and forces. Thus:
□ q[0] is the x-component of the 1st particle;
□ q[1] is the y-component of the 1st particle;
□ q[2] is the x-component of the 2nd particle,
□ q[i] similar for other quantities.
The computation can be conducted in two steps:
□ Caculation of forces
□ Numerical integration of Newton's equations.
The C-code for computing the force acting on a particle in Lennard-Jones fluid:
The following function calculates the force
f[ ] on each particle when the particles are at locations specified by q[ ].
int L;
void force( double q[], double f[], int n)
double xl, xr, r2, LJforce, dx, dy;
int i, j;
for (i = 0; i < n; ++i) {
xl = q[i];
xr = L - q[i];
f[i] = pow(xl, -9) - pow(xr, -9);
for (i = 1; i < n/2; ++i)
for (j = 0; j < i; ++j) {
dx = q[2*j ] - q[2*i ];
dy = q[2*j+1] - q[2*i+1];
r2 = dx*dx + dy*dy;
LJforce =
( pow(r2,-7) - 0.5*pow(r2,-4) );
f[2*i ] = f[2*i ] - LJforce*dx;
f[2*i+1] = f[2*i+1] - LJforce*dy;
f[2*j ] = f[2*j ] + LJforce*dx;
f[2*j+1] = f[2*j+1] + LJforce*dy;
The use of symplectic algorithm:
The application of the symplectic algorithm is straightforward. The function takes the current values of positions, momenta, and forces, then derive the values at time elapsed by h or h/2.
Note that we do not need to calculate the forces twice. We rewrite the algorithm C in the following equivalent way (assuming mass m[i] = 1):
p[n+1/2] = p[n] + h/2 f(q[n])
q[n+1] = q[n] + h p[n+1/2]
p[n+1] = p[n+1/2] + h/2 f(q[n+1])
In the program, before this function is called, one must call the force() function just once to initialized the force f[ ].
The symplectic algorithm C-code
#define nmax 200
void symplectic(double p[], double q[], double f[], int n)
const double h = 0.06;
double t[nmax];
int i;
for (i = 0; i < n; ++i) {
t[i] = p[i] + 0.5 * h * f[i];
q[i] = q[i] + h * t[i];
for (i = 0; i < n; ++i)
p[i] = t[i] + 0.5 * h * f[i];
We have not discussed how to choose the step size h. Clearly, h can not be too large, otherwise, our numerical trajectory will
deviate greatly from the true path. But more importantly, the algorithm becomes unstable for large value of h. As a rule of thumb, the value h should be about 1/10 or 1/20 of the smallest
vibrational period of the system. For the Lennard-Jones particle system, this means that the dimensionless h should be about 0.1
or smaller. | {"url":"http://xin.cz3.nus.edu.sg/group/teach/comphys/sec17.htm","timestamp":"2014-04-18T08:25:27Z","content_type":null,"content_length":"19439","record_id":"<urn:uuid:2d204e35-56c8-489b-a5f2-f1c33f03b4c3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
A question about set theory and Frege logic
up vote 1 down vote favorite
Does there exist a very weak axiomatic theory of arithmetic-weaker than (but possibly a sub-theory of) Robinson's theory Q-which can be interpreted in the first order fragment of Frege logic? If so,
what are the axioms of such a theory? By "the first order fragment of Frege logic" I mean the system discussed on pages 251-252 of the book "One Hundred years of Russell's Paradox" (edited by
Godehard Link).
I don't have a copy of this book. Is the list of axioms too long to post here? – François G. Dorais♦ May 3 '10 at 21:54
How weak will you allow? For example, the empty theory? After all, Q is already extremely weak (the weakest known theory to support the incompleteness theorem). – Joel David Hamkins May 3 '10 at
1 Presburger arithmetic is weaker in some sense than Q, but I don't know if one can interpret Frege's FOL in it. See math.stackexchange.com/questions/4107/… – Carlo Von Schnitzel Oct 10 '10 at 7:52
My question is actually the reverse of this. Can one interpret Presburger Arithmetic in Frege's FOL? – Garabed Gulbenkian Jun 3 '11 at 20:29
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged lo.logic or ask your own question. | {"url":"http://mathoverflow.net/questions/23366/a-question-about-set-theory-and-frege-logic","timestamp":"2014-04-20T11:14:30Z","content_type":null,"content_length":"50246","record_id":"<urn:uuid:8d43b0bc-552a-4822-a900-45cf3af03d92>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
March 2004
March 26, 2004
User Experience
A few more blog-related notes.
1. Srijith has discovered a minor security flaw in MovableType’s handling of email notifications.
Update 3/27/2004: At Ben Trott’s request, Srijith has pulled the details of the flaw from his web site (apparently, Ben claims never to have received Srijith’s vulnerability report). Reluctantly,
I’ve decided to follow suit here at Musings. Supposedly, the fix is in MT 3.0. If that (or a standalone patch) is released in a timely fashion, I’ll be happy about my decision. Otherwise, I may
have to revisit it…
Update 3/27/2004: Oh, to heck with it! We’re not going to have another Comment-Throttling fiasco. “All will be well when MT 3.0 comes out.” is not a viable Security Policy. The exploit is out
there, and MT users need to know about it in order to protect themselves.
In brief, if a spammer (or other miscreant) leaves a comment of the form
Innocent comment here.
Spam links here.
(that’s a single period on a line by itself) only the upper part will be sent in the notification email(s), while the full comment will be posted to your blog. If you are using Sendmail, you
should patch your MT installation.
--- lib/MT/Mail.pm.orig Wed Mar 24 19:55:40 2004
+++ lib/MT/Mail.pm Wed Mar 24 19:58:06 2004
@@ -85,7 +85,7 @@
local $SIG{ALRM} = sub { CORE::exit() };
return unless defined $pid;
if (!$pid) {
- exec $sm_loc, "-t" or
+ exec $sm_loc, "-oi", "-t" or
return $class->error(MT->translate(
"Exec of sendmail failed: [_1]", "$!" ));
2. My previous entry, as promised, uses SVG for figures. I’m curious as to how this works for various classes of users
Personally, I’m using the Adobe Plugin, and I find that scrolling past an SVG image, in Mozilla, is painfully slow. Safari doesn’t have this problem.
3. My Atom feed is now “official.” My RSS 0.91 feed is deprecated (though not dropped … yet).
4. Speaking of feeds and SVG figures, NetNewsWire is a little overzealous in dealing with the SVG figures in my full-content feeds (RSS 2.0 and Atom). I can see an Aggregator not wanting to deal
with sorting out “good” <object> elements from “bad” ones, and instead just ignoring all <object> tags. But, just because you do that, why ignore the content of the <object> element? The content,
in this case, is a GIF image, which is the fallback for those who can’t — or don’t wish to — deal with the SVG. NetNewsWire is perfectly happy displaying GIF images, but it doesn’t in this case,
because the <img> element is ignored.
I suppose I could strip out the <object> tags from my feed. But I don’t want to. Those whose client software (like NetNewsWire, ironically) is capable of handling an SVG figure ought to receive
Posted by distler at 12:01 AM |
Followups (8)
March 25, 2004
We all learned on our grandfather’s knee that supersymmetry required a light Higgs. Back then, this was a cheering thought, for it meant that we would not have to wait too long for the Higgs to be
discovered. The years passed, and the experimental lower bound on the mass of the Higgs crept slowly upwards. We now know that it must be heavier than 114 GeV or so.
Scott Thomas was in town the other week, and gave a very nice colloquium, explaining how serious the situation has become for the MSSM.
At tree level, $m_h \lt m_Z \cos(2\beta)$ where $\tan(\beta)= \langle H\rangle/\langle\tilde{H}\rangle$, and $H$ & $\tilde{H}$ give masses, respectively, to the up and down type quarks. The
inequality becomes an equality in the limit that the mass of one of the other neutral scalars in the Higgs sector, $m_A\to\infty$.
With $m_Z = 91$ GeV, and $m_h\gt 114$ GeV, this bound is clearly violated. Fortunately, the one-loop corrections to the quartic self-coupling, depicted above tend to push this number up. $m_h^2 = m_Z
^2 \cos^2(2\beta)+\frac{6|\lambda_t|^2 m_t^2}{4\pi^2}\log(m_{\tilde{t}}/m_t)$ Note that the supersymmetric cancellation between the two diagrams means that the result depends only logarithmically on
the stop mass. To fit the current lower bound on $m_h$, the stop must be heavy $m_{\tilde{t}} \gt 850\, \text{GeV}$ And each time we push up the lower bound on the Higgs mass, the lower bound on the
stop mass goes up exponentially.
While the corrections to the quartic terms in the Higgs potential depend only logarithmically on the stop mass, the corrections to the quadratic terms are proportional to $m_{\tilde{t}}^2$. $m^2 \sim
|\mu|^2 - \frac{3 |\lambda_t|^2 m_{\tilde{t}}^2}{8\pi^2} \log (m_{\tilde{t}}/M)$ where $M$ is a messenger mass, at which the loop-momentum integral is effectively cut off. (It’s precisely these
radiative corrections that drive this term negative, and lead to the electroweak symmetry-breaking.)
To end up with an electroweak symmetry-breaking scale around $(100\, \text{GeV})^2$, one needs the $\mu$ parameter (the coefficient of $H\tilde{H}$ in the superpotential) to be in the TeV range, and
its value must be tuned to within a few percent.
Personally, I can live with a fine-tuning in the 1% range. But you would not have to push the Higgs mass up too much further to make even me nervous.
Posted by distler at 12:30 AM |
Followups (11)
March 20, 2004
Six Apart have announced their TypeKey service, a centralized Commenter Registration service. Commenters can register with TypeKey, and then sign in once to comment on any MovableType 3.0 blog.
I haven’t seen the details yet, but from what they’ve described, I am not too sanguine about the service. As I read it, there are three motivations for this sort of centralized Registration service.
Spam prevention
This sort of presumes that spammers will be too dumb to register their spambots with the service. Once the spambot is registered and signed-on with TypeKey, I expect it would function pretty much
as before.
On the other hand, centralized registration does allow for centralized banning. If word gets back to the TypeKey administrators, they can disable the spammer’s Identity.
But what’s to prevent the spammer from registering hundred, or thousands of Identities for his spambot? The Slashdot trolls have pioneered “registration 'bots”, which register hundreds of
throwaway Identities. What’s to prevent spammers from doing the same with TypeKey?
Troll Management
Individual blog owners can ban individual TypeKey Identities from commenting on their blogs. Not much of an impediment, if the troll can easily register another Identity.
Cracking down on trolls is a tricky business, and there are some very clever techniques for dealing with them. Merely forcing them to register isn’t enough.
Identity Theft
Can there be two Identities with the same website URL, but different email addresses? Surely there can. Can a registered user hide his email address on his Profile Page? You bet! No one wants to
give the email spammers yet another opportunity to harvest your email address.
Well, then, there’s nothing to prevent me from registering with TypeKey in your name, with your Website URL and your biographical details, but with my (hidden) email address. Now I can sign into
TypeKey and go around impersonating you at various blogs. The only non-fakeable detail in my TypeKey Profile — my throwaway Hotmail email address — is hideable. So it’s not really possible to
establish my “identity”, based on what is revealed on that Profile.
Presumably, TypeKey won’t let two of us register with the same Username. “JohnSmith37” might be out of luck, but, if you go by a less-common name, you can, to some extent, protect yourself by
making sure you’ve registered your favourite nom de plume before I get there. It won’t prevent me from registering a slight variant, with your biographical details, but it’s better than nothing.
For purely defensive reasons, this should create a mass stampede to register with TypeKey, as soon as it opens for business.
As you know^1, I’ve had my own thoughts about Comment authentication, so perhaps I’m biased. But unreliable authentication can be worse than no authentication at all. It creates a false sense of
assurance where there should be none.
Obviously, TypeKey does nothing to make any of these issues worse than before. But it does increase the hassle-factor: commenters must register, and must sign-in to use the service. So one really
hopes that TypeKey would actually improve matters with respect to one or more of these problems. Doubtless, I’m missing something, and someone will correct me. But, from what I’ve seen, TypeKey seems
to be a lot of bother, for not a lot of benefit.
Again, let me emphasize that I haven’t seen any of the implementation details. This post is based purely on the TypeKey Announcement. Still, I find the whole thing troubling enough to want to start
the discussion now, before the official roll-out.
Update (3/23/2003): There’s now a TypeKey FAQ. It addresses some of the questions raised here and elsewhere. The clear focus is on TypeKey as an anti-spam device. By itself, it would be pretty
useless. But they argue that, in conjunction with Comment Moderation (another new feature of MT 3.0), it could be rather effective. TypeKey-registered users, who’ve posted comments to your blog
before could have their comments immediately posted. Everyone else (including the spammers) would have their comments relegated to a moderation queue. Depending on how you feel about Comment
Moderation — more work for the blog owner, interrupts the flow of the conversation — that certainly could be effective. If most of your comments come from the same familiar set of people, TypeKey
would allow you to turn on Comment Moderation with minimal disruption.
For myself, I view comment spam as a more-or-less solved problem (I,II,III), and I don’t anticipate turning on Comment Moderation to deal with it. Still, giving blog-owners a sense (even if partly
illusory) of control over their comment section is a smart move by Six Apart.
Update (3/25/2003): Phil Ringnalda has also come to the conclusion that TypeKey could be useful in conjunction with Comment Moderation, as a way to whitelist “known” commenters. But, on reflection,
I’m now of the opinion that PGP-signed comments provide a much better mechanism for whitelisting known commenters.
Posted by distler at 10:47 AM |
Followups (31)
March 16, 2004
Counting Points
Urs Schreiber asked me to explain what “count[ing] the points on the Calabi-Yau, defined over the finite field $F_{p^k}$” means. I started to respond with a comment, but realized this might work
better as a full-fledged post.
Consider the equation
(1)$x_1^5+ x_2^5+ x_3^5+ x_4^5+ x_5^5- 5\psi x_1 x_2 x_3 x_4 x_5 = 0$
Algebraic geometry is the study of the geometry of the space of solutions to algebraic equations such as this one. We might be interested in the affine variety, where $(x_1,x_2,x_3,x_4,x_5)\in \
mathbb{C}^5$. Alternatively, we might take the $(x_1,x_2,x_3,x_4,x_5)eq(0,0,0,0,0)$, and identify points $(x_1,x_2,x_3,x_4,x_5)\sim (\lambda x_1,\lambda x_2,\lambda x_3,\lambda x_4,\lambda x_5)$, $\
forall \lambda\in \mathbb{C}^*$, a nonzero complex number. This yields the projective variety, the quintic hypersurface in $\mathbb{C}P^4$. This latter is a Calabi-Yau manifold, which makes it rather
interesting for physicists.
Algebraic geometry is a hard subject. It’s hard because algebraic geometers don’t want to restrict themselves to the space (affine or projective) of solutions over the complexes. They’d like to study
the space of solutions over arbitrary fields. So they need to set up the tools of geometry to work even when the equations are defined, say over a finite field.
One such field is $F_p$. Here, $p$ is a prime, and we do arithmetic over the integers modulo $p$. In $F_7$, $5=-2$ and $5 =1/3$ (since $5+2 = 0$ mod $7$ and $5 \times 3 =1$ mod $7$. Since $p$ is a
prime, every nonzero integer in $\mathbb{Z}/p\mathbb{Z}$ is invertible modulo $p$.
A field is said to have characteristic-$k$ if adding the multiplicative identity element, $1$, to itself $k$ times gives the additive identity element, $0$. If you never get the additive identity
element, the field is said to have characteristic-0. $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$ are fields of characteristic-0. $F_p$ is a field of characteristic-$p$, with $p$ elements.
How about some more fields of characteristic $p$? Pick a polynomial $P(x)$ of degree $n$, with coefficient in $F_p$, which is is irreducible (i.e., which cannot be factored into lower-degree
polynomials with coefficients in $F_p$). Define
(2)$F_{p^n} = F_p[x]/(P(x))$
the ring of polynomials (with coefficients in $F_p$), modulo the ideal generated by our chosen $P(x)$. Each equivalence class of polynomials has a representative of degree less than $n$. Moreover,
since $P(x)$ was irreducible, each polynomial has a multiplicative inverse. So $F_{p^n}$ is a field of characteristic-$p$, with $p^n$ elements.
Exercise: Construct $F_{3^2}$, using the polynomial $x^2+1$, which is irreducible in $F_3$. Write out the 9 linear polynomials representing $F_3[x]/(x^2+1)$ and construct their multiplication table.
Now think about redoing everything you know about geometry (cohomology, vector bundles, sheaves, …) in characteristic-$p$. Number Theorists are typically interested in things like counting the number
of solutions to equations like the quintic above, and avail themselves of the powerful tools of algebraic geometry to do it.
OK, back to the quintic, defined with coefficients in $F_p$ (or $F_{p^n}$). Since the field is finite, so must the number of solutions of the quintic equation, in $F_p$. We can consider $u(\psi)$,
the number of solutions to the affine equation, or $N(\psi)$, the number of solutions to the projective equation. They are simply related:
(3)$N(\psi) = \frac{u(\psi) -1}{p-1}$
(we remove the origin and mod out by rescalings by nonzero elements of $F_p$), but Candelas and company prefer to write formulæ for $u(\psi)$, rather than for $N(\psi)$.
Now, here’s where the magic comes in. The periods of the holomorphic 3-form on the quintic Calabi-Yau, integrated over some basis of 3-cycles satisfy a Picard-Fuchs equation,
(4)$\left[\left(\lambda \frac{d}{d\lambda}\right)^4 -5 \lambda\prod_{k=1}^4\left (5 \lambda \frac{d}{d\lambda} +k\right)\right] \varpi =0$
in the variable $\lambda=1/(5\psi)^5$. The independent solutions can be written as
(5)\array{\arrayopts{\colalign{right center left}} \varpi_0 &=& f_0(\lambda)\\ \varpi_1 &=& f_0(\lambda)\log \lambda + f_1(\lambda)\\ \varpi_2 &=& f_0(\lambda)\log^2 \lambda + 2f_1(\lambda)\log\
lambda +f_2(\lambda)\\ \varpi_3 &=& f_0(\lambda)\log^3 \lambda + 3f_1(\lambda)\log\lambda +2 f_2(\lambda)\log\lambda +f_3(\lambda) }
where the $f_j(\lambda)$ are certain power series in $\lambda$.
(6)$f_0(\lambda)= \sum_{m=0}^\infty\frac{(5m)!}{(m!)^5} \lambda^m$
Let $f^{(n)}_j$ be the power series truncated to the first $n+1$ terms.
Candelas et al can write down an exact expression for $u(\psi)$ in terms of the $f^{(n)}_j$. The full formula is a little complicated, but the first approximation to it is easy to state:
(7)$u(\psi) = f^{([p/5])}_0(\lambda) \quad \text{mod}\, p$
where $[p/5]$ is the integer part of $p/5$.
Of course, there’s nothing special about the quintic. They can do similar things for more complicated Calabi-Yau’s, and they can also get results over the fields $F_{p^n}$.
All of this is bound up in some mysterious way with Mirror Symmetry.
I don’t know what it all means, and neither do they, but their papers (I,II) make very intriguing reading.
Posted by distler at 11:19 PM |
Followups (1)
March 13, 2004
Core Dump
Some random computer notes.
1. Installing Crypt::OpenPGP under MacOSX is a real bear. The basic steps are as follows.
1. Install libpari.
2. Install Math::Pari.
3. Use CPAN to install Crypt::OpenPGP and all of its prerequisites.
There’s a page on installing Math::Pari on MacOSX which will guide you through steps 1,2. Unfortunately, it badly needs to be updated for Pather, but it ought to give you the general idea. Once
you’ve got Math::Pari installed, the rest is fairly easy. Just used CPAN to install Crypt::OpenPGP. It will prompt you to install all of the prerequisite modules first. There are a zillion of
them. Many are required, but some are optional. All of the optional modules will compile except Crypt::IDEA. When it asks you whether to install a list of optional module which includes
Crypt::IDEA, answer “no”. You’ll get asked again, later on, about the other optional module, but you don’t want it to even attempt to install Crypt::IDEA. After quite a bit of churning away, you
should finally have a working copy of Crypt::OpenPGP.
2. I have an experimental Atom feed for this blog. It contains both a <summary type="text/plain"> and a <content type="application/xhtml+xml" mode="escaped"> element. The latter means,
theoretically, that if there were a client which supported it, people could read my MathML-enabled posts in their Aggregator. This sounds far-fetched, but it really isn’t. Dave Hyatt has, at
least, talked about the possibility of MathML support in Safari. If he and his team ever deliver on that, NetNewsWire users will get MathML support “for free.”
My feed validates, but I would still like some feedback from real Atom mavens as to what I might be doing wrong and what could be improved.
For instance, I think I am using the xml:base attribute incorrectly:
<content type="application/xhtml+xml" mode="escaped" xml:lang="en" xml:base="<$MTBlogURL encode_xml="1"$>">
That’s taken straight from the default MovableType Atom Template. Shouldn’t it be xml:base="<$MTEntryLink$>"?
Unfortunately, NetNewsWire doesn’t seem to support xml:base at all, which makes it difficult to test my assumption in practice.
(Update: Oh, to heck with it! The MT template is plainly wrong, and I shouldn’t need NetNewsWire to figure that out. Fixed.)
If I decide to keep the new Atom feed, who would object if I were to drop the RSS 0.91 feed, and replace it with this one?
3. Speaking of validating feeds. Mark and Sam’s Validator has long complained about the onclick and onkeypress attributes which occur in certain anchor tags in my full-content RSS feed. These are
not, strictly speaking, invalid (how could they be?), but they are flagged as examples of poor sportsmanship, anyway, much to my chagrin.
I finally realized that I could use the tagmogrify plugin to strip these attributes out my feeds, and now the Feed Validator no longer complains.
4. Oh, yeah, OpenSSH 3.8p1 is out. Gotta keep up with the Joneses.
Posted by distler at 10:24 PM |
Followups (16)
Number Theory and Physics
There’s a conference going on here at UT on Number Theory and Physics. Victor Batyrev, Philip Candelas, Daqing Wan and Dave Morrison are giving a series of lectures on the connections between
Calabi-Yau Manifolds, Mirror Symmetry and Number Theory.
I’m sitting in Dave’s talk right now, and he’s patiently explaining Gauged Linear $\sigma$-Models to the mathematicians. Years ago, he probably would have said, “and now we take the symplectic
reduction” ( or, more likely, “and now we take the GIT quotient”). Instead, he’s appealing to Lagrangian mechanics: minimizing the scalar potential, modding out by gauge transformations — the usual
physicists’ way of thinking these about these things. Earlier in the day, Candelas responded to the question, “Why are we computing the periods of the holomorphic 3-form on a Calabi-Yau?” with,
“Well, we want to be able to count the points on the Calabi-Yau, defined over the finite field $F_{p^k}$.”
Role reversal?
Seriously, though, the connections with Number Theory seem to be indicative of something very deep. I have this forlorn hope that if I sit through the lectures, some glimmer of understanding will
Later in the week, I’ll probably duck down to College Station to catch a bit of the Cosmology and Strings conference at Texas A&M.
Posted by distler at 4:00 PM |
Followups (3)
March 12, 2004
No More Sore Thumb
I couldn’t stand the trailing
on PGP-signed comments any longer. Visually, it looked bad. And it was a markup-eyesore too, glommed onto the end of the Comment-Body, with just a few <br />s to set it off from the text of the
So I fixed the CGI code to do it right. Much less visually jarring, and perfectly semantic XHTML. (OK, … 5 points off for using the phrase “semantic XHTML”; really, I’m not that sort of guy.)
I am also less than happy with the CGI code which generates the comment verification. It spits out perfectly acceptable XHTML; I’d just prefer the markup to be controlled by the templates, rather
than by the CGI code. Much more flexible. If I get the energy, I’ll fix that too, and send my changes on to Srijith.
Update (3/30/2004): OpenPGPComment 1.5 incorporates this fix.
Posted by distler at 8:59 PM |
Post a Comment
March 11, 2004
Ultra Deep
[Via Sean Carroll] Yet another stunning indictment of NASA’s decision to cancel further servicing of the Hubble Space Telescope (can’t let actual science stand in the way of a manned mission to Mars,
now can we?).
The Hubble Ultra Deep-Field survey of the oldest and most distant galaxies ever seen.
Posted by distler at 8:42 AM |
Post a Comment
I haven’t talked about the $a$-maximization proposal of Intriligator and Wecht, nor the interesting followup papers (I, II) by Kutasov and collaborators. But the recent paper by Csaki et al reminded
We know know that there is a wealth of interacting 4D $N=1$ superconformal field theories arising as the strongly-coupled fixed point of supersymmetric gauge theories with various matter content. We
can’t say much about the physics of such theories, but one thing we ought to be able to calculate is the spectrum of chiral primaries in the theory, superconformal primary fields, $\mathcal{O}$,
which saturate the bound
(1)$\Delta(\mathcal{O} \geq \textstyle{\frac{3}{2}} |R(\mathcal{O})|$
where $R$ is the charge under the $U(1)_R\in SU(2,2|1)$ superconformal symmetry. The difficult part is simply identifying which $U(1)_R$ symmetry of the microscopic theory becomes the R-charge of the
superconformal algebra in the IR. In general, there can be a number of nonanomalous global $U(1)$ symmetries, and the desired R-charge is some linear combination
(2)$R=R_0+\sum_i c_i Q_i$
of a valid $U(1)$ R-charge, and the other global $U(1)$ symmetries of the theory. In general, there might be a further complication that the IR fixed point might have additional, “accidental” $U(1)$
symmetries. For instance, if some chiral field $X$ becomes free, and decouples from the rest of the SCFT (more generally, if the IR SCFT breaks up into decoupled sectors), then there is an accidental
$U(1)_X$ symmetry, and the “true” R-charge of the SCFT may contain some admixture of $Q_X$.
In a conformal field theory, the $\beta$-function vanishes, and the trace anomaly in a curved background is given by $\tensor{T}{_^\mu_\mu} = \frac{1}{120 (4\pi)^2} (c W^2 -\frac{a}{4} e)$ where $W$
is the Weyl tensor,
(3)$W^2 = R_{\muu\rho\sigma} R^{\muu\rho\sigma}- 2R_{\muu} R^{\muu} + \textstyle{\frac{1}{3}}R^2$
and $e$ is the Euler density,
(4)$e= 4R_{\muu\rho\sigma} R^{\muu\rho\sigma}- 16R_{\muu} R^{\muu} + 4R^2$
The trace-anomaly coefficients, $a,c$, are given by 't Hooft anomaly matching
(5)$a = \frac{3}{32} (3 Tr R^3 - Tr R),\qquad c = \frac{1}{32} (9 Tr R^3 - 5Tr R)$
Cardy conjectured that $a$ decreases along RG flows, $a_{\text{IR}}\lt a_{\text{UV}}$, and is non-negative in unitary four dimensional conformal field theories.
What Intriligator and Wecht showed was that the correct choice of $R$ could be determined by maximizing $a$,
(6)$\frac{\partial a}{\partial c_i} =0,\qquad \frac{\partial^2 a}{\partial c_i \partial c_j} \lt 0$
Heuristically, this “explains” why $a_{\text{IR}}\lt a_{\text{UV}}$. A relevant perturbation typically breaks some of the global symmetries and so $a_{\text{IR}}$ is obtained by maximizing only
within a subspace of the original parameter space in which one maximized $a_{\text{UV}}$. In any case, $a$-maximization allows one to determine $R$, and hence the spectrum of conformal weights of the
chiral primaries.
Csaki et al study $SU(N)$ gauge theory with a 2-index antisymmetric tensor, $F$ fundamentals, and $N+F-4$ anti-fundamentals, as a function of $x=N/F$. Starting in the large-$N,F$ limit, the theory
has a Banks-Zaks fixed point near $x\sim .5$. As one increases $x$, the theory remains in a nonabelian Coulomb (SCFT) phase. At some critical value of $x$, the meson $M=\overline{Q}Q$ becomes free
and decouples. At a yet-higher value of $x$, $H=\overline{Q} A \overline{Q}$ become free and decouples. When $H$ decouples, the electric description ceases to be effective. For $F\geq 5$, one can use
a series of Seiberg dualities to rewrite the theory as an $SU(F-3)\times Sp(2F-8)$ magnetic gauge theory with a superpotential. The $Sp(2F-8)$ is IR-free, whereas the $SU(F-3)$ is in a nonabelian
Coulomb phase.
Quite an intricate story, really. And a real testament to how much progress we’ve made in understanding SUSY gauge theories in the past decade.
Posted by distler at 2:58 AM |
Post a Comment
March 9, 2004
<link rel="pgpkeys">, Sean Carroll and Atom
Since publicly proposing the idea a week and a half ago, I’ve noticed an increasing number of personal websites sporting
<link rel="pgpkey" type="application/pgp-keys" href="..." />
links to the owner’s PGP Public Key.
No, I don’t go around viewing the source of every weblog I visit. These links appear in the “More” menu of the Site Navigation Bar in Mozilla.
I’m really pleased to see this being rapidly adopted. But there are a couple of things that site owners can do to make it even more useful.
1. Give the <link> a title attribute, saying whose key it is (mine says “title="Jacques Distler's PGP Public Key"”). If you have a multi-author blog, put up a separate <link> for each author’s
Public Key, and identify each one with a title attribute.
2. Make sure the key file(s) are served up as application/pgp-keys. Surfers who configure a Helper App in their browser for that MIME type can then add the Public Key to their Keychain with a single
I know I’m slow on the uptake, but Sean Carroll has a blog. I’ve added it to my BlogRoll. But you’ll note that, despite it having an Atom Feed, I haven’t syndicated it. mt-rssfeed doesn’t support
Atom feeds yet, and Blogger, apparently, does some really funky stuff with the <summary> element of their Atom feeds.
Posted by distler at 8:50 AM |
Followups (22)
March 4, 2004
Notes on Comment Authentication
I thought I’d write some more notes on the recent implementation of PGP-signed comments on this blog, which will appear in the next release (version 1.4) of the OpenPGPComment plugin for MovableType.
In my previous entry, I made the obvious point that commenters would like to avoid “identity theft,” and that PGP-signed comments provide protection against that. More broadly, from the point of view
of having serious scientific discussions — as occasionally appear here or on the String Coffee Table — you do want some assurance that the person who left a comment really is who they said they are.
In the end, we really do care who said what in the discussion.
The anonymous nature of the internet makes the problem of “identity” a hard one. In physics, when we encounter an intractably-hard problem, our most frequent dodge is to redefine the problem to one
which admits a solution, and hope that the result is a “good-enough” stand-in for the original problem. In that spirit, I (re)defined the problem as reliably associating comments posted with the
websites of the commenters.
For commenters who have an email address, but no web page, I don’t really have a solution, other than to fall back on the traditional PGP Web-of-Trust, which is designed to establish the connection
between a signed message, an email address, and an actual person.
To associate a comment with the owner of a website, however, we have a relatively simple strategy. The owner of the website puts a
<link rel="pgpkey" type="application/pgp-keys" href="http://yoursite.com/path/to/yourkey.asc" />
on his homepage. When he posts a PGP-signed comment, and leaves the URL of his homepage, we can use the <link> on his homepage to find the keyfile containing his public key. The key is then stored on
the keyring locally, for subsequent verifications of his comment(s). We allow multiple <link rel="pgpkey">’s on a page. So if you have a group blog (say), each author can have his own keyfile. Also,
the key isn’t fetched when the comment is posted, but rather when the comment is first verified. You might want to get into the habit of checking the signature on your own comments after posting
them. The first time you do that, your key will be downloaded and stored locally.
You’ll note, also, that when you click on a link to verify a comment, we display, not only the verification status and the “UID” information (usually, an email address), but also the URL of the
homepage from which it was fetched.
Imagine we displayed only the UID (email address) associated to the key. Consider the following attack. Bob Evil has a website, nasty.net. Bob creates a public key in the name of Mary Goode, and put
a <link rel="pgpkey"> pointing to it on his website. Mary has her own site, nice.com, and is unaware of Bob’s nefarious plans. Bob posts a comment here, in Mary’s name, leaving nasty.net as the URL.
Say, on this first comment, we don’t notice the discrepancy (Mary has nothing to do with nasty.net). Having gotten his bogus key onto the keying, Bob can now return and post comments in Mary’s name,
leaving nice.com as the URL. The comments will now verify as “Mary’s” (and display her UID) which is definitely bad for her.
The flaw was that we are really trying to verify the comment author’s website, whereas her PGP key is, typically, tied to her email address. The solution is to display the URL of the homepage
(nasty.net) from which the key was originally fetched. Now Bob can never fool us into thinking his comments come from the owner of nice.com.
In terms of implementation, the public keys of commenters are stored in a standard GnuPG keyring (not your personal Public-keyring; this one has to be writable by the web-server!). We maintain a
separate database of key-id/URL pairs. There’s a bit of a management issue, keeping those two synchronized. We’ll have to write some tools to address that, eventually.
Finally, I want to re-emphasize the importance of making this whole thing easy and transparent for the readers. If verifying PGP-signed comments is tedious, then readers won’t actually do it. In that
situation, sporting the little comment-verification link is actually counter-productive. Readers will get into the habit of simply assuming that, if a comment is PGP-signed, it must be genuine.
That’s worse than not having signed comments at all. An attacker can attach any-old PGP signature to his forged comment and readers, who might otherwise have been skeptical, will assume it to be
So start signing your own comments, and get into the habit of verifying the signatures on the comments of others.
Posted by distler at 10:15 AM |
Followups (30) | {"url":"http://golem.ph.utexas.edu/~distler/blog/archives/2004_03.shtml","timestamp":"2014-04-17T13:11:43Z","content_type":null,"content_length":"138508","record_id":"<urn:uuid:92cb0978-cd4f-4377-b5ac-e466118d1b7e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hilbert's Hotel
From Uncyclopedia, the content-free encyclopedia
Hilbert's paradox of the Grand Hotel is a mathematical thought exercise in which is imagined a hotel with infinite rooms, thus capable of accommodating infinite guests. Even if the hotel is full to
capacity, and an extra guest arrives, he or she can still be accommodated by putting them into room 1408, which nobody ever stays in because the television doesn't work and the bed smells a bit
funny. Problems arise because only a finite number of the rooms have ocean view, and these rooms are usually filled by mathematicians, for whom the hotel is a popular conference venue. The hotel is
also known for its annual Don Con, a popular convention where infinitely many Cambridge University dons visit the hotel for a week. These dons normally live in a (finite) cupboard in the university,
which is possible because they stand on each other's shoulders, and almost all of them are rather small, each being half the height of the previous don.
edit Example problems involving Hilbert's Hotel
Q: The hotel is full to capacity, and an infinite number of further tourists arrive. How does the hotel accommodate them all?
A: The staff directs them to the Infinite Ritz-Carlton across town.
Q: The hotel is full to capacity. The guests in even-numbered suites (of which there are infinite) all call room service to request a VCR. How long will it take the hotel staff to oblige?
A: The hotel no longer carries VCRs, as they are obsolete technology.
Q: Donald Trump has bought Hilbert's Hotel and wishes to gold plate it. How much gold will he require?
A: Four hundred carats.
Q: An infinite number of customers refuse to pay their bills. Does the Hotel move into bankruptcy?
A: No. The IRS only have an finite number of tax accountances.
Q: Mr Trump decides to expand his empire to a chain consisting of an infinite number of Hilbert Hotels. Does this infinite chain contain more rooms than the original infinite Hotel?
A: No. He contracted a Polish construction firm.
Q: Where do the hotel staff find enough water to fill the infinite swimming pool?
A: A Klein bottle.
Q: All guests switch to the room with the number double their previous room number. Why do they do this?
A: They are so drunk that they count double.
Q: Infinite guests are staying at the hotel. When they leave, they each steal a towel, making a total of infinite towels stolen. How does the management replace the stolen towels?
A: They cut all the remaining towels in half.
Q: A fire breaks out in room 991, threatening to spread to adjacent rooms. What is the management's response when asked to call the fire brigade?
A: If you find yourself being horribly burnt, please move to double your room number, plus three, times the square root of pi. Moving rooms solves all problems.
Q: As the hotel covers an infinitely large area, its rent is infinitely high. How does the hotel recoup this cost?
A: Minibars.
Q: At the Don Con, the dons want to play a party game with uncountably many rules. Can they do this?
A: Only by assuming the axiom of choice, which is obviously rubbish.
Q: Someone asks how an infinite hotel could possibly exist given the finite size of the universe. What do the management do?
A: Shoot them on sight.
Glossary of
mathematical terms
Augustin-Louis Cauchy · Albert Einstein · Isaac Newton · Blaise Pascal · Bernhard Riemann
Mathematicians · Mathemagicians · Nerds · Asians · Your Math Tutor
Fundamentals Algorithm · Proof
Tools Calculator · Flow chart · Graphs · Slide rule · Ti-83 · Texas Instruments
Education Intelligent Mathematics · Extreme mathematics · Hex · Maxwell equations · Newmath · Nude math
By Field:
Numerology 0 · 1+1 · 9/11 · 0.999... · Pi equals exactly three · Nillion · Oodles · Infinity
Veggie maths Negative potato · Counting to potato
Number Theory The largest number · Integer · Legend of Zelda Link theory · Negative Numbers · Odd · Prime number · Fibonacci Sequence · Rational numbers · Riemann Hypothesis · Imaginary Number
· Complex numbers · Is one a number? · Fermat's Penultimate Theorem · Fermat's Last Theorem
Arithmetic Arithmetic · Addition · % · How to Divide by Zero · The Quantity 2 plus 4 times y = Your Mom
Algebra Pre-Algebra · Al Gebra · Linear Algebra · Equation · Polynomial · Fourier Transform · Hilbert's Hotel
Geometry & Topology Geometry · Trigonometry · Fractal · Hairy ball theorem · Tangent · Paradox (Achilles and the Tortoise) · Transcendental curve · Soviet Integration (Mathematics)
Calculus Calculus · Integral · Vector calculus · Differential Equations · Cauchy's theorem · ∫ · 2 Girls 1 Calculus Equation
Probability & Statistics · Random Statistics · Soviet Union (Mathematics)
Logic & Computer Logic · Recursive · Monty Hall problem · Turing test · Number Bases
Physics & Mathy Laws of Physics · Principle of Least Action · Einstein's Malicious Theories · The Popularity of War | {"url":"http://uncyclopedia.wikia.com/wiki/Hilbert's_Hotel","timestamp":"2014-04-20T06:36:15Z","content_type":null,"content_length":"43377","record_id":"<urn:uuid:35f209d4-65db-48d5-8ceb-98d0cef3fdb0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Star Member
Re: Sequences!
That looks random!
Maybe -325, -2340, -7457
using lagrange polynomial.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Sequences!
Here is another sequence:
Find as much about this sequence as you can.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
I got 64,64,64,64,64,64,64 for the next seven terms.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Hi bobbym
I am sorry, but that is not correct.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
I know that, but it is still one possible answer.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Of course, but it is not the wanted one.
Here are the first 50 terms:
How did you get your answer?
Last edited by anonimnystefy (2013-03-25 03:31:40)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
I know that too. I got my answer from a little trick. Now the chance that it coincided with what you wanted was about a billion to one...
Thanks for the extra terms.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Which little trick?
You are welcome.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
I figured it was the decimal digits of
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Huh, so PSLQ?
I never got it to work on Maxima. The lattice transform is a real pain.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
Mightier than a PSLQ!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
What is it?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
Maple and an m are a formidable combination.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
How are you doing on the sequence with the new terms?
Last edited by anonimnystefy (2013-02-08 19:40:50)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
Not very good.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Hi bobbym
Ok, tell me if you get anything.
In the mean time, could you move this thread to Exercises. I don't know what I was thinking when making this thread...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
Are there any more terms that you know of?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
I can get you as many terms as you want.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
Please put a few more there then.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Would a 100 do?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
It will have to.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
Well, I can always give you more, if you want...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
No, those will do for now.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Sequences!
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Sequences!
It looks like I will need 121,312,416 more...
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=252498","timestamp":"2014-04-20T21:05:11Z","content_type":null,"content_length":"38644","record_id":"<urn:uuid:d5bda214-5e95-4d5a-8f0a-b7af1ff13c0a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is the non-triviality of the algebraic dual of an infinite-dimensional vector space equivalent to the axiom of choice?
up vote 29 down vote favorite
If $V$ is given to be a vector space that is not finite-dimensional, it doesn't seem to be possible to exhibit an explicit non-zero linear functional on $V$ without further information about $V$. The
existence of a non-zero linear functional can be shown by taking a basis of $V$ and specifying the values of the functional on the basis.
To find a basis of $V$, the axiom of choice (AC) is needed, and indeed, it was shown by Blass in 1984 that in Zermelo-Fraenkel set theory (ZF) it is equivalent to the axiom of choice that any vector
space has a basis. However, it's not clear to me that the existence of a non-zero element of $V^*$ really needs the full strength of AC. I couldn't find a reference anywhere, so here is my question:
Consider the following statement:
(D) For any vector space $V$ that is not finite-dimensional, $V^*\neq \{0\}$.
Is (D) equivalent to AC in ZF? If not, is there some known axiom that is equivalent to (D) in ZF?
Note that this question is about the algebraic dual $V^*$. There are examples of Banach spaces, for example $\ell^\infty/c_0$, where it is possible (in the absence of the Hahn-Banach theorem, itself
weaker than AC) for their topological dual to be $\{0\}$; see this answer on MO. I'm not aware of any result for the algebraic dual.
This question was inspired by, and is related to this question on MO.
Edit: Summary of the five answers so far:
• Todd's answer + comments by François and Asaf: in Läuchli's models of ZF there is an infinite dimensional vector space $V$ such that all proper subspaces are finite dimensional. In particular,
$V$ does not have a basis and $V^*=\{0\}$. Also, according to Asaf, in these models Dependent Choice can still hold up to an arbitrarily large cardinal.
• Amit's answer + comment by François: in Shelah's model of ZF + DC + PB (every set of real numbers is Baire), $\Bbb R$ considered as a vector space over $\Bbb Q$ has a trivial dual.
• François's answer (see also godelian's answer) + Andreas' answer in ZF the following is equivalent to BPIT: all vector spaces over finite fields have duals large enough to separate points.
So DC is too weak, and BPT is strong enough for finite fields (and in fact equivalent to a slightly stronger statement). How far does Choice fail in Blass' model? Update: according to Asaf Karagila,
$DC_{\kappa}$ can hold for arbitrarily large $\kappa$.
This is probably a naive question, but is there no chance that (D) is provable in ZF? – Thierry Zell Dec 14 '10 at 14:14
@Thierry: I don't know. If (D) holds in ZF, then it would be a very obscure fact, because I've seen in many texts it explicitly mentioned that AC is needed to prove (D), as one needs a basis. So I
doubt that it can be done in ZF, but I would also be very surprised if the full AC is needed. – Konrad Swanepoel Dec 14 '10 at 14:23
Great question! – Amit Kumar Gupta Dec 14 '10 at 18:49
add comment
6 Answers
active oldest votes
To add the proof for my claim in Todd's answer, which essentially repeats Läuchli's original [1] arguments with minor modifications (and the addition that the resulted model satisfies
We will show that it is consistent to have a model in which $DC_\kappa$ holds, and there is a vector space over $\mathbb F_2$ which has no linear functionals.
Assume that $M$ is a model of $ZFA+AC$ and that $A$, the set of atoms has $\lambda>\kappa$ many atoms, where $\lambda$ is a regular cardinal. Endow $A$ with a structure of a vector
space over $\mathbb F=\mathbb F_2$. Now consider the permutation model $\frak M$ defined by the group of linear permutations of $A$, and by ideal of supports generated by subsets of
dimension $\le\kappa$.
Denote by $\operatorname{fix}(X)$ the permutations which fix every element of $X$, by $\operatorname{sym}(X)$ the permutations that fix $X$ as a set, and by $[E]$ the span of $E$ as a
subset of $A$. We say that $E\subseteq A$ is a support of $X$ if $\pi\in\operatorname{fix}(E)\Rightarrow\pi\in\operatorname{sym}(X)$.
Final word of terminology, since $A$ will play both the role of set of atoms as well the vector space, given $U\subseteq A$ the complement will always denote a set complement, whereas
the direct complement will be used to refer to a linear subspace which acts as a direct summand with $U$ in a decomposition of $A$.
Claim 1: If $E$ is a subset of $A$ then $\operatorname{fix}(E)$ is the same as $\operatorname{fix}([E])$.
Proof: This is obvious since all the permutations considered are linear. $\square$
From this we can identify $E$ with its span, and since (in $M$) the $[E]$ has the same cardinality of $E$ we can conclude that without loss of generality supports are subspaces.
Claim 2: $\frak M$$\models DC_\kappa$.
Proof: Let $X$ be some nonempty set, and $\lt$ a binary relation on $X$, both in $\frak M$. In $M$ we can find a function $f\colon\kappa\to X$ which witness $DC_\kappa$ in $V$.
Since $\frak M$ is transitive, we have that $\alpha,f(\alpha)\in\frak M$ and thus $\langle\alpha,f(\alpha)\rangle\in\frak M$. Let $E_\alpha$ be a support for $\lbrace\langle\alpha,f(\
alpha)\rangle\rbrace$ then $\bigcup_{\alpha<\kappa} E_\alpha$ is a set of cardinality $<\kappa^+$ and thus in our ideal of suports. It is simple to verify that this is a support of
$f$, therefore $f\in\frak M$ as wanted. $\square$
Claim 3: If $x,y\in A$ are nonzero (with respect to the vector space) then in $M$ there is a linear permutation $\pi$ such that $\pi x=y$ and $\pi y=x$.
Proof: Since $x\neq y$ we have that they are linearly independent over $\mathbb F$. Since we have choice in $M$ we can extend this to a basis of $A$, and take a permutation of this
basis which only switches $x$ and $y$. This permutation extends uniquely to our $\pi$.
up vote 6 down
vote accepted Claim 4: If $U\subseteq A$ and $U\in\frak M$ then either $U$ is a subset of a linear subspace of dimension at most $\kappa$, or a subset of the complement of such space.
Proof: Let $E$ be a support of $U$, then every linear automorphism of $A$ which fixes $E$ preserves $U$. If $U\subseteq [E]$ then we are done, otherwise let $u\in U\setminus [E]$ and
$v\in A\setminus [E]$, we can define (in $M$ where choice exists) a linear permutation $\pi$ which fixes $E$ and switches $u$ with $v$. By that we have that $\pi(U)=U$ therefore $v\in
U$, and so $U=A\setminus[E]$ as wanted. $\square$
Claim 5: If $U\subseteq A$ is a linear proper subspace and $U\in\frak M$ then its dimension is at most $\kappa$.
Proof: Suppose that $U$ is a subspace of $A$ and every linearly independent subset of $U$ of cardinality $\le\kappa$ does not span $U$, we will show $A=U$. By the previous claim we
have that $U$ is the complement of some "small" $[E]$.
Now let $v\in A$ and $u\in U$ both nonzero vectors. If $u+v\in U$ then $v\in U$. If $u+v\in [E]$ then $v\in U$ since otherwise $u=u+v+v\in[E]$. Therefore $v\in U$ and so $A\subseteq
U$, and thus $A=U$ as wanted.$\square$
Claim 6: If $\varphi\colon A\to\mathbb F$ a linear functional then $\varphi = 0$.
Proof: Suppose not, for some $u\in A$ we have $\varphi(u)=1$, then $\varphi$ has a kernel which is of co-dimension $1$, that is a proper linear subspace and $A=\ker\varphi\oplus\
lbrace 0,u\rbrace$. However by the previous claim we have that $\ker\varphi$ has dimension $\kappa$ at most, and without the axiom of choice $\kappa+1=\kappa$, thus deriving
contradiction to the fact that $A$ is not spanned by $\kappa$ many vectors.
Aftermath: There was indeed some trouble in my original proof, after some extensive work in the past two days I came to a very similar idea. However with the very generous help of
Theo Buehler which helped me find the original paper and translate parts, I studied Läuchli's original proof and concluded his arguments are sleek and nicer than mine.
While this cannot be transferred to $ZF$ using the Jech-Sochor embedding theorem (since $DC_\kappa$ is not a bounded statement), I am not sure that Pincus' transfer theorem won't
work, or how difficult a straightforward forcing argument would be.
Lastly, the original Läuchli model is where $\lambda=\aleph_0$ and he goes on to prove that there are no non-scalar endomorphisms. In the case where we use $\mathbb F=\mathbb F_2$ and
$\lambda=\aleph_0$ we have that this vector space is indeed amorphous which in turn implies that very little choice is in such universe.
1. Läuchli, H. Auswahlaxiom in der Algebra. Commentarii Mathematici Helvetici, vol 37, pp. 1-19.
Nice! Thanks for adding as an answer. – Konrad Swanepoel Nov 3 '11 at 11:13
add comment
This is a very partial answer (really in response to Thierry's question) which indicates that it is not provable in ZF that $V^\ast \neq \{0\}$ for every vector space $V$. This answer
piggybacks on an answer Andreas Blass gave here, which gives a model of ZF in which the automorphism group of a vector space over $\mathbb{F}_2$ can be the cyclic group of order 3, which is
really quite exotic.
So, I will prove that if every vector space $V$ over $\mathbb{F}_2$ (of dimension greater than 1) has a nontrivial dual, then $V$ has a nontrivial involution, which would run counter to
Andreas's model. Indeed, suppose there exists a surjective linear map $f: V \to \mathbb{F}_2$. There exists an element $x \in V$ such that $f(x) = 1$. There also exists a surjective map $V/
\langle x \rangle \to \mathbb{F}_2$, hence a surjective map $g: V \to \mathbb{F}_2$ such that $g(x) = 0$, and thus there exists an element $y \in \ker(f)$ such that $g(y) = 1$. It follows
that we have a surjective linear map
up vote
18 down $$\langle f, g \rangle: V \to \mathbb{F}_2 \times \mathbb{F}_2$$
say with kernel $W$. This epimorphism splits, so we have an identification
$$V \cong W \oplus \mathbb{F}_{2}^{2}$$
and clearly now we can exhibit a non-identity involution on the right side which acts as the identity on $W$ and permutes two basis elements of the 2-dimensional summand.
Awesome! Thanks, Todd. – Thierry Zell Dec 15 '10 at 2:43
Very nice. Do you know how badly AC fails in Blass's model? – Konrad Swanepoel Dec 15 '10 at 11:42
5 The permutation model used by Andreas is in fact due to H. Läuchli. [Auswahlaxiom in der Algebra, Comment. Math. Helv. 37 1962/1963] This is the standard model for the existence of a
vector space without a basis. (Note that there is nothing special about $\mathbb{F}_2$ or rather $\mathbb{F}_4$ in the construction.) The vector space in question has the curious
property that all of its proper subspaces are finite dimensional; this gives a very quick proof of the desired result. – François G. Dorais♦ Dec 15 '10 at 12:21
2 @Konrad: Something that I am working on nowadays shows that in Lauchli models it is consistent to have $DC_\kappa$ for arbitrarily high $\kappa$. I'm not sure that Todd's proof will
carry though. – Asaf Karagila Jun 6 '11 at 19:45
1 @Konrad: Of course it carries over! It is very simple, a simple generalization of Blass construction (generalizing Lauchli) still yields a vector spaces without non-scalar automorphisms,
and from here the proof by Todd carries over completely. – Asaf Karagila Jun 6 '11 at 19:48
show 4 more comments
Some restricted forms of (D) are weaker than the Axiom of Choice. Fix a field $F$ and consider the stronger statement:
For every $F$-vector space $V$ and every nonzero $v_0 \in V$ there is a $F$-linear functional $f:V\to F$ such that $f(v_0) = 1$.
When $F$ is a finite field, this is a consequence of the Ultrafilter Theorem or, equivalently, the Compactness Theorem for propositional logic.
To see this, consider the following propositional theory with one propositional variable $P(v,x)$ for each pair $v \in V$ and $x \in F$. The idea of the theory is that $P(v,x)$ should be
true if and only if $f(v) = x$. The axioms for the theory are:
up vote 11 1. $\lnot(P(v,x) \land P(v,y))$ for all $v \in V$ and distinct $x, y \in F$
down vote 2. $\bigvee_{x \in F} P(v,x)$ all $v \in V$
3. $P(v,x) \land P(w,y) \rightarrow P(v + w, x + y)$ for all $v, w \in V$ and $x, y \in F$
4. $P(v,x) \rightarrow P(yv,yx)$ for all $v \in V$ and $x, y \in F$
5. $P(v_0,1)$
Axiom schemes 1 & 2 ensure that the $P(v,x)$ describe the graph of a function $f:V \to F$. Axiom schemes 3 & 4 ensure that the function $f$ thus described is $F$-linear. Finally, the
last axiom 5 ensures that $f(v_0) = 1$. It is clear that every finite subset of the axioms is satisfiable, therefore, by the Compactness Theorem, the whole theory is satisfiable.
Very interesting approach. I've been wondering how to adapt this to an infinite field, but it seems the big problem is axiom 2. So if I try to do it for e.g. $F=\Bbb F_2(x)$, then I
need an existential quantifier over the natural numbers. – Konrad Swanepoel Dec 15 '10 at 13:50
add comment
It seems that very little is known about the strength of statements like this. A few years ago, I encountered (in a pedagogical context) the slightly stronger looking statement "every
non-zero vector in any vector space has a non-zero image under some element of the dual space." Looking in the canonical place, "Consequences of the Axiom of Choice" by Howard and Rubin, I
didn't find this statement in the section on vector spaces, but I found an equivalent statement in the section on fields, form 284: A system of linear equations over a field $F$ has a
up vote solution in $F$ if and only if every finite sub-system has a solution in $F$.
9 down
vote Howard and Rubin also have the version of 284 restricted to finite fields; it's 14BN, where the 14 indicates it's equivalent to the Boolean Prime Ideal Theorem. Unfortunately, there seems to
be essentially no more information about 284.
add comment
I just came across this page on Timothy Gowers' website wherein he states:
there are models of set theory without the axiom of choice that contain infinite-dimensional vector spaces V such that V* really is {0}.
Update: As Konrad points out, there's no reference on his page for the statement above. So I emailed Mr. Gowers and he gave me the following answer which I believe is correct:
up vote Consider $V = \mathbb{R}$ regarded as a vector space over $\mathbb{Q}$. A non-trivial element of $V^{\ast}$ is a non-zero $\mathbb{Q}$-linear map from $\mathbb{R} \to \mathbb{Q}$. Such a map,
6 down being a non-trivial map to $\mathbb{Q}$, will not be $\mathbb{R}$-linear, but it would still be additive because it's $\mathbb{Q}$-linear. I believe it holds in ZF that every additive
vote Lebesgue measurable map $\mathbb{R} \to \mathbb{R}$ is $\mathbb{R}$-linear (there's a proof of this statement here; I don't believe the Lebesgue density theorem requires choice$^{\dagger}$,
and none of the other steps in the proof require choice). Thus an additive non-$\mathbb{R}$-linear map would give us a non-measurable function, and hence a non-measurable set. This argument
so far hasn't used choice, so at this point it suffices to find a model of ZF with no non-measurable sets of reals: just take Solovay's model.
$^{\dagger}$ This is the only point I'm not sure of.
Unfortunately he doesn't give any reference.... – Konrad Swanepoel Dec 15 '10 at 11:41
The proof that additive measurable maps are linear uses $\sigma$-additivity (or at least $\sigma$-subadditivity); see the 4th last line of the proof you link to. I think Countable Choice
is needed to show $\sigma$-subadditivity. But CC holds in Solovay's model (even Dependent Choice), so $\sigma$-additivity should hold there (otherwise what's the use talking about
measurable sets?), so this proof should go through there. – Konrad Swanepoel Dec 15 '10 at 12:49
Since the consistency of Solovay's model needs an inaccessible cardinal, this argument doesn't give the consistency of the negation of (D) relative to the consistency of ZF; you need the
consistency of the existence of an inaccessible cardinal. – Konrad Swanepoel Dec 15 '10 at 12:51
1 @Konrad: If you use Baire category instead of Lebesgue measure, you can avoid the inaccessible. See this old answer of mine - mathoverflow.net/questions/16666/… – François G. Dorais♦ Dec
15 '10 at 12:54
@François: aha, very interesting; I missed that question. So to elaborate: Shelah has a model equiconsistent with ZF where all sets are Baire and where DC holds, and it is known in ZF+DC
that an additive function $\Bbb R\rightarrow \Bbb R$ with the Baire property must be $\Bbb R$-linear (I found it as a special case of Theorem 6 of M. R. Mehdi, On convex functions, J.
London Math. Soc. 39 (1964) 321–326, but it could be older.) – Konrad Swanepoel Dec 15 '10 at 13:35
add comment
EDIT: According to François's comment below, this answer only works for finite fields instead of fields of positive characteristic, as I originally intended. I would like to leave it for a
while in community wiki and see if someone else's can use these ideas to give a further step. END EDIT
Here is another proof that the ultrafilter theorem is enough to deduce statement (D) for vector spaces over finite fields. The idea is to use partial functionals defined on
finite-dimensional subspaces and use a consistency principle to deduce the existence of a functional defined in the whole space. This can be done by using the following theorem, which is
equivalent to the prime ideal theorem (and hence to the ultrafilter lemma):
THEOREM: Suppose for each finite $W \subset I$ there is a nonempty set $H_W$ of partial functions on I whose domains include $W$ and such that $W_1 \subseteq W_2$ implies $H_{W_2} \subseteq
H_{W_1}$. Suppose also that, for each $v \in I$, {$h(v): h \in H_{\emptyset}$} is a finite set. Then there exists a function $g$, with domain $I$, such that for any finite $W$ there exists
up vote 1 $h \in H_W$ with $g|_W \subseteq h$.
down vote
This is theorem 1 in this paper by Cowen, where he proves the equivalence with the prime ideal theorem (a simple proof using compactness for propositional logic is given by the end of the
paper, and is close to what François had in mind). This is essentially also the "Consistency principle" as appearing in Jech's "The axiom of choice", pp. 17, since although Jech's
formulation uses only two-valued functions, the proof he gives there, through the ultrafilter lemma, actually works when the functions are $n$-valued.
Now, for an infinite-dimensional vector space $V$ over a finite field, fix a nonzero $v_0 \in V$ and define the sets $H_W$ for $W \subset V$ as follows: if $W \subseteq U$ for a finite $U$,
consider the set $S_{U}$ of all functionals defined on the (finite-dimensional) subspace generated by $U$ and such that $v_0 \in U \implies f(v_0)=1$. Then $H_W$ is the union of all $S_U$
for finite $U \supseteq W$. By the previous theorem, we have a function $f: V \to \mathbb{F}$, and the restriction property shows that $f$ is linear and $f(v_0)=1$.
I don't understand the generalization. Unless the field really is $\mathbb{F}_p$, there won't be any nonzero functionals that only take values in $\mathbb{F}_p$. – François G. Dorais♦ Oct
16 '11 at 22:28
@François: You are completely right; f will only be linear with respect to multiplication by scalars in F_p. I totally missed this obvious fact. So in the end this is just really another
proof for the finite case. I will edit accordingly and then delete the answer. – godelian Oct 16 '11 at 22:43
Done. Also, I'm wondering if for finite fields statement (D) might not be actually equivalent to theorem 1 (and hence to BPI). Perhaps the proof in Cowen's paper can give some hints if
that's the case. – godelian Oct 16 '11 at 23:04
add comment
Not the answer you're looking for? Browse other questions tagged axiom-of-choice set-theory linear-algebra dual-pairs lo.logic or ask your own question. | {"url":"http://mathoverflow.net/questions/49388/is-the-non-triviality-of-the-algebraic-dual-of-an-infinite-dimensional-vector-sp?sort=oldest","timestamp":"2014-04-21T04:44:34Z","content_type":null,"content_length":"107589","record_id":"<urn:uuid:ad34532d-ca59-4fd6-91ac-c145d5d5e3f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
A self-homeomorphism of $L_{p,q}$ is isotopic to one which preserves heegaard splitting
up vote 2 down vote favorite
Consider the lens space $L_{p,q}$, which we can describe using its standard heegaard splitting, i.e. define $L_{p,q}$ as a quotient of two solid tori, identifying meridians on the boundary of one
with $(p,q)$-curves on the other. Call $H_1$ and $H_2$ the two solid tori, and $T$ the torus with identified points. I am working on a proof that $L_{p,q}$ has no orientation reversing
self-homeomorphism if $q$ is a quadratic residue mod $p$. A proof can be found here, but it can be greatly simplified if we assume that any homeomorphism $L_{p,q} \to L_{p,q}$ is isotopic to one
which maps $T$ to $T$. In this textbook the author claims that this might always be the case, although he doesn´t give any reasons for this assumption, not even intuitive ones.
Does anybody know if and why this might be the case? It seems to me that we need to exclude cases in which, for example, one torus ends un knotted inside the other, but unfortunately I am very far
from being able to justify anything rigorously.
Thanks for any insight.
1 This follows from the work of Bonahon and Otal who proved that all HS of genus 1 of lens spaces are isotopic to the standard one. – Misha Jun 2 '13 at 10:39
@misha could you provide any references? – Emilio Ferrucci Jun 2 '13 at 10:41
add comment
1 Answer
active oldest votes
Suppose that $L = L(p,q)$ is a lens space. Let $T$ be the standard genus one Heegaard splitting. Then $T$ is unique up to isotopy. It follows that every self-homeomorphism of $L$
preserves $T$ up to isotopy.
up vote 5 down vote The uniqueness result was first proved by Bonahon and Otal in their paper "Scindements de Heegaard des espaces lenticulaires". A proof can also be found in Theorem 2.5 of
accepted Hatcher's three-manifold notes, available here: http://www.math.cornell.edu/~hatcher/3M/3Mdownloads.html
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology orientation 3-manifolds or ask your own question. | {"url":"http://mathoverflow.net/questions/132564/a-self-homeomorphism-of-l-p-q-is-isotopic-to-one-which-preserves-heegaard-sp","timestamp":"2014-04-17T04:04:43Z","content_type":null,"content_length":"54879","record_id":"<urn:uuid:106fd278-c92d-4425-9787-159ad99431a0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Singapore Math Standards Kindergarten B Earlybird Textbook
New EditionSingapore EarlyBird Kindergarten Mathematics Standards Edition
Singapore Math Earlybird is second in a series designed to be a two year kindergarden program. The program aims to prepare young students for subsequent stages of mathematical thinking. In the
Textbook, mathematical concepts are developed in a systematic, engaging and fun way. Hands-on tasks, meaningful activities and attractive illustrations rich in mathematical content engage students'
active participation in the learning process. The series also provides easy-to-follow guidance in various forms to allow for meaningful intervention by both parents and teachers.
Singapore Math
Incudes objectives: recognition of solid shapes, reading and writing of numerals, cardinal number counting, sorting according to color, shape, and length,arranging numbers in sequence, counting and
writing of numerals, understanding the concept of empty set, counting backwards from 10 to 0, one to one matching to find which has more, picture graphs, even and odd numbers within 10, time
sequence, matching events to day-time or night time, recognizing ordinal numbers, becoming aware of shapes and patterns, recognizing and counting pennies, nickels, dimes and quarters.
Singapore Math Sample Chapter 13Singapore Math Sample Chapter 20 | {"url":"http://www.canadianhomeeducation.com/products/32456-singapore-math-kb-earlybird-textbook.aspx","timestamp":"2014-04-18T03:03:57Z","content_type":null,"content_length":"69572","record_id":"<urn:uuid:c53af090-e604-4151-8f64-4f58a4f03c41>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
digitalmars.D - enum conversions
bearophile <bearophileHUGS lycos.com>
Some musings, feel free to ignore this post.
Sometimes I have to convert enums to integers or integers to enums. I'd like to
do it efficiently (this means with minimal or no runtime overhead), and safely
(this means I'd like the type system to prove I am not introducing bugs, like
assigning enums that don't exist).
This function classifies every natural number in one of the three classes
(deficient numbers, perfect numbers, and abundant nubers, according to the sum
of its factors), so I use a 3-enum:
enum NumberClass : int { deficient=-1, perfect=0, abundant=1 }
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int difference = reduce!q{a + b}(0, factors) - n;
return cast(NumberClass)sgn(difference);
std.math.sgn() returns a value in {-1, 0, 1}, so this first version of the
function uses just a cast, after carefully defining the same values for the
NumberClass enums. But casts stop the type system, so it can't guaranteed the
code is working correctly or safely, so if I change the values of the enums the
type system doesn't catch the bug.
This version is safer, works with any value associated to the enum items, but
it performs even two tests at run-time:
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int diff = sgn(reduce!q{a + b}(0, factors) - n);
if (diff == -1)
return NumberClass.deficient;
else if (diff == 0)
return NumberClass.perfect;
return NumberClass.abundant;
This version is about as safe, and uses one array access on immutable array (I
have not used an emum array to avoid wasting even more run time):
NumberClass classifyNumber(int n)
static immutable res = [NumberClass.deficient, NumberClass.perfect,
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int sign = sgn(reduce!q{a + b}(0, factors) - n);
return res[sign + 1];
Using a switch is another safe option, I can't use a final switch. This too has
some run-time overhead:
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int sign = sgn(reduce!q{a + b}(0, factors) - n);
switch (sign) {
case -1: return NumberClass.deficient;
case 0: return NumberClass.perfect;
default: return NumberClass.abundant;
In theory a bit better type system (with ranged integers too as first-class
types) knows that sgn() returns the same values as the enum NumberClass, this
allows the first version without cast and compile-time proof of correctness:
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int difference = reduce!q{a + b}(0, factors) - n;
return sgn(difference);
I don't know what to think.
May 13 2011
Timon Gehr <timon.gehr gmx.ch>
Some musings, feel free to ignore this post.
Sometimes I have to convert enums to integers or integers to enums. I'd like to
(this means I'd > like the type system to prove I am not introducing bugs, like assigning enums that don't exist).
This function classifies every natural number in one of the three classes
its factors), so I use a 3-enum:
enum NumberClass : int { deficient=-1, perfect=0, abundant=1 }
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int difference = reduce!q{a + b}(0, factors) - n;
return cast(NumberClass)sgn(difference);
std.math.sgn() returns a value in {-1, 0, 1}, so this first version of the
NumberClass enums. But
casts stop the type system, so it can't guaranteed the code is working
the bug.
This version is safer, works with any value associated to the enum items, but
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int diff = sgn(reduce!q{a + b}(0, factors) - n);
if (diff == -1)
return NumberClass.deficient;
else if (diff == 0)
return NumberClass.perfect;
return NumberClass.abundant;
This version is about as safe, and uses one array access on immutable array (I
NumberClass classifyNumber(int n)
static immutable res = [NumberClass.deficient, NumberClass.perfect,
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int sign = sgn(reduce!q{a + b}(0, factors) - n);
return res[sign + 1];
Using a switch is another safe option, I can't use a final switch. This too has
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int sign = sgn(reduce!q{a + b}(0, factors) - n);
switch (sign) {
case -1: return NumberClass.deficient;
case 0: return NumberClass.perfect;
default: return NumberClass.abundant;
In theory a bit better type system (with ranged integers too as first-class
allows the first
version without cast and compile-time proof of correctness:
NumberClass classifyNumber(int n)
auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
int difference = reduce!q{a + b}(0, factors) - n;
return sgn(difference);
I don't know what to think.
That overhead you are referring to gets negligible even for moderately large n. An entirely safe version of the code without any overhead would be: enum NumberClass : int { deficient=-1, perfect=0,
abundant=1 } NumberClass classifyNumber(int n) auto factors = filter!((i){ return n % i == 0; })(iota(1, n)); int difference = reduce!q{a + b}(0, factors) - n; //guard the cast: static assert
(NumberClass.min==-1 && NumberClass.max==1); static assert(cast(int)NumberClass.deficient==-1 && cast(int)NumberClass.perfect==0 && cast(int)NumberClass.abundant==1); return cast(NumberClass)sgn
(difference); } This is not the way of least resistance though.
May 13 2011
Wow sorry please neglect that, I completely wiped out an entire line
of code. Ignore that code please.
May 13 2011
Or wait, I didn't? I think I'm confusing two implementations now. LOL.
May 13 2011
Well I don't know about safety, but this will throw an exception at
runtime on invalid enum values, and it also seems to perform a little
better than your original classifyNumber function (The
classifyNumberNewImpl function is the new one):
I'm not sure what is making the code faster, but on my machine I get:
classifyNumberNew: 5543
classifyNumberOld: 5550
May 13 2011 | {"url":"http://www.digitalmars.com/d/archives/digitalmars/D/enum_conversions_136114.html","timestamp":"2014-04-21T05:30:15Z","content_type":null,"content_length":"18802","record_id":"<urn:uuid:c7b49d02-3bf7-4492-b86f-f3ee0a1b2cc3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Projectin a 2D point into Worldspace, for makeing up worldspace ray
<prev [Date] next> <prev [Thread] next>
Re: Projectin a 2D point into Worldspace, for makeing up worldspace ray
To: info-inventor-dev@xxxxxxxxxxx
Subject: Re: Projectin a 2D point into Worldspace, for makeing up worldspace ray together with actual viewing direction
From: Christoph Hintermüller <hinzge@xxxxxxxxxxxxxx>
Date: Mon, 07 May 2001 13:09:46 +0200
Sender: owner-info-inventor-dev@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux 2.4.0 i686; en-US; 0.8.1) Gecko/20010326
Problem solved thanks to all:
1) save the actual SbViewportRegion from the actual viewer
2) save from the actual camera: location, type, neardistance fardistance, orientation matrix
2b) for convenience calculate the inverse of the orientation matrix too
3) get from the actualcamera its actual SbViewVolume.
4) extract via the ViewVolume the witdh and hight of the nearplane window and the viiewing direction of the camera 5) if necessary use the size in pixels from the saved Viewport to claculate the
relative screen coordinates of the desired screen point, when given in pixels ... 6) scale the relative coordinates to the ones inside the nearplane using the extracted width and hight 7) make the
nearplanepoint relative to the viewingpoint which is usually centered on the nearplane of the viewing volume (simply assume that facht and subtract haf of width and hight) 8) transform the relative
point from viewiongvolume to wolrdspace applying the orientation matrx to the point 9) add the saved cameralocation to it and a vector having the length of the neardistance and
which points in the viewing direction.
10) if the camera is PERSPECTIVE subtract the cameralocation from the newly generated worldspacepoint in order to get the final ray direction if the camera is orthographic set the final raydirection
to the saved viewing direction of the camera
11) call your ray with the worldspacepouint from 9 and the direction from 10
12) apply your ray to your scene and check if there is an intersection at all :) 13) on redo of the projection or if you have more than one point to project onto the same surface using the same
camera settings, simply repeat the steps 5 to 12 or even better save the relative cooridinates calculated in 5 too for reuse and only do steps 6 to 12
The camera independent way back is a bit easier but requires that at least steps 1 to 4
have been performed once
a) subtract the surfacepoint in worldcoordinates from the camera location to get the vector between themn b) calcualte the angle(bettre the cosine) between the vector from a) and the viewing
direction c) if the cosine from b) is bigger than 0 the surfacepoint lies behind the camera an usually can be ignored d) calculate the absolutevalue of the cosine from b) and check wether the length
of the vector from a) times the absolute cosine is longer than the fardistance or shorter than the neardistance than the point can usually be ignored too as it does definitlynot ly inside the viewing
volume of the recorded camera e) subtract the neardistance form the length calculated in d and devide by the absolutevalue of the cosine from d) f) if the camera is perspective set the length of the
vector from a to the length calculated in e if the camera is orthographic set the length of the viewingdirection vector to the length calculated in e but do not devide this length by the absolute
value of th cosine from d) g) add the vector form f to the original suface point in order to get the corresponding point on the nerplane of the recorde viewing wolume in worldspace coordinates; h)
subtract from the point in g) the location of the camera and the vector having the length of the neardistance to make it relative to the viewvolume i) use tthe inverse of the camera orientation
matric (might ba calculated in 2b) to transfomr the point from h) back into the cameraspace j) make the point from i absolute in respect to the nearpalnewindow and deviede it by its margins to
reoptain the its former relative screen coorinates
Christop Hintermüller
THESIS: God is alive
PROOVE: Who else would have scheduled the mankind and world first
recommendation of resrearch????
CONCLUSION: Scientists do what he wants, willing or not:)
<Prev in Thread] Current Thread [Next in Thread> | {"url":"http://oss.sgi.com/archives/info-inventor-dev/2001-05/msg00013.html","timestamp":"2014-04-16T07:21:04Z","content_type":null,"content_length":"12143","record_id":"<urn:uuid:7cc7a531-45dc-417f-8ec8-859b0e660888>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Orthogonal Projections and Linear Transformations
November 21st 2009, 05:19 PM #1
Sep 2009
Orthogonal Projections and Linear Transformations
Much help would be appreciated on these problems, thanks!
In $R^3$ the orthogonal projections on the x-axis, y-axis and z-axis are
defined by $T1(x, y, x) = (x, 0, 0), T2(x, y, z) = (0, y, 0)$, and $T3(x, y, z) = (0, 0, z)$
(a) Show that the orthogonal projections on the coordinate axes are linear
operators, and find their standard matrices.
(b) Show that if $T: R^3$ → $R^3$ is an orthogonal projection on one of the coordinate
axes, then for every vector x in $R^3$, the vector T(x) and x – T(x) are
orthogonal vectors.
(c) Make a sketch showing x and x - T(x) in the case where T is the orthogonal
projection on the x-axis.
(a) Is a composition of one-to-one linear transformations one-to-one? Justify
your conclusion.
(b) Can the composition of a one-to-one linear transformation and a linear
transformation that is not one-to-one be one-to-one? Account for both possible
orders of composition and justify your conclusion.
Much help would be appreciated on these problems, thanks!
In $R^3$ the orthogonal projections on the x-axis, y-axis and z-axis are
defined by $T1(x, y, x) = (x, 0, 0), T2(x, y, z) = (0, y, 0)$, and $T3(x, y, z) = (0, 0, z)$
(a) Show that the orthogonal projections on the coordinate axes are linear
operators, and find their standard matrices.
(b) Show that if $T: R^3$ → $R^3$ is an orthogonal projection on one of the coordinate
axes, then for every vector x in $R^3$, the vector T(x) and x – T(x) are
orthogonal vectors.
(c) Make a sketch showing x and x - T(x) in the case where T is the orthogonal
projection on the x-axis.
(a) Is a composition of one-to-one linear transformations one-to-one? Justify
your conclusion.
(b) Can the composition of a one-to-one linear transformation and a linear
transformation that is not one-to-one be one-to-one? Account for both possible
orders of composition and justify your conclusion.
I bet you've already done some self work on both these problems, so show it to us and tell us where're you stuck and somebody will perhaps try to help you.
Honestly, I don't know 1.) very much.
2.) I know for a.) I used:
Ta(Ta^(-1)(x)) = AA^(-1)x = /x = x on two one-to-one linear transformations. I used a:
X 0
0 X
matrix to see if I get the same equation back. Same principle for b.)
November 21st 2009, 06:49 PM #2
Oct 2009
November 21st 2009, 09:48 PM #3
Sep 2009 | {"url":"http://mathhelpforum.com/advanced-algebra/115994-orthogonal-projections-linear-transformations.html","timestamp":"2014-04-16T15:11:37Z","content_type":null,"content_length":"39445","record_id":"<urn:uuid:1b553723-afeb-438f-ae0d-7632d66aec49>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
TI-84 Plus Graphing Calculator For Dummies
ISBN: 978-0-7645-7140-4
288 pages
June 2004
Read an Excerpt
If you have a TI-84 Plus Graphing Calculator, you have a powerful, sophisticated tool for advanced math. In fact, it’s so sophisticated that you may not know how to take advantage of many of its
features and functions. That’s a good problem to have, and
TI-84 Plus Graphing Calculator For Dummies
is the right solution! It takes the TI-84 Plus to the next power, showing you how to:
● Display numbers in normal, scientific, or engineering notations
● Perform basic calculations, deal with angles, and solve equations
● Create and investigate geometric figures
● Graph functions, inequalities, or transformations of functions
● Create stat plots and analyze statistical data
● Create probability experiments like tossing coins, rolling dice, and so on
● Save calculator files on your computer
● Add applications to your calculator so that it can do even more
TI-84 Plus Graphing Calculator For Dummies was written by C.C. Edwards, author of TI-83 Plus Graphing Calculator For Dummies, who has a Ph.D. in mathematics and teaches on the undergraduate and
graduate levels. The book doesn’t delve into high math, but it does use appropriate math examples to help you delve into:
● Using the Equation Solver
● Using GeoMaster and its menu bar to construct lines, segments, rays, vectors, circles, polygons, perpendicular and parallel lines, and more
● Creating a slide show of transformations of a graph
● Using the Inequality Graphing application to enter and graph inequalities and solve linear programming problems
There’s even a handy tear-out cheat sheet to remind you of important keystrokes and special menus, And since you’ll quickly get comfortable with the built-in applications, there’s a list of ten more
you can download and install on your calculator so it can do even more! TI-84 Plus Graphing Calculator For Dummies is full of ways to increase the value of your TI–84 Plus exponentially.
See More
Part I: Making Friends with the Calculator.
Chapter 1: Coping with the Basics.
Chapter 2: Doing Basic Arithmetic.
Chapter 3: The Math and Angle Menus.
Chapter 4: Solving Equations.
Part II: Doing Geometry.
Chapter 5: Using GeoMaster.
Chapter 6: Constructing Geometric Figures.
Chapter 7: Finding Measurements.
Chapter 8: Performing Transformations.
Part III: Graphing and Analyzing Functions.
Chapter 9: Graphing Functions.
Chapter 10: Exploring Functions.
Chapter 11: Evaluating Functions.
Chapter 12: Graphing Transformations.
Chapter 13: Graphing Inequalities.
Part IV: Probability and Statistics.
Chapter 14: Probability.
Chapter 15: Simulating Probabilities.
Chapter 16: Dealing with Statistical Data.
Chapter 17: Analyzing Statistical Data.
Part V: Communicating with PCs and Other Calculators.
Chapter 18: Communicating with a PC Using TI Connect™.
Chapter 19: Communicating Between Calculators.
Part VI: The Part of Tens.
Chapter 20: Eight Topics That Didn’t Make the Book.
Chapter 21: Ten Great Applications.
Chapter 22: Eight Common Errors.
Chapter 23: Eleven Common Error Messages.
Appendix A: Creating and Editing Matrices.
Appendix B: Using Matrices.
See More
C. C. Edwards has a Ph.D. in mathematics from the University of Wisconsin, Milwaukee, and is currently teaching mathematics on the undergraduate and graduate levels. She has been using technology in
the classroom since before Texas Instruments came out with their first graphing calculator, and she frequently gives workshops at national and international conferences on using technology in the
classroom. She has written forty activities for Texas Instrument’s Explorations Web site, and she was an editor of Eightysomething, a newsletter formerly published by Texas Instruments. She is also
the author of TI-83 Plus Graphing Calculator For Dummies.
Just barely five feet tall, CC, as her friends call her, has three goals in life: to be six inches taller, to have naturally curly hair, and to be independently wealthy. As yet, she is nowhere close
to meeting any of these goals. When she retires, she plans to become an old lady carpenter.
See More
Download Title Size Download
Bonus Chapter 1 - Using Capri Jr. 251.62 KB Click to Download
Bonus Chapter 2 - Constructing Geometric Figures 323.75 KB Click to Download
Bonus Chapter 3 - Finding Measurements 274.40 KB Click to Download
Bonus Chapter 4 - Performing Transformations 241.44 KB Click to Download
Bonus Chapter 5 - Creating Calculator Programs 282.41 KB Click to Download
Bonus Chapter 6 - Controlling Program Flow 344.33 KB Click to Download
Bonus Chapter 7- Controlling Program Input and Output 275.88 KB Click to Download
See More
Buy Both and Save 25%!
TI-84 Plus Graphing Calculator For Dummies (US $16.99)
-and- Inside Your Calculator: From Simple Programs to Significant Insights (US $72.95)
Total List Price: US $89.94
Discounted Price: US $67.45 (Save: US $22.49)
Cannot be combined with any other offers. Learn more. | {"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0764571400.html","timestamp":"2014-04-19T11:56:53Z","content_type":null,"content_length":"51050","record_id":"<urn:uuid:5b92ee08-f0d9-43c8-bd91-363ce6c4a840>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
A job for pointers? [Archive] - MacRumors Forums
Because I think that the data has to go into a structure more complicated that a simple array. It's just a gut feeling. I don't care if you can do it without pointers.
You could also use an on file SQL database like SQLite. Perhaps a little overkill for this data set but its not a bad skill to practice. | {"url":"http://forums.macrumors.com/archive/index.php/t-1323662.html","timestamp":"2014-04-20T06:34:12Z","content_type":null,"content_length":"9356","record_id":"<urn:uuid:1b820ea5-98e0-4319-a7c1-f2b7fa3fe2a5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
ClipArt images of cones, which are three-dimensional geometric solids that are made by taking all straight lines from the apex (top) at an angle to a plane in a 360 degree radius. A cone has a
circular base, and looks like a triangle when viewing them from only the x and y axis. | {"url":"http://etc.usf.edu/clipart/galleries/327-cones/2","timestamp":"2014-04-20T10:59:24Z","content_type":null,"content_length":"10523","record_id":"<urn:uuid:990a2613-da36-4b24-a065-306c00303e28>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abeka, Bob Jones, and Apologia
HOMESCHOOLERS: We have some very cool advertisers on this website, and as VegSource is supported by advertising, we're happy about that. If you can't see any ads, you might have an ad blocker, or a
setting in your browser which blocks ads from showing up. Consider turning on ads while on vegsource so that you can see what they have to offer, and so that this site can remain free.
Reply To This Post Return to Posts Index VegSource Home
From: Kellie (24.155.134.143)
Subject: Abeka, Bob Jones, and Apologia
Date: January 9, 2013 at 1:57 pm PST
I have the following materials to sell. They are all in great condition unless noted. I accept paypal or money orders. If interested, E-mail: pop57@grandecom.net
7th grade
Of People-3rd edition, Student Text,(pencil writing on end of book) 62189003
Of People, Teacher Key, 62219003 $12.00/set
Basic Mathematics, solution key, 6230001
Basic Mathematics, Test and quiz key, 61972001
Basic Mathematics, Teacher’s Edition, 61956002 $18.00/set
8th grade
Pre-Algebra Basic Math II, Student Work Text, 25763011 (pgs 2-9 most problems worked)
Pre-Algebra Basic Math II, Teacher Key, 25771008
Pre-Algebra Basic Math II, Solution Key, 38725007
Pre-Algebra Basic Math II, Test/Quiz Key, 25798008 (2 of these)
Pre-Algebra Basic Math II, Tests and Quizzes, 2578X012 $30.00/set
Let’s Be Healthy-student text, #1561015(writing and high-lighting in book)
Let’s Be Healthy-answer key, #53449010
Let’s Be Healthy-student quizzes teacher key, #33391013
Let’s Be Healthy-student review and tests teacher key,#15644013 $15.00/set
Exploring Creation With General Science, student text-1st edition $10.00
Writing and Grammar 7, Teacher’s Edition
Tests Packet and Tests Answer Key $50.00/set
Reply To This Post Return to Posts Index VegSource Home
Follow Ups: | {"url":"http://www.vegsource.com/homeschool/fs712/messages/83496.html","timestamp":"2014-04-16T16:16:32Z","content_type":null,"content_length":"39165","record_id":"<urn:uuid:2cfe0b9d-4264-49e3-a5d6-59616d5a88dc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normalizers of maximal compact groups?
up vote 1 down vote favorite
Consider a reductive group over a local field. What is the normalizer of a maximal compact subgroup?
If this is to general, what is the normalizer of $GL(n, \mathbb{Z}_p)$ in $GL(n, \mathbb{Q}_p)$, $U(n)$ in $GL(n, \mathbb{C})$, and $O(n)$ in $GL(n, \mathbb{R})$?
reductive-groups lie-groups
2 This is related to previous questions: mathoverflow.net/questions/60315/… mathoverflow.net/questions/65360/… – Alain Valette Dec 17 '11 at 21:16
add comment
2 Answers
active oldest votes
Hints :
-- For $K= {\rm GL}(n,{\mathbb Z}_p )$. Make $G={\rm GL}(n,{\mathbb Q}_p )$ acts on ${\mathbb Z}_p$-lattices of ${\mathbb Q}_p^n$. Prove that the lattices stabilized by $K$ are the
$p^k {\mathbb Z}_p^n$, $k\in {\mathbb Z}$. Observe that the normalizer $\tilde K$ of $K$ permutes these lattices and conclude that ${\tilde K}={\mathbb Q}_p^\times K$.
up vote 4 down -- For $K=O(n)$ (or $U(n)$). Do someting similar by making $G$ act on the set of positive definite symmetric (hermitian) matrices via $(X,A)\mapsto XA{\bar X}^{t}$.
vote accepted
In general, for a non-archimedean base field, you can make $G$ act on the extended Bruhat-Tits building, but the answer is going to be technical according to whether $G$ has a center
or not, is simply connected or not. Over archimedean fields, I guess you have to use symmetric spaces.
Seems like you know the answer to every question, I can come up with;) Thx. – plusepsilon.de Dec 18 '11 at 10:13
add comment
$SO(n)$ is not only maximal compact in $SL_n(\mathbb{R})$, it is even a maximal subgroup! Same for $SU(n)$ in $SL_n(\mathbb{C})$. This follows e.g. from Exercise A.3 in Chapter VI in S.
Helgason, ``Differential geometry, Lie groups and symmetric spaces'', Academic Press, 1978. From this the normalizers of $O(n)$ in $GL_n(\mathbb{R})$, and of $U(n)$ in $GL_n(\mathbb{C})$, are
easily determined.
To establish maximality of $K=SL_n(\mathbb{Z}_p)$ as a subgroup of $G=SL_n(\mathbb{Q}_p)$, there is a cute argument using the Howe-Moore theorem, telling you that, if $\pi$ is a unitary
representation of $G$ without non-zero fixed vectors, then all coefficients of $\pi$ go to $0$ at infinity on $G$. So assume that $U$ is a proper subgroup of $G$, containing $K$. Since $K$ is
open, $U$ is open too. By simplicity of $G$, the subgroup $U$ cannot have finite index. Let $\pi$ be the quasi-regular representation of $G$ on $\ell^2(G/U)$, it has no non-zero fixed vector;
up vote let $\delta_U\in\ell^2(G/U)$ be the characteristic function of the base-point of $G/U$, corresponding to the trivial coset of $U$. Then the coefficient $g\mapsto<\pi(g)\delta_U|\delta_U>$ is
5 down constant and equal to 1 on $U$, so by Howe-Moore $U$ must be compact. Since $K$ is maximal compact, we have $U=K$. From this you also deduce the normalizer of $GL_n(\mathbb{Z}_p)$ in $GL_n(\
vote mathbb{Q}_p)$.
Ref: R.E. Howe and C.C. Moore. Asymptotic properties of unitary representations. J. Funct. Anal. 32, 72-96, 1979.
EDIT: For the maximality of $SO(n)$ in $SL_n(\mathbb{R})$, the strategy of the proof is nicely explained in http://www.math.jussieu.fr/%7Eginot/GL/TD3-GL2009.pdf
Both answers are very useful. I checked the other simply because it was first. Thanks. – plusepsilon.de Dec 18 '11 at 10:12
add comment
Not the answer you're looking for? Browse other questions tagged reductive-groups lie-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/83694/normalizers-of-maximal-compact-groups","timestamp":"2014-04-18T03:39:58Z","content_type":null,"content_length":"58744","record_id":"<urn:uuid:e2f9a815-3f61-4a6c-9984-e9ecff3e0ea8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
LKML: Hubertus Franke: Re: Analysis for Linux-2.5 fix/improve get_pid(), comparing various approaches
Messages in this thread
From Hubertus Franke <>
Subject Re: Analysis for Linux-2.5 fix/improve get_pid(), comparing various approaches
Date Fri, 9 Aug 2002 14:14:58 -0400
On Friday 09 August 2002 11:36 am, Andries Brouwer wrote:
!!!!!!!!!!! You are in a different space !!!!!!!!
All work was done under the assumption of 16-bit pid_t.
I stated yesterday already that for NumTasks substantially smaller
than the pid_t supported size, this won't be a problem
as your analysis states and my data also states.
You have two choices
(a) move Linux up to 32-bit pid_t
(b) stick within the current 16-bit discussion.
> On Fri, Aug 09, 2002 at 07:22:08AM -0400, Hubertus Franke wrote:
> > Particulary for large number of tasks, this can lead to frequent exercise
> > of the repeat resulting in a O(N^2) algorithm. We call this : <algo-0>.
> Your math is flawed. The O(N^2) happens only when the name space for pid's
> has the same order of magnitude as the number N of processes.
> Now consider N=100000 with 31-bit name space. In a series of
> 2.10^9 forks you have to do the loop fewer than N times and
> N^2 / 2.10^9 = 5. You see that on average for each fork there
> are 5 comparisons.
> For N=1000000 you rearrange the task list as I described yesterday
> so that each loop takes time sqrt(N), and altogether N.sqrt(N)
> comparisons are needed in a series of 2.10^9 forks.
> That is 0.5 comparisons per fork.
> You see that thanks to the large pid space things get really
> efficient. Ugly constructions are only needed when a large fraction
> of all possible pids is actually in use, or when you need hard
> real time guarantees.
> Andries
-- Hubertus Franke (frankeh@watson.ibm.com)
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/ | {"url":"http://lkml.org/lkml/2002/8/9/121","timestamp":"2014-04-18T18:31:07Z","content_type":null,"content_length":"9983","record_id":"<urn:uuid:a284c2fe-3814-4717-9f00-282b73e8dd27>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Re: alpha in Grayscale or Palette", by Graeme Gill
LibTiff Mailing List
TIFF and LibTiff Mailing List Archive
September 2006
Previous Thread
Next Thread
Previous by Thread
Next by Thread
Previous by Date
Next by Date
The TIFF Mailing List Homepage
This list is run by Frank Warmerdam
Archive maintained by AWare Systems
2006.09.17 10:39 "Re: alpha in Grayscale or Palette", by Graeme Gill
Joris wrote:
> By applying the arbitrary weight 1, you can effectively hide the fact
> that you're applying arbitrary weights, and that's what you seem to be
> doing.
Why do you think that's an arbitrary weight ? Even the Alpha
component can be related back to a visible L*a*b* distance,
by the effect it has on the color, giving us the 4th dimension
co-ordinate. If you wanted to do lots of work, you could
even compute this by taking the color against a range of
opaque backgrounds, and using some statistical extraction
of the delta E (ie. median, largest etc.), that represents the
visible result of the Alpha value.
In any case, there's nothing inherently wrong with arbitrary weights
:- they're what you adjust to get an algorithm working in real
world situations.
> As to convertion to Lab, OK, I'm all for that. So now we do indeed have
> a 4D space. To make my point, let's encode L, a and b as their natural
> float value, and alpha as a float value ranging from 0 to 1. So, you're
> saying the difference between L*a*b*alpha (0,0,0,1) (which is
> non-transparent black) and (1,0,0,1) (which is non-transparent
> almost-black) is equal to the difference between (0,0,0,1) (again
> non-transparent black) and (0,0,0,0) (which is total transparency).
Hmm. Why would you do that ? The notional range of L* is 0 to 100,
and a* and b* are often taken to range from -128 to +128 (which
is somewhat arbitrary, but that's the nature of the colorspace
and the real world), so logically the Alpha should be scaled to
be 0 to 100 (at least as a starting point). If, in the light of
the result, fewer table values were to be given to variations
in Alpha, then the scale could be lowered somewhat.
> See, we apply weights (full transparent range is equally important as a
> difference of 1 in the L range). This is obvious if we pick very bad
> weights as we did here. So what are good weights? Full transparent range
> is equally important as a difference of 100 in the L range? Or is it
> twice as important as a difference of 100 in the L range? Why?
Do you expect everything to be handed to you on a plate ? :-)
Experiment and find out of course ! - then write it up, and publish
it in a suitable journal. That's what research is all about.
Graeme Gill. | {"url":"http://www.asmail.be/msg0055551401.html","timestamp":"2014-04-18T15:42:57Z","content_type":null,"content_length":"14630","record_id":"<urn:uuid:1526ae01-b966-4251-89ab-789cfa486ef5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hypocycloid Problem
April 14th 2009, 02:29 PM
Hypocycloid Problem
"A hypocycloid is a curve traced by a fixed point P on a circle C with radius b as C rolls on the inside of a circle with the center on the origin and radius a. If the initial point P is (a,0)
and the parameter is theta, then the parametric equation is
x = (a-b)cos(theta) + bcos((theta)(a-b)/b))
y = (a-b)sin(theta) + bsin((theta)(a-b)/b))
Question 1: Show that the parametric equation is correct"
This is the picture I drew out to clarify..
But I don't even know how to start explaining why it is correct... any suggestions?
April 14th 2009, 10:38 PM
"A hypocycloid is a curve traced by a fixed point P on a circle C with radius b as C rolls on the inside of a circle with the center on the origin and radius a. If the initial point P is (a,0)
and the parameter is theta, then the parametric equation is
x = (a-b)cos(theta) + bcos((theta)(a-b)/b))
y = (a-b)sin(theta) + bsin((theta)(a-b)/b))
Question 1: Show that the parametric equation is correct"
1. In my opinion the given equation describes an epicycloid. If you want to get a hypocycloid the equation should read:
$<br /> \left|\begin{array}{l}x = (a-b) \cos(\theta) + b \cos\left(\dfrac{\theta(a-b)}{b} \right) \\ y = (a-b) \sin(\theta) {\color{red}\bold{-}} b \sin\left(\dfrac{\theta(a-b)}{b} \right) \end
{array} \right.$
2. The distance in your sketch which is labeled a - b is actually $a - 2b$
3. To start:
- Determine where you can find $\theta$ in your sketch.
- Determine where you can find $\left(\dfrac{\theta(a-b)}{b} \right)$ in your sketch.
- Calculate the corresponding distances which form the coordinates of the points placed on the cycloid. | {"url":"http://mathhelpforum.com/advanced-math-topics/83733-hypocycloid-problem-print.html","timestamp":"2014-04-20T18:58:24Z","content_type":null,"content_length":"6528","record_id":"<urn:uuid:6a581683-5a6b-4d0b-b4d9-5b6f260cf5ad>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphical Models, Causality, And Intervention
Results 1 - 10 of 79
"... The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops
a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if ..."
Cited by 180 (35 self)
Add to MetaCart
The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a
principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental
data. If so the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional
observations or auxiliary experiments from which the desired inferences can be obtained. Key words: Causal inference, graph models, interventions treatment effect 1 Introduction The tools introduced
in this paper are aimed at helping researchers communicate qualitative assumptions about cause-effect relationships, elucidate the ramifications of such assumptions, and derive causal inferences from
a combination...
, 1996
"... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks.
Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..."
Cited by 172 (0 self)
Add to MetaCart
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks.
Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical
statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the
structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified
examples. Keywords--- Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery. I. Introduction Probabilistic networks or
probabilistic gra...
- Artificial Intelligence , 1997
"... The ramification problem in the context of commonsense reasoning about actions and change names the challenge to accommodate actions whose execution causes indirect effects. Not being part of
the respective action specification, such effects are consequences of general laws describing dependencies b ..."
Cited by 149 (20 self)
Add to MetaCart
The ramification problem in the context of commonsense reasoning about actions and change names the challenge to accommodate actions whose execution causes indirect effects. Not being part of the
respective action specification, such effects are consequences of general laws describing dependencies between components of the world description. We present a general approach to this problem which
incorporates causality, formalized by directed relations between two single effects stating that, under specific circumstances, the occurrence of the first causes the second. Moreover, necessity of
exploiting causal information in this way or a similar is argued by elaborating the limitations of common paradigms employed to handle ramifications, namely, the principle of categorization and the
policy of minimal change. Our abstract solution is exemplarily integrated into a specific calculus based on the logic programming paradigm. To apper in: Artificial Intelligence Journal On leave from
FG Inte...
, 2001
"... Introduction The introduction of Bayesian networks (Pearl 1986b) and associated local computation algorithms (Lauritzen and Spiegelhalter 1988, Shenoy and Shafer 1990, Jensen, Lauritzen and
Olesen 1990) has initiated a renewed interest for understanding causal concepts in connection with modelling ..."
Cited by 59 (4 self)
Add to MetaCart
Introduction The introduction of Bayesian networks (Pearl 1986b) and associated local computation algorithms (Lauritzen and Spiegelhalter 1988, Shenoy and Shafer 1990, Jensen, Lauritzen and Olesen
1990) has initiated a renewed interest for understanding causal concepts in connection with modelling complex stochastic systems. It has become clear that graphical models, in particular those based
upon directed acyclic graphs, have natural causal interpretations and thus form a base for a language in which causal concepts can be discussed and analysed in precise terms. As a consequence there
has been an explosion of writings, not primarily within mainstream statistical literature, concerned with the exploitation of this language to clarify and extend causal concepts. Among these we
mention in particular books by Spirtes, Glymour and Scheines (1993), Shafer (1996), and Pearl (2000) as well as the collection of papers in Glymour and Cooper (1999). Very briefly, but fundamentally,
- In Eighteenth National Conference on Artificial Intelligence
"... This paper concerns the assessment of the effects of actions or policy interventions from a combination of: (i) nonexperimental data, and (ii) substantive assumptions. The assumptions are
encoded in the form of a directed acyclic graph, also called “causal graph”, in which some variables are presume ..."
Cited by 57 (18 self)
Add to MetaCart
This paper concerns the assessment of the effects of actions or policy interventions from a combination of: (i) nonexperimental data, and (ii) substantive assumptions. The assumptions are encoded in
the form of a directed acyclic graph, also called “causal graph”, in which some variables are presumed to be unobserved. The paper establishes a necessary and sufficient criterion for the
identifiability of the causal effects of a singleton variable on all other variables in the model, and apowerful sufficient criterion for the effects of a singleton variable on any set of variables.
- Journal of Artificial Intelligence Research , 1995
"... We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the
traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that ..."
Cited by 54 (8 self)
Add to MetaCart
We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional
view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the
encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause
and effect. Finally, we show how canonical form facilitates counterfactual reasoning. 1. Introduction Knowledge of cause and effect is crucial for modeling the affects of actions. For example, if we
observe a statistical correlation between smoking and lung cancer, we can not conclude from this observation alone that our chances of getting lung cancer will change if we stop smoking. If, however,
we als...
- Artificial Intelligence , 1996
"... This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z
constant." The axiomization of causal irrelevance is contrasted with the axiomization of informational irr ..."
Cited by 54 (15 self)
Add to MetaCart
This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z
constant." The axiomization of causal irrelevance is contrasted with the axiomization of informational irrelevance, as in "Learning X will not alter our belief in Y , once we know Z." Two versions of
causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure, that is governed by just two
trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies
with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems
- B , 2001
"... Chain graphs are a natural generalization of directed acyclic graphs (DAGs) and undirected graphs. However, the apparent simplicity of chain graphs belies the subtlety of the conditional
independence hypotheses that they represent. There are a number of simple and apparently plausible, but ultim ..."
Cited by 48 (4 self)
Add to MetaCart
Chain graphs are a natural generalization of directed acyclic graphs (DAGs) and undirected graphs. However, the apparent simplicity of chain graphs belies the subtlety of the conditional independence
hypotheses that they represent. There are a number of simple and apparently plausible, but ultimately fallacious interpretations of chain graphs that are often invoked, implicitly or explicitly.
These interpretations also lead to awed methods for applying background knowledge to model selection. We present a valid interpretation by showing how the distribution corresponding to a chain graph
may be generated as the equilibrium distribution of dynamic models with feedback. These dynamic interpretations lead to a simple theory of intervention, extending the theory developed for DAGs.
Finally, we contrast chain graph models under this interpretation with simultaneous equation models which have traditionally been used to model feedback in econometrics. Keywords: Causal model;
, 1998
"... Structural equation modeling (SEM) has dominated causal analysis in the social and behavioral sciences since the 1960s. Currently, many SEM practitioners are having difficulty articulating the
causal content of SEM and are seeking foundational answers. ..."
Cited by 44 (14 self)
Add to MetaCart
Structural equation modeling (SEM) has dominated causal analysis in the social and behavioral sciences since the 1960s. Currently, many SEM practitioners are having difficulty articulating the causal
content of SEM and are seeking foundational answers. | {"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.55.6298","timestamp":"2014-04-18T12:31:21Z","content_type":null,"content_length":"36988","record_id":"<urn:uuid:a9e4a0dc-9daa-4dce-b01f-afef0f2a2d5c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Not Quite Random Number Generator (NQRNG)
September 13, 2010
By Matt Shotwell
I connected the instrumentation amplifier described in an earlier post to a piezoelectric transducer (buzzer) and made recordings at 5000 gain. The plot below shows 1000 such measurements over 1.0
seconds. There is a 4.0 second (at 1000Hz) sample of the data here piezo.csv. There is a clear sinusoidal signal in these data of about 60Hz. These findings were baffling at first, but apparently
result from capacitative coupling between the measurement device and 60Hz household AC power lines.
I fit simple periodic mean model to these data: $y = a + b sin( lambda ( 2pi t + phi ) ) + e$, where $y$ is the measured ADC (Analog to Digital Converter) value, $t$ is time in seconds, $a$ is the
mean ADC value, $b$ is the peak amplitude, $lambda$ is the frequency in Hertz, $phi$ is the phase in radians, and $e$ is a zero-mean error term. The blue curve in the plot represents the
least-squares fit to these data.
The least-squares estimates are as follows:
• $a$ – 577 ADC units
• $b$ – 7.75 ADC units
• $lambda$ – 59.9 Hz
• $phi$ – 4.80 rad
The estimated frequency ($lambda$) is very close to the theoretical value (60). The ADC of the ATmega168 has 10 bit precision. That is, values can range from $0$ to $2^{10}-1 = 1023$. The Atmel
manual gives the following formula for back-calculating the voltage at the ADC pin: $V = A * V_r / 1024$, where $A$ is the ADC value and $V_r$ is the reference voltage ($5.0V$ in my setup). In
addition, the instrumentation amplifier provides $5000V/V$ gain. Accounting for gain, the voltage measured between the instrumentation electrodes is $V_i = V/5000 = A * 5.0 / 1024 / 5000$. The
estimated peak amplitude in ADC units was $b = 7.75$, so $V_i = 7.75 * 5.0 / 1024 / 5000 = 7.5mu V$. That is, household AC coupling induced a $7.5mu V$ peak potential between the instrumentation
electrodes. In reality, the gain of the amplifier is less than $5000$, and $7.5mu V$ is a lowball estimate.
Assuming there are no other signals in these data (which is likely incorrect), the residuals in this regression ought to resemble independent random variates. The histogram for the residuals looks
good (bell shaped, no noticeable bias or skew), however, there is obvious residual autocorrelation. The figure below is a plot of the autocorrelation within the residuals.
It’s clear that this method isn’t quite ready to replace the pseudo-RNGs. As time permits, I plan to try some signal processing with these data, in hopes to isolate the independent noise. If the only
signal is that induced by capacitative coupling, then a high pass filter (at 60Hz) ought to do the trick.
A recent blog post at Xi’An’s mentioned some tests for randomness, including the Marsaglia Diehard Battery and the NIST Test Suite. Maybe if I get something that isn’t obviously nonrandom, I’ll try
out some of these tests.
for the author, please follow the link and comment on his blog:
BioStatMatt » R
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/a-not-quite-random-number-generator-nqrng/","timestamp":"2014-04-17T13:18:52Z","content_type":null,"content_length":"43226","record_id":"<urn:uuid:fd42dd91-f10e-43c5-8c23-5fbb0aae2d5b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Paintings of Crockett Johnson
Mathematical Paintings of Crockett Johnson
Selected Works of David Crockett Johnson
Barnaby, New York, NY: Henry Holt and Company, 1943.
Barnaby and Mr. O’Malley, New York: Henry Holt and Company, 1944.
Harold and the Purple Crayon, New York: Harper 7 Row, 1955.
“A Geometrical Look at vp,” Mathematical Gazette, 54 (Feb 1970): 59-60.
“On the Mathematics of Geometry in My Abstract Paintings,” Leonardo, 5 (1972): 97-101.
“A construction for a regular heptagon,” Mathematical Gazette, 17 (March 1975): 17-21.
Papers of Crockett Johnson, Mathematics Collections, National Museum of American History, Smithsonian Institution.
Correspondence in the Harley Flanders Papers, Mathematics Collections, National Museum of American History.
Correspondence in the Ad Reinhardt Papers, Archives of American Art, Smithsonian Institution.
Selected Works about Crockett Johnson
Stephanie Cawthorne and Judy Green, “Cubes, Conic Sections, and Crockett Johnson,” Convergence, vol. 11, 2014. http://www.maa.org/publications/periodicals/convergence/
Stephanie Crawthorne and Judy Green, “Harold and the Purple Heptagon,” Math Horizons (September 2009): 5-9.
Philip Nel, “Crockett Johnson and the Purple Crayon: A Life in Art,” Comic Art, 5 (2004): 2-18.
Philip Nel. Crockett Johnson and Ruth Krauss: A Biography, Jackson: University Press of Mississippi, in preparation.
James B. Stroud, “Crockett Johnson's Geometric Paintings,” Journal of Mathematics and the Arts, 2 #2 (June 2008): 77-99.
For a more detailed bibliography and further information, see the Crockett Johnson Web site created and maintained by Philip Nel.
For a description of American mathematics and science education at the time of Crockett Johnson’s paintings, see the Museum's Web site: “Mobilizing Minds: Teaching Math and Science in the Age of
This introduction and the accounts of Crockett Johnson paintings given below have benefited from insights of Uta C. Merzbach, Judy Green, J. B. Stroud, Philip Nel, Mark Kidwell, Emmy Scandling, and
Joan Krammer.
"Mathematical Paintings of Crockett Johnson - Resources" showing 80 items.
This painting is based on a theorem generalized by the French mathematician Blaise Pascal in 1640, when he was sixteen years old. When the opposite sides of a irregular hexagon inscribed in a
circle are extended, they meet in three points. Pappus, writing in the 4th century AD, had shown in his Mathematical Collections that these three points lie on the same line. In the painting,
the circle and cream-colored hexagon are at the center, with the sectors associated with different pairs of lines shown in green, blue and gray. The three points of intersection are along the
top; the line that would join them is not shown. Pascal generalized the theorem to include hexagons inscribed in any conic section, not just a circle. Hence the figure came to be known as
"Pascal’s hexagon" or, to use Pascal’s terminology, the "mystic hexagon." Pascal’s work in this area is known primarily from notes on his manuscripts taken by the German mathematician
Gottfried Leibniz after his death.
There is a discussion of Pascal’s hexagon in an article by Morris Kline on projective geometry published in James R. Newman's World of Mathematics (1956). A figure shown on page 629 of this
work may have been the basis of Crockett Johnson's painting, although it is not annotated in his copy of the book.
The oil or acrylic painting on masonite is signed on the bottom right: CJ65. It is marked on the back: Crockett Johnson (/) "Mystic" Hexagon (/) (Pascal). It is #10 in the series.
References: Carl Boyer and Uta Merzbach, A History of Mathematics (1991), pp. 359–62.
Florian Cajori, A History of Elementary Mathematics (1897), 255–56.
Morris Bishop, Pascal: The Life of a Genius (1964), pp. 11, 81–7.
date made
Pascal, Blaise
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
The Pythagorean theorem states that in any right triangle, the square of the side opposite the right angle (the hypotenuse), is equal to the sum of the squares of the other two sides. This
painting depicts the “windmill” figure found in Proposition 47 of Book I of Euclid’s Elements. Although the method of the proof depicted was written about 300 BC and is credited to Euclid,
the theorem is named for Pythagoras, who lived 250 years earlier. It was known to the Babylonians centuries before then. However, knowing a theorem is different from demonstrating it, and the
first surviving demonstration of this theorem is found in Euclid’s Elements.
Crockett Johnson based his painting on a diagram in Ivor Thomas’s article on Greek mathematics in The World of Mathematics, edited by James R. Newman (1956), p. 191. The proof is based on a
comparison of areas. Euclid constructed a square on the hypotenuse BΓ of the right triangle ABΓ. The altitude of this triangle originating at right angle A is extended across this square.
Euclid also constructed squares on the two shorter sides of the right triangle. He showed that the square on side AB was of equal area to the rectangle of sides BΔ and Δ;Λ. Similarly, the
area of the square on side AΓ was of equal area to the rectangle of sides EΓ and EΛ. But then the square of the hypotenuse of the right triangle equals the sum of the squares of the shorter
sides, as desired.
Crockett Johnson executed the right triangle in the neutral, yet highly contrasting, hues of white and black. Each square area that rests on the sides of the triangle is painted with a
combination of one primary color and black. This draws the viewer’s attention to the areas that complete Euclid’s proof of the Pythagorean theorem.
Proof of the Pythagorean Theorem, painting #2 in the series, is one of Crockett Johnson’s earliest geometric paintings. It was completed in 1965 and is marked: CJ65. It also is signed on the
back: Crockett Johnson 1965 (/) PROOF OF THE PYTHAGOREAN THEOREM (/) (EUCLID).
Currently not on view
date made
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
In this oil or acrylic painting on masonite, Crockett Johnson illustrates a theorem presented by the Greek mathematician Pappus of Alexandria (3rd century AD). Suppose that one chooses three
points on each of two line straight segments that do not intersect. Join each point to the two more distant points on the other lines. These lines meet in three points, which, according to
the theorem, are themselves on a straight line.
The inspiration for this painting probably came from a figure in the article "The Great Mathematicians" by Herbert W. Turnbull found in the artist's copy of James R. Newman's The World of
Mathematics (p. 112). This figure is annotated. It shows points A, B, and C on one line segment and D, E, and F on another line segment. Line segments AE and DB, AF and DC, and BF and EC
intersect at 3 points (X, Y, and Z respectively), which are collinear. Turnbull's figure and Johnson's painting include nine points and nine lines that are arranged such that three of the
points lie on each line and three of the lines lie on each point. If the words "point" and "line" are interchanged in the preceding sentence, its meaning holds true. This is the
"reciprocation," or principle of duality, to which the painting's title refers.
Crockett Johnson chose a brown and green color scheme for this painting. The main figure, which is executed in seven tints and shades of brown, contains twelve triangles and two
quadrilaterals. The background, which is divided by the line that contains the points X, Y, and Z, is executed in two shades of green. This color choice highlights Pappus' s theorem by
dramatizing the line created by the points of intersection of AE and DB, AF and DC, and BC and EC. There wooden frame painted black.
Reciprocation is painting #6 in this series of mathematical paintings. It was completed in 1965 and is signed: CJ65.
Currently not on view
date made
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
Artists used methods of projecting lines developed by the Italian humanist Leon Battista Alberti and his successors to create a sense of perspective in their paintings. In contrast, Crockett
Johnson made these methods the subject of his painting. He followed a diagram in William M. Ivins Jr., Art & Geometry: A Study in Space Intuitions (1964 edition), p. 76. The figure in
Crockett Johnson’s copy of the book is annotated. This painting has a triangle in the center that is divided by a diagonal line, with the left half painted a darker shade than the right.
Inside the triangle is one large quadrilateral that is divided into four rows of quadrilaterals that are painted various shades of red, purple, blue, and white.
To represent three-dimensional objects on a two-dimensional canvas, an artist must render forms and figures in proper linear perspective. In 1435 Alberti wrote a treatise entitled De Pictura
(On Painting) in which he outlined a process for creating an effective painting through the use of one-point perspective. Investigation of the mathematical concepts underlying the rules of
perspective led to the development of a branch of mathematics called projective geometry.
Alberti’s method (as modified by Pelerin in the early 17th century) and Crockett Johnson’s painting begin with the selection of a vanishing point (point C in the figure from Ivins). The eye
of the viewer is assumed to be across from and on the same level as C. The eye looks through the vertical painting at a picture that appears to continue behind the canvas. To portray on the
canvas what the eye sees, the artist locates point A on the horizon (the horizontal through C). The artist then draws the diagonal from A to the lower right-hand corner of the painting (point
I). The separation of the angle ICH into smaller, equal angles creates lines that delineate parallel lines in the picture plane. The horizontal lines that create small quadrilaterals, and
thus the checkerboard effect, are determined by the intersections of the lines from C with the diagonals FH and EI.
This painting, #7 in the series, dates from 1966. It is signed: CJ66. It is marked on the back: Crockett Johnson 1966 (/) PERSPECTIVE (ALBERTI). It is of acrylic or oil paint on masonite, and
has a wooden frame.
Currently not on view
date made
Alberti, Leon Battista
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
This painting, while similar in subject to the painting entitled Perspective (Alberti), depicts three planes perpendicular to the canvas. These three planes provide a detailed,
three-dimensional view of space through the use of perspective. Three vanishing points are implied (though not shown) in the painting, one in each of the three planes.
The painting shows a 3-4-5 triangle surrounded by squares proportional in number to the square of the side. That is, the horizontal plane contains nine squares, the vertical plane contains
sixteen squares, and the oblique plane, which represents the hypotenuse of the 3-4-5 triangle, contains twenty-five squares. This explains the extension of the vertical and oblique planes and
reminds the viewer of the Pythagorean theorem. Thus, Crockett Johnson has cleverly shown the illustration of two of his other paintings; Squares of a 3-4-5 Triangle (Pythagoras) and Proof of
the Pythagorean Theorem (Euclid), in perspective; hence the title of the painting.
The title of this painting points to the role of the German artist Albrecht Dürer (1471–1528) in creating ways of representing three-dimensional figures in a plane. Dürer is particularly
remembered for a posthumously published treatise on human proportion. In his book entitled The Life and Art of Albrecht Dürer, art historian Erwin Panofsky explains that the work of Dürer
with perspective demonstrated that the field was not just an element of painting and architecture, but an important branch of mathematics.
This construction may well have originated with Crockett Johnson. The oil painting was completed in 1965 and is signed: CJ65. It is #8 in his series of mathematical paintings.
Currently not on view
date made
Duerer, Albrecht
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
In ancient times, the Greek mathematician Apollonius of Perga (about 240–190 BC) made extensive studies of conic sections, the curves formed when a plane slices a cone. Many centuries later,
the French mathematician and philosopher René Descartes (1596–1650) showed how the curves studied by Apollonius might be related to points on a straight line. In particular, he introduced an
equation in two variables expressing points on the curve in terms of points on the line. An article by H. W. Turnbull entitled "The Great Mathematicians" found in The World of Mathematics by
James R. Newman discussed the interconnections between Apollonius and Descartes, and apparently was the basis of this painting. The copy of this book in Crockett Johnson's library is very
faintly annotated on this page. Turnbull shows variable length ON, with corresponding points P on the curve.
The analytic approach to geometry taken by Descartes would be greatly refined and extended in the course of the seventeenth century.
Johnson executed his painting in white, purple, and gray. Each section is painted its own shade. This not only dramatizes the coordinate plane but highlights the curve that extends from the
middle of the left edge to the top right corner of the painting.
Conic Curve, an oil or acrylic painting on masonite, is #11 in the series. It was completed in 1966 and is signed: CJ66. It is marked on the back: Crockett Johnson 1966 (/) CONIC CURVE
(APOLLONIUS). It has a wooden frame.
Currently not on view
date made
Apollonius of Perga
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
The French lawyer and mathematician Pierre de Fermat (1601–1665) was one of the first to develop a systematic way to find the straight line which best approximates a curve at any point. This
line is called the tangent line. This painting shows a curve with two horizontal tangent lines. Assuming that the curve is plotted against a horizontal axis, one line passes through a maximum
of a curve, the other through a minimum. An article by H. W. Turnbull, "The Great Mathematicians," published in The World of Mathematics by James R. Newman, emphasized how Fermat's method
might be applied to find maximum and minimum values of a curve plotted above a horizontal line (see his figures 14 and 16). Crockett Johnson owned and read the book, and annotated the first
figure. The second figure more closely resembles the painting.
Computing the maximum and minimum value of functions by finding tangents became a standard technique of the differential calculus developed by Isaac Newton and Gottfried Leibniz later in the
17th century.
Curve Tangents is painting #12 in the Crockett Johnson series. It was executed in oil on masonite, completed in 1966, and is signed: CJ66. The painting has a wood and metal frame.
Currently not on view
date made
Fermat, Pierre de
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
The Greek mathematician Aristotle, who lived from about 384 BC through 322 BC, believed that heavy bodies moved naturally downward, while lighter substances such as air naturally ascended.
Other forms of terrestrial motion required a sustaining force, which was not expressed mathematically. The Italian Galileo Galilei (1564–1642) challenged Aristotle. He held that motion was
persistent and would continue until acted upon by an opposing, outside force.
In a book entitled Dialogues Concerning the Two Chief World Systems, Galileo presented his ideas in a dispute between three men: Salviati, Sagredo, and Simplicio. Salviati, a spokesman for
Galileo, explained his revolutionary ideas, one of which is illustrated by a diagram that was the basis for this painting. This image can be found in Crockett Johnson's copy of The World of
Mathematics, a book by James R. Newman. It is probable that this image served as inspiration for this painting, although Johnson did not annotate this diagram.
In Galileo's Dialogues, Salviati argued that if a lead weight is suspended by a thread from point A (see figure) and is released from point C, it will swing to point D, which is located at
the same height as the initial point C. Furthermore, Salviati stated that if a nail is placed at point E so that the thread will snag on it, then the weight will swing from point C to point B
and then up to point G, which is also located at the same height as the initial point C. The same occurs if a nail is placed at point F below the line segment CD.
The painting is executed in purple that progresses from light tints to darker shades right to left. This gives the figure a sense of motion akin to that of a pendulum. The background is
washed in gray and black. The line created by the initial and final height of the weight divides the background.
Pendulum Momentum, a work in oil on masonite, is painting #13 in the Crockett Johnson series. It was executed in 1966 and is signed: CJ66. There is a wooden frame painted black.
Currently not on view
date made
Galilei, Galileo
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
Two circles or other similar figures can be placed such that a line drawn from some fixed point to a point of one of them passes through a point on the other, such that the ratio of the
distances from the fixed point to the two points is always the same. The fixed point is called the center of similitude. The circles shown in this painting have two centers of similitude, one
between the circles and one to the right (the center of similitude between the circles is shown). Crockett Johnson apparently based his painting on a diagram from the book College Geometry by
Nathan Altshiller Court (1964 printing). This diagram is annotated in his copy of the book. In the figure, the larger circle has center A, the smaller circle has center B, and the centers of
similitude are the points S and S'. S is called the external center of similitude and S' is the internal center of similitude. The painting suggests several properties of centers of
similitude. For example, lines joining corresponding endpoints of parallel diameters of the two circles, such as TT' in the figure, would meet at the external center of similitude. Lines
joining opposite endpoints meet at the internal center of similitude.
This painting emphasizes the presence of the two circles and line segments relating to centers of similitude, but not the centers themselves. Indeed, the painting is too narrow to include the
external center of similitude.
Some properties of centers of similitude were known to the Greeks. More extensive theorems were developed by the mathematician Gaspard Monge (1746–1818). It is not entirely clear why Crockett
Johnson associated the painting with the artist and mathematician Phillipe de la Hire (1640–1718). A bibliographic note in the relevant section of Court reads: LHr., p. 42, rem. 8. However,
Court was referring to an 1809 book by Simon A. J. LHuiler on the elements of analytic geometry.
This oil painting on masonite is #14 in Crockett Johnson's series. It was completed in 1966 and is signed: CJ66.
References: R. J. Archibald, "Centers of Similitude of Circles," American Mathematical Monthly, 22, #1 (1915), pp. 6–12; unpublished notes of J. B. Stroud.
Currently not on view
date made
de la Hire, Phillipe
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center
The determination of the size and shape of the Earth has occupied philosophers from antiquity. Eratosthenes, a mathematician in the city of Alexandria in Egypt who lived from about 275
through 194 BC, proposed an ingenious way to measure the circumference of the Earth. It is illustrated by this painting. Eratosthenes claimed that the town of Syene (now Aswan) was directly
south of Alexandria, and that the distance between the cities was known. Moreover, he reported that on a day when the vertical rod of a sundial cast no shadow at noon in Syene, the shadow
cast by a similar rod at Alexandria formed an angle of 1/50 of a complete circle.
In the Crockett Johnson painting, the circle represents the Earth and the two line segments drawn from the center display the direction of the two rods. The two parallel lines represent rays
of sunlight striking the Earth, the dark-purple region the shadowed area. The angle of the shadow equals the angle subtended at the center of the Earth, hence the circumference of the entire
Earth can be computed when the angle and the distance of the cities is known.
Crockett Johnson's painting may be after a diagram from the book by James R. Newman entitled The World of Mathematics (p. 206), although the figure is not annotated. Newman published a brief
extract describing ideas of Eratosthenes, based on a first century BC account by Cleomedes.
The Crockett Johnson painting is #15 in the series. It is marked on the back : Crockett Johnson 1966 (/) MEASUREMENT OF THE EARTH (/) (ERATOSTHENES).
Reference: O. Pederson and M. Phil, Early Physics and Astronomy (1974), p. 53.
Currently not on view
date made
Johnson, Crockett
ID Number
catalog number
accession number
Data Source
National Museum of American History, Kenneth E. Behring Center | {"url":"http://americanhistory.si.edu/collections/object-groups/mathematical-paintings-of-crockett-johnson?ogmt_page=mathpaintings-resources&edan_start=0&ogmt_view=grid&edan_fq=name%3A%22Johnson%2C+Crockett%22","timestamp":"2014-04-16T22:31:10Z","content_type":null,"content_length":"93468","record_id":"<urn:uuid:f4f8e47d-326b-42cb-99d6-79dba00284e8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Recent Homework Questions About Outer Space
Post a New Question | Current Questions
A is correct. I assume you know that rock wins over scissors, paper wins over rock, and scissors wins over paper. For the other parts, make a tree diagram complete with corresponding probabilities
and sum all the appropriate outcomes to get the probability. Example: R=rock, P=...
Sunday, July 7, 2013 at 2:43pm
1. Two roommates, roommate A and roommate B, are about to go cruising with a mutual friend and are arguing over who gets to sit in the front seat. Roommate A suggests a game of rock-paper-scissors to
settle the dispute. Consider the game of rock-paper-scissor to be a ...
Sunday, July 7, 2013 at 11:34am
For a family with 2 children, the sample space indicating boy (B) or girl (G) is BB, BG, GB, and GG. If each of the outcomes is equally likely, find the probability that the family has 2 girls, given
that the first child is a girl.
Saturday, July 6, 2013 at 1:25am
An even number is any of 2,4,6,8,10,and 12 (within the sample space above). So the probability of throwing a sum equal to an even number is the sum of the corresponding probabilities, namely: 1/36+3/
Tuesday, July 2, 2013 at 6:48pm
The Space Shuttle travels at a speed of about 5.45 x 103 m/s. The blink of an astronaut's eye lasts about 111 ms. How many football fields (length = 91.4 m) does the Space Shuttle cover in the blink
of an eye?
Friday, June 28, 2013 at 1:54am
You are given the job of laying carpet for a six-story building. At each floor the dimensions of the floor are 75 meters in length by 40 meters in width. A stairwell takes up a space 2.5 m by 1.5 m,
and cuts through each floor. If the rest of the floor area is to be carpeted, ...
Wednesday, June 26, 2013 at 9:17pm
A spaceship is traveling through deep space towards a space station and needs to make a course correction to go around a nebula. The captain orders the ship to travel 2.9 106 kilometers before
turning 70° and traveling 2.7 106 kilometers before assuming the path towards ...
Wednesday, June 26, 2013 at 2:53am
I have a question my brother asked me but I need a math expert. You have two interlocking circles and the radius of circle B goes through the center of circle A and of course the radius of circle A
goes through the center of circle B. The radius of each circle is 30 feet. Now...
Tuesday, June 25, 2013 at 1:03am
3. Magnetic resonance imaging (MRI) is a process that produces internal body images using a strong magnetic field. Some patients become claustrophobic and require sedation because they are required
to lie within a small, enclosed space during the MRI test. Suppose 29% of all ...
Sunday, June 23, 2013 at 9:50pm
A space vehicle is coasting at a constant velocity of 24.9 m/s in the +y direction relative to a space station. The pilot of the vehicle fires a RCS (reaction control system) thruster, which causes
it to accelerate at 0.338 m/s2 in the +x direction. After 53.7 s, the pilot ...
Sunday, June 23, 2013 at 3:12pm
managerial economics
Using optimization theory, analyze the following quotations: a. The optimal number of traffic deaths in the United States is zero. b. Any pollution is too much pollution. c. We cannot pull U.S.
troops out of Iraq. We have committed to such already. ...
Friday, June 21, 2013 at 1:34pm
A. Derive the distance formula (d) shown below for points A = (x1, y1) and B = (x2, y2). 2 2 1 2 2 1 d = (x − x ) + ( y − y ) Note: Look for an application of the Pythagorean theorem where the red
line, segment AB, is the hypotenuse of a right triangle. You can ...
Monday, June 17, 2013 at 8:37pm
Consider a situation where immediately after birth, twins are separated. One continues to live on earth, while the other is whisked away in a space ship at 90% of the speed of light. After 65 years
they are reunited. Will they look the same? Show proof of your answer through ...
Thursday, June 13, 2013 at 1:05am
Theather Appreciation
HENRIK IBSEN 1. Which of these eras is Ibsen considered to be the Father of? a. Symbolism Era b. The Poetic Age c. Romanticism d. Modern Drama 2. Where is Henrik Ibsen from? a. Romania b. Finland c.
Ireland d. Norway 3. According to his power point, which was the first poetic ...
Wednesday, June 12, 2013 at 3:16pm
Suppose that A and B are events defined on a common sample space and that the following probabilities are known. Find P(A or B). (Give your answer correct to two decimal places.) P(A) = 0.34, P(B) =
0.41, P(A | B) = 0.2
Saturday, June 8, 2013 at 3:27pm
1.what are amber light from traffic signal? 2. what are cosmic rays from outer space?
Saturday, June 8, 2013 at 2:18am
1) The are two schools of thought about barrel ageing, proponents of which can be grouped as traditionalists and modernists. 2) Traditionalists support the idea that the wine should be made according
to traditional principles [<~~ADD COMMA HERE] thus allowing long ...
Friday, June 7, 2013 at 11:09am
A single card is selected from an ordinary deck of cards. The sample space is shown in the figure below. Find the probabilities. (Enter the probabilities as fractions.) (a) P(two of diamonds) 1 (b) P
(two) 2 (c) P(diamond) 3
Friday, June 7, 2013 at 12:24am
1. Baozi is a steamed bread stuffed with meat, vegetables, and red beans. The comma after "vegetables" is not required, but is usually a very good idea to get in the habit of using so that your ideas
are not misread. See #1 here, the serial or Oxford comma: http://...
Thursday, June 6, 2013 at 7:27am
1 and 3 are correct, as long as you put a space after those commas. They NEED the commas (so 2 and 4 are incorrect). This construction is called coordinate adjectives. See #5 here: http://
Thursday, June 6, 2013 at 7:20am
English Composition II
I disagree also, and would fire an employee who continued to double space the lines. What a task to speed read that.
Tuesday, June 4, 2013 at 9:24pm
English Composition II
Modern memos expedite the reader s task by featuring: B.double spaces between each line, without any space between paragraphs.
Tuesday, June 4, 2013 at 9:05pm
English Composition II
Modern memos expedite the reader s task by featuring: A. fully developed, very long paragraphs. B. double spaces between each line, without any space between paragraphs. C. lots of boldfaced and/or
italicized terms. D. shorter blocks of text.
Tuesday, June 4, 2013 at 8:55pm
You're right, I think. Let's take an old myth about UFOs. Some people believe it, although it is not true. They refuse to believe that there are no space shifts with aliens. Many people are quick to
believe hoax emails. Some people don't believe in scientifically ...
Monday, June 3, 2013 at 5:11pm
Did you read your other post? http://www.jiskha.com/display.cgi?id=1370132243 Set up a table of outcomes and the corresponding probabilities to create the probability distribution. For example, the
lowest outcome is X=2 (=1+1) with probability 1/36. then X=3 (1+2 or 2+1...
Sunday, June 2, 2013 at 7:06pm
Set up a table of outcomes and the corresponding probabilities to create the probability distribution. For example, the lowest outcome is X=2 (=1+1) with probability 1/36. then X=3 (1+2 or 2+1) with
probability 2/36 ... X=12 (=6+6) with probability 1/36. Find the mean using E(...
Sunday, June 2, 2013 at 12:30pm
You will be presented with three crystallographic directions drawn in a unit cell. Please write the identity of the direction in the space provided below each picture. Use the correct form of
brackets for describing a direction, and leave no spaces between numbers. If you must...
Sunday, June 2, 2013 at 11:43am
Physics 1 College Level
If D = 3204 meters and d = 1247 meters in the figure, what is the magnitude of the force on the space ship to the nearest MN? Consider all three to be point masses. I'm not even sure to begin with
this question. Any help will be much appreciated!
Saturday, June 1, 2013 at 4:09pm
Expectation = ΣxP(x) where x represents the gain from each of the outcomes in the sample space. Here the gains and probabilities are: outcome=oil gain=375000-25000=350000 P(oil)=1/40 xP(oil)=8750
outcome:gas gain=130000-25000=105000 P(gas)=1/20 xP(gas)=5250 outcome:...
Saturday, June 1, 2013 at 9:20am
You will be presented with three crystallographic directions drawn in a unit cell. Please write the identity of the direction in the space provided below each picture. Use the correct form of
brackets for describing a direction, and leave no spaces between numbers. If you must...
Saturday, June 1, 2013 at 8:15am
You will be presented with three crystallographic directions drawn in a unit cell. Please write the identity of the direction in the space provided below each picture. Use the correct form of
brackets for describing a direction, and leave no spaces between numbers. If you must...
Saturday, June 1, 2013 at 1:22am
Hints: Work out the following: - equal numbers of red marbles and yellow marbles (call x=number of red x=number of yellow - twice as many green marbles as red marbles 2x=number of green Now work out
the sample space and the corresponding probabilities.
Friday, May 31, 2013 at 9:07pm
math linear equations
correct. Online, it's best to leave a space after fractions, so it doesn't look like 3/(4x) 3/4 x is better or (3/4)x
Friday, May 31, 2013 at 2:31pm
This is perfect for a Venn diagram. Draw three intersecting circles, where there is a non-empty space where all three circles intersect. Start from the center and work your way out. The center area
contains 4. 9 used carpool and bus. Of those, 4 used all three methods, leaving...
Friday, May 31, 2013 at 11:29am
Suppose a box of marbles contains equal numbers of red marbles and yellow marbles but twice as many green marbles as red marbles. Draw one marble from the box and observe its color. Assign
probabilities to the elements in the sample space. (Give your answers as fractions.) Red...
Thursday, May 30, 2013 at 11:15pm
english 2
The paragraph below needs to be made more coherent. Use repetition, synonyms, and pronouns to link the sentences. Type the improved paragraph in the space provided on the following page Many cities
are easily distinguished by their skylines. In such cities--New York and Paris...
Thursday, May 30, 2013 at 2:22pm
A and B are events defined on a sample space, with the given probabilities. Find P(A and B). (Give your answer correct to two decimal places.) P(A | B) = 0.43 and P(B) = 0.75
Wednesday, May 29, 2013 at 10:44pm
Suppose that A and B are events defined on a common sample space and that the following probabilities are known. Find P(A or B). (Give your answer correct to two decimal places.) P(A) = 0.32, P(B) =
0.36, P(A | B) = 0.24
Wednesday, May 29, 2013 at 10:43pm
The Sun radiates about 4 × 1026 J of energy into space each second. How much mass is released as radiation each second? The speed of light is 3 × 108 m/s. Answer in units of kg
Wednesday, May 29, 2013 at 1:19am
Suppose that A and B are events defined on a common sample space and that the following probabilities are known. Find P(A or B). (Give your answer correct to two decimal places.) P(A) = 0.32, P(B) =
0.36, P(A | B) = 0.24
Tuesday, May 28, 2013 at 4:21pm
A and B are events defined on a sample space, with the given probabilities. Find P(A and B). (Give your answer correct to two decimal places.) P(A | B) = 0.43 and P(B) = 0.75
Tuesday, May 28, 2013 at 4:20pm
Suppose a box of marbles contains equal numbers of red marbles and yellow marbles but twice as many green marbles as red marbles. Draw one marble from the box and observe its color. Assign
probabilities to the elements in the sample space. (Give your answers as fractions.) Red...
Monday, May 27, 2013 at 10:14am
Suppose a box of marbles contains equal numbers of red marbles and yellow marbles but twice as many green marbles as red marbles. Draw one marble from the box and observe its color. Assign
probabilities to the elements in the sample space. (Give your answers as fractions.) Red...
Sunday, May 26, 2013 at 3:28pm
A and B are events defined on a sample space, with the following probabilities. Find P(A and B). (Give your answer correct to two decimal places.) P(A) = 0.5 and P(B | A) = 0.4
Sunday, May 26, 2013 at 2:10pm
help!!!!! Probability!!!
Assume die is fair. Rolling twice gives 36 outcomes, of which the second roll is strictly less than the first (successes) can be enumerated as follows: 21,31,32,41,42,43,51,52,53,54,61,62,63,64,65
Count how many successes there are, and divide by the size of the sample ...
Sunday, May 26, 2013 at 12:36pm
I have got all the other parts but not sure how to come up with this part????In a national survey of 500 business and 500 leisure travelers, each was asked where they would most like "more space."
Sunday, May 26, 2013 at 10:54am
child daycare
Which of the following is one of the steps for successful inclusion? A. Provide every program requested. B. Modify appropriate environments and curriculum. C. Organize all children in an assigned,
designated area. D. Use every inch of space in designated areas. answer C ...
Saturday, May 25, 2013 at 9:56pm
polynomials am i doing it right?
You're on the right track... 1. expand it and write it out when it is a subtraction: (x^4+7x^3+7)-(2x^4-4x^3+1) = x^4+7x^3+7) -2x^4+4x^3-1 = -x^4 + 11 x^3 + 6 2. space out your terms to avoid losing
some of them. (3n^3 + n^2 -n -4)+(5n^3 -4n^2 + 11) = 3n^3 + n^2 -n -4 + 5n...
Saturday, May 25, 2013 at 10:25am
Physics help
The galaxies in the universe are all flying away from each other. The speeds of nearby galaxies are proportional to the distance the galaxy is away from us. This relation, v=Hd is known as Hubble's
law and the constant H is known as Hubble's constant. The evolution of ...
Thursday, May 23, 2013 at 11:26pm
Math (Probability)
The size of the sample space of the last throw is 20*20=400. Out of the 400 outcomes, Bret needs those where his throw is 11 or more than Shawn's, namely: (12,1),(13,1),(14,1)...(20,1) (13,2),
(14,2)...(20,2) ... (19,8),(20,8) (20,9) There are ∑(i), i=1,9 outcomes. Can ...
Tuesday, May 21, 2013 at 6:33am
7th grade math help
Assuming one must make a single choice in each group, so it is a three-step experiment. choices for group 1 = 2 choices for group 2 = 3 choices for group 3 = 2 Total number of outcomes in the sample
space is therefore the product of the choices in each group (step).
Monday, May 20, 2013 at 6:35pm
Grammar (Ms. Sue)
Rewrite the sentences below that contain errors in pronoun-antecedent agreement. If a sentence is correct, write "correct." 1. Max is learning skills that you need to be an astronaut. 2. Some of my
friends wrote to NASA in his spare time to find out how to become an ...
Monday, May 20, 2013 at 5:37pm
Motion along a straight line, such as an axis, In this case,its the general movement of any object without the rotational component. In inertial translational motion without rotation the object
always keeps the same inertial attitude. Rotation and translation together provide ...
Sunday, May 19, 2013 at 6:48pm
Mark flips a coin twice. How many outcomes are in the sample space?
Friday, May 17, 2013 at 7:43am
Mark flips a coin twice. How many outcomes are in the sample space
Thursday, May 16, 2013 at 7:58pm
A cylinder has a height of 16 cm and a radius of 5 cm. A cone has a height of 12 cm and a radius of 4 cm. If the cone is placed inside the cylinder as shown, what is the volume of the air space
surrounding the cone inside the cylinder? (Use 3.14 as an approximation of .)
Wednesday, May 15, 2013 at 1:59pm
Science 1 question SPACE
well i guess there is no such path on images, but thanks Ms.Sue for trying
Tuesday, May 14, 2013 at 10:55pm
Science 1 question SPACE
http://www.google.com/search?q=earth+to+Uranus&hl=en&source=lnms&tbm=isch&sa=X&ei=9vaSUa_jKKOTyQHDgoHwCw&ved=0CAcQ_AUoAQ&biw=711&bih=453 http://www.google.com/search?q=earth+to+Uranus&hl=en&
Tuesday, May 14, 2013 at 10:47pm
Science 1 question SPACE
I re-posted this question because noone helped and I need help fast! I am doing a project on the planet uranus and I need a map showiong the path to my planet that i have chosen. The picture of the
path must include asteroids, comets or other interesting objects. I am having ...
Tuesday, May 14, 2013 at 10:42pm
Science space
I am doing a project on the planet uranus and I need a map showiong the path to my planet that i have chosen. The picture of the path must include asteroids, comets or other interesting objects. I am
having so much trouble looking for such a picture. I have tried looking on ...
Tuesday, May 14, 2013 at 10:18pm
the space shuttle typically orbits 400 km above the earths surface. The earth has a mass of 5.98 * 10^24 kg and a radius of 6380km. A) How much would a 2000 kg part for the space station weigh when
it has been lifted to that orbit in the shuttles cargo bay? B)Use your result ...
Tuesday, May 14, 2013 at 7:54pm
math probability-PLEASE HELP -thanks
thanks for helping me! You spin the spinner twice. draw a tree diagram and list sample spaces to show the possible outcomes. From that answer questions 1,2 and 3. The spinner is 4 equal parts for a,
b, c ,d I know the tree diagram- you Have a,b,c,d, and from each of those ...
Sunday, May 12, 2013 at 2:46pm
math probabilty- please help
You spin the spinner twice. draw a tree diagram and list sample spaces to show the possible outcomes. From that answer questions 1,2 and 3. The spinner is 4 equal parts for a, b, c ,d I know the tree
diagram- you Have a,b,c,d, and from each of those letters ex. A has four ...
Saturday, May 11, 2013 at 2:57pm
math help
) Brian has 96 feet of fencing to enclose a rectangular space in his yard for his dog. He wants the rectangle to be twice as long as it is wide. Write and solve an equation to find the dimensions of
the rectangle.
Friday, May 10, 2013 at 9:59pm
You spin the spinner twice. draw a tree diagram and list sample spaces to show the possible outcomes. From that answer questions 1,2 and 3. The spinner is 4 equal parts for a, b, c ,d I know the tree
diagram- you Have a,b,c,d, and from each of those letters ex. A has four ...
Friday, May 10, 2013 at 4:23pm
Lang. Arts
select the pronoun that belongs in the underlined space. 1.___ of the people in Sri Lanka are Sinhalese,2._______are Buddhist by tradition. A smaller percentage are Tamil, of Hindu background. Civil
wars have torn apart the small nation,3._____is a bloody and violent history. ...
Friday, May 10, 2013 at 10:53am
3. Middle C and the D next to it is one step. -True? 10. A sharp written before an F on a staff applies to a. F and G b. All F's on that one line or space in that measure c. Only that note d. Always
two notes C?
Thursday, May 9, 2013 at 3:32pm
Jackson is creating a water garden surrounded by a triangular patch of grass. If the pond will take up the space indicated by the circle, how many square feet of sod will he need to complete the
garden (use 3.14 for and round your answer up to the next foot)?
Thursday, May 9, 2013 at 1:31pm
Assuming the dice are 6 sided, there are 36 possible outcomes to this situation, 6X6. How many are multiples of 7? Check the sample space, none of them are multiples of 7,impossible to get without a
7 involved.
Wednesday, May 8, 2013 at 9:55pm
One right circular cone is set inside a larger circular cone. The cones share the same Axis, the same vertex, and the same height. Find the volume of the space between the cones of the diameter of
the inside cone is 6in., the diameter of the outside cone is 9in and the height ...
Monday, May 6, 2013 at 1:03pm
A test has 1 true false question and 2 multiple choice questions(A B C D). Draw a tree diagram and determine the sample space for the possible ways to answer the test.
Sunday, May 5, 2013 at 9:00pm
a magnetic rail gun is constucted to give a space vehicle its intial launch velocity, thereby reducing the amount of fuel the vehicle has to carry. it consist of two very long parallel rails across
which is positioned a large current carrying crossbar 7.65 m long. The crossbar...
Saturday, May 4, 2013 at 5:32pm
Does the energy move from Earth s surface and atmosphere out to space through radiation, conduction, or convection? Explain.
Thursday, May 2, 2013 at 3:32pm
Juan's model locomotive is 7 5/8 in long HIs coal car is 6 1/4 in long when hooked together there is a 7/8 inch space between the cars. What is the total length when the two cars are hooked together?
I added the fractions and got 13 7/8 what do I do with the 7/8 in space ...
Wednesday, May 1, 2013 at 6:05pm
Please leave space between words.
Wednesday, May 1, 2013 at 3:13pm
gravitational forces are be acted on a space center what other data need to show the forces a-direction of the opposite force b-magnitute of the downward force graviatational forrces are being acted
on a space station i think what other of the two data are needed to fully show...
Tuesday, April 30, 2013 at 10:58pm
graviatational forrces are being acted on a space station i think what other of the two data are needed to fully show the center being in gavity or motion or what other of the two are being acted on
the space center
Tuesday, April 30, 2013 at 10:47pm
gravitational forces are be acted on a space center what other data need to show the forces a-direction of the opposite force b-magnitute of the downward force rigth answer: b
Tuesday, April 30, 2013 at 9:02pm
A jar contains 1 blue crayon, 1 green crayon, 1 black crayon, and 1 red crayon. You grab 1 crayon without looking. Find the sample space for this experiment.
Tuesday, April 30, 2013 at 1:57pm
A Competitive Coup in the In-Flight Magazine. When the manager for market intelligence of AutoCorp, a major automotive manufacturer, boarded the plane in Chicago, her mind was on shrinking market
share and late product announcements. As she settled back to enjoy the remains of...
Monday, April 29, 2013 at 3:17pm
For the following combination of reactants, enter the formulae of the products in the space provided and give their states. (Separate multiple products with a comma ",". Type your answer using the
format Na2SO4(aq) for Na2SO4(aq). Do not balance the equation. If ...
Sunday, April 28, 2013 at 7:44pm
Is there supposed to be a space between 6 and 5? Or is it ? 65x=1296
Saturday, April 27, 2013 at 6:42pm
two charge 5NC and -2NC are placed at a point (5cm,0,0)and (23cm,0,0)in a region of a space where there is no other electric field.calculate the electrostatic potential energy of this charged system
Saturday, April 27, 2013 at 3:25am
math probablity
Assume a family is planning to have three children. A. What is is the probablity of a family having 2 girls. B. The probablity of a family having at least 1 boy. C. List the sample space using a B
for Boy and G for girl (Hint there should be 8 possible outcomes). ??????
Friday, April 26, 2013 at 9:51pm
Assume a family is planning to have three children. A. What is is the probablity of a family having 2 girls. B. The probablity of a family having at least 1 boy. C. List the sample space using a B
for Boy and G for girl (Hint there should be 8 possible outcomes).
Friday, April 26, 2013 at 4:58pm
Assume a family is planning to have three children. A. What is is the probablity of a family having 2 girls. B. The probablity of a family having at least 1 boy. C. List the sample space using a B
for Boy and G for girl (Hint there should be 8 possible outcomes).
Friday, April 26, 2013 at 4:33pm
Looks like (B) to me why all the white space?
Friday, April 26, 2013 at 11:44am
An observer is located on the ground 6 kilometers from the point where a space shuttle is launched (vertically). When the shuttle is 8 kilometers high, it is travelling at 500 kilometers per hour.
Determine, for that moment, the rate at which the distance between the observer ...
Friday, April 26, 2013 at 1:04am
The molar volume calculated does not take into account that the shape of the sodium atoms affects packing space between atoms. (source: I had the same question)
Friday, April 26, 2013 at 12:52am
VASTS (space)
please answer this questionnnn!!!
Thursday, April 25, 2013 at 11:39pm
you plan to carpet 468ft of space. carpet costs $26.75 per square yard and the tax is 7%. the carpet is sold in 12 ft rolls. what is the cost of the carpet?
Thursday, April 25, 2013 at 8:24pm
No ms.sue it is just multiplication is what it means when there is no space in between the parentheses
Thursday, April 25, 2013 at 8:21pm
No ms.sue it is just multiplication is what it means when there is no space in between the parentheses
Thursday, April 25, 2013 at 8:18pm
Well umm I'm not a physics master but it depends where it is on it's orbit when gravity disappeared. It will move off at a tangent to it's orbit, which may send it into the sun or into a swing around
it, or, more likely, into outer interstellar space. If it's ...
Thursday, April 25, 2013 at 4:07pm
Why do the passengers in high-altitude jet planes feel the sensation of weight while passengers in the International Space Station do not? Please help; need an original answer no Google search. Thank
you in advance...
Thursday, April 25, 2013 at 3:58pm
A particle with a charge of 31 µC moves with a speed of 62 m/s in the positive x-direction. The magnetic field in this region of space has a component of 0.50 T in the positive y-direction, and a
component of 0.61 T in the positive z-direction. What is the magnitude and ...
Thursday, April 25, 2013 at 1:56pm
A particle with a charge of 31 µC moves with a speed of 62 m/s in the positive x-direction. The magnetic field in this region of space has a component of 0.50 T in the positive y-direction, and a
component of 0.61 T in the positive z-direction. What is the magnitude and ...
Thursday, April 25, 2013 at 1:56pm
the bore of an engine cyclinder is the diameter of the piston and the stroke is the distance that the piston will travel up the cyclinder wall. The displacement is the volume of space that the piston
goes thru. If the piston is 3.97" in dia. and the stroke is 4" what...
Thursday, April 25, 2013 at 5:46am
VASTS (space)
In case you didnt already know VASTS has to do with aerospace and what not...this was one of the questions on our discussion forum....this is exactly what it says-- Rags, tie wraps, and safety wire
are examples of ______ contamination? ....there was a similar question that ...
Wednesday, April 24, 2013 at 10:55pm
None of those. The Compton space telescope detected, and somewhat localized gamma radiation.
Wednesday, April 24, 2013 at 9:44pm
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
Post a New Question | Current Questions | {"url":"http://www.jiskha.com/science/outer_space/?page=5","timestamp":"2014-04-16T19:11:35Z","content_type":null,"content_length":"41415","record_id":"<urn:uuid:8b99c4cf-ea77-4311-ba28-0a96d801fe0c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
No data available.
Please log in to see this content.
You have no subscription access to this content.
No metrics data to plot.
The attempt to load metrics for this article has failed.
The attempt to plot a graph for these metrics has failed.
Distinguishing similar patterns with different underlying instabilities: Effect of advection on systems with Hopf, Turing-Hopf, and wave instabilities
FIG. 1.
Space-time plots with size 80 space units (horizontal)×200 time units (downwards). All systems are started from random initial conditions, and no-flux boundary conditions are used. The diffusion
coefficients are D[x] =D[y] =D[z] =0.1 (a); D[x] =0.1 and D[y] =D[z] =2 (b); and D[x] =D[z] =0.1 and D[y] =2 (c). For other parameters see text. Light (dark) color represents high
(low) concentration of the activator x.
FIG. 2.
Effect of advective flow for a system with D[x] =D[y] =D[z] =0.1. Space-time plots with size 80 space units (horizontal) and 400 time units (downwards). For the initial 100 time units, the
velocity v is 0 and then is turned to 0.2 (a), 0.5 (b), and 1.5 (c). The square in (a), with 80 space units×20 time units is expanded at the right. Light (dark) color represents high (low)
concentration of the activator x. (d) Phase diagram for patterns at different velocities of the advective flow.
FIG. 3.
Effect of advective flow for a system with D[x] =0.1 and D[y] =D[z] =2. Space-time plots with size 80 space units (horizontal) and 400 time units (downwards). For the initial 100 time units,
the velocity v is 0 and then is turned to 1.5 (a) and 3.0 (b). The square in (b), with 80 space units×20 time units is expanded at the right. Light (dark) color represents high (low) concentration
of the activator x. (c) Phase diagram for patterns at different velocities of the advective flow.
FIG. 4.
Effect of advective flow for a system with D[x] =D[z] =0.1 and D[y] =2. Space-time plots with size 80 space units (horizontal) and 400 time units (downwards). For the initial 100 time units,
the velocity v is 0 and then is turned to 0.3 (a), 0.7 (b), 1.5 (c), and 2.5 (d). Light (dark) color represents high (low) concentration of the activator x. (e) Phase diagram for patterns at
different velocities of the advective flow.
FIG. 5.
Effect of advective flow for a system with D[x] =0.1 and D[y] =D[z] =2 and f [y]=f [z]=1. Space-time plots with size 80 space units (horizontal) and 300 time units (downwards). For the
initial 100 time units, the velocity v is 0 and then is turned to 0.1 (a) and 2.5 (b). The square in (b), with 30 space units×20 time units is expanded at the right. Light (dark) color represents
high (low) concentration of the activator x. (c) Phase diagram for patterns at different velocities of the advective flow.
FIG. 6.
Effect of advective flow on a system with D[x] =D[y] =0.1 and D[z] =2. Space-time plots with size 80 space units (horizontal) and 300 time units (downwards). For the initial 100 time units, the
velocity v is 0 and then is turned to 0.2 (a), 0.4 (b), and 1.0 (c). Light (dark) color represents high (low) concentration of the activator x. (d) Phase diagram for patterns at different velocities
of the advective flow.
FIG. 7.
Detail of space-time plots with size 100 space units (horizontal) and 40 time units (downwards) when changing v from 0 to 1.0 ((a) and (c)) or to 3.0 ((b) and (d)) on systems with D[u] =0.1, D[w]
=2, and ξ=0 ((a) and (b)) and for D[u] =D[w] =0.1 and ξ=−10 ((c) and (d)). Phase diagram for the effect of advective flow on systems with D[u] =0.1, D[w] =2, and ξ=0 (e) and for D[u]
=D[w] =0.1 and ξ=−10 (f).
Article metrics loading... | {"url":"http://scitation.aip.org/content/aip/journal/chaos/22/4/10.1063/1.4766591","timestamp":"2014-04-17T16:54:36Z","content_type":null,"content_length":"77204","record_id":"<urn:uuid:c560531c-d25a-447e-aae0-605a6368f3bf>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decision Graphs
Jon Oliver devised decision graphs (classification graphs) [Oli92, OW92] to remove the difficulty that decision trees (classification trees) have with modelling "disjunctive" functions, for example,
independent variables x[1], x[2], x[3], x[4],
dependent variable y,
function to be learned
y = (x[1] ∧ x[2]) ∨ (x[3] ∧ x[4])
A decision tree can certainly describe such a function,
but it requires two copies of the subtree for (here) x[3] and x[4].
If the learning problem is made slightly harder, for example,
independent variables x[i], i=1..9,
dependent variable y,
function to be learned
y = (x[1]∧x[2]∧x[3]) ∨ (x[4]∧x[5]∧x[6]) ∨ (x[7]∧x[8]∧x[9])
the decision tree-tree to describe it becomes much larger [exercise]. Such trees can be learned, but it generally requires a great deal of data compared to the intuitive complexity of the function to
be learned.
The solution is to "factor out" any common subtrees and replace the tree by an equivalent directed acyclic graph (DAG). Such a DAG may be much smaller than the corresponding tree, have a smaller
message-length, and therefore can be justified by much less data (evidence). Decision-trees and decision-graphs describe exactly the same set of function but decision-graphs model disjunctive
functions more efficiently. (There is a small penalty for describing a DAG that is in fact a tree.)
The tree data structure is extended, to allow DAGs, by adding a new kind of node, a join-node, in addition to a tree's forks (tests) and leaves.
(The examples above use binary variables but decision graphs and trees apply to arbitrary discrete and continuous variables in the obvious way.)
Message Length of a Decision Graph
In learning with MML, the two-part message length consists of (i) the message length of the hypothesis (here a decision graph), plus (ii) the message length of the data given the hypothesis. The 2nd
part is calculated, as for trees, using the distribution of the leaf into which a datum falls. The calculation of the message length of the decision graph is generalised from that for decision trees:
1. Perform a prefix traversal of the decision graph, encoding forks and leaves (as for decision trees) and join nodes, but not traversing below any join nodes.
2. The traversal may have output j≥0 join nodes. Note that, if j>0, at least two, but not necessarily all, of the join nodes output must join with each other[Oli92]. Encode the actual pattern of
joining together.
3. If some join nodes have been joined together in 2, continue traversing and encoding the graph from them, as in 1. Otherwise stop.
In step 1, cf trees, there are now 3 kinds of node. Oliver suggests using some fixed probability for pr(join_node); the tree method is used for a fork or a leaf given that a node is not a join.
In step 2, it is necessary to calculate the number of possible ways, w, in which at least two of the the dangling joins can be linked together, and encode the actual way, log w nits.
The search for an optimal decision graph is harder than that for an optimal decision tree. Wallace and Oliver suggest a greedy search with some amount of lookahead.
[OW92] J. Oliver & C. S. Wallace. Inferring Decision Graphs. TR 92/170, Dept. of Computer Science [*], Monash University, Australia 3800.
Also in, Proc. of the 1992 Aust. Joint Conf. on Artificial Intelligence, pp.361-367, 1992.
[Oli92] J. Oliver. Decision Graphs - An Extension of Decision Trees. TR 92/173, Dept. of Computer Science [*], Monash University, Australia 3800.
Also in, 4th Int. Conf. Artificial Intelligence and Statistics, Florida, pp.343-350, 1993.
[Hicss93] D. Dow, J. Oliver, T. I. Dix, L. Allison & C. S. Wallace, A decision graph explanation of protein secondary structure prediction. 26th Hawaii Int. Conf. Sys. Sci., 1, pp.669-678, 1993.
Also see Bioinformatics.
[*] Became Faculty of Information Technology, Monash. | {"url":"http://www.allisons.org/ll/MML/Structured/DGraph/","timestamp":"2014-04-19T14:30:41Z","content_type":null,"content_length":"22398","record_id":"<urn:uuid:9cec8f2f-0a79-49ff-8a54-7cd4190ef427>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing Overview: Straight Lines
Graphing Overview (page 1 of 3)
Sections: Straight lines, Absolute values & quadratics, Polynomials, radicals, rationals, & piecewise
In this overview, we will start with graphing straight lines, and then progress to other graphs. The only major difference, really, is in how many points you need to plot in order to draw a good
graph. But those increased numbers of points will vary with the "interesting" issues related to the various types of graphs.
Before we get started, though, let me say this: You should do NEAT graphs, which means that you should be using a ruler. If you don't have a ruler, go get one. Now. It will help immensely, and you
can get major "brownie points" from your instructor. And, no, using graph paper does NOT excuse you from using a ruler.
Straight lines
Suppose you have "y = 3x + 2". Since this has just "x", as opposed to "x^2" or "|x|", this graphs as just a plain straight line (because it is a linear equation). The first thing you need to do is
draw what is called a "T-chart". It looks like this: Copyright © Elizabeth Stapel 1999-2011 All Rights Reserved
Then you will pick values for x, plug them into the equation, and solve for the corresponding values of y. Don't forget to pick negatives for x; using only positive numbers can be misleading later
on, so it's a bad habit to get in to now. Also, try to plot at least three points. It's just safer that way: if you mess up on one point, you'll know, because it's dot won't line up with the others.
This is what this looks like:
Some people like to add a third column, in which they write down what the actual plot-points are, like this:
Note that, if you're using a graphing calculator, you can probably have the calculator fill in the T-chart for you. Check your manual for a "TABLE" utility, or just read the chapter on graphing. Once
you know how to use this utility, then you can just copy your T-chart from the calculator screen.
Now that you have your points, draw a NICE NEAT set of axes. This means drawing an EVEN, CONSISTENT scale on the axes (evenly spacing the ticks for the numbers), and maybe even labelling the axes.
Draw arrows on the ends of the axes where the numbers are getting bigger (that's what the arrows stand for, ya know!), and draw arrows NOWHERE ELSE. For comparison:
│ This is a good graph. │ This is a bad graph. │
(By the way, did you notice how the tickmarks for "5" and "10" on the axes above are longer than the others? That's not something you have to do, but it can be very helpful when counting off to graph
your points. Just a tip...)
Then plot your points...
...and connect the dots:
So this is a nice straight line, going uphill (which we expected, because it has a positive slope of m = 3) and crossing the y-axis at the y-intercept of y = 2.
Sometimes they give you an equation like "2y – 4x = 3". The first thing you want to do is solve this equation for "y =".This works like this:
2y – 4x = 3
2y = 4x + 3
y = 2x + 1.5
Then you graph as usual.
Sometimes you want to be more careful about the values you pick for x. For instance, suppose you have "y = ( ^2/[3] )x + 4". In this case, make life easier for yourself by choosing x's that are
multiples of 3, so you can cancel out the denominator and avoid fractions. I mean, choosing x = 5 isn't wrong, but x = 3 would be nicer to work with in this particular problem. (For further
information, review the lesson on "Graphing LInear Equations".)
Top | 1 | 2 | 3 | Return to Index Next >>
Cite this article as: Stapel, Elizabeth. "Graphing Overview." Purplemath. Available from
http://www.purplemath.com/modules/graphing.htm. Accessed | {"url":"http://www.purplemath.com/modules/graphing.htm","timestamp":"2014-04-21T07:55:56Z","content_type":null,"content_length":"33285","record_id":"<urn:uuid:700e29fa-45ac-4628-8f03-9ff4809ede96>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hopf module
The Hopf modules over bimonoids are modules in the category of comodules or viceversa. This notion has many generalizations and variants. Relative Hopf modules are an algebraic and possibly
noncommutative analogue of a notion of an equivariant sheaf.
Given a $k$-bialgebra $(H,m_H,\eta,\Delta,\epsilon)$, a left-right Hopf module of $H$ is a $k$-module $M$ with the structure of left $H$-module and right $H$-comodule, where the action $u: H\otimes M
\to M$ and right $H$-coaction $\rho : M\to M\otimes H$ are compatible in the sense that the coaction is a morphism of left modules. In this, the structure of left module on $M\otimes H$ is the
standard tensor product of modules over Hopf algebras, with the action given by $(u\otimes m_H)\circ(H\otimes \tau\otimes H)\circ(\Delta \otimes M\otimes H)$ as $k$-linear map $H\otimes(M\otimes H)\
to M\otimes H$ where $\tau=\tau_{H,M}:H\otimes M\to M\otimes H$ is the standard flip of tensor factors in the symmetric monoidal category of $k$-modules.
An immediate generalization of Hopf modules is for the case where $(E,\eho_E)$ is a right $H$-comodule algebra (a monoid in the category of $H$-comodules); then one can define the category ${}_E\
mathcal{M}^H$ of left $E$- right $H$- relative Hopf modules (less precisely, $(E,H)$-relative Hopf modules, or simply (relative) Hopf modules), which are left $E$-modules that are right $H$-comodules
with a natural compatibility condition. In Sweedler notation for comodules. where $\rho(m) = \sum m_{(0)}\otimes m_{(1)}$, $\rho_E(e) = \sum e_{(0)}\otimes e_{(1)}$, the compatibility condition for
the left-right relative Hopf modules is $\rho (e m) = \sum e_{(0)} m_{(0)} \otimes e_{(1)} m_{(1)}$ for all $m\in M$ and $e\in E$.
There are further generalizations where instead of a bialgebra $H$ and a $H$-comodule algebra $E$ one replaces $E$ by an arbitrary algebra $A$, and $H$ by a coalgebra $C$ and introduces a
compatibility in the sense of a mixed distributive law or entwining (structure). Then the relative Hopf modules become a special case of so-called entwined modules, see the monograph [BW 2003].
Geometrically, relative Hopf modules are instances of equivariant objects (equivariant quasicoherent sheaves) in noncommutative algebraic geometry, the statement of which can be made precise, cf.
[Škoda 2008].
Furthermore, in the context of relative Hopf modules there is an analogue of the faithfully flat descent along torsors from commutative algebraic geometry, and the Galois descent theorems in algebra.
Its main instance is Schneider's theorem, asserting that if $H$ is a Hopf algebra and $U\hookrightarrow E$ a faithfully flat $H$-Hopf-Galois extension then the natural adjunction between the
categories of relative $(E,H)$-Hopf modules and left $U$-modules is an equivalence of categories. This corresponds to the classical theorem saying that the category of equivariant quasicoherent
sheaves over the total space of a torsor is equivalent to the category of the quasicoherent sheaves over the base of the torsor.
One can also consider Hopf bimodules, and similar categories. The category ${}_H^H\mathcal{M}^H_H$ is related to the category of Yetter-Drinfeld modules.
Fundamental theorem on Hopf modules
If $H$ is a Hopf algebra over a field $k$, then the category of the ordinary Hopf modules ${}_H^H\mathcal{M}$ is equivalent to the category of $k$-vector spaces. See e.g. Montgomery’s book.
Related entries include comodule algebra, Schneider's descent theorem, Yetter-Drinfeld module, entwined module
• BW2003: T. Brzeziński, R. Wisbauer, Corings and comodules, London Math. Soc. Lec. Note Series 309, Cambridge 2003.
• Škoda 2008: Z. Škoda, Some equivariant constructions in noncommutative algebraic geometry, Georgian Mathematical Journal 16 (2009), No. 1, 183–202, arXiv:0811.4770 MR2011b:14004
• Susan Montgomery, Hopf algebras and their actions on rings, CBMS Lecture Notes 82, AMS 1993, 240p.
• Peter Schauenburg, Hopf Modules and Yetter - Drinfel′d Modules, Journal of Algebra 169:3 (1994) 874-890 doi; Hopf modules and the double of a quasi-Hopf algebra, Trans. Amer. Math. Soc. 354
(2002), 3349-3378 doi pdf; Actions of monoidal categories, and generalized Hopf smash products, Journal of Algebra 270 (2003) 521-563, doi ps
• A. Borowiec, G. A. Vazquez Coutino, Hopf modules and their duals, math.QA/0007151
• H-J. Schneider, Principal homogeneous spaces for arbitrary Hopf algebras, Israel J. Math. 72 (1990), no. 1-2, 167–195 MR92a:16047 doi | {"url":"http://ncatlab.org/nlab/show/Hopf+module","timestamp":"2014-04-21T02:00:44Z","content_type":null,"content_length":"30205","record_id":"<urn:uuid:90b7539e-7537-4e53-b187-c2e40da668ed>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instructor Class Description
Time Schedule:
George J. Marklin
A A 523
Seattle Campus
Special Topics in Fluid Physics
Offered: AWSp.
Class description
Course Title: The Physics of Spheromaks. Course Text: 'Spheromaks' by Paul M. Bellan --- Spheromaks are easily formed, self-organized magnetized plasma configurations that are being investigated for
their potential to make economical fusion reactors. This course will explain the physics issues underlying their MHD equilibrium and stability and their formation and sustainment using the energy
minimization process of Taylor relaxation.
Student learning goals
Describe a spheromak and explain its advantages compared to other fusion reactor candidates such as the FRC, RFP, tokamak and stellerator.
Explain how a spheromak can be made with a magnetized coaxial plasma gun or an inductive helicity injector.
Define magnetic helicity and show that it is conserved in MHD. Show how to generalize it to open systems and derive the helicity evolution equation.
Explain the Taylor minimum energy principle and derive the Taylor state equation curl(B)=lambda*B for both open and closed systems.
Calculate spheromak solutions to the Taylor state equation in cylindrical and spherical geometry.
Explain how the stability of spheromaks depends on geometry, current profiles and plasma beta.
General method of instruction
Classroom lectures and homework. Reading assignments in textbook and selected journal articles.
Recommended preparation
Students should be familiar with basic plasma physics and basic electricity and magnetism.
Class assignments and grading
Weekly homework and reading assignments.
40% homework 20% midterm exam 40% final exam
The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without
notice. In most cases, the official course syllabus will be distributed on the first day of class. Last Update by George J. Marklin
Date: 08/29/2011 | {"url":"https://www.washington.edu/students/icd/S/aa/523marklin.html","timestamp":"2014-04-19T22:57:31Z","content_type":null,"content_length":"5017","record_id":"<urn:uuid:9f36e3f5-93c7-4746-8aba-0ca1589215a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why should 'e' exist?
Why should "e" exist?
September 9, 2012 12:16 AM Subscribe
Why is the number "e" so...
I get that its important and that it comes up everywhere, in interesting integrals, differential calculus, statistics, everything. So I'm not asking about what cool places this number pops up, but
rather why it should pop up in so many of these places. Philosophically, I guess, why should a number like e exist?
But why?
For example, Pi has a very intuitive explanation: the circumference of a circle divided by its diameter. And the fact that it comes up in non-geometric problems, I guess, can be explained by the
concept that these geometries are fundamental parts of the physical and mathematical universe, and "pi" is a fundamental part of those.
But what about "e"?
What universal concept or physical property of the universe does it represent. What reason is there that it should be so pervasive and important.
BONUS: under different conditions (more dimensions in our universe, or the presence/absence of string theory, or some other sci-fi universe) would math exist as we know it, especially - would the
number "e" exist or have the same value?
posted by nondescript to Science & Nature (17 answers total) 29 users marked this as a favorite
You can use
Euler's Formula
to express
in terms of π and
. In other words, you can express
in units of a property of a unit circle (and something else that follows certain mathematical rules). If there is a universe where a geometry defines π with a different value, or where math is weird
has a different value, then clearly
would be different,
assuming the Euler relation itself still holds
posted by Blazecock Pileon at 12:32 AM on September 9, 2012 [3 favorites]
Logarithms and exponents expressed in base
(rather than base 10) turn out to be mathematically elegant. The exponential function,
, is
its own derivative
. The derivative of ln(x) (the natural log, i.e. log in base e) is 1/x. If you try to differentiate log(x) in base 10, the answer is 1/(x ln(10)). The derivative of 10^x is ln(10)*10^x. Basically, if
you attempt to do exponential and logarithmic math in bases that aren't
, you get extra
terms all over the place just begging to be cancelled out.
Derivatives of logarithmic and exponential functions are important in differential equations, such as those which model physical systems experiencing exponential growth and decay. See the example for
exponential decay
, and note how the system is described by a very simple differential equation which has an exponential (ie. e^x) as its solution.
BONUS: The definition of e is "limit as n goes to infinity of (1 + 1/n)^n". It's self-contained, not physically derived or anything, rather hard for me to imagine it not existing or having a
different value. Another definition is "the number a for which d/dx a^x = a^x". It gets a bit philosophical here but intuitively I understand
as being the number that facilitates mathematical descriptions of exponential growth and decay which are incredibly fundamental parts of physics (see, for example, all of thermodynamics). As long as
the universe contains growth and decay there shall be
posted by PercussivePaul at 1:04 AM on September 9, 2012 [12 favorites]
Seconding the importance of e in differential calculus. Many things happen at a rate proportional to how much stuff there is, and thanks to e^x being its own derivative when you see that (population
growth or decay,
cosmological expansion
- especially during inflation when we might speak of
, linear air resistance), one knows that e^x is lurking around the corner.
To answer your bonus about extra dimensions or some stringy description at high energies, theoretical physicists understand and explore these scenarios using mathematics with the same value of e as
we have; okay the group theory and differential geometry might be a bit more sophisticated but the mathematical principles are unchanged. After all, e is
physical constant
, it's just an irrational number. But while we're at it,
a recent peer-reviewed open-access discussion of the variation of fundamental constants.
In order to make sense at the energy and length scales that we see every day, all these candidate theories have to explain all the stuff we've already seen (just like any scientific theory should),
and so the usual way of doing this is to formulate an
effective field theory
that (carefully) discards stuff happening at high energies and small length scales. There's no freedom in that process for e to change: the mathematics cannot depend on the physics in that way.
posted by Talkie Toaster at 2:35 AM on September 9, 2012 [1 favorite]
e comes up a lot when things are increasing or decreasing in a regular way (by a fixed additive or multiplicative amount). There is a sweet spot of things increasing or decreasing where a function is
its own derivative. Or, a function can be its own derivative multiplied by a constant, or raised to a power, or or or. This is what is happening when e appears in formulas not by itself, but with
some other factors.
A lot of things in nature increase and decrease by regular amounts. Cells multiply... rabbits have sex with each other and make new rabbits... stuff (of all sorts) diffuses away from its source in a
regularly decreasing way (e.g. Gaussian spreading distributions have e in them)... etc.
Regarding your bonus question: e is probably most relevant in a universe that includes both time and quantity.
posted by kellybird at 2:37 AM on September 9, 2012
What universal concept or physical property of the universe does it represent. What reason is there that it should be so pervasive and important.
The part about
that makes it seem plausible to me that it could appear in many places, is that the derivative of e^x is e^x.
Now, not every class of functions can sustain this property. For example, in a quadratic equation (that is, equations of the form
f(x) = ax^2 + bx + c
for any a, b, and c) there's no specific a's, b's, or c's which can let f(x) equal the derivative of itself.
Exponential functions (equations of the form
f(x) = a^x
for any a) are the only type of function that can manage this. And the specific a that allows this to occur is our friend,
! So
f(x) = e^x
is the only function who is its own derivative. (This is one way
can be defined.)
This turns out to be a very useful property whenever you want to deal with both a function and its derivatives, which happens all the time in differential equations. For example, the force on a
harmonic oscillator
is commonly modeled as a function of its position (if you pull a spring further apart, the force will be stronger) and velocity (the faster its going the more force from frictional force there will
be). And force determines an object's acceleration.
So right here in this system we see a differential equation involving position, velocity (which is the derivative of position), and acceleration (which is the derivative of velocity). And as you
might be able to intuit, a function whose derivative equals itself comes in handy in working with these types of equations.
posted by losvedir at 4:21 AM on September 9, 2012 [1 favorite]
Another place where e comes up is when thinking about compound interest.
If you have some principal P
invested at r percent for t years, compounded n times a year, you will end up with P
. We want to investigate what happens as the number of compoundings/year goes to infinity. in other words, what if you compound continuously?
We make the following substitution: mr=n, where m therefore goes to infinity as n does (we are just introducing m, it doesn't hae a natural interpretation. This lets us rewrite the expression above
as P
. If we take the limit as m goes to infinity, the expression (1+1/m)
converges to e (which is a consequence of the fact that e
is its own derivative, as mentioned above. Hence our expression becomes P
. This is an upper bound to the money you could get via compound interest with that principal at that interest rate.
posted by Elementary Penguin at 4:36 AM on September 9, 2012 [2 favorites]
And the fact that it comes up in non-geometric problems, I guess, can be explained by the concept that these geometries are fundamental parts of the physical and mathematical universe, and "pi" is a
fundamental part of those.
You guess wrong.
But that's not your question, which is about e, and which I'd answer this way.
It's not so much that e is so important. It isn't. It's that the
and its inverse, the
, are important. Write L for the log and E for the exponential function. Then e is E(1). Is that important? Well, I dunno, do you think sin(1) is important? Not really, but sine and the other
are important. You often write the exponential function as e^x, which makes it seem that it is somehow "defined" in terms of e, but that's an illusion; in reality, e^x is defined to be the function
which sends 0 to 1 and is its own derivative, or as a power series
1 + x/1! + x^2/2! + x^3/3! + ....
e is defined in terms of e^x, not the other way around! Slogan: math is about functions, not about numbers.
Also, math doesn't care whether string theory is true.
posted by escabeche at 4:59 AM on September 9, 2012 [8 favorites]
Math doesn't care if anything is true. As Bertrand Russell put it, "Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is
true." It's just a bunch of assumptions and definitions and rules to connect and extend them without reference to what does or doesn't exist outside its private world.
Here (with a video!) is someone asking and answering your very question.posted by Obscure Reference at 5:45 AM on September 9, 2012 [2 favorites]
You might be interested in reading
e: The story of a number
by Eli Maor. This is the book that pushed me over the edge to become a math major. Use with caution...
posted by El_Marto at 6:11 AM on September 9, 2012 [4 favorites]
If a function f(x) is its own derivative f'(x), then that function is some constant to the x power. We've assigned the letter e to that constant. As to its ubiquity in physics, I suppose if you want,
you can consider pi ubiquitous because it deals with "circular things" and e ubiquitous because it deals with "recursively derivative things".
posted by babbageboole at 6:59 AM on September 9, 2012 [2 favorites]
Curious to your impression, El_Marto - how accessible is that Maor book? I have a foundation in higher math (calculus, as a chemistry major, but I haven't done any calculus in at least 18 years) and
have sort of been thinking it would be nice to try to somewhat reactivate my "math" brain - but I can't quite make it to trying to integrate recreationally...
posted by nanojath at 7:01 AM on September 9, 2012
The simple version:
amounts to the "unit circle" of exponential growth.
The slightly less simple version (but by no means rigorous):
Elementary Penguin
's explanation comes the closest to how
think of
, since compound interest lets us tweak the compounding period arbitrarily without worrying about things like fractions-of-a-rabbit.
So, let's say you get 100% interest per month on one dollar. After a month you will have $2.00.
If you break that down into 10% every three days, after a month you will have $2.25937.
If you break that down into 0.1% every 43.2 minutes, after a month you will have $2.62877.
And, if you break that down into
small chunks, after a month you will have... $2.71828!
It turns out that when you want to know the growth rate of a continuous process dependent on the quantity you have "now", your answer will
"scale" with
(I use that word loosely, you don't have a literal multiplicative scaling here). Faster growth / higher interest rate / larger litter size, just a bigger scaling factor; Slower growth / lower
interest rate / smaller litter size, just a smaller scaling factor, following the standard formula
e^(rate * time)
–1 (in the above example, I hid the "-1" part of that in the dollar we started with, so the
actually came out to 1.71828)
Hope that helps!
[/ False-start Filter - Female rabbits will pop out a litter of 7ish once a (lunar) month, forever. After three months, the female kits can start doing the same. They can even get "double" pregnant,
in that they can carry two litters at once with different delivery dates!]posted by pla at 8:55 AM on September 9, 2012 [1 favorite]
Nthing everything that's been said about differential equations. e--or, more accurately, the exponential function, as escabeche has pointed out--has geometry too, particularly when you start working
with complex numbers.
The exponential function e^(ix) traces out the unit circle on the complex plane if x is real, and if we allow x to be a complex number, the resulting function is prototypical for periodic functions
on the complex plane. The trigonometric functions, which are our prototypes for periodic functions of real numbers, are actually just rational functions of e^(ix).
Geometrically, numbers of the form e^(ix) serve as the complex analogues of 1 and -1 (which themselves can be written in this form), in that they parameterize operations that don't change the
magnitudes of numbers; as a consequence, they are at the heart of operations that don't distort the geometry of a space, i.e., reflections, rotations, and the like. So as soon as you do geometry, e
is lurking in the background just as much as pi.
posted by Aquinas at 9:01 AM on September 9, 2012
In order to get a satisfying answer to your first question, I think we unfortunately need to engage in a bit of psychology here. The problem is that mathematics doesn't distinguish between
"definitions" of
and "cool places" where it pops up. That's entirely a human thing. So we need to ask what it is that makes the "the circumference of a circle divided by its diameter" explanation so much more
"intuitive" than some other explanation of
(such as
4 ( 1/1 - 1/3 + 1/5 - 1/7 + 1/9 + ...)
or whatever)?
I think a big part of the answer is simply that we live in a physical world that is roughly Euclidean at our scale. Also, it's
the first definition of
that we get, and unlike most of the definitions
, that definition is totally comprehensible with out any knowledge of calculus, limits, or even trigonometry. So that definition sticks. We carry all the baggage behind the definition of Euclidean
space with us at an intuitive level, so we don't see it in the circumference/diameter definition of
doesn't have such a simple geometric definition. If you know calculus well and have absorbed it a deep level, then the comment about the
function that several others have made here will have some resonance. It's not
that's so important so much as it is that the* function that is its own derivative that's important, and that function happens to be writable in the form
. But if calculus doesn't feel as intuitive to you as Euclidean geometry (and for most of us, it isn't), then this feels like just another cool place to find
and not a satisfying intuitive definition.
There's also the famous Euler formula
e^πi = -1
, which can also be used to define
, but if you weren't won over by the calculus definition, this won't seem any more intuitive.
The limit and infinite series definitions of
(such as
(1 + 1/n)^n
1/0! + 1/1! + 1/2! + 1/3! + ...
) are considerably more satisfying than those typically given for
, although there are definitely
-related numbers with pretty satisfying definitions in this sense (most notably
π^2/6 = 1/1^2 + 1/2^2 + 1/3^2 + ...
). But since it's easy to imagine dozens of comparably simple definitions, this doesn't really help matters.
All hope is not lost if you want an intuitive definition that doesn't involve too much advanced math. It's still only an exact definition in the limit, but the circumstances described are pretty
mundane. A gambler who sees that they've got a one-in-a-hundred chance at winning might think that it'd be pretty unlikely for them to be able to play the game a hundred times and never win. But
they'd be wrong. The probability of not winning any of their hundred games is actually about 0.366. And if they play a million one-in-a-million games, that probability is approximately 0.368. What is
this converging to?
Nobody ever uses this as a definition, but why the hell not? It's reality, it's understandable, and you certainly
use it to explain the other definitions if you wanted to, especially the non-calculus ones.
*Technically, there are others, but they're all slight variations of the same thing.
posted by ErWenn at 9:05 AM on September 9, 2012 [2 favorites]
And as for your bonus question:
have many different definitions, and many of those definitions are completely independent of any sort of physical reality. And so in that sense, neither
would ever change with the laws of physics or the layout of reality. However, I think what you're driving at is that because the most common definition of
is based on Euclidean geometry, it's quite possible to imagine a world where the ratio of the circumference to the diameter is
approximately 3.14159. So the question now becomes: is there a definition of
that depends on some physical facet of the universe that we
imagine as different?
I don't know of any.
posted by ErWenn at 9:24 AM on September 9, 2012 [1 favorite]
What is this converging to? 1/e. Nobody ever uses this as a definition, but why the hell not?
I like your suggestion, but I would quibble that nobody ever uses it — it's very close to one of the standard definitions you mention, that e = lim
, so close that you could argue it's "essentially" the same.
(Details: the probability of losing n games when each has an independent probability of 1/n of being won is (1-1/n)
, so this definition gives e = lim
. Writing m=n-1, a little algebra yields (1-1/n)
= (1+1/m)
(1+1/m), which since 1+1/m→1 implies that if either of the expressions (1-1/n)
or (1+1/n)
has a limit, then they both do, and then they have the same limit.)
Still, I agree that it's a very nice elementary way of introducing the number.
BONUS: under different conditions [...] would the number "e" exist or have the same value?
In some discrete math contexts, the natural analogue of the function e^x is the function 2^n. For example, in the "finite calculus" for sequences of numbers (instead of functions on the real line),
the analogue of the derivative is the first difference operator, and the sequence which is its own first differences is the sequence 2^n. With a little imagination, you might be able to see this as
suggesting that if our universe were discrete in the right way, we'd have e=2 (but I doubt that notion would stand up to scrutiny).
posted by stebulus at 11:00 AM on September 9, 2012 [1 favorite]
Math doesn't care if anything is true.
... if you're a formalist. Platonism asserts that
is just descriptions of a perfect universe that we can't actually access. It's not just that math is true relative to itself, but that it's
true. So the concept of "e" just sort of exists whether or not we notice. There are some obvious problems with this idea, not least of which is
how do we access this Platonic realm?
Formalism basically says that math is a matter of pushing symbols around and doesn't bear any real relationship with reality. This is somewhat unsatisfactory.
There's a rather more satisfying third option called "embodied mathematics." A cognitive scientist and a psychologist published
Where Mathematics Comes From
which basically says that mathematics
meaningful because it arises from how we are in the world. It's not about pushing about abstract symbols; at its core, it's a huge tangle of metaphors that build off innate concepts like "this thing
is bigger than that thing" and "one thing plus one thing is two things" (knowledge of the latter has been demonstrated in very young infants). I think they actually put forward a (rather complicated)
metaphorical explanation of
as an example.
posted by BungaDunga at 9:21 PM on September 9, 2012
« Older What is the youngest major liq... | What is the best way to stay c... Newer »
This thread is closed to new comments. | {"url":"http://ask.metafilter.com/224108/Why-should-e-exist","timestamp":"2014-04-17T03:22:50Z","content_type":null,"content_length":"47093","record_id":"<urn:uuid:ac142534-8893-436b-bfdb-00bc8eef25ba>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
) for Warming of the Surface Seawater
Correction of f(CO[2]) for Warming of the Surface Seawater
The next calculation is the correction of the fugacity in the moist equilibrator vapor to the in situ fugacity of CO[2] in equilibrium with the surface seawater. Since the water in the equilibrator
has warmed in transiting from the bow inlet line of the ship up to the equilibrator, a correction must be applied for this warming. The equation which simultaneously describes the temperature
dependence of CO[2] solubility and carbonate equilibria is given by equation (16) in the main text from Weiss et al. (1982):
[2])/^-4t-1.839 · 10^-3 ln f(CO[2]) (eq. 27)
In the case of evaluating the warming correction of seawater in transiting to the equilibrator, [2])/
This calculation gives the fugacity reported in the data file for f(CO[2]) in surface seawater, i.e., the fugacity of CO[2] in moist air at the sea surface conditions. | {"url":"http://cdiac.esd.ornl.gov/oceans/ndp_047/datacorr047.html","timestamp":"2014-04-18T16:46:38Z","content_type":null,"content_length":"6285","record_id":"<urn:uuid:d24ad632-65d3-4219-872d-b457ab41ba6d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is Naruto the next Sage Of The Six Path's?
July 02, 2010, 06:25 AM
Re: Is Naruto the next Sage Of The Six Path's?
I am not sure if next sage or reincarnation of previous sage but I believe that sage could be reincarnated in naruto time because from what I know six realms are part of cycle of repeated
reincarnation(Saṃsāra). I personaly hope that naruto will surpass previous Sage Of The Six Path's because in naruto(manga) there is a important theme about next generation surpassing previous one
yet no one surpassed Sage Of The Six Path to prove it truth naruto should completely defeat jubi or something on his level so that's why I hope that naruto will defeat final enemy ALONE because
if he wins together with sasuke or someone else then that would mean that he is weaker then Sage Of The Six Path only way to prove then that he is stronger then Sage Of The Six Path would be if
final enemy is at least two times as strong as jubi. I also wanted to say that I am not really sure about what was madara talking about Uchiha because he could be lying about it because he was
saying before that he is not responsible for kyuubi attack but it is proven false in chapter 500 what I am trying to say is he was saying that senju have half of sage power and the Uchiha second
half but maybe it's not true maybe being child of jinchuuriki of strongest biju could have some side effect, that Idea comes from the fact that naruto is BORN with whiskers and when naruto uses
kyuubi chakra they become bigger so it proves that there is a connection between kyuubi and whiskers, maybe first child of sage inherited part of jubi power and second child part of sage power,
madara told us that uchiha inherited the Sage's eyes and chakra yet kyuubi said that chakra of sasuke is more sinister then his own and there is only one biju with more sinister chakra the jubi
also when we look at the eyes of sage he doesn't have tomoe in them only ripples yet jubi have both maybe jubi eye is a fusion of Rinnegan and his orginal eye that would mean that sharingan was
born from jubi power also when we look at the sasuke the bigger his hatred the stronger he gets and the biju's are mass of hatred so maybe final battle won't be a naruto(descended from the
younger of the two sons) vs sasuke(descended from the elder of the two sons) but a naruto(new Sage Of The Six Path) vs sasuke(new jubi).
Final of naruto(manga) naruto defeats final enemy then creates peace by overcomes circle of hatred by destroying mercenery like ninja system because they were supposed to a guardians not
mercenaries create new path for ninja like they were supposed to be in orginal vision of sage, then he should get married have bunch of children have a long life someone should write a book about
naruto something like that "legend of naruto, a ninja who surpassed the god(sage of the six path's)" a true happy ending.
ps.sorry for bad english
ps2.sorry for long post
July 05, 2010, 09:27 PM
Re: Is Naruto the next Sage Of The Six Path's?
Naruto is supposed to represent the body, Sasuke the eyes. So I wouldn't be surprised if something like what happened to Naruto in 499 happens to Sasuke except for his eyes in his training arc.
August 05, 2010, 03:48 PM
Naruto is the reincarnation of Rikudou Sennin
This series has STRONG Buddhist overtones
Naruto has walked the 6 realms and survived: http://en.wikipedia.org/wiki/Six_realms
Naruto has calmed his heart and his mind and achieved "enlightenment" (i.e. perfect sage mode) through zen training: http://en.wikipedia.org/wiki/Zen
Naruto seems to be in his last cycle of rebirth - http://en.wikipedia.org/wiki/Rebirth_%28Buddhism%29
Naruto is the sage reborn to bring peace and stability to the world before reaching Nirvana
Having physical traits =/= equal reincarnation (Just to address the whole "Nagato is Rikudou" He already has something more important than the strength of Rikudou, he has his heart.)
Example time. A candle lights another candle with its flame, the original candle is blown out. Does the second candle hold the same flame as the first?
That is reincarnation.
Naruto, imo, is the sage reborn. He is not the same person, but has the same traits, similar convictions, and unique drive towards unity. Pain was simply someone who inherited the eyes of the
Rinnegan. If Pain was the reincarnation he would have united the world, which he surely did not.
The sage was not able to completely extinguish violence and hatred from the world his first time around. He gave them tools to unite themselves in order to protect one another from the hatred
that permeated through society. The sage gave them all the ability to use their own chakra and to control the elements. It was a spiritual gift that was perverted into a weapon of war. The sage
did not finish his goal before death. Now it seems that the sage's essence is once again upon the planet to Unite the ninja world. Naruto has the ability to completely change people from withing,
impress his enemies, and to truly make people BELIEVE. It is Naruto's heart, conviction, and spirit that sets him apart from all other ninja in the series that we have caught wind of...except
one. The Rikudou Sennin.
This thread will recieve additional evidence and such as time passes. Myself and a few other came up with this theory on another website over a year and a half ago and it only caught hell in any
discussion. However, as we progress further into the manga, it seems to be turning into fact.
We'll have to see, in the end, what is actually truth.
August 05, 2010, 04:00 PM
Re: Naruto is the reincarnation of Rikudou Sennin
like the thread,but naruto was like this from the beginning.will of fire.love for his comrades but justice not death for his enemies.i just don't think he's RS by controling the nine tails
chakra,also he needs a dojutsu which we have yet to see.but i see the carry on the flame theory,remember the will of fire was passed down thru RS.hashirama lived by the same code as his
predecessors,but on an even note.
August 05, 2010, 04:07 PM
Re: Naruto is the reincarnation of Rikudou Sennin
like the thread,but naruto was like this from the beginning.will of fire.love for his comrades but justice not death for his enemies.i just don't think he's RS by controling the nine tails
chakra,also he needs a dojutsu which we have yet to see.but i see the carry on the flame theory,remember the will of fire was passed down thru RS.hashirama lived by the same code as his
predecessors,but on an even note.
This theory came about long before the obvious Rikudou mode he has now. Read the post again, I address the physical characteristics and why they mean little to nothing in the realm of
reincarnation. One does not need dojutsu in order to be the reincarnation. Just like one does not need a specific hair color, skin color, or nation of origin.
Of course Naruto was like this from the beginning of the manga. He has always been the new container for the essence of the Rikudou, and to be clear he IS his own person. Reincarnation does not
mean that you ARE the previous life it means it is a part of you, take the candle analogy to heart. An individual is not the tiny specific experiences in their lives, but the sum total of their
experiences. Same goes for lifetimes in the idea of reincarnation, one is not any specific life, but the overall sum total of all of their lives.
August 05, 2010, 04:16 PM
Re: Naruto is the reincarnation of Rikudou Sennin
all i'm saying is he is only a fraction of what the real thing was,i understand what you meant.but remember RS held all the biyuu,and had dojutsu.i don't disagree with his heart/soul being the
same,just not as powerful"yet".just makes you wonder how powerful RS was?
August 05, 2010, 04:56 PM
Re: Naruto is the reincarnation of Rikudou Sennin
Because Rikudou-Sennin was the Jyuubi jinchuuriki, it makes sense that his "peices" would have not just portions of the Jyuubi inside them, but portions of himself aswell. Because the Kyuubi is
the biggest "piece" of the Jyuubi puzzle, it has the most resemblance of Rikudou. The Kyuubi tails and silhouette was lost because only the chakra was pulled from it, not it's likeness and mind.
I'm pretty sure if some other ninja had fully mastered the Kyuubi, they'd have gained this Rikudou Mode aswell.
August 05, 2010, 05:12 PM
Re: Naruto is the reincarnation of Rikudou Sennin
^Well, actually, I'm thinking the more powerful the bijuu, the more likeness to Rikudou we'd find. And as far fetched as this may seem...I'm thinking that once Sasuke's fused with Gedou Mazou,
the combined chakras of the other bijuu will give him a similar form in some manner (since so much of Rikudou's chakra will all be within him).
He probably won't have a similar looking body, but his eyes could glow, or something to that effect. I base this on the fact that the Senju enharited the "body and chakra" of Rikudou, and the
Uchiha ehnarited the "eyes and spirit" of Rikudou.
August 05, 2010, 05:23 PM
Re: Naruto is the reincarnation of Rikudou Sennin
you're right,i didn't think about the biyuu that madara has.but what if sasuke's EMS can just control them and still be sharigan?might=out since naruto has the markings but no dojutsu.
August 05, 2010, 05:48 PM
Re: Naruto is the reincarnation of Rikudou Sennin
Because Rikudou-Sennin was the Jyuubi jinchuuriki, it makes sense that his "peices" would have not just portions of the Jyuubi inside them, but portions of himself aswell. Because the Kyuubi is
the biggest "piece" of the Jyuubi puzzle, it has the most resemblance of Rikudou. The Kyuubi tails and silhouette was lost because only the chakra was pulled from it, not it's likeness and mind.
I'm pretty sure if some other ninja had fully mastered the Kyuubi, they'd have gained this Rikudou Mode aswell.
You realize this has way more to do with than that new ability Naruto has, correct.
Naruto read the OP again. Naruto has been showing traits of the Rikudou for the longest time, even before controlling the bijuu inside of himself.
August 06, 2010, 06:01 AM
Re: Naruto is the reincarnation of Rikudou Sennin
No one else has mastered the Kyuubi the way Naruto has, as far as we know.
We were not told that Uzumaki Mito mastered the use of the Kyuubi. We just know that she sealed it into herself, and we can assume she was able to summon it and control it if need be. If she
could manipulate and control the Kyuubi as is, then there was no need to gain control over it like Naruto did.
Kushina was a container for the Kyuubi. We do not know the extent to which she had control over it. We know she could use her chakra to resist the influence, and her chains to physically restrain
it. We do not know if she was able to/had ever used it in battle. From what I would guess, it seems like she was probably forbidden to ever use it as it was top-secret that she had it sealed
inside of her.
Naruto has separated/extracted the Kyuubi's chakra from its will (and hatred), and is able to draw from and use it. As far as we know, he is the only one to ever obtain that type of "mastery".
I think it entirely possible that Rikudou Sennin used an insane amount of chakra to not only separate, but maintain the separation of the Juubi.
If he merely separated it and placed no safeguards, what was to stop them from finding each other and reforming into the Juubi? I would bet the sage poured a lot of his chakra in order to
maintain the separation.
If that assumption is true, then he would have put in chakra proportional to the strength of each bijuu. If the Kyuubi is indeed much stronger than even the Hachibi, then it would make sense that
the largest amount of Rikudou Sennin's chakra went into him. If this is true, when Naruto separated the Kyuubi chakra from it's will/hatred, then he would have access to a large amount of Rikudou
Sennin's chakra.
Also, if we do assume that Naruto is the reincarnation of Rikudou Sennin (which I do agree with to some level), it would be a fair assumption that he would be able to use his chakra fairly
easily. That may explain why when he draws from the Kyuubi chakra, instead of gaining a shroud, he takes on an appearance more similar to Rikudou Sennin himself than the Kyuubi.
To me, it seems more like a combination of the fact that Naruto may be the reincarnation of Rikudou Sennin and has access to his chakra. If you don't meet both conditions, then I would assume
possessing his chakra would not produce the same results.
June 24, 2011, 02:08 PM
Re: Is Naruto the next Sage Of The Six Path's?
Regarding the above post, there is one matter everyone is forgetting about. I agree with the possibility that the Rikudou Sennin did put some of his chakra in the bijuu, but I do not believe it
is the reason for naruto's change. For Naruto to actually look like the RS, the most obvious explanation (and usually the right one) is because he is the descendent of the younger brother,
therefore, he is of the RS's line. As far as we know, no other jichuriki was a descendant of the Rikudou Sennin.
Just my 2 cents btw.
June 24, 2011, 02:32 PM
Re: Is Naruto the next Sage Of The Six Path's?
Maybe it's just a coincidence, but doesn't it look kinda simmilar?
Uploaded with ImageShack.us
You gotta remember that the Sage also was a Jinchuriki, just the 10 tails, mastering the 9th tails will make Naruto look closer to the Jinchuriki (the Sage) mastering the 10 tails.
Nevertheless it may be a forshadowing! :p
Don't forget that Senju has the posibility to have the mutation of Rinnegan, while it is so rare that it almost never happends. Naruto beeing as special as he is, predicted to be the saviour of
the world, would likely be such a rare person, having the Senju blood and all.
But it likely wont happend before he has brobken the cycle between Senju and Uchiha as someone else mentioned, and it likely won't happend before Marada ressurects the 10 tails. Remember Naruto's
mom survived having the 9 tails ripped out, so can Naruto.
I'd love to see Naruto getting the Rinnegan, but it should be at the end of the manga in that case.. :amuse
June 25, 2011, 09:38 AM
Re: Is Naruto the next Sage Of The Six Path's?
I agree with the Jichuriki part, but I think is not all of it. Bee is the Hachibi Jinchuriki and he looks like an octopus, nothing like Naruto or the RS. But the above post does have a point.
Perhaps it is because Naruto is both a Jinchuriki and a descendant of the RS. For the Rinnegan part, I couldn't agree more. Not to mention that the last user of the Rinnegan was Uzumaki Nagato.
Isn't it too much of a coincidence that he is of the same clan as Naruto???? Or is Kishi simply trying to tell us something?
June 30, 2011, 12:45 PM
Re: Is Naruto the next Sage Of The Six Path's?
Is Naruto the next Sage Of The Six Path's?
Naruto is heir to Rikudou Sennin throne. He is an Uzumaki, Sasuke and wanna be Madara is not. Naruto is Nidaime Rikudou not Tobi. :p | {"url":"http://mangahelpers.com/forum/printthread.php?t=61931&pp=15&page=3","timestamp":"2014-04-16T22:21:09Z","content_type":null,"content_length":"30305","record_id":"<urn:uuid:f4f65119-d23f-4da2-ae80-7b41ca63ca6b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00587-ip-10-147-4-33.ec2.internal.warc.gz"} |
Method for showing a localization error and related device
Patent application title: Method for showing a localization error and related device
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
The method according to the invention is a method for showing the localization error of a plurality of points of a georeferenced image, comprising the following steps: providing a georeferenced
image, in which each image coordinate point is associated with announced values of the geographical coordinates defining the geographical localization of the object corresponding to that point of the
georeferenced image; showing the georeferenced image. The method also comprises the following steps: providing, for each point of the plurality of points of the georeferenced image, an estimated
value of the localization error specific to that point, said error not being uniform over the image; and showing the localization error for at least one point among the plurality of points of the
georeferenced image so as to make it possible for a user to view the localization error.
A method for showing the localization error of a plurality of points (P
; P
; P
) of a georeferenced image (A
; A
; A
), comprising the following steps: providing a georeferenced image (A
; A
; A
), in which each image coordinate (l, c; l
, c
; l
, c
) point (P
; P
; P
) is associated with announced values (x
, y
, z
) of the geographical coordinates defining the geographical localization of the object corresponding to that point (P
; P
; P
) of the georeferenced image (A
; A
; A
); showing the georeferenced image (A
; A
; A
); providing, for each point of the plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
), an estimated value of the localization error (ε; ε
; ε
) specific to that point (P
; P
; P
), said error (ε; ε
; ε
) not being uniform over the image (A
; A
; A
); and showing the localization error (ε; ε
; ε
) for at least one point among the plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
) so as to make it possible for a user to view the localization error (ε; ε
; ε
The method according to claim 1, wherein the localization error (ε; ε
; ε
) is shown for each point of the plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
The method according to claim 1, wherein the localization error (ε; ε
; ε
) is shown on the georeferenced image (A
; A
; A
) itself.
The method according to claim 3, wherein the georeferenced image (A
; A
; A
) and the localization error (ε; ε
; ε
) are displayed via display means (92), the localization error (ε; ε
; ε
) being displayed on the georeferenced image (A
; A
; A
) intermittently.
The method according to claim 4, wherein the localization error (ε; ε
; ε
) of a point (P
; P
; P
) of a georeferenced image (A
; A
; A
) is displayed when the user selects said point (P
; P
; P
The method according to claim 1, wherein an error map (C) is produced, said error map (C) showing the localization error (ε; ε
; ε
) for the plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
), and the representation of the localization error (ε; ε
; ε
) consists of showing said error map (C).
The method according to claim 3, wherein the error map (C) is shown superimposed on the georeferenced image (A
; A
; A
) so as to form a combined image (A
), in which the localization error (ε; ε
; ε
) associated with each of the plurality of points (P
; P
; P
) is shown by a first parameter and the object represented by that point (P
; P
; P
) is shown by a second parameter.
The method according to claim 7, wherein the error map (C) and the georeferenced image (A
; A
; A
) are displayed via display means, and the display of the error map (C) on the georeferenced image (A
; A
; A
) is intermittent, the error map (C) being displayed blinking with a blinking frequency lower than the retinal remanence frequency, i.e. comprised between
5 and 20 Hz.
A device for representing the localization error (ε; ε
; ε
) of a plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
), which comprises: means for providing a georeferenced image (A
; A
; A
), wherein each image coordinate (l, c; l
, c
; l
, c
) point (P
; P
; P
) is associated with announced values (x
, y
, z
) of the geographical coordinates defining the geographical localization of the object corresponding to that point (P
; P
; P
) of the georeferenced image (A
; A
; A
); means for showing the georeferenced image (A
; A
; A
); the device being characterized in that it also comprises: means for providing, for each point of the plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
), an estimated value of the localization error (ε; ε
; ε
) specific to that point (P
; P
; P
), said error (ε; ε
; ε
) not being uniform over the image (A
; A
; A
); and means for showing the localization error (ε; ε
; ε
) for at least one point among the plurality of points (P
; P
; P
) of the georeferenced image (A
; A
; A
) so as to allow a user to visualize the localization error (ε; ε
; ε
The method according to claim 6, wherein the error map (C) is shown superimposed on the georeferenced image (A
; A
; A
) so as to form a combined image (A
), in which the localization error (ε; ε
; ε
) associated with each of the plurality of points (P
; P
; P
) is shown by a first parameter and the object represented by that point (P
; P
; P
) is shown by a second parameter.
BACKGROUND [0001]
The present invention relates to a method for showing the localization error of a plurality of points of a georeferenced image, comprising the following steps:
providing a georeferenced image, in which each image coordinate point is associated with announced values of the geographical coordinates defining the geographical localization of the object
corresponding to that point of the georeferenced image;
showing the georeferenced image.
An image from an observation sensor is said to be georeferenced when it is provided accompanied by a mathematical function making it possible to perform a match between the points of the image and
the geographical coordinates of the corresponding points in the visualized three-dimensional world. Two types of georeferenced images exist: raw images, coming directly from the observation sensor,
and orthorectified images, also called orthoimages, which have in particular been corrected for the effects of the relief of the visualized terrain. Thus, an orthorectified image is an image whereof
the geography has been corrected so that each of its points can be superimposed on a corresponding flat map.
Any object seen in a georeferenced image can thus be localized in the visualized three-dimensional world, also called terrain. This localization is, however, tainted by errors, due in particular to
exposure circumstances, and the local relief of the visualized terrain.
For many applications, such as remote sensing or digital geography, it is important to know the error made during the localization of a given point of the image in the terrain so as to be able to
evaluate the confidence one can have in the localization indication. Currently, the suppliers of georeferenced images provide these images accompanied by the indication of the uniform average error
over the entire image. Yet the localization error varies greatly within a same image. Thus, the average error may be low, whereas in reality it is very high in certain areas of the image, which for
example have a steep relief, and low in other, rather flat areas. The indication of an average error therefore does not allow the user to determine what level of confidence he may give to the
geographical location of a given point of the image.
The invention aims to propose a system allowing a user to simply and intuitively determine what level of confidence he may have in the geographical localization announced for each point of a
georeferenced image.
To that end, the invention relates to a method for showing the localization error as defined above, characterized in that it also comprises the following steps:
providing, for each point of the plurality of points of the georeferenced image, an estimated value of the localization error specific to that point, said error not being uniform over the image; and
showing the localization error for at least one point among the plurality of points of the georeferenced image so as to make it possible for a user to view the localization error.
According to other specific embodiments, the method according to the invention comprises one or more of the following features, considered alone or according to all technically possible combinations:
the localization error is shown for each point of the plurality of points of the georeferenced image;
the localization error is shown on the georeferenced image itself;
the georeferenced image and the localization error are displayed via display means, the localization error being displayed on the georeferenced image intermittently;
the localization error of a point of a georeferenced image is displayed when the user selects said point;
an error map is produced, said error map showing the localization error for the plurality of points of the georeferenced image, and the representation of the localization error consists of showing
said error map;
the error map is shown superimposed on the georeferenced image so as to form a combined image, in which the localization error associated with each point of the plurality of points is shown by a
first parameter and the object represented by that point is shown by a second parameter;
the error map and the georeferenced image are displayed via display means, and the display of the error map on the georeferenced image is intermittent, the error map being displayed blinking with a
blinking frequency lower than the retinal remanence frequency, i.e. comprised between 0.5 and 20 Hz.
The invention also relates to a device for representing the localization error of a plurality of points of the georeferenced image, which comprises:
means for providing a georeferenced image, wherein each image coordinate point is associated with announced values of the geographical coordinates defining the geographical localization of the object
corresponding to that point of the georeferenced image;
means for showing the georeferenced image;
the device being characterized in that it also comprises:
means for providing, for each point of the plurality of points of the georeferenced image, an estimated value of the localization error specific to that point, said error not being uniform over the
image; and
means for showing the localization error for at least one point among the plurality of points of the georeferenced image so as to allow a user to visualize the localization error.
BRIEF DESCRIPTION OF THE DRAWINGS [0025]
The invention will be better understood upon reading the following description, provided solely as an example and done in reference to the appended drawings, in which:
FIG. 1 illustrates a device for determining a localization error;
FIG. 2 is a diagrammatic illustration of the relationship between a raw georeferenced image and the terrain;
FIG. 3 is a diagrammatic illustration of the method for determining a localization error according to a first embodiment;
FIG. 4 is a diagrammatic illustration of the method according to a first alternative of a second embodiment;
FIG. 5 is a diagrammatic illustration of the method according to a second alternative of the second embodiment;
FIG. 6 is a diagrammatic illustration of the relationship between an orthorectified image and a corresponding raw image;
FIG. 7 is a diagrammatic illustration of the method according to a third embodiment;
FIG. 8 illustrates a device for determining a localization error according to the third embodiment;
FIG. 9 is a diagrammatic illustration of a device for showing the localization error of each point of a georeferenced image;
FIG. 10 is a diagrammatic illustration of the method for showing the localization error of each point of a georeferenced image;
FIG. 11 is an illustration of images shown by the device of FIG. 9 according to one embodiment, the top image being a georeferenced image, and the bottom image being a corresponding error map; and
FIG. 12 is an illustration of an image shown by the device of FIG. 9 according to another embodiment, the error map being superimposed on the georeferenced image.
The geographical localization of a point P in the terrain T is defined by terrain coordinates X, Y, Z. The terrain coordinates X, Y, Z can be defined in any coordinate system adapted to define the
localization of an object in the terrain T. Traditionally, one can cite Euclidian referentials such as the 3D Euclidian referential centered on the center of the earth, or one can cite systems of
geographical coordinates where the planimetric coordinates are angular over a reference ellipsoid representing the earth (latitude and longitude coordinates) and the altimetric coordinate is linear
and measured along the local normal at the reference ellipsoid at the considered point, then lastly one can also cite systems of projected coordinates, not Euclidian but metric, where the planimetric
coordinates are expressed in meters, translated into geographical coordinates using a projection formula, usually compliant (for example Mercator, Mercator Transverse, or Universal Transverse
Mercator projections, Lambert conical projection, stereographic projection, etc.) and where the vertical coordinate is built as for the aforementioned geographical referentials (latitude, longitude,
height). It should be noted that it is possible to change references, Euclidian, geographical or cartographical, without changing the substance of the invention. To summarize and for the strict
purposes of the invention, it suffices to consider a trio of numbers X, Y, Z that uniquely determine the localization of any point on the land surface. Hereafter, these coordinates will be called
terrain coordinates.
In the rest of the description, the terrain coordinates X, Y, Z are geographical coordinates, in particular comprising planimetric coordinates X, Y and an altimetric coordinate Z.
The image for which one wishes to determine the localization error is a georeferenced image, i.e. each point of the image is associated with announced values x
, y
, z
of the terrain coordinates, which define the geographical localization in the terrain T of the object represented by that point of the image. Thus, one associates a point P of the terrain with
coordinates x
, y
, z
at each point of the georeferenced image.
The localization error refers to the error on the localization of a point of the image in the terrain T. This error primarily results from uncertainties related to:
(i) the observation sensor; and
(ii) the knowledge available about the representation of the land surface, in other words the mathematical relationship defining the land, said relationship either implicitly or explicitly connecting
together the coordinates X, Y, Z of the points of the land surface. This relationship is hereafter called the terrain model M. It is expressed as follows: M(X,Y,Z)=0 or more traditionally M(X,Y)=Z.
The localization error is expressed, for each point of the image, in units of length, for example in meters, around the announced land coordinates, i.e. around the geographical localization announced
for that point.
FIG. 1 shows a device 1 for determining the localization error ε of a point of a georeferenced image. According to one embodiment, the device 1 comprises a processing and storage unit 2 and interface
means 3 between said unit 2 and a user. The interface means 3 comprise a display device 4, for example a screen, and input peripherals 5, for example a mouse and/or keyboard. The interface means 3
are connected to the processing and storage unit 2 and for example allow the user to act on an image displayed via the display device 4. The processing and storage unit 2 comprises a computer 6, for
example a microprocessor of a computer implementing a program and storage means 7, for example a memory of the computer.
The steps of the method for determining the localization error are carried out by the device 1 under the control of the computer program.
In a first embodiment of the invention, the considered image is a raw image A
. The raw image A
is traditionally an image coming directly from an observation sensor without any geometric preprocessing. The observation sensor used to acquire the raw image A
may be of any type. It is in particular a radar, lidar, infrared or electro-optical sensor, or a multispectral or hyperspectral vision sensor. Such sensors are for example incorporated into
observation satellites, reconnaissance drones, photo devices, or onboard airplanes.
Each point P
of the raw image A
is identified within the raw image A
by image coordinates l, c defining its position in the raw image A
. The values of the image coordinates l, c are real numbers. As illustrated in FIG. 2, each point P
of the raw image A
is associated with an announced value x
, y
, z
of each geographical coordinate defining the geographical localization of the object represented by the point P
of the raw image A
in the terrain T. Thus, in a georeferenced raw image A
, each point P
is associated with a point P of the terrain T with coordinates x
, y
, z
FIG. 3 diagrammatically illustrates the method for determining the localization error of the point P0 of the raw image A0, this method for example being carried out by the device 1 under the control
of the computer program.
In one step 10 of the method, an exposure function f is provided associated with the raw image A
, as well as a terrain model M as defined above.
The exposure function f is a nonlinear function. It associates the point P of geographical coordinates X, Y, Z in the terrain T with the point P
corresponding to the raw image A
with coordinates l, c in the raw image A
. It is expressed as follows:
.sub., . . . , θ
X, Y and Z are the geographical coordinates of the point P of the terrain T;
c and l are the coordinates of the corresponding point P
in the raw image A
; and
, . . . , θ
are magnitudes depending on the exposure conditions.
Hereafter, vector θ refers to the vector whereof the components are the magnitudes θ
, . . . , θ
. Thus, θ=(θ
, θ
, . . . , θ
). Geographical localization vector V also refers to the vector whereof the coordinates are the geographical coordinates X, Y, Z. Thus, V=(X, Y, Z).
The magnitudes θ
, . . . , θ
, are random variables whereof the joint probability law D(θ
, . . . , θ
) is known. The joint law D(θ
, . . . , θ
) is either provided by the producer of the raw image A
, or can be deduced by the computer 6 from information provided by the producer of the raw image A
Thus, the producer of the raw image A
for examples provides the type of the joint law, as well as the order 1 and 2 moments, i.e. the expected value of the law, accompanied by uncertainty data generally in the form of a covariance matrix
of the magnitudes θ
, . . . , θ
In the case where the magnitudes θ
, . . . , θ
are independent and identically distributed random variables, the uncertainty data are for example the standard deviation or the variance of each magnitude θ
, . . . , θ
around its expected value.
In the case where the probability law D(θ
, . . . , θ
) is not provided, the vector θ is assumed to be a Gaussian vector, i.e. where any linear combination of the variables θ
, . . . , θ
follows a Gaussian law. In that case, the order 1 and 2 moments for each variable θ
, . . . , θ
suffice to define the joint probability law under that Gaussian hypothesis.
In the context of the method according to the invention, all of the magnitudes θ
, . . . , θ
are random variables. The invention makes it possible to incorporate the constants. They are then defined by the zero coefficients in the covariance matrix in the row and column concerning them.
The magnitudes θ
, . . . , θ
for example comprise positioning characteristics of the observation sensor during the acquisition of the raw image A
, such as its position and its orientation during the acquisition of the raw image A
, as well as the physical characteristics of the observation sensor having acquired the raw image A
, such as the size of the receiving matrices or the focal distance.
The geographical localization coordinates X, Y and Z for the point P of the terrain T associated with the point P
of the raw image A
depend on the magnitudes θ
, . . . , θ
, in particular via the exposure function f. These geographical coordinates X, Y and Z are therefore random joint law variables D(X, Y, Z). The announced values x
, y
and z
of the geographical coordinates associated with the point P
in the georeferenced raw image A
constitute particular observations of the geographical coordinates X, Y and Z.
The exposure function f of a raw image A
is generally provided with the raw image A
The exposure function f is, according to one embodiment, a physical exposure model, which is a direct translation of the exposure of the sensor. Examples of exposure models are the conical model,
which corresponds to a CCD or CMOS receiver array and represents the traditional exposure of a focal plane camera, the pushbroom model, which represents a sensor in which the receivers are organized
along a one-dimensional strip, and the whiskbroom model, which represents a sensor in which the receiver is reduced to a cell whereof the rapid movement makes it possible to form an image.
Alternatively, the exposure function f is a purely analytical replacement model. In that case, the magnitudes θ
, . . . , θ
are not each directly related to a physical parameter of the exposure, as is the case in the physical exposure model, but are translated in their entirety from the exposure conditions by the producer
of the replacement model. Examples of replacement models are traditionally the polynomial model, the rational fraction model, or the grid model. For this type of model, the producer provides a
covariance matrix for the vector θ.
The terrain model M provided in step 10 provides, in the described embodiment, for any point P of the terrain T, the altimetric coordinate Z as a function of the planimetric coordinates X and Y. It
is provided with an error model err(X, Y), modeling the error of the terrain model M as a random field whereof the probability law D(err) is known.
Thus, the terrain model M is expressed as follows:
Z is the altimetric coordinate of a point P of the terrain T,
X and Y are the planimetric coordinates of that point P,
err(X,Y) is the error of the terrain model M.
The terrain model M is for example a digital surface model (DSM) or a digital elevation model (DEM), these two models providing relief information relative to the ground surface. Alternatively, it is
a digital terrain model (DTM), which provides relief information relative to the bare soil. In the most terrain information-poor cases, this terrain model M may be reduced to a land geoid, i.e. an
equipotential of the earth gravity field coinciding with the average sea level, or a simple geometric model of the earth that can be either ellipsoid in revolution, such as, for example, the "WGS84"
World Geodetic System produced by the American National Imagery Mapping Agency (NIMA) or a simple sphere with an average earth radius or even a so-called flat earth model where the function M is
The error field err(X,Y) being a priori any relationship of the error law D(err), it will subsequently be modeled using Monte Carlo draws of the earth model M and for each draw, the earth error will
be integrated into the drawn model M. To that end, using the Monte Carlo method, and using the probability law D(err) of the error model err(X,Y), a set of observations of the terrain model M are
generated such that that set obeys the probability law D(err) of the error model err(X, Y). These Monte Carlo draws are for example done using an algorithm based on Fourier transform methods.
The terrain model M as traditionally provided by a data producer is a particular case. It corresponds to the identically zero production of the error field err(X,Y).
In step 11 of the method, the exposure function f is reversed using any suitable method, for example using the ray-tracing method, using the terrain model M, so as to obtain a localization
relationship h.
To that end, the following system is implicitly resolved, the image coordinates l, c of the point P
of the raw image A
being set:
.sub., . . . , θ
The localization relationship h is modeled as depending on a random field. Each performance of the localization relationship h is called localization function g. Each localization function g
corresponds to a performance of the error field err(X,Y), i.e. for example a particular Monte Carlo draw of the error field err(X, Y).
The localization relationship h implicitly contains, due to its attainment method, the terrain model M in the hypothesis that a Monte Carlo draw of the error field err(X,Y) of the terrain model M has
been done.
Each localization function g, i.e. each performance of the localization relationship h, is a function that is not necessarily linear. It gives, for each point P
of the raw image A
, at least some of the geographical localization coordinates X, Y, Z associated with that point P
as a function of the magnitudes θ
, . . . , θ
depending on the exposure conditions. In particular, each localization function g gives, for each point P
of the raw image A
, the three geographical localization coordinates X, Y, Z associated with that point P
as a function of the magnitudes θ
, . . . , θ
depending on the exposure conditions.
In step 20 of the method, one estimates, for the point P
of coordinates l, c of the raw image A
, the value of a characteristic statistical magnitude G of the probability law D(X, Y, Z) of the geographical coordinates X, Y, Z associated with the point P
of the raw image A
the probability law D(θ
, . . . , θ
) of the vector θ; and
at least one of the localization functions g, the or each localization function g being applied to the point P
with coordinates l, c of the raw image A
. Each localization function g corresponds to a particular performance of the localization relationship h, i.e. a given Monte Carlo draw of the terrain error err(X, Y).
Advantageously, one estimates the statistical magnitude G from each localization function g obtained by Monte Carlo draws of the terrain error err(X, Y).
The statistical magnitude G for example comprises a component G
, G
, G
according to each of the geographical coordinates X, Y, Z. It is representative of the dispersion of the geographical coordinates X, Y and Z around their respective announced values x
, y
, z
It comprises, according to one embodiment, the standard deviation of each of the geographical coordinates X, Y and Z around their respective announced values x
, y
and z
. For geographical coordinate X, the standard deviation is for example calculated using the formula:
G X
= 1 n × i = 1 n ( x i - x T ) 2 , ##EQU00001##
is an observation of the geographical coordinate X;
is the announced value of the geographical coordinate X;
n corresponds to the number of observations made.
The standard deviation is calculated similarly for geographical coordinates Y and Z.
According to alternatives or optionally, other statistical magnitudes G can be calculated among all of the well-known dispersion indicators. These include the very used statistical order criteria
corresponding to the errors at n %, where n is comprised between 0 and 100. The error at 50% is called median, and the value at 90% is often used. The traditional manner of calculating these is well
known by those skilled in the art (for example by sorting the errors and calculating the maximum of the errors among the smallest n %).
Alternatively or optionally, the statistical magnitude G comprises a planimetric component G
, representative of the dispersion of the planimetric coordinates X and Y around their announced values x
, y
, and an altimetric component G
, representative of the dispersion of the altimetric coordinate Z around its announced value z
According to the first embodiment, the statistical magnitude G is estimated for the point P
of the raw image A
using the Monte Carlo method, by setting up Monte Carlo draws according to the laws of the magnitudes θ
, . . . , θ
through at least one localization function g.
To that end, in a sub-step 210 of step 20, one generates, using the probability law D(θ
, . . . , θ
) of the vector θ provided in step 10, a set of N observations S
, . . . , S
of the vector θ. The observations S
, . . . , S
are chosen using algorithms known by those skilled in the art so that the set of observations S
, . . . , S
obeys the probability law D(θ
, . . . , θ
) of the vector θ. These algorithms are for example algorithms based on the acceptance-rejection method or on Markov process-based methods, these methods being well known by those skilled in the art.
The size of the set, i.e. the number N of observations S
, . . . , S
, is chosen by one skilled in the art, in particular as a function of the desired precision of the estimate and the number n of magnitudes A
, . . . , θ
, i.e. the dimension of the vector θ. The number N of observations of the vector θ is traditionally greater than 1000.
In a sub-step 212 of step 20, one determines, for the point P
of the raw image A
with given coordinates l, c, the results of each of the N observations S
, . . . , S
using at least one localization function g. Each result corresponds to an observation x
, y
, z
of the geographical coordinates X, Y, Z. One thus obtains, at the end of step 212, a set of N observations x
, y
, z
of the geographical coordinates X, Y, Z for each localization function g. Alternatively, one obtains a set of observations x
, y
, z
of the geographical coordinates X, Y, Z for all of the localization functions g obtained by Monte Carlo draws of the terrain error err(X, Y).
In a sub-step 214 of step 20, one estimates the probability law D(X, Y, Z) of the coordinates X, Y and Z from the observation set(s) x
, y
and z
of the geographical coordinates X, Y, Z obtained in sub-step 212.
In a sub-step 216 of step 20, one deduces the statistical magnitude G of the probability law D(X, Y, Z) of the geographical coordinates X, Y and Z. In particular, one deduces each of the components G
, G
, G
of the statistical magnitude G relative to the geographical coordinate X, Y, Z, respectively, of the probability law D(X, Y, Z).
Optionally, one also deduces the expected results E(X), E(Y) and E(Z) of the geographical coordinates X, Y, Z of the probability law D(X, Y, Z).
In step 30, one deduces, from the value of the statistical magnitude G, the localization error E of the point P
of the raw image A
. According to one embodiment, the geographical localization error E is identified, for each geographical coordinate X, Y, Z, with the corresponding component G
, G
, G
of the statistical magnitude G determined in step 20.
According to one alternative, the localization error E comprises a planimetric component ε
, dependent on the planimetric coordinates X and Y. This planimetric component is for example obtained from the components G
and G
of the statistical magnitude G respectively relative to the planimetric coordinates X and Y determined in step 20, by applying the following formula: G
= {square root over (G
)}. Alternatively, it is obtained directly from the planimetric component G
of the statistical magnitude G.
Optionally, the localization error E also comprises an altimetric component ε
depending on the altimetric coordinate Z. The altimetric component ε
is for example identified with the component G
of the statistical magnitude G relative to the altimetric coordinate Z determined in step 20.
Advantageously, the probability law D(X, Y, Z) of the geographical coordinates X, Y, Z is recorded, associated with the point P
, for example in the storage means 7.
Alternatively or optionally, the statistical magnitude G associated with the point P
, for example the standard deviation of the geographical coordinates X, Y, Z around their announced values x
, y
, z
, is recorded. Optionally, the expected values E(X), E(Y) and E(Z) of the geographical coordinates X, Y, Z are also recorded.
Advantageously, steps 10 to 30 are implemented for each point P
of the raw image A
so as to determine the localization error E of each point P
of the georeferenced raw image A
The establishment of Monte Carlo draws of the error field err(X, Y) of the terrain model M improves the precision of the estimate of the localization error ε, since the estimated error takes the
error on the terrain model M into account.
Furthermore, using the Monte Carlo method makes it possible to obtain a good estimate of the probability laws of the geographical coordinates X, Y and Z. It does, however, require a significant
number of calculations, and therefore requires a lengthy calculation time.
The method according to a second embodiment only differs from the method according to the first embodiment in that the terrain error err(X,Y) is not taken into account. In other words, it is
considered that the error on the terrain model M is zero. In that case, Monte Carlo draws are not performed on the terrain error err(X,Y), i.e. the probability law D(err) is considered identically
zero. In that case, the localization relationship h determined in step 11 is deterministic. It is called localization function g. All of the other steps are identical to the steps of the method
according to the first embodiment, except that they are applied to the single localization function g, rather than to the plurality of localization functions g.
The method for determining the localization error ε according to a first alternative of the first and second embodiments is illustrated in FIG. 4. It only differs from the method according to the
first and second embodiments of the invention through the method for estimating the statistical magnitude G used in step 20. In fact, in the first alternative, the statistical magnitude G is
estimated using the method based on a sigma-point method.
To that end, in a sub-step 220 of step 20, one chooses a set of sigma points S
, where each sigma point S
is an observation of the vector θ. Weights ω
and ω
are assigned to each sigma point S
. The set of sigma points S
is chosen so that the average and the covariance matrix calculated by weighted average from these sigma points S
respectively correspond to the expected value E(θ) and the covariance matrix P.sub.θ of the vector θ.
The sigma points S
are generated iteratively, for example using the following equations:
=E(θ)+ζ( {square root over (P.sub.θ)})
for i=1, . . . , n
=E(θ)-ζ( {square root over (P.sub.θ)})
for i=n+1, . . . , 2n
ζ Is a scalar scale factor that determines the dispersion of the sigma points Si around the expected value E(θ) of the vector θ;
( {square root over (P.sub.θ)})
, designates the i
column of the square root of the covariance matrix P.sub.θ.
The values of the scale factor ζ and the weights ω
depend on the type of sigma point approach used. According to one embodiment, the unscented transformation is used as sigma point approach type. The method for choosing the sigma points S
using the unscented transformation is known by those skilled in the art, and is in particular described in the article "Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space
Models," Rudolph van der Merwe, PhD Thesis, April 2004. Any other type of sigma point approach may also be used in the context of the method according to the invention.
In a sub-step 222 of the method according to the first alternative, the sigma points S
chosen in step 220 are propagated through the localization function g.
To that end, one for example uses the following equations:
V i
= g ( S i ) ##EQU00002## E ( V ) ≈ i = 0 2 L ω i m Vi ##EQU00002.2## P V ≈ i = 0 2 L j = 0 2 L ω i , j c v i v j T ##EQU00002.3##
and ω
are scalar weights whereof the value depends on the type of sigma point approach used.
One thus obtains one or several sets of observations x
, y
, z
of the geographical coordinates X, Y, Z.
One also obtains the covariance matrix P
of the geographical coordinates X, Y, Z and optionally the expected value E(X), E(Y), E(Z) of each of the geographical coordinates X, Y, Z.
Optionally, one estimates, in a sub-step 224, a covariance matrix P of the planimetric coordinates X, Y from the block extracted from the matrix P
corresponding to the coordinates X, Y and, optionally, the expected value E(X), E(Y) of each of the planimetric coordinates X and Y. In this sub-step 224, one also estimates the variance of the
altimetric coordinate Z from the corresponding diagonal term P.sub.v3,3 of the covariance matrix P
In a sub-step 226, one estimates the statistical magnitude G from the set of observations x
, y
, z
of the geographical coordinates X, Y, Z, and in particular from the covariance matrix P
. The standard deviation of the geographical coordinate X, Y, Z is then deduced from the square roots of the values of the diagonal of the matrix P
In the event one estimates the planimetric statistical magnitude G
relative to the planimetric coordinates X and Y, one uses the formula:
= {square root over (Pv
where P
1,1 and P
2,2 respectively correspond to the diagonal terms of the matrix P
relative to the geographical coordinate X and the geographical coordinate Y.
The altimetric component G
of the statistical magnitude G corresponds to the square root of the diagonal term P.sub.v3,3 of the matrix P
relative to the altimetric geographical coordinate Z.
In step 30, the localization error ε is deduced from the statistical magnitude G in the same way as in the first or second embodiments.
The use of the method based on the sigma point approach has the advantage of providing an accurate approximation of the expected value and variants of the geographical coordinates X, Y, Z for an
instantaneous calculation time.
The determination method according to a second alternative, illustrated in FIG. 5, only differs from the first or second embodiments of the invention through the method for estimating the statistical
magnitude G used in step 20. In fact, in the second alternative, the statistical magnitude G is estimated through linearization of the localization function g.
To that end, in a sub-step 230 of step 20, one linearizes the localization function g to obtain a linearized localization function g' around the considered point 8.
In a sub-step 232, one provides or determines, from the probability law D(θ
, . . . , θ
) of the vector θ, the covariance matrix P.sub.θ of the vector θ, and optionally the expected value E(θ).
In a sub-step 234, the covariance matrix P.sub.θ is propagated through the linearized localization function g'. To that end, one for example uses the equation:
^T [0134]
where ∇g is the gradient of g.
One thus obtains an estimate of the covariance matrix P
of the geographical coordinates X, Y and Z.
Optionally, in sub-step 234, the expected value E(θ) of the vector θ is propagated through the localization function g according to the equation
( E ( X ) E ( Y ) E ( Z ) ) = g ( E θ ) , ##EQU00003##
where E
(X), E(Y) and E(Z) are the expected values of the planimetric coordinates X and Y and altimetric coordinates Z.
One thus obtains an estimate of the expected value E(X), E(Y), E(Z) of each of the geographical coordinates X, Y and Z.
In a sub-step 236, one deduces the statistical magnitude G of the covariance matrix P
of the geographical coordinates X, Y and Z. The statistical magnitude G in particular comprises the standard deviation of each of the geographical coordinates X, Y and Z around its respective
announced value x
, y
and z
This statistical magnitude G is deduced from the covariance matrix P
in the same way as in the first alternative.
In step 30, the localization error ε is deduced from the statistical magnitude G in the same manner as in the second embodiment.
The method according to the second alternative has the advantage of being faster to implement than the methods according to the first and second embodiments and according to the first alternative.
However, the localization error obtained is less precise due to the use of the linearized localization function g'.
The methods according to the first and second alternatives are advantageously implemented as alternatives of the method according to the second embodiment, in which one does not take the error of the
terrain model into account.
In the first and second embodiments, as well as in the first and second alternatives, the statistical magnitude G and the localization error E have been estimated relative to the two planimetric
geographical coordinates X, Y and the altimetric geographical coordinate Z or relative to a combination of the planimetric coordinates X and Y.
Alternatively, the statistical magnitude G and the localization error E are estimated relative only to some of the geographical coordinates X, Y, Z, in particular relative to one or two of those
coordinates. In fact, in certain cases, it is not essential to have information on the localization error according to each of the geographical coordinates.
In the case where one uses a different system of coordinates to localize a point P in the terrain T, the statistical magnitude G and the localization error E are calculated relative to at least one
of those coordinates, and for example relative to each of those coordinates or relative to combinations of those coordinates.
The device 1 illustrated in FIG. 1 is capable of implementing the method according to the first embodiment, the second embodiment, or according to the first or second alternatives.
To that end, it comprises means 60 for providing the exposure function f, the terrain model M, the probability law D(θ
, . . . , θ
) of the magnitudes (θ
, . . . , θ
), and any probability law D(err) of the error field err(X,Y) of the considered terrain model M. These means 60 are incorporated into the computer 6, the engagement function f, the terrain model M,
as well as the probability law D(θ
, . . . , θ
) of the magnitudes (θ
, . . . , θ
), and the probability law D(err) of the error field err(X,Y) of the terrain model M for example being stored in databases in the storage means 7.
The device 1 also comprises:
means 62 for deducing a localization relationship h from the exposure function f and using the terrain model M;
means 64 for estimating, using at least one localization function g applied for the point P
of image coordinates l,c of the raw image A
and the probability law D(θ
, . . . , θ
) of the magnitudes θ
, . . . , θ
, the value of the characteristic statistical magnitude G of the probability law D(X, Y, Z) of at least one of the geographical coordinates X, Y, Z associated with the point P
of the raw image A
; and
means 66 for deducing, from the value of the statistical magnitude G, the geographical localization error E of the point P
of the raw image A
The means 62, 64 and 66 are incorporated into the computer 6 of the processing and storage unit 2.
The storage means 7 in particular comprise the image coordinates l, c defining the position of each point P
in the raw image A
, the announced values x
, y
, z
of the geographical coordinates corresponding to each point P
in the raw image A
, and one or more of the following data: the exposure function f and/or the terrain model M accompanied by its error field err(X,Y).
The determination method described in reference to the first embodiment, the second embodiment, and its first and second alternatives, as well as the related device have the advantage of making it
possible to evaluate the localization error at each point of the georeferenced raw image. The estimated localization error thus takes the spatial variability of the localization error into account.
Furthermore, the use of the statistical estimation methods described above makes it possible to obtain a precise estimate of the error, despite the non-linearity of the or each localization function.
In the event one takes the error field of the terrain model into account, the precision of the statistical estimate of the localization error is improved, since it also takes uncertainties coming
from that model into account. Lastly, the localization error is estimated for each point without calling on support points whereof the geographical coordinates are known with certainty. In this way,
it can also be calculated for points of raw images acquired in areas in which one does not have support points with known geographical coordinates.
In a third embodiment of the invention, the georeferenced image is a georeferenced image A
built from one or more raw images A
In the following, the georeferenced image is an orthorectified image A
, also called orthoimage, built from the raw image A
or a plurality of raw images A
FIG. 6 illustrates the relationships between the orthorectified image A
, the terrain T and the raw image A
from which the orthorectified image A
has been built.
Traditionally, an orthorectified image is an image this has been filtered for the influence of the visualized relief. Its geometry has been rectified so that each point can be superimposed on a
corresponding flat map. In other words, it appears to be taken vertically for all points P of the terrain T that it represents, these points P being situated on a perfectly flat terrain; in
particular, the scale of an orthorectified image is uniform over the entire image.
The orthorectified image A
is built, in a known manner, from one or more raw images A
. It comprises points P
, each point P
being identified within the orthorectified image A
by coordinates l
, c
defining its position in the orthorectified image A
. By construction, the values l
, c
of the coordinates of each point P
of the orthorectified image A
correspond to the announced values x
, y
of the planimetric coordinates defining the geographical localization of the object represented by the point P
in the terrain T using a bilinear correspondence. The announced value z
of the altimetric coordinate corresponding to the point P
of the orthorectified image A
is obtained using the terrain model M. Thus, the orthorectified image A
is by nature a georeferenced image whereof the exposure function f is a simple linear function.
The method according to the third embodiment is a method for determining the localization error ε
of a point P
of the orthorectified image A
In the context of this method, the producer of the georeferenced image A
the exposure function f associated with the or each raw image A
the probability law D(θ
, . . . , θ
) of the magnitudes θ
, . . . , θ
depending on the capture conditions for the or each raw image;
the terrain model M, as well as the probability law D(err) of its error field err(X,Y) if any.
The computer 6 then deduces the localization relationship h from the exposure function f and using the terrain model M.
In step 40 of the method according to the third embodiment, one determines the point P
of the raw image A
from which the point P
of the orthorectified image A
was built.
To that end, in a sub-step 400, one determines, using the terrain model M, in which the terrain error err(X, Y) has been taken to be equal to zero and the announced values x
, y
of the planimetric coordinates identical by construction to the coordinates l
, c
of the point P
, the announced value z
of the altimetric coordinate corresponding to the point P
of the orthorectified image A
. One thus obtains the announced values x
, y
, z
of the geographical coordinates defining the geographical localization of the point P
In a sub-step 410, one applies the exposure function f to each point P
of the orthorectified image A
, i.e. to the announced values x
, y
, z
of the geographical coordinates so as to obtain the values of the coordinates l, c of the point P
of the corresponding raw image A
. During the application of the exposure function f, one identifies the magnitudes θ
, . . . , θ
with their expected values indicated by the producer of the raw image A
Thus, at the end of step 40, one has determined the point P
of the raw image A
from which the point P
of the orthorectified image A
was built, i.e. the point P
of the raw image A
corresponding to the point P
of the considered orthorectified image A
. In that context, the values l, c of the coordinates of the point P
are real numbers that are not necessarily integers.
At the end of step 40, one applies steps 10, 11, 20 and 30 of the method according to the first embodiment, the second embodiment and its first or second alternatives to the point P
of the raw image A
corresponding to the point P
of the orthorectified image A
determined in step 40.
At the end of step 30, one has obtained an estimate of the localization error E of the point P
of the raw image A
from which the point P
of the orthorectified image A
was built.
In step 50 of the method, one identifies the localization error E of the point P
of the raw image A
with the localization error ε
of the point P
of the orthorectified image A
Optionally, one reproduces steps 10 to 50 for each point P
of the orthorectified image A
. One thus obtains the localization error ε
of each point P
of the orthorectified image A
The method according to the third embodiment has been explained for an orthorectified image A
. The method applies in the same way to any georeferenced image, formed from one or several raw images, on the condition that one is capable of making each point of the georeferenced image correspond
to a point of a raw image from which it was built.
FIG. 8 illustrates a device 70 for determining the localization error of a point P
of the georeferenced image A
. This device 70 only differs from the device 1 illustrated in FIG. 1 in that it also comprises:
means 74 for determining a point P
of coordinates l, c of one of the raw images A
from which the point P
of the georeferenced image A
was built;
means 76 for deducing the localization error ε
of the point P
of the georeferenced image A
from the geographical localization error ε of the point P
of the raw image A
The means 74 and 76 are incorporated into the computer 6 of the processing and storage unit 2. In that case, the storage means 7 also comprise the coordinates l
, c
defining the position of each point P
in the georeferenced image A
, which is in particular an orthorectified image.
The device 70 is thus capable of also carrying out steps 40 and 50 of the method according to the third embodiment under the control of an adapted computer program.
The determination method described in reference to the third embodiment, as well as the related device, have the advantage of making it possible to evaluate the localization error at each point of
the georeferenced image built from a raw image, and in particular from an orthorectified image. The estimated localization error thus takes the spatial variability of the localization error into
account. Furthermore, the use of the statistical estimation methods described above makes it possible to obtain a precise estimate of the error, despite the nonlinearity of the localization function.
Lastly, in the case where one takes the statistical model of the terrain error into account, the estimated localization error also takes the uncertainties coming from the terrain model into account.
The invention also relates to a device 80 for showing the localization error ε
of a plurality of points P
of a georeferenced image A
. This device 80 is shown diagrammatically in FIG. 9. It comprises:
means 82 for providing the georeferenced image A
to be shown;
means 84 for providing, for each point of the plurality of points P
of the georeferenced image A
, an estimated value of the localization error ε
specific to that point P
, said error not being uniform over the image A
means 86 for showing the georeferenced image A
; and
means 88 for showing the localization error ε
for at least one point among a plurality of points P
of the georeferenced image A
, advantageously for each point of the plurality of points P
, so as to allow a user to visualize the localization error.
The georeferenced error A
to be shown is recorded in a database 90. The database 90 is for example stored in a storage means, such as a computer memory. It associates each point P
of the georeferenced image A
with coordinates l
, c
in the georeferenced image A
the announced values x
, y
, z
of the corresponding geographical coordinates, defining the localization in the terrain T of the object shown by the point P
a value V attributed to said point P
, for example an intensity or radiometry value, said value V being representative of the object represented by point P
; and
the localization error ε
specific to that point P
FIG. 10 diagrammatically illustrates the method for showing the localization error ε
in at least one plurality of points P
of the georeferenced image A
In step 700 of that method, the means 82 for providing the georeferenced image A
provide the georeferenced image A
, for example upon request by a user. To that end, they connect to the database 90 and retrieve data therefrom relative to the georeferenced image A
; in particular, they retrieve, for each point P
of the georeferenced image A
, the announced values x
, y
, z
of the corresponding geographical coordinates, as well as the value V attributed to that point P
In step 800, they provide the data retrieved from the database 90 to the means 86 for showing the georeferenced image A
. These means 86 then show the georeferenced image A
so as to allow the user to visualize it. To that end, the means 86 for example display the georeferenced image A
on a display screen 92 or print the georeferenced image A
In step 1000, the means 84 for providing the estimated value of the localization error ε
connect to the database 90 and retrieve therefrom, for at least one plurality of points P
of the georeferenced image A
, and advantageously for each point P
of the georeferenced image A
, the estimated value of the localization error ε
corresponding to each of said points P
In step 1100, the means 88 for showing the localization error show the localization error ε
corresponding to each point P
and supplied by the means 84 in step 1000. To that end, they for example produce an error map C showing, for each point P
of the georeferenced image A
, the estimated value of the localization error ε
. In the error map C, the localization error ε
is for example coded by the color attributed to the corresponding point P
. Thus, a color level is made to correspond to each value or range of possible values of the localization error ε
. The color coding is for example done using a computer function of the colormap type, this function making a shade of color correspond to each possible value of the localization error ε
. The scale of the colors can for example extend from green to red, green representing the areas of the image A
in which the localization error ε
is below a first threshold, for example smaller than the typical distance in the terrain T between two consecutive pixels of the image A
, red representing the areas of the image A
in which the localization error ε
is above a second threshold, for example above 10 times the first threshold, and yellow representing the intermediate areas, in which the localization error ε
is comprised between the first threshold on the second threshold. These threshold values are to be defined according to the needs related to the considered application. It is also possible to
translate the histogram of the localization errors ε
with statistical quantities.
Alternatively, the localization error ε
is coded using shades of gray, the intensity of a point for example being lower as the localization error ε
is high.
The means 88 for showing the localization error show the error map C, for example by displaying it on the display screen 92, advantageously near the georeferenced image A
, in particular under the georeferenced image A
, as shown in FIG. 11, so as to allow the user to visualize both the georeferenced image A
and the corresponding error map C at the same time. According to one alternative, the representation means 88 print the error map C.
The color code or gray shading used to code the level of the localization error ε
at each point of the georeferenced image A
has the advantage of allowing the user to have a synthesized version of the variability of the localization error ε
on the georeferenced image A
According to the second alternative illustrated in FIG. 12, the means 88 for showing the localization error ε
show the error map C by superimposing it on the georeferenced image A
so as to form a combined image A
. In that case, in the combined image A
, the localization error is shown by a first parameter, for example the color shade, while the value V (radiometric value or intensity) of the corresponding point of georeferenced image A
is shown by a second parameter, for example the level of gray. Furthermore, the device 80 comprises means for adjusting the transparency of the error map C superimposed on the georeferenced image A
. According to this alternative, the error map C is shown superimposed on the georeferenced image A
permanently. Alternatively, it is shown intermittently on the georeferenced image A
. To that end, it is for example displayed on the georeferenced image A
blinking, with a blink frequency greater than 0.5 Hz and lower than 20 Hz so as to cause a remanence of the error map C on a user's retina in the blinking interval.
Steps 1000 and 1100 are for example implemented simultaneously with steps 700 and 800.
The method according to the second embodiment only differs from the method according to the first embodiment by the steps described below.
In step 1200, implemented after step 800 for showing the georeferenced image A
, and before step 1000, the user selects a point P
of the georeferenced image A
, for example using a mouse pointer or through entry using a computer keyboard.
During step 1000, the means 84 for providing the estimated value of the localization error ε
retrieve, from the database 90, only the estimated value of the localization error ε
corresponding to said point P
, and not the estimated value of the localization error ε
of each point P
or of a plurality of points P
of the georeferenced image A
During step 1100, the means 88 for showing the localization error show the localization error ε
corresponding to the point P
and provided by the means 84 in step 1000. To that end, they for example display, near the point P
or superimposed on the point P
, a label on which information is indicated relative to the localization error ε
, optionally accompanied by the announced coordinates x
, y
, z
of the geographical localization P
Optionally, the label also comprises an indication of the expected value E(X). E(Y), E(Z) of each of the geographical coordinates X, Y, Z.
The information relative to the localization error ε
is for example a histogram of the probability law D(X, Y, Z) of the geographical coordinates X, Y, Z.
Optionally or alternatively, it involves the standard deviation of each of the geographical coordinates X, Y, Z around its respective announced value x
, y
, z
Alternatively, it involves the planimetric standard deviation, representative of the planimetric error, i.e. the localization error relative to the planimetric coordinates X and Y and/or the
altimetric standard deviation, corresponding to the standard deviation of the altimetric coordinate Z around its announced value z
The representation method according to the second embodiment has the advantage of allowing the user to visualize the localization error ε
associated with the point P
of his choice of the georeferenced image A
The localization error ε
shown by the device 80 implementing the representation method as described above is for example a localization error ε
calculated using the method for determining the localization error described above, and recorded in the database 90.
The georeferenced image A
is for example a raw georeferenced image, such as the raw georeferenced image A
or an orthorectified image such as the orthorectified image A
Patent applications by THALES
Patent applications in class Three-dimension
Patent applications in all subclasses Three-dimension
User Contributions:
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20120212479","timestamp":"2014-04-20T06:34:07Z","content_type":null,"content_length":"102611","record_id":"<urn:uuid:9e657648-c778-420e-831e-48f818478da5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fundamentals of Transportation/Trip Generation/Problem2
Modelers have estimated that the number of trips leaving Rivertown (To) is a function of the number of households (H) and the number of jobs (J), and the number of trips arriving in Marcytown (Td) is
also a function of the number of households and number of jobs.
$To =1H+0.1J;R2 =0.9 \,\!$
$Td =0.1H+1J;R2 =0.5 \,\!$
Assuming all trips originate in Rivertown and are destined for Marcytown and:
Rivertown: 30000 H, 5000 J
Marcytown: 6000 H, 29000 J
Determine the number of trips originating in Rivertown and the number destined for Marcytown according to the model.
Which number of origins or destinations is more accurate? Why?
Last modified on 4 September 2011, at 04:14 | {"url":"http://en.m.wikibooks.org/wiki/Fundamentals_of_Transportation/Trip_Generation/Problem2","timestamp":"2014-04-17T09:40:14Z","content_type":null,"content_length":"15477","record_id":"<urn:uuid:9afa260d-18ba-46c2-ba42-3823b01dab3a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Modal Logic & Topology
How do we unify modal logic with topology or perhaps complex analysis/probability&random variables?
Provided the principles of modal logic, is it possible to translate philosophy into computer language and create the real AI?
Is it possible to apply modal logic to Shor's computational algorithm? How about quantum encryption? Is anyone doing any research on this field? | {"url":"http://www.physicsforums.com/showpost.php?p=715542&postcount=1","timestamp":"2014-04-17T18:37:54Z","content_type":null,"content_length":"9125","record_id":"<urn:uuid:60a81895-50f3-4db7-8cd4-3cc74dca03c3>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Web Publishing: Northern Valley
Home Resources
Unit 1 Language & notation of algebra.ppt
UnitII Equations.ppt
Unit III Inequalities.ppt
Using the Calculator to Graph Parabolas0.pdf
Formula Sheet.pdf
Simplifying RadicalsNotes.pdf
mean, median, mode notes.pdfAddPolynomials0.ppt
Top of Page
Unit 1 Language & notation of algebra.ppt
UnitII Equations.ppt
Unit III Inequalities.ppt
Using the Calculator to Graph Parabolas0.pdf
Formula Sheet.pdf
Simplifying RadicalsNotes.pdf
mean, median, mode notes.pdfAddPolynomials0.ppt
Unit 1 Language & notation of algebra.ppt
UnitII Equations.ppt
Unit III Inequalities.ppt
Using the Calculator to Graph Parabolas0.pdf
Formula Sheet.pdf
Simplifying RadicalsNotes.pdf
mean, median, mode notes.pdfAddPolynomials0.ppt | {"url":"http://www.nvnet.org/~Ben-Meir/?OpenItemURL=S01218C29","timestamp":"2014-04-17T04:29:45Z","content_type":null,"content_length":"24009","record_id":"<urn:uuid:4c5d63ab-3c6e-43fe-b4f3-4ff49a09321c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Units in Physics (mechanical, electricity, magnetism, light and optics) including Si units.
This is a reference list with notes of all SI and derived units in physics. The notes provide a brief explanation of some of the more confusing elements, but be warned that the full explanation could
take many pages, and may be explained elsewhere on this website.
Physics has only 5 base units. (Plus the SI units Mole and Candela, but these are rarely used in Physics.)
Name Abbreviation (Symbol) Standard Unit Notes
Name Abbreviation (Symbol) Standard Unit Notes
Length l, x (for distances) Meter (m) A meter is defined as the distance light travels in a vacumm in
Mass m, M (when used with measurements in Kilogram (kg) A kilogram is defined as the weight of a specific platinum-iridium cylinder
Time t Second (s) Seconds are defined as 9,192,631,770 vibrations of radiation from a cesium atom
Temparature T Kelvin (K) A degree kelvin is defined as
Electric I Ampere (A) An ampere is the amount of charge (C) passing through a surface per second, and is defined as the current which produces a
Current force of
Each of these base units is defined on fundamental constants, and all other units are based on these five units. At times it useful to break longer equations down to their most basic units to
determine if the equation makes sense. The most common combinations of these basic units are given their own symbols and names. These common units are as follows.
Name Abbreviation (Symbol) Unit – Notes
(alphabetically) Derivation
Name Abbreviation (Symbol) Unit – Notes
(alphabetically) Derivation
Acceleration a Acceleration is literally the rate of change of the rate of change of an object’s position.
Angle radian A radian is defined as the angle an arc length, equal to the circle’s radius, makes with the center of the circle.
Capacitance C Farad (F)
Charge Q ), q ,e (of Coulomb (C) Literally the charge is the amount of current that flows over the entire time period.
elementary particles)
Density Density is the amount of mass in every cubed unit length
Displacement s, d (distance), h meters – m Displacement is the total change in length in any single direction. Sometimes it is used as the absolute change in distance –if you were walk all
(height) the way around the earth less one meter, your displacement would be one meter
Electric Field E An electric field is the amount of electric potential over any given area.
Electric Flux Electric flux is the electric field through some area.
Electromotive Volt (V) Electromotive force is a potential difference in volts.
Force (emf)
Electron Volt eV The Electron Volt is the amount of energy change of a charge-field system when a charge of magnitude e is moved through a potential difference of
1V. It is used in place of the Joule (energy).
Energy E (total),U Joule (J)
(potential),K (kinetic)
Entropy S
Force F Newton (N)
Frequency Hertz (Hz)
Heat Q Joule (J) Joule is used for work, heat and energy, but remember that Joule is a unit of energy not of energy transfer. This means that heat can have a Joule
of energy, but can’t be measured in Newtons.
Magnetic Field B Tesla (T)
Magnetic Flux Weber (Wb)
Momentum p
Potential V or Voltage (V) Potential is the amount of excess charge. It can be compared with potential energy.
Power P, Power is the amount of work done in any given time.
Pressure P Pascal (Pa)
Resistance R Ohm Resistance is the amount of energy that is lost in the transfer of energy through an object.
Torque Torque is usually shown with units N*m even though it is technically joules.
Velocity Velocity is the speed and direction of an object.
Wavelength Meter (m)
Work W Joule (J) Work is the amount of force outputted over some distance.
For of the derived units such as electric flux there are multiple possible unit definitions. The two standard definitions for electric flux are
This derivation only used units given in the above table, but there are other ways. | {"url":"http://anthologyoi.com/units-in-physics-mechanical-eletricity-magnetism-light-and-optics-including-si-units/","timestamp":"2014-04-17T00:56:42Z","content_type":null,"content_length":"54392","record_id":"<urn:uuid:703260f3-0c9b-44d0-bce6-4aae2fcf64c6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
BotEC: Walking to the Center of the Earth
The earth's interior is composed of three main concentric zones: the crust, the mantle, and the core. The outermost layer, the crust, averages 40 km thick on the continents and is thinner (averaging
8 km thick) under the oceans. The middle layer, the mantle, is an average of 2900 km thick, and the core, the innermost layer, has an average radius of 3470 km, about the size of the planet Mars.
1. Assuming that you can walk 10 miles in a day (and that you can stand the heat and pressure!), how many days would it take you to walk to the crust-mantle interface, if you started from the land
surface? Round off to the nearest whole day. (Hint: 1 mile =1.6 km)
2. How many more days would it take to get to the center of the earth?
References and Resources
This SERC page describes the use of Back of the Envelope Calculations
A View from the Back of the Envelope (more info) : This site has a good number of easy simulations and visualizations of back of the envelope calculations.
The Back of the Envelope : This page outlines one of the essays in the book "Programming Pearls" (ISBN 0-201-65788-0). The book is written for computer science faculty and students, but this portion
speaks very well to back of the envelope calculations in general.
Controlled Vocabulary Terms
Resource Type: Activities:Classroom Activity:Short Activity
Special Interest: Quantitative
Quantitative Skills: Estimation
Ready for Use: Ready to Use
Topics: Chemistry/Physics/Mathematics | {"url":"http://serc.carleton.edu/quantskills/activities/botec_earthscale.html","timestamp":"2014-04-20T11:06:56Z","content_type":null,"content_length":"21345","record_id":"<urn:uuid:228001cc-8376-49d2-94e1-1f4d066db2b2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integral could lead to Hypergeometric function
How can I perform this integral
\int^∞_a dq \frac{1}{(q+b)} (q^2-a^2)^n (q-c)^n ?
all parameters are positive (a, b, and c) and n>0.
I tried using Mathemtica..but it doesn't work!
if we set b to zero, above integral leads to the hypergeometric function!
I'll preface this by asking what it is for, but I'll try to provide a partial solution, too.
To start, what on Earth is this for? We do we come up with such silly things to integrate?
Second, let's see if we can simplify things considerably:
I'm thinking we might just approach this by means of partial fractions. On cursory examination, I don't see a contour that would simplify things, so brute force might be necessary. | {"url":"http://www.physicsforums.com/showthread.php?s=067d138119fca7ca43442acb6b9d0f06&p=4567575","timestamp":"2014-04-24T06:28:58Z","content_type":null,"content_length":"37135","record_id":"<urn:uuid:4aef7b33-27c0-408d-893d-1c5b62986175>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lyndhurst, NJ Math Tutor
Find a Lyndhurst, NJ Math Tutor
...I've been teaching 3D-animation with Maya for over 4 years. Prior to that, I taught my robotics teams to create 3D-animations for competitions using Maya and 3D-Studio Max. I'm also proficient
with GNU 3D programs such as Blender.
83 Subjects: including discrete math, SAT math, Java, statistics
...As for teaching style, I feel that the concept drives the skill. If you have the idea of what to do on a problem, you do not need to complete 10 similar problems. As such, I like to spend more
time on the why than the what.
26 Subjects: including trigonometry, linear algebra, logic, ACT Math
...I have also taught Algebra 1, Algebra 2, Geometry and SAT math. I have been a teacher for 8 years. Half of that time I taught mathematics.
10 Subjects: including trigonometry, discrete math, logic, algebra 1
...Additionally, calculus has remained an essential part of the "bread and butter" that I have used to understand the dynamic variations in the populations of cells undergoing therapeutic
treatment during my work in the National Cancer Institute PSOC Network. I can work with you using your school c...
13 Subjects: including algebra 1, algebra 2, calculus, differential equations
...Even more exciting is watching that light-bulb go on when I am tutoring. I have experience tutoring students of all ages, (elementary through graduate school) in many subject areas - although
my real passion is math. I have a BA in statistics from Harvard and will be starting nursing school shortly.
18 Subjects: including algebra 1, algebra 2, biology, chemistry
Related Lyndhurst, NJ Tutors
Lyndhurst, NJ Accounting Tutors
Lyndhurst, NJ ACT Tutors
Lyndhurst, NJ Algebra Tutors
Lyndhurst, NJ Algebra 2 Tutors
Lyndhurst, NJ Calculus Tutors
Lyndhurst, NJ Geometry Tutors
Lyndhurst, NJ Math Tutors
Lyndhurst, NJ Prealgebra Tutors
Lyndhurst, NJ Precalculus Tutors
Lyndhurst, NJ SAT Tutors
Lyndhurst, NJ SAT Math Tutors
Lyndhurst, NJ Science Tutors
Lyndhurst, NJ Statistics Tutors
Lyndhurst, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/lyndhurst_nj_math_tutors.php","timestamp":"2014-04-18T23:49:00Z","content_type":null,"content_length":"23657","record_id":"<urn:uuid:9ade1f3f-394e-4f0e-8d54-3a9863671baa>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
illiMath2001 NSF VIGRE REU at Mathematics UIUC
Last edited 23may03 by gfrancis
Find this document at http://new.math.uiuc.edu/im2001
Jump to: calendar
report 5aug01
Summer Research Experience: illiMath2001
11 June - 3 August
Program Director: George Francis
email: gfrancis@math.uiuc.edu
In this program students with strong interest and some experience in mathematical graphics participated in research programs of their associated mentor (AM). The program provided the facilities (and
any necessary training in their use) for the student to assist their AM in creating visual materials illustrating research in a variety of media, including the production of custom soft-ware,
web-pages, animations, print quality images and virtual environments (CAVE and CUBE). The program also provided the student with tutorials in the mathematical background and lectures by specialists
in the subjects of their projects. A similar REU program, the Audible Sketchpad for the CAVE, was conducted for the past two years. Here are the titles for the projects this summer, their student
principal investigator (SPI), associated mentor (AM) and corresponding mentor (CM), and brief synospis are here.
"Alice on the Eightfold-Way"; SPI: Matthew Woodruff and Ben Bernard; AM: Dr. Ben Schaeffer, ISL, Beckman, UIUC; CM: Dr. Jeff Weeks, Canton, NY and Prof. John Sullivan, UIUC
The Integrated Systems Lab (ISL) of the Beckman Institute is building the CUBE, which is the first rigid walled 6-sided CAVE in the US. This long-range project to implement Thurston's Geometries in
the CUBE got underway this summer. The collaboration between im2001 students, ISL scientists, the Math Dept, and Dr. Jeff Weeks, achieved proof-of-concept implementations of five of Thurston's eight
geometries, programmed in {\tt syzygy}, the experimental CUBE library for distributed graphics computing on Linux clusters. For this purpose a syzygy linux cluster was created in Altgeld Hall. See
our preprint of preliminary results.
Kairomone"; SPI: Lorna Salaman and Matthew Woodruff; AM: Dr. Karen Shuman, VIGRE Post-doc, UIUC Math Dept; CM: Prof. Robert Acar, Math Dept, U. Puerto Rico at Maiaguez.
The corn root worm (diabrotica) is a serious agricultural pest. The late UIUC entomologist, Robert Metcalf, his coworkers and students, have developed a model for integrated pest management involving
naturally occurring chemical food attractants, the kairomones, that influence the behavior of the diabrotica beetle in a corn field ringed with kairomone baited traps. A real-time interactive
computer animation modelling diabrotica behavior begun last summer was completed, and extensive documentation, useage instructions and background material was assembled this summer. A summary was
presented as a PME talk at MathFest, Madison, WI.
"Narnia"; SPI: Alison Ortony; AM: Elizabeth Denne, graduate student UIUC Math Dept; and Stuart Levy, senior research programmer, NCSA; (CM) Prof. John Sullivan, Math Dept, UIUC
Some years ago Stuart Levy initiated a project of adapting to the CAVE the software package polycut by Prof. Ken Brakke of Susquehanna College. Polycut simulates navigating through 3-manifolds which
are branched coverings of 3-space, with knotted and linked branching curves. The mathematics of this subject generalizes the classical theory of Riemann surfaces by one dimension, involving topology,
geometry and group theory. Though a complete solution to this visualization problem still eludes us, extensive geometrical documentation and evaluation of extant software was undertaken this summer
and presented as a PME talk at MathFest, Madison, WI.
"CAVE Navigator" AM: Dr. Volodymyr Kindratenko, NCSA
This project will systematize the scores of different modes people have devised over the past decade for navigating the virtual worlds in the CAVE immersive virtual environments at the NCSA and
elsewhere. The mathematics here involves Lie Groups, in particular the double covering of SO(2) by the unit quaternion group. But a thorough command of the 3D calculus of curves and surfaces is an
adequate beginning. Additional information will be provided by the project staff.
"Bishop Coaster"; SPI: Ben Farmer; AM: Michael Pelsmajer, graduate student UIUC Math Dept; and Prof. Paul McCreary, Xavier University, New Orleans;
As we learned in Advanced Calculus, the Frenet frame along a differentiable space curve consists of a unit tangent vector, the unit normal vector pointing towards the center of curvature, and their
cross-product, the binormal. At inflection points and along straight sections of the 3D track the Frenet frame is undefined. For these and other reasons, it yields to a more robust, less
temperamental frame invented by Richard Bishop. This project investigates the relationship between these two framings and explores applications of the Bishop Frame to common problems in 3D-geometry.
"carniBeats, soniBlui"; SPI: Ben Shanbaum; AM: Michael Pelsmajer, graduate student UIUC Math Dept; and Prof. Guy Garnett, UIUC Music Department.
Two complementary solutions to a problem posed by Guy Garnett on how to explore sonic space in the CAVE. The first is an application of the Bishop Frame algorithms to a carneval ride with a series of
swiveling space shuttles attached to a 3D-Lissajou based track. The second is a sonification of the gesture based 3D-painting RTICA, ``Alaska Blui'', by Chris Hartman.
"Quaternions"; SPI: Robert Shuttleworth; AM: George Francis.
This RTICA implements an efficient quaternion-to-matrix, and matrix-to-quaternion translation algorithm, and applies it to compare a quadratic Bezier spline to succesive SLERPs in a camera path.
"Silhouette"; SPI: Doug Nachand; AM: Prof. John Hart, UIUC CS Dept.
"Phillips and DeWitt Eversions"; SPI: Doug Nachand; AM: George Francis.
There remains several sphere eversions that have not yet been made computational. Two (technically related) of these are the Tony Phillips eversion (Scientific American, 1966) which is the first
published one, and Bryce DeWitt (maybe non-) eversion, which was proposed (but never "proved") in 1967 at the same Batelle conference which spawned the Froissart-Morin eversion. Both eversions are
based on horizontal slices (plane curves) of a surface undergoing a regular homotopy.
"Schprel: Special Relativity Project"; SPI: Mark Flider; AM: George Francis.
The purpose of this project is to show the apparent distortion of objects to a moving observer when the speed of light is slow enough to become non-negligible in relativistic physics. It was
developed as an RTICA (Real-Time Interactive Computer Animation) for Prof. George Francis' Math 198 and is now being developed in the NCSA's CAVE and the Beckman Institute's CUBE. | {"url":"http://new.math.uiuc.edu/im2001/","timestamp":"2014-04-19T11:57:08Z","content_type":null,"content_length":"8313","record_id":"<urn:uuid:aa7d4206-b87f-42d5-bf6f-72a75c34bc8c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to solve a triangle given two sides and a non-included angle.
This topic is part of the TCS FREE high school mathematics 'How-to Library'. It shows you how find the unknown side and angles of a triangle when given two sides and an angle not lying between these
(See the index page for a list of all available topics in the library.) To make best use of this topic, you need to download the Maths Helper Plus software. Click here for instructions.
This triangle has internal angles 'A', 'B' and 'C', and sides of length 'a', 'b' and 'c':
If three of these six measurements are known, then it may be possible to find the other three.
This is called 'solving' the triangle, and this topic will show you how to solve triangles for the unknown side and angles when any two sides and a non-included angle are given.
NOTE: The non-included angles are the angles that do not lie between the two given sides.
These are the formulas used to solve triangles:
1. The sum of the internal angles equals 180º ...
A + B + C = 180º
2. The 'sine rule' ...
3. The 'cosine rule' ...
a² = b² + c² - 2bc cosA
b² = a² + c² - 2ac cosB
c² = b² + a² - 2ba cosC
We will now use an example to show how these rules are applied to solve a triangle when two sides and a non-included angle are given.
Example: A triangle has sides a=5 and b=7, and a non-included angle A=30º. Solve for the unknown side and the two unknown angles.
Often this type of triangle problem has two solutions. In this case, there are two possible triangles that can be constructed with this information (See case 1 and case 2 below.) The reason for the
two answers will be explained later.
Step 1: Begin by using the sine rule to find the unknown angle opposite one of the given sides.
NOTE: This is the only occasion that we start with the sine rule!
Angle 'B' is opposite the given side b=7. Using the sine rule we have:
Calculating an angle for a triangle by using an inverse sin operation has two possible answers, one obtuse (greater than 90º) and the other acute (less than 90º). If we are not sure that the angle is
acute, as for angle 'B' in our example, then we must explore both the obtuse and the acute cases. We will call them 'case 1' and 'case 2'
Find the inverse sin of 0.7 using a scientific calculator...
B = sin^-1(0.7)
= 44.427º
[case 1]
Inverse sin has two possible answers for a right triangle. The inverse sin function on your calculator gives you only one possible solution. To find the other, subtract this angle from 180º.
So we have the second possible value of angle B:
(180º - C)
= 135.573º
[case 2]
Step 2: Find the remaining unknown angle.
The sum of the internal angles equals 180º ...
A + B + C = 180º
C = 180º - (A+B)
case 1:
= 180 - (30º + 44.427º)
= 180 - 74.427º
= 105.573º
case 2:
= 180 - (30º + 135.573º)
= 180 - 165.573
= 14.427º
Step 3: Use the sine rule to find the remaining unknown side.
case 1:
case 2:
The triangle is now solved. This diagram shows both case 1 and case 2 solutions on the same diagram:
The large blue triangle is the case 1 solution. The sides and angles marked on the diagram are all for the case 1 solution. The case 2 solution is the smaller shaded triangle with one red side. The
red side is side 'a', which can have 2 possible positions. This is how the two triangles are created. If side 'a' is just long enough to reach the base line, then there is only one solution, and
angle B is a right angle. If side 'a' is too short to reach the base line, then there are no solutions possible.
The Method section below shows you how Maths Helper Plus can easily solve your triangles, creating both a labelled diagram and full working steps.
Maths Helper Plus can solve a triangle given two sides and a non-included angle. Full working steps and a labelled diagram are created. The steps below will show you how...
┃Step 1 Download the free support file... We have created a Maths Helper Plus document containing the completed example from this topic. You can use this to practice the steps described below, and ┃
┃as a starting point for solving your own problems. ┃
┃ ┃
┃ File name: 'Triangle sover - ASS.mhp' File size: 6kb ┃
┃ Click here to download the file. ┃
┃ ┃
┃If you choose 'Open this file from its current location', then Maths Helper Plus should open the document immediately. If not, try the other option: 'Save this file to disk', then run Maths Helper ┃
┃Plus and choose the 'Open' command from the 'File' menu. Locate the saved file and open it. If you do not yet have Maths Helper Plus installed on your computer, click here for instructions. ┃
NOTE: This document has already been set up to solve the example triangle as described in the 'theory' section of this topic.
Step 2 Display the triangle solver options box
Double click the mouse in the border to the left of the calculations. ( This area is shaded pale blue in the diagram below.) The triangle solver options box will display its 'Lengths & Angles' tab...
Click the 'Clear' button to remove the previous triangle, then click on the 'a' edit box. Type the new length for side 'a' of your triangle. Repeat for side 'b' and the non-included angle 'C'.
NOTE: There are other ways of entering two sides and a non-included angle, eg: sides 'b' and 'c', and angle 'B', etc. The calculations are the same in each case, but different letters are used, and
the triangle diagram is rotated to a different position.
Click the 'Apply' button at the bottom of the edit box. The calculated values will display on the options box.
Click the 'OK' button to close the options box. The calculations and triangle diagram will be displayed on your screen.
Step 3 Adjust the size of the diagram
If the triangle diagram is too big to display properly on your computer screen, briefly press the F10 key to reduce its size. To make the diagram bigger, hold down a Ctrl key while you press F10.
┃Still don't understand or have further questions about this topic ?┃
┃ Then ask us! Click here now! ┃ | {"url":"http://www.teacherschoice.com.au/Maths_Library/Trigonometry/solve_trig_ASS.htm","timestamp":"2014-04-21T04:31:24Z","content_type":null,"content_length":"24956","record_id":"<urn:uuid:b48093b6-fb06-4aaf-9785-77871c63e0de>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
Middlesbrough v Burnley
The Riverside Stadium, England
Referee: M Clattenburg | Attendance: 20689
* Local time based on your geographic location.
• Offside called on Keith Treacy
• Throw-in: Kieran Trippier takes it (Defending)
• Tom Heaton makes the save (Parry)
• Albert Adomah hits a good right footed shot. Outcome: save
• Jonathan Woodgate clears the ball from danger.
• Ben Gibson clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• Throw-in: George Friend takes it (Attacking)
• Throw-in: George Friend takes it (Attacking)
• Shay Given takes a long goal kick
• Keith Treacy hits a good right footed shot, but it is off target. Outcome: over bar
• Ben Gibson clears the ball from danger.
• Tom Heaton (Burnley) takes a freekick. Outcome: Pass
• Albert Adomah commits a foul on Kieran Trippier resulting on a free kick for Burnley
• Throw-in: Daniel Lafferty takes it (Defending)
• Tom Heaton makes the save (Catch)
• Emanuele Ledesema hits a good left footed shot. Outcome: save
• Throw-in: George Friend takes it (Defending)
• Throw-in: Daniel Lafferty takes it (Defending)
• Junior Stanislas crosses the ball.
• Jozsef Varga clears the ball from danger.
• Junior Stanislas crosses the ball.
• Shay Given makes the save (Parry)
• Junior Stanislas hits an impressive right footed shot. Outcome: save
• Dean Whitehead clears the ball from danger.
• Tom Heaton takes a long goal kick
• Curtis Main hits a good right footed shot, but it is off target. Outcome: miss right
• Jonathan Woodgate clears the ball from danger.
• Junior Stanislas crosses the ball.
• Keith Treacy crosses the ball.
• Throw-in: George Friend takes it (Defending)
• Michael Duff clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• Middlesbrough makes a sub: Ben Gibson enters for Daniel Ayala. Reason: Injury
• George Friend blocks the shot
• Sam Vokes hits a good right footed shot, but it is off target. Outcome: blocked
• Throw-in: Kieran Trippier takes it (Defending)
• Grant Leadbitter crosses the ball.
• George Friend clears the ball from danger.
• Daniel Lafferty crosses the ball.
• Throw-in: Kieran Trippier takes it (Attacking)
• Shay Given takes a long goal kick
• Dean Marney hits a good header, but it is off target. Outcome: over bar
• Keith Treacy crosses the ball.
• Junior Stanislas crosses the ball.
• George Friend clears the ball from danger.
• Sam Vokes crosses the ball.
• Daniel Ayala clears the ball from danger.
• Tom Heaton (Burnley) takes a freekick. Outcome: Pass
• Jozsef Varga is awarded a yellow card. Reason: unsporting behaviour
• Throw-in: Jozsef Varga takes it (Attacking)
• Junior Stanislas clears the ball from danger.
• Jozsef Varga crosses the ball.
• Throw-in: Jozsef Varga takes it (Attacking)
• Jonathan Woodgate clears the ball from danger.
• Tom Heaton takes a long goal kick
• Albert Adomah hits a good right footed shot, but it is off target. Outcome: over bar
• Sam Vokes clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Shay Given (Middlesbrough) takes a freekick. Outcome: Pass
• Burnley makes a sub: Keith Treacy enters for Scott Arfield. Reason: Tactical
• Offside called on Sam Vokes
• Throw-in: George Friend takes it (Attacking)
• Jonathan Woodgate clears the ball from danger.
• Jonathan Woodgate clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• Scott Arfield crosses the ball.
• George Friend clears the ball from danger.
• Jonathan Woodgate clears the ball from danger.
• Junior Stanislas crosses the ball.
• Jonathan Woodgate clears the ball from danger.
• Junior Stanislas crosses the ball.
• Jozsef Varga clears the ball from danger.
• Jozsef Varga clears the ball from danger.
• Tom Heaton (Burnley) takes a freekick. Outcome: Pass
• Middlesbrough makes a sub: Curtis Main enters for Lukas Jutkiewicz. Reason: Tactical
• Lukas Jutkiewicz commits a foul on Kieran Trippier resulting on a free kick for Burnley
• Throw-in: George Friend takes it (Attacking)
• Throw-in: George Friend takes it (Attacking)
• Tom Heaton (Burnley) takes a freekick. Outcome: Pass
• Emanuele Ledesema commits a foul on Kieran Trippier resulting on a free kick for Burnley
• Throw-in: Jozsef Varga takes it (Attacking)
• Throw-in: Kieran Trippier takes it (Defending)
• Scott Arfield is awarded a yellow card. Reason: unsporting behaviour
• Throw-in: Kieran Trippier takes it (Attacking)
• Kieran Trippier crosses the ball.
• Albert Adomah clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• Dean Whitehead clears the ball from danger.
• Daniel Ayala clears the ball from danger.
• Throw-in: Daniel Lafferty takes it (Defending)
• Tom Heaton takes a long goal kick
• Emanuele Ledesema hits a good left footed shot, but it is off target. Outcome: miss left
• Throw-in: Jozsef Varga takes it (Attacking)
• Richard Smallwood crosses the ball.
• Throw-in: Jozsef Varga takes it (Attacking)
• Throw-in: Kieran Trippier takes it (Attacking)
• Dean Whitehead clears the ball from danger.
• Middlesbrough makes a sub: Grant Leadbitter enters for Marvin Emnes. Reason: Tactical
• Lukas Jutkiewicz hits a good right footed shot, but it is off target. Outcome: miss right
• Tom Heaton takes a long goal kick
• Burnley makes a sub: Junior Stanislas enters for Michael Kightly. Reason: Tactical
• Scott Arfield clears the ball from danger.
• Albert Adomah crosses the ball.
• Tom Heaton makes the save (Parry)
• Lukas Jutkiewicz hits a good right footed shot. Outcome: save
• George Friend clears the ball from danger.
• Jason Shackell clears the ball from danger.
• Shay Given takes a long goal kick
• Sam Vokes hits a good right footed shot, but it is off target. Outcome: miss left
• Tom Heaton takes a long goal kick
• Emanuele Ledesema crosses the ball.
• Emanuele Ledesema (Middlesbrough) takes a freekick. Outcome: Cross
• Michael Kightly commits a foul on Marvin Emnes resulting on a free kick for Middlesbrough
• Jonathan Woodgate clears the ball from danger.
• Daniel Ayala clears the ball from danger.
• Jozsef Varga clears the ball from danger.
• Kieran Trippier crosses the ball.
• Throw-in: George Friend takes it (Defending)
• Throw-in: Kieran Trippier takes it (Attacking)
• Throw-in: George Friend takes it (Defending)
• Throw-in: George Friend takes it (Defending)
• Throw-in: Daniel Lafferty takes it (Defending)
• Jozsef Varga clears the ball from danger.
• Shay Given makes the save (Catch)
• Danny Ings hits a good left footed shot. Outcome: save
• Kieran Trippier clears the ball from danger.
• Throw-in: Michael Kightly takes it (Defending)
• Richard Smallwood clears the ball from danger.
• Dean Whitehead clears the ball from danger.
• Dean Marney (Burnley) takes a freekick. Outcome: Open Play
• Dean Whitehead commits a foul on Dean Marney resulting on a free kick for Burnley
• Throw-in: Jozsef Varga takes it (Defending)
• Throw-in: Daniel Lafferty takes it (Attacking)
• George Friend clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Shay Given (Middlesbrough) takes a freekick. Outcome: Pass
• Offside called on Scott Arfield
• Throw-in: Daniel Lafferty takes it (Attacking)
• Albert Adomah blocks the shot
• Michael Kightly hits a good right footed shot, but it is off target. Outcome: blocked
• Shay Given makes the save (Tip Over)
• Danny Ings hits a good header. Outcome: save
• George Friend clears the ball from danger.
• Tom Heaton takes a short goal kick
• Albert Adomah crosses the ball.
• Jonathan Woodgate clears the ball from danger.
• Throw-in: Daniel Lafferty takes it (Defending)
• Jozsef Varga clears the ball from danger.
• Emanuele Ledesema (Middlesbrough) takes a freekick. Outcome: Open Play
• Michael Duff commits a nasty foul on Marvin Emnes resulting on a free kick for Middlesbrough
• Shay Given takes a long goal kick
• Dean Marney hits a good right footed shot, but it is off target. Outcome: miss left
• Jozsef Varga clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• David Jones clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Throw-in: Daniel Lafferty takes it (Attacking)
• Throw-in: Jozsef Varga takes it (Attacking)
• Throw-in: Jozsef Varga takes it (Attacking)
• Throw-in: Kieran Trippier takes it (Attacking)
• Shay Given (Middlesbrough) takes a freekick. Outcome: Pass
• Offside called on Sam Vokes
• Jonathan Woodgate clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• Shay Given takes a long goal kick
• Jonathan Woodgate clears the ball from danger.
• Michael Kightly (Burnley) takes a freekick. Outcome: Open Play
• Emanuele Ledesema commits a foul on Michael Kightly resulting on a free kick for Burnley
• Daniel Ayala clears the ball from danger.
• Throw-in: Daniel Lafferty takes it (Defending)
• Jonathan Woodgate clears the ball from danger.
• Richard Smallwood clears the ball from danger.
• George Friend clears the ball from danger.
• Daniel Lafferty clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Jason Shackell clears the ball from danger.
• George Friend crosses the ball.
• Jason Shackell clears the ball from danger.
• Marvin Emnes crosses the ball.
• Michael Duff clears the ball from danger.
• Albert Adomah crosses the ball.
• David Jones is awarded a yellow card. Reason: unsporting behaviour
• Dean Whitehead (Middlesbrough) takes a freekick. Outcome: Open Play
• David Jones commits a foul on Lukas Jutkiewicz resulting on a free kick for Middlesbrough
• Emanuele Ledesema hits a good left footed shot. Outcome: goal
• That last goal was assisted by Marvin Emnes
• Richard Smallwood clears the ball from danger.
• Daniel Ayala clears the ball from danger.
• Scott Arfield crosses the ball.
• George Friend clears the ball from danger.
• Michael Kightly crosses the ball.
• Daniel Ayala clears the ball from danger.
• George Friend clears the ball from danger.
• Throw-in: Kieran Trippier takes it (Attacking)
• George Friend clears the ball from danger.
• Jonathan Woodgate clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Jonathan Woodgate hits a good header. Outcome: save
• Tom Heaton makes the save (Catch)
• Emanuele Ledesema (Middlesbrough) takes a freekick. Outcome: Open Play
• David Jones commits a foul on Marvin Emnes resulting on a free kick for Middlesbrough
• Richard Smallwood (Middlesbrough) takes a freekick. Outcome: Pass
• Scott Arfield commits a foul on Richard Smallwood resulting on a free kick for Middlesbrough
• Throw-in: Daniel Lafferty takes it (Defending)
• Jonathan Woodgate clears the ball from danger.
• Jason Shackell clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Michael Duff clears the ball from danger.
• Throw-in: Jozsef Varga takes it (Attacking)
• Kieran Trippier crosses the ball.
• Throw-in: Daniel Lafferty takes it (Defending)
• Jonathan Woodgate clears the ball from danger.
• Dean Whitehead clears the ball from danger.
• Tom Heaton takes a long goal kick
• Albert Adomah crosses the ball.
• Throw-in: George Friend takes it (Attacking)
• Michael Duff (Burnley) takes a freekick. Outcome: Pass
• Lukas Jutkiewicz commits a foul on Jason Shackell resulting on a free kick for Burnley
• Jonathan Woodgate clears the ball from danger.
• Throw-in: Daniel Lafferty takes it (Attacking)
• Jonathan Woodgate clears the ball from danger.
• Tom Heaton takes a long goal kick
• Emanuele Ledesema hits a good left footed shot, but it is off target. Outcome: over bar
• Emanuele Ledesema (Middlesbrough) takes a freekick. Outcome: Shot
• Jason Shackell commits a foul on Lukas Jutkiewicz resulting on a free kick for Middlesbrough
• Jozsef Varga (Middlesbrough) takes a freekick. Outcome: Open Play
• Michael Kightly commits a foul on Jozsef Varga resulting on a free kick for Middlesbrough
• Michael Duff clears the ball from danger.
• Marvin Emnes crosses the ball.
• Throw-in: George Friend takes it (Attacking)
• Michael Duff clears the ball from danger.
• Jonathan Woodgate clears the ball from danger.
• Tom Heaton takes a long goal kick
• Emanuele Ledesema hits a good left footed shot, but it is off target. Outcome: miss right
• Throw-in: George Friend takes it (Attacking)
• Shay Given takes a long goal kick
• Michael Kightly hits a good right footed shot, but it is off target. Outcome: miss right
• Throw-in: Kieran Trippier takes it (Defending)
• Jozsef Varga clears the ball from danger.
• Jozsef Varga clears the ball from danger.
• Throw-in: Michael Kightly takes it (Attacking)
• Throw-in: Michael Kightly takes it (Attacking)
• Jonathan Woodgate clears the ball from danger.
• Jonathan Woodgate clears the ball from danger.
• Tom Heaton takes a long goal kick
• Lukas Jutkiewicz hits a good header, but it is off target. Outcome: miss right
• Albert Adomah crosses the ball.
• Dean Marney clears the ball from danger.
• Albert Adomah crosses the ball.
• Scott Arfield clears the ball from danger.
• George Friend crosses the ball.
• Kieran Trippier clears the ball from danger.
• Albert Adomah crosses the ball.
• Throw-in: George Friend takes it (Defending)
• Daniel Ayala clears the ball from danger.
• Tom Heaton (Burnley) takes a freekick. Outcome: Pass
• Offside called on Albert Adomah
• Throw-in: Jozsef Varga takes it (Attacking)
• Jonathan Woodgate clears the ball from danger.
• Jonathan Woodgate clears the ball from danger.
• Daniel Lafferty crosses the ball.
• Daniel Ayala clears the ball from danger.
• Shots (on goal)
• tackles
• Fouls
• possession
• Middlesbrough
• -
Match Stats
12(5) Shots (on goal) 10(3)
6 Fouls 7
0 Corner kicks 0
1 Offsides 4
44% Time of Possession 56%
1 Yellow Cards 2
0 Red Cards 0
3 Saves 4 | {"url":"http://www.espnfc.com/en/gamecast/367767/gamecast.html?soccernet=true&cc=","timestamp":"2014-04-17T18:41:33Z","content_type":null,"content_length":"160788","record_id":"<urn:uuid:09c1f870-8e64-47a7-a0e3-e6cb9b2912f8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
Identification and Writing of Equivalent Rates
5.4: Identification and Writing of Equivalent Rates
Difficulty Level:
Created by: CK-12
Practice Identification and Writing of Equivalent Rates
Have you ever tried to figure out a better buy?
Kyle went to the bookstore. There was a big sale going on. In one corner of the store, you could buy four books for twelve dollars. In another part of the bookstore, you could buy three books for
nine dollars dollars. Kyle thinks that this is the same deal.
Is he correct?
To answer this question, you will need to understand equivalent rates. This is the information that will be presented in this Concept.
What is a rate?
A rate is a special kind of ratio. It compares two different types of units, such as dollars and pounds.
Suppose you are buying turkey at a supermarket, and you pay $12 for 2 pounds of turkey. That is an example of a rate. The turkey you bought cost $6 per pound for the turkey. That is another example
of a rate.
Notice the word “per”. This word signals us that we are talking about a rate.
We use rates all the time. We use them when shopping for example, per pound. We use them with gasoline, think per mile. We use them with speed, per gallon or with pricing, $4.00 per yard of material.
Sometimes, two rates are equivalent or equal.
How can we tell if two rates are equivalent?
Two rates are equivalent if they show the same relationship between two units of measure. We can use the same strategies to find equivalent rates that we use to find equivalent ratio.
Determine if these two rates are equivalent 40 miles in 2 hours, and 80 miles in 4 hours.
You can think of this as the distance and time of two different cars. Did they both travel at an equal rate?
First, express each rate as a fraction. Be sure to keep the terms consistent. That is, if the first ratio compares miles to hours, the second ratio should also compare miles to hours.
$40 \ miles \ \text{in}\ 2 \ hours &= \frac{40 mi}{2h} = \frac{40}{2}\\80 \ miles \ \text{in} \ 4 \ hours &= \frac{80 mi}{4h} = \frac{80}{4}$
Change the ratio, $\frac{40}{2}$
Since $2 \times 2 = 4$$\frac{40}{2}$
$\frac{40}{2} = \frac{40 \times 2}{2 \times 2} = \frac{80}{4}$
This shows that the ratio $\frac{80}{4}$$\frac{40}{2}$
This means that the rate 80 miles in 4 hours is equivalent to the rate 40 miles in 2 hours.
The two cars traveled at the same rate.
You can also cross multiply to determine if two rates are equivalent. Let’s look at an example where this strategy is applied.
Determine if these two rates are equivalent 5 meters every 3 seconds and 20 meters every 18 seconds.
You can think of this in terms of speed. The machine wound 5 meters of wire in three seconds. A second machine wound 20 meters of wire in 18 seconds. Did they wind the wire at the same rate?
First, cross multiply to determine if the rates are equivalent or not.
$\frac{5m}{3 \sec} &\overset{?}{=} \frac{20 m}{18 \sec}\\\frac{5}{3} &\overset{?}{=} \frac{20}{18}\\3 \times 20 &\overset{?}{=} 5 \times 18\\60 &\overset{?}{=}90\\60 &eq 90$
Since the cross products are not equal, the rates are not equivalent.
Determine if each rate is equivalent to the other rate.
Example A
3 feet in 9 seconds and 6 feet in 18 seconds
Solution: Equal
Example B
5 miles in 30 minutes and 6 miles in 42 minutes
Solution: Not Equal
Example C
5 pounds for $20.00 and 8 pounds for $32.00
Solution: Equal
Now back to the books.
Kyle went to the bookstore. There was a big sale going on. In one corner of the store, you could buy four books for twelve dollars. In another part of the bookstore, you could buy three books for
nine dollars dollars. Kyle thinks that this is the same deal.
Is he correct?
To figure this out, we can write two ratios and compare them. Let's use each deal as our ratios.
Simplifying these two ratios will show us that they both are equal to $\frac{1}{3}$
These two ratios are equivalent.
Here are the vocabulary words that are found in this Concept.
A special kind of ratio that compares two different quantities.
Guided Practice
Here is one for you to try on your own.
Write an equivalent rate for 3 out of 10.
We can do this by creating any equal ratio. We do this by multiplying both values in the ratio by the same number.
Here are some possible answer.
6 to 20
9 to 30
12 to 40
15 to 50
There are many possible answers.
Video Review
Here is a video for review.
- This is a James Sousa video on rates and unit rates.
Directions: Write an equivalent rate for each rate.
1. 2 for $10.00
2. 3 for $15.00
3. 5 gallons for $12.50
4. 16 pounds for $40.00
5. 18 inches for $2.00
6. 5 pounds of blueberries for $20.00
7. 40 miles in 80 minutes
8. 20 miles in 4 hours
9. 10 feet in 2 minutes
10. 12 pounds in 6 weeks
Directions: Simplify each rate.
11. 2 for $10.00
12. 3 for $15.00
13. 5 gallons for $25.00
14. 40 pounds for $40.00
15. 18 inches for $2.00
Files can only be attached to the latest version of Modality | {"url":"http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts---Grade-7/r4/section/5.4/","timestamp":"2014-04-21T03:09:06Z","content_type":null,"content_length":"134830","record_id":"<urn:uuid:83ac49ce-b078-4349-9d13-c887f222b575>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: October 1999 [00408]
[Date Index] [Thread Index] [Author Index]
Re: Combinatorica questions!!!
• To: mathgroup at smc.vnet.net
• Subject: [mg20545] Re: [mg20499] Combinatorica questions!!!
• From: Andrzej Kozlowski <andrzej at tuins.ac.jp>
• Date: Sat, 30 Oct 1999 00:13:59 -0400
• Sender: owner-wri-mathgroup at wolfram.com
These questions sound to me suspiciously like some sort of homework? Arn't
they? I guess we should have some sort of policy here whether we should
answer such questions...
Anyway, in the absence of any guidance I have decided that it is O.K. to
answer such questions provided one uses Mathematica in doing so. So let's
start with the first. By a castle I assume you mean the chess piece that is
normally in English called a rook. The question is not completely clear in
that yo do not explain whether you want to count the number of distinct
moves or distinct paths a rook must follow to get from the bottom left hand
corner to the upper right hand corner. I assume you mean the number of
paths. That means that we can assume that a path is determined by a sequence
of moves:
(right, up, right, up, ...).
Let's call the length of such a path the number of right moves in it (which
is half the total length of such a list). O.K., there is clearly only one
path of length 1, which means the rook goes all the way to the right and
then all the way to the top. Let's count the number of paths of length 2.
First, let's calculate the number of different ways we can move from the
bottom left hand corner to the bottom right hand corner in two steps. That's
the same as the number of partitions of 8 into a sum of two positive
integers, counting partitions like 3+5 and 5+3 as different. This can be
found as the coefficient of x^8 in the expression:
Coefficient[Sum[x^i, {i, 1, 8}]^2, x^8]
In other words, there are 7 different ways to get from the bottom left hand
corner to the top left hand corner in two steps. Just to be sure let's list
Of course there is exactly the same number of ways to get from the bottom
right hand corner to the top right hand corner in two steps. So the total
number of ways to get from the bottom left hand corner to the top right hand
corner in a path of length 2 is 7^2, i.e. 49 (each way of going from left to
right can be paired with each way of going from bottom to top).
Similarly the number of ways of getting from the bottom left hand corner to
the top right hand corner in a path of lenght 3 is
Coefficient[Sum[x^i, {i, 1, 8}]^3, x^8]
So the total number of ways (unless I have made some bad mistake somewhere)
Sum[Coefficient[Sum[x^i, {i, 1, 8}]^j, x^8]^2, {j, 1, 8}]
The second question is much easier. A choice of 10 balls in your situation
is exactly the same as a choice of an integer between 0 and 8, an integer
betwen 0 and 9 and an integer between 0 and 7 so that the sum is 10. That's
the same as the coeficient of x^10 in the following expression:
Sum[x^i, {i, 0, 8}]*Sum[x^i, {i, 0, 9}]*Sum[x^i, {i, 0, 7}] // Expand
1 + 3 x + 6 x + 10 x + 15 x + 21 x + 28 x + 36 x +
44 x + 51 x + 56 x + 59 x + 60 x + 59 x +
56 x + 51 x + 44 x + 36 x + 28 x + 21 x +
15 x + 10 x + 6 x + 3 x + x
As you can see it is 56. (One could also just get the answer using the
function Coefficient). In part 2 you require that each color be chosen at
least one. That means that we want to know
Coefficient[Sum[x^i, {i, 1, 8}]*Sum[x^i, {i, 1, 9}]*Sum[x^i, {i, 1, 7}],
Andrzej Kozlowski
Toyama International University
> From: "Keren Edwards" <kedwards at cobol2java.com>
To: mathgroup at smc.vnet.net
> Organization: NetVision Israel
> Date: Wed, 27 Oct 1999 02:04:59 -0400
> To: mathgroup at smc.vnet.net
> Subject: [mg20545] [mg20499] Combinatorica questions!!!
> Hi all!!
> 2 different questions:
> 1. how many ways does a castle have to reach from the bottom left side
> corner
> of a chess board to the upper right corner of the board if he can
> move right
> and up only?
> 2. you have 8 red identical balls, 9 purple identical balls and 7 white
> identical ones.
> a. How many ways can you choose 10 balls with no matter to the
> order of the balls?
> b. How many ways can you choose 10 balls with no matter to the
> order of the balls, if each color must
> be chosen once at least?
> Many thanx. | {"url":"http://forums.wolfram.com/mathgroup/archive/1999/Oct/msg00408.html","timestamp":"2014-04-19T07:08:16Z","content_type":null,"content_length":"38754","record_id":"<urn:uuid:b398322f-0552-4d4d-a37b-0ff05da8ba7d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cheap Beer, Paradoxical Dice, and the Unfounded Morality of Economists
Sometimes a concept can be so intuitively obvious that it actually becomes more difficult to teach and discuss. Take transitivity. We say that real numbers have the transitive property. That means
that if you have three real numbers (A, B and C) and you know A > B and B > C then you also know that A > C.
Transitivity is just too obvious to get your head around. In order to think about a concept you really have to think about its opposite as well --
A > B, B > C and C > A. None too imaginatively, we call these opposite relationships intransitive or non-transitive. Non-transitive relationships are deeply counter-intuitive. We just don't expect
the world to work like that. If you like butterscotch shakes better than chocolate shakes and chocolate shakes better than vanilla shakes, you expect to like butterscotch better than vanilla. If you
can beat me at chess and I can beat Harry, you assume you can beat Harry. There is. of course, an element of variability here -- Harry might be having a good day or you might be in a vanilla kind of
mood -- but on average we expect these relationships to hold.
The only example of a non-transitive relationship most people can think of is the game Rock, Paper, Scissors. Other games with non-transitive elements include the boomer classic Stratego where the
highest ranked piece can only be captured by the lowest and my contribution, the game
which was designed specifically to give students a chance to work with these concepts.
While these games give us a chance to play around with non-transitive relationships, they don't tell us anything about how these relationships might arise in the real world. To answer that question,
it's useful to look at another
Here are the rules. We have three dice marked as follows:
Die A {2,2,4,4,9,9}
Die B {1,1,6,6,8,8}
Die C {3,3,5,5,7,7}
Because I'm a nice guy, I'm going to let you pick the die you want. I'll then take one of the two remaining dice. We'll roll and whoever gets the higher number wins. Which die should you pick?
The surprising answer is that no matter which one you pick I'll still have the advantage because these are
non-transitive dice
. A beats B five out of nine times. B beats C five out of nine times. C beats A five out of nine times. The player who chooses second can always have better odds.
The dice example shows that it's possible for systems using random variables to result in non-transitive relationships. Can we still get these relationships in something deterministic like the rules
of a control system or perhaps the algorithm a customer might use to decide on a product?
One way of dealing with multiple variables in a decision is to apply a threshold test to one variable while optimizing another. Here's how you might use this approach to decide between two six-packs
of beer: if the price difference is a dollar or less, buy the better brand; otherwise pick the cheaper one.* For example, let's say that if beer were free you would rank beers in this order:
1. Sam Adams
2. Tecate
3. Budweiser
If these three beers cost $7.99, $6.99 and $5.99 respectively, you would pick Tecate over Bud, Sam Adams over Tecate and Bud over Sam Adams. In other words, a rock/paper/scissors relationship.
Admittedly, this example is a bit contrived but the idea of a customer having a threshold price is not outlandish, and there are precedents for the idea of a decision process where one variable is
ignored as long as it stays within a certain range.
Of course, we haven't established the existence, let alone the prevalence of these relationships in economics but their very possibility raises some interesting questions and implications. Because
transitivity is such an intuitively appealing concept, it often works its way unnoticed into the assumptions behind all sorts of arguments. If you've shown A is greater than B and B is greater than
C, it's natural not to bother with A and C.
What's worse, as Edward Glaeser has observed, economists tend to be reductionists, and non-transitivity tends to play hell with reductionism. This makes economics particularly dependent on
assumptions of transitivity. Take Glaeser's widely-cited proposal for a "
moral heart of economics
Teachers of first-year graduate courses in economic theory, like me, often begin by discussing the assumption that individuals can rank their preferred outcomes. We then propose a measure — a ranking
mechanism called a utility function — that follows people’s preferences.
If there were 1,000 outcomes, an equivalent utility function could be defined by giving the most favored outcome a value of 1,000, the second best outcome a value of 999 and so forth. This “utility
function” has nothing to do with happiness or self-satisfaction; it’s just a mathematical convenience for ranking people’s choices.
But then we turn to welfare, and that’s where we make our great leap.
Improvements in welfare occur when there are improvements in utility, and those occur only when an individual gets an option that wasn’t previously available. We typically prove that someone’s
welfare has increased when the person has an increased set of choices.
When we make that assumption (which is hotly contested by some people, especially psychologists), we essentially assume that the fundamental objective of public policy is to increase freedom of
But if these rankings can be non-transitive, then you run into all sorts of problems with the very idea of a utility function. (It would also seem to raise some interesting questions about revealed
preference.) Does that actually change the moral calculus? Perhaps not but it certainly complicates things (what exactly does it mean to improve someone's choices when you don't have a well-ordered
set?). More importantly, it raises questions about the other assumptions lurking in the shadows here. What if new options affect the previous ones in some other way? For example, what if the value of
new options diminishes as options accumulate?
It's not difficult to argue for the assumption that additional choices bring diminishing returns. After all, the more choices you have, the less likely you are to choose the new one. This would imply
that any action that takes choices from someone who has many and gives them to someone has significantly fewer represents a net gain since the choice is more likely to be used by the recipient. Let's
say we weight the value of a choice by the likelihood of it being used, and if we further assume that giving someone money increases his or her choices, then taking money from a rich person and
giving it to a poor person should produce a net gain in freedom.
Does this mean Glaeser's libertarian argument is secretly socialist? Of course not. The fact that he explicitly cites utility functions suggests that he is talking about a world where orders are well
defined, and effects are additive and you can understand the whole by looking at the parts. In that world his argument is perfectly valid.
But as we've just seen with our dice and our beer, we can't always trust even the most intuitively obvious assumptions to hold. What's more, our examples were incredibly simple. The distribution of
each die just had three equally probable values. The purchasing algorithm only used two variables and two extremely straightforward rules.
The real world is far more complex. With more variables and more involved rules and relationships, the chances of an assumption catching us off guard only get greater.
*Some economists might object at this point that this algorithm is not rational in the strict economics sense of the word. That's true, but unless those economists are also prepared to argue that all
consumers are always completely rational actors, the argument still stands.
1 comment:
1. Well done. Economics, as it is taught today, is on a very weak mathematical basis. Look at the equilibrium problem in which economists assume a single equilibrium, that the state converges and
that the process is not path dependent. It's like calculus before mathematicians figured out why series converge, except with weaker empirical backing. (Mathematicians could at least try catching
the tortoise and, like Achilles, sit on its back and argue convergence from there.) | {"url":"http://observationalepidemiology.blogspot.com/2011/04/cheap-beer-paradoxical-dice-and.html","timestamp":"2014-04-20T16:33:20Z","content_type":null,"content_length":"107571","record_id":"<urn:uuid:987aa2d6-8d22-4dbd-a2a2-e02464d631dd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meeting Details
For more information about this meeting, contact Stephen Simpson.
Title: Degrees of unsolvability and the Borel hierarchy
Seminar: Logic Seminar
Speaker: Stephen G. Simpson, Pennsylvania State University
Abstract Link: http://www.math.psu.edu/simpson/papers/massmtr.pdf
Given a real number x, let x' be the halting problem relative to x. In other words, x' is the problem of deciding whether a given computer program which uses x as a Turing oracle will or will not
eventually halt. By means of Gödel numbering, we may view x' as another real number. Thus we have an operator x |---> x' from real numbers to real numbers, known as the Turing jump operator. By
iterating the Turing jump operator along the ordinal numbers which are computable using x as an oracle, we obtain what is known as the hyperarithmetical hierarchy relative to x. In this talk we
explore the relationship between the hyperarithmetical hierarchy and the well-known hierarchy of Borel sets in Euclidean space indexed by the countable ordinal numbers. We sketch a proof of the
following theorem of Kautz, 1991: If S is a Borel set at level alpha+2 in the effective Borel hierarchy relative to x, then S includes a set whose Lebesgue measure is the same as that of S and which
is at level 2 of the effective Borel hierarchy relative to the alpha-th Turing jump of x. As time permits, we discuss recent refinements involving LR-reducibility.
Room Reservation Information
Room Number: MB315
Date: 09 / 15 / 2009
Time: 02:30pm - 03:45pm | {"url":"http://www.math.psu.edu/calendars/meeting.php?id=5112","timestamp":"2014-04-19T12:13:02Z","content_type":null,"content_length":"4340","record_id":"<urn:uuid:cd4d5d7e-cc10-426d-ac8d-998dd94fb9df>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Real World Integers
Image Source: http://www.divediscover.whoi.edu
When we think of Integers, or negative numbers in the real world, we usually think of being below sea level, or we may think of being at low temperatures.
Image Source: http://www.odec.ca
The following Slideshare presentation shows several other interesting applications of Integers in the Real World.
If you are using an Apple device to view this blog; then you will not be able to see the above Slideshare. Apple devices are severely limited in regards to viewing web content, and so you will need
to view the following alternative 2 Meg PDF version of the slideshow.
(This PDF document may take a couple of minutes to load in).
Click here to view PDF version of Slideshow
Related Items
Introduction to Integers
Arranging Integers in Order
Adding Integers Using Number Lines
Adding Integers Using Zero Pairs
Subtracting Integers
Multiplying Integers
Dividing Integers
Integers Order of Operations
Directed Number Integers Games
Integers in Drag Racing
If you enjoyed this post, why not get a free subscription to our website.
You can then receive notifications of new pages directly to your email address.
Go to the subscribe area on the right hand sidebar, fill in your email address and then click the “Subscribe” button.
To find out exactly how free subscription works, click the following link:
If you would like to submit an idea for an article, or be a guest writer on our blog, then please email us at the hotmail address shown in the right hand side bar of this page.
18 Responses to Real World Integers
You must be logged in to post a comment.
This entry was posted in Integers, Math in the Real World and tagged directed numbers, integers, integersin the real world, interesting integers, negative numbers, negative numbers in the real world,
real world integers, real world negative numbers. Bookmark the permalink. | {"url":"http://passyworldofmathematics.com/real-world-integers/","timestamp":"2014-04-18T23:15:59Z","content_type":null,"content_length":"56168","record_id":"<urn:uuid:9b49211b-7b52-4c61-973e-82286da8753b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
Satellite orbits may be uniquely described by six (6) independent parameters. These orbital parameters are usually expressed either by Keplerian elements or by State Vectors. State Vectors represent
the 3-D Position and Velocity components of the orbital trajectory at a certain snapshot in time or epoch. Keplerian elements are descriptive of the size, shape, and orientation of an orbital
ellipse. State Vectors and Keplerian Elements each have their own unique advantages; either one can be computed from the other.
Pre-Launch Orbit Determination
Vectors are an excellent tool for pre-launch orbital predictions. Vectors can use an Earth-Fixed reference frame, with the epoch time expressed as an offset to the Launch date and time. Times offset
relative to Launch are labeled as Mission Elapsed Time (MET). Referencing to both the Earth-fixed reference frame and MET makes the Vector completely time-independent and thus a constitutes general
orbital description.
Complete sets of Keplerian elements are valid only for a specific Launch time. Therefore, pre-Launch Keplerian elements have limited usefulness. If the Launch time changes, updates need to be made
before the elements are useful for orbital predictions.
A time-independent State Vector can be readily combined with the Launch date and time, using software such as VEC2TLE, to produce a valid set of Keplerian elements. The Keplerian elements can then,
in turn, be used by tracking software for orbital predictions. Nominal time-independent State Vectors may be downloaded for the mission(s) indicated below:
. Phase 3D Estimates . STS-90
Further details regarding the capabilities of VEC2TLE are available on the Orbitessera Web site.
Sources of Space Shuttle State Vectors
Space Shuttle State Vectors are obtained from several NASA Sources. While every effort is made to keep the Orbital Data on the AMSAT Web Page current, this is not always possible. The interested user
may wish, at some point, to generate Keplerian elements by manually entering a State Vector. If this is attempted, be sure that the Settings in VEC2TLE conform to those applicable to the vector.
Sources of Shuttle State Vectors Include:
• NORAD's Space Track can be accessed by their Web Site [NOTE: vector data is not currently available from the Space Track Web Page], In order to log on to the web site, you must be assigned an
account. An account may be obtained by filling out the online application. The Vectors are in the Earth-Centered Inertial (ECI), True Equator and Equinox of Date Coordinate frame in units of km
and km/sec. Vector data may not be redistributed except by agreement. See the site for additional information.
• NASA Human Spaceflight Web Page has information about current flights. The Realtime Data section also provides periodic updates to State Vectors and Keplerian Elements. The Vectors are in the
Earth-Centered Inertial (ECI), Mean of 1950 (M50) Coordinate frame in units of meters and meters/sec.
• The Rockwell Mission Support Room provides vector data on a cooperative basis to a few individuals who, in-turn, reciprocate with the corresponding Keps and also release these data publicly. The
Vectors are in the Earth-Centered Inertial (ECI), Mean of 1950 (M50) Coordinate frame in units of ft and ft/sec.
• DOD C-Band Radar Network provides vector data on a cooperative basis to a few individuals who, in-turn, reciprocate with the corresponding Keps and also release these data publicly. The Vectors
are in the Earth-Fixed Greenwich (EFG) Coordinate frame in units of ft and ft/sec.
Originally contributed by Ken Ernandes, N2WWD. Additional updates by Emily Clarke, W0EEC. | {"url":"http://www.amsat.org/amsat-new/information/faqs/sv_keps.php","timestamp":"2014-04-20T00:42:40Z","content_type":null,"content_length":"9741","record_id":"<urn:uuid:95d6272e-8659-4e5a-9532-84d6502034f5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of Weighted Mean
weighted mean
is similar to an
arithmetic mean
(the most common type of
), where instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in
descriptive statistics
and also occurs in a more general form in several other areas of mathematics.
If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few
counter-intuitive properties, as captured for instance in Simpson's paradox.
The term weighted average usually refers to a weighted arithmetic mean, but weighted versions of other means can also be calculated, such as the weighted geometric mean and the weighted harmonic mean
Given two school classes, one with 20 students, and one with 30 students, the grades in each class on a test were:
Morning class = 62, 67, 71, 74, 76, 77, 78, 79, 79, 80, 80, 81, 81, 82, 83, 84, 86, 89, 93, 98
Afternoon class = 81, 82, 83, 84, 85, 86, 87, 87, 88, 88, 89, 89, 89, 90, 90, 90, 90, 91, 91, 91, 92, 92, 93, 93, 94, 95, 96, 97, 98, 99
The straight average for the morning class is 80 and the straight average of the afternoon class is 90. The straight average of 80 and 90 is 85, the mean of the two class means. However, this does
not account for the difference in number of students in each class, and the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by
either averaging all the numbers without regard to classes, or weighting the class means by the number of students in each class:
bar{x} = frac{4300}{50} = 86.
Or, using a weighted mean of the class means:
bar{x} = frac{(20)80 + (30)90}{20 + 30} = 86.
The weighted mean makes it possible to find the average student grade also in the case where only the class means and the number of students in each class are available.
Mathematical definition
Formally, the weighted mean of a non-empty set of data
$\left[x_1, x_2, dots , x_n\right],,$
with non-negative weights
$\left[w_1, w_2, dots, w_n\right],,$
is the quantity calculated by
$bar\left\{x\right\} = frac\left\{ sum_\left\{i=1\right\}^n w_i x_i\right\}\left\{sum_\left\{i=1\right\}^n w_i\right\},$
which means:
bar{x} = frac{w_1 x_1 + w_2 x_2 + cdots + w_n x_n}{w_1 + w_2 + cdots + w_n}.
So data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights must not be negative. They may be zero, but not all of them (because division
by zero is not allowed). In the special case, often encountered in practice, where the weights are normalized (i.e. are nonnegative and sum up to 1), the denominator of the fraction simplifies to 1.
Length-weighted mean
For weighting a response variable based upon its dependency on x, a distance variable.
bar{y} = frac{y_2 x_2 - y_1 x_1}{x_2 - x_1}
Convex combination
Since only the relative weights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called a convex combination.
Using the previous example, we would get the following:
frac{20}{20 + 30} = 0.4,
frac{30}{20 + 30} = 0.6,
bar{x} = frac{(0.4)80% + (0.6)90%}{0.4 + 0.6} = 86%
This simplifies to:
bar{x} = (0.4)80% + (0.6)90% = 86%
Dealing with variance
For the weighted mean of a list of data for which each element $x_i,!$ comes from a different probability distribution with known variance $\left\{sigma_i\right\}^2,$, one possible choice for the
weights is given by:
w_i = frac{1}{sigma_i^2}.
The weighted mean in this case is:
bar{x} = frac{ sum_{i=1}^n x_i/{sigma_i}^2}{sum_{i=1}^n 1/{sigma_i}^2},
and the variance of the weighted mean is:
sigma_{bar{x}}^2 = frac{ 1 }{sum_{i=1}^n 1/{sigma_i}^2},
which reduces to $sigma_\left\{bar\left\{x\right\}\right\}^2 = frac\left\{ \left\{sigma_0\right\}^2 \right\}\left\{n\right\}$, when all $sigma_i = sigma_0.,$
The significance of this choice is that this weighted mean is the maximum likelihood estimator of the mean of the probability distributions under the assumption that they are independent and normally
distributed with the same mean.
Correcting for over/under dispersion
Weighted means are typically used to find the weighted mean of experimental data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point.
Typically experimentally errors are underestimated, because the experimenter does not know all sources of error in calculating the variance of each data point. In this event, the variance in the
weighted mean must be corrected to account for the fact that $chi^2$ is too large. The correction that must be made is
$sigma_\left\{bar\left\{x\right\}\right\}^2 rightarrow sigma_\left\{bar\left\{x\right\}\right\}^2 chi^2_nu$
where $chi^2_nu,$ is $chi^2,$ divided by the number of degrees of freedom, in this case $n-1,$. This gives the variance in the weighted mean as:
$sigma_\left\{bar\left\{x\right\}\right\}^2 = frac\left\{ 1 \right\}\left\{sum_\left\{i=1\right\}^n 1/\left\{sigma_i\right\}^2\right\} times frac\left\{1\right\}\left\{\left(n-1\right)\right\}
sum_\left\{i=1\right\}^n frac\left\{ \left(x_i - bar\left\{x\right\} \right)^2\right\}\left\{ sigma_i^2 \right\} ,$
Weighted sample variance
Typically when you calculate a mean it is important to know the
standard deviation
of that mean. When a weighted mean
is used, the variance of the weighted sample is different from the variance of the unweighted sample. The
sample variance
is defined similarly to the normal
sample variance:
sigma^2 = frac{ sum_{i=1}^N{left(x_i - muright)^2} }{
} ;;; sigma_text{weighted}^2 = frac{ sum_{i=1}^N{{w_i}left(x_i - mu^*right)^2} }{ sum_{i=1}^N{w_i} }.
For small sample of populations, it is customary to use an unbiased estimator for the population variance. In normal unweighted samples, the N in the denominator (corresponding to the sample size) is
changed to N − 1. While this is simple in unweighted samples, it becomes tedious for weighted samples. Thus, the unbiased estimator of weighted population variance is given by :
s^2 = frac{ sum_{i=1}^N{w_i} }{ left(sum_{i=1}^N{w_i}right)^2 - sum_{i=1}^N{{w_i}^2} } sum_{i=1}^N{{w_i}left(x_i - mu^*right)^2}.
Which can also be written in terms of running sums for programming as:
s^2 = frac{ sum_{i=1}^N{w_i {x_i}^2} sum_{i=1}^N{w_i} - left(sum_{i=1}^N{w_i x_i}right)^2 }{ left(sum_{i=1}^N{w_i}right)^2 - sum_{i=1}^N{{w_i}^2} }.
The standard deviation is simply the square root of the variance above.
Accounting for correlations
In the general case, suppose that $mathbf\left\{X\right\}=\left[x_1,dots,x_n\right]$, $mathbf\left\{C\right\}$ is the covariance matrix relating the quantities $x_i$, $bar\left\{x\right\}$ is the
common mean to be estimated, and $mathbf\left\{W\right\}$ is the design matrix [1, ..., 1] (of length n). The Gauss–Markov theorem says that the estimate of the mean having minimum variance is given
$sigma^2_bar\left\{x\right\}=\left(mathbf\left\{W\right\}^T mathbf\left\{C\right\}^\left\{-1\right\} mathbf\left\{W\right\}\right)^\left\{-1\right\},$
$bar\left\{x\right\} = sigma^2_bar\left\{x\right\} \left(mathbf\left\{W\right\}^T mathbf\left\{C\right\}^\left\{-1\right\} mathbf\left\{X\right\}\right).$
See also
• Bevington, Philip. Data Reduction and Error Analysis for the Physical Sciences.
External links | {"url":"http://www.reference.com/browse/Weighted+Mean","timestamp":"2014-04-17T16:28:56Z","content_type":null,"content_length":"87186","record_id":"<urn:uuid:3f31c8eb-bc95-4e98-9606-1ecdf9298d0d>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by Rony
Total # Posts: 32
Please display answer here
she purchase a lamp for Rs. 900 and sell it for Rs. 1500, determine the following a)rupee markup, b)markup percentage on cost, c)markup percentage of selling price
social study
Z located on the top of the globe X located in the center of the globe U located almost in the bottom of the globe please help
social study
A comparison of the average temperatures on any given day at points z,x, and U on the glove would probably show that the temperature at point X is? 1 warmer than at z and cooler than at U 2 warmer
than at z and cooler than at U 3 cooler than at z and warmer than at U 4 cooler ...
F = MA F = qv*B ma = qvb a = qbv/m m = mass of electron q = charge of electron
math (pre-calc)
Solve for x: 1. 3^2x+3^x+1-4=0 2. log3(x=12)+log3(x+4)=2 3. lnx+ln(x+2)=4 4. 3*4^x+4*2^x+8=0
math(advanced algebra&trigonometry)
i know... a^2=b^2+c^2-2bc cos A but i didn't get the correct answer...which is 14 is it like a^2=20^2+14.5^2-2(20)(14.5) cos135
math(advanced algebra&trigonometry)
the playground at a daycare center has a triangular-shaped sandbox. Tow of the sides measure 20 feet and 14.5 feet and form and included angle of 45 degree. Find the length and the third side of the
sandbox to the nearest tenth of a foot.
math(advanced algebra&trigonometry)
The Vietnam Veterans Memorial in Washington, D.C., is made up of two walls, each 246.75 feet long, that meet at an angle of 125.2°. Find, to the nearest foot, the distance between the ends of the
walls that do not meet.
math(advanced algebra&trigonometry)
If a is an angle in standard position and its terminal side passes through the point (-3,2), find the exact value of csc a. Answer: radical 13/2
math (pre-calc)
Analyzing a Stock. The beta, B of a stock represents the relative risk of a stock compared with a market basket of stocks, such as Standard and poor's 500 Index of stocks. Beta is computed by finding
the slope of the line of best fit between the rate of return of the stock...
math(advanced algebra&trigonometry)
If θ is an angle in standard position and its terminal side passes through the point (−3,2), find the exact value of cscθ . I know the answer...but i want to know how you get the answer... The answer
is radical 13/2
math(advanced algebra&trigonometry)
A circle has a radius of 4 inches. In inches, what is the length of the arc intercepted by a central angle of 2 radians? which of the following is the answer: 1) 2 radian 2) 2 3) 8 radian 4) 8
math(advanced algebra&trigonometry)
it is cosx thanks
math(advanced algebra&trigonometry)
the expression cos4x cos3x + sin4x sin3x is equivalent to which one of the following is the answer: 1) sinx 2) sin7x 3) cosx 4) cos7x
math(advanced algebra&trigonometry)
i know the answer...it is 197º40 but i want to know how you get the answer... some one plz help... thanks
math(advanced algebra&trigonometry)
find, to the nearest minute, the angle whose measure is 3.45 radians.
A shoe manufacturer determines that the annual cost of making x pairs of one type of shoe is $30 per pair plus $100,000 in fixed overhead costs. Each pair of shoes that is manufactured is sold
wholesale for $50. a. Find the equations that model Revenue and cost and graph each ...
Find the real zeros of f(x)=(x-1)^2 + 5(x-1)+4
Find the zeros of f(x)=(x-1)^2 + 5(x-1)+4
The weekly rental cost of a 20 foor recreational vehicle is $129.50 plus $0.15 per mile. -Find a linear function that express the cost C as a function miles driven m. - What is the rental cost if 860
miles are driven? -How many miles were driven if the rental cost is $213.80?
f(x)=2x^2+1 and g(x)=3x-2 find f(x+h)-f(x)
some one help me with this question..plz.. Thank You.
A potential difference of 0.90 V exists from one side to the other side of a cell membrane that is 5.0 NM thick. What is the electric field across the membrane?
Resistors R1, R2, and R3 have resistances of 15.0 Ohms, 9.0 Ohms and 8.0 Ohms respectively. R1 and R2 are connected in series, and their combination is in parallel with R3 to form a load across a
6.0-V battery. -What is the total resistance of the load? -What is the current in...
A 15.0 Ohms resistor is connected in series to a 120V generator and two 10.0 Ohms resistors that are connected in parallel to each other. -What is the total resistance of the load? -What is the
magnitude of the circuit current? -What is the current in one of the 10.0 Ohms resi...
i know when its paralle 1/Req= 1/R1 + 1/R2...
Two resistors, one 12Ohms and the other 18 Ohms, are connected in parallel. What is the equivalent resistance of the parallel combination?
r is equal 1.25cm
What is the force between two charged spheres 1.25 cm apart if the charge on one sphere is 2.50 C and the charge on other sphere is 1.75 X 10^-8 C
a potential difference of 0.90 V exists from one side to the other side of a cell membrane that is 5.0 nm thick. What is the electric field across the membrane?
The number of Pb atoms per unit cell | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Rony","timestamp":"2014-04-18T10:44:14Z","content_type":null,"content_length":"12685","record_id":"<urn:uuid:fd0e8dc5-d46d-4cb0-9624-d2f246c7916e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
A query on the (old) motivation for renormalizable theories
hi metroplex021!
… But can't one say the following: say I am interested in studying only 2->2 interactions. Then presumably I only need to renormalize the 2, 3 and 4-point functions in order to derive predictions for
these sorts of interactions. …
no, the number of input and output particles is irrelevant
even with 2->2, there are infinitely many terms in the Dyson expansion …
forget Feynman diagrams, it's those
infinitely many terms
that need to have a finite sum | {"url":"http://www.physicsforums.com/showthread.php?p=3401168","timestamp":"2014-04-19T17:45:54Z","content_type":null,"content_length":"39417","record_id":"<urn:uuid:6bc3da8d-ed3c-404b-a082-47a6a4e5b980>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the number!
Re: Find the number!
Getting good at this, phrontister and bobbym!
Re: Find the number!
You sure are happy! Anymore?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
My number is:
1) Nearest to 50.
2) Nearest to 7^2.
3) A prime number.
What is it?
Make sure the difference is the least so it could be nearest to 50 and 7^2.
Real Member
Re: Find the number!
Last edited by anonimnystefy (2012-12-09 23:55:10)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Real Member
Re: Find the number!
Hi julianthemath,
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Find the number!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
My number is:
1) A composite number.
2) It is greater than 5.
3) Is a square number.
4) No more than 40.
5) Has these factors, 1, 2, 3, 4, 6, 9, 12, 24, 36.
What is it?
There is a hint!
Re: Find the number!
It is a 36. 6 x 6 = 36
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
Yes siree!
Re: Find the number!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
Thanks, if only that were true.
Got another one?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
I need some more replies for #VI.
Just wait, bobbym.
Re: Find the number!
My number is:
1) A composite number.
2) Is a perfect square.
3) Nearer to 4!
What is it?
Give square root and square number!
Re: Find the number!
Hi julianthemath;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
Good, bobbym!
Re: Find the number!
Last edited by julianthemath (2012-12-15 18:08:09)
Re: Find the number!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
A long time since the last post!
9. If I have 1000 strawberries, and I give a prime number of strawberries to my brother, then I will have 977 left. How many strawberries did I give to my brother?
Re: Find the number!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
10. I have $10,000. I bought:
2 game consoles for $500 each
3 monopoly games for $200 each
How much money do I have left?
Re: Find the number!
8400 dollars are left.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Find the number!
1) It is a Squimintrius number.
2) It is a multiple of 4.
3) It is also a multiple of 12.
Re: Find the number!
What is a squiddymimus number?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=243852","timestamp":"2014-04-16T04:28:22Z","content_type":null,"content_length":"41125","record_id":"<urn:uuid:62dee434-b616-4d6f-b1d2-8b446e467259>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
ECONOMICS 5050 -- First Homework Assignment
ECONOMICS 5050 -- First Homework Assignment DUE: Mar. 7, 2005
PART I (10 points) (Use less than 200 words)
The US has a “Mixed Economy”. Explain what this means. Try to be clear and include examples, but be concise.
PART II (30 points)
Using separate diagrams for each of the following, with supply and demand clearly labeled, please depict the effect on the equilibrium price and quantity of the good that will be produced and sold.
Each event is to be treated as independent of the others. Provide your answers in the form of a separate graph for each event. Be sure to label everything correctly and to indicate clearly the
direction of any change in the market.
1. The effect of a decrease in the price of cookies on the market for milk.
2. The effect of an increase in the price of scones on the market for tea.
3. The effect of a rise in the price of butter on the market for margarine.
4. The effect of an improvement in production technology on the market for cell phones.
5. The effect of a decrease in incomes on the market for Do-It Yourself Home Electrical books.
6. The effect of a price ceiling on the market for apartments (rent control).
7. The effect of “nesting” (when people want to spend more time at home with family - especially common in uncertain or dangerous times) on the market for board games.
8. The effect of a decrease in the price of cotton on the market for t-shirts.
9. The effect of a decrease in incomes on the market for big screen TVs.
10. The effect of a tax on the market for alcohol.
PART III (30 points)
A firm has fixed costs of $120 and other costs as indicated in the table below.
1. Complete the table.
│ │ │ │ │ │ Average │Average│ │
│Total Product│Total Fixed Cost│Total Variable Cost │Total Cost│Average Fixed Cost│Variable Cost│ Total │Marginal Cost│
│ │ │ │ │ │ │ Cost │ │
│ 0 │ │ │ │ │ │ │ │
│ 1 │ │ 75 │ │ │ │ │ │
│ 2 │ │ │ 240 │ │ │ │ 45 │
│ 3 │ │ │ │ │ │ 100 │ │
│ 4 │ │ │ │ │ 75 │ │ │
2. a) If TFC are greater than 0 in a graph where should the ATC curve be relative to the AVC curve?
b) In the case of U shaped average cost curves:
Where will MC be the same as AVC?
Where will MC be the same as ATC?
3. Sketch a typical set of average and marginal U shaped cost curves together on a graph with clear labels for each axis and each curve.
PART IV (30 points) For full credit you must SHOW YOUR WORK.
This part deals with demand and supply analysis. Assume a competitive market for a hypothetical consumer good. ("Competitive" means that no single buyer or seller has any control over the price of
the good.)
Your graphs in this problem do not need to be to scale, but will probably help you to think through the problem so taking some care (perhaps more than one draft of the graphs) is advised. Use algebra
to identify the key points such as equilibria.
You are given the following information:
On the demand side, 5,000 units will be demanded per month when the market price is $5.00. With every one-dollar price change, the quantity demanded changes by 1,000 units.
On the supply side, 2,000 units will be supplied per month when the market price is $2.00. With every one-dollar price change, the quantity supplied changes by 2,000 units.
1. Establish demand and supply schedules for this market, and prepare a demand and supply diagram.
(Hint: The demand and supply equations will be helpful.
Recall from algebra the point slope formula for a line y - y[1] = slope (x - x[1]),
or in this context P - P[1] = slope (Q - Q[1]). For P[1 ]and Q[1 ]you need to use the coordinates of one of the points on the line. The slope is the change in the price that occurs as you move along
the line for each unit change in quantity.)
2. Determine the equilibrium price and equilibrium quantity in this market.
3. Now assume that the government imposes a 20% excise tax on this product. Determine the impact of this tax on the market supply of this good, and draw the new supply curve in your diagram.
4. Determine the new equilibrium price and equilibrium quantity in this market.
5. Determine the amount of monthly tax revenue which the government derives from this tax.
6. Determine the "incidence" of this tax (i.e., how the "burden" of this tax is shared between buyers and sellers).
7. Now determine the price elasticity of demand by calculating the elasticity coefficient, using the "mid-points formula," for the price range between the original and the new equilibrium price.
Hint: Recall the midpoints formula uses averages in getting the percentage change in quantity and percentage change in price to avoid the problem of elasticity estimates changing depending on the
direction taken between the points. The basic formula for elasticity is
| (Q[2] - Q[1])/ (Q[1] + Q[2])|
| (P[2] - P[1] )/(P[1] + P[2])|
Is the demand in this price range elastic, unit-elastic, or inelastic?
8. Use the "total revenue test" to give a rough check of your answer to the question above.
9. Then determine the price elasticity of supply along the original supply curve, for the same price range and using the same formula.
Is the supply in this range elastic, unit-elastic, or inelastic?
10. Finally, repeat the procedure to determine the price elasticity of supply along the new supply curve, for the same price range. | {"url":"http://archive.csustan.edu/econ/peterson/hw1505s05.html","timestamp":"2014-04-20T13:43:24Z","content_type":null,"content_length":"36550","record_id":"<urn:uuid:9901816c-c295-4094-bc94-b33fb8dc54bc>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
(9a+10)(9a-10) Its difference of squares
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
so you got it?
Best Response
You've already chosen the best response.
could u tell me the steps
Best Response
You've already chosen the best response.
i mean could u show it
Best Response
You've already chosen the best response.
I realized that we were missing the middle term Usually the equation is in the form \(ax^2+bx+c\) In this case the equation was in this form \(ax^2+c\) it was missing the bx So that means its
difference of squares meaning that middle term canceled out
Best Response
You've already chosen the best response.
oh i get it thanx pippa u are a life saver
Best Response
You've already chosen the best response.
I am back should i finish off explaining or did u get it?
Best Response
You've already chosen the best response.
could u finish it
Best Response
You've already chosen the best response.
So anyways it is in the form \(a^2-b^2=(a-b)(a+b)\) \(81x^2-100=(9x-10)(9x+10)\) In this case our a=9x and b=10 So \(a^2=(9x)^2=81x^2 \text{ and } b^2=(10)^2=100\)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/505e7106e4b0583d5cd1bed7","timestamp":"2014-04-18T16:21:14Z","content_type":null,"content_length":"56016","record_id":"<urn:uuid:d880aa6f-538b-4502-8236-2b2cdedb4355>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Insertion sort
02-25-2010 #1
Registered User
Join Date
Mar 2009
Insertion sort
void insertionSort()
node* temp=head;
while(temp->next != NULL)
node* second =temp->next;
if (second->exponent==temp->exponent)
if (head->exponent > second->exponent)
head= second;
node* current = head->next;
while(current->exponent < second->exponent)
I have made insertion sort funtion for a doubly linked list , but its not working , i cant figure out the errors.If anybody can guide a bit. =|
Perhaps you've overcomplicated it. You could ignore the case where an item is greater than all items inserted thus far:
template <class TItem>
void InsertionSort(TItem *&head) {
TItem *newList = NULL, *oldList = head;
while (oldList != NULL) {
//take an item off the old list
TItem *temp = oldList;
oldList = oldList->next;
TItem *curr = newList, *prev = NULL;
//find the place in the new list for this item
while (curr != NULL) {
if (*temp < *curr)
prev = curr;
curr = curr->next;
//put the item in the new list
if (prev == NULL)
newList = temp;
prev->next = temp;
temp->next = curr;
head = newList;
Sure that's slower for an already sorted list, but we're not using InsertionSort for sorting long lists anyway, if we know what's good for us.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
I dun know how to work with the templates i would be thankful, if u point out where my code needs to changed i would be thnakful.
I have made a doubly linked list of polynomials and now i am using operator overloading to add two polynomials. The add function see its the exponents of the two polynomials are same and then add
them. It returns the only node after addition whose addition is being done, but i want to get the whole list of polynomials , whether they are added or not.
1x^1 2x^2 when added with 1x^1 3x^6
2x^1 2x^2 3x^6
So kindly tell how to insert the terms (those whose exponents dont match) to the result list.
list operator + (const list& k)
node* first;
first = head;
node* second;
second = k.head;
node* result=head;
list r;
while (first != NULL)
while (second != NULL)
if (first->exponent == second->exponent )
result->coefficient =first->coefficient + second->coefficient;
return r;
Just ignore that first line and call it with any type you like that has a member called 'next'.
Okay so the code I posted was for a singly-linked list, but it's easy enough to add a few more lines to hook up the links in the opposite direction.
The thing about your code is that it's uncommented and not easy to understand. To be honest, there is no one-line change to make that fixes it. Even this line:
while(temp->next != NULL)
is wrong because it will crash when head is NULL.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Ok thnks . And what about the second problem
if (first->exponent == second->exponent )
result->coefficient =first->coefficient - second->coefficient;
// AFTER INSERTION I WANT TO DELETE THE NODE OF FIRST AND SECOND whose exponent are same, but m having problem with this.
02-25-2010 #2
02-25-2010 #3
Registered User
Join Date
Mar 2009
02-25-2010 #4
Registered User
Join Date
Mar 2009
02-26-2010 #5
02-26-2010 #6
Registered User
Join Date
Mar 2009 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/124191-insertion-sort.html","timestamp":"2014-04-18T22:09:16Z","content_type":null,"content_length":"61652","record_id":"<urn:uuid:512db6f1-1821-4080-acb4-3cbef5868960>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Right Triangles
The longest side of a right triangle is always directly across from the 90 degree angle. This side is called the hypotenuse.
The other two sides are often called
the legs. For now, it really doesn't
matter which one of the legs you call
a or b, as long as you make one a
and the other b.
Because the 3-4-5 right triangle is simple to work with,
let's use it to show how the Pythagorean formula
verifies that a triangle is a right triangle.
Since the numbers on both sides of the = mark are the same, the 3-4-5 triangle is a right triangle.
Please note: The Pythagorean formula is a rule! To verify that a triangle is a right triangle, use the
If the numbers on each side of the = sign are equal, then the triangle is a right triangle.
Right triangle practice: Check these triangles to see which ones are right triangles.
a b c a b c
(1) 6 8 10 (6) 9.99 13.32 16.65
(2) .48 .64 .8 (7) 4.5 6 7.5
(3) 1 1 1 (8) 7 10 13
(4) 8 6 10 (9) 9 2.5 13
(5) 120 160 200 (10) 2 2.6666 3.3333
Lower case letters (a, b, c)
are used as variables for the sides.
Upper case letters (A, B, C)
are used for the angles opposite
those sides. | {"url":"http://mathforum.org/sarah/hamilton/ham.rttriangles.html","timestamp":"2014-04-18T03:14:52Z","content_type":null,"content_length":"6174","record_id":"<urn:uuid:99cb3246-232e-484e-b262-184aafac20cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support.
Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with
the best way to expand their education.
Below is a small sample set of documents:
University of Wollongong, Australia - FIN - 222
Chapter 16 - Capital structure policyMultiple Choice31. A company's capital structure is the mix of financial securities used to finance its activitiesand can include all of the following excepta.
shares.b. bonds.*c. equity options.d. preference sh
University of Wollongong, Australia - FIN - 222
Chapter 17 - Dividends and dividend policyMultiple Choice31. Which type of dividend is most likely to be used to distribute the revenue from a one-timesale of a large asset?a. Regular cash dividendb.
Unusual dividend*c. Special dividendd. Interim d
Shandong University - ECON - 102
Copyright Zheng Zhenlong & Chen Rong, 20081 Copyright Zheng Zhenlong & Chen Rong, 20082 A B 5 1000 A 6 B 8-1 8-1 A B Copyright Zheng Zhenlong & Chen Rong, 20083 8-1 A B A B 1.2% A B 0.7%
A A B A 6% 1000 B LIBOR+1% 1000 A B B A Copyright Zheng Z
Oregon State - ECON - 201
Econ 201 Lecture 7When all relevant production costs are incurred by sellers, and when all relevant product benefits accrue to buyers, the marketequilibrium price and quantity are socially
optimal.PSP*DQ*QWhen Smart for One is Dumb for AllProduc
Northwest Missouri State University - ECON - 102
Answers to Odd-Numbered Exercises
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
Northwest Missouri State University - ECON - 102
York University - ADMS - 4561
If;1z-) r.-YORK UNIVERSITYMID-TERM EXAM - SUMMER (S2) 2011AP/ADMS 4561 TAXATION OF PERSONAL INGOME IN CANADANumber ofQuestions: 4Number of Pages:8 (please ensure you have all pages before you start)
Time allowed:2 hours (70 marks)lnstructions
York University - ADMS - 4561
YORK UNIVERSITYMID.TERM EXAM - WNTER 2O1OAP/ADMS 456I TAXATION OF PERSONAL INCOME IN CANADANumber ofQuestions:4Number of Pages:8 (please ensure you have all pages before you start)Time allowed:2
hours (70 marks)lnstructions (Please read before y
York University - ADMS - 4561
I4561-3:9122109ADMS 4561 Lecture 3 - Employment Income?0I I * chnmges irr rcd-Iasf u;rclilted .I*nrrary 17,ReadingsFIT Ch. 3 (entire chapter - some will be review, some will be new)(bring with you)
Income Tax Act: Sections 5,6,'/,8, 13(7)(g),67.1-
York University - ADMS - 4561
I4561-41Lecture 4: Investments and Income from Propertyrxpe*;r$*ti ,$*tm!? tcfw_ * e kaltgesim- Part I -trrtsti'ec$ADMS Lectures 4 and 5 build on ADMS 3520 Lecture 3 to coverIncome from Property (FIT
Chapter 6).Lecture 4 covers solre simple tg
York University - ADMS - 4561
Lecture 3 Problem SetProblem1-last UXXI*letl_oLllr.!J I I. cfw_!:hnmxcs isr r*rt!- Eric Employee(Disability insurance, loans and company car)Eric Employee is employed by Pubco, a public company in
Ontario.1.Eric's pay slips for 2010 show the foll
York University - ADMS - 4561
ADMS 4561 Section A Summer 2011 Makeup Quiz (20 marks: 40minutes)YOU MUST ANSWER THE QUIZ ON THIS ANSWER SHEET. ANSWERS WLL NOT BEMARKED ELSEWHERE.You may use a non-programmable calculator and the
Quiz/ Midterm 1 lnformation Sheetincluded with this q
York University - ADMS - 4561
lntroduction to Federal lncomeivd(I[2,OOO LIABILITY OF INDIVIDUALS FOR INCOME TAXThe major issue in this area is the residence of an individual. Residency has become moreand more important in recent
years as taxpayers with increased mobility attempt
York University - ADMS - 4561
, a, ltegCh. 3/Employment lncome,.t2.Ta:r Act specifies how the quantity of the benefitqill be calculated.primary benefits:,- ., ,r,r ' .-These rules consider two11 7,a, , -,(1) the benefit that the
employee receives from having the use of a c
York University - ADMS - 4561
ta799Ch. l3/Planning the Use of a Corporation and Shareholder-Manager Remunerationgenuine liability to exist, there must be an enforceable claim by the creditor with a;easonableexpectation ttrat the
debt will in fact be paid by the debtor."/I n13,O7O
York University - ADMS - 4561
1107Ch. l9/lnternational Taxation in Canadatll9,OOO LIABTLITY FOR CANADIAN TAX:ItJl9,O1O ResidentsThe concept ofresidency is discussed in detail in Chapter 2. Residency is the basis underwhich Canada
has jurisdiction to tax income. Citizenship is i
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
22XX7.27.2 XXXXXX14.4
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM (1) (20 points)mA block of mass m is connected to a block of massby a massless string that2mpasses over a massles and frictionless pulley. The blockis connected to an2gideal vertical spring
with spring constant k ! hich is fixed to the f
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM (1) (25 points)AA driver wants to jump across a 90-m wide canyon with hisBycar. The car starts from rest at point A, travels with a constantacceleration, moves horizontally off the cliff on
the highergside through point B into the air and
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM (1) (25 points)Three identical thin rods are put together to form a rigid Y shaped object withCthree arms in a vertical plane as shown in the figure. An object of mass m w itha hole through
its center can slide without friction on the arm BC t
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM 1 (20 points)Two blocks of mass mA = 2.0 kg and mB = 8.0 kg are placed against a wallmBmAas shown in the figure. A constant force of magnitude F = 5 0 N is applied tothe 2.0-kg block, making
an angle = 37 with the horizontal. Theygcoeffici
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM 1 (25 points)yA projectile is fired from the origin O with the initial speedgv0 = 100 m/s at an angle 0 = 53 from the horizontal. Thexprojectile just passes horizontally above a hill of
height H and!v0hits a point A on the other side, whi
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM 1 (25 points)A block of mass m = 10 kg is attached to the free end of akmassless spring (k = 70 N/m) that has its equilibrium length, asshown in the figure. A horizontal force FP = 200 N is
applied toFPthe block which is initially at rest. T
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM (1) (25 points)!!Two constant forces F1 and F2 act on a 4.0-kg particle!moving in the xy plane. The force F1 has a magnitude of 5.0 !F2N and makes an angle of 37 with the positive x axis,
while!the force F2 has a magnitude of 15 N and mak
Middle East Technical University - PHYSICS - PHYS 105
PROBLEM (1) (25 points)A satellite of mass m circles a uniform spherical planet of unknown mass in a circular orbit of radius r with a period T. Themagnitude of the gravitational force exerted on the
satellite by the planet is F. Express your answers in
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (20 points)An insulating solid sphere of radius a has a uniform volume charge density .!Express your answers in terms of some or all of the given quantities andrelated constants as
needed.a(a) (6 pts) Using Gauss's law, find the magnitud
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (25 points)Two square plates of sides L are placed parallel to each other with separation dsuch that L > d. The plates carry uniformly distributed static charges +Q0 andQ0. A metal
block has width L, length L, and thickness slightly less th
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (20 points)yAn infinite flat slab of nonconducting material has thicknessSlab2L and a uniform volume charge density +. Take the x axis+!to be along the direction of the slab's thickness
with theC3L L Oorigin at the center of the slab.
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (25 points)yA thin glass rod is bent into the shape of a semicircle of radius R. Anegative charge Q is uniformly distributed along the rod. It is placedin the xy-plane with its center at
the origin, as shown in the figure.QExpress your a
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (25 points)ATwo cylindrical wires A and B are made of differentBmaterials and each have length LA = LB = L. Wire A and Bhave the diameters DA = 2D and DB = D andresistivitiesA = and B =
3, respectively. The wires are joined asshown in
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (25 points)Positive charge Q is distributed uniformly along a thin glassyrod of length L which lies along the x axis between x = L/2and x = +L/2. An electron of mass m and charge e is
placed+on the positive x axis at x = L/2 + r, a const
Middle East Technical University - PHYSICS - PHYS 106
PROBLEM (1) (25 points)A parallel plate capacitor of plate separation d is half filledwith a dielectric slab of dielectric constant K = 5, as shownAird/2din the figure. With the slab in place, it is
charged to aKpotential difference V and then dis
Air Force Academy - CS - 321
ECS Case IntroductionIn this section you will learn the background information that will prepare you to understand and complete each of the milestones of this case study. This information includes a
history of the business, a description of the business'
Air Force Academy - CS - 321
Week 1: Milestone 1 DescriptionCase BackgroundIn this milestone, you will prepare a Request for System Services Form, which is the trigger for the Preliminary Investigation Phase. Also, you will use
fact-finding techniques to extract and analyze informa
Concordia Canada - COMP - 232
Lecture Notes on DISCRETE MATHEMATICSEusebius Doedel1LOGIC Introduction. First we introduce some basic concepts needed in our discussion of logic. These will be covered in more detail later. A set is
a collection of "objects" (or "elements"). Examples.
Old Dominion - FIN - 432
Old Dominion - FIN - 432
Old Dominion - FIN - 432
Old Dominion - FIN - 432
Old Dominion - FIN - 432
Old Dominion - FIN - 432
Old Dominion - FIN - 432 | {"url":"http://www.coursehero.com/file/7037099/Tut06-MCQs-ch13CostOfCapital/","timestamp":"2014-04-20T23:42:53Z","content_type":null,"content_length":"62902","record_id":"<urn:uuid:cb63099f-ec97-4e97-9dfd-a1b46fa1665e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Formally Unknowability, or absolute Undecidability, of certain arithmetic
Replies: 22 Last Post: Jan 29, 2013 8:21 PM
Messages: [ Previous | Next ]
Re: Formally Unknowability, or absolute Undecidability, of certain
Posted: Jan 29, 2013 8:21 PM
On 29/01/2013 11:29 AM, Michael Stemper wrote:
> In article <xbfNs.425$OE1.376@newsfe26.iad>, Nam Nguyen <namducnguyen@shaw.ca> writes:
>> On 27/01/2013 12:07 PM, Frederick Williams wrote:
>>> Nam Nguyen wrote:
>>>> In some past threads we've talked about the formula cGC
>>>> which would stand for:
>>>> "There are infinitely many counter examples of the Goldbach Conjecture".
>>>> Whether or not one can really prove it, the formula has been at least
>>>> intuitively associated with a mathematical unknowability: it's
>>>> impossible to know its truth value (and that of its negation ~cGC) in
>>>> the natural numbers.
>>> No one thinks that but you.
>> If I were you I wouldn't say that. Rupert for instance might not
>> dismiss the idea out right, iirc.
>>> Its truth value might be discovered tomorrow.
>> You misunderstand the issue there: unknowability and impossibility
>> to know does _NOT_ at all mean "might be discovered tomorrow".
> Are you implying that GC have been proven to be indepedent of the usual
> axioms of number theory?
No. We don't even know if any usual axiom-system for the natural numbers
(e.g. PA) is syntactically consistent, or inconsistent (in which all
formulas would be provable).
For the record, I've always maintained that the issue of impossibility
to know of the _truth value_ of cGC is language-structure-centric,
independent of the notion of formal axiom-system.
There is no remainder in the mathematics of infinity.
Date Subject Author
1/27/13 Formally Unknowability, or absolute Undecidability, of certain arithmetic namducnguyen
1/27/13 Re: Formally Unknowability, or absolute Undecidability, of certain Frederick Williams
1/27/13 Re: Formally Unknowability, or absolute Undecidability, of certain namducnguyen
1/27/13 Re: Formally Unknowability, or absolute Undecidability, of Frederick Williams
1/27/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. namducnguyen
1/27/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. Jesse F. Hughes
1/27/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. namducnguyen
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. Jesse F. Hughes
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. namducnguyen
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. namducnguyen
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of Frederick Williams
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. namducnguyen
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. fom
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certain Frederick Williams
arithmetic formulas.
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certain namducnguyen
arithmetic formulas.
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certainarithmeticformulas. ross.finlayson@gmail.com
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certain arithmeticformulas. Michael Stemper
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certain namducnguyen
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certain arithmetic formulas.
1/28/13 Re: Formally Unknowability, or absolute Undecidability, of certain fom
arithmetic formulas.
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certain namducnguyen
arithmetic formulas.
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certain fom
arithmetic formulas.
1/29/13 Re: Formally Unknowability, or absolute Undecidability, of certain Graham Cooper
arithmetic formulas. | {"url":"http://mathforum.org/kb/message.jspa?messageID=8186329","timestamp":"2014-04-18T16:29:34Z","content_type":null,"content_length":"45564","record_id":"<urn:uuid:0257a7bc-b0ae-4e5c-ab46-41761199bc2a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00617-ip-10-147-4-33.ec2.internal.warc.gz"} |
Half-life is the concept of time required for half of radioactive isotope s nuclei to decay. The amount remaining is calculated as the (initial amount) (1/2) (# of 1/2 lives)^n in which the number of
1/2 lives is equal to the time elapsed over the length of half-life.
It might be wondering if some, is there's any radioactive on this earth constantly spontaneously decaying how do we actually still have radioactive isotopes if our earth is billions millions of years
old? Well actually they take a long time for things to decay doesn't happen instantly sometimes it does but sometimes they happens every millions of years such as uranium takes milli- the half life
of uranium takes millions of years meaning that when the year started we had a certain amount and now we had after 4.5 I think billion years and million years it's half of the volume uranium is
actually left on this earth so what do I mean by half life? So half life is the time required for half of the radioisotopes nuclei to decay.
So let's take strontium 90 for example, it's our radioactive isotope it's half life is 29 years so if I have originally if I have no time pass by, no year so no half life I have my in my position 10
grams of strontium 90 okay so one half life flows by and that's 29 years so 29 years later I [IB] my 10 grams that I originally had, half of it is left as the definition of half life states so that
means I have 5 grams of strontium 90 left okay. After 2 half life go by 58 years, my oroginal sample of 10 grams has gone through 2 half life. Half of it's gone after the first 29 years and again
another half after the second 29 years give me 2.5 grams left after 58 years and so on and so forth until actually no atoms are left of strontium 90, so we could do this inside the chart and do this
step by step by step by step and it'll take a long time or we can go in translate this into a formula. And the formula is basically says the amount remaining how much we have is equal to initial
amount which [IB] which is what you did 10 times one half which is exactly what we did times to the nth power so the n is the number of half lifes we went we went through so first half I said one
second life had one half times two which is squared open one half times one half which is squared so number of half life is n. Okay so we can actually breakdown that n too because we might not go we
might want to know like fraction of half life though things that do not have half life so I'm expect that and even further and said the number of half life which is we're going to denote it n equals
little t which is the time that went by divided by big T which is the length of the half life okay so let's actually put this and this problem together.
Alright so let's say we have iron 59 a radioactive isotope of iron is used in medicine to diagnose blood circulation disorders okay. The half life of iron 59 is 44 and a half days. How much of the of
the 2 gram sample remain after 133.5 days. Okay so let's actually look at our formula and plug everything in so the amount remaining is what we're looking for so that's x, initially we had 2 grams
where we're going to multiply by one half to the what power? We didn't tell us the number of half lifes we have to break it down from time elapsed which 44 and a half days oh no sorry actually 133
and a half days went by the half life was 44 and a half days, make sure when you're doing this that this, the time elapse of the time and this time the units of time for these two are the same so you
can't have seconds on top and days in the bottom so make sure they're actually the same and when you [IB] when you 2 times one half to the 30, 133 over 5 this actually if you did it Math it actually
equals 3 half lifes it's going to give me x equals 0.25, 0.25 grams of Fe59 leftover which makes sense it's less than half because it's more than one half life actually 3 half lifes went by so that
actually should be a lot less than 2 grams so a lot of it decomposing and went off into change into something else.
Let's do something harder you might see a problem that's a little bit tougher and that's this so let's say we're talking about carbon 14 and carbon 14 is a radioactive isotope of carbon 12 I mean its
actually [IB] atmosphere with atmosphere has plenty of it and it also it's always happening. There is a certain radio, ratio of this isotope in the atmosphere always so in this atmosphere it's
actually used in, very helpful for plants to use for photosynthesis the plants then take this and use it for photosynthesis, animals then eat the plants and so now all of the living actually have
some ratio of carbon 14 that's equal through whatever is in the atmosphere, so all living objects have this in their system and it's totally fine. What happens is over time it'll breakdown so we have
carbon 14 and it will break down because it is radioactive and it will go through beta decay and it will change into nitrogen no big deal and this takes 5,730 years to occur okay so it's not going to
this isn't actually happening in our lifetime but what happens when you die? This will start breaking down so we're not going to be adjusting that carbon 14 anymore and we're going to start it's
going to stop breaking down and decomposing into this nitrogen.
So then if a fossil C-14 ratio is one sixteenth that of the atmosphere. How old is the fossil? So we're going to actually say the amount remaining is one sixteenth of the original ratios was, so
we're going to say one sixteenth. The initial amount was 1 a ratio was the same as that of the atmosphere okay great times one half we want to figure out the number the time elapsed so we're looking
for little t and we know the half life is 5,730 years. Now I'm going to go into logarithms I'm not going to explain logarithms if you want to see more on logarithms you might want to check out the
Math videos but I'm just going to assume that you know logarithms and say because I'm looking for the exponent I'm going to take the natural log of base 5 so I'm going to say the ln of natural log of
one sixteenth is equal to the natural log of 1 one half, because the natural log of 1 is 1. Natural log of one half times 2 to the 5730 and when natural logs with natural logs I can take this
exponent and put it in front so it's now becomes, now it's not exponent anymore so I'm going to say okay the natural log of one sixteenth is equal to t, let me see, t over 5730 times the natural log
of 2 so I'm sorry one half, I'm going to divide by the natural log of one half and so natural log of one sixteenth over the natural log of one half, I'm going to have to bring my calculator up to do
that, this is 4 equals t over 5730 so then I'm going to just do basic algebra multiply 4 times 5730 and that gives me 22,920 years have passed by.
This is good this is called carbon dating and it's actually a good way to figure out how old a fossil is and this is how this ratio that they use and it's actually use half lifes, so half life is a
is a great way to figure how old something is or how much you're going to have after a certain period of time.
half life decay | {"url":"https://www.brightstorm.com/science/physics/nuclear-physics/halflife/","timestamp":"2014-04-19T07:28:52Z","content_type":null,"content_length":"62188","record_id":"<urn:uuid:681dc0d8-b8fd-4123-a3ce-bc010603ab46>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
Absolute Value Inequality
January 7th 2009, 07:33 PM #1
MHF Contributor
Jul 2008
Absolute Value Inequality
A tool and die shop makes a metal pull tab for a pop can. The length of the tab is 1.2 inches. The measurement may have an error of as much as 0.002 inches. Write an absolute value inequality
that shows the range of the possible lengths of the tabs. Solve the inequality.
Since |x- 1.2| measures how much x differs from 1.2, that is just $|x- 1.2|\le 0.002$. That, of course, is the same as [tex]-0.002\le x- 1.2\le 0.002[tex] so $1.2- 0.002\le x\le 1.2+ 0.002$ or
$1.198\le x\le 1.202$.
January 8th 2009, 04:27 AM #2
MHF Contributor
Apr 2005
January 9th 2009, 08:02 AM #3
MHF Contributor
Jul 2008 | {"url":"http://mathhelpforum.com/algebra/67263-absolute-value-inequality.html","timestamp":"2014-04-18T04:56:17Z","content_type":null,"content_length":"36868","record_id":"<urn:uuid:fdbccf09-e804-48d3-ab24-9bfe11a6bf03>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wilshire Park, LA Algebra Tutor
Find a Wilshire Park, LA Algebra Tutor
...Each of these girls saw improvements in their SAT Math II scores after two weeks of rigorous tutoring: one's score improved a total of 70 points, from 630 to 700. Math is the subject about
which I am most passionate, and I work extra hard to impart that excitement and desire to improve with all ...
60 Subjects: including algebra 2, reading, algebra 1, Spanish
...Additionally, since I am certified in SAT prep, if their SAT score is not competitive enough for the desired school, I offer SAT tutoring to help them raise their score. I have taken
Biostatistics as part of my studies during my first two years of medical school at NYCOM. I am familiar with fin...
43 Subjects: including algebra 2, algebra 1, English, reading
...Generally, I have taught beginning to intermediate voice and piano and I have tutored up to 3rd semester theory (for college). The best kind of learning is when the teacher enables their
student to become self-sufficient and to follow their own intuition. Ballroom dance also has been a passion o...
69 Subjects: including algebra 1, reading, chemistry, Spanish
...I received my Bachelors Degree at Georgetown University in English Literature. Additionally, I have practical experience with every elementary-level subject. I am currently employed as an
elementary school teaching assistant, and have worked as an English teacher abroad.
28 Subjects: including algebra 1, English, reading, writing
...Upon completion of my medical education I entered into a surgical residency. After completing a year of residency I decided that I no longer was interested in pursuing a career in surgery, and
so I decided to enter another field of medicine that I am currently applying for. In the interim of my...
18 Subjects: including algebra 1, algebra 2, chemistry, English
Related Wilshire Park, LA Tutors
Wilshire Park, LA Accounting Tutors
Wilshire Park, LA ACT Tutors
Wilshire Park, LA Algebra Tutors
Wilshire Park, LA Algebra 2 Tutors
Wilshire Park, LA Calculus Tutors
Wilshire Park, LA Geometry Tutors
Wilshire Park, LA Math Tutors
Wilshire Park, LA Prealgebra Tutors
Wilshire Park, LA Precalculus Tutors
Wilshire Park, LA SAT Tutors
Wilshire Park, LA SAT Math Tutors
Wilshire Park, LA Science Tutors
Wilshire Park, LA Statistics Tutors
Wilshire Park, LA Trigonometry Tutors
Nearby Cities With algebra Tutor
Bicentennial, CA algebra Tutors
Century City, CA algebra Tutors
Farmer Market, CA algebra Tutors
Glendale Galleria, CA algebra Tutors
La Tijera, CA algebra Tutors
Lafayette Square, LA algebra Tutors
Miracle Mile, CA algebra Tutors
Oakwood, CA algebra Tutors
Pico Heights, CA algebra Tutors
Playa, CA algebra Tutors
Rimpau, CA algebra Tutors
Sanford, CA algebra Tutors
Santa Western, CA algebra Tutors
Toluca Terrace, CA algebra Tutors
Westwood, LA algebra Tutors | {"url":"http://www.purplemath.com/Wilshire_Park_LA_Algebra_tutors.php","timestamp":"2014-04-21T07:09:35Z","content_type":null,"content_length":"24373","record_id":"<urn:uuid:061b06bd-3f58-474f-92f4-071570761be5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00637-ip-10-147-4-33.ec2.internal.warc.gz"} |
Karl Gauss
Karl Friedrich Gauss was born in Brunswick, Germany in 1777. Gauss studied mathematics at the University of Göttingen from 1795 to 1798. He became the Director of the Gottingen Observatory from 1807
until his death. His father was a manual laborer but noticed his son's talents quite early. It has been said that Karl displayed incredible talent in math at a very young age. There are stories that
tell of him managing his father's business accounts before the age of 5 and apparently even catching a payroll error. When a teacher asked him to add up the numbers between 1 and 100, (to keep him
busy) Gauss quickly found a short cut for the answer 5050. A well known today....thanks to Gauss. He called mathematics "the queen of the sciences" and arithmetic "the queen of mathematics.Gauss had
three children.
At 24 years of age, he wrote a book called Disquisitines Arithmeticae, which is regarded today as one of the most influential books written in math.
He also wrote the first modern book on number theory, and proved the law of quadratic reciprocity.
In 1801, Gauss discovered and developed the method of least squares fitting, 10 years before Legendre, unfortunately, he didn't publish it.
Gauss proved that every number is the sum of at most three triangular numbers and developed the algebra of congruences.
Famous Quote:
'Ask her to wait a moment - I am almost done.' Apparently said while working, and being informed that his wife was dying.
Recommended Reads
Combination of Observations Least Subject to Error by Karl Gauss This is the first translation in English of two of Gauss' memoirs on least squares, that was initially published in 1820. It contains
his final, definitive treatment of the area along with material on probability, statistics, numerical analysis, and geodesy, presented in the original French and in English on facing pages. An
afterword by Stewart places Gauss' contributions in historical perspective. Of interest to engineers, statisticians, mathematicians, computer scientists, and historians.
Remarkable Mathematicians Author Ioan profiles 60 famous mathematicians who were born between 1700 and 1910 and provides insight to their remarkable lives and their contributions to the field of
math. This text is organized chronologically and provides interesting information about the details of the mathematicians lives.
One of my favorite things to do in the classroom is to ask the students to add all the numbers between 1 and 100 (including 1 and 100)to see how they solve the problem. It has been said that Carl
Gauss' teacher asked him to to this and within minutes he had the solution. Carl simply added the pairs of numbers and said, there must be 50 pairs of the same number. For instance, 99 + 2 = 101, 98
+ 3 = , from there he deduced that 50 pairs of 101 mean that the answer was 5050. | {"url":"http://math.about.com/cs/mathematicians/a/gauss.htm","timestamp":"2014-04-17T12:59:47Z","content_type":null,"content_length":"39755","record_id":"<urn:uuid:434de3f5-6726-4f86-95d7-5c8428b4655e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Consistent Sets of Estimates for Regressions with Errors in All Variables,”Econometrica
Results 1 - 10 of 37
- JOURNAL OF ECONOMETRICS , 2001
"... ..."
- American Economic Review , 1997
"... Do scale economies contribute to our understanding of international trade? Do international trade flows encode information about the extent of scale economies? To answer these questions we
examine the large class of general equilibrium theories that imply Helpman-Krugman variants of the Vanek factor ..."
Cited by 47 (4 self)
Add to MetaCart
Do scale economies contribute to our understanding of international trade? Do international trade flows encode information about the extent of scale economies? To answer these questions we examine
the large class of general equilibrium theories that imply Helpman-Krugman variants of the Vanek factor content prediction. Using an ambitious database on output, trade flows, and factor endowments,
we find that scale economies significantly increase our understanding of the sources of comparative advantage. Further, the Helpman-Krugman framework provides a remarkable lens for viewing the
general equilibrium scale elasticities encoded in trade flows. In particular, we find that a third of all goods-producing industries are characterized by scale. (The modal range of scale elasticities
for this group is 1.10-1.20 and the economywide scale elasticity is 1.05.) Implications are drawn for the trade-and-wages debate (skill-biased scale effects) and endogenous growth. (JEL F11, F12, D2)
Over the last 20 years, general equilibrium models of international trade featuring increasing returns to scale have revitalized the international trade research agenda. Yet general equilibrium
econometric work remains underdeveloped: it has been scarce, only occasionally well-informed by theory, and almost always devoid of economically-meaningful alternative hypotheses. There are
exceptions of course. These include Helpman (1987), Hummels and Levinsohn (1993, 1995), Brainard (1993, 1997), Harrigan (1993, 1996), and Davis and Weinstein (1996). However, this list is as short as
the work is hard. The complexity of general equilibrium, increasing returns to scale predictions has deflected empirical research of the kind that is closely aligned with theory. Surprisingly, one
empirically tractable predic...
, 2000
"... Abstract. In this paper we analyze whether regional economic integration across U.S. states conditions local labor-market adjustment. We examine the mechanisms through which states absorb
changes in labor supplies and whether industry production techniques are similar across states. There are two ma ..."
Cited by 33 (4 self)
Add to MetaCart
Abstract. In this paper we analyze whether regional economic integration across U.S. states conditions local labor-market adjustment. We examine the mechanisms through which states absorb changes in
labor supplies and whether industry production techniques are similar across states. There are two main findings. First, states absorb changes in employment primarily through changes in production
techniques that are common across all states and through changes in the output of traded goods, with the former mechanism playing the larger role. In contrast, state-specific changes in production
techniques, which are one indication of state-specific changes in relative factor prices, account for relatively little factor absorption. Second, industry production techniques are very similar
across states, especially for neighboring states and states with similar relative labor supplies. Both sets of results are consistent with productivity-adjusted FPE across either all states or
groupings of related states.
- Journal of Applied Econometrics , 1995
"... This paper applies robustness ideas from the modern statistics literature to the study of the augmented Solow model. It also tests the model in other dimensions, including sensitivity to
measurement error. The main ¯ndings are that the speed of conditional convergence is highly uncertain, that techn ..."
Cited by 30 (3 self)
Add to MetaCart
This paper applies robustness ideas from the modern statistics literature to the study of the augmented Solow model. It also tests the model in other dimensions, including sensitivity to measurement
error. The main ¯ndings are that the speed of conditional convergence is highly uncertain, that technology parameters obtained from the augmented Solow model cannot be trusted, and that the model
does not work well when attention is restricted to either the OECD or developing countries. Not only that, the equation for steady state human capital is rejected by the data. I am grateful to Steven
Klepper, John Muellbauer, Steve Nickell and Steve Redding for useful comments and suggestions. y Email jon.temple@nu±eld.oxford.ac.uk This paper examines the adequacy of the augmented Solow model for
explaining international variation in the standard of living. In particular, is technology usefully described by a common Cobb-Douglas production function, which takes human capital as one of its
- Psychometrika , 1999
"... The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior
distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameter ..."
Cited by 27 (8 self)
Add to MetaCart
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior
distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is
uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, e.g.,
output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior
distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make
inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model.
- NBER Technical Working Paper , 2004
"... Using Monte Carlo simulations, this paper evaluates the bias properties of estimators commonly used to estimate growth regressions derived from the Solow model. We explicitly allow for
measurement error in the right-hand side variables as well as country-specific effects that are correlated with the ..."
Cited by 17 (2 self)
Add to MetaCart
Using Monte Carlo simulations, this paper evaluates the bias properties of estimators commonly used to estimate growth regressions derived from the Solow model. We explicitly allow for measurement
error in the right-hand side variables as well as country-specific effects that are correlated with the regressors. Using an OLS estimator applied to a single cross-section of variables averaged over
time (the between estimator) performs best in terms of the extent of bias on each of the estimated coefficients. The Blundell-Bond system GMM estimator also performs relatively well. The
fixed-effects and the Arellano-Bond estimators overstate the speed of convergence under a wide variety of assumptions concerning the type and extent of measurement error, while between understates it
somewhat. Finally, fixed effects and Arellano-Bond bias towards zero the slope estimates on the human and physical capital accumulation variables, while the between estimator and Blundell-Bond bias
these coefficients upwards.
, 1999
"... As currently implemented, the workhorse econometric models of international trade (monopolistic competition, Heckscher-Ohlin, and gravity) do not rigorously incorporate product prices into their
estimating equations. They are thus of limited value for assessing the gains from trade liberalization ..."
Cited by 12 (0 self)
Add to MetaCart
As currently implemented, the workhorse econometric models of international trade (monopolistic competition, Heckscher-Ohlin, and gravity) do not rigorously incorporate product prices into their
estimating equations. They are thus of limited value for assessing the gains from trade liberalization. We model general equilibrium product price e#ects using the CES monopolistic competition model.
We then estimate the model and, mimicking computable general equilibrium (CGE) models, use the model to estimate the compensating variation associated with trade liberalization.
- J. Econometrics , 2008
"... This paper addresses the problem of data errors in discrete variables. When data errors occur, the observed variable is a misclassified version of the variable of interest, whose distribution is
not identified. Inferential problems caused by data errors have been conceptualized through convolution a ..."
Cited by 11 (1 self)
Add to MetaCart
This paper addresses the problem of data errors in discrete variables. When data errors occur, the observed variable is a misclassified version of the variable of interest, whose distribution is not
identified. Inferential problems caused by data errors have been conceptualized through convolution and mixture models. This paper introduces the direct misclassification approach. The approach is
based on the observation that in the presence of classification errors, the relation between the distribution of the “true ” but unobservable variable and its misclassified representation is given by
a linear system of simultaneous equations, in which the coefficient matrix is the matrix of misclassification probabilities. Formalizing the problem in these terms allows one to incorporate any prior
information − e.g., validation studies, economic theory, social and cognitive psychology − into the analysis through sets of restrictions on the matrix of misclassification probabilities. Such
information can have strong identifying power; the direct misclassification approach fully exploits it to derive identification regions for any real functional of the distribution of interest. A
method for estimating the identification regions and construct their confidence sets is given, and illustrated with an empirical analysis of the distribution of
- Journal of Economic Integration , 2000
"... A large share of world trade, especially among the OECD countries, is two-way trade within industries, so called intra-industry trade. Despite this, few attempts have been made to examine why
countries export some products within industries, whereas they import others. We examine this issue, by mean ..."
Cited by 11 (0 self)
Add to MetaCart
A large share of world trade, especially among the OECD countries, is two-way trade within industries, so called intra-industry trade. Despite this, few attempts have been made to examine why
countries export some products within industries, whereas they import others. We examine this issue, by means of regression analysis, by examining the shares of IIT that are vertical and horizontal
and by examining price dispersion. The regression results suggest that an abundant human capital endowment as well as a large domestic market increases the quality of OECD-countries’ manufacturing
exports, thus offering support for comparative advantage models as well as newer geography models. We do not, however, find support of increased concentration of production within industries. But,
human capital becomes an increasingly important determinant of quality over time. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1701332","timestamp":"2014-04-18T19:21:13Z","content_type":null,"content_length":"39313","record_id":"<urn:uuid:90db7e60-af3a-49a0-83b3-499f7ff3e0b9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trig identities: if tan x = -4/3, 0<x<pi, find tan(x + pi), [Archive] - Free Math Help Forum
05-14-2008, 09:34 PM
Hi. I seem to have some trouble when an equation with tan surfaces. If someone could help me it would be great. want to know if i actually even started correctly.
1) suppose tan x = -4/3 with 0 < x < pi. Find the exact value of tan (x + pi)
If i start this with the sum/difference identity of tan (x + pi) = tan(x) + tan (pi) / 1 - tan(x) tan (pi)
Then this should get me through to the answer right?
I hope.
2) suppose tan x = -4/3 with 0 < x < pi. Find the exact value of cot x.
Ok for this one I chose a double angle identity but the more i looked at my steps and solution the more i doubted it. Suffice to say i erased that answer.
Any help would be appreciated.
Thank you | {"url":"http://www.freemathhelp.com/forum/archive/index.php/t-56381.html","timestamp":"2014-04-17T12:44:41Z","content_type":null,"content_length":"8786","record_id":"<urn:uuid:010501fc-5134-4a40-8e7b-84238e68fd3e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Algebra
Thanks, mikau!
A few can be solved fully, some can have answers using the "hide" tag, and some have no answers at all. This is to help people practice for fun or for exams.
It is totally fine to just post questions. The answers can be given by others, possibly even people asking "is this right?"
I am happy with this to be in full "forum" style, chatty, and with mulitple ways to solve if applicable. I think this will help make the exercises "come alive".
But I think it also good to keep the exercises "on topic", so I will come along later and edit these posts, probably incorporate them in the "sticky" topic.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=38742","timestamp":"2014-04-19T04:58:43Z","content_type":null,"content_length":"33939","record_id":"<urn:uuid:8d57d03c-87dc-4a8b-999a-cbd802400ab0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: The correct order for testing for different kinds of endogeneity
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: The correct order for testing for different kinds of endogeneity
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: The correct order for testing for different kinds of endogeneity
Date Thu, 26 Mar 2009 15:34:02 +0000 (GMT)
--- On Thu, 26/3/09, Stephen Armah wrote:
> should I first test for auto-correlation and
> heteroskedasticity in Stata before testing for
> endogeneity or is is better to do the reverse?
Any such sequence you shoose will strictly speaking
be wrong, unless you can frame the sequence of
tests in way that is similar to (Marcus, Peritz,
Gabriel 1976). I wouldn't worry too much about that
The biger problem is that testing of model assumptions
is a pretty horrible idea anyhow. The very purpose
of a model is to simplify reality, ergo the
assumptions are supposed to be wrong, otherwise
the model would be a lousy simplification.
However, we don't want the assumptions to be
too wrong, otherwise the results would not say much
either. Statistical testing is not designed for this
kind of tradeoff: The logic behind testing is that
a hypothisis is either true or false, while when
we do model selection we already know that the
assumption is false but we want to see whether an
assumption is either useful or not useful. For
this reason graphical investigations of the various
model assumptions are by far preferable
I know that this is a rant and that opions differ
on this. If a reviewer/editor/supervisor/peer asks
you for such a test, than you should just give it
to them. Just don't take those tests too serious,
and don't forget to look at the graphs.
-- Maarten
Marcus, R, E. Perity, and K.R. Gabriel. 1976. On
closed testing procedures with special reference
to ordered analysis of variance. Biometrika
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-03/msg01389.html","timestamp":"2014-04-19T23:18:02Z","content_type":null,"content_length":"7747","record_id":"<urn:uuid:23880612-a63c-4a63-9e5d-1da6f1fbec06>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
Research Polymath
Hardy used to say that number theory is the purest subject. He was proud of the fact that it had no applications in that time. Now however, it has many applications to cryptography, code breaking
etc.. In the same way, quantum entanglement does not seem to have any real practical applications. However this following problem and thought process helped someone solve a problem he was having
trouble on involving statistical entanglement. So it goes to show you, many subjects, no matter how pure they are, can have real world applications.
Question. Consider three cities: Philadelphia, New York City, and Trenton. These can be represented by subsystems $A$, $B$ and $C$. Now if we measure $A$, $B$ and $C$ separately, it is impossible to
obtain information about the entire system. The information is encoded in the nonlocal correlations between the subsystems.
This problem helped someone in a real world setting.
Principles of Mathematical Analysis by Walter Rudin
Chapter 1: The Real and Complex Number Systems
One thing I notice about Rudin in general is that he tends to “pull rabbits out of hats.” For instance, on pg.2, he does the following: Let $A = \{p: p^2<2, \ p \in \mathbb{Q}, \ p>0 \}$ and $B = \
{p: p^2>2, \ p \in \mathbb{Q}, \ p>0 \}$. He wants to show that $A$ contains no largest element and $B$ contains no smallest. Then all of a sudden he does the following: For every $p>0$ associate $q
= p-\frac{p^2-2}{p+2} = \frac{2p+2}{p+2}$. Then consider $q^2-2$ to show that $q \in A$ or $q \in B$.
Collaborative Learning
As I mentioned in my previous post, I want to experiment with collaborative learning as well. My plan is to start with a Rudin’s Principles of Mathematical Analysis and apply a “polymath approach.”
Although I think each post will be a new chapter. Unlike lectures, I think this approach will allow people to learn actively and with questions. Also, one can get many different perspectives of the
Quantum Entanglement: Discussion Thread
First of all, we need to gather some resources to tackle this problem. I found John Preskill’s notes on quantum computation to be very valuable. But what about $\mathbb{Z}_3$ is special? Moreover,
why are we considering spaces of the form $\mathbb{Z}_p$ where $p$ is prime? As mentioned before here, quantum entanglement can be modelled more generally than tensor products in hilbert spaces. We
can consider cartesian products of various sets. But will this general view help us tackle our more specific problem?
Proposal: Quantum Entanglement
Polymath Proposal
The goal of this project is to study “entanglement” in finite vector spaces. Entanglement is the hallmark of quantum mechanical systems. Quantum mechanics is modelled by vector spaces over $\mathbb
{C}$ the field of complexes. We propose to consider entanglement in finite dimensional vector spaces over $\mathbb{Z}_p$ ($p$ is prime). We will think of this as providing a test bed (or toy model)
for general conjectures about entanglement. One way to characterize entanglement over any field is described as follows.
Suppose we have two non-interacting subsystems $A$ and $B$. Each of these subsystems is described by Hilbert Spaces $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$. Then the Hilbert space for the composite
system is $\mathcal{H}_{A} \otimes \mathcal{H}_{B}$. Now if the state of the composite system $AB$ is $|\psi \rangle_{A} \otimes |\phi\rangle_{B}$, then this is a product state. More generally,
suppose we have different bases for $\mathcal{H}_A$ and $\mathcal{H}_B$. Call these $\{|i \rangle_A \}$ and $\{|j \rangle_B \}$. Then we can represent the state of the composite system as $\sum_{i,j}
c_{ij}|i \rangle_{A} \otimes |j \rangle_{B}$.
Now if $c_{ij} eq c_{i}^{A}c_{j}^{B}$, then we have an entangled state. In other words, you cannot assign pure states to either subsystem $A$ or $B$. The key problem for us is whether entanglement
exists in finite vector spaces. We will analyze systems such as $\mathbb{Z}_3 \times \mathbb{Z}_3$. Simple systems like this will lead to further insight to vector spaces such as $\mathbb{Z}_p \times
\mathbb{Z}_p$ (where $p$ is prime). More specifically, does entanglement exist in all finite vector spaces? .
The purpose of this blog is to encourage collaborative research on tractable research problems. This is much in the spirt of Terence Tao’s polymath blog. I might experiment with “collaborative
learning” as well. For example, suppose we want to go through Rudin’s Principles of Mathematical Analysis. Perhaps we can apply a polymath approach to this task as well. | {"url":"http://courantklein.wordpress.com/","timestamp":"2014-04-18T10:59:05Z","content_type":null,"content_length":"32479","record_id":"<urn:uuid:5d6d82a4-1366-4370-b42d-ef75ffc1ea40>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |
In an arbitrary abelian category, does chain complex homology commute with coproduct?
up vote 9 down vote favorite
On page 55 of Weibel's Introduction to homological algebra the following passage appears:
Here are two consequences that use the fact that homology commutes with arbitrary direct sums of chain complexes
I understand why homology commutes with arbitrary direct sums when the direct sum of a collection of monics is a monic (i.e the direct sum functor is exact) but I was under the impression that there
were abelian categories where the direct sum functor is not exact. After a bit of thought, I realised that I don't know an example of an abelian category in which the coproduct functor is not exact.
Sheaves of abelian groups on a fixed topological space give an example of an abelian category in which the product functor is not exact.
Question 1: Is the passage from Weibel's book correct? If so, then why?
Question 2: Is there an example of an abelian category where the direct sum functor is not exact?
homological-algebra ct.category-theory
Or are you really after the result that homology is a functor which commutes with direct sums? Consider the category $S$ of chain complexes $A_i\to B_i \to C_i$ (yes, only three terms) (say of
R-modules, which should be enough by standard embedding theorems). Homology is a functor $S \to Mod_R$. Does this preserve sums? – David Roberts Sep 18 '11 at 23:49
2 @David: The ``standard embeddings" do not work here because they do not preserve infinite direct sums. As I said above, it is clear that homology commutes with direct sum in categories where
coproducts are exact. (R-modules are such a category) – Daniel Barter Sep 19 '11 at 0:38
add comment
2 Answers
active oldest votes
I couldn't think of a natural example of an abelian category in which direct sums are not exact (I think this is called axiom AB4). For example, sheaves of abelian groups and
R-modules both have this property. However there are natural examples of abelian categories where direct products are not exact (i.e. not satisfying AB4*), for example, the category
of abelian sheaves on a space.
Taking the opposite category of such a category will then give an example of a category not satisfying AB4 (albeit, not a very nice one).
Once you have such an example, homology of chain complexes in this category will not commute with direct sum:
if $A_i \to B_i$ is a sequence of monos such that $\bigoplus (f_i :A_i \to B_i)$ is not a mono, then consider the sequence of two-term complexes
$A_i \to B_i$.
up vote 17 down $H^0$ of each of these complexes is zero, but $H^0$ of the direct sum is the kernel of $\bigoplus f_i$.
vote accepted
Here is one way to see that Sh(X) does not satisfy AB4* (probably not the easiest). Assume for simplicity X = [0,1]. Take a finite open cover, $\mathcal U_i$ of X by balls of radius
$1/i$. Let $A_i$ be the sheaf
$\prod _{U \in \mathcal U_i} j_{U!} \mathbb Z_U$.
This has an epimorphism to $\mathbb Z_X$, but the direct product of all of them together is not epimorphic: taking sections over any open set $V$ will kill off any $A_i$ when no $1/
i$-ball contains $V$.
I hope this is correct!
"Once you have such an example, homology of chain complexes in this category will not commute with direct sum". Ok, this is what I was looking for. thanks Sam! – Daniel Barter Sep
19 '11 at 2:45
add comment
This was an error in the original book, and I added a correction to the errata in 2007. Homology does not commute with direct sums unless (AB4) holds, as Sam points out. -Chuck
up vote 17 down vote
5 Well, well! Welcome to MO, Professor Weibel! – Todd Trimble♦ Nov 5 '13 at 0:25
add comment
Not the answer you're looking for? Browse other questions tagged homological-algebra ct.category-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/75795/in-an-arbitrary-abelian-category-does-chain-complex-homology-commute-with-copro?answertab=oldest","timestamp":"2014-04-18T03:26:56Z","content_type":null,"content_length":"59993","record_id":"<urn:uuid:4be57087-e517-4af8-a65b-44004efefaed>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
chemistry- some help plzz totally lost on this one
Number of results: 214,067
chemistry- some help plzz totally lost on this one
what answer did you get??
Monday, July 13, 2009 at 1:15pm by help
Im totally lost on how to do redox reactions. Can any one show me the steps on how to do MnO4-1 +C7H60 => Mn2 +c7H6O2? im trying to study for a test and im totally lost. thanks in advanced.
Monday, May 31, 2010 at 3:24pm by Kelly
chemistry- some help plzz totally lost on this one
Imagine that you are in chemistry lab and need to make 1.00 L of a solution with a pH of 2.70. You have in front of you * 100 mL of 7.00×10−2 M HCl * 100 mL of 5.00×10−2 M NaOH, and * plenty of
distilled water. You start to add HCl to a beaker of water when someone...
Monday, July 13, 2009 at 1:15pm by DrFunk
Can some one explain these to me i need the factors. i am totally lost. thanks x^2+x-6 x^2+7x+10 x^2+5x+6 x^2+2xy+15y^2
Monday, March 31, 2008 at 12:12am by joyce
chemistry- some help plzz totally lost on this one
Start with 100 mL HCl, left with 83.0 means you have added 17.0 mL HCl to the beaker. Start with 100 mL NaOH, left with 88 mL means you have added 12 mL NaOH. Calculate moles HCl and moles NaOH added
and subtract to determine the excess acid in the beaker. Then use pH 2.70 to ...
Monday, July 13, 2009 at 1:15pm by DrBob222
Physics-Totally lost
I'm totally lost on this one- the earth is surrounded by an electric field with a strenth of 100 N/C at the surface. Its electric properties are the same as a point charge located at its center. How
would the gravatational force and electrical force compare foor an electron at...
Thursday, April 7, 2011 at 11:03am by Rafaela
Chemistry - Urgent
Totally lost.
Sunday, October 11, 2009 at 8:07pm by Jacob
social studies
plzz just plzz HELP me i have to get this done now plzz i dont have time to look it all up and i really need your help im begging you
Wednesday, January 2, 2013 at 11:06am by peace
plzz plzz plzz help me
Tuesday, October 23, 2007 at 12:48am by anonymous person
Physics-Please help!
I don't want someone to do this for me-I'm just totally lost and don't even know where to start-I'm totally lost on this one- the earth is surrounded by an electric field with a strenth of 100 N/C at
the surface. Its electric properties are the same as a point charge located ...
Thursday, April 7, 2011 at 3:31pm by Rafaela
plz help I'm totally lost on this one I do not even know where to begin any help would be greatly appreciated.
Sunday, July 24, 2011 at 9:53pm by ROCKY
Algebra 2
Hey I am working on math homework that involves "Standard Form to Vertex Form by Averaging." My class is just starting to learn this today, but I am totally lost still on what to do. Can you please
help? The formula for this is y=a(x - h)^2 + k the h is the x-value vertex and ...
Monday, February 8, 2010 at 11:41pm by Tessa
Another hour gone by working on this and i still can't figure it out. Im totally lost!
Wednesday, September 19, 2012 at 9:07pm by Drew
English comp
I am trying to turn a poem into prose and I am totally lost. Can some help me if I send them the poem?
Tuesday, April 2, 2013 at 7:09pm by Rhonda
|3a 2z 5m| |2k 5 6| I am trying this one but am totally lost. You gave the answer to one earlier. If I am in need of the value of the variables z Is this answer z=2
Thursday, December 17, 2009 at 1:51pm by mike
Explain how to factor the following trinomials forms: x2 + bx + c and ax2 + bx + c. I am totally lost still I read and read and the concept is lost in my head. Can someone please help me? I know
there is more than one way to Factor this.
Thursday, March 1, 2012 at 3:15am by KAYG
I am totally lost on this problem. Here is my new answers but I am sure that they are wrong. Part A; a. 1/2 b. 1/2 c. 1/3 d. 1/4 e. 1/3 f. 1/4 g. 1/3 Part B I am really lost on. Thanks for your help.
Friday, July 10, 2009 at 2:22pm by B.B.
The isolated system allows no heat lost nor gained from outside, which would totally invalidate the data.
Wednesday, November 26, 2008 at 10:32pm by bobpursley
very urgent!!!!!!!!! plzz plzz answer or attempt!
Tuesday, January 18, 2011 at 6:13pm by Dylan
I'm totally lost! Help please!: Determine the energy gained when converting 500.0 grams of steam at 100.0 degrees C to 120 degrees C.
Saturday, October 6, 2007 at 10:12pm by Alyssa
Consider the molecular level representation of a gas There are: (a picture) 6 blue dots 5 orange dots 3 green dots (i think they are the diatomic one, because it is pairs) If the partial pressure of
the diatomic gas is .750, what is the total temperature? I am totally lost, ...
Sunday, December 9, 2012 at 11:39am by Angela
totally lost!!
Friday, March 7, 2014 at 2:33pm by robert
If the Ka of a monoprotic weak acid is 4.5x10^-6 what is the pH of a .10M solution of this acid. I dont even know how to start this problem, I'm totally lost please help!
Saturday, July 13, 2013 at 11:07am by Molly B
College Chemistry-Please help me
I need to find the ionic equation of: 3Ba(OH)2(aq)+2H3PO4(aq)->Ba3(PO4)2+6H20(l) I'm totally lost on this question
Monday, October 4, 2010 at 1:36am by KELLY
human services
I am totally lost!!!
Thursday, March 4, 2010 at 10:36pm by sally
Algebra 1
Help me work these right please, I'm totally lost?
Sunday, June 20, 2010 at 10:16pm by Sam R.
how to find the LCD for 11x/yz^2, 8x/y^2z? I know the answer is yz^2 but how do you get that. can you plzz plzz explain..... this it is really important..... thnx ...... a lot..
Monday, July 6, 2009 at 7:06pm by gagan
Ned to know the similarities between the numbers 1,4,6,8,9,10,12,14,15,16,27,39,49,56,57,58,70 the question was these numbers are or are not something??? I'm totally lost.
Monday, November 9, 2009 at 7:40pm by Ryan
college chemistry
I need to know how to calculate the value of the equilibrium constant for the reaction of zinc metal in a solution of silver nitrate at 25 degrees c. I am totally lost and do not even know if I have
all the data I need. Help!
Saturday, November 28, 2009 at 3:06am by angela
Math 115 Ch 8
have no idea..i am totally lost
Saturday, December 6, 2008 at 12:32pm by Debbie
week 7 math 116 checkpoint
totally lost need help
Saturday, April 16, 2011 at 1:12am by Gracie
Math-New Questions
oooooh i totally got lost on this....
Tuesday, July 26, 2011 at 5:08pm by Jen
How much torque is applied to a bicycle wheel whose radius is 0.40 m in order to accelerate the 15 kg bike and 78 kg rider at a rate of 3.4 m/s^2? I'm totally lost on this one. Which equation do I
need to use?
Tuesday, January 8, 2008 at 9:11pm by Lindsay
Physics: Angular Width. Why is my answer wrong?
Well, that was a step my teacher did so I just followed it. However, .00082 degrees is not the correct answer either (or the variant of 8.2e-4). So I am just totally lost on this one. :/
Friday, November 12, 2010 at 10:36am by Sar
computer science
i dnt have anything. im totally lost
Wednesday, March 10, 2010 at 6:41pm by jay
i couldn't find the answer can u give me the answers pleaseeeee i'm totally lost
Thursday, December 15, 2011 at 6:28pm by nancy
Integrate dx/xlnx. What method am I supposed to use? I'm totally lost.
Monday, January 2, 2012 at 8:10pm by W
College math
(-30a^14b^8/10a^17b^-2)totally lost on me??
Saturday, September 29, 2012 at 9:30pm by Angela
how mant moles are in 1.30g of (NH4)2SO4? answer in units of mol. today our teacher briefly went over this and doing my homework i totally lost it. and cant remember for the life of me what to do
could you please help me and show me step by step what to do. thanks
Wednesday, March 18, 2009 at 9:06pm by Leona
math. plzz help
I'm currently working on this too. The first one is 5.8, the second one is 14.7, and I'm not too sure what the third one is. :)
Tuesday, March 11, 2014 at 9:57am by Nicole
Hi. I am taking a computer programming class (C++). I need some help on some programs. I tried going to my teacher - he was of no help and there are no tutors. I have a lot of code, but I'm having
problems. I looked online for help, but they're using different methods that I ...
Monday, May 21, 2007 at 5:38pm by Jay
help plzz
can some 1 help wit my synonym question plz
Sunday, March 7, 2010 at 10:27pm by beonca
That is even more confusing. I am totally lost on this problem. I know that I have to multiply the three possibilities, so (7/20)(7/20)(13/20) three times (same numbers, different order) and I get
0.07963. I thought I would add those three numbers or essentially multiply that ...
Sunday, May 11, 2008 at 8:42pm by Christine
Ok totally lost here. For someone new to this could you please explain in understandable terms?
Monday, July 1, 2013 at 10:49pm by Trish
Algebra Recursive sequence help
I need some help as I'm totally lost! I need to find the first five terms of each recursive sequence. The n-1 on the problem shows it below the a. 1. A1=3 An=2a n-1- 3 2. C1=64, C2=32 Cn= Cn-2 - Cn-1
Sunday, March 31, 2013 at 7:56pm by Kris
Math (Algebra 3)
A population of ladybugs rapidly multiplies so that the population t days from now is given by A(t) = 3000e^(.01t). How many ladybugs are present now? How many will there be after a week? im totally
lost on this one! Where do i begin and where do i end?!
Wednesday, April 2, 2008 at 4:28pm by John
math again
This is a King Henry Died By Drinking Chocolate Milk,Table.I am totally lost in this.
Wednesday, September 16, 2009 at 8:38pm by Cheyenne
Thanks anyway I have been at this for 6 hours I found someone to help me understand. I appreciate your kindness.
Tuesday, March 13, 2012 at 4:09am by TOTALLY lost
No, I know I totally understand the reason that I come to this website is to get tutored only. I will post I have done for the rest so if I did an error some one can correct me.
Wednesday, July 8, 2009 at 11:44am by Chayo
(2x)^2(3x)^3=3.548E82 How do I even do this type of math problem am just totally lost here????
Wednesday, February 27, 2008 at 4:20pm by Josh
im totally lost with my homework this is all gibberish to me. What are the coordinates of the vertex of the graph of the quadratic function y= -x^2 -1? A.(0,-1) B.(0,2) C.(-1,-1) D.(-1,0)
Wednesday, May 21, 2008 at 7:30pm by Brandy
I have ths secton to, I just don't know where to start. My learning teams, needs it tonite. And I am totally lost.
Thursday, September 30, 2010 at 9:59pm by jess
Hi i have to write an essay about The Sun Also Rises by Ernest Hemingway..i was wondering what some thesis statements would be.....is this an example of one: In this book the lost generation was a
term that described how the main characters lived their lives? im not sure, if ...
Thursday, November 19, 2009 at 12:45pm by Jake
I am totally lost on this one. 1,2,2-trimethylcyclopentane I think I have to draw the cyclopentane shape with the single bond going straight CH3 and I am lost from there. Draw the cyclopentane as a
5-cornered box. C-C |  | C  C  \  /   C Start numbering...
Thursday, March 29, 2007 at 11:34pm by Kellie
I'm totally lost and I didn't want to continue posting under my name tootie. Thinking that I wouldn't get the help since I had too many posts
Sunday, February 20, 2011 at 3:05pm by Liam
Graph the function f(x)=6-1/(x+4)^2 using transformations. State: a.the domain b.the range c.the asymptotes i do not get this at all, i am totally lost! please help!!!
Monday, April 15, 2013 at 10:00am by jenny
I don't understand posts like this one. The question is so broad that it indicates virtually no preparation by the student. Surely your text has a discussion of the subject, with examples. If there's
some part that's troubling you, like asymptotes, what it means to be ...
Thursday, November 10, 2011 at 5:25pm by Steve
I need help with a poster for radon and i need some ideas i cant think of anything. If u can can somebody help me. It's due by March 12, 2012. And plzz if can u give me some ideas. Thx. =)
Thursday, March 8, 2012 at 6:30pm by Portia
Your plight reminds me of this old proverb: "For Want of a Nail For want of a nail the shoe was lost. For want of a shoe the horse was lost. For want of a horse the rider was lost. For want of a
rider the message was lost. For want of a message the battle was lost. For want of...
Saturday, April 7, 2012 at 6:01pm by Ms. Sue
Verify that the functions f and g are inverses of each other by showing f(g(x)) = x and g(f(x)) = x f(x) = x^3 + 5 g(x) = 3sqrtx-5 ( 3 is inside check mark on the sqrt. I am sooo totally lost on
Sunday, December 18, 2011 at 12:45am by Alissa
social studies
plzz HELP me with my social studies i need to get this one done within 10 min plzz 1. Which word most closely means civility? (1 point) respect discipline patriotism liberty 2. Civic-mindedness most
encourages people to (1 point) speak their minds about issues honestly. treat ...
Wednesday, January 2, 2013 at 11:06am by peace
12th grade - English (World Mythology)
The website is: zarathushtra . com /z/gatha/dji/forward . htm What is Tagore’s main argument? Find a sentence or two that make up Tagore’s thesis statement and cite it here. What distinction does
Tagore make between “blind obedience” and “the path of freedom?” Tagore was an ...
Tuesday, June 16, 2009 at 1:22pm by Julie
Biographical sketch- help plzz.
the one that is last?
Wednesday, October 7, 2009 at 7:49pm by Anna
A sample of charcoal from an archaeological site contains 55 g of carbon and decays at a rate of .877 Bq. How old is it? ok I am totally lost, I don't know where to start
Monday, May 5, 2008 at 5:23pm by J
A sample of charcoal from an archaeological site contains 55 g of carbon and decays at a rate of .877 Bq. How old is it? ok I am totally lost, I don't know where to start
Monday, May 5, 2008 at 9:13pm by J
how do i put fractions in order from least to greatest? it's on my homework and i am totally lost. i forgot because i learned this like 2-3 years ago. please help me!
Tuesday, April 13, 2010 at 3:56pm by taylor
10-5=3y 2x-3y=1 i am totally lost help please
Sunday, October 26, 2008 at 6:00pm by Danny
MAT 116 Week 7 quiz
need help with week 7 checkpoint. totally lost
Sunday, January 10, 2010 at 11:38am by Gracie
did you get the 1st one.. plzz help anyone? plz
Tuesday, October 15, 2013 at 1:40pm by rambo
Do you know the 2st and 2nd one,,help plzz.??
Thursday, October 24, 2013 at 6:42am by pi
Physics Classical Mechanics
does anyone know the last one?? plzz help
Monday, October 28, 2013 at 6:11am by pi
Physics Classical Mechanics
the last one plzz
Monday, October 28, 2013 at 6:10am by roph
Science only one question...help plzz!
Tuesday, February 11, 2014 at 3:37pm by Billa_Bong
HELP PLEASE!!
can someone please help me on this im totally lost.
Monday, April 21, 2008 at 7:58pm by Joey
HELP PLEASE!!
can someone please help me on this im totally lost.
Monday, April 21, 2008 at 7:58pm by Joey
Algebra 2
I am totally lost! (3x^4--8x^2 -3x-3) / (x^2-3)
Tuesday, March 1, 2011 at 12:23am by Paige
Need some clarification...... plzz
I am having a bit of trouble answering this question.... DO you mind giving a brief explanation of what it means and, if possible, some hints on possible answers? In what ways did the indigenous
societies and the revolution in Haiti overcome or challenge colonial rule?
Saturday, October 18, 2008 at 7:55pm by Martha
Is heat lost in heating water and ice? No. Entropy is lost, but heat energy is not lost.
Sunday, December 3, 2006 at 4:28pm by Lisa
For non-ideal oscillators (such as a real pendulum) energy is lost and the amplitude (which is the maximum displacement) is no longer constant but also decreases with time. How fast the energy lost
is described by a TIME CONSTANT, τ = m/b. Typically after ONE time ...
Friday, January 31, 2014 at 11:19pm by Ike
Uhmmm.. I am totally lost! Help? There are 7.0 x 10^6 red blood cells in 1.0 mm^3 of blood. How many red blood cells are in 1 L of blood?
Saturday, October 6, 2012 at 11:27am by Kayla
does anybody knows the last one(8th question) plzz. hop.
Tuesday, October 15, 2013 at 1:40pm by rambo
I would like to know how would I start this assignment I am totally lost I have know ideas on how to begin this help please
Wednesday, June 1, 2011 at 6:50pm by jellybeam
At a fun fair, each mother there had brought 2 children. At the end of the day, it was found that 36 mothers had lost one or both of their children and 62 children had lost their mothers. How many
mothers lost only one of their children and how many mothers lost both of their ...
Monday, September 2, 2013 at 2:51am by coran
Can someone give me a growth and decay word problem solve it and explain how you arrive at the final answer in a way I understand it. Thanks ahead of time.
Tuesday, March 13, 2012 at 4:09am by TOTALLY lost
plzz reply
Monday, November 19, 2012 at 10:56pm by Dr Bob plzzz help
11th grade Physics
An open pipe 0.40m in length is placed vertically in a cylindrical bucket having a bottom area of 0.10 m^2. Water is poured into the bucket until a sounding tuning fork of frequency 440 Hz, placed
over the pipe, produces resonance. Find the mass of water in the bucket at this ...
Tuesday, November 2, 2010 at 3:19pm by Garry
Teacher Aide; Enhancing Children's Self Esteem
I had a student once in 9th grade algebra class, just totally lost, mainly, for lack of 5th grade math skills. She was failing terribly, just totally lost. I called her dad in, and went over her
work, and skllls. He stated to me that he was damned tired of going over this ...
Tuesday, March 12, 2013 at 8:48am by bobpursley
Dr.Bob222 - You are absolutely right. It totally slipped my mind that 0.3 has ONE sigfig. Thank you.. your help is much appreciated!
Sunday, May 9, 2010 at 9:36pm by Jake
Dr.Bob222 - You are absolutely right. It totally slipped my mind that 0.3 has ONE sigfig. Thank you.. your help is much appreciated!
Sunday, May 9, 2010 at 9:36pm by Jake
Math 116a
7X+4>-13 or 9x+4>-23 The solution of the compound inequality is ? I am totally lost!!!!!!!!
Thursday, September 22, 2011 at 2:38pm by katie
Math 116 A
7X+4>-13 or 9x+4>-23 The solution of the compound inequality is ? I am totally lost!!!!!!!!
Thursday, September 22, 2011 at 4:02pm by katie
like i said, im totally lost. i can graph the numbers, i just need the numbers, and i cant figure it out :/
Thursday, September 29, 2011 at 1:15pm by Anonymous
Science only one question...help plzz!
Tuesday, February 11, 2014 at 3:37pm by Ms. Sue
math. plzz help
I got the last one. I'm pretty sure it's 14.8.
Tuesday, March 11, 2014 at 9:57am by Nicole
the function f is defined: f(x)= x+3 if -2<=x<1 9 if x=1 -x+2 if x>1 a. find the domain b. locate the intercepts I am totally lost, can someone please help me with this?
Tuesday, March 26, 2013 at 9:56pm by mercedes
3/8 * N=13 N= 8*13/3 students.=104/3 which is not an integer. Perhaps some student lost his head over this assignment, or some students forgot, or some dog ate one of the students string, or your
teacher messed up making this problem up.
Wednesday, April 4, 2012 at 12:26pm by bobpursley
Determine the rhythm and meter of each set. "Workers earn it. Spendthrifts burn it. Bankers lend it Women spend it. Forgers fake it. Taxes take it. Dying leave it. Heirs recieve it. Thrifty save it.
Misers crave it. Robbers seize it. Rich increase it. Gamblers lose it. I could...
Monday, October 22, 2007 at 9:49am by Christian
Algebra 2
Hey I am working on math homework that involves "Standard Form to Vertex Form by Averaging." My class is just starting to learn this today, but I am totally lost still on what to do. Can you please
help? The formula for this is y=a(x - h)^2 + k the h is the x-value vertex and ...
Monday, February 8, 2010 at 10:06pm by Tessa
Biographical sketch- help plzz.
I like your second one better.
Wednesday, October 7, 2009 at 7:49pm by Ms. Sue
kunoi ,, vertical one please.!! tried many times plzz help!
Tuesday, October 15, 2013 at 1:40pm by rambo
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=chemistry-+some+help+plzz+totally+lost+on+this+one","timestamp":"2014-04-18T13:27:43Z","content_type":null,"content_length":"34934","record_id":"<urn:uuid:fb5c1bb8-7187-42bc-84cc-e6f938d7d614>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
New to C any help would be great.
08-21-2006 #1
Registered User
Join Date
Aug 2006
New to C any help would be great.
Hey I am learning to program in C at the moment and I'm having a few problems. I keep getting parse errors when I try to compile as well as a problem with the break statement.
Any help would be really appreciated. Thanks.
/* Check to see if a number is prime or not.
A number is prime if it can only be dividied by itself and 1
but by no other number.
#include <stdio.h>
int main()
int prime = 1;
int number;
int count;
printf("Please enter a number ");
if (scanf("%d", number) != 1)
printf("Invalid number entered\n");
return 1;
for (count=1, count<number, count++);
if (number % count = 0)
prime = 0;
if prime
printf("\n$d is prime\n", number);
printf("\n$d is not prime\n", number);
return 0;
You have two parsing errors and a logic error. In your for loop, you are using commas instead of semi-colons. Your last if() has no parantheses. As for the logic error, you are breaking before
setting prime to 0; therefore, prime will never be set to 0. Just switch those two lines over and you should be ok.
PS: You also have $d instead of %d in your two last printf() statements.
Edit: Here's the code, you also had mixed '=' with '=='.
/* Check to see if a number is prime or not.
A number is prime if it can only be dividied by itself and 1
but by no other number.
#include <stdio.h>
int main()
int prime = 1;
int number;
int count;
printf("Please enter a number ");
if (scanf("%d", number) != 1)
printf("Invalid number entered\n");
return 1;
for (count=1; count<number; count++);
if (number % count == 0)
prime = 0;
printf("\n%d is prime\n", number);
printf("\n%d is not prime\n", number);
return 0;
Last edited by Desolation; 08-21-2006 at 05:57 AM.
if (number % count = 0)
= is the assignment operator. == is the comparison operator.
If you understand what you're doing, you're not learning anything.
the problem with break statement is this that your for ends with ;.
/* Check to see if a number is prime or not.
A number is prime if it can only be dividied by itself and 1
but by no other number.
#include <stdio.h>
int main()
int prime = 1;
int number;
int count;
printf("Please enter a number ");
if (scanf("%d", &number) != 1)
printf("Invalid number entered\n");
return 1;
for (count=1; count<number; count+= 2)
if ((number % count) == 0)
prime = 0;
if (prime)
printf("\n%d is prime\n", number);
printf("\n%d is not prime\n", number);
return 0;
> for (count=1, count<number, count++);
Gotta love those trailing ; on the end of for loops
Also, you use ; to separate steps in a for loop, not commas
Why do I get the feeling this code was deliberately broken, and it was the OP's homework to fix all the syntax errors....
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
Ahhhhh ok thanks very much for all your help. What is a parsing error? Is it similar to a syntax error?
Thanks once again.
You have two parsing errors and a logic error. In your for loop, you are using commas instead of semi-colons. Your last if() has no parantheses. As for the logic error, you are breaking before
setting prime to 0; therefore, prime will never be set to 0. Just switch those two lines over and you should be ok.
PS: You also have $d instead of %d in your two last printf() statements.
Edit: Here's the code, you also had mixed '=' with '=='.
/* Check to see if a number is prime or not.
A number is prime if it can only be dividied by itself and 1
but by no other number.
#include <stdio.h>
int main()
int prime = 1;
int number;
int count;
printf("Please enter a number ");
if (scanf("%d", number) != 1)
printf("Invalid number entered\n");
return 1;
for (count=1; count<number; count++);
if (number % count == 0)
prime = 0;
printf("\n%d is prime\n", number);
printf("\n%d is not prime\n", number);
return 0;
check your for loop
08-21-2006 #2
Registered User
Join Date
May 2006
08-21-2006 #3
Gawking at stupidity
Join Date
Jul 2004
Oregon, USA
08-21-2006 #4
08-21-2006 #5
08-21-2006 #6
Registered User
Join Date
Aug 2006
08-21-2006 #7 | {"url":"http://cboard.cprogramming.com/c-programming/82087-new-c-any-help-would-great.html","timestamp":"2014-04-17T19:44:46Z","content_type":null,"content_length":"65989","record_id":"<urn:uuid:15b83ff8-a66d-4a9a-85d4-c7e57ecb866c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find equation of line f(x) parallel to given line
July 30th 2009, 03:48 PM #1
Junior Member
Jul 2009
Find equation of line f(x) parallel to given line
This is another one of those problems that I kinda get the idea of how to do it but then just can't think of anything else. So, to the problem.
$<br /> f(x) = x^3<br />$
$f'(x) = 3x^2$
$<br /> 3x-y+1 = 0<br />$
$y = 3x+1$
From here we know that $m=3$ since $y=mx+b$
Then I made $f'(x)=3$
To get the value of x.
Now I go back to the given line and plug in:
My point is $(1,4)$
I did most of it as I was writing this, I think my light bulb turned on.
Did I do it right or is there another way of doing this?
Last edited by drkidd22; July 30th 2009 at 04:25 PM.
This is another one of those problems that I kinda get the idea of how to do it but then just can't think of anything else. So, to the problem.
$<br /> f(x) = x^3<br />$
$f'(x) = 3x^2$
$<br /> 3x-y+1 <br />$is this 3x-y+1 = 0 ?
$y = 3x+1$
From here we know that $m=3$ since $y=mx+b$
Then I made $f'(x)=3$
To get the value of x.
x = -1 also
Now I go back to the given line and plug in:
$y=3(1)+1$3(1) + 1 = 4, not 5
This is another one of those problems that I kinda get the idea of how to do it but then just can't think of anything else. So, to the problem.
$<br /> f(x) = x^3<br />$
$f'(x) = 3x^2$
$<br /> 3x-y+1<br />$
$y = 3x+1$
From here we know that $m=3$ since $y=mx+b$
Then I made $f'(x)=3$
To get the value of x.
Now I go back to the given line and plug in:
My point is $(1,5)$
I did most of it as I was writing this, I think my light bulb turned on.
Did I do it right or is there another way of doing this?
red part is wrong
if I understood the question you should enter the $x=1$ in the $<br /> f(x) = x^3<br />$ function, instead of the line equation. the line equation will give you the line equation, if you correct
the red part
I'm not complettly clear on this problem yet. So what is the equation of the line parallel to the given line? and how is it done?
you've established that the slope of a line parallel to y = 3x+1 is 3.
$f'(x) = 3$
$3x^2 = 3$
$x = \pm 1$
now ... get the two points on the original curve where f'(x) = 3
$y = x^3$
$y = (1)^3$ ... (1,1)
y - 1 = 3(x - 1)
y = 3x - 2 is the first line
$y = (-1)^3$ ... (-1,-1)
y + 1 = 3(x + 1)
y = 3x + 2 is the second line
now look at the attached graph ... (I made the colors match)
Thanks alot, that clears it up. May I ask what program/software you used to make the graph?
July 30th 2009, 03:58 PM #2
July 30th 2009, 04:03 PM #3
Junior Member
Jul 2009
July 30th 2009, 04:04 PM #4
Junior Member
Jul 2009
July 30th 2009, 04:08 PM #5
Junior Member
Jul 2009
July 30th 2009, 04:27 PM #6
July 30th 2009, 05:13 PM #7
Junior Member
Jul 2009
July 30th 2009, 05:26 PM #8
July 30th 2009, 05:45 PM #9
Junior Member
Jul 2009
July 31st 2009, 05:39 AM #10 | {"url":"http://mathhelpforum.com/calculus/96568-find-equation-line-f-x-parallel-given-line.html","timestamp":"2014-04-18T09:52:19Z","content_type":null,"content_length":"73879","record_id":"<urn:uuid:14006d82-9917-4d3f-b368-c6c4b98670d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
MB0048 : State the different types of models used in OR. Explain briefly the general methods for solving these OR models?
Posted on: October 3, 2011
MB0048 : State the different types of models used in OR. Explain briefly the general methods for solving these OR models?
Answer: – Types of OR Models
A model is an idealized representation or abstraction of a real-life system. The objective of a model is to identify significant factors that affect the real-life system and their interrelationships.
A model aids the decision-making process as it provides a simplified description of complexities and uncertainties of a problem in a logical structure. The most significant advantage of a model is
that it does not interfere with the real-life system.
Classification of OR models
You can broadly classify OR models into the following types.
a. Physical Models include all form of diagrams, graphs and charts. They are designed to tackle specific problems. They bring out significant factors and interrelationships in pictorial form to
facilitate analysis. There are two types of physical models:
I. Iconic models
II. Analog models
Iconic models are primarily images of objects or systems, represented on a smaller scale. These models can simulate the actual performance of a product. Analog models are small physical systems
having characteristics similar to the objects they represent, such as toys.
b. Mathematical or Symbolic Models employ a set of mathematical symbols to represent the decision variable of the system. The variables are related by mathematical systems. Some examples of
mathematical models are allocation, sequencing, and replacement models.
c. By nature of Environment: Models can be further classified as follows:
I. Deterministic model in which everything is defined and the results are certain, such as an EOQ model.
II. Probabilistic Models in which the input and output variables follow a defined probability distribution, such as the Games Theory.
d. By the extent of Generality Models can be further classified as follows:
I. General Models are the models which you can apply in general to any problem. For example: Linear programming.
II. Specific Models on the other hand are models that you can apply only under specific conditions. For example: You can use the sales response curve or equation as a function of only in the
marketing function.
General methods for solving these OR models
The basic dominant characteristic feature of operations research is that it employs mathematical representations or models to analyze problems. This distinct approach represents an adaptation of the
scientific methodology used by the physical sciences. The scientific method translates a real given problem into a mathematical representation which is solved and retransformed into the original
context. The OR approach to problem solving consists of the following steps: Defining the problem, Constructing the model, Solving the model, Validating the model and Implementing the final result.
a) Definition
The first and the most important step in the OR approach of problem solving is to define the problem. You need to ensure that the problem is identified properly because this problem statement will
indicate three major aspects:
1) A description of the goal or the objective of the study
2) An identification of the decision alternative to the system
3) The recognition of the limitations, restrictions and requirements of the system.
b) Construction
Based on the problem definition, you need to identify and select the most appropriate model to represent the system. While selecting a model, you need to ensure that the model specifies quantitative
expressions for the objective and the constraints of the problem in terms of its decision variables. A model gives a perspective picture of the whole problem and helps tackling it in a well-organized
manner. Therefore, if the resulting model fits into one of the common mathematical models, you can obtain a convenient solution by using mathematical techniques. If the mathematical relationships of
the model are too complex to allow analytic solutions, a simulation model may be more appropriate. There are various types of models which you can construct under different conditions.
c) Solution
After deciding on an appropriate model you need to develop a solution for the model and interpret the solution in the context of the given problem. A solution to a model implies determination of a
specific set of decision variables that would yield an optimum solution. An optimum solution is one which maximizes or minimizes the performance of any measure in a model subject to the conditions
and constraints imposed on the model.
d) Validation
A model is a good representation of a system. However, the optimal solution must work towards improving the system’s performance. You can test the validity of a model by comparing its performance
with some past data available from the actual system. If under similar conditions of inputs, your model can reproduce the past performance of the system, then you can be sure that your model is
valid. However, you will still have no assurance that future performance will continue to duplicate the past behavior. Secondly, since the model is based on careful examination of past data, the
comparison should always reveal favorable results. In some instances, this problem may be overcome by using data from trial runs of the system. Note that such validation methods are not appropriate
for non-existent systems, since data will not be available for comparison.
e) Implementation
You need to apply the optimal solution obtained from the model to the system and note the improvement in the performance of the system. You need to validate this performance check under changing
conditions. To do so, you need to translate these results into detailed operating instructions issued in an understandable form to the individuals who will administer and operate the recommended
system. The interaction between the operations research team and the operating personnel reaches its peak in this phase.
1 Response to "MB0048 : State the different types of models used in OR. Explain briefly the general methods for solving these OR models?" | {"url":"https://nikhatshahin.wordpress.com/2011/10/03/mb0048-q1-state-the-different-types-of-models-used-in-or-explain-briefly-the-general-methods-for-solving-these-or-models/","timestamp":"2014-04-18T02:57:23Z","content_type":null,"content_length":"85222","record_id":"<urn:uuid:2e53f606-91ef-4c34-bc3f-ad30be4fb476>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
Next Article
Contents of this Issue
Other Issues
ELibM Journals
ELibM Home
EMIS Home
A surface which has a family of geodesics of curvature
N. Ando
Faculty of Science, Kumamoto University, 2--39--1 Kurokami, Kumamoto 860--8555, Japan
Abstract: We will study a surface in \mbox{\boldmath{$R$}}$^3$ without any umbilical point such that the integral curves of some principal distribution are geodesics. In particular, the lines of
curvature of such a surface will be characterized intrinsically and extrinsically: the semisurface structure of such a surface will be characterized in terms of local representation of the first
fundamental form; the curvatures and the torsions of the lines of curvature as space curves will be characterized.
Classification (MSC2000): 53A05, 53A99, 53B25
Full text of the article:
Electronic version published on: 14 May 2007. This page was last modified: 27 Jan 2010.
© 2007 Heldermann Verlag
© 2007–2010 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/BAG/vol.48/no.1/19.html","timestamp":"2014-04-17T13:01:49Z","content_type":null,"content_length":"3591","record_id":"<urn:uuid:d3c3cd78-5c4e-45de-b84f-9bf26cd90b1f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gouraud Shading
On the left is a curved surface rendered using flat shaded polygons, the other is Gouraud shaded. It is quite clearly smoother and more attractive and well worth spending the time to code.
Unlike a flat shaded polygon, you can specfy a different shade for each vertex of a polygon, the rendering engine then smoothly interpolates the shade across the surface. The technique is very
similar to the standard scan converting, and can be handled very quickly with integer maths.
Some points to note though: It is best to restrict gouraud shading to three sided polygons. Sometimes polygons may not look quite as you expect when they have more than three vertices with very
different shades.
Scan converting
The scan converting algorithm is very similar to the flat shading algorithm, so I won't go into very much detail. In this case, you not only calculate the change in X value down the scanline, you
also keep track of the shade. You do this in exactly the same way.
Now, before the horizontal strip is rendered to the screen, some small adjustments need to be made. It will help to have a picture here to clarify what's going on.
OK, the tiny polygon shown here is flat shaded just for clarity. Consider the line to be rendered, A-B. If this were a perfect world, it would be possible to render the line A-B. However, this is
a pixelated screen, so it's only possible to render the line of three pixels C-D. The center of the first pixel, C, does not lie exactly on the point A, and pixel D does not lie exactly on the
point B, so a slight adjustment is made for the sake of increased accuracy. Now, you may think this is just being pedantic, however, taking the trouble to increase the accuracy results in a much
more beautiful polygon. This is especially true when you add texture mapping etc and the polygons are quite small.
So, how to make this adjustment. Firstly calculate the gradient of shade across the line as usual:
Gradient = (Bs - As) / (Bx - Ax)
Where As = shade at A, and Ax is the X value of A. Now calculate the exact value of the shade at C:
Cs = As + (1-frac(Ax))*Gradient
So, now you need to be able to render a strip of Gouraud polygon to the screen. This involves calculating the shade of each pixel and writing it to the screen. This is a simple process, since the
shade changes linearly across the scanline. The process can be demonsrtated by a little pseudocode.
Shade = Cs
loop X from Cx to Dx
plot pixel at (X,Y) with colour Shade
Shade = Shade + Gradient
End of X loop
As usual, it's possible to reall crank up the speed of this algorithm by writing it in machine code. The algorithm given below will only really work on a PC in an 8-bit screen mode, but of course
there are other methods for other platforms. Please note that this method is not perfect and will not produce perfect results where the rate of change of shade across a polygon is very small. But
it is fine for most cases and faster than a Breshenham type mothod.
This method relies on the fact that the x86 series chips have 16 bit registers that can be treated as two 8 bit registers. In this implementation, the highest 8 bits of a regester are used as the
integer part of the shade being interpolated, and the lower 8 bits are used as the fractional part.
This inner loop given just renders one horizontal strip of a polygon in an 8 bit screen mode. It uses 32 bit memory pointers. Because of the fact that it uses a fixed point value for the shade,
it is possible to assign non-integer values for the shade values of the vertices, making for slightly more accurate rendering.
The routine requires a little setup code to begin with:
length = Dx - Cx + 1
ScreenPtr = y*ScreenWidth + Cx
Now the inner loop:
CX = length
AX = Cs * 256
BX = Gradient
EDI = ScreenPtr
mov [edi], ah ;write the pixel to the screen
inc edi ;next pixel
add ax, bx ;update shade
dec cx ;decrement counter
jnz top ;loop if more pixels
Notes on Gouraud Shading
Gouraud shading is by no means perfect, but it can make a real difference over flat shaded polygons. Problems with Gouraud shading occur when you try to mix light sourcing calculations with big
Imagine you have a large polygon, lit by a light near it's center. The light intensity at each vertex will be quite low, because they are far from the light. The polygon will be rendered quite
dark, but this is wrong, because it's centre should be brightly lit. You can see this happening in the game Descent. Firing flares around, especially into corners, causes the surroundings to
light up. But try firing a flare into the middle of a large flat wall or floor, and you will see that it has very little effect.
However, if you are using a large number of small polygons, with a relatively distant light source, Gouraud shading can look quite acceptable. Infact, the smaller the polygons, the closer it
comes to Phong shading. | {"url":"http://freespace.virgin.net/hugo.elias/graphics/x_polygo.htm","timestamp":"2014-04-16T13:21:59Z","content_type":null,"content_length":"6392","record_id":"<urn:uuid:bda8afad-4db2-4eed-ad3b-b7fd40570542>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |