prove existence of a subsequence that converges uniformly
April 26th 2010, 09:05 AM
derek walcott
Let $(f_n)$ be a uniformly bounded sequence of integrable functions on $[a,b]$ (not necessarily continuous). Set
$$F_n(x) = \int_a^x f_n(t)\,dt$$
for $x \in [a,b]$.
Prove that there exists a subsequence of the sequence $(F_n)$ that converges absolutely uniformly on $[a,b]$.
any help would be appreciated
April 26th 2010, 09:34 AM
Your sequence $(F_n)_n$ is uniformly bounded because $(f_n)_n$ is. Also, the uniform boundedness of the $f_n$ can be used to check that $(F_n)_n$ is equicontinuous ($|F_n(x)-F_n(y)|\leq \sup_n \|f_n\|_\infty\, |x-y|$), and then conclude by Arzelà-Ascoli.
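For completeness, the two hypotheses of Arzelà-Ascoli can be written out; with $M := \sup_n \|f_n\|_\infty < \infty$ (the uniform bound the problem supplies), a standard sketch is:

$$|F_n(x)| = \left|\int_a^x f_n(t)\,dt\right| \le M\,(b-a) \quad \text{for all } n \text{ and } x \in [a,b],$$

$$|F_n(x) - F_n(y)| = \left|\int_y^x f_n(t)\,dt\right| \le M\,|x-y|,$$

so $(F_n)$ is uniformly bounded and equicontinuous (indeed uniformly Lipschitz with constant $M$), and Arzelà-Ascoli yields a uniformly convergent subsequence.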
[SciPy-user] advice on stochastic(?) optimisation
bryan cole bryan.cole@teraview....
Fri Aug 29 05:28:23 CDT 2008
Firstly, thanks everyone for the responses. I think there are enough
pointers here to get me started.
> Do you have a formula for the function f ? If what you have is only
> noisy observations of f(x), without knowing f, that's basically what
> stochastic approximation is about: you have a big literature about
> this kind of problems.
In fact, this is an instrumentation optimisation. Each function sample
operation is in fact an experimental measurement.
> The first article is the one introducing
> Robbins-Monro algorithm:
> Robbins, H. and Monro, S. "A Stochastic Approximation Method." Ann.
> Math. Stat. 22, 400-407, 1951.
> A recent book covering the field is the Kushner and Yin book:
"Stochastic Approximation" seems to be just what I need. I'm reading up on
it now...
> The problem of those algorithms is that they are hard to implement in
> python, because of their recursive nature, hence non vectorizable. If
> your function/observation is hard to compute, it may not be a big
> problem, though.
The expense in terms of my "function" evaluations is so great (max sample rate is ~15 measurements per second) that the python overhead will be negligible.
However, I hope I can exploit the fact that my function is quite slowly varying. It's something like a distorted 2D Gaussian. It can be assumed there's only one maximum within the region bounds. The main problem is that the measurements are noisy, so attempts to estimate the function gradient are very error-prone.
This seems like it should be a common problem in experimental science /
instrumentation, so there ought to be lots of info on this subject. I
just didn't know what heading to search under.
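The Robbins-Monro iteration mentioned above is only a few lines; here is a minimal sketch on a made-up 1D example (the objective g and the noise level are illustrative stand-ins for a real measurement, not the poster's instrument):

```python
import random

def robbins_monro(noisy_g, x0, steps=5000):
    """Robbins-Monro stochastic approximation: find the root of g(x) = 0
    given only noisy observations noisy_g(x) = g(x) + noise.  The gains
    a_n = 1/n satisfy sum(a_n) = inf and sum(a_n^2) < inf, the classic
    conditions for convergence."""
    x = x0
    for n in range(1, steps + 1):
        x -= noisy_g(x) / n          # x_{n+1} = x_n - a_n * Y_n
    return x

# Made-up 1D "instrument": each reading of g(x) = x - 3 carries Gaussian noise.
rng = random.Random(42)
noisy_g = lambda x: (x - 3.0) + rng.gauss(0.0, 0.5)

root = robbins_monro(noisy_g, x0=0.0)
print(root)   # close to the true root x = 3
```

For the maximisation problem in the thread, where only noisy function values are available, the Kiefer-Wolfowitz variant of the same scheme replaces noisy_g with a finite-difference estimate of the gradient.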
More information about the SciPy-user mailing list
Algorithm of the Week: Topological Sort Revisited
We already know about the topological sort of a directed acyclic graph. So why do we need to revisit this algorithm? First of all, I never mentioned its complexity; thus, to understand why we need a revision, let's go over the algorithm again.
We have a directed acyclic graph (DAG). Since there are no cycles, we can put all the vertices of the graph in an order such that if there's a directed edge (u, v), u precedes v in that order.
The process of putting all the vertices of the DAG in such an order is called topological sorting. It’s commonly used in task scheduling or while finding the shortest paths in a DAG.
The algorithm itself is pretty simple to understand and code. We must start from the vertex (vertices) that don’t have predecessors.
We put them in our sorted list in random order. Since they don't depend on each other, we can assume they are equally sorted already. Indeed, thinking of a task schedule: if there are tasks that don't have predecessors (they don't depend on other tasks before them) and that don't depend on each other, we can put them in random order (and execute them in random order).
Once we have the vertices with no predecessors we must remove the edges starting from them. Then – go again with the vertices with no predecessors.
It’s as simple as that, so why do we need a revision of this algorithm? Well, basically because of its efficiency.
As we know, most graph algorithms depend on the way the graph is represented in our application. The two main representations are the adjacency matrix and adjacency lists.
Let’s first take a look of some of the main approaches to get the topologically sorted list at the end of the algorithm.
What can we do in order to find the vertices with no predecessors? We can only scan the entire list of vertices.
Adjacency Matrix
In case we’re using adjacency matrix we need|V|^2 space to store the graph. To find the vertices with no predecessors we have to scan the entire graph, which will cost us O(|V|^2) time. And we’ll
have to do that |V| times. This will be |V|^3 time consuming algorithm and for dense graphs this will be quite an ineffective algorithm.
Adjacency Lists
What about adjacency lists? There we need |E| space to store a directed graph. How fast can we find a node with no predecessors? Practically, we'll need O(|E|) time. Thus, in the worst case we again have an O(|V|^2) time consuming program.
So what can be done in order to optimize this algorithm?
Practically, we can start by picking a random vertex and "going back" until we reach a node with no predecessors. This approach can be very effective, yet also very ineffective: if we have to scan all the way back to a node with no predecessors, it will cost us |V| time, but if we start on a node that doesn't have a preceding node, we get constant time.
This means we can modify the algorithm a bit and improve it a lot. We just need to store both incoming and outgoing edges, slightly modifying the adjacency lists.
What’s the algorithm now?
First we easily find the nodes with no predecessors. Then, using a queue, we can keep the nodes with no predecessors and on each dequeue we can remove the edges from the node to all other nodes.
Pseudo Code
1. Represent the graph with two lists on each vertex (incoming edges and outgoing edges);
2. Make an empty queue Q;
3. Make an empty topologically sorted list T;
4. Push all items with no predecessors into Q;
5. While Q is not empty:
   a. Dequeue from Q into u;
   b. Push u onto T;
   c. Remove all outgoing edges from u;
6. Return T;
This approach will give us better performance than the "brute force" approach. The running time complexity is O(|V| + |E|). The drawback is that we need additional space and an operational queue, but this approach is a perfect example of how using additional space can buy a better performing algorithm.
Published at DZone with permission of Stoimen Popov, author and DZone MVB. (source)
Boyko Bantchev replied on Wed, 2012/12/19 - 6:45am
The loop in your pseudo-code only removes nodes from the queue but never adds to it, as it should :) If corrected, it will be an instance of a breadth-first graph traversal.
But in fact a better option is to do depth-first traversal. Start with an empty list T, and for each node with no predecessors initiate a DF traversal. Each time you abandon a node because it has no
non-visited successors, add that node to the front of T. When you are done, T is a topologically sorted list of the nodes of the graph. This algorithm runs in linear time and needs no additional
space except 1 bit per node to keep track of the visited nodes.
Allen Coin replied on Wed, 2012/12/19 - 9:42am (in reply to Boyko Bantchev)
Good point; that is a good way of doing it.
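Incorporating the commenter's fix (re-enqueueing a node once its last incoming edge is removed), the queue-based approach, often called Kahn's algorithm, can be sketched in Python on a made-up graph:

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm.  `graph` maps each vertex to its outgoing
    neighbours.  Runs in O(|V| + |E|): vertices whose in-degree drops to
    zero are re-enqueued, which is the step the original pseudo-code missed."""
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    queue = deque(u for u in graph if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:          # "remove" u's outgoing edges
            indegree[v] -= 1
            if indegree[v] == 0:    # v now has no predecessors
                queue.append(v)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")
    return order

print(topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# ['a', 'b', 'c', 'd']
```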
Roof Structure Question find lengths
I give you a hint. The area of triangle $ABD$ = area $ABC$ + area $BCD$. Now call $BD = h$, the height. Then you can find $CD = \sqrt{4700^2 - h^2}$. Thus, area $ABD = \tfrac{1}{2}h\left(3300 + \sqrt{4700^2 - h^2}\right)$. Area $ABC$ can be computed by Heron's formula, and area $BCD = \tfrac{1}{2}h\sqrt{4700^2 - h^2}$. You have an equation to solve for $h$!
Hello, Zak! In $\Delta ABC$, use the Law of Cosines to find $\angle ACB$: $\cos(\angle ACB) = \frac{3300^2 + 4700^2 - 7000^2}{2(3300)(4700)} = -0.516441006$. Hence $\angle ACB \approx 121.1^\circ$. Then $\angle BCD = 180^\circ - 121.1^\circ = 58.9^\circ$. In right triangle $BDC$: $\sin 58.9^\circ = \frac{BD}{4700}$. Therefore $BD = 4700\sin 58.9^\circ = 4024.717024 \approx 4025$.
Last edited by Soroban; December 3rd 2006 at 05:51 PM.
Hello, Zak! An alternate approach: in $\Delta ABC$, use the Law of Cosines to find angle $A$: $\cos A = \frac{7000^2 + 3300^2 - 4700^2}{2(7000)(3300)} = 0.8181818181$. Hence $A \approx 35.1^\circ$. In right triangle $BDA$: $\sin 35.1^\circ = \frac{BD}{7000} \Rightarrow BD = 7000\sin 35.1^\circ = 4025.036764$. Therefore $BD \approx 4025$.
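The arithmetic in both posts can be checked numerically; a quick sketch using the side names from the thread:

```python
import math

# Triangle ABC with AB = 7000, AC = 3300, BC = 4700; BD is the height
# from B onto line AC extended, the quantity being solved for.
AB, AC, BC = 7000.0, 3300.0, 4700.0

# Law of Cosines at A, then BD = AB * sin(A).
cos_A = (AB**2 + AC**2 - BC**2) / (2 * AB * AC)
BD = AB * math.sin(math.acos(cos_A))
print(round(cos_A, 9), round(BD))   # 0.818181818 4025
```

The small discrepancy between the two posted values (4024.72 vs. 4025.04) comes from rounding the angle to 35.1° before taking the sine.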
General Linear Model
The general linear model (GLM) is a statistical linear model. It may be written as
$\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{U},$
where Y is a matrix with series of multivariate measurements, X is a matrix that might be a design matrix, B is a matrix containing parameters that are usually to be estimated and U is a matrix
containing errors or noise. The residual is usually assumed to follow a multivariate normal distribution. If the residual is not a multivariate normal distribution, generalized linear models may be
used to relax assumptions about Y and U.
The general linear model incorporates a number of different statistical models: ANOVA, ANCOVA, MANOVA, MANCOVA, ordinary linear regression, t-test and F-test. If there is only one column in Y (i.e.,
one dependent variable) then the model can also be referred to as the multiple regression model (multiple linear regression).
Hypothesis tests with the general linear model can be made in two ways: multivariate and mass-univariate.
An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments where Y contains data from brain scanners, X contains experimental design
variables and confounds. It is usually tested in a mass-univariate way and is often referred to as statistical parametric mapping.
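As a concrete illustration of estimating B, here is a small self-contained least-squares sketch; the design matrix and coefficients are made up, and plain Python stands in for a real linear-algebra library:

```python
# Least-squares estimate B = (X'X)^-1 X'Y via the normal equations,
# on a tiny made-up design (intercept + one regressor, two response columns).

def transpose(m):
    return [list(r) for r in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve(a, b):
    """Gauss-Jordan solve of a @ X = b for square a (b may have many columns)."""
    n = len(a)
    m = [ra[:] + rb[:] for ra, rb in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))   # partial pivoting
        m[i], m[p] = m[p], m[i]
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for r in range(n):
            if r != i:
                f = m[r][i]
                m[r] = [v - f * w for v, w in zip(m[r], m[i])]
    return [row[n:] for row in m]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]   # design matrix
B_true = [[2.0, -1.0], [0.5, 3.0]]                     # one column per response
Y = matmul(X, B_true)                                  # noiseless observations

Xt = transpose(X)
B_hat = solve(matmul(Xt, X), matmul(Xt, Y))            # normal equations
print(B_hat)
```

With noiseless Y the estimate recovers B_true up to floating-point error; with a noise matrix U added, B_hat becomes the ordinary least-squares fit.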
Bristol, IL Math Tutor
Find a Bristol, IL Math Tutor
...Writing is my life's passion, and I love finding, creating, and sharing strategies to assist students with math and reading. I also enjoy teaching Spanish. This past year I've taught Spanish to
students from 2nd to 8th grade, and I just finished tutoring a student in Spanish at the high school level.
25 Subjects: including algebra 1, elementary (k-6th), GRE, grammar
...Since then, I have been tutoring (privately) mainly Mathematics. After I graduated with Physics Major, I started tutoring Physics as well. So, altogether, I have almost 20 years experience of
tutoring and 10 years of teaching in the subjects I mentioned.
11 Subjects: including calculus, algebra 1, algebra 2, geometry
...I have conducted original statistical research in Experimental Psychology, Government Research Methods, and Economics/Finance, including my honors thesis forecasting economic crises using
international panel data. (Exceptionally complex panel data combines observations from different places and ...
57 Subjects: including trigonometry, differential equations, linear algebra, SAT math
...I have Bachelor of Science in Mathematics Education and a Master of Science in Applied Mathematics. I taught four years of High School Math (3 years teaching Algebra 2/Trig). I've been tutoring
math since I was in high school 15 years ago. I taught four years of High School Math (All 4 years teaching Honors Geometry). I've been tutoring math since I was in high school 15 years ago.
10 Subjects: including statistics, algebra 1, algebra 2, calculus
...I have taken the following classes: Calculus I, II, & III, Differential Equations, Linear Algebra, Physics I & II, and Advanced Linear Algebra. I work for a large engineering firm in the area, but tutor in math (pre-Algebra up through Calculus, Diff. EQ, linear Algebra, etc.), physics, logic class...
31 Subjects: including linear algebra, discrete math, logic, ACT Math
Metroplex Algebraic Geometry, Algebra and Number Theory (AGANT) Seminar,
a joint venture of UNT, UTA and TCU
Date/Time/Room: Friday (9/19/2008) at 4:00 pm in 304 Pickard Hall
Speaker: Dr. Dimitar Grantcharov, Assistant Professor
Department of Mathematics, University of Texas at Arlington.
"Infinite-dimensional Weight Representations of Lie Algebras"
Abstract: In the early 20th century, H. Weyl classified all finite-dimensional representations of the classical Lie algebras in terms of the so-called character formula. Following work of G. Benkart, D. Britten, V. Futorny, F. Lemire, A. Joseph, and others, in 2000 O. Mathieu achieved a major breakthrough in representation theory by obtaining an infinite-dimensional analog of Weyl's result. Using a combination of algebro-geometric methods and non-commutative localization constructions, O. Mathieu classified all simple infinite-dimensional weight representations of the classical Lie algebras. In this talk we will discuss some recent results related to the theory of weight representations. These results are part of an ongoing joint project of the speaker with V. Serganova.
Often called gridding, interpolation creates images by estimating values for pixel centres (nodes) on a regular network of rows and columns from regularly or irregularly scattered data points.
Uses
• extrapolate data beyond point locations
• have regularly spaced data for contouring or raster calculations
• visualize trends in point data
• smooth or enhance estimated surface variability
Potential Problems
• values incorrectly extrapolated into areas with sparse data leading to misguided interpretations
• discontinuities are difficult to model
• algorithmic artifacts may produce phantom features, noise, or unrealistic surface undulation
• may require many iterations to optimize model
Interpolation Parameters
1. Diameter of search area (tolerance circle)
2. Method for calculating pixel values from points within search area
3. Number of points to use in calculation
4. Pixel size
1. SEARCH AREA
Geostatistics (autocorrelation and semivariance) provides insight into defining a reasonable search area.
The two most common methods for selecting points within a search area to calculate the value of a node are:
1. Nearest neighbour - nearest points to node
2. Radius - all points within a given radius
2. METHOD
1. Local - node values computed from an equation with coefficients determined using a subset of scattered data points (e.g. IDW, splines [minimum curvature], kriging)
2. Global - node values computed from an equation with coefficients determined using all scattered data points (e.g. trend surface, double Fourier)
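As an illustration of a local method, inverse-distance weighting (IDW) can be sketched as follows; this is a toy example, not any particular GIS package's implementation:

```python
def idw(points, x, y, power=2, radius=None):
    """Inverse-distance-weighted estimate at (x, y) from scattered
    (px, py, value) triples.  Optionally restrict to a search radius
    (the tolerance circle); an exact hit on a data point returns that
    point's value."""
    num = den = 0.0
    for px, py, v in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return v                      # node coincides with a data point
        if radius is not None and d2 > radius ** 2:
            continue                      # outside the tolerance circle
        w = 1.0 / d2 ** (power / 2)       # weight = 1 / distance^power
        num += w * v
        den += w
    if den == 0:
        raise ValueError("no data points inside the search radius")
    return num / den

pts = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(idw(pts, 0.5, 0.5))   # 20.0 (all three points are equidistant here)
```

Raising `power` makes the estimate more local (nearer points dominate), which is the knob that enhances or subdues local anomalies as described above.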
3. NUMBER OF DATA POINTS
The number of data points used in computing the value of a node.
• Few points --> enhance local anomalies
• Many points --> subdue local anomalies
4. PIXEL SIZE
Selecting a pixel size: Nyquist Rule states that there should be 2-3 pixels between average spacing of data points. A good practical example of this concept can be found in the gridding of
geophysical data.
A good test for pixel size is by image subtraction. If the difference between two image "volumes" of different pixel size is less than 5%, then a smaller pixel size will not increase the accuracy of
the model.
1. SEARCH AREA (Defining a reasonable search area)
Geostatistics (regionalized variable theory) provides a very useful tool for helping define a reasonable search area for interpolation. This tool is called a semivariogram, a graph of semivariance
vs. lag, and provides insight into the degree of autocorrelation in a dataset. We need a few definitions:
Autocorrelation / autocovariance: statistical concepts expressing the degree to which the value of an attribute at spatially adjacent points varies with the distance or time separating the observations.
Regionalized variable: a single-valued function defined over a metric space (a set of coordinates) that represents the variation of natural phenomena that are too irregular at the scale of interest to be modeled analytically.
Lag: a user-specified distance class within which semivariance is computed for a set of data points.
Semivariance: given two locations x and (x + h), a measure of one-half of the mean square differences produced by assigning the value z(x + h) to the value z(x), where h (known as the lag) is the inter-sample distance, i.e.
γ(h) = (1 / 2N) Σ [z(x_i) - z(x_i + h)]²
where N refers to the number of data pairs that are separated by the same distance h.
Semivariogram: a graph of semivariance versus lag h.
Kriging: a weighted average method of gridding which determines weights based on the location of the data and the degree of spatial continuity present in the data as expressed by a semivariogram. The weights are determined so that the average error of estimation is zero and the variance of estimation is minimized.
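The empirical semivariance is straightforward to compute; a minimal sketch for a regularly spaced 1-D transect (the data here are synthetic):

```python
def semivariance(values, lag):
    """Empirical semivariance gamma(h) for a regularly spaced 1-D transect:
    gamma(h) = 1/(2N) * sum of (z[i] - z[i+h])^2 over the N pairs lag h apart."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    n = len(pairs)
    return sum((a - b) ** 2 for a, b in pairs) / (2 * n)

# For a pure linear trend z(x) = x, the semivariogram is the parabola h^2 / 2:
# it never levels off to a sill, the classic signature of non-stationary data.
z = list(range(20))
print([semivariance(z, h) for h in (1, 2, 3)])   # [0.5, 2.0, 4.5]
```

Plotting semivariance against lag for real data, and noting the range at which it levels off, is one practical way to choose the search-area diameter discussed above.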
This Web page is one of many useful pages linked from:
Krajewski, Stephen A. & Gibbs, Betty L. (1994). Understanding Contouring: A Practical Guide to Spatial Estimation and Contouring Using a Computer and Basics of Using Variograms. Gibbs Associates, Boulder, CO. ISBN 0-943909-16-3.
A Novel System Anomaly Prediction System Based on Belief Markov Model and Ensemble Classification
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 179390, 10 pages
Research Article
College of Computer Science and Technology, Zhejiang University, Hangzhou 310012, China
Received 17 March 2013; Revised 13 July 2013; Accepted 31 July 2013
Academic Editor: Yingwei Zhang
Copyright © 2013 Xiaozhen Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Computer systems are becoming extremely complex, and system anomalies dramatically influence their availability and usability. Online anomaly prediction is an important approach to managing imminent anomalies, and high accuracy relies on precise system monitoring data. However, precise monitoring data is not easily achievable because of widespread noise. In this paper, we present a method which integrates an improved Evidential Markov model and ensemble classification to predict anomalies for systems with noise. Traditional Markov models use explicit state boundaries to build the Markov chain and then predict different measurement metrics. A problem arises when data comes with noise, because even a slight oscillation around the true value will lead to very different predictions. The Evidential Markov chain method is able to deal with noisy data but is not suitable in a complex data-stream scenario. The Belief Markov chain that we propose extends the Evidential Markov chain and can cope with noisy data streams. This study further applies ensemble classification to identify system anomalies based on the predicted metrics.
collected from 66 metrics in PlanetLab have confirmed that our approach can achieve high prediction accuracy and time efficiency.
1. Introduction
As computer systems grow increasingly complicated, they become more vulnerable to various anomalies such as performance bottlenecks and service level objective (SLO) violations [1]. This requires computer systems to be capable of managing anomalies under time pressure, avoiding or minimizing unavailability by monitoring the systems continuously.
Anomaly management methods can be classified into two categories: passive methods and proactive methods. Passive methods notify the system administrator only when errors or faults are detected. These
approaches are appropriate to manage anomalies that can be easily measured and fixed in a simple system. However, in nowadays dynamic and complex computer systems, detecting some anomalies may have a
high cost, which is unacceptable for continuously running applications. Proactive methods take preventive actions when anomalies are imminent; thus, they are more appropriate for systems that need to
avert the impact of anomalies and achieve continuous operation. Nowadays proactive methods are preferred in both academic research and real world applications.
Previous work has addressed the problem of system anomaly prediction, which can be categorized as data-driven methods, event-driven methods, and symptom-driven methods [2].
Event-driven methods directly analyze the errors or failures that events report and use error reports as input data to predict future system anomalies. Salfner and Malek use error reports as input and then perform a trend analysis to predict the occurrence of failure in a telecommunication system by determining the frequency of error occurrences [3]. Kiciman and Fox use a decision tree to identify faulty components in a J2EE application server by classifying whether requests are successful or not. These approaches rest on the basic assumption that anomaly-prone system behavior can be identified by the characteristics of anomalies [4]. This is why only recurring anomalies present in the error reports can be predicted by event-driven methods.
Data-driven methods learn from the temporal and spatial correlation of anomaly occurrence. They aim at recognizing the relationship between upcoming failures and occurrence of previous failures.
Zhang and Ma use a modified KPCA method to diagnose anomalies in nonlinear processes [5]. In nonlinear fault detection scenarios, they utilize statistical analysis to improve the learning techniques [6], which is also applicable to large-scale fault diagnosis processes [7]. Liang et al. exploit these correlation characteristics of anomalies on IBM's BlueGene/L [8]. They find that the occurrence of a failure is strongly correlated to the time stamp and the location of others in a cluster environment. Zhang et al. propose a hybrid prediction technique which uses model checking: an operational model is explored to check if a desirable temporal property is satisfied or violated by the model itself [9]. To conclude, the basic idea of data-driven methods is that upcoming anomalies can be inferred from the occurrence of previous ones.
Symptom-driven methods analyze some workload-related data such as input workload and memory workload in order to predict further system resource utilization. Tan and Gu [10] monitor a series of
run-time metrics (CPU, memory, I/O usage, and network), use a discrete-time Markov chain to forecast the system metrics in the future, and finally predict the system state based on Naïve Bayesian
classification. Luo et al. [11] build autoregressive model using various parameters from an Apache webserver to predict further system resource utilization; failures are predicted by detecting
resource exhaustion.
Efficient proactive anomaly management relies on system monitoring data, and the metrics generated by monitoring infrastructures arrive continuously and are invariably noisy, so one big challenge is to provide accurate and efficient system anomaly prediction for noisy monitoring data streams. Recently, some approaches have been proposed for system anomaly prediction using the discrete-time Markov chain (DTMC) [10, 12]. However, their work does not consider the issue that monitoring data may oscillate around the real value, as we mentioned previously. A DTMC, which uses explicit state boundaries, will lead to significantly different values even when the metrics' oscillation around the boundaries is very slight. Soubaras [13] proposed the Evidential Markov chain model, which extends DTMC to overcome the problem of noisy values around explicit state boundaries caused by inaccurate monitoring metrics. The problem of the Evidential Markov chain is that although it works excellently in a static data scenario, it cannot be applied directly to stream data: its fixed transition matrix is too restrictive for continuously changing stream data and brings in an enormous amount of calculation.
In this paper, we present the design and implementation of an approach to solve the system anomaly prediction problem on noisy data streams. We first present an improved belief Markov chain (BMC) to fit the data stream scenario. We use a stream-based k-means clustering algorithm [14] to dynamically maintain and generate the Markov transition matrix. Only information about micro-clusters is stored after clustering, and newcomers either fall into one of the existing groups or establish a new one. Compared to the Evidential Markov chain method, in which all the data has to be stored and recalculated every time a new value arrives to get the Markov state, our approach is time efficient and more feasible in a highly dynamic and complex system. We then employ the aggregate ensemble classification method [15] to determine whether the system will turn into anomaly in the future. Aggregate ensemble classification can address the incorrect anomaly mark problem in a continuously running system.
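The explicit-boundary sensitivity that motivates BMC can be seen in a two-line sketch; the range, bin count, and the two readings below are made-up illustrative values, not from the paper:

```python
def equal_width_state(value, lo=0.0, hi=300.0, bins=3):
    """Map a metric value to a discrete Markov state by equal-width binning."""
    if value >= hi:
        return bins - 1                 # clamp the top edge into the last bin
    width = (hi - lo) / bins
    return int((value - lo) // width)

# Two readings that differ only by measurement noise straddle the 100.0
# boundary, so they seed the Markov chain from different states.
print(equal_width_state(99.9), equal_width_state(100.1))   # 0 1
```

A clustering-based state assignment such as BMC's avoids this cliff, because nearby values fall into the same micro-cluster regardless of any fixed grid.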
Extensive experiments on the PlanetLab dataset [16] under different parameter settings show that, on average, BMC achieves a 14.8% smaller mean prediction error than the DTMC method used in various previous works [10, 12, 17, 18]. Our system anomaly prediction method (SAPredictor), which combines BMC and aggregate ensemble classification, achieves better prediction performance than other prediction models, for example, DTMC+Naïve Bayes, DTMC+KNN, and DTMC+C4.5. SAPredictor demonstrates the best performance on the three key criteria, namely, 71.6% for precision, 84.6% for recall, and 77.5% for F-measure.
The main contributions of this paper are summarized as follows.
(1) We propose the belief Markov chain by improving the Evidential Markov model using a stream-based k-means clustering algorithm, making it more suitable for system metrics prediction on noisy data streams.
(2) We integrate the belief Markov chain and aggregate ensemble classification as SAPredictor to predict system anomalies.
(3) We validate the effectiveness of SAPredictor by extensive experiments on real system data.
The rest of this paper is organized as follows. Section 2 introduces our SAPredictor method. Section 3 demonstrates the experiments and analyzes the results. Finally, we conclude and give some future
research directions in Section 4.
2. Approach Overview
In this section, we present the detailed design of SAPredictor. We first describe the problem of system anomaly prediction and then propose our SAPredictor method, which is composed of two components: the belief Markov chain model and the aggregate ensemble classification model. The belief Markov chain model is used to predict the changing pattern of measurement metrics; aggregate ensemble classification is a supervised learning method which employs multiple classifiers and combines their predictions. In this work, we use a sliding window to partition the system metrics stream into chunks and then train the belief Markov chain and aggregate ensemble learning models on the history. The future system status is predicted by putting the future metrics as input into the classification model.
2.1. Problem Statement
For a system, we have a vector of observations $X_t$ for the system metrics at time $t$, where $X_t = (x_t^1, x_t^2, \ldots, x_t^m)$ contains the $m$ system metric time series at time $t$ and $x_t^i$ is the $i$th metric. We label $X_t$ as normal (state 0) or anomaly (state 1) by monitoring the system state at time $t$. The system anomaly prediction problem we focus on in this paper is whether the system will fall into anomaly status in the next $k$ steps, where $k \geq 1$. To solve this problem, we need to first forecast the future values $\hat{X}_{t+1}, \ldots, \hat{X}_{t+k}$ for each metric. Then, we train an ensemble classifier EC based on a sliding window of $X_{t-w+1}, \ldots, X_t$, where $w$ is the size of the sliding window. Finally, we use EC to test on $(\hat{X}_{t+1}, \ldots, \hat{X}_{t+k})$ and predict the state label of $X_{t+k}$.
2.2. SAPredictor Approach
Figure 1 describes the SAPredictor system anomaly prediction approach. Measurement metrics (e.g., CPU, memory, I/O usage, network, etc.) are collected from the system continuously. Then, the collected system metric streams are partitioned into chunks by a sliding window. The current and history chunks are used to train the belief Markov chain model and the aggregate ensemble learning model. The future system metrics are then predicted by the belief Markov chain model, and by feeding these metrics into the aggregate ensemble classification model, we can ascertain whether the system will fall into anomaly in the future. The belief Markov chain and aggregate ensemble classification are presented in the following subsections.
2.2.1. System Metrics Value Prediction
In this section, we first introduce why the Evidential Markov chain, which is based on the Dempster-Shafer theory [19], is preferred over the discrete-time Markov chain when dealing with system anomaly prediction on noisy data, and then we explain the advantages of our belief Markov chain method over the Evidential Markov method in a data stream environment.

When we build a discrete-time Markov chain model, it is necessary to divide all the data into discrete states. Traditional discretization techniques used in discrete-time Markov chains include equal-width and equal-depth binning. Both techniques generate states with explicit boundaries using all the data. However, the system metrics being monitored are usually imprecise due to system noise and measurement error. Thus, a discrete-time Markov chain, which uses explicit boundaries to divide the states, can generate highly different prediction results even when the initial values are almost the same. The Evidential Markov model [13] made a significant improvement by being capable of coping with noisy data. The following is an example of the explicit boundary problem in a discrete-time Markov chain.
In one possible situation, we have a metric ranging in $[0, 150)$, and we use the equal-width approach to discretize the range into three bins, namely, $[0, 50)$, $[50, 100)$, and $[100, 150)$. Let $s_1$, $s_2$, and $s_3$ denote the states when the metric is in $[0, 50)$, $[50, 100)$, and $[100, 150)$, respectively. The transition matrix for the metric is a $3 \times 3$ matrix:

$$P = \begin{pmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{pmatrix}. \quad (1)$$

Here, each element $p_{ij}$ of matrix $P$ denotes the probability of a transition from state $s_i$ to state $s_j$. When we use a discrete-time Markov chain to predict a future value, a vector $\pi_t$ is needed to denote the probability of the metric being in each state at time $t$. If we have an initial value of 99, which is in state $s_2$, then the corresponding probability vector is $\pi_t = (0, 1, 0)$. We can calculate the probability vector after one time unit as

$$\pi_{t+1} = \pi_t P. \quad (2)$$

Here, the resulting probability vector indicates that the initial value will most likely transfer into $s_2$, so the predicted value after one step is 75, the mean of state $s_2$. However, if the initial value is instead 101, the vector becomes $\pi_t = (0, 0, 1)$. By applying (2) again, the prediction stays in state $s_3$, with a predicted value of 125 in the next step.

Note that there is only a slight difference between 99 and 101 in the initial value, yet the forecasted values after one step differ widely: 75 versus 125.
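The boundary sensitivity can be reproduced in a few lines of Python. The transition matrix below is made up purely for illustration (the paper's entries are not reproduced here); only the equal-width state layout and the 75/125 state means follow the example:

```python
# One-step DTMC prediction for a metric on [0, 150) split into three
# equal-width states s1 = [0, 50), s2 = [50, 100), s3 = [100, 150).
# The transition matrix is illustrative; the paper's entries differ.
P = [
    [0.6, 0.3, 0.1],  # from s1
    [0.2, 0.6, 0.2],  # from s2
    [0.1, 0.3, 0.6],  # from s3
]
MEANS = [25.0, 75.0, 125.0]  # state means used as predicted values

def state_of(x):
    return min(int(x // 50), 2)

def predict_one_step(x):
    pi = [0.0, 0.0, 0.0]
    pi[state_of(x)] = 1.0                       # crisp initial state vector
    nxt = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    best = max(range(3), key=lambda j: nxt[j])  # most likely next state
    return MEANS[best]

# 99 and 101 differ by 2, but land in different states, so the
# predictions jump from one state mean to the other.
p99, p101 = predict_one_step(99), predict_one_step(101)
```

Any transition matrix whose diagonal dominates reproduces the same 75-versus-125 jump, which is the point of the example: the prediction depends on which side of a hard boundary the input falls.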
As the example shows, the discrete-time Markov chain uses explicit state boundaries, and it can produce very different prediction values when the original metric is near a state boundary. To solve this problem, we propose the belief Markov chain based on the Dempster-Shafer theory. The Dempster-Shafer theory is a theory of inexact inference: it can handle the uncertainty caused by unknown prior knowledge, and it extends the basic event space to its power set. The detailed definitions for the Dempster-Shafer theory [19] are as follows.
Definition 1 (frame of discernment). Suppose that $\Theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ is the exhaustive set of values of a random variable and the elements in $\Theta$ are mutually exclusive. Then, the set of all possible subsets of $\Theta$, that is, the power set, is called the frame of discernment:

$$2^{\Theta} = \{\emptyset, \{\theta_1\}, \ldots, \{\theta_n\}, \{\theta_1, \theta_2\}, \ldots, \Theta\}. \quad (4)$$

We use $A$ to represent a subset in the power set of $\Theta$, and $|A|$ denotes the number of elements it contains.

Definition 2 (mass function). A function $m: 2^{\Theta} \to [0, 1]$ is called a mass function on $2^{\Theta}$ if, for every subset $A$ of $\Theta$, the following statements are satisfied:

$$m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1. \quad (5)$$

Definition 3 (transferable belief model). Suppose that we have a discernment frame $\Theta$ and a mass function $m$ on $2^{\Theta}$. Then, the probability of each single value $\theta \in \Theta$ can be calculated by the transferable belief model:

$$\mathrm{BetP}(\theta) = \sum_{A \subseteq \Theta,\; \theta \in A} \frac{m(A)}{|A|}. \quad (6)$$

The subsets of $2^{\Theta}$ include both single-event sets $\{\theta_i\}$ and multiple-event combinations such as $\{\theta_i, \theta_j\}$. This is why we need the transferable belief model to calculate the probability of one single random variable value.
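As a sketch of the transferable belief model under the reading that each subset's mass is shared equally among its members, the singleton probabilities can be computed as follows (the mass values below are arbitrary illustrations, not values from the paper):

```python
def pignistic(mass):
    """Transferable-belief-model probability for singletons.

    `mass` maps frozensets of states to mass values; each subset's mass
    is split evenly among its elements, i.e. BetP(s) = sum over A
    containing s of m(A) / |A|."""
    bet = {}
    for subset, m in mass.items():
        for s in subset:
            bet[s] = bet.get(s, 0.0) + m / len(subset)
    return bet

# A value near the s2/s3 boundary: mass on {s2}, {s3}, and the
# cross-region {s2, s3}.
mass = {frozenset({"s2"}): 0.4,
        frozenset({"s3"}): 0.2,
        frozenset({"s2", "s3"}): 0.4}
bet = pignistic(mass)  # s2 gets 0.4 + 0.2 = 0.6, s3 gets 0.2 + 0.2 = 0.4
```

The cross-region mass is what lets an ambiguous boundary value contribute smoothly to both adjacent states instead of snapping to one of them.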
Figure 2 illustrates a metric divided into $n$ states $s_1, \ldots, s_n$, where each pair of adjacent states $s_i$ and $s_{i+1}$ has a cross-region state $\{s_i, s_{i+1}\}$, which means that the value is in the cross-region between state $s_i$ and state $s_{i+1}$. When using the BMC model to predict, the initial metric may belong to a single state entirely or belong to the cross-region of two adjacent states. So, the discernment frame of this problem can be simplified to

$$\Omega = \{\{s_1\}, \{s_1, s_2\}, \{s_2\}, \{s_2, s_3\}, \ldots, \{s_{n-1}, s_n\}, \{s_n\}\}.$$

Then, we declare the mass function to assign probability to each subset in $\Omega$. Any function that satisfies (5) can be used as a mass function. The probability of each single state can be calculated by the transferable belief model as in (6).

At last, we need to infer the transition matrix, which describes the probabilities of moving from one state to the others, as we did in the discrete-time Markov chain. Each element $p_{ij}$ of the transition matrix denotes the probability that the metric is currently in state $s_i$ and then moves to state $s_j$; it can be estimated from the observed state transition counts.
However, the Evidential Markov chain needs to store all the data and recalculate the Markov states when new data arrives; this is neither time efficient nor feasible for systems that need real-time response, especially for data stream applications. Thus, we improve on the Evidential Markov chain using a stream-based k-means clustering method. The arriving data points are mapped onto states by a data stream clustering algorithm in which each cluster represents a Markov state. For each cluster representing a state, we store a transition count vector; together, all transition counts form a transition count matrix of size $c \times c$, where $c$ is the number of clusters. As we use stream clustering, there is a list of operations on clusters: adding a new data point to an existing cluster, creating a new cluster, deleting clusters, merging clusters, and splitting clusters. We use the Jaccard coefficient [20] as a dissimilarity threshold to detect clusters. Thus, the states adapt to the arriving data, which is another advantage over the Evidential Markov chain method.
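A heavily simplified Python sketch of the stream-based state mapping follows. It keeps only nearest-centroid assignment with a fixed distance threshold plus the transition counts; the cluster deletion, merging, splitting, and Jaccard-based detection described above are omitted, and the threshold value is arbitrary:

```python
class StreamMarkov:
    """Minimal online state mapping with transition counts.

    New points join the nearest centroid if within `radius`, otherwise
    they open a new state. Centroids track a running mean. This is a
    sketch, not the paper's full clustering algorithm."""
    def __init__(self, radius):
        self.radius = radius
        self.centroids = []   # one centroid per state
        self.sizes = []
        self.counts = {}      # (from_state, to_state) -> count
        self.prev = None

    def _assign(self, x):
        if self.centroids:
            j = min(range(len(self.centroids)),
                    key=lambda i: abs(x - self.centroids[i]))
            if abs(x - self.centroids[j]) <= self.radius:
                self.sizes[j] += 1
                self.centroids[j] += (x - self.centroids[j]) / self.sizes[j]
                return j
        self.centroids.append(float(x))  # open a new state for an outlier
        self.sizes.append(1)
        return len(self.centroids) - 1

    def observe(self, x):
        s = self._assign(x)
        if self.prev is not None:
            key = (self.prev, s)
            self.counts[key] = self.counts.get(key, 0) + 1
        self.prev = s
        return s

sm = StreamMarkov(radius=10)
states = [sm.observe(x) for x in [5, 7, 95, 97, 6, 94]]  # [0, 0, 1, 1, 0, 1]
```

Normalizing each row of `sm.counts` by its row total would yield the running estimate of the transition matrix.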
2.2.2. System Status Classification
In this section, we first explain why we choose ensemble classification to forecast the system status and then how the aggregate ensemble method addresses the concept drift and noisy data problems in data streams. Tan and Gu [10] apply a single statistical classifier to a static dataset to make classifications. Though this approach works well on static datasets, it is not applicable in a dynamic environment where system logs are generated continuously and even the underlying data-generating mechanism and causes of anomalies are constantly changing. To capture the time-evolving anomaly pattern, many solutions have been proposed to build classification models from data streams.

One simple model is online incremental learning [11, 21]. Incremental learning methods deliver a single learning model to represent an entire data stream and update the model continuously when new data arrives. Ensemble classification, in contrast, regards the data stream as several separate data chunks, trains classifiers on these chunks using different learning algorithms, and then builds an ensemble classifier through voting among these base classifiers. Although these models have been proved efficient and accurate, they depend on the assumption that the data stream being learned is of high quality, without consideration of data errors. However, real-world data streams, such as system monitoring streams and sensor network streams, often contain erroneous data values. As a result, the traditional online incremental model is likely to lose accuracy on a data stream that contains erroneous values.
Ensemble learning is a supervised method that employs multiple learners and combines their predictions. Unlike incremental learning, ensemble learning trains a number of models and gives the final prediction based on classifier voting. Because the final prediction rests on a number of base classifiers, ensemble learning can adaptively and rapidly address the concept drift and erroneous data problems in data streams. For these reasons, we choose to use ensemble classification.
In summary, ensemble classification can be categorized into two families: horizontal and vertical ensemble classification [15]. Horizontal ensembles build classifiers from several buffered chunks, while vertical ensembles build classifiers with different learning algorithms on the current chunk.
The vertical ensemble is shown in Figure 3. It uses $m$ different classification algorithms (e.g., we simply set $m = 3$) to build classifiers on the current chunk and then uses the results of these classifiers to form an ensemble classification model. The vertical ensemble only uses the current chunk to build classifiers, and its advantage is that using different algorithms to build the classifier model decreases the bias error among the classifiers. However, the vertical ensemble assumes that the data stream is errorless. As we discussed before, real-world data streams always contain errors. So, if the current chunk mostly contains noisy data, the result may suffer severe performance deterioration. To address this problem, the horizontal ensemble, which uses multiple history chunks to build classifiers, is employed.
The horizontal ensemble is shown in Figure 4. The data stream is separated into consecutive chunks (e.g., $c_1$ and $c_2$ are history chunks, and $c_3$ is the current chunk), and the aim of ensemble learning is to build classifiers on these chunks and predict the data in the yet-to-arrive chunk ($c_4$ in this picture). The advantage of the horizontal structure is that it can handle noisy data in the stream, because the prediction on a newly arriving data chunk depends on the average over different chunks. Even if noisy data deteriorates some chunks, the ensemble can still generate relatively accurate predictions. The disadvantage of the horizontal ensemble is that the data stream is continuously changing, and the information contained in previous chunks may become invalid, so that using these old-concept classifiers will not improve the overall prediction result.

Because of the limitations of both the horizontal and vertical ensembles, in this paper we use a novel ensemble classification that uses $m$ different learning algorithms to build classifiers on $n$ buffered chunks, training $m \times n$ base classifiers, as Figure 5 shows. By building an aggregate ensemble, it is capable of handling real-world data streams containing both concept drift and data errors.
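A minimal sketch of the aggregate ensemble's prediction step follows: a majority vote over the grid of base classifiers. The threshold rules below are stand-ins for trained models, not the paper's actual C4.5/Logistic/Naïve Bayes classifiers:

```python
def aggregate_ensemble_predict(classifiers, x):
    """Majority vote over an m-by-n grid of base classifiers.

    `classifiers` is a list of rows (one per learning algorithm), each
    row holding one classifier per buffered chunk. Each classifier is
    any callable returning 0 (normal) or 1 (anomaly)."""
    votes = [clf(x) for row in classifiers for clf in row]
    return 1 if sum(votes) * 2 > len(votes) else 0

# Stand-ins for trained base classifiers: threshold rules with slightly
# different cutoffs, as if trained by different algorithms on chunks.
def make_clf(th):
    return lambda x: 1 if x > th else 0

grid = [[make_clf(t) for t in (0.4, 0.5, 0.6)],     # "algorithm 1", 3 chunks
        [make_clf(t) for t in (0.45, 0.55, 0.65)]]  # "algorithm 2", 3 chunks

pred_hi = aggregate_ensemble_predict(grid, 0.7)  # all vote anomaly -> 1
pred_lo = aggregate_ensemble_predict(grid, 0.3)  # all vote normal  -> 0
```

Because the vote spans both algorithms and chunks, a single noisy chunk (one bad row entry) cannot flip the outcome on its own, which is the robustness argument made above.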
3. Experiment and Result
3.1. Experiment Setup
We evaluate our SAPredictor method on anomaly data collected from a realistic system: PlanetLab. PlanetLab [22] is a global research network that supports the development of new network services. The PlanetLab dataset [16] used in this paper contains 66 system-level metrics such as CPU load, free memory, and disk usage, as shown in Table 1. The sampling interval is 10 seconds. There are 50162 instances, among which 8700 are labeled as anomalies.

Our experiments were conducted on a 2.6-GHz Intel Dual-Core E5300 with 4 GB of memory running Ubuntu 10.04. We use sliding-window validation (window size = 1000 instances) because, in a real system, the labeled instances are sorted in chronological order of collection time. The reason we do not use cross-validation is that it randomly divides the dataset into pieces without considering the chronological order; under such circumstances, it is possible that current data would be used to predict past data, which does not make sense. Thus, sliding-window validation is more appropriate for our setting.
3.2. The Metrics Prediction Accuracy
Short-term predictions are helpful to prevent potential disasters and limit the damage caused by system anomalies. Usually, predicting the near-term future is more feasible and successful than long-term prediction [5]. So, in our experiment, we assess system state prediction in the short term.
In this experiment, we choose the k-means discretization technique to create the state boundaries. The reason is that, when we divide the data into $k$ clusters, each state will contain more adjacent data than states discretized by equal-width or equal-depth binning, because the middle point of each cluster is used as a state. We set the number of bins to 5, 10, 15, …, 30 and evaluate the quality of metric prediction by the mean prediction error (MPE), as in the study by Tan and Gu [10]:

$$\mathrm{MPE} = \frac{1}{|D| \cdot m} \sum_{i \in D} \sum_{j=1}^{m} \frac{\left| x_i^j - \hat{x}_i^j \right|}{\left| x_i^j \right|}, \quad (8)$$

where $D$ is the test dataset, $|D|$ is the number of instances in $D$, $m$ is the number of system metrics, $x_i^j$ is the actual value of metric $j$ for instance $i$, and $\hat{x}_i^j$ is the predicted value of metric $j$, which is represented by the mean value of the samples in the predicted bin. The smaller the value of MPE, the more accurate the predictor.
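Reading MPE as an average relative error over all instances and metrics (an assumption on our part; the exact normalization in [10] may differ), it can be computed as:

```python
def mean_prediction_error(actual, predicted):
    """Mean prediction error over |D| instances and m metrics.

    `actual` and `predicted` are lists of equal-length metric vectors;
    the relative error |x - xhat| / |x| is averaged over every cell."""
    total, cells = 0.0, 0
    for xs, ps in zip(actual, predicted):
        for x, p in zip(xs, ps):
            total += abs(x - p) / abs(x)
            cells += 1
    return total / cells

# Two instances, two metrics each; relative errors 0.1, 0.1, 0.1, 0.0.
actual = [[100.0, 50.0], [200.0, 40.0]]
predicted = [[90.0, 55.0], [220.0, 40.0]]
mpe = mean_prediction_error(actual, predicted)  # (0.3 / 4) = 0.075
```

A perfect predictor scores 0; lower is better, matching the reading above.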
We assess the MPE for the near-term future (1–5 time units ahead) and different bin sizes (5, 10, 15, 20, 25, and 30) on the PlanetLab dataset. Figure 6 shows the MPE on PlanetLab for time units 1–5 with a bin size of 20. From this figure, we make the following observations: BMC achieves a lower prediction error than DTMC from time unit 1 to 5; one-step prediction shows the most notable advantage, and the advantage decreases slightly as the horizon grows, which means that our algorithm fits better when the forecast period is shorter; both BMC and DTMC lose prediction accuracy as time goes by, which indicates that predicting anomalies over a longer term is more challenging.
Figure 7 shows the MPE on PlanetLab with different bin sizes (5, 10, 15, 20, 25, and 30) when the time unit is one. From this figure, we can see that both methods have higher MPE with a smaller number of bins. The reason is that a smaller number of states tends to group a larger range of data into each bin; since the mean of the bin is used as the prediction value, the gap between the prediction value and the real value is enlarged.
In Tables 2, 3, 4, 5, and 6, we compare the mean prediction errors of DTMC and BMC under different noise percentages. A noise percentage of $n\%$ means that the monitored value oscillates around the true value within $n\%$ of the distance between the adjacent state values, as illustrated in Figure 2. We choose $n$ from 10 to 50 in our experiment because a value would be falsely recognized as belonging to an adjacent state if $n$ were larger than 50%. Thus, in this paper, we set the noise percentage from 10% to 50%. The mean prediction error results in Tables 2–6 show that our proposed method BMC has better prediction quality than DTMC. Both BMC and DTMC have the smallest prediction error in one-step prediction, and the error grows as the number of prediction steps becomes larger. BMC has its most notable advantage over DTMC in one-step prediction, and the advantage decreases as the number of steps grows. Based on these observations, we conclude that our algorithm outperforms DTMC at all noise levels and fits best when forecasting imminent anomalies.
3.3. Ensemble Classification with Data Stream
In this experiment, we compare three ensemble classification methods with other classification algorithms, such as decision trees and logistic regression. For ease of comparison, we first summarize the assessment criteria for the different classification methods. Suppose that a data stream has $n$ data chunks; we aim to build a classifier to predict the labels of all instances in the yet-to-come chunk. To simulate different types of data streams, we use the following approach from [21]: noise selection: we randomly select 20% of the chunks from each dataset as noise chunks, arbitrarily assign each instance in them a class label that differs from its original class label, and finally put these noisy data chunks back into the data stream.
The performance of system anomaly prediction is evaluated by three criteria according to [20]: precision, recall, and F-measure. We use Table 7 to help explain the definitions of these criteria, where state 0 denotes normal and state 1 denotes anomaly.

These three criteria are defined as

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}. \quad (9)$$

We define precision as the proportion of successful predictions for each predicted state in a chunk, recall as the probability of each real state being successfully predicted in the chunk, and the F-measure as the harmonic mean of precision and recall.

Following the above process $n$ times, we obtain the average precision, recall, and F-measure. Ideally, a good classifier for a noisy data stream should have high average precision, high average recall, and a high average F-measure.
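These are the standard confusion-matrix definitions; a small sketch for the anomaly class (the counts below are arbitrary example values):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F-measure from confusion counts for the
    anomaly class (state 1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# E.g. 8 anomalies caught, 2 false alarms, 2 missed anomalies.
p, r, f = prf(tp=8, fp=2, fn=2)
```

Here precision and recall are both 0.8, so the F-measure (their harmonic mean) is also 0.8; when the two diverge, F is pulled toward the smaller of them.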
Table 8 compares the classification quality of the different classifiers. In this experiment, we choose three basic classifiers (C4.5, Logistic, and Naïve Bayes) as our base classifiers, and we set the sliding window size to 1000 instances. Columns 2 to 4 in Table 8 are the classification results that employ a single classifier: we use one chunk to train the model and test on the next chunk, then repeat the process by training on that chunk and testing on the one after, and so on. HTree, HNB, and HLogist are three horizontal ensemble classification methods that use both history and current chunks to train the classifier model: we first use the first chunk to train the model and test on the second, then use the first two chunks to train the model and test on the third, repeating this process until the end of the data stream. VerEn is the vertical ensemble model, which uses all three base classifiers trained on the current chunk to test the next chunk. The last column is the aggregate ensemble, which builds all base classifiers on both history and current chunks.

The results in Table 8 show that AggEn performs best on all three measurements, the single Naïve Bayes is second best, and VerEn is third best, while HLogist and Logistic rank last.
3.4. Anomaly Prediction System Cost
We have evaluated the overhead of our anomaly prediction model. Table 9 shows the average training time and prediction time. The training time includes the time of building BMC model and inducing the
anomaly classifier. The prediction time includes the time to retrieve state transition probabilities and generate the classification result for a single data record. These results are collected over
100 experiment runs. We observe that the total training time is within several hundreds of milliseconds, and the prediction requires almost 200 microseconds. The above overhead measurements show that
our approach is practical for performing online prediction of system anomalies.
3.5. SAPredictor Compared with Other Models
In this section, we compare the prediction quality of SAPredictor against DTMC combined with other state-of-the-art classifiers from the machine learning literature, that is, k-Nearest Neighbor (KNN), C4.5, Naïve Bayes, and Tree-Augmented Naïve Bayesian (TAN) networks. We compare two kinds of prediction models: one is our SAPredictor, which uses ensemble classification based on the metrics predicted by BMC; the other is DTMC combined with each of the single classifiers mentioned above. The performance of system anomaly prediction is evaluated by the same criteria used in Section 3.3: precision, recall, and F-measure.

Table 10 presents the experimental results of SAPredictor and of the other classifiers integrated with DTMC on the PlanetLab dataset. We notice that Naïve Bayes and KNN have the worst performance: their recall scores are 26.6% and 50.5%, respectively, and their F-measure scores are 25.0% and 57.3%, respectively. SAPredictor receives the highest scores in recall and F-measure on this dataset, 84.6% and 77.5%. Thus, our SAPredictor is considerably more accurate than the other models.
4. Conclusions and Future Work
In this paper, we propose a novel system anomaly prediction model, SAPredictor, which has clear advantages over a discrete-time Markov chain combined with other classifiers. SAPredictor consists of two parts: a belief Markov chain method, which extends the Evidential Markov chain by being capable of dealing with stream data, and aggregate ensemble classification, which identifies anomalies based on the values predicted by BMC. In conclusion, SAPredictor can handle data streams from real applications and systems with noise and measurement error.

Our experiments show that the BMC model achieves higher prediction accuracy than DTMC at every noise level and is especially fit for imminent anomaly prediction. SAPredictor achieves better system status prediction quality than other popular models such as DTMC + Naïve Bayes, DTMC + C4.5, and DTMC + KNN. SAPredictor also has a small overhead, which makes it practical for performing online prediction of system anomalies.

In the future, we plan to test and, where possible, improve SAPredictor in more real applications. In this paper, we consider the system as either normal or abnormal, while in reality the situation can be more complicated. SAPredictor can also be improved to distinguish between kinds of anomalies when making predictions and to send different levels of alerts. We also plan to publish a tool based on SAPredictor and apply it to complex, distributed systems.
Acknowledgment

This research was supported by the Ministry of Industry and Information Technology of China (no. 2010ZX01042-002-003-001).
References

1. A. Ray, "Symbolic dynamic analysis of complex systems for anomaly detection," Signal Processing, vol. 84, no. 7, pp. 1115–1130, 2004.
2. F. Salfner, M. Lenk, and M. Malek, "A survey of online failure prediction methods," ACM Computing Surveys, vol. 42, no. 3, article 10, 2010.
3. F. Salfner and M. Malek, "Using hidden semi-Markov models for effective online failure prediction," in Proceedings of the 26th IEEE International Symposium on Reliable Distributed Systems (SRDS '07), pp. 161–174, October 2007.
4. E. Kiciman and A. Fox, "Detecting application-level failures in component-based Internet services," IEEE Transactions on Neural Networks, vol. 16, no. 5, pp. 1027–1041, 2005.
5. Y. Zhang and C. Ma, "Fault diagnosis of nonlinear processes using multiscale KPCA and multiscale KPLS," Chemical Engineering Science, vol. 66, no. 1, pp. 64–72, 2011.
6. Y. Zhang, H. Zhou, S. J. Qin, and T. Chai, "Decentralized fault diagnosis of large-scale processes using multiblock kernel partial least squares," IEEE Transactions on Industrial Informatics, vol. 6, no. 1, pp. 3–10, 2010.
7. Y. Zhang, "Modeling and monitoring of dynamic processes," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 2, pp. 277–284, 2012.
8. Y. Liang, Y. Zhang, H. Xiong, and R. Sahoo, "Failure prediction in IBM BlueGene/L event logs," in Proceedings of the 7th IEEE International Conference on Data Mining (ICDM '07), pp. 583–588, October 2007.
9. P. Zhang, H. Muccini, A. Polini, and X. Li, "Run-time systems failure prediction via proactive monitoring," in Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE '11), pp. 484–487, November 2011.
10. Y. Tan and X. Gu, "On predictability of system anomalies in real world," in Proceedings of the 18th Annual IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS '10), pp. 133–140, August 2010.
11. G. Luo, K.-L. Wu, and P. S. Yu, "Answering linear optimization queries with an approximate stream index," Knowledge and Information Systems, vol. 20, no. 1, pp. 95–121, 2009.
12. X. Gu and H. Wang, "Online anomaly prediction for robust cluster systems," in Proceedings of the 25th IEEE International Conference on Data Engineering (ICDE '09), pp. 1000–1011, April 2009.
13. H. Soubaras, "On evidential Markov chains," in Foundations of Reasoning Under Uncertainty, vol. 249 of Studies in Fuzziness and Soft Computing, pp. 247–264, Springer, Berlin, Germany, 2010.
14. C. C. Aggarwal, J. Han, J. Wang, and P. S. Yu, "A framework for clustering evolving data streams," in Proceedings of the International Conference on Very Large Data Bases (VLDB '03), pp. 81–92, 2003.
15. P. Zhang, X. Zhu, Y. Shi, L. Guo, and X. Wu, "Robust ensemble learning for mining noisy data streams," Decision Support Systems, vol. 50, no. 2, pp. 469–479, 2011.
16. Y. Zhao, Y. Tan, Z. Gong, X. Gu, and M. Wamboldt, "Self-correlating predictive information tracking for large-scale production systems," in Proceedings of the 6th International Conference on Autonomic Computing (ICAC '09), pp. 33–42, Barcelona, Spain, June 2009.
17. C.-H. Lee, Y.-L. Lo, and Y.-H. Fu, "A novel prediction model based on hierarchical characteristic of web site," Expert Systems with Applications, vol. 38, no. 4, pp. 3422–3430, 2011.
18. D. Katsaros and Y. Manolopoulos, "Prediction in wireless networks by Markov chains," IEEE Wireless Communications, vol. 16, no. 2, pp. 56–63, 2009.
19. J. Y. Halpern, Reasoning about Uncertainty, MIT Press, Cambridge, Mass, USA, 2003.
20. J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2006.
21. S. Pang, S. Ozawa, and N. Kasabov, "Incremental linear discriminant analysis for classification of data streams," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 35, no. 5, pp. 905–914, 2005.
22. Y. Zhao, Y. Tan, Z. Gong, X. Gu, and M. Wamboldt, "Self-correlating predictive information tracking for large-scale production systems," in Proceedings of the 6th International Conference on Autonomic Computing (ICAC '09), pp. 33–42, ACM, June 2009.
MathGroup Archive: December 2005 [00587]
Re: Convincing Mathematica that Sqrt[a+b]Sqrt[a-b]==Sqrt[a^2-b^2]
• To: mathgroup at smc.vnet.net
• Subject: [mg63312] Re: Convincing Mathematica that Sqrt[a+b]Sqrt[a-b]==Sqrt[a^2-b^2]
• From: Bill Rowe <readnewsciv at earthlink.net>
• Date: Fri, 23 Dec 2005 05:08:35 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
On 12/20/05 at 11:35 PM, hattons at globalsymmetry.com (Steven T.
Hatton) wrote:
>Is there a way to convince Mathematica to multiply
>Sqrt[a+b]Sqrt[a-b] to produce Sqrt[a^2-b^2]?
Well there is:
z = Sqrt[a + b]*Sqrt[a - b];
Sqrt[Expand[z^2]]

Sqrt[a^2 - b^2]
To reply via email subtract one hundred and four | {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Dec/msg00587.html","timestamp":"2014-04-17T07:00:03Z","content_type":null,"content_length":"34554","record_id":"<urn:uuid:19503c56-3304-44d1-b46d-8735b36ce1e1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measuring Bias in Published Work
by Justin Esarey
In a series of previous posts, I’ve spent some time looking at the idea that the review and publication process in political science—and specifically, the requirement that a result must be
statistically significant in order to be scientifically notable or publishable—produces a very misleading scientific literature. In short, published studies of some relationship will tend to be
substantially exaggerated in magnitude. If we take the view that the “null hypothesis” of no relationship should not be a point at $\beta = 0$ but rather a set of substantively ignorable values at or
near zero, as I argue in another paper and Justin Gross (an assistant professor at UNC-CH) also argues in a slightly different way, then this also means that the literature will tend to contain many
false positive results—far more than the nominal $\alpha$ value of the significance test.
This opens an important question: is this just a problem in theory, or is it actually influencing the course of political science research in detectable ways?
To answer this question, I am working with Ahra Wu (one of our very talented graduate students studying International Relations and political methodology at Rice) to develop a way to measure the
average level of bias in a published literature and then apply this method to recently published results in the prominent general interest journals in political science.
We presented our initial results on this front at the 2013 Methods Meetings in Charlottesville, and I’m sad to report that they are not good. Our poster summarizing the results is here. This is an
ongoing project, so some of our findings may change or be refined as we continue our work; however, I do think this is a good time to summarize where we are now and seek suggestions.
First, how do you measure the bias? Well, the idea is to be able to get an estimate for $E[\beta \mid \hat{\beta} = \hat{\beta}_{0} \text{ and stat. sig.}]$. We believe that a conservative estimate of this quantity can be accomplished by simulating many draws of data sets with the structure of the target model but with varying values of $\beta$, where these $\beta$ values are drawn out of a prior distribution that is created to reflect a reasonable belief about the pattern of true relationships being studied in the field. Then, all of the $\hat{\beta}$ estimates can be recovered from properly specified models, then used to form an empirical estimate of $E[\beta \mid \hat{\beta} = \hat{\beta}_{0} \text{ and stat. sig.}]$. In essence, you simulate a world in which thousands of studies are conducted under a true and known distribution of $\beta$ and look at the resulting relationship between these $\beta$ and the statistically significant $\hat{\beta}$.
The relationship that you get between $E[\hat{\beta}$|stat. sig] and $\beta$ is shown in the picture below. To create this plot, we drew 10,000 samples (N = 100 each) from the normal distribution $k\
sim\Phi(\mu=0,\,\sigma=\sigma_{0})$ for three values of $\sigma_{0}\in\{0.5,\,1,\,2\}$ (we erroneously report this as 200,000 samples in the poster, but in re-checking the code I see that it was only
10,000 samples). We then calculated the proportion of these samples for which the absolute value of $t=\frac{\beta+k}{\sigma_{0}}$ is greater than 1.645 (the cutoff for a two-tailed significance
test, $\alpha=0.10$ ) for values of $\beta\in[-1,3]$.
As you can see, as $\hat{\beta}$ gets larger, its bias also grows–which is a bit counterintuitive, as we expect larger $\beta$ values to be less susceptible to significance bias: they are large
enough such that both tails of the sampling distribution around $\beta$ will still be statistically significant. That’s true, but it’s offset by the fact that under many prior distributions extremely
large values of $\beta$ are unlikely–less likely, in fact, than a small $\beta$ that happened to produce a very large $\hat{\beta}$! Thus, the bias actually rises in the estimate.
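To see the significance filter numerically, here is a small Monte Carlo sketch in Python (the parameter values and function name are illustrative, not from the paper or poster) of the average surviving estimate for a single true effect size:

```python
import random

def mean_significant_estimate(beta, sigma=1.0, n_sims=20000,
                              crit=1.645, seed=7):
    """Average estimate among simulated studies that clear the bar.

    Each study yields beta_hat ~ Normal(beta, sigma); it "publishes"
    when |beta_hat / sigma| > crit. With a modest true beta, the
    surviving estimates are systematically larger than beta."""
    rng = random.Random(seed)
    kept = [b for b in (rng.gauss(beta, sigma) for _ in range(n_sims))
            if abs(b / sigma) > crit]
    return sum(kept) / len(kept)

# True effect 0.5 with standard error 1: the average significant
# estimate ends up several times larger than the truth.
inflated = mean_significant_estimate(beta=0.5)
```

Adjusting `beta`, `sigma`, and the sample structure to mirror a published study is the matching exercise that turns this toy into a usable bias plot.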
With a plot like this in hand, determining $E[\beta \mid \hat{\beta} = \hat{\beta}_{0} \text{ and stat. sig.}]$ is a mere matter of reading the plot above. The only trick is that one must adjust the parameters of the simulation (e.g., the sample size) to match the target study before creating the matching bias plot.
Concordantly, we examined 177 quantitative articles published in the APSR (80 articles in volumes 102-107, from 2008-2013) and the AJPS (97 articles in volumes 54-57, from 2010-2013). Only articles
with continuous and unbounded dependent variables are included in our data set. Each observation of the collected data set represents one article and contains the article’s main finding (viz., an
estimated marginal effect); details of how we identified an article’s “main finding” are in the poster, but in short it was the one we thought that the author intended to be the centerpiece of his/
her results.
Using this data set, we applied the technique described above to estimate the average % absolute bias, $E[|\hat{\beta}-\beta|/|\hat{\beta}|]$, excluding cases we visually identified as outliers. We used
three different prior distributions (that is, assumptions about the distribution of true $\beta$ values in the data set) to create our bias estimates: a normal density centered on zero ($\Phi(\mu =
0, \sigma = 3)$), a diffuse uniform density between –1022 and 9288, and a spike-and-slab density with a 90% chance that $\beta = 0$ and a 10% chance of coming from the prior uniform density.
As shown in the Table below, our preliminary bias estimates for all of these prior densities hover in the 40-50% range, meaning that on average we estimate that the published estimates are $\approx$
40-50% larger in magnitude than their true values.
| prior density  | avg. % absolute bias |
|----------------|----------------------|
| normal         | 41.77%               |
| uniform        | 40%                  |
| spike-and-slab | 55.44%               |

*note: results are preliminary.
I think it is likely that these estimates will change before our final analysis is published; in particular, we did not adjust the range of the independent variable or the variance of the error term
$\varepsilon$ to match the published studies (though we did adjust sample sizes); consequently, our final results will likely change. Probably what we will do by the end is examine standardized
marginal effects—viz., t-ratios—instead of nominal coefficient/marginal effect values; this technique has the advantage of folding variation in $\hat{\beta}$ and $\hat{\sigma}$ into a single
parameter and requiring less per-study standardization (as t-ratios are already standardized). So I’m not yet ready to say that these are reliable estimates of how much the typical result in the
literature is biased. As a preliminary cut, though, I would say that the results are concerning.
We have much more to do in this research, including examining different evidence of the existence and prevalence of publication bias in political science and investigating possible solutions or
corrective measures. We will have quite a bit to say in the latter regard; at the moment, using Bayesian shrinkage priors seems very promising while requiring a result to be large (“substantively
significant”) as well as statistically significant seems not-at-all promising. I hope to post about these results in the future.
As a parting word on the former front, I can share one other bit of evidence for publication bias that casts a different light on some already published results. Gerber and Malhotra have published a
study arguing that an excess of p-values near the 0.05 and 0.10 cutoffs, two-tailed, is evidence that researchers are making opportunistic choices for model specification and measurement that enable
them to clear the statistical significance bar for publication. But the same pattern appears in a scenario when totally honest researchers are studying a world with many null results and in which
statistical significance is required for publication.
Specifically, we simulated 10,000 studies (each of sample size n=100) where the true DGP for each study j is $y=\beta_{j}x+\varepsilon$, $x\sim U(0,1)$, $\varepsilon\sim\Phi(\mu=0,\,\sigma=1)$. The
true value of $\beta_{j}$ has a 90% chance of being set to zero and a 10% chance of being drawn from $\Phi(\mu=0,\,\sigma=3)$ (this is the spike-and-slab distribution above). Consequently, the vast
majority of DGPs are null relationships. Correctly-specified regression models $\hat{y}=\hat{\gamma}+\hat{\beta}x$ are estimated on each simulated sample. The observed (that is,
published—statistically significant) and true, non-null distribution of standardized $\beta$ values (i.e., t-ratios) from this simulation are shown below.
This is a very close match for a diagram of t-ratios published in the Gerber-Malhotra paper, which shows the distribution of z-statistics (a.k.a. large-sample t-scores) from their examination of
published articles in AJPS and APSR.
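The spike-and-slab study simulation described above can be reconstructed roughly as follows (2,000 studies here rather than 10,000, to keep it quick; the OLS bookkeeping is a standard implementation, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_study(n=100):
    """One study under the spike-and-slab DGP; returns (t-ratio, true beta)."""
    beta = 0.0 if rng.random() < 0.9 else rng.normal(0.0, 3.0)  # 90% null
    x = rng.uniform(0.0, 1.0, n)
    y = beta * x + rng.normal(0.0, 1.0, n)
    X = np.column_stack([np.ones(n), x])         # intercept + slope
    coef = np.linalg.lstsq(X, y, rcond=None)[0]  # correctly specified OLS
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se, beta

t_ratios, betas = zip(*(simulate_study() for _ in range(2000)))
published = [t for t in t_ratios if abs(t) > 1.645]  # the significance filter
print(f"{len(published) / len(t_ratios):.0%} of simulated studies get 'published'")
```

Plotting a histogram of `published` reproduces the excess of t-ratios just past the significance cutoffs, even though every simulated researcher here is honest.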
So perhaps the fault, dear reader, is not in ourselves but in our stars—the stars that we use in published tables to identify statistically significant results as being scientifically important.
One Comment to “Measuring Bias in Published Work”
1. Are you aware of this paper?
Abel Brodeur, Mathias Lé, Marc Sangnier, and Yanos Zylberberg, “Star Wars: The Empirics Strike Back,” June 2012
Compound Interest Worksheets and Printables
Print the Compound Interest Worksheet/Printable in PDF
(answers are on the 2nd page of the PDF)
Financial institutions use compound interest to calculate the amount of interest paid to you on money or the amount of interest you will owe for a loan. This worksheet focuses on word problems for
compound interest:
If you deposited $200 in a one-year investment that paid interest at a rate of 12% compounded semi-annually, what amount would you have after 1 year?
Answer: $224.72 | {"url":"http://math.about.com/od/wordproblem1/ss/Compound-Interest-Worksheets-And-Printables_2.htm","timestamp":"2014-04-19T17:22:56Z","content_type":null,"content_length":"43243","record_id":"<urn:uuid:2ed91050-8d14-4f71-9886-93949a2dd1d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
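The worked answer can be checked with the standard compound-interest formula, sketched here in Python (the function name is mine):

```python
def compound(principal, annual_rate, periods_per_year, years):
    """Future value when interest is compounded periods_per_year times a year."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# $200 at 12%, compounded semi-annually, for 1 year:
print(round(compound(200, 0.12, 2, 1), 2))  # -> 224.72
```

Semi-annual compounding means two periods at 6% each: 200 × 1.06² = 224.72.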
Or else, it’s [rilly] complex
Unfiltered uses pattern matching to route incoming requests, so we’re pretty sensitive to the performance of partial functions in Scala. Lately I’ve been looking at the orElse method of
PartialFunction as part of a potential refactoring and saw some surprising results.
Say you have these two partial functions defined:
val pf: PartialFunction[String, Boolean] = {
  case "hello" => true
}

val fallback: PartialFunction[String, Boolean] = {
  case _ => false
}
And then you create a chained partial function:
val std = pf.orElse(pf).orElse(pf).orElse(fallback)
The second and third pf add no value; we just want to see how they affect performance. And since it is better to be able to test arbitrary numbers of things, instead of the above you would use a
fold. Sort of like this…
def std(n: Int) =
  (pf /: (1 to n)) {
    (a, _) => a.orElse(pf)
  }
(Don’t worry about pasting this into your repl, I’ll link to the github in a bit.)
For an n of 50 you might expect to see some difference in performance between the application of a value in the domain of pf compared to one that must be “orElsed” all the way until fallback. Conjure
a few more functions, one to time a block in milliseconds and one to repeat it alot, then see what happens:
scala> val std50 = std(50)
std50: Test.PF = <function1>
scala> time { alot { std50("hello") } }
res1: Long = 446
scala> time { alot { std50("hell") } }
res2: Long = 60
I don’t know about you, but I was expecting “hello” to be faster, since it matches the first pf and doesn’t have to be tested against any of the others. Instead it’s an order of magnitude slower.
What gives?
A nesting we go
It’s probably a good idea at this point to review the definition of the PartialFunction#orElse method in Scala.
def orElse[A1 <: A, B1 >: B](that: PartialFunction[A1, B1]) =
  new PartialFunction[A1, B1] {
    def isDefinedAt(x: A1): Boolean =
      PartialFunction.this.isDefinedAt(x) || that.isDefinedAt(x)
    def apply(x: A1): B1 =
      if (PartialFunction.this.isDefinedAt(x)) PartialFunction.this.apply(x)
      else that.apply(x)
  }
A new partial function is produced each time you call orElse, and yes it does look like it’s short-circuiting in the right places. It’s not apparent why the “hello” case would be so much slower,
instead of a little bit faster.
To see what’s really happening, consider a tiny example:
val pf1 = pf.orElse(pf)
val pf2 = pf1.orElse(pf)
val pf3 = pf2.orElse(fallback)
So what happens when we apply this?
First, the merged partial function pf3 must check if pf2 is defined for “hello”. Then pf2 must check with pf1, and finally pf1 can say that yes pf is defined for “hello”.
Done? Not at all! pf3 can safely call pf2("hello"), but then we are back in the same apply method defined above. pf2 must check (again) whether pf1 is defined, and pf1 will check pf. And then we can
call pf1("hello")…
The calls look like this:
1. pf3(“hello”)
2. pf2.isDefinedAt(“hello”)
3. pf1.isDefinedAt(“hello”)
4. pf.isDefinedAt(“hello”)
   1. pf2(“hello”)
   2. pf1.isDefinedAt(“hello”)
   3. pf.isDefinedAt(“hello”)
      1. pf1(“hello”)
      2. pf.isDefinedAt(“hello”)
         1. pf(“hello”)
And that’s what we used to call quadratic (not exponential, it’s been a while!) complexity, back in programming school. It explains why “hello” is so ungodly slow. But what about “hell”?
1. pf3(“hell”)
2. pf2.isDefinedAt(“hell”) || pf.isDefinedAt(“hell”)
3. pf1.isDefinedAt(“hell”) || pf.isDefinedAt(“hell”)
4. pf.isDefinedAt(“hell”) || pf.isDefinedAt(“hell”)
   1. fallback(“hell”)
As the stack unwinds we have to make a call that was short circuited in the “hello” case. But since the nested partial functions are not applied, we avoid rechecking isDefined for all lower levels,
at each level. As a result, this case is much faster for larger values of n.
The Crowbar
Most people probably aren’t using large values of n where orElse is concerned, but as I said we’re a bit touchy with this stuff in Unfiltered. I wanted to come up with an alternative implementation
that has the linear complexity that most of us assumed orElse had all along.
The problem is, PartialFunction does not give you much to work with. Its only fundamental difference from a standard function is isDefinedAt; all its other methods, like lift, are conveniences built
on top of it.
If only there were some way to tentatively apply the function such that we get the value back if it succeeds, to avoid all this mad disassembling and reassembling of the orElse russian doll. If only
we had the interface that lift provides, on partial functions themselves. Well, there is one way.
trait PartialAttempt[-A,+B] extends PartialFunction[A,B] {
  def attempt(x: A): Option[B]
}
Then toss in a few bat wings, ground up sudafed, and old Java books:
def asAttempt[A,B](pf: PartialFunction[A,B]): PartialAttempt[A,B] =
  pf match {
    case pa: PartialAttempt[_,_] => pa
    case pf => new AttemptWrapper(pf)
  }
class AttemptWrapper[-A,+B](underlying: PartialFunction[A,B])
    extends PartialAttempt[A,B] {
  val lifted = underlying.lift
  def isDefinedAt(x: A) = underlying.isDefinedAt(x)
  def apply(x: A) = underlying.apply(x)
  def attempt(x: A) = lifted(x)
}
Finally, a nesting doll class that knows about attempt:
class OrElse[A,B,A1 <: A, B1 >: B](
  left: PartialAttempt[A,B],
  right: PartialAttempt[A1,B1]
) extends PartialAttempt[A1,B1] {
  def isDefinedAt(x: A1): Boolean = {
    left.isDefinedAt(x) || right.isDefinedAt(x)
  }
  def apply(x: A1): B1 = {
    left.attempt(x) orElse {
      right.attempt(x)
    } getOrElse {
      throw new MatchError(x)
    }
  }
  def attempt(x: A1): Option[B1] = {
    left.attempt(x).orElse {
      right.attempt(x)
    }
  }
}
And now we can define our replacement orElse:
def orElse[A, B, A1 <: A, B1 >: B](
left: PartialFunction[A,B],
right: PartialFunction[A1, B1]
): PartialFunction[A1, B1] =
new OrElse(asAttempt(left), asAttempt(right))
Does it work?
n= 45 std: 402, 46 opt: 21, 51
n= 46 std: 418, 44 opt: 22, 53
n= 47 std: 436, 46 opt: 24, 53
n= 48 std: 454, 50 opt: 23, 55
n= 49 std: 474, 47 opt: 25, 55
n= 50 std: 503, 50 opt: 26, 58
It works!
std is the standard library orElse, opt is the one implemented as above. The first timing is for “hello”, the second for “hell”. With opt we avoid the nasty worst case behavior on “hello”, and in
fact it’s faster than for “hell” which is what we originally expected to happen. You can try it yourself, see n8han/orelse on github.
You might be thinking, couldn’t I just lift all my PartialFunctions and implement a similar orElse without the ugly matching on a subtype? Sure! Just rewrite your code to use the type A => Option[B] everywhere instead of partial functions.
But as it stands partial functions have valuable support in the language, particularly the ability to be defined with a pattern matching block. And with Unfiltered, I’m reluctant to clutter the type
pool when we already have an interface that people seem to like and understand. So yeah, we might incorporate something sneaky like this, with a package private extension of PartialFunction.
It can be our linearly complex secret.
The Feynman propagator on a causal set.
How sure are you that spacetime is continuous? One approach to quantum gravity, causal set theory, models spacetime as a discrete structure: a causal set. This talk begins with a brief introduction
to causal sets, then describes a new approach to modelling a quantum scalar field on a causal set. We obtain the Feynman propagator for the field by a novel procedure starting with the Pauli-Jordan
commutation function. The candidate Feynman propagator is shown to agree with the continuum result. This model opens the door to physical predictions for scalar matter on a causal set. | {"url":"http://perimeterinstitute.ca/videos/feynman-propagator-causal-set","timestamp":"2014-04-20T05:49:05Z","content_type":null,"content_length":"26260","record_id":"<urn:uuid:cb7ff786-c14f-4f79-8baa-d62de4985831>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ex Astris Scientia - Warp Propulsion - 1 Real Physics and Interstellar Travel
1 Real Physics and Interstellar Travel
The Physics and Technology of Warp Propulsion
1.1 Classical Physics - 1.2 Special Relativity - 1.3 Twin Paradox - 1.4 Causality Paradox
1.5 Other Obstacles to Interstellar Travel - 1.6 General Relativity - 1.7 Examples of Relativistic Travel
This chapter summarizes some very basic theorems of physics. They mostly predate the theories of special relativity and of general relativity.
Newton's laws of motion Isaac Newton discovered the following laws. They are still valid (meaning that they are a very good approximation) for speeds much slower than the speed of light.
1. An object at rest or in uniform motion in a straight line will remain at rest or in the same uniform motion, respectively, unless acted upon by an unbalanced force. This is also known as the law of inertia.
2. The acceleration a of an object is directly proportional to the total unbalanced force F exerted on the object, and is inversely proportional to the mass m of the object (in other words, as mass increases, the acceleration has to decrease). The acceleration of an object has the same direction as the resulting force. This is also known as the law of acceleration.
Eq. 1.1: a = F/m (equivalently, F = m*a)
3. If one object exerts a force on a second object, the second object exerts a force equal in magnitude and opposite in direction on the object body. This is also known as the law of interaction.
Gravitation Two objects with a mass of m[1] and m[2], respectively, and a distance of r between the centers of mass, attract each other with a force F of:
Eq. 1.2: F = G * m[1] * m[2] / r^2
G=6.67259*10^-11m^3kg^-1s^-2 is Newton's constant of gravitation. If an object with a mass m of much less than Earth's mass is close to Earth's surface, it is convenient to approximate Eq. 1.2 as
Eq. 1.3: F = m * g
Here g is an acceleration slightly varying throughout Earth's surface, with an average of 9.81ms^-2.
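A short numerical check of Eq. 1.2 and the Eq. 1.3 approximation; the Earth mass and radius below are standard reference values, not taken from the text:

```python
G = 6.67259e-11     # m^3 kg^-1 s^-2, Newton's constant as quoted above
M_EARTH = 5.972e24  # kg (assumed reference value)
R_EARTH = 6.371e6   # m, mean radius (assumed reference value)

def grav_force(m1, m2, r):
    """Eq. 1.2: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

# Eq. 1.3 near the surface: F = m * g, with g = G * M / R^2 for a 1 kg test mass
g = grav_force(M_EARTH, 1.0, R_EARTH)
print(round(g, 2))  # close to the quoted average of 9.81 m/s^2
```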
Momentum conservation In an isolated system, the total momentum is constant. This fundamental law is not affected by the theories of Relativity.
Energy conservation In an isolated system, the total energy is constant. This fundamental law is not affected by the theories of Relativity.
Second law of thermodynamics The overall entropy of an isolated system is always increasing. Entropy generally means disorder. An example is the heat flow from a warmer to a colder object. The
entropy in the colder object will increase more than it will decrease in the warmer object. This is why the reverse process, leading to lower entropy, would never take place spontaneously.
Doppler shift If the source of the wave is moving relative to the receiver or the other way round, the received signal will have a different frequency than the original signal. In the case of sound
waves, two cases have to be distinguished. In the first case, the signal source is moving with a speed v relative to the medium, mostly air, in which the sound is propagating at a speed w:
Eq. 1.4: f = f[0] / (1 ± v/w)
f is the resulting frequency, f[0] the original frequency. The plus sign yields a frequency decrease in case the source is moving away, the minus sign an increase if the source is approaching. If the
receiver is moving relative to the air, the equations are different. If v is the speed of the receiver, then the following applies to the frequency:
Eq. 1.5: f = f[0] * (1 ± v/w)
Here the plus sign denotes the case of an approaching receiver and an according frequency increase; the minus sign applies to a receiver that moves away, resulting in a lower frequency.
The substantial difference between the two cases of moving transmitter and moving receiver is due to the fact that sound needs air in order to propagate. Special relativity will show that the
situation is different for light. There is no medium, no "ether" in which light propagates and the two equations will merge to one relativistic Doppler shift.
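Both classical Doppler cases can be sketched in Python; the 440 Hz tone and the 34/340 m/s speeds are illustrative assumptions:

```python
def doppler_moving_source(f0, v, w, receding=True):
    """Eq. 1.4: source moving at v through a medium with sound speed w."""
    return f0 / (1 + v / w) if receding else f0 / (1 - v / w)

def doppler_moving_receiver(f0, v, w, approaching=True):
    """Eq. 1.5: receiver moving at v relative to the medium."""
    return f0 * (1 + v / w) if approaching else f0 * (1 - v / w)

# A 440 Hz tone, v = 34 m/s, sound speed w = 340 m/s in air:
print(doppler_moving_source(440, 34, 340))    # receding source: pitch drops
print(doppler_moving_receiver(440, 34, 340))  # approaching receiver: pitch rises
```

Note that the two shifts are not mirror images of each other, which is exactly the asymmetry between the two classical cases that relativity removes for light.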
Particle-wave dualism In addition to Einstein's equivalence of mass and energy (see 1.2 Special Relativity), de Broglie unified the two terms in that any particle exists not only alternatively but
even simultaneously as matter and radiation. A particle with a mass m and a speed v was found to be equivalent to a wave with a wavelength lambda. With h, Planck's constant, the relation is as follows:
Eq. 1.6: lambda = h / (m * v)
The best-known example is the photon, a particle that represents electromagnetic radiation. The other way round, electrons, formerly known to have particle properties only, were found to show a
diffraction pattern which would be only possible for a wave. The particle-wave dualism is an important prerequisite to quantum mechanics.
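A quick numerical illustration of the de Broglie relation; Planck's constant and the electron parameters are standard reference values, not from the text:

```python
H = 6.626e-34  # Planck's constant in J*s (assumed reference value)

def de_broglie(m, v):
    """Eq. 1.6: lambda = h / (m * v)."""
    return H / (m * v)

# An electron (m ~ 9.109e-31 kg, assumed value) at 1e6 m/s:
lam = de_broglie(9.109e-31, 1e6)
print(f"{lam:.2e} m")  # sub-nanometre, comparable to atomic spacings
```

Wavelengths on the order of atomic spacings are what make electron diffraction in crystals observable in the first place.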
Special relativity (SR) doesn't play a role in our daily life. Its impact becomes apparent only for relative speeds that are considerable fractions of the speed of light, c. I will henceforth
occasionally refer to them as "relativistic speeds". The effects were first measured as late as towards the end of the 19th century and explained by Albert Einstein in 1905.
Side note There are many approaches in literature and in the web to explain special relativity. Please refer to the appendix. A very good reference from which I have taken several suggestions is
Jason Hinson's article on Relativity and FTL travel [Hin1]. You may wish to read his article in parallel.
The whole theory is based on two postulates:
There is no invariant "fabric of space" or "ether" relative to which an absolute speed could be defined or measured. The terms "moving" or "resting" make only sense if they refer to a certain other
frame of reference. The perception of movement is always mutual; the starship pilot who leaves Earth could claim that he is actually resting while the solar system is moving away.
2. The speed of light, c=3*10^8m/s in the vacuum, is the same in all directions and in all frames of reference. This means that nothing is added or subtracted to this speed as the light source
apparently moves.
Frames of reference In order to explain special relativity, it is crucial to introduce frames of reference. Such a frame of reference is basically a point-of-view, something inherent to an individual
observer who perceives an event from a certain angle. The concept is in some way similar to the trivial spatial parallax where two or more persons see the same scene from different spatial angles and
therefore give different descriptions of it. However, the following considerations are somewhat more abstract. "Seeing" or "observing" will not necessarily mean a sensory perception. On the contrary,
the observer is assumed to take into consideration every "classic" measurement error such as signal delay or Doppler shift.
Aside from these effects that can be rather easily handled mathematically there is actually one even more severe constraint. The considerations on special relativity require inertial frames of
reference. According to the definition in the general relativity chapter, this would be a floating or free falling frame of reference. Any presence of gravitational or acceleration forces would not
only spoil the measurement, but could ultimately even question the validity of the SR. One provision for the following considerations is that all observers should float within their starships in
space so that they can be regarded as local inertial frames. Basically, every observer has their own frame of reference; two observers are in the same frame if their relative motion to a third frame
is the same, regardless of their distance.
Space-time diagram Special relativity takes place in the four-dimensional space-time. In an inertial frame all the Cartesian spatial coordinates x, y and z are equivalent (there is no "up" and "down"
in space in a total absence of gravitational forces). Hence, for the sake of simplicity, we may substitute the three axes with one generic horizontal space (x-) axis which may stand for any of the
coordinates x,y,z or all of them. Together with the vertical time (t-) axis we obtain a two-dimensional diagram (Fig. 1.1). It is very convenient to give the distance in light-years and the time in
years. Irrespective of the current frame of reference, the speed of light always equals c and would be exactly 1ly per year in our diagram, according to the second postulate. Any light beam will
therefore always form an angle of either 45° or -45° with the x-axis and the t-axis, as indicated by the yellow lines.
Fig. 1.1 Space-time diagram for a resting observer
A resting observer O (or better: an observer who thinks to be resting) draws an exactly perpendicular x-t-diagram for himself. The x-axis is equivalent to t=0 and is therefore a line of simultaneity,
meaning that for O everything located on this line is simultaneous. This applies to every horizontal line t=const. likewise. The t-axis and every line parallel to it denote x=const. and therefore no
movement in this frame of reference. If O is to describe the movement of another observer O* with respect to himself, O*'s time axis t* is sloped, and the reciprocal slope indicates a certain speed v=x/t. Fig. 1.2 shows O's coordinate system in gray, and O*'s in O's view in white. The common origin denotes that O and O* pass by each other (x=x*=0) at the time t=t*=0. At first glance it seems
strange that O*'s x*-axis is sloped into the opposite direction than his t*-axis, distorting his coordinate system.
Fig. 1.2 Space-time diagram for a resting and a moving observer
Fig. 1.3 Sloped x*-axis of a moving observer
The x*-axis can be explained by assuming two events A and B occurring at t*=0 at some distance to the two observers, as depicted in Fig. 1.3. O* sees them simultaneously, whereas O sees them at two
different times. Since the two events are simultaneous in O*'s frame, the line A-0-B defines his x*-axis. A and B might be located anywhere on the 45-degree light paths traced back from the point "O*
sees A&B", so we need further information to actually locate A and B. Since O* is supposed to actually *see* them at the same time (and not only date them back to t*=0), we also know that the two
events A and B must have the same distance from the origin of the coordinate system. Now A and B are definite, and connecting them gives us the x*-axis. Some simple trigonometry would reveal that
actually the angle between x* and x is the same as between t* and t, only the direction is opposite.
The faster the moving observer is, the closer will the two axes t* and x* move to each other. It becomes obvious in Fig. 1.2 that finally, at v=c, they will merge to one single axis, equivalent to
the path of a light beam.
Time dilation The above space-time diagrams don't have a scale on the x*- and t*-axes so far. It is not the same scale as on the x- and t-axis, respectively. The method of determining the t*-scale is
illustrated in the upper left half of Fig. 1.4. When the moving observer O* passes the resting observer O, they both set their clocks to t*=0 and t=0, respectively. Some time later, O's clock shows t
=3, and at a yet unknown instant O*'s clock shows t*=3. The yellow light paths show when O will actually *see* O*'s clock at t*=3 (which is obviously after his own t=3 because the light needs time to
travel), and vice versa. If O is smart enough, he can calculate the time when this light was emitted (by tracing back the -45° yellow line to O*'s t*-axis). His lines of simultaneity are exactly
horizontal (red line), and the reconstructed event "t*=3" will take place at some yet unknown time on his t-axis. And even though O has taken into account the speed of light, "t*=3" still happens
later than t=3. The quotation marks distinguish O's reconstruction of "t*=3" and O*'s direct reading t*=3. O* will do the same by reconstructing the event "t=3" (green line), which happens after his
own t*=3. In other words, just as O sees the time of O* dilated, O* can say the same about O's time.
Fig. 1.4 Illustration of time dilation (upper left half) and length contraction (lower right half)
Since O*'s x*-axis and therefore his green line of simultaneity is sloped while the x-axis is not, it is impossible that any two events t=3 and "t*=3" and the two corresponding events t*=3 and "t=3"
are simultaneous (and would occupy a single point) on the respective t-axis. If there is no absolute simultaneity, at least one of the two observers would see the other one's time dilated (in slow
motion). Now we have to apply the first postulate, the principle of relativity. There must not be any preferred frame of reference, all observations have to be mutual. Either observer sees the other
one's time dilated by the same factor. Not only in this example, but always. In our diagram the red and the green line have to cross to fulfill this postulate. More precisely, the ratio "t*=3" to t=3
has to be equal to "t=3" to t*=3. Some further calculations yield the following time dilation:
Eq. 1.7: t = t* / sqrt(1 - v^2/c^2) = gamma * t*, where gamma = 1/sqrt(1 - v^2/c^2)
Side note When drawing the axes to scale in an x-t diagram, one has to account for the inherently longer hypotenuse t* and multiply the above formula with an additional factor cos alpha to
"project" t* on t.
Note that the time dilation would be the square root of a negative number (imaginary) if we assume an FTL speed v>c. Imaginary numbers are not really forbidden, on the contrary, they play an
important role in the description of waves. Anyway, a physical quantity such as the time doesn't make any sense once it gets imaginary. Unless a suited interpretation or a more comprehensive theory
is found, considerations end as soon as a time (dilation), which has to be finite and real by definition, would become infinitely large or imaginary. The same applies to the length contraction and
mass increase. Star Trek's warp theory (although it does not really have a mathematical description) circumvents all these problems in a way that no such relativistic effects occur.
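The dilation factor (the gamma that, as the text notes below, also governs length contraction and mass increase) can be explored numerically; the sample speeds are illustrative:

```python
import math

def gamma(v, c=3e8):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 3e8
print(gamma(0.1 * c))  # barely above 1: relativity is invisible at "low" speeds
print(gamma(0.8 * c))  # ~1.667: clocks visibly dilated
# For v >= c the radicand is zero or negative and math.sqrt raises ValueError,
# mirroring the text's remark that the dilation becomes infinite or imaginary.
```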
Length contraction The considerations for the scale of the x*-axis are similar as those for the t*-axis. They are illustrated in the bottom right portion of Fig. 1.4. Let us assume that O and O* both
have identical rulers and hold their left ends x=x*=0 when they meet at t=t*=0. Their right ends are at x=l on the x-axis and at x*=l on the x*-axis, respectively. O and his ruler rest in their frame
of reference. At t*=0 (which is not simultaneous with t=0 at the right end of the ruler!) O* obtains a still unknown length "x=l" for O's ruler (green line). O* and his ruler move along the t*-axis.
At t=0, O sees an apparent length "x=l" of O*'s ruler (red line). Due to the slope of the t*-axis, it is impossible that the two observers mutually see the same length l for the other ruler. Since
the relativity principle would be violated in case one observer saw two equal lengths and the other one two different lengths, the mutual length contraction must be the same. Note that the geometry
is virtually the same as for the time dilation, so it's not astounding that length contraction is determined by the factor gamma too:
Eq. 1.8: l* = l * sqrt(1 - v^2/c^2) = l / gamma
Side note Once again, note that when drawing the x*-axis to scale, a correction is necessary, a factor of cos alpha to the above formula.
Addition of velocities One of the most popular examples used to illustrate the effects of special relativity is the addition of velocities. It is obvious that in the realm of very slow speeds it's
possible to simply add or subtract velocity vectors from each other. For simplicity, let's assume movements that take place in only one dimension so that the vector is reduced to a plus or minus sign
along with the absolute speed figure, like in the space-time diagrams. Imagine a tank that has a speed of v compared to the ground and to an observer standing on the ground (Fig. 1.5). The tank fires
a projectile whose speed is determined as w by the tank driver. The resting observer, on the other hand, will register a projectile speed of v+w relative to the ground. So far, so good.
Fig. 1.5 Non-relativistic addition of velocities
Fig. 1.6 Relativistic addition of velocities
The simple addition (or subtraction, if the speeds have opposite directions) seems very obvious, but it isn't so if the single speeds are considerable fractions of c. Let's replace the tank with a
starship (which is intentionally a generic vessel, no Trek ship), the projectile with a laser beam and assume that both observers are floating, one in open space and one in his uniformly moving
rocket (no acceleration), at a speed of c/2 compared to the first observer (Fig. 1.6). The rocket pilot will see the laser beam move away at exactly c. This is still exactly what we expect. However,
the observer in open space won't see the light beam travel at v+c=1.5c but only at c. Actually, any observer with any velocity (or in any frame of reference) would measure a light speed of exactly c.
Space-time diagrams allow us to derive the addition theorem for relativistic velocities. The resulting speed u is given by:
Eq. 1.9: u = (v + w) / (1 + v*w/c^2)
For v,w<<c we may neglect the second term in the denominator. We obtain u=v+w as we expect it for small speeds. If vw gets close to c^2, the speed u may be close to, but never equal to or even faster
than c. Finally, if either v or w equals c, u is equal to c as well. There is obviously something special to the speed of light. c always remains constant, no matter where in which frame and which
direction it is measured. c is also the absolute upper limit of all velocity additions and can't be exceeded in any frame of reference.
Mass increase Mass is a property inherent to any kind of matter. We may distinguish two forms of mass, one that determines the force that has to be applied to accelerate an object (inertial mass) and one that determines which force it experiences in a gravitational field (gravitational mass). Since the equivalence principle of GR, at the latest, they have been found to be absolutely identical.
However, mass is apparently not an invariant property. Consider two identical rockets that lifted off together at t=0 and now move away from the launch platform in opposite directions, each with a
constant absolute speed of v. Each pilot sees the launch platform move away at v, while Eq. 1.9 shows us that the two ships move away from each other at a speed u<2v. The "real" center of mass of the
whole system of the two ships would be still at the launch platform, however, each pilot would see a center of mass closer to the other ship than to his own. This may be interpreted as a mass
increase of the other ship to m compared to the rest mass m[0] measured for both ships prior to the launch:
Eq. 1.10:  m = m[0] / sqrt(1 - v^2/c^2)
This function is plotted in Fig. 1.7.
Fig. 1.7 Mass increase for relativistic speeds
So each object has a rest mass m[0] and an additional mass m-m[0] due to its speed as seen from another frame of reference. This is actually a convenient explanation for the fact that the speed of
light cannot be reached. The mass increases more and more as the object approaches c, and so would the required momentum to propel the ship.
Finally, at v=c, we would get an infinite mass, unless the rest mass m[0] is zero. The latter must be the case for photons which always move at the speed of light, which even define the speed of
light. If we assume an FTL speed v>c, the denominator will be the square root of a negative number, and therefore the whole mass will be imaginary. As already stated for the time dilation, there is
not yet a suitable theory of how an imaginary mass could be interpreted.
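The growth of the mass factor in Eq. 1.10 can be tabulated numerically. A small Python sketch (the function name is my own):

```python
import math

# Relativistic mass increase (Eq. 1.10): m = m0 / sqrt(1 - v^2/c^2).
def mass_factor(v_over_c):
    """Ratio m/m0 for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {beta}c  ->  m/m0 = {mass_factor(beta):.3f}")
# The factor grows without bound as v approaches c; at v = c the
# denominator is zero, and beyond c it would be imaginary.
```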
Mass-energy equivalence Let us consider Eq. 1.10 again. It is possible to express it as follows:
Eq. 1.11:  m = m[0] · (1 + ½·v^2/c^2 + ⅜·v^4/c^4 + ...)
It is obvious that we may neglect the third-order and following terms for slow speeds. If we multiply the equation by c^2, we obtain the familiar Newtonian kinetic energy ½m[0]v^2 plus a new
term m[0]c^2. Obviously already the resting object has a certain energy content m[0]c^2. We get a more general expression for the complete energy E contained in an object with a rest mass m[0] and a
moving mass m, so we may write (drumrolls!):
Eq. 1.12:  E = m·c^2
Energy E and mass m are equivalent; the only difference between them is the constant factor c^2. If there is an according energy to each given mass, can the mass be converted to energy? The answer is
yes, and Trek fans know that the solution is a matter/antimatter reaction in which the two forms of matter annihilate each other, thereby transforming their whole mass into energy.
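To get a feeling for the magnitudes involved in Eq. 1.12, here is a back-of-the-envelope calculation in Python:

```python
# Mass-energy equivalence (Eq. 1.12): E = m * c^2.
c = 299_792_458.0    # speed of light in m/s
m = 1.0              # one kilogram of matter

E = m * c ** 2
print(f"E = {E:.3e} J")   # roughly 9e16 joules
# A matter/antimatter reaction annihilating 1 kg of matter with 1 kg of
# antimatter would release twice this amount.
```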
Light cone Let us have a look at Fig. 1.1 again. There are two light beams running through the origin of the diagram, one traveling in positive and one in negative x direction. The slope is 1ly per
year, which equals c. If nothing can move faster than light, then every t*-axis of a moving observer and every path of a message sent from one point to another in the diagram must be steeper than
these two lines. This defines an area "inside" the two light beams for possible signal paths originating at or going to (x=0,t=0). This area is marked dark green in Fig. 1.8. The black area is
"outside" the light cone. The origin of the diagram marks "here (x=0)" and "now (t=0)" for the resting observer.
Fig. 1.8 The light cone
The common-sense definition tells us that "future" is any event at t>0, and past is any event at t<0. Special relativity shows us a different view of these two terms. Let us consider the four marked
events which could be star explosions (novae), for instance. Event A is below the x-axis and within the light cone. It is possible for the resting observer O to see or to learn about the event in the
past, since a -45° light beam would reach the t-axis at about one and a half years prior to t=0. Therefore this event belongs to O's past. Event B is also below the x-axis, but outside the light
cone. The event has no effect on O in the present, since the light would need almost another year to reach him. Strictly speaking, B is not in O's past. Similar considerations are possible for the
term "future". Since his signal wouldn't be able to reach the event C, outside the light cone, in time, O is not able to influence it. It's not in his future. Event D, on the other hand, is inside
the light cone and may therefore be caused or influenced by the observer.
What about a moving observer? One important consequence of the considerations in this whole chapter was that two different observers will disagree about where and when a certain event happens. The
light cone, on the other hand, remains the same, irrespective of the frame of reference. So even if two observers meeting at t=0 have different impressions about simultaneity, they will agree that
there are certain, either affected (future) or affecting (past) events inside the light cone, and outside events they shouldn't bother about.
The considerations about the time dilation in special relativity had the result that the terms "moving observer" and "resting observer" are interchangeable as are their space-time diagrams. If there
are two observers with a speed relative to each other, either of them will see the other one move. Either observer will see the other one's clock ticking slower. Special relativity necessarily
requires that the observations are mutual, since it forbids a preferred, absolutely resting frame of reference. Either clock is slower than the other one? How is this possible?
The problem Specifically, the twin paradox is about twins of whom one travels to the stars at a relativistic speed, while the other one stays on Earth. It is obvious that the example assumes twins,
since it would be easier to see if one of them actually looks older than the other one when they meet again. Anyway, it should work with unrelated persons as well. What happens when the space
traveler returns to Earth? Is he the younger one, or maybe his twin on Earth, or are they equally old?
Side note The following example for the twin paradox deliberately uses the same figures as Jason Hinson's excellent treatise on Relativity and FTL travel [Hin1], to increase the chance of
understanding it. And because I was too lazy to recalculate everything. ;-)
To anticipate the result, the space traveler will be the younger one when he returns. The solution is almost trivial. Time dilation only remains the same as long as both observers stay in their
respective frames of reference. However, if the two observers want to meet again, one of them or both of them have to change their frame(s) of reference. In this case it is the space traveler who has
to decelerate, turn around, and accelerate his starship in the other direction. It is important to note that the whole effect can be explained without referring to any general relativity effects.
Time dilation attributed to acceleration or gravity will change the result, but it will not play a role in the following discussion. The twin paradox is not really a paradox; it can be resolved, and this is best done with a space-time diagram.
Part 1: moving away from Earth Fig. 1.9 shows the first part of the FTL travel. O is the "resting" observer who stays on Earth the whole time. Earth is subsequently regarded as an approximated
inertial frame. Strictly speaking, O would have to float in Earth's orbit, according to the definition in general relativity. Once again, however, it is important to say that the following
considerations don't need general relativity at all. I only refer to O as staying in an inertial frame so as to exclude any GR influence.
Fig. 1.9 Illustration of the twin paradox, moving away from Earth
The moving observer O* is supposed to travel at a speed of 0.6c relative to Earth and O. When O* passes by O (x=x*=0), they both set their clocks to zero (t=t*=0). So the origin of their space-time
diagrams is the same, and the time dilation will become apparent in the different times t* and t for simultaneous events. As outlined above, t* is sloped, as is x* (see also Fig. 1.3). The
measurement of time dilation works as outlined in Fig. 1.4. O's lines of simultaneity are parallel to his x-axis and perpendicular to his t-axis. He will see that 5 years on his t-axis correspond
with only 4 years on the t*-axis (red arrow), because the latter is stretched according to Eq. 1.7. Therefore O*'s clock is ticking slower from O's point-of-view. The other way round, O* draws lines
of simultaneity parallel to his sloped x*-axis and he reckons that O's clock is running slower, 4 years on his t*-axis compared to 3.2 years on the t-axis (green arrow). It is easy to see that the
mutual dilation is the same, since 5/4 equals 4/3.2. Who is correct? Answer: Both of them, since they are in different frames of reference, and they stay in these frames. The two observers just see
things differently; they wouldn't have to care whether their perception is "correct" and the other one is actually aging slower -- unless they wanted to meet again.
Part 2: resting in space Now let us assume that O* stops his starship when his clock shows t*=4 years, maybe to examine a phenomenon or to land on a planet. According to Fig. 1.10 he is now resting
in space relative to Earth and his new x**-t** coordinate system is parallel to the x-t system of O on Earth. O* is now in the same frame of reference as O. And this is exactly the point: O*'s clock
still shows 4 years, and he notices that not 3.2 years have elapsed on Earth as briefly before his stop, but 5 years, and this is exactly what O says too. Two observers in the same frame agree about
their clock readings. O* has been in a different frame of reference at 0.6c for 4 years of his time and 5 years of O's time. This difference becomes a permanent offset when O* enters O's frame of
reference. Paradox solved.
Fig. 1.10 Illustration of the twin paradox, landing or resting in space
It is obvious that the accumulative dilation effect will become the larger the longer the travel duration is. Note that O's clock has always been ticking slower in O*'s moving frame of reference. The
fact that O's clock nevertheless suddenly shows a much later time (namely 5 instead of 3.2 years) is solely attributed to the fact that O* is entering a frame of reference in which exactly these 5
years have elapsed.
Once again, it is crucial to annotate that the process of decelerating would only change the result quantitatively, since there could be no exact kink as O* changes from t* to t**. Practical deceleration is no sudden process, and the transition from t* to t** would be curved. Moreover, the deceleration itself would be connected with a time dilation according to GR, but the paradox is already solved without taking this into account.
Part 3: return to Earth Let us assume that at t*=4 years, O* suddenly gets homesick and turns around immediately instead of just resting in space. His relative speed is v=-0.6c during his travel back
to Earth, the minus sign indicating that he is heading to the negative x direction. It is obvious that this second part of his travel should be symmetrical to the first part at +0.6c in Fig. 1.9, the
symmetry axis being the clock comparison at t=5 years and t**=4 years. This is exactly the moment when O* has covered both half of his way and half of his time.
Fig. 1.11 Illustration of the twin paradox, return to Earth
Fig. 1.11 demonstrates what happens to O*'s clock comparison. Since he is changing his frame of reference from v=0.6c to v=-0.6c relative to Earth, the speed change and therefore the effect is twice
as large as in Fig. 1.10. Assuming that O* doesn't stop for a clock comparison as he did above, he would see that O's clock directly jumps from 3.2 years to 6.8 years. Following O*'s travel back to
Earth, we see that the end time is t**=8 years (O*'s clock) and t=10 years (O's clock). The traveling twin is actually two years younger at their family reunion.
We could imagine several other scenarios in which O might catch up with the traveling O*, so that O is actually the younger one. Alternatively, O* could stop in space, and O could do the same travel
as O*, so that they would be equally old when O reaches O*. The analysis of the twin paradox shows that the simple statement "moving observers age slower" is insufficient. The statement has to be modified to: "moving observers age slower as seen from any different frame of reference, and they notice it when they enter this frame themselves".
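The figures in the example follow directly from the time dilation factor. A short Python check of the 0.6c round trip described above:

```python
import math

# Twin paradox arithmetic: the traveling twin's elapsed time is the
# Earth time scaled by sqrt(1 - v^2/c^2) (the time dilation factor).
beta = 0.6            # travel speed as a fraction of c
t_earth = 10.0        # years elapsed on Earth for the whole round trip

t_traveler = t_earth * math.sqrt(1.0 - beta ** 2)
print(t_traveler)     # 8.0 -- the traveler returns two years younger
```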
As already stated further above, two observers in different frames of reference will disagree about the simultaneity of certain events (see Fig. 1.3). What if the same event were in one observer's
future, but in another observer's past when they meet each other? This is not a problem in special relativity, since no signal is allowed to travel faster than light. Any event that could be
theoretically influenced by one observer, but has already happened for the other one, is outside the light cone depicted in Fig. 1.8. Causality is preserved.
Fig. 1.12 depicts the space-time diagrams of two observers with a speed relative to each other. Let us assume the usual case that the moving observer O* passes by the resting observer O at t=t*=0.
They agree about the simultaneity of this passing event, but not about any other event at t ≠ 0 or t* ≠ 0. Event A is below the t-axis, but above the t*-axis. This doesn't matter as long as the two can
send and receive only STL signals. Event A is outside the light cone, and the argument would be as follows: A is in O*'s future (future in a wider sense), but he has no means of influencing it at t*=
0, since his signal couldn't reach it in time. A is in O's past (past in a wider sense), but it doesn't matter because he can't know of it at t=0.
What would be different if either FTL travel or FTL signal transfer were possible? In this case we would be allowed to draw signal paths of less than 45 degrees steepness in the space-time diagram.
Let us assume that O* is able to send an FTL signal to influence or to cause event A in the first place, just when the two observers pass each other. Note that this signal would travel at v>c in any
frame of reference, and that it would travel back in time in O's frame, since it runs into negative t-direction in O's orthogonal x-t coordinate system, to an event that is in O's past. If O* can
send an FTL signal to cause the event A, then a second FTL signal can be sent to O to inform him of A as soon as it has just happened. This signal would run at v>c in positive t-direction for O, but
in negative t*-direction for O*. So the situation is exactly inverse to the first FTL signal. Now O is able to receive a message from O*'s future.
Fig. 1.12 Illustration of a possible causality paradox
The paradox occurs when O, knowing about the future, decides to prevent A from happening. Maybe O* is a bad guy, and event A is the death of an unfortunate victim, killed because of his FTL message.
O would have enough time to hinder O*, to warn the victim or to take other precautions, since it is still t<0 when he receives the message, and O* has not yet caused event A.
The sequence of events (in logical rather than chronological order) would be as follows:
1. At t=t*=0, the two observers pass each other and O* sends an FTL message that causes A.
2. A happens in O*'s past (t*<0) and in O's future (t>0).
3. O learns about event A through another FTL signal, still at t<0, before he meets O*.
4. O might be able to prevent A from happening. However, how could O have learned about A if it actually never happened?
This is obviously the SR version of the well-known grandfather paradox. Note that these considerations don't take into account which method of FTL travel or FTL signal transfer is used. Within the
realm of special relativity, they should apply to any form of FTL travel. Anyway, if FTL travel is feasible, then it is much like time travel. It is not clear how this paradox can be resolved. The
basic suggestions are the same as for generic time travel and are outlined in my time travel article.
1.5 Other Obstacles to Interstellar Travel
Power considerations Rocket propulsion (as a generic term for any drive using accelerated particles) can be described by momentum conservation, resulting in the following simple equation:
Eq. 1.13:  m·dv = -w·dm
The left side represents the infinitesimal speed increase (acceleration) dv of the ship with a mass m, the right side is the mass decrease -dm of the ship if particles are expelled at a speed w.
This would result in a constant thrust and therefore in a constant acceleration, at least in the range of ship speeds much smaller than c. Eq. 1.13 can be integrated to show the relation between an
initial mass m[0], a final mass m[1] and a speed v[1] to be achieved:
Eq. 1.14:  m[1] = m[0] · e^(-v[1]/w)
The remaining mass m[1] at the end of the flight, the payload, is only a fraction of the total mass m[0], the rest is the necessary fuel. The achievable speed v[1] is limited by the speed w of the
accelerated particles, i.e. the principle of the drive, and by the fuel-to-payload ratio.
Let us assume a photon drive as the most advanced conventional propulsion technology, so that w would be equal to c, the speed of light. The fuel would be matter and antimatter in the ideal case,
yielding an efficiency near 100%, meaning that according to Eq. 1.14 almost the complete mass of the fuel could contribute to propulsion. Eq. 1.13 and Eq. 1.14 would remain valid, with w=c. If
relativistic effects are not yet taken into account, the payload could be as much as 60% of the total mass of the starship, if it's going to be accelerated to 0.5c. However, the mass increase at high
sublight speeds as given in Eq. 1.10 spoils the efficiency of any available propulsion system as soon as the speed gets close to c, since the same thrust will effect a smaller acceleration. STL
examples will be discussed in Section 1.7.
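The integrated rocket equation with w = c reproduces the payload figure quoted above. A quick Python check (non-relativistic, as in the text; the function name is my own):

```python
import math

# Integrated rocket equation (Eq. 1.14) for an ideal photon drive (w = c),
# without relativistic mass increase: m1/m0 = exp(-v1/w).
def payload_fraction(v1_over_c):
    """Remaining mass fraction after accelerating to v1 (in units of c)."""
    return math.exp(-v1_over_c)

print(f"{payload_fraction(0.5):.3f}")   # about 0.607, i.e. ~60% payload at 0.5c
# Accelerating AND decelerating again doubles the required delta-v:
print(f"{payload_fraction(1.0):.3f}")   # about 0.368
```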
Acceleration and Deceleration Eq. 1.13 shows that the achievable speed is limited by the momentum (speed and mass) of the accelerated particles, provided that a conventional rocket engine is used.
The requirements of such a drive, e.g. the photon drive outlined above, are that a considerable amount of particles has to be accelerated to a high speed at a satisfactory efficiency.
Even more restrictively, the human body simply couldn't sustain accelerations of much more than g=9.81ms^-2, which is the acceleration on Earth's surface. Accelerations of several g are taken into
account in aeronautics and astronautics only for short terms, with critical peak values of up to 20g. Unless something like Star Trek's IDF (inertial damping field) will be invented [Ste91], it is
probably the most realistic approach to assume a constant acceleration of g from the traveler's viewpoint during the whole journey. This would have the convenient side effect that an artificial
gravity equal to Earth's surface would be automatically created.
Fig. 1.13 Concept of the turn-around starship
According to Newton's first and second postulates it will be necessary to decelerate the starship as it approaches the destination. Thus, the starship needs a "brake". It wouldn't be very wise to
install a second, equally powerful engine at the front of the starship for this purpose. Moreover, the artificial gravity would act in the opposite direction during the deceleration phase in this
case. The alternative solution is simple: Half-way to the destination, the starship would be simply turned around by means of maneuvering thrusters so that it now decelerates at a rate of g; the
artificial gravity would remain exactly the same. Actually, complying with the equivalence principle of general relativity, if the travelers didn't look out of the windows or at their sensor
readings, they wouldn't even notice that the ship is now decelerating. Only during the turn-around, while the main engines are switched off, would the gravity change for a brief time. Fig. 1.13 depicts such a turn-around ship: "1." is the acceleration phase, "2." the turn-around, "3." the deceleration.
Doppler shift In the chapter on classical physics, the Doppler effect has been described as the frequency increase or decrease of a sound wave. The two cases of a moving source and moving observer
only have to be distinguished in case of an acoustic signal, because the speed of sound is constant relative to the air and therefore the observer would measure different signal speeds in the two
cases. Since the speed of light is constant, there is only one formula for Doppler shift of electromagnetic radiation, already taking into account SR time dilation:
Eq. 1.15:  f = f[0] · sqrt((1 - v/c) / (1 + v/c))
Note that Eq. 1.15 covers both cases of frequency increase (v and c in opposite directions, v/c<0) and frequency decrease (v and c in the same direction, v/c>0). Since the power of the radiation is
proportional to its frequency, the forward end will be subject to a higher radiation power and dose than the rear end, assuming isotropic (homogeneous) radiation.
Actually, for STL travel the Doppler shift is not exactly a problem. At 0.8c, for instance, the radiation power at the bow is three times the average of the isotropic radiation. Visible light would
"mutate" to UV radiation, but its intensity would still be far from dangerous. Only if v gets very close to c (or -c, to be precise), the situation could get critical for the space travelers, and an
additional shielding would be necessary. On the other hand, it's useless anyway to get as close to c as possible because of the mass increase. For v=-c, the Doppler shift would theoretically become infinite.
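The factor of three quoted for 0.8c follows directly from Eq. 1.15. A quick Python check (negative v/c means approaching, matching the sign convention in the text):

```python
import math

# Relativistic Doppler shift (Eq. 1.15): f = f0 * sqrt((1 - v/c) / (1 + v/c)).
def doppler_factor(v_over_c):
    """Frequency ratio f/f0; v/c < 0 for approach, v/c > 0 for recession."""
    return math.sqrt((1.0 - v_over_c) / (1.0 + v_over_c))

# Approaching head-on at 0.8c: frequency (and thus power) tripled at the bow.
print(doppler_factor(-0.8))   # 3.0
# Receding at 0.8c: frequency reduced to a third at the stern.
print(doppler_factor(0.8))
```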
It is not completely clear (but may become clear in one of the following chapters) how Doppler shift can be described for an FTL drive in general or warp propulsion in particular. It could be the conventional, non-relativistic Doppler shift that applies to warp drive, since mass increase and time dilation are not valid either. In this case the radiation frequency would simply increase to 1+|v/c| times the original frequency. This could be a considerable problem for high warp speeds, and it would require thick radiation shields and forbid forward windows.
1.6 General Relativity
As we can infer from the name, general relativity (GR) is a more comprehensive theory than special relativity. Although the theory as a whole has to be explained with heavy use of complex mathematics, the basic principles are quite evident and perhaps easier to understand than those of special relativity. General relativity takes into account the influence of the presence of a mass, in particular of the gravitational fields caused by this mass.
Inertial frames The chapter on special relativity assumed inertial frames of reference, that is, frames of reference in which there is no acceleration to the object under investigation. The first
thought might be that a person standing on Earth's surface should be in an inertial frame of reference, since he is not accelerated relative to Earth. This idea is wrong, according to GR. Earth's
gravity "spoils" the possible inertial frame. Although this is not exactly what we understand as "acceleration", there can't be an inertial frame on Earth's surface. Actually, we have to extend our
definition of an inertial frame.
Principle of equivalence Consider the rocket in the left half of Fig. 1.14 whose engines are powered somewhere in open space, far away from any star or planet. According to Newton's Second Law of
Motion (see 1.1 Classical Physics), if the engine force is constant the acceleration will be constant too. The thrust may be adjusted in a way that the acceleration is exactly g=9.81ms^-2, equal to
the acceleration in Earth's gravitational field. The passenger will then be able to stand on the rocket's bottom as if it were Earth's surface, since the floor of the rocket exerts exactly the same
force on him in both cases. Compare it to the right half of Fig. 1.14; the two situations are equivalent. "Heavy mass" and "inert mass" are actually the same.
One might object that there should still be many differences. Specifically one should expect that a physicist who is locked up in such a starship (without windows) should be able to find out whether
it is standing on Earth or accelerating in space. The surprising result of GR is that he will get exactly the same experimental results in both cases. Imagine that the rocket is quite long, and our
physicist sends out a laser beam from the rocket's bottom to its top. In the case of the accelerating rocket we would not be surprised that the frequency of the light beam decreases, since the
receiver would virtually move away from the source while the beam is on the way. This effect is the familiar Doppler shift. We wouldn't expect the light frequency (and therefore its intensity) to
decrease inside a stationary rocket too, but that's exactly what happens in Earth's gravitational field! The light beam has to "climb up" in the field, thereby losing energy, which becomes apparent
in a lower frequency. Obviously, as opposed to common belief so far, light is affected by gravity.
Fig. 1.14 Equivalence of a rocket standing on Earth and a rocket accelerated in space (no inertial frames)
Fig. 1.15 Equivalence of a free falling rocket and a rocket floating in space (inertial frames)
Let us have a look at Fig. 1.15. The left half shows a rocket floating in space, far away from any star or planet. No force acts upon the passenger, he is weightless. It's not only a balance of
forces, but all forces are actually zero. This is an inertial frame, or at least a very good approximation thereof. There can be obviously no perfect inertial frame as long as there is still a
certain mass present. Compare this to the depiction of the free falling starship in the right half of Fig. 1.15. Both the rocket and the passenger are attracted with the same acceleration a=g.
Although there is acceleration, this is an inertial frame too, and it is equivalent to the floating rocket. The point is that in both cases the inside of the rocket is an inertial frame, since the
ship and passenger don't exert any force/acceleration on each other. Much the same applies to a parabolic flight or to a ship in orbit, during which the passengers are weightless.
We might have found an inertial frame also in the presence of a mass, but we have to keep in mind that this can be only an approximation. Consider a very long ship falling down to Earth. The
passenger in the rocket's top would experience a smaller acceleration than the rocket's bottom and would have the impression that the bottom is accelerated with respect to himself. Similarly, in a
very wide rocket (it may be the same one, only turned by 90 degrees), two people at either end would see that the other one is accelerated towards him. This is because they would fall in slightly
different radial directions to the center of mass. None of these observations would be allowed within an inertial frame. Therefore, we are only able to define local inertial frames.
Time dilation As already mentioned above, there is a time dilation in general relativity, because light will gain or lose potential energy when it is moving farther away from or closer to a center of
mass, respectively. The time dilation depends on the gravitational potential as given in Eq. 1.2 and amounts to:
Eq. 1.16:  t* = t · sqrt(1 - 2GM/(r·c^2))
G is Newton's gravitational constant, M is the planet's mass, and r is the distance from the center of mass. Eq. 1.16 can be approximated in the direct vicinity of the planet using Eq. 1.3:
Eq. 1.17:  t* ≈ t · (1 - g·h/c^2)
In both equations t* is the time elapsing on the surface, while t is the time in a height h above the surface, with g being the standard acceleration. The time t* is always shorter than t so that,
theoretically, people living on the sea level age slower than those in the mountains. The time dilation has been measured on normal plane flights and summed up to 52.8 nanoseconds when the clock on
the plane was compared to the reference clock on the surface after 40 hours [Sex87]. 5.7 nanoseconds had to be subtracted from this result, since they were attributed to the time dilation of
relativistic movements that was discussed in the chapter about special relativity.
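The order of magnitude of such measurements can be estimated with the approximation of Eq. 1.17. The altitude and flight profile below are illustrative assumptions of mine, not the data of the actual experiment:

```python
# Gravitational time dilation near the surface (Eq. 1.17):
# a clock at height h runs fast by roughly g*h/c^2 per unit of elapsed time.
g = 9.81                   # m/s^2, standard acceleration
c = 299_792_458.0          # m/s
h = 10_000.0               # assumed constant cruising altitude in m
flight_time = 40 * 3600.0  # 40 hours in seconds

gain = g * h / c ** 2 * flight_time
print(f"clock gain: {gain * 1e9:.0f} ns")   # on the order of 100 ns
# The 52.8 ns quoted above resulted from the actual, non-constant
# flight profile of the experiment.
```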
Curved space The time dilation in the above paragraph goes along with a length contraction in the vicinity of a mass:
Eq. 1.18:  x* = x · sqrt(1 - 2GM/(r·c^2))
However, length contraction is not a commonly used concept in GR. The equivalent idea of a curved space is usually preferred. Since it is obviously impossible to illustrate the distortions of a four-dimensional space-time, we have to restrict our considerations to two spatial dimensions. Imagine two-dimensional creatures living on an even plastic foil. The creatures might have developed a
plane geometry that allows them to calculate distances and angles. Now someone from the three-dimensional world bends and stretches the plastic foil. The two-dimensional creatures will be very
confused about it, since their whole knowledge of geometry doesn't seem to be correct anymore. They might apply correction factors in certain areas, compensating for points that are measured as being
closer together or farther away from each other than their calculations indicate. Alternatively, a very smart two-dimensional scientist might come up with the idea that their area is actually not
flat but warped. This is essentially what general relativity says about our four-dimensional space-time.
Fig. 1.16 Illustration of curved space
Fig. 1.16 is limited to the two spatial dimensions x and y. It can be regarded as something like a "cross-section" of the actual spatial distortion. We can imagine that the center of mass is
somewhere in the middle "underneath" the x-y plane, where the curvature is most pronounced (a "gravity well").
Speed of light A light beam passing by an area with strong gravity such as a star will not be "straight", at least it will not appear straight as seen from flat space. Using an exact mathematical
description of curved space, the light beam will follow a geodesic. In a more illustrative picture, the beam is deflected by a certain angle by the star's mass. The first reliable
measurements were performed during a total solar eclipse in 1919 [Sex87]. They showed that the apparent positions of stars whose light was passing the darkened sun were farther away from the sun than
the "real" positions measured in the night. It is possible to calculate the deflection angle assuming that light consists of particles and using the Newtonian theory of gravitation, however, this
accounts for only half the measured value.
There is another effect involved that can only be explained with general relativity. As is the case in materials with different refraction indices, light will "avoid" regions in which its apparent
speed is reduced. A "detour" may therefore become a "shortcut". This is what happens in the vicinity of a star and what is responsible for the other 50% of the light deflection.
Now we can see the relation of time dilation, length contraction, the geometry of space and the speed of light. A light beam would have a definite speed c=r/t in the "flat" space in some distance
from the center of mass. Closer to the center, space itself is "curved", and this again is equivalent to the effect that everything coming from outside would apparently "shrink". A ruler with a
length r would appear shortened to r*. Since the time t* is shortened with respect to t by the same factor, c=r/t=r*/t* remains constant. This is what an observer inside the gravity well would
measure, and what the external observer would confirm. On the other hand, the external observer would see that the light inside the gravity well takes a detour (judging from his geometry of flat
space) *or* would pass a smaller effective distance in the same time (regarding the shortened ruler) *and* the light beam would be additionally slowed down because of the time dilation. Thus, he
would measure that the light beam actually needs a longer time to pass by the gravity well than t=r/c. If he is sure about the distance r, then the effective c* inside the gravity well must be
Eq. 1.19:  c* = c · (1 - 2GM/(r·c^2))
It was confirmed experimentally that a radar signal between Earth and Venus takes a longer time than the distance between the planets indicates if it passes by close to the sun.
Black holes Let us have a look at Eq. 1.16 and Eq. 1.18 again. Obviously something strange happens at r=2GM/c^2. The time t* becomes zero (and the time dilation infinite), and the length x* is
contracted to zero. This is the Schwarzschild radius or event horizon, a quantity that turns up in several other equations of GR. A collapsing star whose radius shrinks below this event horizon will
become a black hole. Specifically, the space-time inside the event horizon is curved in a way that every particle will inevitably fall into its center. It is unknown how dense the matter in the
center of a black hole is actually compressed. The mere equations indicate that the laws of physics as we know them wouldn't be valid any more (singularity). On the other hand, it doesn't matter to
the outside world what is going on inside the black hole, since it will never be possible to observe it.
Fig. 1.17 Sequence of events as a person enters a black hole
The sequence of events as a starship passenger approaches the event horizon is illustrated in Fig. 1.17. The lower left corner depicts what an external observer would see, the upper right corner
shows the perception of the person who falls into the black hole. Entering the event horizon, he would get a distorted view of the outside world at first. However, while falling towards the center,
the starship and its passenger would be virtually stretched and finally torn apart because of the strong gravitational force gradient. An external observer outside the event horizon would perceive
the starship and its passenger move slower the closer they get to the event horizon, corresponding to a time dilatation. Eventually, they would virtually seem to stand still exactly on the edge of
the event horizon. He would never see them actually enter the black hole. By the way, this is also a reason why a black hole can never appear completely "black". Depending on its age, the black hole
will still emit a certain amount of (red-shifted) radiation, aside from the Hawking radiation generated because of quantum fluctuations at its edge.
1.7 Examples of Relativistic Travel
A trip to Proxima Centauri As already mentioned in the introduction, it is essential to overcome the limitations of special relativity to allow sci-fi stories to take place in interstellar space.
Otherwise the required travel times would exceed a person's lifespan by far.
Side note Several of the equations and examples in this sub-chapter are taken from [Ger89].
Let us assume a starship with a very advanced yet slower-than-light (STL) drive were to reach Proxima Centauri, about 4ly away from Earth. This would impose an absolute lower limit of 4 years on
the one-way travel. However, considering the drastic increase of mass as the ship approaches c, an enormous amount of energy would be necessary. Moreover, we have to take into account a limited
engine power and the limited ability of humans to cope with excessive accelerations. A realistic STL travel to a nearby star system could work with a turn-around starship as shown in Fig. 1.13 which
will be assumed in the following.
To describe the acceleration phase as observed from Earth's frame of reference, the simple relation v=gt for non-relativistic movements has to be modified as follows:
Eq. 1.20: v = gt / sqrt(1 + (gt/c)^2)
Side note It's not surprising that the above formula for the effective acceleration is also determined by the factor gamma, yet, it requires a separate derivation that I don't further explain to
keep this chapter brief.
The relativistic and (hypothetical) non-relativistic speeds at a constant acceleration of g=9.81ms^-2 are plotted over time in Fig. 1.18. It would take 409 days to achieve 0.5c and 2509 days to
achieve 0.99c at a constant acceleration of g. It is obviously not worth while extending the acceleration phase far beyond 0.5c where the curves begin to considerably diverge. It would consume six
times the fuel to achieve 0.99c instead of 0.5c, considering that the engines would have to work at a constant power output all the time, while the benefit of covering a greater distance wouldn't be
that significant.
Fig. 1.18 Comparison of Newtonian and relativistic movement
To obtain the covered distance x after a certain time t, the speed v as given in Eq. 1.20 has to be integrated over time:
Eq. 1.21: x = (c^2/g)(sqrt(1 + (gt/c)^2) - 1)
Side note Note that the variable tau instead of t is only used to keep the integration consistent and satisfy mathematicians ;-), since t couldn't denote both the variable and the constant.
There are two special cases which also become obvious in Fig. 1.18: For small speeds gt<<c Eq. 1.21 becomes the Newtonian formula for accelerated movement x=½gt^2. Therefore the two curves for
non-relativistic and relativistic distances are almost identical during the first few months (and v<0.5c). If the theoretical non-relativistic speed gt exceeds c (which would be the case after
several years of acceleration), the formula may be approximated with the simple linear relation x=ct, and the according graph is a straight line. This is evident, since we can assume the ship has
actually reached a speed close to c, and the effective acceleration is marginal. A distance of one light-year would then be bridged in slightly more than a year.
If the acceleration is suspended at 0.5c after the aforementioned 409 days, the distance would be 4.84 trillion km which is 0.51ly. With a constant speed of 0.5c for another 2509-409=2100 days the
ship would cover another 2.87ly, so the total distance would be 3.38ly. On the other hand, after an additional 2100 days of acceleration to 0.99c our ship has bridged 56.5 trillion km, or 5.97ly. As we
could expect, the constant acceleration to twice the speed is not as efficient as in the deep-sublight region where it should have doubled the covered distance.
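These figures can be checked numerically. The sketch below (Python, not part of the original article) assumes the standard formulas for motion under constant proper acceleration, v = gt/sqrt(1+(gt/c)^2) and x = (c^2/g)(sqrt(1+(gt/c)^2) - 1), consistent with Eq. 1.20 and Eq. 1.21; it reproduces the 0.51ly and 0.99c figures quoted above:

```python
import math

C = 2.998e8                 # speed of light [m/s]
G_ACC = 9.81                # constant proper acceleration [m/s^2]
DAY = 86400.0
LY = C * 365.25 * DAY       # one light-year [m]

def coord_speed(t):
    # ship speed in Earth's frame after Earth time t (assumed form of Eq. 1.20)
    return G_ACC * t / math.sqrt(1.0 + (G_ACC * t / C) ** 2)

def coord_distance(t):
    # distance in Earth's frame after Earth time t (assumed form of Eq. 1.21)
    return (C ** 2 / G_ACC) * (math.sqrt(1.0 + (G_ACC * t / C) ** 2) - 1.0)

# the article's figures: about 0.51 ly after 409 days, about 0.99c after 2509 days
dist_409_days_ly = coord_distance(409 * DAY) / LY
beta_2509_days = coord_speed(2509 * DAY) / C
```

For small gt the distance reduces to the Newtonian ½gt^2, and the speed never reaches c, exactly as the two special cases in the text describe.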
A maximum speed of no more than 0.5c seems useful, at least for "close" destinations such as Proxima Centauri. With an acceleration of g=9.81ms^-2 the flight plan could look as follows:
Flight to Proxima Centauri Speed Distance Earth time Ship time
Acceleration @ g 0 to 0.5c 0.51ly 1.12 years 0.96 years
Constant speed 0.5c 2.98ly 5.96 years 5.12 years
Deceleration @ -g 0.5c to 0 0.51ly 1.12 years 0.96 years
Total - 4ly 8.20 years 7.04 years
Tab. 1.1 Flight plan for an STL trip to Proxima Centauri
The table already includes a correction of an "error" in the above considerations which referred to the time t as it elapses on Earth. The solution of the twin paradox revealed that the space
traveler who leaves Earth at a certain speed and stops at the destination changes their frame of reference each time, no matter whether or not we take into account the effects of the acceleration
phases. This is why the special relativistic time dilatation will become asymmetric. As the space traveler returns to Earth's frame of reference -either by returning to Earth or by landing on Proxima
Centauri which can be supposed to be roughly in the same frame of reference as Earth- he will have aged less than his twin on Earth. During his flight his ship time t* elapses slower than the time t
in Earth's frame of reference:
Eq. 1.22: t* = t·sqrt(1 - v^2/c^2)
Eq. 1.22 is valid if the speed v is constant. During his constant speed period of 5.96 years in Earth's frame of reference the space traveler's clock would proceed by 5.12 years. In case of an
acceleration or deceleration we have to switch to infinitesimal time periods dt and dt* and replace the constant velocity v with v(t) as in Eq. 1.20. This modified equation has to be integrated over
the Earth time t to obtain t*:
Eq. 1.23: t* = (c/g)·arsinh(gt/c)
Side note The function arsinh is called "area sinus hyperbolicus".
The space traveler would experience 0.96 years during the acceleration as well as the deceleration. The times are summarized in Tab. 1.1, yielding a total time experienced by the ship passenger of
7.04 years, as opposed to 8.20 years in the "resting" frame of reference on Earth.
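A small Python cross-check of Tab. 1.1, again assuming the standard forms consistent with Eq. 1.22 and Eq. 1.23 (not taken from the article itself). It matches the table to within the article's rounding, e.g. it yields about 5.16 rather than 5.12 coast years and about 7.07 total ship years:

```python
import math

C = 2.998e8
G_ACC = 9.81
YEAR = 365.25 * 86400.0

def ship_time_accel(t_earth):
    # proper time during an acceleration phase at g (arsinh form of Eq. 1.23)
    return (C / G_ACC) * math.asinh(G_ACC * t_earth / C)

def ship_time_coast(t_earth, v):
    # proper time while coasting at constant speed v (form of Eq. 1.22)
    return t_earth * math.sqrt(1.0 - (v / C) ** 2)

accel_years = ship_time_accel(1.12 * YEAR) / YEAR           # ~0.96 ship years
coast_years = ship_time_coast(5.96 * YEAR, 0.5 * C) / YEAR  # ~5.1 ship years
total_ship_years = 2 * accel_years + coast_years            # ~7 ship years
```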
Traveling to the edge of the universe The prospect of slowing down time as the ship approaches c offers fascinating possibilities of space travel even without FTL drive. The question is how far an
STL starship could travel within a passenger's lifetime, assuming a constant acceleration of g=9.81ms^-2 all the time. Provided there is a starship with virtually unlimited fuel, the following theory
would have to be proven: If the space traveler continued acceleration for many years, his speed would very slowly approach, but never exceed c, if observed from Earth. This wouldn't take him very far
in his lifetime. However, according to Eq. 1.23 time slows down more and more, and this is the decisive effect. We might want to correct the above equations with the slower ship time t* instead of
the Earth time t. We obtain the ship speed v* and distance x* if we apply Eq. 1.23 to Eq. 1.20 and Eq. 1.21, respectively.
Eq. 1.24: v* = c·tanh(gt*/c)
Eq. 1.25: x* = (c^2/g)(cosh(gt*/c) - 1)
Note that the term tanh(gt*/c) is always smaller than 1, so that the measured speed always remains slower than c. On the other hand, x* may rise to literally astronomical values. Fig. 1.19 depicts
the conjectural travel to the edge of the universe, roughly 10 billion light-years away, which could be accomplished in only 25 ship years! The traveler could even return to Earth which would require
another 25 years; but there probably wouldn't be much left of Earth since the time elapsed in Earth's frame of reference would sum up to 10 billion years, obviously the same figure as the bridged
distance in light-years.
Fig. 1.19 Plan for an STL trip to the edge of the universe
Side note Apart from the objection that there would hardly be unlimited fuel for the travel, the above considerations assume a static universe. The real universe would further expand, and the traveler could never reach its edge, which is probably moving at light speed.
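The same kind of numerical cross-check works here (a Python sketch assuming the cosh form consistent with Eq. 1.25, not taken from the article). The bridged distance grows exponentially with ship time; with these constants the 10-billion-light-year mark is passed after roughly 23 ship years, and by 25 ship years the distance is already far beyond it, in line with the article's figures:

```python
import math

C = 2.998e8
G_ACC = 9.81
YEAR = 365.25 * 86400.0
LY = C * YEAR

def bridged_distance(t_ship):
    # distance covered after proper (ship) time t_ship at constant
    # acceleration g (assumed cosh form of Eq. 1.25)
    return (C * C / G_ACC) * (math.cosh(G_ACC * t_ship / C) - 1.0)

dist_23y = bridged_distance(23 * YEAR) / LY   # roughly 10 billion ly
dist_25y = bridged_distance(25 * YEAR) / LY   # well beyond 10 billion ly
```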
Fuel problems The non-relativistic relation of thrust and speed was discussed in Section 1.5. If we take into account relativistic effects, we see that at a constant thrust the effective acceleration
will continually decrease to zero as the speed approaches c. The simple relation v=gt is not valid anymore and has to be replaced with Eq. 1.20. Thus, we have to rewrite the fuel equation as follows:
Eq. 1.26
The two masses m[0] and m[1] still denote non-relativistic rest masses of the ship before and after the acceleration, respectively. Achieving v[1]=0.5c would require not much more fuel than in the
non-relativistic case, the payload could still be 56% of the total mass compared to 60%. This would be possible, provided that a matter/antimatter power source is available and the power conversion
efficiency is 100%. If the aspired speed were 0.99c, the ship would have an unrealistic fuel share of 97%. The flight to the edge of the universe (24 ship years at a constant apparent acceleration of
g) would require a fuel mass of 56 billion times the payload which is beyond all reasonable limitations, of course.
If we assume that the ship first accelerates to 0.5c and then decelerates to zero on the flight to Proxima Centauri, we will get a still higher fuel share. Considering that Eq. 1.26 only describes the
acceleration phase, the deceleration would have to start at a mass of m[1], and end at a still smaller mass of m[2]. Taking into account both phases, we will easily see that the two mass factors have
to be multiplied:
Eq. 1.27
This would mean that the payload without refueling could be only 31% for v[1]=0.5c. For v[1]=0.99c the ship would consist of virtually nothing but fuel. Just for fun, flying to the edge of the
universe and landing somewhere out there would need a fuel of 3*10^21 tons, if the payload is one ton (Earth's mass: 6*10^21t).
Proceed to next chapter
Back to index page | {"url":"http://www.ex-astris-scientia.org/treknology/warp1.htm","timestamp":"2014-04-16T15:59:53Z","content_type":null,"content_length":"89244","record_id":"<urn:uuid:d6b67f43-86fe-49b5-a418-9f874cfa5c42>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding the Solution Set for an Inequality
1. I need to find the solution set for |(3x+2)/(x+3)|>3.
3. When I solve the inequality (3x+2)/(x+3) > 3, I get 2 > 9, which is clearly false. When I solve the inequality (3x+2)/(x+3) > -3, I come up with the solution set (-inf, -11/6). My teacher is saying that the solution set is (-inf, -3) U (-3, -11/6).
I just can't figure out how to get to that solution. I can't figure out where that -3 is coming from. In his sparse notes on my assignment, he says there are two subcases for each of the two cases in
number 1. Those are when x < -3 and when x > -3. I just can't figure out how to use these cases.
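Not a proof, but a quick numeric spot-check of the teacher's claimed solution set (-inf, -3) U (-3, -11/6) can be done in Python (the sample points below are my own choices):

```python
def satisfies(x):
    # True when |(3x + 2) / (x + 3)| > 3; x = -3 is outside the domain
    return abs((3 * x + 2) / (x + 3)) > 3

# points inside the claimed solution set (-inf, -3) U (-3, -11/6)
inside = [-100, -4, -2.9, -2]
# points outside it (to the right of -11/6, which is about -1.83)
outside = [-1.8, -1.5, 0, 100]
```

The ratio blows up near x = -3 from both sides (the denominator vanishes there), which is where the extra boundary at -3 comes from.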
The ALOG function returns the natural logarithm of X.
For input of a complex number, Z = X + iY, the complex number can be rewritten as Z = R exp(iθ), where R = abs(Z) and θ = atan(y,x). The complex natural log is then given by,
alog(Z) = alog(R) + iθ
In the above formula, the use of the two-argument arctangent separates the solutions at Y = 0 and takes into account the branch-cut discontinuity along the real axis from -∞ to 0, and ensures that
exp(alog(Z)) is equal to Z [1].
Example 1
Find the natural logarithm of 2 and print the result by entering:
PRINT, ALOG(2)
IDL prints:
0.693147
Example 2
Find the complex natural log of sqrt(2) + i sqrt(2) and print the result by entering:
PRINT, ALOG(COMPLEX(sqrt(2), sqrt(2)))
IDL prints:
( 0.693147, 0.785398)
Note: The real part of the result is just ALOG(2) and the imaginary part gives the angle (in radians) of the complex number relative to the real axis.
See the ATAN function for an example of visualizing the complex natural log.
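For readers without IDL: Python's cmath.log follows the same convention (log of the modulus plus i times the two-argument arctangent), so Example 2 can be reproduced as a cross-check. This is only an illustration, not part of the IDL documentation:

```python
import cmath
import math

# the complex number from Example 2: sqrt(2) + i*sqrt(2)
z = complex(math.sqrt(2), math.sqrt(2))
w = cmath.log(z)           # real part ln|z| = ln 2, imaginary part pi/4
round_trip = cmath.exp(w)  # exp(log(z)) should recover z
```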
Example 3
Find the decay rate of tritium given its half-life (t[1/2]) of 12.32 years and the decay rate = -(ln(2))/t[1/2] :
PRINT, -(ALOG(2))/12.32
IDL prints:
Result = ALOG(X)
Return Value
Returns the natural logarithm of X.
The value for which the natural log is desired. For real input, X should be greater than or equal to zero. If X is double-precision floating or complex, the result is of the same type. All other
types are converted to single-precision floating-point and yield floating-point results. If X is an array, the result has the same structure, with each element containing the natural log of the
corresponding element of X.
Thread Pool Keywords
This routine is written to make use of IDL’s thread pool, which can increase execution speed on systems with multiple CPUs. The values stored in the !CPU system variable control whether IDL uses the
thread pool for a given computation. In addition, you can use the thread pool keywords TPOOL_MAX_ELTS, TPOOL_MIN_ELTS, and TPOOL_NOTHREAD to override the defaults established by !CPU for a single
invocation of this routine. See Thread Pool Keywords for details.
Version History
See Also
ALOG10, ATAN
Resources and References
1. See formulas 4.4.1-3 in Abramowitz, M. and Stegun, I.A., 1964: Handbook of Mathematical Functions (Washington: National Bureau of Standards).
How can I find the location of the pixels between adjacent regions.
2 Answers
Dear all, I segmented an image using watershed and got the Label matrix. I've already found the neighbors of each region using graycomatrix. My problem is that I would like to find the location of the pixels between each pair of adjacent regions. Using bwboundaries and bwperim gives all the pixel locations of each region. I just want the locations of the pixels situated between each pair of adjacent regions. Thanks
As you can imagine, there will be some ambiguities in some places, like when three regions meet. However, that will be just at a few pixels. I'd recommend doing two for loops to find where every
region overlaps every other region. Then dilate each region and AND them to find the overlap pixels. Run this demo:
clc; % Clear command window.
clear; % Delete all variables.
close all; % Close all figure windows except those created by imtool.
imtool close all; % Close all figure windows created by imtool.
workspace; % Make sure the workspace panel is showing.
fontSize = 15;
% 1.
% Make a binary image containing two overlapping circular objects.
center1 = -10;
center2 = -center1;
dist = sqrt(2*(2*center1)^2);
radius = dist/2 * 1.4;
lims = [floor(center1-1.2*radius) ceil(center2+1.2*radius)];
[x,y] = meshgrid(lims(1):lims(2));
bw1 = sqrt((x-center1).^2 + (y-center1).^2) <= radius;
bw2 = sqrt((x-center2).^2 + (y-center2).^2) <= radius;
bw = bw1 | bw2;
% Enlarge figure to full screen.
set(gcf, 'units','normalized','outerposition',[0 0 1 1]);
% Give a name to the title bar.
set(gcf,'name','Demo by ImageAnalyst','numbertitle','off');
subplot(2, 2, 1);
imshow(bw);
title('bw', 'FontSize', fontSize);
% 2.
% Compute the distance transform of the complement of the binary image.
D = bwdist(~bw);
subplot(2, 2, 2);
imshow(D, []);
title('Distance transform of ~bw', 'FontSize', fontSize)
% 3.
% Complement the distance transform, and force pixels that don't belong to the objects to be at -Inf.
D = -D;
D(~bw) = -Inf;
% 4.
% Compute the watershed transform and display the resulting label matrix as an RGB images.
L = watershed(D);
rgb = label2rgb(L,'jet',[.5 .5 .5]);
subplot(2, 2, 3);
imshow(rgb);
title('Watershed transform of D', 'FontSize', fontSize)
% Code above was from the watershed demo, with some changes by Image Analyst.
% Code below is from Image Analyst.
numberOfRegions = max(L(:));
% Enlarge figure to full screen.
set(gcf, 'units','normalized','outerposition',[0 0 1 1]);
% Give a name to the title bar.
set(gcf,'name','Demo by ImageAnalyst','numbertitle','off')
for r1 = 1 : numberOfRegions
% Get the region
region1 = ismember(L, r1);
% Dilate it.
region1Dilated = imdilate(region1, true(3));
for r2 = r1 + 1 : numberOfRegions
% Get the region
region2 = ismember(L, r2);
% Dilate it.
region2Dilated = imdilate(region2, true(3));
% Find the intersection
intersectionRegions = region1Dilated & region2Dilated;
% Display everything.
subplot(2, 3, 1);
imshow(region1);
title('Region1', 'FontSize', fontSize);
subplot(2, 3, 4);
imshow(region1Dilated);
title('Region1 Dilated', 'FontSize', fontSize);
subplot(2, 3, 2);
imshow(region2);
title('Region2', 'FontSize', fontSize);
subplot(2, 3, 5);
imshow(region2Dilated);
title('Region2 Dilated', 'FontSize', fontSize);
subplot(2, 3, 6);
imshow(intersectionRegions);
title('intersectionRegions', 'FontSize', fontSize);
end
end
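The dilate-and-AND idea in the demo above, stripped down to a pure-Python toy (my own sketch on a hypothetical 3x4 label matrix, just to make the logic explicit):

```python
# label matrix for three touching regions (toy stand-in for watershed output)
L = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 2, 2],
]
ROWS, COLS = len(L), len(L[0])

def pixels(label):
    # coordinates of all pixels carrying the given label
    return {(r, c) for r, row in enumerate(L)
                   for c, v in enumerate(row) if v == label}

def dilate(px):
    # grow a pixel set by its 8-neighbourhood, like imdilate(..., true(3))
    return {(r + dr, c + dc)
            for (r, c) in px
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if 0 <= r + dr < ROWS and 0 <= c + dc < COLS}

def border_between(a, b):
    # pixels where the two dilated regions overlap, i.e. the shared frontier
    return dilate(pixels(a)) & dilate(pixels(b))
```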
Edited by Matt J on 23 Nov 2012
Use intersect(...,'rows') on the data returned from bwboundaries to get the intersection of the sets of boundary pixels for 2 regions.
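A toy illustration of this suggestion (a Python stand-in for MATLAB's intersect(..., 'rows'); the boundary coordinates below are made up):

```python
# bwboundaries-style output in miniature: each boundary is a list of
# (row, col) pairs (hypothetical toy coordinates, not from a real image)
boundary1 = [(0, 0), (0, 1), (0, 2), (1, 2), (1, 0)]
boundary2 = [(0, 2), (1, 2), (2, 2), (2, 3)]

# intersect(..., 'rows') amounts to a set intersection of coordinate pairs
shared = sorted(set(boundary1) & set(boundary2))
```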
How many Charm packs for a queen size simple block quilt?
December 7th, 2010, 01:23 PM #1
Junior Member
Fabric Fanatic
Hello everyone
I would like to know how many Charm packs (42 5" squares) I would need to make a queen size quilt. Or a king size.
Thank you! And I'm sorry if this has been asked before!
Ps. I'm trying to figure this out so I can give my husband my Christmas wish list! LOL!
My math stinks but I will check with other people and see what they say. Or you could go to "Ask the MSQC girls" and ask them. They own the shop. Meanwhile I'll check around.
Hello and welcome to the forum!
I just googled 'how many 5" squares to make a queen size quilt' and got a few different answers.
One site said that a queen size quilt is 60x80". That seemed a tad small for me. Another said they are 86x88". That might be correct. Then another said 90x108" which is counting in extra to use
for tucking in and for covering oversized pillows. I would go with the 90x108".
So to get 90" across you would need 90 divided by 5 is 18 blocks across?
To get 108" you would need 108 divided by 5 is 22 blocks across so -
18 x 22 is 396 - wow that's alot of blocks, but then again, they are only 5" big.
So - if you are using charm packs - let's see - there are 42 in a pack so 396 divided by 42 is 9 packs to get to 378 blocks so you would need 10 packs with some left over - that sounds like alot
of charm packs.
Maybe someone would like to double check this to make sure I am right?
What if you woke up today and the only things you had were the things you thanked God for yesterday?
Hi Trish, I think you are right because it takes 5 packs for a twin and a king is 2 twins. So to be safe, 10 packs is good.
Standard QUEEN-sized mattress measures 60" x 80."
Including a 12" drop on the sides and foot brings the queen sized quilt to 84" x 92."
The finished size of your squares will be 4.5".
84" divided by 4.5" rounds up to 19 squares across for the width of the quilt.
92" divided by 4.5" rounds up to 21 squares down for the length of the quilt.
19 squares for the width times 21 squares for the length = 399 squares.
399 squares divided by 42 squares per pack - YOU WILL NEED 10 PACKS
Standard KING-sized mattress measures 76" x 80."
Including a 12" drop on the sides and foot brings the king sized quilt to 100" x 92."
100" divided by 4.5" - rounds up to 23 squares
92" divided by 4.5" - rounds up to 21 squares
23 X 21 = 483 squares divided by 42 squares per pack - YOU WILL NEED 12 PACKS
If you have a California King bed you will need more. Just follow my lead on the math to figure out how many more packs you would need.
You can always add borders to make the quilt longer on the sides if 12" isn't enough. Some mattress are taller than others these days.
You might want to double check with someone else to make sure I'm right. Good luck!
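The arithmetic in this post can be wrapped up as a small calculator (a Python sketch; the 4.5" finished square, the 12" drop and the 42-square pack size are taken from the post above):

```python
import math

def blocks_and_packs(mattress_w, mattress_l, drop=12.0,
                     cut=5.0, seam=0.25, per_pack=42):
    finished = cut - 2 * seam            # a 5" charm finishes at 4.5"
    quilt_w = mattress_w + 2 * drop      # drop on both sides
    quilt_l = mattress_l + drop          # drop at the foot only
    across = math.ceil(quilt_w / finished)
    down = math.ceil(quilt_l / finished)
    blocks = across * down
    return blocks, math.ceil(blocks / per_pack)

queen = blocks_and_packs(60, 80)   # (399 blocks, 10 packs)
king = blocks_and_packs(76, 80)    # (483 blocks, 12 packs)
```

The same function answers the "how many more packs for a California King" question: just plug in the mattress size.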
I know who I'm going to with a math problem next time, lol!
PS - I think like you Trish; Google is my encyclopedia.
Sewing mends the soul.
Come by and visit my blog:
Do the math; count your blessings
Laughing is good exercise. It's like jogging on the inside.
Unless we are creating we are not fully alive
~ Madeleine L'Engle
OMG! Thank you ladies so much!
I have a queen size bed
How many charm packs, 42 each 10" squares, would I need for my queen size bed with a 12" drop on the sides and bottom?
I promise I wont ask anymore questions! LOL!
Thank you
Last edited by jenbuggy85; December 8th, 2010 at 12:05 AM.
You mean you want to use a Layer Cake instead of charms? Each layer cake square = 4 charms.
Here you go ladies! Don't know how accurate this is but I found this on Ebay under their free guides:
HOW MANY BLOCKS DO I NEED TO MAKE MY QUILT??
10" BLOCKS
We are providing quilters with these handy charts of some of the more popular block sizes in hopes that it will provide guidance and the information necessary to purchase or make the correct number of blocks. There are so many beautiful block kits available on Ebay today but many quilters are uncertain of how many blocks they will need to purchase in order to make their quilt.
The number of blocks in our chart below are based on the mattress size only.
Add extra blocks or borders for drop at sides of mattress. Happy Quilting!
CRIB size = 30" x 40"
12 Ten Inch Blocks Needed
Layout = 3 x 4
TWIN size = 40" x 80"
32 Ten Inch Blocks Needed
Layout = 4 x 8
FULL Size = 60" x 80"
48 Ten Inch Blocks Needed
Layout = 6 x 8
QUEEN Size = 60" x 80"
48 Ten Inch Blocks Needed
Layout = 6 x 8
KING Size = 80" x 80"
64 Ten Inch Blocks Needed
Layout = 8 x 8
8" BLOCKS
8" blocks are such a common size now with so many precut quilt kits available today.
Knowing how many you need for your desired size quilt can be very confusing.
Hopefully the chart below will give you some guidance and get rid of a lot of the confusion.
The number of blocks in our chart below are based on the mattress size only.
Add extra blocks or borders for drop at sides of mattress. Happy Quilting!
CRIB size = 24" x 48"
18 Eight Inch Blocks
Layout = 3 x 6
TWIN size = 40" x 80"
50 Eight Inch Blocks
Layout = 5 x 10
FULL Size = 56" x 80"
70 Eight Inch Blocks
Layout = 7 x 10
QUEEN Size = 64" x 80"
80 Eight Inch Blocks
Layout = 8 x 10
KING Size = 80" x 80"
100 Eight Inch Blocks
Layout = 10 x 10
6" BLOCKS
CRIB size = 24" x 48"
32 Six Inch Blocks Needed
Layout = 4 x 8
TWIN size = 40" x 80"
91 Six Inch Blocks Needed
Layout = 7 x 13
FULL Size = 56" x 80"
117 Six Inch Blocks Needed
Layout = 9 x 13
QUEEN Size = 64" x 80"
130 Six Inch Blocks Needed
Layout = 10 x 13
KING Size = 80" x 80"
182 Six Inch Blocks Needed
Layout = 13 x 14
How many 5" charms do I need to complete my quilt?
CRIB size = 25" x 45"
45 Five Inch Blocks Needed
Layout = 5 x 9
TWIN size = 40" x 75"
120 Five Inch Blocks Needed
Layout = 8 x 15
FULL Size = 55" x 75"
165 Five Inch Blocks Needed
Layout = 11 x 15
QUEEN Size = 60" x 80"
192 Five Inch Blocks Needed
Layout = 12 x 16
KING Size = 80" x 80"
256 Five Inch Blocks Needed
Layout = 16 x 16
4" BLOCKS
CRIB size = 24" x 48"
72 Four Inch Blocks Needed
Layout = 6 x 12
TWIN size = 40" x 80"
190 Four Inch Blocks Needed
Layout = 10 x 19
FULL Size = 56" x 80"
266 Four Inch Blocks Needed
Layout = 14 x 19
QUEEN Size = 64" x 80"
300 Four Inch Blocks Needed
Layout = 15 x 20
KING Size = 80" x 80"
380 Four Inch Blocks Needed
Layout = 19 x 20
3" BLOCKS
By popular demand we are adding a chart for 3" blocks or squares.
The 3" (and 2") square is very frequently the square used for WATERCOLOR quilting.
With this chart you will know exactly how many squares you need to purchase to make your desired size quilt or wallhanging.
CRIB size = 24" x 48"
128 Three Inch Blocks Needed
Layout = 8 x 16
TWIN size = 40" x 80"
325 Three Inch Blocks Needed
Layout = 13 x 25
FULL Size = 56" x 80"
450 Three Inch Blocks Needed
Layout = 18 x 25
QUEEN Size = 64" x 80"
540 Three Inch Blocks Needed
Layout = 20 x 27
KING Size = 80" x 80"
702 Three Inch Blocks Needed
Layout = 26 x 27
-its not the number of breaths you take, but the moments that take your breath away!
wow, that's a lot of math.
But very helpful, also for me.
thx girls
I still have problems with converting inches to the metric system to figure out how big it is.
December 7th, 2010, 05:56 PM #2
Missouri Star
December 7th, 2010, 05:57 PM #3
Missouri Star
December 7th, 2010, 06:03 PM #4
Missouri Star
December 7th, 2010, 06:05 PM #5
Missouri Star
Join Date
Apr 2010
Orange County, CA and Baarn, Netherlands temporarily
December 7th, 2010, 09:33 PM #6
Missouri Star
December 8th, 2010, 12:01 AM #7
Junior Member
Fabric Fanatic
December 8th, 2010, 12:50 AM #8
Missouri Star
December 8th, 2010, 01:56 AM #9
December 8th, 2010, 05:51 AM #10
9 Patch Princess | {"url":"http://forum.missouriquiltco.com/quilting-questions/1706-how-many-charm-packs-queen-size-simple-block-quilt.html","timestamp":"2014-04-21T04:30:57Z","content_type":null,"content_length":"96297","record_id":"<urn:uuid:6e2537d2-1890-46b3-af67-a7703b1b2d5d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
morethan 3 consecutive rows
Results 21 to 30 of 31
Thread: morethan 3 consecutive rows
1. 04-12-2006, 05:44 AM #21
Junior Member
Join Date
Mar 2006
can you not just sort by ROWID to get the natural order that rows were inserted into the table? is it valid?
2. 04-12-2006, 06:31 AM #22
Junior Member
Join Date
Mar 2006
ok i finally whacked this one but for one little bug:
create table tmp_test as
select '3/1/2006 7:00a.m' as d,'a' as t,'3' as v from dual union all
select '3/1/2006 7:00a.m','a','1' from dual union all
select '3/1/2006 7:00a.m','a','1' from dual union all
select '3/1/2006 7:03a.m','a','3' from dual union all
select '3/1/2006 7:03a.m','a','4' from dual union all
select '3/1/2006 7:03a.m','a','5' from dual union all
select '3/1/2006 7:06a.m','a','6' from dual union all
select '3/1/2006 7:06a.m','a','1' from dual union all
select '3/1/2006 7:06a.m','a','2' from dual union all
select '3/1/2006 7:09a.m','a','3' from dual union all
select '3/1/2006 7:09a.m','a','5' from dual union all
select '3/1/2006 7:09a.m','a','4' from dual union all
select '3/1/2006 7:12a.m','b','3' from dual union all
select '3/1/2006 7:12a.m','b','2' from dual union all
select '3/1/2006 7:12a.m','b','1' from dual;
following my own advice (in english) given last post i made this query:
select d, t, v,
case when
SUM(v) OVER (ORDER BY d, rowid ROWS 2 PRECEDING) >= 3 and
MIN(v) OVER (ORDER BY d, rowid ROWS 2 PRECEDING) >= 3 then 1
end as consec
from tmp_test
it reliably identifies consecutives and for run length of N, the last N-2 rows are flagged with a 1. i called this column CONSEC:
3/1/2006 7:00a.m a 1
3/1/2006 7:00a.m a 1
3/1/2006 7:03a.m a 3
3/1/2006 7:03a.m a 4
3/1/2006 7:03a.m a 5 1
3/1/2006 7:06a.m a 6 1
3/1/2006 7:06a.m a 1
3/1/2006 7:06a.m a 2
so now we can use a lookahead to see if the CONSEC column holds a 1 character, and because our CONSEC currently shows all but the first 2 of a consecutive range, we look
ahead 2, or 1, or look at the current position:
select d, t, v, consec,
case when
consec = 1 or
lead(consec,1) OVER (ORDER BY d, rowid) = 1 or
lead(consec,2) OVER (ORDER BY d, rowid) = 1
then 'Y'
end as flag
from (
select d, t, v,
case when
SUM(v) OVER (ORDER BY d, rowid ROWS 2 PRECEDING)>= 9 and
MIN(v) OVER (ORDER BY d, rowid ROWS 2 PRECEDING)>= 3 then
1
end as consec
from tmp_test
);
D T V CONSEC FLAG
3/1/2006 7:00a.m a 3
3/1/2006 7:00a.m a 1
3/1/2006 7:00a.m a 1
3/1/2006 7:03a.m a 3 Y
3/1/2006 7:03a.m a 4 Y
3/1/2006 7:03a.m a 5 1 Y
3/1/2006 7:06a.m a 6 1 Y
3/1/2006 7:06a.m a 1
3/1/2006 7:06a.m a 2
3/1/2006 7:09a.m a 3 Y
3/1/2006 7:09a.m a 5 Y
3/1/2006 7:09a.m a 4 1 Y
3/1/2006 7:12a.m b 3 1 Y
3/1/2006 7:12a.m b 2
3/1/2006 7:12a.m b 1
3/1/2006 7:13a.m c 4 Y
3/1/2006 7:14a.m d 4 Y
3/1/2006 7:14a.m d 5 1 Y
3/1/2006 7:14a.m d 1
3/1/2006 7:14a.m d 3
3/1/2006 7:14a.m d 3
3/1/2006 7:14a.m d 1
(i added some more values to the table after the initial create table sql)
Last edited by cjard; 04-12-2006 at 06:34 AM.
3. 04-12-2006, 06:43 AM #23
Junior Member
Join Date
Mar 2006
please note i forgot the question and even ignored my own advice a little, i prepared an sql for more than 2 consecutive rows, and the poster wanted more than 3, so here's how we change the sql:
select d, t, v, consec,
case when
consec = 1 or
lead(consec,1) OVER (ORDER BY d, rowid) = 1 or
lead(consec,2) OVER (ORDER BY d, rowid) = 1 or
--add a lead row for every blank. more than 3 consec needs 3 lead rows
--more than 5 consec (i.e. 6+ consec) needs 5 leads
--note that each lead row must lead a higher lookahead
--the lookahead of this row is 3
lead(consec,3) OVER (ORDER BY d, rowid) = 1
then 'Y'
end as flag
from (
select d, t, v,
case when
--the sum over the last N rows (more than 3 rows requires 3 PRECEDING)
--must be CONSECUTIVES_REQD * MIN_THRESHOLD
--in our case we want a run of 4 or more rows with value >= 3
--so sum must be >= 4rows*3orMore == 4*3 == 12
--remember to change the X PRECEDING value of X
SUM(v) OVER (ORDER BY d, rowid ROWS 3 PRECEDING)>= 12 and
MIN(v) OVER (ORDER BY d, rowid ROWS 3 PRECEDING)>= 3 then
1
end as consec
from tmp_test
);
so for a consecutive run length N of values greater than Y you need
N-1 entries in the LEAD() section of the sql, with lead values 1 to N-1
a SUM() over the preceding N-1 rows of value >= N*Y
a MIN() over the preceding N-1 rows of value >= Y
ok, now you can edit this sql for any run length of any value
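For anyone who wants to sanity-check the rule outside the database, the same windowed logic can be sketched in a few lines of Python. This is a hypothetical helper, not part of the thread's SQL; the function name and sample values are made up for illustration:

```python
# Flag every row that belongs to a run of MORE than n consecutive rows
# whose value is >= y (mirrors the SUM/MIN window trick above).
def flag_runs(values, n, y):
    # assumes `values` is already in the desired row order
    flags = [''] * len(values)
    run = 0
    for i, v in enumerate(values):
        run = run + 1 if v >= y else 0
        if run >= n + 1:                      # run longer than n rows
            for j in range(i - run + 1, i + 1):
                flags[j] = 'Y'
    return flags

# same values as the worked example: "more than 2" consecutive rows >= 3
print(flag_runs([3, 1, 1, 3, 4, 5, 6, 1, 2], 2, 3))
```

Run against the sample values it flags the same four rows (the 7:03 and 7:06 block) that the SQL marks with 'Y'.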
4. 04-12-2006, 07:23 AM #24
Super Moderator
Join Date
Dec 2000
Ljubljana, Slovenia
Of course not! ROWID has got nothing to do with what you call "natural order that rows were inserted into the table". There's no such thing in the Oracle database as a built-in indicator of the inserted rows' ordering. ROWID merely represents *the position* of a particular row on the disk storage; it has no information about the time the row was inserted. And by comparing the ROWIDs of two rows, you can not say which one was inserted before the other one.
So, in short: ROWID can not be used for ordering rows in the way that the original poster needs.
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
5. 04-20-2006, 11:19 AM #25
Junior Member
Join Date
Mar 2006
well, dude's going to have to decide on something else that is precise enough to order his rows properly then, eh? I've only ever seen rowid increment for the data work that I've been doing, and it's been quite a reliable thing that for two rows inserted in the same second, the rowid has been able to order them in order of creation. I'll take your advice on board though should I need it in the future. thanks!
6. 04-20-2006, 04:25 PM #26
Super Moderator
Join Date
Dec 2000
Ljubljana, Slovenia
Then I suppose you've never ever observed how new rows can take the place that has been released by deleted rows from the same table, thus obtaining "lower" ROWIDs compared to some rows that were inserted before them? Or new table extents being allocated in the tablespace in such a location that all rows that will end up in that extent in the future will result in ROWIDs that are lower than any currently existing table's row? Or that ROWIDs of the existing rows can change for various reasons, thus making the insertion time even less correlated to the row's ROWID?
In short, as I've already said: by comparing the ROWIDs of two rows, you can definitely not conclude which of those two rows was inserted before the other one. All you can do based on their ROWIDs is *to guess*.
Jurij Modic
ASCII a stupid question, get a stupid ANSI
24 hours in a day .... 24 beer in a case .... coincidence?
7. 05-10-2006, 05:33 PM #27
Junior Member
Join Date
Oct 2002
Hi all,
Finally, i got the exact requirement from the client. Please find the requirement in the attached word document.
The requirement is very scary to me. Hope the experts can help.
Last edited by rajan1; 05-10-2006 at 08:42 PM.
8. 05-10-2006, 05:56 PM #28
Old Cranky Philosopher
Join Date
Nov 2002
Geneva Switzerland
The requirements STILL don't explicitly include a sort order. Your client must specify that.
Is it by (id, testdate)?
Does this produce a UNIQUE sequence?
"The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous" - Gibbon, quoted by R.P.Feynman
9. 05-10-2006, 08:52 PM #29
Junior Member
Join Date
Oct 2002
Hi all,
I was bouncing back and forth with the client to confirm the requirement. As a result I modified the requirement; please find the modified requirement attached. I am trying my best to clearly define the requirement to our experts here.
1. Sort is by id and testdate
2. Select all the rows where the difference between (testdate column) current row and the lead row or the next row is 3 minutes.
I don't know how to proceed with consecutive times. Experts please help.
10. 05-11-2006, 05:18 PM #30
Junior Member
Join Date
Oct 2002
The three consecutive applies only within a date or for each date.
Neural Network - Multi Step Ahead Prediction
4 Answers | Accepted answer
Latest activity: Answered by Greg Heath on 25 Mar 2014 at 7:24
Hi all, please I need your help !
I've read all the posts here about Time Series Forecasting but still can't figure it out ! I'm drained.. :-(
I've a NARX neural network with 10 hidden neurons and 2 delays. As input I have a 510x5 (called Inputx) and as output I have a 510x1 (called Target).
I want to forecast 10 days ahead but it's really not working...
I tried the following code but I'm stuck now. :-(
Would you mind to help me ? Some code will be awesome. :-(
***////////////////////////////////////////////******** ***/////////////////////////////////////////// ******
inputSeries = tonndata(Inputx,false,false);
targetSeries = tonndata(Target,false,false);
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
[xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
yc = netc(xc,xic,aic);
***////////////////////////////////////////////******** ***/////////////////////////////////////////// ******
1 Comment
Two things: please change the title of your post to something useful, and format the code: http://www.mathworks.com/matlabcentral/answers/13205-tutorial-how-to-format-your-question-with-markup#
Hi Jack,
When using narxnet, the network performs only a one-step ahead prediction after it has been trained. Therefore, you need to use closeloop to perform a multi-step-ahead prediction and turn the network
into parallel configuration.
Take a look at this example for a multi-step-ahead prediction, N steps. This uses the dataset magdata.mat which is available in the Neural Network Toolbox. Also, some of the inputs will be used for
performing the multi-step-ahead prediction, and results validated with the original data. I hope the comments help to understand.
%% 1. Importing data
S = load('magdata');
X = con2seq(S.u);
T = con2seq(S.y);
%% 2. Data preparation
N = 300; % Multi-step ahead prediction
% Input and target series are divided in two groups of data:
% 1st group: used to train the network
inputSeries = X(1:end-N);
targetSeries = T(1:end-N);
% 2nd group: this is the new data used for simulation. inputSeriesVal will
% be used for predicting new targets. targetSeriesVal will be used for
% network validation after prediction
inputSeriesVal = X(end-N+1:end);
targetSeriesVal = T(end-N+1:end); % This is generally not available
%% 3. Network Architecture
delay = 2;
neuronsHiddenLayer = 50;
% Network Creation
net = narxnet(1:delay,1:delay,neuronsHiddenLayer);
%% 4. Training the network
[Xs,Xi,Ai,Ts] = preparets(net,inputSeries,{},targetSeries);
net = train(net,Xs,Ts,Xi,Ai);
Y = net(Xs,Xi,Ai);
% Performance for the series-parallel implementation, only
% one-step-ahead prediction
perf = perform(net,Ts,Y);
%% 5. Multi-step ahead prediction
inputSeriesPred = [inputSeries(end-delay+1:end),inputSeriesVal];
targetSeriesPred = [targetSeries(end-delay+1:end), con2seq(nan(1,N))];
netc = closeloop(net);
[Xs,Xi,Ai,Ts] = preparets(netc,inputSeriesPred,{},targetSeriesPred);
yPred = netc(Xs,Xi,Ai);
perf = perform(net,yPred,targetSeriesVal);
legend('Original Targets','Network Predictions','Expected Outputs')
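The feedback idea behind closeloop can also be sketched outside MATLAB. The snippet below is an illustration only (the toy "model" stands in for a trained network); it shows why a multi-step-ahead forecast feeds each one-step prediction back in as if it were a real observation:

```python
# Illustration only: closing the loop means each prediction becomes
# the next input, which is what a closed-loop NARX network does.
def one_step_model(history):
    # toy stand-in for a trained network with delay d = 2:
    # predict the average of the last two observations
    return (history[-1] + history[-2]) / 2.0

def multi_step_forecast(history, n_steps):
    h = list(history)
    preds = []
    for _ in range(n_steps):
        y = one_step_model(h)   # one-step-ahead prediction...
        preds.append(y)
        h.append(y)             # ...fed back as the next "input"
    return preds

print(multi_step_forecast([1.0, 3.0], 3))
```

Note how any error in an early prediction is re-used by every later step, which is why closed-loop forecasts degrade with the horizon.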
9 Comments
Hi, I am using NARX to predict daily stock market index data (Sensex, a 2003x1 matrix) as the target, with another daily stock market series (Nifty) as the input. I have done it using the example you have shown above.
The code:
%%%newNARX code 24/4/2013
%% 1. Importing data
% Matrix of 2003x1 each are
% daily stock market indices data
% of Nifty & Sensex
load Nifty.dat;
load Sensex.dat;
% %%S = load('magdata');
% %%X = con2seq(S.u);
% %%T = con2seq(S.y);
% To scale the data it is converted to its log value:
lognifty = log(Nifty);
logsensex = log(Sensex);
X = tonndata(lognifty,false,false);
T = tonndata(logsensex,false,false);
% X = con2seq(x);
% T = con2seq(t);
%% 2. Data preparation
N = 300; % Multi-step ahead prediction
% Input and target series are divided in two groups of data:
% 1st group: used to train the network
inputSeries = X(1:end-N);
targetSeries = T(1:end-N);
% 2nd group: this is the new data used for simulation. inputSeriesVal will
% be used for predicting new targets. targetSeriesVal will be used for
% network validation after prediction
inputSeriesVal = X(end-N+1:end);
targetSeriesVal = T(end-N+1:end); % This is generally not available
%% 3. Network Architecture
delay = 2;
neuronsHiddenLayer = 50;
% Network Creation
net = narxnet(1:delay,1:delay,neuronsHiddenLayer);
%% 4. Training the network
[Xs,Xi,Ai,Ts] = preparets(net,inputSeries,{},targetSeries);
net = train(net,Xs,Ts,Xi,Ai);
Y = net(Xs,Xi,Ai);
% Performance for the series-parallel implementation, only
% one-step-ahead prediction
perf = perform(net,Ts,Y);
%% 5. Multi-step ahead prediction
inputSeriesPred = [inputSeries(end-delay+1:end),inputSeriesVal];
targetSeriesPred = [targetSeries(end-delay+1:end), con2seq(nan(1,N))];
netc = closeloop(net);
[Xs,Xi,Ai,Ts] = preparets(netc,inputSeriesPred,{},targetSeriesPred);
yPred = netc(Xs,Xi,Ai);
perf = perform(net,yPred,targetSeriesVal);
legend('Original Targets','Network Predictions','Expected Outputs')
Network predictions are coming out very bad. I guess there is some problem with the closed loop's initial input states and initial layer states. Please help.
I don't fully understand some ideas in Lucas García's answer; could someone explain? First, is it right that Lucas's code predicts the N future values one step at a time, i.e. the value at t+1 from the past values at t-1, t-2, ..., t-d? Second, why don't we use a "for" loop with i from 1 to N, where at each i we predict the value at t+i from the values at t+i-1, ..., t+i-d? I think that would be more accurate than the first way.
No need for a loop. The input and targets are time series, not single points from time series. Each step is performed automatically via sim or net.
When the loop is closed, there is only the input series. The target series is replaced by output feedback.
Here is an example that may help. A NARX network is trained on series inputs X and targets T, then the simulation is picked up at the end of X using continuation input data X2 with a closed loop
network. The final states after open loop simulation with X are used as the initial states for closed loop simulation with X2.
[x,t] = simplenarx_dataset;
net = narxnet;
[X,Xi,Ai,T] = preparets(net,x,{},t);
net = train(net,X,T,Xi,Ai);
[Y,Xf,Af] = sim(net,X,Xi,Ai);
% INPUT DATA USING CLOSED LOOP NETWORK.
% Closed Loop Network
netc = closeloop(net);
% 10 More Steps for the first (now only) input
X2 = num2cell(rand(1,10));
% Initial input states for closed loop continuation will be the
% first input's final states.
Xi2 = Xf(1,:);
% Initial 2nd layer states for closed loop contination will be the
% processed second input's final states. Initial 1st layer states
% will be zeros, as they have no delays associated with them.
Ai2 = cell2mat(Xf(2,:));
for i=1:length(net.inputs{1}.processFcns)
fcn = net.inputs{1}.processFcns{i};
settings = net.inputs{1}.processSettings{i};
Ai2 = feval(fcn,'apply',Ai2,settings);
end
Ai2 = mat2cell([zeros(10,2); Ai2],[10 1],ones(1,2));
% Closed loop simulation on X2 continues from open loop state after X.
Y2 = sim(netc,X2,Xi2,Ai2);
2 Comments
Thank you very much Mark for your answer ! :-))
I have tried this code, and it is great, but when I try to apply it to my problem, I get really bad results. I tried changing the input and feedback delays, as well as the number of hidden neurons, but the results are always bad (figure) (green line is the multi-step prediction).
The code is given below:
net = narxnet(ID,FD,HL);
[X,Xi,Ai,T] = preparets(net,x,{},WS);
net.divideFcn = 'divideblock';
net = train(net,X,T,Xi,Ai);
[Y,Xf,Af] = sim(net,X,Xi,Ai);
% INPUT DATA USING CLOSED LOOP NETWORK.
% Closed Loop Network
netc = closeloop(net);
Xi2 = Xf(1,:);
Ai2 = cell2mat(Xf(2,:));
for i=1:length(net.inputs{1}.processFcns)
fcn = net.inputs{1}.processFcns{i};
settings = net.inputs{1}.processSettings{i};
Ai2 = feval(fcn,'apply',Ai2,settings);
end
Ai2 = mat2cell([zeros(10,2); Ai2],[10 1],ones(1,2));
Y2 = sim(netc,X2,Xi2,Ai2);
hold on
legend('Input data - target series','One-step ahead prediction','Multi-step prediction beyond target series');
Be aware that predicting outputs this way (similar to a cascade realization of a linear system) has great sensitivity to parameter estimation errors, because they propagate in the process Mark Hudson Beale mentioned. This is highlighted in hard, multiple-steps-ahead problems.
Parallel realizations (simultaneous output estimation, for instance 10 outputs of the neural network for the next 10 time steps) tend to be less sensitive to these errors. I have implemented this with my own code, which is always prone to error :) So my subquestion is:
Is there some specific way to prepare my data for training with some matlab function?
0 Comments
When the loop is closed, the net should be retrained with the original data and initial weights the same as the final weights of the openloop configuration.
0 Comments | {"url":"http://www.mathworks.es/matlabcentral/answers/14970-neural-network-multi-step-ahead-prediction","timestamp":"2014-04-20T11:08:43Z","content_type":null,"content_length":"56196","record_id":"<urn:uuid:259ac3b9-9c56-438a-b355-64761c07126a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please check for me, affine transformations
June 5th 2006, 07:28 AM #1
Junior Member
May 2006
Please check for me, affine transformations
Brief solutions at the bottom
In this question, f and g are both affine transformations. The transformations f is a reflection in the line y = -x +1, and g maps the points (0,0), (1,0) and (0,1) to the points (1,5), (1,-4)
and (0,-5) respectively.
a) Determine g in the form g(x) = Ax + a, where A is a 2 x 2 matrix and a is a vector with two components.
b) Write down the matrix that represents reflection in an appropriate line through the origin, and find f (in the same form as for g in part (a)) by first translating an appropriate point to the
c) Find the affine transformation g o f (in the same form as for g and f in parts (a) and (b)).
d) Hence or otherwise, find the images of the points (0,0), (0,-2) (2,-2) and (2,0) under g o f. Mark these points and images on the same diagram, making it clear which points maps to which.
Describe g o f geometrically as a single transformation.
Here is what I got;
a) g(x) = [0 -1; 1 0] x + (1, -5)
b) f(x) = [0 -1; -1 0] x + (1, 1)
c) g o f(x) = [1 0; 0 -1] x + (0, -6)
d) Using solution from (c) I got images are (0,-6) , (0, -4), (2,-4) and (2,-6)
Otherwise I got images as (0,-4) , (0,-2) , ( 2,-2) and (2,-4)
Don’t know which if any is correct. I might have made mistakes in earlier workings.
a) g(x) = [0 -1; 1 0] x + (1, -5)
if $x=[0,0]'$, your $g(x)=[1,-5]'$ rather than $[1,5]'$ as is given in the question.
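For reference, the standard way to read off $g$ (not from the original thread, just the usual method): since $g(\mathbf{x}) = A\mathbf{x} + \mathbf{a}$, the translation part is $\mathbf{a} = g(0,0)$ and the columns of $A$ are $g(1,0)-\mathbf{a}$ and $g(0,1)-\mathbf{a}$:

```latex
\mathbf{a} = g(0,0) = \begin{pmatrix}1\\5\end{pmatrix},\qquad
A = \begin{pmatrix} g(1,0)-\mathbf{a} & g(0,1)-\mathbf{a} \end{pmatrix}
  = \begin{pmatrix} 0 & -1 \\ -9 & -10 \end{pmatrix},
\qquad\text{so}\qquad
g(\mathbf{x}) = \begin{pmatrix} 0 & -1 \\ -9 & -10 \end{pmatrix}\mathbf{x}
  + \begin{pmatrix} 1 \\ 5 \end{pmatrix}.
```

As a check, $g(1,0) = (0,-9)+(1,5) = (1,-4)$ and $g(0,1) = (-1,-10)+(1,5) = (0,-5)$, matching the given images.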
June 6th 2006, 02:11 AM #2
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/advanced-algebra/3271-please-check-me-affine-transformations.html","timestamp":"2014-04-18T08:21:36Z","content_type":null,"content_length":"35959","record_id":"<urn:uuid:cd629536-6854-408a-84f7-4d53b35cd447>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector or not vector?
March 29th 2007, 01:16 PM #1
Mar 2007
Vector or not vector?
If a vector is simply "any quantity having magnitude and
direction", then how can a vector's components NOT transform according to the rule of transformation for the components of a vector when we apply a transformation of the coordinate system? Or
alternatively, given three numbers (for the 3D case), how can we say that they are NOT the components of a vector in a given coordinate system, if we admit that they transform according to the
rule of transformation for vector components when we change the coordinate system? I'm not sure anymore if a vector is simply "any quantity having magnitude and direction", I mean, in a given x-y
coordinate system (2D case), I can define a vector, say (3,5) - but it's clear that (3,5) are NOT the components of my vector in another coordinate system. Does that mean that my vector is not a
vector? I think it just means that (3,5) are not the components of my vector. But then, how do I specify my vector independently of the coordinate system, for isn't a vector an entity that exists
regardless of any coordinate system?
I know my question seems confusing. Please read my following analysis before answering:
Vector or not vector?
Last edited by BobbyFluffyPrickles; March 29th 2007 at 10:46 PM.
vector or not a vector?
The definition you gave is fine. However, the notion you have about the components of a vector is not completely right. "Components" is the general term, and the representation you describe refers to the rectangular components of a vector. As the coordinate system changes, the rectangular components of a vector also change their direction accordingly, keeping the magnitude unchanged.
If you are talking about a vector, the magnitude remains unchanged irrespective of the coordinate system. However, if you change the coordinate system, or rotate it, then the direction changes and accordingly the direction of the rectangular components changes.
I hope this helps!!!!!!!!!
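A concrete way to see the point above, as a hypothetical illustration-only snippet: the same geometric vector gets different components when the axes are rotated, but its magnitude does not change.

```python
import math

# components of the SAME geometric vector, expressed in axes rotated by t
def components_in_rotated_frame(v, t):
    c, s = math.cos(t), math.sin(t)
    x, y = v
    return [c * x + s * y, -s * x + c * y]

v = [3.0, 5.0]
w = components_in_rotated_frame(v, math.pi / 2)  # axes rotated 90 degrees
print(w)                                         # components change
print(math.hypot(*v), math.hypot(*w))            # magnitude stays the same
```

So "(3,5)" is not the vector itself, only its components in one particular frame; the vector is the frame-independent object whose components transform this way.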
Thank you for your reply, jagabandhu
well fine!!!!!!!!
I do not know how much you know about these things.
Well, maybe you can get some ideas at the website
The Velocity Vector and its Components
otherwise you may post your curiosities at the website
www.askiitians.com :: Index
I hope this may help you out. In the meantime I would like to look back at my notes to clarify more about it!!!!!
By Kardi Teknomo, PhD.
A unit vector is a vector of unit length. Any non-zero vector $v$ can be normalized into a unit vector by dividing the vector by its norm, that is $u = v / \|v\|$.
Note that a unit vector is not the same as the all-ones vector.
Suppose we have a vector $v$.
The norm of the vector is $\|v\| = \sqrt{v \cdot v}$.
Converting it to a unit vector gives $u = v / \|v\|$.
Now the norm of the unit vector is $\|u\| = 1$.
The interactive program below will help you to convert your vector input into a unit vector of any positive dimension. The program will also show you the norm of input vector, norm of unit vector
(which is always 1) and sum of the unit vector.
Some important properties of unit vector are
• The inner product of a unit vector with itself is one
• Two unit vectors
• In a Euclidean space, the standard unit vectors, which are orthogonal to each other, have names:
o unit vector of the first dimension is
o unit vector of the second dimension is
o unit vector of the third dimension is
• The dot products of the standard unit vectors:
o Dot product of a standard unit vector with itself is one
o Dot product of two perpendicular standard unit vectors is zero
• The cross products of the standard unit vectors:
o Cross product of a standard unit vector with itself is zero
o Cross products of perpendicular standard unit vectors form a cycle
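The normalization described above takes only a few lines of code; here is a hedged Python sketch (the example vector [4, 4, 2] is invented for illustration):

```python
import math

def unit(v):
    """Normalize a non-zero vector by dividing it by its Euclidean norm."""
    n = math.sqrt(sum(x * x for x in v))
    if n == 0:
        raise ValueError("the zero vector cannot be normalized")
    return [x / n for x in v]

u = unit([4, 4, 2])                        # |v| = sqrt(36) = 6, so u = [2/3, 2/3, 1/3]
norm_u = math.sqrt(sum(x * x for x in u))  # the norm of a unit vector is always 1
```

Dividing by the norm fails only for the zero vector, which has no direction to preserve.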
See also: dot product, cross product, vector norm, basis vector
This tutorial is copyrighted.
Preferable reference for this tutorial is
Teknomo, Kardi (2011) Linear Algebra tutorial. http:\\people.revoledu.com\kardi\ tutorial\LinearAlgebra\ | {"url":"http://people.revoledu.com/kardi/tutorial/LinearAlgebra/UnitVector.html","timestamp":"2014-04-20T05:44:43Z","content_type":null,"content_length":"23486","record_id":"<urn:uuid:c705c179-e970-41f1-a348-5099eef37307>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many bits must be “flipped” (i.e., changed from 0 to 1 or from 1 to 0) in order to capitalize a lowercase ‘a’ that’s represented in ASCII?
Do you know the decimal value for each of the characters?
Take the decimal equivalent to your ascii, convert to binary, compare the results.
@JoãoVitorMC it's more useful to start from hex values:
A <=> 41 <=> 0100 0001    a <=> 61 <=> 0110 0001
B <=> 42 <=> 0100 0010    b <=> 62 <=> 0110 0010
...
Z <=> 5A <=> 0101 1010    z <=> 7A <=> 0111 1010
Just toggle b5 from 0 to 1 to transform uppercase to lowercase, or toggle b5 from 1 to 0 to transform lowercase to uppercase: that's all.
Going from lowercase to uppercase in ASCII is equivalent to subtracting 32 from the decimal code of the lowercase letter: 'a' = 97, and 97 - 32 = 65 = 'A'. In general a subtraction could borrow across several bit positions, but the lowercase alphabet in ASCII is only 97-122 in decimal. That entire range begins with 011 in binary, so the '32' place bit is always 1, and subtracting 00100000 (which here is the same as XOR with 00100000) always flips only that single bit. So not only for 'a', but for all 26 letters in the English alphabet, exactly 1 bit must be flipped to capitalize it in ASCII.
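The one-bit claim in the answers above can be checked mechanically; a small Python sketch:

```python
def toggle_case(ch):
    """Flip bit 5 (value 32, i.e. 0x20) of an ASCII letter: 'a' <-> 'A'."""
    return chr(ord(ch) ^ 0x20)

# Bits differing between each lowercase letter and its capital:
diffs = [bin(ord(c) ^ ord(c.upper())).count("1")
         for c in "abcdefghijklmnopqrstuvwxyz"]
```

Every entry of diffs comes out 1, confirming that exactly one bit flip capitalizes any of the 26 letters.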
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/507d7f86e4b040c161a2d8e8","timestamp":"2014-04-19T19:37:27Z","content_type":null,"content_length":"35845","record_id":"<urn:uuid:545402f0-00fa-4c3b-bf80-450c97fffd3c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Topology Vol. 23, No. 2, pp. 211-217, 1984 0040-9383/84 $3.00+.00
Printed in Great Britain. Pergamon Press Ltd.
(Received 1 May 1982)
IT WAS established by Brown [2] that any locally-flat imbedding of S^(n-1) in S^n divides S^n
into two domains, each of whose closures is an n-ball. Somewhat later [5] the h-cobordism
theorem further established that if S^(n-1) is a smooth or PL submanifold of S^n then so are
the resulting n-balls, provided that n > 5. (The case n < 3 had been known since
For n = 4 little is known. The goal of this paper is to present an elementary proof of
the conjecture for the special case described below.
A collared handlebody decomposition of a 3-manifold M will be a decomposition
∅ = W'_0 ⊂ W_1 ⊂ W'_1 ⊂ W_2 ⊂ W'_2 ⊂ ... ⊂ W_(n-1) ⊂ W'_(n-1) ⊂ W_n = M such that, for
0 < i ≤ n, W_i is obtained from W'_(i-1) by attaching a handle h_i ≅ D^k × D^(3-k) to ∂W'_(i-1) along
∂D^k × D^(3-k), and W'_i is obtained from W_i by attaching a collar to ∂W_i.
It will be convenient to regard S^4 as the two-point compactification of S^3 × R, so
S^3 × R ⊂ S^4. Let p: S^3 × R → R and π: S^3 × R → S^3 be the standard projections. A PL
imbedding g: $3--,S 3 x R ~ S 4 is a critical level imbedding if there is a collared handle- | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/337/1383639.html","timestamp":"2014-04-24T15:55:31Z","content_type":null,"content_length":"8442","record_id":"<urn:uuid:f202f19e-f3ab-4a18-a047-737be71f22e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differential Equations
Way back in algebra we learned that a solution to an equation is a value of the variable that makes the equation true. This is the backwards kind of thinking we need for differential equations.
To check if a number is a solution to an equation, we evaluate the left-hand side of the equation at that number, then evaluate the right-hand side of the equation at that number. If we get the same
value for both sides of the equation, the number is a solution to the equation. | {"url":"http://www.shmoop.com/differential-equations/solutions-differential.html","timestamp":"2014-04-21T09:55:30Z","content_type":null,"content_length":"26289","record_id":"<urn:uuid:01455481-0b30-4127-8b23-a9093a096c05>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
Yousef Saad -- Books
• Iterative methods for sparse linear systems (2nd edition) This is the same text as the book with the same title offered by SIAM [Available here.]. See description below for more information.
Note: This is has a different format from that of the SIAM print. [to allow hyper-references in particular.]
• Numerical Methods for Large Eigenvalue Problems - 2nd Edition This is the second edition of a book published almost two decades ago by Manchester University Press (See below). The book is
published by SIAM. [Available here for purchase from SIAM].
• Iterative methods for sparse linear systems (2nd edition) This is a second edition of a book initially published by PWS in 1996. It is available from SIAM. In this new edition, I revised all
chapters by incorporating recent developments, so the book has seen a sizable expansion from the first edition. At the same time I also removed some of the topics that have become less important.
A notable addition is a chapter on multigrid techniques. The first edition posted (see below) will remain on my web-site (as is). The table of contents of the new edition can be accessed in:
post-script   or   PDF .
Two distinct errata are available for this book. One was sent to the publisher in April 2004 on the occasion of the second printing and the other was sent in May 2007 on the occasion of the 3rd printing.
• Iterative methods for sparse linear systems (1st edition) This book, originally published in 1996 by PWS, is now out of print. PWS (and now ITP) no longer owns the copyright. I revised the
manuscript and I am making the post script available for those who want to use it. A new edition of this book is now available from SIAM, see above.
• Numerical Methods for Large Eigenvalue Problems This book was originally published by Manchester University Press (Oxford rd, Manchester, UK) in 1992 -- (ISBN 0 7190 3386 1) and in the US under
Halstead Press (John Wiley, ISBN 0 470 21820 7). It is currently out of print. The version available here is actually an updated one. You will find 4 post-script files (about 100 pages each) in a
compressed (gz) tar format. | {"url":"http://www-users.cs.umn.edu/~saad/books.html","timestamp":"2014-04-16T21:52:02Z","content_type":null,"content_length":"4399","record_id":"<urn:uuid:3558259f-4775-4ba7-b3f4-fd8c5985750f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Moment of inertia physical pendulum
My attempt:
the inertia of a rod (physical pendulum)
[itex]I = md^{2} + \frac{1}{12}mL^{2}[/itex]
Without the added mass, the moment of inertia is:
[itex]d = \frac{L}{2}[/itex], since the center of mass is in the middle of the physical pendulum.
[itex]I = m\frac {L^{2}}{4} + \frac{1}{12}mL^{2}[/itex]
[itex]I = \frac{1}{3} mL^{2}[/itex]
OK. Your physical pendulum is just a thin rod of mass m. (Is that what you had in mind? Or were you supposed to consider
physical pendulum?)
If I add a mass at the end of the pendulum, then
[itex]I = mL^{2} + \frac{1}{12}mL^{2}[/itex]
[itex]I = \frac {13}{12} mL^{2}[/itex]
Is the added mass equal to the mass of the rod? Since you want the rotational inertia about the axis, why are you adding the rotational inertia of the rod about its center of mass?
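To make the bookkeeping concrete, here is a hedged Python sketch (my own illustration, not from the thread) tallying the rotational inertia about the pivot for a uniform rod pivoted at one end, with an optional point mass M added at the far end:

```python
def rod_about_end(m, L):
    """Uniform thin rod about an axis through one end:
    parallel-axis theorem gives (1/12) m L^2 + m (L/2)^2 = (1/3) m L^2."""
    return m * L**2 / 12.0 + m * (L / 2.0) ** 2

def rod_plus_end_mass(m, M, L):
    """Rod about its end, plus a point mass M at the far end (which
    contributes M L^2 about that same pivot axis)."""
    return rod_about_end(m, L) + M * L**2

I_rod = rod_about_end(1.0, 1.0)           # (1/3) m L^2
I_tot = rod_plus_end_mass(1.0, 1.0, 1.0)  # (1/3 + 1) m L^2 = (4/3) m L^2, not 13/12
```

With the added mass equal to the rod's mass, the total about the pivot is (4/3)mL^2, which illustrates the objection above to adding the rod's inertia about its own center of mass.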
Don't forget that when you add mass the location of the center of mass changes. | {"url":"http://www.physicsforums.com/showthread.php?p=4220071","timestamp":"2014-04-21T02:14:06Z","content_type":null,"content_length":"63641","record_id":"<urn:uuid:fcc49d39-9d0b-42ca-aeec-bdce48355043>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the least prime number that is greater than 90?
In mathematics, an integer sequence is a sequence (i.e., an ordered list) of integers.
An integer sequence may be specified explicitly by giving a formula for its nth term, or implicitly by giving a relationship between its terms. For example, the sequence 0, 1, 1, 2, 3, 5, 8, 13, …
(the Fibonacci sequence) is formed by starting with 0 and 1 and then adding any two consecutive terms to obtain the next one: an implicit description. The sequence 0, 3, 8, 15, … is formed according
to the formula n2 − 1 for the nth term: an explicit definition.
A prime number (or a prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself. A natural number greater than 1 that is not a prime number is called a composite
number. For example, 5 is prime because only 1 and 5 evenly divide it, whereas 6 is composite because it has the divisors 2 and 3 in addition to 1 and 6. The fundamental theorem of arithmetic
establishes the central role of primes in number theory: any integer greater than 1 can be expressed as a product of primes that is unique up to ordering. The uniqueness in this theorem requires
excluding 1 as a prime because one can include arbitrarily-many instances of 1 in any factorization, e.g., 3, 1 × 3, 1 × 1 × 3, etc. are all valid factorizations of 3.
The property of being prime (or not) is called primality. A simple but slow method of verifying the primality of a given number n is known as trial division. It consists of testing whether n is a
multiple of any integer between 2 and $\sqrt{n}$. Algorithms much more efficient than trial division have been devised to test the primality of large numbers. Particularly fast methods are available
for numbers of special forms, such as Mersenne numbers. As of February 2013, the largest known prime number has 17,425,170 decimal digits.
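Trial division as just described is a few lines of Python; applied to the title's question, it shows that the least prime greater than 90 is 97 (91 = 7 × 13, and 92 through 96 are all composite):

```python
def is_prime(n):
    """Trial division: test divisors d with d*d <= n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(n):
    """Least prime strictly greater than n."""
    n += 1
    while not is_prime(n):
        n += 1
    return n

answer = next_prime(90)   # 97
```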
Number theory (or arithmetic) is a branch of pure mathematics devoted primarily to the study of the integers, sometimes called "The Queen of Mathematics" because of its foundational place in the
discipline. Number theorists study prime numbers as well as the properties of objects made out of integers (e.g., rational numbers) or defined as generalizations of the integers (e.g., algebraic
Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory are often best understood through the study of analytical objects
(e.g., the Riemann zeta function) that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in
relation to rational numbers, e.g., as approximated by the latter (Diophantine approximation).
Related Websites: | {"url":"http://answerparty.com/question/answer/what-is-the-least-prime-number-that-is-greater-than-90","timestamp":"2014-04-20T00:39:18Z","content_type":null,"content_length":"26454","record_id":"<urn:uuid:9a65ce2e-2edf-4c32-96a8-ca05e5d65763>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
y=6x-9 identify the slope
y = mx + b is the slope-intercept equation, where m is the slope and b is where the line crosses the y-axis. So 6 is the slope.
ok thank you
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/512b956ee4b02acc415d47dd","timestamp":"2014-04-21T10:22:56Z","content_type":null,"content_length":"32384","record_id":"<urn:uuid:d39559b5-3bf6-4a45-ba45-8eff404ab1f1>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: 1
2008 by The University of Chicago. All rights reserved. DOI: 10.1086/527498
Appendix from P. A. Hohenlohe and S. J. Arnold, "MIPoD: A
Hypothesis-Testing Framework for Microevolutionary Inference from
Patterns of Divergence"
(Am. Nat., vol. 171, no. 3, p. 366)
Construction of the Phylogenetic Divergence Matrix
To further illustrate the construction of the phylogenetic divergence matrix A, we combine the G matrix shown
in figure 1A and the phylogeny shown in figure 2A. We thus have m = 2 traits and n = 3 taxa, so A will be a
6-by-6 matrix. First, the G matrix shown in figure 1A is parameterized by

(S, J) = (0.6, 0.8),   (A1)

so the 2-by-2 G matrix is

G = | 2.49  0.50 |
    | 0.50  2.51 |   (A2)

The matrix of shared ancestry T for the taxa x, y, and z is

    | t_xx  t_xy  t_xz |   | 1,000    200   200 |
T = | t_yx  t_yy  t_yz | = |   200  1,000   800 |   (A3)
| {"url":"http://www.osti.gov/eprints/topicpages/documents/record/684/2526747.html","timestamp":"2014-04-17T22:42:03Z","content_type":null,"content_length":"8134","record_id":"<urn:uuid:e0698dd3-3348-40ed-aec2-b60db0bf0835>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
Student Support Forum: 'Plotting Doesn't Plot' topic
Author Comment/Response
I have written the following Mathematica script:
NN = Sqrt[2 Sqrt[2]]/(Sqrt[2 (Sqrt[2] - 1)]*Pi*q);
x0 = Sqrt[Sqrt[2] (1 - I)]*q;
DD = 4 q^4*NN*((1 - I)/Sqrt[1 - I]/(s + I (C - x0))^2 +
   (1 - I)/(Sqrt[1 + I]*(s + I (C + Conj[x0]))^2))/(2^(3/4) q^3 (2 I));
Poly = s + DD + s I \[CapitalLambda] DD;
q = .15;
\[CapitalLambda] = 0;
sol = Solve[Poly == 0, s];
sol1 = N[s /. sol[[1]]];
sol2 = N[s /. sol[[2]]];
sol3 = N[s /. sol[[3]]];
sol4 = N[s /. sol[[4]]];
Plot[Re[sol1], {C, -10, 5}, PlotStyle -> Thick]
Plot[Im[sol1], {C, -10, 5}, PlotStyle -> Thick]
I have written similar scripts for other types of polynomials similar to Poly, and for some reason I get mixed results. The result here is that the plot is empty. Sometimes the plots
actually plot what they should, and other times I get nothing. Then I copy and paste from the files that do work, and it works out as I expect it to.
I have checked, and the output of Solve[*] is a function of C that shouldn't be causing such problems. Is anybody familiar with how to get around this, or why it is happening in the first
place? It seems very random to me right now why some of these inputs work and others don't.
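Two details worth checking in the script above: `Conj` is not a built-in Mathematica function (the built-in is `Conjugate`), and `C` is a reserved, protected symbol in Mathematica (used for generated constants), so leaving either in place can make `Solve` or `Plot` behave inconsistently. As a cross-check, the same sweep can be done purely numerically; below is a Python sketch using a stand-in quartic (a hypothetical placeholder; the actual coefficients of Poly in s would be substituted):

```python
import numpy as np

def root_branch(coeffs_of_c, c_values, index=0):
    """Solve the polynomial numerically for each parameter value C and
    track one root branch (sorting gives a crude but stable pairing)."""
    return np.array([np.sort_complex(np.roots(coeffs_of_c(c)))[index]
                     for c in c_values])

# Stand-in polynomial s^4 + C*s + 1 = 0; replace with Poly's coefficients.
cs = np.linspace(-10, 5, 151)
sol1 = root_branch(lambda c: [1.0, 0.0, 0.0, c, 1.0], cs)
re_part, im_part = sol1.real, sol1.imag   # the arrays one would actually plot
```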
Attachment: bustedcode.nb, URL: , | {"url":"http://forums.wolfram.com/student-support/topics/21254","timestamp":"2014-04-21T02:12:51Z","content_type":null,"content_length":"26925","record_id":"<urn:uuid:dcf521bf-74ea-4502-9d25-4de78ae265d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00260-ip-10-147-4-33.ec2.internal.warc.gz"} |
Guide Entry 08.06.10
Yale-New Haven Teachers Institute Home
When Will We Ever Use This? Predicting Using Graphs, by Nancy J. Schmitt
Guide Entry to 08.06.10:
High-school students are forever asking, “When will we ever use this?” To a math teacher the critical importance of math skills appears clear. The students’ inexperience makes it difficult for them
to envision how they might some day use some of the skills required in the math curriculum. Finding activities that are “fun,” and appropriate to the skill levels of the students, is a challenge for
a math teacher anywhere.
This unit will be organized linearly where each lesson builds on itself. Graphing may be done by hand, with computer graphing software, or graphing calculators, depending on the availability of
technology to the classroom teacher and the technical ability of the students. Because I teach at a magnet school with a business focus, these lessons will emphasize business decisions. A student’s
ability to perform data analysis and present the analysis in a clear format is crucial to good business foundations. It is the intent of this unit to provide the mathematical background to enable the
student to produce an appropriate graphical display based on the data analysis. However, the materials and topics will be appealing to the teenager, so that any student will be able to connect with
the lessons and see their application to some aspect of their current lives or future careers. The math skills and ideas that are included in this unit are based on learning to read and create
scatter plots and line graphs, fit a line to a scatter plot, and make simple predictions from the data, both within its range (interpolation) and beyond it (extrapolation). The skill level is geared to an
Algebra I class, but may be adapted to middle school or intensified for Algebra II, where regression analysis of the data by the student may be included.
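The unit's core skill, fitting a line to a scatter plot and predicting from it, can be sketched in a few lines of Python (the sales figures below are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + b to scatter-plot data."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return m, mean_y - m * mean_x

months = [1, 2, 3, 4, 5, 6]        # hypothetical business data
sales = [12, 15, 21, 24, 28, 33]   # units sold each month
m, b = fit_line(months, sales)
inside = m * 3.5 + b   # interpolation: a prediction within the data's range
outside = m * 9 + b    # extrapolation: beyond the data, so less trustworthy
```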
(Recommended for Algebra I, grade 9)
Contents of 2008 Volume VI | Directory of Volumes | Index | Yale-New Haven Teachers Institute | {"url":"http://www.yale.edu/ynhti/curriculum/guides/2008/6/08.06.10.x.html","timestamp":"2014-04-20T08:20:15Z","content_type":null,"content_length":"5128","record_id":"<urn:uuid:10d3a376-df12-417d-a5e8-d604ba29390e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
Küchemann, Dietmar
Number of items: 10.
Küchemann, Dietmar and Hoyles, Celia (2006) Influences on students' mathematical reasoning and patterns in its development: insights from a longitudinal study with particular
reference to geometry. International Journal of Science and Mathematics Education, 4 (4). pp. 581-608. ISSN 1571-0068
Hoyles, Celia and Küchemann, Dietmar and Healy, Lulu and Yang, Min (2005) Students' developing knowledge in a subject discipline: insights from combining quantitative and qualitative methods.
International Journal of Social Research Methodology, 8 (3). pp. 225-238. ISSN 1364-5579
Hoyles, Celia and Küchemann, Dietmar and Foxman, Derek (2003) Comparing geometry curricula: insights for policy and practice. Mathematics in School, 32 (3). pp. 2-6. ISSN 0305-7259
Hoyles, Celia and Küchemann, Dietmar and Foxman, Derek (2003) The role of proof in different geometry curricula. Mathematics in School, 32 (4). pp. 36-40. ISSN 0305-7259
Hoyles, Celia and Küchemann, Dietmar (2002) Students' understandings of logical implication. Educational Studies in Mathematics, 51. pp. 193-223. ISSN 0013-1954
Book Section
Küchemann, Dietmar and Hoyles, Celia (2002) Students' understanding of logical implication and its converse. In: Proceedings of the 26th Conference of the International Group
for the Psychology of Mathematics Education. School of Education and Professional Development, University of East Anglia, Norwich, pp. 241-248. ISBN 0953998363
Küchemann, Dietmar and Hoyles, Celia (2001) Investigating factors that influence students' mathematical reasoning. In: Proceedings of the 25th Conference of the International
Group for the Psychology of Mathematics Education. Freudenthal Institute, Faculty of Mathematics and Computer Science, Utrecht University, Utrecht, pp. 257-264. ISBN 9074684165
Conference or Workshop Item
Küchemann, Dietmar and Hoyles, Celia (2004) Year 10 students' proofs of a statement in number/algebra and their responses to related multiple choice items: longitudinal and
cross-sectional comparisons. In: UNSPECIFIED.
Hoyles, Celia and Küchemann, Dietmar (2002) Students' explanations in geometry: insights from a large-scale longitudinal survey. In: UNSPECIFIED.
Hoyles, Celia and Foxman, Derek and Küchemann, Dietmar and Kuchemann, Dietmar (2002) A comparative study of geometry curricula. Qualifications and Curriculum Authority, London. ISBN 1858385091 | {"url":"http://eprints.ioe.ac.uk/view/creators/K=FCchemann=3ADietmar=3A=3A.html","timestamp":"2014-04-16T13:20:02Z","content_type":null,"content_length":"13511","record_id":"<urn:uuid:9dcaba75-47bd-4e10-af35-73351f9c0824>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problems with trigonometry or algebra?
Ask a university teacher of science about their new students’ mathematical difficulties and the chances are you’ll be told that students can’t rearrange equations. They may go on to tell you that
this is the fault of poor school-teaching or of dumbing down the school curriculum 'these days'. I used to think that this argument was wrong on two counts. I still think that we should be looking
deeper into the causes of our students’ misunderstandings rather than apportioning blame. But what about rearranging equations?
I thought I’d posted previously about the fact S151 Maths for Science students are actually quite good at rearranging equations, but I can’t find this post. Perhaps that’s just as well, because I may
have generalised inappropriately. When answering questions on the ‘algebra’ chapter of Maths for Science, students seem good at rearranging equations (if not very good at substituting numerical
values and getting the units right!). However I have recently analysed the question shown below from the trigonometry chapter:
A correct answer to this question requires students to evaluate d/(tanθ). However the most common error (in 8.2% of responses) occurs when students find dtanθ instead. This may be caused by students
thinking that tanθ = h/d or it may be that students know that tanθ = d/h but then fail to rearrange this equation correctly to give an equation for h. So perhaps our students are not so good at
rearranging equations after all. Bother!
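The size of the error is easy to see numerically; in this Python sketch the angle and distance are invented, since the actual item's numbers are not given above:

```python
import math

theta = math.radians(30)          # hypothetical angle from the item
d = 10.0                          # hypothetical distance d

h_correct = d / math.tan(theta)   # from tan(theta) = d/h, rearranged for h
h_wrong = d * math.tan(theta)     # the common error described above
```

The two answers differ by a factor of tan^2(theta) (here a factor of 3), so the rearrangement slip is far from harmless.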
So, perhaps it is not so much to do with whether students can do algebra or trigonometry but rather that they are poor at translating skills and understanding learnt in one context to a different | {"url":"http://www.open.ac.uk/blogs/SallyJordan/?p=824","timestamp":"2014-04-16T16:59:41Z","content_type":null,"content_length":"9661","record_id":"<urn:uuid:927e8c1f-34b1-48b1-929b-2e20cfbcb3cf>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
Forces on Inclined Planes
Dynamics: Inclines & Machines
A. Forces on Inclined Planes:
Review Topics Covered:
This is a general review of essential topics.
More in-depth tutorials can be found using the quick search tool
• Free Body Diagrams
• Forces
• Vector Components,
• Gravity, and Weight
Free Body Diagrams (F.B.D.)
A free body diagram is a simple sketch showing all the forces acting on an object when it is on its own (i.e. removed from its surrounding). Here are some guidelines for drawing Free Body Diagrams.
• Forces are represented by vectors
• A relative scale should be used to show the relative size of the forces whenever possible.
• Only the forces acting on the object should be included, not the forces that the object exerts on other objects.
• All forces (vectors) should be drawn with respect to the center of gravity (or center of mass of the object). This point is usually located at the geometric center of the object.
Also see FORCES
A force is usually described as a vector quantity with a definite magnitude and a definite direction which can be either a push or a pull on an object.
Also see Components of Vectors
Any vector in space can be defined by its coordinates in terms of its x, y, and z position.
In a two dimensional frame we need only consider its x and y co-ordinates. Therefore any vector can be thought of as a resultant when its horizontal component in the x direction is added to its
vertical component in the y direction.
The x component of a vector V in standard position is Vx and the y component is Vy:
Vx = V cos θ and Vy = V sin θ
Then we can define V as
V = Vx + Vy
Gravity & Weight
The force of gravity, as we know, is expressed as Fg = mg, where g, the acceleration due to gravity close to the surface of the earth, is 9.8 m/s^2. A "more common" name for the force of gravity is
Weight. Weight should not be substituted as a synonym for mass.
The acceleration of an object down an inclined plane (a) is equal to the value of the acceleration due to gravity (g) diluted by a factor corresponding to the sine of the angle of the incline with
respect to the horizontal (sin θ).
Friction is generally defined as the "force that opposes" motion.
When the applied force on an object is equal to or less than the force of friction, the object will not move. If the applied force is slightly bigger than the force of friction, the object will move in
the direction of the applied force. There are two types of friction forces acting on the object.
• Static friction: before the object starts to move
• Kinetic friction: while the object is moving
Static Friction is larger than Kinetic (moving) friction.
If an object is moving at constant speed, it will slow down and eventually come to a stop because the force of friction is constantly acting against it.
In our laboratory experiments we found that friction depends on several factors:
1. The weight of the object i.e. the force of gravity ( Fg). Recall that Fg = m x g
2. Hence indirectly friction depends on mass
3. Friction also depends on the type of surface
4. The presence of a lubricant
The general equation for calculating friction is F[f] = μmg
Where μ is the coefficient of friction, m is the mass and g is 9.8 m/s^2
Sample Problem: The inclined Plane
A skier goes down a smooth 30^o hill (frictionless) for a distance of 10 m to the bottom of the hill where he then continues on a frozen, frictionless pond. After that, he goes up a hill inclined at
25^o to the horizontal. How far up this second hill does he go before stopping if the coefficient of friction on this hill is 0.10?
Analysis and Solution:
Part I - Accelerated Motion
a = (g)(sin 30^o)
= 9.8 × 0.5 = 4.9 m/s^2
V[1] = 0,  (V[2])^2 = (V[1])^2 + 2(a)(d),  d = 10 m
V[2] = 9.9 m/s
Part II - No acceleration
Therefore Fnet = ma = 0 and V[1] is the same as V[2] from Part 1.
Part III - Friction up an inclined plane
From the Free Body Diagram
F[net] = -F[gx] - F[f ]
ma = -(m)(g)(sin 25^o) - µ(m)(g)(cos 25^o)
m drops off on both sides of the equation:
a = -(g)(sin 25^o + µ cos 25^o) = -(9.8)(0.4226 + 0.10 × 0.9063) = -5.03 m/s^2
We use the acceleration to find the distance, d. Where V[2] is zero (the skier stops), (V[2])^2 = (V[1])^2 + 2(a)(d) gives d = -(V[1])^2/(2a), with V[1] = 9.9 m/s from Part 1.
We obtain a value for d:
d = (9.9)^2/(2 × 5.03) ≈ 9.7 m
Therefore the distance traveled by the skier before he stops up the second hill is about 9.7 m.
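The three-part calculation above can be replayed in a few lines of Python (a sketch using the same g, angles, and coefficient of friction):

```python
import math

g = 9.8
# Part I: frictionless 30-degree hill, 10 m long
a1 = g * math.sin(math.radians(30))   # 4.9 m/s^2
v = math.sqrt(2 * a1 * 10)            # speed at the bottom, ~9.9 m/s
# Part II: frictionless pond, so the speed is unchanged
# Part III: up a 25-degree hill with mu = 0.10
mu = 0.10
a3 = g * (math.sin(math.radians(25)) + mu * math.cos(math.radians(25)))
d = v**2 / (2 * a3)                   # distance travelled before stopping
```

This gives a deceleration of about 5.03 m/s^2 on the second hill and a stopping distance of about 9.7 m.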
Biology Chemistry Computer Engineering Electronics Mathematics Physics Science Home | {"url":"http://www.clickandlearn.org/Physics/sph4u/inclines.htm","timestamp":"2014-04-18T11:26:28Z","content_type":null,"content_length":"13770","record_id":"<urn:uuid:722fbb2b-a401-425e-8262-387286ec572f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Randomized Convergence
December 27th 2010, 06:36 PM
Randomized Convergence
I have essentially no knowledge of statistics, so this may be a well known topic.
Let $r:\mathbb{N} \to \{0,1\}$ be arbitrary.
What are the odds $\displaystyle \sum_{n=1}^\infty \frac{(-1)^{r(n)}}n$ converges?
-Thanks all!
December 27th 2010, 11:01 PM
Is that what you really want to ask? What you are asking appears to be:
does (or rather what is the probability that):
$\displaystyle \sum_{n=1}^{\infty}\dfrac{z_n}{n}$
converges where the $z_n$'s are sampled uniformly on the top half of the unit circle in the complex plane.
If that is what you mean, then the answer is probably (I will need to work out how to prove it but this is what I would place my money on): the real part converges with probability 1 and the
imaginary part converges with probability 0.
The heuristic argument behind this is that the real part behaves on average like the alternating harmonic series, while the imaginary part behaves on average like the harmonic series.
December 27th 2010, 11:05 PM
$\{0,1\}$ is the set containing just $0$ and $1$, not $[0,1]$.
December 27th 2010, 11:27 PM
Is that what you really want to ask? What you are asking appears to be:
does (or rather what is the probability that):
$\displaystyle \sum_{n=1}^{\infty}\dfrac{z_n}{n}$
converges where the $z_n$'s are sampled uniformly on the top half of the unit circle in the complex plane.
If that is what you mean, then the answer is probably (I will need to work out how to prove it but this is what I would place my money on): the real part converges with probability 1 and the
imaginary part converges with probability 0.
The heuristic argument behind this is that the real part behaves on average like the alternating harmonic series, while the imaginary part behaves on average like the harmonic series.
Ahhh.. rereading this what I should have taken this to mean is
does (or rather what is the probability that):
$\displaystyle \sum_{n=1}^{\infty}\dfrac{z_n}{n}$
converges where the $z_n$'s are sampled uniformly on $\{-1, 1\}$ (that is take the values $\pm1$ each with probability $0.5$).
Then the same heuristic would suggest that this converges with probability 1. I can show that the partial sums approach a random variable with zero mean and variance $\pi^2/6$ (but this only guarantees that the sum becomes unbounded with probability 0, not that the sum converges).
December 28th 2010, 12:39 AM
Considering CB's formulation: does the series $\displaystyle\sum_{n=1}^\infty \frac{z_n}{n}$, where $\displaystyle P(z_n=1)=P(z_n=-1)=1/2$, converge? Yes it does, and we can prove it using
martingales. I'm sorry but for that, you will need some knowledge of probability :p
Consider the natural filtration $\displaystyle \mathcal F_n=\sigma(z_1,\dots,z_n)$ and define $\displaystyle M_n=\sum_{i=1}^n \frac{z_i}{i}$.
It's easy to prove that $M_n$ is a $\mathcal F_n$-martingale, because the $z_i$ are independent with mean 0.
With this independence and mean 0, we can also write that $\displaystyle E[M_n^2]=\sum_{i=1}^n E\left[\left(\frac{z_i}{i}\right)^2\right]=\sum_{i=1}^n \frac{1}{i^2}<\sum_{i=1}^\infty \frac{1}{i^2}<\infty$
Hence $M_n$ is bounded in $L^2$ and from a martingale theorem, we deduce that it converges almost surely and in $L^2$ to a random variable $M_\infty$, which is in $L^2$.
So we get that $\displaystyle \sum_{i=1}^\infty \frac{z_i}{i}=\lim_{n\to\infty}\sum_{i=1}^n\frac{z_i}{i}$ converges almost surely (that is to say with probability 1).
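As a quick empirical companion to the proof above, one can simulate the partial sums directly (an illustrative Python sketch added here; the seed and cutoffs are arbitrary choices, not part of the argument):

```python
import random

def random_harmonic_partial_sums(n_terms, seed=0):
    """Partial sums M_k = sum_{i<=k} z_i / i with z_i = +/-1 equiprobable."""
    rng = random.Random(seed)
    total, sums = 0.0, []
    for i in range(1, n_terms + 1):
        total += rng.choice((-1, 1)) / i
        sums.append(total)
    return sums

sums = random_harmonic_partial_sums(100_000)
# The tail sum_{i>N} z_i / i has variance sum_{i>N} 1/i^2 ~ 1/N, so the
# late partial sums barely move, consistent with almost-sure convergence.
drift = max(abs(sums[-1] - s) for s in sums[50_000:])
print(drift)  # small; typically well below 0.1
```

Rerunning with different seeds shows the limit varies from run to run, as expected: the limit is the random variable $M_\infty$, not a constant.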
December 29th 2010, 12:26 AM
You can use the Khintchine-Kolmogorov Convergence Theorem instead.
December 29th 2010, 06:55 AM
Let's define a random variable $\chi$ as...
$\displaystyle \chi= \sum_{n=1}^{\infty} \chi_{n} = \sum_{n=1}^{\infty} \frac{z_{n}}{n}$ (1)
... where the $z_{n}$ are discrete random variables with $P \{z_{n}=-1\}= P \{z_{n}=+1 \}= \frac{1}{2}$. Each $\chi_{n}$ has p.d.f. given by...
$\displaystyle \sigma_{n}(x)= \frac{\delta(x -\frac{1}{n}) + \delta(x+ \frac{1}{n})}{2}$ (2)
... and each $\sigma_{n} (*)$ has Fourier transform given by...
$\displaystyle \Sigma_{n} (\omega) = \cosh (i\ \frac{\omega}{n}) = \cos \frac{\omega}{n}$ (3)
Setting $\sigma(x)$ the p.d.f of $\chi$ and $\Sigma(\omega)$ its Fourier transform is...
$\displaystyle \Sigma(\omega)= \prod_{n=1}^{\infty} \Sigma_{n} (\omega) = \prod_{n=1}^{\infty} \cos \frac{\omega}{n}$ (4)
Now $\sigma(x)$ [if it exists...] can be obtained as inverse Fourier Transform of $\Sigma(\omega)$... but that requires some more efforts from me! (Thinking)...
Kind regards
December 29th 2010, 07:58 AM
Let's define a random variable $\chi$ as...
$\displaystyle \chi= \sum_{n=1}^{\infty} \chi_{n} = \sum_{n=1}^{\infty} \frac{z_{n}}{n}$ (1)
... where the $z_{n}$ are discrete random variables with $P \{z_{n}=-1\}= P \{z_{n}=+1 \}= \frac{1}{2}$. Each $\chi_{n}$ has p.d.f. given by...
$\displaystyle \sigma_{n}(x)= \frac{\delta(x -\frac{1}{n}) + \delta(x+ \frac{1}{n})}{2}$ (2)
... and each $\sigma_{n} (*)$ has Fourier transform given by...
$\displaystyle \Sigma_{n} (\omega) = \cosh (i\ \frac{\omega}{n}) = \cos \frac{\omega}{n}$ (3)
Setting $\sigma(x)$ the p.d.f of $\chi$ and $\Sigma(\omega)$ its Fourier transform is...
$\displaystyle \Sigma(\omega)= \prod_{n=1}^{\infty} \Sigma_{n} (\omega) = \prod_{n=1}^{\infty} \cos \frac{\omega}{n}$ (4)
Now $\sigma(x)$ [if it exists...] can be obtained as inverse Fourier Transform of $\Sigma(\omega)$... but that requires some more efforts from me! (Thinking)...
Kind regards
But what's the point of all this?
Also, specify with respect to which measure you're taking the pdf oO
December 29th 2010, 11:21 AM
The following example will [I do hope...] clarify...
... let's define the random variable $\chi$ as...
$\displaystyle \chi= \sum_{n=1}^{\infty} \chi_{n} = \sum_{n=1}^{\infty} \frac{z_{n}}{2^{n}}$ (1)
... where the $z_{n}$ are discrete random variables with $P \{z_{n}=-1\}= P \{z_{n}=+1 \}= \frac{1}{2}$. Each $\chi_{n}$ has p.d.f. given by...
$\displaystyle \sigma_{n}(x)= \frac{\delta(x -\frac{1}{2^{n}}) + \delta(x+ \frac{1}{2^{n}})}{2}$ (2)
... and each $\sigma_{n} (*)$ has Fourier transform given by...
$\displaystyle \Sigma_{n} (\omega) = \cosh (i\ \frac{\omega}{2^{n}}) = \cos \frac{\omega}{2^{n}}$ (3)
Setting $\sigma(x)$ the p.d.f of $\chi$ and $\Sigma(\omega)$ its Fourier transform is...
$\displaystyle \Sigma(\omega)= \prod_{n=1}^{\infty} \Sigma_{n} (\omega) = \prod_{n=1}^{\infty} \cos \frac{\omega}{2^{n}}$ (4)
Now the following 'infinite product' is well known...
$\displaystyle \prod_{n=1}^{\infty} \cos \frac{\omega}{2^{n}} = \frac{\sin \omega}{\omega}$ (5)
... so that is...
$\displaystyle \sigma(x)= \mathcal{F}^{-1} \{\frac {\sin \omega}{\omega} \} = \left\{\begin{array}{ll}\frac{1}{2} ,\,\,|x|<1\\{}\\0 ,\,\, |x|>1\end{array}\right.$ (6)
... i.e. $\chi$ is uniformly distributed between -1 and +1... and that's not a surprise! (Wink)...
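The infinite product in (5) is also easy to verify numerically; here is a small check (my own sketch, with an arbitrary truncation at 40 factors):

```python
import math

def viete_product(omega, n_terms=40):
    """Truncated product prod_{n=1}^{N} cos(omega / 2^n) from (5)."""
    p = 1.0
    for n in range(1, n_terms + 1):
        p *= math.cos(omega / 2.0 ** n)
    return p

for omega in (0.5, 1.0, 2.0):
    print(abs(viete_product(omega) - math.sin(omega) / omega) < 1e-12)
# prints True three times: the truncation converges extremely fast,
# since cos(omega / 2^n) differs from 1 by about omega^2 / 2^(2n+1)
```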
For the [very interesting...] question proposed by chip588@ we have to establish whether or not the following exists...
$\displaystyle \sigma(x)= \mathcal{F}^{-1} \{ \prod_{n=1}^{\infty} \cos \frac{\omega}{n} \}$ (7)
Kind regards | {"url":"http://mathhelpforum.com/advanced-statistics/166982-randomized-convergence-print.html","timestamp":"2014-04-19T15:53:40Z","content_type":null,"content_length":"32079","record_id":"<urn:uuid:2124d813-adbd-4092-9e48-28f1e1833de5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH M447 3229 Mathematical Models and Applications I
Mathematics | Mathematical Models and Applications I
M447 | 3229 | Thompson
P: M301 or M303, M311, M360 or M365, which may be taken concurrently, or
consent of instructor. Formation and study of mathematical models used in
the biological, social, and management sciences. Mathematical topics
include games, graphs, Markov and Poisson processes, mathematical
programming, queues, and equations of growth. M446, I Sem. | {"url":"http://www.indiana.edu/~deanfac/blfal99/math/math_m447_3229.html","timestamp":"2014-04-23T06:59:10Z","content_type":null,"content_length":"921","record_id":"<urn:uuid:4f164f87-666a-4f17-902c-9bb87da53941>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Set theory definition of addition, negative numbers, and subtraction?
Using the definition of natural numbers $0 = \emptyset$ and $S(n) = n \cup \lbrace n \rbrace$ where S is the successor function, what is the definition of addition on natural numbers?
Concerning the definition of negative integers, the Wikipedia entry is a bit ambiguous: http://en.wikipedia.org/wiki/Integer#Construction
It seems to claim that the integers are defined such that, for example, the natural number 2 is not equal to the integer 2, which is defined as $\lbrace n \in \mathbb{N}^2 | \exists m \in \mathbb
{N}:n = (m + 2,m) \rbrace$ or simply $ \lbrace (2,0), (3,1), (4,2), ... \rbrace$, whereas the natural number 2 would be $ \lbrace 0,1 \rbrace = \lbrace \emptyset , \lbrace \emptyset \rbrace \rbrace
$. It was my understanding that the above definition for integers was reserved for the negative integers, for example $ -1 = \lbrace (0,1), (1,2), ... \rbrace $, and the integers would be defined
as the union of the sets of negative integers and natural numbers. Is wikipedia wrong and natural 2 = integer 2, or is it the other way around? And in the latter case, are rational 2, real 2 and
complex 2 also distinct? (And the ordinal number 2, and the cardinal number 2...)
Assuming the former case, is there an "official" way of defining things next? It seems to me like the easiest way of doing things would be defining first subtraction for $(n,m)$ where $ n \ge m $;
that is, $n-m$ is the natural number $ d $ such that $ n = m + d $. Then defining additive inversion, i.e. the unary operation $-$ s.t. $ 0 \mapsto 0$, $ n \mapsto -n$ for nonzero natural $n$ (here
$-n$ is the already defined negative integer and not the $-$ operation on $n$), and for negative integers $n$, $ n \mapsto m $ where $m$ is the natural number s.t. $ (0,m) \in n $. From there we can
define addition and subtraction more generally for all combinations of natural numbers and negative integers, using additive inversion whenver we get a negative result, for example, $ 3 - (-4) = 3 +
4 $, $ 3 + (-4) = 3 - 4 = - (4-3) $, $ (-3) - 4 = (-3) + (-4) = - (3+4) $.
Tl;dr: what is the definition of addition on naturals, is natural 2 = integer 2 or are they distinct elements, and how do we define addition and subtraction on integers?
set-theory definitions
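On the first question — the standard definition of addition on the naturals is the recursion $m + 0 = m$, $m + S(n) = S(m + n)$. A small executable model of this over the von Neumann encoding (an illustrative sketch; the frozenset encoding and the helper names are my choices, not canonical):

```python
def succ(n):
    """von Neumann successor: S(n) = n U {n}."""
    return n | frozenset([n])

ZERO = frozenset()
ONE, TWO = succ(ZERO), succ(succ(ZERO))

def pred(n):
    """For n = S(k), recover k: it is the element of n of largest size."""
    return max(n, key=len)

def add(m, n):
    """Recursive definition: m + 0 = m,  m + S(k) = S(m + k)."""
    return m if n == ZERO else succ(add(m, pred(n)))

print(add(TWO, TWO) == succ(succ(TWO)))  # True: 2 + 2 = 4
```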
Perhaps you should read Benacerraf's classic 'What numbers could not be' (scribd.com/doc/56939539/Benacerraf-What-Numbers-Could-Not-Be). The point of numbers is not that they are coded in a
certain way, but that they are models of the axioms of arithmetic, ring theory etc. If you are worried about whether $2\in\mathbb{N}$ is equal to $2\in\mathbb{Z}$, then what's going to really bake
your noodle is the statement $3\in 5$ (and many others, like $\pi\in \sin x$, which is a legitimate assertion in ZFC). $3\in 5$ is true in some codings of the natural numbers and false in others.
– David Roberts Nov 6 '12 at 6:19
@David Thanks for this comment. That was what I wanted to point out in my answer, but you also give very good (counter?)examples of why not asking these questions. – jmc Nov 6 '12 at 6:24
closed as off topic by Andres Caicedo, Douglas Zare, Andreas Blass, Goldstern, Andy Putman Nov 7 '12 at 4:08
2 Answers
There are many different ways of defining the natural numbers, integers, fractions, reals and complex numbers. I for myself do not think there is a canonical way. Thus, Wikipedia is not wrong, and there is not a way to do it "more right".
You certainly do not want to think of all these numbers as their underlying sets. One could start thinking about the intersection of $\frac{2}{3}$ (as fraction) with $\pi$ (as real),
but it would make absolutely no sense.
What really matters is the algebraic structure. No matter which definition you give of $\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}$, you can identify them in a natural way. If $\mathbb{Z}_{1}$ is your first set-theoretic definition of the integers, and $\mathbb{Z}_{2}$ is another one, then there is a canonical function $f \colon \mathbb{Z}_{1} \to \mathbb{Z}_{2}$ such that:
• $f(m +_{1} n) = f(m) +_{2} f(n)$
• $f(m \times_{1} n) = f(m) \times_{2} f(n)$
• $f(0_{1}) = 0_{2}$
• $f(1_{1}) = 1_{2}$.
So, what really matters is this algebraic structure of these sets.
Oh, and those natural identifications that I wrote about also preserve things like natural orderings (if available) and such. You might want to take a look at www.en.wikipedia.org/wiki/Category_theory . I hope that it might help you to look at your questions from a different (in my eyes, more satisfying) view. – jmc Nov 6 '12 at 4:04
The set of natural numbers $\mathbb N$, together with its natural addition, is a commutative semigroup. There is a standard way (*) to extend a commutative semigroup $(S,+)$ to a
commutative group $(G,+)$: G is the quotient space of $S\times S$ by the relation $(a,b)\sim (a',b')$ if and only if $a'+b=a+b'$, with addition $[a,b]+[a',b']=[a+a',b+b']$. All the
formulas with sign are easily shown.
The semigroup $S$ is not a subset of $G$ but embeds naturally as a semigroup by the injective map $a\mapsto [a,0]$.
The construction of $(\mathbb Z,+)$ from $(\mathbb N,+)$ is exactly the construction above.
(*) but I don't think we can call it "official"
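For concreteness, the quotient construction can be modeled by picking a canonical representative in each class $[a,b]$ (a sketch of mine, not part of the answer; natural-number subtraction appears only to normalize representatives):

```python
def normalize(a, b):
    """Canonical representative of the class of (a, b) in N x N."""
    m = min(a, b)
    return (a - m, b - m)

def int_add(p, q):
    """[a, b] + [a', b'] = [a + a', b + b']."""
    return normalize(p[0] + q[0], p[1] + q[1])

def embed(n):
    """The semigroup embedding a -> [a, 0]."""
    return (n, 0)

# The class of (0, 3) plays the role of -3:
print(int_add(embed(5), (0, 3)) == embed(2))  # True: 5 + (-3) = 2
```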
Well, it is pretty natural in the sense that it is adjoint to the inclusion of commutative groups into commutative semigroups (/monoids). I would call it official. But does it
answer his question? The poster seems pretty concerned about underlying sets for these objects. – jmc Nov 6 '12 at 3:59
@Johan: I don't think Taladris meant that the adjoint isn't official but rather that this particular construction of it, as a quotient of the product, isn't official. – Andreas Blass Nov 6 '12 at 12:29
@Taladris: In order to conclude that $S$ embeds naturally in $G$, one needs to assume more about $S$ than just that it's a commutative semigroup. One needs cancellation. So it's OK for $\mathbb N$, but in the general case one only has a natural homomorphism $S\to G$ which might not be one-to-one. – Andreas Blass Nov 6 '12 at 12:31
An addendum to my preceding comment: In the absence of cancellation, the alleged equivalence relation defined in this answer can fail to be transitive. The general construction would
identify $(a,b)$ with $(a',b')$ if there is some $c\in S$ such that $a'+b+c=a+b'+c$. (As in my previous comment, this is unnecessary in the construction of $\mathbb Z$ from $\mathbb N$
because there we have cancellation.) – Andreas Blass Nov 6 '12 at 12:35
@Andreas: you're perfectly right. Actually, the definition of semigroup seems to differ from one author to another. "Semigroup" in my answer is what is called in Wikipedia a "monoid with
cancellation property", i.e. a set with an associative binary operation, an identity element for this operation and which has the cancellation property. – Taladris Nov 6 '12 at 13:31
Not the answer you're looking for? Browse other questions tagged set-theory definitions or ask your own question. | {"url":"http://mathoverflow.net/questions/111606/set-theory-definition-of-addition-negative-numbers-and-subtraction","timestamp":"2014-04-17T10:17:29Z","content_type":null,"content_length":"60532","record_id":"<urn:uuid:ef9c0cf9-a655-4da5-adcc-8e3dbbc11745>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Jokes 4 Mathy Folks
My students frequently ask me if I know any mathematical jokes. Of course, I do know some. I don’t quite know why they so enjoy hearing me tell them. Perhaps it’s just that it’s fun to see the
professor making a fool of himself. Or maybe it’s the geeky in-group feeling that we get by being able to laugh at things that the proverbial person in the street would not understand:
Do you know any anagrams of Banach-Tarski?
Banach-Tarski Banach-Tarski.
G. Patrick Vennebush must be very self-confident. In his introduction to this collection of mathematical jokes, he says that "all of the jokes in this book are funny," though their appropriate context
may vary. Some, he says, “will be funnier to elementary school students than to adults,” while others “should only be told at the pub.”
Well, some of the jokes in this book are funny. Some are very familiar, and so don’t generate more than a smile of recognition. Some are not jokes at all:
What is the difference between an argument and a proof?
An argument will convince a reasonable man, but a proof is needed to convince an unreasonable one.
Whether that's true or not, I don't see why it even comes close to being funny. And some, alas, have been garbled:
How can you tell that Harvard was planned by a mathematician?
The divinity school is next to the grad school.
No, no, no, it’s the div school that’s next to the grad school. And I really hope there is no curl school at Harvard.
What do you get if you cross an elephant with a mountain climber?
You can’t, because a mountain climber is a scaler.
Of course, that should be a mosquito, not an elephant, because the mosquito is a vector.
In general, the shorter jokes (question and answer, like the ones above, one-liners, light bulb jokes) are better than the longer jokes, though I suppose the latter could be made funny if told well.
A few famous jokes are not here, such as the one that ends with “Consider a spherical cow…” (Gene Wolfe once said that the cow’s name was probably Rotunda.) Several jokes appear slightly differently
from the way I’ve heard them, which is par for the course: jokes are folk literature, and they change as they move from one person to the next.
Should you buy the book? I don’t know. After all, a great many of these jokes can be found online. (In fact, the author asks his readers to send him any jokes he has missed, and promises to post
those online.) But if you like mathematical jokes, you might enjoy having a copy.
My favorite? I don’t know. Sometimes I go for nonsense:
How many topologists does it take to change a lightbulb?
Just one. But what will you do with the doughnut?
Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College in Waterville, ME. | {"url":"http://www.maa.org/publications/maa-reviews/math-jokes-4-mathy-folks","timestamp":"2014-04-21T13:05:38Z","content_type":null,"content_length":"96749","record_id":"<urn:uuid:be91baec-5ae6-4af8-8b91-1ac5e98c1a1c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Converting Deg F To Deg C For Ramp Rates - In the Studio
Following on from the discussion on re-firing and cracks...........
I've tried to convert the below ramp from deg F to deg C, and calculate how long the firing will take. (Theory of course, I know the kiln may not be able to keep up/down with the ramp rate, but it
will give me a guide as to firing time.)
Cone 6 Glaze Firing
100 degrees F per hour to 220 degrees F, no hold
350 degrees F per hour to 2000 degrees F, no hold
150 degrees F per hour to 2185 degrees F, hold 15 minutes
On the way down:
500 degrees F per hour to 1900 degrees F, no hold
125 to 175 degrees F per hour to 1450 degrees F, no hold
Cool naturally from 1450 degrees F
[Deleted text that didn't format - see post after this]
The maths on both sides looks OK. I converted the deg F to deg C using the Excel Convert formula, and double-checked the results using online conversion sites, so they are correct. But the results to
work out time taken don't make sense. I used (EndTemp-StartTemp)/Ramp, looking at degF and degC separately, the results look sensible, but I would expect them to be the same as each other.
So, why does firing in deg C take longer than firing in deg F? I don't believe in Santa Claus, and I don't believe these figures either.
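A likely resolution (my diagnosis, not stated in the thread): a ramp rate is a temperature *difference* per hour, and differences convert by the factor 5/9 alone — the 32-degree offset cancels. Running the absolute-temperature formula (F − 32) × 5/9 on a rate (presumably what Excel's CONVERT did here) makes the Celsius schedule look slower. A quick Python check on one segment of the schedule above:

```python
def f_to_c(temp_f):
    """Absolute temperature: C = (F - 32) * 5/9."""
    return (temp_f - 32.0) * 5.0 / 9.0

def rate_f_to_c(rate_f):
    """A rate is a temperature *difference*, so the offset cancels."""
    return rate_f * 5.0 / 9.0

# Segment: 350 deg F per hour from 220 deg F up to 2000 deg F.
hours_f = (2000 - 220) / 350.0
hours_c = (f_to_c(2000) - f_to_c(220)) / rate_f_to_c(350)
print(round(hours_f, 3), round(hours_c, 3))  # 5.086 5.086 -- identical

# The suspected bug: converting the rate with the offset formula.
wrong_hours = (f_to_c(2000) - f_to_c(220)) / f_to_c(350)
print(round(wrong_hours, 3))                 # 5.597 -- spuriously longer
```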
Can anyone throw any light on this? | {"url":"http://community.ceramicartsdaily.org/topic/5347-converting-deg-f-to-deg-c-for-ramp-rates/","timestamp":"2014-04-17T03:53:36Z","content_type":null,"content_length":"88907","record_id":"<urn:uuid:bd2e46fd-8c1c-4477-a74f-ca22f4a8d040>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
The European Mathematical Society
Mathematics Meets Physics on the occasion of Antti Kupiainen's 60th birthday
http://wiki.helsinki.fi/display/mathphys/mathphys2014 Location: See the above web site for more information
A four day conference exploring the frontiers of mathematical physics on the occasion of Antti Kupiainen's 60th birthday will be organized in Helsinki, June 24-27 2014.
The aim of the conference is to foster the exchange of recent breakthroughs, new ideas and advances in methodology by bringing together world-leading experts, ranging the full spectrum from pure
mathematics to physics. | {"url":"http://www.euro-math-soc.eu/node/4243","timestamp":"2014-04-18T00:40:22Z","content_type":null,"content_length":"11798","record_id":"<urn:uuid:982f64d9-ca1b-4840-a6d3-805614b8efa3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matrix Operations Preserving Hurwitz Stability
I begin with terminology I use in the question. A real square matrix $A$ is
• negative-stable if for every eigenvalue $\lambda$ of $A$, ${\mathrm{Re}}(\lambda) < 0$;
• $\ast$-negative-stable if for every eigenvalue $\lambda$ of $A$, either $\lambda = 0$ or ${\mathrm{Re}}(\lambda) < 0$;
• nonpositive-stable if for every eigenvalue $\lambda$ of $A$, ${\mathrm{Re}}(\lambda) \leqslant 0$.
I made up the term '$\ast$-negative-stable' and I would welcome better and/or established terminology. For example, the Laplacian matrix of a nonnegatively weighted (directed or undirected) graph is $\ast$-negative-stable.
To put it broadly, I am looking for what is known about matrix operations that preserve the above stability properties.
Let $A$ be a real $n{\times}n$ matrix and let $u$, $v$, $w$ be real $n{\times}1$ vectors. Consider the real $n{\times}n$ matrices $D = \mathrm{diag}(u)$ and $B = vw^{\mathrm{T}}$, and the real number
$\alpha = w^{\mathrm{T}}v$. I am particularly interested in what additional conditions on the matrix $A$ would make the following implications true. (I do not mean simultaneously true.) They concern
preserving stability from $A$ to $AD$ for the first three and from $A$ to $A+B$ for the last three.
1. ( $A$ is negative-stable and $u$ is positive ) $\Rightarrow$ ( $AD$ is negative-stable )
2. ( $A$ is $\ast$-negative-stable and $u$ is nonnegative ) $\Rightarrow$ ( $AD$ is $\ast$-negative-stable )
3. ( $A$ is nonpositive-stable and $u$ is nonnegative ) $\Rightarrow$ ( $AD$ is nonpositive-stable )
4. ( $A$ is negative-stable and $\alpha < 0$ ) $\Rightarrow$ ( $A + B$ is negative-stable )
5. ( $A$ is $\ast$-negative-stable and $\alpha \leqslant 0$ ) $\Rightarrow$ ( $A + B$ is $\ast$-negative-stable )
6. ( $A$ is nonpositive-stable and $\alpha \leqslant 0$ ) $\Rightarrow$ ( $A + B$ is nonpositive-stable )
In implications 2 and 5 about $\ast$-negative-stability, it would be acceptable to assume that $A$ is similar to a Laplacian matrix (but Laplacian matrices should not be assumed to be symmetric).
Would that be sufficient?
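As a sanity check that implication 1 really does need additional conditions: negative stability alone is not preserved by positive diagonal scaling (this is the classical question of D-stability). A small counterexample I constructed for illustration, not from the question:

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A is negative-stable (Hurwitz): trace -3 < 0 and determinant 2 > 0.
A = (1, -2, 3, -4)
print(max(l.real for l in eig2(*A)))   # -1.0

# Column-scale by D = diag(5, 1): AD = [[5, -2], [15, -4]] has trace 1 > 0,
# so AD is no longer negative-stable even though u = (5, 1) is positive.
AD = (5, -2, 15, -4)
print(max(l.real for l in eig2(*AD)))  # 0.5
```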
Addendum 1
Here is a way $\ast$-negative stability can be useful in studying negative/Hurwitz stability. Suppose I know that a matrix $A$ is similar to $C = \begin{pmatrix} O_{p{\times}p} & O_{p{\times}q} \\\ S
& T \end{pmatrix}$, where $T$ is nonsingular. Then $T$ is negative-stable if and only if $A$ is $\ast$-negative-stable, a potentially useful observation if $A$ looks easier to work with than $T$. The
notion of nonpositive stability can become useful in similar (but not identical) circumstances.
linear-algebra ds.dynamical-systems matrices graph-theory
I suggest you look at M-matrix theory, if you haven't yet. It should help you settle 1--3, at least. Moreover, I'd like to point out that your second and third definitions of stability are somewhat counterintuitive: for instance, $A=\begin{bmatrix}0 & 1 \\ 0 & 0\end{bmatrix}$ is "*-negative-stable" and "nonpositive-stable", yet $e^{At}$ diverges. Usually a condition on the multiplicities of purely imaginary eigenvalues is assumed in addition. – Federico Poloni May 16 '12 at 7:39
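The comment's example is easy to verify by hand: the matrix above is nilpotent ($A^2 = 0$), so $e^{At} = I + At$ exactly, and the $(1,2)$ entry grows without bound even though both eigenvalues are zero. A one-liner (mine) to make that concrete:

```python
def expm_shift(t):
    """exp(At) for A = [[0, 1], [0, 0]]; since A^2 = 0, exp(At) = I + A*t."""
    return [[1.0, t], [0.0, 1.0]]

print(expm_shift(10.0))  # [[1.0, 10.0], [0.0, 1.0]] -- unbounded as t grows
```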
Thanks, Federico. I recognize that "$\ast$-negative-stable" and "nonpositive-stable" can seem strange without explanation of where they come from. Also, note that $\ast$-negative-stable matrices
do not have purely imaginary eigenvalues, except possibly zero. I augmented my question with Addendum 1 to provide some motivation. – Gilles Gnacadja May 16 '12 at 17:32
Browse other questions tagged linear-algebra ds.dynamical-systems matrices graph-theory or ask your own question. | {"url":"https://mathoverflow.net/questions/97080/matrix-operations-preserving-hurwitz-stability","timestamp":"2014-04-16T13:36:42Z","content_type":null,"content_length":"51270","record_id":"<urn:uuid:36106713-7587-419a-973c-6d1351ed58c1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Helmut Werner (1931-1985)
• Schaback, R., Helmut Werner, a vita written specifically for this website.
• Braess, D. Helmut Werner, Computing 36 (1986), 181-182. For those with access, this article can be found here.
• Braess, D., and R. Schaback, Helmut Werner, Jahresbericht DMV 89 (1987), 179-195. This article can be found in this issue of Jahresbericht.
• Cuyt, A., A bibliography of the works of Prof. Dr. H. Werner, J. Comp. and Applied Math. 19 (1987), 3-8. For those with access, this article can be found here.
• Werner, I., In Memoriam, pages VII-IX of Rational Approximation and its Applications in Mathematics and Physics, (Lancut, 1985), 331–350, LNM 1237, Springer, Berlin, 1987. | {"url":"http://www.math.technion.ac.il/hat/people/werner.html","timestamp":"2014-04-17T21:24:01Z","content_type":null,"content_length":"2632","record_id":"<urn:uuid:f0ba8d79-c7a8-459f-9bac-fda971f64386>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can you marry your half-sister's daughter?
Gary Trudeau poses an interesting question.
Let's lend Zipper and Jeff a hand. Here is a pedigree of the family with Jeff, Alex (Jeff's half-niece), J.J. (Jeff's half-sister and Alex's mom), and Joanie (Jeff and J.J.'s mom) labeled. Jeff and
Alex are the filled in symbols. Circles indicate females and squares indicate males. Horizontal lines are matings and the vertical lines lead downward toward the progeny of that mating.
We will define the inbreeding coefficient (F) as the probability that both alleles in a single individual are identical by descent (IBD). IBD simply means that both alleles came from the same allele in one of the ancestors -- in this case,
Joanie. We are trying to figure out what is the probability that Jeff and Alex's hypothetical child would get the same allele from both of his/her parents. To do this, we will redraw the pedigree so
that we only focus on the individuals in question. Each passing of gametes (birth) is represented by an arrow. Only one of the two parents is shown for each mating. Jeff and Alex's hypothetical child
is represented by a diamond.
Joanie has two alleles for the gene in question -- let's call it the "A" gene -- A[1] and A[2]. She could have passed either allele on to either of her children, J.J. and
Jeff, with equal probability, 0.5. In order for the hypothetical child to get two copies of the same allele (IBD) both Jeff and Alex would have to carry the same allele and they would both need to
pass it on to the child. Here are the important probabilities:
1. The probability Jeff gets the A[1] allele from Joanie is 0.5. The probability he passes on the A[1] allele to his child assuming he has the allele is 0.5. The probability he passes on the A[1]
allele to his child is the product of these two probabilities: 0.25.
2. The probability Alex gets the A[1] allele from Joanie is the probability J.J. gets the allele from Joanie times the probability J.J. passes it along to Alex. The product of those two events (0.5
x 0.5) is 0.25. The probability Alex passes on the A[1] allele to her child assuming she gets the allele from Joanie is 0.5. Therefore, the probability Alex passes along the A[1] allele to her
child is the probability she gets it times the probability she passes it along, or 0.125 (0.25 x 0.5).
3. Now that we have the probability Jeff passes the A[1] allele to his child (0.25) and the probability Alex passes the allele to her child (0.125) we can calculate the probability they both pass
the allele on to their hypothetical child if they were to mate. This is just the product of the previous two probabilities (0.25 x 0.125): 0.03125, or 1/32.
Up to this point, we have only dealt with the A[1] allele. Jeff and Alex's hypothetical child could also receive the A[2] allele from both parents. Repeating the steps above for the A[2] allele gives
the same answer as for the A[1] allele, 1/32. Because the child can either have the genotype A[1]A[1] or A[2]A[2], the two events are mutually exclusive. This means we can add the probability that
the child will be A[1]A[1] and the probability the child will be A[2]A[2] to get the probability the child will have two alleles that are IBD for a single gene. This gives us a final answer of 1/16.
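The 1/16 answer is easy to check by simulation (a Python sketch added for illustration; "other" stands in for any allele not inherited from Joanie):

```python
import random

def simulate_ibd(trials=200_000, seed=1):
    """Fraction of simulated Jeff x Alex offspring with two IBD alleles."""
    rng = random.Random(seed)
    joanie = ("A1", "A2")
    ibd = 0
    for _ in range(trials):
        jeff_allele = rng.choice(joanie)   # Jeff's copy from Joanie
        jj_allele = rng.choice(joanie)     # J.J.'s copy from Joanie
        # Alex inherits J.J.'s Joanie-derived allele half the time.
        alex_allele = jj_allele if rng.random() < 0.5 else "other"
        # The child is IBD when both parents carry the same Joanie allele
        # and each transmits it (probability 1/2 apiece).
        if (alex_allele == jeff_allele
                and rng.random() < 0.5 and rng.random() < 0.5):
            ibd += 1
    return ibd / trials

print(round(simulate_ibd(), 3))  # close to 1/16 = 0.0625
```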
For any given gene, Jeff and Alex's child has a 6.25% chance (1/16) of being IBD. For most genes this should not be a problem, but if the child ended up being homozygous for a recessive deleterious
allele, this could be catastrophic. As the probability of being IBD at a given locus increases, so does the probability of having two copies of a recessive deleterious allele. As a point of
comparison, the probability of being IBD for a child from a mating between first cousins is 1/16. Matings between cousins are generally frowned upon because of the risks associated with heritable diseases. Jeff and Alex's child would have the same risk of a heritable disease due to recessive deleterious alleles as a child from a mating between first cousins.
In conclusion, Jeff can have his half-niece Alex stay with him, but only Zipper is allowed to fool around with her.
Hedrick, PW. 1999. Genetics of Populations. Jones and Bartlett Publishers, Sudbury, MA, USA.
5 Comments: | {"url":"http://evolgen.blogspot.com/2005/10/can-you-marry-your-half-sisters_16.html","timestamp":"2014-04-18T23:25:07Z","content_type":null,"content_length":"32041","record_id":"<urn:uuid:094ffa50-c26c-42c1-a10b-c2a17fdaf498>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Rhombus Area Problem
Here’s a rhombus area problem involving triangles and ratios: Find the area of rhombus RHOM given that MB is 6 and that the ratio of RB to BH is 4 : 1, as shown in the following figure.
This problem’s a bit gnarly. You might feel that you’re not given enough information to solve it or that you just don’t know how to begin. If you ever feel this way when you’re in the middle of a
math problem, here’s a great tip for you:
If you get stuck when doing a geometry problem — or any kind of math problem, for that matter — do something, anything! Begin anywhere you can: Use the given information or any ideas you have (try
simple ideas before more-advanced ones) and write something down. Maybe draw a diagram if you don’t have one. Put something down on paper. This tip is surprisingly effective. One idea may trigger
another, and before you know it, you’ve solved the problem.
Then, because all sides of a rhombus are congruent, RM must equal RH, which is 4x + x, or 5x. Applying the Pythagorean theorem to right triangle RBM gives (4x)^2 + 6^2 = (5x)^2, so 16x^2 + 36 = 25x^2, which simplifies to x^2 = 4, or x = ±2.
Because the lengths of the sides must be positive, you reject the answer x = –2. The length of the base, segment RH, is thus 5(2), or 10. (Triangle RBM is a 3-4-5 triangle blown up by a factor of 2.)
Now use the parallelogram/rhombus area formula: | {"url":"http://www.dummies.com/how-to/content/a-rhombus-area-problem.html","timestamp":"2014-04-19T02:50:40Z","content_type":null,"content_length":"51963","record_id":"<urn:uuid:f781736f-4bb1-40c4-9737-63efe734cd63>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rvalue Reference Recommendations for Chapter 25
Document number: N1860=05-0120
Howard E. Hinnant
Related papers
Rvalue Reference Recommendations for Chapter 20
Rvalue Reference Recommendations for Chapter 21
Rvalue Reference Recommendations for Chapter 23
Rvalue Reference Recommendations for Chapter 24
Rvalue Reference Recommendations for Chapter 26
Rvalue Reference Recommendations for Chapter 27
This paper recommends proposed wording with respect to the rvalue reference for the C++0X working draft. This paper restricts its scope to Chapter 25 "Algorithms library" for the purpose of breaking
the library work associated with the rvalue reference up into manageable chunks. This paper largely follows the lead of N1771: Impact of the rvalue reference on the Standard Library, but adds more
detail as appropriate. Refer to N1771 for detailed motivation for these changes.
With the exception of this introduction, all non-proposed wording will have a background color and formatting that
looks like this, so that motivation and description is more easily distinguished from proposed wording.
In the proposed wording below, text to be inserted is formatted like this, while wording to be deleted is formatted like [DEL:this:DEL].
The proposed wording in this paper accomplishes three tasks:
1. New algorithms move and move_backward are introduced.
2. The random_shuffle signature is altered to accept rvalue generators.
3. The requirements on the value_type of several algorithms are reduced from CopyConstructible and CopyAssignable to MoveConstructible and MoveAssignable.
This third action is the most important as it allows clients to use these algorithms with sequences of movable but non-copyable types. For example:
vector<unique_ptr<T> > v;
sort(v.begin(), v.end(), indirect_less()); // ok to sort unique_ptr's
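`indirect_less` is left undefined in the paper; one plausible definition (our own, for illustration) compares the pointees rather than the `unique_ptr` objects themselves, which lets the example above actually compile and run under C++11:

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

// A plausible definition of the paper's (undefined) indirect_less:
// compare what the smart pointers point at, not the pointers themselves.
struct indirect_less {
    template <class P>
    bool operator()(const P& a, const P& b) const { return *a < *b; }
};

// Sort move-only unique_ptrs; return the pointee values in sorted order.
std::vector<int> sort_pointees(std::vector<int> values) {
    std::vector<std::unique_ptr<int>> v;
    for (int x : values)
        v.push_back(std::unique_ptr<int>(new int(x)));
    std::sort(v.begin(), v.end(), indirect_less());  // sort moves; no copies needed
    std::vector<int> out;
    for (const std::unique_ptr<int>& p : v)
        out.push_back(*p);
    return out;
}
```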
25 - Algorithms library
Header <algorithm> synopsis
The synopsis is updated with two new functions: move and move_backward. These two functions are merely convenience functions as any copying style algorithm can be turned into a moving style
algorithm with the use of move_iterator. For example:
copy(make_move_iterator(first), make_move_iterator(last), result);
is equivalent to:
move(first, last, result);
However the anticipated frequency of use of move and move_backward warrant the special treatment.
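A sketch of the equivalence described above, using the names as they were eventually standardized in C++11 (the helper functions are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <memory>
#include <vector>

// Build a vector of unique_ptr<int> holding the values 0..n-1.
std::vector<std::unique_ptr<int>> make_ptrs(int n) {
    std::vector<std::unique_ptr<int>> v;
    for (int i = 0; i < n; ++i)
        v.push_back(std::unique_ptr<int>(new int(i)));
    return v;
}

// The convenience algorithm: move every element of src into a fresh vector.
// Equivalent to copy(make_move_iterator(begin), make_move_iterator(end), out).
std::vector<std::unique_ptr<int>> move_out(std::vector<std::unique_ptr<int>>& src) {
    std::vector<std::unique_ptr<int>> dst(src.size());
    std::move(src.begin(), src.end(), dst.begin());  // the algorithm, not std::move(obj)
    return dst;
}
```

After the call, the source elements are left as null `unique_ptr`s — moved-from but valid.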
Additionally the signature (though not the description) of random_shuffle is modified so as to accept lvalue and rvalue random number generators.
namespace std {
// lib.alg.modifying.operations, modifying sequence operations:
// lib.alg.copy, copy:
template<class InputIterator, class OutputIterator>
OutputIterator copy(InputIterator first, InputIterator last,
OutputIterator result);
template<class InputIterator, class OutputIterator>
OutputIterator move(InputIterator first, InputIterator last,
OutputIterator result);
template<class BidirectionalIterator1, class BidirectionalIterator2>
BidirectionalIterator2 move_backward(BidirectionalIterator1 first,
BidirectionalIterator1 last,
BidirectionalIterator2 result);
template<class RandomAccessIterator>
void random_shuffle(RandomAccessIterator first,
RandomAccessIterator last);
template<class RandomAccessIterator, class RandomNumberGenerator>
void random_shuffle(RandomAccessIterator first,
RandomAccessIterator last,
RandomNumberGenerator&& rand);
25.2 - Mutating sequence operations
Insert new section for move and move_backward.
25.2.[DEL:2:DEL]3 - Swap
Reduce requirements for swap to MoveConstructible and MoveAssignable.
template<class T> void swap(T& a, T& b);
-1- Requires: Type T is [DEL:CopyConstructible (lib.copyconstructible):DEL] MoveConstructible and [DEL:Assignable (lib.container.requirements):DEL] MoveAssignable.
25.2.[DEL:7:DEL]8 - Remove
Reduce requirements for remove and remove_if to MoveConstructible and MoveAssignable.
template<class ForwardIterator, class T>
ForwardIterator remove(ForwardIterator first, ForwardIterator last,
const T& value);
template<class ForwardIterator, class Predicate>
ForwardIterator remove_if(ForwardIterator first, ForwardIterator last,
Predicate pred);
-1- Requires: The type of *first shall satisfy the [DEL:Assignable:DEL] MoveAssignable requirements.
-2- Effects: Eliminates all the elements referred to by iterator i in the range [first, last) for which the following corresponding conditions hold: *i == value, pred(*i) != false.
-3- Returns: The end of the resulting range.
-4- Remarks: Stable.
-5- Complexity: Exactly last - first applications of the corresponding predicate.
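Under the relaxed MoveAssignable requirement, `remove` works on sequences of move-only types. A C++11 sketch (the helper `drop_nulls` is ours) compacts away the empty `unique_ptr`s:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// With the relaxed MoveAssignable requirement, remove() compacts sequences of
// move-only elements. Here: erase the empty unique_ptrs from a vector.
std::size_t drop_nulls(std::vector<std::unique_ptr<int>>& v) {
    v.erase(std::remove(v.begin(), v.end(), nullptr), v.end());
    return v.size();
}
```

Because `remove` is stable, the surviving pointers keep their relative order.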
25.2.[DEL:8:DEL]9 - Unique
Reduce requirements for unique to MoveConstructible and MoveAssignable.
template<class ForwardIterator>
ForwardIterator unique(ForwardIterator first, ForwardIterator last);
template<class ForwardIterator, class BinaryPredicate>
ForwardIterator unique(ForwardIterator first, ForwardIterator last,
BinaryPredicate pred);
-1- Effects: Eliminates all but the first element from every consecutive group of equal elements referred to by the iterator i in the range [first, last) for which the following corresponding
conditions hold: *i == *(i - 1) or pred(*i, *(i - 1)) != false
-2- Requires: The comparison function shall be an equivalence relation.
-3- Returns: The end of the resulting range.
-4- Complexity: If the range (last - first) is not empty, exactly (last - first) - 1 applications of the corresponding predicate, otherwise no applications of the predicate.
25.2.[DEL:9:DEL]10 - Rotate
Reduce requirements for rotate to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class ForwardIterator>
void rotate(ForwardIterator first, ForwardIterator middle,
ForwardIterator last);
-1- Effects: For each non-negative integer i < (last - first), places the element from the position first + i into position first + (i + (last - middle)) % (last - first).
-2- Remarks: This is a left rotate.
-3- Requires: [first, middle) and [middle, last) are valid ranges. The type of *first shall satisfy the Swappable requirements (20.1.4).
-4- Complexity: At most last - first swaps.
25.2.[DEL:11:DEL]12 - Random shuffle
Change the signature of random_shuffle. No other change is needed for the specification of random_shuffle.
template<class RandomAccessIterator>
void random_shuffle(RandomAccessIterator first,
RandomAccessIterator last);
template<class RandomAccessIterator, class RandomNumberGenerator>
void random_shuffle(RandomAccessIterator first,
RandomAccessIterator last,
RandomNumberGenerator&& rand);
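Why the generator parameter becomes an rvalue reference: in a template, `RNG&&` binds to both named (lvalue) generators and temporaries. A toy stand-in (our own `toy_shuffle`, not the standard algorithm) shows a temporary generator being passed directly:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// A deterministic stand-in generator: rand(n) must return a value in [0, n).
struct counting_rng {
    std::size_t state;
    counting_rng() : state(0) {}
    std::size_t operator()(std::size_t n) { return state++ % n; }
};

// Toy Fisher-Yates shuffle (ours, not the standard algorithm). Because rand
// is taken by RNG&&, callers may pass a named generator or a temporary one.
template <class It, class RNG>
void toy_shuffle(It first, It last, RNG&& rand) {
    for (std::size_t n = static_cast<std::size_t>(last - first); n > 1; --n)
        std::swap(first[n - 1], first[rand(n)]);
}

std::vector<int> shuffle_with_temp(std::vector<int> v) {
    toy_shuffle(v.begin(), v.end(), counting_rng());  // rvalue generator: ok
    return v;
}
```

With the old `RandomNumberGenerator&` signature, the temporary `counting_rng()` argument would not compile.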
25.2.[DEL:12:DEL]13 - Partitions
Reduce requirements for stable_partition to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class BidirectionalIterator, class Predicate>
BidirectionalIterator
stable_partition(BidirectionalIterator first,
BidirectionalIterator last, Predicate pred);
-5- Effects: Places all the elements in the range [first, last) that satisfy pred before all the elements that do not satisfy it.
-6- Returns: An iterator i such that for any iterator j in the range [first, i), pred(*j) != false, and for any iterator k in the range [i, last), pred(*j) == false. The relative order of the
elements in both groups is preserved.
-7- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-8- Complexity: At most (last - first) * log(last - first) swaps, but only linear number of swaps if there is enough extra memory. Exactly last - first applications of the predicate.
25.3 - Sorting and related operations
25.3.1 - Sorting
25.3.1.1 - sort
Reduce requirements for sort to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class RandomAccessIterator>
void sort(RandomAccessIterator first, RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void sort(RandomAccessIterator first, RandomAccessIterator last,
Compare comp);
-1- Effects: Sorts the elements in the range [first, last).
-2- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-3- Complexity: Approximately N log N (where N == last - first) comparisons on the average.*
25.3.1.2 - stable_sort
Reduce requirements for stable_sort to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class RandomAccessIterator>
void stable_sort(RandomAccessIterator first, RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void stable_sort(RandomAccessIterator first, RandomAccessIterator last,
Compare comp);
-1- Effects: Sorts the elements in the range [first, last).
-2- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-3- Complexity: It does at most N(log N)^2 (where N == last - first) comparisons; if enough extra memory is available, it is N log N.
-4- Remarks: Stable.
25.3.1.3 - partial_sort
Reduce requirements for partial_sort to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class RandomAccessIterator>
void partial_sort(RandomAccessIterator first,
RandomAccessIterator middle,
RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void partial_sort(RandomAccessIterator first,
RandomAccessIterator middle,
RandomAccessIterator last,
Compare comp);
-1- Effects: Places the first middle - first sorted elements from the range [first, last) into the range [first, middle). The rest of the elements in the range [middle, last) are placed in an
unspecified order.
-2- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-3- Complexity: It takes approximately (last - first) * log(middle - first) comparisons.
25.3.1.4 - partial_sort_copy
Reduce requirements for partial_sort_copy to MoveConstructible and CopyAssignable. Note that we already have Swappable as well. Also note that while CopyAssignable is required, CopyConstructible is not.
template<class InputIterator, class RandomAccessIterator>
RandomAccessIterator
partial_sort_copy(InputIterator first, InputIterator last,
RandomAccessIterator result_first,
RandomAccessIterator result_last);
template<class InputIterator, class RandomAccessIterator,
class Compare>
RandomAccessIterator
partial_sort_copy(InputIterator first, InputIterator last,
RandomAccessIterator result_first,
RandomAccessIterator result_last,
Compare comp);
-1- Effects: Places the first min(last - first, result_last - result_first) sorted elements into the range [result_first, result_first + min(last - first, result_last - result_first)).
-2- Returns: The smaller of: result_last or result_first + (last - first)
-3- Requires: The type of *result_first shall satisfy the Swappable requirements (20.1.4).
-4- Complexity: Approximately (last - first) * log(min(last - first, result_last - result_first)) comparisons.
25.3.2 - Nth element
Reduce requirements for nth_element to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class RandomAccessIterator>
void nth_element(RandomAccessIterator first, RandomAccessIterator nth,
RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void nth_element(RandomAccessIterator first, RandomAccessIterator nth,
RandomAccessIterator last, Compare comp);
-1- After nth_element the element in the position pointed to by nth is the element that would be in that position if the whole range were sorted. Also for any iterator i in the range [first, nth) and
any iterator j in the range [nth, last) it holds that: !(*i > *j) or comp(*j, *i) == false.
-2- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-3- Complexity: Linear on average.
25.3.4 - Merge
Reduce requirements for inplace_merge to MoveConstructible and MoveAssignable. Note that we already have Swappable as well.
template<class BidirectionalIterator>
void inplace_merge(BidirectionalIterator first,
BidirectionalIterator middle,
BidirectionalIterator last);
template<class BidirectionalIterator, class Compare>
void inplace_merge(BidirectionalIterator first,
BidirectionalIterator middle,
BidirectionalIterator last, Compare comp);
-6- Effects: Merges two sorted consecutive ranges [first, middle) and [middle, last), putting the result of the merge into the range [first, last). The resulting range will be in non-decreasing
order; that is, for every iterator i in [first, last) other than first, the condition *i < *(i - 1) or, respectively, comp(*i, *(i - 1)) will be false.
-7- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-8- Complexity: When enough additional memory is available, (last - first) - 1 comparisons. If no additional memory is available, an algorithm with complexity N log N (where N is equal to last -
first) may be used.
-9- Remarks: Stable.
25.3.6 - Heap operations
Reduce requirements for the heap operations to MoveConstructible and MoveAssignable (no longer requiring CopyConstructible and CopyAssignable). Note that we already have Swappable as well for
pop_heap and sort_heap.
25.3.6.1 - push_heap
template<class RandomAccessIterator>
void push_heap(RandomAccessIterator first, RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void push_heap(RandomAccessIterator first, RandomAccessIterator last,
Compare comp);
-1- Effects: Places the value in the location last - 1 into the resulting heap [first, last).
-2- Requires: The range [first, last - 1) shall be a valid heap. The type of *first shall satisfy the MoveConstructible and MoveAssignable requirements.
-3- Complexity: At most log(last - first) comparisons.
25.3.6.2 - pop_heap
template<class RandomAccessIterator>
void pop_heap(RandomAccessIterator first, RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void pop_heap(RandomAccessIterator first, RandomAccessIterator last,
Compare comp);
-1- Effects: Swaps the value in the location first with the value in the location last - 1 and makes [first, last - 1) into a heap.
-2- Requires: The range [first, last) shall be a valid heap. The type of *first shall satisfy the Swappable requirements (20.1.4).
-3- Complexity: At most 2 * log(last - first) comparisons.
25.3.6.3 - make_heap
template<class RandomAccessIterator>
void make_heap(RandomAccessIterator first, RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void make_heap(RandomAccessIterator first, RandomAccessIterator last,
Compare comp);
-1- Effects: Constructs a heap out of the range [first, last).
-2- Requires: The type of *first shall satisfy the MoveConstructible and MoveAssignable requirements.
-3- Complexity: At most 3 * (last - first) comparisons.
25.3.6.4 - sort_heap
template<class RandomAccessIterator>
void sort_heap(RandomAccessIterator first, RandomAccessIterator last);
template<class RandomAccessIterator, class Compare>
void sort_heap(RandomAccessIterator first, RandomAccessIterator last,
Compare comp);
-1- Effects: Sorts elements in the heap [first, last).
-2- Requires: The type of *first shall satisfy the Swappable requirements (20.1.4).
-3- Complexity: At most N log N comparisons (where N == last - first).
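With the reduced requirements above, the heap algorithms can manage move-only elements end to end. A C++11 sketch (the helper name and the lambda comparator are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

// Heap-sort a sequence of move-only unique_ptrs end to end; the comparator
// dereferences the pointers. Returns the pointee values in ascending order.
std::vector<int> heap_sort_pointees(std::vector<int> values) {
    std::vector<std::unique_ptr<int>> v;
    for (int x : values)
        v.push_back(std::unique_ptr<int>(new int(x)));
    auto less = [](const std::unique_ptr<int>& a, const std::unique_ptr<int>& b) {
        return *a < *b;
    };
    std::make_heap(v.begin(), v.end(), less);
    std::sort_heap(v.begin(), v.end(), less);  // requires a valid heap on entry
    std::vector<int> out;
    for (const std::unique_ptr<int>& p : v)
        out.push_back(*p);
    return out;
}
```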
Microsoft Word - How to use equation font elsewhere?
In Microsoft Word 2007 if I insert an equation the font style is Cambria Math (Italics). I like the look of this font for equations. However, I would prefer to use equation field codes rather than
the equation editor. When I use the same font style with a field code it doesn't look like the font in the Equation Editor:
How can I match the Equation Editor font in the field code? Ideally, I'd like to set this font in a Style.
microsoft-word fonts equations
That's not the same font (or characters). The serif is way different. – user3463 May 25 '12 at 22:22
1 Answer
The italic letters in Word 2007 Equation Editor are not normal letters in italic but special characters from the Mathematical Alphanumeric Symbols block in Unicode. For example, when
you type “C” in Equation Editor, it gets converted to U+1D436 MATHEMATICAL ITALIC CAPITAL C.
Such characters and their glyphs have been designed for mathematical usage. This is reflected in their spacing. And their design is genuinely italic. In contrast, if you take a letter
in the Cambria Math font and apply Ctrl I to it, Word uses “fake italic” or “engineering italic”, which means just algorithmic slanting. The letters get excessively slanted and do not
change their basic form. This can easily be seen by comparing italic “a” in Cambria Math and U+1D44E MATHEMATICAL ITALIC SMALL A.
To enter a mathematical italic character outside Equation Editor, enter its Unicode number (the characters "U+" may be omitted if the preceding character is not a digit or a letter A–F or X) and press Alt X. Alternatively, use the Insert → Symbol command, set the font to Cambria Math, and scroll the table of characters down to the last part, "Extended characters – Plane 1", or use the dropdown to get there.
If you wish to use such characters frequently, the best way is probably to set up a keyboard layout for them, using MSKLC.
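The letter-to-code-point mapping the answer describes is mechanical. A sketch in C++ (the helper name `math_italic` is ours); note the one irregularity: italic small h was encoded long before this block, as U+210E PLANCK CONSTANT, so the block has a hole there:

```cpp
#include <cassert>

// Map an ASCII letter to its "Mathematical Italic" code point in the
// U+1D400..U+1D7FF block. One irregularity: italic small h was encoded long
// before the block, as U+210E PLANCK CONSTANT, so the block has a hole there.
unsigned long math_italic(char c) {
    if (c >= 'A' && c <= 'Z') return 0x1D434UL + static_cast<unsigned long>(c - 'A');
    if (c == 'h') return 0x210EUL;
    if (c >= 'a' && c <= 'z') return 0x1D44EUL + static_cast<unsigned long>(c - 'a');
    return 0UL;  // not an ASCII letter
}
```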
I was afraid the answer wasn't going to be simple. I have no experience with MSKLC but I will look into it. In the meantime, do you think it might be best to just use a different font
altogether for my "Equation" style? Something with the proper spacing and serif? Would you have any recommendations? Some quick Google searches didn't yield much. – brady May 29 '12
at 13:22
I don’t have much experience with field codes and can’t tell how much the font issue matters there. But in normal copy text, I’ve found it necessary to use consistently Equation
Editor inline formulas for all math expressions (provided that I use it for display formulas). There’s just too big a difference between, say, italic “x” in Cambria and the
mathematical italic x produced by the Equation Editor. MSKLC is easy to use, though I think I’ll need to find a way to set up layouts in a more convenient way than manually (when I
find time). – Jukka K. Korpela May 29 '12 at 16:56
Hochschild (co)homology and representation theory
Dear members of Mathoverflow,
I just discovered the notion of Hochschild (co)homology. I understand well the formalism however I am wondering about the meaning of this (co)homology for representation theory.
I consider an algebra $\mathcal{A}$, such as $\mathcal{U}(su(n))$, a finite dimensional representation space $V$ for $\mathcal{A}$ (they are well known in the case of $\mathcal{U}(su(n))$) and the $\mathcal{A}$-bimodule $\mathcal{M}=\mathrm{End}(V)$.
In this example, what would be the interpretation of Hochschild (co)homology from the point of view of representation theory ? To what is it an obstruction ?
I can compute things in simple cases such as $\mathcal{U}(su(2))$ but I do not see the global interpretation emerge in this case...
Thank you in advance, Damien.
rt.representation-theory lie-algebra-cohomology
Not a proper answer, but just a comment: the view of 1st and 2nd degree Hochschild cohomology as obstructions to the splitting of certain extensions ought to be explained in several sources:
Weibel's book goes into some detail on this. – Yemon Choi Oct 4 '12 at 8:44
In the case of enveloping algebras of semisimple Lie algebras: my guess (but I am not sure this is true in detail) is that the vanishing of higher cohomology groups corresponds to the Lie algebra
being semisimple, so all submodules of a given module split off as module summands. Presumably if one replaces su(n) by something solvable then not every indecomposable module is irreducible, and
my instinct is that this should correspond to some non-trivial $H^1$ – Yemon Choi Oct 4 '12 at 8:48
@Damien, I give you a +1 not for the question but for having discovered Hochschild (co)homology, which I think, according to my own experience, is a matter of celebration in one's life. – Fernando
Muro Oct 4 '12 at 9:48
About Yemon's comment: If $\mathfrak{g}$ is a finite-dimensional complex semisimple Lie algebra, then the higher (Hochschild) cohomology groups for the universal enveloping algebra of $\mathfrak{g}$, taken with trivial coefficients, are nonzero in degree $\dim \mathfrak{g}$, and sometimes in lower nonzero degrees as well, depending on the Lie type of $\mathfrak{g}$. But in this case, $H^1$ taken with an arbitrary finite-dimensional coefficient module is always zero, which does correspond to the semisimplicity of finite-dimensional representations. – Christopher Drupieski Oct 4 '12 at 15:28
Look at D. Happel, Hochschild cohomology of finite-dimensional algebras, in Seminaire d'Algebre Paul Dubreil et Marie-Paul Malliavin, 39eme Annee (Paris, 1987/1988), Lecture Notes in Mathematics
1404 (1989), 108-126. – Benjamin Steinberg Oct 4 '12 at 15:34
1 Answer
If $A$ is a $k$-algebra and $V$, $W$ are $A$-modules, then $Hom_k(V,W)$ is an $A$-bimodule in a natural way (acting contravariantly in $V$ and covariantly in $W$). It is well known (can be found in Cartan-Eilenberg) that the $n^{th}$ Hochschild cohomology of $Hom_k(V,W)$ is $Ext_A^n(V,W)$. Your case of $End(V)$ would then give $Ext_A(V,V)$.
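The isomorphism quoted in the answer, written out (for an algebra $A$ over a field $k$; see Cartan–Eilenberg, Ch. IX). The bimodule structure on $\operatorname{Hom}_k(V,W)$ is $(a\cdot f\cdot b)(v) = a\,f(bv)$, and

```latex
HH^n\bigl(A, \operatorname{Hom}_k(V, W)\bigr) \;\cong\; \operatorname{Ext}_A^n(V, W),
\qquad\text{so in particular}\qquad
HH^n\bigl(A, \operatorname{End}_k(V)\bigr) \;\cong\; \operatorname{Ext}_A^n(V, V).
```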
Thank you for your answer but I know already this property. My question was more about its meaning. If you prefer, what kind of information (for example for representation theory) do
Hochschild cohomologies bring ? – Damien S. Oct 9 '12 at 8:01
Math Forum Discussions
Topic: Learning and Mathematics: Hiebert and Wearne, Teaching
Replies: 40 Last Post: Jan 21, 2004 3:45 AM
Re: Learning and Mathematics: Hiebert & Wearne, Teaching
Posted: Feb 29, 1996 11:50 PM
Mark, This is an excellent analysis. The key is to ENGAGE THEIR CURIOSITY
in some way, to get them thinking about what is going to happen. This can
be done with 'real life' or more likely pseudo-'real life' problems, but
these need to be problems that they care about (i.e., forget most of the
ridiculously contrived occupational questions). I have often found
whimsical, silly problems to be effective. There is no pretense that the
situation could ever happen, but still it is expressed in every-day terms.
And let's not underestimate the effectiveness of a really compelling
mathematical problem expressed purely in mathematical terms. To quote your
summary, 'We must teach kids to think. That will never go out of style.'
The strange thing is that, if given a chance, kids often LIKE to think!
At 7:38 PM 02/29/96, Marksaul@aol.com wrote:
>Well, I have not found that in 26 years of teaching. Sometimes it helps to
>give an application, and sometimes it leaves them cold.
>The trouble is in defining "real life". Is Archie comics real life?
> Seinfeld? Work in an office or factory? For me, real life is a good novel
>or math puzzle and a sweet Florida orange.
>Too many people these days see this as "economic reality". But students
>mostly do *not* see things this way. And there are many parts of math where
>the applications are too difficult to show students.
>In high school, when students are asking why they are learning the subject,
>the jig is up. They are adolescent, and are asking as a challenge, not to
>get a serious answer. The job of the teacher is to try to prevent this
>question from coming up. Sometimes one does it by asking it first oneself.
> But sometimes it is done by intriguing the student on a completely different
>level from that of "real life", whatever that may be.
>> I feel that this is one of the greatest needs/deficiencies of teachers at
>>all levels--we do not know where/how the mathematics that we are teaching is
>>used. Too many of us have been in school all of our lives and have very
>>limited experience in using mathematics in the workplace. Our primary
>>application of mathematics is that of an ordinary consumer.
>This is true. And consumer math has a well-deserved reputation for being
>extremely lukewarm, so students spew it out.
>Applications of math are fascinating and can be very helpful in the
>classroom. But I think math in the workplace is a limiting goal.
>See what I mean by economic reality? This is an historical phenomenon (i.e.
>a fad). We in America are rightly concerned about the loss of our economic
>hegemony over the world, and are obsessed with trying to do something about
>it. And my feeling is that the kids suffer.
>Which of us knows what math will be in the workplace even five years from
>now? And which will be on the dustheap, lying above slide rules and log
>tables? No. We must teach kids to think. That will never go out of style.
Date Subject Author
2/14/96 Learning and Mathematics: Hiebert and Wearne, Teaching K. Ann Renninger
2/15/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Nette Witgert
2/16/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Johanna K. Peters-Burton
2/16/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Stephen Weimar
2/16/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Kristin E. Waugh
2/16/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Laurie Gerber
2/16/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Jim LaCasse
2/17/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Lyndsley Wilkerson
2/19/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Emily Mott
2/19/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Emily Mott
2/21/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Jane Ehrenfeld
2/21/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Shawn R. Beckett
2/22/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching J. Wendell Wyatt
2/22/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Lou Talman
2/22/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching John Conway
2/22/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Will Craig
2/22/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Jim LaCasse
2/23/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching John Conway
2/23/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching andrew@plan9.att.com
2/24/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Johnny Hamilton
2/25/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Mara Landers
2/26/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Nette Witgert
2/26/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Stephen Weimar
2/27/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Will Craig
2/27/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Emily Mott
2/27/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching NiYa N. Costley
2/28/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Tony Thrall
2/29/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Mara Landers
2/29/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Marksaul@aol.com
2/29/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Laurie Gerber
2/29/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching W Gary Martin
3/1/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Brian Hutchings
3/1/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Brian Hutchings
3/6/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Liza Ortiz
3/9/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Judith Haemmerle
3/13/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Richard Tchen
3/17/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Richard Tchen
3/17/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Richard Tchen
3/20/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Kristin E. Waugh
3/27/96 Re: Learning and Mathematics: Hiebert & Wearne, Teaching Richard Tchen
1/21/04 zaheer
add or subtract as indicated and write the result in standard form (2-7i)+(6+5i)+(2+5i)
Helper: You mean like this: 10 + 3i?

Student: i dont know i dont understand it

Helper: Ya that's right, I will try to explain it.

Helper: To add two complex numbers, add their real parts and add their imaginary parts: (a_1 + b_1 i) + (a_2 + b_2 i) = (a_1 + a_2) + (b_1 + b_2) i.

Student: and i need the work i dont know how to do it i even had my sister whose in college try to help me but she didnt understand.

Helper: The formula I posted shows you exactly how you need to work it out to get the answer.

Helper: Here is an example using the formula I posted: (12 + 6i) + (11 + 5i) = (12 + 11) + (6 + 5)i = 23 + 11i.

Helper: I know you have three terms, so just add an extra a_3 and b_3 i.
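The same computation can be checked mechanically with C++'s `std::complex` (the helper `sum3` is ours):

```cpp
#include <cassert>
#include <complex>

// Sum three complex numbers: real parts add, imaginary parts add.
std::complex<double> sum3(std::complex<double> a,
                          std::complex<double> b,
                          std::complex<double> c) {
    return a + b + c;
}
```

For the thread's problem, (2 - 7i) + (6 + 5i) + (2 + 5i) = (2 + 6 + 2) + (-7 + 5 + 5)i = 10 + 3i.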
Markov Modeling - Examples
Markov Modeling for Reliability – Part 4: Examples
4.1 Primary/Backup System with Internal/External Fault Monitoring
The system shown schematically in the figure below consists of a primary unit (Unit 1) with continuous internal fault monitoring, a backup unit (Unit 2) with no self-monitoring, and an external
monitoring unit (Unit 3) whose function is to monitor the health of the backup unit.
Diagram of Active System with self-monitoring and
Back-up System with an Independent Monitor
The failure rate of Unit 1 is l[1] = 5.0E-05 per hour. The full-time self-monitoring of this unit enables its functionality to be verified prior to every flight. (The duration of each flight is
assumed to be 5 hours.) If found to be faulty or inoperative, it is repaired before dispatch.
The failure rate of Unit 2 is l[2] = 2.5E-05 per hour. The backup system has no self-monitoring, but is monitored continuously by an independent monitor (Unit 3). If the backup system fails and the
monitor is working, the backup is repaired before the next dispatch. If the monitor is not working, the backup can fail latently, but it is checked every 10 flights (50 hours). If the backup unit
is found faulty at one of these 50-hour checks, with no indication of backup system failure from the monitor, it is assumed that the monitor system is also failed, so both Units 2 and 3 are repaired
prior to the next flight.
The external monitor (Unit 3) has a failure rate of l[3] = 2.5E-05 per hour. If it fails, it can be repaired in one of two ways. First, as noted above, if the backup system is found failed at its
periodic 50-hour inspection and there was no monitor indication of a backup system failure, then the monitor is repaired along with the backup system prior to the next flight. Second, a periodic
check of the monitor is performed every 100 flights (500 hours), and if the monitor is found to be failed, it is repaired prior to the next flight.
The MTBFs of the individual units are 20,000 and 40,000 hours, whereas the periodic inspection intervals are only 5, 50, and 500 hours, all of which are orders of magnitude smaller than the MTBFs.
Also, most of the states being repaired are first-order states, i.e., they are just one failure removed from the full-up state, so there is no appreciable loss of accuracy in modeling these repairs
as continuous transitions with constant rates given by m = 2/T for the respective intervals. The exception to this is the state in which both the monitor and the backup system are failed. The 50-hour
periodic inspection/repair of this state will actually have an effective repair rate somewhat greater than 2/T, but it is conservative to use 2/T, so for convenience we will use this expression for
all the repair rates. Thus we set m[1] = 2/5, m[2] = 2/50, and m[3] = 2/500, and we can construct the Markov model for the overall system as shown below.
As usual, we set the repair rate on the total system failure state (State 6) to infinity, which effectively eliminates that state from the system equations. The system failure rate is simply the rate
of entry into that state, i.e., l[sys] = (P[1] + P[4])l[2] + (P[3] + P[5])l[1]. Also, since the probabilities of the remaining states must sum to 1, we can disregard one of them, so we need only
consider the steady-state equations for the states 1 through 5, as listed below.
Combining these with the conservation equation P[0] + P[1] + P[2] + P[3] + P[4] + P[5] = 1, we have six equations in six unknowns. In terms of the matrix notation of Section 2.3, the average system
failure rate for this example is
and L is the row vector L = [ 0 l[2] 0 l[1] l[2] l[1] ]. Inserting the values of the failure and repair rates, this gives the result l[sys] = 6.42E-09 per hour.
The sensitivity of the system failure rate to variations in the component failure rates and repair times can also be evaluated. For example, the plot below shows the system failure rate as a function
of the monitor inspection interval for various values of the backup system interval. This type of plot can assist the analyst and designer in determining the optimum maintenance intervals to achieve
the required level of reliability with the minimum economic burden.
4.2 Two-Unit System with Latent Failures of Protective Elements
Consider a system consisting of two redundant units, each equipped with protection from a common threat (such as lightning, inclement weather, etc), and suppose failure of the protection occurs at
the rate r and is undetectable until the unit is subjected to the external threat, at which time the unit fails. In addition, each unit has a detectable failure rate due to generic causes of R, and
an exponential repair transition with rate m. Whenever a unit is repaired, its threat protection is also checked and, if necessary, repaired. Let s denote the rate of occurrence of the common
external threat.
Each unit can be in one of three states, which we will denote as 0, 1, and 2, corresponding to fully healthy, protection failed, and fully failed, respectively. (For a fully failed unit it is
irrelevant whether the protection is failed or not, because the unit will remain inoperative until it is repaired, at which time the protection will also be restored if it is failed.) Since the two
channels are symmetrical, the overall system can be in one of just five functional states, denoted as 00, 01, 02, 11, and 12. (The state 22 signifies the non-functional state with both units
inoperative, which will be repaired immediately.) The Markov model for this system is shown below.
As discussed previously, the total failure state (“22”) is just a place-holder, since a system leaves the population when it enters this state, and doesn’t return to the population until it is
repaired or replaced by a system in state 00. Therefore, the rate w is irrelevant to the hazard rate of the operational population. The system failure rate is the rate of entering state 22, which is
The steady-state system equations are
along with the conservation equation
This gives us five equations in five unknowns, and we can solve this system of equations to determine the steady-state values of P[02], P[11], and P[12], which we can substitute into equation (4.2-1)
to give the system failure rate explicitly as a function of R, r, m and s.
The plot below shows the rates of the four ways of entering state 22 as a function of s, given the parameters R = 10^-5, r = (0.02)R, and m = 1/150. The upper line represents the rate of entering
state 22 from state 02, which is by far the most likely way of reaching state 22. The red lines represent the rates triggered by the occurrence of the external threat and, as can be seen, the maximum
contribution occurs for s near the square root of 2 times R + r.
Now, at the two extremes of s equals zero or infinity, the system failure rate is
These two expressions are identical, except that R is replaced with R + r. Depending on the values of the parameters, the system failure rate may increase monotonically between these two levels, or
it may pass through a maximum and drop back down, as illustrated in the two plots below.
In many applications the value of s (the rate of encountering the external threat of sufficient severity to cause a unit failure) is unknown, so it is necessary to choose a conservative value. If we
set m to infinity (meaning that individual detected unit failures are assumed to be repaired immediately), the expression for the system failure rate reduces to
which is zero if s is either zero or infinite. In this case we can differentiate with respect to s and set the resulting expression equal to zero to find that the value of s giving the maximum value
of l[sys] is
so the worst-case system failure rate (assuming immediate repair of detected component failures) is
This shows that complete system failure for the two-unit system is on the order of the failure rate of a single unit. In fact, for a system with zero rate of detected failures (R = 0), and with a
rate r for failure of each individual unit’s protection feature, the worst-case system failure rate is (0.306) r.
4.3 Three-Unit System with Latent Failures of Protective Elements
We can apply this same type of analysis to a 3-unit system, where each unit can be either healthy, or with undetected failure of protection, or in a detected failure state. As before, we denote
these by the indices 0, 1, and 2. The units are symmetrical, so the order doesn’t matter. Thus the subscript “012” (for example) signifies that one unit is fully healthy, one has failed protection,
and one is in a detected inoperative state. The Markov model for this system is shown below.
As explained for the 2-unit system, the total failure state (222) is just a place-holder, so there are only nine states of the system. For convenience we will number the states from 1 to 9 in the
order 000, 001, 002, 011, 012, 022, 111, 112, 122. By examining the model we can define the transition matrix M shown below.
In terms of this matrix the system equations are
where P is the column vector consisting of the probabilities of states 1 through 9. The eventual steady-state solution has dP/dt = 0 and therefore MP = 0, but since the rows of M sum to zero they are
not independent. Replacing the first row of M with the condition that the sum of all the probabilities is unity, we have nine independent conditions, so we can solve for the probabilities. Letting A
denote the modified version of M (i.e., with the first row replaced with 1’s), and letting C denote the column vector with C[1] = 1 and C[j] = 0 for all j > 1, the steady state probabilities are
given by
In terms of these probabilities, the total system failure rate can be read off the model diagram as
where we have reverted back to the original index notation for the various states. To illustrate, consider the case of a system comprised of three parallel units, each with a detected failure rate R
= 10^-5 per hour and undetected rate r = R/50 (i.e., 2%) of protection failure, and a repair rate m = 1/150 hours. The five individual contributions to the overall system failure rate, along with the
total rate, are plotted in the figure below (on a logarithmic scale).
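The solution procedure described above (form A from M by replacing the first row with ones, set C = (1, 0, ..., 0), and solve A P = C) can be sketched in a few lines of code. The 9x9 matrix for this example is not reproduced in the text, so the sketch below uses a simple two-state failure/repair chain whose steady state is known in closed form; the orientation convention assumed for M (column j holds the flows out of state j) is an assumption of this sketch.

```python
def steady_state(M):
    """Steady-state probabilities of dP/dt = M P.

    Follows the procedure in the text: replace the first row of M with
    all ones (conservation: probabilities sum to 1), set the right-hand
    side to C = (1, 0, ..., 0), and solve A P = C by Gaussian
    elimination (pure Python, no external libraries).
    """
    n = len(M)
    A = [[1.0] * n] + [row[:] for row in M[1:]]
    C = [1.0] + [0.0] * (n - 1)
    # forward elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        C[k], C[p] = C[p], C[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            C[i] -= f * C[k]
    # back substitution
    P = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * P[j] for j in range(i + 1, n))
        P[i] = (C[i] - s) / A[i][i]
    return P

# Two-state repair chain as a check: failure rate lam (state 0 -> 1),
# repair rate mu (state 1 -> 0); exact answer is
# P0 = mu/(lam+mu), P1 = lam/(lam+mu).
lam, mu = 2.0, 3.0
M = [[-lam, mu],
     [lam, -mu]]
P = steady_state(M)
```

For the 9-state model of this example one would fill in M from the transition diagram, call `steady_state(M)`, and then take the dot product of the exit-rate row vector L with P to obtain l[sys].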
This example (as well as the previous two-unit example) involves two somewhat unusual features. First, the system relies on the failures of certain components to drive the inspection and repair of
other components which are themselves latent. Great care must be taken when invoking this sort of argument, because the average rate R of detected failures may not be uniformly applicable to all the
systems in service. For example, it may be that half of the in-service units each fail (detected) twice per year, whereas the other half of the in-service units essentially never fail, i.e., they
have R = 0. The average failure rate for the entire fleet of units is one failure per year, and yet half the fleet will operate for many years without a failure. As a result, it would be misleadingly
optimistic to claim credit for a once-per-year inspection of the latent protection components for those units.
The second unusual feature of this example is both more subtle and more robust. It accounts for the fact that latent failures of protection components will be revealed at the rate of external
disturbances severe enough to cause (in the absence of protection) loss of function. This is why, in example 4.2, the rate of total system failure remains on the order of r, even if we assume R = 0.
In other words, even if no credit is taken for inspections of the protection components triggered by detected failures, it is still not necessary to assume that all the protection features are
latently failed, because failures of those features in an individual unit will be disclosed at a rate proportional to the threat.
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: WS Pacing
Replies: 0
WS Pacing
Posted: Sep 15, 1996 8:03 AM
I think the crucial question for each of us is to determine what are the
crucial ideas in each topic and somehow find out if our students are learning
them at our pace. The great danger in a student centered classroom with
activity sheets is that the students (and teacher) will have a great time
doing the activities but not build up real conceptual knowledge about what
they have done.
Steps I have taken to try to eliminate this problem:
1) Made them aware of the point I just made above at least every other day
for a while.
2) Each student must have a notebook with an outline of each topic and some
listings and/or discussions of each vocabulary word/major concept.
3) Homework assignment sheets lead off with a couple of probing questions to
show them what I mean. For instance, I asked:
I assume you know when a bar graph is more appropriate than the dotplot and
vice versa. Can you explain why! The next day most students had a gut sense
of the answer but were unable to put an answer into acceptable form ..in
other words...they were unable to pass a test on this point. One student was
able to explain the answer using the terminology that they had already
covered in the book...I hope the rest got the idea. The idea is the
activities just get you started. Then you have to LEARN the material
Helpful idea: My wife is a doctor. She often has to read material and then
use a new technique or make a decision about the care of a patient using the
material the next day. She reads and learns material in a different way than
we often do. She MUST really understand what she reads. She must think
about the new ideas within a context of what else she knows about medicine.
She must develop a vocabulary that is part of the accepted terminology and
has meaning to her peers. I suggest to my students that learning statistics
is not a life or death experience, but if we thought about learning as my
wife does we all would be much better prepared.
4) Spend some time each period pulling some ideas together.
- For instance I had them all line up by height and then they asked them
to physically (without counting) determine the person with the median height.
I then had the tallest person stoop over and said that this person is
actually now very short. How would that alter our answer (watch out on this
one - I faked myself out). Then maybe have the shortest person "get
5) Assign short readings in BPS (they each own a copy) and the texts on
reserve at the library and ask them to compare the presentation in these
resources to those in WS.
Disclaimer...Am I getting all this done all the time....In your dreams Coons. | {"url":"http://mathforum.org/kb/thread.jspa?threadID=192715","timestamp":"2014-04-17T19:39:37Z","content_type":null,"content_length":"16192","record_id":"<urn:uuid:3b949a27-9e8a-4633-bf21-e5e0efe896d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colpitts oscillator
A Colpitts oscillator, invented in 1918 by American engineer Edwin H. Colpitts,^[1] is one of a number of designs for LC oscillators, electronic oscillators that use a combination of inductors (L)
and capacitors (C) to produce an oscillation at a certain frequency. The distinguishing feature of the Colpitts oscillator is that the feedback for the active device is taken from a voltage divider
made of two capacitors in series across the inductor.^[2]^[3]^[4]^[5]
The Colpitts circuit, like other LC oscillators, consists of a gain device (such as a bipolar junction transistor, field effect transistor, operational amplifier, or vacuum tube) with its output
connected to its input in a feedback loop containing a parallel LC circuit (tuned circuit) which functions as a bandpass filter to set the frequency of oscillation.
A Colpitts oscillator is the electrical dual of a Hartley oscillator, where the feedback signal is taken from an "inductive" voltage divider consisting of two coils in series (or a tapped coil). Fig.
1 shows the common-base Colpitts circuit. L and the series combination of C[1] and C[2] form the parallel resonant tank circuit which determines the frequency of the oscillator. The voltage across C
[2] is applied to the base-emitter junction of the transistor, as feedback to create oscillations. Fig. 2 shows the common-collector version. Here the voltage across C[1] provides feedback. The
frequency of oscillation is approximately the resonant frequency of the LC circuit, which is the series combination of the two capacitors in parallel with the inductor
$f_0 = {1 \over 2 \pi \sqrt {L \left ({ C_1 C_2 \over C_1 + C_2 }\right ) }}$
The actual frequency of oscillation will be slightly lower due to junction capacitances and resistive loading of the transistor.
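For concreteness, the resonant-frequency formula can be evaluated numerically. The component values below are illustrative assumptions only, not the actual values of Fig. 3:

```python
import math

# Assumed component values for illustration: L = 10 uH, C1 = C2 = 1 nF.
L = 10e-6
C1 = C2 = 1e-9

C_eq = C1 * C2 / (C1 + C2)                 # series combination of C1 and C2
f0 = 1.0 / (2 * math.pi * math.sqrt(L * C_eq))
print(f"f0 = {f0 / 1e6:.3f} MHz")          # prints about 2.25 MHz
```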
As with any oscillator, the amplification of the active component should be marginally larger than the attenuation of the capacitive voltage divider, to obtain stable operation. Thus, a Colpitts
oscillator used as a variable frequency oscillator (VFO) performs best when a variable inductance is used for tuning, as opposed to tuning one of the two capacitors. If tuning by variable capacitor
is needed, it should be done via a third capacitor connected in parallel to the inductor (or in series as in the Clapp oscillator).
Practical example
Fig. 3 shows a working example with component values. Instead of bipolar junction transistors, other active components such as field effect transistors or vacuum tubes, capable of producing gain at
the desired frequency, could be used.
One method of oscillator analysis is to determine the input impedance of an input port neglecting any reactive components. If the impedance yields a negative resistance term, oscillation is possible.
This method will be used here to determine conditions of oscillation and the frequency of oscillation.
An ideal model is shown to the right. This configuration models the common collector circuit in the section above. For initial analysis, parasitic elements and device non-linearities will be ignored.
These terms can be included later in a more rigorous analysis. Even with these approximations, acceptable comparison with experimental results is possible.
Ignoring the inductor, the input impedance can be written as
$Z_{in} = \frac{v_1}{i_1}$
Where $v_1$ is the input voltage and $i_1$ is the input current. The voltage $v_2$ is given by
$v_2 = i_2 Z_2$
Where $Z_2$ is the impedance of $C_2$. The current flowing into $C_2$ is $i_2$, which is the sum of two currents:
$i_2 = i_1 + i_s$
Where $i_s$ is the current supplied by the transistor. $i_s$ is a dependent current source given by
$i_s = g_m \left ( v_1 - v_2 \right )$
Where $g_m$ is the transconductance of the transistor. The input current $i_1$ is given by
$i_1 = \frac{v_1 - v_2}{Z_1}$
Where $Z_1$ is the impedance of $C_1$. Solving for $v_2$ and substituting above yields
$Z_{in} = Z_1 + Z_2 + g_m Z_1 Z_2$
The input impedance appears as the two capacitors in series with an interesting term, $R_{in}$ which is proportional to the product of the two impedances:
$R_{in} = g_m \cdot Z_1 \cdot Z_2$
If $Z_1$ and $Z_2$ are complex and of the same sign, $R_{in}$ will be a negative resistance. If the impedances for $Z_1$ and $Z_2$ are substituted, $R_{in}$ is
$R_{in} = \frac{-g_m}{\omega ^ 2 C_1 C_2}$
If an inductor is connected to the input, the circuit will oscillate if the magnitude of the negative resistance is greater than the resistance of the inductor and any stray elements. The frequency
of oscillation is as given in the previous section.
For the example oscillator above, the emitter current is roughly 1 mA. The transconductance is roughly 40 mS. Given all other values, the input resistance is roughly
$R_{in} = -30 \ \Omega$
This value should be sufficient to overcome any positive resistance in the circuit. By inspection, oscillation is more likely for larger values of transconductance and smaller values of capacitance.
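As a numerical illustration of the negative-resistance condition (component values again assumed, since Fig. 3's values are not reproduced here): at the resonant frequency $\omega^2 = (C_1 + C_2)/(L C_1 C_2)$, so $R_{in}$ reduces to $-g_m L/(C_1 + C_2)$.

```python
import math

gm = 40e-3            # transconductance, ~40 mS as estimated in the text
L = 10e-6             # assumed inductance, H
C1 = C2 = 1e-9        # assumed capacitances, F

w = math.sqrt((C1 + C2) / (L * C1 * C2))   # resonant angular frequency
R_in = -gm / (w**2 * C1 * C2)              # negative input resistance

R_coil = 5.0          # assumed series resistance of the inductor, ohms
oscillation_possible = abs(R_in) > R_coil
print(R_in)           # about -200 ohms for these values
```

For these assumed values the negative resistance comfortably exceeds the coil resistance, so oscillation would be possible.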
A more complicated analysis of the common-base oscillator reveals that a low frequency amplifier voltage gain must be at least four to achieve oscillation.^[6] The low frequency gain is given by:
$A_v = g_m \cdot R_p \ge 4$
If the two capacitors are replaced by inductors and magnetic coupling is ignored, the circuit becomes a Hartley oscillator. In that case, the input impedance is the sum of the two inductors and a
negative resistance given by:
$R_{in} = -g_m \omega ^ 2 L_1 L_2$
In the Hartley circuit, oscillation is more likely for larger values of transconductance and larger values of inductance.
Oscillation amplitude
The amplitude of oscillation is generally difficult to predict, but it can often be accurately estimated using the describing function method.
For the common-base oscillator in Figure 1, this approach applied to a simplified model predicts an output (collector) voltage amplitude given by:^[7]
$V_C = 2 I_C R_L \frac{C_2}{C_1 + C_2}$
where $I_C$ is the bias current, and $R_L$ is the load resistance at the collector.
This assumes that the transistor does not saturate, the collector current flows in narrow pulses, and that the output voltage is sinusoidal (low distortion).
This approximate result also applies to oscillators employing different active devices, such as MOSFETs and vacuum tubes.
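A quick numeric evaluation of this amplitude estimate, using the ~1 mA bias current estimated earlier together with assumed values for the load resistance and the capacitor ratio:

```python
I_C = 1e-3            # collector bias current, A (from the text's estimate)
R_L = 5e3             # assumed collector load resistance, ohms
C1 = C2 = 1e-9        # equal capacitors give a divider ratio of 1/2

# V_C = 2 * I_C * R_L * C2 / (C1 + C2)
V_C = 2 * I_C * R_L * C2 / (C1 + C2)
print(V_C)            # about 5.0 volts for these assumed values
```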
External links
• Lee, T. The Design of CMOS Radio-Frequency Integrated Circuits. Cambridge University Press. 2004.
• Ulrich L. Rohde, Ajay K. Poddar, Georg Böck "The Design of Modern Microwave Oscillators for Wireless Applications ", John Wiley & Sons, New York, NY, May, 2005, ISBN 0-471-72342-8.
• George Vendelin, Anthony M. Pavio, Ulrich L. Rohde " Microwave Circuit Design Using Linear and Nonlinear Techniques ", John Wiley & Sons, New York, NY, May, 2005, ISBN 0-471-41479-4. | {"url":"http://www.mashpedia.com/Colpitts_oscillator","timestamp":"2014-04-20T06:07:06Z","content_type":null,"content_length":"93419","record_id":"<urn:uuid:2a37a7c4-7e99-403e-ace0-f8b74db81136>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modern Physics, Spring '10
Homework 8 (covers Townsend Chapter 5)
1. Problem 5.2 (Show that the operator...)
2. Problem 5.5 (For a particle in a harmonic oscillator potential...)
3. Problem 5.8 (Let the operator...)
4. In section 5.4 Townsend develops the famous Heisenberg Uncertainty Principle for position and momentum (equation 5.50). He says that this is just one of "many other important uncertainty
relations" -- one of which is then developed in the next section. Let's think about some other possibilities. For example, consider a free particle ("free" meaning that V(x) = 0 everywhere). Is there
some minimum value to the product of the uncertainties in its momentum and its energy? How about for a particle in a harmonic oscillator potential? (Note, you don't have to figure out what the HUP
type equation would look like exactly... the question is only whether the right hand side is zero or nonzero in the two cases.)
5. Well, I can't resist asking you some things about Section 5.6. On page 171, Townsend summarizes the EPR argument and notes that "For Einstein, this 'spooky action at a distance' was unacceptable,
something that 'no reasonable definition of reality' should permit." Is Townsend here disagreeing with Einstein, i.e., saying that 'spooky action at a distance' is acceptable? More generally, what do
you take him to be saying here?
6. Again with Section 5.6. Near the top of page 170, Townsend talks about the collapse postulate and says: "How this collapse happens is a mystery. It is referred to as the measurement problem." Then
later, at the very end of the section, he suggests instead that "the crux of the measurement problem" is the fact that, according to Schroedinger's equation, the final state (of e.g. a cat in a box)
should be a superposition if the initial state is. What is the relationship between these two statements about what the measurement problem is? That is, do they amount to the same thing, or are they
totally different and unrelated, or what? Stepping back, do you see any serious "problem" associated with "measurement" for this theory?
Last modified: Monday, December 19, 2011, 9:18 AM | {"url":"https://courses.marlboro.edu/mod/page/view.php?id=5259","timestamp":"2014-04-18T22:01:15Z","content_type":null,"content_length":"24222","record_id":"<urn:uuid:fde5ad0b-6a12-4b94-adaf-e4df25749f07>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistics for the Terrified
ISBN: 9780139554100 | 0139554106
Edition: 2nd
Format: Paperback
Publisher: Pearson College Div
Pub. Date: 7/1/1998
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/statistics-terrified-2nd-kranzler-gerald/bk/9780139554100","timestamp":"2014-04-17T10:41:06Z","content_type":null,"content_length":"38185","record_id":"<urn:uuid:865d7e94-ea43-47bf-9a72-5afb6320d28f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: Textbook Search Addition
Replies: 1 Last Post: Jun 5, 1995 11:20 AM
Re: Textbook Search Addition
Posted: Jun 5, 1995 11:20 AM
Subject: Textbook Search Addition
From: JoanI10084@aol.com at internet
Date: 6/4/95 8:07 AM
>Greetings again-
>I recently posted a request for information about 3 textbook series that my
>school district is considering. I left one important program out. We are
>interested in information about the following:
>***Addison Wesley- K-8
>***Silver Burdett- K-8
>***Scott Foresman- K-8
>***Prentice Hall for the Middle Grades- 6-8
>If anyone uses these and can offer comments-positive or negative-it would be
>greatly appreciated.
Of these, I've only used the Addison-Wesley (I've taught 3-5 grade math from
them). I'd love to hear what other people think, but personally, I loathed
these books. Here's what I saw:
1) Skills are introduced individually, with very little connection.
So adding with regrouping is different from adding is different from
subtracting is different from subtracting across a middle zero is
different from...
You get the idea. I had no desire to communicate to them that math was
about learning all the different rules for all the different situations.
2) Each skill is introduced with a (sometimes engaging, sometimes not)
story problem. The problem is worked out in detail for the students.
On the opposite page is between 40 and 60 drill problems, none of them
with interesting stories attached. My kids would have been bored silly.
3) There were 3 additional soft-bound books to use as companions to the
hard-cover text. They were called (if I remember correctly) "Reteaching,"
"Building Thinking Skills," and "More Practice." The "More Practice"
problems looked exactly like the ones in the book. The "Reteaching"
problems looked exactly like the ones in the book, but they had an example
worked out at the top. The "Building Thinking Skills" book was great,
and it was the only part of the series I used after the first two weeks.
4) The "geometry" chapter in each of the books consisted entirely of
naming shapes. (The increase in difficulty from 3-5 included adding
3D shapes into the mix.) No building, no drawing, no clay, ...
Nothing that should be in the geometry experience of kids.
You don't really say what you're looking for in a text series. The AW
book will provide lots of drill, lots of problems for the teacher to give
as homework, and one small soft-cover book of real gems that's all too
easy to ignore. I dug up my own materials and taught from them, because
the AW series was the only text I had. Luckily, I had computers, very
small classes (4-5 students), and lots of other materials.
I also had great kids. Unfortunately, all of their math experience had
been with the AW books, so they didn't think I was really teaching them
math. They liked it, but it wasn't math. One kid had been diagnosed
very young as learning disabled. He had never really been able to remember
the algorithms for adding, subtracting, etc. and he had done horribly
on timed multiplication tests. However, he had amazing problem solving
skills. He could read and understand problems, apply both old and new
concepts to solving them, and he was a very creative and careful thinker.
He could answer probing questions about his problem solutions; he had
clearly thought them through. But he kept insisting he was no good at
math. At the end of the year, all the kids had to take a standardized
test. For the first time, they could use calculators. This bright little
boy said to me, "but if I use a calculator, I'll get them all right."
And he did, too. No thanks to AW.
Sorry to ramble...
Michelle Manes
Education Development Center
Bending Moments
Question : A beam 10 meters long, with 3 forces acting on it and 2 re-acting forces on it to keep it in equilibrium. The re-acting forces are at 2 meters from either end. The forces on the beam are 10 kN at 3 meters from the left end, 20 kN at 10 meters from the left end, and a distributed load of 5 kN/m over the full 10-meter beam.
a) work out the 2 re-acting forces
b) Calculate the bending moment at 1meter intervals along the beam
c) Draw a bending moment diagram of the beam
Right, so there's the question. My issue, I believe, is calculating the re-acting forces. I have tried numerous ways, and believe that both reacting forces will be 40 kN each.
As taking moments from R1, (10 x 1) + (50 x 3) + (20 x 8) = R2 x 8
R2 = 40
As the UDL = 50KN at 5 meters, R1 = 80 - 40 =40KN
Then I drew a shear diagram which seemed fine.
Then I calculated each meter individually expecting the 10th meter to equal zero.
However I get M10 = 0 - (10 x 5 x5) + (40 x 8 ) - (10 x 7 ) + (40 x 2) = 60
Obviously not right!!!
Could someone please give me some guidance whether you believe it's the re-acting forces or whether it's my calculations for working out each meter.
Any Help Would Be Appreciated. | {"url":"http://www.physicsforums.com/showthread.php?t=520416","timestamp":"2014-04-17T21:26:52Z","content_type":null,"content_length":"23307","record_id":"<urn:uuid:0b9ff361-9398-478f-8949-4f705853901a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
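For anyone checking the numbers: when taking moments about R1, the lever arm of R2 is the span between the supports, 8 - 2 = 6 m (not 8 m), which changes both reactions. A short numeric sketch of the moment balance, treating the distributed load as 5 kN/m (so a 50 kN resultant at midspan) and measuring lever arms from the support at x = 2 m:

```python
# Loads as (magnitude in kN, position in m from the left end); the
# 5 kN/m UDL over the full 10 m is replaced by its 50 kN resultant at x = 5.
loads = [(10.0, 3.0), (20.0, 10.0), (50.0, 5.0)]
x1, x2 = 2.0, 8.0        # support positions (2 m from either end)

# Moments about R1: the lever arm of R2 is the span between supports.
moment_about_R1 = sum(F * (x - x1) for F, x in loads)
R2 = moment_about_R1 / (x2 - x1)
R1 = sum(F for F, _ in loads) - R2
print(R1, R2)            # the two reactions; R1 + R2 = 80 kN
```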
[FOM] Re: On Physical Church-Turing Thesis
Dmytro Taranovsky dmytro at mit.edu
Wed Feb 11 12:03:58 EST 2004
I do not have references to the precise formulations of recursiveness vs.
recursiveness with randomness. However, as you (Toby Ord)
suggested, one can ask whether there are definable in second order
arithmetic (without parameters) but non-recursive functions that are
physically computable. A positive answer would mean that the physical
world is non-recursive, while a negative answer would make it almost
certain that the physical world is recursive (but possibly with
Do you have a reference for infinite travel in finite time in Newtonian mechanics?
Notice that in classical mechanics, the n-body problem in gravitation is
decidable (as to the approximate location of the bodies at a given
time) up to (but probably not including) the moment of a collision or
ejection to infinity, when the theory breaks down because of faulty
assumptions (point particles, classical mechanics, etc.)
>There are also many rather bizarre limitations that recursive physics
> would place upon the universe. For example, it would mean that no
> measurable quantity (in some currently under specified sense) could
> grow faster than the busy beaver function or slower than the inverse of
> the busy beaver function
A quantity that grows faster than the "busy beaver function" would
quickly become much larger than the observable universe, while if it
grows at the rate of an inverse busy beaver function it would become
essentially constant (for billions of years). I do not view the
limitations as bizarre.
>As has been pointed out in another current post, this problem (as put)
> is no different from knowing whether or not a given computer (with
> unbounded memory) is computing multiplication.
I think that the results (should they come in) will be accepted as long
as they are reasonable. However, I would be skeptical if the results
tell us that Peano Arithmetic is inconsistent without providing a human-verifiable proof of inconsistency.
I should note that discovery of a failure of physical Church thesis
would be a most extraordinary event. It would have trillions of dollars in
applications assuming that it can be used to *effectively* solve the
halting problem.
Dmytro Taranovsky
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2004-February/007896.html","timestamp":"2014-04-19T07:03:54Z","content_type":null,"content_length":"4860","record_id":"<urn:uuid:a9ecb595-f374-400d-a009-183806c746cb>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by bebe
Total # Posts: 38
How much energy is required to raise the temperature of 3 kg of lead from 15°C to 20°C? Use the table below and this equation: Q = mcΔT.
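For what it's worth, the plug-in for that first question looks like this; the specific heat value for lead, c ≈ 128 J/(kg·°C), is an assumption here since the poster's table isn't shown:

```python
# Q = m * c * dT, with an assumed specific heat for lead of 128 J/(kg.C)
m = 3.0            # kg
c = 128.0          # J/(kg.C), assumed -- use the value from your own table
dT = 20.0 - 15.0   # C
Q = m * c * dT
print(f"Q = {Q:.0f} J")   # 1920 J with this value of c
```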
A model of a boat is 1 3/8 feet long. How long is the model of ship that is that is three times as long, if the models are made to the same scale? Make a drawing to help you solve the problem, and
then calculate the answer.
Wendy found a rule for a table. The rule she made is y = x + 2.She says that means every y-value will be even.Is she correct?
Sal made a map of his neighborhood.According to his map,Sal's house is 6 units away from the grocery store. The grocery store is 5 units away from the coffee shop.The coffee shop is 1 unit from Sal's
house.How is this possible?
on a map, John is standing at (11,11). His friend Lucy is standing at (1,11).John took 10 steps to the right. Is he standing with Lucy now?
Doug is standing at (2,1).Susan is standing (1,2).Who is farther to the right? How do you know?
describe the interval you would use for a bar graph if the data ranges from 12 to 39 units
William wants to know how many kilograms a 1000 gram cantaloupe weighs. He converts the measurement and says the melon is 1,000,000 kilograms. What did he do wrong?
Mary said that the best unit of measure to weigh her cat was ounces.Is she correct? Explain.
Nathan owes Michael $5.00.Nate expressed how much he owes as an integer.Michael expressed how much Nate owes him as an integer.What do the two integers have in common?
Alexandra has 18 yo-yos.She spent $9.00 to buy them all.She wrote the equation 18 divided by 9 = y to find how many yo-yos she got for each dollar she spent.Sven has 9 yo-yos.He spent $18.00 to buy
them all.He wrote the equation 18 divided by 9 = y.What does Sven's equatio...
explain how you can use a grid to subtract 1.65 - 0.98
draw a number line and label at least 5 points as fractions and decimals.explain your answer
glenda wrote 1/7 of her paper on Monday,1/14 of her paper on tuesday,and 2/28 of her paper on wednesday.she said she wrote more than half of her paper,is she correct?why or why not?
in what way is subtracting fractions with unlike denominators like adding fractions with unlike denominators?
In missy's sports-cards collection, 5/7 of the cards are baseball. In frank's collection, 12/36 are baseball.Frank says they have the same fraction of baseball cards.Is he correct?
explain why 1/2 of 12 horizontal squares of Region A is not larger tha n 1/2 of 12 vertical squares Region B.
Fluorine-18, which has a half-life of 110 minutes, is used in PET scans. If 100 mg of fluorine-18 is shipped @ 8 AM, how many milligrams of the radioisotope are still active if the sample arrives at the
lab at 1:30 pm?
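That fluorine-18 question is a straightforward half-life calculation: 8:00 AM to 1:30 PM is 330 minutes, i.e. exactly three half-lives of 110 minutes. A quick check:

```python
half_life = 110.0                  # minutes
elapsed = 5.5 * 60                 # 8:00 AM to 1:30 PM = 330 minutes
remaining = 100.0 * 0.5 ** (elapsed / half_life)
print(f"{remaining:.1f} mg")       # 12.5 mg
```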
a 65kg student holding a 5 kg ball is at rest on ice. if the student throws the ball at 3m/s to the left, what is her velocity after throwing the ball
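The ice-skater question above is a momentum-conservation exercise. A sketch, taking the 65 kg as the student's mass without the ball (which is how the numbers read):

```python
m_student = 65.0   # kg (assumed to exclude the ball)
m_ball = 5.0       # kg
v_ball = -3.0      # m/s, to the left
# Total momentum is zero before the throw, so it must be zero after:
v_student = -(m_ball * v_ball) / m_student
print(f"{v_student:.2f} m/s")   # about +0.23 m/s, i.e. to the right
```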
Week 4: Exercises from E-Text 12. Individual Assignment: Exercises From the E-Text Due 7/12/10 Resource: Applied Statistics in Business and Economics Prepare answers to the following exercises
from the Applied Statistics in Business and Economics e-text: o Chapte...
A plumbing supplier's mean monthly demand for vinyl washers is 24,212 with a standard deviation of 6,053. The mean monthly demand for steam boilers is 6.8 with a standard deviation of 1.7. Compare the dispersion of these distributions. Which demand pattern has more relative variation?
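The plumbing-supplier exercise is asking for the coefficient of variation (CV = standard deviation / mean), which puts the two dispersions on a comparable relative scale. A sketch:

```python
cv_washers = 6053 / 24212      # sd / mean
cv_boilers = 1.7 / 6.8
print(f"washers: {cv_washers:.1%}, boilers: {cv_boilers:.1%}")
# Both come out at 25.0%, so the two demand patterns have the same relative variation.
```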
What are the challenges of managing your chosen terrestrial resource issue? What human activities contribute to the problem?
need help
The nature of carbon dioxide in the atmosphere is different, though. Global warming is a global problem and is not specific to one area. Does this make carbon offsets a better idea than mitigation?
heyyy. i need the answer for ths question tooo...
determine whether each of the following reactions occur. 2Ni(s) + MgSO4(aq)---> if it doesn't occur explain why
there it is in case anyone else needs it.. =]]
Because the ions are charged, indicating high polarity, which lipids do not possess; they're hydrophobic, and therefore ions cannot pass through the lipid bilayer, since lipids are highly nonpolar, unless through an ion channel, which has a higher polarity.
haha got it
solve the equation (x-5)^2=10
what happens when the king of beasts runs in front of a train?
Bacteria Y has a mutation within its genome, at gene R, that provides resistance to neomycin. Bacteria Z is sensitive to neomycin; however, when Bacteria Y is placed in a mixture with Bacteria Z, Bacteria Z also gains neomycin resistance. One cell of Bacterium Y undergoe...
Ethnic Diversity
Thank you all your ur helpppp
social science
How does the government intervene to control environmental pollution? externalities or rivalry consumption
can you tell me the answer to this riddle
what happens when the king of beasts runs in front of a train? | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=bebe","timestamp":"2014-04-21T13:34:19Z","content_type":null,"content_length":"13478","record_id":"<urn:uuid:3e03dae1-c72c-4dfd-a14c-a3276445b401>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bar Charts
Bar Charts, like pie charts, are useful for comparing classes or groups of data. In bar charts, a class or group can have a single category of data, or they can be broken down further into multiple
categories for greater depth of analysis.
Things to look for:
Bar charts are familiar to most people, and interpreting them depends largely on what information you are looking for. You might look for:
• the tallest bar.
• the shortest bar.
• growth or shrinking of the bars over time.
• one bar relative to another.
• change in bars representing the same category in different classes.
Other tips:
• Watch out for inconsistent scales. If you're comparing two or more charts, be sure they use the same scale. If they don't have the same scale, be aware of the differences and how they might
trick your eye.
• Be sure that all your classes are equal. For example, don't mix weeks and months, years and half-years, or newly-invented categories with ones that have trails of data behind them.
• Be sure that the interval between classes is consistent. For example, if you want to compare current data that goes month by month to older data that is only available for every six months,
either use current data for every six months or show the older data with blanks for the missing months.
Bar chart statistics:
For each bar in the bar chart, the following statistics are useful:
Mean: the average height of all the bars.
Maximum: the maximum value (tallest bar) in the series.
Minimum: the minimum value (shortest bar) in the series.
Sample Size: the number of values (bars) in the series.
Range: the maximum value minus the minimum value.
Standard Deviation: indicates how widely data is spread around the mean.
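Those statistics are easy to compute for any series of bar heights. A minimal sketch (the population standard deviation is used here; another common choice is the sample version with n - 1):

```python
import math

def bar_stats(values):
    """Summary statistics for a series of bar heights."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n   # population variance
    return {
        "mean": mean,
        "maximum": max(values),
        "minimum": min(values),
        "sample_size": n,
        "range": max(values) - min(values),
        "std_dev": math.sqrt(var),
    }

heights = [12, 18, 25, 9, 16]       # example bar heights
print(bar_stats(heights))
```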
8087 math coprocessor ppt
Page / Author Tagged Pages
Title: graphics processing unit
Download Full Report And Abstract
Page Link: graphics processing unit
Download Full Report And Abstract - seminar report of gpu, abstract of gpu, graphics processing unit full report, what is math intensive process in gpu, latest graphical processing unit technology
Posted By: computer science crazy pdf for seminors, graphical processing unit seminar report, gpu seminar report, abstract on gpu technical seminar, gpu seminar report pdf, graphics processing
Created at: Saturday 21st of unit abstract, seminar on graphics processer unit, abstract and ppt on graphics processer unit and, full information on graphics processer unit, abstract gpu,
February 2009 07:39:44 PM abstract graphics processing unit architecture, graphics processing unit seminar report, abstract of the graphics cards, seminar report on graphic card, graphic
Last Edited Or Replied at :Friday processor unit, gpu abstract,
20th of January 2012 08:40:16 AM
Title: PID Controller BLDC motor
Mathematical model
Page Link: PID Controller BLDC
motor Mathematical model - bldc motor modelling pdf, mathematical model bldc motor, mathematical modeling of bldc motor ppt, bldc motor mathematical model, bldc mathematical model,
Posted By: nitz456@gmail.com mathematical modeling pid, pid controller mathematical model, seminar topics on bldc motor, matlab code for mathematical model of bldc, designing a pid
Created at: Tuesday 16th of controller using matlab, mathematical modelling of bldc motor, mathematical modelling of bldc motor in matlab, 8051 based bldc controller projects pdf, pid using
February 2010 08:09:00 AM math model, find pid using math model, mathematical model of bldc motor speed control, mathematical model of bldc motor, pid controller for bldc motor,
Last Edited Or Replied at mathematical modelling bldc motor, bldc pid,
:Wednesday 11th of April 2012
04:13:41 AM
Title: RAID report
Page Link: RAID report -
Posted By: chandanrao raid levels seminor, hadoop diskreduce blaum roth, raid report, blaum roth, seminar report on raid, raid absrtract for seminar, raid seminar report, raid seminar
Created at: Sunday 21st of March , math about raid in ppt file, raid seminar pdf, raid seminar abstract, raid prototyping technique, seminar report on raid technology, seminar on raid 5, seminar
2010 06:47:51 PM repport on raid, seminar topics on raid controller, d borthakur the hadoop distributed le system architecture and design 2009, raid redundant array of
Last Edited Or Replied at :Monday inexpensive disks for data intensive scalable computing presented by kanwar rajinder pal singh, seminar on raid technology, seminar on raid technology report,
20th of February 2012 08:53:25 AM raid doc, seminar report on raid pdf,
Title: Application of Mathematics
in Robotics full report
Page Link: Application of
Mathematics in Robotics full report applications of mathematics in electronics engineering in ppts, mathematics relation with robotics project, how is math used in robiotics, use mathematica to
- program robots, applications of mathematics in electronics engineering, ppt on maths applications topics, application of mathematics in engineering ppt,
Posted By: project topics application of mathematics in robotics, application of maths ppt, mathematics used in robotics, maths used in robotics, applications of robotics in maths,
Created at: Tuesday 13th of April applications of mathematics in robotics, applications of mathematics in engineering ppt, maths of robotics, applications of mathematics to engineering ppt,
2010 05:57:26 AM applications of maths in real life ppt,
Last Edited Or Replied at :Tuesday
13th of April 2010 05:57:26 AM
Title: COMPARISON OF DISTRIBUTION
Page Link: COMPARISON OF
DISTRIBUTION SYSTEMS POWER FLOW comparision of various methods of load flow, backward forward multiple source distribution system, comparison of load flow solution methods, r x ratio for
ALGORITHMS FOR VOLTAGE DEPENDENT distribution system, an algorithm for radial distribution power flow in complex mode including voltage controlled buses, ppt on an algorithm for radial
LOADS - distribution power flow in complex mode including voltage control buses, load flow radial distribution networks, load flow for distribution system ppt,
Posted By: seminar surveyer mathematical solution of power system load flow using forward dist flow equation, ppt on load flow analysis, gauss, powered by mybb free math solver for the
Created at: Monday 10th of January substitution method, dependency ratio ppt,
2011 10:10:55 AM
Last Edited Or Replied at :Monday
10th of January 2011 10:10:55 AM
Title: DEVELOPMENT OF TUNING
Page Link: DEVELOPMENT OF TUNING
ALGORTHM OF PID CONTROLLER USING pid controller in plc, pid control using plc math, plc with pid controller, pid controller or plc, pid using plc, pid algorithm code plc, pid using math in plc,
PLC - pid tuning using plc, pid control programming using plc, which machines uses plc and pid together, controllertuningprojectstopics, pid controller on plc, pid
Posted By: seminar class controller algorithm, software implementation of pid control algorithm, pid controller doc, pid controller, plc o computer, level control system pid using plc,
Created at: Monday 07th of March plc pid tuning, plc pid controller, pid controllers tuning, seminar reports on tuning of controllers, tuning of controlls, temperature control system pid using
2011 11:11:51 AM plc,
Last Edited Or Replied at :Monday
07th of March 2011 11:11:51 AM
Title: 3D CAPTCHA
Page Link: 3D CAPTCHA -
Posted By: Anushekar
Created at: Sunday 27th of March
2011 03:03:06 PM 3d captcha code, 3d captcha, report on 3d captcha, 3d captcha seminar report, 3d captcha ppt, seminar report on math captcha, captcha, seminar on 3d captchas,
Last Edited Or Replied at seminor topic on 3d captcha, what is 3d captcha, whwt is 3d captcha, 3d captcha report seminar, 3d captcha report seminar in pdf,
:Wednesday 15th of February 2012
09:38:39 AM
Title: Electrical Machines EE256
Page Link: Electrical Machines
EE256 - questions and answers about dc machines ppt, question and answers on electrical machines ii, pole pitch related math in dc generator, electrical machines dc
Posted By: seminar class generators questions, diffenece self excited and separate, electrical machines question answer, diff bw pm excited and self excited generators, electrical
Created at: Monday 28th of March machines 2 question with answers, write down the brushes of a dc machine, electrical machines question and answers in pdf, electrical machines 1 questions and
2011 08:58:17 AM answers download, answers ge question 20442, difference bw self excited seperately excited, free electrical machines question answer, lap winding and wave
Last Edited Or Replied at :Saturday winding ppt, winding dc machines ppt,
15th of September 2012 09:09:51 AM
Title: Laplace Transforms
Page Link: Laplace Transforms -
Posted By: seminar class application of laplace transform report, complete report on laplace transform, application of mathematicsinlaplace transform with ppt, www complete report
Created at: Saturday 02nd of April onlaplace transform, a report laplace transforms, how to make a report on inverse laplace transform, latest seminar topics in maths, math seminar topics,
2011 08:39:26 AM mathematics seminar topics, mathematics seminar, ppt on application of laplace transformation, math seminar topic, synopsis of mathematical topic laplace
Last Edited Or Replied at :Tuesday transform, laplace, application of laplace transform in electrical for seminar, laplace transform ppt, laplace transform, application of differential equation by
07th of August 2012 10:41:15 AM laplace transform ppt, mathmatics,
Title: DSP FPGA BASED INTEGRATED
Page Link: DSP FPGA BASED
INTEGRATED CONTROLLER DEVELOPMENT dsp controller in power electronics seminars, project on dsp with fpga, dsp fpga based controller, en zt, powered by mybb boat engine high performance parts,
FOR HIGH PERFORMANCE ELECTRIC - seminar topics based on dsp fpga, controller based dsp project, powered by mybb high performance windows, which dsp is used for drive control, a dsp and fpga
Posted By: seminar class based integrated controller development solutions for high performance electric drives ppt, fpga transputer, pc coprocessor card, modern dsp based controller, r
Created at: Tuesday 19th of April d projects on dsp based inverter ac speed control drives, dsp based controllers for electrical drves ppt, fpga based dsp projects, digital control power
2011 10:34:49 AM electronics ppt, mod le dsp fdga,
Last Edited Or Replied at :Tuesday
19th of April 2011 10:34:49 AM | {"url":"http://seminarprojects.com/s/8087-math-coprocessor-ppt","timestamp":"2014-04-21T10:14:57Z","content_type":null,"content_length":"49413","record_id":"<urn:uuid:99230cf6-02ce-4af7-9f89-e1c8b08bd04c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is Slovin's Formula
Old Unopened Bottles of Whiskey :: Art of Drink
It seems people are finding lots of old bottles of whisky that are still sealed and are wondering whether they are still good and what the value of the bottle would be.
The 2nd International Research Symposium in Service Management, Yogyakarta, INDONESIA, 26 – 30 July 2011 ... (Slovin's formula with 10% precision level) was used.
Plant hormone. Wikipedia, the free encyclopedia
Plant hormones (also known as phytohormones) are chemicals that regulate plant growth, which, in the UK, are termed 'plant growth substances'. Plant hormones are.
Slovin's Formula Sampling Techniques | eHow
Slovin's Formula Sampling Techniques. When it is not possible to study an entire population (such as the population of the United States), a smaller sample is taken.
So, You Want to be a Mediator?. mediate
I find myself regularly asked “what do I need to do to become a mediator?” While I do not pretend to have all of the answers, here are some suggestions for.
Aleli powerpoint - SlideShare
Oct 14, 2010 · Aleli powerpoint Presentation Transcript. Learners’ Preferences and Teaching Strategies in Teaching Mathematics of Fourth Year High School at Mabitac.
What is the formula to find the marginal product - The Q&A.
Formula for marginal product of labor? Change in Quantity/ Change in Units of Labor. What is the formula in finding the margin of error in slovins formula if the.
Slovin's Formula: What is it and When do I use it?
Main Index > Slovin's Formula If you take a population sample, you must use a formula to figure out what sample size you need to take. Sometimes you know s
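The formula these pages are alluding to is usually written n = N / (1 + N·e²), where N is the population size and e the margin of error. A hedged sketch; note that Slovin's formula is a rule of thumb that implicitly assumes maximum variability and roughly a 95% confidence level:

```python
import math

def slovin(N, e):
    """Sample size for population N at margin of error e (e.g. 0.05 for 5%)."""
    return math.ceil(N / (1 + N * e ** 2))

print(slovin(10000, 0.05))   # 385
```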
Coefficient of Determination: What it is and How to.
Description of the coefficient of determination. Free online calculators and homework help forum. Hundreds of how to articles.
What is the formula for calculating a sample size ?
What is the formula to calculate size of a sample? by using the capture-recapture method which gives you the total size of organisms in a population.
Research on Socio-economic status of parents and academic.
research on socio-economic status of parents and acdemic performances of students ababa, sheryl ann b. gallarde, khizel jane p. gica, aileen l. gillado, lor-ann b.
Acceleration of the Universe - A.R. Liddle
5.2. Scaling solutions and trackers
One reason for having optimism that quintessence can at least address the coincidence problem comes from an interesting class of solutions known as scaling solutions or trackers. These arise because
the scalar field does not have a unique equation of state p = p(ρ). Rather, writing p = wρ,
in which terminology cosmological constant behaviour corresponds to w = - 1, the scalar field velocity depends on the Hubble expansion, which in turn depends not only on the scalar field itself but
on the properties of any other matter that happens to be around. That is to say, the scalar field responds to the presence of other matter.
A particularly interesting case is the exponential potential
V(φ) = V_0 exp(-λκφ), where κ² = 8πG,
which we already saw in the early Universe context as Eq. (30). If there is only a scalar field present, this model has inflationary solutions a ∝ t^{2/λ^2} for λ^2 < 2, and non-inflationary power-law
solutions otherwise. However, if we add conventional matter with equation of state p = wρ, a new attractor behaviour appears [28, 29, 30]. These solutions take the form of scaling solutions, where the scalar field energy density (indeed
both its potential and kinetic energy density separately) exhibit the same scaling with redshift as the conventional matter. That is to say, the scalar field mimics whatever happens to be the
dominant matter in the Universe. So, for example, in a matter-dominated Universe, we would find ρ_φ ∝ 1/a^3. If the matter era were preceded by a radiation era, at that time the scalar field would redshift as 1/a^4, and it would make a smooth transition between these behaviours at equality. The ratio of densities is decided only by the fundamental parameters λ and w. So, at any epoch one expects the
scalar field energy density to be comparable to the conventional matter.
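For reference, the standard quantitative result for the exponential-potential scaling solution (a literature result, e.g. Copeland, Liddle and Wands, which is implied but not displayed in the extracted text above) is that the field's density parameter is fixed by λ and w:

```latex
% Scaling solution for V(\phi) = V_0 e^{-\lambda\kappa\phi} in a background fluid with p = w\rho:
\Omega_\phi = \frac{3(1+w)}{\lambda^{2}}, \qquad w_\phi = w,
\qquad \text{(valid for } \lambda^{2} > 3(1+w)\text{)}.
```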
Unfortunately this is not good enough. We don't want the scalar field to be behaving like matter at the present, since it is supposed to be driving an acceleration, and we need it to be negligible in
the fairly recent past. This requires us to consider alternatives to the exponential potential, a common example being the inverse power-law potential [28]
V(φ) = V_0 φ^{-α},
where α > 0. Reference [31] gave a complete classification of Einstein gravity models with scaling solutions, defined as models where the scalar field potential and kinetic energies stay in fixed proportion. The
exponential potential is a particular case of that, but in general the scaling of the components of the scalar field energy density need not be the same as the scaling of the conventional matter, and
indeed the inverse power-law potential is an example of that; if the conventional matter is scaling as 1/a^m where m = 3(1 + w), there is an attractor solution in which the scalar field densities
will scale as ρ_φ ∝ 1/a^{mα/(α+2)}.
With negative effective equation of state w_φ = (αw - 2)/(α + 2) on this attractor, the scalar field density decays more slowly than that of the conventional matter, and so eventually comes to dominate.
This type of scenario can give a model capable of explaining the observational data, though it turns out that quite a shallow power-law is required in order to get the field to be behaving
sufficiently like a cosmological constant (current limits require w_φ < -0.6 at the present epoch, where w_φ = p_φ/ρ_φ [32]). Also, the epoch at which the field takes over and drives an acceleration is still more or less being put in by hand; it turns out that the acceleration takes over when φ ~ m_Pl, and so V_0 ~ 10^{-120} m_Pl^4 is required to ensure this epoch is delayed until the present.
Various other forms of the potential have been experimented with, and many possibilities are known to give viable evolution histories [33]. While such models do give a framework for interpreting the
type Ia supernova results, in many cases with the possibility ultimately of being distinguished from a pure cosmological constant, I believe it is fair to say that so far no very convincing
resolution of either the cosmological constant problem or the coincidence problem has yet appeared. However, quintessence is currently the only approach which has any prospect of addressing these | {"url":"http://ned.ipac.caltech.edu/level5/Liddle3/Liddle5_2.html","timestamp":"2014-04-19T04:49:22Z","content_type":null,"content_length":"7555","record_id":"<urn:uuid:85effc8c-6b2a-47b1-b8b6-1473d3ab3f35>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the propagation of plane waves in dissipative anisotropic media.
ASA 124th Meeting New Orleans 1992 October
1aPA6. On the propagation of plane waves in dissipative anisotropic media.
Jose M. Carcione
Osservatorio Geofisico Sperimentale, P.O. Box 2011 Opicina, 34016 Trieste, Italy
Hamburg Univ., Germany
Fabio Cavallini
Osservatorio Geofisico Sperimentale, Trieste, Italy
A theory for propagation of time-harmonic fields in dissipative anisotropic media is not a simple extension of the elastic theory. Firstly, one has to decide for an appropriate constitutive equation
that reduces to Hooke's law in the elastic limit. In this work, one relaxation function is assigned to the mean stress and three relaxation functions are assigned to the deviatoric stresses in order
to model the quality factors along preferred directions. Secondly, in dissipative media there are two additional variables compared to elastic media: the magnitude of the attenuation vector and its
angle with respect to the wave-number vector. When these vectors are colinear (homogeneous waves), phase velocity, slowness, and attenuation surfaces are simply derived from the complex velocity,
although even in this case many of the elastic properties are lost. The wave fronts, defined by the energy velocities, are obtained from the energy balance equation. The attenuation factors are
directly derived from the complex velocities, but the quality factors require the calculation of the potential and loss energy densities, yet resulting in a simple function of the complex velocities.
[Work supported by EEC.] | {"url":"http://www.auditory.org/asamtgs/asa92nwo/1aPA/1aPA6.html","timestamp":"2014-04-20T16:54:54Z","content_type":null,"content_length":"2000","record_id":"<urn:uuid:1f809f0d-3e7d-4a48-90da-ae978238869b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
I need help on this problem. How do I solve ln x + ln6x = 8 ? without a calculator ?
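For the record, the standard route is to combine the logs: ln x + ln 6x = ln(6x²) = 8, so x = e⁴/√6 (taking the positive root, since ln x needs x > 0). A quick numerical check:

```python
import math

x = math.exp(4) / math.sqrt(6)   # closed-form solution of ln x + ln 6x = 8
print(round(x, 4))               # about 22.2896
assert abs(math.log(x) + math.log(6 * x) - 8) < 1e-12
```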
randomization tests
March 11th 2008, 04:36 PM #1
how do you approach these kinds of questions?
(a) In an experiment, the following survival times (in minutes) were recorded, under a certain condition, of the peroneal nerves of four cats and ten rabbits.
Cats: 25 33 43 45
Rabbits: 15 16 16 17 20 23 28 28 35 35
Use a randomization test to investigate whether these samples are from the same distribution. You should define your null and alternative hypotheses, and state clearly what you can conclude from
the results of your test.
(b) Consider the following data relating to the mandible lengths (in millimetres) of 10 male golden jackals "Canis aureus" in the collection at the British Natural History Museum:
Males: 120 107 110 116 114 111 113 117 114 112
Calculate an estimate of the mean mandible length for male golden jackals, given the observed data. Calculate the standard error of your estimate for the mean mandible length (using the bootstrap method).
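Since there are only 1001 ways to choose which 4 of the 14 animals are labelled "cats", part (a) can be done as an exact randomization test rather than by Monte Carlo sampling; part (b) is a textbook bootstrap. A sketch (the two-sided p-value convention and the number of bootstrap resamples are choices made here, not part of the question):

```python
import random
from itertools import combinations

cats = [25, 33, 43, 45]
rabbits = [15, 16, 16, 17, 20, 23, 28, 28, 35, 35]
pooled = cats + rabbits

def mean(xs):
    return sum(xs) / len(xs)

# (a) Exact randomization test: H0 = both samples come from the same
# distribution, so every relabelling of the 14 values is equally likely.
obs = mean(cats) - mean(rabbits)                      # 36.5 - 23.3 = 13.2
splits = list(combinations(range(14), 4))             # all 1001 relabellings
count = 0
for idx in splits:
    c = [pooled[i] for i in idx]
    r = [pooled[i] for i in range(14) if i not in idx]
    if abs(mean(c) - mean(r)) >= abs(obs) - 1e-12:    # two-sided
        count += 1
p_value = count / len(splits)
print(f"observed diff = {obs:.1f} min, exact two-sided p = {p_value:.4f}")
# A small p-value (well below 0.05 here) is evidence against H0.

# (b) Bootstrap standard error of the mean mandible length.
jackals = [120, 107, 110, 116, 114, 111, 113, 117, 114, 112]
rng = random.Random(0)
boot_means = [mean([rng.choice(jackals) for _ in jackals]) for _ in range(5000)]
bm = mean(boot_means)
se = (sum((b - bm) ** 2 for b in boot_means) / (len(boot_means) - 1)) ** 0.5
print(f"mean = {mean(jackals):.1f} mm, bootstrap SE = {se:.2f} mm")
```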
Proposition 48
If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right.
In the triangle ABC let the square on one side BC equal the sum of the squares on the sides BA and AC
I say that the angle BAC is right.
Draw AD from the point A at right angles to the straight line AC. Make AD equal to BA, and join DC.
Since DA equals AB, therefore the square on DA also equals the square on AB.
Add the square on AC to each. Then the sum of the squares on DA and AC equals the sum of the squares on BA and AC.
But the square on DC equals the sum of the squares on DA and AC, for the angle DAC is right, and the square on BC equals the sum of the squares on BA and AC, for this is the hypothesis, therefore the
square on DC equals the square on BC, so that the side DC also equals BC.
Since DA equals AB, and AC is common, the two sides DA and AC equal the two sides BA and AC, and the base DC equals the base BC, therefore the angle DAC equals the angle BAC. But the angle DAC is
right, therefore the angle BAC is also right.
Therefore if in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle
is right.
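As a numerical aside (no part of Euclid's argument), the proposition can be checked with the law of cosines: whenever the square on one side equals the sum of the squares on the other two, the computed contained angle comes out to a right angle.

```python
import math

def angle_opposite(a, b, c):
    """Angle (radians) opposite side a, from the law of cosines:
    a^2 = b^2 + c^2 - 2*b*c*cos(A)."""
    return math.acos((b * b + c * c - a * a) / (2 * b * c))

# 5^2 = 3^2 + 4^2, so the angle contained by the sides 3 and 4 is right.
A = angle_opposite(5.0, 3.0, 4.0)
```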
Epi-mono factorizations
November 12, 2012 by Qiaochu Yuan
In many familiar categories, a morphism $f : a \to b$ admits a canonical factorization, which we will write
$a \xrightarrow{e} c \xrightarrow{m} b$,
as the composite of some kind of epimorphism $e$ and some kind of monomorphism $m$. Here we should think of $c$ as something like the image of $f$. This is most familiar, for example, in the case of
$\text{Set}, \text{Grp}, \text{Ring}$, and other algebraic categories, where $c$ is the set-theoretic image of $f$ in the usual sense.
Today we will discuss some general properties of factorizations of a morphism into an epimorphism followed by a monomorphism, or epi-mono factorizations. The failure of such factorizations to be
unique turns out to be closely related to the failure of epimorphisms or monomorphisms to be regular.
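In $\text{Set}$ the factorization can be computed directly: corestrict $f$ onto its image (an epimorphism, i.e. a surjection) and then include the image into the codomain (a monomorphism, i.e. an injection). A small illustrative sketch, with finite sets and dicts standing in for functions (the names are ad hoc):

```python
def epi_mono_factor(f, domain):
    """Factor f : domain -> codomain through its image, returning
    (e, m, image) with m . e = f, e surjective onto image, m injective."""
    image = sorted(set(f[x] for x in domain))
    e = {x: f[x] for x in domain}    # corestriction of f to its image
    m = {y: y for y in image}        # inclusion of the image
    return e, m, image

f = {0: 'a', 1: 'a', 2: 'b'}
e, m, image = epi_mono_factor(f, [0, 1, 2])
assert all(m[e[x]] == f[x] for x in [0, 1, 2])   # m . e = f
assert set(e.values()) == set(image)             # e is epi (surjective)
assert len(set(m.values())) == len(m)            # m is mono (injective)
```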
The category of factorizations
Define the category of epi-mono factorizations of a morphism $f$ to be the category whose objects are epi-mono factorizations
$\displaystyle a \xrightarrow{e} c \xrightarrow{m} b$
of $f$ (so $e$ is an epimorphism, $m$ is a monomorphism, and $m \circ e = f$) and whose morphisms $(e_1, c_1, m_1) \to (e_2, c_2, m_2)$ are morphisms $g : c_1 \to c_2$ making the diagram
commute; that is, such that $e_2 = g \circ e_1$ and $m_1 = m_2 \circ g$. These two properties are already enough to conclude the following:
1. $g$ is an epimorphism (since $e_2$ is an epimorphism).
2. $g$ is a monomorphism (since $m_1$ is a monomorphism).
3. If $g$ exists, it is unique (since $e_1$ is an epimorphism, or alternately since $m_2$ is a monomorphism).
Thus the category of epi-mono factorizations of a morphism is a preorder. Moreover, the morphisms $g$ in the category are both monomorphisms and epimorphisms. Call such a morphism a fake isomorphism
if it is not an isomorphism (this terminology is nonstandard).
If we are working in a category with no fake isomorphisms, such as $\text{Set}$ or $\text{Grp}$, then any two epi-mono factorizations which are related by a morphism are isomorphic via a unique
isomorphism. (This doesn’t rule out the possibility that there are two epi-mono factorizations which are not related by any morphisms at all.) However, because there are categories with fake
isomorphisms, we do not expect uniqueness of epi-mono factorizations in general.
Example. In $\text{CRing}$, let $D$ be an integral domain and let $f : D \to \text{Frac}(D)$ be the inclusion of $D$ into its field of fractions. If $D$ is not a field, then $f$ is a fake
isomorphism; moreover, the category of epi-mono factorizations of $f$ is equivalent to the poset of subrings of $\text{Frac}(D)$ containing $D$, or equivalently the poset of localizations of $D$.
Example. In $\text{Top}$, any continuous bijection $f : X \to Y$ which does not have a continuous inverse is a fake isomorphism. Without loss of generality, we may take $X$ and $Y$ to have the same
underlying set; then we are just talking about a pair of topologies on $X$ one of which is strictly finer than the other. The category of epi-mono factorizations of $f$ is then equivalent to the
poset of topologies intermediate between these two topologies.
More generally, if $f : a \to b$ is a fake isomorphism, then it admits two nonisomorphic factorizations
$\displaystyle a \xrightarrow{\text{id}_a} a \xrightarrow{f} b, a \xrightarrow{f} b \xrightarrow{\text{id}_b} b$.
So the problem of non-uniqueness of epi-mono factorizations is closely related to the problem of existence of fake isomorphisms. Furthermore, previously we showed that a morphism which is either both
a monomorphism and a regular epimorphism or which is both a regular monomorphism and an epimorphism is necessarily an isomorphism. It follows conversely that the existence of fake isomorphisms
indicates the existence of epimorphisms or monomorphisms which are not regular.
Besides uniqueness, in full generality it is also necessary to worry about existence. For example, consider the free category on an idempotent. This is a category with a single object $\bullet$ and a
single non-identity morphism $f : \bullet \to \bullet$ satisfying $f^2 = f$. Then $f$ is neither a monomorphism nor an epimorphism, since the above identity shows that it is neither left nor right
cancellable, and since the only possible factorizations of $f$ are as
$\displaystyle f = f \circ f = f \circ \text{id} = \text{id} \circ f$
it follows that $f$ does not admit an epi-mono factorization.
Above we observed that one issue with the category of epi-mono factorizations is that it may fail to be connected: that is, there may be two epi-mono factorizations that are not related by any chain
of morphisms, hence even if there were no fake isomorphisms we would still not be able to conclude that epi-mono factorizations are unique.
However, mild categorical hypotheses guarantee that this is not an issue.
Theorem: Suppose that a category $C$ has either pushouts or pullbacks. Moreover, suppose that $C$ has no fake isomorphisms (e.g. because all monomorphisms are regular or because all epimorphisms are
regular). Then epi-mono factorizations in $C$ are unique (up to unique isomorphism).
Proof. The second hypothesis and the conclusion are categorically self-dual but the first hypothesis is not, so it suffices to prove the statement under the assumption that $C$ has pushouts. If $a \xrightarrow{e_1} c_1 \xrightarrow{m_1} b$ and $a \xrightarrow{e_2} c_2 \xrightarrow{m_2} b$ are two epi-mono factorizations of a morphism $f : a \to b$, consider the pushout $c_1 \sqcup_a c_2$ together with the inclusions $i_1, i_2 : c_1, c_2 \to c_1 \sqcup_a c_2$ and the induced map $g : c_1 \sqcup_a c_2 \to b$.
We claim that $i_1 \circ e_1 = i_2 \circ e_2$ is an epimorphism. To see this, suppose $p, q : c_1 \sqcup_a c_2 \to d$ are two other morphisms such that
$\displaystyle p \circ i_1 \circ e_1 = p \circ i_2 \circ e_2 = q \circ i_1 \circ e_1 = q \circ i_2 \circ e_2$.
Since $e_1, e_2$ are epimorphisms, it follows that $p \circ i_1 = q \circ i_1$ and $p \circ i_2 = q \circ i_2$. Hence $p \circ i_1, p \circ i_2$ and $q \circ i_1, q \circ i_2$ describe the same
commutative square, from which it follows by the universal property of the pushout that they factor through the same morphism $c_1 \sqcup_a c_2 \to d$, namely $p = q$.
It follows that $i_1, i_2$ are both epimorphisms. On the other hand, since $g \circ i_1 = m_1$ and $g \circ i_2 = m_2$ are monomorphisms, it follows that $i_1, i_2$ are both monomorphisms. Since $C$
has no fake isomorphisms, it follows that $i_1, i_2$ are both isomorphisms, hence $g$ is a monomorphism and $c_1 \sqcup_a c_2$ determines an epi-mono factorization which is isomorphic to both $c_1$
and $c_2$. The conclusion follows. $\Box$
Corollary: Suppose $C$ is a category with either pushouts or pullbacks. Then epi-mono factorizations in $C$ are unique if and only if $C$ has no fake isomorphisms.
Note that the corollary does not say anything about existence.
Department Research
Shu-chuan Chen, Associate Professor, Ph.D.
Dr. Chen's research mainly focuses on bioinformatics, especially in developing statistical methods and algorithms for functional genomic data. Dr. Chen's past publications involved with the
development of mixture models for clustering high dimensional sequences, its related theoretical justifications and applications. She also published papers in neuron spike trend studies, data mining,
analysis of election data and DNA sequences' matching probability.
Yu Chen, Associate Professor, Ph.D.
My main research interest lies in Lie theory and representation theory. Suppose that W is a Coxeter group, e.g., a finite group generated by reflections on a Euclidean space. Then W can be decomposed
into a disjoint union of left cells or two-sided cells. These cell structures can be applied to construct the representations of W and the representations of the Hecke algebra of W explicitly. When W
is a Weyl group or an affine Weyl group, the two-sided cells of W are also closely related to the unipotent classes in the corresponding linear algebraic group. Algebraists try to find an explicit
description of the left cells and two-sided cells in each W, especially in the case when W is an affine Weyl group.
I am also interested in applied mathematics, e.g., the theory of asset pricing in mathematical finance.
DeWayne Derryberry, Assistant Professor, Ph.D.
I am an applied statistician focused on collaboration and consulting with scientists. I often assist in data analysis when colleagues in other fields (geosciences, biology, etc.) have difficult or
unusual problems. Some of the problems I have worked on recently involve meta-analysis, discriminant analysis with messy data, partial least squares (projection to latent variables), geographically
weighted regression to explore spatial patterns and possible causes of prostate and breast cancer, and the use of LiDAR remote sensing to estimate landscape characteristics in semi-arid climates.
When appropriate, information criteria plays a role in model selection and assessment. I intend to use the right tool for the job, so I must teach myself new techniques as needed.
I also have a continuing interest in statistics education, including statistical literacy. I am developing a collection of cases for a case-based approach to applied statistics similar to the
Statistical Sleuth by Ramsey and Schafer, but aimed at the undergraduate level.
Robert Fisher, Professor, Ph.D.
My main research interest lies in Differential Geometry. My most recent publication with H. T. Laquer is titled Hyperplane Envelopes and the Clairaut Equation, Journal of Geometric Analysis, Vol. 20,
Issue 3 (2010), Pages 609-650. The paper brings a modern perspective to the classical problem of envelopes of families of affine hyperplanes. In the process, the classical results are generalized, and a key step in the work is the use of "generalized immersions". Briefly, every classical immersion defines a generalized immersion in a canonical way so that generalized immersions can be understood
as ordinary immersions "with singularities." Next, the concept of an envelope is given a modern definition, namely, an envelope is a generalized immersion "solving the family" that has a universal
mapping property relative to all other full rank "solutions". The beauty of this approach becomes apparent in the "Envelope Theorem". With one mild assumption, namely that the associated family of
linear hyperplanes is immersed, it is proven that a family of affine hyperplanes always has an envelope, and that envelope is essentially unique.
Briana Foster-Greenwood, , Ph.D.
My research combines several areas of mathematics: Algebra (Commutative and Noncommutative), Geometry, Representation Theory, Invariant Theory, Combinatorics, and Cohomological Algebra. More
specifically, I am interested in complex reflection groups (including symmetry groups of regular complex polytopes) and the deformation theory of algebras arising from group actions.
Given a finite group acting linearly on a finite-dimensional vector space V, one can define a skew group algebra as the semidirect product of the group algebra and the coordinate ring of V. Various
algebras of interest (such as graded Hecke algebras obtained by replacing a commutative relation vw-wv=0 with a noncommutative relation of a certain form) are already known to occur as deformations
of skew group algebras. What other associative deformations are possible? Current projects invoke invariant theory and partial orderings on groups to determine the Hochschild cohomology governing
potential deformations of a skew group algebra.
Yury Gryazin, Associate Professor, Ph.D.
The main research interests of Dr. Gryazin lie in the area of Numerical Analysis and Scientific Computation. More specifically, he focuses on the development of computational approaches to the
solution of applied mathematical problems arising from wide range of applications including computational electromagnetics, medical imaging, inverse problems, computational fluid dynamics, and
computational finance. The results of his research recently appearing in internationally recognized publications were related to Krylov subspace based numerical methods for large sparse nonsymmetric
algebraic systems and regularized stochastic optimization algorithms in risk portfolio management.
Leonid Hanin, Professor, Ph.D.
The main focus of my current research is mathematical modeling and solving associated statistical problems in biomedical sciences including cell biology, molecular biology, biochemistry, radiation
biology, bioinformatics, cancer biology and epidemiology, and clinical oncology. The mathematical basis of this work is probability models, stochastic processes and differential equations. I am also
working on mathematical problems of heat transfer.
Cathy Kriloff, Professor, Ph.D.
I use primarily algebraic and combinatorial methods to study the representation theory of graded (also called degenerate affine) Hecke algebras that are built from the groups of symmetries of
• regular polygons in the plane
• the regular icosahedron (or its dual the dodecahedron) in 3-space, and
• the two related dual regular polyhedra in 4-dimensional space with five-fold symmetry (the 120-cell and 600-cell)
The algebras are infinite-dimensional but their representation theory (the ways they can act as linear transformations on finite-dimensional vector spaces) is tightly controlled by combinatorial
properties of the finite symmetry group from which they are built. These non-crystallographic cases are less commonly considered than the crystallographic cases that arise within Lie theory, but the
differences and similarities in their representation theory provide intriguing hints at possible new underlying objects and directions for unifying currently separate theories. Recent generalizations
of graded Hecke algebras include the specific examples listed above, providing new avenues of exploration and further motivation to study these cases. Various appearances of non-crystallographic
objects in the mathematics and physics literature (e.g., in the study of representation theory, in connection with moment graphs in geometry, in several combinatorial contexts, and in quasicrystals,
amorphous solids, and wavefronts) also indicate tantalizing connections and directions for further study.
Patrick Lang, Professor, Ph.D.
Mathematical analysis of statistical methods.
Bennett Palmer, Professor, Ph.D.
My research involves applications of variational calculus to problems in differential geometry. The shape of surface interfaces is found by minimizing a certain surface energy. We are particularly
interested in anisotropic interfacial energies. This means that the energy depends on the direction of the surface, as in a crystal.
Tracy Payne, Professor, Ph.D.
Dr. Payne's research is on geometric and dynamical problems related to Lie groups and Lie algebras. Recently she has been interested in the Ricci flow for homogeneous spaces, soliton metrics on
nilpotent Lie algebras, and Anosov maps on nilmanifolds.
Dennis Stowe, Professor, Ph.D.
Dr. Stowe's recent research addresses the differential geometry of second-order differential equations in the complex plane. It emphasizes using the Schwarzian derivative to deduce properties of
solutions and properties of conformal or harmonic mappings.
Jim Wolper, Professor, Ph.D.
My training is in algebraic geometry (PhD, Brown, 1981), but I also have significant background in computer science and subjects related to aviation. I have a strong interest in applications of
algebraic geometry and representation theory in coding and cryptography.
Current projects
• Computational Complexity of Quadrature Rules. I am applying concepts from information theory to develop new algorithms for estimating the Riemann integral of a function defined by a table of
• Information Theoretic Schottky Problem. I am studying the statistical properties of period matrices of complex algebraic curves to determine to what extent the distribution of the periods
determines (1) whether the matrix is in fact a period matrix [Schottky Problem] and (2) properties of the curve [Torelli Problem].
• Theta Divisors in Moduli Spaces of Vector Bundles and Automorphism Groups of Curves. I have several results in this direction, but the project is not mature enough to be written up.
• Using Turbulence to Fly Faster. A development of and analysis of "relative dolphin flight" for powered aircraft.
Other Recent projects
• Linear Codes from Schubert Varieties. Much of this is joint work with Sudhir Ghorpade.
• Analytic Computation of Some Automorphism Groups of Riemann Surfaces, Kodai Mathematical Journal, 30 (2007), 394-408.
Wenxiang Zhu, Associate Professor, Ph.D.
Numerical solutions of differential equations and optimal control problems with partial differential equation constraints. Particularly interested in phase field approaches to optimal control.
Yunrong Zhu, Assistant Professor, Ph.D.
Dr. Zhu's research is in the area of numerical analysis. The main focus is on developing and analyzing numerical approximation and efficient solvers for both linear and nonlinear partial differential
equations arising from many physics, engineering and biochemistry problems, such as groundwater simulation, electromagnetic, electrostatic interactions, and density functional theory, etc.
Optimal algorithms for generating quantile information in X + Y and matrices with sorted columns
- J. ACM, 1983
Cited by 234 (7 self)
Abstract. The goal of this paper is to point out that analyses of parallelism in computational problems have practical implications even when multiprocessor machines are not available. This is true
because, in many cases, a good parallel algorithm for one problem may turn out to be useful for designing an efficient serial algorithm for another problem. A d ~ eframework d for cases like this is
presented. Particular cases, which are discussed in this paper, provide motivation for examining parallelism in sorting, selection, minimum-spanning-tree, shortest route, max-flow, and matrix
multiplication problems, as well as in scheduling and locational problems.
- SIAM J. COMPUTING, 1984
Cited by 117 (1 self)
Given n demand points in the plane, the p-center problem is to find p supply points (anywhere in the plane) so as to minimize the maximum distance from a demand point to its respective nearest supply point. The p-median problem is to minimize the sum of distances from demand points to their respective nearest supply points. We prove that the p-center and the p-median problems relative to both the Euclidean and the rectilinear metrics are NP-hard. In fact, we prove that it is NP-hard even to approximate the p-center problems sufficiently closely. The reductions are from 3-satisfiability.
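For intuition about the objective (not about the NP-hardness proofs), a tiny instance can be brute-forced. The sketch below restricts candidate centers to the demand points themselves, a simplification of the true continuous problem, and uses the rectilinear metric:

```python
from itertools import combinations

def p_center_cost(points, centers):
    """Max over demand points of the rectilinear distance to the
    nearest chosen center -- the quantity the p-center problem minimizes."""
    return max(min(abs(px - cx) + abs(py - cy) for cx, cy in centers)
               for px, py in points)

def brute_force_p_center(points, p):
    """Best p centers drawn from the demand points themselves (the real
    problem allows centers anywhere in the plane)."""
    return min(combinations(points, p),
               key=lambda cs: p_center_cost(points, cs))

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
best = brute_force_p_center(pts, 2)
cost = p_center_cost(pts, best)   # one center near each cluster: cost 1
```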
Shortest distance
December 29th 2011, 07:37 AM
Shortest distance
Consider the integral I with integrand F(x,y,y'), limits a,b with y(b) undetermined.
(1) Derive both the Euler-Lagrange equation and the endpoint condition that $\partial F/\partial y'$ vanishes at $b$.
(2) Use this result to show that the shortest distance from the origin to the line x = 1 lies
along the line y = 0.
I have done question (1). I included it for context. For (2) I am not sure what the functional is.
December 30th 2011, 02:56 AM
Re: Shortest distance
It's the arc length integral. See here for details:
Arc length - Wikipedia, the free encyclopedia
Assuming that $f$ is differentiable, the length of an arc over an interval $[a;b]$ described by a function $f$ is given by
$\int_a^b \sqrt{1+f'(x)^2}\, dx$
December 31st 2011, 02:17 AM
Re: Shortest distance
Thanks for that. So in this case are my limits (endpoints) $x=0$ and $x=1$, with boundary condition $y(0)=0$? Then I need to use the natural boundary condition that $\partial F/\partial y' = 0$ when $x=1$? I got the right answer; I just need to know whether it was reached through correct working.
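For reference, the working can run as follows, assuming the arc-length functional quoted above:

```latex
% F(x,y,y') = \sqrt{1+y'^2} does not depend on y, so the
% Euler--Lagrange equation reduces to
\frac{d}{dx}\frac{\partial F}{\partial y'}
  = \frac{d}{dx}\,\frac{y'}{\sqrt{1+y'^2}} = 0
  \quad\Longrightarrow\quad y' = \text{const},
% i.e. the extremal is a straight line.  The natural boundary
% condition at x = 1 gives
\left.\frac{\partial F}{\partial y'}\right|_{x=1}
  = \frac{y'(1)}{\sqrt{1+y'(1)^2}} = 0
  \quad\Longrightarrow\quad y' \equiv 0,
% and with y(0) = 0 the minimizer is y = 0, as required.
```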
January 2nd 2012, 01:48 PM
Re: Shortest distance
Frankly, I can't tell. Your approach looks reasonable. But I don't know about the "natural boundary condition", nor do I understand why this condition should necessarily hold in your case. I'm no
expert in variational calculus. Sorry!
The nature of variation 1
I refer readers to Andrew's comments on a graph purporting to demonstrate the existence of a month-of-year selection bias in the NHL, cited on the Freakonomics blog as an example of "overwhelming"
evidence of such effects in sports. (The original graph may have come from here.)
In particular, note the Professor's point #4. It is always necessary to ask oneself if perceived "trends" are real or not before attempting to provide an explanation. What Andrew computed can be
interpreted to mean that approximately 30% of the time, we expect to see percentages larger than 9% or smaller than 7%. Thus, out of 12 months, we'd expect to see about 3.6 months with those
"extreme" values (even if players were randomly picked from the population so that their birthdays would have been evenly spread out). The NHL line contains 4 such values and so while there is some
evidence of bias, it is certainly not "overwhelming" as Freakonomics suggested.
The chart itself is, sadly, misleading by its very choice of comparing NHL players to the populations of Canada and USA. To cite the original website, the key message of this chart was:
The 761 NHL players show a distinctly different pattern than that for Canada or the United States with the highest percentage of births in January and February and the lowest in September and
This "pattern" is the larger observed dispersion of NHL monthly percentages from the mean percentage of 8%, as compared to Canada or USA. In other words, the NHL line fluctuates more wildly.
Too bad there is a statistical law that guarantees this "pattern": the law says that in looking at sample averages, the larger the sample size, the smaller the dispersion. (This is why Andrew used
the sample size 761/12 in his calculation.) Because the Canada and USA lines represent averages of millions of people while the NHL line represents only 761 people, it is absolutely no surprise to
find the NHL line fluctuating more wildly!
Thus, the comparison is not valid. It'd have been more useful to have drawn the NHL line for various historical periods. If all the lines show a downward slope, then it would be time to examine why
this is occurring.
To further fix ideas, look at the following set of lines. Each line represents an alternative universe in which 761 people were randomly selected to be NHL players from the US and Canadian
populations. While in theory the line connecting monthly percentages should be flat (at 1/12 or 8%, i.e. the green lines below), in reality, because of random selection, the lines fluctuate quite a
While the amount of dispersion is not "overwhelming", perhaps the observed trend of decreasing percentage with increasing month is unusual enough to warrant further study. I'll take a closer look
next time.
References: Andrew Gelman's blog, Freakonomics blog, Freakonomics NYT column
I'm a fourth-year statistics student in the Faculty of Science, and I'm looking for information about p-values, with examples. I'm asking for your help; thank you.
Re: st: interaction dummy or separate regression
From Austin Nichols <austinnichols@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: interaction dummy or separate regression
Date Wed, 28 Sep 2011 13:34:36 -0400
Khieu, Hinh <Hdkhieu@usi.edu>:
I already ruled out option 2, as selection on the dependent variable
is not allowed. Option 1 is okay, but I would suggest you explore an
alternative link, as the explanatory variables likely have a very
nonlinear relationship with Y. Imagine the cube root of y is a linear
function of x, and you regress y on x, in which case extreme values of
y will seem to have a very different relationship with x. See the help
file for -glm- for starters. You can also select on X, as I
mentioned, which may illuminate the appropriate link, if you can
construct a reasonable piecewise linear approximation. Better advice
may follow if you are more explicit about what you are modeling. I.e.
what is Y, what is the data, etc.
On Wed, Sep 28, 2011 at 1:19 PM, Khieu, Hinh <Hdkhieu@usi.edu> wrote:
> Austin,
> Thank you very much for your note. I have a feeling it is not right, but not what is not right and you answered it. So, to test whether debt or equity is used to finance abnormal Y, there are only two ways:
> 1. put abnormal Y as dependent variable and drop all dummy and interaction terms
> 2. still keep Chg in Y as dependent variable but run one regression based on Abnormal Y observations alone and another regression based on Other Y.
> I wonder if you can do me a favor by commenting my two solutions above.
> Thank you very much.
> Regards,
> Hinh
> ________________________________________
> From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Austin Nichols [austinnichols@gmail.com]
> Sent: Wednesday, September 28, 2011 12:11 PM
> To: statalist@hsphsun2.harvard.edu
> Subject: Re: st: interaction dummy or separate regression
> Khieu, Hinh <Hdkhieu@usi.edu> :
> You are not going to get unbiased estimates of any of those coefs, if
> that's what you mean. You are not allowed to select on Y, nor include
> a transformation of it as a regressor, and I strongly recommend you
> explore what you are estimating using a simulation on generated data
> where you know the true effects (a1, a2, etc.). You are allowed to
> select on an exogenous X variable, but not on "abnormal" Y.
> On Wed, Sep 28, 2011 at 1:00 PM, Khieu, Hinh <Hdkhieu@usi.edu> wrote:
>> Dear statalist members,
>> I have the following model and I am not sure if there is an econometric issue with it. I would appreciate any amount of help. Change in Y = a1*growth opportunities + a2*profit + a3*debt + a4*equity + a5*dummy (=1 if change in Y is abnormally high, zero otherwise) + a6 * debt * dummy + a7 * equity * dummy, where abnormally high is defined to be whenever change in Y is greater than 2 times the industry average of Y over the last 3 years (t, t-1, and t-2).
>> I run fixed effects regression with firm and year dummies on the above model for 2 groups of firms: large firms versus small firms. My question is: is there any mechanical or econometric problem with using the dummy for abnormal Y and its interaction with debt and equity? I know I can split the sample into abnormal Y and normal Y and run two separate regressions. But I want to know specifically if the model above is problematic from an econometric perspective. What if I drop the dummy and keep only the interactions?
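Austin's suggestion (simulate generated data with known effects and see what selecting on the dependent variable does) can be sketched as follows. This is a toy model with true slope 2 and selection on Y above its median, not the poster's actual specification:

```python
import random

def ols_slope(xs, ys):
    """Simple-regression slope by the usual closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

rng = random.Random(0)
n = 20000
x = [rng.gauss(0, 1) for _ in range(n)]
y = [1 + 2 * xi + rng.gauss(0, 1) for xi in x]   # true effect of x is 2

slope_full = ols_slope(x, y)                     # recovers about 2

# "Abnormal" subsample defined by the outcome itself: y above its median.
med = sorted(y)[n // 2]
sel = [(xi, yi) for xi, yi in zip(x, y) if yi > med]
slope_sel = ols_slope([xi for xi, _ in sel], [yi for _, yi in sel])
# slope_sel is attenuated well below 2: selecting on Y (or interacting
# regressors with a dummy built from Y) biases the estimates.
```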
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
STP1395: Modeling the NASA/ASTM Flammability Test for Metallic Materials Burning in Reduced Gravity
Steinberg, TA
The University of Queensland, Brisbane,
Wilson, DB
Consultant, Mesilla Park, NM
Pages: 24 Published: Jan 2000
Flammability tests of iron using the Lewis Research Center's (LeRC) 2.2 s drop tower are modeled. Under the conditions of the test, after ignition, about 2.0 s of burning in reduced gravity
(0.01-0.001 g) occurs. Observations (film and video) show the accumulating product mass to be well-mixed; therefore the system is modeled as a semi-batch reactor, that is, reactants continuously fed
with the product accumulating in the reactor. The regression of the melting sample is considered steady-state. Real-time temperature and pressure measurements of the chamber gas provide measurements
for model validation. The model consists of a set of 22, non-linear, first-order differential equations which are solved using MATLAB®. The model predicts, for 0.32-cm-diameter iron rods burning at
4300 kPa, an average reaction temperature of 3600 K and a molten oxide temperature of 3400 K. The system experimental parameters are the thermal conductivity of the molten liquid, k_Fe(ℓ), the
thermal conductivity of the molten iron oxide mixture, k_FeO(ℓ), and the heat transfer coefficient, h, between the molten oxide and the oxygen in the chamber and chamber itself. These model parameter
values are: k_Fe(ℓ) = 1.4 J/s cm K, 1.8 ⩽ k_FeO(ℓ) ⩽ 35.0 J/s cm K, and 0.24 ⩽ h ⩽ 2.24 J/s cm^2 K. It is suggested that the internal circulation within the molten ball formed during burning decreases
as the ball grows. More work is necessary to understand the chemical nature of the reacting oxygen and determine the species formed during burning.
iron combustion, reduced gravity combustion, microgravity combustion, reaction rate, burning metals, heterogeneous combustion, metal oxidation, metal combustion
Paper ID: STP12501S
Committee/Subcommittee: G04.01
DOI: 10.1520/STP12501S | {"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP12501S.htm","timestamp":"2014-04-19T02:16:24Z","content_type":null,"content_length":"13706","record_id":"<urn:uuid:36622ae5-5383-498d-91a1-0e9e72b74663>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Linear Optimization (2000)
Cited by 18 (6 self)
Everyone with some background in Mathematics knows how to solve a system of linear equalities, since it is the basic subject in Linear Algebra. In many practical problems, however, also inequalities
play a role. For example, a budget usually may not be larger than some specified amount. In such situations one may end up with a system of linear relations that not only contains equalities but also
inequalities. Solving such a system requires methods and theory that go beyond the standard Mathematical knowledge. Nevertheless the topic has a rich history and is tightly related to the important
topic of Linear Optimization, where the object is to nd the optimal (minimal or maximal) value of a linear function subject to linear constraints on the variables; the constraints may be either
equality or inequality constraints. Both from a theoretical and computational point of view both topics are equivalent. In this chapter we describe the ideas underlying a new class of solution
Cited by 3 (1 self)
this article the motivation for desiring an "interior" path, the concept of the complexity of solving a linear programming problem, a brief history of the developments in the area, and the status of
the subject as of this writing are discussed. More complete surveys are given in Gonzaga (1991a,1991b,1992), Goldfarb and Todd (1989), Roos and Terlaky (1997), Roos, Terlaky and Vial (1997), Terlaky
(1996), Ye (1997), Wright (1996) and Wright (1998). Generalizations to nonlinear problems are briefly discussed as well. For thorough treatment of interior point algorithms on those areas, the reader
is referred to den Hertog (1993), Nesterov and Nemirovskii (1993) and Saigal, Vandenberghe and Wolkowicz (1998). | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2616729","timestamp":"2014-04-20T05:33:43Z","content_type":null,"content_length":"17889","record_id":"<urn:uuid:c934594c-981a-466b-bc95-dae71b0edc2d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathFiction: The Givenchy Code (Julie Kenner)
You've got to love the tag lines for this book: "A heel-breaking adventure in code-breaking that will bring out the math geek and the fashionista in you". "Cryptography is the new black".
A woman with an undergraduate degree in math earning her graduate degree in history gets caught up in a dangerous game of life and death when an eccentric millionaire's dying wish makes his assassin
role-playing game into a reality. Hunted by an unknown assailant and protected by a gorgeous, sexy, honest and true ex-marine, Mel's obsessive interest in buying clothes and sleeping with her
protector is not at all affected by the fact that she has only hours left to decode all of the clues and find the antidote to the poison that will kill her.
Since I'm not a huge fan of either The Da Vinci Code or Sex and the City, didn't figure I was going to like this. However, despite the fact that it certainly owes a lot to each, I actually liked this
little romance thriller better! Okay, I find the plot (the role playing game becomes real) and the protagonist's obsession with shoes a bit hard to accept, but at least the book is fun and
engrossing. Plus, the math in it may not be high level, but it is basically correct.
(quoted from The Givenchy Code)
Okay, I'm a geek, but I confess I was a little giddy. I had no idea why someone had sent me a coded message, but whoever it was knew me well. My BS is in math with a minor in history. That surprises
most people. Apparently math majors are supposed to be surgically attached to their calculators and wear plastic pocket protectors. It's an irritating stereotype. Like saying blondes have more fun.
I'm a blonde, and believe me, that's one old adage that simply doesn't hold true. (I will say, though, that even when the hair falls short, the math comes in surprisingly handy. Take parties, for
example. Whenever the conversation gets slow, I can amaze and astound the other revelers with fractals, Fibonacci numbers and Smullyan's logic games. In those situations, I really am the life of the party.)
Three of the clues include the formulas for curves in the plane, namely
y=mx+b (line)
x^2+y^2=r^2 (circle)
y=a cosh(x/a) (catenary)
If you're interested in buying this book, be careful when buying to make sure you are getting the one by Julie Kenner. The name "Givenchy Code" is cute and really fits the book. However, another
author got to it first with an unrelated parody of Dan Brown's book and even got www.thegivenchycode.com.
Contributed by Sarah-Kate
Although it does not have a high literary quality, The Givenchy Code is still an enjoyable light read. The math in it is basic but accurate. | {"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf519","timestamp":"2014-04-21T02:52:35Z","content_type":null,"content_length":"10930","record_id":"<urn:uuid:0ed0960d-38ba-4dd2-aaca-0f6ce931dc18>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electric Math Problem - Freddy's Coffee House
Re: Electric Math Problem
50,000 ohms
Re: Electric Math Problem
I bet you would put up LOTS of resistance if we came to lift your mattress!!!!
Re: Electric Math Problem
50 million ohms.
I have this much money stashed under the mattress.
Re: Electric Math Problem
5,000,000 ohms
Electric Math Problem
Hello All, need a little help on this math problem... What would the resistance of a VDR be when the voltage is at 0.5 kilovolts and the current is at 100 microamperes?
The answer is 5.0 million ohms.
For the solution, here is the engineer's way of thinking, using scientific notation.
R = V/I
R = 0.5*10^3 / 100*10^-6
which is the same as,
R = 5*10^2 / 1*10^-4
which is the same as,
R = 5/1 times 10^2/10^-4
which is the same as,
R = 5 times 10^6
or 5 million ohms
See how easy it was using scientific notation!
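The same arithmetic makes a two-line sanity check (variable names are mine):

```python
V = 0.5e3    # 0.5 kilovolts, in volts
I = 100e-6   # 100 microamperes, in amperes
R = V / I    # Ohm's law
print(f"R = {R:.0f} ohms")  # R = 5000000 ohms
```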
Re: Electric Math Problem
I missed the 100 in the micro amp. I calculated the formula using 1 microamp. da!
Darn, that means there is only 5,000,000 stuffed under the mattress.
Re: Electric Math Problem
Everyone is doing Theo's homework for him.
Re: Electric Math Problem
He needs to find and use the equations.
Reference this, you should be able to figure anything out. | {"url":"http://nepacrossroads.com/about10218.html","timestamp":"2014-04-17T11:05:14Z","content_type":null,"content_length":"33204","record_id":"<urn:uuid:6ebcb90a-57f9-4fb0-a213-fbb0f6ad61c9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Defining a SIMD Interface
Of all the articles at #AltDevBlogADay I really enjoy the ones on optimization. I've always worked at smaller companies where programmers act in a more generalist capacity. While this has allowed me
the opportunity to solve a wide range of problems, I have not had access to someone who could formally introduce me to the black art of optimization. Because of this I’ve had to scour books and blogs
looking for ways to increase the performance of my own code and fill in the blanks from there.
So for my “show your ignorance” post I would like to display how I have reasoned around creating an SIMD enabled math library. I consider it a clever way to approach the problem, as I haven’t seen a
similar implementation. Having said that there might be a good reason why it shouldn’t be done this way, so feedback is appreciated. If its a valid approach I plan on continuing development and
releasing using a permissive license.
So let's begin.
The Problem
So we’re writing a math library. At the very least we’ll need abstractions for vectors, in 2D, 3D and 4D flavors, and matrices. We’re developing for the PC so SSE is available, and we write our
library using SSE intrinsics.
Now we want to port our work over to iOS. The SIMD instruction set for ARM processors is NEON. Unfortunately this is not available on devices before the iPhone 3GS and iPad. So to support those
devices we’ll need to create a version using regular old floats.
This is turning out to be quite a bit of work.
Rather than rewriting all our math code for each architecture we can define an SIMD interface and then build our math classes using that. Then when porting to a different architecture we just
implement the interface and all the math classes and functions should just work.
The Interface
The SIMD interface should expose common operations available in the instruction set, such as basic math operations. It should also expose more complicated operations, such as the dot product, square
root and shuffling. For the sake of this article we’ll just define enough to normalize a 3D vector.
In pseudocode this would look like.
interface simd_type
simd_type(value0, value1, value2, value3);
simd_type operator* (rhs);
simd_type dot3(value1, value2);
simd_type inv_sqrt(value);
The C++ analog for an interface is a class with only pure virtual methods. This isn’t an acceptable way to approach the problem since we want this code to perform as fast as possible. So instead
we’ll create a concrete SIMD type that follows a consistent naming scheme for its member functions. Then for our math classes we’ll take in an SIMD type as a template argument. Something like the ill
fated concepts that was dropped from the latest iteration of C++ would be useful in this case just because we could verify the fact that the SIMD type implemented the interface properly.
So our 3D vector class would look something like this.
template <typename Real, typename Rep>
class vector3
{
public:
    vector3(Real x, Real y, Real z)
        : _rep(x, y, z, (Real)0)
    { }

    friend vector3 normalize(const vector3& value)
    {
        return vector3(value._rep * inv_sqrt(dot3(value._rep, value._rep)));
    }

private:
    vector3(const Rep& rep)
        : _rep(rep)
    { }

    Rep _rep;
};
Then we create an SSE2 implementation of the SIMD type.
#include <xmmintrin.h>  // SSE intrinsics

class sse_float
{
public:
    inline sse_float()
    { }

    inline sse_float(float value0, float value1, float value2, float value3)
        : _values(_mm_set_ps(value3, value2, value1, value0))
    { }

    inline sse_float operator* (const sse_float& rhs) const
    {
        return sse_float(_mm_mul_ps(_values, rhs._values));
    }

    inline friend sse_float dot3(const sse_float& value1, const sse_float& value2)
    {
        const __m128 t0 = _mm_mul_ps(value1._values, value2._values);
        const __m128 t1 = _mm_shuffle_ps(t0, t0, _MM_SHUFFLE(0,0,0,0));
        const __m128 t2 = _mm_shuffle_ps(t0, t0, _MM_SHUFFLE(1,1,1,1));
        const __m128 t3 = _mm_shuffle_ps(t0, t0, _MM_SHUFFLE(2,2,2,2));
        return sse_float(_mm_add_ps(t1, _mm_add_ps(t2, t3)));
    }

    inline friend sse_float inv_sqrt(const sse_float& value)
    {
        // Perform a Newton-Raphson iteration on the reciprocal square root:
        // yn+1 = (yn * (3 - x*yn^2)) / 2
        const __m128 yn = _mm_rsqrt_ps(value._values);
        const __m128 xyn2 = _mm_mul_ps(_mm_mul_ps(value._values, yn), yn);
        return sse_float(_mm_mul_ps(_mm_mul_ps(_mm_set_ps1(0.5f), yn),
                                    _mm_sub_ps(_mm_set_ps1(3.0f), xyn2)));
    }

private:
    inline sse_float(const __m128 values)
        : _values(values)
    { }

    __m128 _values;
};
The following code demonstrates the vector normalizing.
int main()
{
    typedef vector3<float, packed_type<float> > vector3f;

    vector3f test(1.0f, 2.0f, 3.0f);
    vector3f normal = normalize(test);

    return 0;
}
Additionally we can do an implementation for instruction sets without SIMD.
#include <cmath>  // std::sqrt

template <typename Real>
class packed_type
{
public:
    inline packed_type()
    { }

    inline packed_type(Real value)
    {
        _values[0] = value;
        _values[1] = value;
        _values[2] = value;
        _values[3] = value;
    }

    inline packed_type(Real value0, Real value1, Real value2, Real value3)
    {
        _values[0] = value0;
        _values[1] = value1;
        _values[2] = value2;
        _values[3] = value3;
    }

    inline packed_type operator* (const packed_type& rhs) const
    {
        return packed_type(
            _values[0] * rhs._values[0],
            _values[1] * rhs._values[1],
            _values[2] * rhs._values[2],
            _values[3] * rhs._values[3]);
    }

    inline friend packed_type dot3(const packed_type& value1, const packed_type& value2)
    {
        // Broadcast the 3D dot product to all four lanes, as the SSE version does.
        return packed_type(
            (value1._values[0] * value2._values[0]) +
            (value1._values[1] * value2._values[1]) +
            (value1._values[2] * value2._values[2]));
    }

    inline friend packed_type inv_sqrt(const packed_type& value)
    {
        return packed_type(
            (Real)1 / std::sqrt(value._values[0]),
            (Real)1 / std::sqrt(value._values[1]),
            (Real)1 / std::sqrt(value._values[2]),
            (Real)1 / std::sqrt(value._values[3]));
    }

private:
    Real _values[4];
};
By adding an additional layer below the math library the amount of code required to support different architectures is minimized. Assuming the compiler does a good job of inlining the member
functions, performance should be as good as those written without an additional layer.
Designing Fast Cross-Platform SIMD Vector Libraries
Becoming a console programmer : Math Libraries Part 2
Square Roots in vivo: normalizing vectors | {"url":"http://www.altdevblogaday.com/2011/04/29/defining-an-simd-interface/","timestamp":"2014-04-20T01:09:04Z","content_type":null,"content_length":"24985","record_id":"<urn:uuid:6bed7398-20fc-49fa-990a-72cafaeb1a26>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Linear equations under gp
Karim BELABAS on Tue, 31 Oct 2000 17:49:07 +0100 (MET)
[Leonhard Moehring:]
> I'd like to know, if a pari user has tried to
> solve fairly large linear equations under pari/gp.
> While i know that there are other packages that
> specialize in linear equations, ofc it would be
> a lot more convenient to solve matrices under gp,
> where the matrix coefficients are calculated.
> I've tried to work as space efficient as possible
> (e.g. doing matrix operations in place),
> but the best i could get to work was a 1500x1500
> real matrix, with a 28 digit accuracy on a 150 MB stack
> (linux), using the pari routine matsolve.
> Now 150M divided by 1.5K squared and take away some
> for the overhead is around 60 bytes per coefficient.
> That number seemed slightly higher than what I had
> expected, so I wondered if anybody out there has some
> experiences if I could solve somehow larger matrices
> under these circumstances in Pari.
First, what version of PARI are you using ? | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-0010/msg00001.html","timestamp":"2014-04-18T08:05:56Z","content_type":null,"content_length":"4544","record_id":"<urn:uuid:bfad376e-225f-40c8-92b9-29b6da92511e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
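As an aside, the back-of-the-envelope figure in the quoted message checks out:

```python
stack_bytes = 150e6          # the 150 MB stack, read as 150 million bytes
coefficients = 1500 ** 2     # a 1500 x 1500 real matrix
per_coeff = stack_bytes / coefficients
print(round(per_coeff, 1))   # about 67 bytes per coefficient
```

A 28-significant-digit real needs roughly 28·log2(10) ≈ 93 bits (about 12 bytes) of mantissa, so most of the ~67 bytes is per-object overhead rather than raw digits — consistent with the poster's remark that the number seemed slightly higher than expected.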
This resource explains how to estimate the global consequence of a person's actions to quantify what it is to "think globally." To lend meaning to the result, it introduces "order-of-magnitude" thinking. Three examples, on the global impact of a short drive, a little water and an hour of light, are described. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.

This resource describes the slide rule as an analog computer, and uses the slide rule to demonstrate the concept of isomorphism, which is one of the most pervasive and important concepts in mathematics. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.

Some simple arithmetic can help put the quantity of fuel in a potential oil spill - in this case 400,000 gallons - in perspective. In this example, students calculate the area that would be covered by oil from the volume measurement. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.

In this resource, the author uses graphing and the linear scale to explain what logarithms are, then describes examples that show how logarithms are used in the field of engineering. Examples include vibration levels in the Space Shuttle and the Richter Scale for earthquakes. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.

This math problem demonstrates a lawyer's use of some very simple science and math. The case involves a $26 million lawsuit over a construction waste landfill and lead contamination. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.

In this example, a computer scientist describes how the remainder in integer division has utility in pattern recognition and in computer programming. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
Models of Sharing Graphs (A Categorical Semantics of Let and Letrec)
Masahito Hasegawa
PhD thesis ECS-LFCS-97-360, University of Edinburgh (1997) (examined on 12th June 1997)
xii+134 pages, 234 x 156mm hardcover; ISBN 1-85233-145-3, Springer Distinguished Dissertation Series (1999)
A general abstract theory for computation involving shared resources is presented. We develop the models of sharing graphs, also known as term graphs, in terms of both syntax and semantics.
According to the complexity of the permitted form of sharing, we consider four situations of sharing graphs. The simplest is first-order acyclic sharing graphs represented by let-syntax, and others
are extensions with higher-order constructs (lambda calculi) and/or cyclic sharing (recursive letrec binding). For each of four settings, we provide the equational theory for representing the sharing
graphs, and identify the class of categorical models which are shown to be sound and complete for the theory. The emphasis is put on the algebraic nature of sharing graphs, which leads us to the
semantic account of them.
We describe the models in terms of the notions of symmetric monoidal categories and functors, additionally with symmetric monoidal adjunctions and traced monoidal categories for interpreting
higher-order and cyclic features. The models studied here are closely related to structures known as notions of computation, as well as models for intuitionistic linear type theory. As an interesting
implication of the latter observation, for the acyclic settings, we show that our calculi conservatively embed into linear type theory. The models for higher-order cyclic sharing are of particular
interest as they support a generalized form of recursive computation, and we look at this case in detail, together with the connection with cyclic lambda calculi.
We demonstrate that our framework can accommodate Milner's action calculi, a proposed framework for general interactive computation, by showing that our calculi, enriched with suitable constructs for
interpreting parameterized constants called controls, are equivalent to the closed fragments of action calculi and their higher-order/reflexive extensions. The dynamics, the computational counterpart
of action calculi, is then understood as rewriting systems on our calculi, and interpreted as local preorders on our models.
Pointers to Related Work
• Z. M. Ariola and J. W. Klop, Equational term graph rewriting. Fundamentae Infomaticae 26 (1996) 207-240.
• R.F. Blute, J.R.B. Cockett and R. A. G. Seely, Feedback for linearly distributive categories: traces and fixpoints. To appear in Journal of Pure and Applied Algebra.
• A. Corradini and F. Gadducci, A 2-categorical presentation of term graphs. In Proc. CTCS'97, Springer LNCS 1290 (1997) 87-105.
• M. Hasegawa, Recursion from cyclic sharing: traced monoidal categories and models of cyclic lambda calculi. In Proc. TLCA'97, Springer LNCS 1210 (1997) 196-213.
• A. Jeffrey, Premonoidal categories and a graphical view of programs. Manuscript (1998).
• A. Joyal, R. Street and D. Verity, Traced monoidal categories. Mathematical Proceedings of the Cambridge Philosophical Society 119(3) (1996) 447-468.
• R. Milner, Calculi for interaction. Acta Informatica 33(8) (1996) 707-737.
• H. Miyoshi, Rewriting logic for cyclic sharing structures. In Proc. FLOPS'98, World Scientific (1998) 167-186.
• P. Selinger, Categorical structure of asynchrony. In Proc. MFPS 15, ENTCS 20 (1999).
Options joined with |
I'm writing a class that I want to be able to accept several options with one parameter, similar to the way you can specify setiosflags( ios::fixed | ios::showpoint )
I tried to check the type of some of these ios flags, but they are enum's, which doesn't help me very much. I understand that for int's, the | operator effectively adds them ( 5 | 2 = 7, for
example ), and that by using ints with values that are powers of 2, one can figure out which combination of objects was passed by looking at the sum. Is that usually how it's done (using int's)?
If so, is there an easy way to break the sum down into the individual combinations? The only way I can think of immediately is to divide by the highest power of 2 that leaves a power of 2 as a
remainder, but it seems like it could be easier...TIA for any help.
The bitwise OR operator does not necessarily add the two numbers. It may work for 5 and 2 but try 3 and 1. You will still get 3. It just OR's the bits of the 2 numbers. You can look up the truth
tables online somewhere.
Try using each power of 2 as a flag. Like.
#define EFFECT_1 0x0001
#define EFFECT_2 0x0002
#define EFFECT_3 0x0004
#define EFFECT_4 0x0008
That way if you have an int or something full of effects, which you SET by
int effect = 0;
effect |= EFFECT_1;
The |= will keep any other effect you already had and add this one.
To CHECK an effect do this.
if( effect & EFFECT_4 )
// Effect 4 is on
Hope this helps a little.
I understand that for int's, the | operator effectively adds them ( 5 | 2 = 7, for example )
Not really. That only applies if both numbers have no set bits in common:
5 | 2 = 101 | 010 = 111 = 7
5 | 4 = 101 | 100 = 101 = 5
Anyway, back to your question... which I didn't really get :rolleyes:. It is true that flags are numbers written of the form 2^n (0 <= n <= MAXBITS), and that you can store several flags in one variable.
To set one flag, use:
FlagStorage |= Flag;
To clear one flag, use:
(MAXVALUE is the greatest value for the datatype of FlagStorage, ie 255 for a char)
FlagStorage &= (MAXVALUE - Flag);
To toggle one flag, use:
FlagStorage ^= Flag;
To retrieve one flag, use:
bool IsItSet = (FlagStorage & Flag);
Damn, not just beaten once but beaten twice... :( ... :D
FlagStorage &= (MAXVALUE - Flag);
cmon magos you can do better than that....
dont forget there is a ~ operator.
FlagStorage &= ~Flag
Originally posted by Stoned_Coder
cmon magos you can do better than that....
dont forget there is a ~ operator.
FlagStorage &= ~Flag
Bah, ok then...
You win this one :D
Thanks, all, especially Salem and Wizard. That's exactly what I needed to know, and thanks for clearing up the bit about | and adding. | {"url":"http://cboard.cprogramming.com/cplusplus-programming/38068-options-joined-%7C-printable-thread.html","timestamp":"2014-04-17T16:41:52Z","content_type":null,"content_length":"10825","record_id":"<urn:uuid:3e9f156b-2cb9-4c41-b599-001130c7ed80>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
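Putting the whole thread together — set, clear (with the ~ operator, per Stoned_Coder's correction), toggle, and retrieve — here is a quick check. The flag names are mine, and it is Python rather than C++, but Python ints take exactly the same bitwise operators:

```python
FLAG_A, FLAG_B, FLAG_C = 0x1, 0x2, 0x4   # powers of two: one bit each

storage = 0
storage |= FLAG_A                 # set
storage |= FLAG_C                 # set another; the first survives
assert storage == 0x5

storage &= ~FLAG_A                # clear, using the ~ operator
assert storage == 0x4

storage ^= FLAG_B                 # toggle on...
storage ^= FLAG_B                 # ...and back off
assert not (storage & FLAG_B)

is_c_set = bool(storage & FLAG_C) # retrieve
print(is_c_set)  # True
```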
How much value does a walk to Barry Bonds have?
by Cyril Morong
Click here to see my sabermetric blog called Cybermetrics
Pete Palmer gave the following run values for batting events:
1B = .47
2B = .78
3B = 1.09
HR = 1.40
BB = .33
Out = -.25 (this varies)
That is, the average single has a run value of .47. One potential problem in evaluating Barry Bonds is all the walks he gets, whether intentional, semi-intentional or otherwise. Should they all get a
value of .33 as they do in the linear weights system? Are his walks worth less because the Giant hitters behind him are below average? Does the increased chance of a double play reduce the value of
these walks?
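Before turning to those questions, here is a quick sketch of how the table above is applied — a hypothetical stat line scored with Palmer's values (the season line is invented, and the out is entered as -0.25 because outs reduce run expectancy):

```python
# Palmer's linear-weights run values from the table above.
VALUES = {"1B": 0.47, "2B": 0.78, "3B": 1.09, "HR": 1.40, "BB": 0.33, "OUT": -0.25}

def linear_weights_runs(line):
    """Estimated runs contributed by a stat line, one value per event."""
    return sum(VALUES[event] * count for event, count in line.items())

# Invented season: 100 singles, 30 doubles, 5 triples, 40 HR, 120 BB, 350 outs.
season = {"1B": 100, "2B": 30, "3B": 5, "HR": 40, "BB": 120, "OUT": 350}
print(round(linear_weights_runs(season), 1))  # roughly 84 runs
```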
The first thing I looked at was the value of a walk in a high slugging percentage (SLG) vs. a low slugging percentage environment. I ran a regression in which team runs was the dependent variable and
1Bs, 2Bs, 3Bs, HRs, BBs, SBs, CSs, and outs were the independent variables. The first regression in this case included the top 50 NL teams in SLG (average SLG .435) from 1980-2000 (excluding strike
years). The value of a walk was .405 (the linear regression technique determines these values, so, holding all other events constant, a walk increases runs by .405). Then for the lowest 50 teams
(.353 SLG), it was .267. When I did this for the top (.272 average AVG) and bottom (.244) 50 in batting average (AVG), the value of a walk was about .33 in each case. So a low AVG environment does
not affect walk value.
It looks like going from a high AVG environment to a low one has no impact on the walk value. Going from a high SLG environment to a low one does, by about .14. I doubt that Bonds’s walks would fall
this much here, since the Giants have had good team SLGs. Now a lot of that comes from Bonds, but without him they won’t fall to .353 (and he is still part of the environment: walking him slightly raises the chance of having to face him again later).
So if the guys hitting behind Bonds are below average, there might be some diminishing of the value of a walk, but probably not much. But how have Giant hitters been hitting behind Bonds? I looked at
how all the Giant hitters hit in each of the next two slots after Bonds for the years 2001-03. The numbers below include only what the hitters in question hit in the lineup slots behind Bonds, not
what they might have hit in other slots.^1 The Giant hitters easily exceeded the league averages for those seasons. So there is no reason here to think the value of a walk to Bonds is less than .33
based on this.
Year Giants AVG Giants SLG League AVG League SLG
2001 0.298 0.507 0.261 0.425
2002 0.269 0.440 0.259 0.410
2003 0.271 0.426 0.262 0.417
Now turning back to grounding into double plays. Below I show the rate at which the Giants after Bonds grounded into double plays (DPs/(AB+BB)). Then the league rate for each season as well.
Considering that Bonds is on first base so much, it is surprising that these hitters’ DP rate is so close to the league average. So it appears that when Bonds walks, very little, if any, value is
lost due to DPs.
Year Giants DP Rate League DP Rate
2001 0.014 0.020
2002 0.029 0.022
2003 0.026 0.021
Now, we also need to know how the two guys who bat behind Bonds hit with runners on base. Here are their collective AVG and SLG with Runners on Base (ROB).
Year Giants/ROB AVG Giants/ROB SLG
2001 0.280 0.459
2002 0.320 0.461
2003 0.288 0.447
So there is no problem here (In all of baseball, from 1991-2000, both AVG and SLG were .011 higher with (ROB) than with none on-I compiled data from STATS, INC Player Profiles books). Now I don’t
actually have how these guys hit with Bonds in particular on base and this is not weighted by how often they batted behind Bonds (in general, the guys with the most ROB at-bats behind Bonds did well
anyway). This is just based on their total at-bats with ROB. But it would be very strange if they hit exceedingly well with other runners on base and just happened to not hit well when Bonds was on.
But there is one thing that seems contradictory. The rate at which Bonds scores runs (other than by HRs). I divided his runs minus HRs by the number of times he reached base (not including HRs). Then
I did the same for the league. Bonds’ rate is lower.
Year League Rate Bonds' Rate
2001 0.322 0.215
2002 0.307 0.236
2003 0.318 0.280
Now if the guys behind Bonds hit well with ROB and don’t ground into an unusual number of double plays, why doesn’t Bonds score at the league rate? The league average for the three years is .316
while for Bonds it is .244. Bonds scores about 23% less often than the average player. I wondered if this was just normal for a middle of the order hitter. So I looked at this same rate for other NL
hitters who batted predominantly 4^th (at least 3/4ths of the time, as Bonds did) in 2003. Here are their rates and the overall or composite rate for the group.
Player Rate
C. Jones 0.322
Lowell 0.275
Nevin 0.239
Alou 0.307
Kent 0.329
Kearns 0.233
McGriff 0.207
Composite 0.288
Now the composite rate is very close to what Bonds had in 2003. Since I only included hitters who batted cleanup 3/4ths of the time, there are no more hitters to include. So Bonds is not that unusual in 2003. But it
is probably the case that his rates in 2001 and 2002 are still on the low side. So I looked at this rate for all major league cleanup hitters for the two most recent years that Retrosheet has data. The rates
in 1992 and 1993 were .290 and .300. Not too much higher than what Bonds had last year. But those were somewhat low-scoring seasons. So I looked at 1987 (.314) and 1986 (.303). So again, Bonds’s rate
in 2003 is fairly normal. But it was definitely low in 2001 and 2002. So maybe somehow his walks have less value than average.
There is more, however, to this than whether or not Bonds scores. Some of those walks advance runners, who may be more likely to score. With runners on 1^st and runners on 1^st and 2^nd,
Bonds had 131 walks from 2001-03. That was about 25% of his walks. The major league average is about 18%.^2 So when Bonds walks, he is more likely to push a runner or runners along the bases than
other batters will when he walks. This would tend to make his walks a little more valuable than .33. Now I said above that Bonds scored only 77% as often as the average player when he gets on base.
So there is no reason for his walks to be worth less than 77% of the value of a normal walk. But this lower scoring rate might be due to Bonds being slow this late in his career or conservative base
running. And of course his walks move runners along more than average. But how much value does this have?
So I turned to a run expectancy table from Tangotiger.^3 Here it is
RE 99-02 0 Outs 1 Out 2 Outs
Empty 0.555 0.297 0.117
1st 0.953 0.573 0.251
2nd 1.189 0.725 0.344
3rd 1.482 0.983 0.387
1st_2nd 1.573 0.971 0.466
1st_3rd 1.904 1.243 0.538
2nd_3rd 2.052 1.467 0.634
Loaded 2.417 1.650 0.815
This says that if you have a runner on 1^st and no outs you can expect to score .953 runs. But if you move from that to runners on 1^st and 2^nd the expected runs rises by about .6. So if Bonds gets
a walk with a man on 1^st, this should raise run expectancy by .6. If all of Bonds’ walks were such, his walks would have a value of .6. Of course, they are not. He gets walked in all manner of
situations. Each time the run expectancy rises by an amount that can be found in the table. So I took all of Bonds’s walks from the various base situations (none on, runners on 1^st and 2^nd, etc.)
and looked at the table to see how much run expectancy increased. I then calculated a weighted average of the value of a walk to Bonds over the years 2001-3 based on how much it increased run
expectancy (if, for example, a walk increased run expectancy by .5 and walks in those situations happened 10% of the time for Bonds, this contributed a value of .05; see Appendix for complete
details). I came up with .339, very close to the linear weights value of Pete Palmer.^4 So it appears that his walk value is pretty normal (especially since the following hitters tend to hit at least
as well as average). But what about intentional walks?
In the Big Bad Baseball Annual of 1999 Jim Furtado gives an intentional walk a value of .25 (p. 481-other walks get a value of .34). In my own regressions, covering the years 1955-2000, I
got about .233 (using data from the Lee Sinins Sabermetric Encyclopedia). About 31% of Bonds’s walks were intentional. A weighted average of his walk value using .24 for intentional walks and .33 for
others leaves a value of about .302. Now many of his walks are probably “semi-intentional.” But we can’t know for sure how many. The lower-bound value of his walks has to be at least .25, probably
much closer to .30. Since walks help determine on-base percentage which in turns determines OPS (OBP + SLG), and since a walk to Bonds has pretty close to the normal value, his OPS is still a very
good representation of his value.
1. For 2001, I looked at slots 4-5. For 2002, slots 4-6 (since Bonds batted quite a bit in both slots 3-4; Bonds’s numbers for slot 4 are not included, of course). For 2003, I looked at slots 5-6.
2. From Tom Ruane, for the years 1982, 1983 and 1987 which is at
3. Sabermetrics 101: Run Expectancy Matrix, 1999-2002 Which is at
4. The data on his walks comes from ESPN at
It does not give the breakdown of how many outs there were when each situation occurred. For example, it just says he had 40 walks with runners on 1^st and 2^nd. The increase in run expectancy is different if you go
from runners on 1^st and 2^nd to bases loaded depending on how many outs you have (you can see this in the table). I took the simple average in each case of the increase for no outs, one out and two
outs. Each of the three out situations occurs about one-third of the time. I don’t know if the out percentage varies depending on the base situation.
Back to Cyril Morong's Sabermetric Research
Appendix-Calculating the value of a walk to Bonds
Using the values of the run expectancy table from above, here is the increase in expected runs from a walk. In the table below, 1 to 12 means going from having a man on 1^st to men on 1^st and 2^nd.
If this change occurs, expected runs increase by .529 if there are no outs. 0 to 1 means going from none on to a man on 1^st. 13 to 123 means going from having runners on 1^st and 3^rd to having
bases loaded.
Change      Expected Run Increase          AVG    Walks  % of total  Value
            0 Outs   1 Out   2 Outs
1 to 12     0.529    0.410   0.136         0.358    91     0.174      0.062
12 to 123   0.935    0.667   0.428         0.677    40     0.077      0.052
0 to 1      0.398    0.276   0.134         0.269   210     0.402      0.108
3 to 13     0.951    0.670   0.287         0.636    33     0.063      0.040
2 to 12     0.384    0.246   0.122         0.251    95     0.182      0.046
23 to 123   0.365    0.183   0.181         0.243    32     0.061      0.015
13 to 123   0.513    0.407   0.277         0.399    21     0.040      0.016
Sum                                                522                0.339
The AVG column is the average of the three previous columns. So .358 is the average of a walk with a man on 1^st (.529 + .410 + .136 = 1.075, and that divided by 3 = .358). I took the average since
each out situation comes up about 33% of the time. I don’t know how many outs there were when Bonds got walks. The Walks column just lists how many walks Bonds got in each of those situations. So he
got 91 walks when there was a runner on 1^st only, 210 with none on. The next column lists what share of the total walks (522) came in that situation. For instance, 17.4% of his walks came with a man
on 1^st. This is the weight that the walks from that situation get. The next column is simply the “% of total” times the AVG column. So .174 * .358 = .062. Adding each number from the last column gets
the walk value of .339.
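As a check, the .339 figure can be reproduced directly from the appendix table: weight each base situation's average expected-run increase by its share of the 522 walks. A quick Python sketch (values copied from the table above):

```python
# Rows: (walks in that base situation, expected-run increase at 0/1/2 outs),
# copied from the appendix table above.
rows = [
    (91,  (0.529, 0.410, 0.136)),  # 1 to 12
    (40,  (0.935, 0.667, 0.428)),  # 12 to 123
    (210, (0.398, 0.276, 0.134)),  # 0 to 1
    (33,  (0.951, 0.670, 0.287)),  # 3 to 13
    (95,  (0.384, 0.246, 0.122)),  # 2 to 12
    (32,  (0.365, 0.183, 0.181)),  # 23 to 123
    (21,  (0.513, 0.407, 0.277)),  # 13 to 123
]
total_walks = sum(w for w, _ in rows)  # 522
# Each out state is weighted equally, since each comes up about one-third of the time.
walk_value = sum(w * sum(inc) / 3 for w, inc in rows) / total_walks
print(round(walk_value, 3))  # 0.339
```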
I posted the following to the SABR list on Aug. 27, 2004 to answer a question:
“To try to answer Larry Grasso's question, I calculated OBP in the following way
H + BB + HBP - IBB divided by AB + BB + HBP - IBB
Then I recalculated OPS for the top 10 guys in the NL. Here they are. Bonds is still ahead by a lot. His OBP fell to just .515
B. Bonds, SF 1.318
T. Helton, Col 1.059
A. Pujols, StL 1.047
J. Edmonds, StL 1.043
A. Beltre, LA 1.038
S. Rolen, StL 1.013
J. Thome, Phi 1.007
J. Drew, Atl 1.003
L. Berkman, Hou 0.985
A. Dunn, Cin 0.971
I also calculated batting runs using the Linear Weights values of 1B = .47, 2B = .78, 3B = 1.09, HR = 1.4, BB = .33 and outs = -.25. I counted all walks the same, intentional or not. HBP were counted
as walks. The top 10 guys in OPS in the NL come out as
B. Bonds, SF 104.77
T. Helton, Col 66.11
A. Pujols, StL 62.8
L. Berkman, Hou 56.83
J. Edmonds, StL 56.24
A. Beltre, LA 55.51
J. Thome, Phi 54.52
S. Rolen, StL 53.63
J. Drew, Atl 51
A. Dunn, Cin 49.5
Then I made an IBB worth just .23 runs for everyone. This value was published in the Big Bad Baseball Annual. Then we get
B. Bonds, SF 95.37
T. Helton, Col 64.61
A. Pujols, StL 61.9
L. Berkman, Hou 55.63
J. Edmonds, StL 55.24
A. Beltre, LA 55.11
S. Rolen, StL 53.13
J. Thome, Phi 52.22
J. Drew, Atl 50.8
A. Dunn, Cin 48.4
Then I took into account that the chance for a GDP increases with a man on first. A couple of weeks ago I determined that the guys batting in the 2 slots behind Bonds this year had hit into about 28
more GDPs than the normal rate would have given (per PA). Let's say it is 30 now. Then let's say that 15 of them came after regular walks and 15 came after IBBs. Then I gave those 30 a run value of
-.25, since Bonds is out on the GDPs and reduced his BBs and IBBs accordingly. The top 10 would be
B. Bonds, SF 79.47
T. Helton, Col 64.61
A. Pujols, StL 61.9
L. Berkman, Hou 55.63
J. Edmonds, StL 55.24
A. Beltre, LA 55.11
S. Rolen, StL 53.13
J. Thome, Phi 52.22
J. Drew, Atl 50.8
A. Dunn, Cin 48.4
Now I did not penalize any other hitter for this. And those GDPs are not really Bonds's fault. I am just trying to see where this takes us. But Bonds is still way ahead of everyone. No park
adjustments either. My guess is that would only help Bonds. Unless Bonds is having a horrible year fielding and Rolen is having an incredible year fielding. Bonds is MVP. I don't see Pujols making up
18 runs with defense. And Rolen would have to make up 26 runs fielding. Maybe Bonds is a -10 in fielding runs (which is actually pretty bad for an LFer). Rolen would still have to be +16. He has
reached that 3 times in 7 years. He was -3 last year.
All of the assumptions are unlikely to be true at the same time. I think Bonds is the MVP so far.”
Have the Giants been scoring the expected number of runs in each of the past four years? The first thing I did was run a regression in which team runs per game was the dependent variable and team OPS
was the independent variable. The regression equation for runs per game from 2001-04 is
R/G = 13.266*OPS – 5.29.
Using that to predict Giants’ runs per game in each of the last four years and then seeing how much above or below they are per 162 games, we get the following
2001: –67
2002: –49
2003: –23
2004: –1.5
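The shortfall figures above come from applying the regression line to team OPS and comparing the prediction with actual runs over 162 games. A sketch of that step, using a made-up OPS and run total (the Giants' actual season inputs are not listed here):

```python
# R/G = 13.266 * OPS - 5.29, the regression quoted above.
def predicted_runs_per_game(ops):
    return 13.266 * ops - 5.29

# Hypothetical team inputs, for illustration only.
team_ops, actual_runs = 0.750, 700
expected_runs = predicted_runs_per_game(team_ops) * 162
shortfall = actual_runs - expected_runs  # negative: fewer runs than expected
print(round(shortfall, 1))  # -54.8
```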
So they were below expectations in all four years. (A regression without the Giants gave very similar results. I also checked using the simplest Runs Created formula and that shows the Giants falling
even more runs short of expectations in each year). Now the last two years are not bad. But 2001-2 are pretty bad. The Giants scored fewer runs than expected, 67 and 49, respectively, in those two
seasons. Now Bonds walked more in 2002 than 2001. So if all those walks to Bonds were holding them down, why did they only fall 49 runs short in 2002 instead of even more than 67 short, as they did
the year before? Then this year, with a record number of walks and intentional walks to Bonds, they scored just about what we would expect. Here are his walk totals followed by his intentional walks
for the last 4 years.
2001: 177-35
2002: 198-68
2003: 148-61
2004: 232-121
I tried a regression that was a little more sophisticated than one with just OPS. I had runs scored as a function of 1Bs, 2Bs, 3Bs, HRs, BBs (including HBP) and outs (AB-H). The equation was:
Runs = 416 + .597*1B + .773*2B + .806*3B +1.55*HR +.26*BB -.222*Outs
The -.222 for outs seems high. In other regressions, I have gotten around .09 (which is close to what Jim Furtado has in the 1999 Big Bad Baseball Annual). I checked the data, which I got from ESPN’s
site. It seems right. Here is how the Giants actual runs compared to that predicted by the above equation
2001: –63
2002: –46
2003: –28
2004: –9
That is the same pattern as I got from the OPS regression. The Giants generally scored fewer runs than expected. Then I broke down the walks into intentional and non-intentional. The regression
equation was
Runs = 567 + .584*1B + .784*2B + .896*3B + 1.51*HR + .342*BB - .26*IBB - .26*Outs
Yes, I got a minus sign on IBBs. In other regressions, it has been positive. Jim Furtado gives IBBs a positive value. One regression I did was on all teams from 1955-2000 gave IBBs a value of +.23.
A regression on all teams from 1997-2000 gave IBBs a value of .20. If I take the Giants out of the regression which covers 2001-04, the value of an IBB goes to -.38 (it has a t-value of –2.5 or so in
both cases, so it might be statistically significant).
Are teams doing a better job of issuing IBBs the last four years? Does anyone know why this might be if it is true?
Here is how the Giants actual runs compared to that predicted by the last equation
2001: –36
2002: –9
2003: –3
2004: +50
Now, over the four years combined, the Giants scored about as many runs as expected, with only one year being far below expectations. And this is based on an equation that gives intentional walks as
negative. Could this mean that the other teams have been giving Bonds intentional walks at the right times, keeping the Giants from scoring as many runs as we might normally expect?
Maybe. But the two years that came in really bad by the OPS regression and the regression with all walks lumped together are 2001 and 2002. In one of those years, 2001, Bonds walked 177 times. In
1923, Ruth walked 170 times and the Yankees scored 823 runs while Runs Created predicted 811 (from Total Baseball, 5^th edition). So a team can have a guy with a real high walk total and score the
runs we expect. And don’t forget the Giants scored just about the number of runs expected this year while Bonds had an incredible total of 232 walks and 121 IBBs.
If anyone has the time and inclination, take a look at the data. Run the same regressions that I have. I don’t think I did anything wrong and I think the data is correct. I am surprised by the high
negative value for batting outs and the negative value for IBBs. I wonder if someone else will get the same regression results.
Cyril Morong
Context Free Art
From Context Free Art
Basic Concept
The SQUARE, CIRCLE, and TRIANGLE primitive shapes are implemented as lists of path operations, which are drawn on a canvas. The new path feature allows new primitive shape paths to be defined. A
path contains a list of path operations and path commands. Paths are defined to support the path drawing features found in SVG files and the OpenVG specification.
The simplest path is a path sequence followed by a path command. The path sequence specifies where to draw and the path command specifies whether to fill the path sequence or stroke it. A path
sequence begins with a MOVETO to set where the drawing part of the path starts, some number of path operations (LINETO, CURVETO, ARCTO) that specify the path for drawing, and an optional CLOSEPOLY
that causes the path sequence to return to the its beginning (i.e., the MOVETO). Here is a simple path:
path box {
MOVETO{x 0.5 y 0.5}
LINETO{x -0.5 y 0.5}
LINETO{x -0.5 y -0.5}
LINETO{x 0.5 y -0.5}
CLOSEPOLY{} // go back to (0.5, 0.5) and close the path
STROKE{} // draw a line on the path using the default line width (10%)
}
There are a couple of ways that a path can be more complex:
• There can be multiple path commands that can cause the path sequence to be stroked or filled many times.
□ The FILL or STROKE commands can have color adjustments that cause the same path sequence to be drawn with different colors.
□ The FILL or STROKE commands can have geometric adjustments that cause the same path sequence to be drawn in different places (or rotated, or scaled, or flipped, or skewed).
• There can be multiple path sequences that are all drawn by the path command (or path commands if you have more than one). Just follow the path sequence with another path sequence (a MOVETO and
some more drawing operations).
• The path operations and path commands can be put in loops (see below).
• After the path command (or path commands) there can be more path sequences and path commands.
• The MOVETO at the beginning of the path sequence is actually optional if:
□ It is at the beginning of the path and the MOVETO is to (0, 0)
□ If the MOVETO follows a CLOSEPOLY, FILL or STROKE and the position of the MOVETO is the same as the position of the last drawing operation in the preceding path sequence.
• If you leave out the path command at the end of the path (for the last set of path sequences) then the path sequence(s) will be filled using the non-zero filling rule.
Example of optional MOVETOs and multiple sets of path sequences and commands:

path dot {
  MOVETO{y 1}   // required
  ARCTO{y -1 r 1}
  ARCTO{y 1 r 1}
  FILL{hue 216 sat 1 b 0.7333}
  MOVETO{y 1}   // can be left out
  ARCTO{y -1 r 1}
  STROKE{b -1 width 0.25}
  MOVETO{y -1}  // can be left out
  ARCTO{y 1 r 1}
  STROKE{b 1 width 0.25}
}

The second and third MOVETO operations are redundant. If they were left out then Context Free would automatically have generated them, because all they do is move to the same position as the ARCTO operations at the end of the preceding path sequences.

Example of multiple path commands and path operation loops, with geometric adjustments in loops and color adjustments in path commands:

startshape stars
background {sat 0.35}

path stars {
  MOVETO{x cos(90-144) y sin(90-144)}
  4* {r 144} LINETO{y 1}
  CLOSEPOLY{}
  5* {r 72} {
    FILL{y 1.88 b 0.5 p evenodd}
    STROKE{y 1.88}
  }
}
Example of implicit fill command:
path heptagon {
MOVETO{x cos(90-360/7) y sin(90-360/7)}
6* {r (360/7)} LINETO{y 1}
}
Path Operations
The supported path operations are:
• MOVETO{x x y y} - Moves the path to the point (x, y) without drawing, begins a new path sequence
• LINETO{x x y y} - Draws to the point (x, y)
• ARCTO{x x y y rx x_radius ry y_radius r ellipse_angle param parameters} - draws an elliptical arc segment to the point (x, y), the ellipse has a radius (rx, ry) and is rotated by the ellipse
angle (in degrees)
• ARCTO{x x y y r radius param parameters} - draws a circular arc segment with the specified radius to the point (x, y)
• CURVETO{x x y y x1 control_x_1 y1 control_y_1} - draw a quadratic bezier curve to point (x, y) with a control point at (x1, y1)
• CURVETO{x x y y x1 control_x_1 y1 control_y_1 x2 control_x_2 y2 control_y_2} - draw a cubic bezier curve to point (x, y) with a starting control point at (x1, y1) and an ending control point at
(x2, y2)
• CURVETO{x x y y} - draw a smooth quadratic bezier curve to point (x, y) with a control point that is the mirror of the previous bezier curve †
• CURVETO{x x y y x2 control_x_2 y2 control_y_2} - draw a smooth cubic bezier curve to point (x, y) with a starting control point that is the mirror of the ending control point of the previous
bezier curve and an ending control point at (x2, y2) †
It is not necessary to list all of the path operation parameters. If a path operation parameter is omitted then a default value will be used. The default position is (0,0) for end points and control
points. If the x or y part of a position or control point is omitted then 0 will be used. However, for cubic bezier curves either x2 or y2 must be specified for Context Free to know that it is cubic
and not quadratic. For non-smooth cubic and quadratic bezier curves either x1 or y1 must be specified for Context Free to know that the non-smooth variant is desired. For arcs, the default ellipse
radius is (1,1) and the default angle is 0.
† The smooth forms of the quadratic and cubic curve operations infer the unspecified control point by looking at the preceding curve operation. If the preceding operation is not a curve operation
(CURVETO, CURVEREL, ARCTO, or ARCREL) then a smooth curve operation is not permitted.
Relative Path Operations
Each of the absolute path operations above has a relative form in which the position of the previous path operation is added to the current position (and to any control points):
• MOVEREL{x x y y} - Moves the path to the relative point (x, y) without drawing
• LINEREL{x x y y} - Draws to the relative point (x, y)
• ARCREL{x x y y rx x_radius ry y_radius r ellipse_angle param parameters} - draws an elliptical arc segment to the relative point (x, y), the ellipse has a radius (rx, ry) and is rotated by the
ellipse angle (in degrees)
• ARCREL{x x y y r radius param parameters} - draws a circular arc segment with the specified radius to the relative point (x, y)
• CURVEREL{x x y y x1 control_x_1 y1 control_y_1} - draw a quadratic bezier curve to relative point (x, y) with a relative control point at (x1, y1)
• CURVEREL{x x y y x1 control_x_1 y1 control_y_1 x2 control_x_2 y2 control_y_2} - draw a cubic bezier curve to relative point (x, y) with a starting relative control point at (x1, y1) and an ending
relative control point at (x2, y2)
• CURVEREL{x x y y} - draw a smooth quadratic bezier curve to relative point (x, y) with a control point that is the mirror of the previous bezier curve
• CURVEREL{x x y y x2 control_x_2 y2 control_y_2} - draw a smooth cubic bezier curve to relative point (x, y) with a starting control point that is the mirror of the ending control point of the
previous bezier curve and an ending relative control point at (x2, y2)
Path Ending Operations
Path sequences can be explicitly ended and closed:
• CLOSEPOLY{param parameters} - ends the current polygon and draws a line from the ending position back to the beginning position, unless the beginning and end points coincide exactly. If the path
sequence is stroked then the connected beginning and end are drawn with a line joint, rather than with two line end caps.
A path sequence is implicitly ended without closure by starting a new path sequence with MOVETO/MOVEREL, by following it with one or more path commands, or by ending the path.
path SomeOpenPathsAndAClosedPath {
LINEREL{y 1}
LINEREL{x 1}
MOVETO{x 1} // ended previous path
LINEREL{y 1}
LINEREL{x 1}
STROKE{} // ended previous path sequence, strokes both open path sequences
MOVETO{x 2}
LINEREL{y 1}
LINEREL{x 1}
} // ended last path sequence and fills it, which looks like a closed path
Aligned Path Closures
A design might have a series of drawing operations that should result in the last point being the same as the first point. But when the path is closed a small line segment might be seen at the join
between the beginning and end points. This is due to floating point math errors causing the end point to be slightly off from the beginning point. There is an alternate form of CLOSEPOLY that
modifies the last drawing point so that the end point exactly matches the beginning point:
• CLOSEPOLY{p align} - ends the current polygon and sets the ending point to exactly match the beginning point.
Context Free will scan backward until it finds a MOVETO or MOVEREL and sets the ending drawing operation to the same point as the MOVETO/MOVEREL.
Path Commands
After a path sequence there can be one or more path commands. Path commands instruct Context Free to draw (stroke or fill) all of the path sequences between the path command and the previous group of
path commands.
• STROKE {shape adjustments width stroke_width param parameters} - Stroke the preceding path sequences with a pen of the specified width. Stroke width is relative to the size of the shape. If the
stroke size is not specified then the default of 0.1 is used. If color or shape modifications are specified then the path sequences are modified when they are drawn.
• FILL {shape adjustments param parameters} - Fill the preceding path sequences. If color or shape modifications are specified then the path sequences are modified when they are drawn. If the path
sequences are intersecting or self-intersecting then a filling rule determines whether a given piece is filled or not. The default filling rule is non-zero, but even-odd filling can also be
specified (see below).
Non-Zero filling
Even-Odd filling
Note that the shape adjustments in path commands can either be basic or ordered (see basic vs. ordered).
If a path is completed (by a closing curly brace, '}') with no path command following the last set of path sequences then an implicit fill command is appended to the path.
Many path operations and commands have parameters that modify their action. These parameters have the form of the keyword 'p' or 'param' followed by a string. The string can be without quotes if it
has no white-space characters. If there are white-space characters then the string must be enclosed in quotes.
• ARCTO/ARCREL cw parameter - indicates that the clock-wise arc is drawn
• ARCTO/ARCREL large parameter - indicates that the large arc is drawn
• CLOSEPOLY align parameter - indicates that the closing path operation should be modified to exactly match the beginning path position
• FILL evenodd parameter - indicates that the even-odd filling rule is used
• STROKE miterjoin parameter - indicates that join between path sequence segments (and between the beginning and end of closed paths) should have miter joins.
• STROKE roundjoin parameter - indicates that join between path sequence segments (and between the beginning and end of closed paths) should have round joins.
• STROKE beveljoin parameter - indicates that join between path sequence segments (and between the beginning and end of closed paths) should have bevel joins.
• STROKE buttcap parameter - indicates that the end points of unclosed path sequences should have butt caps.
• STROKE roundcap parameter - indicates that the end points of unclosed path sequences should have round caps.
• STROKE squarecap parameter - indicates that the end points of unclosed path sequences should have square caps.
Join types
Cap types
• STROKE iso parameter - indicates that stroke width is not transformed if the stroke is transformed, short for isowidth.
Top is default, bottom is isowidth
Arc Parameters
The basic arc drawing operation specifies a start point, an end point, and an ellipse. The ellipse is positioned such that the start point and end points touch the ellipse and the arc is drawn from
the start to the finish. However, in general there are two possible ellipse positions for any pair of starting and ending points, and two different arcs on each ellipse that can be drawn. The cw and
large parameters indicate which of the four possible arcs are drawn.
Two of the four arcs are large, i.e., more than 180°. Specifying the large parameter indicates that one of these arcs should be drawn. Otherwise one of the arcs that are less than 180° will be drawn.
Two of the four arcs draw from start to end clockwise around the ellipse and two draw counter-clockwise. Specifying the cw parameter indicates that a clockwise arc should be drawn. Otherwise a
counter-clockwise arc will be drawn.
Setting the radius of the arc to be negative has the effect of inverting the arc drawing direction. A counter-clockwise arc will be drawn clockwise if the radius is negative. A clockwise arc will be
drawn counter-clockwise if the radius is negative.
Bezier Control Points
The control points for bezier curve segments control the slope of the curve at the ends. For cubic bezier curves each end has its own control point and the slope at each end is indicated by the slope
of the line from the end to the control point. For quadratic bezier curves both ends share a single control point and the slope at each end is indicated by the slope of the line from each end to the
shared control point.
For smooth cubic bezier curves the starting control point is the mirror of the ending control point on the previous bezier curve or arc curve. For smooth quadratic bezier curves the single control
point is the mirror of the ending control point on the previous bezier curve or arc curve. The preceding curve does not need to be of the same order (quadratic or cubic) as the smooth curve. The
preceding curve can even be an ARCTO. Context Free will figure out what control point will match the slope and curvature between the smooth curve and the curve the precedes it.
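The mirroring described above is the same reflection used by SVG's smooth ("S"/"T") curve commands: the inferred control point is the previous ending control point reflected through the current point. A small illustrative helper (not Context Free's actual code):

```python
def mirror_control(current, prev_ctrl):
    """Reflect the previous ending control point through the current point."""
    return (2 * current[0] - prev_ctrl[0], 2 * current[1] - prev_ctrl[1])

# If the previous curve ended at (1, 1) with ending control point (0.5, 2),
# the smooth continuation starts with control point (1.5, 0):
print(mirror_control((1, 1), (0.5, 2)))  # (1.5, 0)
```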
Loops in Paths
Paths support the same extended loop constructs as rules, with a slight difference. Simple loops look pretty much the same as rule simple loops:
path trill {
MOVETO {x cos(234) y sin(234)}
5* {r -144}
CURVETO {y 1 x1 (cos(234) + cos(324)) y1 (sin(234) + sin(324)) x2 1 y2 1}
CLOSEPOLY {p align}
5* {r 72}
STROKE {y 2 p buttjoin a -0.5}
}
But for complex loops there is the restriction that loops must either be all path operations or all path commands, no mixing is allowed.
path suns {
  MOVETO{x 1}
  20* {r (360/20)} {
    LINETO{x (2*cos(360/40)) y (2*sin(360/40))}
    LINETO{x cos(360/20) y sin(360/20)}
  }
  CLOSEPOLY{p align}
  5* {r 72} {
    FILL{y 4}
    STROKE{y 4 b -0.1}
  }
}
Z changes are not allowed in the loop transform for path operation or command loops and color changes are not allowed in the loop transform for path operation loops. | {"url":"http://www.contextfreeart.org/mediawiki/index.php/Paths","timestamp":"2014-04-18T20:42:18Z","content_type":null,"content_length":"43934","record_id":"<urn:uuid:a5c53be3-2a23-4151-8068-e530f079dc99>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
From Wikipedia: RANSAC is an abbreviation for "RANdom SAmple Consensus". It is an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers.
In the case of a 2D line fitting, Ransac is very simple:
bestScore   = Infinity
bestModel   = null
bestInliers = []

for a sufficient number of iterations :-)

    currentModel   = estimate a line from 2 points randomly selected in dataset
    currentScore   = 0
    currentInliers = []

    for all points in the dataset

        p = current point in the dataset
        currentError = distance(currentModel, p)

        if (currentError < threshold)
            currentScore += currentError
            currentInliers.push(p)
        else
            currentScore += threshold

    if (currentScore < bestScore)
        bestScore   = currentScore
        bestModel   = currentModel
        bestInliers = currentInliers

return bestModel, bestInliers, bestScore
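The pseudocode translates almost line for line; here is a runnable Python version of the same loop (illustrative, not the ransac.js implementation):

```python
import random

def fit_line(p, q):
    """Line through two points as normalized (a, b, c) with a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5
    return a / n, b / n, -(a * x1 + b * y1) / n

def distance(model, p):
    a, b, c = model
    return abs(a * p[0] + b * p[1] + c)

def ransac_line(points, iterations=300, threshold=0.05, seed=0):
    rng = random.Random(seed)
    best_score, best_model, best_inliers = float("inf"), None, []
    for _ in range(iterations):
        p, q = rng.sample(points, 2)
        if p == q:  # degenerate sample: cannot fit a line
            continue
        model = fit_line(p, q)
        score, inliers = 0.0, []
        for pt in points:
            err = distance(model, pt)
            if err < threshold:
                score += err
                inliers.append(pt)
            else:
                score += threshold
        if score < best_score:
            best_score, best_model, best_inliers = score, model, inliers
    return best_model, best_inliers, best_score

# 20 points on the line y = 2x + 1, plus two outliers:
pts = [(x / 10, 2 * (x / 10) + 1) for x in range(20)] + [(0.5, 5.0), (1.2, -3.0)]
model, inliers, score = ransac_line(pts)
print(len(inliers))  # 20
```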
Given a set of 2D points you can use RANSAC to estimate a fitting line. But the resulting line should only be estimated using the inliers, not contaminated by the outliers. Example of RANSAC
iterations: bad model estimated (left) | final RANSAC iteration if everything went well (right) :-)
Live demo:
Source code: ransac.js (core Ransac + RobustLineFitting) available under MIT license.
Science Jokes
Mathematician, Physicist, and Engineer (et al) Jokes
A man was walking down the street with two suitcases when a stranger came up and asked, "Have you got the time?" The man put down the suitcases and looked at his wristwatch and said, "It's exactly
five-forty six and fifty point six seconds and the barometric pressure is 30.06 and rising and if you'd like to see where we are by satellite positioning, I can show you that too, or get onto the
Internet, check your e-mail, make a long distance call, send a fax. It's also a pager and it plays recorded books and it receives FM." "That's amazing. I've got to have that watch. I'll pay you ten
thousand for that." "No, it's not ready for sale yet. I'm the inventor. I'm still working out the bugs. I haven't got it all programmed yet, it's not completely voice-activated." "I've got to buy
that watch. Fifteen thousand. Twenty." "Well, okay." He takes off the watch and the stranger walks away with it and the guy holds up the suitcases. "Don't you want the batteries?"
Engineering is the art of molding materials we do not fully understand into shapes we cannot fully analyze and preventing the public from realizing the full extent of our ignorance.
The difference between mechanical engineers and civil engineers is that mechanical engineers build weapons, civil engineers build targets.
The optimist sees a glass that's half full. The pessimist sees a glass that's half empty. An engineer sees a glass that's twice as big as it needs to be.
A businessman needed to employ a quantitative type person. He wasn't sure if he should get a mathematician, an engineer, or an applied mathematician. As it happened, all the applicants were male. The
businessman devised a test. The mathematician came first. Miss How, the administrative assistant, took him into the hall. At the end of the hall, lounging on a couch, was a beautiful woman. Miss How
said, "You may only go half the distance at a time. When you reach the end, you may kiss our model." The mathematician explained how he would never get there in a finite number of iterations and
politely excused himself. Then came the engineer. He quickly bounded halfway down the hall, then halfway again, and so on. Soon he declared he was well within accepted error tolerance and grabbed the
beautiful woman and kissed her. Finally it was the applied mathematician's turn. Miss How explained the rules. The applied mathematician listened politely, then grabbed Miss How and gave her a big
smooch. "What was that about?" she cried. "Well, you see I'm an applied mathematician. If I can't solve the problem, I change it!"
A group of wealthy investors wanted to be able to predict the outcome of a horse race. So they hired a group of biologists, a group of statisticians, and a group of physicists. Each group was given a
year to research the issue. After one year, the groups all reported to the investors. The biologists said that they could genetically engineer an unbeatable racehorse, but it would take 200 years and
$100 billion. The statisticians reported next. They said that they could predict the outcome of any race, at a cost of $100 million per race, and they would only be right 10% of the time. Finally,
the physicists reported that they could also predict the outcome of any race, and that their process was cheap and simple. The investors listened eagerly to this proposal. The head physicist
reported, "We have made several simplifying assumptions. First, let each horse be a perfect rolling sphere . . ."
A man flying in a hot air balloon realizes he is lost. He reduces his altitude and spots a man in a field down below. He lowers the balloon further and shouts, "Excuse me, can you tell me where I am?
" The man below says, "Yes, you're in a hot air balloon, about 30 feet above this field." "You must be an engineer," says the balloonist. "I am. How did you know?" "Everything you told me is
technically correct, but it's of no use to anyone." The man below says, "You must be in management." "I am. But how did you know?" "You don't know where you are, or where you're going, but you expect
me to be able to help. You're in the same position you were before we met, but now it's my fault."
A math student and a physics student are camping. The physics student takes his turn to do the cooking first. He makes a tasty stew, but in so doing, uses up all the water. The next day, it is the
math student's turn to do the cooking. The physics student watches him go to the creek to fetch the water. He puts the water into the pot and then stops and goes off to do something else. Puzzled,
the physics student asks the math student when he is going to finish making dinner. The math student tells him that there is nothing left to do as now it has been reduced to a problem which has
already been solved.
A Mathematician, a Biologist and a Physicist are sitting in a street café watching people going in and coming out of the house on the other side of the street. First they see two people going into
the house. Time passes. After a while they notice three persons coming out of the house. The Physicist: "The measurement wasn't accurate." The Biologist: "They have reproduced." The Mathematician:
"If one person enters the house then it will be empty again."
A mathematician, an engineer, and a physicist are out hunting together. They spy a deer* in the woods. The physicist calculates the velocity of the deer and the effect of gravity on the bullet, aims
his rifle and fires. Alas, he misses; the bullet passes three feet behind the deer. The deer bolts some yards, but comes to a halt, still within sight of the trio. "Shame you missed," comments the
engineer, "but of course with an ordinary gun, one would expect that." He then levels his special deer-hunting gun, which he rigged together from an ordinary rifle, a sextant, a compass, a barometer,
and a bunch of flashing lights which don't do anything but impress onlookers, and fires. Alas, his bullet passes three feet in front of the deer, who by this time wises up and vanishes for good.
"Well," says the physicist, "your contraption didn't get it either." "What do you mean?" pipes up the mathematician. "Between the two of you, that was a perfect shot!"
[*How they knew it was a deer: The physicist observed that it behaved in a deer-like manner, so it must be a deer. The mathematician asked the physicist what it was, thereby reducing it to a
previously solved problem. The engineer was in the woods to hunt deer, therefore it was a deer.]
A mathematician, an engineer and a physicist sat around a table discussing how to measure the volume of a cow. The mathematician suggested the use of geometry and symmetry relationships of the cow,
but his idea was rejected on the grounds of being too time consuming. The engineer suggested placing the cow in a pool of water and measuring the change in the height of the water, but his idea was
rejected on the grounds of impracticality. "It's easy," said the physicist. "We'll make an assumption that the cow is a small sphere, calculate the volume and then blow it up to the actual size."
A mathematician, a physicist, and an engineer were given a red rubber ball and told to find the volume. The mathematician measured the diameter and evaluated a triple integral, the physicist filled a
beaker with water, put the ball in the water, and measured the total displacement, and the engineer looked up the model and serial numbers in his red-rubber-ball table.
A mathematician and a physicist are given the task of describing a room. They both go in, and spend hours meticulously writing down every detail. The next day, the room is changed, and they are again
given the task. The physicist spends the better part of the day, but the mathematician, amazingly enough, leaves within a minute. He hands in a single sheet of paper with the following description:
"Put picture back on wall to return to previously solved state."
A physicist, a chemist, and a statistician are called in to see their dean. Just as they arrive the dean is called out of his office, leaving the three professors there. The professors see with alarm
that there is a fire in the wastebasket. The physicist says, "We must cool down the materials until their temperature is lower than the ignition temperature and then the fire will go out." The
chemist says, "No! No! We must cut off the supply of oxygen so that the fire will go out due to lack of one of the reactants." While the physicist and chemist are debating, they observe with alarm
that the statistician is running around the room starting other fires. They both scream, "What are you doing?" To which the statistician replies, "Trying to get an adequate sample size."
A physicist and a mathematician are sitting in a faculty lounge. Suddenly, the coffee machine catches on fire. The physicist grabs a bucket and leaps towards the sink, fills the bucket with water and
puts out the fire. The second day, they are sitting in the same lounge, and the coffee machine catches on fire again. This time, the mathematician stands up, gets a bucket, hands the bucket to the
physicist, thus reducing the problem to a previously solved one.
A team of engineers were required to measure the height of a flag pole. They only had a measuring tape, and were getting quite frustrated trying to keep the tape along the pole. A mathematician comes
along, removes the pole from the ground and lays it on the ground, measuring it easily. When he leaves, one engineer says to the other: "Just like a mathematician! We need to know the height, and he
gives us the length!"
An assemblage of the most gifted minds in the world were all posed the following question: "What is 2 x 2 ?" The engineer whips out his slide rule and shuffles it back and forth, and finally
announces "3.99". The physicist consults his technical references, sets up the problem on his computer, and announces "it lies between 3.98 and 4.02." The mathematician cogitates for a while,
oblivious to the rest of the world, then announces: "I don't know what the answer is, but I can tell you, an answer exists!" The philosopher says, "But what do you mean by 2 x 2 ?" The logician says,
"Please define 2 x 2 more precisely." The accountant closes all the doors and windows, looks around carefully, then asks "What do you want the answer to be?"
An economist, an engineer, and a physicist are marooned on a deserted island. One day they find a can of food washed up on the beach and contrive to open it. The engineer said: "let's hammer the can
open between these rocks". The physicist said: "that's pretty crude. We can just use the force of gravity by dropping a rock on the can from that tall tree over there". The economist is somewhat
disgusted at these deliberations, and says: "I've got a much more elegant solution. All we have to do is assume a can-opener."
An engineer, a physicist, a mathematician, and a statistician are taken, one at a time, into a room to undergo a psychological test. In the room is a table (upon which is a pad and pencil), a chair,
a bucket of water, and a waste basket rigged so that it can be set ablaze from an adjacent room in which the psychologists watch. The engineer is first, and the basket is set ablaze. The engineer
immediately jumps up, grabs the bucket of water and dashes the entire thing onto the fire, flooding the entire room and extinguishing the fire. The physicist is next. The basket ignites, the
physicist quickly calculates exactly how much water is required to extinguish the flames and pours exactly that amount, neatly extinguishing the flames. The mathematician is next. The basket blazes up,
the mathematician calculates exactly how much water is required to put out the fire, and then walks out of the room. The statistician is last. The basket is ignited. He grabs the bucket, pours half
on one side, half on the other, and announces, "It's out."
An engineer, a physicist and a mathematician are staying in a hotel while attending a technical seminar. The engineer wakes up and smells smoke. He goes out into the hallway and sees a fire, so he
fills a trash can from his room with water and douses the fire. He goes back to bed. Later, the physicist wakes up and smells smoke. He opens his door and sees a fire in the hallway. He walks down
the hall to a fire hose and after calculating the flame velocity, distance, water pressure, trajectory, etc. extinguishes the fire with the minimum amount of water and energy needed. Later, the
mathematician wakes up and smells smoke. He goes to the hall, sees the fire and then the fire hose. He thinks for a moment and then exclaims, "Ah, a solution exists!" and then goes back to bed.
An engineer, a physicist, and a philosopher were hiking through the Scottish highlands. Coming to the top of a hill, they saw a solitary black sheep standing before them. The engineer said,
"Remarkable! Scottish sheep are black." The physicist said, "Strange! Some of the sheep in Scotland must be black." The philosopher said, "Um. At least one of the sheep in Scotland is black, on one
side anyway."
Dean, to the physics department. "Why do I always have to give you guys so much money, for laboratories and expensive equipment and stuff. Why couldn't you be like the math department - all they need
is money for pencils, paper and wastepaper baskets. Or even better, like the philosophy department. All they need are pencils and paper."
Engineers think that equations approximate the real world. Scientists think that the real world approximates equations. Mathematicians are unable to make the connection.
Four engineers were travelling by car to a seminar, when unfortunately, the vehicle broke down. The chemical engineer said, "Obviously, some constituent of the fuel has caused this failure to occur."
The mechanical engineer replied, "I disagree, I would surmise that an engine component has suffered a catastrophic structural failure." The electrical engineer also had a theory: "I believe an
electrical component has ceased to function, thereby causing an ignition malfunction." The software engineer thought for some time. When at last he spoke he said, "What would happen if we all got out
and then got back in again?"
One day a farmer called up an engineer, a physicist, and a mathematician and asked them to fence off the largest possible area with the least amount of fence. The engineer made the fence in a circle
and proclaimed that he had the most efficient design. The physicist made a long, straight line and proclaimed, "We can assume the length is infinite," and pointed out that fencing off half of the
Earth was certainly a more efficient way to do it. The Mathematician just laughed at them. He built a tiny fence around himself and said "I declare myself to be on the outside."
There are three umpires at a baseball game. One is an engineer, one is a physicist, and one is a mathematician. There is a close play at home plate and all three umpires call the man out. The manager
runs out of the dugout and asks each umpire why the man was called out. The physicist says "He's out because I calls 'em as I sees 'em." The engineer says "He's out because I calls 'em as they are."
And the mathematician says "He's out because I called him out."
Three men with degrees in mathematics, physics and biology are locked up in dark rooms for research reasons. A week later the researchers open the first door, the biologist steps out and reports: 'Well,
I sat around until I started to get bored, then I searched the room and found a tin which I smashed on the floor. There was food in it which I ate when I got hungry. That's it.' When they free the
man with the degree in physics, he says: 'I walked along the walls to get an image of the room's geometry, then I searched it. There was a metal cylinder at five feet into the room and two feet
left of the door. It felt like a tin and I threw it at the left wall at the right angle and velocity for it to crack open.' Finally, the researchers open the third door and hear a faint voice out of
the darkness: 'Let C be an open can.'
Three people were going to the guillotine, and the first was the lawyer, who was led to the platform and blindfolded and put his head on the block. The executioner pulled the lanyard and nothing
happened. So, out of mercy, the authorities allowed him to go free. The next man to the guillotine was a physician, and he lay his head on the block, and they pulled the lanyard ... nothing. The
blade didn't come down. So, to be fair, they let him go too. The third man to the guillotine was an engineer. They led him to the guillotine and he laid his head on the block and then he said, "Hey,
wait. I think I see your problem."
An engineer, a physicist and a mathematician find themselves in an anecdote, indeed an anecdote quite similar to many that you have no doubt already heard. After some observations and rough
calculations the engineer realizes the situation and starts laughing. A few minutes later the physicist understands too and chuckles to himself happily as he now has enough experimental evidence to
publish a paper. This leaves the mathematician somewhat perplexed, as he had observed right away that he was the subject of an anecdote, and deduced quite rapidly the presence of humor from similar
anecdotes, but considers this anecdote to be too trivial a corollary to be significant, let alone funny.
bounds on the monotone network complexity of the logical permanent
- Combinatorica, 1998
"... Our main result is a combinatorial lower bounds criterion for a general model of monotone circuits, where we allow as gates: (i) arbitrary monotone Boolean functions whose minterms or maxterms
(or both) have length ≤ d, and (ii) arbitrary real-valued non-decreasing functions on ≤ d variables. This r ..."
Our main result is a combinatorial lower bounds criterion for a general model of monotone circuits, where we allow as gates: (i) arbitrary monotone Boolean functions whose minterms or maxterms (or
both) have length ≤ d, and (ii) arbitrary real-valued non-decreasing functions on ≤ d variables. This resolves a problem, raised by Razborov in 1986, and yields, in a uniform and easy way,
non-trivial lower bounds for circuits computing explicit functions even when d → ∞. The proof is relatively simple and direct, and combines the bottlenecks counting method of Haken with the idea of
finite limit due to Sipser. We demonstrate the criterion by super-polynomial lower bounds for explicit Boolean functions, associated with bipartite Paley graphs and partial t-designs. We then derive
exponential lower bounds for clique-like graph functions of Tardos, thus establishing an exponential gap between the monotone real and non-monotone Boolean circuit complexities. Since we allow real
gates, the criterion...
, 2003
"... Secret sharing is a very important primitive in cryptography and distributed computing. In this work, we consider computational secret sharing (CSS) which provably allows a smaller share size
(and hence greater efficiency) than its information-theoretic counterparts. Extant CSS schemes result in ..."
Secret sharing is a very important primitive in cryptography and distributed computing. In this work, we consider computational secret sharing (CSS) which provably allows a smaller share size (and
hence greater efficiency) than its information-theoretic counterparts. Extant CSS schemes result in succinct share-size and are in a few cases, like threshold access structures, optimal. However, in
general, they are not efficient (share-size not polynomial in the number of players n), since they either assume efficient perfect schemes for the given access structure (as in [10]) or make use of
exponential (in n) amount of public information (like in [5]). In this paper, our goal is to explore other classes of access structures that admit of efficient CSS, without making any other
assumptions. We construct efficient CSS schemes for every access structure in monotone P . As of now, most of the efficient information-theoretic schemes known are for access structures in algebraic
NC . Monotone P and algebraic NC are not comparable in the sense one does not include other. Thus our work leads to secret sharing schemes for a new class of access structures. In the second part of
the paper, we introduce the notion of secret sharing with a semi-trusted third party, and prove that in this relaxed model efficient CSS schemes exist for a wider class of access structures, namely
monotone NP.
, 1991
"... In this paper we study the lower bounds problem for monotone circuits. The main goal is to extend and simplify the well known method of approximations proposed by A. Razborov in 1985. The main
result is the following combinatorial criterion for the monotone circuit complexity: a monotone Boolean fun ..."
In this paper we study the lower bounds problem for monotone circuits. The main goal is to extend and simplify the well known method of approximations proposed by A. Razborov in 1985. The main result
is the following combinatorial criterion for the monotone circuit complexity: a monotone Boolean function f(X) of n variables X = {x_1, …, x_n} requires monotone circuits of size exp(Ω(t / log t)) if there is a family F ⊆ 2^X such that: (i) each set in F is either a minterm or a maxterm of f; and (ii) D_k(F) / D_{k+1}(F) ≥ t for every k = 0, 1, …, t − 1. Here D_k(F) is the k-th degree of F, i.e. the maximum cardinality of a subfamily H ⊆ F with |∩H| ≥ k.
1 Introduction. The question of determining how much economy the universal non-monotone basis {∧, ∨, ¬} provides over the monotone basis {∧, ∨} has been a long standing open problem in Boolean circuit complexity. In 1985, Razborov [10, 11] achieved a major development in this direction. He worked out the,
- SIAM Journal on Computing, 1998
"... We prove that a monotone circuit of size n^d recognizing connectivity must have depth Ω((log n)² / log d). For formulas this implies depth Ω((log n)² / log log n). For polynomial-size circuits the bound becomes Ω((log n)²), which is optimal up to a constant. Warning: Essentially this paper has been p ..."
We prove that a monotone circuit of size n^d recognizing connectivity must have depth Ω((log n)² / log d). For formulas this implies depth Ω((log n)² / log log n). For polynomial-size circuits the bound becomes Ω((log n)²), which is optimal up to a constant. Warning: Essentially this paper has been published in SIAM Journal on Computing and is hence subject to copyright restrictions. It is for personal use only.
"... Abstract. A wide range of positive and negative results have been established for learning different classes of Boolean functions from uniformly distributed random examples. However,
polynomial-time algorithms have thus far been obtained almost exclusively for various classes of monotone functions, ..."
Abstract. A wide range of positive and negative results have been established for learning different classes of Boolean functions from uniformly distributed random examples. However, polynomial-time
algorithms have thus far been obtained almost exclusively for various classes of monotone functions, while the computational hardness results obtained to date have all been for various classes of
general (nonmonotone) functions. Motivated by this disparity between known positive results (for monotone functions) and negative results (for nonmonotone functions), we establish strong
computational limitations on the efficient learnability of various classes of monotone functions. We give several such hardness results which are provably almost optimal since they nearly match known
positive results. Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than 1/2 + 1 / √ n; this accuracy bound is close to
optimal by known positive results (Blum et al., FOCS ’98). Other results show that under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomialsize circuits
computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation ’04).
Our main tool is a complexitytheoretic approach to hardness amplification via noise sensitivity of monotone functions that was pioneered by O’Donnell (JCSS ’04). 1
"... this paper we show that this, in fact, is not the case. We extend the framework in [1] to show that monotone arithmetic circuits require exponential size even for computing the 0-1 permanent.
this paper we show that this, in fact, is not the case. We extend the framework in [1] to show that monotone arithmetic circuits require exponential size even for computing the 0-1 permanent.
, 1995
"... We use the techniques of Karchmer and Wigderson [KW90] to derive strong lower bounds on the expected parallel time to compute boolean functions by circuits. By average time, we mean the time
needed on a self-timed circuit, a model introduced recently by Jakoby, Reischuk, and Schindelhauer, [JRS94] i ..."
We use the techniques of Karchmer and Wigderson [KW90] to derive strong lower bounds on the expected parallel time to compute boolean functions by circuits. By average time, we mean the time needed on a self-timed circuit, a model introduced recently by Jakoby, Reischuk, and Schindelhauer [JRS94], in which gates compute their output as soon as it is determined (possibly by a subset of the inputs to the gate). More precisely, we show that the average time needed to compute a boolean function on a circuit is always greater than or equal to the average number of rounds required in Karchmer and Wigderson's communication game. We also prove a similar lower bound for the monotone case. We then use these techniques to show that, for a large subset of the inputs, the average time needed to compute s-t connectivity by monotone boolean circuits is Ω(log² n). We show that, unlike the situation for worst case bounds, where the number of rounds characterizes
circuit depth, in th...
"... We give a simple proof that a monotone circuit for the k-clique problem in an n-vertex graph requires depth Ω(√k) when k ≤ (n/2)^(2/3). The proof is based on an equivalence between the depth of a
Boolean circuit for a function and the number of rounds required to solve a related communication problem. This ..."
We give a simple proof that a monotone circuit for the k-clique problem in an n-vertex graph requires depth Ω(√k) when k ≤ (n/2)^(2/3). The proof is based on an equivalence between the depth of a Boolean
circuit for a function and the number of rounds required to solve a related communication problem. This equivalence was shown by Karchmer and Wigderson. Warning: Essentially this paper has been
published in Information Processing Letters and is hence subject to copyright restrictions. It is for personal use only. Key words. computational complexity, theory of computation, circuit
complexity, formula
- Theoretical Computer Science
"... Cancellations are known to be helpful in efficient algebraic computation of polynomials over fields. We define a notion of cancellation in Boolean circuits and define Boolean circuits that do
not use cancellation to be non-cancellative. Non-cancellative Boolean circuits are a natural generalizati ..."
Cancellations are known to be helpful in efficient algebraic computation of polynomials over fields. We define a notion of cancellation in Boolean circuits and define Boolean circuits that do not use
cancellation to be non-cancellative. Non-cancellative Boolean circuits are a natural generalization of monotone Boolean circuits. We show that in the absence of cancellation, Boolean circuits require
super-polynomial size to compute the determinant interpreted over GF(2). This non-monotone Boolean function is known to be in P . In the spirit of monotone complexity classes, we define complexity
classes based on non-cancellative Boolean circuits. We show that when the Boolean circuit model is restricted by withholding cancellation, P and popular classes within P are restricted as well, but
NP and circuit definable classes above it remain unchanged. Keywords: Boolean circuit, monotonicity, cancellation, determinant, complexity classes. 1 Introduction Using the power of cancellatio...
, 1993
"... We consider three restrictions on Boolean circuits: bijectivity, consistency and multilinearity. Our main result is that Boolean circuits require exponential size to compute the bipartite
perfect matching function when restricted to be (i) bijective or (ii) consistent and multilinear. As a consequen ..."
We consider three restrictions on Boolean circuits: bijectivity, consistency and multilinearity. Our main result is that Boolean circuits require exponential size to compute the bipartite perfect
matching function when restricted to be (i) bijective or (ii) consistent and multilinear. As a consequence of the lower bound on bijective circuits, we prove an exponential size lower bound for
monotone arithmetic circuits that compute the 0-1 permanent function. We also define a notion of homogeneity for Boolean circuits and show that consistent homogeneous circuits require exponential
size to compute the bipartite perfect matching function. Motivated by consistent multilinear circuits, we consider certain restricted (\Phi; ) circuits and obtain an exponential lower bound for
computing bipartite perfect matching using such circuits. Finally, we show that the lower bound arguments for the bipartite perfect matching function on all these restricted models can be adapted to
prove exponential low...
SG100: Delivering Security Since 1997 to the World

For more than a decade Protego Information AB has delivered the SG100 unit to the world market. No other unit has been so thoroughly tested and examined by time and the market. The SG100 Security Generator could be considered the gold standard of TRNG devices. With more than 10 000 installations world wide the SG100 has proved its worth.

If you want a 100% bullet-proof and reliable solution with low risk, you should choose the SG100 Security Generator as your True Random Number Generator.

SG100 Security Generator is an easy to use, easy to integrate hardware random number generator that connects to a standard serial port. Complete with driver software for Windows and example programs in source. SG100 is the choice if you want to strengthen and enhance your encryption, statistics and simulation software.

The SG100 was the first of our random number products, released in 1996.

For easy use of the device we recommend that you purchase the Protego Software Development Kit (PSDK). It could be used for development of new applications supporting the SG100 device.
You will get driver redistribution rights for your application with the purchase of the PSDK.
The device is connected to the computer through the 9-pin serial port. Power is taken from the port. Supports all bit rates up to 115,200 bit/s. Throughput is about 9.2 kBytes/s for the
115,200-bit/s rate. The output is processed using statistical and cryptographic methods, and passes any statistical test. Resistant to external electromagnetic fields, with high resistance against power fluctuations.
Quantum physics random number source.
High resistance to RF-fields (30 V/m) guarantees sustained operation.
Very low emitted RF field makes random numbers difficult to intercept.
Windows-95/98/XP and Windows-NT-2k driver delivered with product.
Linux/Solaris/Windows driver delivered as a source code.
High output speed: 9.2 kBytes/s.
Device powered from the computer port - no batteries or cables.
Runtime electrical and statistical testing.
Easy to include drivers in OEM product.
Fast response to the calling process
Interface for multiple processes reading random numbers.
Windows DLL driver accept up to four SG100:s.
No cryptographic or statistical weaknesses.
Driver can be written for any platform.
Passes the Diehard test.
Passes the Crypt-X tests.
Runtime Package, Protego SDK (PSDK)
This product has provided well-established random-number access for many years. The PSDK is the interface between your code and the SG100 or SG100 Evo USB. The devices are interchangeable in your application if you use the PSDK. Example source code for many applications is included:
This utility will work as a lotto drawing with many usable options. lotto BALLS DRAWS EXTRABALLS ITERATIONS where BALLS is the number of balls in the urn of the lotto; DRAWS is the number of balls
drawn from the urn; EXTRABALLS is the number of balls to draw in addition in each drawing;
ITERATIONS is the number of lotto draws to perform; The first "DRAWS" balls from the urn will be output sorted in numerical order and the "EXTRABALLS" will be added unsorted.
Code Generator
This utility creates 8 digit numeric codes up to a user defined maximum value. The software generates the numeric codes with duplicate check.
Character generator
This utility generates sequences of characters at random. The characters that are allowed to be used are user defined.
Binary Gen
This utility reads data from the SG100 and the SG100 EVO and saves it in files. It reads the amount of data you specify and saves it to one or more files.
C# and VB Net source code example projects also included
Linux Driver 32bit/64bit support
The included Linux driver is source code that may be integrated with an application. Random numbers are obtained by a function call. The Linux driver compiles on Windows, Linux, and Solaris. The Linux driver is easy to modify and adapt to different architectures and application areas.
Solaris Random Number Server
We have an Intel 32 bit Solaris-8 driver, intended for more complex
situations. This driver runs multiple SG100:s and distributes the random numbers using a local network. The Solaris driver includes a disk buffer.
Download a ZIP with additional information: Solaris Server ZIP.
Prices and Availability
The SG100 is available in two basic packages: Developer and Runtime.
Developer Package including one unit
Drivers/License for the Win32 platform (Microsoft Windows 95/98, Windows NT)
Demo Programs, compiled to EXE including C/C++ source, that open and use the driver DLLs.
Hardware Test Programs (EXE only) for the SG100 hardware.
Runtime Package, Protego SDK (PSDK)
You can purchase a Protego SDK (PSDK) separately. The price is 110 Euros. The kit is available for electronic download after payment. The kit must only be used with genuine Protego products.
│ Package │ Number of units │ Price in EUR/unit │
│ Developer │ 1+ │ 300 │
│ Runtime │ 1-4 │ 249 │
│ Runtime │ 5-9 │ 189 │
│ Runtime │ 10-29 │ 175 │
│ Runtime │ 30-99 │ 143 │
│ Runtime │ 100+ │ Quotation │
For S&H add 53 Euro
Prices and specification subject to change without prior notice.
Go Directly to Shop
Statistical Tests
The simplest statistical test is to check if the SG100 random number strings has about the same number of ones and zeroes. A test program (N1_TEST.EXE, included in Developer Package) is written that
counts bytes and bits. The output is given in absolute and relative frequency.
To make comparison easy the difference between a relative frequency of 50% and the observed frequency is computed relative to the standard deviation. These values are seldom higher than three for random numbers.
Note that, as the program outputs a large number of sigma values, it sometimes happens that a sigma value higher than three is found. This is normal for random strings. If in doubt, accuracy can be increased by counting a longer string.
If we, as an example, count 6,400,000 bytes and find 25,603,990 "one" bits then we have a relative frequency of 0.50007793 Sigma = 1.1 That is 50.008% one bits.
To increase accuracy we count 441,600,000 bytes. We find 1,766,378,269 "one" bits yielding a frequency of 0.49999385 ( Sigma = -0.7) That was very close to 50% "one" bits and 50% "zero" bits.
Desperately we can read 1,651,200,000 bytes and count to 6,604,734,712 "one" bits and the frequency is 0.49999506 ( Sigma = -1.1).
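The sigma figures quoted above can be reproduced directly, assuming the usual convention: for N fair bits the one-count has mean N/2 and standard deviation sqrt(N)/2, and sigma is the deviation measured in those units. A quick sketch:

```python
import math

def sigma(n_bytes, ones):
    """Deviation of the one-bit count from 50%, in standard deviations.
    For N fair bits the count of ones has mean N/2 and s.d. sqrt(N)/2."""
    n_bits = 8 * n_bytes
    return (ones - n_bits / 2) / (math.sqrt(n_bits) / 2)

# The three counts quoted in the text:
print(round(sigma(6_400_000, 25_603_990), 1))        # 1.1
print(round(sigma(441_600_000, 1_766_378_269), 1))   # -0.7
print(round(sigma(1_651_200_000, 6_604_734_712), 1)) # -1.1
```

All three match the values in the text, illustrating how the larger counts pull the relative frequency ever closer to 50%.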
Download complete test results (25K)
pLab load test of the SG100
The pLAB Research Group of the Institut für Mathematik, Universität Salzburg has conducted a load test of the SG100™. The report contains commented simulation results for SG100™. Each page contains the plot of the truncated Kolmogorov-Smirnov values and the corresponding upper-tail probabilities for the Load Test (LT).
Download the The pLab Load Tests for the SG100 Security Generator
Link to the pLab Team
The SG100 also passes the Diehard test. The Diehard test, by George Marsaglia, consists of several statistical counts that should have a specified distribution if the input string is random. By
comparing observed counts to a theoretical count we can see if a string is random or not.
For a sample of size 500 (SG100.DAT, using bits 6 to 29): mean = 1.942
│ duplicate │ number │ number │
│ spacings │ observed │ expected │
│ 0 │ 70. │ 67.668 │
│ 1 │ 142. │ 135.335 │
│ 2 │ 139. │ 135.335 │
│ 3 │ 86. │ 90.224 │
│ 4 │ 36. │ 45.112 │
│ 5 │ 18. │ 18.045 │
│ 6 to INF │ 9. │ 8.282 │
Chisquare with 6 d.o.f. = 2.61 p-value= .143850
The observations above are too few to give high accuracy. This problem originates in the fact that the Diehard program does not adjust the sample sizes to a larger test file.
Download SG100 Diehard test results
Link to the Diehard test
Robert Davies test of SG100
Robert Davies have tested hardware random number generators, including the SG100, for a lottery application.
Link to Robert Davies lottery page
Electrical & RFI/EMI Measurements
EMC Test Reports
EMC Test Report: Emission of electromagnetic disturbances
EMC Test Report: Immunity to electromagnetic disturbances
Electrical Characteristics and Measurements — SG100 eBook
A schematic diagram of the SG100 circuit is displayed below. To the left we find the
diode where the noise signal originates. To the right is the SG100 output. You may click on the oscilloscope icons to obtain a graph of the signal. Power spectra of the frequencies of the signal may
be obtained by the FFT icons.
You can also choose to download the test in pdf format
Electrical Characteristics and Measurements.pdf | {"url":"http://protego.se/sg100_en.htm","timestamp":"2014-04-20T13:22:49Z","content_type":null,"content_length":"61411","record_id":"<urn:uuid:b0e5ed95-1efc-48bb-a637-db82e9698ca8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - Plotting (& not plotting) Asymptotes in Mathematica
Stroodle Jul4-09 12:41 PM
Plotting (& not plotting) Asymptotes in Mathematica
I'm just starting to learn how to use Mathematica, and I'd like to know if there's a way to plot a graph of an equation, such
as [tex]y=\frac{a}{x-h}+k[/tex] without it showing the vertical asymptote
I would also like to know if there's a way to show the horizontal asymptote on the graph.
Thanks for your help
DaleSpam Jul4-09 09:34 PM
Re: Plotting (& not plotting) Asymptotes in Mathematica
To turn off the vertical asymptote use the option Exclusions->h
To show a horizontal or diagonal asymptote simply plot it.
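The same two moves work in any plotting environment: drop samples near x = h so the plotter cannot connect across the pole, then draw y = k as an ordinary line. A small sketch of the masking step (the cutoff value is an arbitrary choice):

```python
def masked_samples(a, h, k, xs, cutoff=50.0):
    """Sample y = a/(x - h) + k, replacing points at or near the pole
    with None so a line plotter will not draw the false asymptote."""
    pts = []
    for x in xs:
        if x == h:
            pts.append(None)              # exclusion, like Exclusions -> h
            continue
        y = a / (x - h) + k
        pts.append(y if abs(y - k) <= cutoff else None)
    return pts
```

Plot the surviving points, then add the horizontal line y = k (and the vertical line x = h, if the asymptotes themselves are wanted on the graph).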
Stroodle Jul5-09 12:57 AM
Re: Plotting (& not plotting) Asymptotes in Mathematica
Awesome. Thanks for your help.
Powered by vBulletin Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
© 2014 Physics Forums | {"url":"http://www.physicsforums.com/printthread.php?t=323281","timestamp":"2014-04-17T18:30:05Z","content_type":null,"content_length":"5081","record_id":"<urn:uuid:db6325e3-1b18-4f0b-b442-590b58e50cd8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implicit Equation Grapher
Re: Implicit Equation Grapher
First thing I notice is the new warning about the flash player.
It is funny about the known bug. That is just y = x and should graph as a straight line. Mathematica will graph it like that, Geogebra too, but other graphers like this one have the same problem.
http://www.flashandmath.com/mathlets/ca … licit.html
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Implicit Equation Grapher
bobbym wrote:
First thing I notice is the new warning about the flash player.
What warning?
bobbym wrote:
It is funny about the known bug. That is just y = x and should graph as a straight line. Mathematica will graph it like that, Geogebra too, but other graphers like this one have the same problem
Mathematica and Geogebra may do something analytical with the equation first.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Implicit Equation Grapher
This warning is new on mine:
Re: Implicit Equation Grapher
Is that because you have Flash disabled?
Re: Implicit Equation Grapher
Yes, when I enable it everything works as it should.
You might consider a Newton type iteration to find your points instead of a change of sign.
The differentiations would be done numerically and there would be no algebraic manipulations. You could use it when the sign change algorithm you are using failed to find a point.
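A minimal sketch of that suggestion: for each fixed x, Newton's method in y with a numerical central-difference derivative, so the input formula never has to be manipulated algebraically (tolerances here are arbitrary):

```python
def newton_y(F, x, y0, tol=1e-10, max_iter=60):
    """Refine y so that F(x, y) = 0 with x held fixed."""
    h = 1e-6
    y = y0
    for _ in range(max_iter):
        f = F(x, y)
        df = (F(x, y + h) - F(x, y - h)) / (2 * h)  # numerical dF/dy
        if df == 0:
            return None            # flat spot: fall back to sign change
        step = f / df
        y -= step
        if abs(step) < tol:
            return y
    return None
```

On F = x - y (the known-bug case for sign-change graphers) this lands exactly on y = x, and on the double root (x - y)^2 = 0 it still converges, just more slowly.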
Re: Implicit Equation Grapher
Was finally able to get Newton-Raphson to work.
Latest version (v0.90) now posted: Implicit Equation Grapher
You can still access the old algorithm (called "SignChange").
The "extra lines" seem to have gone, and (x-y)^2=0 works now.
Would really appreciate everyone testing it (try different functions) to make sure it does the best it can.
Re: Implicit Equation Grapher
Did you see post #80? And they say telepathy is kaboobly doo!
I will look at it now, but you probably sensed that already.
Runs really nice and quick. Worked on all of mine. I especially liked this one (x - y)^2 + x^2 = 0.
There is one weird effect I am getting.
When I plot this exp(x-y)=0 which should have no solution among the Reals. I get an empty graph but when I hit 2x or 10x I get a surprise.
Re: Implicit Equation Grapher
I started working on this recently because I wanted to improve it.
I knew that Newton was a possible approach but wasn't convinced. So I started off to see if I could find the slope at any point ... hence the "shaded" plot types. Once I mastered slope I knew it was
but a few small steps to Newton.
I was pleasantly surprised by the results, but it was very slow. After some work I was able to speed it up and you see the results.
You and some others over the past 2 years have suggested Newton to me, it just took a while for me to get around to it
exp(x-y)=0 seems to suffer under the Newton algorithm, values below around exp(-7.5)=5.5e-4 lead to Newton heading towards zero. I will investigate.
Re: Implicit Equation Grapher
I use the grapher and it is a good tool. Thanks for writing it.
Re: Implicit Equation Grapher
You are welcome, bobby
bobbym wrote:
... Maybe the ability to graph 2 or more equations.
You asked for it, and I have (I hope) managed to do it in the new v0.91 Implicit Equation Grapher
It is quite fun to overlay plots and find similarities between them.
Let me know if it doesn't behave nicely.
Re: Implicit Equation Grapher
Hi MIF;
Thanks for the new feature. It is working well.
Real Member
Re: Implicit Equation Grapher
The thing I noticed about Sign Change is that, when I plot y=ceil(x), for example, extra lines appear.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment | {"url":"http://mathisfunforum.com/viewtopic.php?pid=280643","timestamp":"2014-04-20T23:28:09Z","content_type":null,"content_length":"24953","record_id":"<urn:uuid:5f1c09f4-1a9d-411c-8686-22fca08ff9da>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
2 by 2 Contingency Tables
Our null hypothesis is that the observations are drawn from the same population, so that the number of observations in each of the four classes is proportional to the size of the sample. That is, the
average value of a is Ea = n3 x n1/N, that of b is Eb = n4 x n1/N, that of c is Ec = n3 x n2/N and that of d is Ed = n4 x n2/N. When any one of these average values is known, the others can be
obtained by subtraction. For example, if we know a, then c = n3 - a and b = n1 - a, and finally d = n4 - b or n2 - c. That is, when the marginal sums are constant, all the numbers in the 2x2 table
are determined by a single number. Therefore, the table has one degree of freedom.
When a sample of N observations is drawn, the numbers a, b, c and d will differ from the average values due to the chances of sampling. As the observed values depart from the average values, the
chance of drawing that particular sample will become smaller and smaller. Note that this can be characterized in terms of the deviation of one number, say a, from its average value, since all four
numbers are determined by this one. If the chance of observing a certain divergent sample is small, we admit that the null hypothesis may be disproved, and that such a sample may well be due to a
real difference.
The probability of a certain sample may be determined from the binomial distribution to be P = [(n1! n2! n3! n4!)/(a! b! c! d!)](1/N!). For statistical significance, we must ask in the usual way of
the probability of drawing a sample at least this far from average, and so must add the probabilities of observations even less probable. These calculations are tedious, so alternative methods have
been found. For values of N greater than 50, and where the expected values are greater than 5, The χ^2 statistic with Yates's correction and one degree of freedom gives very good results. For values
of N less than 50, tables have been calculated from the accurate formula. These tables may be found in Langley.
The χ^2 statistic is the sum of [|O - E| - 0.5]^2/E for each entry in the table, where O is the observed value and E the average (or expected) value, and the 0.5 is Yates's correction. If you take
into account the relation between the four values, it happens that Y = |O - E| - 0.5 is the same for all four. Then, χ^2 = Y^2(1/Ea + 1/Eb + 1/Ec + 1/Ed). The sum of the reciprocals of the average
values is easy to find on the calculator, and is then multiplied by Y^2. The values of χ^2 for the different levels of significance are: 10%, 2.71; 5%, 3.84; 1%, 6.63; 0.2% 9.55. As χ^2 increases,
the probability that the differences from the expected average values are due to chance becomes less and less.
For small samples, tables based on the exact probability give the smallest value of d that is significant, replacing the χ^2 test. Should d fall outside these tables, to the right or below, the probability is greater than 5%. For N = 8 or smaller, a 5% level cannot be reached. These tables are not generally found in
sets of statistics tables, unfortunately. Langley's source is given in the References.
In order to use the d tables, the table must be rearranged so that n1 is the smallest of the marginal totals, and ad > bc. This is done by exchanging rows and columns, or by exchanging rows or
columns, and amounts only to a relabeling of the rows and columns. For significance, the actual value of d should be equal to or greater than the tabular value. If ad = bc, the observed values are
proportional (a/c = b/d), so no difference can be indicated.
In the table on the left, χ^2 = 28. This is a very large value, showing that it is very unlikely that students from the two schools are the same in knowledge of this question. In the table on the right, the proportion of
correct responses is more equal, 31% to 22%. Here, χ^2 = 3.53, just below the 5% value of 3.84, so it is not proven that the students from the two schools are unequally prepared.
Note carefully that all we can ever prove with statistics is that the difference between the samples of students from the two schools is unlikely to arise by chance, not that there actually is some
difference. If we find that there is a statistically significant difference, then we may find it possible to discover some reason for the difference. Statistics cannot prove that there actually is
some such difference; only research into possible explanations can do that.
A laboratory that performs many studies looking for correlations between one thing and another will occasionally come up with significant results. If P=5% is the criterion of significance, then one
in twenty studies of noncorrelated variables will come up showing a significant correlation, on the average. If only such "significant" studies are published, then a great deal of error can be
propagated. It seems as if this situation commonly exists in much medical and nutritional work. Any positive result should be repeated, but it is seldom reported that this has been done. This
compounds the error of concluding that whatever was tested against is the actual cause of any observed correlation. The classic example of the wearing of winter coats in Germany correlated with
temperatures in England is an example. Low temperatures in England do not make Germans put on coats. Nevertheless, whenever eating turnips is found to correlate with plantar warts, it is said that
eating turnips causes plantar warts. When the difficulty of choosing random samples is combined with these fallacies, it is a wonder that anything at all can be concluded by using statistics in this
For this table, χ^2 = 28.2. This large value shows that the table is highly significant, and consequently that the inoculation was successful in preventing cholera.
A number of small studies can be combined into one larger study by adding the values of χ^2 and considering each small study to contribute one degree of freedom. The values of χ^2 for each study
should be calculated without Yates's correction. For this procedure to be valid, the studies combined should not be selected, but included whether or not they are significant. Any other procedure
will give a biased result.
For example, if a small study gives χ^2 = 2.0, which is certainly not significant, 14 such studies will reach the 1% level of significance at χ^2 = 28. The 1% value of χ^2 increases about 1.2 per
degree of freedom for more than 20 degrees of freedom. This, of course, demonstrates the power of large samples. The values of χ^2 for various confidence levels are given as a function of the number
of degrees of freedom in any set of statistics tables.
M. J. Moroney presents a problem that shows another way to look at 2x2 contingency tables. He attends a Bach concert and counts 7 blondes and 143 brunettes in the audience. Then he attends a jazz
concert and counts 14 blondes and 108 brunettes. Is it safe to conclude from these figures that blondes prefer jazz to Bach? Consider the 2x2 contingency table, and find that the expected average
number of blondes at the Bach concert is 11.6 from the null hypothesis that blondes do not differ from brunettes in their music preferences. Find the other expected average numbers as well. Then, Y =
|11.6 - 7| - 0.5 = 4.1, and χ^2 = 3.51. This is not large enough to be significant at the 5% level. We must conclude that any preference of blondes for jazz is not proven.
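Moroney's numbers make a convenient check of the χ^2 recipe given earlier (Y^2 times the sum of reciprocal expected values). Carrying full precision gives about 3.47; the 3.51 in the text comes from rounding Ea to 11.6, and both fall below the 5% value of 3.84. A sketch:

```python
def yates_chi2(a, b, c, d):
    """Chi-squared with Yates's correction for a 2x2 table:
    Y^2 * (1/Ea + 1/Eb + 1/Ec + 1/Ed), with Y = |a - Ea| - 0.5."""
    n1, n2 = a + b, c + d              # row totals
    n3, n4 = a + c, b + d              # column totals
    N = n1 + n2
    Ea, Eb = n1 * n3 / N, n1 * n4 / N  # expected (average) values
    Ec, Ed = n2 * n3 / N, n2 * n4 / N
    Y = abs(a - Ea) - 0.5
    return Y * Y * (1 / Ea + 1 / Eb + 1 / Ec + 1 / Ed)

# 7 blondes and 143 brunettes at Bach, 14 and 108 at jazz:
print(round(yates_chi2(7, 143, 14, 108), 2))  # 3.47
```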
R. Langley, Practical Statistics (Newton Abbot: David and Charles, 1970).
P. Armsen, Biometrika Journal, 1955, pp.506-511. d Tables for Fisher's Test.
H. R. Neave, Elementary Statistics Tables (London: George Allen and Unwin, 1979). These tables were recommended to students at the Open University.
M. J. Moroney, Facts from Figures, 3rd ed. (London: Penguin Books, 1956). The blonde-brunette study is on pp. 269-270.
Composed by J. B. Calvert
Created 7 March 2005
Last revised | {"url":"http://mysite.du.edu/~jcalvert/econ/twobytwo.htm","timestamp":"2014-04-21T14:53:23Z","content_type":null,"content_length":"12305","record_id":"<urn:uuid:db2ac4f4-85af-4c8e-8ed9-e8e3ca48ca26>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
we may therefore neglect the infinitesimal changes of the quantities $g_{ab}$ over the extension considered, and also those of $\mathrm{R}_{e}$ and $\mathrm{R}_{h}$. By this we just come to the case
considered in § 19. Thus it is evident, that as regards quantities of the third order the first part of (10) is 0. From this it follows that in reality it is at least of the fourth order.
§ 21. Let us now return to the general case that the extension $\Omega$ to which equation (10) refers, has finite dimensions. If by a surface $\bar{\sigma}$ this extension is divided into two
extensions $\Omega_{1}$ and $\Omega_{2}$, the quantities on the two sides in (10) each consist of two parts referring to these extensions. For the right hand side this is immediately clear and as to
the quantity on the left hand side, it follows from the consideration that the contributions of $\bar{\sigma}$ to the integrals over the boundaries of $\Omega_{1}$ and $\Omega_{2}$ are equal with opposite signs.
In the two cases namely we must take for $\mathrm{N}$ equal but opposite vectors.
Also, if the extension $\Omega$ is divided into an arbitrary number of parts, each term in (10) will be the sum of a number of integrals, each relating to one of these parts.
By surfaces with the equations $x_{1}=\mathrm{const.},\dots x_{4}=\mathrm{const}.$ we can divide the extension $\Omega$ into elements which we shall denote by $\left(dx_{1},\dots dx_{4}\right)$. As a
rule there will be left near the surface $\sigma$ certain infinitely small extensions of a different form. From the preceding § it is evident that, in the calculation of the integrals, these latter
extensions may be neglected and that only the extensions $\left(dx_{1},\dots dx_{4}\right)$ have to be considered. From this we can conclude that equation (10) is valid for any finite extension, as
soon at it holds for each of the elements $\left(dx_{1},\dots dx_{4}\right)$.
§ 22. We shall now show what equation (10) becomes for one element $\left(dx_{1},\dots dx_{4}\right)$. Besides the infinitesimal quantities $x_{1},\dots x_{4}$, occurring in the equation
of the indicatrix we introduce four other quantities $\xi_{1},\dots\xi_{4}$, which we define by
$\xi_{a}=\frac{1}{2}\frac{\partial F}{\partial x_{a}}$ (18)
$\left.\begin{array}{c} \xi_{1}=g_{11}x_{1}+g_{12}x_{2}+\dots+g_{14}x_{4}\\ \cdots\cdots\cdots\cdots\cdots\cdots\\ \cdots\cdots\cdots\cdots\cdots\cdots\\ \xi_{4}=g_{41}x_{1}+g_{42}x_{2}+\dots+g_{44}x_{4} \end{array}\right\}$ (19)
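With the indicatrix taken to be the quadratic form $F=\sum_{a,b}g_{ab}x_{a}x_{b}$ (an assumption consistent with (19)), the passage from (18) to (19) is a single differentiation: $\xi_{a}=\frac{1}{2}\frac{\partial F}{\partial x_{a}}=\frac{1}{2}\sum_{b}\left(g_{ab}+g_{ba}\right)x_{b}=\sum_{b}g_{ab}x_{b}$, the last step using the symmetry of the coefficients.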
with the equalities $g_{ba}=g_{ab}$. | {"url":"http://en.wikisource.org/wiki/Page:LorentzGravitation1916.djvu/19","timestamp":"2014-04-21T06:16:30Z","content_type":null,"content_length":"27154","record_id":"<urn:uuid:139f46f9-465c-4a0a-9072-afd09018c6c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume of a sphere/circle.
[QUOTE=Revolver]What is the formula? I have two problems:
1. What is the volume of a sphere with a radius of 5.0 m?
My initial guess was 5^3 = 125, but apparently the answer is 523.6.
[tex]v= \frac{4 \pi r^3}{3}[/tex]
2. The radius of the earth is 6400 km. If the atmosphere is approximately 10 km high, then what is the volume of air around the earth?
Ok to be honest, I have no idea. I assume you find the volume of the earth, then the volume of a sphere with radius 10 km, and subtract the two... but that goes back to me not knowing the forumla of
volume of a sphere :D
You might want to rethink that second one. | {"url":"http://www.physicsforums.com/showthread.php?t=131496","timestamp":"2014-04-20T05:56:56Z","content_type":null,"content_length":"31750","record_id":"<urn:uuid:31a07bdd-ac73-46cf-9e0d-cf6a0cd436e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
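Following that hint: the air forms a thin shell, so its volume is the difference of two spheres of radius R + h and R, not a sphere of radius h. A quick check, with the thin-shell approximation 4πR²h for comparison:

```python
import math

def sphere_volume(r):
    """V = 4*pi*r^3 / 3"""
    return 4 * math.pi * r ** 3 / 3

R, h = 6400.0, 10.0              # km
shell = sphere_volume(R + h) - sphere_volume(R)
thin = 4 * math.pi * R ** 2 * h  # surface area times thickness
print(f"{shell:.3e} km^3")       # about 5.155e+09
print(f"{thin:.3e} km^3")        # about 5.147e+09
```

Problem 1 checks out the same way: sphere_volume(5.0) is about 523.6.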
Quick & Dirty Sine
Type : Sine Wave Synthesis
References : Posted by MisterToast
Notes :
This is proof of concept only (but code works--I have it in my synth now).
Note that x must come in as 0
There's not much noise in here. A few little peaks here and there. When the signal is at -20dB, the worst noise is at around -90dB.
For speed, you can go all floats without much difference. You can get rid of that unitary negate pretty easily, as well. A couple other tricks can speed it up further--I went for clarity in the
The result comes out a bit shy of the range -1 to 1.
Where did this come from? I'm experimenting with getting rid of my waveform tables, which require huge amounts of memory. Once I had the Hamming anti-ringing code in, it looked like all my
waveforms were smooth enough to approximate with curves. So I started with sine. Pulled my table data into Excel and then threw the data into a curve-fitting application.
This would be fine for a synth. The noise is low enough that you could easily get away with it. Ideal for a low-memory situation. My final code will be a bit harder to understand, as I'll break the
curve up and curve-fit smaller sections.
Code :
float xSin(double x)
//x is scaled 0<=x<4096
const double A=-0.015959964859;
const double B=217.68468676;
const double C=0.000028716332164;
const double D=-0.0030591066066;
const double E=-7.3316892871734489e-005;
double y;
bool negate=false;
if (x>2048)
if (x>1024)
if (negate)
return (float)y;
Added on : 08/01/07 by toast[ AT ]somewhereyoucantfind[ DOT ]com
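The listing above survives only partially in this archive copy (the branch bodies and the evaluation of the fitted curve are missing), but the scheme is clear: fold the 0..4096 phase onto a half wave, evaluate a fitted curve, restore the sign. A sketch of that folding with a stand-in curve (Bhaskara's rational approximation, not the post's A..E constants):

```python
import math

def folded_sine(phase):
    """phase scaled 0 <= phase < 4096 for one full cycle, like xSin."""
    negate = False
    x = phase
    if x > 2048:              # second half wave: shift back and negate
        negate = True
        x -= 2048
    t = x / 2048.0            # half wave mapped to t in [0, 1]
    # Stand-in fit: sin(pi*t) ~ 16 t (1 - t) / (5 - 4 t (1 - t))
    y = 16 * t * (1 - t) / (5 - 4 * t * (1 - t))
    return -y if negate else y
```

Bhaskara's formula is only good to about ±0.0016, while the post reports far lower noise for its fitted curve, so treat this purely as a shape for the control flow.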
Comment :
Improved version:
float xSin(double x)
//x is scaled 0<=x<4096
const double A=-0.40319426317E-08;
const double B=0.21683205691E+03;
const double C=0.28463350538E-04;
const double D=-0.30774648337E-02;
double y;
bool negate=false;
if (x>2048)
if (x>1024)
if (negate)
return (float)(-y);
return (float)y;
Added on : 15/04/07 by depinto1[ AT ]oz[ DOT ]net
Comment :
%This is Matlab code. you can convert it to C
%All it take to make a high quality sine
%wave is 1 multiply and one subtract.
%You first have to initialize the 2 unit delays
% and the coefficient
Fs = 48000; %Sample rate
oscfreq = 1000.0; %Oscillator frequency in Hz
c1 = 2 * cos(2 * pi * oscfreq / Fs);
%Initialize the unit delays
d1 = sin(2 * pi * oscfreq / Fs);
d2 = 0;
%Initialization done here is the oscillator loop
% which generates a sinewave
for j=1:100
output = d1; %This is the sine value
fprintf(1, '%f\n', output);
%one multiply and one subtract is all it takes
d0 = d1 * c1 - d2;
d2 = d1; %Shift the unit delays
d1 = d0;
Added on : 09/02/08 by juuso[ DOT ]alasuutari[ AT ]gmail[ DOT ]com
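The recurrence above (one multiply and one subtract per sample) is easy to sanity-check with a direct port; the constants are taken from the Matlab comment:

```python
import math

Fs = 48000.0               # sample rate
oscfreq = 1000.0           # oscillator frequency in Hz
w = 2 * math.pi * oscfreq / Fs
c1 = 2 * math.cos(w)

d1, d2 = math.sin(w), 0.0  # initialize the unit delays
samples = []
for _ in range(100):
    samples.append(d1)     # d1 is the current sine value
    d0 = d1 * c1 - d2      # one multiply and one subtract
    d2, d1 = d1, d0        # shift the unit delays

# samples[j] equals sin((j + 1) * w) up to rounding error
```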
Comment :
Can I use this code in a GPL2 or GPL3 licensed program (a soft synth project called Snarl)? In other words, will you grant permission for me to re-license your code? And what name should I write
down as copyright holder in the headers?
Juuso Alasuutari
Added on : 22/06/09 by toast[ AT ]somewhereyoucantfind[ DOT ]com
Comment :
Comments are displayed in fixed width, no HTML code allowed! | {"url":"http://www.musicdsp.org/showArchiveComment.php?ArchiveID=241","timestamp":"2014-04-16T22:26:36Z","content_type":null,"content_length":"12949","record_id":"<urn:uuid:32a64510-23a1-45ca-8ebf-83879934deac>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
The history of QCD - CERN Courier
L’histoire de la QCD
Over the past sixty years, research has revealed that hadrons, like atomic nuclei, are bound states of quarks, antiquarks and gluons. Remarkably, a single gauge theory, quantum chromodynamics, is able to describe the complicated phenomena of the strong interactions and of the nuclear forces at work in these systems. In this article, Harald Fritzsch, one of the pioneers of quantum chromodynamics, recalls the development of the theory, whose foundations were laid in the work he carried out with Murray Gell-Mann in 1972.
About 60 years ago, many new particles were discovered, in particular the four Δ resonances, the six hyperons and the four K mesons. The Δ resonances, with a mass of about 1230 MeV, were observed in
pion–nucleon collisions at what was then the Radiation Laboratory in Berkeley. The hyperons and K mesons were discovered in cosmic-ray experiments.
Murray Gell-Mann and Yuval Ne’eman succeeded in describing the new particles in a symmetry scheme based on the group SU(3), the group of unitary 3 × 3 matrices with determinant 1 (Gell-Mann 1962,
Ne’eman 1961). SU(3)-symmetry is an extension of isospin symmetry, which was introduced in 1932 by Werner Heisenberg and is described by the group SU(2).
The observed hadrons are members of specific representations of SU(3). The baryons are octets and decuplets, the mesons are octets and singlets. The baryon octet contains the two nucleons, the three
Σ hyperons, the Λ hyperon and the two Ξ hyperons (see figure 1). The members of the meson octet are the three pions, the η meson, the two K mesons and the two K mesons.
In 1961, nine baryon resonances were known, including the four Δ resonances. These resonances could not be members of an octet. Gell-Mann and Ne’eman suggested that they should be described by an SU
(3)-decuplet but one particle was missing. They predicted that this particle, the Ω^–, should soon be discovered with a mass of around 1680 MeV. It was observed in 1964 at the Brookhaven National
Laboratory by Nicholas Samios and his group. Thus the baryon resonances were members of an SU(3) decuplet.
It was not clear at the time why the members of the simplest SU(3) representation, the triplet representation, were not observed in experiments. These particles would have non-integral electric
charges: 2/3 or –1/3.
The quark model
In 1964, Gell-Mann and Feynman's PhD student George Zweig, who was working at CERN, proposed that the baryons and mesons are bound states of the hypothetical triplet particles (Gell-Mann 1964, Zweig
1964). Gell-Mann called the triplet particles "quarks", using a word that had been introduced by James Joyce in his novel Finnegans Wake.
Since the quarks form an SU(3) triplet, there must be three quarks: a u quark (charge 2/3), a d quark (charge –1/3) and an s quark (charge –1/3). The proton is a bound state of two u quarks and one d
quark (uud). Inside the neutron are two d quarks and one u quark (ddu). The Λ hyperon has the internal structure uds. The three Σ hyperons contain one s quark and two u or two d quarks (uus or dds).
The Ξ hyperons are the bound states uss and dss. The Ω^– is a bound state of three s quarks: sss. The eight mesons are bound states of a quark and an antiquark.
In the quark model, the breaking of the SU(3)-symmetry can be arranged by the mass term for the quarks. The mass of the strange quark is larger than the masses of the two non-strange quarks. This
explains the mass differences inside the baryon octet, the baryon decuplet and the meson octet.
Introducing colour
In the summer of 1970, I spent some time at the Aspen Center of Physics, where I met Gell-Mann and we started working together. In the autumn we studied the results from SLAC on the deep-inelastic
scattering of electrons and atomic nuclei. The cross-sections depend on the mass of the virtual photon and the energy transfer. However, the experiments at SLAC found that the cross-sections at large
energies depend only on the ratio of the photon mass and the energy transfer – they showed a scaling behaviour, which had been predicted by James Bjorken.
In the SLAC experiments, the nucleon matrix-element of the commutator of two electromagnetic currents is measured at nearly light-like distances. Gell-Mann and I assumed that this commutator can be
abstracted from the free-quark model and we formulated the light-cone algebra of the currents (Fritzsch and Gell-Mann 1971). Using this algebra, we could understand the scaling behaviour. We obtained
the same results as Richard Feynman in his parton model, if the partons are identified with the quarks. It later turned out that the results of the light-cone current algebra are nearly correct in
the theory of QCD, owing to the asymptotic freedom of the theory.
The Ω^– is a bound state of three strange quarks. Since this is the ground state, the space wave-function should be symmetrical. The three spins of the quarks are aligned to give the spin of the
omega minus. Thus the wave function of the Ω^– does not change if two quarks are interchanged. However, the wave function must be antisymmetric according to the Pauli principle. This was a great
problem for the quark model.
In 1964, Oscar Greenberg discussed the possibility that the quarks do not obey the Pauli statistics but rather a "parastatistics of rank three". In this case, there is no problem with the Pauli
statistics but it was unclear whether parastatistics makes any sense in a field theory of the quarks.
Two years later, Moo-Young Han and Yoichiro Nambu considered nine quarks instead of three. The electric charges of these quarks were integral. In this model there were three u quarks: two of them had
electric charge of 1, while the third one had charge 0 – so on average the charge was 2/3. The symmetry group was SU(3) × SU(3), which was assumed to be strongly broken. The associated gauge bosons
would be massive and would have integral electric charges.
In 1971, Gell-Mann and I found a different solution of the statistics problem (Fritzsch and Gell-Mann 1971). We considered nine quarks, as Han and Nambu had done, but we assumed that the three quarks
of the same type had a new conserved quantum number, which we called "colour". The colour symmetry SU(3) was an exact symmetry. The wave functions of the hadrons were assumed to be singlets of the
colour group. The baryon wave-functions are antisymmetric in the colour indices, denoted by red (r), green (g) and blue (b): schematically, B ∝ (rgb – rbg + gbr – grb + brg – bgr)/√6.
Thus the wave function of a baryon changes sign if two quarks are exchanged, as required by the Pauli principle. Likewise, the wave functions of the mesons are colour singlets: M ∝ (rr̄ + gḡ + bb̄)/√3.
The cross-section for electron–positron annihilation into hadrons at high energies depends on the squares of the electric charges of the quarks and on the number of colours. For three colours this leads to:
R = σ(e⁺e⁻ → hadrons)/σ(e⁺e⁻ → μ⁺μ⁻) = 3 [(2/3)² + (1/3)² + (1/3)²] = 2.
Without colours this ratio would be 2/3. The experimental data, however, were in agreement with a ratio of 2.
In 1971–1972, Gell-Mann and I worked at CERN. Together with William Bardeen we investigated the electromagnetic decay of the neutral pion into two photons. It was known that in the quark model the
decay rate is about a factor nine less than the measured decay rate – another problem for the quark model.
The decay amplitude is given by a triangle diagram, in which a quark–antiquark pair is created virtually and subsequently annihilates into two photons. We found that after the introduction of colour,
the decay amplitude increases by a factor three – each colour contributes to the amplitude with the same strength. For three colours, the result agrees with the experimental value.
In the spring of 1972, we started to interpret the colour group as a gauge group. The resulting gauge theory is similar to quantum electrodynamics (QED). The interaction of the quarks is generated by
an octet of massless colour gauge bosons, which we called gluons (Fritzsch and Gell-Mann 1972). We later introduced the name "quantum chromodynamics", or QCD. We published details of this theory one
year later together with Heinrich Leutwyler (Fritzsch et al. 1973).
In QCD, the gluons interact not only with the quarks but also with themselves. This direct gluon–gluon interaction is important – it leads to the reduction of the coupling constant at increasing
energy, i.e. the theory is asymptotically free, as discovered in 1972 by Gerard ’t Hooft (unpublished) and in 1973 by David Gross, David Politzer and Frank Wilczek. Thus at high energies the quarks
and gluons behave almost as free particles. This leads to the approximate "scaling behaviour" of the cross-sections in the deep-inelastic lepton–hadron scattering. The quarks behave almost as free
particles at high energies.
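The logarithmic weakening of the interaction can be made explicit. As an illustration (this formula is a standard textbook result, not taken from the article itself), the one-loop running coupling for $n_f$ quark flavours reads:

```latex
\alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda^2\right)}
```

For $n_f < 17$ the coupling therefore decreases logarithmically as the momentum transfer $Q$ grows, which is asymptotic freedom, while Λ sets the scale at which the interaction becomes strong.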
The logarithmic decrease of the coupling constant depends on the QCD energy-scale parameter, Λ, which is a free parameter and has to be measured in the experiments. The current experimental value of Λ is of the order of 200 MeV.
Experiments at SLAC, DESY, CERN’s Large Electron–Positron (LEP) collider and Fermilab’s Tevatron have measured the decrease of the QCD coupling-constant (figure 2). With LEP, it was also possible to
determine the QCD coupling constant at the mass of the Z boson rather precisely: α_s(M_Z) ≈ 0.118.
It is useful to consider the theory of QCD with just one heavy quark Q. The ground-state meson in this hypothetical case would be a quark–antiquark bound state. The effective potential between the
quark and its antiquark at small distances would be a Coulomb potential proportional to 1/r, where r is the distance between the quark and the antiquark. However, at large distances the
self-interaction of the gluons becomes important. The gluonic field lines at large distances do not spread out as in electrodynamics. Instead, they attract each other. Thus the quark and the
antiquark are connected by a string of gluonic field lines (figure 3). The force between the quark and the antiquark is constant, i.e. it does not decrease as in electrodynamics. The quarks are
confined. It is still an open question whether this applies also to the light quarks.
In electron–positron annihilation, the virtual photon creates a quark and an antiquark, which move away from each other with high speed. Because of the confinement property, mesons – mostly pions –
are created, moving roughly in the same direction. The quark and the antiquark "fragment" to produce two jets of particles. The sum of the energies and momenta of the particles in each jet should be
equal to the energy of the original quark, which is equal to the energy of each colliding lepton. These quark jets were observed for the first time in 1978 at DESY (figure 4). They had already been
predicted in 1975 by Feynman.
If a quark pair is produced in electron–positron annihilation, then QCD predicts that sometimes a high-energy gluon should be emitted from one of the quarks. The gluon would also fragment and produce
a jet. So, sometimes three jets should be produced. Such events were observed at DESY in 1979 (figure 4).
The basic quanta of QCD are the quarks and the gluons. Two colour-octet gluons can form a colour singlet. Such a state would be a neutral gluonium meson. The ground state of the gluonium mesons has a
mass of about 1.4 GeV. In QCD with only heavy quarks, this state would be stable but in the real world it would mix with neutral quark–antiquark mesons and would decay quickly into pions. Thus far,
gluonium mesons have not been identified clearly in experiments.
The simplest colour-singlet hadrons in QCD are the baryons – consisting of three quarks – and the mesons, made of a quark and an antiquark. However, there are other ways to form a colour singlet. Two
quarks can be in an antitriplet – they can form a colour singlet together with two antiquarks. The result would be a meson consisting of two quarks and two antiquarks. Such a meson is called a
tetraquark. Three quarks can be in a colour octet, as well as a quark and an antiquark. They can form a colour-singlet hadron, consisting of four quarks and an antiquark. Such a baryon is called a
pentaquark. So far, tetraquark mesons and pentaquark baryons have not been clearly observed in experiments.
The three quark flavours were introduced to describe the symmetry given by the flavour group SU(3). However, we now know that in reality there are six quarks: the three light quarks u, d, s and the
three heavy quarks c (charm), b (bottom) and t (top). These six quarks form three doublets of the electroweak symmetry group SU(2): (u, d), (c, s) and (t, b).
The masses of the quarks are arbitrary parameters in QCD, just as the lepton masses are in QED. Since the quarks do not exist as free particles, their masses cannot be measured directly. They can,
however, be estimated using the observed hadron masses. In QCD they depend on the energy scale under consideration; typical values are quoted at an energy of 2 GeV.
The mass of the t quark is large, similar to the mass of a gold atom. Owing to this large mass, the t quark decays by the weak interaction with a lifetime that is less than the time needed to form a
meson. Thus there are no hadrons containing a t quark.
The theory of QCD is the correct field theory of the strong interactions and of the nuclear forces. Both hadrons and atomic nuclei are bound states of quarks, antiquarks and gluons. It is remarkable
that a simple gauge theory can describe the complicated phenomena of the strong interactions.
Show that the Euclidean plane is a metric space?
April 25th 2010, 08:42 AM
Show that the Euclidean plane is a metric space?
Metric Space (S, d) consists of a space S and a function d that associates a real number with any two elements of S. The properties of a metric space are:
d(x, y) = d(y, x) for all x, y in S
0 < d(x, y) < ∞ for all x, y in S with x ≠ y
d(x, x) = 0 for all x in S
d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z in S
I have to show that the Euclidean plane (defined by two 2-D vectors X and Y?) is a metric space.
Havin a bit of trouble with this one... :(
April 25th 2010, 10:11 AM
So define the Euclidean metric for two vectors $x=(x_1,x_2)$ and $y=(y_1,y_2)$ by $d(x,y)=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$.
The first three properties are easy to show. The triangle inequality requires some (not much) extra thinking.
April 25th 2010, 05:29 PM
yes I got as far, maybe you could explain the triangle inequality and the 'extra' thinking as I am at a wall.
I worked out all the distance equations for them and squared to get rid of the root... by induction one can say that with the z values added the inequality holds but I'm having trouble showing
it... help?
April 26th 2010, 07:51 AM
Well, the first thing I would notice is that you can write $(x_i-y_i)^2$ as $(x_i-z_i+z_i-y_i)^2$. So we have:
$d(x,y)=\sqrt{(x_1-z_1+z_1-y_1)^2+(x_2-z_2+z_2-y_2)^2}$
if we let $a_i=x_i-z_i$ and $b_i=z_i-y_i$, we get
$d(x,y)=\sqrt{(a_1+b_1)^2+(a_2+b_2)^2}$
What we would now like is to show that:
$\ast\quad\sqrt{(a_1+b_1)^2+(a_2+b_2)^2}\leq\sqrt{a_1^2+a_2^2}+\sqrt{b_1^2+b_2^2}$
The LHS of the inequality is $\sqrt{a_1^2+a_2^2+b_1^2+b_2^2+2(a_1b_1+a_2b_2)}$. We hope that the radicand is less than or equal to $\left(\sqrt{a_1^2+a_2^2}+\sqrt{b_1^2+b_2^2}\right)^2$. That is, we want: $a_1^2+a_2^2+b_1^2+b_2^2+2(a_1b_1+a_2b_2)\leq a_1^2+a_2^2+b_1^2+b_2^2+2\sqrt{a_1^2+a_2^2}\sqrt{b_1^2+b_2^2}$. If we boil this down to the bare inequality, we want to show that $a_1b_1+a_2b_2\leq\sqrt{a_1^2+a_2^2}\sqrt{b_1^2+b_2^2}$.
The last inequality there (if you haven’t seen it before) is the Cauchy-Schwarz inequality. The easiest way to prove the triangle inequality is first to prove the C.S.I., then work backwards
(which is easier to do now that you know where to go) to prove $\ast$. Show that $\ast$ is equivalent to the triangle inequality.
April 27th 2010, 04:12 AM
Show the Euclidean Plane is a metric space.
$S=\{(a_1,a_2)\mid a_1,a_2\in\mathbb{R}\}$, with $x=(a_1,a_2)$ and $y=(b_1,b_2)$ in $S$
$d(x,y)=d(y,x) \Rightarrow \sqrt{(a_1-b_1)^2 + (a_2-b_2)^2}=\sqrt{(b_1-a_1)^2 + (b_2-a_2)^2}$
from the RHS
$=\sqrt{(a^2_1-2a_1b_1+b^2_1) + (a^2_2-2a_2b_2+b^2_2)}$
$=\sqrt{(a_1-b_1)^2 + (a_2-b_2)^2}$
$d(x,x)=0 \Rightarrow \sqrt{(a_1-a_1)^2+(a_2-a_2)^2}=0$
$d(x,y) \leq d(x,z)+d(z,y)$
The triangle inequality is much easier to deal with when using vector notation:
if we set $\vec{a}=x-z$ and $\vec{b}=z-y$, then $d(x,z)=\|\vec{a}\|$, $d(z,y)=\|\vec{b}\|$, and $d(x,y)=\|\vec{a}+\vec{b}\|$ (since $\vec{a}+\vec{b}=x-y$).
This gives us $\|\vec{a}+\vec{b}\| \leq \|\vec{a}\|+\|\vec{b}\|$
Working from the LHS:
$\|\vec{a}+\vec{b}\|^2=(\vec{a}+\vec{b})\cdot(\vec{a}+\vec{b})$
$=\vec{a}\cdot(\vec{a}+\vec{b})+\vec{b}\cdot(\vec{a}+\vec{b})$
$=\|\vec{a}\|^2+2(\vec{a}\cdot\vec{b})+\|\vec{b}\|^2$
by the Cauchy-Schwarz inequality ($\vec{a}\cdot\vec{b}\leq|\vec{a}\cdot\vec{b}| \leq \|\vec{a}\|\|\vec{b}\|$)
$\Rightarrow\|\vec{a}\|^2+2(\vec{a}\cdot\vec{b})+\|\vec{b}\|^2 \leq \|\vec{a}\|^2+2\|\vec{a}\|\|\vec{b}\|+\|\vec{b}\|^2$
$\|\vec{a}\|^2+2\|\vec{a}\|\|\vec{b}\|+\|\vec{b}\|^2=(\|\vec{a}\|+\|\vec{b}\|)^2$
$\|\vec{a}+\vec{b}\|^2 \leq (\|\vec{a}\|+\|\vec{b}\|)^2$
$\Rightarrow \|\vec{a}+\vec{b}\| \leq \|\vec{a}\|+\|\vec{b}\|$ $QED$
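For what it's worth, here is a quick numerical sanity check of the three axioms in Python (the helper `d` is just the Euclidean metric written out; obviously this is no substitute for the proof above):

```python
import math
import random

def d(x, y):
    """Euclidean metric on R^2."""
    return math.sqrt((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)

random.seed(1)
for _ in range(1000):
    # three random points in the plane
    x, y, z = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    assert d(x, y) == d(y, x)                    # symmetry
    assert d(x, x) == 0.0                        # d(x, x) = 0
    assert d(x, y) <= d(x, z) + d(z, y) + 1e-12  # triangle inequality (up to rounding)
```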
So I get it now. I was stuck because you had to use the Cauchy-Schwarz inequality to tie the knot at the end.
thanks for the help
April 27th 2010, 04:42 AM
Yes, exactly. A general rule of thumb is that the more abstract your setting, the easier it is to show many results. For example, it is possible to show using C.S. that any inner product induces
a norm ( $\|x\|^2=\langle x,x\rangle$), and it is easy to see that any norm induces a metric ( $d(x,y)=\|x-y\|$). Then since the dot product is an inner product on $\mathbb{R}^2$, the function
defined as $d(x,y)=\sqrt{(x-y)\cdot(x-y)}$ is necessarily a metric.
Empirical Bayes analysis of single nucleotide polymorphisms
BMC Bioinformatics. 2008; 9: 144.
An important goal of whole-genome studies concerned with single nucleotide polymorphisms (SNPs) is the identification of SNPs associated with a covariate of interest such as the case-control status
or the type of cancer. Since these studies often comprise the genotypes of hundreds of thousands of SNPs, methods are required that can cope with the corresponding multiple testing problem. For the
analysis of gene expression data, approaches such as the empirical Bayes analysis of microarrays have been developed particularly for the detection of genes associated with the response. However, the
empirical Bayes analysis of microarrays has only been suggested for binary responses when considering expression values, i.e. continuous predictors.
In this paper, we propose a modification of this empirical Bayes analysis that can be used to analyze high-dimensional categorical SNP data. This approach along with a generalized version of the
original empirical Bayes method are available in the R package siggenes version 1.10.0 and later that can be downloaded from http://www.bioconductor.org.
As applications to two subsets of the HapMap data show, the empirical Bayes analysis of microarrays can not only be used to analyze continuous gene expression data, but also be applied to categorical SNP data, where the response is not restricted to be binary. In association studies in which typically several tens to a few hundred SNPs are considered, our approach can furthermore be employed to
test interactions of SNPs. Moreover, the posterior probabilities resulting from the empirical Bayes analysis of (prespecified) interactions/genotypes can also be used to quantify the importance of
these interactions.
Whole-genome experiments comprise data of hundreds of thousands of single nucleotide polymorphisms (SNPs), where a SNP is the most common type of genetic variations that occurs when at a single base
pair position different base alternatives exist in a population. SNPs are typically biallelic. Therefore, SNPs can be interpreted as categorical variables having three realizations: the homozygous
reference genotype (if both chromosomes show the more frequent variant), the heterozygous genotype (if one chromosome shows the more frequent, and the other the less frequent variant), and the
homozygous variant genotype (if both bases explaining the SNP are of the less frequent variant).
Since SNPs can alter the risk for developing a disease, an important goal in studies concerned with SNPs is the identification of the SNPs that show a distribution of the genotypes that differs
substantially between different groups (e.g., cancer vs. non-cancer). Detecting such SNPs requires methods that can cope with this vast multiple testing problem in which hundreds of thousands of
hypotheses are tested simultaneously. Naturally, the value of a statistic appropriate for the considered testing situation and the corresponding p-value are computed for each variable, where in the
case of SNPs Pearson's χ^2-statistic is an appropriate test score. These raw p-values are then adjusted for multiple comparisons such that a Type I error rate is strongly controlled at a prespecified
level of significance α.
The classical example for a Type I error rate is the family-wise error rate FWER = Prob(V ≥ 1),
where V is the number of false positives, i.e. the number of rejected null hypotheses that are actually true – or in biological terms, the number of SNPs found by the procedure to differ between
groups that actually do not differ between the groups. This error rate is strongly controlled at a level α so that Prob(V ≥ 1) ≤ α by approaches such as the Bonferroni correction or the procedures of
Westfall and Young [1]. An overview on such methods is given in [2]. In [3], procedures for controlling this and other error rates are compared in an application to gene expression data.
In classical multiple testing situations in which rarely more than 20 hypotheses are tested simultaneously, it is reasonable to keep down the probability of one or more false positives. However, in
the analysis of data from whole-genome studies, hundreds of thousands of SNPs are considered simultaneously. Moreover, a few false positives are acceptable in such experiments as long as their number
is small in proportion to the total number R of rejected null hypotheses, i.e. identified SNPs. This situation for which the family-wise error rate might be too conservative is thus similar to the
multiple testing problem in studies concerned with gene expression data. In the analysis of such DNA microarray data, another error rate, namely the false discovery rate
$\mathrm{FDR}=\begin{cases}E(V/R), & \text{if } R>0\\ 0, & \text{if } R=0\end{cases}$
proposed by Benjamini and Hochberg [4], has hence become popular; it is in turn a reasonable choice in the analysis of high-dimensional SNP data.
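To make the Benjamini-Hochberg idea concrete, here is a small sketch of the step-up adjustment (the function name and coding are ours, not from the paper):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Step-up FDR adjustment of Benjamini and Hochberg (1995):
    rejecting all hypotheses with an adjusted p-value <= alpha
    controls the FDR at level alpha (for independent tests)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                          # sort raw p-values
    ranked = p[order] * m / np.arange(1, m + 1)    # p_(i) * m / i
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out
```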
Apart from adjusting p-values, there also exist other approaches for adjusting for multiple comparisons such as the significance analysis of microarrays (SAM [5]) and the empirical Bayes analysis of
microarrays (EBAM [6]) that have been developed particularly for the analysis of gene expression data.
In the original versions of both SAM and EBAM, a moderated t-statistic is computed. In SAM, the observed values of this test score are then plotted against the values of the statistic expected under
the null hypothesis of no difference between the two groups, and a gene is called differentially expressed if the point representing this gene in this Quantile-Quantile plot is far away from the
diagonal. In EBAM, the density f of the observed values z of the moderated t-statistic is modeled by a mixture of the density $f_1$ of the differentially expressed genes and the density $f_0$ of the not differentially expressed genes, i.e. by
$f(z) = \pi_0 f_0(z) + \pi_1 f_1(z)$,
where $\pi_1$ and $\pi_0 = 1 - \pi_1$ are the prior probabilities that a gene is differentially expressed or not, respectively. Following Efron et al. [6], a gene having a z-value of z* is detected to be differentially expressed if the posterior probability
$p_1(z^*) = \frac{\pi_1 f_1(z^*)}{f(z^*)} = 1 - \frac{\pi_0 f_0(z^*)}{f(z^*)}$
for being differentially expressed is larger than or equal to 0.9.
In [7], a generalized version of the SAM algorithm is presented, whereas in [8,9] SAM is adapted for categorical data such as SNP data.
In the following section, we first present a generalized EBAM algorithm. Then, we propose an adaption of EBAM enabling the analysis of categorical data. As computing the values of the test statistic
for all SNPs individually would be very time-consuming, we further suggest an approach based on matrix algebra that allows to compute all values simultaneously. Afterwards, EBAM for categorical data
is applied, on the one hand, to two subsets of the high-dimensional SNP data from the HapMap project [10], and on the other hand, to simulated data that mimic data from a typical association study in
which several ten SNPs are considered. In the latter application, it is also shown how EBAM can be applied to identify SNP interactions associated with the response, and how it can be used to specify
the importance of prespecified SNP interactions.
Generalized EBAM algorithm
In Algorithm 1, a generalized version of the empirical Bayes analysis of microarrays (EBAM [6]) is presented. This algorithm makes use of the fact that for a given rejection region Γ, the FDR can be estimated by
$\widehat{\mathrm{FDR}} = \hat{\pi}_0\,\frac{E_{H_0}\bigl(\#\{Z_i \in \Gamma\}\bigr)}{\#\{z_i \in \Gamma\}}$
where $z_i$ is the observed value of the test statistic $Z_i$ for variable i = 1, ..., m, $\pi_0$ is the prior probability that a gene is not differentially expressed – or more generally, that a variable is not associated with the response – and $E_{H_0}(\#\{Z_i \in \Gamma\})$ is the number of test statistics expected under the null hypothesis to fall into Γ [11].
Several procedures have been suggested to estimate the prior probability $\pi_0$ [6,11,12]. Efron et al. [6], e.g., propose to use a narrow interval $A$ around z = 0, and to estimate $\pi_0$ by the ratio of the number of observed z-values in $A$ to the number of z-values that are expected under the null hypothesis to fall into $A$. However, the narrower $A$, the more unstable is this estimate. To stabilize this estimate, we use the procedure of Storey and Tibshirani [12] in which a natural cubic spline h with three degrees of freedom is fitted through the data points $(\lambda, \hat{\pi}_0(\lambda))$, where $\hat{\pi}_0(\lambda) = \#\{z_i \in \Lambda\}/\bigl(m(1-\lambda)\bigr)$,
$\Lambda=\begin{cases}[0,\,q_{1-\lambda}), & \text{if } \Gamma \text{ is one-sided}\\ (q_{\lambda/2},\,q_{1-\lambda/2}), & \text{if } \Gamma \text{ is two-sided}\end{cases}$
and $q_\lambda$ denotes the λ quantile of the (estimated) null distribution. The estimate of $\pi_0$ is then given by evaluating the fitted spline at λ = 1, i.e. $\hat{\pi}_0 = \hat{h}(1)$.
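A rough sketch of this π₀ estimation in Python (we work with p-values instead of z-values, and a cubic polynomial stands in for the natural cubic spline; both simplifications are ours, not the paper's):

```python
import numpy as np

def pi0_storey(pvals, lambdas=np.arange(0.05, 0.96, 0.05)):
    """Sketch of the Storey-Tibshirani pi0 estimate on p-values:
    pi0(lambda) = #{p_i > lambda} / (m * (1 - lambda)), smoothed over a
    grid of lambdas and evaluated at lambda = 1. A cubic polynomial
    replaces the natural cubic spline of the published procedure."""
    p = np.asarray(pvals, dtype=float)
    pi0_lam = np.array([(p > lam).mean() / (1.0 - lam) for lam in lambdas])
    coef = np.polyfit(lambdas, pi0_lam, 3)   # stand-in for the spline fit
    pi0 = float(np.polyval(coef, 1.0))       # extrapolate to lambda = 1
    return min(max(pi0, 0.0), 1.0)           # clip into [0, 1]
```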
Algorithm 1 (Generalized EBAM Procedure)
Let X be an m × n matrix comprising the values of m variables and n observations, y be a vector of length n composed of the values of the response for the n observations, and B be the number of permutations.
1. For each variable i = 1, ..., m, compute the value z[i ]of a statistic appropriate for testing if the values of this variable are associated with the response.
2. If the null density $f_0$ is known, use a density estimation procedure to obtain $\hat{f}$ and compute $\hat{\phi} = f_0/\hat{f}$. Otherwise, estimate the ratio $\phi = f_0/f$ directly by
(a) determining the m permuted z-values $z_{ib}$ for each permutation b = 1, ..., B of the n values of the response,
(b) binning the m observed and mB permuted z-values into an appropriate number of intervals,
(c) fitting a logistic regression model with repeated observations through these intervals using an appropriate regression function.
3. Estimate π[0 ]by the procedure of Storey and Tibshirani [12].
4. For each variable i, compute the posterior probability $\hat{p}_1(z_i) = 1 - \hat{\pi}_0\hat{\phi}(z_i)$.
5. Order the observed z-values to obtain $z_{(1)} \leq \ldots \leq z_{(m)}$, and set $i_0 = \sum_{i=1}^{m} I(z_{(i)} < 0) + 1$.
6. For a prespecified probability Δ or a set of appropriate values for Δ,
(a) set $i_1 = \max_{i \geq i_0}\{i : \hat{p}_1(z_{(i)}) < \Delta\} + 1$, and compute the upper cut-off $c_U$ by
$c_U = \begin{cases} z_{(i_1)}, & \text{if } i_1 \leq m\\ \infty, & \text{otherwise,}\end{cases}$
(b) set $i_2 = \min_{i < i_0}\{i : \hat{p}_1(z_{(i)}) < \Delta\} - 1$, and compute the lower cut-off $c_L$ by
$c_L = \begin{cases} z_{(i_2)}, & \text{if } i_0 > 1 \text{ and } i_2 \geq 1\\ -\infty, & \text{otherwise,}\end{cases}$
(c) call all variables i with $z_i \notin \Gamma_\Delta^C$ significant, where $\Gamma_\Delta^C = (c_L, c_U)$ denotes the complement of the rejection region $\Gamma_\Delta$,
(d) estimate the FDR of $\Gamma_\Delta$ by
$\hat{\alpha} = \begin{cases} 1 - \int_{c_L}^{c_U} f_0(z)\,dz, & \text{if } f_0 \text{ is known}\\ \#\{z_{ib} \in \Gamma_\Delta\}/(mB), & \text{otherwise.}\end{cases}$
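Steps 5 and 6 of Algorithm 1 can be sketched in code as follows (a simplified, 0-based-index translation; the function and variable names are ours):

```python
import numpy as np

def ebam_cutoffs(z, post, delta):
    """Sketch of steps 5-6 of Algorithm 1: choose cut-offs c_L, c_U so
    that every variable whose z-value is at least as extreme as that
    of a significant variable is also called significant."""
    z = np.asarray(z, dtype=float)
    order = np.argsort(z)
    zs, ps = z[order], np.asarray(post, dtype=float)[order]
    m = len(zs)
    i0 = int(np.sum(zs < 0))                 # first index with z >= 0
    # upper cut-off: step past the largest i >= i0 with posterior < delta
    bad_up = [i for i in range(i0, m) if ps[i] < delta]
    i1 = max(bad_up) + 1 if bad_up else i0
    c_u = zs[i1] if i1 < m else np.inf
    # lower cut-off: step past the smallest i < i0 with posterior < delta
    bad_lo = [i for i in range(i0) if ps[i] < delta]
    i2 = min(bad_lo) - 1 if bad_lo else i0 - 1
    c_l = zs[i2] if i2 >= 0 else -np.inf
    significant = (z >= c_u) | (z <= c_l)
    return c_l, c_u, significant
```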
The original version of EBAM is of course a special case of Algorithm 1: Efron et al. [6] compute the moderated t-statistic
$z_i = \frac{d_i}{a_0 + s_i} \qquad (1)$
for each gene i = 1, ..., m, where $d_i$ is the difference of the groupwise mean expression values and $s_i$ is the corresponding standard deviation such that $d_i/s_i$ is the ordinary t-statistic. The fudge factor $a_0$ is computed by the quantile of the m standard deviations that leads to the largest number of genes called differentially expressed in a standardized EBAM analysis (see [6] for
details on this standardized analysis). Since the null distribution of (1) is unknown, the response is permuted repeatedly to generate mB permuted z-values. Efron et al. [6] then bin the m observed
and mB permuted z-values into 139 intervals. Treating the observed scores as successes and the permuted values as failures, a logistic regression model is fitted through the binned data points using
a natural cubic spline with five degrees of freedom as regression function. For details on this logistic regression, see Remark (D) in [6].
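As a simplified stand-in for this binning-plus-logistic-regression step, the ratio φ = f₀/f can be estimated directly from the bin counts (no spline smoothing; this coarse version is our own sketch, not the authors' procedure):

```python
import numpy as np

def phi_hat_binned(z_obs, z_perm, nbins=50):
    """Crude estimate of phi = f0/f: bin the observed and permuted
    z-values on a common grid and take the ratio of the two relative
    frequencies per bin (the paper instead fits a logistic regression
    with a spline through the binned successes/failures)."""
    z_obs = np.asarray(z_obs, dtype=float)
    z_perm = np.asarray(z_perm, dtype=float)
    edges = np.histogram_bin_edges(np.concatenate([z_obs, z_perm]), bins=nbins)
    f = np.histogram(z_obs, bins=edges)[0] / len(z_obs)     # observed density per bin
    f0 = np.histogram(z_perm, bins=edges)[0] / len(z_perm)  # null density per bin
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(f > 0, f0 / f, np.nan)
    idx = np.clip(np.digitize(z_obs, edges) - 1, 0, nbins - 1)
    return phi[idx]   # phi-hat evaluated at each observed z-value
```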
Algorithm 1 also comprises the approach used by Efron and Tibshirani [13] to test two-group gene expression data with Wilcoxon rank statistics.
The main difference between Algorithm 1 and the original version of EBAM is that Efron et al. [6] call all genes differentially expressed that have a posterior probability larger than or equal to Δ =
0.9, whereas we only call a variable i with $\hat{p}_1(z_i) \geq \Delta$ significant if there is no other variable with a more extreme z-value (a larger z-value if $z_i > 0$, or a smaller z-value if $z_i < 0$)
that has a posterior probability less than Δ. This approach that is comparable to the proceeding in SAM, therefore, ensures that all variables with a z-value exceeding some threshold are called
significant, whereas in the original version of EBAM it might happen that a variable is not called significant, even though it has a more extreme z-value than some of the identified variables.
Another difference is that Efron et al. [6] consider one fixed posterior probability, namely Δ = 0.9, for calling genes differentially expressed, whereas we allow both to prespecify one probability Δ
and to consider a set of reasonable values for Δ. The latter again is similar to the SAM procedure in which the number of genes called differentially expressed and the estimated FDR is determined for
several values of the SAM threshold, and then the value is chosen that provides the best balance between the number of identified genes and the estimated FDR. This approach can be helpful when the
detection of interesting variables is just an intermediate aim, and the actual goal of the analysis is, e.g., the construction of a classification rule. In such a case, prespecifying the value of Δ
might work poorly, as this might lead to either a too small number of identified variables, or a too high FDR. For an example of this proceeding in the context of the empirical Bayes analysis, see
the application of EBAM for categorical data to the HapMap data set.
EBAM for categorical data
We now assume that our data consist of m categorical variables each exhibiting C levels denoted by 1, ..., C, and n observations each belonging to one of R groups denoted by 1, ..., R. If these
variables are SNPs, C = 3.
A statistic appropriate for testing each of the m categorical variables if its distribution differs between the R groups is Pearson's χ²-statistic
$z = \sum_{r=1}^{R}\sum_{c=1}^{C}\frac{(n_{rc} - \tilde{n}_{rc})^2}{\tilde{n}_{rc}} \qquad (2)$
where $n_{rc}$ and $\tilde{n}_{rc}$ are the observed number of observations and the number of observations expected under the null hypothesis in group r = 1, ..., R, respectively, showing level c = 1, ..., C.
Since the small denominator problem [5,6,14], which is the reason for adding the fudge factor $a_0$ to the denominator of the ordinary t-statistic in (1), does not show up in this case, it is not
necessary to add a fudge factor to the denominator of (2). Therefore, Algorithm 1 can be applied to SNPs – or to any other type of (genetic) categorical data – by employing Pearson's χ^2-statistic as
test score.
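The matrix-algebra idea mentioned in the Background for computing all test statistics simultaneously can be sketched as follows (a NumPy illustration of the idea; the actual implementation in the siggenes package differs):

```python
import numpy as np

def chisq_all_snps(X, y):
    """Pearson's chi-square statistic (2) for all m variables at once.

    X: (m, n) matrix of genotypes coded e.g. 1, 2, 3; y: length-n
    vector of group labels. Indicator matrices turn the m per-SNP
    contingency tables into a few matrix products (a sketch of the
    matrix-algebra idea, not the authors' exact code)."""
    X, y = np.asarray(X), np.asarray(y)
    n = X.shape[1]
    G = (y[None, :] == np.unique(y)[:, None]).astype(float)  # (R, n) group indicators
    n_r = G.sum(axis=1)                                      # group sizes
    stat = np.zeros(X.shape[0])
    for c in np.unique(X):                                   # loop over the C levels
        Ic = (X == c).astype(float)                          # (m, n) indicator of level c
        n_rc = Ic @ G.T                                      # (m, R) observed counts
        expected = np.outer(Ic.sum(axis=1), n_r) / n         # (m, R) expected counts
        with np.errstate(divide="ignore", invalid="ignore"):
            term = np.where(expected > 0, (n_rc - expected) ** 2 / expected, 0.0)
        stat += term.sum(axis=1)
    return stat
```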
In EBAM, it is assumed that all variables follow the same null distribution. In the permutation based approach of Algorithm 1, this, e.g., means that not only the B permuted z-values corresponding to
a particular variable, but all mB permutations of all m variables are considered in the estimation of the null distribution of this variable. Normally, this is an advantage in the analysis of
high-dimensional data [6,15]. In the analysis of categorical data, this, however, might lead to a loss of a large number of variables, as only variables showing the same number of levels can be
considered together in an EBAM analysis.
Approximation to χ^2-distribution
Since the null distribution of (2) can be approximated by a χ^2-distribution with (R - 1)(C - 1) degrees of freedom, only the density f of the observed test statistics needs to be estimated. This can
be done by applying a (non-parametric) kernel density estimator to the observed z-values [16]. However, the standard kernels are typically symmetric such that negative values of z will have a
positive estimated density, even though f(z) = 0 for z < 0. A solution to this problem is to use asymmetric kernels that only give non-negative values of z a positive density [17,18]. Another
solution, which we will use, is a semi-parametric method proposed by Efron and Tibshirani [19].
In the first step of this procedure, a histogram of the observed z-values is generated. To obtain a reasonable number of bins for the histogram, we employ the one-level bin width estimator of Wand [
20]. Although other bin width estimators such as the approaches of Scott [21] or of Freedman and Diaconis [22] lead to different bin widths, the densities resulting from the method of Efron and
Tibshirani [19] are virtually identical. The approach of Sturges [23], however, which is, e.g., the default method for estimating the number of bins in the R function hist, typically leads to a much
too small number of intervals when considering large numbers of observations [24], and is therefore an inappropriate procedure in our application.
In the second step of the procedure of Efron and Tibshirani [19], a Poisson regression model is fitted in which the midpoints of the bins are used as explanatory variables, and the numbers of observations in the intervals are the values of the response. As most of the SNPs are assumed to show the same distribution in the different groups, the density f of the observed z-values typically looks similar to the null density $f_0$, but has a heavier right tail (see Figure 1). We therefore use a natural cubic spline with three degrees of freedom as regression function if (R - 1)(C - 1) ≤ 2. For (R - 1)(C - 1) ≥ 3, a natural cubic spline with five degrees of freedom would be a reasonable regression function. However, in functions such as the R function ns for generating the basis matrix of the spline, the inner knots are by default given by the 20%, 40%, 60%, and 80% quantiles of the midpoints of the bins. These inner knots work well for symmetric densities. But the χ^2-distribution is asymmetric – in particular for a small number of degrees of freedom. If (R - 1)(C - 1) ≥ 3, we hence specify the inner knots directly by centering them around the mode rather than the median. The inner knots are thus given by the $0.4q_M$, $0.8q_M$, $1 - 0.8(1 - q_M)$, and $1 - 0.4(1 - q_M)$ quantiles of the midpoints, where $q_M$ is the quantile of the midpoints that corresponds to the mode, estimated by the midpoint of the bin of the histogram containing the most observations. If there is more than one bin showing the largest number of observations, then the smallest of the corresponding midpoints is used as estimate. Other mode estimators such as the half-range mode [25,26] might lead to better estimates than this ad hoc method, but the estimation of f is typically only slightly influenced by the choice of the mode estimator.
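The mode-centered knot placement described above can be sketched as follows (hypothetical helper function; in the paper the resulting quantiles are passed to the R function ns):

```python
# Quantile positions of the four inner knots for the natural cubic spline,
# centered around the mode of the binned z-values. For q_mode = 0.5 (mode
# equal to the median), this recovers the standard 20%, 40%, 60%, and 80%
# quantiles that R's ns function uses by default.
def inner_knot_quantiles(q_mode):
    return [0.4 * q_mode,
            0.8 * q_mode,
            1 - 0.8 * (1 - q_mode),
            1 - 0.4 * (1 - q_mode)]
```

For the right-skewed χ^2-density the mode lies to the left of the median, so $q_M$ < 0.5 and all four knots shift toward the left tail, where most of the probability mass sits.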
Figure 1: Densities of the test scores in the analyses of the HapMap data. On the left hand side, the histograms and the estimated densities (marked by red lines) of the values of Pearson's χ^2-statistic of the SNPs from the two subsets of the HapMap data ...
In Figure 2, the estimated densities of four χ^2-distributions with different degrees of freedom resulting from the application of this procedure to 100,000 values randomly drawn from the respective χ^2-distribution are displayed, where the inner knots are centered, on the one hand, around the mode (red lines), and on the other hand, around the median (cyan lines). This figure reveals that the former leads to a better estimation than using the standard inner knots. In fact, the densities estimated using the former approach are very similar to the true densities.
Figure 2: Estimating the density of the χ^2-distribution. For different degrees of freedom, the true (black line) and the estimated density (red line) of the χ^2-distribution are shown, where the density is estimated by applying the procedure of Efron ...
Having estimated f, the ratio $\hat{\phi} = f_0/\hat{f}$ is determined, and the remaining steps 3 to 6 of Algorithm 1 are processed.
Permutation-based estimation of the null density
If the assumptions for the approximation to the χ^2-distribution are not met [27], the null density $f_0$ also has to be estimated. In this case, we calculate the ratio $\hat{\phi}$ directly by permuting the group labels B times, computing the mB permuted z-values, dividing these scores and the m observed z-values into intervals, and fitting a logistic regression model through the binned data points.
Similar to the application of the procedure of Efron and Tibshirani [19] (see previous section), the estimation of ϕ does not depend on the number of intervals used in the binning as long as this
number is not too small or too large. We therefore follow Efron et al. [6], and split the observed and permuted z-values into 139 intervals. Since the rejection region is one-sided when considering
Pearson's χ^2-statistic as test score, a natural cubic spline with three degrees of freedom is used as regression function.
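A simplified, unsmoothed version of this binned ratio estimate can be sketched as follows (hypothetical function with equal-width bins; the paper additionally smooths the binned counts with a logistic regression spline):

```python
# Raw binned estimate of phi = f0/f: bin the m observed and the m*B
# permuted z-values into common equal-width intervals and take the ratio
# of the relative frequencies per bin (permuted frequencies estimate f0,
# observed frequencies estimate f).
def binned_phi(observed, permuted, n_bins=10):
    lo = min(min(observed), min(permuted))
    hi = max(max(observed), max(permuted))
    width = (hi - lo) / n_bins

    def rel_freq(values):
        counts = [0] * n_bins
        for v in values:
            i = min(int((v - lo) / width), n_bins - 1)
            counts[i] += 1
        return [c / len(values) for c in counts]

    f0_hat = rel_freq(permuted)
    f_hat = rel_freq(observed)
    return [f0 / f if f > 0 else float("nan")
            for f0, f in zip(f0_hat, f_hat)]
```

If the observed scores behave exactly like the permuted null scores, the ratio is 1 in every bin; bins in the right tail where the observed frequency exceeds the null frequency yield ratios below 1, marking candidate significant variables.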
Whole-genome studies comprise the genotypes of hundreds of thousands of SNPs for each of which the value of Pearson's χ^2-statistic (2) has to be computed. Since calculating these values one-by-one
is very time-consuming, we employ matrix algebra for determining all the scores simultaneously.
Assume that we have given an m × n matrix X in which each row corresponds to a categorical variable exhibiting the levels 1, ..., C, and a vector y comprising the group labels 1, ..., R of the n
observations represented by the columns of X.
Firstly, C m × n indicator matrices X^(c) for the C levels are constructed by setting the elements of these matrices to
$x_{ij}^{(c)} = I(x_{ij} = c) = \begin{cases} 1, & \text{if } x_{ij} = c \\ 0, & \text{otherwise,} \end{cases}$
i = 1, ..., m, j = 1, ..., n. Furthermore, an n × R matrix Y with entries $y_{jr} = I(y_j = r)$ is built in which each column represents one of the R group labels. Then, we set

$N^{(c)} = X^{(c)} Y \quad \text{and} \quad \tilde{N}^{(c)} = \frac{1}{n}\left(X^{(c)} 1_n\right)\left(1_n^T Y\right),$

c = 1, ..., C, where $1_n$ is a vector of length n consisting only of ones, so that the ith row and rth column of the m × R matrices $N^{(c)}$ and $\tilde{N}^{(c)}$ comprise the observed and the expected number of observations, respectively, that belong to the rth group and show the cth level at the ith variable. Afterwards, the m × R matrices $D^{(c)}$, c = 1, ..., C, are determined by elementwise matrix calculation, i.e. by setting

$d_{ir}^{(c)} = \frac{\left(n_{ir}^{(c)} - \tilde{n}_{ir}^{(c)}\right)^2}{\tilde{n}_{ir}^{(c)}}.$

Finally, the vector z comprising the value of Pearson's χ^2-statistic for each of the m variables is given by

$z = \sum_{c=1}^{C} D^{(c)} 1_R,$

where $1_R$ is a vector of R ones.
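The quantities involved can be cross-checked with a plain loop-based reference implementation (a Python sketch with illustrative names; the speedup discussed below comes precisely from replacing such loops with matrix products in R):

```python
# Reference computation of the chi^2 vector z for m categorical variables.
# X: list of m rows, each with n genotype codes in {1, ..., C};
# y: list of n group labels in {1, ..., R}.
def chi2_all(X, y, C, R):
    m, n = len(X), len(X[0])
    group_sizes = [y.count(r) for r in range(1, R + 1)]
    z = [0.0] * m
    for i in range(m):
        for c in range(1, C + 1):
            # marginal count of level c at variable i (row sum of X^(c))
            level_tot = sum(1 for j in range(n) if X[i][j] == c)
            for r in range(1, R + 1):
                # observed count: level c and group r at variable i
                obs = sum(1 for j in range(n)
                          if X[i][j] == c and y[j] == r)
                # expected count: level total * group size / n
                exp = level_tot * group_sizes[r - 1] / n
                z[i] += (obs - exp) ** 2 / exp
    return z
```

For a single variable, this reproduces the usual two-way Pearson statistic computed directly from its R × C contingency table.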
If the permutation-based version of EBAM for categorical data is used, then not "just" m, but m(B + 1) z-values have to be computed. Again, matrix algebra can help to speed up the computation by considering all B permutations at once, or – if the number of variables or permutations is too large – subsets of the B permutations.
For this, suppose that L is a B × n matrix in which each row corresponds to one of the B permutations of the n group labels. If the B × n indicator matrices $L^{(r)}$, r = 1, ..., R, are defined analogously to $X^{(c)}$, then the m × B matrix $Z^0 = \{z_{ib}\}$ containing the mB permuted z-values can be determined by

$Z^0 = \sum_{c=1}^{C} \sum_{r=1}^{R} \left(X^{(c)} \left(L^{(r)}\right)^T - \tilde{n}_r^{(c)} 1_B^T\right)^2 \Big/ \left(\tilde{n}_r^{(c)} 1_B^T\right),$

where $\tilde{n}_r^{(c)}$ is the rth column of $\tilde{N}^{(c)}$, and the squaring and the division are carried out elementwise. Note that the expected counts are not affected by the permutations, as both the group sizes and the marginal counts of the levels remain unchanged.
Processing time
To evaluate how much the matrix calculation procedure presented in the previous section speeds up the computation in comparison to an individual determination of Pearson's χ^2-statistic, both approaches are applied to several numbers of variables. In Table 1, the resulting processing times are summarized. This table shows that employing matrix algebra leads to an immense reduction of the time needed for computation – in particular if the number m of variables is large. If, e.g., 100,000 variables are considered, it takes just 6.2 seconds to determine the values of Pearson's χ^2-statistic when employing matrix calculation, but more than 4.5 minutes when calculating the values one-by-one.
Table 1: Comparison of computation times (in seconds) on an AMD Athlon XP 3000+ machine with one GB of RAM for both the matrix algebra based calculation and the individual determination of the values of Pearson's χ^2-statistic for different numbers of variables ...
Note that the main reason for this immense reduction in computation time is not that the matrix calculation approach is algorithmically less complex than an individual computation, but that its implementation makes essential use of the way vectorization and matrix multiplication are implemented in R [28].
To exemplify that EBAM can be used to analyze high-dimensional categorical data, it is first applied to two subsets of the genotype data from the International Hapmap Project [10]. Afterwards, it is
shown how EBAM can be employed to identify SNP interactions associated with the response in association studies, and to quantify the importance of genotypes. R code for reproducing the results of all
analyses performed in this section is available in Additional file 1.
Application to HapMap data
In the International HapMap Project, millions of SNPs have been genotyped for each of 270 people from the four populations Japanese from Tokyo (abbreviated by JPT), Han Chinese from Beijing (CHB),
Yoruba in Ibadan, Nigeria (YRI), and CEPH (Utah residents with ancestry from northern and western Europe, abbreviated by CEU).
About 500,000 of these SNPs have been measured using the Affymetrix GeneChip Mapping 500 K Array Set that consists of two chips. In this paper, we focus on the BRLMM (Bayesian Robust Linear Models
with Mahalanobis distance) genotypes [29] of the 262,264 SNPs from one of these chips, namely the Nsp array (see [30] for these genotypes).
JPT vs. CHB
Since we are mainly interested in case-control studies, or more generally in binary responses, EBAM is applied to the 45 JPT and the 45 CHB to detect the SNPs that show a distribution that differs substantially between these two populations. Another reason for choosing these two groups is that both the JPT and the CHB are samples of unrelated individuals, whereas the other two populations each consist of 30 trios, each of which is composed of genotype data from a mother, a father, and their child.
Since in EBAM it is assumed that all variables follow the same null distribution, only SNPs showing the same number of genotypes are considered in the same EBAM analysis. Moreover, the current
implementation of EBAM in the R package siggenes cannot handle missing values such that either missing genotypes have to be imputed, or SNPs with missing genotypes have to be removed prior to the
EBAM analysis. Therefore, 54,400 SNPs showing one or more missing genotypes and 75,481 SNPs for which not all three genotypes are observed at the 90 persons are excluded from the analysis leading to
a data set composed of the genotypes of 132,383 SNPs.
Using an AMD Athlon XP 3000+ machine with one GB of RAM on which Windows XP is installed, an application of EBAM to this data set takes 11.62 seconds if the null density $f_0$ is approximated by the χ^2-density with two degrees of freedom, whereas it takes about 182 seconds if $f_0$ is estimated using 100 permutations.
In the upper left panel of Figure 1, a histogram and the estimated density $\hat{f}$ of the observed test scores are displayed. For many of the SNPs, the assumptions for an approximation to the χ^2-distribution might not be met [27], as some of the expected numbers in the corresponding contingency table are smaller than 5. We therefore prefer not to use the approximation to the χ^2-distribution, but the permutation-based approach of EBAM for categorical data.
Employing the threshold Δ = 0.9 as suggested by Efron et al. [6], i.e. calling all SNPs significant that have a posterior probability of being significant larger than or equal to 0.9, leads to the
identification of 193 SNPs with an estimated FDR of 0.08.
It is, however, also possible to use EBAM similarly to SAM [5,7]. For this, assume that we aim, on the one hand, to control the FDR at a level of about 0.05, and on the other hand, to identify about 200 SNPs for further analyses with, e.g., discrimination methods [9,31] such as logic regression [32]. In Table 2, the numbers of detected SNPs and the corresponding FDRs are summarized for six reasonable values of Δ. This table reveals that it is not possible to attain both goals simultaneously, as calling 200 SNPs significant would lead to an FDR larger than 0.08, whereas controlling the FDR at 0.05 would result in the identification of about 42 SNPs. The table also shows that Δ = 0.90 (or Δ = 0.91) provides a good trade-off between the two goals. Hence, Δ = 0.90 will also be a good choice here if EBAM is used similarly to SAM.
Table 2: Estimated FDRs and numbers of identified SNPs for several values of the threshold Δ.
A list of the 193 SNPs with a posterior probability of being significant larger than or equal to 0.9, along with links to dbSNP [33], is available in Additional file 2. Besides the z-values and the posterior probabilities $\hat{p}_1(z)$, this file also contains an estimate of the local FDR for each SNP [6]. Contrary to the FDR employed to quantify the overall accuracy of a list of variables, the local FDR proposed by Efron et al. [6] is a variable-specific measure that can be estimated by $1 - \hat{p}_1(z)$.
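The relationship between the posterior probability, the variable-specific local FDR, and the significance threshold Δ can be sketched as follows (the posterior probabilities are hypothetical):

```python
# The local FDR of a variable is one minus its posterior probability of
# being significant; a variable is called significant if p1(z) >= Delta.
def local_fdr(p1):
    return [1 - p for p in p1]

def called_significant(p1, delta=0.9):
    return [i for i, p in enumerate(p1) if p >= delta]

p1 = [0.95, 0.99, 0.50, 0.91]   # hypothetical posterior probabilities
```

With Δ = 0.9, the first, second, and fourth variables would be called significant, with local FDRs of 0.05, 0.01, and 0.09, respectively.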
Multi-class case
EBAM for categorical variables is not restricted to binary responses. It, e.g., can also be used to identify the SNPs showing a distribution that differs strongly between the four HapMap populations.
For this analysis, the most obvious dependencies are removed by excluding the child from each of the 60 trios such that 45 JPT, 45 CHB, 60 YRI, and 60 CEU are considered. Again, all SNPs for which at least one of the 210 values is missing (104,872 SNPs), or for which not all three genotypes are observed (14,273 SNPs), are excluded from the analysis, resulting in a data set composed of the genotypes of 143,119 SNPs. In the lower right panel of Figure 1, the estimated density of the z-values of these SNPs and the estimated null density are displayed. This figure reveals that a huge number of these SNPs exhibit a distribution that differs substantially in at least one of the populations. In fact, 131,336 SNPs show a posterior probability $\hat{p}_1(z)$ larger than or equal to 0.9, whereas 33,101 SNPs even have a posterior probability of 1.
To examine which of the populations are responsible for this huge number of significant SNPs, we perform a two-class EBAM analysis for each pair of the four HapMap populations. In Table 3, the numbers of SNPs exhibiting a posterior probability $\hat{p}_1(z)$ ≥ 0.9 are summarized for all these analyses. This table reveals that only JPT and CHB show a small number of SNPs that differ between them. In all other two-class comparisons, a huge number of SNPs are called significant, where CEU differs the most from the other populations. These results do not seem to be that surprising, since JPT and CHB are both populations from Asia, whereas the other two populations come from two other continents.
Table 3: Numbers of significant SNPs found in pairwise EBAM analyses of the four HapMap populations.
Identification of interactions
When considering complex diseases, e.g., sporadic breast cancer, it is assumed that not individual SNPs, but interactions of SNPs have a high impact on the risk of developing the disease [34,35]. In such a case, it would therefore be of interest to also test interactions of SNPs. However, in whole-genome studies in which the number m of SNPs is in the tens or even hundreds of thousands, it would take – depending on the order of the interactions – hours, days or even weeks to compute the test scores for all $\binom{m}{p}$ p-way interactions comprised by the m variables. For strategies for testing two-way interactions on data from a simulated whole-genome study on a cluster of computers, and for their computation times, see [36]. Here, we focus our interest on the EBAM analysis of interactions of SNPs from association studies such as the GENICA study [9,37] in which typically several tens of SNPs are examined.
For the simulation of such a study, data for 50 SNPs and 1,000 observations are generated by randomly drawing the genotypes 1 (for the homozygous reference), 2 (heterozygous), and 3 (homozygous variant) for each SNP $S_i$, i = 1, ..., 50, where the minor allele frequency of the SNP is chosen uniformly at random from the interval [0.25, 0.4]. Afterwards, the case-control status y is randomly drawn from a Bernoulli distribution with mean Prob(Y = 1), where

logit(Prob(Y = 1)) = -0.5 + I($S_6$ ≠ 1, $S_7$ = 1),

such that the probability of being a case is 62.25% if SNP $S_6$ is not of the homozygous reference genotype and SNP $S_7$ is of this genotype.
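The disease model of this simulation can be written down directly (Python sketch; only the logistic model from the text is reproduced, not the genotype sampling):

```python
import math

# Probability of being a case given the genotypes of S6 and S7 (coded
# 1 = homozygous reference, 2 = heterozygous, 3 = homozygous variant):
# logit(P(Y = 1)) = -0.5 + I(S6 != 1, S7 = 1).
def prob_case(s6, s7):
    logit = -0.5 + (1.0 if (s6 != 1 and s7 == 1) else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))
```

prob_case(2, 1) evaluates to about 0.6225, matching the 62.25% stated in the text, while all other genotype combinations give about 0.3775.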
In the left panel of Figure 3, the result of the application of EBAM to these 50 SNPs is displayed. This figure shows that $S_6$ is the only SNP with a posterior probability larger than or equal to 0.9, and thus the only SNP called significant. It also reveals that $S_7$ shows the eighth largest z-value, with a posterior probability of 0.313. If, however, the m(m - 1)/2 = 1,225 two-way interactions of the m = 50 SNPs are considered, then the interaction of $S_6$ and $S_7$ shows by far the largest z-value (see right panel of Figure 3). Most of the other features found to be significant are interactions of $S_6$ with another SNP. In this analysis, not all 1,225, but 1,224 of the two-way interactions are included, since one of the interactions shows only seven of the nine genotypes comprised by the respective two SNPs, and is thus excluded from the EBAM analysis of interactions showing all nine genotypes.
Figure 3: EBAM analysis of the simulated data. Scatter plots of the posterior probabilities vs. the z-values resulting from the applications of EBAM to both the simulated SNPs themselves (left panel) and the two-way interactions comprised by these SNPs (right panel). ...
This analysis is repeated several times using different simulated data sets, each generated randomly with the above settings. In each of the applications of EBAM to the individual SNPs, either one of $S_6$ and $S_7$, or both, are identified as significant. Rarely, other SNPs also show a posterior probability larger than 0.9. In all of the analyses of the two-way interactions, the interaction of $S_6$ and $S_7$ is detected as the most important one.
Measuring the importance of genotypes
EBAM cannot only be used to detect interesting variables or interactions. The posterior probabilities estimated by EBAM can also be employed to quantify the importance of features found by other
approaches such as logicFS [38].
Logic regression [32] – which is employed as base learner in logicFS – is an adaptive regression and classification procedure that searches for Boolean combinations of binary variables associated
with the response. Since this method has shown a good performance in comparison to other discrimination [9,39] and regression [40,41] approaches, a bagging [42] version of logic regression is used in
logicFS to identify interactions of SNPs that are potentially interesting, i.e. associated with the response. While some of the found genotypes/interactions – namely those of a similar form as the one intended to be influential for the disease risk in the previous section – have a high impact on the disease risk, others are only found at random by logicFS. It is therefore necessary to quantify the importance of the detected genotypes.
Since logic regression and thus logicFS can only handle binary predictors, each SNP has to be split into (at least) two binary dummy variables. We follow [32,38] and code each SNP $S_i$, i = 1, ..., m, by

$S_{i1}$: "$S_i$ is not of the homozygous reference genotype."

$S_{i2}$: "$S_i$ is of the homozygous variant genotype."

such that $S_{i1}$ codes for a dominant and $S_{i2}$ for a recessive effect. The genotype intended to be influential in the simulated data set described in the previous section can thus also be specified by the logic expression

$S_{61} \wedge S_{71}^C,$

where $^C$ denotes the complement of a binary variable with outcome true or false, and $\wedge$ the AND-operator.
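The dummy coding and the influential logic expression can be sketched as follows (the function names are illustrative, not part of logicFS):

```python
# Dominant/recessive dummy coding of a SNP coded 1/2/3, and evaluation of
# the logic expression S_61 AND (complement of S_71) from the text.
def snp_dummies(s):
    s1 = s != 1   # S_i1: not of the homozygous reference genotype (dominant)
    s2 = s == 3   # S_i2: of the homozygous variant genotype (recessive)
    return s1, s2

def influential_genotype(s6, s7):
    s61, _ = snp_dummies(s6)
    s71, _ = snp_dummies(s7)
    return s61 and not s71
```

influential_genotype is true exactly for the genotype combinations that raise the case probability in the simulation: $S_6$ not of the homozygous reference genotype and $S_7$ of this genotype.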
Contrary to the previous section, in which each of the $\binom{m}{p}$ distributions of the values of the $3^p$ levels comprised by the respective combination of p of the m SNPs is tested for differences between the groups of persons, EBAM is here applied to conjunctions, i.e. AND-combinations, of binary variables with outcome true or false. Such conjunctions are in turn binary variables, so that genotypes of different orders, i.e. combinations of genotypes of different numbers of SNPs, can be considered together in the same EBAM analysis.
Applying the single tree approach of logicFS, see [38], with 50 iterations to the data set composed of the 100 dummy variables coding for the 50 simulated SNPs from the previous section leads to the detection of 84 potentially interesting interactions. For each of these genotypes, which are conjunctions of one to four binary variables, the importance is then determined by the posterior probability estimated by EBAM. The importances, however, should not be quantified on the same data set on which the genotypes were identified, as it is very likely that almost any of the found genotypes is called significant, since it has already shown up as potentially interesting. In fact, if EBAM is applied to the 84 genotypes evaluated on the data set on which they were detected, 70 of them are called significant using Δ = 0.9, and 15 show a posterior probability of 1 (see left panel of Figure 4). While these 15 genotypes are composed of $S_{61} \wedge S_{71}^C$ and one or two other binary variables, 32 of the genotypes called significant contain neither $S_6$ nor $S_{61} \wedge S_{71}^C$. Moreover, two genotypes exist that exhibit a larger z-value than $S_{61} \wedge S_{71}^C$.
Figure 4: EBAM applied to the genotypes identified by logicFS. Scatter plots of the posterior probabilities vs. the z-values resulting from the applications of EBAM to the genotypes found in an application of logicFS to the simulated data. On the left hand side, ...
It is therefore more appropriate to test the found genotypes on an independent data set. Thus, a new (test) data set is randomly generated as described in the previous section. Afterwards, the values
of the 84 detected genotypes for the observations from the new data set are computed, and EBAM is applied to these values.
The same 15 genotypes as in the application to the original data set show a posterior probability of 1, where $S_{61} \wedge S_{71}^C$ is found to be the genotype with the largest z-value. The other three genotypes also called significant using Δ = 0.9 contain either $S_{61} \wedge S_{71}^C$ or $S_{61}$. All the other genotypes that were not intended to have an impact on the disease risk, but were called significant in the application to the data set on which they were found, show a posterior probability of less than 0.9, and thus are no longer called significant in the application to the test data set.
Again, this analysis is repeated several times with different training and test data sets leading to similar results in each of the applications.
Conclusion and Discussion
Using the Bayesian framework to adjust for multiple comparisons is an attractive alternative to adjusting p-values – in particular if the data are high-dimensional. Thus, Efron et al. [6] have
suggested an empirical Bayes analysis of microarrays (EBAM) for testing each gene if its mean expression value differs between two groups with a moderated t-statistic.
In this paper, we have proposed an algorithm that generalizes this procedure. This algorithm comprises the original EBAM analysis of Efron et al. [6] as well as the EBAM analysis based on Wilcoxon
rank sums [13], and allows for other types of EBAM analyses in other testing situations. For this, it is only necessary to choose an appropriate test statistic, and, if the null density is known, a
method for estimating the density of the observed test scores. The EBAM approach for categorical data proposed in this paper is one example for such an analysis. Another example would be to use an F
-statistic for performing an EBAM analysis of continuous data (e.g., gene expression data) when the response shows more than two levels. In this case, the z-values of the genes would be given by the
values of the F-statistic, and the density of the observed z-values might be estimated by the procedure of Efron and Tibshirani [19] if an F-distribution with appropriate degrees of freedom is
assumed to be the null distribution.
The generalized EBAM algorithm along with functions for using (moderated) t-statistics (one- and two-class, paired and unpaired, assuming equal or unequal group variances), (moderated) F-statistics
and Wilcoxon rank sums is implemented in the R package siggenes version 1.10.0 and later that can be downloaded from the webpage [43] of the BioConductor project [44] (see also the section
Availability and requirements).
siggenes version 1.11.7 and later also contains a function for the EBAM analysis of categorical data proposed in this paper. Note that siggenes 1.10.× already comprises a preversion of this function.
The main difference between these versions is the estimation of the density f of the observed test scores: While in siggenes 1.10.× the default version of the R function ns is used to generate the basis matrix for the natural cubic spline that is employed in the estimation of f, the inner knots of this spline are centered around the mode (and not the median) in siggenes 1.11.7 and later, which leads to a better estimate of f, as Figure 2 shows.
To exemplify how EBAM for categorical data can be applied to SNP data from whole-genome studies, it has been used to analyze two subsets of the HapMap data. In the first application aiming to
identify SNPs showing a distribution that differs substantially between JPT and CHB, 193 of the 132,383 considered SNPs show a posterior probability larger than or equal to 0.9, and are therefore
called significant by EBAM, where the estimated FDR of this set of SNPs is 0.08.
The number of identified SNPs and the corresponding FDR resulting from this EBAM analysis are identical to the results of the application of SAM to this HapMap data set [9] when the same permutations of the group labels are used in both methods. This is due to the fact that both EBAM and SAM employ the same approach to estimate the FDR. Moreover, the same set of SNPs is identified by both methods, since the same non-negative test statistic is used in both applications. Virtually the same applies to the usage of the q-values [11,12] as implemented, e.g., in John Storey's R package qvalue. For example, each of the 193 SNPs found by EBAM exhibits a q-value less than or equal to 0.08.
In the second application to the HapMap data set in which all four populations are considered, most of the 143,119 SNPs show a distribution that differs substantially in at least one of the four
groups. This huge number of differences does not seem to be that surprising, as the four HapMap populations come from three different continents. Pairwise EBAM analyses of the four populations show
that CEU is the population that differs the most from the other populations. Again, a SAM analysis would lead to the same estimated FDR as the EBAM analysis if the same number of SNPs is identified,
where this set of significant variables will contain the same SNPs in both analyses.
An advantage of EBAM over other approaches is that it not only estimates the FDR for a set of detected variables, but also naturally provides a variable-specific estimate for the probability that a
variable is associated with the response.
The two applications to the HapMap data, however, also reveal two restrictions of the EBAM procedure. Since in EBAM it is assumed that all variables follow the same null distribution, a large number of SNPs have to be removed prior to both analyses, as these SNPs either exhibit missing values or show only (one or) two of the three genotypes. A solution to the former problem would be to replace the missing genotypes using imputation methods such as KNNcatImpute [45] or – when considering Affymetrix SNP chips – to employ genotype calling algorithms such as RLMM [46] or CRLMM [47] that make it possible to obtain genotypes for all SNPs.
An idea for solving the second problem is to perform two EBAM analyses – one for the SNPs showing only two genotypes, and one for the SNPs with data available for all three genotypes. Having computed the posterior probabilities for the two sets of SNPs separately and called all SNPs significant that exhibit a posterior probability of being significant larger than or equal to Δ in either of the analyses, a combined FDR needs to be estimated for both analyses, since we are interested in one estimate of the FDR for all detected SNPs. How such a combined estimate of the FDR can be obtained is an open question that will be part of future research.
EBAM cannot only be used to test individual categorical variables such as SNPs, but can also be applied to interactions of these variables.
However, two problems occur when considering interactions. The first problem is that $\binom{m}{p}$ p-way interactions have to be tested. Although the functions implemented in siggenes allow splitting the variables into subsets, an EBAM analysis of interactions in high-dimensional data is not feasible in a reasonable amount of time. It is thus restricted to data from association studies in which several tens to a few hundred SNPs are considered.
The second problem is the empty cell problem: The number of observations available in a study is limited such that when considering p-way interactions of SNPs some of the 3^p cells of the p
-dimensional contingency tables of some of the interactions will be empty leading to features with different numbers of categories and thus with different null distributions. Hence, EBAM cannot be
applied to all of these features at once. In the analysis of the two-way interactions from the simulated data set, e.g., one interaction exhibits values only for seven of the nine genotypes comprised
by two SNPs. This interaction therefore has to be removed from the EBAM analysis.
The abovementioned idea of performing separate EBAM analyses for variables with different numbers of levels and computing a combined FDR might not be ideal in the case of interactions, as many different numbers of levels could exist. In such a situation, a better solution is not to consider the p-way interactions as variables with $3^p$ categories, but to test each of the $3^p$ genotypes comprised by p SNPs that are observed in at least a particular number of persons. Furthermore, it might make sense to include the complements of the genotypes, as, e.g., "Not the homozygous reference genotype" corresponds to a dominant effect of a SNP. This, however, would increase the multiple testing problem by a factor of up to $6^p$, such that a filtering prior to the EBAM analysis might be necessary.
Boulesteix et al. [48] propose a multiple testing procedure for the identification of the combination of genotypes in a prespecified subset of (interacting) SNPs that shows the largest association
with the response. Another solution to this multiple testing problem that does not require a prespecification of a subset of SNPs has been described in this paper: Firstly, a search algorithm such as
logicFS is used to identify potentially interesting genotypes, where these genotypes can be composed of the genotypes from any of the SNPs considered in the study. Afterwards, the detected genotypes
are tested on an independent data set using EBAM, where the posterior probability of being significant resulting from this EBAM analysis can be interpreted as an importance measure for the genotypes.
For this analysis, it is not necessary that all genotypes are composed of the genotypes of the same number of SNPs, as they are coded as binary variables. Quantifying the importance of (combinations
of) binary variables is implemented in the R packages logicFS version 1.7.6 and later [49].
Availability and requirements
Project name: siggenes – Multiple testing using SAM and Efron's empirical Bayes approach
Project home page: http://bioconductor.org/packages/2.1/bioc/html/siggenes.html (for siggenes 1.12.0)
Operating system(s): Platform independent
Programming language: R
Licence: Free for non-commercial use
Any restrictions to use by non-academics: See the licence in the siggenes package
Abbreviations
CEPH – Utah residents with ancestry from northern and western Europe (CEU). Han Chinese from Beijing (CHB). Empirical Bayes Analysis of Microarrays (EBAM). False Discovery Rate (FDR). Japanese from Tokyo (JPT). Significance Analysis of Microarrays (SAM). Single Nucleotide Polymorphism (SNP). Yoruba in Ibadan, Nigeria (YRI).
Authors' contributions
HS had the idea to generalize EBAM and to adapt EBAM to SNPs, implemented the software, and wrote the paper. KI was involved in the development of EBAM for categorical data and the design of the
applications. Both authors read and approved the final manuscript.
Supplementary Material
Additional file 1:
scriptEBAMSNP.R. This file, which can be opened either in R or in any text editor, contains the R code that was used to generate the results presented in this paper.
Additional file 2:
ebam.jpt.chb.html. This html-file contains information about the significant SNPs found in the EBAM analysis of JPT vs. CHB.
Acknowledgements
Financial support of the Deutsche Forschungsgemeinschaft (SFB 475, "Reduction of Complexity in Multivariate Data Structures") is gratefully acknowledged. The authors would also like to thank the reviewers for their helpful comments.
References
• Westfall PH, Young SS. Resampling-based multiple testing: examples and methods for p-value adjustments. New York, NY: Wiley; 1993.
• Shaffer JP. Multiple hypothesis testing. Ann Rev Psych. 1995;46:561–584.
• Dudoit S, Shaffer JP, Boldrick JC. Multiple hypothesis testing in microarray experiments. Stat Sci. 2003;18:71–103.
• Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Statist Soc B. 1995;57:289–300.
• Tusher V, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci USA. 2001;98:5116–5124. [PMC free article] [PubMed]
• Efron B, Tibshirani R, Storey JD, Tusher V. Empirical Bayes analysis of a microarray experiment. J Amer Statist Assoc. 2001;96:1151–1160.
• Schwender H, Krause A, Ickstadt K. Identifying interesting genes with siggenes. RNews. 2006;6:45–50.
• Schwender H. Modifying microarray analysis methods for categorical data – SAM and PAM for SNPs. In: Weihs C, Gaul W, editors. Classification – The Ubiquitous Challenge. Heidelberg: Springer; 2005. pp.
• Schwender H. PhD thesis. University of Dortmund, Department of Statistics; 2007. Statistical analysis of genotype and gene expression data.
• The International HapMap Consortium The International HapMap Project. Nature. 2003;426:789–796. [PubMed]
• Storey JD. A direct approach to false discovery rates. J Roy Statist Soc B. 2002;64:479–498.
• Storey JD, Tibshirani R. Statistical significance of genome-wide studies. Proc Natl Acad Sci USA. 2003;100:9440–9445. [PMC free article] [PubMed]
• Efron B, Tibshirani R. Empirical Bayes methods and false discovery rates for microarrays. Genet Epidemiol. 2002;23:70–86. [PubMed]
• Smyth G. Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004;3:Article 3. [PubMed]
• Storey JD, Tibshirani R. SAM thresholding and false discovery rates for detecting differential gene expression in DNA microarrays. In: Parmigiani G, Garrett ES, Irizarry RA, Zeger SL, editors. The Analysis of Gene Expression Data: Methods and Software. New York: Springer; 2004. pp. 272–290.
• Silverman BW. Density estimation for statistics and data analysis. London: Chapman and Hall; 1986.
• Chen SX. Probability density functions estimation using gamma kernels. Ann Inst Statist Math. 2000;52:471–480.
• Scaillet O. Density estimation using inverse and reciprocal inverse Gaussian kernels. J Nonparam Statist. 2004;16:217–226.
• Efron B, Tibshirani R. Using specially designed exponential families for density estimation. Ann Statist. 1996;24:2431–2461.
• Wand MP. Data-based choice of histogram bin width. Amer Stat. 1997;51:59–64.
• Scott DW. On optimal and data-based histograms. Biometrika. 1979;66:605–610.
• Freedman D, Diaconis P. On the histogram as a density estimator: L2 theory. Z Wahr Verw Geb. 1981;57:453–476.
• Sturges H. The choice of a class-interval. J Amer Statist Assoc. 1926;21:65–66.
• Scott DW. Multivariate density estimation: theory, practice, and visualization. New York: Wiley; 1992.
• Bickel DR. Robust estimators of the mode and skewness of continuous data. Computat Statist Data Anal. 2002;39:153–163.
• Hedges SB, Shah R. Comparison of mode estimation methods and application in molecular clock analysis. BMC Bioinformatics. 2003;4:31. [PMC free article] [PubMed]
• Cochran WG. Some methods for strengthening the common χ^2 tests. Biometrics. 1954;10:417–451.
• R Development Core Team . R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2007. http://www.R-project.org ISBN 3-900051-07-0.
• Affymetrix . BRLMM: an improved genotype calling method for the GeneChip Human Mapping 500 k array set. Tech rep, Affymetrix, Santa Clara, CA; 2006.
• Affymetrix – Mapping 500 k genotype calls on 270 HapMap samples http://www.affymetrix.com/support/technical/sample_data/500k_hapmap_genotype_data.affx
• Schwender H, Zucknick M, Ickstadt K, Bolt HM. A pilot study on the application of statistical classification procedure to molecular epidemiological data. Tox Letter. 2004;151:291–299. [PubMed]
• Ruczinski I, Kooperberg C, LeBlanc M. Logic regression. J Comput Graph Stat. 2003;12:475–511.
• The single nucleotids polymorphism database (dbSNP) http://www.ncbi.nlm.nih.gov/projects/SNP
• Garte S. Metabolic susceptibility genes as cancer risk factors: time for a reassessment? Cancer Epidemiol Biomarkers Prev. 2001;10:1233–1237. [PubMed]
• Culverhouse R, Suarez BK, Lin J, Reich T. A perspective on epistasis: limits of models displaying no main effect. Am J Hum Genet. 2002;70:461–471. [PMC free article] [PubMed]
• Marchini J, Donnelly P, Cardon RC. Genome-wide strategies for detecting multiple loci that influence complex diseases. Nat Genet. 2005;37:413–416. [PubMed]
• Justenhoven C, Hamann U, Pesch B, Harth V, Rabstein S, Baisch C, Vollmert C, Illig T, Ko Y, Brüning T, Brauch H. ERCC2 genotypes and a corresponding haplotype are linked with breast cancer risk
in a German population. Cancer Epidemiol Biomarker Prev. 2004;13:2059–2064. [PubMed]
• Schwender H, Ickstadt K. Identification of SNP interactions using logic regression. Biostat. 2008;9:187–198. [PubMed]
• Ruczinski I, Kooperberg C, LeBlanc M. Exploring interactions in high-dimensional genomic data: an overview of logic regression, with applications. J Mult Anal. 2004;90:178–195.
• Kooperberg C, Ruczinski I, LeBlanc M, Hsu L. Sequence analysis using logic regression. Genet Epidemiol. 2001;21:S626–S631. [PubMed]
• Witte JS, Fijal BA. Introduction: analysis of sequence data and population structure. Genet Epidemiol. 2001;21:600–601. [PubMed]
• Breiman L. Bagging predictors. Mach Learn. 1996;26:123–140.
• BioConductor project http://www.bioconductor.org
• Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, Hornik K, Hothorn T, Huber W, Iacus S, Irizarry R, Leisch F, Li C, Maechler M, Rossini AJ, Sawitzki G, Smith C, Smyth G, Tierney L, Yang JY, Zhang J. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004;5:R80. http://genomebiology.com/2004/5/10/R80 [PMC free article] [PubMed]
• Schwender H, Ickstadt K. Imputing missing genotypes with k nearest neighbors. Tech rep., Collaborative Research Center 475, Department of Statistics, University of Dortmund; 2008.
• Rabbee N, Speed TP. A genotype calling algorithm for Affymetrix SNP arrays. Bioinformatics. 2006;22:7–12. [PubMed]
• Carvalho B, Bengtsson H, Speed TP, Irizarry RA. Exploration, normalization, and genotype calls for high-density oligonucleotide SNP array data. Biostat. 2007;8:485–499. [PubMed]
• Boulesteix AL, Strobl C, Weidinger S, Wichmann HE, Wagenpfeil S. Multiple testing for SNP-SNP interactions. Stat Appl Genet Mol Biol. 2007;6 [PubMed]
• logicFS version 1.8.0 http://bioconductor.org/packages/2.1/bioc/html/logicFS.html
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central