Single phase capacitor sizing - Electrical Engineering Centre
When installing a motor that uses a capacitor for starting or running, we must size the capacitor correctly for the motor in order to obtain the correct starting torque and to keep the winding from overheating and being damaged.
This is basically a question of motor design. There is no straightforward relationship between capacitance and motor size in kW.
When replacing these capacitors, the capacitance value and voltage should be taken from the manufacturer's plate on the motor or from the old capacitor. The value must be correct within ±5% and is sometimes stipulated down to a fraction of a μF. The choice of a running capacitor is even more limited than that of a starting capacitor.
How to size the starting capacitor?
1) A rule of thumb has been developed over the years to help simplify this process. To select the correct capacitance value, start with 30 to 50 μF/kW and adjust the value as required while measuring motor performance.
We can also use a basic formula to calculate the capacitor size:
2) Determine the voltage rating for capacitor.
When we select the voltage rating for the capacitor, we must know the voltage of our power supply. For a safety margin, multiply the power supply voltage by 1.3 (i.e., 30% above the supply voltage). Factors that affect the selection of the proper voltage rating of the capacitor include:
• Voltage de-rating factor
• Safety agency requirements
• Reliability requirements
• Maximum operating temperature
• Space available
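The two rules above (30-50 μF of starting capacitance per kW, and a capacitor voltage rating at least 30% above the supply voltage) can be sketched in a few lines of Python. This is only an illustration; the function name and interface are mine, not from the article, and the result is a starting point to be confirmed against the motor nameplate.

```python
def estimate_start_capacitor(motor_kw, supply_voltage):
    """Rough starting-capacitor estimate from the rule of thumb:
    30 to 50 uF per kW, tuned afterwards while measuring motor
    performance.  The voltage rating carries a 30% safety margin."""
    c_min_uf = 30.0 * motor_kw        # lower bound of the range, in uF
    c_max_uf = 50.0 * motor_kw        # upper bound of the range, in uF
    v_rating = supply_voltage * 1.3   # 30% above the supply voltage
    return c_min_uf, c_max_uf, v_rating

# Example: a 1.5 kW motor on a 230 V supply gives a 45-75 uF range
# and a minimum capacitor voltage rating of about 299 V.
c_lo, c_hi, v_min = estimate_start_capacitor(1.5, 230.0)
```

The returned range is deliberately wide; the final value is found experimentally, as the article notes.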
How to size the running capacitor?
When selecting motor run capacitors, all of the required parameters above need to be identified in an organized process. Remember that not only the physical and basic electrical requirements but also the type of dielectric material and the metallization technique should be examined. The wrong choice here can have adverse effects on the overall performance of the capacitors. Please refer to the motor nameplate, or contact the supplier or manufacturer, to get an accurate capacitor value. Safety first.
1. brian k says:
hi couldnt make out the formula for single phase [start] caps, blurry.any way to get a clear copy? thanks brian
□ merrick says:
can u send me on CLEAR copy of capacitance motor formulla!
2. parvindra saini says:
see clear copy of single phase motor capacitor sizzing &with example
3. Roelof van Heerden says:
To whom it may concerin
Thank you for valuable information. It is a long time since I last worked with these motors and I forgot to work out the size of the capacitor for staring a S/P motor. Your infog in this article
helped me a lot and I hope to recover my motor which was out of order for about a year now.
Happy new year!
Roelof van Heerden
4. Robert says:
Hey, I found your post very valuable. But I got a question regarding the run capacitor. You aproximate sizing of the capacitor is in the range of 4-40 mF and at the motors I have 1.1-2.2 kW the
run capacitor is sized between 35-70 uF. The difference is more than 100 times, is there a calculation error?
5. Steve says:
Hi Robert,
I think 4-40mf means 4-40microfarad, so the difference is a lot smaller than you think. Using mF is an older way of abbreviating the units (now very confusing).
Quasi-Newton Methods
There are many variants of quasi-Newton methods. In all of them, the idea is to base the matrix in the quadratic model
on an approximation of the Hessian matrix built up from the function and gradient values from some or all steps previously taken.
This shows a plot of the steps taken by the quasi-Newton method. The path is much less direct than for Newton's method. The quasi-Newton method is used by default for problems that are not sums of squares.
The first thing to notice about the path taken in this example is that it starts in the wrong direction. This direction is chosen because at the first step all the method has to go by is the
gradient, and so it takes the direction of steepest descent. However, in subsequent steps, it incorporates information from the values of the function and gradient at the steps taken to build up an
approximate model of the Hessian.
The methods used by Mathematica are the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update and, for large systems, the limited-memory BFGS (L-BFGS) method, in which the model is not stored explicitly, but rather is computed from gradients and step directions stored from past steps.
The BFGS method is implemented such that instead of forming the model Hessian B_k at each step, Cholesky factors L_k with L_k L_k^T = B_k are computed, so that only O(n^2) operations are needed to solve the system [DS96] for a problem with n variables.
For large-scale sparse problems, the BFGS method can be problematic because, in general, the Cholesky factors (or the Hessian approximation or its inverse) are dense, so the memory and operations requirements become prohibitive compared to algorithms that take advantage of sparseness. The L-BFGS algorithm [NW99] forms an approximation to the inverse Hessian based on the last m past steps, which are stored. The Hessian approximation may not be as complete, but the memory and order of operations are limited to O(m n) for a problem with n variables. In Mathematica 5, for problems over 250 variables, the algorithm is switched automatically to L-BFGS. You can control this with the method option "StepMemory"->m. With m=∞, the full BFGS method will always be used. Choosing an appropriate value of m is a tradeoff between speed of convergence and the work done per step. With m<3, you are most likely better off using a "conjugate gradient" algorithm.
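For a concrete picture of what the BFGS model update does, here is a minimal plain-Python sketch (Python rather than Mathematica, and not Wolfram's actual implementation; all helper names are mine). It applies the standard inverse-Hessian BFGS update; the key property to check is the secant condition, H·y = s, after the update.

```python
def outer(a, b):
    """Outer product a b^T as a nested list."""
    return [[ai * bj for bj in b] for ai in a]

def mat_mul(A, B):
    """Matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_combine(A, B, c):
    """Elementwise A + c*B."""
    return [[a + c * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def bfgs_inverse_update(H, s, y):
    """One BFGS update of the inverse-Hessian approximation H, given
    the step s = x_{k+1} - x_k and the gradient change
    y = grad f(x_{k+1}) - grad f(x_k):

        H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,
        rho = 1 / (y^T s)

    Only O(n^2) work is needed -- no Hessian is ever formed."""
    n = len(s)
    rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    A = mat_combine(I, outer(s, y), -rho)   # I - rho s y^T
    B = mat_combine(I, outer(y, s), -rho)   # I - rho y s^T
    return mat_combine(mat_mul(mat_mul(A, H), B), outer(s, s), rho)
```

Starting from H = I with s = (1, 0) and y = (2, 1), the updated matrix satisfies H·y = s exactly, which is how the approximation comes to "remember" curvature information from each step.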
Quasi-Newton methods are chosen as the default in Mathematica because they are typically quite fast and do not require computation of the Hessian matrix, which can be quite expensive both in terms of
the symbolic computation and numerical evaluation. With an adequate line search, they can be shown to converge superlinearly [NW99] to a local minimum where the Hessian is positive definite. This means that

    lim_{k→∞} ||x_{k+1} − x*|| / ||x_k − x*|| = 0,

or, in other words, the steps keep getting smaller. However, for very high precision, this does not compare to the q-quadratic convergence rate of Newton's method.
This shows the number of steps and function evaluations required to find the minimum to high precision for the problem shown.
Newton's method is able to find ten times as many digits with far fewer steps because of its quadratic convergence rate. However, the convergence with the quasi-Newton method is still superlinear
since the ratio of the errors is clearly going to zero.
This makes a plot showing the ratios of the errors in the computation. The ratios of the errors are shown on a logarithmic scale so that the trend can clearly be seen over a large range of values.
The following table summarizes the options you can use with quasi-Newton methods.
option name | default value | meaning
"StepMemory" | Automatic | the effective number of steps to "remember" in the Hessian approximation; can be a positive integer or Automatic
"StepControl" | "LineSearch" | how to control steps; can be "LineSearch" or None
Method options for Method->"QuasiNewton".
Polyline Reduction -- 3DSoftware.com
Shortest Distance to a Line Segment
In this section, we will cover Linear Algebra concepts.
We begin with a definition of vector. A vector, for this discussion, will be a list of numbers. The following are examples of vectors:
( 1, 2, 3 ) ( 3, 0, 4, 1 ) ( 2, 3 )
The first vector has 3 components. The second and third vectors have 4 and 2 components respectively. A vector's number of components represents the number of dimensions its vector space has. In this
article, we use vectors that have two components, representing points and lines in 2 dimensions. All of the examples in this article may be extended to 3 dimensions by simply using vectors with 3 components.
The length of a vector is called its magnitude or norm. This length is a scalar. (A scalar is a single number, not a vector). It can be calculated using the Pythagorean Theorem. For example, consider
a point P represented by the vector ( 2, 3 ):
The line from the Origin to P can be considered the hypotenuse of a right triangle, with the sides of the triangle being 2 and 3. According to the Pythagorean Theorem, the length of the hypotenuse
will equal the square root of the sum of the squares of the sides:
Distance 0,0 to P = sqrt( 2.0 * 2.0 + 3.0 * 3.0 ) = sqrt( 13.0 ) ≈ 3.6056
Now we'll define what a unit vector is. A unit vector is a vector with a length of 1. To convert any vector to a unit vector, simply divide each component of the vector with the vector length. This
is called normalizing a vector. The unit vector points in the same direction as the vector it represents, but has a length of 1 no matter what the original vector's length is. For example, the unit
vector of ( 2, 3 ) is approximately ( 0.5547, 0.83205 ).
Next we define the Inner Product, which is also known as the Dot Product (both terms are used interchangeably). The Inner Product is an operation that is performed on two vectors that have the same number of components (dimensions), and the result is a scalar (a single number, not a vector). This resulting number (the dot product) is the sum of the products of the corresponding components.
[ 5 ] For example, the dot product of ( 2, 3 ) and ( 5, 6 ) is:
( 2, 3 ) · ( 5, 6 ) = 2 × 5 + 3 × 6 = 28
Notice that the symbol for a dot product is a dot (·), not the multiplication sign (×).
It is convenient to use labels and symbols to refer to vectors. The vectors ( 2, 3 ) and ( 5, 6 ) can be called u and v respectively:
u = ( 2, 3 ) v = ( 5, 6 )
The length of a vector is denoted by its name (u or v in this example) surrounded by double vertical bars:
||u|| = sqrt( 13.0 ) = 3.6055513
||v|| = sqrt( 61.0 ) = 7.81
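The three operations defined above (length, unit vector, and dot product) translate directly into a few lines of Python; the function names are mine:

```python
import math

def norm(v):
    """Length (magnitude) of a vector, via the Pythagorean Theorem."""
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    """Unit vector: divide each component by the vector's length."""
    length = norm(v)
    return [c / length for c in v]

def dot(u, v):
    """Inner (dot) product: sum of products of matching components."""
    return sum(a * b for a, b in zip(u, v))

u = [2.0, 3.0]
v = [5.0, 6.0]
# norm(u) ~ 3.6055513, norm(v) ~ 7.8102497, dot(u, v) = 28
# normalize(u) ~ [0.5547002, 0.8320503]
```

These reproduce the worked values in the text: ||u|| = sqrt(13), ||v|| = sqrt(61), and u · v = 28.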
5. For definition of inner product, see linear algebra textbooks, such as:
Peter Shirley, Fundamentals of Computer Graphics (Natick, MA: A.K. Peters, 2002), Fig. 2.18,
p. 25.
Steven A. Leduc, CliffsNotes Linear Algebra (New York: Wiley, 1996), p. 27.
Seymour Lipschutz and Marc Lipson, Schaum's Outline of Linear Algebra 3rd Ed. (McGraw-Hill, 2001), p. 4.
Gareth Williams, Computational Linear Algebra with Models (Boston: Allyn and Bacon, 1978), p. 186.
Warrenville, IL Prealgebra Tutor
Find a Warrenville, IL Prealgebra Tutor
...I have also met with students in their homes and occasionally at a local coffee shop. Students will typically meet with me twice a week for 30 minutes to an hour each session. Some students
meet with me only before quizzes and chapter tests for some guided review.
13 Subjects: including prealgebra, geometry, biology, algebra 1
...I look forward to helping you succeed in mathematics. I have a teaching certificate in mathematics issued by the South Carolina State Department of Education. During my two and a half years of teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by the South Carolina Department of Education.
12 Subjects: including prealgebra, calculus, geometry, algebra 1
...The best writing is inspired writing! For younger students, I have experience with Step Up to Writing to help with writing structure. I enjoy conferencing with students to help them develop
their own sense of their writing and what strengths and needs they have.
25 Subjects: including prealgebra, English, reading, Spanish
...I am currently attending DePaul University to pursue my master's degree in applied statistics. I have tutored students of varying levels and ages for more than six years. While I specialize in
high school and college level mathematics, I have had success tutoring elementary and middle school students as well.
19 Subjects: including prealgebra, calculus, statistics, algebra 1
...I am willing to travel to meet wherever the student feels comfortable. I hope to hear from you to talk about how I can help with whatever needs you might have. Thank you for considering me! I have excelled in math classes throughout my school career.
5 Subjects: including prealgebra, statistics, algebra 1, probability
Patent US7706454 - Full-diversity, full-rate complex-field space-time coding for wireless communication
This application claims priority from U.S. Provisional Application Ser. No. 60/507,829, filed Oct. 1, 2003, the entire content of which is incorporated herein by reference.
This invention was made with Government support under University Account Number 522-6484, awarded by the Army Research Lab (ARL/CTA), and Contract No. DAAD19-01-2-011. The Government may have certain
rights in this invention.
The invention relates to communication systems and, more particularly, transmitters and receivers for use in wireless communication systems.
Rapid increase of cellular service subscribers and wireless applications has stimulated research efforts in developing wireless communication systems that support reliable high rate transmissions
over wireless channels. A major challenge in designing high-performance, high-rate systems is the mitigation of fading propagation effects within the prescribed bandwidth and power limitations. In
order to mitigate deleterious effects fading has on system performance, transmitters and receivers that exploit available diversity have been developed. To this end, multi-input multi-output (MIMO)
wireless links are particularly attractive when compared to single-input single-output wireless links. Existing MIMO designs utilizing multiple (N[t]) transmit antennas and multiple (N[r]) receive
antennas aim primarily at achieving either high performance or high rate.
Some MIMO systems achieve high performance by utilizing the available space-diversity. Space-time (ST) orthogonal designs (OD), linear constellation precoding (LCP) ST codes, and ST trellis codes are examples of performance-driven designs. ST-OD codes can achieve full diversity (FD), i.e., the product of transmit and receive antennas, with linear decoding complexity. ST-OD systems with (N[t], N[r])=(2, 1) antennas can achieve transmission rates up to one symbol per channel use (pcu). However, relative to full rate (FR) MIMO designs capable of N[t] symbols pcu, other ST-OD codes incur significant rate loss. For example, ST-OD systems with N[t]>2 have transmission rates less than 0.75 symbols pcu. ST-TC schemes can offer better transmission rates, but are complex to decode and lack closed-form construction, while their design complexity increases exponentially with N[t]. In addition, high-rate, high-performance ST-TC often require long block sizes, resulting in long decoding delays.
MIMO systems designed to achieve high rate exploit the capacity of MIMO fading channels. Bell Laboratories layered space-time architecture (BLAST)-type architectures and linear dispersion (LD) codes are examples of rate-driven designs. LD designs offer no diversity guarantees. LD designs in which diversity constraints are imposed require a search over a high-dimensional space that becomes prohibitively complex as N[t] and the constellation size increase. However, layered ST multiplexers have complementary strengths and limitations. For example, vertical-BLAST (V-BLAST) offers FR, i.e., N[t] symbols pcu, but relies on single-input single-output (SISO) error control coding per layer to offer performance guarantees. On the other hand, Diagonal-BLAST (D-BLAST) systems utilize space diversity, but their rate improvements come at the price of increasing decoding delays. Nevertheless, V-BLAST and D-BLAST can afford reasonable-complexity decoding and facilitate SISO codes in
MIMO systems. However, the rate efficiency of V-BLAST and D-BLAST schemes is offset by the bandwidth consuming SISO codes required to gain diversity. In other words, both high performance and high
rate ST codes do not take full advantage of the diversity and capacity provided by MIMO channels. Furthermore, conventional schemes are not flexible to strike desirable tradeoffs among performance,
rate, and complexity.
In general, the invention is directed to techniques that allow full diversity and full rate (FDFR) wireless communication with any number of transmit and receive antennas through flat-fading channels
and frequency- or time-selective channels. In particular, techniques are described that utilize layer specific linear complex-field (LCF) coding with a circular form of layered space-time (ST)
multiplexing to achieve FDFR wireless communications with any number of transmit and receive antennas through flat-fading channels and frequency- or time-selective channels.
Unlike conventional ST coding techniques that achieve only full rate or full diversity, the techniques described herein utilize a set of layer specific LCF encoders and a ST mapper to generate a ST
coded signal by performing layer-specific LCF coding concatenated with a circular form of layered ST multiplexing. In some embodiments, the set of LCF encoders includes N[t] encoders, each of the layer-specific LCF encoders having size N[t]. Consequently, the size of the ST mapper is N[t]^2. Additionally, the described techniques provide flexibility for desirable tradeoffs among performance,
rate, and complexity.
More specifically, in accordance with an embodiment of the invention, a set of LCF encoders encode a block of information bearing symbols to form a respective set of layers and a ST mapper generates
a ST coded signal by mapping the set of layers in a row circular manner to form an array. The block of information bearing symbols includes N[t] ^2 symbols and comprises N[t ]sub-blocks, each
sub-block including N[t ]information bearing symbols. Each of the LCF encoders codes the information bearing symbols of a respective sub-block to produce a corresponding symbol layer. Each layer is
circularly mapped by the ST mapper such that the encoded information bearing symbols of each layer are orthogonal in space and time. The ST mapper reads the array out in a column-wise manner and a
modulator produces a multi-carrier output waveform in accordance with the ST coded signal for transmission through a frequency-selective wireless channel.
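As an illustration of the row-circular mapping just described, here is a small Python sketch. It shows the layout only; the function name and placeholder symbols are mine, and no actual LCF encoding is performed.

```python
def circular_st_map(layers):
    """Arrange Nt encoded layers into an Nt x Nt space-time array X,
    with one row per transmit antenna and one column per channel use.

    layers[k] is the length-Nt output of the k-th LCF encoder.  The
    m-th symbol of layer k goes to row (k + m) mod Nt, column m, so
    every layer visits each antenna and each time slot exactly once,
    which makes the layers orthogonal in space and time."""
    nt = len(layers)
    X = [[None] * nt for _ in range(nt)]
    for k, layer in enumerate(layers):
        for m, symbol in enumerate(layer):
            X[(k + m) % nt][m] = symbol
    return X
```

With N[t]=3 and symbolic layers (a0,a1,a2), (b0,b1,b2), (c0,c1,c2), the columns read out as (a0,b0,c0), (c1,a1,b1), (b2,c2,a2): each column carries one symbol from every layer, and each layer appears once on every antenna.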
In one embodiment, the invention provides a wireless communication device comprising a set of linear complex-field (LCF) encoders, a space-time (ST) mapper, and a modulator. The set of linear
complex-field encoders encode a block of information bearing symbols to form a respective set of symbol layers. The ST mapper generates a ST coded signal by mapping the set of symbol layers in a row
circular manner to form an array where the encoded information bearing symbols of each layer are orthogonal in space and time. The modulator produces a multi-carrier waveform in accordance with the
ST coded signal for transmission through a wireless channel.
In another embodiment, the invention is directed to a method comprising linearly encoding a block of information bearing symbols with a set of a complex-field codes selected from a constellation to
produce a respective set of symbol layers and forming an array from the set of symbol layers by mapping the set of symbol layers in a row circular manner such that the encoded information bearing
symbols of each of the symbol layers are orthogonal in space and time. The method may further comprise generating a space-time (ST) coded signal from the array of symbol layers, modulating the ST
coded signal to produce a multi-carrier waveform, and transmitting the multi-carrier waveform through a wireless channel.
In another embodiment, the invention is directed to a computer-readable medium containing instructions. The instructions cause a programmable processor to linearly encode a block of information
bearing symbols with a set of a complex-field codes selected from a constellation to produce a respective set of symbol layers and form an array from the set of symbol layers by mapping the set of
symbol layers in a row circular manner so that the encoded information bearing symbols of each layer are orthogonal in space and time. The instruction may further cause the programmable processor to
generate a space-time (ST) coded signal by reading out the array in a column-wise manner, modulate the ST coded signal to produce a multi-carrier waveform, and transmit the multi-carrier waveform
through a wireless channel.
The invention may be capable of providing one or more advantages. For example, the invention provides techniques for achieving FDFR wireless communication over flat-fading and frequency- or
time-selective channels with any number of transmit and receive antennas. Furthermore, unlike conventional ST coding techniques that can also achieve FD but result in considerable mutual information
loss especially when N[r]>1, the described invention incurs no mutual information loss regardless of the number of receive antennas. In other words, if a perfect code is applied at the transmitter,
the capacity of the wireless channel is achieved.
Moreover, in systems with large antenna configurations, the described invention may trade off performance gains in order to reduce decoding complexity. When the number of antennas is large, the diversity order, e.g., N[t]N[r], is large. At the same time, high performance and high rate result in high decoding complexity. For example, reducing the size of each of the LCF encoders results in
reduced diversity but also reduces decoding block size while still transmitting at full rate. Full rate transmission can also be achieved while combining several layers to form one layer, provided
that the constellation size is also increased. In other words, if layers are eliminated, then full diversity is maintained at reduced decoding block length.
The described invention also provides the flexibility to trade off rate in order to reduce complexity. For example, the invention may eliminate layers of the LCF encoders and employ maximum likelihood (ML) or near-ML decoding to collect full diversity with a reduced decoding block length and transmission rate. Additionally, if the maximum affordable decoding block length N<N[t]^2, then full diversity and full rate cannot be achieved, but diversity can be traded for rate by adjusting the size of the LCF encoders.
The described invention may also provide an advantage in wireless communication systems having a large number of transmit antennas. If the number of transmit antennas is large, then the block length, i.e., N[t]^2, is also large within the described system. As a result, the affordable decoding complexity may not be enough to achieve FDFR, and the described techniques for diversity-rate tradeoffs are well motivated. For example, if the maximum affordable decoding block length N<N[t]^2, the described invention may be used to adjust the size of each of the LCF encoders in order to trade off diversity for rate.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent
from the description and drawings, and from the claims.
FIG. 1 is a block diagram illustrating a multiple-input multiple-output (MIMO) wireless communication system.
FIG. 2 is a block diagram illustrating an example MIMO wireless communication system in which a transmitter communicates to a receiver through a flat-fading channel with any number of transmit and
receive antennas.
FIG. 3 illustrates an example arrangement of linear complex-field (LCF) coded information bearing symbols circularly mapped in an array in accordance with the transmitter in FIG. 2.
FIG. 4 illustrates an example arrangement of LCF coded information bearing symbols arranged in an array for a block length N<N[t] ^2.
FIG. 5 illustrates an alternative example arrangement of LCF coded information bearing symbols arranged in an array for a block length N<N[t] ^2.
FIG. 6 is a flowchart illustrating an example mode of operation of the MIMO wireless communication system in FIG. 2.
FIG. 7 is a block diagram illustrating an example MIMO wireless communication system in which a transmitter communicates to a receiver through a frequency- or time-selective channel with any number
of transmit and receive antennas.
FIG. 8 illustrates an example arrangement of LCF coded information bearing symbols mapped in an array in accordance with the transmitter in FIG. 7.
FIG. 9 illustrates an alternative arrangement of LCF coded information bearing symbols in an array in accordance with the transmitter in FIG. 7.
FIG. 10 is a flowchart illustrating an example mode of operation of the wireless communication system in FIG. 6.
FIGS. 11-16 are graphs illustrating results of simulations and comparisons that validate analyses and designs of the described wireless communication systems.
Throughout the Detailed Description, upper bold face letters represent matrices and lower bold face letters represent column vectors; (●)^T and (●)^H represent transpose and Hermitian transpose, respectively; diag(d[1], . . . , d[P]) represents a diagonal matrix with diagonal entries d[1], . . . , d[P]; ℤ[j] represents the algebraic integer ring with elements p+jq, where p, q ∈ ℤ; ℚ(j) is the smallest subfield of the set of complex numbers ℂ including both ℚ and j; and ℚ(j)(α) represents the smallest subfield of ℂ including both ℚ(j) and α, where α is algebraic over ℚ(j).
FIG. 1 is a block diagram illustrating multiple-input multiple-output (MIMO) wireless communication system 2. Communication system 2 includes a transmitter 4, which communicates with a receiver 6 by
transmitting a space-time (ST) coded signal through a plurality of channels 8A-8N (hereinafter, “channels 8”). In general, ST coding techniques are applied in transmitter 4 to enable information
bearing symbols to be transmitted via multiple antennas over channels 8 with full diversity and full rate (FDFR). Transmitter 4 may include any number of transmit antennas N[t], and receiver 6 may
include any number of receive antennas N[r]. Each of the transmit antennas corresponds to one of the channels 8 to transmit a ST coded signal from the transmit antenna to the receive antennas.
Channels 8 may be flat fading, frequency-selective or time-selective.
Transmitter 4 includes a set of linear complex-field (LCF) encoders and a ST mapper that may be used to implement the ST coding techniques. In embodiments that include frequency- and/or
time-selective channels, transmitter 4 may also include an inverse fast Fourier transform (IFFT) unit coupled to a module that inserts a cyclic prefix (CP) in order to implement orthogonal
frequency-division multiplexing (OFDM). In any case, the set of LCF encoders utilize layer specific LCF coding and the ST mapper performs a circular form of layered ST multiplexing to generate a ST
coded signal that achieves FDFR over wireless channels 8. In particular, each of the LCF encoders encodes a respective one of the sub-blocks of a block of information bearing symbols to form a
corresponding layer. The ST mapper generates the ST coded signal by mapping each of the layers in a row circular manner to form an array. The array is read out in a column-wise manner and transmitted
through a corresponding one of the transmit-antennas. The ST coding techniques described herein provide flexibility to select tradeoffs among performance, rate, and complexity. For example, in
wireless communication systems with large antennae configurations, the described ST coding techniques may tradeoff performance gains in order to reduce decoding complexity.
Receiver 6 includes a LCF-ST decoder and, in frequency- and/or time-selective embodiments, a CP remover coupled to a fast Fourier transform (FFT) unit to recover the ST coded signal. The LCF-ST
decoder may implement any of a plurality of decoding techniques. Maximum likelihood (ML) decoding may be employed to detect the ST coded signal from the received signal regardless of the number of receive antennas N[r]. However, ML decoding may have high complexity when the number of transmit antennas N[t] is large. In general, the decoding complexity depends on N[t]^2. Receiver 6 may employ sphere decoding (SD) or semi-definite programming algorithms to reduce decoding complexity while achieving performance approximately equal to ML decoding. To further reduce complexity, receiver 6 may employ nulling-canceling-based or linear decoding at the expense of substantially lower decoding performance. Additionally, nulling-canceling decoding requires N[r]≧N[t].
However, the described ST coding techniques allow desirable tradeoffs among performance, rate and complexity. For example, when the number of antennas is large, the diversity order, e.g. N[t]N[r], is
large. At the same time, high performance and high rate result in high decoding complexity. Therefore, system 2 may trade off performance gains in order to reduce decoding complexity. Additionally, system 2 may trade off rate in order to reduce decoding complexity. Moreover, if the maximum affordable decoding block length is less than N[t]^2, the block length that achieves FDFR, system 2 may trade off diversity for increased rate.
The ST coding techniques described herein apply to uplink and downlink transmissions, i.e., transmissions from a base station to a mobile device and vice versa. Transmitter 4 and receiver 6 may be
any device configured to communicate using wireless transmissions including a cellular distribution station, a hub for a wireless local area network, a cellular phone, a laptop or handheld computing
device, a personal digital assistant (PDA), a Bluetooth™ enabled device and other such devices.
FIG. 2 is a block diagram illustrating in further detail MIMO wireless communication system 2 (FIG. 1) in which transmitter 4A transmits ST coded signals to receiver 6A through flat-fading channels 8. In the illustrated embodiment, transmitter 4A includes a LCF encoder 12 that applies a set of layer-specific LCF encoders, and a ST mapper 14 to generate a stream of ST coded information bearing symbols. For each of the N[t] antennas 20A-20N, respectively, transmitter 4A includes a parallel-to-serial converter 16A-16N and a corresponding modulator 18A-18N for transmitting the ST coded signal through channels 8. The ST coded signal achieves FDFR for any number of transmit and receive antennas (N[t], N[r]) over flat-fading channels 8.
Receiver 6A includes LCF-ST decoder 26 to recover the ST coded signal received by N[r] receive antennas 22A-22N, respectively. For each of the N[r] receive antennas 22A-22N, receiver 6A includes a serial-to-parallel converter 24A-24N, respectively, for parsing the received waveform into vectors that can be decoded using LCF-ST decoder 26. LCF-ST decoder 26 may employ ML decoding or, alternatively, SD or semi-definite programming algorithms to reduce the decoding complexity. In other embodiments, LCF-ST decoder 26 may employ nulling-cancelling decoding to further reduce decoding complexity.
MIMO wireless communication system 2 achieves FDFR communications over flat-fading channels 8 for any number of transmit and receive antennas. Additionally, system 2 allows flexibility to select
tradeoffs among performance, rate and complexity. When the number of antennas is large, the diversity order N[t]N[r ]is large. However, high performance and high rate require high decoding
complexity. Herein, decoding complexity is quantified by the block length of the ST coded signal that is to be decoded. Therefore, with large antenna configurations, it may be advantageous to
tradeoff performance gains in order to reduce decoding complexity.
In general, transmitter 4A transmits the stream of information bearing symbols {s(i)} as an output waveform through channels 8. The information bearing symbols are drawn from a finite alphabet A[s]. Serial-to-parallel (S/P) converter 10 parses the information bearing symbols into blocks of size N×1, s:=[s(1), . . . , s(N)]^T, where s(n) represents the nth information bearing symbol. Each block of information bearing symbols s is coded by LCF encoder 12 to form an N×1 vector u. In particular, LCF encoder 12 includes a set of LCF encoders that encode the block of information bearing symbols s to form a respective set of symbol layers u[g]. ST mapper 14 maps the set of layers in a row circular manner to form an array, with each layer circularly mapped such that the encoded information bearing symbols of each layer are orthogonal in space and time. The array is read out in a column-wise manner to form N[t] blocks {c[μ]}, μ=1, . . . , N[t], each of size P×1. Parallel-to-serial (P/S) converters 16A-16N parse the corresponding N[t] blocks into serial streams of encoded information bearing symbols, which are modulated by modulators 18A-18N and transmitted through transmit antennas 20A-20N, respectively. Receiver 6A receives the waveforms via receive antennas 22A-22N, the number of which may or may not equal the number of transmit antennas, and performs receive-filtering and sampling. Corresponding S/P converters 24A-24N parse the filtered samples of the received waveform into blocks {y[v]}, v=1, . . . , N[r], that LCF-ST decoder 26 decodes to yield an estimate ŝ of the information block s.
Channels 8 are flat-fading and, thus, remain invariant over the observation interval of P time slots. Letting h[ν,μ](n) represent the channel associated with the μth transmit antenna and the νth receive antenna during the nth time slot, h[ν,μ](n)=h[ν,μ]. Consequently, the nth sample at the output of the S/P converter corresponding to the νth receive antenna can be expressed according to equation (1), where w[ν](n) represents the complex additive white Gaussian noise (AWGN) at the νth receive antenna with mean zero and variance N[0]/2. The vector-matrix counterpart of equation (1) is given according to equation (2) by stacking the received samples from the N[r] receive antennas 22A-22N.
$y_v(n) = \sum_{\mu=1}^{N_t} h_{v,\mu}\, c_\mu(n) + w_v(n), \quad \forall n \in [1, P] \qquad (1)$
$y(n) = H\, c(n) + w(n), \quad \forall n \in [1, P] \qquad (2)$
In equation (2), the (v, μ)th entry of H represents h[ν,μ] and c(n):=[c[1](n), . . . , c[N] [ t ](n)]^T. Defining Y:=[y(1) . . . y(P)] and C:=[c(1) . . . c(P)] allows equation (2) to be expressed
according to equation (3).
Y=HC+W (3)
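As a numeric illustration of equation (3), the following sketch (Python with NumPy; the dimensions and noise level are illustrative choices, not values mandated by the description) draws an i.i.d. complex Gaussian H and forms Y=HC+W:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, P = 2, 3, 2  # illustrative: Nt transmit antennas, Nr receive antennas, P time slots

# Flat-fading channel matrix H (Nr x Nt) and an arbitrary ST code matrix C (Nt x P)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
C = rng.standard_normal((Nt, P)) + 1j * rng.standard_normal((Nt, P))

# Complex AWGN with per-component variance N0/2, as in equation (1)
N0 = 0.1
W = np.sqrt(N0 / 2) * (rng.standard_normal((Nr, P)) + 1j * rng.standard_normal((Nr, P)))

Y = H @ C + W  # eq. (3): one received sample per receive antenna per time slot
assert Y.shape == (Nr, P)
```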
For simplicity, it is assumed that the N[t]N[r] flat-fading channels 8 are complex Gaussian independent and identically distributed (i.i.d.). Importantly, however, it can be shown that the ST coding techniques described herein achieve FDFR, and that the diversity-based analysis results apply to all practical fading models including correlated and non-Gaussian fading channel models. Using average pairwise
error probability analysis, it follows that the maximum diversity order provided by the N[t]×N[r ]MIMO channels 8 is given according to equation (4).
$G_d^{\max} = N_t N_r \qquad (4)$
The transmission rate is given according to equation (5) since it takes P time slots to transmit N information bearing symbols, i.e. the size of s. However, since it is possible to transmit up to one
symbol per antenna per time slot, i.e. symbol period, the maximum possible transmission rate with N[t ]antennas is given according to equation (6).
$R = \frac{N}{P}\ \text{symbols per channel use (pcu)} \qquad (5) \qquad R^{\max} = N_t\ \text{symbols pcu} \qquad (6)$
Parameters G[d] ^max and R^max quantify the full diversity and full rate, respectively. It is important to note that the model given in equation (3) is general in that it subsumes ST orthogonal
designs (ST-OD), linear constellation precoding (LCP) ST designs, vertical-Bell Laboratories layered ST architecture (V-BLAST), and diagonal-BLAST (D-BLAST). These ST schemes offer different rates
and come with different diversity orders. For example, V-BLAST achieves full rate R^max but not full diversity while LCP ST codes achieve full diversity G[d] ^max at rate R=1 symbol pcu for any N[t].
The ST coding techniques described herein achieve FDFR for any number of antennas (N[t], N[r]). Transmitter 4A and receiver 6A are judiciously designed in the following analysis to achieve FDFR for
any number of transmit and receive antennas.
For simplicity, the block length N is selected as N=N[t]^2 and each block of information bearing symbols s comprises N[g]=N[t] sub-blocks. Each of the sub-blocks includes N[t] information bearing symbols. Consequently, the number of time slots is equal to the number of transmit antennas, i.e. P=N[t]. Accordingly, s is divided into sub-blocks {s[g]}, g=1, . . . , N[t], and u is divided into layers {u[g]}, g=1, . . . , N[t], where u[g] of the gth layer is given according to equation (7).
$u_g = \Theta_g\, s_g \qquad (7)$
LCF encoder 12 enables full diversity and includes a set of LCF encoders Θ[g], each having entries drawn from the complex field and implementing layer-specific LCF coding. Matrix Θ[g] is given according to equation (8), where Θ is selected from the class of unitary Vandermonde matrices and the scalar β is selected as described in the following analysis.
$\Theta_g = \beta^{g-1}\,\Theta, \quad \forall g \in [1, N_t] \qquad (8)$
ST mapper 14 performs a circular form of layered ST multiplexing to construct an array according to equation (9), where u[g](n) represents the nth element of the gth layer u[g]. In particular, ST mapper 14 circularly maps each layer such that the encoded information bearing symbols of each layer are orthogonal in space and time. The array given in equation (9) is read out in a column-wise manner. Importantly, transmitter 4A transmits one encoded information bearing symbol per antenna per time slot and, thus, achieves full rate R^max=N[t]. At the same time, the rate-efficient fading-resilient precoding employed by LCF encoder 12 enables full diversity G[d]^max=N[t]N[r] for any number of transmit and receive antennas.
$C = \begin{bmatrix} u_1(1) & u_{N_t}(2) & \cdots & u_2(N_t) \\ u_2(1) & u_1(2) & \cdots & u_3(N_t) \\ \vdots & \vdots & & \vdots \\ u_{N_t}(1) & u_{N_t-1}(2) & \cdots & u_1(N_t) \end{bmatrix} \quad (\text{time} \rightarrow,\ \text{space} \downarrow) \qquad (9)$
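The structure of equation (9) can be inspected by tracking which layer occupies each array position (Python with NumPy; N[t]=4 is an illustrative choice). Using zero-based indices, layer g sits at the positions C[μ, n] with (μ−n) mod N[t] = g, i.e. on a wrapped diagonal, so each layer visits every antenna and every time slot exactly once:

```python
import numpy as np

Nt = 4  # illustrative number of transmit antennas

# layer[mu, n] = index of the layer mapped to antenna mu at time slot n (zero-based)
layer = (np.arange(Nt)[:, None] - np.arange(Nt)[None, :]) % Nt

for g in range(Nt):
    # Each layer appears exactly once in every column (time slot) ...
    assert (np.sum(layer == g, axis=0) == 1).all()
    # ... and exactly once in every row (antenna): orthogonal in space and time
    assert (np.sum(layer == g, axis=1) == 1).all()
```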
With LCF encoder 12 and ST mapper 14 given according to equations (8) and (9), respectively, the input-output relationship of equation (3) can be expressed according to equation (10) after stacking the receive vectors y(n) into one vector. Furthermore, the nth column c(n) of the array C can be expressed according to equation (11), where the permutation matrix P[n] and the diagonal matrix D[β] are defined according to equations (12) and (13), respectively, and θ[n]^T represents the nth row of Θ.
$y = (I_{N_t} \otimes H) \begin{bmatrix} c(1) \\ \vdots \\ c(N_t) \end{bmatrix} + w \qquad (10)$
$c(n) = \left[ (P_n D_\beta) \otimes \theta_n^T \right] s \qquad (11)$
$P_n := \begin{bmatrix} 0 & I_{n-1} \\ I_{N_t-n+1} & 0 \end{bmatrix} \qquad (12)$
$D_\beta := \mathrm{diag}\left[ 1, \beta, \ldots, \beta^{N_t-1} \right] \qquad (13)$
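Equations (11)-(13) can be verified against the circular array of equation (9) for a small example. In the sketch below (Python with NumPy), the (Θ, β) values are illustrative assumptions built in the style of equation (19), not the patent's mandated design; the check is that the Kronecker expression reproduces each column of C:

```python
import numpy as np

Nt = 3
rng = np.random.default_rng(0)

# Illustrative unitary Vandermonde Theta (style of eq. (19)) and an arbitrary unit-modulus beta
alpha = np.exp(1j * np.pi / (2 * Nt))
F = np.exp(-2j * np.pi * np.outer(np.arange(Nt), np.arange(Nt)) / Nt)
Theta = (F.conj().T / np.sqrt(Nt)) @ np.diag(alpha ** np.arange(Nt))
beta = np.exp(1j * np.pi / 7)

s = rng.standard_normal(Nt * Nt) + 1j * rng.standard_normal(Nt * Nt)
sg = s.reshape(Nt, Nt)                                           # sub-blocks s_g
u = np.array([(beta ** g) * Theta @ sg[g] for g in range(Nt)])   # eqs. (7)-(8), g zero-based

# Circular ST mapping of eq. (9): C[mu, n] = u_g(n) with g = (mu - n) mod Nt
C = np.array([[u[(mu - n) % Nt][n] for n in range(Nt)] for mu in range(Nt)])

# Column n via eq. (11): c(n) = [(P_n D_beta) kron theta_n^T] s
D_beta = np.diag(beta ** np.arange(Nt))
for n in range(Nt):
    P_n = np.roll(np.eye(Nt), n, axis=0)  # zero-based form of the permutation in eq. (12)
    c_n = np.kron(P_n @ D_beta, Theta[n].reshape(1, -1)) @ s
    assert np.allclose(c_n, C[:, n])
```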
By defining ℋ:=I[N[t]]⊗H and the unitary matrix Φ given according to equation (14), equation (10) can be expressed according to equation (15).
$\Phi := \begin{bmatrix} (P_1 D_\beta) \otimes \theta_1^T \\ \vdots \\ (P_{N_t} D_\beta) \otimes \theta_{N_t}^T \end{bmatrix} \qquad (14) \qquad y = \mathcal{H}\,\Phi\, s + w \qquad (15)$
Maximum likelihood decoding can be employed to detect s from y regardless of N[r], but possibly with high complexity because decoding complexity is dependent on the block length N=N[t]^2. SD or semi-definite programming algorithms may also be used to achieve near-optimal performance. The SD algorithm is known to have average complexity O(N^3) irrespective of the alphabet size, with N=N[t]^2 here. When N[t] is large, the decoding complexity is therefore high even for near-ML decoders. To further reduce decoding complexity, nulling-cancelling based or linear decoding may be used. However, such decoders require N[r]≥N[t].
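The complexity discussion above can be made concrete with a brute-force ML detector. The sketch below (Python with NumPy; the (α, β) values and the QPSK alphabet are illustrative assumptions, not the patent's mandated design) encodes a block with the circular mapping and recovers it by searching all |A[s]|^N candidates, which is exactly the exponential cost that motivates SD and nulling-cancelling alternatives. Noise is omitted for clarity:

```python
import numpy as np
from itertools import product

Nt, Nr = 2, 2
rng = np.random.default_rng(1)

# Illustrative LCF parameters (style of Design A; assumed values for this sketch)
alpha = np.exp(1j * np.pi / (2 * Nt))
F = np.exp(-2j * np.pi * np.outer(np.arange(Nt), np.arange(Nt)) / Nt)
Theta = (F.conj().T / np.sqrt(Nt)) @ np.diag(alpha ** np.arange(Nt))
beta = np.exp(1j * np.pi / (4 * Nt**2))

def encode(s):
    """Map N = Nt^2 symbols to the Nt x Nt ST array of eq. (9)."""
    sg = s.reshape(Nt, Nt)
    u = np.array([(beta ** g) * Theta @ sg[g] for g in range(Nt)])
    return np.array([[u[(mu - n) % Nt][n] for n in range(Nt)] for mu in range(Nt)])

alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # QPSK
s = rng.choice(alphabet, Nt * Nt)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
Y = H @ encode(s)  # noiseless received block; add AWGN to model eq. (3)

# Exhaustive ML search over all |A|^(Nt^2) candidate blocks
best = min(product(alphabet, repeat=Nt * Nt),
           key=lambda cand: np.linalg.norm(Y - H @ encode(np.array(cand))))
assert np.allclose(np.array(best), s)
```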
In summary, given a number of transmit and receive antennas N[t] and N[r], respectively, a block of information bearing symbols s with length N=N[t]^2 is encoded to form a vector u. LCF encoder 12 includes a set of LCF encoders that encode a corresponding sub-block s[g] of s to form a respective layer u[g] of u. ST mapper 14 circularly maps each of the layers to the array given according to equation (9), and each column of the array is transmitted via the N[t] antennas through channels 8. Receiver 6A decodes s using y given in equation (10).
In the following analysis transmitter 4A is examined with respect to performance and rate for FDFR transmissions. Let T represent channel coherence time and assume that T≧N[t]. In particular,
Proposition 1 establishes design criteria that enable FDFR for LCF encoder 12 and ST mapper 14 given in equations (8) and (9), respectively.
Proposition 1 For a block of information bearing symbols s carved from ℤ[j], with the ST mapper given in equation (9), there exists at least one pair (Θ, β) in equations (8) and (9) that enables full diversity N[t]N[r] for the ST coded signal given in equation (3) at full rate N[t] symbols pcu.
The proof of proposition 1 is given in the following analysis. Since N=N[t] ^2 and P=N[t], it can be verified that the transmission rate is R=N/N[t]=N[t ]symbols pcu, which is the full rate given in
equation (6).
To prove the full diversity claim, it suffices to show that ∀s≠s′, there exists at least one pair (Θ,β) such that det(C−C′)≠0.
For simplicity, define C̃:=C−C′, s̃[g]:=s[g]−s[g]′, and ũ[g]:=u[g]−u[g]′. The determinant of C̃ can then be expressed according to equation (16), where (i[1], . . . , i[N[t]]) is a permutation of the sequence (1, . . . , N[t]) and τ(i[1], . . . , i[N[t]]) is the number of inversions of the sequence.
$\det(\tilde{C}) = \sum_{(i_1, \ldots, i_{N_t})} (-1)^{\tau(i_1, \ldots, i_{N_t})} \prod_{n=1}^{N_t} \tilde{c}_{i_n}(n) \qquad (16)$
Comparing c̃[i_n](n) with ũ[g_n](n): when c̃[i_n](n)=ũ[g_n](n), the layer index g[n] is given according to equation (17).
$g_n = \begin{cases} N_t + i_n - n + 1 & \text{if } i_n - n + 1 \le 0 \\ i_n - n + 1 & \text{if } i_n - n + 1 > 0 \end{cases}, \quad \forall n \in [1, N_t] \qquad (17)$
Thus, g[n]=i[n]−n+1 or g[n]=N[t]+i[n]−n+1, from which it follows that
$\sum_{n=1}^{N_t} (g_n - 1) = m N_t, \quad m \in [0, N_t - 1],$
where m depends on the sequence (i[1], . . . , i[N[t]]). Therefore, it can be deduced that for each permutation (i[1], . . . , i[N[t]]),
$\prod_{n=1}^{N_t} \tilde{c}_{i_n}(n) = \beta^{m N_t} \prod_{n=1}^{N_t} \theta_n^T \tilde{s}_{g_n}.$
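The layer-index bookkeeping of equation (17) and the multiple-of-N[t] property can be checked exhaustively for a small N[t] (Python; the compact modular expression for g[n] below is an equivalent rewrite of the two-case formula):

```python
from itertools import permutations

Nt = 3  # illustrative; small enough to enumerate all permutations

# eq. (17), 1-indexed: g = i - n + 1 if positive, else Nt + i - n + 1
g = lambda i, n: (i - n) % Nt + 1

for perm in permutations(range(1, Nt + 1)):
    total = sum(g(i, n) - 1 for n, i in enumerate(perm, start=1))
    # sum of (g_n - 1) over any permutation is m*Nt with m in [0, Nt-1]
    assert total % Nt == 0
    assert 0 <= total // Nt <= Nt - 1
```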
Using the structure given in equation (9), it follows that det(C̃) is a function of Θ and, at the same time, a polynomial in β^(N[t]) with degree N[t]−1. Before proving the existence of a pair (Θ,β) that enables FDFR, the result given in Lemma 1 is needed.
Lemma 1 If the constellation of s is carved from the ring of Gaussian integers ℤ[j], then there exists a matrix Θ which guarantees that ũ[g]=Θs̃[g] has no zero entry when s̃[g]≠0.
Based on Lemma 1, there always exists Θ such that Θs̃[g] has no zero entry when s̃[g]≠0. Now it must be proved that when s̃≠0, det(C̃) is not a zero polynomial of β^(N[t]); in other words, that not all the coefficients of det(C̃) equal zero simultaneously when s̃≠0. Suppose g is the smallest index of the layers for which s̃[g]≠0, i.e., s̃[g′]=0 for g′<g. Based on equation (17), det(C̃) is given according to equation (18), where (*) denotes the other terms from equation (16).
$\det(\tilde{C}) = (-1)^{\tau(g, \ldots, N_t, 1, \ldots, g-1)}\, \beta^{N_t (g-1)} \prod_{n=1}^{N_t} \theta_n^T \tilde{s}_g + \beta^{N_t g}\,(*) \qquad (18)$
Since s̃[g]≠0, it follows from the design of Θ that
$\prod_{n=1}^{N_t} \theta_n^T \tilde{s}_g \ne 0.$
Hence, det(C̃) cannot be a zero polynomial for any error pattern s≠s′. Furthermore, if the generator of Θ is α, then the entries of Θs̃[g] belong to ℚ(j)(e^(j2π/N[t]))(α). Therefore, det(C̃) is a polynomial of β^(N[t]) with coefficients belonging to ℚ(j)(e^(j2π/N[t]))(α). Similarly, there always exists β^(N[t]) which has minimum polynomial in ℚ(j)(e^(j2π/N[t]))(α) with degree greater than or equal to N[t]. Thus, it has been proved that there exists at least one (Θ,β) pair for which det(C̃)≠0, ∀s̃≠0.
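The full-diversity claim can be spot-checked numerically for a small configuration. The sketch below (Python with NumPy; the (α, β) pair is an assumed example in the spirit of Design A, not necessarily the patent's exact choice) enumerates every QPSK error pattern for N[t]=2 and verifies that det(C̃) never vanishes:

```python
import numpy as np
from itertools import product

Nt = 2
alpha = np.exp(1j * np.pi / (2 * Nt))
F = np.exp(-2j * np.pi * np.outer(np.arange(Nt), np.arange(Nt)) / Nt)
Theta = (F.conj().T / np.sqrt(Nt)) @ np.diag(alpha ** np.arange(Nt))
beta = np.exp(1j * np.pi / (4 * Nt**2))  # beta^Nt = e^{j*pi/8}: assumed design value

def st_array(s):
    """Nt x Nt circularly mapped array C of eq. (9) for a length-Nt^2 block s."""
    sg = s.reshape(Nt, Nt)
    u = np.array([(beta ** g) * Theta @ sg[g] for g in range(Nt)])
    return np.array([[u[(mu - n) % Nt][n] for n in range(Nt)] for mu in range(Nt)])

# QPSK error symbols s - s' lie in {0, +/-2, +/-2j, +/-2 +/- 2j}
diffs = [a + 1j * b for a in (-2, 0, 2) for b in (-2, 0, 2)]
min_det = min(abs(np.linalg.det(st_array(np.array(e))))
              for e in product(diffs, repeat=Nt * Nt) if any(e))
assert min_det > 1e-9  # det(C - C') != 0 for every nonzero error pattern
```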
The proof of Proposition 1 reveals that selecting Θ and β is critical in enabling FDFR transmissions in transmitter 4A. Intuitively, Θ enables full diversity per layer while β fully diversifies
transmissions across layers.
Relying on the algebraic number theoretic tools used to prove Proposition 1, systematic design methods for selecting (Θ,β) are provided in the following analysis. First, the unitary Vandermonde matrix Θ is given according to equation (19), where F[N[t]] is the N[t]×N[t] fast Fourier transform (FFT) matrix with (m+1, n+1)st entry e^(−j2πmn/N[t]).
$\Theta = \frac{1}{\sqrt{N_t}}\, F_{N_t}^{H}\, \mathrm{diag}\left[ 1, \alpha, \ldots, \alpha^{N_t-1} \right] \qquad (19)$
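Equation (19) can be checked directly: the sketch below builds Θ for an example α (an assumption for illustration) and verifies that it is unitary and Vandermonde with row generators x[m]=αe^(j2πm/N[t]):

```python
import numpy as np

Nt = 4
alpha = np.exp(1j * np.pi / (2 * Nt))  # example alpha in the style of Design A

# FFT matrix with (m+1, n+1)st entry e^{-j 2 pi m n / Nt}, then eq. (19)
F = np.exp(-2j * np.pi * np.outer(np.arange(Nt), np.arange(Nt)) / Nt)
Theta = (F.conj().T / np.sqrt(Nt)) @ np.diag(alpha ** np.arange(Nt))

# Unitary: Theta^H Theta = I
assert np.allclose(Theta.conj().T @ Theta, np.eye(Nt))

# Vandermonde: row m equals [1, x_m, x_m^2, ...] / sqrt(Nt) with x_m = alpha * e^{j 2 pi m / Nt}
x = alpha * np.exp(2j * np.pi * np.arange(Nt) / Nt)
V = np.vander(x, Nt, increasing=True) / np.sqrt(Nt)
assert np.allclose(Theta, V)
```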
Note that Θ in equation (19) is parameterized by the single parameter α. Together with the scalar β in equation (8), the ensuing design methods, Design A, Design B, and Design C, aim at selecting (α, β) pairs that lead to encoders Θ[g] for which C in equation (9) offers FDFR.
Design A selects α such that the minimum polynomial of α over the field ℚ(j) has degree greater than or equal to N[t]. Given α, β^(N[t]) is then selected such that the minimum polynomial of β^(N[t]) in the field ℚ(j)(e^(j2π/N[t]))(α) has degree greater than or equal to N[t]. For example, when N[t]=2^k, k∈ℤ+, select α=e^(jπ/(2N[t])) and β^(N[t])=e^(jπ/(4N[t])^2). Analogous selections of α and β^(N[t]) satisfying the same minimum polynomial conditions can be made when N[t]=3 and when N[t]=5.
Design B fixes β^(N[t])=α and selects α such that the minimum polynomial of α in the field ℚ(j)(e^(j2π/N[t])) has degree greater than or equal to N[t]^2. For example, when N[t]=2^k, k∈ℤ+, select α=e^(jπ/(N[t]^3)). Analogous selections can be made when N[t]=3 and when N[t]=5.
In the following analysis a general code design is described for Design A and Design B. Define [𝔽:ℚ] as the degree of the field extension from ℚ to the field 𝔽; for example, [ℚ(j):ℚ]=2. For Design A, to enable full diversity, α=e^(j2π/K), K∈ℤ+, is designed such that condition (20) is satisfied.
$\left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) : \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big) \,\right] \ge N_t \qquad (20)$
Because [ℚ(α):ℚ]=φ(K), where φ(•) is the Euler totient function, K can be selected based on the properties of Euler numbers such that φ(K)≥2N[t]φ(N[t]). Since e^(j2π/N[t])∈ℚ(α) when N[t] divides K, it can be verified that equation (21) can be satisfied.
$\left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) : \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big) \,\right] \ge \frac{\varphi(K)}{2\,\varphi(N_t)} \ge N_t \qquad (21)$
As a result, equation (20) is satisfied. Based on the selection of α, the second step is to select
$\beta^{N_t} = e^{j2\pi/M}, \quad M \in \mathbb{Z}^{+},$
such that equation (22) is satisfied. Similar to the selection of α, β^(N[t]) is selected by relying on the tower-law relationship given in equation (23).
$\left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha)\big(\beta^{N_t}\big) : \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) \,\right] \ge N_t \qquad (22)$
$\left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha)\big(\beta^{N_t}\big) : \mathbb{Q} \,\right] = \left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha)\big(\beta^{N_t}\big) : \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) \,\right] \cdot \left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) : \mathbb{Q} \,\right] \qquad (23)$
Because [ℚ(j)(e^(j2π/N[t]))(α):ℚ]≤2φ(N[t])φ(K) and [ℚ(β^(N[t])):ℚ]=φ(M), then given K, if M is selected according to equation (24), the inequality given in equation (22) is satisfied.
$\varphi(M) \ge 2 N_t\, \varphi(N_t)\, \varphi(K) \qquad (24)$
It is important to note that the selection of α and β is not unique. For example, if K is an integer multiple of N[t], M can be selected such that the inequality given in equation (25) is satisfied.
$\varphi(M) \ge 2 N_t\, \varphi(K) \qquad (25)$
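The totient-based selection can be sketched as follows (Python; the target value 2N[t]φ(N[t]) is one example condition of the form discussed above, and the helper names are illustrative, not part of the patent):

```python
def phi(n: int) -> int:
    """Euler totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def smallest_k(target: int) -> int:
    """Smallest K with phi(K) >= target, usable to pick alpha = e^{j 2 pi / K}."""
    k = 1
    while phi(k) < target:
        k += 1
    return k

Nt = 4
K = smallest_k(2 * Nt * phi(Nt))  # example condition: phi(K) >= 2*Nt*phi(Nt)
assert phi(K) >= 2 * Nt * phi(Nt)
```

Since φ(k) ≤ k−1, any K satisfying φ(K) ≥ 16 must be at least 17, and φ(17)=16, so the search returns K=17 for N[t]=4.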
For Design B, fix β^(N[t])=α=e^(j2π/K). Consequently, the design problem becomes selecting α such that the inequality given in equation (26) is satisfied.
$\left[\, \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) : \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big) \,\right] \ge N_t^2 \qquad (26)$
In a similar manner, K can be selected such that the inequality given in equation (27) is satisfied.
$\varphi(K) \ge 2 N_t^2\, \varphi(N_t) \qquad (27)$
Design C selects α such that the minimum polynomial of α in the field ℚ(j) has degree greater than or equal to N[t]. Based on α, a number transcendental over the field ℚ(j)(e^(j2π/N[t]))(α) is found and used as β^(N[t]). Alternatively, a transcendental number can be found directly for the field ℚ(j)(e^(j2π/N[t])). For example, given N[t], select Θ as in Design A and let β^(N[t]) be a transcendental number such as e^(j/2). In another example, given N[t], select β^(N[t])=α and let α be a transcendental number such as e^(j/2).
Note that the transcendental number e^(j/2) has also been used in M. O. Damen et al., "A construction of a space-time code based on number theory," IEEE Transactions on Information Theory, vol. 48, pp. 753-760, March 2002, which is incorporated by reference herein. According to Lindemann's Theorem, given on page 44 of A. O. Gelfond, Transcendental & Algebraic Numbers, Dover Publications, Inc., 1960, which is incorporated herein by reference, transcendental numbers can be designed, e.g. e^(jk) for any nonzero algebraic k. All three designs, i.e. Design A, Design B, and Design C, can enable full diversity. However, since the coding gain was not maximized for any of the designs, it may be possible to find other FDFR encoders with improved coding gains. Simulations of the designs are presented to compare their relative performance.
Proposition 2 gives a measure of mutual information for the FDFR ST coding techniques described herein for transmitter 4A.
Proposition 2 If the information symbols s˜CN(0, ε[s]/N[t] I[N]) and the average signal-to-noise ratio (SNR) is γ:=ε[s]/(N[0]N[t]), then the mutual information of the FDFR ST coded signal transmitted through channels 8 is given by equation (28).
$C_{\text{flat}} = \log\det\left( I_{N_r} + \gamma\, H H^{H} \right)\ \text{bits pcu} \qquad (28)$
The proof of Proposition 2 is given in the following analysis. Based on equation (15), the mutual information is given according to equation (29), where H(•) represents entropy. Since w is AWGN and matrix Φ is known, the second term on the right hand side of equation (29) can be expressed according to equation (30).
I(y;s|H)=H(s|H)−H(s|H,y) (29)
$\mathcal{H}(s \mid \mathcal{H}, y) = \log\det\left( \pi e \left( \left( \tfrac{\varepsilon_s}{N_t} \right)^{-1} I_N + \tfrac{1}{N_0}\, \Phi^{H} \mathcal{H}^{H} \mathcal{H} \Phi \right)^{-1} \right) \qquad (30)$
For a given R[s]=E[ss^H]=ε[s]/N[t] I[N], H(s|ℋ) is maximized when s is Gaussian; that is, H(s|ℋ) is bounded according to equation (31).
$\mathcal{H}(s \mid \mathcal{H}) \le \log\det\left( \pi e\, \tfrac{\varepsilon_s}{N_t}\, I_N \right) \qquad (31)$
Substituting equations (30) and (31) into equation (29) results in equation (32) when s is Gaussian.
$C_{\text{flat}} = \frac{1}{N_t} \log\det\left( I_N + \gamma\, \Phi^{H} \mathcal{H}^{H} \mathcal{H} \Phi \right) = \log\det\left( I_{N_r} + \gamma\, H H^{H} \right) \qquad (32)$
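The no-loss claim of equation (32) can be verified numerically: the sketch below (Python with NumPy; the (α, β) values are illustrative assumptions) builds Φ from equations (12)-(14), confirms it is unitary, and checks that the two sides of equation (32) agree for a random H:

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, Nr, gamma = 2, 3, 1.5
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Illustrative (Theta, beta) in the style of eq. (19) / Design A
alpha = np.exp(1j * np.pi / (2 * Nt))
F = np.exp(-2j * np.pi * np.outer(np.arange(Nt), np.arange(Nt)) / Nt)
Theta = (F.conj().T / np.sqrt(Nt)) @ np.diag(alpha ** np.arange(Nt))
beta = np.exp(1j * np.pi / (4 * Nt**2))
D_beta = np.diag(beta ** np.arange(Nt))

# Stack the blocks (P_n D_beta) kron theta_n^T of eq. (14)
Phi = np.vstack([np.kron(np.roll(np.eye(Nt), n, axis=0) @ D_beta,
                         Theta[n].reshape(1, -1)) for n in range(Nt)])
assert np.allclose(Phi.conj().T @ Phi, np.eye(Nt * Nt))  # Phi is unitary

Hbig = np.kron(np.eye(Nt), H)  # the stacked channel of eq. (10)
lhs = np.log2(np.linalg.det(np.eye(Nt * Nt)
              + gamma * Phi.conj().T @ Hbig.conj().T @ Hbig @ Phi).real) / Nt
rhs = np.log2(np.linalg.det(np.eye(Nr) + gamma * H @ H.conj().T).real)
assert np.isclose(lhs, rhs)  # eq. (32): the precoder incurs no mutual-information loss
```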
Compared with the MIMO channel capacity described in G. J. Foschini and M. J. Gans, “On limits of wireless communication in a fading environment when using multiple antennas,” Wireless Personal
Communications, vol. 6, no. 3, pp. 311-335, March 1998, which is incorporated herein by reference, the mutual information given in equation (28) coincides with the instantaneous channel capacity. In
other words, the FDFR ST coding techniques applied in transmitter 4A incur no mutual information loss unlike other designs which can also achieve full diversity, e.g. ST-OD and LCP-STC, but result in
substantial mutual information loss, particularly when N[r]>1.
When the number of antennas is large, the diversity order N[t]N[r] is large. At the same time, high performance and high rate come with high decoding complexity. Therefore, with large antenna configurations, it may be desirable to tradeoff performance gains, which may only materialize at impractically high SNR, in order to reduce decoding complexity. The following two corollaries provide methods
that the FDFR ST coding techniques described herein may employ to tradeoff rate and performance with complexity. In particular, Corollary 1 illustrates a method that may be employed to tradeoff
performance with complexity while Corollary 2 illustrates a method that may be employed to tradeoff rate with complexity. Again, decoding complexity is quantified based on the block length that is to
be decoded.
Corollary 1 Keeping the same information rate, i.e. information bearing symbols pcu, two performance-complexity tradeoffs arise.
i.) (Diversity-Complexity Tradeoff) With smaller size LCF encoders Θ, e.g. N[d]<N[t], the achieved diversity order reduces to N[d]N[r]≤N[t]N[r] while the decoding block size reduces to N[d]N[t].
ii.) (Modulation-Complexity Tradeoff) Provided that the constellation size is increased, several layers can be combined into one layer. If N[z] layers are set to zero, i.e. u[g1]= . . . =u[gN[z]]=0, then full diversity is maintained at reduced decoding block length N[t](N[t]−N[z]). Clearly, the decoding block length decreases as N[z] increases.
Alternatively, decoding complexity may be reduced by decreasing the transmission rate. This option is attractive when min(N[t], N[r]) is large because, similar to full diversity, full rate is not always required. For example, instead of having N[t] layers in equation (9), ST mapper 14 can be designed with N[t]−1 or N[t]−2 layers. Corollary 2 quantifies the rate-complexity tradeoff.
Corollary 2 (Rate-Complexity Tradeoff) If, for the set of LCF encoders 12 given in equation (8) and ST mapper 14 given in equation (9), N[z] layers are eliminated by letting u[g1]= . . . =u[gN[z]]=0, then ML or near-ML decoding collects the full diversity N[t]N[r] with decoding block length N[t](N[t]−N[z]) and has transmission rate N[t]−N[z] symbols pcu.
In other words, Corollary 2 states that when entries of s are selected from a fixed constellation, as the transmission rate increases, i.e. N[z ]decreases, the decoding complexity increases as well.
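The counting behind Corollaries 1 and 2 can be illustrated directly on the layer-index pattern of the circular array (Python with NumPy; the specific N[t] and N[z] values are illustrative):

```python
import numpy as np

Nt, Nz = 4, 1  # illustrative: eliminate Nz of the Nt layers

# In eq. (9), layer g occupies positions C[mu, n] with g = (mu - n) mod Nt (zero-based).
layer = (np.arange(Nt)[:, None] - np.arange(Nt)[None, :]) % Nt
active = layer < (Nt - Nz)  # keep layers 0 .. Nt-Nz-1, zero the rest

# Nz zeroed layers leave Nt*(Nt - Nz) active positions: the reduced decoding block length
assert active.sum() == Nt * (Nt - Nz)
# Each time slot carries Nt - Nz symbols: transmission rate Nt - Nz symbols pcu
assert (np.count_nonzero(active, axis=0) == Nt - Nz).all()
```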
It is important to note that when the number of "null layers" N[z]>0, the condition N[r]≥N[t] for SD described in B. Hassibi and H. Vikalo, "On the expected complexity of sphere decoding," Proceedings of the 35th Asilomar Conference on Signals, Systems, and Computers, vol. 2, pp. 1051-1055, Pacific Grove, Calif., Nov. 4-7, 2001, which is incorporated herein by reference, or for nulling-cancelling algorithms is relaxed to N[r]≥N[t]−N[z]. Importantly, while the tradeoffs described above have been considered for a channel with fixed coherence time, the tradeoffs vary accordingly with the channel coherence time.
When the number of transmit antennas N[t] is large, the block length N=N[t]^2 is also large and the affordable decoding complexity may not be sufficient to achieve FDFR. In this case, diversity-rate
tradeoffs are well motivated. Corollary 3 provides a method that the ST coding techniques may employ to tradeoff rate with diversity when the affordable decoding complexity is not sufficient to
achieve FDFR.
Corollary 3 (Diversity-Rate Tradeoff) If the maximum affordable decoding block length N<N[t] ^2, then based on the described FDFR ST coding techniques, full diversity and full rate cannot be achieved
simultaneously. By adjusting the size of Θ[g], diversity can be traded-off for rate.
The rate-diversity tradeoff for block size N=12 and N[t]=4 is illustrated in FIGS. 4 and 5.
It is important to note that the described diversity-rate tradeoffs are different from the diversity-multiplexing tradeoffs described in L. Zheng and D. Tse, “Diversity and multiplexing: a
fundamental tradeoff in multiple antenna channels,” IEEE Transactions on Information Theory, revised September 2002, which is incorporated by reference herein, where it is stated that “any fixed code
has 0 spatial multiplexing gain." The multiplexing gain in that paper is defined as the limiting transmission rate divided by log2(SNR) as SNR goes to infinity. For the ST coding techniques
described herein, the transmission rate is defined as the number of symbols pcu. In any case, the transmission rate as defined herein does not vary with SNR.
FIG. 3 illustrates an example arrangement of LCF coded information bearing symbols u circularly mapped in an array 30 in accordance with equation (9). In this arrangement the block of information
bearing symbols s includes N[t] ^2 symbols and comprises N[g]=N[t ]sub-blocks, each sub-block including N[t ]information bearing symbols. The LCF coded information bearing symbols u are organized in
array 30 having N[t ]rows and N[t ]columns. Each of the LCF encoders Θ[g ] 12 encodes a corresponding block of information bearing symbols s[g ]to form a respective layer u[g ]in accordance with
equation (7) and the described ST coding techniques. Each of the layers {u[g]}[g=1] ^N ^ t is circularly mapped by ST mapper 14 such that the encoded information bearing symbols of each layer are
orthogonal in space and time. For example, the LCF encoded information symbols of the first layer are arranged along the diagonal of array 30. Array 30 is read out in a column-wise manner and
transmits one symbol per antenna per time slot. Consequently, transmitter 4A can achieve the full rate R^max=N[t ]symbols pcu.
FIG. 4 illustrates an example arrangement of LCF coded information bearing symbols in an array 40 that utilizes a diversity-complexity tradeoff. In this arrangement, the maximum block size N=12 and N[t]=4, and the encoded information bearing symbols are circularly arranged in array 40 having N[t]=4 rows and N[d]=3 columns. Per Corollary 3, FD and FR cannot be achieved simultaneously because N<N[t]^2. The rate achieved by array 40 is R^max=N[t]=4 symbols pcu. However, the diversity order is 3N[r]<N[t]N[r], and the decoding block length reduces to N[d]N[t]=12. Array 40 achieves the full rate because one symbol is transmitted per antenna per time slot. The diversity is reduced because the size of each of the LCF encoders is reduced.
FIG. 5 illustrates another example arrangement of LCF coded information bearing symbols in an array 50 that utilizes a rate-complexity tradeoff. In this arrangement, the maximum block size N=12 and N[t]=4, and the encoded information bearing symbols are circularly arranged in array 50 having N[t]=4 rows and N[t]=4 columns. However, per Corollary 3, FD and FR cannot be achieved simultaneously because N<N[t]^2. In accordance with Corollary 2, one layer (N[z]=1) is eliminated by letting u[g1]= . . . =u[gN[z]]=0. As a result, each transmit antenna does not transmit an information bearing symbol in every time slot. Consequently, the rate achieved by array 50 is R=3 symbols pcu. However, array 50 does achieve the full diversity N[t]N[r] in accordance with Corollary 2.
FIG. 6 is a flowchart illustrating an example mode of operation of MIMO wireless communication system 2 (FIG. 2) in which transmitter 4A and receiver 6A communicate using the described ST coding techniques. Generally, transmitter 4A parses a serial stream of information bearing symbols into blocks s of N=N[t]^2 symbols comprising N[g]=N[t] sub-blocks, each sub-block including N[t] symbols (step 60). LCF encoder 12 applies a set of layer-specific LCF encoders to encode respective sub-blocks to form corresponding layers (step 62). ST mapper 14 forms an array according to equation (9) by mapping each layer such that the encoded information bearing symbols of each layer are orthogonal in space and time (step 64). The array is read out in a column-wise manner to generate a ST coded signal (step 66), and transmitter 4A converts the ST coded signal into a serial stream of ST coded symbols (step 68) and outputs a transmission waveform for carrying the ST coded signal through communication channels 8 to receiver 6A (step 70).
Receiver 6A receives the incoming stream of information bearing symbols (step 72) and performs ML decoding to form estimates ŝ of the block of information bearing symbols s (step 74).
FIG. 7 is a block diagram illustrating in further detail one embodiment of MIMO wireless communication system 2 (FIG. 1) in which transmitter 4B employs orthogonal frequency-division multiplexing
(OFDM) to transmit ST coded signals to receiver 6B through frequency- or time-selective channels 8B. System 2 is first described with frequency-selective channels, which are assumed to be quasi-static, i.e. to remain invariant over at least one block. System 2 is then described in a following analysis for time-selective channels.
When the maximum delay spread τ[max ]of a channel is greater than the symbol period T[s], the channel may become frequency-selective and cause inter-symbol interference (ISI). Specifically, if the
maximum delay spread among N[t]N[r ]channels is finite, then the channel order is upper-bounded by a finite number L:=└τ[max]/T[s]┘. Channel taps are represented as h[ν,μ](l) for μ∈[1, N[t]], v∈[1, N
[r]] and l∈[0, L]. OFDM mitigates ISI by converting frequency-selective channels to a set of frequency-flat subchannels.
In the illustrated embodiment, transmitter 4B includes a LCF encoder 82 and a ST mapper 84 to generate a stream of ST coded information bearing symbols. For each of the N[t ]transmit-antennas 92A-92
N, respectively, transmitter 4B includes corresponding P-point inverse fast Fourier transform (IFFT) units 88A-88N followed by respective cyclic prefix (CP) insertion units 89A-89N. Each output c[μ] of ST mapper 84 is processed by a corresponding one of P-point IFFT units 88A-88N, and a respective one of CP insertion units 89A-89N inserts a CP of length L. Modulators 90A-90N transmit the OFDM-ST coded signal through frequency-selective channels 8B. The OFDM-ST coded signal achieves FDFR for any number of transmit and receive antennas over frequency-selective channels 8B.
Receiver 6B includes cyclic prefix removers 97A-97N followed by corresponding fast Fourier transform (FFT) units 98A-98N for each of the receive antennas 94A-94N and decodes the received OFDM-ST
coded signal via LCF-ST decoder 99. For each receive antenna 94A-94N, corresponding S/P converters 96A-96N parse the P+L filtered samples of the received waveform into a respective vector {x[v]}[v=1]
^N ^ r . CP removers 97A-97N remove the first L samples to substantially reduce the ISI from the corresponding previously received block. The remaining P symbols are processed by the respective one
of FFT units 98A-98N. The output of the FFT unit corresponding to the vth receive-antenna is represented by y[v]. LCF-ST decoder 99 decodes {y[v]}[v=1] ^N ^ r to yield an estimate ŝ of the
information block s. The input-out relationship between {c[μ]}[μ=1] ^N ^ t and y[v ]is given according to equation (33) where D[H] ^(ν,μ) represents a P×P matrix with (p, p)th entry given according
to equation (34). In other words, equation (34) represents the frequency response of h[ν,μ](l) at frequency bin 2Πpl/P.
$y_\nu = \sum_{\mu=1}^{N_t} D_H^{(\nu,\mu)}\, c_\mu + w_\nu \qquad (33)$
$H_{\nu,\mu}(p) = \sum_{l=0}^{L} h_{\nu,\mu}(l)\, e^{-j 2\pi p l / P} \qquad (34)$
Transmitter 4B communicates with receiver 6B to achieve FDFR communications over frequency-selective channels 8B for any number of transmit- and receive antennas. Additionally, transmitter 4B and
receiver 6B allow flexibility to select tradeoffs among performance, rate, and complexity. When the number of antennas is large, the diversity order N[t]N[r](L+1) is large. However, high performance
and high rate require high decoding complexity. Herein, decoding complexity is quantified by the block length N[t] ^2(L+1) of the ST coded signal that is to be decoded. Therefore, with large antenna configurations, it may be advantageous to trade off performance gains in order to reduce decoding complexity.
In general, transmitter 4B transmits the stream of information bearing symbols {s(i)} as an output waveform through channels 8B. The information bearing symbols are drawn from a finite alphabet A[s].
S/P converter 80 parses the information bearing symbols into blocks s of size N[t] ^2(L+1). Each block s comprises N[t ]sub-blocks {s[g]}[g=1] ^N ^ t with each sub-block having length P=N[t](L+1).
The increased block length is due to the CP inserted to mitigate ISI. Each block of information bearing symbols s is coded by LCF encoder 82 to form vector u. In particular, LCF encoder 82 includes a set of LCF encoders Θ[g ]that encode corresponding sub-blocks {s[g]}[g=1] ^N ^ t to form a respective set of layers u[g]=Θ[g]s[g]. Importantly, Θ[g ]has larger size N[t](L+1) for frequency-selective
channels than for flat-fading channels, however, Θ[g ]may be designed according to the analysis given previously for flat-fading channels.
ST mapper 84 circularly maps each of the LCF encoded symbols u[g](n) into the array given according to equation (35). In particular, ST mapper 84 maps each symbol layer in a row circular manner to
form the array given in equation (35). Array C given in equation (35) includes L+1 sub-arrays, each having the same structure as the array given in equation (9) for flat-fading channels. In other
words, each layer in a sub-array is mapped such that the encoded symbols of a layer are orthogonal in space and time.
$C = \begin{bmatrix} u_1(1) & \cdots & u_2(N_t) & u_1(N_t+1) & \cdots & u_2(2N_t) & \cdots & u_2(P) \\ u_2(1) & \cdots & u_3(N_t) & u_2(N_t+1) & \cdots & u_3(2N_t) & \cdots & u_3(P) \\ \vdots & & \vdots & \vdots & & \vdots & & \vdots \\ u_{N_t}(1) & \cdots & u_1(N_t) & u_{N_t}(N_t+1) & \cdots & u_1(2N_t) & \cdots & u_1(P) \end{bmatrix} \qquad (35)$
The array given in equation (35) is read out in a column wise manner such that every layer is transmitted over N[t ]transmit-antennas and through each antenna, each layer is spread over L+1 frequency
bins. Intuitively, the structure of C given in equation (35) allows for joint exploitation of space and diversity modes.
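One consistent way to realize the row-circular mapping, inferred from the array patterns shown in equations (35) and (55), assigns entry (r, c) of the array to layer g = ((r−c) mod N[t])+1 with symbol index c. The sketch below uses placeholder integer "symbols" (value 10·g+n) so the layer of each entry is visible; the indexing rule is an illustration, not a quotation of the described embodiment:

```python
import numpy as np

Nt, L = 2, 1                      # illustrative sizes (two antennas, two-ray channel)
P = Nt * (L + 1)                  # number of columns, P = Nt(L+1)

# Hypothetical layers u_g, g = 1..Nt, each of length P (stand-ins for Theta_g @ s_g).
u = {g: [10 * g + n for n in range(1, P + 1)] for g in range(1, Nt + 1)}

# Row-circular ST mapping: entry (r, c) carries symbol c of layer
# g = ((r - c) mod Nt) + 1, so each layer visits every antenna (row)
# and is spread across every column (frequency bin after the IFFT).
C = np.empty((Nt, P), dtype=int)
for r in range(1, Nt + 1):
    for c in range(1, P + 1):
        g = (r - c) % Nt + 1
        C[r - 1, c - 1] = u[g][c - 1]

# Every column holds one symbol from each layer: layers stay orthogonal in space/time.
for c in range(P):
    assert sorted(v // 10 for v in C[:, c]) == list(range(1, Nt + 1))
print(C)
```

For N[t]=2, L=1 this reproduces the first two columns u_1(1), u_2(2) of equation (35), row by row.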
When the block size N>N[t] ^2(L+1), i.e. when the number of subcarriers P>N[t](L+1), the subcarrier grouping approach described in G. B. Giannakis et al. “Wireless multi-carrier communications: where
Fourier meets Shannon,” IEEE Signal Processing Magazine, vol. 17, no. 3, pp.29-48, May 2000, G. B. Giannakis et al. “Space-Time-Frequency Coded OFDM over Frequency-Selective Fading Channels,” IEEE
Transactions on Signal Processing, pp.2465-2476, October 2002, and G. B. Giannakis et al. “Linear Constellation Precoding for OFDM with Maximum Multipath Diversity and Coding Gains,” IEEE
Transactions on Communications, vol. 51, no.3, pp. 416-427, March 2003, each of which is incorporated herein by reference, can be used. In any event, the size of Θ[g ]is N[t](L+1). However, if P is a
multiple of N[t](L+1), for example P=N[g]N[t](L+1) where N[g ]is the number of sub-blocks, layers can be interlaced in different groups. As described below, FIG. 9 illustrates an example arrangement
of matrix C when P is a multiple of N[t](L+1).
In equation (33), the multi-channel output samples are organized according to receive-antenna indices. Alternatively, the multi-channel output samples may be organized according to subcarrier
indices. Specifically, if the N[t]×N[t ]channel matrix H(n) with (v, μ)th entry H[ν,μ](n) is defined, the input-output relationship given in equation (36) can be obtained.
$\begin{bmatrix} y(1) \\ \vdots \\ y(P) \end{bmatrix} = \begin{bmatrix} H(1) & & \\ & \ddots & \\ & & H(P) \end{bmatrix} \begin{bmatrix} c(1) \\ \vdots \\ c(P) \end{bmatrix} + \begin{bmatrix} w(1) \\ \vdots \\ w(P) \end{bmatrix} \qquad (36)$
$\begin{bmatrix} c(1) \\ \vdots \\ c(N_t) \\ c(N_t+1) \\ \vdots \\ c(P) \end{bmatrix} = \begin{bmatrix} (P_1 D_\beta) \otimes \theta_1^T \\ \vdots \\ (P_{N_t} D_\beta) \otimes \theta_{N_t}^T \\ (P_1 D_\beta) \otimes \theta_{N_t+1}^T \\ \vdots \\ (P_{N_t} D_\beta) \otimes \theta_P^T \end{bmatrix} s := \Phi s \qquad (37)$
Comparing equation (36) to equation (2) for flat-fading channels, it can be observed that due to frequency-selectivity, the channel response on different frequency bins may be different.
Consequently, if there is no frequency-selectivity, the design described in this analysis reduces to the design previously described for flat-fading channels.
The transmitted vector given in equation (36) can be expressed according to equation (37). Given the channel matrices in equation (36), the matrix Φ given in equation (37) is known. Consequently, ML decoding, near-ML decoding such as SD, or linear decoding may be employed by receiver 6B to recover the information vector s.
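For intuition, ML detection over a finite alphabet is an exhaustive search. The sketch below uses a random placeholder mixing matrix rather than the actual precoder Φ of equation (37), and a tiny BPSK block kept noiseless so the |A|^N search stays small and exact:

```python
import numpy as np
from itertools import product

# Brute-force ML detection for a linear model y = Phi s: pick the candidate
# block that minimizes ||y - Phi s||. Complexity is |A|^N, which is what
# motivates sphere decoding (SD) as a near-ML alternative.
rng = np.random.default_rng(5)
N = 4                                          # block length (2^N = 16 candidates)
Phi = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
s_true = rng.choice([-1.0, 1.0], size=N)       # BPSK block
y = Phi @ s_true                               # noiseless received vector

best = min(product([-1.0, 1.0], repeat=N),
           key=lambda s: np.linalg.norm(y - Phi @ np.array(s)))
assert np.array_equal(np.array(best), s_true)
print("ML detection recovered s")
```

A linear decoder would instead apply a single pseudo-inverse, trading the exponential search for reduced diversity.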
Again, similar to flat-fading channels, the decoding complexity of SD depends only on the block length, regardless of the constellation size. However, decoding complexity for frequency-selective channels is higher than that for flat-fading channels because the block length N=N[t] ^2(L+1) is larger. Maximum likelihood decoding can be employed to detect s from y regardless of N[r], but possibly with high complexity because decoding complexity is dependent on the block length. SD or semi-definite programming algorithms may also be used to achieve near-optimal performance; the average complexity of the SD algorithm is polynomial in the block length irrespective of the alphabet size. When N[t ]is large, the decoding complexity is high even for near-ML decoders. To further reduce decoding complexity, nulling-cancelling based or linear decoding may be used, although such decoders require a sufficient number of receive antennas.
The following analysis describes the performance of transmitter 4B for frequency-selective channels. Collecting received blocks from equation (33), the received matrix can be expressed according to equation (38) where D[c] ^(μ):=diag[c[μ]]=diag[c[μ](1) . . . c[μ](P)], the (p, v)th entry of the N[t](L+1)×N[r ]matrix H is h[v,⌈p/N[t]⌉](p mod N[t]), and the P×(L+1) matrix F[1:L+1 ]comprises the first L+1 columns of the FFT matrix F[P ]with (m+1, n+1)th entry e^−j2πmn/P, ∀m, n∈[0, P−1].
$Y := [\, y_1 \;\cdots\; y_{N_r} \,] = [\, D_c^{(1)} F_{1:L+1} \;\cdots\; D_c^{(N_t)} F_{1:L+1} \,]\, H + W := \Lambda H + W \qquad (38)$
When viewing transmissions through N[t ]antennas over frequency-selective channels of order L as transmissions through flat-fading channels with N[t](L+1) virtual transmit-antennas, the full diversity order is expected, at least intuitively, to be N[t]N[r](L+1), provided that the correlation matrix of the channel taps has full rank. However, the maximum transmission rate is still N[t ]
symbols pcu because it is impossible to transmit different symbols through the multiple paths of the same channel. This implies that for each transmit antenna, only one symbol can be transmitted pcu
even though the channel has L+1 taps.
In order to enable the full diversity N[t]N[r](L+1), the matrix Λ in equation (38) must be designed such that det(Λ-Λ′)≠0, ∀s≠s′. Analogous to Proposition 1, Proposition 3 establishes design criteria that
enable FDFR transmissions for LCF encoder 82 and ST mapper 84 that form the array given in equation (35).
Proposition 3 For any constellation of s carved from ℤ[j], with the array formed by LCF encoder 82 and ST mapper 84 and given in equation (35), there exists at least one pair of (Θ, β) in equation (9) that enables full diversity (N[t]N[r](L+1)) for the ST coded signal given in equation (38) at transmission rate N[t]P/(P+L) symbols pcu.
The proof of proposition 3 is given in the following analysis. Similar to the flat-fading case, to prove Proposition 3, it suffices to show that the matrix Λ-Λ′ in equation (38) has full rank ∀s≠s′.
Defining {tilde over (Λ)}:=Λ-Λ′, {tilde over (Λ)} is given according to equation (39) where ω:=e^−j2π/P.
$\tilde{\Lambda} = \begin{bmatrix} \tilde{c}_1(1) & \cdots & \tilde{c}_1(1) & \cdots & \tilde{c}_{N_t}(1) & \cdots & \tilde{c}_{N_t}(1) \\ \tilde{c}_1(2) & \cdots & \tilde{c}_1(2)\,\omega^{L} & \cdots & \tilde{c}_{N_t}(2) & \cdots & \tilde{c}_{N_t}(2)\,\omega^{L} \\ \vdots & & \vdots & & \vdots & & \vdots \\ \tilde{c}_1(P) & \cdots & \tilde{c}_1(P)\,\omega^{(P-1)L} & \cdots & \tilde{c}_{N_t}(P) & \cdots & \tilde{c}_{N_t}(P)\,\omega^{(P-1)L} \end{bmatrix} \qquad (39)$
The determinant of {tilde over (Λ)} is given according to equation (40) where τ(n[1], . . . , n[P]) represents the number of inversions of the sequence (n[1], . . . , n[P]).
$\det(\tilde{\Lambda}) = \sum_{(n_1, \ldots, n_P)} (-1)^{\tau(n_1, \ldots, n_P)} \prod_{p=1}^{P} \tilde{c}_{\lceil n_p/(L+1) \rceil}(p)\, \omega^{p\,(n_p \bmod (L+1))} \qquad (40)$
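The inversion-count expansion in equation (40) is the standard Leibniz formula for a determinant. A quick numerical check on a small random matrix (illustrative only, not the structured matrix of equation (39)):

```python
import numpy as np
from itertools import permutations

# Leibniz expansion: det(A) = sum over permutations (n_1,...,n_P) of
# (-1)^{tau} * prod_p A[p, n_p], where tau counts inversions of the sequence.
def inversions(seq):
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

def leibniz_det(A):
    P = A.shape[0]
    total = 0.0 + 0.0j
    for perm in permutations(range(P)):
        term = (-1) ** inversions(perm)
        for p in range(P):
            term = term * A[p, perm[p]]
        total += term
    return total

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
assert np.isclose(leibniz_det(A), np.linalg.det(A))
print("Leibniz expansion matches np.linalg.det")
```

The P! terms of this sum are what the subsequent analysis regroups, via the mapping g_p, into a polynomial in α and β^N ^ t.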
Furthermore, ST mapper 84 can be described as mapping u to the array given in equation (35) according to equation (41) where g[p ]is given according to equation (42) and equation (43) is satisfied.
$\tilde{c}_{\lceil n_p/(L+1) \rceil}(p) = \tilde{u}_{g_p}(p) \qquad (41)$
$g_p = \begin{cases} N_t + \lceil n_p/(L+1) \rceil - (p \bmod N_t) + 1 & \text{if } \lceil n_p/(L+1) \rceil - (p \bmod N_t) + 1 \le 0 \\ \lceil n_p/(L+1) \rceil - (p \bmod N_t) + 1 & \text{if } \lceil n_p/(L+1) \rceil - (p \bmod N_t) + 1 > 0 \end{cases} \qquad (42)$
$\sum_{p=1}^{P} (g_p - 1) = m N_t, \quad m \in [0, N_t - 1] \qquad (43)$
Therefore, similar to the flat-fading case, it is determined that det({tilde over (Λ)}) is a polynomial of α and β^N ^ t . First, α is defined on the field ℚ(j) to obtain a Θ for which Θ{tilde over (s)}[g] has no zero entry ∀{tilde over (s)}[g]≠0. Because P=N[t](L+1), based on the design of α, it can be verified that ω^{p(n_p mod (L+1))}∈ℚ(j)(e^{j2π/N_t}).
Given α, det({tilde over (Λ)}) can be viewed as a polynomial of β^N ^ t with coefficients given according to equation (44).
$\prod_{p=1}^{P} \theta_p^T \tilde{s}_{g_p}\, \omega^{p\,(n_p \bmod (L+1))} \in \mathbb{Q}(j)\big(e^{j2\pi/N_t}\big)(\alpha) \qquad (44)$
There always exists β^N ^ t such that the degree of the minimum polynomial is greater than N[t]. Thus, there exists (Θ,β) such that det({tilde over (Λ)})≠0, ∀{tilde over (s)}≠0.
It is important to note that the rate loss L/(P+L) is due to the CP inserted and removed by CP insertion units 89A-89N and CP removers 97A-97N, respectively, to substantially mitigate ISI.
Furthermore, the LCF code described for frequency-selective channels 8B is analogous to the LCF code described for flat-fading channels 8A apart from the differences in dimensionality. The difference between the LCF code described for frequency-selective channels and the LCF code described for flat-fading channels results from the differences in ST mapper 84 and ST mapper 12. In order to better illustrate the described techniques, the following examples are provided.
EXAMPLE 1 LCF Repetition Codes
In order to enable only full diversity, Θ can be constructed to have identical rows, rather than being unitary. Θ having identical rows is given according to equation (45), where 1 represents a vector with all ones and the 1×N[t](L+1) vector θ[1] ^T:=[1 α . . . α^{N_t(L+1)−1}].
$\Theta = \mathbf{1} \otimes \theta_1^T \qquad (45)$
Consequently, for each sub-block s[g], every entry of the layer u[g]=Θs[g ]equals the same inner product θ[1] ^Ts[g]. Clearly, this is why this specific LCF encoding is referred to as LCF repetition coding. From the definitions of Λ in equation (38) and C in equation (35), after re-arranging the rows of Λ, Λ is given according to equation (46) where U[μ,g]:=diag[u[μ](g)u[μ](g+N[t]) . . . u[μ](g+N[t]L)], J[g]:=J[1]diag[1,e^−j2π(g−1)/P, . . . e^−j2π(g−1)L/P], and [J[1]][m+1,n+1]:=e^−j2πmn/(L+1). With the LCF repetition encoder given in equation (45), U[μ,g ]is given according to equation (47).
$\bar{\Lambda} = \begin{bmatrix} U_{1,1}J_1 & U_{2,1}J_1 & \cdots & U_{N_t,1}J_1 \\ U_{N_t,2}J_2 & U_{1,2}J_2 & \cdots & U_{N_t-1,2}J_2 \\ \vdots & & & \vdots \\ U_{2,N_t}J_{N_t} & U_{3,N_t}J_{N_t} & \cdots & U_{1,N_t}J_{N_t} \end{bmatrix} \qquad (46)$
$U_{\mu,g} = \beta^{\mu-1}\,(\theta_1^T s_\mu)\, I_{L+1}, \quad \forall g,\ \mu \in [1, N_t] \qquad (47)$
Assume that the transmitted vector s is erroneously decoded as s′, and let e[μ]:=θ[1] ^T(s[μ]−s[μ]′). From equation (46), equation (48) can be expressed with det( Λ- Λ′)=(L+1)^N ^ t (det (E))^L+1.
$\bar{\Lambda} - \bar{\Lambda}' = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_{N_t} \end{bmatrix} \Big( \underbrace{\begin{bmatrix} e_1 & e_2\beta & \cdots & e_{N_t}\beta^{N_t-1} \\ e_{N_t}\beta^{N_t-1} & e_1 & \cdots & e_{N_t-1}\beta^{N_t-2} \\ \vdots & & & \vdots \\ e_2\beta & e_3\beta^2 & \cdots & e_1 \end{bmatrix}}_{E} \otimes I_{L+1} \Big) \qquad (48)$
Guaranteeing that det( Λ- Λ′)≠0 is equivalent to ensuring that det(E)≠0, ∀s≠s′. As illustrated in the proof of Proposition 1, it follows that if the information bearing symbols s(n) are drawn from
quadrature amplitude modulation (QAM) or pulse amplitude modulation (PAM), there always exists α and β such that det(E)≠0, ∀s≠s′.
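The repetition structure of equation (45) can be illustrated directly. In this sketch α is an arbitrary unit-modulus placeholder, not the algebraically designed value discussed in the text:

```python
import numpy as np

# LCF repetition encoder: Theta = 1 (x) theta_1^T has identical rows, so
# every entry of u_g = Theta @ s_g equals the same inner product
# theta_1^T s_g -- hence "repetition" coding.
Nt, L = 2, 1
K = Nt * (L + 1)                               # sub-block length N_t(L+1)
alpha = np.exp(1j * np.pi / 4)                 # illustrative unit-modulus alpha
theta1 = alpha ** np.arange(K)                 # [1, alpha, ..., alpha^{K-1}]
Theta = np.ones((K, 1)) @ theta1[None, :]      # 1 (x) theta_1^T

s_g = np.array([1 + 1j, -1, 1j, -1 - 1j])      # one sub-block of symbols
u_g = Theta @ s_g

assert np.allclose(u_g, theta1 @ s_g)          # all K entries are identical
print("repetition layer entry:", u_g[0])
```

Because every entry repeats the same scalar, the code enables full diversity but sacrifices the unitary structure used by the full FDFR design.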
EXAMPLE 2 Two-Antenna Two-Ray Channel
In this example, FDFR transmissions are designed over two-ray channels with (N[t], L)=(2, 1) and P=4. As a result, the matrix Λ is given according to equation (48).
$\Lambda = \begin{bmatrix} c_1(1) & c_1(1) & c_2(1) & c_2(1) \\ c_1(2) & jc_1(2) & c_2(2) & jc_2(2) \\ c_1(3) & -c_1(3) & c_2(3) & -c_2(3) \\ c_1(4) & -jc_1(4) & c_2(4) & -jc_2(4) \end{bmatrix} \qquad (48)$
Using the ST mapping to form the array given in equation (35) with LCF encoded blocks u[1]=Θs[1], u[2]=βΘs[2], the determinant of Λ in terms of s[g ]and (Θ, β) is given according to equation (49).
$\det(\Lambda) = 2j\Big( -2\prod_{n=1}^{4}\theta_n^T s_1 + \beta^2\Big( \prod_{n=1}^{2}\theta_n^T s_1 \prod_{n=3}^{4}\theta_n^T s_2 + (\theta_1^T s_1)(\theta_4^T s_1)\prod_{n=2}^{3}\theta_n^T s_2 + (\theta_1^T s_2)(\theta_4^T s_2)\prod_{n=2}^{3}\theta_n^T s_1 + \prod_{n=1}^{2}\theta_n^T s_2 \prod_{n=3}^{4}\theta_n^T s_1 \Big) - 2\beta^4\prod_{n=1}^{4}\theta_n^T s_2 \Big) \qquad (49)$
Equation (49) is a polynomial in β^N ^ t with coefficients in ℚ(j)(e^{j2π/N_t})(α) if s is carved from ℤ[j]. In order to guarantee that all coefficients of β^N ^ t in det(Λ) are non-zero ∀s≠s′, Θ is designed according to the methods described in X. Giraud et al. “Algebraic tools to build modulation schemes for fading channels,” IEEE Transactions on Information Theory, vol. 43, no. 3, pp. 938-952, May 1997, G. B. Giannakis et al. “Space-time diversity systems based on linear constellation precoding,” IEEE Transactions on Wireless Communications, vol. 2, no. 2, pp. 294-309, March 2003, and G. B. Giannakis et al. “Complex field coded MIMO systems: performance, rate, and tradeoffs,” Wireless Communications and Mobile Computing, pp. 693-717, November 2002, each of which is incorporated herein by reference. In this particular case, Θ is selected according to equation (50).
$\Theta = \frac{1}{2} \begin{bmatrix} 1 & e^{j\pi/8} & e^{j2\pi/8} & e^{j3\pi/8} \\ 1 & e^{j5\pi/8} & e^{j10\pi/8} & e^{j15\pi/8} \\ 1 & e^{j9\pi/8} & e^{j18\pi/8} & e^{j27\pi/8} \\ 1 & e^{j13\pi/8} & e^{j26\pi/8} & e^{j39\pi/8} \end{bmatrix} \qquad (50)$
Given Θ, det(Λ) can be viewed as a polynomial in β^N ^ t with coefficients in ℚ(j)(e^{j2π/N_t})(α). Using Design A, described in the analysis of wireless communication system 2 for flat-fading channels, β^N ^ t is selected such that det(Λ)≠0 ∀s≠s′.
Thus, achieving full diversity over frequency-selective channels entails higher decoding complexity than over flat-fading channels. However, the diversity order in frequency-selective channels is also higher than the diversity order of flat-fading channels. Therefore, selecting complexity-performance tradeoffs may be particularly advantageous.
Throughout the analysis of FDFR transmissions of ST coded signals, the channel taps have been assumed to be uncorrelated. When the channel taps are correlated, the maximum achievable diversity of the
FDFR design for transmitter 4B and receiver 6B is the rank of the correlation matrix of all channel taps. The rank of the correlation matrix of all channel taps cannot exceed its dimension N[t]N[r]
(L+1) and can be as low as 1.
In order to complete the analysis of FDFR transmission over frequency-selective channels, Corollary 4 gives a measure of the mutual information for FDFR transmission over frequency-selective channels
based on the input-output relationship given in equation (36).
Corollary 4 If the information bearing symbols s˜CN(0,ε[s]/N[t]I[N]) and the average signal-to-noise ratio (SNR) is γ:=ε[s]/(N[0]N[t]), then the mutual information of the FDFR transmissions through
frequency-selective channels is given according to equation (51).
$C_{\text{freq}} = \frac{1}{P+L} \sum_{p=1}^{P} \log \det\big( I_{N_r} + \gamma\, H(p) H^{\mathcal{H}}(p) \big)\ \text{bits pcu} \qquad (51)$
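Equation (51) can be evaluated for a sample channel draw. The parameters below (SNR, tap statistics, antenna sizes) are illustrative only, and the logarithm is taken base 2 to express the result in bits:

```python
import numpy as np

# Mutual information of equation (51) for one random frequency-selective
# channel realization: average log-det capacity across subcarriers, with the
# 1/(P+L) factor accounting for the CP rate loss.
rng = np.random.default_rng(11)
Nt, Nr, L, P = 2, 2, 1, 48
snr = 10.0                                     # linear SNR, gamma

# i.i.d. complex Gaussian taps with variance 1/(L+1), as in the simulations.
taps = (rng.standard_normal((Nr, Nt, L + 1)) +
        1j * rng.standard_normal((Nr, Nt, L + 1))) / np.sqrt(2 * (L + 1))
Hp = np.fft.fft(taps, P, axis=2)               # per-subcarrier responses H(p)

cap = 0.0
for p in range(P):
    H = Hp[:, :, p]
    cap += np.log2(np.linalg.det(np.eye(Nr) + snr * H @ H.conj().T).real)
cap /= (P + L)                                 # bits pcu

assert cap > 0
print(f"C_freq = {cap:.2f} bits pcu")
```

Averaging this quantity over many channel draws and thresholding against a target rate R yields the outage probability Pr(C[freq]<R) examined below.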
Based on Corollary 4, the effects of N[t], N[r], and L on the outage probability Pr(C[freq]<R) are illustrated in FIG. 11. Generally, increasing either N[t], N[r], or L causes a decrease in the outage probability. When the product N[t]N[r](L+1) is fixed, e.g. 16, the outage probability has the same slope for moderate to high SNR values. However, because the channel variance is a function of L and N[t ]controls the power splitting factor, L and N[t ]affect the outage probability differently.
In the following analysis the design of LCF encoders and ST mappers that generate ST coded signals which achieve FDFR transmissions over time-selective channels are described. The previously
described ST coded signals assumed the channels were quasi-static. For simplicity, the channels in the following analysis are assumed to be time-selective but frequency-flat.
When channels are changing from symbol to symbol, the system model given in equation (2) can be re-expressed according to equation (52) where the channel matrix H(n) changes along with the time index n.
The input-output relationship for time-selective channels in equation (52) coincides with the input-output relationship for frequency-selective channels given in equation (36). Therefore, the design for frequency-selective channels can also be utilized for time-selective channels. The diversity order for time-selective channels is quantified before analyzing the performance of the design described herein.
In order to define Doppler diversity, let h[ν,μ](t) represent the time-varying impulse response of the resulting channel that includes transmit-receive filters as well as the time-selective
propagation effects and let H[ν,μ](f) represent the Fourier transform (FT) of h[ν,μ](t). Although the bandwidth of h[ν,μ](t) over a finite time horizon is theoretically infinite, H[ν,μ](f) can be
approximated as H[ν,μ](f)≈0 for f∉[−f[max],f[max]] where f[max ]is the maximum frequency offset, i.e. Doppler shift, of all the rays. Sampling h[ν,μ](t) along the time t with period T[s ]yields the discrete time equivalent channel taps h(p). Per Nyquist's theorem, it has thus been shown that such a channel can be well approximated by the basis expansion model (BEM) given in equation (53) where ω[q]:=2π(q−Q/2)/N, Q:=2⌈f[max]NT[s]⌉, and h[q] ^(ν,μ) represent time-invariant channel coefficients.
$h_{\nu,\mu}(p) \approx \sum_{q=0}^{Q} h_q^{(\nu,\mu)}\, e^{j \omega_q p} \qquad (53)$
In order to quantify the Doppler diversity in Lemma 2, the result described in G. B. Giannakis et al. “Maximum-diversity transmissions over time-selective wireless channels,” Proceedings of Wireless
Communications and Network Conference, vol. 1, pp.297-501, Orlando, Fla., Mar. 17-21, 2002, which is incorporated herein by reference, is used.
Lemma 2 Given the channel model in equation (53), when the coefficients h[q] ^(ν,μ) are complex Gaussian distributed, the maximum diversity provided by the BEM {h[ν,μ](p)}[p=1] ^P is at most Q+1.
It is important to note that the bases e^{jω_q p} in equation (53) are on the FFT grid. It is well known that circulant matrices can be diagonalized by FFT matrices. Using this property, the channel matrix D[H] ^(ν,μ):=diag[h[ν,μ](1) . . . h[ν,μ](P)] can be expressed according to equation (54) where H^(ν,μ) is a circulant P×P matrix with first column
$\big[\, h_{Q/2}^{(\nu,\mu)} \;\cdots\; h_0^{(\nu,\mu)} \;\; 0 \;\cdots\; 0 \;\; h_Q^{(\nu,\mu)} \;\cdots\; h_{Q/2+1}^{(\nu,\mu)} \,\big]^T$
and F[P ]represents the P-point FFT matrix with (m+1, n+1)st entry [F[P]][m,n]:=(1/√P)e^−j2πmn/P.
$D_H^{(\nu,\mu)} \approx \sum_{q=0}^{Q} h_q^{(\nu,\mu)} D_q = F_P\, H^{(\nu,\mu)}\, F_P^{\mathcal{H}} \qquad (54)$
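The duality expressed by equation (54) can be verified numerically: conjugating a diagonal BEM channel by the unitary FFT matrix yields a circulant matrix. Sizes and taps below are illustrative, and ω_q is defined with N=P:

```python
import numpy as np

# Time-frequency duality: D_H = sum_q h_q D_q with D_q = diag(e^{j w_q p})
# and w_q = 2*pi*(q - Q/2)/P on the FFT grid. Then F^H D_H F is circulant,
# i.e. a time-selective channel with Q+1 bases looks like a
# frequency-selective channel with Q+1 taps.
P, Q = 8, 2
rng = np.random.default_rng(3)
h = rng.standard_normal(Q + 1) + 1j * rng.standard_normal(Q + 1)

p = np.arange(P)
D_H = np.zeros((P, P), dtype=complex)
for q in range(Q + 1):
    w_q = 2 * np.pi * (q - Q / 2) / P
    D_H += h[q] * np.diag(np.exp(1j * w_q * p))

F = np.fft.fft(np.eye(P)) / np.sqrt(P)          # unitary FFT matrix
H = F.conj().T @ D_H @ F                        # candidate circulant matrix

# Circulant test: every column is a cyclic shift of the first column.
for n in range(P):
    assert np.allclose(H[:, n], np.roll(H[:, 0], n))
print("BEM channel dualizes to a circulant (tap-domain) channel")
```

Each basis coefficient h_q thus plays the role of a channel tap after the FFT, which is exactly the substitution used to carry the frequency-selective FDFR design over to time-selective channels.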
Comparing the right hand side of equation (54) with the previously described OFDM model for frequency-selective channels, it can be determined that, based on equation (53), a time-selective channel
with Q+1 bases can be viewed as a frequency-selective channel with Q+1 taps. Relying on this time-frequency duality, the described design for FDFR over time-selective channels can be obtained from
the previously described design for frequency-selective channels. The results for time-selective channels are given in Proposition 4.
Proposition 4 For any constellation of s carved from ℤ[j], with the array formed by LCF encoder 82 and ST mapper 84 and given in equation (35), there exists at least one pair of (Θ, β) in equation (9) that enables full diversity (N[t]N[r](Q+1)) if each channel provides Doppler diversity (Q+1) for the ST coded signal in equation (52) at full rate N[t ]symbols pcu.
As the block size P, and thus Q, increases, the Doppler diversity increases. However, as Doppler diversity increases, both decoding delay and complexity increase. As illustrated in the simulations of FIG. 16, when the Doppler diversity is sufficiently high, e.g. Q>3, performance with multiple antennas does not substantially improve by further increasing complexity. Furthermore, for fixed block size P, sampling period T[s], and f[max], the Doppler diversity (Q+1<P) is fixed. Therefore, selecting the size of Θ[g ]greater than N[t](Q+1) is not desirable.
FIG. 8 illustrates an example arrangement of LCF coded information bearing symbols u mapped in array 100 in accordance with equation (35). In this arrangement, the block of symbols s has length N[t]
^2(L+1) and comprises N[g]=N[t ]sub-blocks, each sub-block having length P=N[t](L+1). In particular, array 100 comprises L+1 sub-arrays, each having the same structure as the array given in equation
(9), i.e. array 30. The LCF coded symbols u are organized in array 100 having N[t ]rows and N[t](L+1) columns. Each of the LCF encoders Θ[g ]of LCF encoder 82 encodes a corresponding sub-block of
symbols s[g ]to form a respective layer u[g]. For frequency-selective channels, Θ[g ]has larger size N[t](L+1) than Θ[g ]for flat-fading channels. However, each layer in array 100 is transmitted over
N[t ]antennas and each layer is spread over at least L+1 frequency bins.
FIG. 9 illustrates an alternative arrangement of LCF coded information bearing symbols u mapped in array 110 in accordance with equation (35) when the block size N>N[t] ^2(L+1), i.e. when the number
of subcarriers P>N[t](L+1). In this arrangement N[t]=4, L=1, N[g]=1 and P=N[g]N[t](L+1). Identically patterned boxes represent different symbols from the same layer and the layers are interlaced in
different groups.
FIG. 10 is a flowchart illustrating an example mode of operation of MIMO wireless communication system 2 (FIG. 7) in which transmitter 4B and receiver 6B communicate using the described ST coding
techniques through frequency-selective channels 8B. Generally, transmitter 4B parses a serial stream of information bearing symbols into blocks of N=N[t] ^2(L+1) symbols s (step 120) comprising N[g]=
N[t ]sub-blocks, each sub-block including P=N[t](L+1) symbols. Each LCF encoder Θ[g ]of LCF encoder 82 applies layer specific coding to encode a respective one of the sub-blocks to form a
corresponding layer (step 122). ST mapper 84 forms an array according to equation (35) by mapping each layer such that the encoded symbols of each layer are orthogonal in space and time (step 124).
In particular, the array is arranged such that each layer is transmitted over N[t ]transmit-antennas and each layer is spread over at least L+1 frequency bins through each transmit-antenna. The
array is read out in a column-wise manner to generate a ST coded signal (step 126). Transmitter 4B converts the ST coded signal into a serial stream of ST coded symbols (step 128) for each
transmit-antenna 92A-92N. Each serial stream of ST coded symbols is processed by a corresponding P-point IFFT unit 88A-88N (step 130) and a CP of length L is inserted by a respective CP insertion
unit 89A-89N (step 132). IFFT units 88A-88N and CP insertion units 89A-89N serve to implement OFDM and to substantially mitigate ISI by converting frequency-selective channels to a set of
frequency-flat subchannels. Modulators 90A-90N generate a multi-carrier output waveform to transmit the OFDM-ST coded signal through frequency-selective channels 8B (step 134).
Receiver 6B receives the multi-carrier output waveform (step 136). S/P converters 96A-96N parse the received waveform into P+L filtered samples (step 138). CP removers 97A-97N remove the first L
samples to substantially reduce ISI from the corresponding previously received block (step 140). The remaining P samples are processed by FFT units 98A-98N (step 142) and decoded by LCF-ST decoder 99
to yield an estimate ŝ of the information block s (step 144).
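Steps 130-142 can be sketched end-to-end for a single transmit-receive antenna pair (noise omitted, illustrative sizes). The output matches the per-subcarrier model of equations (33)-(34):

```python
import numpy as np

# Single-antenna OFDM chain: IFFT + CP insertion at the transmitter,
# linear convolution with an order-L channel, then CP removal + FFT at
# the receiver -- recovering y(p) = H(p) c(p) on each subcarrier.
rng = np.random.default_rng(7)
P, L = 16, 3
c = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=P)  # QPSK-like
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)      # channel taps

x = np.fft.ifft(c)                        # P-point IFFT (step 130)
x_cp = np.concatenate([x[-L:], x])        # insert CP of length L (step 132)

r = np.convolve(x_cp, h)[: P + L]         # frequency-selective channel, noise-free

y = np.fft.fft(r[L:])                     # drop first L samples, FFT (steps 140-142)

Hfreq = np.fft.fft(h, P)                  # H(p) = sum_l h(l) e^{-j 2 pi p l / P}
assert np.allclose(y, Hfreq * c)          # flat per-subcarrier subchannels
print("per-subcarrier model verified")
```

Discarding the first L received samples removes exactly the portion contaminated by the previous block, which is why the CP both mitigates ISI and costs the rate factor P/(P+L).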
FIGS. 11-16 are graphs that illustrate FDFR ST codes for different types of MIMO fading channels, e.g. flat-fading channels, frequency-selective channels, and time-selective channels. The described
ST codes for each of the different types of MIMO fading channels are simulated to verify that FDFR are achieved and tradeoffs between performance, rate, and complexity can be selected.
FIG. 11 is a graph illustrating results of a comparison of outage probability Pr(C[freq]<R), based on Corollary 4, for the described ST coding techniques over frequency-selective channels with varying values of SNR. In particular, FIG. 11 illustrates the effects of N[t], N[r], and L on the outage probability. The transmission rate is fixed at R=4 bits pcu, the number of subcarriers P=48, and channel taps are simulated i.i.d. with zero mean and variance 1/(L+1). FIG. 11 illustrates four different designs for the described ST coding techniques for frequency-selective channels. The first design 150 has configuration (N[t], N[r], L)=(2, 2, 1), the second design 152 has configuration (N[t], N[r], L)=(2, 2, 3), the third design
154 has configuration (N[t], N[r], L)=(2, 4, 1), and the fourth design 156 has configuration (N[t], N[r], L)=(4, 2, 1).
Clearly, increasing N[t], N[r], or L causes the outage probability to decrease. When the product N[t]N[r](L+1) is fixed, for example N[t]N[r](L+1)=16, the outage probability has the same slope for
moderate to high SNR values. However, because L determines the channel variance and N[t ]controls the power splitting factor, L and N[t ]have different effects on the outage probability.
FIG. 12 is a graph illustrating results of a comparison of average BER for the described ST coding techniques over flat-fading channels with varying values of SNR. In particular, binary phase-shift
keying (BPSK) is used to signal over an (N[t], N[r]) configuration at transmission rate R=3 bits pcu. The flat-fading channels are i.i.d. Gaussian distributed with zero mean, and unit variance. The
channel coherence time is greater than N[t]. FDFR transmissions are decoded using SD. Clearly, the three design schemes, Design A (160), Design B (162), and Design C (164) achieve similar performance
with different encoders and the same diversity. Designs A (160), B (162), and C (164) are compared with two conventional ST codes, a high performance representative LCP-STC (166) described in G. B.
Giannakis et al. “Space-time diversity systems based on linear constellation precoding,” IEEE Transactions on Wireless Communications, vol 2, no. 2, pp. 294-309, Mar. 2003, which is incorporated by
reference herein, and a high-rate representative V-BLAST (168) described in P. W. Wolniansky et al. “V-BLAST: An architecture for realizing very high data rates over the rich-scattering wireless
channel,” Proceedings of URSI International Symposium Signals, Systems, and Electronics, Italy, Sep. 1998, which is incorporated herein by reference. In order to maintain the same transmission rate,
8 QAM is employed for LCP-STC (166) and BPSK for V-BLAST (168). For V-BLAST (168) SD is performed per time slot. At high SNR V-BLAST (168) exhibits lower performance than LCP-ST codes because the
maximum achievable diversity for uncoded V-BLAST is N[r]. Clearly, the described FDFR ST coding techniques have higher performance than V-BLAST (168) and LCP-STC (166) because V-BLAST (168) does not
achieve full diversity while LCP-STC (166) incurs rate loss. Furthermore, the described FDFR designs achieve the same full diversity order as LCP-STC (166).
FIG. 13 is a graph illustrating results of a comparison of average BER for the described ST coding techniques over flat-fading channels with varying values of SNR. In particular,
performance-complexity tradeoffs in Corollary 1 are demonstrated with a (N[t], N[r])=(4, 4) configuration with fixed transmission power for each of the following three different designs. Assume that
the rate is fixed at 4 bits pcu. The first design (170) employs BPSK per layer and includes four layers as given in equation (9) and forms the array given in equation (55). The second design (172)
employs quadrature phase-shift keying (QPSK) per layer and includes two layers that form the array given in equation (56). The third design (174) employs 16 QAM and includes one layer that forms the
array given in equation (57).
$C_1 = \begin{bmatrix} u_1(1) & u_4(2) & u_3(3) & u_2(4) \\ u_2(1) & u_1(2) & u_4(3) & u_3(4) \\ u_3(1) & u_2(2) & u_1(3) & u_4(4) \\ u_4(1) & u_3(2) & u_2(3) & u_1(4) \end{bmatrix} \qquad (55)$
$C_2 = \begin{bmatrix} u_1(1) & 0 & u_3(3) & 0 \\ 0 & u_1(2) & 0 & u_3(4) \\ u_3(1) & 0 & u_1(3) & 0 \\ 0 & u_3(2) & 0 & u_1(4) \end{bmatrix} \qquad (56)$
$C_3 = \begin{bmatrix} u_1(1) & 0 & 0 & 0 \\ 0 & u_1(2) & 0 & 0 \\ 0 & 0 & u_1(3) & 0 \\ 0 & 0 & 0 & u_1(4) \end{bmatrix} \qquad (57)$
All three designs 170, 172, and 174 achieve similar diversity order when SD is performed at the receiver. However, each design 170, 172, and 174 has substantially different coding gains. The difference in coding gains is primarily a result of their respective constellation sizes. The decoding complexity also differs among the three designs, being highest for the first design 170 and lowest for the third design 174, because fewer layers result in a shorter block length to be decoded.
Maintaining the same transmission rate, it can be determined from the decoding complexity for each design that a tradeoff between decoding complexity and performance can be selected. In particular, increasing decoding complexity results in an increase in performance.
First, second, and third designs 170, 172, and 174, respectively, are compared with the V-BLAST design 176 and the D-BLAST design 178 described in G. J. Foschini et al. “Layered space-time architecture for wireless communication in fading environments when using multiple antennas,” Bell Labs Technical Journal, vol. 2, Autumn 1996, which is incorporated by reference herein. For the
D-BLAST design 178, the ST matrix is given according to equation (58).
$C_{D\text{-BLAST}} = \begin{bmatrix} u_1(1) & u_2(1) & u_3(1) & & & \\ & u_1(2) & u_2(2) & u_3(2) & & \\ & & u_1(3) & u_2(3) & u_3(3) & \\ & & & u_1(4) & u_2(4) & u_3(4) \end{bmatrix}_{4 \times 6} \qquad (58)$
In order to maintain the same transmission rate while ensuring affordable decoding complexity, three layers are used in D-BLAST design 178 and QPSK modulation is selected to maintain the same rate.
Both performance and decoding complexity of the D-BLAST design 178 in equation (58) lie between the performance and decoding complexity for the first and second designs 170 and 172, while the D-BLAST
design 178 has longer decoding delay. The V-BLAST design 176 enables a compromise between complexity and performance in comparison to the other illustrated designs. In this example, the V-BLAST design 176 has higher performance than the third design 174 at large SNR.
FIG. 14 is a graph illustrating results of a comparison of average BER for the described ST coding techniques over flat-fading channels with varying SNR. In particular, rate-complexity tradeoffs are
demonstrated with a (N[t], N[r])=(4, 4) configuration with fixed transmission power for each of the following three different designs. The first, second, and third design is the same as described
above, but BPSK is used for each of the three designs. Consequently, the rates for the first, second, and third modified designs, 180, 182, and 184, respectively, are 4, 2, and 1 bits pcu. Again, the
decoding complexity differs among the first, second, and third modified designs, decreasing as the number of layers decreases.
FIG. 14 illustrates the performance of each of the three designs with different rates. Since the total transmission power is fixed, the lower rate designs that have fewer layers also have higher
symbol power. Clearly, the low rate designs have higher performance than the high rate designs at low SNR. Moreover, the slopes of the curves are not identical. This is because the SNR is not
sufficiently high to achieve full diversity; the modified designs have different coding gains, and in order to observe identical slopes, simulations would have to be performed below a BER of
10^(-n) [exponent not legible in this copy], which is not necessary for the designs that do not include GF error control codes.
The modified designs 180, 182, and 184 are also compared with the LD codes 186 described in R. W. Heath et al. “Linear dispersion codes for MIMO systems based on frame theory,” IEEE Transactions on
Signal Processing, vol. 50, no. 10, pp. 2429-2441, Oct. 2002, which is incorporated herein by reference, with BPSK at transmission rate 2 bits pcu. The complexity for decoding the LD code 186 with SD
is the same as the second modified design 182. Clearly, the second modified design 182 has higher performance than the LD code 186.
FIG. 15 is a graph illustrating results of a comparison of average BER for the described ST coding techniques over frequency-selective channels with varying values of SNR. In particular,
frequency-selective channels with parameters (N[t], N[r], L)=(2, 2, 1) are considered. Furthermore, the channel taps are independent, and for each channel the power of the taps satisfies an
exponentially decaying profile. The described FDFR design 190 is compared with two conventional space-time-frequency codes, V-BLAST-OFDM 192 described in R. J. Piechocki et al. “Performance
evaluation of BLAST-OFDM enhanced Hiperlan/2 using simulated and measured channel data,” Electronics Letters, vol. 37, no. 18, pp. 1137-1139, Aug. 30, 2001, which is incorporated herein by reference,
and GSTF 194 described in G. B. Giannakis et al. “Space-Time-Frequency Coded OFDM over Frequency-Selective Fading Channels,” IEEE Transactions on Signal Processing, pp. 2465-2476, Oct. 2002, which is
incorporated herein by reference. GSTF 194 is basically a concatenation of ST-OD with OFDM.
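For background on the ST-OD building block mentioned here, the following sketch (our illustration, not code from the patent) works through the classical 2×2 Alamouti orthogonal design, showing that linear combining at the receiver separates the two symbols over a flat, noiseless channel:

```python
# Illustrative sketch (not from the patent): the 2x2 Alamouti space-time
# orthogonal design (ST-OD), the kind of building block that GSTF
# concatenates with OFDM.  Two symbols are sent over two time slots from
# two antennas; one receive antenna with flat channel gains h1, h2
# (noise omitted) separates the symbols by linear combining.
def alamouti_roundtrip(s1, s2, h1, h2):
    # Slot 1: antennas transmit (s1, s2); slot 2: (-conj(s2), conj(s1)).
    r1 = h1 * s1 + h2 * s2
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()
    # Linear combining yields (|h1|^2 + |h2|^2) * s_k for each symbol.
    g = abs(h1) ** 2 + abs(h2) ** 2
    est1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    est2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return est1, est2

s1, s2 = (1 + 1j) / 2, (1 - 1j) / 2   # two QPSK symbols (example values)
h1, h2 = 0.8 - 0.3j, -0.2 + 0.9j      # arbitrary flat channel gains
e1, e2 = alamouti_roundtrip(s1, s2, h1, h2)
print(abs(e1 - s1) < 1e-12 and abs(e2 - s2) < 1e-12)  # True
```

The orthogonality of the two transmit slots is what makes the symbol-by-symbol combining exact here.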
The block size P is selected as P=N[t](L+1)=4. QPSK is employed for the described FDFR design 190 and V-BLAST-OFDM 192 while 16-QAM is employed for the GSTF design 194 in order to fix the rate at
R=8/3 bits pcu. At the receiver, SD is employed for each of the designs 190, 192, and 194. From the slopes of the BER curves, the described FDFR design 190 and GSTF 194 achieve full diversity while
V-BLAST-OFDM 192 only achieves diversity order N[r]. The described FDFR design 190 has higher performance than the GSTF design 194 because a smaller constellation size is used for the FDFR design
190. The decoding complexity of the FDFR design 190 involves a term in N[t]^2, while that of the V-BLAST-OFDM design 192 involves a term in N[r]^3 [the full complexity expressions, including that of
the GSTF design 194, are not legible in this copy].
FIG. 16 is a graph illustrating results of a comparison of average BER for the described ST coding techniques over time-selective channels with varying values of SNR. In particular, channels are
considered that are changing from symbol to symbol, but not independently. The channels for different antenna pairs are independent with a (N[t], N[r]) configuration [values not legible in this copy]. Each channel is generated based on Jakes' model
using carrier frequency f[0]=5.2 GHz, sampling period T[s]=43 μs and mobile speed v[max]=100 km/hr. Thus, the maximum frequency shift is f[max]=481 Hz. With block size P=48, it follows that Q=2.
Therefore, the Doppler diversity for each time-varying channel is at most Q+1=3. Three encoders with Θ[g]'s of different size are simulated to illustrate the effect of Doppler diversity. The first
encoder 200 has size N[r]×N[t] and the P/N layers are interlaced as illustrated in FIG. 9. At the receiver, the N[t] layers are jointly decoded using SD decoding. The second encoder 202 is based on a
2N[t]×2N[t] LCF encoder Θ[g] and is also decoded via SD. The decoding complexity of the second encoder 202 involves a term in N[t]^2 [the full expression is not legible in this copy]. Thus, the
decoding complexity of the second encoder 202 is higher than the decoding complexity of the first encoder 200. The size of the third encoder is increased to 3N[t]×3N[t]. Clearly, as the size of Θ[g]
increases, the diversity order increases since the Doppler effects increase. At the same time, the decoding complexity also increases. When the size of Θ[g] is greater than N[t](Q+1), performance
gains saturate.
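The Doppler figures quoted above can be checked with a short calculation. The relation Q = ⌈2·f_max·P·T_s⌉ is our assumption, chosen because it reproduces the stated values:

```python
import math

# Reproduces the Doppler numbers quoted for FIG. 16 from the stated
# parameters.  The formula Q = ceil(2 * f_max * P * T_s) is an assumption
# consistent with the text (f_max = 481 Hz and Q = 2 for P = 48).
c = 3e8                      # speed of light, m/s
f0 = 5.2e9                   # carrier frequency, Hz
v = 100 * 1000 / 3600        # mobile speed: 100 km/h in m/s
Ts = 43e-6                   # sampling period, s
P = 48                       # block size

f_max = v * f0 / c           # maximum Doppler (frequency) shift
Q = math.ceil(2 * f_max * P * Ts)
print(round(f_max), Q, Q + 1)   # 481 2 3  (Doppler diversity Q+1 = 3)
```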
The described ST coding techniques can be embodied in a variety of transmitters and receivers including base stations, cell phones, laptop computers, handheld computing devices, personal digital
assistants (PDA's), and the like. The devices may include a digital signal processor (DSP), field programmable gate array (FPGA), application specific integrated circuit (ASIC) or similar hardware,
firmware and/or software for implementing the techniques. If implemented in software, a computer readable medium may store computer readable instructions, i.e., program code, that can be executed by
a processor or DSP to carry out one or more of the techniques described above. For example, the computer readable medium may comprise random access memory (RAM), read-only memory (ROM), non-volatile
random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, or the like. The computer readable medium may comprise computer readable instructions that
when executed in a wireless communication device, cause the wireless communication device to carry out one or more of the techniques described herein. These and other embodiments are within the scope
of the following claims.
Stochastic complexity and modeling,” The Annals of Statistics
Results 1 - 10 of 17
- JOURNAL OF THE ASSOCIATION FOR COMPUTING MACHINERY , 1997
Cited by 317 (66 self)
We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit
sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum
achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching
leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in
this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
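The experts setting summarized in this abstract can be illustrated with a small simulation. The multiplicative-weights learner and its parameters below are a standard textbook variant, not necessarily the exact algorithm analyzed in the paper; the expert behaviors are made up:

```python
import random

# Small simulation of "prediction with expert advice": one expert is
# right 90% of the time, the others guess.  The learner predicts the
# weighted average of the experts' bits and updates weights
# multiplicatively; its expected-mistake count stays close to the best
# expert's, as in the regret bounds the abstract describes.
random.seed(0)

T, K = 2000, 5
eta = 0.05                                   # learning rate (assumption)

bits = [random.randint(0, 1) for _ in range(T)]
def advice(k, t):
    if k == 0:                               # the "good" expert
        return bits[t] if random.random() < 0.9 else 1 - bits[t]
    return random.randint(0, 1)

w = [1.0] * K
learner_loss = 0.0
expert_loss = [0] * K
for t in range(T):
    preds = [advice(k, t) for k in range(K)]
    p = sum(wi * xi for wi, xi in zip(w, preds)) / sum(w)
    learner_loss += abs(p - bits[t])         # expected mistakes of the
                                             # randomized prediction
    for k in range(K):
        miss = int(preds[k] != bits[t])
        expert_loss[k] += miss
        w[k] *= (1 - eta) ** miss            # multiplicative update

best = min(expert_loss)
print(round(learner_loss), best)
```

With these parameters the learner's excess loss over the best expert grows roughly like the square root of the best expert's mistakes, far below the trivial T/2 baseline.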
- Computational Linguistics , 1998
Cited by 106 (4 self)
this paper, we confine ourselves to the former issue, and refer the interested reader to Li and Abe (1996), which deals with the latter issue
, 1995
Cited by 92 (1 self)
We present a simple characterization of equivalent Bayesian network structures based on local transformations. The significance of the characterization is twofold. First, we are able to easily
prove several new invariant properties of theoretical interest for equivalent structures. Second, we use the characterization to derive an efficient algorithm that identifies all of the compelled
edges in a structure. Compelled edge identification is of particular importance for learning Bayesian network structures from data because these edges indicate causal relationships when certain
assumptions hold.
- IEEE Transactions on Information Theory , 1998
Cited by 67 (7 self)
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal
MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality,
which in broad terms states that the principle is valid when the data are random, relative to every contemplated hypothesis and also these hypotheses are random relative to the (universal) prior.
Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability
of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets then application of the ideal principle turns into
Kolmogorov's mi...
- Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Lecture Notes in Artificial Intelligence , 2002
Cited by 51 (20 self)
Solomonoff's optimal but noncomputable method for inductive inference assumes that observation sequences x are drawn from a recursive prior distribution p(x). Instead of using the unknown p() he
predicts using the celebrated universal enumerable prior M(), which for all x exceeds any recursive p(), save for a constant factor independent of x. The simplicity measure M() naturally implements
"Occam's razor" and is closely related to the Kolmogorov complexity of x. However, M assigns high probability to certain data that are extremely hard to compute. This does not match our intuitive
notion of simplicity. Here we suggest a more plausible measure derived from the fastest way of computing data. In absence of contrarian evidence, we assume that the physical world is generated by a
computational process, and that any possibly infinite sequence of observations is therefore computable in the limit (this assumption is more radical and stronger than Solomonoff's).
- Evolutionary Computation , 1997
Cited by 37 (15 self)
This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as
structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid
evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle.
The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are
provided on two chaotic time series prediction problems of practical interest.
- IN PROCEEDINGS OF THE THIRTY-FIRST SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE , 1990
"... Formalizing the process of natural induction and justi-fying its predictive value is not only basic to the phi- ..."
, 2003
Cited by 15 (9 self)
Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in
the field of theoretically optimal and practically feasible algorithms for prediction, search, inductive inference based on Occam's razor, problem solving, decision making, and reinforcement learning
in environments of a very general type. Since inductive inference is at the heart of all inductive sciences, some of the results are relevant not only for AI and computer science but also for
physics, provoking nontraditional predictions based on Zuse's thesis of the computer-generated universe.
, 2009
Cited by 13 (1 self)
In this paper, I review the literature on the formulation and estimation of dynamic stochastic general equilibrium (DSGE) models with a special emphasis on Bayesian methods. First, I discuss the
evolution of DSGE models over the last couple of decades. Second, I explain why the profession has decided to estimate these models using Bayesian methods. Third, I briefly introduce some of the
techniques required to compute and estimate these models. Fourth, I illustrate the techniques under consideration by estimating a benchmark DSGE model with real and nominal rigidities. I conclude by
offering some pointers for future research.
, 1990
"... A new noniterative approach to determine displacement vector fields with discontinuities is described. ..."
In this Geometer’s Sketchpad lesson, equilateral triangles are created and defined
Title – Classifying Triangles – Equilateral Triangles
By – Kim Ostling
Primary Subject – Math
Secondary Subjects – Computers / Internet
Grade Level – Third
• A triangle is a three-sided polygon whose interior angles add up to 180 degrees.
• Triangles are classified by either the measure of their sides or the measure of their angles.
• Geometer’s Sketchpad will be used in this lesson to create different triangles and to compare their different parts.
• G.GS.03.04 Identify, describe, compare, and classify two-dimensional shapes, e.g., parallelogram, trapezoid, circle, rectangle, square, and rhombus, based on their component parts (angles, sides,
vertices, line segment) and on the number of sides and vertices.
Learning Resources and Materials:
• Geometer’s Sketchpad
• Journals
• Pencils
Development of Lesson:
• Introduction
□ Activate prior knowledge by asking the students:
1. What is a triangle?
2. How many sides does a triangle have?
3. Is there anything we know about the angles of the triangle?
4. Is there anything we know about the sides of a triangle?
5. Is there anything else you would like to share about triangles?
• Methods/Procedures:
□ Break the students off into pairs.
□ While working in pairs, have the students open Geometer’s Sketchpad.
(Students will have prior experience in working with Geometer’s Sketchpad.)
□ Have the students construct an equilateral triangle by following the steps below:
1. Construct a line using the line segment tool.
2. Highlight one endpoint and label it “A” (click on the “Display” menu, then on “Label Point”).
3. Highlight the second endpoint and label it “B”.
4. Select B and the line AB, click on the “Construct” menu and construct a circle with the center B and radius of the line AB.
5. Select the point A and the line segment AB, click on the “Construct” menu and construct a circle with the center A and the radius of the line AB.
6. Label the point where the two circles intersect “C” (click on the “Display” menu, then on the “Label Point”).
7. Construct lines segments AC and BC by using the line segment tool.
8. Click off of the page, then click on the two circles only.
9. Go to “Edit” and select “Action Buttons”, click on the “Hide/Show” button.
10. Click on the new bullet on the page to hide your circles.
11. Click on the line connecting the points A and B, then click on “Measure”, then “Length”.
12. Click on the line connecting the points A and C, then click on “Measure”, then “Length”.
13. Click on the line connecting the points B and C, then click on “Measure”, then “Length”.
□ Stop and ask the students:
☆ What do you notice about the measurement of all three of the sides?
☆ What happens to the measurement of all three sides when you press on point A and drag?
☆ What happens to the measurement of all three sides when you press on point B and drag?
1. Click on the points ABC and click on the “Measure”, then “Angle” button.
2. Click on the points BCA and click on the “Measure”, then “Angle” button.
3. Click on the points CAB and click on the “Measure”, then “Angle” button.
□ Stop and ask the students:
☆ What do you notice about the measurement of all three of the angles?
☆ What happens to the measurement of all three angles when you press on point A and drag?
☆ What happens to the measurement of all three angles when you press on point B and drag?
□ What the students just created is called an “Equilateral Triangle”.
□ Together with the students come up with a definition for an equilateral triangle:
□ An Equilateral Triangle is a three-sided polygon where all of the sides are equal and all of the angles are congruent to one another.
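The two-circle construction above can be verified numerically outside of Sketchpad. The coordinates below place A and B one unit apart, with C at the upper intersection of the two circles; the specific values are illustrative:

```python
import math

# Numerical check of the two-circle construction: C is an intersection of
# the circle centered at A through B and the circle centered at B through
# A, so triangle ABC is equilateral (equal sides, three 60-degree angles).
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (0.5, math.sqrt(3) / 2)   # upper intersection of the two circles

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(p, q, r):
    # Interior angle at vertex q of triangle pqr, in degrees (law of cosines).
    a, b, c = dist(q, r), dist(p, r), dist(p, q)
    return math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))

sides = [dist(A, B), dist(B, C), dist(C, A)]
angles = [angle(B, A, C), angle(A, B, C), angle(A, C, B)]
print([round(s, 6) for s in sides])    # [1.0, 1.0, 1.0]
print([round(t, 6) for t in angles])   # [60.0, 60.0, 60.0]
```

Dragging A or B in Sketchpad corresponds to rescaling these coordinates, which leaves the equal-sides and 60-degree properties unchanged.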
• Accommodations/Adaptations:
□ Strategically pair the students so that a student who is excelling is partnered with a student who is lagging.
□ Give more direct instruction to students who are having a difficult time completing the task.
• Assessment/Evaluation:
□ The students will be assessed at the beginning of the lesson on their prior knowledge of triangles.
□ During the lesson, the teacher will monitor the pairs and provide helpful feedback as needed.
□ After the lesson, the teacher will check for understanding by viewing the student’s Sketchpad to see if the goal was met and to determine if the student understands the concept.
□ The student will have met the benchmark after they are able to classify all the different triangles, as well as other two-dimensional shapes.
• Closure:
□ The students will reflect on what they learned by writing and drawing in their journals an equilateral triangle and describing the unique characteristics of one.
□ If the students did not grasp the concept then more direct instruction will be given after the activity.
E-Mail Kim Ostling ! | {"url":"http://lessonplanspage.com/mathcidefineequilateraltriangleswithgeometerssketchpad3-htm/","timestamp":"2014-04-16T15:00:25Z","content_type":null,"content_length":"47672","record_id":"<urn:uuid:a6de7704-be95-4b2a-9b41-775e2d27bf37>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstractions of Finite-State Machines and Immediately-Detectable Output Faults
March 1992 (vol. 41 no. 3)
pp. 325-338
K.N. Oikonomou, "Abstractions of Finite-State Machines and Immediately-Detectable Output Faults," IEEE Transactions on Computers, vol. 41, no. 3, pp. 325-338, March, 1992.
A general way to make a smaller model of a large system, or to represent the fact that the observations possible on it are limited, is to apply an abstraction A to it. If the system is modeled by a
finite-state machine M, the abstraction consists of three partitions, one for each of the state, input, and output sets. States, inputs, or outputs lumped together in one block by the partition are
indistinguishable from each other, resulting in a nondeterministic machine M_A. An observer of M_A, whose task is to detect erroneous behavior in M, is prevented by the abstraction from
seeing some of the faults. The authors investigate the choice of an abstraction that is optimal with respect to immediately detectable faults in the output map. It is shown that this requires solving
an NP-complete 'set-partitioning' problem. A polynomial-time algorithm for finding an approximately optimal partition of either the states or the inputs of M, together with a way to check the
goodness of the approximation is given. This algorithm also solves the undetectable fault minimization problem exactly, and in polynomial time.
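A toy example (our construction, not taken from the paper) shows the mechanism the abstract describes: lumping states makes the abstract machine nondeterministic, and an output fault whose faulty move is still a legal move of the abstraction is immediately undetectable by the observer.

```python
# Toy illustration of abstracting a finite-state machine by partitioning
# its states.  This is our own construction for intuition, not the
# paper's formal development.

# Deterministic Mealy machine M: (state, input) -> (next_state, output)
M = {
    ("s0", "a"): ("s1", 0),
    ("s1", "a"): ("s0", 1),
}

# State partition: s0 and s1 are lumped into one block "B".
block = {"s0": "B", "s1": "B"}

# Abstract machine M_A: (block, input) -> set of possible (block, output)
# moves.  Lumping states makes M_A nondeterministic.
MA = {}
for (s, i), (t, o) in M.items():
    MA.setdefault((block[s], i), set()).add((block[t], o))

print(sorted(MA[("B", "a")]))  # [('B', 0), ('B', 1)] -- two legal moves

# A fault flips the output of ("s0", "a") from 0 to 1.  The faulty move
# ("B", 1) is already legal in M_A, so an observer of M_A cannot detect
# this output fault; choosing the partition well minimizes such cases.
faulty_move = ("B", 1)
print(faulty_move in MA[("B", "a")])  # True: fault hidden by abstraction
```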
Index Terms:
set partitioning; finite-state machines; immediately-detectable output faults; abstraction; nondeterministic machine; NP-complete; polynomial-time algorithm; approximately optimal partition;
computational complexity; data structures; fault tolerant computing; finite automata.
K.N. Oikonomou, "Abstractions of Finite-State Machines and Immediately-Detectable Output Faults," IEEE Transactions on Computers, vol. 41, no. 3, pp. 325-338, March 1992, doi:10.1109/12.127444
A diffusion process in a Brownian environment with drift
Results 1 - 10 of 15
- Adv. Appl. Prob , 1992
Cited by 98 (9 self)
Abstract: This is the second part of our survey on exponential functionals of Brownian motion. We focus on the applications of the results about the distributions of the exponential functionals,
which have been discussed in the first part. Pricing formulas for Asian call options, explicit expressions for the heat kernels on hyperbolic spaces, diffusion processes in random environments, and
extensions of Lévy’s and Pitman’s theorems are discussed.
- Probabilty Surveys , 2005
Cited by 34 (4 self)
Abstract: This text surveys properties and applications of the exponential functional ∫_0^t exp(−ξ_s) ds of real-valued Lévy processes ξ = (ξ_t, t ≥ 0).
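The exponential functional above can be explored with a toy Monte Carlo sketch (ours, not from the surveyed papers). With a positive drift μ the process ξ_s = B_s + μs is transient, so A_t = ∫_0^t exp(−ξ_s) ds converges to a finite limit as t grows:

```python
import math
import random

# Toy Euler/Riemann-sum simulation of A_t = ∫_0^t exp(-ξ_s) ds for
# ξ_s = B_s + μ s (Brownian motion plus positive drift μ).  For μ > 0
# the functional converges to a finite random limit; the step size,
# horizon, and sample count below are illustrative choices.
random.seed(1)

def exp_functional(mu, t_max, dt):
    xi, a = 0.0, 0.0
    for _ in range(int(t_max / dt)):
        a += math.exp(-xi) * dt                        # Riemann sum for A_t
        xi += mu * dt + random.gauss(0.0, math.sqrt(dt))  # Euler step for xi
    return a

samples = [exp_functional(mu=1.0, t_max=30.0, dt=0.01) for _ in range(200)]
mean = sum(samples) / len(samples)
print(0 < mean < 20)   # finite, positive: the drifted case converges
```

For μ ≤ 0 the same functional diverges with time, which is the regime behind the slow, localized behavior in the Sinai-type results cited on this page.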
, 2007
Cited by 15 (6 self)
We consider transient random walks in random environment on Z with zero asymptotic speed. A classical result of Kesten, Kozlov and Spitzer says that the hitting time of the level n converges in law,
after a proper normalization, towards a positive stable law, but they do not obtain a description of its parameter. A different proof of this result is presented, that leads to a complete
characterization of this stable law. The case of Dirichlet environment turns out to be remarkably explicit.
- Trans. Amer. Math. Soc , 1999
Cited by 13 (5 self)
Abstract. We are interested in the asymptotic behaviour of a diffusion process with drifted Brownian potential. The model is a continuous time analogue to the random walk in random environment
studied in the classical paper of Kesten, Kozlov, and Spitzer. We not only recover the convergence of the diffusion process which was previously established by Kawazu and Tanaka, but also obtain all
the possible convergence rates. An interesting feature of our approach is that it shows a clear relationship between drifted Brownian potentials and Bessel processes. 1.
, 2000
"... this paper is to study the favourite points. For n ≥ 0 and x ∈ Z, ..."
, 2008
Cited by 7 (7 self)
Abstract: We consider Sinai’s random walk in random environment. We prove that over the time interval [1,n], Sinai’s walk sojourns in a small neighborhood of the point of localization for almost the
totality of this amount of time. Moreover, the local time at the point of localization normalized by n converges in probability to a well defined random variable of the environment. From these results
we get applications to the favorite sites of the walk and to the maximum of the local time. 1
, 2008
Cited by 3 (0 self)
We consider a diffusion process X in a random Lévy potential V which is a solution of the informal stochastic differential equation dXt = dβt −
, 2008
Some properties of the rate function of quenched large deviations for random walk in random environment
How many solutions does the system have? x = -4y + 4 2x + 8y = 8 A. one solution B. two solutions C. infinitely many solutions D. no solution
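Substituting the first equation into the second gives 2(−4y + 4) + 8y = −8y + 8 + 8y = 8, an identity that holds for every y, so the two equations describe the same line and the system has infinitely many solutions (choice C). A short check of this reasoning:

```python
# The system:  x = -4y + 4   and   2x + 8y = 8.
# Substituting the first equation into the second yields
# 2(-4y + 4) + 8y = 8, i.e. 8 = 8 for every y: the equations
# describe one and the same line.
def second_equation_holds(y):
    x = -4 * y + 4             # point on the first line
    return 2 * x + 8 * y == 8  # does it satisfy the second equation?

# The identity holds for every sampled y, so there are infinitely many
# solutions -- answer choice C.
print(all(second_equation_holds(y) for y in range(-100, 101)))  # True
```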
A New Method for Short Multivariate Fuzzy Time Series Based on Genetic Algorithm and Fuzzy Clustering
Advances in Fuzzy Systems
Volume 2013 (2013), Article ID 494239, 10 pages
Research Article
^1Faculty of Economics and Political Science, Cairo University, Egypt
^2Sadat Academy for Management Sciences, Cairo, Egypt
Received 16 September 2013; Accepted 19 October 2013
Academic Editor: Katsuhiro Honda
Copyright © 2013 Kamal S. Selim and Gihan A. Elanany. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Forecasting activities play an important role in our daily life. In recent years, fuzzy time series (FTS) methods were developed to deal with forecasting problems. FTS attracted researchers because
of its ability to predict the future values in some critical situations where most standard forecasting models are doubtfully applicable or produce bad fittings. However, some critical issues in FTS
are still open; these issues are often subjective and affect the accuracy of forecasting. In this paper, we focus on improving the accuracy of FTS forecasting methods. The new method integrates the
fuzzy clustering and genetic algorithm with FTS to reduce subjectivity and improve its accuracy. In the new method, the genetic algorithm is responsible for selecting the proper model. Also, the
fuzzy clustering algorithm is responsible for fuzzifying the historical data, based on its membership degrees to each cluster, and using these memberships to defuzzify the results. This method
provides better forecasting accuracy when compared with other existing methods.
1. Introduction
Time series are widely observed in many aspects of our lives; therefore, the prediction of future values based on the past and present information is very useful. In practice, there are several
emergent domains that require dealing with short multivariate time series. As a consequence, the prediction of such time series arises in many situations. There are many existing techniques that are
well proven in forecasting with multivariate time series data, but they put constraints on the minimum number of observations and require distribution assumptions to be made regarding the observed
time series.
Fuzzy time series (FTS) models have become increasingly popular in recent years because of their ability to deal with time series data without the need for validating any theoretical assumptions.
However, how to select the proper model, how to partition the universe of discourse and determine effective lengths of intervals objectively to fuzzify the numerical data, and how to defuzzify the
results are still open critical issues. These issues are very important and affect the model accuracy. The paper probes into these three questions in the modeling of FTS. The new method incorporates
the fuzzy clustering and genetic algorithms (GA) with FTS to reduce its subjectivity and improve its accuracy. More specifically, the new method uses the integer genetic algorithm to search for the
optimal model that fits the available data. (In this paper, to solve integer optimization problems, we used the MI-LXPM algorithm which is a suitably modified and extended version of the real coded
genetic algorithm, LXPM. In MI-LXPM, a tournament selection procedure, Laplace crossover, and power mutation are modified and extended for integer decision variables. Moreover, a special truncation
procedure for satisfaction of integer restriction on decision variables and a “parameter free” penalty approach for constraint handling are used in MI-LXPM algorithm. More details of these operators
are defined in [1].) In addition, fuzzy clustering is used to partition the universe of discourse objectively. Furthermore, the method employs clustering centers and the observations’ fuzzy
memberships to defuzzify the results, instead of the centers of each interval, which are used in numerous existing models. The empirical results show that the new model forecasts with higher accuracy than existing models.
GA was used before in the literature of fuzzy time series. For example, Chen and Chung [2] use GA to tune up the length of each interval in the universe of discourse for one factor (variable), Lee et
al. [3] use genetic algorithms to adjust the length of each interval in the universe of discourse for two factors, Kang [4] uses GA to obtain the optimal fuzzy membership function, while Egrioglu [5]
uses genetic algorithm for finding the elements of fuzzy relation matrix.
Many authors applied fuzzy clustering in the fuzzification process. For example, Cheng et al. [6], Li et al. [7], Egrioglu [5], and Yolcu [8] used fuzzy c-means for the fuzzification of time series.
While Li et al. and Egrioglu applied their methods to univariate time series, Cheng et al. applied their method to both univariate and multivariate time series.
The remainder of this paper is organized as follows. Section 2 describes in brief the concept of FTS and its models. Section 3 presents the new FTS method; it begins with a method overview. Then the
details of each step of the new FTS forecasting model are described. In Section 4, the method is evaluated by comparing its forecasts with those derived from other related FTS methods based on an
example. Finally, Section 5 summarizes the main conclusions.
2. The Basic Concepts of Fuzzy Time Series and Its Models
Fuzzy set theory provides a powerful framework to cope with vague or ambiguous problems and can express linguistic values and human subjective judgments of natural language. In 1993, Song and Chissom
[9, 10] successfully employed the concept of fuzzy sets presented by Zadeh [11] and the application of fuzzy logic to approximate reasoning presented by Mamdani [12] to develop the foundation of FTS
forecasting. Initially, FTS was proposed to deal with forecasting problems where the historical data are linguistic values. But recently FTS models have become increasingly popular because of their
ability to deal with quantitative time series data with limited or even no theoretical assumptions in contrast with conventional time series models. The main difference between a traditional time
series and FTS is the values of observations. In a traditional time series, the observations are represented by crisp numerical values. However, in a FTS, the values of observations are represented
by fuzzy sets. The basic definitions of FTS could be presented as follows [10].
Let $R$ be the set of all real values; $Y(t)$ $(t = \ldots, 0, 1, 2, \ldots)$, a subset of $R$, the universe of discourse; $F(t)$ a fuzzy time series; $f_i(t)$ $(i = 1, 2, \ldots)$ fuzzy sets defined on $Y(t)$; and $R(t, t-1)$ the fuzzy relation between $F(t)$ and $F(t-1)$.

FTS. Let $Y(t)$, a subset of $R$, be the universe of discourse on which fuzzy sets $f_i(t)$ $(i = 1, 2, \ldots)$ are defined, and let $F(t)$ be the collection of $f_i(t)$, $i = 1, 2, \ldots$. Then $F(t)$ is called a FTS on $Y(t)$, $t = \ldots, 0, 1, 2, \ldots$.

$F(t)$ can be understood as a linguistic variable and $f_i(t)$ as the possible linguistic values of $F(t)$. Because the values of $F(t)$ can be different at different times, $F(t)$ is a function of time $t$. Also, since the universes of discourse can be different at different times, $Y(t)$ is used for the universe of discourse at time $t$.

The $p$th Order Model. Suppose $F(t)$ is caused by $F(t-1), F(t-2), \ldots,$ and $F(t-p)$ simultaneously. This relation can be expressed as the following fuzzy relational equation:
$$F(t) = \bigl(F(t-1) \times F(t-2) \times \cdots \times F(t-p)\bigr) \circ R_a(t, t-p), \tag{1}$$
where $\times$ is the Cartesian product and $R_a(t, t-p)$ is defined as the fuzzy relation between $F(t)$ and $F(t-1), F(t-2), \ldots, F(t-p)$. Then (1) is called the $p$th order model of $F(t)$. When $p = 1$, the model is called the first-order model.
Despite the large number of FTS methods developed since the introduction of the subject by Song and Chissom [9, 10, 16], the main steps in most of them are similar to those of the first method [9], which can be summarized as follows:
(1) defining and partitioning the universe of discourse, defining fuzzy sets, and fuzzifying the time series;
(2) establishing fuzzy logical rules;
(3) grouping the fuzzy logical rules;
(4) calculating the forecasted fuzzy sets;
(5) defuzzifying the forecasted results.
Most of the work done in FTS employs one variable to build a fuzzy time-series model, that is, univariate model. Recently, FTS models have employed multiple variables in forecasting processes to deal
with more complex forecasting problems, that is, multivariate model [17, 18]. It is important to note that the term “multivariate” in most FTS literature is limited to the prediction of one variable
of interest, taking into consideration the multiple variables that could have effects on it, into the prediction process. To forecast some or all of the other variables, the same process has to be
individually repeated for each variable. In this paper, we adopt the same understanding to deal with multivariate fuzzy time series data.
3. The New Method
In this section, we present the new forecasting method for fuzzy time series. Section 3.1 provides an overview of the new method. Section 3.2 presents the notations and definitions. Finally, the
detailed description of the new method is given in Section 3.3.
3.1. Method Overview
Figure 1 summarizes the structure of the new method in the following steps.
(1) Set the upper and lower bounds for the model parameters, the probability of crossover ($p_c$), and the probability of mutation ($p_m$). Also set the stopping criteria (the algorithm stops if the weighted average relative change in the best fitness function value over generations is less than or equal to a tolerance (TolFun) or a prespecified maximum number of generations is reached [19]), the maximum number of generations, and the fitness function tolerance (TolFun).
(2) In the language of GA, the method begins by creating a random initial population of chromosomes. Each chromosome suggests one possible model and consists of a number of genes equal to the number of model parameters, with values that specify that model.
(3) Make a mating pool by applying the tournament selection procedure to the initial (old) population [1].
(4) Apply the FTS procedure to each chromosome in the mating pool; this procedure can be summarized as follows.
(i) Fuzzification: apply fuzzy clustering to partition each time series according to the number of clusters specified in the chromosome, rank the cluster centers of each variable, and then fuzzify the historical data.
(ii) Building fuzzy rules: build fuzzy relations according to the order and the variables specified in the chromosome, in addition to assigning the membership degree of each data point to each cluster.
(iii) Defuzzification: defuzzify the forecasted outputs using the membership degrees and the cluster centers.
(iv) Calculate the penalty function, which includes a term for infeasibility. If the member (chromosome) is feasible, the penalty function equals the fitness function; otherwise, it equals the maximum fitness function among feasible members of the population, plus a sum of the constraint violations of the (infeasible) point.
(5) Apply Laplace crossover and power mutation to all individuals in the mating pool, with $p_c$ and $p_m$, to make the new population.
(6) Replace the current population with the offspring to form the next generation.
(7) Check the stopping criteria. If satisfied, stop; otherwise return to the selection step [19].
After this overview of the new method, we will go through the method in more detail.
3.2. Notations and Definitions
Assume that we have $M$ time series (or variables) $Y_m$, $m = 1, 2, \ldots, M$; without loss of generality, assume that the first time series $Y_1$ is the main time series that we want to forecast and that the remaining time series $Y_m$, $m = 2, \ldots, M$, are the auxiliary time series.

The following notations and definitions are used throughout the formulation of the new method: $m$ denotes the variable's (time series) number, $m = 1, 2, \ldots, M$, where $M$ is the total number of variables; $t$ denotes the time point number, $t = 1, 2, \ldots, T$, where $T$ is the total number of time points; $y_{mt}$ denotes the actual value of variable $m$ at time $t$; $\hat{y}_{mt}$ denotes the forecasted value of variable $m$ at time $t$; and $c$ denotes the cluster's number, $c = 1, 2, \ldots, C$, where $C$ is the total number of clusters.
Generally speaking, the number of clusters could take any value from 1 up to the number of time points. However, in the new method a number of clusters lower than 3 is considered too small, and an excessively large number of clusters is also avoided. If the number of clusters is too small, there will be no fluctuations in the FTS; on the other hand, if the number of clusters is too large, the meaning of the FTS is diminished and computational complexity is created [20], although a higher number of intervals may contribute to better forecasting accuracy.
In the new method we assume that the number of clusters $C$ takes values ranging from 3 up to a fixed upper bound. The model order $p$ indicates how many previous time points are used to predict the present one. Generally, the lower and upper limits of the model order are optional; however, it is common in the literature for the model order to take values between 1 and 5, and the new method likewise fixes an upper limit on $p$. Finally, $b_m$, $m = 2, \ldots, M$, are the indicator variables, each taking the value 0 or 1 to indicate the absence or presence of auxiliary variable $m$ in the model; that is,
$$b_m = \begin{cases} 1, & \text{if the } m\text{th auxiliary variable enters the model}, \\ 0, & \text{otherwise}. \end{cases}$$
3.3. Description of the New Method
In this section, each step of the new method is further illustrated and detailed as follows.
Step 1 (initialization). The first step is to create an initial population of possible models, or chromosomes. For each model, three main factors are of concern:
(1) the number of clusters or fuzzy sets which will be used to fuzzify the historical data;
(2) the order of the model, whether first order or high order;
(3) the selected auxiliary variables to be entered in the forecasting model.
These factors are considered the parameters of the model, so each chromosome the GA creates represents one possible model. Accordingly, the number of genes in each chromosome is set equal to the number of auxiliary variables ($M-1$) plus 2; that is, each chromosome consists of $M+1$ genes. Figure 2 illustrates the model parameters as a chromosome.
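As a concrete sketch of this encoding (the function and variable names are mine, not the paper's): a chromosome is a list of integers holding the number of clusters, the model order, and one 0/1 indicator per auxiliary variable.

```python
import random

def random_chromosome(n_aux, c_range=(3, 7), p_range=(1, 5)):
    """Create one candidate model: [clusters, order, indicator flags...].
    The cluster/order ranges here are illustrative bounds, not the paper's."""
    clusters = random.randint(*c_range)   # number of fuzzy sets
    order = random.randint(*p_range)      # how many past points are used
    flags = [random.randint(0, 1) for _ in range(n_aux)]
    return [clusters, order] + flags

def decode(chrom):
    """Split a chromosome back into (clusters, order, indicator flags)."""
    return chrom[0], chrom[1], chrom[2:]

# The best chromosome reported later in the paper for the Belgian accident
# data is (630001): 6 fuzzy sets, order 3, and only the fourth auxiliary
# variable (light casualties) included.
clusters, order, flags = decode([6, 3, 0, 0, 0, 1])
print(clusters, order, flags)
```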
The population of chromosomes is generally chosen at random. There are no hard rules for determining the size of the population. Generally, the size is defined to range between 20 and 100. Large
populations guarantee greater diversity and may produce more robust solutions. Following this, the initial population is used to make a mating pool by applying the tournament selection procedure [19
]; for more details, see [1].
Step 2. Applying the fuzzy time series procedure for each chromosome in the mating pool as follows.
(1) Each time series $Y_m$, $m = 1, \ldots, M$, in the possible model at hand is partitioned using fuzzy c-means into $C$ clusters (for more details about fuzzy c-means, see [21]). When the fuzzy c-means algorithm is applied, the cluster centers of each time series, denoted by $v_{m1}, v_{m2}, \ldots, v_{mC}$, are calculated iteratively. After the cluster centers are fixed, the memberships of each time point with respect to these centers are computed. Then the clusters of each variable are ranked ascendingly according to their centers' values, and these ranks are used as linguistic values (fuzzy sets) $A_{m1}, A_{m2}, \ldots, A_{mC}$, ordered so that lower-ranked sets correspond to smaller centers.

(2) Fuzzify each time series by replacing each value with the equivalent linguistic value to obtain the FTS; that is, each crisp value is mapped into the fuzzy set where its membership degree has the maximum value.
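The fuzzification step can be sketched with a bare-bones 1-D fuzzy c-means (after Bezdek et al. [21]); this is a minimal stand-in for a library FCM routine, not production code, and the sample series is made up.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c, m=2.0, iters=100, seed=0):
    """Minimal 1-D fuzzy c-means: returns ascending-sorted centers and
    the matching membership matrix (rows = points, columns = clusters)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)   # weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    order = np.argsort(centers)                 # rank centers ascending:
    return centers[order], u[:, order]          # ranks act as A1 < A2 < ...

series = np.array([10.0, 11.0, 12.0, 30.0, 31.0, 55.0, 56.0, 57.0])
centers, u = fuzzy_cmeans_1d(series, c=3)
fuzzified = u.argmax(axis=1) + 1   # each point -> fuzzy set with max membership
print(centers.round(1), fuzzified.tolist())
```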
(3) Given that the model order $p$ and the indicator variables $b_m$ are as specified in the randomly selected chromosome at hand, the fuzzy rules are established in the form
$$F_1(t-p), \ldots, F_1(t-1);\; F_m(t-p), \ldots, F_m(t-1) \;\longrightarrow\; U_1(t),$$
where $F_1(t)$ is the value of the main FTS at time $t$, the $F_m$ on the left-hand side range over the selected auxiliary variables, and $U_1(t)$ is the row vector of membership degrees of the main variable at time $t$ to the $C$ clusters.

For example, assume that we have two fuzzy time series, $F_1$ (the main variable) and $F_2$ (the auxiliary variable), both clustered and fuzzified into four fuzzy sets (i.e., $C = 4$), and assume also that the model is of order 2 (i.e., $p = 2$). Let the current states of the main variable be $A_{1i}$ and $A_{1j}$, let its next state be $A_{1k}$, and let the next state's vector of membership degrees to the four fuzzy sets be $U_1(t)$. Similarly, let the current states of the auxiliary variable be $A_{2r}$ and $A_{2s}$. Then, under these assumptions, the rule can be induced as
$$A_{1i}, A_{1j};\; A_{2r}, A_{2s} \longrightarrow U_1(t).$$
Moving one time point ahead, the current states of both variables shift by one step, the next state of the main variable and its vector of membership degrees update accordingly, and the corresponding rule for time $t+1$ takes the same form.
It is important to notice that, for each of the above two rules, the vector of membership degrees in the right-hand side is the main variable’s next state row vector of memberships; that is, it has
no relation with the vectors of memberships of all of the current states appearing in the left side of the rule whether they belong to the main variable or to the auxiliary variable.
Group all generated fuzzy logical rules that have the same left-hand side. If this grouping process results in any group with more than one relation, then the average vector of all related membership vectors in that group is calculated. The resulting groups are denoted by $G_g$, $g = 1, 2, \ldots, G$, where $G$ is the total number of groups with at least one relation; for a group $G_g$ containing $n_g \geq 2$ rules with membership vectors $U_1, \ldots, U_{n_g}$, the average membership vector is
$$\bar{U}_g = \frac{1}{n_g} \sum_{r=1}^{n_g} U_r.$$
For example, assume that three rules attributed to three different points of time have the same left-hand side (the same fuzzified classes of the current states) but different vectors of membership degrees for their three next states (different right-hand sides). By calculating the average of the three membership vectors, we obtain a single final rule representing this group.
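A minimal sketch of this grouping step (the names and sample membership vectors are mine, chosen only to illustrate the averaging):

```python
from collections import defaultdict
import numpy as np

def group_rules(rules):
    """Group rules by identical left-hand side; groups with more than one
    relation get the average of their right-hand-side membership vectors."""
    buckets = defaultdict(list)
    for lhs, rhs_membership in rules:
        buckets[lhs].append(rhs_membership)
    return {lhs: np.mean(vectors, axis=0) for lhs, vectors in buckets.items()}

# Three rules sharing one left-hand side, plus a singleton group:
rules = [
    (("A2", "A1"), [0.1, 0.7, 0.2, 0.0]),
    (("A2", "A1"), [0.0, 0.5, 0.5, 0.0]),
    (("A2", "A1"), [0.2, 0.6, 0.2, 0.0]),
    (("A3", "A2"), [0.0, 0.1, 0.8, 0.1]),
]
grouped = group_rules(rules)
print(grouped[("A2", "A1")])  # averaged membership vector of the group
```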
To calculate the forecasted output for each time point $t$, that is, to forecast $\hat{y}_{1t}$, we match the current state of time point $t$ with the left-hand sides of the grouped fuzzy rules. Three possibilities can arise.
(a) If the current state exists in one of the groups, say $G_g$, then the forecasted output at time $t$ is calculated as
$$\hat{y}_{1t} = \bar{U}_g \cdot V_1^{\top},$$
where $V_1$ is the row vector of cluster centers of the main variable.
(b) If there are no fuzzy relations with the same left-hand side in the grouped fuzzy rules, which is the case if one or more of the time series points is missing or for the first future time point in the time series, then we reduce the order of the model gradually until a matching left-hand side is found, and then apply case (a).
(c) If there are no fuzzy relations in the grouped fuzzy logical rules of any order matching the current state, then the forecasted output of time point $t$ is taken as the center of the cluster of the current state.
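When the current state matches a rule group, the defuzzified forecast reduces to a dot product of the group's averaged membership vector with the ranked cluster centers of the main variable; the numbers below are illustrative, not from the paper:

```python
import numpy as np

# Averaged membership vector of the matched rule group (illustrative values;
# it sums to 1, so the forecast is a weighted average of the centers):
u_bar = np.array([0.1, 0.6, 0.3, 0.0])
# Ascending-ranked cluster centers of the main variable (illustrative values):
centers = np.array([400.0, 700.0, 1000.0, 1300.0])

# Forecast = membership-weighted average of the cluster centers.
forecast = float(u_bar @ centers)
print(forecast)
```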
Calculate the fitness value that reflects the prediction accuracy of the main variable under the specified model. Theoretically speaking, the search is for the proper model that
(i) minimizes a certain measure of solution error;
(ii) maximizes the separation between the cluster centers of the main variable.
Hence, a fitness function composed of two measures is proposed in this method to compare the possible models' performance or prediction accuracy: the symmetric mean absolute percentage error (sMAPE) for measuring the solution error and the minimum absolute distance for measuring the cluster separation.
sMAPE can be defined as follows [22]:
$$\mathrm{sMAPE} = \frac{1}{n} \sum_{t=1}^{n} \frac{|F_t - A_t|}{(A_t + F_t)/2},$$
where $A_t$ is the actual value at time point $t$, $F_t$ is the forecasted value at time point $t$, and $n$ is the number of time points used in the calculation. The formula above provides a result between 0 and 2 (i.e., between 0% and 200%), which is considered an advantage of this measure. The closer the value of sMAPE is to 0, the better the forecasting accuracy.
sMAPE is well known in FTS literature as a measurement of solution error as it has an independent scale and it is less influenced by extreme values. It also corrects the computation asymmetry of the
percentage error [22]. Hence, the fitness function in the new method depends mainly on the sMAPE as a measure of solution error; to improve the accuracy of the results, sMAPE has to be minimized.
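A direct implementation of this error measure (a sketch; the absolute values in the denominator are my safeguard for non-positive data, which the paper's yearly counts never contain):

```python
def smape(actual, forecast):
    """Symmetric MAPE, bounded between 0 and 2 (i.e. 0% and 200%)."""
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2.0)
             for a, f in zip(actual, forecast)]
    return sum(terms) / len(terms)

print(smape([100.0, 200.0], [100.0, 200.0]))  # perfect forecast -> 0.0
```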
The minimum absolute distance between the centers of clusters is chosen to measure the overlapping between clusters. It can be defined as
$$D = \min_{i \neq j} |v_{1i} - v_{1j}|,$$
where $v_{1i}$ and $v_{1j}$ are the centers of any two clusters of the main variable. Conventionally, $D$ can take any positive value.
Now, we have a two-objective optimization problem. Since the parameters of the model take only integer values, it is not easy to find software that can solve multiobjective integer genetic algorithm.
To overcome this deficiency, the weighted sum of objectives method [23] is used. This method is the simplest approach and is probably the most widely used one.
The weighted sum method [23] scalarizes a set of objectives into a single objective by premultiplying each objective by a user-supplied weight. There is one critical question that arises here: what
values of the weights must one use? The answer depends on the importance of each objective and the scaling of them.
The scale of the first objective (sMAPE) ranges from 0 to 2, while the second objective can take any nonnegative value. To guarantee that both objectives have the same scale, we redefine the second objective as
$$D' = \min_{i \neq j} \frac{|v_{1i} - v_{1j}|}{(v_{1i} + v_{1j})/2}. \tag{9}$$
Equation (9) is a proportional absolute distance between the centers of two clusters relative to their average, and for positive centers it likewise ranges from 0 to 2.
Accordingly, the overall fitness function (10) is defined as a linear combination of the two objectives (7) and (9), with a weight $w$ attached to the error term and $(1 - w)$ to the separation term. (A simulation study was conducted to determine suitable weights for the two objectives; it shows that the best value for the weight is $w = 0.9$, because it gives a lower sMAPE.) The weight $w$ can take any value between 0 and 1, and $D'$ is the shortest proportional distance between cluster centers.

The formula in (10) is a tradeoff between the prediction accuracy and the reasonability of the clustering results.
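A sketch of the combined objective. The source text does not recover the exact sign convention of equation (10), so the version below simply minimizes w·sMAPE − (1 − w)·D' — one plausible reading in which low error and well-separated centers both reduce the fitness value:

```python
def min_relative_distance(centers):
    """Smallest pairwise proportional distance |ci - cj| / ((ci + cj) / 2),
    as in equation (9); shares sMAPE's 0-2 scale for positive centers."""
    best = float("inf")
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            ci, cj = centers[i], centers[j]
            best = min(best, abs(ci - cj) / ((ci + cj) / 2.0))
    return best

def fitness(smape_value, centers, w=0.9):
    # To be minimized; the sign convention here is an assumption (see above).
    return w * smape_value - (1.0 - w) * min_relative_distance(centers)

print(min_relative_distance([10.0, 30.0, 60.0]))
```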
Step 3 (selection and mutation for all chromosomes in the mating pool). To produce the chromosomes for the next generation, the genetic algorithm creates three types of children:
(i) elite children, the individuals in the current generation with the best fitness values; these individuals automatically survive to the next generation;
(ii) crossover children, created by combining the vectors of a pair of parents;
(iii) mutation children, created by introducing random changes, or mutations, to a single parent.
Step 4. Replace the current population with the offsprings to form the next generation.
Step 5. If the end conditions are not satisfied, go to Step 2. Otherwise, stop and retain the best solution in current population.
4. Experimental Results
In this section, we consider two examples, one for multivariate time series and the other for univariate case. Predictions that resulted from the new method are compared with the well-known results
reported in the literature by other researchers. The forecast accuracy is compared using the mean absolute percentage error (MAPE) and the mean square error (MSE). Suppose the actual value at time point $t$ is $A_t$, the forecasted value at time point $t$ is $F_t$, and $n$ is the total number of time points used in the calculation; then MAPE and MSE are computed, respectively, as
$$\mathrm{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|, \qquad \mathrm{MSE} = \frac{1}{n} \sum_{t=1}^{n} (A_t - F_t)^2.$$
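Both accuracy measures are one-liners; a sketch matching the standard formulas:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    n = len(actual)
    return 100.0 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

def mse(actual, forecast):
    """Mean square error."""
    n = len(actual)
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n

print(mape([100.0, 200.0], [90.0, 220.0]), mse([100.0, 200.0], [90.0, 220.0]))
```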
4.1. The Multivariate Case
Table 1 shows the total number of annual car accidents casualties in Belgium from 1974 to 2004 [13]. Here, the main time series is the number of killed persons and the auxiliary time series are
mortally wounded and died within 30 days, severely wounded, and light casualties.
To implement the new method, the mixed-integer genetic algorithm module in the MATLAB package is used, after slight modifications to cope with the nature of the FTS problem. The solution steps of the new method are carried out using a specially developed MATLAB code.
The best model selected by the new method, in the language of Figure 2, is represented by the chromosome (630001). This means that the best model found is of order three (second gene), with one auxiliary variable (the last gene), namely the light casualties, and a number of fuzzy sets equal to 6 (first gene). The other three auxiliary variables are ruled out of the fitted model as their
corresponding indicator variables take the value 0 in the chromosome. The trends of the actual values and the forecasted ones obtained by the new method are shown in Figure 3.
As seen in Table 2, the new method has the smallest values for MSE and MAPE: 445 and 1.28, respectively; that is, the new method outperforms the Jilani and Burney model [14] and Egrioglu et al. model
[13] which are superior to other existing methods in the literature in terms of the mean square error and the average forecasting error rate.
4.2. The Univariate Case
The new method is applied to the famous enrollment data in the FTS literature (Table 3). The best model according to the results of the new method is (5 5); that is, the best model found is of order five with a number of fuzzy sets equal to 5. It should be noted that for a univariate time series the chromosome genes reduce to two, as there are no auxiliary variables in this case. The trends of the
actual values and the forecasted ones obtained by the new method are shown in Figure 4.
Comparison of the forecasting results is made between the new method and two other recently published ones, the Kuo et al. method [15] and the Egrioglu method [5]. Both mentioned methods considered
the same enrollment data set and proved being superior to other existing methods in the literature in terms of the mean square error and the average forecasting error rate. From Table 3, it can be
seen that the forecasting accuracy of the new method outperforms both mentioned methods in terms of MSE and MAPE.
5. Conclusion
When dealing with short multivariate time series, it is obvious that the assumptions of classical time series models are not satisfied. This paper introduces a new fuzzy time series method to deal with such cases. The new method draws on the ability of FTS approaches to deal with time series data without the need to check various theoretical assumptions. It also integrates fuzzy clustering with a genetic algorithm to improve the prediction accuracy for multivariate high-order FTS.
In the new method, the fuzzy clustering algorithm is responsible for fuzzifying the historical data based on its membership degrees to each cluster and then using these memberships to defuzzify the results, while the genetic algorithm is responsible for selecting the proper model. The method is applied to the time series of yearly car accident casualties in Belgium from 1974 to 2004 and to the famous university enrollment data set. It provided forecasts with MSE and MAPE values smaller than those obtained from the other existing fuzzy time series methods.
References
1. K. Deep, K. P. Singh, M. L. Kansal, and C. Mohan, "A real coded genetic algorithm for solving integer and mixed integer optimization problems," Applied Mathematics and Computation, vol. 212, no. 2, pp. 505–518, 2009.
2. S.-M. Chen and N.-Y. Chung, "Forecasting enrollments using high-order fuzzy time series and genetic algorithms," International Journal of Intelligent Systems, vol. 21, no. 5, pp. 485–501, 2006.
3. L.-W. Lee, L.-H. Wang, and S.-M. Chen, "Temperature prediction and TAIFEX forecasting based on fuzzy logical relationships and genetic algorithms," Expert Systems with Applications, vol. 33, no. 3, pp. 539–550, 2007.
4. H. I. Kang, "A fuzzy time series prediction method using the evolutionary algorithm," in Advances in Intelligent Computing, vol. 3645 of Lecture Notes in Computer Science, pp. 530–537, 2005.
5. E. Egrioglu, "A new time-invariant fuzzy time series forecasting method based on genetic algorithm," Advances in Fuzzy Systems, vol. 2012, Article ID 785709, 6 pages, 2012.
6. C.-H. Cheng, G.-W. Cheng, and J.-W. Wang, "Multi-attribute fuzzy time series method based on fuzzy clustering," Expert Systems with Applications, vol. 34, no. 2, pp. 1235–1242, 2008.
7. S.-T. Li, Y.-C. Cheng, and S.-Y. Lin, "A FCM-based deterministic forecasting model for fuzzy time series," Computers and Mathematics with Applications, vol. 56, no. 12, pp. 3052–3063, 2008.
8. U. Yolcu, "The forecasting of Istanbul stock market with a high order multivariate fuzzy time series forecasting model," An Official Journal of Turkish Fuzzy Systems Association, vol. 3, no. 2, pp. 118–135, 2012.
9. Q. Song and B. S. Chissom, "Forecasting enrollments with fuzzy time series—part I," Fuzzy Sets and Systems, vol. 54, no. 1, pp. 1–9, 1993.
10. Q. Song and B. S. Chissom, "Fuzzy time series and its models," Fuzzy Sets and Systems, vol. 54, no. 3, pp. 269–277, 1993.
11. L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
12. E. H. Mamdani, "Application of fuzzy logic to approximate reasoning using linguistic synthesis," IEEE Transactions on Computers, vol. C-26, no. 12, pp. 1182–1191, 1977.
13. E. Egrioglu, C. H. Aladag, U. Yolcu, V. R. Uslu, and M. A. Basaran, "A new approach based on artificial neural networks for high order multivariate fuzzy time series," Expert Systems with Applications, vol. 36, no. 7, pp. 10589–10594, 2009.
14. T. A. Jilani and S. M. A. Burney, "Multivariate stochastic fuzzy forecasting models," Expert Systems with Applications, vol. 35, no. 3, pp. 691–700, 2008.
15. I.-H. Kuo, S.-J. Horng, T.-W. Kao, T.-L. Lin, C.-L. Lee, and Y. Pan, "An improved method for forecasting enrollments based on fuzzy time series and particle swarm optimization," Expert Systems with Applications, vol. 36, no. 3, pp. 6108–6117, 2009.
16. Q. Song and B. S. Chissom, "Forecasting enrollments with fuzzy time series—part II," Fuzzy Sets and Systems, vol. 62, no. 1, pp. 1–8, 1994.
17. S.-M. Chen and J.-R. Hwang, "Temperature prediction using fuzzy time series," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 30, no. 2, pp. 263–275, 2000.
18. T. H.-K. Yu and K.-H. Huarng, "A bivariate fuzzy time series model to forecast the TAIEX," Expert Systems with Applications, vol. 34, no. 4, pp. 2945–2952, 2008.
19. MATLAB Global Optimization Toolbox User's Guide, The MathWorks, Natick, Mass, USA, 2011.
20. K. Huarng, "Heuristic models of fuzzy time series for forecasting," Fuzzy Sets and Systems, vol. 123, no. 3, pp. 369–386, 2001.
21. J. C. Bezdek, R. Ehrlich, and W. Full, "FCM: the fuzzy c-means clustering algorithm," Computers and Geosciences, vol. 10, no. 2-3, pp. 191–203, 1984.
22. M. Hibon and S. Makridakis, Evaluating Accuracy (or Error) Measures, INSEAD, Fontainebleau, France, 1995.
23. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
The amount of heat required to heat different materials.
Get a known amount of water (say 1kg), heat it to 100 C, then combine it with an equal amount of the sand/iron. Track the temperature of the water until it stops changing.
If you assume the only heat transfer is between the iron/sand and the water, you can calculate the heat capacity by setting the heat transfer on each side equal to each other.
Cwater*deltaTwater = Csand*deltaTsand
You know the starting and ending temperatures, and you know the Cwater (4.18 J/g*K), so you can solve for Csand.
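The arithmetic described above is easy to script. This sketch is mine, not from the thread: the function name and the example temperatures are made up, and it assumes equal masses and no heat loss to the surroundings, so the masses cancel from both sides.

```python
# Solve C_water * dT_water = C_sand * dT_sand for C_sand.
# Equal masses are assumed, so mass drops out of both sides.

C_WATER = 4.18  # specific heat of water, J/(g*K)

def solve_specific_heat(t_water_start, t_sand_start, t_final, c_water=C_WATER):
    """Return the unknown material's specific heat from the equilibrium temperature."""
    dt_water = t_water_start - t_final  # how much the water cooled
    dt_sand = t_final - t_sand_start    # how much the sand warmed
    return c_water * dt_water / dt_sand

# Made-up example: 100 C water mixed with 20 C sand, equilibrium at 88 C
c_sand = solve_specific_heat(100.0, 20.0, 88.0)
```

In this made-up example the result is about 0.74 J/(g*K), which is at least the right order of magnitude for dry sand.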
This assumes that heat capacity does not change with temperature. | {"url":"http://www.physicsforums.com/showthread.php?t=606003","timestamp":"2014-04-19T07:38:07Z","content_type":null,"content_length":"23359","record_id":"<urn:uuid:1fae2f01-6d2a-4ba6-9c03-3c637afcc3cc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
The deep significance of the question of the Mandelbrot set's local connectedness?
I am given to understand that the celebrated open problem (MLC) of the Mandelbrot set's local connectedness has broader and deeper significance than some mere curiosity of point-set topology.
From http://en.wikipedia.org/wiki/Mandelbrot_set I see that the conjecture has implications concerning the structure of the Mandelbrot set itself, but I don't think I grasp its broadest implications for
complex dynamics and matters beyond.
Request: Could someone explain the proper current context in which to view MLC and/or how the world would look in its full glory if MLC has a positive answer? Alternatively, please give a pointer to
somewhere in the literature that does the same.
complex-dynamics gn.general-topology
3 Answers
If a connected compact $K \subset C$ is locally connected then the Riemann map $h\colon C \setminus \Delta \to C \setminus K$ extends continuously to $\partial \Delta$. For each $z \in
K$, the boundary of the convex hull of $h^{-1}(\{z\})$ is the union of a set $\Lambda_z$ of chords; the union of these $\Lambda_z$ over all $z \in K$ is a closed set $\Lambda_K$ of
disjoint chords; it is called a lamination of $\Delta$. We can reconstruct the convex hulls of each $h^{-1}(\{z\})$ from $\Lambda_K$, and when we collapse every convex hull to a point, we
obtain a topological model for $K$.
In the case where $K$ is the Mandelbrot set $M$, the lamination $\Lambda_M$ can be described combinatorially, so MLC would mean that we know the topology of $M$.
There is a second answer which is more subtle and more important. For each $c \in M$, the filled Julia set $K_c$ of $z \mapsto z^2 + c$ is compact and connected; if it is locally
connected the resulting lamination $\Lambda_c \equiv \Lambda_{K_c}$ is, in the right sense, invariant under $z \mapsto z^2$ on $\partial \Delta$. Even if $K_c$ is not locally connected,
there is a way of defining what the lamination would be if $K_c$ were locally connected. Every invariant lamination appears as $\Lambda_c$ for some c, and MLC is equivalent to the
statement there is a unique $c$ with a given lamination. We think of $\Lambda_c$ as describing the combinatorics of $K_c$, and we think of this uniqueness conjecture as "combinatorial
rigidity"---two maps of the form $z \mapsto z^2 + c$ are conformally conjugate (and hence equal) if they are "combinatorially equivalent".
(Actually, if $z \mapsto z^2 + c$ has an attracting periodic cycle, then the set of combinatorially equivalent parameters forms an open subset of $C$, so the statement of combinatorial rigidity must be suitably modified in that case. It is known that structural stability is open and dense in the family of maps $z \mapsto z^2 + c$, so combinatorial rigidity implies that every $z \mapsto z^2 + c$ in this open and dense set must have an attracting periodic cycle; this is the implication that Eremenko alluded to in his answer.)
In this sense MLC is closely analogous to Thurston's Ending Lamination Conjecture (proven by Brock, Canary, and Minsky), which says, broadly speaking, that a finitely generated Kleinian
group is determined by the topology of its quotient and the ending laminations of its ends, which are also, when viewed appropriately, invariant laminations of the disk.
There is a third answer which is more historical and empirical. We can prove MLC and combinatorial rigidity "pointwise" (or "laminationwise") by proving that for a given invariant
lamination $\Lambda$, it appears as the lamination $\Lambda_c$ for a single $c$. This has been done in a great many cases, first by Jean-Christophe Yoccoz, and then by Mikhail Lyubich, the
author of this post, Genadi Levin, and Mitsuhiro Shishikura. To prove this combinatorial rigidity for a given $c$ seems to require a detailed understanding of the geometry of the
associated dynamical system, and this almost always leads to further results. So proving MLC would most likely mean having a thorough understanding of the geometry and dynamics of every
map $z \mapsto z^2 + c$.
I don't know enough to give a detailed answer, but I can at least give a reference: Milnor's book Dynamics in One Complex Variable (Vieweg). You can find an early draft here. Appendix F
mentions a couple of conjectures that would follow from MLC: see the footnote on page F-2 in the linked pdf file.
While I'm at it, I can't resist repeating one of my favourite mathematical stories ever, which concerns the connectedness of the Mandelbrot set. It is connected, as you'd guess from the picture, but because of the thin filaments involved, this is not obvious if your image is too low-resolution. So there was some confusion in the early days. I'll let Milnor (Appendix F) take up the story:

Mandelbrot made quite good computer pictures, which seemed to show a number of isolated "islands". Therefore, he conjectured that [the Mandelbrot set] has many distinct connected components. (The editors of the journal thought that his islands were specks of dirt, and carefully removed them from the pictures.)
+1 for the story itself! I had a copy of the book but never bothered to read it - perhaps I should! – Somnath Basu May 1 '12 at 23:06
4 There is a similar story with the observation of second-harmonic generation in a nonlinear crystal (see prl.aps.org/abstract/PRL/v7/i4/p118_1 or aip.org/history/ohilist/4612.html). A
lithographer erased the tiny spot caused by the second harmonic and left the huge blob from the main laser. – Emilio Pisanty May 2 '12 at 0:23
If you draw the Mandelbrot set by coloring a pixel black when the center of the pixel lies in the set, it will appear to be disconnected, and the islands will appear to be islands.
This is true even if you draw it at a high resolution. It's only when you color in the pixels where the escape time is high that you see, very clearly, that the Mandelbrot set is
connected. – Jeremy Kahn Dec 28 '12 at 5:03
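To make the comment concrete, here is a minimal escape-time iteration. This sketch is mine, not code from the thread; the bailout radius 2 and the iteration cap are the standard conventions.

```python
# Escape-time test for z -> z^2 + c, iterated from z = 0.

def escape_time(c, max_iter=100):
    """Return the iteration count at which |z| first exceeds 2, or max_iter
    if it never does (the point is then presumed to lie in the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

# c = 0 never escapes; c = 1 escapes after a few iterations.
```

Coloring pixels by the returned count, rather than by a binary in/out test at pixel centers, is what makes the thin connecting filaments visible.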
MLC is indeed a very technical and complicated counterpart of a simple question which arises from the general theory of dynamical systems. For a generic system (in a given
finitely-parametric family) can one describe generic behavior of trajectories?
In our case "generic" means "open dense set". Our system is $z^2+c$, the simplest one-parametric family of one-dimensional rational (polynomial) maps. For this case, the main question is: is it true that there is an open dense set of parameters $c$, such that for $c$ in this set, every trajectory that begins on an open dense set converges to an attracting cycle?
This is usually called the Density of Hyperbolicity Conjecture. For our family it is equivalent to the MLC, but the equivalence is highly non-trivial. Density of hyperbolicity is known if
one restricts to real $c$.
My understanding is that the conjectures DHC and MLC are not necessarily equivalent. The fundamental work of Douady-Hubbard proved the implication MLC -> DHC. – Adam Epstein Aug 26 '12
at 12:49
| {"url":"http://mathoverflow.net/questions/95701/the-deep-significance-of-the-question-of-the-mandelbrot-sets-local-connectednes/105524","timestamp":"2014-04-19T15:29:10Z","content_type":null,"content_length":"67096","record_id":"<urn:uuid:b9b5e06b-c761-478b-9c77-b12fce121fda>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
DMTCS Proceedings
2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05)
Stefan Felsner (ed.)
DMTCS Conference Volume AE (2005), pp. 357-362
author: Pascal Ochem
title: Negative results on acyclic improper colorings
keywords: acyclic colorings, oriented colorings, NP-completeness
abstract: Raspaud and Sopena showed that the oriented chromatic number of a graph with acyclic chromatic number k is at most k·2^(k-1). We prove that this bound is tight for k ≥ 3. We also show that some improper and/or acyclic colorings are NP-complete on a class C of planar graphs. We try to get the most restrictive conditions on the class C, such as having large girth and small maximum degree. In particular, we obtain the NP-completeness of 3-acyclic colorability on bipartite planar graphs with maximum degree 4, and of 4-acyclic colorability on bipartite planar graphs with maximum degree 8.
reference: Pascal Ochem (2005), Negative results on acyclic improper colorings, in 2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05), Stefan Felsner (ed.),
Discrete Mathematics and Theoretical Computer Science Proceedings AE, pp. 357-362
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dmAE0169.ps.gz (63 K)
ps-source: dmAE0169.ps (159 K)
pdf-source: dmAE0169.pdf (157 K)
Automatically produced on Di Sep 27 10:10:02 CEST 2005 by gustedt | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAE0169/1189","timestamp":"2014-04-16T14:40:34Z","content_type":null,"content_length":"15003","record_id":"<urn:uuid:68f778c4-8475-4a57-bfdc-192a748cd3d1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: August 2008 [00260]
[Date Index] [Thread Index] [Author Index]
RE: Re: Workaround for an unexpected behavior of Sum
• To: mathgroup at smc.vnet.net
• Subject: [mg91197] RE: [mg91172] Re: Workaround for an unexpected behavior of Sum
• From: "Jose Luis Gomez" <jose.luis.gomez at itesm.mx>
• Date: Sat, 9 Aug 2008 07:45:11 -0400 (EDT)
• References: <200808040723.DAA13823@smc.vnet.net> <200808050802.EAA09815@smc.vnet.net> <200808060906.FAA22342@smc.vnet.net> <FABC5C63-A896-4B61-93DD-0FB62737E92A@mimuw.edu.pl>
<56CE1299-6088-4EE5-A34B-55DF275308B9@mimuw.edu.pl> <g7ed4j$2o3$1@smc.vnet.net> <200808081115.HAA11668@smc.vnet.net>
Well, it is so great to have these responses to my post. Thank you too,
Another issue, somewhat related to your answer, Albert, is that Mathematica
has two different ways to specify double sums; the two of them format the
same in standard form (with the sigma notation), but they are not considered
to be equal by Mathematica (I already posted this in another thread of this
forum, however I considered it relevant to mention it again):
Evaluate in a Mathematica notebook this code:
Sum[Sum[f[j], {j, 1, k}], {k, 1, n}]==Sum[f[j], {k, 1, n}, {j, 1, k}]
As you can see, both sides of the equation display exactly equal in a
notebook, using two sigmas, etc. However Mathematica does not simplify this
to True, not even using Simplify or Reduce:
Sum[Sum[f[j], {j, 1, k}], {k, 1, n}]==Sum[f[j], {k, 1, n}, {j, 1, k}]
Mathematica is a great software, I feel happy of making a small contribution
to possible improvements/fixes. I already reported these two issues (dummy
index versus global variables, and nested sums not equal to two-indices
form) to the support team of Wolfram Research.
Please if anyone has further ideas besides those of Andrzej and Albert, I
will certainly be happy to read about them
Best regards
-----Original Message-----
From: Albert Retey [mailto:awnl at gmx-topmail.de]
Sent: Friday, August 8, 2008 06:15
To: mathgroup at smc.vnet.net
Subject: [mg91172] Re: Workaround for an unexpected behavior of Sum
> Yes Andrzej, I agree, you are right, I can see now very important
> advantages of your approach over mine:
> ONE: Using the flag, you avoid an infinite loop (actually, infinite
> recursion). The equivalent action in my approach was to verify if
> Unevaluated[dummyindex] is the same as the evaluated dummyindex. In my
> approach, the evaluation of dummyindex could take a long time. We actually
> cannot know in advance how much time that evaluation would take; it depends
> on whatever the end user has stored in dummyindex, which could be a long
> program that takes a lot of time to evaluate. Even worse, the evaluation of
> dummyindex could have unwanted results or side effects; again it depends on
> the content of dummyindex. Therefore my approach is potentially inefficient
> and dangerous. On the other hand, in your approach you only verify if the
> flag is True or False: always fast, always safe, always reliable, great!
I'm not in a position to criticize Andrzej's approach, but the following
solution seems to achieve the same in a different way and has some
advantages in my opinion, so it might also be of interest.
By using ValueQ (or accessing OwnValues directly) you can easily decide
whether or not a Symbol has a value without evaluating. So something
like this would also do the job:
Sum[sumand_, before___,
  {dummyindex_Symbol /; ValueQ[dummyindex], rest___},
  after___
] := Module[{dummyindex},
  Sum[sumand, before, {dummyindex, rest}, after]
]
I'm sure this approach has its limits too (there could be additional
trouble with UpValues or DownValues of the index), but I like it for the
following reasons:
1) no dependence on the state of a global variable, which for whatever
reason could be left in an inappropriate state (see e.g. nested sums below).
2) it is minimal invasive: it only intervenes when obviously necessary
3) it works also with nested sums:
Sum[Sum[f[j], {j, 1, k}], {k, 1, n}]
j = 1; k = 2;
Sum[f[j, k], {k, 1, n}, {j, 1, k}]
>> This confirms that the behavior of Sum is a bug.
>> Andrzej Kozlowski
After all a reliable solution can only be achieved by Wolfram in this
case, everything else is just a workaround...
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2008/Aug/msg00260.html","timestamp":"2014-04-17T09:53:11Z","content_type":null,"content_length":"30631","record_id":"<urn:uuid:40257186-a3c7-4129-87f5-b783bf1b6358>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Papers refuting black hole firewalls spread
Some research activity following the provoking black hole firewall claim by AMPS (Polchinski et al.) continues. The papers that say – or pretty much say – that black holes don't have any firewalls at
/near the event horizon start to prevail.
It's interesting to mention that Samir Mathur's idea of fuzzballs used to look like an early predecessor of "firewalls". Many of us were slightly skeptical because it looked like Mathur was saying that
the black hole interior never looks empty.
But he never did. While the interior has a complicated structure if you can measure all the degrees of freedom, macroscopic infalling observers will see the empty region that general relativity
implies. Samir Mathur and David Turton made this point explicit in the August 2012 paper about fuzzball complementarity.
I have especially liked the October 2012 paper by Avery et al. that tried to check, present, and supplement this collection of ideas with some details. So the unlimited-resolution probes would see a
non-empty black hole interior; but the macroscopic probes with typical wavelengths still much shorter than the black hole radius,\[
l_{\rm Planck} \ll \lambda_{\rm probe} \ll R_{\text{black hole}}
\] will see the empty black hole interior, in agreement with the equivalence principle in particular and general relativity in general. Only field modes with \[
\lambda_{\rm probe}\sim {\mathcal O}(R_{\text{black hole}})
\] (and be sure it makes no sense to talk about longer wavelengths than that which would be confined in the black hole) will see violations of the free fall. So the black holes have to reveal some
nonlocal dynamics, to agree with all the conditions such as unitarity and the preservation of the information, but the only degrees of freedom that are "significantly" influenced by this nonlocality
are the degrees of freedom whose wavelength is extremely long, comparable to the black hole radius.
What I love about the black hole fuzzball approach – yes, I have clearly become a fuzzball convert – is that it is much more positive, constructive, and specific. It may be viewed as a detailed
extension of the well-known insights on the black hole entropy and black hole complementarity. It really tells you how the descriptions of the black holes look like. I am doing some research that has
a similar spirit and could perhaps be formulated as research within the fuzzball paradigm. In some sense, I am amazed by the fact that people were able to find the infinite-parameter classes of
fuzzball solutions. But this achievement became easier after the bubbling AdS space observation by LLM.
Today, there is a new hep-th paper on the arXiv that explicitly says that the firewall analysis by AMPS is flawed:
Black holes without firewalls
Klaus Larjo, David A. Lowe, and Larus Thorlacius say that the error of AMPS may be interpreted as a wrong exchange of the order of two limits. When AMPS exchange some limits, they erroneously
conclude that the stretched horizon carries no degrees of freedom. In reality, there are degrees of freedom, they're able to retain the information for a time of order the black hole scrambling time,
and they're able to restore the "smooth sensations" for the infalling observers.
Larjo et al. count Mathur et al. as "alternatives not only to firewalls but to the black hole complementarity", too. I don't think it's quite right. The fuzzball complementarity is a version of the
black hole complementarity – it just specifies more accurately which probes will see that the principle is obeyed. In fact, the key role is played by the region between the event horizon and the
stretched horizon in the Mathur-Turton paper as well.
So I would say that these folks don't seem to read papers by each other too much. Unfortunately. But despite the different language, their actual statements – their actual explanation why AMPS is
wrong – are probably consistent with each other. They mostly differ in the amount of details that they try to determine about various related questions.
I view the black hole fuzzball proposal to be a promising paradigm that may be answering absolutely all confusing issues related to the black hole information. It may be used to count the black hole
entropy; one may probably derive why coarse-grained probes will see the empty interior; one should be even able to quantify the ways how the information propagates nonlocally (from the viewpoint of
the effective, long-distance field modes) through the black hole interior; the proposal is free of any xeroxing and firewall paradoxes.
The precise links between the descriptions relevant for various observers as well as a more microscopic, stringy description of the processes involving the singularities is something I am trying to
clarify, too (analytic continuations, averaging over many degrees of freedom, and other tools are rather important in this business), but I don't plan to publish anything because I am frustrated by
the shortage of "genuine open-minded readers" of such papers. There are already some excellent papers in this business – I consider e.g. the recent paper by Avery et al. to be excellent – and they
are earning almost no attention or citations as many people are dedicating their time and hype to some worthless and/or manifestly wrong research directions. These bright young folks are throwing
pearls to the swines – it just sucks.
snail feedback (3) :
Nice update on the firewall story :-)
I like fuzzballs too, not least because the term "fuzzball" looks like a crazy "Neue Deutsche Rechtschreibung" version of the German word "Fussball" :-D
Dear Lumo, by pointing their work out here on TRF, you maybe help your young bright colleagues getting the right attention they deserve ;-)
You don't get it. Firewalls aren't about what freefalling observers notice subjectively.
It's about what outside observers get as reports from freefalling observers just before they cross the horizon when measuring the near-horizon modes at scales between the Planck scale and the
radius R.
reader Luboš Motl said...
No, this is not plausible. Firewalls, in the original AMPS paper, are about the behavior of the interior of "old" black holes, after the half-entropy-evaporated Page time. From the external
observer's viewpoint, the observed physics is clearly indistinguishable from thermal radiation and this radiation can't be used to answer whether there is a firewall or not.
Questions about the existence of firewalls make only sense when observations by observers inside the black hole are considered. | {"url":"http://motls.blogspot.com/2012/11/papers-refuting-black-hole-firewalls.html","timestamp":"2014-04-17T15:39:39Z","content_type":null,"content_length":"198306","record_id":"<urn:uuid:3642b3c0-1386-4bbb-ab33-7d7c34e2732e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monad with DefaultSignatures (self.haskell)
submitted by sonyandy
sorry, this has been archived and can no longer be voted on
{-# LANGUAGE DefaultSignatures, RebindableSyntax #-}
import Control.Applicative
import Control.Monad.Trans.Reader (ReaderT (..))
import Prelude hiding (Monad (..))
class Monad m where
return :: a -> m a
default return :: Applicative m => a -> m a
return = pure
(>>=) :: m a -> (a -> m b) -> m b
default (>>=) :: Functor m => m a -> (a -> m b) -> m b
m >>= f = join (fmap f m)
(>>) :: m a -> m b -> m b
m >> n = m >>= \ _ -> n
-- default (>>) :: Applicative m => m a -> m b -> m b
-- (>>) = (*>)
join :: m (m a) -> m a
join m = m >>= id
fail :: String -> m a
fail = error
-- default fail :: (Monad m, MonadTrans t) => String -> t m a
-- fail = lift . fail
instance (Applicative m, Monad m) => Monad (ReaderT r m) where
join m = ReaderT $ \ r -> join $ fmap (flip runReaderT r) $ runReaderT m r
fail = liftReaderT . fail
liftReaderT :: m a -> ReaderT r m a
liftReaderT = ReaderT . const
all 5 comments
[–]Tekmo 2 points ago
[–]sonyandy[S] 3 points ago
[–]edwardkmett 1 point ago
[–]dtellerulam 2 points ago
[–]drb226 1 point ago
It all depends on how you want to define your instance. This way would work in a simplistic system, where the superclass instances must be defined before subclass instances are considered.
It would, of course, be nice to simply write
instance MAF Foo where
foo >>= f = ...
return x = ...
instance MAF Foo where
join = ...
return = ...
fmap = ...
and have that automagically create instances for Monad, Applicative, and Functor. Alas, Haskell provides no convenient mechanism to deal with this issue; Template Haskell is as close as we could get.
That, or overlapping instances, which gets weird fast. | {"url":"http://www.reddit.com/r/haskell/comments/10rs9c/monad_with_defaultsignatures/","timestamp":"2014-04-18T12:13:31Z","content_type":null,"content_length":"61316","record_id":"<urn:uuid:f06ede93-3682-45e1-a62d-6c664d6fe35b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
[<< | Prev | Index | Next | >>]
Friday, August 17, 2007
Netflix Reprize
After looking over the Proceedings of KDD Cup and Workshop 2007 I am reminded of some things I left dangling where I left off in December.
This is bad, because I've got other things I should be working on. :)
But, while it's in my head, here it is:
[As a preliminary I should mention that the performance boost I got a day or two after I posted my December entry was due to a run finishing that was training all features simultaneously. While
this shouldn't make any difference in the linear case due to orthogonality of the features, once you introduce any clipping or especially regularization it becomes advantageous. This is relevant
in the discussion that follows since later I talk about using the covariance matrix of features which only makes sense when training multiple features at once.]
A detail I glossed over in my original description was on the choice of the particular regularization term. There were two obvious options: One was to penalize the magnitude of the feature
vectors, which corresponds to a prior assumption that the features are drawn from normal distributions with some fixed variance, and the other is to penalize the magnitude of the feature values
in proportion to how often they are used which, if one is to interpret that as a simple prior at all, assumes a prior whose variance is inversely proportional to, e.g., the number of ratings by
the corresponding user; or, looked at another way, corresponds to penalizing the magnitude of the unrolled feature vectors -- where instead of having one scalar value per user per feature, for
instance, we have one scalar value per observed rating per feature, and then we tie many of those values together (weight sharing) according to which users cast each rating. (Note those two
schemes are identical except in how one measures the magnitude of the resulting feature vector.)
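As a sketch of the difference between the two options, the usage-proportional penalty amounts to applying weight decay inside every per-rating update, while the plain Gaussian prior corresponds to a single decay per epoch. This is an illustrative reconstruction, not the exact code behind the results discussed here; the learning rate and K are placeholder values.

```python
LRATE = 0.001  # learning rate
K = 0.02       # regularization strength

def train_feature_pass(ratings, user_f, movie_f, per_rating_decay=True):
    """One SGD pass over a single feature.
    ratings: iterable of (user, movie, residual); user_f / movie_f map ids to
    the scalar value of the one feature being trained."""
    for user, movie, r in ratings:
        uf, mf = user_f[user], movie_f[movie]
        err = r - uf * mf
        if per_rating_decay:
            # option 2: decay applied once per observed rating, so heavy
            # raters accumulate more total shrinkage; sparse raters almost none
            user_f[user] += LRATE * (err * mf - K * uf)
            movie_f[movie] += LRATE * (err * uf - K * mf)
        else:
            user_f[user] += LRATE * err * mf
            movie_f[movie] += LRATE * err * uf
    if not per_rating_decay:
        # option 1: one decay per epoch, independent of how often each
        # feature value was used
        for u in user_f:
            user_f[u] -= LRATE * K * user_f[u]
        for m in movie_f:
            movie_f[m] -= LRATE * K * movie_f[m]
```

With per-rating decay, a user with ten times more ratings feels ten times more total shrinkage per epoch, which is the usage-proportional prior described above.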
My method uses the latter of those two options, for a few reasons. First, intuitively it makes sense to penalize in proportion to usage: We assume similar usage patterns will continue, so that a
user who hardly ever rates will likewise have little impact on future rmse, and thus the cost of their feature values being too large is proportionally diminished. Secondly, after trying both
anyway it simply worked better. :) Lastly, it's slightly easier to implement and more well-behaved in an incremental algorithm.
The open question is: why, really, does that work better? (Or: does it? If anyone else tried both methods I would like to get independent confirmation.)
A Bayesian formulation prefers the other approach, but this results in the feature vectors for any sparsely observed users being squelched to near zero. Essentially the prior dictates that by
default all feature values should be zero, and any user with few ratings simply never overcomes this prior. The usage-relative prior, then, "fixes" this by softening up the prior for sparsely
observed users (and movies). But really, this "fix" is just a violation of the whole notion of priors... so why does it improve things? I have two theories about this. Please feel free to
enlighten me if you have a better clue:
One is that, related to the intuitive justification given above, there is some implicit variational Bayes going on here, in that we are exploring the space of models and asking about the impact
of that. Specifically: if we assume, for instance, that our movie-side model (movie features) is only approximately correct, then we can imagine that the true movie-side model, or the one that
will best predict future ratings, will be some random offset away from the one we have. And we can ask: what is the effect of that offset? That effect is going to be proportional to the magnitude
and prevalence of the features which scale those movie features: the user features. In other words, the larger and more common a user feature is, the more the error in the corresponding movie
features will be amplified. (And the converse as well.) So, we are actually treating our model as the center of a (normal?) distribution of hypothetical models, and accounting (hand-wave,
hand-wave) for the expected value of the probability of our data set over that distribution of models.
You look skeptical.
The other idea, not necessarily incompatible with the first, is that the prior itself sucks. Specifically: the prior assumes that each feature value is taken independently of the others from a
normal distribution. But in truth if we look at the correlation matrix between the feature values there are quite a lot of sizable values off the diagonal. In other words, before we know anything
about a given user's specific preferences, we know how their various preferences are likely to interact. (For instance, someone who loves christian themes is less apt to love gay themes, which is
useful information as a prior assumption even if no single feature directly captures that particular interaction.) By taking this covariance matrix into account, we might be able to tighten our
priors back down on the sparsely observed users while still leaving them room to move (in specific directions). Arguably, this fine shape of the priors cloud is far more important for the sparse
cases where the priors dominate, vs. the strongly observed cases which are able to define their own shape within a radially symmetric priors cloud. Hand-wave, hand-wave.
Or both could be true at the same time. I am tempted to try to work out an incremental algorithm accounting for a covariance matrix on the priors as well as some variational aspects...
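One way to make the covariance idea concrete (my own sketch, not something implemented in this post): replace the isotropic penalty K·||v||^2 with the negative log of a zero-mean Gaussian prior whose covariance Sigma is estimated from the learned feature vectors.

```python
import numpy as np

def prior_penalty(v, sigma):
    """Penalty from a zero-mean Gaussian prior with covariance sigma, up to
    an additive constant: 0.5 * v^T Sigma^{-1} v. With sigma = I this reduces
    to the usual isotropic 0.5 * ||v||^2 penalty."""
    return 0.5 * float(v @ np.linalg.solve(sigma, v))
```

With a strongly correlated pair of features, a vector aligned with the common direction is penalized far less than one along the rare direction, which is exactly the "room to move in specific directions" described above.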
Curious to hear any thoughts.
(The paper that really got me thinking about this again was Variational Bayesian Approach to Movie Rating Prediction. One thing that bothers me is their MAP model [eqn 12] should be equivalent to
my model with the worse of the two regularization options, yet they claim it works better... What am I missing? Or are they using a covariance matrix [as opposed to, e.g., using eqn 9] for their
MAP model? Isn't clear.)
[UPDATE: I just found this paper (Principal Component Analysis for Large Scale Problems with Lots of Missing Values) which seems highly related. Note the form of eqns 12 and 13. This is a
gradient method derived from a variational Bayes approach which results in essentially the same gradient I use except with finer control over the regularization parameters. Score one for
| {"url":"http://sifter.org/~simon/journal/20070817.html","timestamp":"2014-04-20T16:46:48Z","content_type":null,"content_length":"8229","record_id":"<urn:uuid:6861584e-654b-4394-97b5-2f331f20d279>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Math in Focus Activities for First Grade
One of my goals this year was to place an increased emphasis on math in my curriculum. This is difficult coming from a very very very literacy-oriented teacher. I like books and words and writing,
and I know how to do books and words and writing. Doing math – not so much. I just don’t think like a mathematician. I think more in brightly colored Sharpies and/or Flair Pens that like to draw
pictures and write stories and words.
But this year, my friends, with the adoption of Math in Focus (Singapore Math), I am changing my tune and trying to make our math block just as engaging as I make our literacy block.
So, here’s some “math things” we’ve done so far this year to support math in our classroom…
I couldn’t tell you who this idea was from, but someone out there in the www made a chart similar to this that inspired me. We’ll be adding more strategies throughout the year.
We’re using Cara’s Calendar Companion this year and, *omiword* if you don’t own it, buy it! A great resource to help me connect writing to math, organize my calendar time, and add a little pizazz to
my least favorite part of the day (sorry, calendar time drives me cah-razy). Anyway, to help ease my kiddos into the routine, I made a large laminated poster that I fill out as the kids are filling
theirs out. We are not doing every component of this just yet. I just filled it out for the sake of the picture :)
This space on my whiteboard contains a number-writing poem I borrowed from Debbie Diller’s Math Work Stations book (*love*) and our math vocabulary terms. Let me just tell you, that poem has really
helped my kiddos remember how to write their numbers! A parent even commented that her little girl was saying it while she was doing her homework.
I’ve just started introducing math work station activities. I used to get all bent out of shape over centers because they seemed sooooo time-intensive to come up with and create. However, now I just
kind of go with what manipulatives I have on hand, and keep them simple. { I’m sure someone else in the wide world of teaching has come up with similar activities, so please know I don’t think I’m
being innovative here!} The kids love the deviation from whole group activities, and they get 12 kinds of excited when they get to use – stop the world – buttons! connecting cubes! post-it notes!
playing cards! This is why I love my firsties so – genuine excitement about the small things:)
Since we’ve worked a lot on identifying numbers 1-10, writing the words 1-10, and representing numbers 1-10, the first activity is very “kindergarten.” However, Math in Focus starts out with a big
ol’ number review and I really appreciated the refresher for my kiddos.
Activity #1
This is a super quick activity that could also be used as a formative assessment, and will give you a good idea who’s still struggling with number identification/representation.
Give each child 10 sticky notes and a pile of manipulatives (we used buttons). Students should then label each note with a number 1-10. Then, beneath or on the note, have the student show that many manipulatives.
Activity #2 & 3: More is Better & Less is Best
We’re also working on identifying more/less. So, I created a recording sheet that corresponds to a fun card game that my kids love. I feel like this might be an Everyday Math game, too, but I
honestly don’t remember!
For this game, take a deck of cards (pkg of 2 for a $1 at Dollar Tree) and remove all the cards except 1-10. Students will then work in pairs to play a war-like game using the altered deck. Each
student draws a card and records what they drew on the response sheet. The winner, depending on what skill you want to focus on, is the student whose card is “more” or “less.” The winner of the turn
circles the number. Students play for 10 turns and then determine the winner as the student with the most circles (i.e. drew the larger or smaller number the majority of the turns).
You can download a copy of these recording sheets HERE.
Are you using Math in Focus? What do you think? We’re onto number bonds starting on Friday and I’m a wee little bit nervous!
Have a great day, friends :)
PS. If there are a gobzillion typos/errors/something that doesn’t make sense in this post, I’m going to blame it on the fact that I can’t keep my eyes open after 7:30pm. School is wearing me right
out! Ha!
41 comments:
Ahhh.... maybe today is the day I get to be first to comment on your awesome blog.
I love your math anchor chart!
We use a different math series, but I noticed you blogged about using Treasures! My school just adopted the program this year and it's a LOT to handle. Any suggestions?! I would love to hear
them. Even though I'm in 5th grade, I still appreciate the help!
WOW! Loved this post! I'm feeling a little hum-drum in the math department and you have provided some great inspiration! I am making your two anchor charts (the ways to solve and writing numbers)
this week!! LOVE THEM! Thanks!
I totally agree about calendar time. It makes me crazy too. I used to do it right away during morning routine, but I thought that was wasting time for more important tasks. Now I changed it to
after recess, before Math. It's okay, but I am still not totally feeling it. How much time do you allot to calendar time? how long does it take for your students to complete their portion of the
calendar book? I am trying to figure out if I should be having them do a calendar book to make it better or if it would just make it longer and not necessarily better. Aahhh! Love your blog, and
I would appreciate your thoughts!
Love your chart! I wish I had thought of making that anchor chart! - I am so stealing yours! :) Where did you say you got your markers?
I love Deb too! I created a few workstations so far based upon her workstation suggestions in her book and my kids love them! They are simplistic yet so meaningful!
You can see them here:
Our district doesn't use Math in Focus, but I did take an introductory class this summer on Singapore Math. One of the ideas I got from that was using beads/pipe cleaners to help students create
different ways to make a number. I have a set for each student each number is a different color. Here's a link to my blog if you are interested.
Love your ideas!
I LOVE the number-writing poem!!! I will def make one for my classroom! Thank you soooo much for the more/less sheet. I have had my kiddos play war and the page will be a great assessment!!!
:):):) I love your blog!
I like your more and less game because it is so simple. The meaningful center game makes kids THINK.
Thank you so much for your ideas! I am going to use activity 1 today!
I love the chart you made for Cara's Calendar Companion. I have just been filling one out on my Elmo projector each day with the students, but this is a visual that would be up all day :)
And, surprisingly, I have been doing every section of it with my firsties and I'm amazed at what they are picking up on already.
Love, love, love your Math strategy anchor chart. Thanks for sharing!
Fun In First
My daughter makes up rhymes for her little brothers on how to remember to write numbers...the one she made up today for the number 5 was, "Across the pier, take a dive, swim around to make a five."
I have a number packet (found online free somewhere) that uses some of these same rhymes...and some are different.
I'm starting the calendar companion next month. I wasn't sure what kind of kiddos I'd get, but they are so flippin' sweet and smart, I can't wait. Michelle
Fabulous in First
aahhhh! You are saving me!! I love all of your ideas.
I can't wait to make a Calendar poster like yours. I've been trying to do it together with my class on my "smartboard" device and it's just not going so smooth like.
and I second all of those who are loving your Math strategy anchor chart.
Thanks again!!
how do you write so neatly on those poster boards? My handwriting is AW-FUL when trying to write! Any tip/trick would be appreciated.
First of all, LOVE your blog.
My school is on the second year of Math in Focus (Singapore Math). Since it's year two I was more familiar with the content I wanted to jazz it up.
We just finished number bonds last week and I had the kiddos make number bonds out of construction paper. Each part was blue and the whole was yellow. Color coding helped. We used linking cubes
to move things around on our number bond.
Hope to see more great ideas! Thanks.
ahhhhhhh I too hatttte teaching math!
You are my favorite blog to stalk! I love your more is better/less is best game. I made another sheet (although not as cute as yours) for students who needed an extra challenge. I altered the
directions so 2 or 3 cards would have to be added together for the total.
Thank you so much for the freebie! I love playing games with my firsties in math. Then I send them for homework to get the families involved. I really appreciate the game mat!
My school is in the second year of using Math in Focus, and we are just feeling comfortable with using materials other than those straight from the series. I, along with my team, would LOVE
anything you create to supplement and/or enrich. Thanks for all your hard work!
I love Touch Math and borrow it from the special ed. teachers at my school. Our district just started Go Math because it aligns to the common core standards. I have always done math centers in my
classroom and will continue to do so. I have a few free math centers on my blog if you want to check them out.
Erica Bohrer's First Grade
I teach third grade and I LOVE this idea! I am going to make some higher number cards for some of my kids that need to just look at numbers. For others..addition...and others multiplication when
we get there! THANKS!
Love the anchor chart! I use Cara's math notebook too!!
WILD About First Grade!
Your anchor chart is crazy amazing!! It's the embodiment of the teaching craft. Keep inspiring!!
First Grade Factory
I love love love your charts. I had to pin them because everyone needs to see them. Thanks for sharing!
<>< Crystal
I loved the strategies for solving math problems poster soooo much! I stayed late at school yesterday just so I could make it! They are all things my students use with the math program we use.
Thanks so much for the inspiration, girl! Love your stuff!!
Ha! I saw that strategies chart somewhere on Pinterest and started adding strategies as I teach them this week. I'm 2nd, so mine is more like, Counting on, making 10, doubles, etc. But I LOVED
the idea of having the ideas on an anchor chart for them to refer to, and to remind me to remind them!
However, yours is RIDICULOUSLY cute and puts mine to shame! You have some chart-making mojo!
And yep. That Debbie Diller Math book is amazing. :)
Love the math game recording sheets! They dovetail nicely with our EveryDay Math game called Top-It. Thank you so much!
First Impressions
Great game Abby! We played this today in class and my kids really enjoyed it! I love any game you can play with cards!!! Thanks for sharing! It was a hit! :)
Your blog is so amazing! I just nominated you for the Versatile Blogger Award! Check it out:
Love all your math ideas! Thanks!
Ever tried EDC Calendar Math? http://www.edconline.net/
Makes really good use of calendar time, and goes well with MIF.
Love the anchor chart!!! I may have to make one :)
Love your math ideas and anchor chart!!! !
I see SpaceMan! heehhe!
Hi Abby-
Thanks for all of your wonderful creative ideas - love them!
A couple of questions: Are you able to whip out your super attractive anchor charts right in front of the kids, or do you do quick ones and then redo them for your photos? Just curious - b/c they
look awesome! Also, where did you get the font and border on this recording sheet?
I love your amazing blog and ideas! Everything is so focused and fun for the kiddos! I have a question- what is the cute font you used at the top of the better is best game? It looks like wish I
were taller, but I can't get it to be outlined. Thanks for your great blog! :)
Admiring the time and effort you put into your blog and detailed information you offer! I will bookmark your blog and have my children check up here often.
Random question for you....our school district is just starting the math in focus series....What chapter do you start teaching and practicing math fact tests( Achieving Facts Fluency)
Colorful instructional materials can help keep children interested in learning all these math equations. It is important for young children to find interest in these lessons.
Jojo @ kindergarten math worksheets
Thanks for all your great ideas! My first graders are switching to Math in Focus this august as well. Did you continue to post on your blog past chapter 1? I would love to hear more feedback on
how the program went. Thanks so much!
These are amazing ideas. They help me a lot.
Hypothesis Testing
Null Hypothesis ( H₀ )
Statement of zero or no change. If the original claim includes equality (<=, =, or >=), it is the null hypothesis. If the original claim does not include equality (<, not equal, >) then the null
hypothesis is the complement of the original claim. The null hypothesis always includes the equal sign. The decision is based on the null hypothesis.
Alternative Hypothesis ( H₁ or Hₐ )
Statement which is true if the null hypothesis is false. The type of test (left, right, or two-tail) is based on the alternative hypothesis.
Type I error
Rejecting the null hypothesis when it is true (saying false when true). Usually the more serious error.
Type II error
Failing to reject the null hypothesis when it is false (saying true when false).
Alpha ( α )
Probability of committing a Type I error.
Beta ( β )
Probability of committing a Type II error.
Level of Significance (Significance Level)
The probability of committing a type I error, also known as alpha. This is how unusual something must be before we call it too rare to have happened just by chance. The area in the critical region.
Probability Value (P-value)
The probability of getting the results obtained if the null hypothesis is true. If this probability is too small (smaller than the level of significance), then we reject the null hypothesis. If
the level of significance is the area beyond the critical values, then the probability value is the area beyond the test statistic.
Test statistic
Sample statistic used to decide whether to reject or fail to reject the null hypothesis.
Critical region
Set of all values which would cause us to reject H[0]
Critical value(s)
The value(s) which separate the critical region from the non-critical region. The critical values are determined independently of the sample statistics.
Significance level ( alpha )
The probability of rejecting the null hypothesis when it is true. alpha = 0.05 and alpha = 0.01 are common. If no level of significance is given, use alpha = 0.05. The level of significance is
the complement of the level of confidence in estimation.
Decision
A statement based upon the null hypothesis. It is either "reject the null hypothesis" or "fail to reject the null hypothesis". We will never accept the null hypothesis.
Conclusion
A statement which indicates the level of evidence (sufficient or insufficient), at what level of significance, and whether the original claim is rejected (null) or supported (alternative).
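The decision rule defined above can be sketched in Python using only the standard library; the z-statistic values below are illustrative, not from the glossary:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sided_p_value(z):
    # Probability of a test statistic at least this extreme if H0 is true.
    return 2.0 * (1.0 - normal_cdf(abs(z)))

def decision(p_value, alpha=0.05):
    # We either reject H0 or fail to reject it; we never "accept" H0.
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

p = two_sided_p_value(2.17)
print(round(p, 3), "->", decision(p))    # 0.03 -> reject the null hypothesis
print(decision(two_sided_p_value(1.0)))  # fail to reject the null hypothesis
```

Note the equivalence stated in the glossary: comparing the p-value to alpha gives the same decision as comparing the test statistic to the critical value(s).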
Table of contents
James Jones
st: Reduced Form with biprobit
From "Steven J. Carter" <sjcarter@uci.edu>
To statalist@hsphsun2.harvard.edu
Subject st: Reduced Form with biprobit
Date Thu, 08 May 2008 10:09:13 -0700
Dear Statalist,
I have a probit model with an endogenous dummy variable and I found some helpful hints on this thread (http://www.statacorp.com/statalist/archive/2007-05/msg00366.html.). I generated a data set to
see if biprobit recovers the parameters, and it doesn't. However, when I model the reduced form, (substituting the rhs variables in equation 2 for y2 in equation 1), and back out the structural
parameters, it is pretty close. So my question is, why does it work for the reduced form parameters, but not the recursive form? Also, wouldn't the reduced form be similar to the forbidden regression
that is discussed in Wooldridge's Cross Section and Panel Data text for nonlinear models?
Here is my do file
********* Confusing bivariate probit question ******
* Note: I generated the data in another program because I am not quite familiar with stata statistics codes yet.
* this is a simulated bivariate probit dataset with an endogenous regressor
* model specified as
* y1*=g*y2+x*b1+e1, x includes constant in both equations
* y2*=x*b2+d*z+e2, where e1, e2 are normal with zero means, unit variances and correlation p
* true values for parameters are g=0, b1=[-.5 .07]', b2=[0 -.1]', d=.7, p=.5
* do a bivariate probit as shown in greene (2003) pg 715-716
biprobit (y1= y2 x) (y2=x z)
* notice that the coefficients don't match the true values
* Now try a bivariate probit using reduced form:
* y1*=g*(b2*x+d*z)+b1*x
* =x*(g*b2+b1)+z*(g*d)
* =x*w1+z*w2
* y2*=b2*x+d*z+e2
biprobit (y1=x z) (y2=x z)
matrix define coef2=e(b)
* true reduced form parameters
matrix define rftrue=(.07, 0, -.5, -.1, .7, 0, .5)
matrix list coef2
matrix list rftrue
* Though estimating the reduced form parameters, w1 and w2 in the first equation, solving for the structural parameters (g, b1) yields results closer to the true parameters.
********* End confusing question**************
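For readers without Stata at hand, here is a rough Python sketch of the data-generating process described in the do-file comments. The constant/slope ordering inside b1 and b2, the sample size, and the distributions of x and z are my assumptions, not the poster's:

```python
import math
import random

random.seed(1)

# True parameters from the do-file comments:
# y1* = g*y2 + b1_const + b1_x*x + e1,  y2* = b2_const + b2_x*x + d*z + e2
g = 0.0
b1_const, b1_x = -0.5, 0.07
b2_const, b2_x = 0.0, -0.1
d, rho = 0.7, 0.5

n = 20000
y1s, y2s, e1s, e2s = [], [], [], []
for _ in range(n):
    x, z = random.gauss(0, 1), random.gauss(0, 1)
    e1 = random.gauss(0, 1)
    # e2 has unit variance and correlation rho with e1:
    e2 = rho * e1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    y2 = 1 if b2_const + b2_x * x + d * z + e2 > 0 else 0
    y1 = 1 if b1_const + g * y2 + b1_x * x + e1 > 0 else 0
    y1s.append(y1); y2s.append(y2); e1s.append(e1); e2s.append(e2)

# Sample correlation of the errors should come out near rho = 0.5:
m1, m2 = sum(e1s) / n, sum(e2s) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(e1s, e2s)) / n
sd1 = math.sqrt(sum((a - m1) ** 2 for a in e1s) / n)
sd2 = math.sqrt(sum((b - m2) ** 2 for b in e2s) / n)
corr = cov / (sd1 * sd2)
print(round(corr, 2), round(sum(y1s) / n, 2), round(sum(y2s) / n, 2))
```

A dataset generated this way can be exported and fed to `biprobit` to reproduce the comparison in the question.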
I appreciate any input on this.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Bob bought a glass pyramid from a gift store. The pyramid, with a height of 5 cm and a square base of side 6 cm, was packed in a box shaped as a cube of side 7 cm. The empty space in the box was
packed with paper tissue. How many cubic centimeters of the box were stuffed with paper tissue? A 283 cubic centimeters B 293 cubic centimeters C 313 cubic centimeters D 343 cubic centimeters
Helper: A, 283 cm³.

Asker: How did you get that, though?

Helper: The cube has volume s³ = 7³ = 343 cm³. The pyramid has volume Bh/3, where B is the area of the base and h is the height: B = 6·6 = 36 and h = 5, so the pyramid's volume is 60 cm³. The tissue fills the rest: V = 343 − 60 = 283 cm³.

Asker: What about this question? I got the answer B (8 times greater), but I keep second-guessing it. A floor has two square-shaped designs. The area of the second square-shaped design is sixteen times greater than the area of the first square-shaped design. Which statement gives the correct relationship between the lengths of the sides of the two squares? A The length of the side of the second square is 2 times greater than the length of the side of the first square. B The length of the side of the second square is 8 times greater than the length of the side of the first square. C The length of the side of the second square is 16 times greater than the length of the side of the first square. D The length of the side of the second square is 4 times greater than the length of the side of the first square.

Helper: D. The length of the side of the second square is 4 times greater than the length of the side of the first square.

Helper: Wait one sec, double-checking...

Helper: I got D as the answer again.

Asker: OK, and how did you get that?

Helper: The ratio of the sides is the square root of the ratio of the areas. The square root of 16 is 4, and 4·4 = 16.

Helper: I had to redo it, but I'm pretty sure that's right.

Asker: OK, thanks. I must have done something wrong :)

Helper: No problem. I do that all the time too :P

Helper: Wish you luck on your test. :)
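Both answers in the thread are easy to verify; a quick check using only the numbers from the two problems:

```python
import math

# Problem 1: tissue volume = cube volume - pyramid volume
cube = 7 ** 3                  # 7 cm box: 343 cm^3
pyramid = (6 * 6) * 5 / 3      # B*h/3 with B = 36, h = 5: 60 cm^3
tissue = cube - pyramid
print(tissue)                  # 283.0 -> answer A

# Problem 2: side ratio = square root of the area ratio
side_ratio = math.sqrt(16)
print(side_ratio)              # 4.0 -> answer D
```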
To the cosmos by nanotechnology
Posted: Aug 08, 2008
(Nanowerk Spotlight) Our title today refers to the 1960 article by Yuri Artsutanov in
Pravda: "To the Cosmos by Electric Train" (pdf download, 132 KB). This article is the
granddaddy of all 'space elevator' concepts and first to propose the idea that a
cable-based transport system could become an alternative to rockets for launching
people and payload into space.
Artsutanov wrote: "Take a little piece of string and attach to it a stone. Begin to
rotate this primitive sling. Under the influence of centrifugal force the stone will
try to pull itself away and tightly stretch the rope. Well, what will happen if one
fastens such a 'rope' to the Earth's equator and, having flung it far into the
cosmos, one hangs on it an appropriate load? Calculations show ... that if the 'rope'
is sufficiently long, then centrifugal force will also pull it out, not letting it
fall to Earth, just like the stone stretches out our string. Indeed, the Earth's
force of attraction lessens in proportion to the square of the distance, and
centrifugal force grows with the increase in distance. And already at a distance of
about 42,000 kilometers centrifugal force turns out to be equal to the force of attraction."
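Artsutanov's "about 42,000 kilometers" is the geostationary radius, where gravitational and centrifugal acceleration balance for an object rotating with the Earth. It can be checked from the balance μ/r² = ω²r, using the standard values for Earth's gravitational parameter and the sidereal day:

```python
import math

mu = 3.986004418e14        # Earth's gravitational parameter GM, m^3/s^2
T = 86164.0905             # sidereal day, s
omega = 2 * math.pi / T    # Earth's rotation rate, rad/s

# Gravity equals centrifugal acceleration: mu / r^2 = omega^2 * r
r = (mu / omega ** 2) ** (1.0 / 3.0)
print(round(r / 1000))     # ~42164 km, matching "about 42,000 kilometers"
```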
If you are interested in learning more about the Space Elevator project, take a look
at this video or visit the Spaceward Foundation's Elevator:2010 website.
The single most difficult task in building the Space Elevator is achieving the
required tether strength-to-weight ratio – in other words, developing a material that
is both strong enough and light enough to support the up to 100,000 km long tether.
Thanks to nanotechnology, this material has become available in the form of carbon
nanotubes (CNTs). The challenge ahead is to weave these raw CNTs into a useful form –
a space worthy climbable ribbon. Assembling carbon nanotubes into commercially usable
fibers is still one of the many challenges that nanotechnology researchers are faced
with when trying to exploit the amazing properties of many nanomaterials.
Before any physical construction of a cable can begin, and notwithstanding the huge
technical hurdles of building a space elevator cable, researchers must develop models
and perform simulations to identify the optimal size, shape and defect concentration
for such a cable.
Scientists at the Politecnico di Torino in Italy have now developed a new multiscale
numerical approach for simulating the mechanics of macroscopic cables composed of
carbon nanotubes.
"We carried out thousands of multiscale stochastic simulations in order to perform
the first in-silico tensile tests of CNT-based macroscopic cables with varying
length" Dr. Nicola Pugno tells Nanowerk. "The longest treated cable is the
space-elevator megacable but more realistic shorter cables are also considered in our
bottom-up investigation."
Pugno, an Associate Professor of Structural Mechanics, together with colleagues Dr.
Alberto Carpinteri and Dr. Federico Bosia, have demonstrated for the first time that
a computer model can simulate a kilometers-long CNT cable. They reported their
findings in a recent paper in Small ("Multiscale Stochastic Simulations for Tensile
Testing of Nanotube-Based Macroscopic Cables").
In their work, the Italian researchers simulate different sizes, shapes, and
concentrations of defects, resulting in cable macro strengths not larger than ∼10
GPa, which is much smaller than the theoretical nanotube strength of ∼100 GPa.
Schematic image of the adopted multiscale simulation procedure to determine the space elevator cable strength. Overall, the cable comprises a total number of nanotubes given by N_tot = (N_x · N_y)^k and, in order to obtain the correct number in the space-elevator cable, which can be estimated as N_tot ≈ 10^23, k = 5, N_x = 40 and N_y = 1000 was chosen. (Reprinted with permission from Wiley)
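The nanotube count implied by the caption's parameters checks out against the ~10^23 estimate:

```python
# N_tot = (Nx * Ny)^k with k = 5, Nx = 40, Ny = 1000
k, nx, ny = 5, 40, 1000
n_tot = (nx * ny) ** k
print(f"{n_tot:.3e}")   # 1.024e+23, i.e. ~10^23 nanotubes
```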
Pugno explains that there are no best-fit parameters present in their multiscale
simulations: the input at level 1 is directly estimated from nanotensile tests of
CNTs, whereas its output is considered as the input for the level 2, and so on up to
level 5, corresponding to the megacable. "Thus, five hierarchical levels are used to
span lengths from that of a single nanotube of about 100 nanometers to that of the
space-elevator megacable of about 100,000 kilometers."
To numerically evaluate the strength of the space-elevator cable, the authors adopted
the SE3 code, formerly proposed by Pugno in a 2006 paper ("On the strength of the
carbon nanotube-based space elevator cable: from nanomechanics to megamechanics").
With this model, stress–strain curves, Young’s modulus, number and location of
fractured fibers, kinetic energy emitted, fracture energy absorbed, and so on, can be
computed, in addition to the cable failure stress.
"In this paper, preliminary simulations were carried out on a small piece of the
space-elevator cable – basically our level 1 results in the current paper –
postponing a detailed and hierarchical investigation as the main topic of a
subsequent, that is, the present, paper" says Pugno.
Multiscale simulations are necessary in order to tackle the size scales involved,
spanning over 15 orders of magnitude from nanotube length to space-elevator cable
length, and also to provide useful information about cable scaling properties with
With regard to the concept of the proposed space-elevator cable, the Politecnico
team's results are sobering news. "Our simulations imply that the megacable strength
is predicted to be much smaller than the theoretical nanotube strength (∼100 GPa),
erroneously assumed previously in the space-elevator design," says Pugno.
"Accordingly" he continues, "the multiscale simulations suggest a taper ratio (the
ratio between the maximum cross-sectional area, at the geosynchronous orbit, and the
minimum, at the Earth’s surface) larger than 613, in spite of the current naïve
proposal of 1.9."
The results attained by Pugno and his colleagues strongly confirm the previous
theoretical predictions on the dramatic role expected to be played by even small
defects in the megacable.
"Our predictions are conservative, since we have assumed perfect junctions between
the nanotubes" Pugno points out. "The junctions are expected to be the weakest links,
even if advanced nanotechnology – which doesn't exist yet – could lead to nearly
perfect interconnects, i.e., junctions with a strength that is larger than that of
the nanotubes themselves."
Rather than reaching for the stars with a space elevator cable, the Politecnico team
is now focusing on the scaling of nominal properties such as strength, dissipated
energy density, etc., which is is a major issue in developing realistic models for
the behavior and attributes of nanofibers and nanotubes.
By Michael Berger. Copyright Nanowerk LLC
Subscribe! Receive a convenient email notification whenever a new Nanowerk
Nanotechnology Spotlight posts.
Become a Spotlight guest author! Have you just published a scientific paper or have
other exciting developments to share with the nanotechnology community? Let us know
Ulrich Oertel
Spring, 2013
Office - Smith Hall 219
Acting Chair this semester only.
Course web pages on Blackboard
Office Phone - 973-353-5043; Email - oertel@rutgers.edu
Department of Mathematics and Computer Science, Rutgers-Newark
Current Research
Automorphisms of 3-manifolds
The most interesting automorphisms (self-homeomorphisms) of 3-manifolds occur in boundary-reducible and reducible manifolds. The paper Automorphisms of 3-dimensional handlebodies develops a theory of
automorphisms of handlebodies and compression bodies analogous to the Nielsen-Thurston theory of autmorphisms of surfaces. In the theory, the analogue of the pseudo-Anosov automorphism is the generic
automorphism of a handlebody or compression body, and generic automorphisms turn out, as one would expect, to be the most interesting and mysterious. Automorphisms of irreducible 3-manifolds with
non-empty boundary can then be understood by combining the new methods with older methods in 3-manifold topology, involving the Jaco-Shalen-Johannson characteristic submanifold and Bonahon's
characteristic compression body. Much remains to be understood about generic automorphisms, which are currently being studied further by former Rutgers graduate student Leonardo Navarro de Carvalho.
In recent work with Carvalho, the ideas used for the classification of automorphisms of handlebodies and compression bodies are extended to classify, in a certain sense, the automorphisms of any
compact 3-manifold (satisfying the Thurston Geometrization Conjecture).
Automorphisms of 3-dimensional handlebodies and compression bodies, Topology 41 (2002) 363-410: Postscript file PDF file
A classification of automorphisms of compact 3-manifolds, with Leonardo Navarro de Carvalho. Math arXiv
Mapping class groups of compression bodies and 3-manifolds
In work related to the classification of automorphisms of 3-manifolds, I obtain a short exact sequence relating various mapping class groups (diffeotopy groups) associated to a compression body of
arbitrary dimension. This is applied to obtain a short exact sequence for the mapping class group of a reducible 3-manifold M. The sequence gives the mapping class group of the disjoint union of
irreducible summands of M as a quotient of the entire mapping class group of M. There are further applications.
Mapping class groups of compression bodies and 3-manifolds. Math arXiv
Normal surfaces
Charalampos Charitos and I have a new normal surface theory for incompressible (but not necessarily boundary incompressible) surfaces in irreducible (but not necessarily boundary irreducible) compact
3-manifolds with boundary. It may be possible to apply this theory to obtain better invariant lamination-like objects for generic automorphisms of handlebodies and compression bodies.
Essential disks and semi-essential surfaces in 3-manifolds. Math arXiv
Contact structures and contaminations in 3-manifolds
In a project with Jacek Swiatkowski (Wroclaw University) I am studying contact structures using branched surface techniques. A natural generalization of the contact structure and the confoliation is
the contamination, an object which can be carried by a branched surface. Contact structures can be represented by contaminations. The goals of this research are 1) to understand contact structures
combinatorially; 2) to understand the relationships between foliations, confoliations, contact structures, laminations, and contaminations; and 3) ultimately to apply the methods to obtain
information about 3-manifold topology.
A contamination carrying criterion for branched surfaces, Preprint, 2001. Math arXiv
Contact structures, sigma-confoliations, and contaminations in 3-manifolds, Preprint, 2002. Math arXiv
Some Papers
• On the existence of infinitely many essential surfaces of bounded genus, Pac. J. Math. 202, No. 2, 2002, 449-458.
• Spaces which are not negatively curved, joint with Lee Mosher. Communications in Analysis and Geometry, 6 , No1, (1998), 67-140.
• Two-dimensional measured laminations of positive Euler characteristic, joint with Lee Mosher. Oxford Q. J. Math. 52 (2001), no. 2, 195--216.
• Invariant laminations for diffeomorphisms of handlebodies, Oxford Quarterly Journal of Mathematics, 51, No. 4, (2000), 485-507.
• Boundaries of π1-injective surfaces, Topology and its Applications 78 (1997), 215-234.
• Affine foliations and covering hyperbolic structures, joint with Athanase Papadopoulos, Manuscripta Mathematica 104 (2001) 383-406.
• Affine laminations and their stretch factors, Pacific Journal of Mathematics 182, No. 2, (1998), 303-327.
• Full laminations in 3-manifolds, joint with Allen Hatcher, Math. Proc. Camb. Phil. Soc. 119 (1996), 73-82.
• Affine lamination spaces for surfaces, joint with Allen Hatcher, Pac. J. of Math. 154 (1992), No.1, 87-101.
• Boundary slopes for Montesinos knots, joint with Allen Hatcher, Topology 28 (1989), 453-480.
• Essential laminations in 3-manifolds, joint with David Gabai, Annals of Mathematics 130 (1989), 41-73.
• Sums of incompressible surfaces, Proceedings AMS 102 (1988) No.3, 711-719.
• Measured laminations in 3-manifolds, Transactions AMS. 305 (1988) No. 2, 531-573.
• Homology branched surfaces: Thurston's norm on H2(M^3), LMS Lecture Note Series 112, Low-dimensional Topology and Kleinian Groups, Editor D.B.A. Epstein, 1986.
• An algorithm to decide if a 3-manifold is a Haken manifold, joint with William Jaco, Topology 23 (1984), 195-209 .
• Incompressible branched surfaces, Invent. Math. 76 (1984), 385-410.
• Closed incompressible surfaces in complements of star links, Pac. J. of Math. 111 (1984), 209-230.
• Incompressible surfaces via branched surfaces, joint with William Floyd, Topology 23 (1984), 117-125.
change in area over an isosceles triangle
November 6th 2008, 04:26 AM #1
**posted this a minute ago but in the wrong forum section***
I am trying to determine the formula for dA/dH for an isosceles triangle with a base of 50mm and a height of 30mm. I think it has something to do with the changing area of a trapezoid as the
angles from the base to the sides must remain the same.
Using that idea the A(trap) = h[(b1+b2)/2] but I'm not sure how to relate the top surface (b2) with the change in h.
Any help would be appreciated!
**posted this a minute ago but in the wrong forum section***
I am trying to determine the formula for dA/dH for an isosceles triangle with a base of 50mm and a height of 30mm.
Mr F says: The area of this triangle is constant so its rate of change is zero.
I think it has something to do with the changing area of a trapezoid as the angles from the base to the sides must remain the same.
Mr F says: Where does the trapezoid come from??
Using that idea the A(trap) = h[(b1+b2)/2] but I'm not sure how to relate the top surface (b2) with the change in h.
The final height will be 30mm. However the height is growing from 0-30mm.
The trapezoid is coming from when, say, the height = 10mm. I am going to have a base of 50mm, 2 equal sides, and a top horizontal of some length. The top horizontal will decrease from 50mm to 0mm
as the shape approaches the final triangle shape.
Sorry if that's confusing, it's hard without being able to draw pictures.
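For what it's worth, the poster's trapezoid idea can be checked numerically. A quick Python sketch (the linear top-width relation b2(h) = 50 − (50/30)h is an assumption following the similar-triangles reasoning above):

```python
# Trapezoid interpretation from the thread: fixed base b1 = 50 mm, base angles
# fixed, final height 30 mm, so the top width b2 shrinks linearly to 0 at h = 30.
def top_width(h):
    return 50.0 - (50.0 / 30.0) * h          # b2(h)

def area(h):
    return h * (50.0 + top_width(h)) / 2.0   # A = h*(b1 + b2)/2

# dA/dh via a central difference agrees with the current top width b2(h):
# the area grows at a rate equal to the width of the advancing edge.
h, eps = 10.0, 1e-6
numeric = (area(h + eps) - area(h - eps)) / (2 * eps)
print(round(numeric, 3), round(top_width(h), 3))
```

At h = 30 the rate drops to zero: once the triangle is complete, the area stops changing, which is consistent with Mr F's remark above.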
The Restriction of Unitary Highest Representations of Reductive Lie Groups and its Multiplicities,
International Conference, Summer Solstice Days, Université de Paris VI & VII, France, June 1999.
Let π be a unitary highest weight module of a semisimple Lie group G, and (G,G') a semisimple symmetric pair. I will talk about the following theorems:
i. If π has a one-dimensional minimal K-type, then the restriction π|_{G'} is multiplicity free as a G'-module.
ii. If the natural map G'/K' ↪ G/K is holomorphic between Hermitian symmetric spaces, then the restriction π|_{G'} decomposes discretely into irreducible representations of G' with
multiplicity uniformly bounded.
The main results present a number of new settings of multiplicity-free branching laws and also give a unified explanation of various known multiplicity-free results such as the Kostant-Schmid
K-type formula of a holomorphic discrete series representation and the Plancherel formula for a line bundle over a Hermitian symmetric space.
I would also like to discuss the fact that such an estimate does not hold in general for the restriction of non-highest weight modules.
Our approach also yields an analogous result for the tensor product of unitary highest weight modules, and for finite dimensional representations of compact groups.
© Toshiyuki Kobayashi
FOM: Complexity Theory
Joseph Shoenfield jrs at math.duke.edu
Wed Aug 26 17:23:32 EDT 1998
Martin Davis writes:
>I disagree with Joe when he says that the methods (of complexity
theory and recursion theory) are similar. My sense (and I'm not such an
expert) is that the arguments in complexity theory tend to be
down-and-dirty combinatorial in nature.
Martin, you are as much an expert on this question as I am, and you
are certainly right about much of complexity theory, e.g., Cook's
ground-breaking result that the satisfiability problem for the propositional
calculus is NP-complete. But there are some exceptions. I spent some
time looking at complexity theory about ten years ago (only yesterday).
The result which pleased me most was a theorem of Mahaney, which says
that if P=NP is false, no sparse set is NP-complete. A sparse set is a
set of words such that the number of words of length n in the set is
bounded by a polynomial in n. This is clearly in the spirit of Post's
program to use the size of a set to show that it is not complete in some
sense. Unlike all of Post's results, it uses smallness rather than
largeness. The proof is quite pleasing; difficult enough to be
interesting, but intuitive and clear in its basic ideas. The techniques
include some standard techniques, some techniques developed by other
complexity theorists in proving weaker versions of the result, and some
techniques which he developed. If I had found more theorems like this, I
might have been tempted to do some research in complexity theory.
More information about the FOM mailing list
petsc-dev 2014-04-20
Creates a parallel vector with ghost padding on each processor; the caller allocates the array space. Indices in the ghost region are based on blocks.
#include "petscvec.h"
PetscErrorCode VecCreateGhostBlockWithArray(MPI_Comm comm,PetscInt bs,PetscInt n,PetscInt N,PetscInt nghost,const PetscInt ghosts[],const PetscScalar array[],Vec *vv)
Collective on MPI_Comm
Input Parameters
comm - the MPI communicator to use
bs - block size
n - local vector length
N - global vector length (or PETSC_DECIDE to have calculated if n is given)
nghost - number of local ghost blocks
ghosts - global indices of ghost blocks (or NULL if not needed), counts are by block not by index, these do not need to be in increasing order (sorted)
array - the space to store the vector values (as long as n + nghost*bs)
Output Parameter
vv - the global vector representation (without ghost points as part of the vector)
Use VecGhostGetLocalForm() to access the local, ghosted representation of the vector.
n is the local vector size (total local size not the number of blocks) while nghost is the number of blocks in the ghost portion, i.e. the number of elements in the ghost portion is bs*nghost
See Also
VecCreate(), VecGhostGetLocalForm(), VecGhostRestoreLocalForm(),
VecCreateGhost(), VecCreateSeqWithArray(), VecCreateMPIWithArray(), VecCreateGhostWithArray(), VecCreateGhostBlock()
Index of all Vec routines
Table of Contents for all manual pages
Index of all manual pages
Ohm's Law Experiment #2
The purpose of this experiment is to verify Ohm's Law and to show that, irrespective of the value of the resistance, the relationship between the voltage and the current is always linear.
The experiment setup consists of a simple circuit with a variable resistance and a simple power source. The diagram above shows the circuit.
You are encouraged to choose a voltage range and then vary the value of the resistance over the range. As expected from Ohm's Law, it will be seen that, independent of the voltage range and value of
the resistance chosen, the relationship between the voltage and the current is always linear.
V = IR
Ohm's Law deals with the relationship between voltage and current in an ideal conductor. This relationship states that the potential difference (voltage) across an ideal conductor is proportional to
the current through it. The constant of proportionality is called the "resistance", R.
Mathematically this law is expressed as:
V = I R
V = Voltage (the potential difference across the resistance).
I = Current flowing through the resistance.
R = Resistance.
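As a quick numerical illustration of the law (the resistance and voltage values here are arbitrary, chosen only for the sketch):

```python
# Sweep the voltage across a fixed resistance and check that I = V/R is linear:
# the ratio I/V is the same constant (1/R) at every point of the sweep.
def current(v, r):
    return v / r

R = 220.0                        # ohms, arbitrary for this sketch
voltages = [1.0, 2.0, 4.0, 8.0]
currents = [current(v, R) for v in voltages]
ratios = [i / v for i, v in zip(currents, voltages)]
print(ratios)
```

Doubling the voltage doubles the current, whatever R is; only the slope of the V-I line changes with the resistance.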
MathGroup Archive: June 2002 [00229]
Integrate using UnitStep[*]
• To: mathgroup at smc.vnet.net
• Subject: [mg34901] Integrate using UnitStep[*]
• From: michael_chang86 at hotmail.com (Michael Chang)
• Date: Wed, 12 Jun 2002 02:15:33 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
I recently read with interest the postings on how to define piecewise
continuous functions of, say, one variable using UnitStep or Condition
(/;). Since I need to perform integration, I've tried to use UnitStep
in the simple functions f[x] and g[x] defined below using Mathematica
4.1 for Winduh XP (x,y are both supposed to be Real numbers).
In[1]:= f[x_]:=1/40(4(-100+x)UnitStep[x-100]+(355-4x)UnitStep[x-90]+(45-4x)UnitStep[x-10]+4x)
In[2]:= g[x_]:=UnitStep[x-500];
A couple of things perplex my little mind ... the first being that
In[3]:= Integrate[f[x],{x,0,5000}]
works and correctly evaluates to 100, but
In[4]:= Integrate[f[x],{x,0,\[Infinity]}]
remains unevaluated due to warnings such as
Out[4]:= Integrate::idiv : Integral of -100+x does not converge on
{100,\[Infinity]} (I've cut out the other Integrate warnings for brevity.)
It appears that Mathematica is trying to actually integrate the
individual UnitSteps, thereby resulting in some indeterminate results
... is this correct, and if so, is there a way around this?
Secondly, I want to integrate f[x-y]*g[x] over x (I kinda want
something like convolution, but not quite ...), so I define
In[5]:= ans=Integrate[f[x-y] g[x],{x,0,5000}];
However when I try something like
In[6]:= Plot[ans,{y,0,1000}];
I get a NIntegrate::inum warning *and* a Graphics output, but I get no
warning *and* a Graphics output when I try
In[7]:= Plot[Evaluate[ans],{y,0,1000}];
I was just wondering why do these two situations arise? Again, the
collective wisdom of this group is most appreciated!
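For what it's worth, the In[3] result can be cross-checked outside Mathematica. A small Python sketch (assuming the truncated In[1] definition simply closes after the 4x term; `unit_step(0) = 1`, matching Mathematica's UnitStep[0]):

```python
# f is piecewise linear with kinks at x = 10, 90, 100 and vanishes for x >= 100,
# so a trapezoidal sum whose grid hits the kinks reproduces the integral exactly.
def unit_step(x):
    return 1.0 if x >= 0 else 0.0

def f(x):
    return (4*(x - 100)*unit_step(x - 100) + (355 - 4*x)*unit_step(x - 90)
            + (45 - 4*x)*unit_step(x - 10) + 4*x) / 40.0

n, a, b = 15000, 0.0, 150.0      # step 0.01 puts grid points on every kink
h = (b - a) / n
total = h * ((f(a) + f(b)) / 2 + sum(f(a + k*h) for k in range(1, n)))
print(round(total, 6))           # ~ 100.0
```

Since f vanishes for x ≥ 100, the integral over [0, 150] already equals the integral over [0, 5000], consistent with In[3] returning 100.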
coin flips
Saturday, March 28th, 2009
Say you want to flip a perfect coin, but the only coins available are imperfect. First, let’s consider the case where we don’t know the details of the available coins, and would prefer not to measure
The simplest strategy is to flip a bunch of coins and determine whether the total number of heads is even or odd. As long as there’s a reasonable number of both heads and tails, the probability of an
even number of heads will be close to one half.
Here is a measure of "coin perfection" which behaves additively with respect to the parity algorithm: $G(p) = -\log|2p-1|$.

I.e., if you have two coins with probabilities of heads $p$ and $q$, and $r$ is the probability that flipping both coins gives the same result, then $G(r) = G(p) + G(q)$, since

$e^{-G(r)} = |2r-1| = |2(pq+(1-p)(1-q))-1| = |4pq-2p-2q+1| = |(2p-1)(2q-1)| = e^{-G(p)} e^{-G(q)}$

In particular, $G(0)=G(1)=0$ and $G(1/2)=\infty$ as one would expect; flipping a deterministic coin gets you no closer to perfection, and flipping a single perfect coin makes the parity perfectly random regardless of the other coins.
If we know the exact probability of heads $p$, we can do better than this and produce a perfect coin in a finite expected number of flips. By information theory the best we can hope for is $1/H(p)$ where

$H(p) = -p\log_2 p - (1-p)\log_2(1-p)$

is the entropy of the coin. Unfortunately determining the exact minimum appears rather complicated, so I'm not going to try to finish the analysis right now.
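A few lines of Python make the additivity concrete (here I take G(p) = −log|2p − 1|, so a deterministic coin has G = 0 and a perfect one G = ∞):

```python
import math

# G(p) = -log|2p - 1|: zero for a deterministic coin, infinite for a perfect one.
def G(p):
    return -math.log(abs(2.0 * p - 1.0))

# Probability that two independent coins agree, i.e. the number of heads is even.
def parity_prob(p, q):
    return p * q + (1.0 - p) * (1.0 - q)

p, q = 0.7, 0.6
r = parity_prob(p, q)
print(r, G(r), G(p) + G(q))   # G is additive under the parity combination
```

The bias |2p − 1| simply multiplies across coins, so the parity of many flips approaches a fair coin exponentially fast.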
[FOM] The Minimal Model of ZF
Ilya Tsindlekht eilya497 at 013.net
Wed Jan 2 07:08:17 EST 2008
On Tue, Jan 01, 2008 at 12:49:43AM -0500, joeshipman at aol.com wrote:
> This is as parsimonious as a set theory can be. But there is a
> confusing issue. When there IS a standard model, so that M is a set,
> every element of M has a name. If there is no standard model, M is a
> proper class, so there are way too many things in M to correspond to
> countably many formulas. I'd like to ask what is the first set in M
> which does not have a name, but this runs into the usual paradoxes. On
> the other hand, if M is a set, then internally M satisfies V=M, and
> there IS a first set in M which, externally, we can name, but which M
> does not "know" has a name.
For each element of M M has a formula naming it. It just doesn't have
the function which maps each such formula to the defined element.
More information about the FOM mailing list
07-31-2008 #1
Registered User - Join Date: Jul 2008
I am trying to figure out this problem
consider this recurrence relation
f(n)=2*f(n-1)+f(-2) for n>2
write a recursive function to compute f.
I have some code that I have been working on. My question is.
How do u get the function to return a value that it computed back to itself to be used again .
When i run this for f(2) and f(1)it comes out right, but for anything else its wrong ( or at least i think.)
can anyone tell me what f(3) and f(4) are so i know what to look for? Also, give me a hint as to what I am doing wrong in the code?
#include <iostream>
using namespace std;

int computef(int n)
{
    int answer;
    if ((n=2)||(n=1))
        return 2;
    answer = 2*computef(n-1)+computef(n-2);
    cout << answer;
}

int main()
{
    int x;
    cout << "Please input the number for calculation ";
    cin >> x;
    cout << computef(x);
    return 0;
}
How do u get the function to return a value that it computed back to itself to be used again .
Just return the value.
can anyone tell mat f(3) and f(4) are so i know what to look for?
I presume that you have a typographical error and thus f(-2) is actually f(n-2).
f(3) = 2 * f(3-1) + f(3-2) = 2 * f(2) + f(1) = 2 * 2 + 2 = 6
From the above calculation, do you understand how the recurrence relation works now?
Oh wait, the real problem (other than failing to return a value) is that you wrote ((n=2)||(n=1)) instead of ((n==2)||(n==1)).
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Holy smokes Batman!!!
That worked. So when do u use something like n=2 vs n==2
also, when u use "return" it return the value to the calling function to use it again if the conditions are not met?
revised code.
#include <iostream>
using namespace std;

int computef(int n)
{
    int answer;
    if ((n==2)||(n==1))
        return 2;
    answer = 2*computef(n-1)+computef(n-2);
    return answer;
    cout << answer;
}

int main()
{
    int x;
    cout << "Please input the number for calculation ";
    cin >> x;
    cout << computef(x);
    return 0;
}
So when do u use something like n=2 vs n==2
The former is used in assignment (or as part of the syntax for initialisation), the latter is used for comparing for equality.
when u use "return" it return the value to the calling function to use it again if the conditions are not met?
Yes. Of course, whether some conditions are met or otherwise does not matter: from the point of view of the function, you are just returning a value.
revised code.
I suggest that you indent your code more consistently. Also remove the cout << answer; since it is not needed and unreachable anyway.
Thank you so much for the help. U made my night, i was working on this one problem for like 3 and a half hours
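For anyone checking values, the recurrence is tiny in Python too (assuming the same base cases f(1) = f(2) = 2 as the thread's C++ version):

```python
# f(n) = 2*f(n-1) + f(n-2) with f(1) = f(2) = 2, matching the C++ in this thread.
def f(n):
    if n == 1 or n == 2:
        return 2
    return 2 * f(n - 1) + f(n - 2)

print([f(n) for n in range(1, 6)])   # [2, 2, 6, 14, 34]
```

This confirms laserlight's hand calculation: f(3) = 6, and then f(4) = 2*6 + 2 = 14.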
barrel explosion radius? - Doomworld Forums
Junior Member (Soundblock)

Anyone ever calculate the exact outer limit of the barrel explosion's effective radius? I've been to http://doomwiki.org/wiki/Barrel but it doesn't go there. The number and nature of the gradients of damage as you stand closer to the erupting barrel is also of interest.

I guess I could set up a test map but I'm currently busy finale stages testing of Plasmaplant so I thought I'd just throw the quick question out there...
anARCHy (RjY)

code:
dx = abs(thing->x - bombspot->x);
dy = abs(thing->y - bombspot->y);
dist = dx>dy ? dx : dy;
dist = (dist - thing->radius) >> FRACBITS;
if (dist < 0)
    dist = 0;
if (dist >= bombdamage)
    return true; // out of range

which means it looks at the larger of the two differences in coordinates between you and the barrel. I forget what they call this metric (supremum metric is it?) but for an "explosion radius" I think it gives you a square of side 2*bombdamage centred on the barrel.

(At least that's the ideal case. There are probably complications introduced by scanning the blockmap...)

Edit: I should say, in the case of a barrel, bombdamage is 128
Junior Member (Soundblock), quoting RjY's post above:

Danx dyde!
Forum Legend (Memfis)

Cybermind from the Russian forums calculated this a long time ago and somehow I still had that *.txt on my drive:

this is assuming that the player and the barrel are on the same X or Y coordinate. The first number is the distance, but I'm not sure if it's the distance from the center of the barrel to the center of the player or maybe the distance between their bounding boxes?

136 - 8 damage
140 - 4 damage
142 - 2 damage
143 - 1 damage
144 or more - no damage
Junior Member (Soundblock), quoting Memfis's table above:

Wow, if that's true it peters out rather abruptly. I thought it might have a randomizer added on the end of that - like the weapons damage... I mean sometimes shots seem to "crit", like when you one-shot a lost soul with a singlebarrel...

EDIT: Uhm, I meant two singlebarrel shots to down a lost soul, not one. My apologies, I blame my brain being sweaty at the time, from pre-christmas level design crunch.

Last edited by Soundblock on 12-27-13 at 17:55
Why don't I have a custom title by now?!

Basically, the maximum damage is equal to the maximum distance, which is 128 units. If you are 128 units away from the exploding barrel, you take 128-128=0 damage. 127 units away? 1 damage. 8 units away? 120 damage.

Your radius is taken into account, and a player's radius is 16. So your player's center point needs to be 128+16=144 units away to get no damage.

And the distance being computed simplistically only as the greater of the two distances on the X and Y axes means that if you are 100 units away on the X axis and 100 units away on the Y axis, you aren't further away than if you are 100 units away on either axis and 0 units away on the other, even if trigonometry would tell you that in the former case, you are actually 141 units away from the center of the explosion. So the apparent distance, depending on the angle, can be greater than the effective distance.

Note that 128 damage/128 distance means that this is identical to a rocket's splash damage. This is the standard splash damage in Doom; only the arch-vile's blast differs. The rocket does deal an additional amount of random damage, but only if you get a direct hit -- the splash damage itself is never random and depends only on the distance.

Of course, some ports allow changing that. For example, ZDoom lets you define a distance within which full damage is applied, and you can use random values for the parameters.
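The behaviour described in this thread can be sketched in Python (my own port, not engine code; function and parameter names are mine). The range check follows RjY's snippet, and the linear falloff damage = bombdamage − dist follows the explanation above:

```python
# Chebyshev (max-of-axes) distance, minus the target's radius, subtracted from
# bombdamage -- reproducing Cybermind's table for a player (radius 16).
def splash_damage(dx, dy, target_radius=16, bombdamage=128):
    dist = max(abs(dx), abs(dy)) - target_radius
    dist = max(dist, 0)
    if dist >= bombdamage:
        return 0                    # out of range
    return bombdamage - dist

for d in (136, 140, 142, 143, 144):
    print(d, splash_damage(d, 0))   # 8, 4, 2, 1, 0 -- matching the table
```

Standing 100 units away on both axes gives the same damage as 100 units away on one axis, illustrating the square "radius" rather than a circular one.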
Senior Member

Is the barrel's own radius taken into account, and could you therefore increase the damage range of the explosion in vanilla by increasing the radius?
etd AT Indian Institute of Science: Dynamics Of Activated Processes Involving Chain Molecules
Title: Dynamics Of Activated Processes Involving Chain Molecules
Authors: Debnath, Ananya
Advisors: Sebastain, K L
Chain Molecules
Activated Processes
Keywords: Polymer Chains
Polymer - Dynamics
Star Polymer
Long Chain Polymer
Short Chain Polymer
Submitted Jun-2007
Series/ G21640
Report no.:
This thesis presents our recent study of a few interesting problems involving activated processes. This chapter gives an overview of the thesis. It is now possible to do single molecule
experiments involving enzyme molecules. The kinetics of such reactions exhibits dynamic disorder associated with conformational changes of the enzyme-substrate complex. The static
disorder and dynamic disorder of reaction rates, which are essentially indistinguishable in ensemble-averaged experiments, can be determined separately by the real-time single-molecule
approach. In our present work we have given a theoretical description of how the rate of reactions involving dynamic disorder is studied using a path integral approach. It is possible to write
the survival probability and the rate of the process as path integrals and then use variational approaches to get bounds for both. Though the method is of general validity, we illustrate
it in the case of electronic relaxation in a stochastic environment modeled by a particle experiencing diffusive motion in a harmonic potential in the presence of a delta function sink. The exact
solution of the corresponding Smoluchowski equation was found earlier[1] analytically in the Laplace domain for a sink of arbitrary strength and position. Exact evaluation of the path integral
calculation of the survival probability is not possible analytically. Wolynes et al.[2] have done an approximate calculation to get bounds on the survival probability in the Laplace domain.
A bound in the Laplace domain is not as useful as a bound in the time domain, and hence we use a direct approximate variational path integral technique to calculate both lower and upper
bounds on the survival probability in the time domain. We mimic the delta function sink by a quadratic sink for which the path integral can be solved exactly. The strength of the quadratic sink is
treated as a variational parameter and, using the optimized value for it, one can estimate optimized lower as well as upper bounds on the survival probability. We have also calculated a
lower bound to the rate. The variational results are compared with the exact ones, and it is found that the results for the two-parameter case are better than those of the one-parameter
case. To understand how good our approximation is, we calculate the bounds on the survival time and find them to be in good agreement with the exact results. Our approach is valid for any
arbitrary initial distribution that one may start with. We consider the Kramers problem for a long chain polymer trapped in a biased double well potential. Initially the polymer is in
the less stable well and it can escape from this well to the other well by the motion of its N beads across the barrier to attain the configuration having lower free energy. In one
dimension we simulate the crossing and show that the results are in agreement with the kink mechanism suggested earlier. In three dimensions, it has not been possible to get analytical
“kink solution”for an arbitrary po-tential; however, one can assume the form of the solution of the non-linear equation as a kink solution and then find a double well potential in three
dimensions. To verify the kink mechanism, simulations of the dynamics of a discrete Rouse polymer model in a double well in three dimensions were done. We find that the time of crossing
is proportional to the chain length which is in agreement with the results of kink mechanism. The shape of the kink solution is also in agreement with the analytical solution in both one
and three dimensions. We then consider the dynamics of a short chain polymer crossing over a free energy barrier in space. Adopting the continuum version of the Rouse model, we find
exact expressions for the activation energy and the rate of crossing. For this model, the analysis of barrier crossing is analogous to semiclassical treatment of quantum tunneling.
Finding the saddle point for the process requires solving a Newton-like equation of motion for a fictitious particle. The analysis shows that short chains would cross the barrier as a
globule. The activation free energy for this would increase linearly with the number of units N in the polymer. The saddle point for longer chains is an extended conformation, in which
the chain is stretched out. The stretching out lowers the energy and hence the activation free energy is no longer linear in N . The rates in both the cases are calculated using a
multidimensional approach and analytical expressions are derived using a new formula for evaluating the infinite products. However, due to the harmonic approximation made in the
derivation, the rates are found to diverge at the point where the saddle point changes over from the globule to the stretched out conformation. The reason for this is identified to be
the bifurcation of the saddle to give two new saddles. A correction formula is derived for the rate in the vicinity of this point. Numerical results using the formulae are presented. It
is possible for the rate to have a minimum as a function of N . This is due to the confinement effects in the initial state. We analyze the dynamics of a star polymer of F arms confined
to a double well potential. Initially the molecule is confined to one of the minima and can cross over the barrier to the other side. We use the continuum version of Rouse-Ham model. The
rate of crossing is calculated using the multidimensional approach due to Langer [3]. Finding the transition state for the process is shown to be equivalent to the solution of Newton’s
equations for F independent particles, moving in an inverted potential. For each star polymer, there is a critical total length NTc below which the polymer crosses over as a globule.
The value of NTc depends on the curvature at the top of the barrier as well as the individual arm lengths. So we keep the lengths of (F - 1) arms fixed and increase the length of the Fth
arm to get the minimum total length NTc. Below NTc the activation energy is proportional to the total arm length of the star. Above NTc the star crosses the barrier in a stretched
state. Thus, there is a multifurcation of the transition state at NTc. Above NTc, the activation energy at first increases and then decreases with increasing arm length. This particular
variation of activation energy results from the fact that in the stretched state, only one arm of the polymer is stretched across the top of the barrier, while the others need not be. We
calculate the rate by expanding the energy around the saddle up to second order in the fluctuations. As we use the continuum model, there are infinitely many modes for the polymer and
consequently, the prefactor contains infinite products. We show that these infinite products can be reduced to a simple expression and evaluated easily. However, the rate diverges near NTc
due to the multifurcation, which results in more than one unstable mode. The cure for this divergence is to keep terms up to fourth order in the expansion of the energy for these modes.
Performing this, we have calculated the rate as a function of the length of the star. It is found that the rate has a nonmonotonic dependence on the length, suggesting that longer stars
may actually cross over faster.
URI: http://hdl.handle.net/2005/618
Appears in Inorganic and Physical Chemistry (ipc)
Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
Reseda Trigonometry Tutor
Find a Reseda Trigonometry Tutor
My services are great for students and prospective students studying for standardized tests (ACT, SAT, GRE, GMAT) and/or who need help with school coursework. I was educated at West Point and
Cornell University where I was in the top 5% of my class and received a degree in Economics and Mandarin C...
30 Subjects: including trigonometry, reading, Spanish, writing
...I retain all of my lecture notes, quizzes, exams, and various assessments from that time. I have not only the textbook from which I taught the course, but also the text from when I was enrolled
in the class as an undergraduate student. I feel I have a solid grasp of this topic, and encourage potential students to review my ratings on Rate My Professor.
14 Subjects: including trigonometry, calculus, geometry, algebra 1
...I am a math major at Caltech currently doing research in graph theory and combinatorics with a professor at Caltech. I have taken several discrete math courses and I spent a summer solving hard
problems in discrete math with a friend. I began programming in high school, so the first advanced ma...
28 Subjects: including trigonometry, Spanish, chemistry, French
...I have gone through the language change situation first hand. In fact, English was my third language to learn. Luckily for me, I already had the chance to study the English alphabet and had
picked up several dozen words and some phrases - just enough to hold simple conversations.
12 Subjects: including trigonometry, writing, geometry, ESL/ESOL
...I also took general chemistry in college and earned an A+ in the class. I have had a real fascination with European history ever since I took AP European History in high school and earned a 5
on the AP exam. I went to Europe twice and took multiple guided tours of historical locations, prompting my desire to major in history at Duke.
18 Subjects: including trigonometry, chemistry, Spanish, biology
kitchen table math, the sequel
Saturday, February 10, 2007
Sons of
Sons of the thief, sons of the saint
Who is the child with no complaint
Sons of the great or sons unknown
All were children like your own
The same sweet smiles, the same sad tears
The cries at night, the nightmare fears
Sons of the great or sons unknown
All were children like your own...
So long ago: long, long, ago...
But sons of tycoons or sons of the farms
All of the children ran from your arms
Through fields of gold, through fields of ruin
All of the children vanished too soon
In tow'ring waves, in walls of flesh
Among dying birds trembling with death
Sons of tycoons or sons of the farms
All of the children ran from your arms...
So long ago: long, long, ago...
But sons of your sons or sons passing by
Children we lost in lullabies
Sons of true love or sons of regret
All of the sons you cannot forget
Some built the roads, some wrote the poems
Some went to war, some never came home
Sons of your sons or sons passing by
Children we lost in lullabies...
So long ago: long, long, ago
But, sons of the thief, sons of the saint
Who is the child with no complaint
Sons of the great or sons unknown
All were children like your own
The same sweet smiles, the same sad tears
The cries at night, the nightmare fears
Sons of the great or sons unknown
All were children like your own...
Like your own, like your own
Jacques Brel is Alive and Living in Paris
sung by
Gay Marshall
Ed heard her sing at the French consulate. We saw her tonight in
Jacques Brel.
'Sons of' Today ...
I curse the person who invented disaggregated data.
Today we define excellent schools as ones that suck the least.
An improving school is one that best limits the drop-off in its students' performance as they get older.
A good teacher is one who can get their kids' parents to do the most teaching.
If the education system were a corporation, it would be Enron.
If education reform were a soft drink, it would be New Coke.
Posted by TurbineGuy at 2:30 PM 4 comments:
left this comment:
When I was teaching DI to 8th graders, some of the hurdles were understanding that conversion factors like 1 ft/12 in equal one and that we are taking advantage of the identity element for
multiplication. Another hurdle was figuring out which unit of the CF should go in the numerator and which in the denominator. I thought I had developed crystal-clear strategies for a foolproof approach, even
though the approach didn't sink in easily.
I always insisted on writing out each step and wrote the direction of the conversion on top of the problem with an arrow to minimize confusion, e.g. sec --> hours. Not all students were converts
to my approach to conversion.
I don't quite get the arrow part.
I wish to heck I'd kept more notes on dimensional analysis.
Saxon teaches dimensional analysis throughout all his books, starting maybe in 7-6.
Several times I've thought I had it down cold, and then encountered a problem that stumped me.
• every single time we work on dimensional analysis I say to Christopher: "What does 1 ft/12" equal? Then I wait 'til he tells me it equals 1. Sometimes he doesn't tell me it equals 1, so I tell
him. Then I say, "Why can we multiply the initial value by 1 ft/12"? That he always gets: we're multiplying by 1. Then I say, "Have we changed this initial amount? Is it a different amount after
we've done all this multiplying by unit multipliers?" He gets that one, though he's slightly hesitant.... "...No..." Finally I say, "What has changed?" He may or may not say that the unit has
changed, but that's only because he's not necessarily following my train of thought. As soon as I say it he gets it. I've become a huge fan of scripted instruction. I do this script every time we
work on unit multipliers; when Christopher reaches the point where the script seems stupid and obvious to him I'll know he's got them conceptually as well as procedurally. (Or at least that he's
got a far more solid conceptual understanding than he did when we started out.)
• Christopher has no trouble figuring out which unit has to go in the numerator and denominator, and neither did I, even back when I first learned unit multipliers. I think there's something
visual about it (and I believe visual memory is "stickier" though I have yet to review all that research).
• He sometimes gets confused about which number is which: for a particular problem he'll know he has to put yards in the numerator and feet in the denominator, but he'll write 3 yards/1 ft because
3-to-1 makes more sense or is more familiar (you probably know what I mean). So, although he has zero confusion about the canceling aspect of unit multipliers, the very fact that sometimes yards
will be in the numerator and sometimes they'll be in the denominator can trip him up.
• I had a bit of trouble moving from "easy" unit multipliers (centimeters to yards) to rate unit multipliers (mph to meters per second), but Christopher has had no trouble at all. I'm sure that's
because Christopher still writes the number 1 in the denominator of the rate: 60 miles/1 hour. I wish I'd thought of that. For quite awhile I kept thinking things like, "Wait! I have two units in
the numerator! (60 mph - it's all one chunk) What do I do now!" This is one of those times where having a fresher brain is an advantage.
• I think the single hardest aspect of unit multipliers is knowing which number to put first. To this day I don't quite know whether it matters; I've gotten jumbled up in long problems before and
had to unjumble myself by deciding there was one, and just one, number that could start the whole thing out. When Christopher reads a simple unit multiplier word problem I have him circle the
value to be translated and underline the unit he's supposed to end up with. He's not particularly interested in doing that, but on the other hand the fact that we have done it seems to have made
it fairly easy for him to figure out where to start.
• I have him do the cancellations as he goes along. I learned this the hard way. By the time you get to Saxon Algebra 2 you're doing some long-chain dimensional analysis; more than once I've lost
my place and had to start over.
• having the student write two unit multipliers for each conversion (1 yd/3 feet versus 3 feet/1 yd) is a very good thing to do.
• dimensional analysis word problems are also a very good thing to do. For me it was a terrific exercise to use dimensional analysis to solve everyday word problems I had never used DI to solve.
• Terrific DI problem from Saxon Math: The Adams' car has a 16-gallon gas tank. How many tanks of gas will the car use on a 2000-mile trip if the car averages 25 miles per gallon?
(source: Saxon 8/7 Lesson 96 page 660 #3 - answer: 5 tanks)
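The whole chain can be checked in a few lines of Python (a sketch I'm adding for illustration; the numbers come from the problems quoted above):

```python
# Dimensional analysis is just repeated multiplication by factors equal to 1,
# written so the old unit cancels out.

# Saxon 8/7 problem: 2000 miles x (1 gallon / 25 miles) x (1 tank / 16 gallons)
tanks = 2000 * (1 / 25) * (1 / 16)
print(tanks)  # 5.0 tanks -- matches the answer key

# A rate conversion works the same way: 60 miles/hour -> meters/second.
# 60 mi/hr x (1609.344 m / 1 mi) x (1 hr / 3600 s)
meters_per_second = 60 * (1609.344 / 1) * (1 / 3600)
print(round(meters_per_second, 2))  # 26.82 m/s
```

Writing the 1 in the denominator of the rate, as Christopher does, is exactly what keeps the cancellation bookkeeping straight.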
Dimensional analysis is the simplest procedure on the planet, and yet it's strangely challenging to learn. I think this is entirely due to Wickelgren's observation about all math looking alike.
Dimensional analysis is the ultimate exemplar of the practice, practice, practice theory of knowledge.
It's easy, but it's confusing.
Practice solves that problem.
ktm 1
Creativity is an outgrowth of learning, and a lot of it. The past twenty-five years of cognitive psychology research has shown that the more a person knows about a subject, the more creative he
or she can be in it. No question an adult poses is considered creative if someone else has already asked it. Thus, an adult must know what has come before to ask creative questions.
This is true more generally as well. A student's ability to be creative in any area of knowledge increases with his or her knowledge of that area. Knowledge forms the fodder for creative new ideas.
This was a revelation to me.
For years I'd been interested in creativity, and had been trying to read books and articles on the subject, none of which told me anything at all.
Finally I gave up.
When I read this passage in Wickelgren, all became clear.
Creativity, like problem-solving and conceptual understanding, is an emergent property.
I had been looking for some kind of essentialist, biological, "trait" explanation of creativity.
The reason I couldn't find it was that creativity develops in the wake of learning.
Math Coach
(the book that started it all for me)
My 5th grader just showed me her home work for the weekend. 26 pages, totalling 115 problems. Every single problem involves estimation.
Due Monday.
My daughter was surprisingly upbeat about this. It could have been worse. A friend of hers has 6 of these packets to do over the weekend.
It's panic time. The CMT testing begins in 4 weeks. We have a week off for winter break starting next weekend. Apparently all the fifth graders were tested for their weak areas and some extra
practice was assigned.
If you were the parent of the child that came home with close to 600 problems to complete in one weekend, what would you do? Especially if this should come after 5 months of haphazard, math box type
homework. This is so completely unreasonable and unproductive.
Why don't they just use one solid curriculum with distributed practice and feedback that builds cumulatively and consistently throughout the year?
I'm all for repetition and practice, but this is nuts. Just before the test, assign 100s of problems to do at home? When the kids get decent CMT scores, are we really going to attribute that to
Everyday Math? The answer is sadly, yes.
Needless to say, we are taking a break from Singapore Math this weekend. My plan had been to finish the unit on ratios and review a little of the fraction stuff we did last week. It had been going so
well. The ratio chapter was so logically tied into the work we had done on fractions. She was actually enjoying it and it came very easily.
But it is all estimation all weekend instead. This is not time well spent.
Plus, we have problems such as this:
Sara has 5 pet dogs ranging in weight from 65 pounds to 130 pounds. Which could be the number of pounds the dogs weighed in all?
Well, 5x65=325 and 5x130=650; Both 400 and 600 should be correct answers as they both fall within the range.
This infuriates me.
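For what it's worth, a two-line check (mine, not the worksheet's) confirms the arithmetic:

```python
# Five dogs, each between 65 and 130 pounds, give a total anywhere in [325, 650].
low, high = 5 * 65, 5 * 130
print(low, high)  # 325 650

# Both proposed answers fall inside the range, so neither can be singled out:
print(all(low <= total <= high for total in (400, 600)))  # True
```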
To my last update, where I said:
He just doesn't understand why he should write "Let x equal the number of pears" at the top of the problem. I didn't either when I was his age, but I did it because I had no choice (that's the
way math was taught back then). Now, I understand why, and that's why I'm passing it on to him.
Catherine responded:
What is the reason?!
It seems like a good thing to do, but that's all I know.
Because math isn't the only reason to learn math. We benefit at least as much from the sequential, linear, logical thought process, because we can apply it to nearly every facet of our lives, and not
just the quantitative ones.
The traditional formalism of math is the embodiment of that process. By learning it, and being forced to reproduce it every time we do a problem, we learn the process itself: breaking a problem
into its component parts and creating a step-by-step solution, where each step follows from the previous steps.
It's discipline for the mind.
This is one of my major objections to "fuzzy" math, that students never learn this logical process.
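Here's what that formalism looks like on a made-up pear problem (my example, not from the original post): suppose there are 12 fruits, with twice as many pears as apples.

```python
# "Let x equal the number of apples."   <- define the unknown first
# Then 2x is the number of pears,       <- express the rest in terms of x
# and x + 2x = 12.                      <- translate the words into an equation
# So 3x = 12, and x = 4.                <- solve one explicit step at a time

x = 12 / 3          # apples
pears = 2 * x
print(int(pears))   # 8
```

The comments are the point: each line of the solution follows from the one before it, which is the discipline the formalism teaches.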
Posted by Unknown at 7:13 AM 8 comments:
Friday, February 9, 2007
Yesterday Ms. K taught a lesson on dimensional analysis.
The last time she taught a lesson on dimensional analysis was March 10, 2006.
Today is February 9, 2007.
According to the math department an 11-month gap between a first exposure and a second exposure is fine.
It's more than fine, actually.
If a student has been exposed to a topic for one week in 6th grade, and then again for another week in 7th grade, he should be ready and able to take a test.
And not just any test, either. He should be ready and able to take a complicated test filled with multi-step first-applications of the topic or skill.
I'm not surmising this, by the way.
Ed and I were directly told this by the math chair, who was defending Ms. K's latest test which half the kids had hosed, and which we had no interest in discussing in any event no matter how many
kids hosed it. We've given up on Ms. K's tests. We've given up on Ms. K! We'd come to discuss curriculum and pedagogy; the math chair had come to defend the test. Under no circumstances, she said,
would she discuss curriculum and pedagogy with a parent. Any parent.
So she discussed the test and we discussed curriculum and pedagogy.
That's how we know the chair of the math department thinks 11 months between exposures is fine and the kids should be ready to take a test.
This is the kind of thing that gets me even more revved than I already am.
I'm handing this one off to Mr. Engelmann:
Typically about 60 school days pass before any topic is revisited. Stated differently, the spiral curriculum is exposure, not teaching. You don't "teach" something and put it back on the shelf
for 60 days. It doesn't have a shelf-life of more than a few days. It would be outrageous enough to do that with one topic-- let alone all of them.
...Don't they know that if something is just taught, it will atrophy the fast way if it is not reinforced, kindled, and used? Don't they know that the suggested "revisiting of topics" requires
putting stuff that has been recently taught on the shelf where it will shrivel up? Don't they know that the constant "reteaching" and "relearning" of topics that have gone stale from three months
of disuse is so inefficient and impractical that it will lead not to "teaching" but to mere exposure? And don't they know that when the "teaching" becomes mere exposure, kids will understandably
figure out that they are not expected to learn and that they'll develop adaptive attitudes like, "We're doing this ugly geometry again, but don't worry. It'll soon go away and we won't see it for
a long time"?
The Underachieving Curriculum judged that the problem with the spiral curriculum is that it lacks both intensity and focus. "Perhaps the greatest irony is that a curricular construct conceived to
prevent the postponing of teaching many important subjects on the grounds that they are too difficult has resulted in a treatment of mathematics that has postponed, often indefinitely, the
attainment of much substantive content at all."
The good news is, I spent the past week having Christopher do dimensional analysis problems.
I've been teaching Christopher how to use unit multipliers off and on since January 24, 2006. It's been more off than on, extremely sloppy teaching.
But it's been "on" enough that he always has some residual memory when I wake up one day and remember I haven't given him any practice on unit multipliers in a great long while.
I desperately need an afterschooling drill-and-kill book.
I need a book that has pages and pages of dimensional analysis problems of all kinds, along with pages and pages of various multi-step complex problems using algebra, geometry, baby statistics &
probability, stem and leaf charts, etc. - I need it all.
In one book.
Anyway, about a week ago I started having Christopher do dimensional analysis problems every day. Five or six of them. My goal this time, and I'm sticking with it until it happens, is for Christopher
not only to be able to do dimensional analysis problems, but to do them fast.
We're going to carry on doing dimensional analysis problems until the state test in March; then I'm going to write on my calendar the next date he should do some more of them.
when is that date?
how long am I supposed to wait?
how long can I wait?
I bet Engelmann knows.
One of these days I'll get around to reading his book.
After Christopher gains speed and accuracy I am going to carry on having him do dimensional analysis for the next 2 years so he'll remember unit multipliers for the rest of his life.
Oh, fine.
I can no longer find the Dan Willingham article I was positive said that if you study the same thing 3 years in a row you remember it forever.
The good news is that because I just spent a week having Christopher do dimensional analysis problems he was able to do Ms. K's overly-complicated homework (multi-step dimensional analysis word
problems) with ease.
In fact, multi-step dimensional analysis word problems were exactly what he needed.
I hope this is evidence I'm beginning to channel the mind of Ms. K.
Life would be a lot easier around here if that were the case.
rules for installing a new curriculum
Just a quick update. We've finally gotten to Cartesian geometry -- after they did linear and quadratic equations (well, "did" in the current "let's mention it then move on to something else" sense),
but only barely. The worksheet had a graph with little cartoon bubble labels: slope, x-axis, y-axis, intercept. The slope bubble pointed to the line, which could just as easily be the equation, but
perhaps I'm being picky. Below it was a question:
If x is 5, what will y be?
It would have been a perfectly reasonable question except that the tick marks on the axes weren't labeled. Were they 1, 2, 3, ..., or 2, 4, 6, ..., or 5, 10, 15, ...? That little glitch made it just a tad
difficult to answer the question.
That was presented along with set theory, which was your basic Venn diagrams, with unions and intersections.
Then they went back to "probability and statistics" (and yes, those are sneer quotes). I've already said I think it's bizarre to teach either in the 8th grade, but if you're going to teach it, then
teach it. There was no new information presented the second time around. "Probability" was nothing more than your standard ball problem ("If there are 8 green balls and 4 red balls in the hat, what
is the probability that you will select a red ball?"), which explains embarrassments like this. Worse was the "statistics" component, which was nothing more than median, mean, and mode.
I hate to break it to the math ed folks, but statistics is not "soft," and it is far more than measures of central tendency. Ultimately, even the hard sciences come down to statistics. Carbon-14
dating (and potassium-argon dating) are statistics. DNA testing is statistics. Epidemiology is statistics.
The math ed people I know wouldn't know a frequentist from a Bayesian, or MANOVA from a t-test. Call me cynical, but I can't help but wonder if that doesn't have something to do with this mess of a
curriculum. Why revisit the same concepts over and over again? Couldn't they at least introduce -- in concept, if nothing else -- standard deviations or sample v. population?
If you're going to teach it, teach it. That's my outmoded, stale, dinosaurian view, anyway.
We did the "probability" and "statistics" worksheets in fifteen minutes. That's how much substance there was. But there were terms on the worksheet (she'd copied this one from somewhere, you could
tell that) they hadn't covered (from the original source). She had told them to ignore anything they didn't understand (what kind of advice is that for a teacher to give a student?) but he wanted to
know what they meant. So we talked about the normal distribution, standard error, and standard deviation (the terms from the original source on the handout).
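For the record, the distinctions we talked about take only a few lines to demonstrate with Python's standard library (my sketch; the data are made up):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# The measures of central tendency the worksheet stopped at:
print(statistics.mean(data))    # 5
print(statistics.median(data))  # 4.5
print(statistics.mode(data))    # 4

# The distinction it skipped: population vs. sample standard deviation.
print(statistics.pstdev(data))  # 2.0  (divides by n)
print(statistics.stdev(data))   # ~2.14 (divides by n - 1)
```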
(By the way, there's an interesting video of Peter Donnelly discussing common statistical errors, if you're interested.)
In more general terms, I'm teaching Ricky formalism, to set up his problems in sequential, logical steps. He finds it anal retentive, and I'm not drilling him a lot because it frustrates him, but
some every time I see him. He just doesn't understand why he should write "Let x equal the number of pears" at the top of the problem. I didn't either when I was his age, but I did it because I had
no choice (that's the way math was taught back then). Now, I understand why, and
that's why I'm passing it on to him. His biggest problem is that he's pretty good at figuring out how to solve a problem, and he doesn't see why he can't skip steps if he knows the intervening ones.
I was like that. But there's a reason for it -- so he won't be like these students.
Anyway, back to the beef and noodles.
Posted by Unknown at 12:36 PM 6 comments:
parentalcation: The Bomb or just Bombing?
Why you can't trust schools. They will play with stats, take credit for their students successes, and claim cosmetic changes are curriculum changes.
Posted by TurbineGuy at 5:35 AM 14 comments:
Thursday, February 8, 2007
We had divided duties tonight. The 7th grade transition meeting and a meeting with the new assistant superintendent for curriculum were scheduled for the same night (why?) so Ed and I had to go our
separate ways.
Both meetings were a vast improvement over anything we've experienced in this district ever.
It was amazing.
Ed went to the meeting with the assistant superintendent.
He sums up the theme of the meeting as:
I came to Irvington for the excellent schools, and I'm disappointed.
("I'm furious and I voted against the fields because I'm furious" appears to have been a subtext.)
I think Trailblazers is gone.
That's Ed's sense.
Parents in K-5 are beyond enraged. They are openly enraged; they are incensed; they are not mincing words. (I would like to be physically present to witness such a thing once in my life.)
department of corrections: This passage gives the wrong impression. These parents are highly articulate professionals who are accustomed to, as Ed puts it, "presenting themselves in public." All of
them are capable of containing emotion while expressing strong views. This is a skill I lack. I've been sitting alone in a room writing for so long that I simply have not developed an ability to speak
passionately, persuasively, and extemporaneously in public.
These are parents I've never met & may not even know by face.
The curriculum advisor said the math wars are like the reading wars; they've just about run their course.
"New textbooks will be coming out."
That's what she said. (paraphrasing)
New textbooks will be coming out.
next up: balanced math!
New textbooks coming out is fine.
Actually, it's great.
But the fact is,
we lost the reading wars
, didn't we?
We're all sitting around thinking we won; we're thinking common sense and peer-reviewed science prevailed.
They didn't.
Whole language was repackaged as balanced literacy, and the ed schools and their associated NGOs went on their merry way.
The reports I have from parents at Dows Lane are that we are using balanced literacy; we have hired a Dows Lane administrator, educated at Columbia, who has a "specialty" in balanced literacy; we
have high rates of dyslexia; we have numerous "reading specialists" on staff;
we are refusing to pay for Windward School although we are apparently recommending to parents that they
spend the $40,000/year it takes to send their kids to the school.
We can't teach them to read for $20,000/year. No, that takes $40,000.
This is what I'm told.
So. If the math wars are going to run their course the same way the reading wars ran
their course, that's a problem.
I've decided to get out in front of the curve for once in my life and declare myself
against balanced math.
There's no reason to wait for new textbooks that will be coming out.
Let's adopt one, two, or all three of the excellent textbooks we have on the market.
speaking of losing the reading wars...
Some kind of Tri-State Consortium report was presented at the Board meeting, and it's Lucy Calkins all the way.
Irvington is to be Lucy Calkinsized, it seems.
Lucy Calkins. Godmother of balanced literacy, Lucy Calkins.
It's in the works.
That is a very bad idea.
more tomorrow
The 7th grade transition meeting was great.
With caveats.
what are the excellent texts on the market now?
The California Department of Education has a careful content-based adoption process for K-8 curricula. Reports may be found through the CDE site for Mathematics Frameworks and Curricular Materials.
David Klein at CSU Northridge also has links to Content Review Panel reports on middle school mathematics programs. The official final adoption report of 2001 provides positive reviews of K-8
materials that include for grade school Saxon Math K-6, Sadlier Progress in Mathematics CA Edition K-6, Harcourt Math CA Edition 2002 K-6, Houghton Mifflin Mathematics CA Edition K-5, and Scott
Foresman CA Mathematics K-6; for middle school the adopted programs include McDougal Littell Structure and Method (the venerable Dolciani series), and Prentice Hall Pre-Algebra and Algebra 1.
In addition to the named programs adopted in California the Singapore Primary Mathematics K-6 curriculum is uniformly recommended by subject matter experts, and also the Singapore New Elementary
Mathematics grades 7-10 series is of high quality.
I'm not going to be able to get to this any time soon, so I'm hoping someone here has the time to take a look:
An Experiment in Teaching Ratio and Proportion (pdf file)
Here's the
This article appears in the journal Educational Studies in Mathematics, which may have most of its articles posted online. Not sure.
Bishop & Hook on Saxon Math in California (pdf file)
Bishop & Hook's paper is here
Wednesday, February 7, 2007
thenar eminence
I'm reading Ian Osborn's
Tormenting Thoughts and Secret Rituals
, which I think
told me about.
In med school Osborn suffered from an obsession involving his thenar eminence:
I would have the sudden, intrusive image of me standing at a patient's bedside ready to draw a sample of blood: I unsheath a large-bore phlebotomy needle, menacing, daggerlike in its appearance,
and then inexplicably, instead of inserting the needle into my patient's vein, I thrust it to the hilt into the thenar eminence of my hand.
back to work
This means phonics as a last resort in fourth grade. That's what it means at our schools.
This statement, from a theoretical mathematician, was first posted to the
Bridgewater-Raritan Parents Math Forum
I am a theoretical mathematician (Ph.D. UCLA 1996), who taught as an Adjunct Assistant Professor at UCLA for 2 years ('96-'98), and then as an Assistant Professor at Illinois State University
(ISU) for 4 more years ('98-'02). In addition to my experience teaching college math and computer science, through interaction with many of my colleagues at ISU I became well-versed in issues of
Mathematics Education. (ISU has one of the largest Math Ed. programs in the country.) In fact, many of my fellow faculty were involved in drafting the NCTM standards, both past and present.
Both my daughter (eighth grade) and my son (fourth grade) have used EDM exclusively for their in-school math instruction. As a mathematician I find the program abysmal, and I know that I am not
alone (amongst mathematicians and others) in this assessment.
Let me share with you a portion of an email that I sent to our local (Hollis, NH) school board. This should serve to encapsulate (at least in part) my position on EDM.
As you could tell, I am passionately opposed to the use of Everyday Math (EDM). My experience with it, both personal and professional, has been uniformly negative. I also have large amounts of
anecdotal evidence that confirms that the only way our kids learn any math while using EDM in school is when parents become frustrated and just teach them math the "old fashioned" way.
What I object to is the "Emperor's New Clothes" syndrome: everybody telling me how great this program is, but there being absolutely no evidence that it provides any benefit at all. What is
particularly telling are the words and phrases that its advocates use: "It makes math more enjoyable," or "The kids really like the games." Of course they do!
What I believe has happened is that the teachers have been sold a bill of goods: most elementary and middle school teachers, while being dedicated and tireless in their devotion to wanting to
teach our children, have not received adequate training in mathematics. (This I can attest to from first-hand experience; I once taught Calc I to a group of students destined to be "math
teachers." I failed half of them (many couldn't do high school algebra). What was particularly disturbing was the fact that failing my class did not dissuade them from wanting to be teachers, it
merely "redirected" them: without passing Calculus they simply could no longer be Secondary (i.e., High School) math teachers; Calculus, it seems, wasn't required for Elementary or Middle School
(math) teachers.)
Hence, when a program (endorsed by "experts") comes along and tells them that they can do a better job teaching math by having the kids participate in group activities, making it "relevant" to
their "everyday" lives, the teachers rush to adopt it: who wouldn't? However, the hard yet honest fact is that math is difficult, and requires work, dedication and perseverance to master. As
Euclid said, "There is no royal road to mathematics."
But beyond all this, what troubles me most is the fundamental philosophical flaw in EDM: It ignores the core beauty and power of mathematics, viz., that it is an edifice constructed out of pure
reason, all of whose inferences and deductions flow logically and unarguably from more basic facts. EDM asks the students to flit willy-nilly from room to room or even floor to floor in this
structure, without ever exposing them to the skeleton, the underlying architecture.
The basic premise of EDM, so much so that it's part of its name, that math should be valued or appreciated only insofar as it can be applied to "everyday things," is worse than misguided, it is a
lie promulgated by people who, quite frankly, don't understand the first thing about mathematics. (Example: Do we study "Everyday English Literature?" Why do we still read Shakespeare? Are people
really worried about being encountered by three old women stirring a big pot, and wanting to know how to deal with them?)
Let me recount for you what I used to tell all my students the first day of class: Being in a (math) class is like buying a membership to Gold's Gym. If you come to class, sit passively by, and
then complain that you didn't learn anything, that you just don't "get it," that is akin to walking into the gym a month after you bought your membership and complaining that you haven't gotten
any stronger, even though you come to the gym everyday and watch people work out. Being in a class, or in school, provides only the opportunity to learn, the teacher is there to facilitate the
learning process, but the effort must emanate from the student.
In short, the "guided instruction" methodology, however well-intentioned, is in fact, "misguided": Imagine paying a tennis or golf pro to help improve your game, only to have her tell you to "try
and discover the right method to strike the ball on your own." You would be justifiably outraged; you pay someone who is a better tennis player / golfer than you to teach you the right way to do
it. Human minds are not designed to do math (unlike, say, to learn language); they need to be taught the right way to do it."
I hope that it comes through in what I have written that I am not blaming the teachers. In my (admittedly limited) experience, many of them are similarly frustrated by having to adhere to an
administration-mandated (math) curriculum that they neither support nor believe in. I have rarely met a teacher who is not extraordinarily dedicated to her students, and I absolutely do not want
anything I say to be construed as being critical of the job that they do. My point here was merely to demonstrate that some teachers (as well as administrators, school board members, etc.) are
insufficiently trained to evaluate properly the merit of their math curriculum, and so rely upon others (like the NCTM) to do so for them.
Tony Falcone, Ph.D
This passage is beautiful:
The basic premise of EDM, so much so that it's part of its name, that math should be valued or appreciated only insofar as it can be applied to "everyday things," is worse than misguided, it is a
lie promulgated by people who, quite frankly, don't understand the first thing about mathematics. (Example: Do we study "Everyday English Literature?" Why do we still read Shakespeare? Are people
really worried about being encountered by three old women stirring a big pot, and wanting to know how to deal with them?)
The night before Tony's statement appeared on the B-R Mathforum I had been trying to explain to Ed why constructivist math, from the perspective of a real mathematician,
isn't even math
Of course, I can't really explain it, not off the cuff at any rate.
Then Tony's statement landed in my email queue.
We are all lucky to have it.
The new video from the "Where's the Math?" folks is located here.
Very well done.
Tuesday, February 6, 2007
I had been hearing that
Reading First
had worked, mainly in
interviews with E.D. Hirsch
, I think:
And that, paradoxically, is the one area which has just come in for a scandal, the reading first program, because that program insisted that you have to have a phonics-oriented early reading
program which was just what they have been pilloried for in this recent GAO report.
I'd been asking Ken about it, because I figured if anyone would know, he would.
Ken said he'd heard the same thing, but didn't know.
Turns out Hirsch was right.
Reading First has worked;
Ken reports that it is one of only four programs in the Department of Education - and the only program within NCLB - rated "Effective."
Reading First
• Reading First is a focused nationwide effort to enable all students to become successful early readers.
• Funds are dedicated to help states and local school districts eliminate the reading deficit by establishing high-quality, comprehensive reading instruction in kindergarten through grade 3.
• Building on a solid foundation of research, the program is designed to select, implement, and provide professional development for teachers using scientifically based reading programs, and to
ensure accountability through ongoing, valid and reliable screening, diagnostic, and classroom-based assessment.
4 of 88 programs rated "Effective "
By my count, 4 of 88 Department of Education programs are rated "Effective":
• Adult State Education Grants "The Adult Education State grants program funds literacy and basic skills education programs to help adults become literate, get a secondary school education, or
learn English. Funds are distributed by formula grants to States and States must distribute funds competitively to local providers." (that's good news)
• NAEP ("The Nation's Report Card")
• NCES (National Center for Education Statistics)
whole language lives on
New report out from Fordham; schools still disguising whole language as balanced literacy:
Moats, a psychologist and widely respected authority on early reading, authored a previous Fordham report in October 2000 called Whole Language Lives On. In it, she uncovered many whole-language
programs hiding behind the phrase "balanced literacy" in order to win contracts from school districts and avoid public scrutiny.
Seven years later, such programs still exist, and still try to pull the wool over educators' eyes. Worse, major school systems, including Denver, Salt Lake City, and New York City, continue to
adopt them, misled by materials that "talk the talk," touting the five elements of effective reading instruction identified by the National Reading Panel, but which are actually just
whole-language programs in disguise.
Here's Mike Petrilli:
"This report's findings help to explain why the federal government has to be prescriptive in its implementation of Reading First," said Michael J. Petrilli, Fordham's Vice President for National
Programs and Policy. "Anyone can put the label 'scientifically-based' on the cover of their reading program. But if we want to do right by kids, we need to dig below the surface. If the policy is
to fund only programs that truly work, officials at all levels need to fend off the charlatans."
This is where I team up with Engelmann.
It's time for think tanks, policy wonks, and researchers to stop telling us about the "five principles" of effective reading instruction or whatever it is, and start telling us the names and authors
of the programs that work.
But no.
Instead, Moats tells parents to ask their school,
Does our school reading program
• Have valid screening measures in place to identify children at risk and provide them with early/extra instruction in word recognition, comprehension, and writing skills?
• Interweave multiple language components (such as speech sounds, word structure, word meaning, and sentence structure) together in the same lesson?
• Support reading comprehension by focusing on a deep understanding of topics and themes rather than developing a set of shortcut strategies?
So say you ask your school these probing questions.
What's the answer going to be?
We DON'T interweave multiple language components such as speech sounds, word structure, word meaning, and sentence structure together in the same lesson
School district personnel, in my experience, routinely tell parents programs are "scientific," "supported by research" and all the rest whether they are or not.
My favorite instance of this was the time our former middle school principal told a huge gathering of parents that "all the research shows constructivist math is the way to go."
The only way to make that a true statement is to change "all" to "none."
He made this remark with
Parents don't have a chance against
My district uses balanced literacy, I'm told.
In case you were wondering.
Advanced Placement Test Fees and AP Incentives rated moderately effective
These are programs designed to increase minority participation in AP courses.
By my count 7 programs, of 88, are rated "Moderately Effective."
Not bad.
Reading First has worked
Reading Last
I love
this guy
Because you never learned dimensional analysis, you have been working at a fast food restaurant for the past 35 years wrapping hamburgers. Each hour you wrap 184 hamburgers. You work 8 hours per
day. You work 5 days a week. You get paid every 2 weeks with a salary of $840.34. How many hamburgers will you have to wrap to make your first one million dollars? [You are in a closed loop
again. If you can solve the problem, you will have learned dimensional analysis and you can get a better job. But, since you won't be working there any longer, your solution will be wrong. If you
can't solve the problem, you can continue working which means the problem is solvable, but you can't solve it. We have decided to overlook this impasse and allow you to solve the problem as if
you had continued to wrap hamburgers.]
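For readers who want to see the setup, here is how the hamburger problem chains together (a sketch in Python; the numbers are the ones given in the problem, and the point is that the units cancel step by step):

```python
# Chain the given conversion factors so the units cancel, leaving
# hamburgers per dollar; then scale up to one million dollars.
hamburgers_per_hour = 184
hours_per_day = 8
days_per_week = 5
weeks_per_paycheck = 2
dollars_per_paycheck = 840.34

# 184 hamburgers/hr * 8 hr/day * 5 day/wk * 2 wk/paycheck
hamburgers_per_paycheck = (hamburgers_per_hour * hours_per_day
                           * days_per_week * weeks_per_paycheck)  # 14,720

hamburgers_per_dollar = hamburgers_per_paycheck / dollars_per_paycheck
hamburgers_for_a_million = 1_000_000 * hamburgers_per_dollar

print(round(hamburgers_for_a_million))  # roughly 17.5 million hamburgers
```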
He's got one about college applications, too:
A Wilton High School senior was applying to college and wondered how many applications she needed to send. Her counselor explained that with the excellent grade she received in chemistry she
would probably be accepted to one school out of every three to which she applied. [3 applications = 1 acceptance] She immediately realized that for each application she would have to write 3
essays, [1 application = 3 essays] and each essay would require 2 hours work [1 essay = 2 hours]. Of course writing essays is no simple matter. For each hour of serious essay writing, she would
need to expend 500 calories [1 hour = 500 calories] which she could derive from her mother's apple pies [1 pie = 1000 calories]. How many times would she have to clean her room in order to gain
acceptance to 10 colleges? Hopefully you didn't skip problem No 1. I'll help you get started.... 10 acceptances [ ] [ ] etc.
Monday, February 5, 2007
It's all biological!
Sorry, I couldn't resist. This image is just too funny. Takes me back to my NAAR days
(Nat'l Alliance for Autism Research)
when scientists on our SAB would sit around heaping scorn on brain scans and what journalists & regular people thought they were learning from them.
The article is actually pretty interesting. Also, I have the sense that brain scans have come a ways since I was vetting research proposals, though perhaps one can't say the same for the field of
science illustration.
Fast language learners boast more white matter (New Scientist)
The administration appears determined to implement the middle school model here in Irvington. Topic is up for discussion at the Board meeting tomorrow night. I imagine it's a done deal.
Apprised of these facts, Ed said, "We should go to that meeting."
No way.
Let someone else go to that meeting.
We have one more year in this school -- well, one and a half -- then it's so long and farewell!
This is my feeling.
I must say, the new principal does have some political savvy. I like the guy, and I don't particularly feel like attending the Board meeting in order to lob hand grenades at the middle school model.
So I've contented myself with lobbing one
great big, pulsing hand grenade
via the Irvington Parents Forum.
That one fell into my lap. A friend emailed asking whether I knew anything about the middle school model.
I did.
What I didn't know was that the NMSA
website, the officially cited organization that is to middle schools what the NCTM is to math, contains pages and pages of
Public Relations Resources for "middle level educators."
Nor did I know that the NMSA is of the opinion that:
There may be no task more important today for middle level educators than public relations. National Middle School Association (NMSA) is committed to helping you build support for quality middle
level schools. Join us in taking action to gain visibility for successful middle grades programs and practices by using the following resources designed to help you be a successful public
relations practitioner.
Oh yeah, that's the hard sell.
My line on public relations for educators is:
Any organization or corporation that has to develop marketing campaigns to convince parents of its worth is an organization or corporation selling something I don't want to buy.
Quick PR Ideas
"practical, low-cost, proven public relations ideas that have worked in schools everywhere"
such as -
Have Parents Speak Up for You
Encourage your PTA or PTSA to sponsor a "Get to Know X Middle School—It's a Great Place to Learn" night at feeder elementary schools. Invite the elementary parents to the meeting and have your
parents explain the value your school provides students. Encourage questions. Your parents can deliver a highly credible message to other parents.
Let Students Demonstrate Their Computer Skills
Create a program where your students teach senior citizens computing skills. Email can help seniors keep in touch with their family and the world, but many of them aren't confident using
computers. Your students can teach them the basic skills. Students might also play computer games with seniors and write them occasionally. Let seniors know that the school can serve them.
Use Themes
Themes can communicate what people can expect from your school. If you don't have a meaningful theme, get one. Then look for every possible place to use it: school newsletters, home page of a
school web site, lunch menus, report cards, calendars, school letterhead, and even envelopes, since more people see the envelope than the letter.
That ought to do it.
You've got parents so ticked off about curriculum, pedagogy, and student achievement that they're voting down bonds, walking out of math nights, and writing blogs - TIME TO CREATE A THEME!
how to write an op-ed
What Is an Op-Ed Article?
An Op-Ed or Opinion Article is an opinion piece published in a newspaper but written by someone who is not on that newspaper's staff. Many large dailies, smaller dailies, and weekly newspapers
use op-eds somewhere in their editorial section. On many large newspapers, that paper's editorials, the editorial cartoon and columns by staff writers will appear on one editorial page. Opposite
that page, the op-ed articles will be run, and that's where the term "op-ed" comes from—it's opposite the editorial page.
The important point is that these articles provide anyone with the chance to publish his or her opinion. You don't have to convince a reporter to cover something; you can express your opinion.
You may see that the president of the chamber of commerce is published in the op-ed columns. This opportunity is available to you, too.
question: Do middle level educators
know what an op-ed article is?
This stuff must be mortifying to actual middle school teachers.
how to write an op-ed article, part 2
So What Do I Do?
First, determine whether newspapers in your area use op-ed articles. You can do this simply by reading the editorial pages. See if national columnists or local officials are published. Read these
articles. Become familiar with style, length, format, messages, and anything else that makes them stand out.
Second, decide what you would like to write. Sample topics for educators might include:
● How parents can help students learn
● What's right with education
● Success of our local school
● The importance of middle level education
● The need for resources in education
● How the community can support its school
I'm starting to grasp the obvious reality that these organizations are even more condescending to teachers than to parents.
Or not.
sample op-ed article
, provided to middle level educators for their use, is a 666 word essay (666!) advising parents on ways to help young adolescents grow.
theme contest
So say you were a middle level educator trying to drum up enthusiasm for the middle school model.
What theme would you pick?
Entries should be dated no later than midnight February 28, 2007; no limit on number submitted per household.
Chapter 3 is out... yeah! I am not going to bother analyzing it, because others are much more adept at that than I am, but I do want to say one thing. This chapter literally made me shake with anger.
One of the recurring themes in the book was how much the school establishment worked against the implementation of the Direct Instruction model. I found myself cussing the various antagonists out
under my breath as I was reading. The stupidity of some of the characters is amazing.
There was one bright spot in the chapter though. In a fairly long and very descriptive passage, Zig describes a well run kindergarten class room. Here is a teaser:
As soon as the bell rings, the teacher says, “Everybody, you can finish your worksheet later. It’s time for our morning warm-up. So get those thinking engines ready to go. The blue group is ready
... so is the yellow group.”
The aides are positioned on each side of the room. The teacher walks to the chalkboard. “We’re going to start with the days of the week. Tell me what day it is today ... Get ready.”
The teacher claps. As she does, nearly all of the children respond, “Tuesday.”
The story is much more involved, but illustrates how the teacher and the aides work together, like a well-oiled machine. It's amazing how much attention to detail is given to every aspect of the
learning environment.
As I read the passage, I couldn't help getting a little depressed as I thought about all the trouble my bright 1st grader is having reading, yet here is a story about poor kindergartners from
disadvantaged homes who were performing above the level my daughter is now. It's enough to piss you off.
(cross posted at
Posted by TurbineGuy at 10:26 AM 6 comments:
Sunday, February 4, 2007
Ed and I are plotting strategy for the state math test & for the rest of the school year.
I need advice.
Sometime this weekend it came to us that we have a need for speed.
speed and accuracy: the KUMON motto
Christopher managed to move up from a D- on his next to last test (a highly inflated D- may I add) to a genuine C on the last test by dint of mighty reteaching and home-assigned distributed practice.
He needs to move to a B.
He also needs to return to a 4 on the state test this year.
Since we're not going to be able to achieve this through our preferred medium of sound curriculum and pedagogy, we're going to have to find a work-around, and that work-around is going to be speed.
He has to practice until he's fast. That way he can whiz through the problems he does know and have a shot at cracking some of the problems he doesn't know. (Meanwhile my friend Kris says she's
managing to figure out what will be on the test that the kids have never seen before. Kris took calculus in college, so she can do these things. Next test, she's going to brief me.)
Anyway, speed.
Here's the question.
These days kids are taught to "do the same thing to both sides" of an equation vertically instead of horizontally.
I can't find an image of it online, and I can't get the spacing to work, so you'll have to imagine it from this:
3x+2 = 5+2x
- 2x = - 2x
x + 2 = 5 + 0
Then you do another vertical subtraction, writing a -2 below the 2 on the left side and below the 5 on the right side.
I like this procedure because it makes obvious the fact that you're subtracting (or adding) when you "do the same thing to both sides."
But it takes forever, and it eats up space since Christopher's handwriting is still big and not neat. (Sure glad the schools don't bother teaching handwriting to mastery any more! Why would we waste
time teaching legible handwriting when we've got keyboards??)
I'm going to teach him how to solve equations horizontally and have him practice until he can do it fast.
But how should I do this?
Russian Math shows students what happens when you add or subtract the same quantity from both sides: you change the sign.
From that point on the book has you simply move variables and constants from one side to the other, changing the sign as you move:
3x + 2 = 5 + 2x
2 = 5 + 2x - 3x
That's the fastest method, obviously.
Also the most space-saving when you have enormous handwriting.
But Ed says doing things that way gives you more likelihood of error, and I should have Christopher write out what he's doing:
3x + 2 = 5 + 2x
3x - 3x + 2 = 5 + 2x - 3x
Any thoughts?
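For what it's worth, both routes land in the same place; finishing the example above (my arithmetic, not from the original post):

```latex
% Vertical / "do the same thing to both sides" route:
\begin{align*}
3x + 2 &= 5 + 2x \\
x + 2 &= 5 && \text{subtract } 2x \text{ from both sides} \\
x &= 3 && \text{subtract } 2 \text{ from both sides}
\end{align*}

% Russian Math "move it across and change the sign" route:
\begin{align*}
3x + 2 &= 5 + 2x \\
2 &= 5 + 2x - 3x \\
2 - 5 &= -x \\
x &= 3
\end{align*}
```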
We are also going to drill dimensional analysis until Christopher can do it in his sleep.
image source:
Linear Equations in One Variable
OK, so I went through our upcoming
State Test
and pulled out a rough version of topics to be tested.
Then I went through my five gazillion workbooks looking for suitable worksheets.
I don’t have any worksheets on Venn diagrams.
Or on circle graphs.
Or on problems like “The diameter of a circle is X; find the radius.”
Or on
Also, I don't remember what
So I’ve just ordered ... oh, maybe another hundred fifty bucks worth of workbooks that will eat up the remaining free space on my office floor and possibly make the difference between a 3 and a 4 on
the state test not to mention that between amnesia and something akin to mastery on the critical skill of constructing a circle graph.
I don't know why I'm doing this.
After all, the slim little $11 practice workbook the school just made us all buy ought to be plenty.
Say you're a 7th grader who's constructed one circle graph in your life, and that was last year.
What do you need to do this year to be prepared to construct a circle graph on the state test?
Construct another one!
In your slim little $11 test prep booklet!
One circle graph last year, one circle graph this year.
That'll do it.
see also:
to do
state test coming right up (2006)
throwing money at the problem
more stuff only teachers can buy
help desk 1
state test coming right up (2007)
help desk 2
my life and welcome to it
progress report
28 out of 30
Recent Contributions to Cryptographic Hash Functions
Hash functions are one of cryptography's most fundamental building blocks, even more so than encryption functions. For example, hash functions are used for digital fingerprinting, commitment
schemes, message authentication, and random number generation, as well as for digital signature schemes, stream ciphers, and random oracles.
Recently, Antoine Joux, one of the leading cryptographers of our time, discovered multi-collision attacks against the general framework in which hash functions are constructed [1], and Xiaoyun Wang
created an attack that breaks the collision-resistance property of the most widely deployed hash functions [2], including MD5 and SHA-1. In 2005, Arjen Lenstra and Wang demonstrated how to forge two
digital certificates, based on the MD5 hash function, by using different keys but the same signature -- something that was hitherto thought to be impossible [3]. These attacks, and other
vulnerabilities exposed by researchers, have stimulated a resurgence of research into hash functions. This research has also spawned an international competition, sponsored by the
National Institute of Standards and Technology (NIST), an agency responsible for standards used by the U.S. Government, to create a next-generation hash function design.
In this article we highlight some recent work in hash function development. We begin by providing some background on hash functions and look at the problems that hash functions address. We then
sketch how to build a hash function. Moving on, we outline recent seminal work in the field of hash functions. We describe two radically different designs entered in the NIST competition, the Skein
and Vortex designs, created in part with Intel participation. This is followed by a discussion of the design rationale for each where we also compare and contrast the basic design decisions. We end
with a summary of our findings.
Hash Functions
A hash function is usually defined as a function H satisfying three properties [4]:
• Collision resistance. It is computationally infeasible to find two distinct bit strings s ≠ s' such that H(s) = H(s').
• Pre-image resistance. Given a hash value t in the range of H, it is computationally infeasible to find a string s for which H(s) = t.
• 2nd pre-image resistance. Given a string s and a hash value t such that H(s) = t, it is computationally infeasible to find a second string s' ≠ s such that H(s') = t as well.
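A quick illustration of these definitions, using Python's standard hashlib (my example, not from the article): flipping one character of the input yields an unrelated-looking digest of the same fixed length, which is the behavior the three properties formalize.

```python
import hashlib

# Two inputs that differ in a single character
s1 = b"the quick brown fox"
s2 = b"the quick brown fax"

d1 = hashlib.sha256(s1).hexdigest()
d2 = hashlib.sha256(s2).hexdigest()

# Distinct inputs give distinct, unrelated-looking digests
print(d1)
print(d2)
assert d1 != d2

# The digest length is fixed (64 hex chars for SHA-256),
# no matter how long the input is
assert len(hashlib.sha256(b"x" * 10_000).hexdigest()) == 64
```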
A hash function maps the set of all bit strings into a "message digest" of defined length, called the hash function's "block size." Since there are many more strings than message digests, at least
one digest output by the hash function must be the image of more than one input string. It is therefore remarkable that it is possible to build a function h that has the three properties previously
noted. These properties imply that h essentially acts like a randomly selected compression function.
It is instructive to describe some typical use cases:
• Digital fingerprinting. Hash functions construct effective digital fingerprints. If d represents a digital data structure, such as a document, then the message digest h(d) is its fingerprint;
the 2nd-pre-image-resistance property says it is infeasible to find a second data structure d' with the same fingerprint h(d') = h(d).
• Digital signatures. Digital signatures extend digital fingerprinting by encrypting the hash value h(d) of a data structure d under a private key.
• Message authentication. A hash function h used with a secret key K can be used to authenticate a message m. The idea is to create a tag t = h(K || m), where "||" denotes string concatenation,
which is sent with the message and verified by the receiver. (This does not quite work in practice, because the cascade construction introduces vulnerabilities when hashing the last message
block; instead the tag is essentially computed as h(K || h(K || m)).) Because of pre-image resistance, it is infeasible for an attacker to create the same tag t unless he or she knows the key K,
and because of collision resistance, the tag could have been created only by concatenating the message m to the key K.
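The nested construction h(K || h(K || m)) is essentially the idea behind HMAC (real HMAC adds XOR padding constants before each hashing pass); Python's standard library exposes it directly. A sketch, with a made-up key and messages:

```python
import hashlib
import hmac

key = b"shared-secret-key"
message = b"transfer 100 dollars to alice"

# Sender computes a tag over the message with the shared key
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# A tampered message fails verification
forged = hmac.new(key, b"transfer 100 dollars to bob", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)
```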
• Pseudo-random number generation. One standard way to build a random number generator is to take a key K, usually called a "seed", and to compute h(K || 0), h(K || 1), h(K || 2), ..., with
each digest representing a different random number. If h and K are carefully selected, it is infeasible to distinguish this, by any statistical test, from a stream of genuine random numbers.
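A toy version of this counter construction (illustrative only; production systems should use a vetted DRBG, and the seed values here are placeholders):

```python
import hashlib

def hash_drbg(seed: bytes, n_blocks: int):
    """Yield pseudo-random blocks h(seed || 0), h(seed || 1), ..."""
    for counter in range(n_blocks):
        yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

blocks = list(hash_drbg(b"my-seed", 3))

# Deterministic for a given seed...
assert blocks == list(hash_drbg(b"my-seed", 3))
# ...but a different seed yields an unrelated stream
assert blocks != list(hash_drbg(b"other-seed", 3))
```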
• Stream ciphers. A stream cipher can be built by taking a hash function h and encrypting message block m[i] as c[i] = m[i] ⊕ h(K, i), where "⊕" denotes XOR. Decryption applies the same
keystream block again: c[i] ⊕ h(K, i) = (m[i] ⊕ h(K, i)) ⊕ h(K, i) = m[i].
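The XOR identity is what makes decryption work: applying the same keystream a second time cancels it out. A toy sketch (not for real use; a hash-counter keystream like this is essentially a homemade CTR mode):

```python
import hashlib

def keystream_block(key: bytes, i: int) -> bytes:
    # h(K, i): one 32-byte keystream block per counter value
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_with_keystream(key: bytes, blocks: list) -> list:
    # Encryption and decryption are the same operation: XOR block i
    # with h(K, i); applying the keystream twice cancels out.
    return [bytes(a ^ b for a, b in zip(m, keystream_block(key, i)))
            for i, m in enumerate(blocks)]

key = b"secret"
plaintext = [b"attack at dawn", b"retreat at dusk"]
ciphertext = xor_with_keystream(key, plaintext)

assert xor_with_keystream(key, ciphertext) == plaintext
assert ciphertext != plaintext
```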
• Random oracles. Random oracles can be thought of as specialized random number generators. They are used widely in cryptography, such as for randomizing public and private key encryption, to make
them secure from arcane attacks.
Geometric symmetries
March 26th 2013, 06:13 AM
Geometric symmetries
This question tests your ability to describe symmetries geometrically and to represent them as permutations in cycle form. It also tests your understanding of conjugacy classes and their
relationship to normal subgroups.
The figure for this question is a prism with three identical rectangular faces and an equilateral triangle at the top and base. The faces of the prism have been numbered (1 and 4 for the top and
base, and 2, 3 and 5 for the sides, with 5 the back face) so that we may represent the group G of all symmetries of the prism as permutations of the set {1,2,3,4,5}.
a) Describe geometrically the symmetries of the prism represented in cycle form by (14)(23) and (25).
b) Write down all the symmetries of the prism in cycle form as permutations of {1,2,3,4,5}, and describe each symmetry geometrically.
c) Write down the conjugacy classes of G.
d) Determine a subgroup of G of order 2, a subgroup of order 3, and a subgroup of order 4. In each case, state whether or not your choice of subgroup is normal, justifying your answer.
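Not part of the question, but the two symmetries named in part (a) can be composed by machine as a check for parts (b)-(d): closing {(14)(23), (25)} under composition yields all 12 symmetries of the prism. (That these two generate the whole of G is my computation, not something the exercise states.)

```python
from itertools import product

def compose(p, q):
    # apply q first, then p; a permutation is a tuple of images of 1..5
    return tuple(p[q[i] - 1] for i in range(5))

a = (4, 3, 2, 1, 5)  # (14)(23): half-turn swapping top and base, fixing face 5
b = (1, 5, 3, 4, 2)  # (25): reflection swapping two side faces

group = {(1, 2, 3, 4, 5), a, b}
while True:
    new = {compose(p, q) for p, q in product(group, repeat=2)} - group
    if not new:
        break
    group |= new

# 12 elements: identity, 2 rotations of the sides, 3 half-turns,
# the top/base swap (14), 3 reflections, and 2 rotary reflections
```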
4. Polar Form of a Complex Number
by M. Bourne
We can think of complex numbers as vectors, as in our earlier example. [See more on Vectors in 2-Dimensions].
We have met a similar concept to "polar form" before, in Polar Coordinates, part of the analytical geometry section.
Our aim in this section is to write complex numbers in terms of a distance from the origin and a direction (or angle) from the positive horizontal axis.
We find the real (horizontal) and imaginary (vertical) components in terms of r (the length of the vector) and θ (the angle made with the real axis):
From Pythagoras, we have `r^2=x^2+y^2`, and basic trigonometry gives us:
`x=r\ cos\ theta`, `y = r\ sin\ theta`, `tan\ theta=y/x`
Multiplying the last expression throughout by `j` gives us:
`yj = jr\ sin θ`
So we can write the polar form of a complex number as:
`x + yj = r(cos\ θ + j\ sin\ θ)`
r is the absolute value (or modulus) of the complex number
θ is the argument of the complex number.
There are two other ways of writing the polar form of a complex number:
`r\ "cis"\ θ` [This is just a shorthand for `r(cos\ θ + j\ sin\ θ)`]
`r\ ∠\ θ` [means once again, `r(cos\ θ + j\ sin\ θ)`]
NOTE: When writing a complex number in polar form, the angle θ can be in DEGREES or RADIANS.
Example 1
Find the polar form and represent graphically the complex number `7 - 5j`.
Example 2
Express `3(cos\ 232^@+ j\ sin\ 232^@)` in rectangular form.
1. Represent `1+jsqrt3` graphically and write it in polar form.
2. Represent `sqrt2 - j sqrt2` graphically and write it in polar form.
3. Represent graphically and give the rectangular form of `6(cos\ 180^@+ j\ sin\ 180^@)`.
4. Represent graphically and give the rectangular form of `7.32 ∠ -270°`
And the good news is...
Now that you know what it all means, you can use your calculator directly to convert from rectangular to polar forms and in the other direction, too.
How to convert polar to rectangular using calculator.
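The same conversions can be done with Python's standard library too (a side note, not part of the lesson; `cmath.polar` returns the modulus r and the argument θ in radians):

```python
import cmath
import math

z = 7 - 5j                   # the complex number from Example 1
r, theta = cmath.polar(z)    # r = |z| = sqrt(74), theta = arg(z) in radians
deg = math.degrees(theta)    # about -35.54 degrees
back = cmath.rect(r, theta)  # converts the polar form back to x + yj
```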
West Coast Stat Views (on Observational Epidemiology and more)
Here are a couple to ponder. I've got answers (as well as the inevitable pedagogical discussions) over at
You Do the Math
1. If a perfect square isn't divisible by three, then it's always one more than a multiple of three, never one less. Can you see why?
2. Given the following
A B D
_______
C E F G

Where does the 'H' go?
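Puzzle 1 can be spot-checked with a quick computation (my sketch, not from the post): n mod 3 is 0, 1 or 2, so n² mod 3 is 0, 1 or 4 mod 3 = 1; a square that isn't divisible by three is therefore always one more than a multiple of three.

```python
# squares of numbers not divisible by 3, reduced mod 3:
residues = {(n * n) % 3 for n in range(1, 1000) if n % 3 != 0}
```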
4 comments:
1. For 2 I didn't think of anything mathematical. My solution was to swap sides if the letter to be placed rhymed with the previous letter and stay the same side otherwise.
Which would diverge from your answer at k.
1. There are usually multiple patterns that hold early in the sequence. My way of handling that in class would be to say, "that certainly works so far... Let's see what happens when we get more letters."
I might follow that with "Now let's see how the rhyming pattern looks." and have the class call out the answers. The important thing is to leave the kids with the understanding that a problem
can have more than one right answer but sometimes we can narrow it down with more information.
2. Another solution could be that "closed" letters (those with a space enclosed by a line all around, like in D) are on top, while the "open" letters are at the bottom.
1. That is the pattern -- check out the full explanation at You Do the Math
Highland Park, IL Trigonometry Tutor
Find a Highland Park, IL Trigonometry Tutor
I have over 10 years of experience teaching math and English in the public, private, and international school setting. I've taught all levels of middle school and high school math.
As an experienced classroom teacher (now a stay-at-home mom to two little ones), I know the struggles and triumphs that students face in the classroom.
8 Subjects: including trigonometry, geometry, algebra 1, algebra 2
I offer 45 years of tutoring and a PhD in both mathematics and physics. I teach students to think scientifically, that is, to understand the subject in depth, rather than simply memorizing rules
and patterns. My students have not only high ratings, but they are well prepared for university or college. I actually prefer to start with students at as early a grade as possible.
15 Subjects: including trigonometry, calculus, physics, algebra 2
...When I teach, I usually go back to the basics so that when a student forgets a law or a formula, he/she can navigate through the problem without it. When I teach a person, I usually
go through many problems because I believe the only way to learn mathematics is through practice. I believe solving more problems allows students to learn how to approach different questions.
6 Subjects: including trigonometry, geometry, algebra 1, precalculus
I graduated magna cum laude with honors from Illinois State University with a bachelor's degree in secondary mathematics education in 2011. I taught trigonometry and algebra 2 to high school
juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree in applied statistics.
19 Subjects: including trigonometry, calculus, statistics, algebra 1
...I am a Fellow in the Society of Actuaries with extensive experience in finance, modeling, and Microsoft Office. I have been tutoring for over 10 years.I have worked with a number of students
to get ready for their ACT exam. I am qualified in math by virtue of passing the Elementary Math test wi...
26 Subjects: including trigonometry, calculus, physics, geometry
Reverse Quaternion Kinematics equations
Hi all!, first post!
I'm programming an attitude estimation and control algorithm for a satellite. So I need a reference "trajectory" for my control system to try to follow.
The generation of that attitude profile is tricky: point to the sun; if the sat has access to a certain city, point to it; then point again to the sun... and so on.
So I've managed to make a q(t) to make all that possible, but I need the angular velocities that yield that reference attitude quaternion as a function of time.
While I have already figured out a way to get them, they are not perfectly mathematically calculated, showing some nasty discontinuities at certain times.
So, the problem is, given the quaternion kinematic equations:
\dot q = \frac{1}{2} \left[ \begin{array}{cccc} q_4 & -q_3 & q_2 & q_1 \\ q_3 & q_4 & -q_1 & q_2 \\ -q_2 & q_1 & q_4 & q_3 \\ -q_1 & -q_2 & -q_3 & q_4 \end{array}\right] \left[ \begin{array}{c} \omega_1 \\ \omega_2 \\ \omega_3 \\ 0 \end{array}\right],
and the quaternion $$q(t),$$ what is $$\vec \omega(t)?$$ I think the problem would be a boundary value problem, but I don't really know. Can anyone point me in the right direction? Maybe there's a
super-easy MATLAB command.
Thanks in advance!!
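One way to invert the kinematics (my sketch, with my own variable names): for a unit quaternion, the 4x3 matrix multiplying the angular velocity in the equation above has orthonormal columns, so $$\vec \omega(t) = 2\,\Xi(q)^T \dot q(t)$$ -- no boundary value problem is needed, only a numerical derivative of the reference q(t).

```python
import numpy as np

def Xi(q):
    # the 4x3 matrix from the kinematic equation: q_dot = 0.5 * Xi(q) @ omega
    q1, q2, q3, q4 = q
    return np.array([[ q4, -q3,  q2],
                     [ q3,  q4, -q1],
                     [-q2,  q1,  q4],
                     [-q1, -q2, -q3]])

def omega_from_q(q, q_dot):
    # Xi(q) has orthonormal columns when |q| = 1, so Xi(q).T inverts it
    return 2.0 * Xi(q).T @ q_dot

# round-trip check with an arbitrary unit quaternion
q = np.array([0.1, 0.2, 0.3, 0.9])
q /= np.linalg.norm(q)
w_true = np.array([0.1, -0.2, 0.3])
q_dot = 0.5 * Xi(q) @ w_true
```

In practice q̇ would come from finite differences of the reference q(t); renormalizing q and smoothing q̇ around the profile's switching times should remove the kind of discontinuities mentioned above.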
[SciPy-dev] Sad sad sad... Was: Warning about remaining issues in stats.distributions ?
Yaroslav Halchenko lists@onerussian....
Fri Mar 13 09:10:06 CDT 2009
> Fixing numerical integration over the distance of a machine epsilon of
> a function that has a singularity at the boundary was not very high on
> my priority list.
fair enough, but still sad ;)
it is just that I got frustrated since the upgrade to 0.7.0 caused quite a
few places to break, and this one was one of the 'sore' ones ;)
> If there is a real use case that requires this, I can do a temporary
well... I think I've mentioned before how I ran into this issue: our
unittests of PyMVPA [1] fail. Primarily (I think) to my silly
distribution matching function.
> fix. As far as I have seen, you use explicitly this special case as a
> test case and not a test that would reflect a failing use case.
well -- this test case is just an example. that distribution matching is
the one which causes it in unittests
> Overall, I prefer to have a general solution to the boundary problem
> for numerical integration, instead of messing around with the
> theoretically correct boundaries.
sure! a proper solution would be nice. as for "messing with boundaries":
imho it depends on how 'they are messed up with' ;) Maybe the original
self.{a,b} could be left alone, but for numeric integration some others
could be introduced (self.{a,b}_eps), which are used for integration and
some correction term to be added whenever we are in the 'theoretical'
boundaries, to compensate for "missing" part of [self.a, self.a_eps]
Most of the distributions would not need to have them different from a,b
and have 0 correction.
Testing is imho quite obvious -- just go through all distributions and
try to obtain .cdf values at sample points in the vicinity of the
boundaries. I know that it is not exhaustive if a distribution has
singularities within, but well -- it is better than nothing imho ;)
I bet you can come up with a better solution.
> Also, I would like to know what the references for the rdist are.
actually I've found empirically that rdist is the one which I needed,
and indeed there is not much information on the web:
rdist corresponds to the distribution of a (single) coordinate
component for a point located on a hypersphere (in space of N
dimensions) of radius 1. When N is large it is well approximated by
Gaussian, but in the low dimensions it is very
different and quite interesting (e.g. flat in N=3)
n.b. actually my boss told me that there is a family of distributions that
this one belongs to, but I've forgotten which one ;) will ask today again
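fwiw the geometric description is easy to check numerically (my quick sketch, plain stdlib, not in the original mail): by symmetry E[x_i^2] = 1/N for a uniform point on the unit sphere in R^N, and for N=3 the coordinate is flat on [-1,1], so mean 0 and variance 1/3:

```python
import math
import random

def sphere_coordinate(n, rng):
    # one coordinate of a uniform random point on the unit sphere in R^n,
    # obtained by normalizing an n-dimensional Gaussian vector
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(x * x for x in v))
    return v[0] / r

rng = random.Random(0)
samples = [sphere_coordinate(3, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
# expect mean ~ 0 and var ~ 1/3 (the flat N=3 case mentioned above)
```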
> Google search for r distribution is pretty useless, and I have not yet
> found a reference or an explanation of the rdist and its uses.
there was just a single page which I ran to which described rdist and
plotted sample pdfs. but can't find it now
[1] http://www.pymvpa.org/
=------------------------------ /v\ ----------------------------=
Keep in touch // \\ (yoh@|www.)onerussian.com
Yaroslav Halchenko /( )\ ICQ#: 60653192
Linux User ^^-^^ [175555]
3*2 Matrix with complex elements
October 21st 2011, 02:47 PM #31
Super Member
Re: 3*2 Matrix with complex elements
Last edited by Hartlw; October 21st 2011 at 03:05 PM.
Vector Space over Field F
Vector Space over Field F:
An association of scalars a, b, c.... from a field F with Vectors A, B, C... (undefined) from an Abelian group G.
The association has certain properties, for ex, aA is defined and belongs to G.
EDIT: A typical question might be, is multiplication (rule of association) of a Matrix (Vector) by a scalar from a field F (real or complex) a Vector Space?
1) Give the scalar and define the matrix.
2) Are the matrices an Abelian Group?
3) Are the rules of association satisfied? Especially does aA belong to G?
Last edited by Hartlw; October 22nd 2011 at 09:44 AM.
Re: Vector Space over Field F
ok, if you can explain why the first of the following is a subset and the second is not then I
will mark thread as solved. Thanks :-)
Let $W=([a_{ij}]:a_{ij} \in \mathbb{R})$. Let $A=\begin{bmatrix}1&1\\ 1&0\\0 & 1\end{bmatrix} \in W$
Take a scalar $\alpha \in \mathbb{R}$
Therefore $\alpha*A=\begin{bmatrix}\alpha &\alpha \\ \alpha&0 \\ 0 & \alpha\end{bmatrix} \in W$. Hence it is a subset.
Is W a subspace, for the set of matrices with first row (1,1)?
$W=\begin {bmatrix} 1&1\\a_{21} & a_{22}\\a_{31}&a_{32}\end {bmatrix}:a_{ij} \in \mathbb{C}$
$2A \not\in W$ since $2A=\begin{bmatrix} 2&2\\2a_{21}&2a_{22}\\2a_{31}&2a_{32} \end{bmatrix}$. Why is this NOT a subset, given that '2' is a scalar in $\mathbb{R}$ just like $\alpha$ was above? I.e.,
the '1' is 'in' the set when you just take out the factor 2.
Re: Vector Space over Field F
Vector Space over Field F:
An association of scalars a, b, c.... from a field F with Vectors A, B, C... (undefined) from an Abelian group G, which satisfy certain properties, including aA is defined and belongs to G.
First case:
1) What are the vectors? All matrices aA where a is real.
Are they a group? Yes. So they are a sub-group Gs of W.
Define multiplication of members B of Gs by a real scalar. aB is defined and closed (belongs to Gs). Therefore it is a subspace of W.
Second Case:
1) What are the vectors? All matrices with 1 in the first row. This is not a group, so it is not a subgroup.
2) All real matrices with equal real numbers in the first row. This is a subgroup of Gs. Define multiplication of members B of Gs by a real scalar a. aB is defined and closed (first row equal,
belongs to Gs). Therefore it is a subspace of W.
If W has real components, nothing with complex components is a sub-group of W.
The difference between first and second case is what you define as a vector.
Complex numbers are all ordered pairs of real numbers (a,b). (1,0) is a complex number, 1 isn't unless you explicitly state it is a complex number, in which case it is really (1,0)
EDIT: As a trivial but interesting example:
Define the vector A as all 3X2 matrices with 1 in the first row.
Define vector addition as usual for second 2 rows but always 1 for first row.
Define multiplication by a scalar as usual for second two rows but always 1 for the first row.
Is it a vector space? Yes. For all practical purposes the first row is just decoration.
Is it a subspace of W? No, because it is not a subspace under the same operations of addition and multiplication.
EDIT: Factoring out to give ones in the top row. You can define a vector as a factored matrix with one's in the first row. But then you are not using the same rule of scalar multiplication as W,
which says that a scalar times a matrix is identical to the matrix with all elements multiplied by the scalar. So you could define a vector as all matrices factored to give ones in the first row,
it would be a space, but not a subspace because you are changing the rules of scalar multiplication by saying that the factored matrix is not identical to the unfactored matrix.
Last edited by Hartlw; October 22nd 2011 at 04:09 PM.
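The closure failure in the "first row (1,1)" example can also be seen numerically (a sketch using the matrix from the question above):

```python
A = [[1, 1], [1, 0], [0, 1]]  # the matrix from the question
scale = lambda c, M: [[c * x for x in row] for row in M]
twoA = scale(2, A)
# twoA is still a real 3x2 matrix (still in W), but its first row is
# [2, 2], not [1, 1]: the "first row (1,1)" set is not closed under
# scalar multiplication, so it is not a subspace of W
```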
Re: 3*2 Matrix with complex elements
Been meaning to clean up my posts:
1) Both elements of an mXn matrix and matrix scalar multipliers are from a field F, real or complex: Matrix has dimension mxn.
eg, basis of a 1X2 matrix (a11,a12) is (1,0), (0,1) and any 1X2 matrix can be expressed as z1x(1,0) + z2x(0,1), z1 and z2 real or complex, but not mixed.
2) Elements of an mXn matrix are (u,v), u and v and scalar multipliers are real or complex. Matrix has dimension 2xmxn.
eg, basis of a 1X2 matrix (a11,a12) is ((1,0),(0,0)), ((0,1),(0,0)), ((0,0),(1,0)), ((0,0),(0,1)), and any 1X2 matrix (a11,a12) can be expressed as z1x((1,0),(0,0))+ z2x((0,1),(0,0)) + z3x((0,0),
(1,0)) + z4x((0,0),(0,1)).
The source of confusion in my previous posts was failure to distinguish between (u,v) as shortcut notation for the complex number u+iv with (u1,v1)x(u2,v2) = (u1+iv1)x(u2+iv2), and (u,v) as a 2D
vector with real or complex components and (u1,v1)x(u2,v2) not defined but z1x(u1,v1) = (z1xu1,z1xv1).
With regard to the questions in #33
If (x,y,z) is a 3d vector space:
a(1,0,1) is a 1d vector subspace. (analogy of first part)
(1,1,0) + a(0,0,1) = (1,1,a) is a linear manifold of dimension 1 (no 0 vector).
(1,1,a) is not a scalar multiple a(1,1,1) = (a,a,a); it is the translate (1,1,0) + (0,0,a). So the answer to the second part is that it is not a subspace as defined.
[a(1,1,x) is not the same as the vector whose first two components are always 1,1]
Last edited by Hartlw; November 25th 2011 at 11:05 AM.
Binary Encodings of Non-binary Constraint Satisfaction Problems: Algorithms and Experimental Results
N. Samaras and K. Stergiou
A non-binary Constraint Satisfaction Problem (CSP) can be solved directly using extended versions of binary techniques. Alternatively, the non-binary problem can be translated into an equivalent
binary one. In this case, it is generally accepted that the translated problem can be solved by applying well-established techniques for binary CSPs. In this paper we evaluate the applicability of
the latter approach. We demonstrate that the use of standard techniques for binary CSPs in the encodings of non-binary problems is problematic and results in models that are very rarely competitive
with the non-binary representation. To overcome this, we propose specialized arc consistency and search algorithms for binary encodings, and we evaluate them theoretically and empirically. We
consider three binary representations; the hidden variable encoding, the dual encoding, and the double encoding. Theoretical and empirical results show that, for certain classes of non-binary
constraints, binary encodings are a competitive option, and in many cases, a better one than the non-binary representation.
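As an illustration of the first encoding (a sketch, not code from the paper): the hidden variable encoding introduces, for each non-binary constraint, a new variable whose values are the constraint's allowed tuples, linked to each original variable by a binary compatibility relation.

```python
from itertools import product

def hidden_variable_encoding(variables, domains, constraint):
    """Hidden variable encoding of one non-binary constraint: a new
    'hidden' variable ranges over the allowed tuples, and each original
    variable is linked to it by a binary compatibility relation."""
    allowed = [t for t in product(*(domains[v] for v in variables))
               if constraint(*t)]
    hidden_domain = list(range(len(allowed)))
    # value j of the hidden variable is compatible with value a of the
    # i-th original variable v iff allowed[j][i] == a
    links = {v: {(j, allowed[j][i]) for j in hidden_domain}
             for i, v in enumerate(variables)}
    return hidden_domain, links

# example: the ternary constraint x + y = z over {0, 1, 2}
hd, links = hidden_variable_encoding(
    ["x", "y", "z"],
    {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]},
    lambda x, y, z: x + y == z)
```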
3. Use formulas to find the lateral area and surface area of the given prism. Round your answer to the nearest whole number. Use the large 12 x 7 m rectangles on the top and bottom as the bases. (1 point)
76 m2; 244 m2
76 m2; 160 m2
216 m2; 160 m2
216 m2; 244 m2
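The prism's height is not in the extracted text (it appeared only in the figure), but taking h = 2 m, the value consistent with the answer choices, the standard formulas give the first option:

```python
l, w, h = 12, 7, 2  # base 12 m x 7 m; h = 2 m is assumed (see note above)
lateral = 2 * (l + w) * h      # lateral area = perimeter of base * height = 76
surface = lateral + 2 * l * w  # surface area = lateral + two bases = 244
```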
A superhero asked his nemesis how many cats she has. She answered with a riddle: "Three-fourths of my cats plus seven." How many cats does the nemesis have? The answer is 28; how do you get that answer?
To get started, let C represent the nemesis's total number of cats. "Three-fourths of my cats plus seven" indicates that she has the following number of cats:
C = (3/4)C + 7
C - (3/4)C = 7
Simplify C - (3/4)C and set the result = 7. Solve for C: (1/4)C = 7, so C = 28. I think you'll find this to be correct when you come back online. @ineedbiohelpandquick
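The arithmetic checks out exactly (a quick verification, not part of the original answer):

```python
from fractions import Fraction

# C - (3/4)C = 7  =>  (1/4)C = 7  =>  C = 28
C = Fraction(7) / (1 - Fraction(3, 4))
```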
Jason Klaczynski
Ness's Nest
with Jason Klaczynski
Opening Basic Math
December 6, 2006
During the next few weeks, I'll be responding to math questions you guys may have regarding opening hands. (If you're interested in submitting a question, read the bottom.) This week, a friend asked
me the odds of opening with a certain Pokemon in his Muk/Weezing deck.
If you're not a math person, you can skip right down to the numbers. But if you'd like to learn how to do this yourself, I'll walk you through it.
My friend runs the following basic Pokemon in his Muk/Weezing deck:
-4 Grimer
-2 Koffing
He wants to know the odds that he will be forced to open with Koffing - that is, he'll get a Koffing in his opening hand, but no Grimer.
The first thing we have to do is determine the odds of drawing a Koffing in our top seven cards. The simple way to figure out the odds of drawing something is to first calculate the odds of NOT
drawing it. Then, we'll simply subtract our answer from 1.
Step 1: The Odds of drawing Koffing
Since we play 60 cards, we know the odds of avoiding a Koffing in our first card would be 58/60 (or 29/30). If we avoided the Koffing in our first card, the odds of avoiding it in our second card
would be (57/59). We then have (56/58) for our third card, (55/57) for our fourth, and so on, so that we get a total of seven quotients. (One for each card in our hand.) In order to get the
probability of ALL of these things happening, we take these seven quotients and multiply them all together.
(58/60)*(57/59)*(56/58)*(55/57)*(54/56)*(53/55)*(52/54) = .7785
This means the odds of avoiding the Koffing in our opening hand are 77.85%. To determine the odds of getting the Koffing, we simply do 100% - 77.85%, or 1 - .7785.
1 - .7785 = .2215
So now we know there is a 22.15% chance we will draw a Koffing in our opening hand. But that doesn't necessarily mean we'll open with it, because we can draw one of our other basics (4 Grimers) to
save us from opening with Koffing.
Step 2: The Odds of Drawing no Grimer with our Koffing
Now, let's assume we did get a Koffing in our opening hand. We would have six other cards in our hand that could be a Grimer, and will save us from opening with Koffing. We need to determine the
chance that we do NOT get a Grimer. When we have the probability of drawing a Koffing, and the probability of not drawing Grimer, we will be able to multiply these two results together, and that will
give us the final result. (For now.)
The odds of avoiding a Grimer in our remaining 59 cards (Remember, we have 59 cards left to work with, since we know 1 of them is a Koffing, and is in our hand) are 55/59. If we avoid Grimer in our
first card, the odds of avoiding it in the second card are 54/58, 53/57 in the third card and so on. We need a total of six quotients, since we have six cards left to draw.
Odds of no Grimer in remaining 6 cards:
(55/59)*(54/58)*(53/57)*(52/56)*(51/55)*(50/54) = .6434
The odds of not getting a Grimer with our Koffing are 64.34%.
Now, we need to know the odds of both of these things happening.
Step 3: The Odds of drawing Koffing, and no Grimer:
This is the one simple step in our process. If you want to know the odds of one or more things all happening, all you have to do is multiply the chance of them happening individually, together. Since
drawing a Koffing is .2215 and not drawing a Grimer in the remaining six is .6434, all we have to do is multiply .2215 by .6434.
.2215 * .6434 = .1425
So the odds of getting the lone Koffing are 14.25%.
While this may seem like the final step, there is one more thing you have to compensate for, which will give us our final answer.
Step 4: The Odds of a Mulligan
What we've done in the previous three steps has given us the odds of drawing a Koffing in our opening hand without a Grimer. However, what we haven't calculated is the actual odds of opening with Koffing.
Why? Because our calculations don't take mulligans into consideration. While a hand of seven grass energy technically doesn't have a Koffing, it will be shuffled back in and could become a lone
Koffing. To take mulligans into consideration, we first need to determine the probability of us mulliganing.
We figure this out using the same math we used in Step2 - the odds of not drawing a basic in seven cards would be (54/60) for the first card, times (53/59) for the second card, and so on.
Odds of mulligan:
(54/60)*(53/59)*(52/58)*(51/57)*(50/56)*(49/55)*(48/54) = .4586
We will mulligan 45.86% of the time.
Step 5: Compensating for Mulligans
Since Step 3 had not compensated for mulligans, we know that we will get a lone Koffing 14.25% of the time.
Since we mulligan 45.86% of the time, we obviously will not mulligan 54.14% of the time. (1 - .4586).
This is where things get a little tricky. Since Step 3 told us we get a lone Koffing 14.25% of the time, but assumed we didn't mulligan, we can set up a proportion and cross multiply to figure out
the chance a mulligan eventually results in a lone Koffing.
In other words, 54.14% of the time we don't mulligan, and we get a lone Koffing 14.25% of the time.
So in the remaining 45.86% of games, we will get a lone Koffing in a proportional amount.
.5414/.1425 = .4586/x

x represents the odds a mulligan results in a lone Koffing.

Cross multiplying this gives us .5414x = (.1425)(.4586) = .0654
We then divide .0654 by .5414 to isolate x.
x = .0654/.5414
x = .1208
So the odds that we will mulligan, and then get a lone Koffing are 12.08%.
Step 6: The Final Step
Now all we have to do is add our result from Step 3 with our result from Step 5.
The odds we do not mulligan, and get a lone Koffing are .1425.
The odds we do mulligan, and get a lone Koffing are .1208.
All we have to do is add these two results for our final answer.
.1425 + .1208 = .2633
So our final answer lets us know, the odds that we will be forced to open with a Koffing are 26.3%.
E-mail me any questions you have related to opening with a certain Pokemon, and/or a combination of Pokemon/Trainers/Energy. If I like your question, I'll post it and do the math for you. To make my
calculation possible, please provide an entire deck list.
Good luck!
-Jason Klaczynski (ness@pojo.com)
Kingston, NJ Algebra 2 Tutor
Find a Kingston, NJ Algebra 2 Tutor
...I graduated with a bachelors degree in mathematics. I am in pursuit of my masters in K-12 mathematics education with a specialization in urban education. I currently hold my substitute teacher
certification in Middlesex County.
24 Subjects: including algebra 2, calculus, geometry, precalculus
...I am currently enrolled in an M.Ed. program at Arcadia University. My plan is to begin my student teaching experience in the Spring of 2015. My dream is to teach at an elementary school
somewhere in the suburbs of Philadelphia.
48 Subjects: including algebra 2, reading, English, writing
...Give me a call and I will do my best to get you where you need to be. I can also assess skills that may be lacking and help you there as well. My availability is quite open in Old Bridge, NJ
and surrounding towns.
6 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...My name is Dheeraj and I'm a graduate of Swarthmore College. Through the Philadelphia Teaching Fellows Program, I've taught as a high school math teacher at the W. B.
26 Subjects: including algebra 2, calculus, writing, GRE
Physics and math can be a daunting task to many students, and some teachers don't make it any easier with either over-simplistic explanations that don't help or no explanations at all. As a
physics teacher with degrees in Math and Physics, I am aware of the areas of struggle students can experience...
9 Subjects: including algebra 2, physics, algebra 1, geometry
Related Kingston, NJ Tutors
Kingston, NJ Accounting Tutors
Kingston, NJ ACT Tutors
Kingston, NJ Algebra Tutors
Kingston, NJ Algebra 2 Tutors
Kingston, NJ Calculus Tutors
Kingston, NJ Geometry Tutors
Kingston, NJ Math Tutors
Kingston, NJ Prealgebra Tutors
Kingston, NJ Precalculus Tutors
Kingston, NJ SAT Tutors
Kingston, NJ SAT Math Tutors
Kingston, NJ Science Tutors
Kingston, NJ Statistics Tutors
Kingston, NJ Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Bradevelt, NJ algebra 2 Tutors
Edgely, PA algebra 2 Tutors
Georgia, NJ algebra 2 Tutors
Jerseyville, NJ algebra 2 Tutors
Menlo Park, NJ algebra 2 Tutors
Millhurst, NJ algebra 2 Tutors
North Branch, NJ algebra 2 Tutors
Oakford, PA algebra 2 Tutors
Parkland, PA algebra 2 Tutors
Princeton Township, NJ algebra 2 Tutors
Robbinsville, NJ algebra 2 Tutors
Rocky Hill, NJ algebra 2 Tutors
Uppr Free Twp, NJ algebra 2 Tutors
West Bristol, PA algebra 2 Tutors
Winfield Park, NJ algebra 2 Tutors
How would you determine the Lift(thrust) force of a Helicopter in hover
I was wondering, if you are building your own helicopter, what would someone use to test the amount of thrust produced by the rotor?
You could do this simply with a spring scale and something to constrain the rotor group to the scale and something to drive it. It is even easier if you have a strain gauge based force transducer
because then there would be no axial movement due to the thrust.
In regard to the original question, the airfoils and rotors are well known throughout the flight envelope from wind tunnel testing. The calculations involve effects of trailing blades, etc. Of course, the calculations are always compared to actual testing.
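As a rough analytic cross-check on such a measurement (not something given in the thread), ideal actuator-disk momentum theory relates hover thrust T, air density, disk area A and induced velocity v_i by T = 2*rho*A*v_i^2. A sketch with assumed example numbers:

```python
import math

# Ideal momentum-theory hover estimate (ignores tip losses, profile drag, etc.).
rho = 1.225                 # sea-level air density, kg/m^3
R = 0.5                     # rotor radius, m (assumed example value)
T = 50.0                    # target hover thrust, N (assumed example value)

A = math.pi * R ** 2                    # rotor disk area, m^2
v_i = math.sqrt(T / (2 * rho * A))      # induced velocity at the disk, m/s
P_ideal = T * v_i                       # ideal (induced) hover power, W
```

A real rotor needs noticeably more power than P_ideal, which is one reason the bench test described above is still necessary.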
Jaakko Hintikka
Boston University
Ilpo Halonen
University of Helsinki
INTERPOLATION AS EXPLANATION
In the study of explanation, one can distinguish two main trends. On the one hand, some
philosophers think — or at least used to think — that to explain a fact is primarily to subsume it
under a generalization. And even though most philosophical theorists of explanation are not
enchanted with this idea any longer, it is alive and well among theoretical linguists. We will call
this kind of view on explanation a subsumptionist one. It culminated in Hempel’s deductive-
nomological model of explanation, also known as the covering law model.
On the other hand, some philosophers see the task of explaining a fact essentially as a
problem of charting the dependence relations, for instance causal relations, that connect step by
step the explanandum with the basis of the explanation, with “the given,” as it is sometimes put.
We will call this kind of approach an interactionist one. Among the most typical views of this
kind, there are the theories according to which explaining means primarily charting the causal
structure of the world.
What logical techniques are appropriate as the main tools of explanation on the two
opposing views? The answer is clear in the case of explanation by subsumption. There the
crucial step in explanation is typically a generalization that comprehends the explanandum as a
special case. The logic of Hempelian explanation is thus essentially the logic of general
implications (covering laws).
But what is the logic of interactionist explanation? Functional dependencies can be
studied in first-order logic, especially in the logic of relations. But what kinds of results in the
(meta)theory of first-order logic are relevant here? There undoubtedly can be more than one
informative answer to this question. In this paper, we will concentrate on one answer which we
find especially illuminating. It highlights the promise of Craig’s interpolation theorem as a tool
for the study of explanation.
What Craig’s theorem says is reasonably well known, even though it has not found a slot
in any introductory textbook of logic for philosophers. It can be stated as follows, using the
turnstile ⊢ as a metalogical symbol to express logical consequence:
Craig’s Interpolation Theorem. Assume that F and G are (ordinary) first-order formulas, and
assume that
(i) F ⊢ G
(ii) not ⊢ ∼F
(iii) not ⊢ G
Then there exists a formula I (called an interpolation formula) such that
(a) F ⊢ I
(b) I ⊢ G
(c) The nonlogical constants and free individual variables of I occur both in F and in G.
It is the clause (c) that gives Craig’s interpolation theorem its cutting edge, for (a)-(b) would be
trivial to satisfy without (c).
The nature and implications of Craig’s interpolation theorem are best studied with the
help of some of the other basic results concerning first-order logic. First, from the completeness
of ordinary first-order logic it follows that because of (i) there exists an actual proof of (F ⊃ G)
(or a proof of G from F) in a suitable axiomatic formulation of first-order logic. From Gentzen’s
first Hauptsatz it follows that those suitable complete formulations include cut-free methods like
Beth’s tableau method, which of course is only a mirror image of suitable cut-free variants of
Gentzen’s sequent calculus.
In this paper, we will use a particularly simple variant of the tableau method. It is
characterized by the fact that in it no movement of formulas between the left and the right side of
the tableau is allowed. The negation-free subset of the tableau rules can be formulated as follows
where λ is the list of formulas on the left side of some subtableau and µ similarly the list of
formulas on the right side:
(L.&) If (S1 & S2) ∈ λ, add S1 and S2 to λ.
(L.∨) If (S1 ∨ S2) ∈ λ, you may start two subtableaux by adding S1 or S2, respectively, to λ.
(L.E) If (∃x)S[x] ∈ λ and if there is no formula of the form S[b] ∈ λ, you may add S[d] to λ,
where d is a new individual constant.
(L.A) If (∀x)S[x] ∈ λ and if b occurs in the same subtableau, you may add S[b] to λ.
Right-hand rules (R.&), (R.∨), (R.E) and (R.A) are duals (mirror images) of these rules. For example:
(R.&) If (S1 & S2) ∈ µ, then you may start two subtableaux by adding S1 or S2, respectively, to µ.
Negation can be handled by the following rewriting rules.
(R.R) ∼∼S as S
∼(S1 ∨ S2) as (∼S1 & ∼S2)
∼(S1 & S2) as (∼S1 ∨ ∼S2)
∼(∃x) as (∀x)∼
∼(∀x) as (∃x)∼
By means of these rewriting rules, each formula can effectively be brought to a negation normal form in which all negation signs are prefixed to atomic formulas or identities.
We will abbreviate ∼(a = b) by (a ≠ b).
As derived rules (construing (S1 ⊃ S2) as a rewritten form of (∼S1 ∨ S2)) we can also have:
(L.⊃) If (S1 ⊃ S2) ∈ λ, add ∼S1 or S2 to λ, starting two subtableaux.
(R.⊃) If (S1 ⊃ S2) ∈ µ, add ∼S1 and S2 to µ.
For identity, the following rules can be used:
(L.self =) If b occurs in the formulas on the left side, add (b =b) to the left side.
(L.=) If S[a] and (a = b) occur on the left side, add S[b] to the left side.
Here S[a] and S[b] are like each other except that some occurrences of a or b have been
exchanged for the other one.
(R.self =) and (R.=) are like (L.self =) and (L.=) except that = has been replaced by its negation ≠.
As was stated, it can be shown that if F ⊢ G is provable in first-order logic, it is provable
by means of the rules just listed. A proof means a tableau which is closed. A tableau is said to be
closed if and only if the following condition is satisfied by it:
There is a bridge from each open branch on the left to each open branch on the right.
A bridge means a shared atomic formula.
A branch is open if and only if it is not closed.
A branch is closed if it contains a formula and its negation.
In the interpolation theorem, we are given a tableau proof of F ⊢ G. The crucial question is how an interpolation formula is found on the basis of this tableau. For this purpose, it may be noted that because of the assumptions of the interpolation theorem there must be at least one branch open on the left and at least one on the right. It can also be assumed that all the formulas are in the
negation normal form.
Then an interpolation formula can be constructed as follows:
Step 1. For each open branch on the left, form the conjunction of all the formulas in it that are
used as bridges to the right.
Step 2. Form the disjunction of all such conjunctions.
Step 3. Beginning from the end (bottom) of the tableau and moving higher up step by step,
replace each constant introduced (i) from the right to the left by an application of (U.1) on the
left or (ii) from the left to the right by an application of (E.1) on the right by a variable, say x,
different for different applications.
In the case (i), moving from the end of the tableau upwards, prefix (∃x) to the formula so
far obtained. In the case (ii), prefix (∀x) to the formula so far obtained.
It can be proved that the formula obtained in this way is always an interpolation formula
in the sense of Craig’s theorem. We will not carry out the proof here. Instead, we will illustrate
the nature of the interpolation formula by means of a couple of examples.
Consider first the following closed tableau and the interpolation formula it yields:
(1.1) (∀x)L(x,b) (1.2) ((∀y)(L(b,y) ⊃ m = y)) ⊃ (m = b)
Initial premise Ultimate conclusion
(1.6) L(b,b) (1.3) ∼(∀y)(L(b,y) ⊃ m = y)
from (1.1) by (L.A) from (1.2) by (R.⊃)
(1.4) m=b
from (1.2) by (R.⊃)
(1.5) (∃y)(L(b,y) & (m ≠ y))
from (1.3) by rewrite rules
(1.7) (L(b,b) & (m ≠ b))
from (1.5) by (R.E)
(1.8) L(b,b) (1.9) m ≠ b
from (1.7) by (R.&) from (1.7) by (R.&)
bridge to (1.6) closure with (1.4)
The interpolation formula is
(1) L(b,b)
This example may prompt a déjà vu experience in some of our readers. If you interpret
L(x,y) as x loves y, b as my baby, and m as me, we get a version of the old introductory logic
book chestnut. The initial premise says
(2) Everybody loves my baby
and the conclusion
(3) If my baby only loves me, then I am my baby.
Textbook writers cherish this example, because it gives the impression of expressing a clever,
nontrivial inference. In reality, their use of this example is cheating, trading on a mistranslation.
In ordinary usage, (2) does not imply that my baby loves him/herself. Its correct translation is
(4) (∀x)(x ≠ b ⊃ L(x,b))
which does not any longer support the inference when used as the premise. Hence the inference
from the initial premise to the ultimate conclusion turns entirely on the premise’s implying
L(b,b). This is what explains the inference. Predictably, L(b,b) is also the interpolation formula.
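For readers who like to check such claims mechanically, here is a small brute-force verification (our own sketch, not from the paper) that L(b,b) sits semantically between the premise and the conclusion: over every interpretation on a two-element domain, the premise entails L(b,b) and L(b,b) entails the conclusion. (This checks ⊨ on finite models, a sanity check rather than a full validity proof.)

```python
from itertools import product

# Check F |= I and I |= G over all two-element models, where
#   F = (forall x) L(x,b),  I = L(b,b),  G = ((forall y)(L(b,y) -> m=y)) -> m=b.
D = (0, 1)
b = 0
violations = []
for m in D:                               # m may or may not denote the same element as b
    for bits in product((False, True), repeat=4):
        L = {(i, j): bits[2 * i + j] for i in D for j in D}
        F = all(L[(x, b)] for x in D)     # everybody loves b
        I = L[(b, b)]                     # the interpolant
        ante = all((not L[(b, y)]) or (m == y) for y in D)
        G = (not ante) or (m == b)        # the ultimate conclusion
        if F and not I:
            violations.append(("F without I", m, bits))
        if I and not G:
            violations.append(("I without G", m, bits))
# violations remains empty: L(b,b) interpolates in every such model
```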
Consider next another example of a closed tableau and its interpolation formula.
(2.1) (∀x)((A(x) & C(x)) ⊃ B(x)) IP (2.3) (∀x)(Ex ⊃ (A(x) ⊃ B(x))) UC
(2.2) (∀x)((D(x) ∨ ∼D(x)) ⊃ C(x)) IP (2.4) (E(β) ⊃ (A(β) ⊃ B(β)))
(2.9) (A(β) & C(β)) ⊃ B(β) (2.5) ∼E(β)
(2.10) ∼(A(β) & C(β)) (2.11) B(β) (2.6) (A(β) ⊃ B(β))
bridge (2.7) ∼A(β)
(2.12) ∼C(β) (2.13) ∼A(β) (2.8) B(β)
(2.14) ((D(β) ∨ ∼D(β)) ⊃ C(β))
(2.15) ∼(D(β) ∨ ∼D(β)) (2.16) C(β)
(2.17) ∼D(β)
(2.18) ∼∼D(β)
The interpolation formula is, as one can easily ascertain
(5) (∀x)(∼A(x) ∨ B(x))
In this example, the first initial premise says that whatever is in both C and A is also B. But
the second initial premise entails that everything is C anyway. Hence the cash value of the initial
premises is that if anything is A, it is also B.
The conclusion says that if any E is A, then it is B. But any A is B anyway, in virtue of
the initial premises. Hence the component of the premises that suffices to imply the conclusion is
(6) (∀x)(A(x) ⊃ B(x))
which is equivalent with the interpolation formula (5).
Again, the interpolation formula expresses the reason (“explanation”) why the logical
consequence relation holds.
Such examples bring out vividly the fact that the interpolation theorem is already by itself a
means of explanation. Suitable normalized interpolation formulas show the deductive interface
between the premise and the conclusion and by so doing help to explain why the deductive
relationship between them holds.
This result has remarkable consequences. For one thing, some philosophers of
mathematics have discussed the question as to whether there can be explanations in mathematics.
This question can now be answered definitively in the affirmative. The fact that mathematical
arguments proceed purely deductively is no barrier to explanation, for it was just seen that there
can be explanations of deductive relationships, too. For instance, in the sense involved here, the
unsolvability by radicals of the general equation of the fifth degree is explained by the fact that a
certain group which “from the outside” can be characterized as the Galois group of the general
equation of the fifth degree, is the symmetric group (which is not solvable).
But in what sense does the interpolation theorem offer an analysis of functional
dependencies? Here the way the interpolation formula is constructed (see above) provides an
answer. We can think of the premise F (and its consequences on the left side) and of the
conclusion G (together with its consequences on the right side) as each describing a certain kind
of configuration. Each quantifier in the interpolation formula comes from an application of a
general truth (expressed on the left by a universal sentence and on the right by an existential
sentence) to an individual that, before the application, occurred only on the other side. Since
there is no other traffic between the two sides of a tableau, these applications are the only ones
which show how the two configurations depend on each other so as to bring about the logical
consequence relation between them. No wonder that they are highlighted by the interpolation
formula, and no wonder that the interpolation formula in a perfectly natural sense serves to
explain why the consequence relation holds.
The character of the crucial instantiation steps as turning on functional dependencies can
be seen even more clearly when existential quantifiers on the left and universal quantifiers on the
right are replaced by the corresponding Skolem functions. Then one of the crucial instantiation
steps will express the fact that an ingredient of (individual in) one of the two configurations
depends functionally on an ingredient of (individual in) the other configuration. Moreover, these
are all the functional dependencies that have to be in place for the logical consequence to hold.
Essentially the same point can be expressed by saying that the interpolation formula is a
summary of the step-by-step functional dependencies that lead from the situation specified by the
premises to the situation described by the conclusion. This role by the interpolation formula can
be considered as the basic reason why it serves as a means of explanation.
At the same time, we can see the limitations of Craig’s interpolation theorem as a tool of
an interactionist theory of explanation. A suitable interpolation formula I explains why G follows
from F by showing how the structures specified by F interact with the structures specified by G
so as to make the consequence inevitable. What it does not explain is how the internal dynamics
of each of the two types of structures contributes to the consequence relation. It is for this reason
that nonlogical primitives not occurring in both F and G cannot enter into I: they cannot play any
role in the interaction of the two structures.
In the general theory of explanation the interpolation theorem can thus be used insofar as
the explanation of an event, say one described by E, can be thought of as depending on two
different things, on the one hand on some given background theory and on the other hand on
contingent ad hoc facts concerning the circumstances of E.
Both the background theory and the contingent “initial conditions” specify a kind of
structure. An explanation is an account of the interplay between these two structures.
Not surprisingly, the interpolation theorem turns out to be an extremely useful tool in the
general theory of explanation. Here we can only indicate the most important result. (Cf. here
Hintikka and Halonen, 1995.) In explaining some particular event, say that P(b), we have
available to us some background theory T and certain facts A about the circumstances of the
explanandum. The process of explanation will then consist of deducing the explanandum from T
& A. If it can be assumed that T = T[P] does not depend on b and A = A[b] does not depend on
P, we can apply the interpolation theorem (barring certain degenerate cases) in two different ways to obtain two
different interpolation formulas H[b] and I[P] such that
(7) A[b] ⊢ H[b]
(8) T[P] ⊢ (∀x)(H[x] ⊃ P(x))
(9) P does not occur in H[b]
(10) T[P] ⊢ I[P]
(11) I[P] ⊢ (∀x)(A[x] ⊃ P(x))
(12) b does not occur in I[P]
Even though we have not made any assumptions concerning explanation as
subsumption, it turns out that as a by-product of a successful explanation we obtain not only one,
but two covering laws, viz.
(13) (∀x)(H[x] ⊃ P(x))
(cf. (8)) and
(14) (∀x)(A[x] ⊃ P(x))
(cf. (11)). They have distinctly different properties even though they have been fallaciously
assimilated to each other in previous literature. Nevertheless, neither of these covering laws can be said to provide an explanation of why the explanandum holds. These covering laws are by-
products of successful explanatory arguments, not the means of explanation. Insofar as we can
speak of the explanation here either the “antecedent conditions” H[b] or the “local law” I[P] can
be considered as a viable candidate. This means of course that the notion of “the explanation” is
intrinsically ambiguous. In particular, the reason why H[x] can lay claim to the status of an
explanation is precisely what was pointed out earlier in this paper, viz. that it brings out the
interplay of the situations specified by A[b] and (T[P] ⊃ P(b)).
From these brief indications it can already be seen that the interpolation theorem is
destined to play a crucial role in any satisfactory theory of explanation. Why it can do so is
explained in the earlier parts of this paper.
These remarks point also to an interesting methodological moral for philosophers of
science. There is a widespread tendency to consider logical relationships in a purely syntactical manner, as relations between the several propositions of a theory or even between the
sentences expressing such propositions. The entire structuralist movement is predicated on this
assumption. This tendency is a fallacious one, however. The propositions considered in logic
have each a clear-cut model-theoretical import. Spelling out this import can serve to clarify those
very problems which philosophers of science are interested in, for reasons that can be spelled out
(see Hintikka, forthcoming (b)). This model-theoretical meaning can be seen more directly when
cut-free proof methods (such as the tree methods or the tableau method) are used, just as they are
assumed to be used in the present paper.
Another example of how model-theoretical considerations can put issues in the philosophy of science in a new light is offered by Hintikka (forthcoming (a)).
Craig, W. (1957), “Three Uses of the Herbrand-Gentzen Theorem in Relating Model Theory and
Proof Theory,” Journal of Symbolic Logic vol. 22, pp. 269-285.
Hintikka, J. (forthcoming (a)), “Ramsey Sentences and the Meaning of Quantifiers,” Philosophy
of Science.
Hintikka, J. (forthcoming (b)), “The Art of Thinking: A Guide for the Perplexed.”
Hintikka, J. and I. Halonen (1995), “Semantics and Pragmatics for Why-questions,” Journal of
Philosophy vol. 92, pp. 636-657.
Hintikka, J. and I. Halonen (forthcoming), “Toward a Theory of the Process of Explanation.”
On the parameterized complexity of layered graph drawing
- 2002, submitted. Stacks, Queues and Tracks: Layouts of Graph Subdivisions 41 , 2004
Cited by 26 (20 self)
A queue layout of a graph consists of a total order of the vertices, and a partition of the edges into queues, such that no two edges in the same queue are nested. The minimum number of queues in a
queue layout of a graph is its queue-number. A three-dimensional (straight- line grid) drawing of a graph represents the vertices by points in Z and the edges by non-crossing line-segments. This
paper contributes three main results: (1) It is proved that the minimum volume of a certain type of three-dimensional drawing of a graph G is closely related to the queue-number of G. In particular,
if G is an n-vertex member of a proper minor-closed family of graphs (such as a planar graph), then G has a O(1) O(1) O(n) drawing if and only if G has O(1) queue-number.
"... We prove that every n-vertex graph G with pathwidth pw(G) has a three-dimensional straight-line grid drawing with O(pw(G) n) volume. Thus for ..."
, 2002
Cited by 12 (4 self)
A bipartite graph is biplanar if the vertices can be placed on two parallel lines (layers) in the plane such that there are no edge crossings when edges are drawn straight. The 2-Layer Planarization
problem asks if k edges can be deleted from a given graph G so that the remaining graph is biplanar. This problem is NP-complete, as is the 1-Layer Planarization problem in which the permutation of
the vertices in one layer is fixed. We give the following fixed parameter tractability results: an O(k ·6 k +|G|) algorithm for 2-Layer Planarization and an O(3 k ·|G|) algorithm for 1-Layer
Planarization, thus achieving linear time for fixed k.
- In Annual European Symposium on Algorithms (Proc. ESA ’04 , 2004
Cited by 6 (0 self)
author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the
public. ii We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that
connects its two endpoints. Directed graphs are often used to model hierarchical structures; in order to visualize the hierarchy represented by such a graph, it is desirable that a drawing of the
graph reflects this hierarchy. This can be achieved by drawing all the edges in the graph such that they all point in an upwards direction. A graph that has a drawing in which all edges point in an
upwards direction and in which no edges cross is known as an upward planar graph. Unfortunately, testing if a graph is upward planar is NP-complete. Parameterized complexity is a technique used to
find efficient algorithms for hard problems.
Dynamics of Galaxies
Grigorij Kuzmin was born on 08.04.1917 in Vyborg (a town in Finland at that time, now in Leningrad district). In 1924 the family moved to Tallinn, Estonia. In 1935 he graduated from the Russian High School in Tallinn and became a student of the Faculty of Mathematics and Natural Sciences of Tartu University, from which he graduated cum laude in 1940.
Influenced by Professors Ernst Öpik and Taavet Rootsmäe, founders of modern Estonian astronomy, Kuzmin became an astronomer. Öpik considered him his most talented disciple. His first scientific papers were published in 1938. From 1940 he worked at Tartu Observatory, first as an assistant fellow, then as a senior fellow and head of the division of galactic studies. He lectured at Tartu University, teaching astronomy, geophysics and stellar dynamics. Kuzmin was the supervisor of Academician Einasto, Professor Kutuzov, Professor Malasidze and many Tartu astronomers.
Works by Ogorodnikov, Agekian, Idlis, Genkin, Tsitsin, Ossipkov and others were influenced by Kuzmin.
In 1961 Kuzmin was elected a Corresponding Member of the Estonian Academy of Sciences. In 1971 he was awarded the Bredikhin Prize of the Academy of Sciences of the USSR.
Kuzmin passed away on 22.04.1988 after a severe kidney disease.
Kuzmin was interested in many branches of astronomy, but his most prominent achievements lie in Galactic Astronomy and Galactic Dynamics. Kuzmin was the second (after Ogorodnikov) researcher in the Soviet Union to start systematic studies in the Dynamics of Stellar Systems.
In 1943 he developed a method of constructing galactic models using ellipsoids of non-homogeneous density and varying axial ratio. He applied this method first to the Andromeda galaxy (1943) and
then to our Galaxy (1952).
An important parameter of the Galactic model is the local density in the Solar vicinity. He developed a method of deriving this quantity using vertical motions and positions of stars of flat Galactic
populations. His results show that there is no evidence for the presence of large amounts of dark matter in the plane of the Galaxy, contrary to results obtained by Oort (1933) and Parenago (1951).
His students Eelsalu and Joeveer repeated this analysis using different data and methods, and confirmed Kuzmin results. This discrepancy was solved only recently: modern data have confirmed results
by Kuzmin and his students.
The triaxial shape of the velocity ellipsoid shows that there must be at least 3 integrals of motion, and Kuzmin developed the theory of the third integral of motion (1952). Kuzmin showed that only single-valued (isolating) integrals of motion should be used as arguments of a steady distribution function for stellar systems, and that the number of such integrals is, in general, equal to or less than three.
He found that Stäckel type potentials can be used for constructing reasonable Galactic models, and developed a general theory of axisymmetric Stäckel type gravitating systems (1956).
Kuzmin was the first to construct Stäckel type triaxial gravitating models (1973).
To study the effects of stellar encounters Kuzmin deduced a Fokker-Planck type kinetic equation for gravitating systems (1957), practically at the same time as Rosenbluth and other physicists did for plasmas.
Applying his theory to galactic discs containing massive perturbers (such as giant molecular clouds) Kuzmin found a quasi-steady relation between velocity dispersions that is confirmed by
observational data (1962).
Kuzmin and his disciples Kutuzov and Veltmann were among the first to construct steady phase-space models for stellar systems. Kuzmin and Kutuzov found a two-integral distribution function for a
reasonable mass distribution model (1962). Veltmann developed in 1960--1968 a general theory of steady self-gravitating spherical systems, and then Kuzmin and Veltmann discovered new classes of
spherical models that were compared with observational data for globular clusters.
Kuzmin and co-authors (1986) discovered a very general family of models of mass distribution in stellar systems that can be used for approximating the de Vaucouleurs profile for a surface density and include the Hernquist model and the gamma-model found later.
Kuzmin showed that Lichtenstein's theorem on the existence of a plane of symmetry is valid for wide classes of collisionless gravitating systems (1964).
Kuzmin's articles include many other results of fundamental significance.
Unfortunately Kuzmin did not like to publish his works, and many of his results remain unpublished. But even the papers he did publish are sources of new studies.
Omega limit set and Stable manifold of a point
I cannot understand the difference between the omega limit set of a point and its stable manifold. Can anybody please give me an example?
1 This isn't really research-level and you may have better luck on math.stackexchange.com (as per the FAQ here), but FWIW you may perhaps gain some insight by considering the example $f\colon \
mathbb{R}^2\to\mathbb{R}^2$ given by $f(x,y) = (2x, y/2)$, and using the definitions to work out the omega limit set and stable manifold of $(0,0)$. Then try $(0,1)$ and $(1,0)$. – Vaughn
Climenhaga Aug 29 '12 at 15:40
As @Vaughn Climenhaga said, this question is not research level. But maybe a quick answer is that if $x$ is in the omega limit set of $p$, then $p$ is in the stable manifold of $x$ and vice versa.
– user7807 Aug 31 '12 at 22:40
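A minimal numerical sketch of the suggested example (Python; the helper names are illustrative, not from the thread): iterating f(x,y) = (2x, y/2) shows that (0,1) lies on the stable manifold of the origin, so its omega limit set is {(0,0)}, while the orbit of (1,0) escapes to infinity and has an empty omega limit set.

```python
# Iterate the linear map f(x, y) = (2x, y/2) and watch where orbits go.

def f(p):
    x, y = p
    return (2 * x, y / 2)

def orbit(p, n):
    """Return the point reached after n forward iterations of f."""
    for _ in range(n):
        p = f(p)
    return p

# (0, 1) is on the stable manifold of the fixed point (0, 0):
# its orbit converges to the origin, so its omega limit set is {(0, 0)}.
x, y = orbit((0.0, 1.0), 50)
print(x, y)  # 0.0 and 2**-50, essentially the origin

# (1, 0) is on the unstable manifold: its orbit escapes to infinity,
# so its omega limit set is empty.
x, y = orbit((1.0, 0.0), 50)
print(x, y)  # 2**50 and 0.0, diverging
```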
| {"url":"http://mathoverflow.net/questions/105850/omega-limit-set-and-stable-manifold-of-a-point","timestamp":"2014-04-20T11:06:38Z","content_type":null,"content_length":"48242","record_id":"<urn:uuid:9b0feb46-0da0-4147-a92e-3e7de0d4eb31>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Constants and ARIMA models in R
June 5, 2012
By Rob J Hyndman
This post is from my new book Forecasting: principles and practice, available freely online at OTexts.com/fpp/.
A non-seasonal ARIMA model can be written as

    (1 - phi_1 B - ... - phi_p B^p)(1 - B)^d y_t = c + (1 + theta_1 B + ... + theta_q B^q) e_t,

where B is the backshift operator, c = mu(1 - phi_1 - ... - phi_p), mu is the mean of (1 - B)^d y_t, and e_t is white noise with variance sigma^2.
Thus, the inclusion of a constant in a non-stationary ARIMA model is equivalent to inducing a polynomial trend of order d in the forecast function.
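A quick numerical illustration of this point (a minimal NumPy sketch, not taken from the original post): with d = 1 and a non-zero constant c, the model y_t = y_{t-1} + c + e_t produces a series with a linear drift whose slope is approximately c.

```python
import numpy as np

rng = np.random.default_rng(0)

c = 0.5                            # the constant in the differenced (d = 1) model
e = rng.normal(0.0, 0.1, 1000)     # white noise

# d = 1 with a constant: the first differences are c + e_t,
# so y_t is (approximately) a straight line with slope c.
y = np.cumsum(c + e)

slope = np.polyfit(np.arange(len(y)), y, 1)[0]
print(slope)  # close to 0.5
```

Running the same experiment with c = 0 yields a driftless random walk, which is why the software distinguishes the two cases.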
Including constants in ARIMA models using R
By default, the arima() command in R sets c = mu = 0 when d > 0, and provides an estimate of mu when d = 0.
The arima() command has an argument include.mean which only has an effect when d = 0 and is TRUE by default. Setting include.mean=FALSE will force mu = c = 0.
The Arima() command from the forecast package provides more flexibility on the inclusion of a constant. It has an argument include.mean which has identical functionality to the corresponding argument
for arima(). It also has an argument include.drift which allows mu to be non-zero when d = 1.
There is also an argument include.constant which, if TRUE, will set include.mean=TRUE if d = 0 and include.drift=TRUE if d = 1. When include.constant=FALSE, both include.mean and include.drift will be set to FALSE. If
include.constant is used, the values of include.mean and include.drift are ignored.
When include.drift=TRUE, the fitted model from Arima() is the ARIMA model above with a non-zero constant, so the forecasts include a drift term.
In this case, the R output will label the estimated constant as the "drift" coefficient.
The auto.arima() function automates the inclusion of a constant. By default, for d = 0 or d = 1, a constant will be included if it improves the AIC; for d > 1 the constant is always omitted. If allowdrift=FALSE is specified, then the constant is only allowed when d = 0.
Eventual forecast functions
The eventual forecast function (EFF) is the limit of the forecast, considered as a function of the forecast horizon h, as h grows large.
The constant c has an important effect on the EFF:
- if c = 0 and d = 0, the EFF will go to zero;
- if c = 0 and d = 1, the EFF will go to a non-zero constant;
- if c = 0 and d = 2, the EFF will follow a straight line;
- if c is non-zero and d = 0, the EFF will go to the mean of the data;
- if c is non-zero and d = 1, the EFF will follow a straight line;
- if c is non-zero and d = 2, the EFF will follow a quadratic trend.
Seasonal ARIMA models
If a seasonal model is used, all of the above will hold with d replaced by d + D, where D is the order of seasonal differencing.
| {"url":"http://www.r-bloggers.com/constants-and-arima-models-in-r/","timestamp":"2014-04-17T04:02:54Z","content_type":null,"content_length":"56557","record_id":"<urn:uuid:7d7fe6dc-777c-420a-ab91-f534affece1e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
What happens when you pop a quantum balloon?
The latest news from academia, regulators, research labs and other things of interest
Posted: April 17, 2008
(Nanowerk News) When a tiny, quantum-scale, hypothetical balloon is popped in a vacuum, do the particles inside spread out all over the place as predicted by classical mechanics?
The question is deceptively complex, since quantum particles do not look or act like air molecules in a real balloon. Matter at the infinitesimally small quantum scale is both a wave and a particle,
and its location cannot be fixed precisely because measurement alters the system.
Now, theoretical physicists at the University of Southern California and the University of Massachusetts Boston have proven a long-standing hypothesis that quantum-scale chaos exists … sort of.
Writing in the April 17 edition of Nature, senior author Maxim Olshanii reported that when an observer attempts to measure the energies of particles coming out of a quantum balloon, the interference
caused by the attempt throws the system into a final, “relaxed” state analogous to the chaotic scattering of air molecules.
The result is the same for any starting arrangement of particles, Olshanii added, since the act of measuring wipes out the differences between varying initial states.
“It’s enough to know the properties of a single stationary state of definite energy of the system to predict the properties of the thermal equilibrium (the end state),” Olshanii said.
The measurement – which must involve interaction between observer and observed, such as light traveling between the two – disrupts the “coherent” state of the system, Olshanii said.
In mathematical terms, the resulting interference reveals the final state, which had been hidden in the equations describing the initial state of the system.
“The thermal equilibrium is already encoded in an initial state,” Olshanii said. “You can see some signatures for the future equilibrium. They were already there but more masked by quantum fluctuations.”
The finding holds implications for the emerging fields of quantum computing and quantum information theory, said Paolo Zanardi, an associate professor of physics studying quantum information at USC.
In Zanardi’s world, researchers want to prevent coherent systems from falling into the chaos of thermal equilibrium.
“Finding such ‘unthermalizable’ states of matter and manipulating them is exactly one of those things that quantum information/computation folks like me would love to do,” Zanardi wrote. “Such states
would be immune from ‘decoherence’ (loss of quantum coherence induced by the coupling with environment) that’s still the most serious, both conceptually and practically, obstacle between us and
viable quantum information processing.”
Zanardi and a collaborator introduced the notion of “decoherence-free” quantum states in 1997. Researchers such as Zanardi and Daniel Lidar, associate professor of chemistry, among others, have
helped make USC a major center for the study of quantum computing. | {"url":"http://www.nanowerk.com/news/newsid=5350.php","timestamp":"2014-04-18T00:23:11Z","content_type":null,"content_length":"32698","record_id":"<urn:uuid:4b58c5a6-081b-4c9a-871b-fbfb34c95ae2>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Newsletter: Math Forum Internet News No. 2.15 (April 14)
Replies: 0
Newsletter: Math Forum Internet News No. 2.15 (April 14)
Posted: Apr 12, 1997 11:06 PM
14 April 1997 Vol.2, No.15
Well Connected Educator | Building Model Surfaces | Why Trig?
A publishing center and forum where teachers,
administrators, parents, and others write about
educational technology, join in conversations,
and learn from one another.
Nominate your favorite classroom use or professional
development site for the Teachers' Choice Web selection
contest... join author Howard Rheingold in an on-line Forum
and discuss the varying quality of information, the dangers
of using old models, the opportunities of new models, and
how to move into the future... or earn a stipend by writing
about what you do in the classroom for publication on this
New articles and columns:
- Can an email project create friendship links across
the generations?
- Can computers make language barriers disappear?
- Who is best equipped to help students and teachers
learn to use a 'library without walls'?
- How does a teacher provide help with topics beyond
his range of knowledge?
- Does television mix well with school work?
A project of the Global SchoolNet Foundation.
SO YOU WANT TO BUILD A SURFACE? JOAN'S HOUSE OF MODELS
At the Math Forum we're interested in the use of physical
models as aids in understanding three-dimensional surfaces.
Joan Hoffmann (Swarthmore '96) has created some pages for
building surfaces discussed in multi-variable calculus.
Using yarn, she shows how to:
- trace a sine curve around the inside of a soda bottle
- construct a simple or economy (appropriate for group
learning situations) hyperbolic paraboloid in a
cardboard box
- using foam board, cardboard, a sharp X-acto blade, and
glue, make f(x,y) = (x^2 * y) / (x^4 + y^2).
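Before cutting foam board, the surface f(x,y) = (x^2 * y) / (x^4 + y^2) is easy to tabulate numerically (a Python/NumPy sketch; the helper names are mine, not from the newsletter). Evaluating along different paths through the origin shows why it is an instructive model: along the axes f is 0, but along the parabola y = x^2 it is constantly 1/2.

```python
import numpy as np

def f(x, y):
    # Guard the 0/0 at the origin, where we take f = 0.
    denom = x**4 + y**2
    return np.where(denom == 0, 0.0, x**2 * y / np.where(denom == 0, 1.0, denom))

# Tabulate on a grid, e.g. to drive a contour plot or size cardboard slices.
xs = np.linspace(-1, 1, 5)
ys = np.linspace(-1, 1, 5)
Z = f(*np.meshgrid(xs, ys))

# Along the x-axis the function is identically 0 ...
print(f(np.array([0.5]), np.array([0.0])))  # [0.]
# ... but along the parabola y = x^2 it equals 1/2 away from the origin.
x = np.array([0.5])
print(f(x, x**2))                           # [0.5]
```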
Do you have models of your own? Write up instructions for
building them and submit them to
WHY TRIG? A MATH-TEACH DISCUSSION
You are teaching a group of skeptical high school
students trigonometry and they want to know
"Why do we learn Trigonometry?"
-Sharon Hessney
Responses to this question ranged from concrete examples of
how trig is used to conversations about the validity and
utility of the question as stated. Below are a few
excerpts, and we encourage you to read the full discussion.
Trig is easy to defend! Any physical situation where
two actors don't meet at right angles or are parallel
requires trig. This includes virtually any realistic
mechanics problem (cars on hills, the trajectory of a
baseball or rocket, bridge design, road design, TV
picture tube design, etc.) and many optics problems...
Taken a step further, understanding many kinds of
motion and vibration (sound, light "waves,"...)
Now, try defending integration by parts...
- Tim Corica, The Peddie School
Why is it that questions from students about different
bits of math cause so much agitation among teachers?
I wonder how often English teachers get "why should we
study Shakespeare?" ...I suspect their answer is that
people without passing knowledge of old Will are
ignorant... - GYanos
I think it is because there is a strong feeling that
since there are some uses for mathematics, the study
of mathematics needs to be justified in terms of its
usefulness... [but what about] history or music or
literature. Are teachers of those subjects providing
their students with job skills?
- Jack Roach
The Math Forum's gateway to these and other recommended
math and education discussions, with directions for
subscribing to mailing lists, can be found at:
CHECK OUT OUR WEB SITE:
The Math Forum http://forum.swarthmore.edu/
Ask Dr. Math http://forum.swarthmore.edu/dr.math/
Problem of the Week http://forum.swarthmore.edu/geopow/
Internet Resources http://forum.swarthmore.edu/~steve/
Join the Math Forum http://forum.swarthmore.edu/join.forum.html
SEND COMMENTS TO comments@forum.swarthmore.edu
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__ /\ /| |
\ \ / \ / \ /o\ / \ | \ / | / \ / \
The Math Forum ** 14 April 1997
You will find this newsletter, a FAQ,
and subscription directions archived at | {"url":"http://mathforum.org/kb/thread.jspa?threadID=485899","timestamp":"2014-04-20T06:09:09Z","content_type":null,"content_length":"19867","record_id":"<urn:uuid:5926a937-a8cb-45ce-b5ef-85c010dcdd74>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 22
- AI MAGAZINE , 1991
"... I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Over the last few years, this method of reasoning using probabilities has become
popular within the AI probability and uncertainty community. Indeed, it is probably fair to say that Bayesia ..."
Cited by 236 (2 self)
I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Over the last few years, this method of reasoning using probabilities has become popular
within the AI probability and uncertainty community. Indeed, it is probably fair to say that Bayesian networks are to a large segment of the AI-uncertainty community what resolution theorem proving
is to the AIlogic community. Nevertheless, despite what seems to be their obvious importance, the ideas and techniques have not spread much beyond the research community responsible for them. This is
probably because the ideas and techniques are not that easy to understand. I hope to rectify this situation by making Bayesian networks more accessible to the probabilistically unsophisticated.
- ARTIFICIAL INTELLIGENCE , 1993
"... We provide a method, based on the theory of Markov decision processes, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each
world state; the planner must find a policy (mapping from states to actions) that maximizes future reward ..."
Cited by 162 (19 self)
We provide a method, based on the theory of Markov decision processes, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each world
state; the planner must find a policy (mapping from states to actions) that maximizes future rewards. Standard goals of achievement, as well as goals of maintenance and prioritized combinations of
goals, can be specified in this way. An optimal policy can be found using existing methods, but these methods require time at best polynomial in the number of states in the domain, where the number
of states is exponential in the number of propositions (or state variables). By using information about the starting state, the reward function, and the transition probabilities of the domain, we
restrict the planner's attention to a set of world states that are likely to be encountered in satisfying the goal. Using this restricted set of states, the planner can generate more or less complete
- Artificial Intelligence , 1991
"... In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified meta-level control policies, but in the sense of
providing a basis for selecting and justifying computational actions. This research contributes to a devel ..."
Cited by 162 (10 self)
In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified meta-level control policies, but in the sense of
providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for
analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory
will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We
address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular
problem-solving systems, a...
- In Proceedings of the Eleventh National Conference on Artificial Intelligence , 1993
"... We provide a method, based on the theory of Markov decision problems, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each
world state; the planner must find a policy (mapping from states to actions) that maximizes future rewards. S ..."
Cited by 137 (10 self)
We provide a method, based on the theory of Markov decision problems, for efficient planning in stochastic domains. Goals are encoded as reward functions, expressing the desirability of each world
state; the planner must find a policy (mapping from states to actions) that maximizes future rewards. Standard goals of achievement, as well as goals of maintenance and prioritized combinations of
goals, can be specified in this way. An optimal policy can be found using existing methods, but these methods are at best polynomial in the number of states in the domain, where the number of states
is exponential in the number of propositions (or state variables) . By using information about the starting state, the reward function, and the transition probabilities of the domain, we can restrict
the planner's attention to a set of world states that are likely to be encountered in satisfying the goal. Furthermore, the planner can generate more or less complete plans depending on the time it
has avail...
- Computational Intelligence , 1994
"... The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of
probability, utility, and rational choice, coupled with practical limitations on information and resources, in ..."
Cited by 109 (4 self)
The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability,
utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems. 1 Introduction People make
judgments of rationality all the time, usually in criticizing someone else's thoughts or deeds as irrational, or in defending their own as rational. Artificial intelligence researchers construct
systems and theories to perform or describe rational thought and action, criticizing and defending these systems and theories in terms similar to but more formal than those of the man or woman on the
street. Judgments of human rationality commonly involve several different conceptions of rationality, including a logical conception used to judge thoughts, and an economic one used to judge actions
- Journal of Artificial Intelligence Research , 1995
"... Since its inception, artificial intelligence has relied upon a theoretical foundation centred around perfect rationality as the desired property of intelligent systems. We argue, as others have
done, that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a ..."
Cited by 79 (1 self)
Since its inception, artificial intelligence has relied upon a theoretical foundation centred around perfect rationality as the desired property of intelligent systems. We argue, as others have done,
that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a result, there has arisen a wide gap between theory and practice in AI, hindering progress in the
field. We propose instead a property called bounded optimality. Roughly speaking, an agent is bounded-optimal if its program is a solution to the constrained optimization problem presented by its
architecture and the task environment. We show how to construct agents with this property for a simple class of machine architectures in a broad class of real-time environments. We illustrate these
results using a simple model of an automated mail sorting facility. We also define a weaker property, asymptotic bounded optimality (ABO), that generalizes the notion of optimality in classical
complexity th...
- In Proceedings of the Thirteenth National Conference on Artificial Intelligence , 1996
"... This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering
to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic ..."
Cited by 78 (0 self)
This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a
search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and
exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of
which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the realworld problem of observation scheduling. These results show
that with the proper bias function, it can be easy to outperform greedy search. Introducing HBSS This paper presents a search technique, called Heuristic-Biased Stochastic Sampling (HBSS), that was
- ARTIFICIAL INTELLIGENCE , 1997
"... The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full width tree as deep as time allows, and then value the tree as
if the leaf evaluations were exact. This has been effective in many games because of the computational e ..."
Cited by 35 (0 self)
The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full width tree as deep as time allows, and then value the tree as if
the leaf evaluations were exact. This has been effective in many games because of the computational efficiency of the alpha-beta algorithm. Our approach is to form a Bayesian model of our
uncertainty. We adopt an evaluation function that returns a probability distribution estimating the probability of various errors in valuing each position. These estimates are obtained by training
from data. We thus use additional information at each leaf not available to the standard approach. We utilize this information in three ways: to evaluate which move is best after we are done
expanding, to allocate additional thinking time to moves where additional time is most relevant to game outcome, and, perhaps most importantly, to expand the tree along the most relevant lines. Our
measure of the relevan...
- Proceedings of the Fourth International Conference on Artificial Intelligence Planning Systems , 1998
"... The choice of an appropriate problem-solving method, from available methods, is a crucial skill for experts in many areas. We describe a technique for the automatic selection among methods,
which is based on a statistical analysis of their past performances. We formalize the statistical problem ..."
Cited by 23 (0 self)
The choice of an appropriate problem-solving method, from available methods, is a crucial skill for experts in many areas. We describe a technique for the automatic selection among methods, which is
based on a statistical analysis of their past performances. We formalize the statistical problem involved in selecting an efficient problem-solving method, derive a solution to this problem, and
describe a method-selection algorithm. The algorithm not only chooses among available methods, but also decides when to abandon the chosen method, if it proves to take too much time. We give
empirical results on the use of this technique in selecting among search engines in the PRODIGY planning system. 1 Introduction The choice of an appropriate problem-solving method is one of the main
themes of Polya's famous book How to Solve It (Polya 1957). Polya showed that the selection of an effective approach to a problem is a crucial skill for a mathematician. Psychologists have
accumulated m... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1953654","timestamp":"2014-04-18T08:51:11Z","content_type":null,"content_length":"39527","record_id":"<urn:uuid:50557457-bb67-483d-881b-ed440ea2be5e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimization of a multidimensional function using some of the algorithms described in:
The example in the GSL manual:
    f [x,y] = 10*(x-1)^2 + 20*(y-2)^2 + 30

    main = do
        let (s,p) = minimize NMSimplex2 1E-2 30 [1,1] f [5,7]
        print s
        print p
> main
0.000 512.500 1.130 6.500 5.000
1.000 290.625 1.409 5.250 4.000
2.000 290.625 1.409 5.250 4.000
3.000 252.500 1.409 5.500 1.000
22.000 30.001 0.013 0.992 1.997
23.000 30.001 0.008 0.992 1.997
The path to the solution can be graphically shown by means of:
Graphics.Plot.mplot $ drop 3 (toColumns p)
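For comparison (a Python sketch, not part of the hmatrix documentation), the same example can be reproduced with SciPy's derivative-free Nelder-Mead minimizer, starting from the same point [5,7]:

```python
from scipy.optimize import minimize

def f(p):
    x, y = p
    return 10 * (x - 1) ** 2 + 20 * (y - 2) ** 2 + 30

# Same objective and starting point as the Haskell example above.
res = minimize(f, [5.0, 7.0], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-4})
print(res.x)    # close to [1, 2]
print(res.fun)  # close to the minimum value 30
```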
Taken from the GSL manual:
The vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a quasi-Newton method which builds up an approximation to the second derivatives of the function f using the difference between
successive gradient vectors. By combining the first and second derivatives the algorithm is able to take Newton-type steps towards the function minimum, assuming quadratic behavior in that region.
The bfgs2 version of this minimizer is the most efficient version available, and is a faithful implementation of the line minimization scheme described in Fletcher's Practical Methods of
Optimization, Algorithms 2.6.2 and 2.6.4. It supersedes the original bfgs routine and requires substantially fewer function and gradient evaluations. The user-supplied tolerance tol corresponds to
the parameter sigma used by Fletcher. A value of 0.1 is recommended for typical use (larger values correspond to less accurate line searches).
The nmsimplex2 version is a new O(N) implementation of the earlier O(N^2) nmsimplex minimiser. It calculates the size of simplex as the rms distance of each vertex from the center rather than the
mean distance, which has the advantage of allowing a linear update. | {"url":"http://hackage.haskell.org/package/hmatrix-0.5.2.1/docs/Numeric-GSL-Minimization.html","timestamp":"2014-04-21T02:44:30Z","content_type":null,"content_length":"24327","record_id":"<urn:uuid:dd4e2b48-6678-4418-beb6-0bb57c87fe54>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Hi !
Yes, generally most mathematicians tend to agree that complex numbers cannot be compared by
"greater than" or "less than" relations (at least not in a useful, meaningful manner). But such
a relation can, I believe be defined, though perhaps not in terribly useful manner.
Here's a shot at it. I'll use Q for "theta" since theta isn't on the keyboard.
In r e^(iQ), consider r nonnegative and Q in the interval [0 degrees, 360 degrees).
So r is the distance from the origin and Q is the angle involved.
Given two complex numbers z and w we define
1) z is less than w if z is closer to the origin:
2) if z and w are the same distance from the origin then
a) z = w if their angles are the same.
b) z is less than w if z's angle is less than w's angle.
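The two-step comparison above can be sketched in code (a Python sketch; the helper names are mine, not the poster's):

```python
import cmath

def key(z):
    """Order first by distance from the origin, then by angle in [0, 2*pi)."""
    r = abs(z)
    theta = cmath.phase(z) % (2 * cmath.pi)  # phase is in (-pi, pi]; shift to [0, 2*pi)
    return (r, theta)

def less_than(z, w):
    return key(z) < key(w)

print(less_than(1 + 0j, 2 + 0j))  # True: closer to the origin
print(less_than(1j, -1 + 0j))     # True: same r = 1, angle pi/2 < pi
print(key(1j) == key(1j))         # True: equal iff same distance and same angle
```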
Of course, if we let r be negative and/or allow the angles to be any positive or negative angle, then
we weave a more tangled web. | {"url":"http://www.mathisfunforum.com/post.php?tid=18291&qid=236636","timestamp":"2014-04-17T21:41:13Z","content_type":null,"content_length":"19297","record_id":"<urn:uuid:b6305226-44ab-4acc-bdda-57dc5eb8fb9e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dávid Pál
• Gábor Bartók, Dávid Pál, Csaba Szepesvári, István Szita
Online Learning Lecture Notes
Summary: In spring 2011, Csaba taught an online learning course with occasional help from Gábor and István and me. As we were teaching, we were continuously writing lecture notes. The focus of
the course was no-regret non-stochastic learning (learning with expert advice, switching experts, follow the regularized leader algorithm, online linear regression), and brief explanation of
online-to-batch conversion and multi-armed bandits. Our goal was to develop the theory from first principles.
Bib info: lecture notes
Download: [PDF]
• Yasin Abbasi-Yadkori, Dávid Pál, Csaba Szepesvári
Online-to-Confidence-Set Conversions and Application to Sparse Stochastic Bandits
Summary: We show how to convert any online algorithm for linear prediction under quadratic loss (e.g. online least squares, online gradient, p-norm algorithm) can be converted into a confidence
set for the weight vector of the linear model under any sub-Gaussian noise. The lower the regret of the algorithm, the smaller the confidence set. As an application, we use these confidence sets
to construct online algorithms for the linear stochastic bandit problem. Using an online linear prediction algorithm for sparse models, we are able to construct an algorithm for the sparse linear
stochastic bandit problem, i.e., the problem where the underlying linear model is sparse.
Bib info: AISTATS 2012
Download: [PDF], [Slides], [Overview Talk]
• Yasin Abbasi-Yadkori, Dávid Pál, Csaba Szepesvári
Improved Algorithms for Linear Stochastic Bandits
Summary: We provide an upper bound for the tail probability of the norm of a vector-valued martingale. This bound immediately gives rise to a confidence set (ellipsoid) for online regularized
linear least-squares estimator. We apply these confidence sets to the stochastic multi-armed bandit problem and the stochastic linear bandit problem. We show that the regret of the modified UCB
algorithm is constant for any fixed confidence parameter! Furthermore, we improve the previously known bounds for the stochastic linear bandit problem by a poly-logarithmic factor.
Bib info: NIPS 2011
Download: [PDF], [arXiv]
• Gábor Bartók, Dávid Pál, Csaba Szepesvári
Minimax Regret of Finite Partial-Monitoring Games in Stochastic Environments
Summary: We continue with our ALT 2010 paper by classification of minimax regret of all partial games with finite number of actions and outcomes. However, we restrict ourselves to stochastic
environments, i.e. the environment choose outcomes i.i.d. from a fixed but unknown probability distribution. We show that regret of any game falls into one of the four categories: zero, T^1/2, T^
2/3 or T. We provide algorithms that achieve that regret within logarithmic factor. We believe that this result can be lifted to adversarial setting also; however that's left as a future work.
Bib info: Proceedings COLT 2011
Download: [PDF]
• Gábor Bartók, Dávid Pál, Csaba Szepesvári
Toward a Classification of Partial Monitoring Games
Summary: Partial monitoring games are online learning problems where a decision maker repeatedly makes a decision, receives a feedback and suffers a loss. Multi-armed bandits and dynamic pricing
are special cases. We give characterization of min-max regret of all finite partial monitoring games with two outcomes. We show that regret of any such game falls into one of the four categories:
zero, T^1/2, T^2/3 or T. Our characterization is nice and simple combinatorial/geometrical condition on the loss matrix.
Bib info: Proceedings of ALT 2010, Gábor received for this paper the best student paper award.
Download: [PDF], [arXiv]
• Dávid Pál, Barnabás Póczos, Csaba Szepesvári
Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs
Summary: We consider simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. sample drawn from an unknown, absolutely continuous
distribution over R^d. The estimators are calculated as the sum of p-th powers of the Euclidean lengths of the edges of the `generalized nearest-neighbor' graph of the sample and the empirical
copula of the sample respectively. Under mild conditions we prove the almost sure consistency of the estimators. In addition, we derive high probability error bounds assuming that the density
underlying the sample is Lipschitz continuous.
Bib info: Proceedings of NIPS 2010
Download: conference version: [PDF], extended version with proofs: [PDF], [arXiv], poster: [PDF]
• Shai Ben-David, Dávid Pál, Shai Shalev-Shwartz
Agnostic Online Learning
Summary: We generalize Littlestone's online learning model to the agnostic setting, where no hypothesis makes zero error. Littlestone defined a combinatorial parameter of the hypothesis class,
which we call the Littlestone dimension and which determines the worst-case number of prediction mistakes made by an online learning algorithm in the realizable setting. The point of our paper is that
the Littlestone dimension characterizes learnability in the agnostic case as well. Namely, we give upper and lower bounds on the regret in terms of the Littlestone dimension.
This is a similar story to what happened with Valiant's PAC model. The key quantity there is the Vapnik-Chervonenkis dimension. Valiant's PAC model is the ``realizable case''. Our work can be
paralleled to what Haussler and others did in 1992 when they generalized the PAC model to the agnostic setting and showed that the Vapnik-Chervonenkis dimension remains the key quantity characterizing
learnability there as well.
Bib info: COLT 2009
Download: [PDF], [slides], [slides sources]
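As background for the mistake-bound story, here is the classical halving-style weighted majority algorithm over a finite set of experts, sketched in Python. This is standard realizable-case machinery, not the agnostic algorithm of the paper, and all names are illustrative.

```python
def weighted_majority(expert_preds, labels):
    """Predict by weighted vote over binary experts, halving the weight of
    every expert that errs. Returns the learner's total mistake count."""
    weights = [1.0] * len(expert_preds)
    mistakes = 0
    for t, y in enumerate(labels):
        vote_for_one = sum(w for w, ex in zip(weights, expert_preds) if ex[t] == 1)
        prediction = 1 if vote_for_one >= sum(weights) / 2 else 0
        if prediction != y:
            mistakes += 1
        weights = [w / 2 if ex[t] != y else w
                   for w, ex in zip(weights, expert_preds)]
    return mistakes
```

When a perfect expert exists, the mistake count stays logarithmic in the number of experts instead of growing with the number of rounds.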
• PhD thesis.
Contributions to Unsupervised and Semi-Supervised Learning
This thesis is essentially a compilation of the two older papers on clustering stability and a part of the paper on semi-supervised learning. The formal gaps, especially in the first paper on
clustering stability, were filled in and the presentation was improved.
Download: [PDF], [sources]
• Tyler Lu, Dávid Pál, Martin Pál
Showing Relevant Ads via Lipschitz Context Multi-Armed Bandits
Summary: We study context multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a context
multi-armed bandit problem models a situation in which, in a sequence of independent trials, an online algorithm chooses an action based on a given context (side information) from a set of
possible actions so as to maximize the total payoff of the chosen actions. The payoff depends on both the action chosen and the context. In contrast, context-free multi-armed bandit problems,
studied previously, model situations where no side information is available and the payoff depends only on the action chosen.
For concreteness, we focus in this paper on an application to Internet search advertising, where the task is to display ads to a user of an Internet search engine based on his search query so as
to maximize the click-through rate of the ads displayed. We cast this problem as a context multi-armed bandit problem where queries and ads form metric spaces and the payoff function is Lipschitz
with respect to both metrics. We present an algorithm, give an upper bound on its regret, and show an (almost) matching lower bound on the regret of any algorithm.
Bib info: Proceedings of AISTATS 2010
Download: [PDF], [slides], [slides sources]
• Shai Ben-David, Tyler Lu, Dávid Pál, Miroslava Sotáková
Learning Low-Density Separators
Summary: We define a novel, basic, unsupervised learning problem - learning the lowest density homogeneous hyperplane separator of an unknown probability distribution. This task is relevant to
several problems in machine learning, such as semi-supervised learning and clustering stability. We investigate the question of existence of a universally consistent algorithm for this problem.
We propose two natural learning paradigms and prove that, on input unlabeled random samples generated by any member of a rich family of distributions, they are guaranteed to converge to the
optimal separator for that distribution. We complement this result by showing that no learning algorithm for our task can achieve uniform learning rates (that are independent of the data
generating distribution).
Bib info: Proceedings of AISTATS 2009
Download: [PDF]
• Shai Ben-David, Tyler Lu, Dávid Pál
Does Unlabeled Data Provably Help? Worst-case Analysis of the Sample Complexity of Semi-Supervised Learning
Summary: We mathematically study the potential benefits to the learner of using unlabeled data for classification prediction. We propose a simple model of semi-supervised learning (SSL) in
which the unlabeled data distribution is perfectly known to the learner. We compare this model with the standard PAC model (and its agnostic version) for supervised learning, in which the
unlabeled distribution is unknown to the learner. Can it be that there exists a supervised algorithm which, for any unlabeled distribution, needs in the worst case (over possible targets) at most
a constant factor more labeled samples than any SSL learner? It seems that the ERM algorithm is such a candidate. We prove, for some special cases, that this is indeed the case.
Bib info: Proceedings of COLT 2008
Note: See also Tyler's Master's thesis.
Download: [PDF] [sources]
• Gagan Aggarwal, S. Muthukrishnan, Dávid Pál, Martin Pál
General Auction Mechanism for Search Advertising
Summary: We propose a new auction mechanism for search advertising that generalizes both the Generalized Second Price auction (which is currently, in 2008, used by Google, Yahoo! and MSN) and the
famous Vickrey-Clarke-Groves mechanism adapted to search auctions. Our mechanism allows each bidder to specify a subset of slots in which (s)he is interested, and the value and the maximum
price for each slot. It allows the auctioneer (the search engine) to specify a reserve (minimum) price for every slot-bidder pair. Our auction computes the bidder-optimal stable matching. The
running time is O(nk^3), where n is the number of bidders and k is the number of slots. We also prove that our auction mechanism is truth-revealing, that is, the dominant strategy of each bidder
is not to lie.
Bib info: Proceedings of WWW 2009
Download: [PDF]
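For reference, the baseline being generalized can be sketched in a few lines: a toy Generalized Second Price allocation in which slots go to the highest bidders and each winner pays the next-highest bid. This toy ignores the per-bidder slot subsets, maximum prices, and reserve prices that the paper's mechanism supports; names are illustrative.

```python
def gsp_allocate(bids, num_slots):
    """Toy GSP: rank bidders by bid; winner of slot i pays the (i+1)-th bid.
    Returns a list of (bidder_index, price) pairs, one per filled slot."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    outcome = []
    for slot in range(min(num_slots, len(bids))):
        winner = order[slot]
        price = bids[order[slot + 1]] if slot + 1 < len(bids) else 0.0
        outcome.append((winner, price))
    return outcome
```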
• Shai Ben-David, Dávid Pál, Hans Ulrich Simon
Stability of K-means Clusterings
Summary: This is a follow-up to the previous paper. We consider the clustering algorithm A which minimizes the k-means cost exactly. (Such an algorithm is not realistic, since the optimization
problem is known to be NP-hard. Nevertheless, the classical Lloyd's heuristic, a.k.a. "the k-means algorithm", tries to do this.) We analyze how stable the clustering output by A is with respect
to a random draw of the sample points. Given a probability distribution, draw two i.i.d. samples of the same size. If with high probability, the clustering of the first sample does not differ
from the clustering of the second sample too much, we say that A is stable on the probability distribution. For discrete probability distributions we show that instability happens if and only if
the probability distribution has two or more clusterings with optimal k-means cost.
Bib info: Proceedings of COLT 2007
Download: [PDF] (extended version), [PDF] (short conference version)
• Shai Ben-David, Dávid Pál, Ulrike von Luxburg
A Sober look at Clustering Stability
Summary: We investigate how stable the clustering output by a clustering algorithm is with respect to a random draw of the sample points. Given a probability distribution, draw two i.i.d.
samples of the same size. If, with high probability, the clustering of the first sample does not differ from the clustering of the second sample too much, we say that the algorithm is stable on the
probability distribution. We show that for algorithms that optimize some cost function, stability depends on the uniqueness of the optimal solution of the optimization problem for the
"infinite" sample represented by the probability distribution.
Bib info: Proceedings of COLT 2006, Best Student Paper Award
Download: [PDF], [Slides in PDF], [Slides source files]
• Dávid Pál, Martin Škoviera
Colouring Cubic Graphs by Small Steiner Triple Systems
Summary: A Steiner triple system is a collection of 3-element subsets (called triples) of a finite set of points, such that every two points appear together in exactly one triple. Given a Steiner
triple system S, one can ask whether it is possible to color the edges of a cubic graph with points of S in such a way that the colors of the three edges at a vertex form a triple of S. We construct a
small Steiner triple system (of order 21) which colors all simple cubic graphs.
Note: This is the main result of my "Magister" Thesis at Comenius University under my advisor Martin Škoviera, 2004.
Bib info: Graphs and Combinatorics, 2007
Download: [PDF], [Postscript], [Source files]
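The defining property, every two points appearing together in exactly one triple, is mechanical to verify. The sketch below checks it for the Fano plane, the Steiner triple system of order 7 (the order-21 system from the paper is not reproduced here):

```python
from itertools import combinations
from collections import Counter

def is_steiner_triple_system(points, triples):
    """True iff every pair of distinct points lies in exactly one triple."""
    pair_counts = Counter()
    for triple in triples:
        for pair in combinations(sorted(triple), 2):
            pair_counts[pair] += 1
    return all(pair_counts[pair] == 1
               for pair in combinations(sorted(points), 2))

# The Fano plane: the Steiner triple system on 7 points.
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
```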
Yasin Abbasi-Yadkori
Gagan Aggarwal
Gábor Bartók
Shai Ben-David
Tyler Lu
Ulrike von Luxburg
S. Muthukrishnan
Martin Pál
Barnabás Póczos
Hans Ulrich Simon
Miroslava Sotáková
Martin Škoviera
Shai Shalev-Shwartz
Csaba Szepesvári | {"url":"http://david.palenica.com/","timestamp":"2014-04-18T10:37:19Z","content_type":null,"content_length":"22827","record_id":"<urn:uuid:617b3ffb-c186-485c-8d76-f2d47e71d56a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00398-ip-10-147-4-33.ec2.internal.warc.gz"} |
interesting new container algorithm -- worth the interest?
Are those complexities not dependent on another variable, such as the length of the input? E.g., if we insert strings into it, is the complexity not dependent on the length?
This seems a quality of implementation issue, but I'm not entirely sure I understand what you are asking.
But if that is the case, wouldn't it make his container always slower than the AVL tree?
No. A hash table implementation with even simplified amortized constant time operations could easily outperform an AVL tree if the use case implies rare mutation after an initial setup phase. It
could also be the case that the AVL tree would be faster in the very same scenario if the comparison of objects is cheap while hashing those objects would be expensive or impossible to plan (user
generated strings of widely varying length and content).
It could also be the case that a B*tree would be the fastest option. It really depends on what you are doing and not on what academic "Big O" operations a data structure may have.
If find is O(n), I don't see how remove can be O(1)
He said both operations are related to the expectations for hash tables.
It could be "remove given a pointer to the element you want removed".
That doesn't really change the nature of the statement.
Removing an element of a singly linked list even when given an iterator is still an "O(n)" operation.
The relationship is going to depend on the data structure and the underlying implementation.
I've learned from various places that hash tables have O(1) finding time, and this is a common misconception apparently. Hash tables are O(n), which means my container is O(n).
Cyberfish is right, once an element is found in my container, removing it is indeed O(1), it's one of the weird properties of my container but this is true. If you consider the removal operation
to include finding the element first, then it's O(n), but there's other ways to get an iterator than by searching for it.
Hash tables are only O(n) when they need to find an element due to collisions at some subscript or otherwise. With perfect hashing, lookup in O(1) time is assured, but in general you should
choose a hashing algorithm that gives close to O(1) results anyway, since most published algorithms pass the avalanche test.
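That collision point can be made concrete with a toy chained hash table that counts probes: with a hash that spreads keys, a successful find touches one entry; with a degenerate hash sending everything to one bucket, the same find degrades to a linear scan. (The class and names below are illustrative, not anyone's posted container.)

```python
class ToyChainedHashTable:
    """Minimal chained hash table that reports how many stored entries a
    lookup examines, illustrating O(1) average vs O(n) worst-case find."""

    def __init__(self, num_buckets, hash_fn):
        self.buckets = [[] for _ in range(num_buckets)]
        self.hash_fn = hash_fn
        self.num_buckets = num_buckets

    def insert(self, key):
        self.buckets[self.hash_fn(key) % self.num_buckets].append(key)

    def probes_to_find(self, key):
        bucket = self.buckets[self.hash_fn(key) % self.num_buckets]
        for position, stored in enumerate(bucket):
            if stored == key:
                return position + 1
        return len(bucket)  # unsuccessful search scanned the whole chain
```

With identity hashing into 128 buckets, each of 100 inserted integers is found in one probe; with a constant hash, finding the last-inserted of those keys takes 100 probes.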
The weird thing about what you've talked about is that you haven't really described the container at all and you only state that it has so many properties. That's great.
Keep in mind your algorithm and container need serious peer review to validate the claims you are making about it. This would be a 'kind' place to get such a review. The members who have answered
you thus far in this thread are very intelligent programmers and computer scientists. You would do well to listen to them and allow them to review your algorithm.
My thinking is you won't allow any of them to review it b/c you know deep down they might find fault with it. I wouldn't worry about that b/c it would be better to find fault with it or perhaps
validate it now than later. If you are so sure that it is a great container then you should not be concerned about allowing people to see it. If you are concerned with people stealing it then put
a copyright on the code and/or use an open-source type license and then allow people to review it. Honestly no one here is going to steal your code or your idea. I'm a firm believer that there
are not that many 'original' algorithms or containers left to be built so I'm anxious to see you disprove that.
That's expected.
Being that the case, that's the one that matters on a worst case scenario, which is the whole point of the Big O notation. Otherwise, if you feel like pointing out there's best case scenario, you
need to separate deleting operations into the two realities.
Since we know now search is O(n), I dare not ask
Exactly. How about that guy that went public finding a time traveller in some old movie (charlie chaplin or so, heck I'm too young for this), because a person would be speaking on a mobile phone.
It was pointed out quite quickly on slashdot that the device was in fact (likely) a hearing aid of that time, not a mobile phone. Boy, must that guy have felt stupid.
Anyways, phantomotap, yes I understand the container might still be useful if we're talking strictly about worst case. We need average case to be able to tell anything about this container, but
it sounds like he simply reinvented hashtables. In that case, his original comparison with the AVL tree is completely useless. It's like: "Look, I'm way faster than this specific container in
worst case", only to move onto "Okay, that container is better in worst case". I am not going to compare it with other containers than he did in other cases (average, for instance) just because
his original comparison was wrong.
About the "Are those complexities not dependent on another variable, such as the length of the input. Eg. if we insert strings into it, is the complexity not dependent on the length?" you didn't
understand: I was thinking tries, where the complexity is independent of the number of elements in the container. I thought he may have used a similar idea, but missed out on the length, even
though it was important, in the complexities.
So, Verdagon, you've made a few wrong claims already. If you want to be taken serious about this still you'll have to show some proof. At least make a new comparison; show us the average case of
your container vs. hash tables.
Mario F., O(1) removal is not always the case in containers when the iterator is known: trees might have to be re-balanced, all the elements in an array may have to be shifted left one field, and so on.
It is the accepted result for hash table lookup (you certainly agree we cannot contemplate those factors when describing our algorithm in Big-O). But honestly I don't know anymore what type of
container the OP is talking about -- If said container exists at all.
I'm a firm believer that there are not that many 'original' algorithms or containers left to be built so I'm anxious to see you disprove that.
That's me, but you never know when someone might manage something great even if it is just a variation on a theme.
I was thinking tries, where the complexity is independent of the number of elements in the container.
I see. When he said that about his container being related to hash tables I dismissed everything other than a hash table with a unique schedule using a B+Tree for the bins.
Well, dismissing types of containers doesn't go too well in this thread :P. To me, it sounds like a hoax (or, more likely, a mistake).
I agree with Mario ;-).
Bring on the container. Until that time, I consider it non-existant. | {"url":"http://cboard.cprogramming.com/tech-board/132594-interesting-new-container-algorithm-worth-interest-2-print.html","timestamp":"2014-04-23T15:10:40Z","content_type":null,"content_length":"24248","record_id":"<urn:uuid:4b8f90c3-e09f-49c9-9c5f-7bb869f1eb38>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Special Right Triangles Ppt Presentation
Slide 1:
Special Right Triangles!
Slide 2:
Justify and apply properties of 45°-45°-90° triangles. Justify and apply properties of 30°- 60°- 90° triangles. Objectives
Slide 3:
A diagonal of a square divides it into two congruent isosceles right triangles. Since the base angles of an isosceles triangle are congruent, the measure of each acute angle is 45°. So another name
for an isosceles right triangle is a 45°-45°-90° triangle. A 45°-45°-90° triangle is one type of special right triangle. You can use the Pythagorean Theorem to find a relationship among the side
lengths of a 45°-45°-90° triangle.
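The relationship referred to above follows directly from the Pythagorean Theorem: with two legs of length x, the hypotenuse is sqrt(x^2 + x^2) = x*sqrt(2). A quick numeric check (illustrative sketch):

```python
import math

def hypotenuse_45_45_90(leg):
    """Hypotenuse of a 45-45-90 triangle, via the Pythagorean Theorem."""
    return math.sqrt(leg ** 2 + leg ** 2)

# A leg of 8 gives a hypotenuse of 8*sqrt(2), matching the
# leg-times-sqrt(2) rule from the slide.
```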
Slide 5:
Example 1A: Finding Side Lengths in a 45°- 45º- 90º Triangle Find the value of x. Give your answer in simplest radical form. By the Triangle Sum Theorem, the measure of the third angle in the
triangle is 45°. So it is a 45°-45°-90° triangle with a leg length of 8.
Slide 6:
Example 1B: Finding Side Lengths in a 45º- 45º- 90º Triangle Find the value of x. Give your answer in simplest radical form. The triangle is an isosceles right triangle, which is a 45°-45°-90°
triangle. The length of the hypotenuse is 5. Rationalize the denominator.
Slide 7:
Check It Out! Example 1a Find the value of x. Give your answer in simplest radical form. x = 20 Simplify.
Slide 8:
Check It Out! Example 1b Find the value of x. Give your answer in simplest radical form. The triangle is an isosceles right triangle, which is a 45°-45°-90° triangle. The length of the hypotenuse is
16. Rationalize the denominator.
Slide 9:
Example 2: Craft Application Jana is cutting a square of material for a tablecloth. The table’s diagonal is 36 inches. She wants the diagonal of the tablecloth to be an extra 10 inches so it will
hang over the edges of the table. What size square should Jana cut to make the tablecloth? Round to the nearest inch. Jana needs a 45°-45°-90° triangle with a hypotenuse of 36 + 10 = 46 inches.
Slide 10:
Check It Out! Example 2 What if...? Tessa’s other dog is wearing a square bandana with a side length of 42 cm. What would you expect the circumference of the other dog’s neck to be? Round to the
nearest centimeter. Tessa needs a 45°-45°-90° triangle with a hypotenuse of 42 cm.
Slide 11:
A 30°-60°-90° triangle is another special right triangle. You can use an equilateral triangle to find a relationship between its side lengths.
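Halving an equilateral triangle of side 2x gives a 30-60-90 triangle with shorter leg x, hypotenuse 2x, and longer leg x*sqrt(3). An illustrative numeric check:

```python
import math

def sides_30_60_90(shorter_leg):
    """(shorter leg, longer leg, hypotenuse) of a 30-60-90 triangle."""
    return shorter_leg, shorter_leg * math.sqrt(3), 2 * shorter_leg
```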
Slide 12:
Example 3A: Finding Side Lengths in a 30º-60º-90º Triangle Find the values of x and y. Give your answers in simplest radical form. Hypotenuse = 2(shorter leg) 22 = 2x Divide both sides by 2. 11 = x
Substitute 11 for x.
Slide 13:
Example 3B: Finding Side Lengths in a 30º-60º-90º Triangle Find the values of x and y. Give your answers in simplest radical form. Rationalize the denominator. Hypotenuse = 2(shorter leg). Simplify.
y = 2x
Slide 14:
Check It Out! Example 3a Find the values of x and y. Give your answers in simplest radical form. Hypotenuse = 2(shorter leg) Divide both sides by 2. y = 27
Slide 15:
Check It Out! Example 3b Find the values of x and y. Give your answers in simplest radical form. Simplify. y = 2(5) y = 10
Slide 16:
Check It Out! Example 3c Find the values of x and y. Give your answers in simplest radical form. Hypotenuse = 2(shorter leg) Divide both sides by 2. Substitute 12 for x. 24 = 2x 12 = x
Slide 17:
Check It Out! Example 3d Find the values of x and y. Give your answers in simplest radical form. Rationalize the denominator. Hypotenuse = 2(shorter leg) x = 2y Simplify.
Slide 18:
Example 4: Using the 30º-60º-90º Triangle Theorem An ornamental pin is in the shape of an equilateral triangle. The length of each side is 6 centimeters. Josh will attach the fastener to the back
along AB. Will the fastener fit if it is 4 centimeters long? Step 1 The equilateral triangle is divided into two 30°-60°-90° triangles. The height of the triangle is the length of the longer leg.
Slide 19:
Example 4 Continued Step 2 Find the length x of the shorter leg. Step 3 Find the length h of the longer leg. The pin is approximately 5.2 centimeters high. So the fastener will fit. Hypotenuse = 2
(shorter leg) 6 = 2x 3 = x Divide both sides by 2.
Slide 20:
Check It Out! Example 4 What if…? A manufacturer wants to make a larger clock with a height of 30 centimeters. What is the length of each side of the frame? Round to the nearest tenth. Step 1 The
equilateral triangle is divided into two 30º-60º-90º triangles. The height of the triangle is the length of the longer leg.
Slide 21:
Check It Out! Example 4 Continued Step 2 Find the length x of the shorter leg. Each side is approximately 34.6 cm. Step 3 Find the length y of the longer leg. Rationalize the denominator. Hypotenuse
= 2(shorter leg) y = 2x Simplify. | {"url":"http://www.authorstream.com/Presentation/bigerblue2002-574408-special-right-triangles/","timestamp":"2014-04-19T12:41:24Z","content_type":null,"content_length":"132720","record_id":"<urn:uuid:afe35ac1-a8e2-4ed7-9f5b-6a872739d6a7>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: How to combine an If statement with Statistics
Replies: 0
William Duhe (From: New Orleans; Registered: 11/12/12; Posts: 11)
Posted: Jul 17, 2013 1:46 AM
Below is code which attempts to change the sign (+/-) of S with an if statement. S1 is shown and S2 is the attempt to include the if statement.
You can see from the information yielded from S1 that the Gaussian fit is off by roughly a factor of 2 when it should in theory match. This is because it is overestimating positive events
due to the lack of the sign change. The sign change should change with the quantity I have assigned the label "Signal". When the Signal is positive, S2 should be positive, and when the Signal
is negative, S2 should become negative.
off = 140; (*Expected Number of Off Counts*)
\[Alpha] = .3; (*Ratio of On/Off Counts*)
Non = RandomVariate[PoissonDistribution[\[Alpha]*off], 100000];
Noff = RandomVariate[PoissonDistribution[off], 100000];
(*Normal Distribution*)
h2 = Plot[
Table[PDF[NormalDistribution[0, \[Sigma]], x], {\[Sigma],
1}], {x, -6, 6}, PlotStyle -> Red];
S[off_, \[Alpha]_] = Non - \[Alpha]*Noff;
hist = Histogram[S[off, \[Alpha]], Automatic, "ProbabilityDensity",
PlotLabel -> "Signal"]
(*Formula WITHOUT IF STATEMENT*)
S1[off_, \[Alpha]_] = .5 Sqrt[
2] (Non*Log[(1 + \[Alpha])/\[Alpha] (Non/(Non + Noff))] +
Noff*Log[(1 + \[Alpha]) (Noff/(Non + Noff))])^(1/2);
ListPlot[S1[off, \[Alpha]], PlotLabel -> "Formula 17",
PlotRange -> {{0, 100000}, {0, 8}}]
hist1 = Histogram[S1[off, \[Alpha]], "Log", "ProbabilityDensity",
PlotLabel -> "Formula 17 LOG"]
hist11 = Histogram[S1[off, \[Alpha]], Automatic, "ProbabilityDensity",
PlotLabel -> "Formula 17"]
Show[hist11, h2]
ProbabilityScalePlot[S1[off, \[Alpha]], "Normal",
PlotLabel -> "Formula 17"]
(*Formula WITH IF STATEMENT*)
S2[off_, \[Alpha]_] =
If[Non - \[Alpha]*Noff > 0,
Sqrt[2] (Non*Log[(1 + \[Alpha])/\[Alpha] (Non/(Non + Noff))] +
Noff*Log[(1 + \[Alpha]) (Noff/(Non + Noff))])^(
1/2), -Sqrt[
2] (Non*Log[(1 + \[Alpha])/\[Alpha] (Non/(Non + Noff))] +
Noff*Log[(1 + \[Alpha]) (Noff/(Non + Noff))])^(1/2)];
ListPlot[S2[off, \[Alpha]], PlotLabel -> "Formula 17"]
hist2 = Histogram[S2[off, \[Alpha]], "Log", "ProbabilityDensity",
PlotLabel -> "Formula 17 LOG"]
hist22 = Histogram[S2[off, \[Alpha]], Automatic, "ProbabilityDensity",
PlotLabel -> "Formula 17"]
Show[hist22, h2]
ProbabilityScalePlot[S2[off, \[Alpha]], "Normal",
PlotLabel -> "Formula 17"] | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2582334","timestamp":"2014-04-21T02:11:02Z","content_type":null,"content_length":"16178","record_id":"<urn:uuid:e8e27eb1-5d7e-4e14-acbe-2cbb5aed9019>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
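The sign flip the post asks about hinges on the fact that Mathematica's If does not thread over the lists Non and Noff. One way to express the intended logic, transcribed into plain Python for illustration (using the Sqrt[2] normalization from S2; the analogous Mathematica fix would be to Map a scalar If, or use Sign, over the samples; all names here are illustrative):

```python
import math

def signed_significance(non, noff, alpha):
    """Scalar version of the post's significance formula, with its sign
    taken from the excess non - alpha*noff. Apply it element-wise to the
    sampled counts rather than comparing whole lists inside one If."""
    magnitude = math.sqrt(2.0) * math.sqrt(
        non * math.log((1 + alpha) / alpha * non / (non + noff))
        + noff * math.log((1 + alpha) * noff / (non + noff))
    )
    return magnitude if non - alpha * noff > 0 else -magnitude
```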
problem on finding max value (URGENT)
May 28th 2009, 04:31 AM
problem on finding max value (URGENT)
I was wondering what would be the way to answer this question. I know that trial error is one way but i also know that there is a differential calculus like method. But i am not sure about how to
do it. Could some one pleaes help me!!!!!
"Find the length of the cuboid which has a maximum permissible length of 1.070m and also has the maximum possible volume"
NOTE: must ensure that the sum of the maximum length and the corresponding circumference (circumference of the plane perpendicular to the length) does not exceed 1.070 m.
All help is appreciated.
May 28th 2009, 12:15 PM
I was wondering what would be the way to answer this question. I know that trial error is one way but i also know that there is a differential calculus like method. But i am not sure about how to
do it. Could some one pleaes help me!!!!!
"Find the length of the cuboid which has a maximum permissible length of 1.070m and also has the maximum possible volume"
NOTE: must ensure that the sum of the maximum length and the corresponding circumference (circumference of the plane perpendicular to the length) does not exceed 1.070 m.
All help is appreciated.
So you have a rectangular solid with one side of length x and two of length y and must have x+ 4y= 1.070? Or sides of length x, y, z with x+ 2y+ 2z= 1.070?
The volume is $xy^2$ in the first case, xyz in the second.
In the first case, x= 1.070- 4y so $xy^2= 1.070y^2- 4y^3$. Differentiating that, $2.140y- 12y^2= y(2.140- 12y)= 0$ which gives either y= 0 or y= 2.140/12= .1783. y= 0 gives a package of volume 0, which is obviously not the maximum.
obviously the maximum. So y= .1783 and then x= 1.070- 4(.1783)= 1.070- 0.7132= 0.3568. That makes the maximum volume $(.3568)(.1783)^2= 0.011342989552.$
I don't think there is enough data to do the other case so I hope this was what you meant! Are you mailing a package? | {"url":"http://mathhelpforum.com/calculus/90812-problem-finding-max-value-urgent-print.html","timestamp":"2014-04-20T11:47:49Z","content_type":null,"content_length":"6307","record_id":"<urn:uuid:d922d0c2-1ea5-4d58-bf45-b33cd237b95a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
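The closed-form answer above is easy to sanity-check numerically. Assuming the x + 4y = 1.070 interpretation, scan V(y) = (1.070 - 4y)y^2 over the feasible range (illustrative sketch):

```python
def volume(y, total=1.070):
    """Volume of the cuboid with square cross-section: x = total - 4y."""
    return (total - 4 * y) * y * y

# Scan 0 < y < total/4 on a fine grid and take the best candidate.
candidates = [i / 100000.0 for i in range(1, 26750)]
best_y = max(candidates, key=volume)
# best_y lands near 2.140/12 = 0.1783, matching the calculus answer.
```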
Guohong Cao, Mukesh Singhal, "On Coordinated Checkpointing in Distributed Systems," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 12, pp. 1213-1225, December, 1998.
Abstract—Coordinated checkpointing simplifies failure recovery and eliminates domino effects in case of failures by preserving a consistent global checkpoint on stable storage. However, the approach
suffers from high overhead associated with the checkpointing process. Two approaches are used to reduce the overhead: First is to minimize the number of synchronization messages and the number of
checkpoints, the other is to make the checkpointing process nonblocking. These two approaches were orthogonal in previous years until the Prakash-Singhal algorithm [18] combined them. In other words,
the Prakash-Singhal algorithm forces only a minimum number of processes to take checkpoints and it does not block the underlying computation. However, we found two problems in this algorithm. In this
paper, we identify these problems and prove a more general result: There does not exist a nonblocking algorithm that forces only a minimum number of processes to take their checkpoints. Based on this
general result, we propose an efficient algorithm that neither forces all processes to take checkpoints nor blocks the underlying computation during checkpointing. Also, we point out future research
directions in designing coordinated checkpointing algorithms for distributed computing systems.
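The consistency notion underlying all of this can be illustrated with a toy check (illustrative code, not the paper's algorithm): a set of local checkpoints forms a consistent global checkpoint only if no message was sent after the sender's checkpoint yet received before the receiver's checkpoint (an "orphan" message).

```python
def is_consistent_cut(checkpoint_time, messages):
    """checkpoint_time maps process id -> local time of its checkpoint.
    messages is an iterable of (sender, send_time, receiver, recv_time).
    The cut is consistent iff no receive is recorded whose matching
    send happened after the sender's checkpoint."""
    return all(
        not (send_t > checkpoint_time[sender]
             and recv_t <= checkpoint_time[receiver])
        for sender, send_t, receiver, recv_t in messages
    )
```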
[1] A. Acharya and B.R. Badrinath, "Checkpointing Distributed Applications on Mobil Computers," Proc. Third Int'l Conf. Parallel and Distributed Information Systems, Sept. 1994.
[2] G. Barigazzi and L. Strigini, "Application-Transparent Setting of Recovery Points," Digest of Papers, Proc. 13th Fault Tolerant Computing Symp. (FTCS-13), pp. 48-55, 1983.
[3] B. Bhargava, S.R. Lian, and P.J. Leu, "Experimental Evaluation of Concurrent Checkpointing and Rollback-Recovery Algorithms," Proc. Int'l Conf. Data Eng., pp. 182-189, 1990.
[4] K.M. Chandy and L. Lamport, "Distributed Snapshots: Determining Global States of Distributed Systems," ACM Trans. Computer Systems, Feb. 1985.
[5] F. Cristian and F. Jahanian, "A Timestamp-Based Checkpointing Protocol for Long-Lived Distributed Computations," Proc. IEEE Symp. Reliable Distributed Systems, pp. 12-20, 1991.
[6] Y. Deng and E.K. Park, "Checkpointing and Rollback-Recovery Algorithms in Distributed Systems," J. Systems and Software, pp. 59-71, Apr. 1994.
[7] E.N. Elnozahy, D.B. Johnson, and W. Zwaenepoel, "The Performance of Consistent Checkpointing," Proc. 11th Symp. Reliable Distributed Systems, pp. 86-95, Oct. 1992.
[8] S.T. Huang, "Detecting Termination of Distributed Computations by External Agents," Proc. Ninth Int'l Conf. Distributed Computing Systems, pp. 79-84, 1989.
[9] J.L. Kim and T. Park, "An Efficient Protocol For Checkpointing Recovery in Distributed Systems," IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 8, pp. 955-960, Aug. 1993.
[10] R. Koo and S. Toueg, "Checkpointing and Rollback-Recovery for Distributed Systems," IEEE Trans. Software Eng., vol. 13, no. 1, pp. 23-31, Jan. 1987.
[11] T.H. Lai and T.H. Yang, "On Distributed Snapshots," Information Processing Letters, pp. 153-158, May 1987.
[12] L. Lamport, "Time, clocks and the ordering of events in a distributed system," Comm. ACM, vol. 21, no. 7, pp. 558-565, July 1978.
[13] P.Y. Leu and B. Bhargava, "Concurrent Robust Checkpointing and Recovery in Distributed Systems," Proc. Fourth IEEE Int'l Conf. Data Eng., pp. 154-163, 1988.
[14] D. Manivannan, R. Netzer, and M. Singhal, "Finding Consistent Global Checkpoints in a Distributed Computation," IEEE Trans. Parallel and Distributed Systems, vol. 8, no. 6, pp. 623-627, June 1997.
[15] G. Muller, M. Hue, and N. Peyrouz, "Performance of Consistent Checkpointing in a Modular Operating System: Results of the FTM Experiment," Lecture Notes in Computer Science: Proc. First European
Conf. Dependable Computing (EDCC-1), pp. 491-508, Oct. 1994.
[16] R.H.B. Netzer and J. Xu, "Necessary and Sufficient Conditions for Consistent Global Snapshots," IEEE Trans. Parallel and Distributed System, vol. 6, no. 2, pp. 165-169, Feb. 1995.
[17] R. Prakash and M. Singhal, "Maximal Global Snapshot with Concurrent Initiators," Proc. Sixth IEEE Symp. Parallel and Distributed Processing, pp. 344-351, Oct. 1994.
[18] R. Prakash and M. Singhal, "Low-Cost Checkpointing and Failure Recovery in Mobile Computing Systems," IEEE Trans. Parallel and Distributed System, vol. 7, no. 10, pp. 1,035-1,048, Oct. 1996.
[19] P. Ramanathan and K.G. Shin, "Use of Common Time Base for Checkpointing and Rollback Recovery in a Distributed System," IEEE Trans. Software Eng., vol. 19, no. 6, pp. 571-583, June 1993.
[20] L.M. Silva and J.G. Silva, "Global Checkpointing for Distributed Programs," Proc. 11th Symp. Reliable Distributed Systems, pp. 155-162, Oct. 1992.
[21] M. Spezialetti and P. Kearns, "Efficient Distributed Snapshots," Proc. Sixth Int'l Conf. Distributed Computing Systems, pp. 382-388, 1986.
[22] R.E. Strom and S.A. Yemini, "Optimistic Recovery in Distributed Systems," ACM Trans. Computer Systems, vol. 3, no. 3, pp. 204-226, Aug. 1985.
[23] Y. Wang, "Maximum and Minimum Consistent Global Checkpoints and Their Application," Proc. 14th IEEE Symp. Reliable Distributed Systems, pp. 86-95, Oct. 1995.
[24] Y. Wang, "Consistent Global Checkpoints that Contain a Given Set of Local Checkpoints," IEEE Trans. Computers, vol. 46, no. 4, pp. 456-468, Apr. 1997.
[25] Z. Wojcik and B.E. Wojcik, "Fault Tolerant Distributed Computing Using Atomic Send Receive Checkpoints," Proc. Second IEEE Symp. Parallel and Distributed Processing, pp. 215-222, 1990.
Index Terms:
Distributed system, coordinated checkpointing, causal dependence, nonblocking, consistent checkpoints.
Guohong Cao, Mukesh Singhal, "On Coordinated Checkpointing in Distributed Systems," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 12, pp. 1213-1225, Dec. 1998, doi:10.1109/
| {"url":"http://www.computer.org/csdl/trans/td/1998/12/l1213-abs.html","timestamp":"2014-04-21T15:01:12Z","content_type":null,"content_length":"59178","record_id":"<urn:uuid:d9eaeb9b-fd87-4274-acbf-5d3edf2782de>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Commutative Property Around Us
Date: 11/24/98 at 15:28:25
From: Tabitha Leber
Subject: The commutative property around us
Our teacher handed all of us a question sheet today before we left
class. I am going to a new school this year and have never heard of the
commutative property. I can't find it in the math book. I can't find
it on the computer. I don't know where to look for it. My parents have
never heard of it. Here are a few examples of what I'm supposed to figure out:
1. To put on your coat and to pick up your boots
2. To wash your clothes and to dry them
3. To put on your left shoe and to put on your right shoe
4. To hang up the phone and to say goodbye
Date: 12/02/98 at 15:27:48
From: Doctor Gail
Subject: Re: The commutative property around us
Dear Tabitha,
I must tell you first that I love your teacher and I think it is
important for you to know why. What your teacher is really trying to
get you to do is to connect the meaning of the commutative property to
the world around you.
So let me help. The commutative property of addition says that changing
the order of addition of two numbers does not change the result. For
example, 2+3 = 3+2. For multiplication, 4*5 = 5*4.
There is no commutative property for subtraction, since it doesn't make
sense to think that 10-5 = 5-10. There's a difference between having
$10 and spending $5, and having $5 and trying to spend $10, so
subtraction is not commutative.
Now back to the questions from your teacher - but I'm going to present
one of my own. Consider putting on toothpaste and brushing your teeth.
Are the results the same if you put on toothpaste, then brush, as
compared to brush and THEN put on toothpaste? I think one version
will make the dentist sad!
Let's consider someone who likes sugar and cream in her coffee. Does it
make any difference if the sugar goes in followed by the cream, or if
you put the cream in first followed by the sugar? I think the results
are the same to the coffee drinker, so I think the sugar and cream
example is commutative.
I hope this helps! Write back if you have more questions.
- Doctor Gail, The Math Forum
Date: 03/05/2003 at 11:12:06
From: TimeForLime
Subject: A polite correction
Preparing coffee is NOT necessarily COMMUTATIVE. A New Yorker, say,
wouldn't understand your example at all.
Purchased coffee, say "commuter coffee," sold one cup at a time in
containers "to go," is often only marginally warm enough. If you add
cream first and then sugar, the sugar doesn't always dissolve
thoroughly, or at least as quickly. You HAVE to dissolve the sugar
FIRST. So it isn't COMMUTATIVE in the purest sense.
If you have always obtained hot, hot coffee from an urn inside a warm
faculty building or teaching facility, you might not be aware of this.
Thanks for listening.
Date: 03/05/2003 at 13:15:29
From: Doctor Peterson
Subject: Re: A polite correction
Hi, TimeForLime.
I suppose a commuter ought to have the last word on commutativity! ;-)
Actually there is a serious point to be made here: unlike math,
anything we say about the real world is likely to be false under some
circumstances, because we never know all there is to know. In math,
we can make absolute statements because we are defining the complete
circumstances - we know that numbers do not behave differently when
they are cold, because we define them to be independent of
temperature. Anything we say about coffee is conditional on what
coffee we are talking about.
But if we say that the coffee is hot, maybe the illustration can still work.
There is, in fact, a similar situation with numbers: in a computer,
addition might not be commutative, because the numbers might be
stored in different-sized variables. If you add a large number to a
number stored in a small space, there might be an overflow, which
would not happen if you added the small number to the large one.
Again, what's happening is that mathematical numbers do not
accurately model the computer's variables.
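A concrete illustration of this point (my sketch, not part of the original exchange): with ordinary IEEE-754 doubles, addition is commutative, but it is not associative — a close cousin of the mixed-size overflow effect described above:

```python
a, b, c = 1e16, 1.0, 1.0
assert a + b == b + a   # float addition IS commutative (for non-NaN values)
left = (a + b) + c      # the 1.0 is rounded away: 1e16 + 1.0 rounds back to 1e16
right = a + (b + c)     # the 2.0 survives: 1e16 + 2.0 is exactly representable
print(left == right)    # False -- grouping changed the answer
```

Here the mathematical numbers fail to model the machine's numbers not through overflow but through rounding, yet the moral is the same.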
Thanks for the laugh, and the chance to think!
- Doctor Peterson, The Math Forum
Date: 03/10/2003 at 05:11:48
From: TimeForLime
Subject: Thank you (A polite correction)
Thank you for taking my comment seriously. Yes, the business of taking
"math" numbers in different order used to happen on the slide rule,
before the computer.
Your point that adding, say "single precision" to "double precision" -
as an example of "numbers stored in different sized variables" - is
not necessarily commutative is well taken.
I'm satisfied. Case closed. I enjoy all your examples. I just think
an example like the toothpaste tube is SO unequivocal that it's probably
not necessary to tempt bright students (I'm not, and wasn't then) with
marginal examples like coffee.
I'm sure your genius students could even find fault with the
toothpaste tube but they might have to position themselves near a
singularity to pull it off.
Now we've both had a laugh. Bye. | {"url":"http://mathforum.org/library/drmath/view/58797.html","timestamp":"2014-04-19T17:37:05Z","content_type":null,"content_length":"10421","record_id":"<urn:uuid:d0a85db1-6d64-4e49-8825-dbee2de5238d>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Example of a CW complex not homeomorphic to the realization of a simplicial set?
I've often heard that we can give examples of CW complexes that aren't homeomorphic to the realization of any simplicial set (although I haven't heard that there exist Kan complexes that aren't
isomorphic to the total singular complex of a CGWH space. Are there?) Would someone mind providing an example of one (and an example for the opposite statement as well, if it is true)?
at.algebraic-topology homotopy-theory simplicial-stuff model-categories
Heh, I just realized that we could call the "opposite statement" the "adjoint statement". – Harry Gindi Aug 5 '10 at 20:13
2 Answers
The mapping cylinder of a really messy continuous map $I\to I$.
The nerve of the category in which there are two objects and each Hom set is a singleton.
What do you mean by the nerve in the second example? I thought the nerve was a simplicial set by definition, so it can't possibly provide an example of a CW-complex that
doesn't come from a simplicial set. – Omar Antolín-Camarena Jul 16 '13 at 13:43
The geometric realization of a simplicial set is always triangulable. See Corollary 4.6.12 in Cellular Structures in Topology by Fritsch and Piccinini. They also give an explicit example
(in section 3.4) of a non-triangulable CW-complex (which uses, I think, essentially the same idea as Tom Goodwillie's suggestion). This paper contains another example and shows, on the
other hand, that every CW-complex with cells in at most two dimensions is triangulable.
| {"url":"http://mathoverflow.net/questions/34674/example-of-a-cw-complex-not-homeomorphic-to-the-realization-of-a-simplicial-set?sort=newest","timestamp":"2014-04-19T04:33:07Z","content_type":null,"content_length":"57825","record_id":"<urn:uuid:f1696661-5994-4e84-ada9-fdf2994d146b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
Full Text of Bill No. 181
SB 181--
SENATE BILL No. 181
By Committee on Education
AN ACT concerning school district finance; relating to determination of
transportation weighting; amending K.S.A. 72-6411 and repealing the
existing section.
Be it enacted by the Legislature of the State of Kansas:
Section 1. K.S.A. 72-6411 is hereby amended to read as follows: 72-
6411. (a) The transportation weighting of each district shall be deter-
mined by the state board as follows:
(1) Determine the total expenditures of the district during the pre-
ceding school year from all funds for transporting pupils of public and
nonpublic schools on regular school routes;
(2) divide the amount determined under (1) by the total number of
pupils who were included in the enrollment of the district in the preced-
ing school year and for whom transportation was made available by the district;
(3) multiply the quotient obtained under (2) by the total number of
pupils who were included in the enrollment of the district in the preced-
ing school year, were residing less than [S:2 1/2 miles :S]the formula prescribed
mileage by the usually traveled road from the school building they at-
tended, and for whom transportation was made available by the district;
(4) multiply the product obtained under (3) by 50%;
(5) subtract the product obtained under (4) from the amount deter-
mined under (1);
(6) divide the remainder obtained under (5) by the total number of
pupils who were included in the enrollment of the district in the preced-
ing school year, were residing [S:2 1/2 miles or more :S]not less than the formula
prescribed mileage by the usually traveled road from the school building
they attended, and for whom transportation was made available by the
district. The quotient is the per-pupil cost of transportation;
(7) on a density-cost graph plot the per-pupil cost of transportation
for each district;
(8) construct a curve of best fit for the points so plotted;
(9) locate the index of density for the district on the base line of the
density-cost graph and from the point on the curve of best fit directly
above this point of index of density follow a line parallel to the base line
to the point of intersection with the vertical line, which point is the for-
mula per-pupil cost of transportation of the district;
(10) divide the formula per-pupil cost of transportation of the district
by base state aid per pupil;
(11) multiply the quotient obtained under (10) by the number of
pupils who are included in the enrollment of the district, are residing [S:2 1/2:S]
[S:miles or more :S]not less than the formula prescribed mileage by the usually
traveled road to the school building they attend, and for whom transpor-
tation is being made available by, and at the expense of, the district. The
product is the transportation weighting of the district.
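As a reading aid only (not part of the bill's text), the arithmetic of steps (1) through (11) can be sketched in code. The function and argument names below are hypothetical, and the cross-district curve fit of steps (7)-(9) is abstracted into the formula_cost_per_pupil input:

```python
def transportation_weighting(total_expend, n_transported, n_under, n_over,
                             formula_cost_per_pupil, base_aid, n_current_over):
    avg = total_expend / n_transported                       # step (2)
    under_share = avg * n_under * 0.5                        # steps (3)-(4)
    per_pupil_cost = (total_expend - under_share) / n_over   # steps (5)-(6)
    # Steps (7)-(9) plot per-pupil costs for all districts on a density-cost
    # graph and read the formula per-pupil cost off a curve of best fit;
    # that fitted value is taken here as an input.
    weighting = formula_cost_per_pupil / base_aid * n_current_over  # (10)-(11)
    return per_pupil_cost, weighting
```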
(b) For the purpose of providing accurate and reliable data on pupil
transportation, the state board is authorized to adopt rules and regulations
prescribing procedures which districts shall follow in reporting pertinent
information relative thereto, including uniform reporting of expenditures
for transportation.
(c) ``Index of density'' means the number of pupils who are included
in the enrollment of a district in the current school year, are residing [S:2 1/2:S]
[S:miles or more :S]not less than the formula prescribed mileage by the usually
traveled road from the school building they attend, and for whom trans-
portation is being made available on regular school routes by the district,
divided by the number of square miles of territory in the district.
(d) ``Density-cost graph'' means a drawing having: (1) A horizontal or
base line divided into equal intervals of density, beginning with zero on
the left; and (2) a scale for per-pupil cost of transportation to be shown
on a line perpendicular to the base line at the left end thereof, such scale
to begin with zero dollars at the base line ascending by equal per-pupil
cost intervals.
(e) ``Curve of best fit'' means the curve on a density-cost graph drawn
so the sum of the distances squared from such line to each of the points
plotted on the graph is the least possible.
[S:(f) The provisions of this section shall take effect and be in force from:S]
[S:and after July 1, 1992.:S]
(f) ``Formula prescribed mileage'' means two miles for the 1997-98
school year, 1 1/2 miles for the 1998-99 school year, and one mile for the
1999-2000 school year and each school year thereafter.
Sec. 2. K.S.A. 72-6411 is hereby repealed.
Sec. 3. This act shall take effect and be in force from and after its
publication in the statute book. | {"url":"http://www.kansas.gov/government/legislative/bills/1998/181.html","timestamp":"2014-04-17T13:16:40Z","content_type":null,"content_length":"5735","record_id":"<urn:uuid:75cacd5a-f010-459c-92e7-d88425b3c3ac>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
First, the game.
Ask a friend to write down a prime number. Bet them that you can always strike out 0 or more digits to get one of the following 26 primes:
2, 3, 5, 7, 11, 19, 41, 61, 89, 409, 449, 499, 881, 991, 6469, 6949, 9001, 9049, 9649, 9949, 60649, 666649, 946669, 60000049, 66000049, 66600049.
For example, if your friend writes down 43, you can strike out the 4 to get 3. If your friend writes down 946969, you can strike out the first 9 and the 6's to get 499.
(There's not always a unique way to do this, and of course it works with some non-primes, too: if your friend writes down 35, which isn't a prime, you can strike out the 3 to get 5 or vice versa. But
if someone gives you a number where you can't
strike out any subset of the digits to get a prime on the list above, then that number isn't prime. For example, you can't strike out any subset of the digits of 649 to get a prime on the list, and
indeed 649 isn't prime.)
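Here is a small Python sketch (mine, not from the post) that plays the game mechanically, searching the digit subsequences of a number for one of the 26 primes above:

```python
from itertools import combinations

MINIMAL_PRIMES = {
    "2", "3", "5", "7", "11", "19", "41", "61", "89", "409", "449", "499",
    "881", "991", "6469", "6949", "9001", "9049", "9649", "9949",
    "60649", "666649", "946669", "60000049", "66000049", "66600049",
}

def minimal_prime_subsequence(m):
    """Return a listed prime reachable by striking out digits of m, else None."""
    s = str(m)
    for k in range(1, len(s) + 1):            # try shorter subsequences first
        for idx in combinations(range(len(s)), k):
            candidate = "".join(s[i] for i in idx)
            if candidate in MINIMAL_PRIMES:
                return candidate
    return None

print(minimal_prime_subsequence(946969))  # 499
print(minimal_prime_subsequence(649))     # None -- so 649 cannot be prime
```

The brute-force search over all digit subsets is fine for the number lengths a friend will plausibly write down.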
I published this strange result in the Journal of Recreational Mathematics in 2000; there's a copy on my papers page. Believe it or not, there's actually some interesting mathematics behind it.
Let's say that a string of symbols x is a subsequence of a string of symbols y if you can strike out some symbols of y (not necessarily contiguous) to get x. ("Subsequence" seems to be the preferred term in North America, but in Europe they call this a "subword". However "subword" is used in North America to mean "contiguous subblock".) The relation "x is a subsequence of y" is a partial order, meaning it shares the following properties of the ordinary <= relation on integers:
• x is a subsequence of x;
• If x is a subsequence of y and y is a subsequence of x, then x=y.
• If x is a subsequence of y and y is a subsequence of z, then x is a subsequence of z.
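The relation itself is cheap to test; here is a minimal sketch (mine, not from the post) using the standard consuming-iterator idiom:

```python
def is_subsequence(x, y):
    """True iff x can be obtained by striking out symbols of y."""
    it = iter(y)
    return all(ch in it for ch in x)  # each `in` consumes `it` up to a match

assert is_subsequence("499", "946969")     # strike out 9, 6, 6, 9
assert is_subsequence("499", "499")        # the relation is reflexive
assert not is_subsequence("11", "946969")  # no 1s available to keep
```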
We now call two strings x and y comparable if x is a subsequence of y, or vice versa; otherwise, we say they are incomparable. A set is pairwise incomparable if every pair of elements is incomparable.
Now, a very neat result about the subsequence partial order is that every pairwise incomparable set is finite. This isn't obvious, and it isn't true for every partial order. (For example, it isn't true for the order where "subsequence" is replaced by "subword".) You may enjoy trying to prove this.
We need one more concept: the minimal element. A string x in a set S is said to be minimal if whenever a string y in S is a subsequence of x, then y = x. Given a set of strings S, it's not hard to see that the set of minimal elements of S is pairwise incomparable. So it must be finite. And every string in S has the property that some minimal element is a subsequence of it.
Now, to get the prime game, let S be the set of strings representing primes in base 10. The list of 26 primes above is the set of minimal elements for S.
Determining the set of minimal elements isn't always easy. For example, if instead of the primes we use the decimal representations of the powers of 2, then no one currently knows how to compute the
set of minimal elements. It's probably {1,2,4,8,65536}, but proving this seems quite hard.
A neat consequence of this result is that, given any language L, the set of all subsequences of strings in L is regular. We can't always easily determine the regular expression or automaton for this set, but we know it exists.
Here's a link to a file of cards you can print, cut out, and perhaps laminate, with the prime game on them. Enjoy.
11 comments:
This is, indeed, very neat!
Very cool!
So is there an algorithm that takes as input a base and outputs the list of minimal primes in that base? Or did you just get lucky with decimals that you were able to construct the complete
Hi, David. Thanks for stopping by.
The primes are sufficiently dense that there might well be an algorithm to deduce the minimal set in any base of representation. But I don't currently have such an algorithm.
Someone has suggested the size of the minimal set, as a function of the base, as a measure of the complexity of the original language, but as far as I know, little work has been done on this
because computing the minimal set is, in general, so hard.
Hi Jeff (and all),
If anyone is planning on coming up with a recursive method of calculating such minimal sets, perhaps I can get people started with a base case:
unary: {11}
(I know recursion is not a likely candidate
Jeff, that's something new I learn from your blog today. Very cool indeed! :)
Is the minimal set for the squares known? My Haskell code has so far found:
It looks like it could go on and on.
No, the minimal set for squares in base 10 is not known, to my knowledge. It might be extremely difficult.
I'd include 0 as a square, which would simplify the minimal set of the squares in base 10 somewhat. :)
Hi Jeffrey,
Thanks for a very interesting article. Looking at the powers of 2 minimal set, I think there's a way to determine it, which I've somewhat started in an Excel spreadsheet. I'll let you know if do.
This comment has been removed by a blog administrator. | {"url":"http://recursed.blogspot.com/2006/12/prime-game.html?showComment=1165027680000","timestamp":"2014-04-17T06:41:27Z","content_type":null,"content_length":"85178","record_id":"<urn:uuid:528e2bc1-7e70-4084-a7ac-25c863631bbd>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
When you take the mathematics COMPASS exam at the college, you are allowed to use the TI-83 or TI-84 calculator.
It is a good idea to practice with the calculator before the COMPASS test.
Compass Calculator Skills PowerPoint
Things that you should be able to do with a calculator:
+ Add signed numbers
- Subtract signed numbers
x Multiply signed numbers
÷ Divide signed numbers
√ Find the square root of a number
^ Find the value of a number raised to a rational exponent (e.g., 9^(3/2) = 27)
TI-83 or TI-84 calculator help: http://mathbits.com/MathBits/TISection/Openpage.htm | {"url":"http://www.highlands.edu/site/print/faculty-dakrayee-calculators","timestamp":"2014-04-18T06:24:55Z","content_type":null,"content_length":"2744","record_id":"<urn:uuid:e1119ebe-56f7-442f-9dbe-56e5f1f5b4ab>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 22
- Journal of Pure and Applied Algebra , 1991
"... There are many situations in logic, theoretical computer science, and category theory where two binary operations---one thought of as a (tensor) "product", the other a "sum"---play a key role.
In distributive and -autonomous categories these operations can be regarded as, respectively, the and/or of ..."
Cited by 119 (19 self)
Add to MetaCart
There are many situations in logic, theoretical computer science, and category theory where two binary operations---one thought of as a (tensor) "product", the other a "sum"---play a key role. In
distributive and -autonomous categories these operations can be regarded as, respectively, the and/or of traditional logic and the times/par of (multiplicative) linear logic. In the latter logic,
however, the distributivity of product over sum is conspicuously absent: this paper studies a "linearization" of that distributivity which is present in both case. Furthermore, we show that this weak
distributivity is precisely what is needed to model Gentzen's cut rule (in the absence of other structural rules) and can be strengthened in a natural way to generate - autonomous categories. We also
point out that this "linear" notion of distributivity is virtually orthogonal to the usual notion as formalized by distributive categories. 0 Introduction There are many situations in logic,
theoretical co...
, 1997
"... We extend the multiplicative fragment of linear logic with a non-commutative connective (called before), which, roughly speaking, corresponds to sequential composition. This lead us to a
calculus where the conclusion of a proof is a Partially Ordered MultiSET of formulae. We firstly examine coherenc ..."
Cited by 37 (8 self)
Add to MetaCart
We extend the multiplicative fragment of linear logic with a non-commutative connective (called before), which, roughly speaking, corresponds to sequential composition. This lead us to a calculus
where the conclusion of a proof is a Partially Ordered MultiSET of formulae. We firstly examine coherence semantics, where we introduce the before connective, and ordered products of formulae.
Secondly we extend the syntax of multiplicative proof nets to these new operations. We then prove strong normalisation, and confluence. Coming back to the denotational semantics that we started with,
we establish in an unusual way the soundness of this calculus with respect to the semantics. The converse, i.e. a kind of completeness result, is simply stated: we refer to a report for its lengthy
proof. We conclude by mentioning more results, including a sequent calculus which is interpreted by both the semantics and the proof net syntax, although we are not sure that it takes all proof nets
into account...
- Logical Methods in Computer Science, 2(4:3):1–44, 2006. Available from: http://arxiv.org/abs/cs/0605054. [McK05] Richard McKinley. Classical categories and deep inference. In Structures and
Deduction 2005 (Satellite Workshop of ICALP’05 , 2005
"... Vol. 2 (4:3) 2006, pp. 1–44 www.lmcs-online.org ..."
- IN FORMAL GRAMMAR , 1995
"... Lambek calculus may be viewed as a fragment of linear logic, namely intuitionistic non-commutative multiplicative linear logic. As it is too restrictive to describe numerous usual linguistic
phenomena, instead of extending it we extend MLL with a non-commutative connective, thus dealing with partia ..."
Cited by 17 (2 self)
Add to MetaCart
Lambek calculus may be viewed as a fragment of linear logic, namely intuitionistic non-commutative multiplicative linear logic. As it is too restrictive to describe numerous usual linguistic
phenomena, instead of extending it we extend MLL with a non-commutative connective, thus dealing with partially ordered multisets of formulae. Relying on proof net technique, our study associates
words with parts of proofs, modules, and parsing is described as proving by plugging modules. Apart from avoiding spurious ambiguities, our method succeeds in obtaining a logical description of
relatively free word order, head-wrapping, clitics, and extraposition (these latest two constructions are unfortunately not included, for lack of space).
- IN THE SECOND ROUND OF REVISION FOR MATHEMATICAL STRUCTURES IN COMPUTER SCIENCE , 2007
"... We study some normalisation properties of the deep-inference proof system NEL, which can be seen both as 1) an extension of multiplicative exponential linear logic (MELL) by a certain
non-commutative self-dual logical operator; and 2) an extension of system BV by the exponentials of linear logic. T ..."
Cited by 11 (6 self)
Add to MetaCart
We study some normalisation properties of the deep-inference proof system NEL, which can be seen both as 1) an extension of multiplicative exponential linear logic (MELL) by a certain non-commutative
self-dual logical operator; and 2) an extension of system BV by the exponentials of linear logic. The interest of NEL resides in: 1) its being Turing complete, while the same for MELL is not known,
and is widely conjectured not to be the case; 2) its inclusion of a self-dual, non-commutative logical operator that, despite its simplicity, cannot be axiomatised in any analytic sequent calculus
system; 3) its ability to model the sequential composition of processes. We present several decomposition results for NEL and, as a consequence of those and via a splitting theorem, cut elimination.
We use, for the first time, an induction measure based on flow graphs associated to the exponentials, which captures their rather complex behaviour in the normalisation process. The results are
presented in the calculus of structures, which is the first, developed formalism in deep inference.
, 1996
"... A graph-theoretical look at multiplicative proof nets lead us to two new descriptions of a proof net, both as a graph endowed with a perfect matching. The first one is a rather conventional
encoding of the connectives which nevertheless allows us to unify various sequentialisation techniques as the ..."
Cited by 10 (3 self)
Add to MetaCart
A graph-theoretical look at multiplicative proof nets lead us to two new descriptions of a proof net, both as a graph endowed with a perfect matching. The first one is a rather conventional encoding
of the connectives which nevertheless allows us to unify various sequentialisation techniques as the corollaries of a single graph theoretical result. The second one is more exciting: a proof net
simply consists in the set of its axioms -- the perfect matching -- plus one single series-parallel graph which encodes the whole syntactical forest of the sequent. We thus identify proof nets which
only differ because of the commutativity or associativity of the connectives, or because final par have been performed or not. We thus push further the program of proof net theory which is to get
closer to the proof itself, ...
, 1994
"... It is known that (mix) proof nets admit a coherence semantics, computed as a set of experiments. We prove here the converse: a proof structure is shown to be a proof net whenever its set of
experiments is a semantical object --- a clique of the corresponding coherence space. Moreover the interpretat ..."
Cited by 6 (4 self)
Add to MetaCart
It is known that (mix) proof nets admit a coherence semantics, computed as a set of experiments. We prove here the converse: a proof structure is shown to be a proof net whenever its set of
experiments is a semantical object --- a clique of the corresponding coherence space. Moreover the interpretation of atomic formulae can be restricted to a given coherent space with four tokens in
its web. This is done by transforming cut-links into tensor-links. Dealing directly with non-cut-free proof structure we characterise the deadlock freeness of the proof structure. These results are
especially convenient for Abramsky 's proof expressions, and are extended to the pomset calculus.
, 1997
"... We consider a class of graphs embedded in R 2 as noncommutative proof-nets with an explicit exchange rule. We give two characterization of such proof-nets, one representing proof-nets as
CW-complexes in a two-dimensional disc, the other extending a characterization by Asperti. As a corollary, we ..."
Cited by 5 (1 self)
Add to MetaCart
We consider a class of graphs embedded in R 2 as noncommutative proof-nets with an explicit exchange rule. We give two characterization of such proof-nets, one representing proof-nets as CW-complexes
in a two-dimensional disc, the other extending a characterization by Asperti. As a corollary, we obtain that the test of correctness in the case of planar graphs is linear in the size of the data.
Braided proofnets are proof-nets for multiplicative linear logic with Mix embedded in R 3 . In order to prove the cut-elimination theorem, we consider proof-nets in R 2 as projections of braided
proof-nets under regular isotopy. Contents 1 Introduction 2 2 Language 4 2.1 Links and Proof-Structures . . . . . . . . . . . . . . . . . . . . 5 3 A combinatorial characterization. 8 3.1 Subnets of
Proof-Nets . . . . . . . . . . . . . . . . . . . . . . 11 Research supported by EC Individual Fellowship Human Capital and Mobility, contract n. 930142. Both authors thank Jacques van de Wiele, f...
- LOGIC FOR CONCURRENCY AND SYNCHRONISATION”, KLUWER TRENDS IN LOGIC N.18, 2003, PP.93-114. LAMBDA CALCULUS 37 , 2001
"... ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=285658","timestamp":"2014-04-24T09:21:25Z","content_type":null,"content_length":"35989","record_id":"<urn:uuid:fc3b72ca-2676-4b84-8cc2-d8ca18dd0401>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Tutors
Fort Lee, NJ 07024
Math, Science, Technology, and Test Prep Tutor
I'm an experienced certified teacher in NJ, currently pursuing a Doctorate in Education and Applied Mathematics. I've been teaching and tutoring for over 15 years in several subjects including pre-algebra, algebra I & II, geometry, trigonometry, statistics,...
Offering 10+ subjects including algebra 1, algebra 2 and calculus | {"url":"http://www.wyzant.com/geo_Clifton_NJ_Math_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-19T13:30:52Z","content_type":null,"content_length":"61424","record_id":"<urn:uuid:28526fe5-b0dc-48c3-88f4-9d98015a7db2>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding A Circle in A Set of N Points C++
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Jan 2013
Rep Power
Finding A Circle in A Set of N Points C++
I'm a beginner at C++ and I was given this problem to do for my end of semester project. The question states:
"Generate by random, a set of n points in a 2D Cartesian Coordinates System. Find a circle enveloping all of the points which area is the smallest. Use functions for reading, printing appropriate
data. Use local variables."
I NEED HELP. This is all I have done so far:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <iostream>
#define n 100
#define range_y 100
#define range_x 100
struct point
{
    int x;
    int y;
};
int main()   /* void main() is non-standard; main should return int */
{
    point * table;
    table = (point *)malloc(n * sizeof(point));
    srand((unsigned)time(NULL));        /* seed the generator once */
    for (int i = 0; i < n; i++)
    {
        table[i].x = rand() % range_x;  /* random coordinates in range */
        table[i].y = rand() % range_y;
    }
    free(table);
    return 0;
}
I'd plunge right in and apply a convex hull algorithm to reduce the size of the problem. Next I'd draw some pictures and research the geometry to see if there's an analytical technique. Otherwise
you'll have to use numerical methods. Now, if I were clever I'd think about the problem before I wrote code. I mean, the answer couldn't be something stupidly simple like "a diameter of the
enclosing circle is defined by the two points with greatest separation". Anyway, I doubt that's correct, but if it were it would be a shame to have bothered with convex hull.
[code]Code tags[/code] are essential for python code and Makefiles!
Originally Posted by b49P23TIvg
I mean, the answer couldn't be something stupidly simple like "a diameter of the enclosing circle is defined by the two points with greatest separation". Anyway, I doubt that's correct, but if it
were it would be a shame to have bothered with convex hull.
It's not that simple: consider the three vertices of an equilateral triangle, along with a fourth point opposite one of the vertices just barely inside the circumcircle of the triangle. Anyway,
this is a pretty old and solved problem. I believe Emo Welzl is the name usually associated to this.
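A minimal Python sketch of the algorithm named above (my own rendition of Welzl's standard recursive description, not any poster's code; the names `welzl`, `circle_from`, and `inside` are mine):

```python
import random

def circle_from(p, q=None, r=None):
    # Smallest circle through one, two, or three points: (cx, cy, radius).
    if q is None:
        return (p[0], p[1], 0.0)
    if r is None:
        cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
        return (cx, cy, ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5)
    # Circumcircle of three points via the perpendicular-bisector formula.
    ax, ay = p
    bx, by = q
    cx, cy = r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def inside(c, p, eps=1e-7):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

def welzl(points):
    # Welzl's expected-linear-time recursion: shrink the point set,
    # promoting any point found outside the current circle to the
    # boundary set (at most three boundary points pin the circle down).
    pts = list(points)
    random.shuffle(pts)
    def mec(k, boundary):
        if k == 0 or len(boundary) == 3:
            return circle_from(*boundary) if boundary else (0.0, 0.0, 0.0)
        p = pts[k - 1]
        c = mec(k - 1, boundary)
        if inside(c, p):
            return c
        return mec(k - 1, boundary + [p])
    return mec(len(pts), [])
```

This also makes the counterexample above concrete: for an equilateral triangle, the circle on the farthest pair as diameter misses the third vertex, while `welzl` returns the (larger) circumcircle.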
It seems we've been here before.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper
Originally Posted by pacosarae
I believe Emo Welzl is the name usually associated to this.
Emo's algorithm is sometimes found in 3D game engines. But anyway, here is a link to a tutorial and sample Java code
[mod-edit]Deleted pacosarae's post due to his image link drop attempt and delinked his image on your reply.[/edit]
Last edited by Scorpions4ever; January 25th, 2013 at 08:53 AM.
| {"url":"http://forums.devshed.com/programming-42/finding-circle-set-938656.html","timestamp":"2014-04-17T23:38:28Z","content_type":null,"content_length":"68948","record_id":"<urn:uuid:c1257f71-cdfa-4f87-92db-523b1faf929e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Engineering Acoustics/Rotor Stator Interactions
An important issue for the aeronautical industry is the reduction of aircraft noise, which requires studying the characteristics of turbomachinery noise. The rotor/stator interaction is a predominant part of the noise emission. We will present an introduction to this interaction theory, whose applications are numerous. For example, the design of air-conditioning ventilators requires a full understanding of this interaction.
Noise emission of a Rotor-Stator mechanism
The rotor wake induces a fluctuating vane loading on the downstream stator blades, which is directly linked to the noise emission.
We consider a rotor with B blades (at a rotation speed of $\Omega$) and a stator with V blades, in a single rotor/stator configuration. The source frequencies are multiples of $B\Omega$, that is to say $mB\Omega$. For the moment we don't have access to the source levels $F_{m}$. The noise frequencies are also $mB\Omega$, not depending on the number of blades of the stator. Nevertheless, this number V plays a predominant role in the noise levels ($P_{m}$) and directivity, as will be discussed later.
For an airplane air-conditioning ventilator, reasonable data are :
$B=13$ and $\Omega = 12000$ rev/min
The blade passing frequency is 2600 Hz, so we only have to include the first two multiples (2600 Hz and 5200 Hz), because of the upper limit of human hearing sensitivity. We have to study the frequencies m=1 and m=2.
Optimization of the number of blades
As the source levels can't be easily modified, we have to focus on the interaction between those levels and the noise levels.
The transfer function ${{F_m } \over {P_m }}$ contains the following part :
$\sum\limits_{s = - \infty }^{s = + \infty } {e^{ - {{i(mB - sV)\pi } \over 2}} J_{mB - sV} } (mBM)$
where M is the Mach number and $J_{mB - sV}$ is the Bessel function of order mB-sV. In order to minimize the influence of the transfer function, the goal is to reduce the value of this Bessel function. To do so, the argument must be smaller than the order of the Bessel function.
Back to the example :
For m=1, with a Mach number M=0.3, the argument of the Bessel function is about 4. We have to avoid values of mB-sV less than 4. If V=10, we have 13-1x10=3, so there will be a noisy mode. If V=19, the minimum of |mB-sV| is 6, and the noise emission will be limited.
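This blade-count rule is easy to check numerically. The sketch below is my own (the helper `min_order` is not from the article): for each candidate stator blade count V, it finds the smallest |mB - sV| over the first two harmonics and a range of s; an order at or below the Bessel argument (about 4 for m=1 here) flags a potentially noisy mode.

```python
B = 13  # rotor blades, as in the example above

def min_order(V, harmonics=(1, 2), s_values=range(-10, 11)):
    # Smallest Bessel order |mB - sV| reachable over the given
    # harmonics m and integers s.
    return min(abs(m * B - s * V) for m in harmonics for s in s_values)

for V in (10, 13, 19):
    print(V, min_order(V))
```

Running this reproduces the article's numbers: V=10 gives a minimum order of 3 (noisy), V=13 gives 0 (the worst case), and V=19 gives 6 (quiet on the axis).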
Remark :
The case that must be strictly avoided is when mB-sV can be zero, which makes the order of the Bessel function 0. As a consequence, we have to take care to choose B and V coprime.
Determination of source levels
The minimization of the transfer function ${{F_m } \over {P_m }}$ is a great step in the process of reducing the noise emission. Nevertheless, to be highly efficient, we also have to predict the source levels $F_{m}$. This will lead us to minimize the Bessel functions only for the most significant values of m. For example, if the source level for m=1 is much higher than for m=2, we will not consider the Bessel functions of order 2B-sV. The determination of the source levels is given by the Sears theory, which will not be detailed here.
All this study was made for a privileged direction: the axis of the rotor/stator. All the results are acceptable when the noise reduction is sought in this direction. In the case where the noise to reduce is perpendicular to the axis, the results are very different, as the following figures show:
For B=13 and V=13, which is the worst case, we see that the sound level is very high on the axis (for $\theta = 0$).
For B=13 and V=19, the sound level is very low on the axis but high perpendicular to the axis (for $\theta = \pi/2$).
External references
Last modified on 13 December 2006, at 14:13 | {"url":"http://en.m.wikibooks.org/wiki/Engineering_Acoustics/Rotor_Stator_Interactions","timestamp":"2014-04-18T03:08:17Z","content_type":null,"content_length":"27194","record_id":"<urn:uuid:5c0f9774-0cc5-4a24-8641-f4d0d1bc506e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
@MIF , Activity:Drawing squares
Re: @MIF , Activity:Drawing squares
hi bobbym
Speaking of Maxima, I cannot seem to find the download link for Maxima (programming language).
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=17413&p=4","timestamp":"2014-04-18T06:00:01Z","content_type":null,"content_length":"14804","record_id":"<urn:uuid:4ffcefd6-af43-48f4-a92b-4785e3971264>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
September 19, 1998
This Week's Finds in Mathematical Physics (Week 123)
John Baez
It all started out as a joke. Argument for argument's sake. Alison and her infuriating heresies.
"A mathematical theorem," she'd proclaimed, "only becomes true when a physical system tests it out: when the system's behaviour depends in some way on the theorem being true or false."
It was June 1994. We were sitting in a small paved courtyard, having just emerged from the final lecture in a one-semester course on the philosophy of mathematics - a bit of light relief from the hard grind of the real stuff. We had fifteen minutes to kill before meeting some friends for lunch. It was a social conversation - verging on mild flirtation - nothing more. Maybe there were demented academics, lurking in dark crypts somewhere, who held views on the nature of mathematical truth which they were willing to die for. But we were twenty years old, and we knew it was all angels on the head of a pin.
I said, "Physical systems don't create mathematics. Nothing creates mathematics - it's timeless. All of number theory would still be exactly the same, even if the universe contained nothing but a
single electron."
Alison snorted. "Yes, because even one electron, plus a space-time to put it in, needs all of quantum mechanics and all of general relativity - and all the mathematical infrastructure they
entail. One particle floating in a quantum vacuum needs half the major results of group theory, functional analysis, differential geometry - "
"OK, OK! I get the point. But if that's the case... the events in the first picosecond after the Big Bang would have `constructed' every last mathematical truth required by any physical system,
all the way to the Big Cruch. Once you've got the mathematics which underpins the Theory of Everything... that's it, that's all you ever need. End of story."
"But it's not. To apply the Theory of Everything to a particular system, you still need all the mathematics for dealing with that system - which could include results far beyond the mathematics
the TOE itself requires. I mean, fifteen billion years after the Big Bang, someone can still come along and prove, say... Fermat's Last Theorem." Andrew Wiles at Princeton had recently announced
a proof of the famous conjecture, although his work was still being scrutinised by his colleagues, and the final verdict wasn't yet in. "Physics never needed that before."
I protested, "What do you mean, `before'? Fermat's Last Theorem never has - and never will - have anything to do with any branch of physics."
Alison smiled sneakily. "No branch, no. But only because the class of physical systems whose behaviour depends on it is so ludicrously specific: the brains of mathematicians who are trying to validate the Wiles proof."
"Think about it. Once you start trying to prove a theorem, then even if the mathematics is so `pure' that it has no relevance to any other object in the universe... you've just made it relevant
to yourself. You have to choose some physical process to test the theorem - whether you use a computer, or a pen and paper... or just close your eyes and shuffle neurotransmitters. There's no
such thing as a proof which doesn't rely on physical events, and whether they're inside or outside your skull doesn't make them any less real."
And this is just the beginning... the beginning of Greg Egan's tale of an inconsistency in the axioms of arithmetic - a "topological defect" left over in the fabric of mathematics, much like the
cosmic strings or monopoles hypothesized by certain physicists thinking about the early universe - and the mathematicians who discover it and struggle to prevent a large corporation from exploiting
it for their own nefarious purposes. This is the title story of his new collection, "Luminous".
I should also mention his earlier collection of stories, named after a sophisticated class of mind-altering nanotechnologies, the "axiomatics", that affect specific beliefs of anyone who uses them:
1) Greg Egan, Axiomatic, Orion Books, 1995.
Greg Egan, Luminous, Orion Books, 1998.
Some of the stories in these volumes concern math and physics, such as "The Planck Dive", about some far-future explorers who send copies of themselves into a black hole to study quantum gravity
firsthand. One nice thing about this story, from a pedant's perspective, is that Egan actually works out a plausible scenario for meeting the technical challenges involved - with the help of a little
23rd-century technology. Another nice thing is the further exploration of a world in which everyone has long been uploaded to virtual "scapes" and can easily modify and copy themselves - a world
familiar to readers of his novel "Diaspora" (see "week115"). But what I really like is that it's not just a hard-science extravaganza; it's a meditation on mortality. You can never really know what
it's like to cross an event horizon unless you do it....
Other stories focus on biotechnology and philosophical problems of identity. The latter sort will especially appeal to everyone who liked this book:
2) Daniel C. Dennett and Douglas R. Hofstadter, The Mind's I: Fantasies and Reflections on Self and Soul, Bantam Books, 1982.
Among these, one of my favorite is called "Closer". How close can you be to someone without actually being them? Would temporarily merging identities with someone you loved help you understand them
better? Luckily for you penny-pinchers out there, this particular story is available free at the following website:
3) Greg Egan, Closer, http://www.eidolon.net/old_site/issue_09/09_closr.htm
Whoops! I'm drifting pretty far from mathematical physics, aren't I? Self-reference has a lot to do with mathematical logic, but.... To gently drift back, let me point out that Egan has a website in
which he explains special and general relativity in a nice, nontechnical way:
4) Greg Egan, Foundations, http://www.netspace.net.au/~gregegan/FOUNDATIONS/index.html
Also, here are some interesting papers:
5) Gordon L. Kane, Experimental evidence for more dimensions reported, Physics Today, May 1998, 13-16.
Paul M. Grant, Researchers find extraordinarily high temperature superconductivity in bio-inspired nanopolymer, Physics Today, May 1998, 17-19.
Jack Watrous, Ribosomal robotics approaches critical experiments; government agencies watch with mixed interest, Physics Today, May 1998, 21-23.
What these papers have in common is that they are all works of science fiction, not science. They read superficially like straight science reporting, but they are actually the winners of Physics
Today's "Physics Tomorrow" essay contest!
For example, Grant writes:
"Little's concept involved replacing the phonons - characterized by the Debye temperature - with excitons, whose much higher characteristic energies are on the order of 2 eV, or 23,000 K. If excitons
were to become the electron-pairing `glue', superconductors with T_c's as high as 500 K might be possible, even under weak coupling conditions. Little even proposed a possible realization of the
idea: a structure composed of a conjugated polymer chain (polyene) dressed with highly polarizable molecule (aromatics) as side groups. Simply stated, the polyene chain would be a normal metal with a
single mobile electron per C-H molecular unit; electrons on separate units would be paired by interacting with the exciton field on the polarizable side groups."
Actually, I think this part is perfectly true - William A. Little suggested this way to achieve high-temperature superconductivity back in the 1960s. The science fiction part is just the description,
later on in Grant's article, of how Little's dream is actually achieved.
Okay, enough science fiction! Time for some real science! Quantum gravity, that is. (Stop snickering, you skeptics....)
6) Laurent Freidel and Kirill Krasnov, Spin foam models and the classical action principle, preprint available as hep-th/9807092.
I described the spin foam approach to quantum gravity in "week113". But let me remind you how the basic idea goes. A good way to get a handle on this idea is by analogy with Feynman diagrams. In
ordinary quantum field theory there is a Hilbert space of states called "Fock space". This space has a basis of states in which there are a specific number of particles at specific positions. We can
visualize such a state simply by imagining a bunch of points in space, with labels to say which particles are which kinds: electrons, quarks, and so on. One of the main jobs of quantum field theory
is to let us compute the amplitude for one such state to evolve into another as time passes. Feynman showed that we can do it by computing a sum over graphs in spacetime. These graphs are called
Feynman diagrams, and they represent "histories". For example,
    \u        e/
     \        /
      \~~~~~~/
      /  W   \
     /        \
    /d        ν\
would represent a history in which an up quark emits a W boson and turns into a down quark, with the W being absorbed by an electron, turning it into a neutrino. Time passes as you march down the
page. Quantum field theory gives you rules for computing amplitudes for any Feyman diagram. You sum these amplitudes over all Feynman diagrams starting at one state and ending at another to get the
total amplitude for the given transition to occur.
Now, where do these rules for computing Feynman diagram amplitudes come from? They are not simply postulated. They come from perturbation theory. There is a general abstract formula for computing
amplitudes in quantum field theory, but it's not so easy to use this formula in concrete calculations, except for certain very simple field theories called "free theories". These theories describe
particles that don't interact at all. They are mathematically tractable but physically uninteresting. Uninteresting, that is, except as a starting-point for studying the theories we are interested in
- the so-called "interacting theories".
The trick is to think of an interacting theory as containing parameters, called "coupling constants", which when set to zero make it reduce to a free theory. Then we can try to expand the transition
amplitudes we want to know as a Taylor series in these parameters. As usual, computing the coefficients of the Taylor series only requires us to compute a bunch of derivatives. And we can compute these derivatives using the free theory! Typically, computing the nth derivative of some transition amplitude gives us a bunch of integrals which correspond to Feynman diagrams with n vertices.
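Schematically (my own notation, not Baez's), the expansion of a transition amplitude in a coupling constant g reads:

```latex
\langle \mathrm{out} \mid \mathrm{in} \rangle(g)
  \;=\; \sum_{n=0}^{\infty} \frac{g^{n}}{n!}
  \left.\frac{d^{n}}{dg^{n}}\,
  \langle \mathrm{out} \mid \mathrm{in} \rangle(g)\right|_{g=0}
```

where each nth derivative, evaluated in the free theory at g = 0, unpacks into a sum of integrals labelled by Feynman diagrams with n vertices.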
By the way, this means you have to take the particles you see in Feynman diagrams with a grain of salt. They don't arise purely from the mathematics of the interacting theory. They arise when we
approximate that theory by a free theory. This is not an idle point, because we can take the same interacting theory and approximate it by different free theories. Depending on what free theory we
use, we may say different things about which particles our interacting theory describes! In condensed matter physics, people sometimes use the term "quasiparticle" to describe a particle that appears
in a free theory that happens to be handy for some problem or other. For example, it can be helpful to describe vibrations in a crystal using "phonons", or waves of tilted electron spins using
"spinons". Condensed matter theorists rarely worry about whether these particles "really exist". The question of whether they "really exist" is less interesting than the question of whether the
particular free theory they inhabit provides a good approximation for dealing with a certain problem. Particle physicists, too, have increasingly come to recognize that we shouldn't worry too much
about which elementary particles "really exist".
But I digress! My point was simply to say that Feynman diagrams arise from approximating interacting theories by free theories. The details are complicated and in most cases nobody has ever succeeded
in making them mathematically rigorous, but I don't want to go into that here. Instead, I want to turn to spin foams.
Everything I said about Feynman diagrams has an analogy in this approach to quantum gravity. The big difference is that ordinary "free theories" are formulated on a spacetime with a fixed metric -
usually Minkowski spacetime, with its usual flat metric. Attempts to approximate quantum gravity by this sort of free theory failed dismally. Perhaps the fundamental reason is that general relativity
doesn't presume that spacetime has a fixed metric - au contraire, it's a theory in which the metric is the main variable!
So the idea of Freidel and Krasnov is to approximate quantum gravity with a very different sort of "free theory", one in which the metric is a variable. The theory they use is called "BF theory". I
said a lot about BF theory in "week36", but here the main point is simply that it's a topological quantum field theory, or TQFT. A TQFT is a quantum field theory that does not presume a fixed metric,
but of a very simple sort, because it has no local degrees of freedom. I very much like the idea that a TQFT might serve as a novel sort of "free theory" for the purposes of studying quantum gravity.
Everything that Freidel and Krasnov do is reminiscent of familiar quantum field theory, but also very different, because their starting-point is BF theory rather than a free theory of a traditional
sort. For example, just as ordinary quantum field theory starts out with Fock space, in the spin network approach to quantum gravity we start with a nice simple Hilbert space of states. But this
space has a basis consisting, not of collections of 0-dimensional particles sitting in space at specified positions, but of 1-dimensional "spin networks" sitting in space. (For more on spin networks,
see "week55" and "week110".) And instead of using 1-dimensional Feynman diagrams to compute transition amplitudes, the idea is now to use 2-dimensional gadgets called "spin foams". The amplitudes for
spin foams are easy to compute in BF theory, because there are a lot of explicit formulas using the so-called "Kauffman bracket", which is an easily computable invariant of spin networks. So then the
trick is to use this technology to compute spin foam amplitudes for quantum gravity.
Now, I shouldn't give you the wrong impression here. There are lots of serious problems and really basic open questions in this work, and the whole thing could turn out to be fatally flawed somehow.
Nonetheless, something seems right about it, so I find it very interesting.
Anyway, on to some other papers. I'm afraid I don't have enough energy for detailed descriptions, because I'm busy moving into a new house, so I'll basically just point you at them....
7) Abhay Ashtekar, Alejandro Corichi and Jose A. Zapata, Quantum theory of geometry III: Non-commutativity of Riemannian structures, preprint available as gr-qc/9806041.
This is the long-awaited third part of a series giving a mathematically rigorous formalism for interpreting spin network states as "quantum 3-geometries", that is, quantum states describing the
metric on 3-dimensional space together with its extrinsic curvature (as it sits inside 4-dimensional spacetime). Here's the abstract:
"The basic framework for a systematic construction of a quantum theory of Riemannian geometry was introduced recently. The quantum versions of Riemannian structures - such as triad and area operators
- exhibit a non-commutativity. At first sight, this feature is surprising because it implies that the framework does not admit a triad representation. To better understand this property and to
reconcile it with intuition, we analyze its origin in detail. In particular, a careful study of the underlying phase space is made and the feature is traced back to the classical theory; there is no
anomaly associated with quantization. We also indicate why the uncertainties associated with this non-commutativity become negligible in the semi-classical regime."
In case you're wondering, the "triad" field is more or less what mathematicians would call a "frame field" or "soldering form" - and it's the same as the "B" field in BF theory. It encodes the information about the metric in Ashtekar's formulation of general relativity.
Moving on to matters n-categorical, we have:
8) Andre Hirschowitz, Carlos Simpson, Descente pour les n-champs (Descent for n-stacks), approximately 240 pages, in French, preprint available as math.AG/9807049.
Apparently this provides a theory of "n-stacks", which are the n-categorical generalization of sheaves. Ever since Grothendieck's 600-page letter to Quillen (see "week35"), this has been the holy
grail of n-category theory. Unfortunately I haven't mustered sufficient courage to force my way through 240 pages of French, so I don't really know the details!
For the following two n-category papers, exploring some themes close to my heart, I'll just quote the abstracts:
9) Michael Batanin, Computads for finitary monads on globular sets, preprint available at http://www.ics.mq.edu.au/~mbatanin/papers.html
"This work arose as a reflection on the foundation of higher dimensional category theory. One of the main ingredients of any proposed definition of weak n-category is the shape of diagrams (pasting
scheme) we accept to be composable. In a globular approach [due to Batanin] each k-cell has a source and target (k-1)-cell. In the opetopic approach of Baez and Dolan and the multitopic approach of
Hermida, Makkai and Power each k-cell has a unique (k-1)-cell as target and a whole (k-1)-dimensional pasting diagram as source. In the theory of strict n-categories both source and target may be a
general pasting diagram.
The globular approach being the simplest one seems too restrictive to describe the combinatorics of higher dimensional compositions. Yet, we argue that this is a false impression. Moreover, we prove
that this approach is a basic one from which the other type of composable diagrams may be derived. One theorem proved here asserts that the category of algebras of a finitary monad on the category of
n-globular sets is equivalent to the category of algebras of an appropriate monad on the special category (of computads) constructed from the data of the original monad. In the case of the monad
derived from the universal contractible operad this result may be interpreted as the equivalence of the definitions of weak n-categories (in the sense of Batanin) based on the `globular' and general
pasting diagrams. It may be also considered as the first step toward the proof of equivalence of the different definitions of weak n-category.
We also develop a general theory of computads and investigate some properties of the category of generalized computads. It turned out, that in a good situation this category is a topos (and even a
presheaf topos under some not very restrictive conditions, the property firstly observed by S. Schanuel and reproved by A. Carboni and P. Johnstone for 2-computads in the sense of Street)."
10) Tom Leinster, Structures in higher-dimensional category theory, preprint available at http://www.dpmms.cam.ac.uk/~leinster
"This is an exposition of some of the constructions which have arisen in higher-dimensional category theory. We start with a review of the general theory of operads and multicategories. Using this we
give an account of Batanin's definition of n-category; we also give an informal definition in pictures. Next we discuss Gray-categories and their place in coherence problems. Finally, we present
various constructions relevant to the opetopic definitions of n-category.
New material includes a suggestion for a definition of lax cubical n-category; a characterization of small Gray-categories as the small substructures of 2-Cat; a conjecture on coherence theorems in
higher dimensions; a construction of the category of trees and, more generally, of n-pasting diagrams; and an analogue of the Baez-Dolan slicing process in the general theory of operads."
Okay - now for something completely different. In "week122" I said how Kreimer and Connes have teamed up to write a paper relating Hopf algebras, renormalization, and noncommutative geometry. Now
it's out:
11) Alain Connes and Dirk Kreimer, Hopf algebras, renormalization and noncommutative geometry, preprint available as hep-th/9808042.
Also, here's an introduction to Kreimer's work:
12) Dirk Kreimer, How useful can knot and number theory be for loop calculations?, Talk given at the workshop "Loops and Legs in Gauge Theories", preprint available as hep-th/9807125.
Switching over to homotopy theory and its offshoots... when I visited Dan Christensen at Johns Hopkins this spring, he introduced me to all the homotopy theorists there, and Jack Morava gave me a
paper which really indicates the extent to which new-fangled "quantum topology" has interbred with good old- fashioned homotopy theory:
13) Jack Morava, Quantum generalized cohomology, preprint available as math.QA/9807058 and http://hopf.math.purdue.edu/
Again, I'll just quote the abstract rather than venturing my own summary:
"We construct a ring structure on complex cobordism tensored with the rationals, which is related to the usual ring structure as quantum cohomology is related to ordinary cohomology. The resulting object defines a generalized two-dimensional topological field theory taking values in a category of spectra."
Finally, Morava has a student who gave me an interesting paper on operads and moduli spaces:
14) Satyan L. Devadoss, Tessellations of moduli spaces and the mosaic operad, preprint available as math.QA/9807010.
"We construct a new (cyclic) operad of `mosaics' defined by polygons with marked diagonals. Its underlying (aspherical) spaces are the sets M[0,n](R) of real points of the moduli space of punctured
Riemann spheres, which are naturally tiled by Stasheff associahedra. We (combinatorially) describe them as iterated blow-ups and show that their fundamental groups form an operad with similarities to
the operad of braid groups."
Some things are so serious that one can only jest about them. - Niels Bohr.
© 1998 John Baez | {"url":"http://math.ucr.edu/home/baez/week123.html","timestamp":"2014-04-20T06:01:54Z","content_type":null,"content_length":"25930","record_id":"<urn:uuid:c27ecdb7-b621-43a1-9cfb-32c52c0943f3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
PreCalc & Trig Help, Picture Below! | {"url":"http://openstudy.com/updates/5134af91e4b093a1d9495e26","timestamp":"2014-04-17T21:54:45Z","content_type":null,"content_length":"116847","record_id":"<urn:uuid:3d90709b-f7e0-44fc-b1ca-f63bf306b8a9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learning MIT Calculus in 5 Days
To access the course for free, click here.
Last week marked week one of my MIT Challenge, to learn their 4-year computer science curriculum in 12 months, without taking classes. As you can watch in the video above, this week was calculus.
I started the class on Monday and wrote the exam on Friday afternoon. This meant I had roughly 4.5 days to watch 30+ hours of video lectures, understand all the concepts and master the math enough to
pass a 3-hour comprehensive exam.
Why Bother Learning Calculus?
First though, why bother learning calculus at all? Beyond being a required course for my challenge, I think calculus has an unfair reputation as being either too hard or not useful enough to bother
I happen to think calculus is cool, and that's not only because I'm a huge geek. Calculus is a tool that allows you to solve really interesting problems that are much harder to solve without any knowledge of calculus.
For learning computer science, for example, calculus allows you to run machine learning algorithms in artificial intelligence, render 3D computer graphics and create physics engines for video games.
Calculus may seem a little daunting or dry from the outset, but that’s mostly because people don’t realize the volume of cool ideas that are based on it.
More, learning calculus in this challenge is also a statement about the benefits of theoretical versus practical knowledge.
Theory Actually Matters (Or Why DIY Learners Often Fail)
A lot of people have been criticizing my challenge for not doing enough programming. I’ll admit, that’s a weakness I need to work on, and I’m making adjustments to try to include more programming
assignments for the classes where there are interesting projects.
However, part of the criticism is weighed against learning theory in general. A common misconception is that the best way to learn is simply to go out in the real world and do things, and only learn theory when you absolutely need it.
But this misses the purpose of learning the theory behind ideas. It’s not just to fulfill curiosity, or to execute some more practical task. Learning theory broadens the types of problems you can
imagine possible solutions to.
You don’t need a computer science degree to learn how to program. I’ve programmed as a hobby for years, and I’m confident that I could probably make most simple programs if I put in enough time
(although I’m far from a master).
But computer science isn’t programming in the same way biology isn’t just using a microscope. Learning the theory behind algorithms, machine learning, graphics, compilers and circuitry gives you the
ability to think about and take on more interesting problems.
The power of theory is that it expands the breadth of problems you can solve, while practical knowledge improves your efficiency with one set of problems. Studying business didn’t teach me much about
running my business, but it did give me a language to think about all sorts of businesses that I haven’t started yet.
That’s the main goal of this MIT Challenge. I want to show that the theory can be accelerated and learned outside of school. But I also want to show that big ideas matter and sometimes the best way to
be effective in the world is to first understand it.
The Setup to Learn Calculus in 5 Days
A lot of people are asking me how I avoid burnout when trying to complete a 120-hour course in 5 days. But the truth is, the schedule I keep isn’t too exhausting. It requires focus, certainly, and it
also has quite a few hours of work, but I set it up in such a way to maximize the amount of time I have to relax so I don’t feel overwhelmed.
Here’s the schedule I used last week, for example:
• Monday 6am – 5pm (roughly 60 minutes worth of breaks) – Watch first half of lectures at ~2x speed.
• Tuesday 6am – 6pm – Finish lectures
• Wednesday 6am – 6pm – Do 4 practice exams, use Feynman Technique on all conceptual errors or processes I don’t fully understand.
• Thursday 6am-6pm – Repeat process, ensure at least 1-2 Q’s are covered from every topic and use Feynman + practice questions to master the ones I’m having trouble with.
• Friday 6am-11am – Final brush up, write the exam at 1pm and finish by 4pm.
Not everyone will be able to get through an entire course like calculus in 5 days. However, this basic setup of waking up early and starting immediately, but having the evenings off, is a great way to get more work done without feeling overwhelmed.
Each day I did approximately 10-11 hours of work, but because I had nights off, I could relax, watch movies, go to the gym or hang out with friends. When people talk about burnout, in my opinion,
they are usually talking about missing those things, not the actual number of hours they are putting in.
The truth is, when learning a conceptual class like calculus, burnout is one of the worst things you can do for efficiency. Losing sleep or not organizing your time well enough to have some
relaxation time can impair your ability to focus the next day. If you’re tired, you aren’t learning properly.
I did take 20 minute naps during the day, which I found helpful in giving a burst of energy which wanes after several hours of focus. But the timing and frequency of breaks is crucial, since it’s
easy to waste hours on breaks and then have to work later in the evening.
I feel comfortable with this schedule going forward, although I’ll certainly have to modify it for classes without video lectures or with larger programming assignments.
Now onto multivariate calculus, wish me luck!
Hello Scott,
I was wondering if you took notes on paper while watching the video. It just seems really hard to remember everything without doing so.
[…] I share my learning experiences after the fact. I’ve written before about tackling MIT’s calculus, Spanish, linear algebra, finance and other non-academic learning […]
I just googled the easiest way to learn calculus and your YouTube video popped up. I’m wondering how old you are and what kind of motivation you have to do this rigorous schedule. I’m just in Calc 1 and I have 2 more years of math to obtain my engineering degree and to me it’s daunting. I’m actually afraid of the math and I’m having a hard time keeping up with the semester’s schedule. I really
can’t imagine remembering anything from a one week session. Some of my homework assignments take me 8 hours to complete. How do you remember all of the rules? | {"url":"http://www.scotthyoung.com/blog/2011/10/09/learn-calculus-fast/","timestamp":"2014-04-20T13:18:55Z","content_type":null,"content_length":"54087","record_id":"<urn:uuid:f8fbb33c-98e0-404c-97aa-34097ee12c5a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
College Algebra Learning Objectives, Freshman Math Program
110-1 Concept of Function
Students will be able to…
1. Identify if a relationship is a function and justify with the definition of a function.
2. Demonstrate proper use of function notation.
Given any of the four representations of a function, (algebraic, table, graph, contextual), students will be able to …
3. Identify independent and dependent variables.
4. Identify domain and range using interval or inequality notation.
**for all functions not listed in 110-2
110-2 Families of Functions
Students will be able to identify, solve for, and/or interpret the characteristics of each family of functions as appropriate (linear(L), exponential(E), logarithmic(G), power(P), polynomial(Y),
1. Intercepts
2. Rates of Change
3. Asymptotes
4. End Behavior
5. Domain and Range (using interval or inequality notation).
6. Maximum and Minimum
Given any of the four representations of a function, students will be able to …
7. Identify the function family
8. Match to an equivalent representation
9. Create an equivalent representation
Students will be able to write the equation for
10. A linear function (from the vertical intercept and a rate OR from two points)
11. A piecewise function (from a table, graph or context)
12. An exponential function (from the vertical intercept and a rate, from the vertical intercept and another point, or from financial information)
13. A power function (from context)
Given any of the four representation of a function, students will be able to identify and apply simple transformations:
14. Horizontal/vertical shifts
15. Horizontal/vertical axis reflections
110-3 Modeling with technology
Given data, students will be able to …
1. Identify the most appropriate model based both on technology and the context of the situation and justify reasoning.
2. Use technology (regression) to find a curve of best fit.
110-4 Algebraic Manipulation
Students will be able to perform the following manipulative skills and interpret the results:
1. Given input, find output.
2. Given output, find input.
Students will be able to perform the following manipulative skills with algebraic expressions and equations:
3. Use the rules of integer and rational exponents to simplify simple expressions.
4. Rewrite a logarithmic equation into exponential form and vice versa.
5. Factor simple equations.
110-5 Systems
Given a system of linear equations…
1. Write equations for the system (in standard or slope-intercept form).
2. Solve and interpret the results of a system of equations both graphically and algebraically.
Given a system of inequalities…
3. Write the system of inequalities.
4. Solve the system graphically.
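As a small illustration of objective 2 (solving algebraically), and as a hypothetical sketch rather than part of the course materials, a 2x2 linear system can be solved by elimination or, equivalently, Cramer's rule:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve the system  a1*x + b1*y = c1,  a2*x + b2*y = c2
    by Cramer's rule; raise if the lines are parallel or coincident."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# 2x + y = 5 and x - y = 1: adding the equations gives 3x = 6, so x = 2, y = 1.
solution = solve_2x2(2, 1, 5, 1, -1, 1)
```

Interpreting the result in context (the graphical view of objective 4) amounts to reading the returned pair as the single point where the two lines cross.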
110-6 Composition and Inverses
1. Perform a composition of two functions both with specific numbers and with variables.
2. Find the inverse of a given function using proper inverse function notation.
3. Given a context and a function such as t^-1(4)=20, a student will be able to interpret the equation in context.
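These objectives can be sketched concretely; the functions below are hypothetical examples, not from the course materials:

```python
def f(x):
    """An invertible function: f(x) = 2x + 3."""
    return 2 * x + 3

def f_inv(y):
    """The inverse of f, found by solving y = 2x + 3 for x."""
    return (y - 3) / 2

def compose(g, h):
    """Composition (g o h)(x) = g(h(x))  (objective 1)."""
    return lambda x: g(h(x))

# Objective 2: composing a function with its inverse returns the input.
round_trip = compose(f_inv, f)

# Objective 3: an equation like t^-1(4) = 20 says "the input that produces
# output 4 is 20", i.e. t(20) = 4.
```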
110-7 General Mathematical Skills
1. Explain complex mathematical concepts and procedures using appropriate mathematical terminology.
2. Organize and update a math portfolio as directed in a portfolio. | {"url":"http://www.fortlewis.edu/freshmanmath/tabid/6620/mid/8995/dnnprintmode/true/Default.aspx?SkinSrc=[G]Skins%2F_default%2FNo+Skin&ContainerSrc=[G]Containers%2F_default%2FNo+Container","timestamp":"2014-04-19T04:20:25Z","content_type":null,"content_length":"10424","record_id":"<urn:uuid:3d3bd780-58b2-4007-93b0-5d4b17212679>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solvable class field theory
Is/should there be a theory of finite solvable extensions over a given base field? Could it be based on/use class field theory? Assume the base field isn't a local field.
Tags: class-field-theory, nt.number-theory
As FC says, since solvable extensions are built up out of abelian extensions, class field theory is certainly relevant and helpful in understanding the structure of solvable extensions.
On the other hand, to do this in a systematic way requires understanding class field theory of each number field in a tower "all at once". The picture that one gets in this way seems
quite blurry compared to the classical goal of class field theory: to describe and parameterize the finite abelian extensions L of a field K in terms of data constructed from K itself. In
the case of a number field, this description is in terms of groups of (generalized) ideal classes, or alternately in terms of quotients of the idele class group. I'm pretty sure there's
no description like this for solvable extensions of any number field.
What I can offer is a bunch of remarks:
1) Sometimes one has a good understanding of the entire absolute Galois group of a field K, in which case one gets a good understanding of its maximal (pro-)solvable quotient. Of course
this happens if the absolute Galois group is abelian.
2) Despite the OP's desire to exclude local fields, this is one of the success stories: the full absolute Galois group of a local field (complete, discretely valued field with finite
residue field) is a topologically finitely presented prosolvable group with explicitly known generators and relations.
3) On the other hand, we seem very far away from an explicit description of the maximal solvable extension of Q. For instance, in the paper
MR1924570 (2003h:11135) Anderson, Greg W.(1-MN-SM) Kronecker-Weber plus epsilon. (English summary) Duke Math. J. 114 (2002), no. 3, 439--475.
the author determines the Galois group of the extension of Q^{ab} which is obtained by taking the compositum of all quadratic extensions K/Q^{ab} such that K/Q is Galois. Last week I heard a talk by Amanda Beeson of Williams College, who is working hard to extend Anderson's result to imaginary quadratic fields.
4) This question seems to be mostly orthogonal to the "standard" conjectural generalizations of class field theory, namely the Langlands Conjectures, which concern finite dimensional
complex representations of the absolute Galois group.
5) A lot of people are interested in points on algebraic varieties over the maximal solvable extension Q^{solv} of Q. The field arithmeticians in particular have a folklore conjecture
that Q^{solv} is Pseudo Algebraically Closed (PAC), which means that every absolutely irreducible variety over that field has a rational point. This would have applications to things like
the Inverse Galois Problem and the Fontaine-Mazur Conjecture (if that is still open!). Whether an explicit description of Q^{solv}/Q would be so helpful in these endeavors seems
debatable. I have a paper on abelian points on algebraic varieties, in which the input from classfield theory is minimal.
The two papers on solvable points that I know of (and very much admire) are:
MR2057289 (2005f:14044) Pál, Ambrus Solvable points on projective algebraic curves. Canad. J. Math. 56 (2004), no. 3, 612--637.
MR2412044 (2009m:11092) Çiperiani, Mirela; Wiles, Andrew Solvable points on genus one curves. Duke Math. J. 142 (2008), no. 3, 381--464.
| {"url":"http://mathoverflow.net/questions/4379/solvable-class-field-theory/4386","timestamp":"2014-04-24T03:59:52Z","content_type":null,"content_length":"60853","record_id":"<urn:uuid:0e9081b8-d512-4c99-aa0e-d43036b6a701>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
History of the Christian Church, Volume I: Apostolic Christianity. A.D. 1-100.
Subject Index
Apostles, i.III_1.20-p106.2
John, i.VII.41-p0.1
Paul, i.V_1.30-p0.1
Peter, i.IV_1.25-p0.1
Baptism, i.IX.53-p40.1
Christian Ministry, i.X.59-p0.1
Spiritual Gifts, i.VIII.44-p18.2
Greek Literature, i.I_1.12-p0.1
Heathenism, i.I_1.11-p0.1
James, i.IV_1.27-p0.1
Jesus Christ, i.II_1-p3.2
Judaism, i.I_1.9-p0.1
New Testament
Acts, i.XII.85-p0.1
Epistle To The Hebrews, i.XII.100-p0.1
Epistle to Philemon, i.XII.98-p0.1
Epistle to the Colossians, i.XII.94-p0.1
Epistle to the Ephesians, i.XII.95-p0.1
Epistle to the Philippians, i.XII.97-p0.1
Epistles of John, i.XII.87-p41.2
Epistles to the Corinthians, i.XII.90-p0.1
Epistles to the Galatians, i.XII.91-p0.1
Epistles to the Thessalonians, i.XII.89-p0.1
James, i.XII.87-p11.2
Jude, i.XII.87-p34.2
Luke, i.XII.82-p1.2
Mark, i.XII.81-p0.1
Matthew, i.XII.80-p0.1
Peter, i.XII.87-p20.2
Revelation, i.XII.101-p0.1
Papacy, i.IV_1.26-p26.2
Resurrection, i.II_1.19-p0.1
Roman Empire, i.I_1.12-p0.2
Rome, i.V_1.36-p19.2
Slavery, i.VIII.47-p11.4
Spiritual Gifts
Tongues, i.IV_1.24-p62.1
Synagogue, i.IX.51-p0.1
Synoptics, i.XII.79-p5.2
The Lord's Supper, i.IX.54-p34.3
Worship, i.IX.52-p0.1, i.VIII.50-p20.1
| {"url":"http://m.ccel.org/ccel/schaff/hcc1.ii.i.html?destination=ccel/schaff/hcc1.ii.i.html&device=mobile","timestamp":"2014-04-19T16:35:45Z","content_type":null,"content_length":"17728","record_id":"<urn:uuid:cf41ff13-ec12-4f87-aed3-74c6d18dc1f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2012 [00162]
Re: ListPolarPlot questions
• To: mathgroup at smc.vnet.net
• Subject: [mg124086] Re: ListPolarPlot questions
• From: "David Park" <djmpark at comcast.net>
• Date: Sun, 8 Jan 2012 04:26:36 -0500 (EST)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• References: <3193533.81836.1325933394495.JavaMail.root@m06>
Make a radius interpolation function for your curve using the
PeriodicInterpolation option. And use a higher InterpolationOrder.
datapoints = Table[phi = (i - 1) 2 Pi/10; Cos[phi]^2, {i, 1, 11}];
radiusFunc =
ListInterpolation[datapoints, {{0, 2 \[Pi]}},
InterpolationOrder -> 4, PeriodicInterpolation -> True]
Then use PolarPlot with the radiusFunc. To translate to the circle use
Translate or GeometricTransformation with a Show statement but it is
confusing because of the necessary graphics level jumping. With the
Presentations Application from my web site it is easy. We just draw the
circle and translate the polar curve to the center of it.
<< Presentations`
{Circle[{1, 1}, 1],
PolarDraw[radiusFunc[\[Theta]], {\[Theta], 0, 2 \[Pi]}] //
TranslateOp[{1, 1}]}]
David Park
djmpark at comcast.net
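For readers outside Mathematica, the same idea, making the interpolator aware of periodicity by wrapping sample indices past the ends, can be sketched in plain Python with a periodic Catmull-Rom spline. This is an illustrative aside, not part of the original exchange, and the function below is hypothetical:

```python
import math

def periodic_catmull_rom(samples, t):
    """Smoothly interpolate equally spaced periodic samples (one period,
    endpoint NOT repeated); t is measured in periods, so t and t+1 give the
    same value. Indices wrap modulo len(samples), so there is no kink at t=0."""
    n = len(samples)
    x = (t % 1.0) * n            # position in sample units
    i = int(math.floor(x))
    u = x - i                    # fractional position within segment i
    p0, p1, p2, p3 = (samples[(i + j - 1) % n] for j in range(4))
    return 0.5 * (2 * p1 + (p2 - p0) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u ** 2
                  + (3 * p1 - 3 * p2 + p3 - p0) * u ** 3)

# Ten samples of cos(phi)^2 over one period, like the poster's data:
pts = [math.cos(2 * math.pi * k / 10) ** 2 for k in range(10)]
```

Because the tangent at sample 0 is built from its wrapped neighbors, the curve passes through phi = 0 smoothly instead of with a corner.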
From: Eric Michielssen [mailto:emichiel at eecs.umich.edu]
I have two questions regarding the use of ListPolarPlot.
1. The function I'd like to plot is periodic. Suppose I have samples of it
at equispaced angles, the first one for angle zero, and the last one for 2
Pi; the data for 0 and 2 Pi is identical. I want to plot the function for
all angles, from 0 to 2 Pi, connected by a smooth line. Choosing options
joined -> True and InterpolationOrder -> 2 almost does the trick, except
that there is a kink in the graph at phi = 0. As the interpolator does not
know the function is periodic. Is there a way around that?
Example: g1 = ListPolarPlot[Table[phi = (i - 1) 2 Pi/10; Cos[phi]^2, {i, 1,
11}], DataRange -> {0., 2 Pi}, Axes -> None, Joined -> True,
InterpolationOrder -> 2]
(In practice my table is numerically derived, not that of a Cos^2 function)
2. Suppose I want to plot a circle centered away from the origin, and then
the above g1 centered about the same point. How do I do that?
g1 = Graphics[
ListPolarPlot[Table[phi = (i - 1) 2 Pi/10; Cos[phi]^2, {i, 1, 11}],
DataRange -> {0., 2 Pi}, Axes -> None, Joined -> True,
InterpolationOrder -> 2]];
g2 = Graphics[Circle[{1., 1.}, 1]];
Show[g1, g2]
But now with the center of g1 shifted to {1,1} (and, if possible, smooth
about phi = 0...). I presume this can be done using Offset but I cannot get
that to work.
Eric Michielssen | {"url":"http://forums.wolfram.com/mathgroup/archive/2012/Jan/msg00162.html","timestamp":"2014-04-19T14:57:07Z","content_type":null,"content_length":"27658","record_id":"<urn:uuid:f37b4ef7-51ec-407f-8d62-2e9393c8254f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
heat transfer (conduction questions)
I'm not quite sure where you get the equation
Also are heat flow and heat transfer the same thing?
Yes, heat transfer and flow are the same as I use them. Heat transfer rate and heat flow rate are also the same as I use them.
As for the term GA dL/dt, it represents a rate of heat transfer to freeze water. If you had a volume of liquid water at 0 C, the energy you'd have to remove from it to turn it all to ice (also at 0
C) would be G times its volume where G is the heat of fusion needed per unit volume. So the rate at which this is done is G * A * dL/dt where A is area, and dL/dt is the rate of thickness change of
the layer. Note the units. They work out to be Joules/hour. They agree with the right hand side of the equation. A*dL/dt is rate of volume change. | {"url":"http://www.physicsforums.com/showthread.php?t=640583","timestamp":"2014-04-18T10:40:18Z","content_type":null,"content_length":"52140","record_id":"<urn:uuid:bcd394c6-4b10-48e1-877c-f85b2629339b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
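The unit bookkeeping above can be checked with a quick numeric sketch; the values below are illustrative assumptions, not numbers from the thread:

```python
# Q = G * A * dL/dt: rate of heat removal needed to thicken an ice layer.
G = 3.06e8      # J/m^3, volumetric heat of fusion (~334 kJ/kg * ~917 kg/m^3)
A = 2.0         # m^2, area of the layer
dL_dt = 1e-3    # m/hour, growth rate of the layer thickness

Q_rate = G * A * dL_dt   # J/hour, about 6.1e5 for these numbers
```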
On the New Notion of the Set-Expectation for a Random Set of Events
Vorobyev, Oleg Yu. and Vorobyev, Alexey O. (2003): On the New Notion of the Set-Expectation for a Random Set of Events. Published in: Proc. of the II All-Russian FAM'2003 Conference, Vol. 1 (27 April 2003): pp. 23-37.
The paper introduces a new notion of the set-valued mean of a random set. The means are defined as families of sets that minimize mean distances to the random set. The distances are determined by metrics in spaces of sets or by suitable generalizations. Some examples illustrate the use of the new definitions.
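As an illustrative aside (not from the paper itself): for random finite sets under the symmetric-difference distance |A Δ B|, a mean set in this minimizing sense has a simple majority-vote description — it keeps the elements that occur in more than half of the realizations. A hypothetical sketch:

```python
from itertools import combinations

def mean_set(samples):
    """A set minimizing the average symmetric-difference distance to the
    given sample sets: keep each element present in more than half of them.
    (Elements in exactly half may be kept or dropped; both choices minimize.)"""
    n = len(samples)
    universe = set().union(*samples)
    return {e for e in universe if sum(e in s for s in samples) * 2 > n}

def mean_dist(candidate, samples):
    """Average symmetric-difference distance from candidate to the samples."""
    return sum(len(candidate ^ s) for s in samples) / len(samples)

samples = [{1, 2}, {1, 3}, {1, 2, 3}]
m = mean_set(samples)   # {1, 2, 3}: every element appears in at least 2 of 3 sets
```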
Item Type: MPRA Paper
Original Title: On the New Notion of the Set-Expectation for a Random Set of Events
Language: English
Keywords: mean random set, metrics in set space, mean distance, Aumann expectation, Frechet expectation, Hausdorff metric, random finite set, mean set, set-median, set-expectation
C - Mathematical and Quantitative Methods > C0 - General > C02 - Mathematical Methods
Subjects: C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General
C - Mathematical and Quantitative Methods > C4 - Econometric and Statistical Methods: Special Topics
Item ID: 17901
Depositing User: Oleg Vorobyev
Date Deposited: 16 Oct 2009 07:09
Last Modified: 12 Feb 2013 21:49
[1] I.E. Abdou and W.K. Pratt, Quantitative design and evaluation of enhancement thresholding edge detectors, in: Proc. of the IEEE 67 (1979) 753-763.
[2] L.S. Andersen, Inference for hidden Markov models, in: A. Possolo ed., Proc. of the Joint IMS-AMS-SIAM Summer Res. Conf. on Spatial Statistics and Imaging (Brunswick, Maine, 1988)
[3] Z. Artstein and R.A. Vitale, A strong law of large numbers for random compact sets, Ann. Probab. 3 (1975) 879-882.
[4] A.J. Baddeley, An error metric for binary images, in: W. Forstner, H. Ruwiedel, eds., Robust Computer Vision: Quality of Vision Algorithms (Wichmann: Karlsruhe, 1992) 59-78.
[5] A.J. Baddeley, Errors in binary images and an Lp version of the Hausdorff metric, Nieuw Archief voor Wiskunde 10 (1992) 157-183.
[6] A. Baddeley and I. Molchanov, Averaging of Random Sets Based on Their Distance Functions, Journal of Mathematical Imaging and Vision 8 (1998) 79-92.
[7] C. Beer, On convergence of closed sets in a metric space and distance functions, Bull Austral. Math. Soc. 31 (1985) 421-432.
[8] M. Frechet, Les elements aleatoires de nature quelconque dans un espace distancie, Ann. Inst. H. Poincare 10 (1948) 235-310.
[9] N. Friel and I. Molchanov, A New Thresholding Technique Based on Random Sets, Pattern Recognition 32 (1999) 1507-1517.
[10] A. Frigessi and H. Rue, Bayesian image classification with Baddeley's Delta loss, J. Comput. Graph. Statist. 6 (1997) 55-73.
References: [11] S Geman and D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. Pattern Analysis and Machine Intelligence 6 (1984) 721-741.
[12] J. Haslett and G. Horgan, Linear discriminant analysis in image restoration and the prediction of error rate, in: A. Possolo Ed., Proc. of the Joint IMS-AMS-SIAM Summer Res. Conf.
on Spatial Statistics and Imaging (Brunswick, Maine, 1988) 112-119.
[13] I. Molchanov, Random Sets in View of Image Filtering Application, in: Dougherty E., Astola J., eds., Nonlinear Filter for Image Processing 11 (1999) 419-447.
[14] H. Rue and A.R. Syversveen, Bayesian object recognition with Baddeley's Delta loss, Preprint Statistics 8 Department of Mathematics, The University of Trondheim, (Trondheim,
Norway, 1995).
[15] D. Stoyan and H. Stoyan, Fractals, Random Shapes and Point Fields (Wiley, Chichester, 1994).
[16] C.C. Taylor, Measures of similarity between two images, in: A. Possolo ed., Proc. of the Joint IMS-AMS-SIAM Summer Res. Conf. on Spatial Statistics and Imaging (Brunswick, Maine,
1988) 382-391.
[17] R.A. Vitale, An alternate formulation of mean value for random geometric figures, J. Microscopy 151 (1988) 197-204.
[18] O.Yu. Vorobyev, About Set-Characteristics of States of Distributed Probabilistic Processes, Izvestiya of SB USSR AS 3 (1977) 3-7.
[19] O.Yu. Vorobyev, Mean Measure Modeling (Moscow, Nauka, 1984).
[20] O.Yu. Vorobyev, Random Set Models of Fire Spread, Fire Technology, USA National Fire Protection Association, 32 (2) (1996) 137-173.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/17901 | {"url":"http://mpra.ub.uni-muenchen.de/17901/","timestamp":"2014-04-16T04:29:02Z","content_type":null,"content_length":"23359","record_id":"<urn:uuid:f0f18a1e-8104-4f17-b996-14f479df7eed>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berwyn, IL Science Tutor
Find a Berwyn, IL Science Tutor
...I am a full-time college professor. I teach Biology. I have been awarded the Distinguished Professor Award.
4 Subjects: including biology, anatomy, physiology, pharmacology
...My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are
able to grasp. I use both analytical as well as graphical methods or a combination of the two as needed ...
34 Subjects: including physics, English, reading, writing
...I'd love to help! I have taught various levels of physics to public and private high school students for 20 years. On the AP Physics B exam, almost half of my students earn scores of 5, the
highest score possible, while most of the other half receive scores of 4!
2 Subjects: including physics, algebra 1
...I took and passed again all sorts of Advanced Chemistry courses. I have been assigned as a Teaching Assistant for the General Chemistry course several times, and upon Faculty request and
evaluations and ratings from the students, I have been assigned to teach the General Chemistry intensive cour...
2 Subjects: including chemistry, organic chemistry
...I have not tutored for employment before though have experience leading groups of friends and classmates in a wide range of subjects. I do have access my own means of transportation and am
willing to travel. Ideally looking for short term work up through high school finals.I am a Forensic Scien...
13 Subjects: including biology, physics, trigonometry, ACT Science
Related Berwyn, IL Tutors
Berwyn, IL Accounting Tutors
Berwyn, IL ACT Tutors
Berwyn, IL Algebra Tutors
Berwyn, IL Algebra 2 Tutors
Berwyn, IL Calculus Tutors
Berwyn, IL Geometry Tutors
Berwyn, IL Math Tutors
Berwyn, IL Prealgebra Tutors
Berwyn, IL Precalculus Tutors
Berwyn, IL SAT Tutors
Berwyn, IL SAT Math Tutors
Berwyn, IL Science Tutors
Berwyn, IL Statistics Tutors
Berwyn, IL Trigonometry Tutors
Nearby Cities With Science Tutor
Bellwood, IL Science Tutors
Broadview, IL Science Tutors
Brookfield, IL Science Tutors
Cicero, IL Science Tutors
Forest Park, IL Science Tutors
Forest View, IL Science Tutors
La Grange Park Science Tutors
Lyons, IL Science Tutors
Maywood, IL Science Tutors
North Riverside, IL Science Tutors
Oak Park, IL Science Tutors
River Forest Science Tutors
Riverside, IL Science Tutors
Stickney, IL Science Tutors
Westchester Science Tutors | {"url":"http://www.purplemath.com/berwyn_il_science_tutors.php","timestamp":"2014-04-20T02:09:26Z","content_type":null,"content_length":"23539","record_id":"<urn:uuid:09eb8754-b996-4016-a53a-30ca6d0c0c71>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
xcorr - biased and unbiased
There are 5 messages in this thread.
xcorr - biased and unbiased - puckman - 2008-06-12 07:27:00
Could someone please tell me what is the difference between biased and
unbiased? And when do I use biased or unbiased? or any restriction to use
one of each?
I was trying to implement Linear Predictor by using autocorrelation.
When I tested with sine wave, biased correlation provides a better shape,
but has a scaling problem in a peak of sinewave.
Unbiased has a less scaling problem, but for some position, the prediction
is extremely bad.
Thank you.
Re: xcorr - biased and unbiased - Rune Allnor - 2008-06-12 08:26:00
On 12 Jun, 13:27, "puckman" <kwa...@hotmail.com> wrote:
> hello,
> Could someone please tell me what is the difference between biased and
> unbiased? And when do I use biased or unbiased? or any restriction to use
> one of each?
The biased correlation estimator is biased. The expression
looks something like this (using autocorrelation, as the
details don't get so messy):

  rxx[k] = (1/N) * sum_{n=0}^{N-1-|k|} x[n+k] x[n]

Only N-|k| products enter the sum, but the divisor is N,
so the estimate is scaled by (N-|k|)/N relative to the
true autocorrelation Rxx[k], differently as a function of k:

  E[rxx[0]]   = Rxx[0]
  E[rxx[N-1]] = (1/N) Rxx[N-1]

Since in general E[rxx[k]] = ((N-|k|)/N) Rxx[k] =/= Rxx[k],
the estimator rxx[k] is biased even when x[n] is stationary.

The unbiased estimator rxx'[k], on the other hand, divides
by the actual number of terms in the sum:

  rxx'[k] = (1/(N-|k|)) * sum_{n=0}^{N-1-|k|} x[n+k] x[n]

In this case there is no residual scaling term, so
E[rxx'[k]] = Rxx[k] and rxx'[k] is unbiased.

The difference is that the biased estimator has bounded
variance whereas the unbiased estimator does not.
> I was trying to implement Linear Predictor by using autocorrelation.
> When I tested with sine wave, biased correlation provides a better shape,
> but has a scaling problem in a peak of sinewave.
> Unbiased has a less scaling problem, but for some position, the prediction
> is extremely bad.
This is because the variance is unbounded. This basically
means that results and predictions based on the unbiased
estimator can become unstable.
One uses the biased estimator as a matetr of course unless
one has a very specific and justified reason not to. The
justification would be that the bias introduced by the
stable estimator is a worse problem than the instability
of the unbiased estimator.
I have yet to see a real-life case where that actually happens.
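A quick numerical sketch of the two estimators Rune describes (plain Python/NumPy here; MATLAB's xcorr with the 'biased'/'unbiased' flags computes the analogous quantities). White noise makes the effect easy to see: at lags close to N the unbiased estimate averages only a handful of products, so its variance blows up, while the biased one is damped toward zero:

```python
import numpy as np

def autocorr(x, kmax, unbiased=False):
    """Estimate rxx[k] for k = 0..kmax from a length-N record x."""
    N = len(x)
    r = np.zeros(kmax + 1)
    for k in range(kmax + 1):
        s = np.dot(x[:N - k], x[k:])          # sum_n x[n] x[n+k]
        r[k] = s / (N - k) if unbiased else s / N
    return r

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                  # white noise: true rxx[k] = delta[k]
rb = autocorr(x, 990, unbiased=False)          # biased: damped, bounded variance
ru = autocorr(x, 990, unbiased=True)           # unbiased: variance grows with lag
print(np.abs(rb[-10:]).max(), np.abs(ru[-10:]).max())
```

The two estimates agree exactly at lag 0 and diverge as k approaches N, which is the bounded/unbounded variance behaviour discussed above.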
Re: xcorr - biased and unbiased - puckman - 2008-06-12 15:08:00
Thank you very much, Rune, for kindly answering my question!!
I am afraid this is a trivial question, but I still don't have
confidence.
1. How could I possibly prove if the variance of the specific signal is
bounded or not?
2. "The justification would be that the bias introduced by the
stable estimator is a worse problem than the instability of the unbiased
estimator."
Is there any example or published paper describing the use of the unbiased
estimator?
Thank you very much.
Re: xcorr - biased and unbiased - Rune Allnor - 2008-06-12 19:45:00
On 12 Jun, 21:08, "puckman" <kwa...@hotmail.com> wrote:
> Thank you very much, Rune,for kindly answer to my question!!
> I am afraid that if this is trivial question, but I still don't have
> confidence.
> 1. How could I possibly prove if the variance of the specific signal is
> bounded or not?
It's been 15 years since I looked into those proofs;
they are in chapter 6 of the book by Therrien. Can't remember
any other references off the top of my head, but I
wouldn't be surprised if they can be found in some
book by Kay.
> 2. "The justification would be that the bias introduced by the
> stable estimator is a worse problem than the instability of the unbiased
> estimator."
> Is there any example
You say the results are bad when you use it in your
own work, that ought to be example enough...
> or published paper describing the use of unbiased
> estimator?
...not that I can think of; people tend to publish the
good or useful results. The best you can hope for
is an example or exercise in the type of book that
contains the formal discussion of the variance.
Re: xcorr - biased and unbiased - puckman - 2008-06-13 00:16:00
Thank you for all your help!!~
Re: Formal definitions
From: "Ira Baxter" <idbaxter@semdesigns.com>
Newsgroups: comp.compilers
Date: Sat, 10 Oct 2009 11:26:57 -0500
Organization: Compilers Central
References: 09-10-009
Keywords: optimize, theory
Posted-Date: 10 Oct 2009 23:08:58 EDT
"Tim Frink" <plfriko@yahoo.de> wrote in message
> I'm looking for books/papers/websites where compiler optimizations
> are formally defined, i.e. not just textual and a code example.
> Any hints?
One answer is source-to-source program transformation systems, in
which pairs of target-language surface syntax patterns are written to
encode the transformation as a rewrite of the source text of the
program, e.g., pattern1 ==> pattern2
Stratego http://en.wikipedia.org/wiki/Stratego/XT and TXL
http://en.wikipedia.org/wiki/TXL_(programming_language) and Fermat
http://en.wikipedia.org/wiki/FermaT_Transformation_System are tools in
which such source-to-source transformations are entirely coded in
special rule languages. Simple transformations that are context-free
can be coded directly. Taking into account context is harder and
requires cooperating sets of rules. Since rewrites are
generalizations of arbitrary string replaces, they are generalizations
of Post systems and thus can accomplish any transformation in theory.
Stratego and TXL can be applied to any language. Fermat is limited to
WSL, a language with formally defined semantics, but admits parsers
for other languages that map to and from WSL.
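As a toy illustration (plain Python, not the rule language of any of the tools above), a pair of pattern1 ==> pattern2 rewrites applied bottom-up over a little expression tree might look like:

```python
# Toy source-to-source rewriter: tuples ("op", lhs, rhs) stand in for the AST.
def rewrite(node):
    if not isinstance(node, tuple):
        return node
    op, a, b = node
    a, b = rewrite(a), rewrite(b)          # rewrite subtrees first (bottom-up)
    # Rule: e * 2  ==>  e + e   (a classic strength reduction)
    if op == "*" and b == 2:
        return ("+", a, a)
    # Rule: e + 0  ==>  e       (identity elimination)
    if op == "+" and b == 0:
        return a
    return (op, a, b)

expr = ("+", ("*", "x", 2), 0)             # (x * 2) + 0
print(rewrite(expr))                        # ('+', 'x', 'x')
```

Real systems like Stratego, TXL and DMS add pattern variables, traversal strategies, and (in DMS's case) side conditions that consult symbol tables and flow analyses.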
Like Stratego and TXL,
DMS http://en.wikipedia.org/wiki/DMS_Software_Reengineering_Toolkit
has a rewrite rule language. However, it uses standard
compiler technology method to collect the context information
(symbol tables, control and data flow analyses) of
interest to transformations. Because context is so
important, as a practical matter,
many DMS transformations are written as
pattern1 ==> pattern2 if condition
where the condition inspects some of the collected context
data. Similarly, rewrites aren't always a good
way to express transformations; DMS also
allows one to code complex transformations
procedurally (such as lambda lifting,
clone removal, ...)
Now transformations for a "compiler" are often from a programming
language level to a machine code level "language". To the extent that
one can explicitly model both languages in the pattern language, it
can be handled directly by a pattern rewriter. Stratego, TXL and DMS
can all do this easily. Stratego and TXL do this by allowing the
construction of a (joint) union grammar for source and target languages,
with built in operators that compose grammars. DMS does this by
allowing disjoint unions of grammars so no grammar composition is
needed. One simply writes patterns in each of the languages and tags
the pattern according to the grammar for which it applies.
One question that occurs over and over, is, "are the transformations
correct?" and "can you formally prove it?" Since the semantics of
the languages being transformed are not well-defined ina formal sense
(anybody seen denotational semantics for C++?) the answer to "can you
prove it" as a practical matter is outright NO. Are they correct?
Well, after rigourous testing, they are as correct as any other
program you might write. The Fermat system attempts to make
provability at least possible by insisting on the use of WSL, which is
formally defined. Of course, this simply pushes the burden on the
translator from your language X, which doesn't have formal semantics,
to WSL :-{
As an aside, I often get asked, "if you can't prove the
transformations correct, how can you trust the tool?". My response is,
"if you aren't going to use a tool, then you're going to use Joe to do
it, right? How can you trust Joe?".
Ira Baxter, CTO
Limit of a sequence
Originally posted by KL Kam
Prove that for any positive real numbers a and b,
lim_{n→∞} [(an+b)^(1/n) − 1] = 0
This one just takes a little rearranging.
First, rearrange it to:
lim_{n→∞} (an+b)^(1/n) = 1
Then take the natural log of both sides to get:
lim_{n→∞} ln(an+b)/n = 0
This goes to ∞/∞, which is an indeterminate form and ripe for L'Hopital's rule.
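Spelling out the step the post leaves implicit (and assuming a > 0, so that an + b → ∞), differentiating top and bottom gives:

```latex
\lim_{n\to\infty}\frac{\ln(an+b)}{n}
  \;\overset{\text{L'H}}{=}\;
\lim_{n\to\infty}\frac{a/(an+b)}{1} \;=\; 0,
\qquad\text{so}\qquad
\lim_{n\to\infty}(an+b)^{1/n} = e^{0} = 1
\;\Longrightarrow\;
\lim_{n\to\infty}\bigl[(an+b)^{1/n}-1\bigr] = 0 .
```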
Numerical Analysis and Scientific Computing
Numerical Analysis and Scientific Computation is one of the largest and most active groups within the Department, with most of the major research areas represented. Our group is interested in a wide
range of topics, from the study of fundamental, theoretical issues in numerical methods; to algorithm development; to the numerical solution of large-scale problems in fluid mechanics, solid
mechanics, electromagnetism and materials science.
Graduate Studies
Students wishing to study numerical analysis or scientific computing usually enroll in the Applied Mathematics graduate program (see the Graduate Program web pages for more details). There are a
number of courses specifically intended for students interested in this area as well as topics courses. For example:
Students in Applied Mathematics are also encouraged to take courses outside the department. Courses in engineering and computer science can build on the foundations offered above. Other courses in
the Department of Mathematical Sciences of interest to students of Numerical Analysis and Scientific Computing include:
In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
December 6th 2011, 07:59 PM
In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B?
I believe I need to assume A and do something Elim V, but I am really stumped as to what to do exactly. Any help would be appreciated.
December 6th 2011, 11:12 PM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
Is a truth table not an acceptable proof for this? That's what I would use.
December 7th 2011, 02:27 AM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
What do you mean by "corresponds to"?
To prove (~A v B) -> (A -> B), assume ~A v B and A and do Elim V on ~A v B. If ~A, then you get a contradiction with A, so you can derive B by ex falso. Otherwise, you already have B.
To prove the converse implication is a little more difficult and requires double negation elimination.
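The same two directions can be checked in a proof assistant; here is a sketch in Lean 4 (not the Fitch program's syntax). Note how the second direction needs a classical case split, i.e. double negation elimination:

```lean
-- (¬A ∨ B) → (A → B): case split on the disjunction; the ¬A branch
-- contradicts the assumption A, giving B by ex falso (absurd).
example (A B : Prop) (h : ¬A ∨ B) : A → B :=
  fun a => h.elim (fun na => absurd a na) id

-- (A → B) → (¬A ∨ B): needs classical logic (Classical.byCases).
example (A B : Prop) (h : A → B) : ¬A ∨ B :=
  Classical.byCases (fun a : A => Or.inr (h a)) (fun na : ¬A => Or.inl na)
```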
December 7th 2011, 03:46 AM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
Fitch-Style Proof Builder
December 7th 2011, 08:50 AM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
This is what I've done so far. It's how to get from step 5 to 6 that confuses me.
December 7th 2011, 09:02 AM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
Nevermind, I figured it out. And I even know how to do the reverse. Thanks a lot guys!
December 7th 2011, 09:10 AM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
So, could you say how you derived B in step 6 for everybody's benefit?
December 7th 2011, 04:59 PM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
Fitch says that I've proved what I tried to prove, yet upon more reflection I'm worried it might be due to a bug in the program (view steps 6-9). Could somebody confirm if I did this properly?
Attachment 23042
December 7th 2011, 10:13 PM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
Usually natural deduction is presented with the following two rules:
$\frac{\bot}{A}\ \bot\,\mbox{Elim}$ and $\frac{\neg\neg A}{A}\ \mbox{DNE}$ (double-negation elimination, called $\neg$ Elim in your program) for all formulas $A$
The rule $\bot$ Elim can be derived from DNE, which is what you did in your derivation:
~B      Assumption
_|_     _|_ Intro
~~B     ~ Intro
B       DNE
Here the assumption ~B can be vacuous, in which case you derive B just from $\bot$.
The reason to keep these two rules separate is that the logic without DNE is quite interesting on its own.
Which program do you use to create Fitch-style derivations?
December 10th 2011, 01:23 PM
Re: In formal Logic (Fitch Format), how do I prove ~A v B corresponds to A -> B
The program is called Fitch; it comes on the CD accompanying the book Language, Proof and Logic.
Equivariant Cohomology
The International Congress of Mathematicians will be taking place in Madrid relatively soon, in late August. One tradition at this conference is the announcement of the Fields Medals, and I’m getting
embarrassed that I'm not hearing any authoritative rumors about this (other than about Tao and Perelman); if you have any, please send them my way. One other tradition is to have speakers write up
their talks in advance, with the proceedings available at the time of the conference, so already some write-ups of the talks to be given there have started appearing on the arXiv.
Last night, Michele Vergne’s contribution to the proceedings appeared, with the title Applications of Equivariant Cohomology. On her web-site she has a document she calls an exegesis of her
scientific work, this gives some context for the equivariant cohomology paper. She also is co-author of a book called Heat Kernels and Dirac Operators, which has a lot more detail on some aspects of
this subject. Finally, there has been a lot of nice recent work in this area by Paul-Emile Paradan.
Equivariant cohomology comes into play when one has a space with a group acting on it, and it mixes aspects of group (or Lie algebra) cohomology and the cohomology of topological spaces. There are
various ways of defining it, the definition that Vergne works with is a bit more general than the one more commonly used. It involves both differential forms on the space, and generalized functions
on the Lie algebra of the group.
The beauty of equivariant cohomology is that it often computes something more interesting than standard cohomology, and you can often do computations simply, since the results just depend on what is
happening at the fixed points of the group action. There’s a similar story in K-theory: when you have a group action on a space, equivariant K-groups can be defined, with representatives given by
equivariant vector bundles. Integration in K-theory corresponds to taking the index of the Dirac operator, and in the equivariant case this index is not just an integer, but a representation of the
group. The index formula relates cohomology and K-theory, and one of Vergne’s main techniques is to work with the equivariant version of this formula.
In the case of a compact space with action of a compact group, there’s a localization formula that tells you how to integrate representatives of equivariant cohomology classes in terms of fixed point
data. In many cases, this leads to a simple calculation, one famous example is the Weyl character formula, which can be gotten this way. New phenomena occur when the group action is free, and thus
without fixed points. This was first investigated by Atiyah (see Lecture Notes in Math, volume 401), who found that he had to generalize the index theorem to deal with not just elliptic operators,
but “transversally elliptic” ones. Such operators are not elliptic in the directions of orbits of the group action, but behavior of the index is governed by representation theory in those directions.
Vergne has been studying examples of this kind of situation, and it is here that generalized functions on the Lie algebra come into play. Integrating the kind of interesting equivariant cohomology
classes that occur in the transversally elliptic index theory case over a space gives not functions but generalized functions on the Lie algebra. There’s a localization formula in this case due to
Witten, who found it and applied it to 2d gauge theory in his wonderful 1992 paper Two Dimensional Gauge Theories Revisited.
This kind of mathematics, growing out of the equivariant index theorem, is strikingly deep and beautiful. It has found many applications in physics, from the ones in 2d gauge theory pioneered by
Witten, to more recent calculations of Gromov-Witten invariants. It leads to a mathematically rigorous derivation of some of the implications of mirror symmetry in special cases, and a wide variety
of other results related to topological strings. My suspicion is that it ultimately will be used to get new insight into the path integrals of gauge theory, not just in 2 dimensions but in 3 or 4.
Update: Vergne has another nice new paper on the arXiv. It’s some informal notes on the Langlands program which she describes as follows:
These notes are very informal notes on the Langlands program. I had some pleasure in daring to ask colleagues to explain to me the importance of some of the recent results on Langlands program, so I
thought I will record (to the best of my understanding) these conversations, and then share them with other mathematicians. These notes are intended for non specialists. Myself, I am not a specialist
on this particular theme. I tried to give motivations and a few simple examples.
It would be great if more good mathematicians wrote up informal notes like this about subjects they have learned something about, even if they are not experts. The notes are entitled All What I
Wanted to Know About Langlands Program and Was Afraid to Ask.
22 Responses to Equivariant Cohomology
1. I guess the ‘cosmology’ in the 4th paragraph is a typo and should be ‘cohomology’.
2. Oops, many people might like an application of this to cosmology, but that definitely was a typo…
…and I’m getting embarassed that I’m not hearing any authoritative rumors about this (other than about Tao and Perlman…
Apparently, it is ‘Perelman’.
4. Bhargava.
5. I’ve noticed there are four slots for fields medalists’ talks, so perhaps four medals. If indeed Perelman and Tao get one that’s two left. Recent young EMS prize winners make up for a good
Looking at his recent publication record, I’d bet on Okounkov.
There are also the lists of recent Clay Fellows http://www.claymath.org/fas/research_fellows/
and award winners
I’ve looked at recent AMS prize winners but most seem too old.
6. what about Zhu and Cao? the 2 dudes who completed the proof of the poincare conjecture.
7. My bet is T.Tao,A.Borodin,G.Perelman (though the age limit could be a problem in Perelman case). This could be surprising that in such a short time between publication of a paper and vetting of
it and the award of a prize is given in the case of Zhu and Cao.
8. doctorgero,
Thanks, typo fixed.
About possible Fields Medalists:
Zhu and Cao: No way, and, knowing Cao, I’m pretty sure he’s over 40 anyway.
Okounkov: Seems unlikely. In that field I'd think his Princeton colleague Rahul Pandharipande would be a more likely candidate.
Bhargava: Hmm, that actually sounds plausible. Maybe “Authoritative Bigshot” really is one…
9. Wikipedia claims Perelman is just 40 this year :
(Not sure to what extent it’s reliable, though..)
10. Who is A Borodin?
11. A great Georgian-Russian composer and chemist?
Or more likely, this guy: http://www.cs.toronto.edu/~bor/
Alexei Borodin teaches at Caltech. Representation theory; he has done groundbreaking work on big groups and related aspects of representation theory.
13. Brett and Zelah,
The reference must be to Alexei Borodin, now at Caltech, see
The list of Clay research fellows does probably contains many of the plausible candidates for the Fields.
14. There are also the prize winners from the 3rd ECM
Seidel, for his work on mirror symmetry and symplectic topology, and Gaitsgory, for his work on geometric Langlands, seem viable candidates. Since low dimensional topology/geometry always seems a
popular subject, how about Zoltan Szabo?
Both Borodin and Bhargava are eligible to claim the prize four years hence. This could affect their bids – in 2002, only 2 medals were distributed even though Tao was also a favorite then.
Does anyone know who is on the medal committee this go-round?
15. “This kind of mathematics, growing out of the equivariant index theorem … leads to a mathematically rigorous derivation of some of the implications of mirror symmetry in special cases, and a wide
variety of other results related to topological strings.”
Could you elaborate on that? How is equivariant index theory used there?
16. Hi anon,
This is a complicated story. Index theory itself hasn’t been much used, but there has been a lot of use of equivariant cohomology and localization formulae. For some surveys of this, see Kefeng
Liu’s web-site
where he has various papers and surveys, including one by Yau, of the subject.
A lot of this goes back to work by Givental, you can find his papers at his web-site
17. Peter,
Isn’t there a “for dummies” exposition of this stuff? How about putting it in historical context?
18. Danny,
Sorry, I don’t know of any really good simpler expositions of this material, especially for physicists. There is some literature in the context of TQFT and supersymmetric quantum mechanics for
physicists, look at papers from the early nineties by Blau and Thompson. Unfortunately most of the physics literature doesn’t really deal with the natural mathematical context for all this. Much
of this is due to Atiyah, if you really want to understand this and its history, his collected works are a good place to look. But this is really not easy for a physicist to read and absorb.
19. Is it decided already who will win the Fields medals? Or do they wait to the last minute to make that decision?
20. Johan,
I’m sure it’s already decided. I’ve heard the winners are notified some months in advance, so are those mathematicians chosen to deliver the addresses describing the work of the Fields medalists.
21. I am fairly certain that the Fields Medalists have already been chosen, but the names are a closely guarded secret.
If you look at the citation counts on Mathscinet, then, of the names posted here, Tao, Okounkov, and Pandharipande are way ahead of everybody else. Perelman is definitely on the short list, too.
But we’re all just guessing.
22. “It would be great if more good mathematicians wrote up informal notes like this about subjects they have learned something about, even if they are not experts.”
Terence Tao has lots of expository notes on his web site; the few that I’ve looked at are short and elegantly written.
MATH 308 Project: D-Orbitals
d-Orbitals are more complex wavefunctions than either s or p orbitals. Each of the five has a dependency on all three variables. However, they can be converted to Cartesian equivalents as in the
p-Orbital case.
Angular Elements
As in the p-Orbital case, converting the angular portions of the wavefunctions to Cartesian co-ordinates reduces the number of unique angular shapes to two. The following are the equations
of the wavefunctions for the 3d orbitals.
Y[0] = (5/16π)^(1/2) (3cos^2 Θ − 1) = (5/16π)^(1/2) · (3z^2 − r^2)/r^2
Y[1] = (15/4π)^(1/2) cos Θ sin Θ cos Φ = (15/4π)^(1/2) · xz/r^2
Y[2] = (15/4π)^(1/2) cos Θ sin Θ sin Φ = (15/4π)^(1/2) · yz/r^2
Y[3] = (15/4π)^(1/2) sin^2 Θ sin Φ cos Φ = (15/4π)^(1/2) · xy/r^2
Y[4] = (15/16π)^(1/2) sin^2 Θ (cos^2 Φ − sin^2 Φ) = (15/16π)^(1/2) · (x^2 − y^2)/r^2
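As a quick sanity check, the spherical and Cartesian forms really are the same functions. Here is a numerical verification (Python/NumPy, using the standard textbook normalization constants 5/16π for the z^2-type harmonic and 15/4π for the xz-type):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, np.pi, 5)
phi = rng.uniform(0, 2 * np.pi, 5)
r = rng.uniform(0.5, 2.0, 5)

# Spherical -> Cartesian conversion
x = r * np.sin(theta) * np.cos(phi)
y = r * np.sin(theta) * np.sin(phi)
z = r * np.cos(theta)

# z^2-type harmonic: (3cos^2(theta) - 1) vs (3z^2 - r^2)/r^2
c0 = np.sqrt(5 / (16 * np.pi))
Y0_spherical = c0 * (3 * np.cos(theta) ** 2 - 1)
Y0_cartesian = c0 * (3 * z**2 - r**2) / r**2

# xz-type harmonic: cos(theta) sin(theta) cos(phi) vs xz/r^2
c1 = np.sqrt(15 / (4 * np.pi))
Y1_spherical = c1 * np.cos(theta) * np.sin(theta) * np.cos(phi)
Y1_cartesian = c1 * x * z / r**2

print(np.allclose(Y0_spherical, Y0_cartesian),
      np.allclose(Y1_spherical, Y1_cartesian))
```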
Using the spherical conversions, this leads to two unique angular elements. The most common shape is that of cos Θ sin Θ cos Φ. The last four graphs are the angular elements positioned around different
axes. The second is from Y[0], which is the angular portion of 3cos^2 Θ − 1.
The first is given by the graph shown. The net result is a set of four lobes that resemble a clover:
The top left shows the angular behaviour between 0 and 90, where the radius is positive. Between 90 and 180 (top right), the radius is negative. The radius is positive again between 180 and 270
(bottom left), and has a negative value in the last quarter (bottom right).
The net result of the angular portions is this picture above. There are two lobes of each phase. In each of the xy, yz, and xz orbitals, the axes lie in the plane of the screen, such that the two
axes meet at the center of the four-lobed figure. The axes form planes of symmetry that fall in between the lobes. In the case of the x^2 − y^2 orbital, the x and y axes form two planes of
symmetry that bisect the figure along the lobes.
The second angular function turns out to have this strange shape. In 3D, it has a p-orbital-type center, along with a "ring" of opposite phase that goes around the middle.
Between 0 and approx. 57 degrees, the radius is positive (top left). As Θ varies between 57 and 125, the radius is negative (top right). The next major lobe has positive phase (bottom left), and
the last completed lobe, between 152 and 214, has a negative radius.
As Θ goes back to 360 degrees, the radius is positive, and completes the last lobe. The net result is a set of four lobes of unequal size and shape. This orbital is generally depicted such that the
axis between 0 and 180 degrees forms a vertical axis. The figure is then rotated around this axis to sweep out a figure that consists of a larger major lobe, along with a smaller ring around its middle
that is of opposite phase.
Section: LAPACK routine (version 1.5) (l) Updated: 12 May 1997 Local index Up
PDGBTRS - solve a system of linear equations A(1:N, JA:JA+N-1) * X = B(IB:IB+N-1, 1:NRHS)
SUBROUTINE PDGBTRS( TRANS, N, BWL, BWU, NRHS, A, JA, DESCA, IPIV, B, IB, DESCB, AF, LAF, WORK, LWORK, INFO )
CHARACTER TRANS
INTEGER BWU, BWL, IB, INFO, JA, LAF, LWORK, N, NRHS
INTEGER DESCA( * ), DESCB( * ), IPIV(*)
DOUBLE PRECISION A( * ), AF( * ), B( * ), WORK( * )
PDGBTRS solves a system of linear equations
A(1:N, JA:JA+N-1)' * X = B(IB:IB+N-1, 1:NRHS)
where A(1:N, JA:JA+N-1) is the matrix used to produce the factors stored in A(1:N,JA:JA+N-1) and AF by PDGBTRF.
A(1:N, JA:JA+N-1) is an N-by-N real banded distributed matrix with bandwidth BWL, BWU.
Routine PDGBTRF MUST be called first.
Making Change
I constantly find that I have no change. Right now my wallet contains some notes, plus one $2 coin and one 20 cent coin. And having discovered this, I'm going to take the 20 cent coin out and set it
aside for use at the laundromat, where it costs $2.20 per wash.
Five years ago, change used to build up in the bottom of my wallet, and often when I went through it I had fifteen dollars or so just in coins. (Australia has $1 and $2 coins.)
But then I went to the US. Immediately the change problem got way worse: America still has 1 cent coins (the "penny"); they have a 25 cent coin (the "quarter") instead of a 20; and the 5 cent coin
(the "nickel") is bigger than the ten cent (the "dime").
It's amazing how much harder it is to make change when there's a 25 cent coin. In sensible countries, you can decompose the problem of making change into two simpler problems: one for the tens digit,
and one for the ones. In Australia, the ones column is made trivially easy by the fact that we did away with 1 cent and 2 cent coins; anything below the 5 cent mark gets rounded. But in the US, it's
trickier -- if something costs 87 cents, say, you don't instantly know how many 5 cent pieces you should give. You have to subtract off three quarters, which leaves 12 cents, which takes the form of
a dime and two pennies. (Confusing, isn't it?)
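The mental arithmetic described here is the greedy change-making algorithm: always hand over the largest coin that fits, then repeat on the remainder. It happens to give the minimum number of coins for both the US and Australian systems. A minimal sketch (the function name and the coin lists are my own illustration, not anything from the post):

```python
def make_change(cents, denominations):
    """Greedy change-making: return a dict of coin value -> count."""
    coins = {}
    for coin in sorted(denominations, reverse=True):
        coins[coin], cents = divmod(cents, coin)   # take as many as fit
    return {c: n for c, n in coins.items() if n}   # drop unused coins

US = [25, 10, 5, 1]    # quarter, dime, nickel, penny
AU = [50, 20, 10, 5]   # Australian sub-dollar coins (no 1c or 2c)

print(make_change(87, US))  # {25: 3, 10: 1, 1: 2} -- three quarters, a dime, two pennies
print(make_change(85, AU))  # {50: 1, 20: 1, 10: 1, 5: 1}
```

The 87-cent case reproduces the worked example: three quarters, a dime, two pennies, with the awkward modulo-25 step hidden inside `divmod`.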
To make things more confusing, sales tax is applied after the listed purchase price. So if you buy something that's listed as 99 cents, you'll be charged, say, $1.07. (Note to Americans: Australians
think this is just plain crazy. The price marked should be the price charged.) And sales tax is a state tax, so it varies from state to state. And it's not always an integer number of percentage
points. All this adds up to make it all but impossible to tell how much something is going to cost, until the checkout chick actually tells you the number.
So where in Australia I can easily fish out some coins to make part or all of the change, in the US it's much harder.
Unsurprisingly, in the first year or so that I was in the US, change built up in the bottom of my wallet enormously. Pennies are particularly problematic, especially since there's no 2 cent coin. You
can thus receive up to four pennies in a single transaction. I wound up with a small mountain of pennies on my kitchen counter.
Incidentally, examining the dates on a random selection of US coins reveals that pennies are on average substantially younger than the other coins. This is because they stay in circulation for less
time before dropping away unnoticed into some worthless jar or drawer. (And I wonder how much it costs to mint replacements?)
So the first trick to change minimisation is to always have at least two or three pennies rapidly available, semi-spilled out of the wallet's change pocket. It's easy to figure out how many pennies
you have to give away in order to receive none back.
This doesn't take care of the nickels and dimes, but after a while I got the hang of those, too. The more of them that you have in your wallet, the more are likely to be at hand when you spill
forward a random selection of coins, so once you've built a lookup table in your head to do the modulo-25 arithmetic, the problem self-manages. And quarters are actually useful, for soft drink
machines and laundromats and the like.
As another aside, this problem is evidently not unique to foreigners like myself. Change counting machines, although pretty much unheard of in Australia, are quite common in the US, in supermarkets
and the like. You take your jar of small useless coins into the supermarket and pour it into the machine, which counts it, keeps a percentage for the house, and gives you the rest in real money.
(Actually, to keep the design of the machine simple, it generally just prints you a voucher which you redeem at the cash register. And of course, the voucher is for some irregular sum of money, so
the checkout chick gives you some more useless change in return for your original useless change, which you can then drop straight back into your jar.)
The upshot of all this is that I now obsessively dispose of coins at every opportunity, at a rate designed to keep a US wallet in balance; and thus, here in Australia, I never have any change at all.
Integration of Vector Functions
February 22nd 2007, 08:26 PM #1
(Member since Sep 2006)
Okay, when you find the derivative of a vector function at a specific point, you find the tangent vector. Geometrically, what do you find when you integrate a vector function? And for that matter, what do you find when you integrate a parametric function?

February 23rd 2007, 06:51 AM #2
Global Moderator (member since Nov 2005, New York City)
It depends on which integration you are talking about: regular integration or line integration. I presume you are talking about regular integration. I do not see any geometrical way of thinking of it. I just think that sometimes you need to find the antiderivative vector in applications, and hence are required to integrate, that is all.

February 23rd 2007, 10:01 AM #3
(Member since Sep 2006)
Yeah, that is what I thought; the basic need for integration, I believe, is to find the velocity and position functions.
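To make the thread's answer concrete: integrating a vector function just means integrating each component, so integrating a velocity vector over a time interval gives the displacement. A rough numerical sketch (the midpoint rule, the function names `v` and `integrate`, and the example velocity are my own choices for illustration):

```python
import math

def v(t):
    """Example velocity vector v(t) = (cos t, sin t, 2t)."""
    return (math.cos(t), math.sin(t), 2.0 * t)

def integrate(f, a, b, n=10000):
    """Midpoint rule applied to each component of a vector function."""
    h = (b - a) / n
    total = [0.0, 0.0, 0.0]
    for i in range(n):
        val = f(a + (i + 0.5) * h)   # sample at the subinterval midpoint
        for k in range(3):
            total[k] += val[k] * h
    return tuple(total)

# Displacement over [0, 1]; exact antiderivative gives (sin 1, 1 - cos 1, 1).
disp = integrate(v, 0.0, 1.0)
exact = (math.sin(1.0), 1.0 - math.cos(1.0), 1.0)
print(disp, exact)
```

Each component of the numerical result matches the exact component-wise antiderivative, which is the "no special geometry, just integrate componentwise" point the moderator makes.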
Static collision checking amounts to testing a given configuration of objects for overlaps. In contrast, the goal of dynamic checking is to determine whether all configurations along a continuous
path are collision-free. While effective methods are available for static collision detection, dynamic checking still lacks methods that are both reliable and efficient. A common approach --
frequently used in PRM planners -- is to sample paths at some fixed, pre-specified resolution and statically test each sampled configuration. But this approach is not guaranteed to detect collision
whenever one occurs, especially when objects are thin (e.g., the end-effector of the IRB2400 robot in the leftmost figure, or the fibers in the middle figure). One may increase the reliability of the approach by refining
the sampling resolution along the entire path, but a finer resolution increases the running time of the collision checker. We introduce a new method for testing straight path segments in
configuration space, or collections of such segments, which is both reliable and efficient. This method locally adjusts the sampling resolution (rightmost figure) by comparing lower bounds on
distances between objects in relative motion with upper bounds on the lengths of the curves traced by points of these moving objects. Several additional techniques and heuristics further improve
checker's efficiency in scenarios with many moving objects (e.g., articulated arms and/or multiple robots) and high geometric complexity. Our new method is general, but particularly well suited for
use in PRM planners. Extensive tests, in particular on randomly generated path segments and on multi-segment paths produced by PRM planners, show that the new method compares favorably with a
fixed-resolution approach at "suitable" resolution, with the enormous advantage that it never fails to detect collision.
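The core idea (comparing endpoint clearances against a bound on how far any robot point can travel along the segment) can be sketched as a recursive bisection test. This is a simplified illustration of that principle, not the paper's implementation; `clearance` and `motion_bound` are hypothetical stand-ins for the distance lower bound and curve-length upper bound it describes, and the configuration is a single scalar for simplicity:

```python
def segment_free(q1, q2, clearance, motion_bound, eps=1e-6):
    """Adaptively check a straight configuration-space segment.

    If the clearance at both endpoints together exceeds the longest curve
    any point of the robot can trace between them, no collision can occur
    anywhere on the segment; otherwise bisect and recurse.
    """
    d1, d2 = clearance(q1), clearance(q2)
    if d1 <= 0 or d2 <= 0:
        return False                      # an endpoint already collides
    if motion_bound(q1, q2) < d1 + d2:
        return True                       # clearance covers the whole segment
    if abs(q2 - q1) < eps:
        return True                       # resolution floor reached
    mid = 0.5 * (q1 + q2)
    return (segment_free(q1, mid, clearance, motion_bound, eps) and
            segment_free(mid, q2, clearance, motion_bound, eps))

# Toy 1-D example: a point robot on a line with an obstacle at q = 0.5.
clearance = lambda q: abs(q - 0.5)        # distance to the obstacle
motion_bound = lambda a, b: abs(b - a)    # a point traces exactly |b - a|

print(segment_free(0.0, 0.4, clearance, motion_bound))  # True: segment stays clear
print(segment_free(0.0, 1.0, clearance, motion_bound))  # False: passes through q = 0.5
```

Note how the second query is rejected without any fixed sampling resolution: bisection lands on the collision configuration exactly, which is the reliability property a fixed-step sampler cannot guarantee for thin obstacles.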