The study of magnetism is a science in itself. Electrical and magnetic phenomena interact; a detailed study of magnetism and electromagnetism could easily fill a book. Magnetism exists whenever
electric charges move relative to other objects or relative to a frame of reference.
The Earth has a core made up largely of iron heated to the extent that some of it is liquid. As the Earth rotates, the iron flows in complex ways. This flow gives rise to a huge magnetic field,
called the geomagnetic field, that surrounds the Earth.
The geomagnetic field has poles, as a bar magnet does. These poles are near, but not at, the geographic poles. The north geomagnetic pole is located
in the frozen island region of northern Canada. The south geomagnetic pole
is in the ocean near the coast of Antarctica. The geomagnetic axis is thus somewhat tilted relative to the axis on which the Earth rotates. Not only this, but the geomagnetic axis does not exactly
run through the center of the Earth. It’s like an apple core that’s off center.
Charged particles from the Sun, constantly streaming outward through the solar system, distort the geomagnetic field. This solar wind in effect
“blows” the field out of shape. On the side of the Earth facing the Sun, the field is compressed; on the side of the Earth opposite the Sun, the field is stretched out. This effect occurs with the
magnetic fields around the other planets, too, notably Jupiter.
As the Earth rotates, the geomagnetic field does a complex twist-and-turn dance into space in the direction facing away from the Sun. At and near the Earth’s surface, the field is nearly symmetrical with respect to the geomagnetic poles. As the distance from the Earth increases, the extent of geomagnetic-field distortion increases.
The presence of the Earth’s magnetic field was noticed in ancient times. Certain rocks, called lodestones, when hung by strings, always orient themselves in a generally north-south direction. Long
ago this was correctly attributed to the presence of a “force” in the air. It was some time before the reasons for this phenomenon were known, but the effect was put to use by seafarers and land
explorers. Today, a magnetic compass can still be a valuable navigation aid, used by mariners, backpackers, and others who travel far from familiar landmarks. It can work when more sophisticated
navigational devices fail.
The geomagnetic field and the magnetic field around a compass needle interact so that a force is exerted on the little magnet inside the compass. This force works not only in a horizontal plane (parallel to the Earth’s surface) but vertically, too, in most locations. The vertical component is zero at the geomagnetic equator, a line running around the globe equidistant from both geomagnetic poles. As the geomagnetic latitude increases toward either the north or the south geomagnetic pole, the magnetic force pulls up and down on the compass needle more and more. The extent
of this vertical component at any particular location is called the inclination of the geomagnetic field at that location. You have noticed this when you hold a compass. One end of the needle seems
to insist on touching the compass face, whereas the other end tilts up toward the glass.
Magnetic Force
As children, most of us discovered that magnets “stick” to some metals. Iron, nickel, and alloys containing either or both of these elements are known as ferromagnetic materials. Magnets exert force
on these metals. Magnets generally do not exert force on other metals unless those metals carry electric currents. Electrically insulating substances never attract magnets under normal conditions.
When a magnet is brought near a piece of ferromagnetic material, the atoms in the material become lined up so that the metal is temporarily magnetized. This produces a magnetic force between the atoms of the ferromagnetic substance and those in the magnet.
If a magnet is near another magnet, the force is even stronger than it is when the same magnet is near a ferromagnetic substance. In addition, the force can be either repulsive (the magnets repel, or
push away from each other) or attractive (the magnets attract, or pull toward each other) depending on the way the magnets are turned. The force gets stronger as the magnets are brought closer and closer together.
Some magnets are so strong that no human being can pull them apart if they get “stuck” together, and no person can bring them all the way together against their mutual repulsive force. This is
especially true of electromagnets, discussed later in this chapter. The tremendous forces available are of use in industry. A huge electromagnet can be used to carry heavy pieces of scrap iron or
steel from place to place. Other electromagnets can provide sufficient repulsion to suspend one object above another. This is called magnetic levitation.
Whenever the atoms in a ferromagnetic material are aligned, a magnetic field exists. A magnetic field also can be caused by the motion of electric charge carriers either in a wire or in free space.
The magnetic field around a permanent magnet arises from the same cause as the field around a wire that carries an electric current. The responsible
factor in either case is the motion of electrically charged particles. In a
wire, the electrons move along the conductor, being passed from atom to atom. In a permanent magnet, the movement of orbiting electrons occurs in such a manner that an “effective current” is produced
by the way the electrons move within individual atoms.
Magnetic fields can be produced by the motion of charged particles through space. The Sun is constantly ejecting protons and helium nuclei. These particles carry a positive electric charge. Because
of this, they produce “effective currents” as they travel through space. These currents in turn generate magnetic fields. When these fields interact with the Earth’s
particles are forced to change direction, and they are accelerated toward the geomagnetic poles.
If there is an eruption on the Sun called a solar flare, the Sun ejects more charged particles than normal. When these arrive at the Earth’s geomagnetic poles, their magnetic fields, collectively
working together, can disrupt the Earth’s geomagnetic field. Then there is a geomagnetic storm. Such an event causes changes in the Earth’s ionosphere, affecting long-distance radio communications at
certain frequencies. If the fluctuations are intense enough, even wire communications and electrical power transmission can be interfered with. Microwave transmissions generally are immune to the
effects of geomagnetic storms. Fiberoptic cable links and free-space laser communications are not affected. Aurora (northern or southern lights) are frequently observed at night during geomagnetic storms.
Physicists consider magnetic fields to be composed of flux lines, or lines of flux. The intensity of the field is determined according to the number of flux lines passing through a certain cross section, such as a centimeter squared (cm²) or a meter squared (m²). The lines are not actual threads in space, but it is intuitively appealing to imagine them this way, and their presence can be
shown by simple experimentation.
Have you seen the classical demonstration in which iron filings are placed on a sheet of paper, and then a magnet is placed underneath the paper? The filings arrange themselves in a pattern that shows, roughly, the “shape” of the magnetic field in the vicinity of the magnet. A bar magnet has a field whose lines of flux have a characteristic pattern (Fig. 14-1).
Fig. 14-1. Magnetic flux around a bar magnet.
Another experiment involves passing a current-carrying wire through the paper at a right angle. The iron filings become grouped along circles
centered at the point where the wire passes through the paper. This shows
that the lines of flux are circular as viewed through any plane passing through the wire at a right angle. The flux circles are centered on the axis of the wire, or the axis along which the charge carriers move (Fig. 14-2).
Fig. 14-2. Magnetic flux produced by charge carriers traveling in a straight line.
A magnetic field has a direction, or orientation, at any point in space near a current-carrying wire or a permanent magnet. The flux lines run parallel to the direction of the field. A magnetic field
is considered to begin, or originate
at a north pole and to end, or terminate, at a south pole. These poles are not
the same as the geomagnetic poles; in fact, they are precisely the opposite! The north geomagnetic pole is in reality a south pole because it attracts the north poles of magnetic compasses.
Similarly, the south geomagnetic pole is
a north pole because it attracts the south poles of compasses. In the case of a permanent magnet, it is usually, but not always, apparent where the magnetic poles are located. With a current-carrying
wire, the magnetic field goes around and around endlessly, like a dog chasing its own tail.
A charged electric particle, such as a proton, hovering in space, is an
electric monopole, and the electrical flux lines around it aren’t closed. A
positive charge does not have to be mated with a negative charge. The electrical flux lines around any stationary charged particle run outward in all directions for a theoretically infinite distance. However, a magnetic field is different. Under normal circumstances, all
magnetic flux lines are closed loops. With permanent magnets, there is always a starting point (the north pole) and an ending point (the south pole). Around the current-carrying wire, the loops are
circles. This can be seen plainly in experiments with iron filings on paper.
You might at first think that the magnetic field around a current-carrying wire is caused by a monopole or that there aren’t any poles at all because the concentric circles apparently don’t originate
or terminate anywhere. However, think of any geometric plane containing the wire. A magnetic dipole, or pair of opposite magnetic poles, is formed by the lines of flux going halfway around on either
side. There are in effect two such “magnets” stuck together. The north poles and the south poles are thus not points but rather faces of the plane backed right up against each other.
The lines of flux in the vicinity of a magnetic dipole always connect the two poles. Some flux lines are straight in a local sense, but in a larger sense they are always curves. The greatest magnetic
field strength around a bar magnet is near the poles, where the flux lines converge. Around a current-carrying wire, the greatest field strength is near the wire.
Magnetic Field Strength
The overall magnitude of a magnetic field is measured in units called webers, symbolized Wb. A smaller unit, the maxwell (Mx), is sometimes used if a magnetic field is very weak. One weber is equivalent to 100 million maxwells. Thus 1 Wb = 10^8 Mx, and 1 Mx = 10^-8 Wb.
If you have a permanent magnet or electromagnet, you might see its
“strength” expressed in terms of webers or maxwells. More often, though,
you’ll hear or read about units called teslas (T) or gauss (G). These units are expressions of the concentration, or intensity, of the magnetic field within a certain cross section. The flux density, or number of “flux lines per unit cross-sectional area,” is a more useful expression for magnetic effects than the overall quantity of magnetism. Flux density is customarily denoted B in equations. A flux density of 1 tesla is equal to 1 weber per meter squared (1 Wb/m²). A flux density of 1 gauss is equal to 1 maxwell per centimeter squared (1 Mx/cm²). It turns out that the gauss is equivalent to exactly 0.0001 tesla. That is, 1 G = 10^-4 T, and 1 T = 10^4 G. To convert from teslas to gauss (not gausses!), multiply by 10^4; to convert from gauss to teslas, multiply by 10^-4.
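Because these conversions are all simple powers of 10, they are easy to check with a short script. Here is a minimal Python sketch (the function names are illustrative, not from the text) of the relationships just described:

```python
# Magnetic-unit conversions described above:
# 1 Wb = 10^8 Mx (overall field quantity); 1 T = 10^4 G (flux density).

def webers_to_maxwells(wb):
    return wb * 1e8

def teslas_to_gauss(t):
    return t * 1e4

def gauss_to_teslas(g):
    return g * 1e-4

print(webers_to_maxwells(1.0))  # 1 Wb is 100,000,000 Mx
print(teslas_to_gauss(0.5))     # 0.5 T is 5,000 G
print(gauss_to_teslas(20.0))    # 20 G is 0.0020 T
```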
If you are confused by the distinctions between webers and teslas or between maxwells and gauss, think of a light bulb. Suppose that a lamp emits 20 W of visible-light power. If you enclose the bulb
completely, then
20 W of visible light strike the interior walls of the chamber, no matter how large or small the chamber. However, this is not a very useful notion of the brightness of the light. You know that a
single bulb gives plenty of light for
a small walk-in closet but is nowhere near adequate to illuminate a gymnasium. The important consideration is the number of watts per unit area. When we say the bulb gives off a certain number of
watts of visible light, it’s like saying a magnet has an overall magnetism of so many webers or maxwells. When we say that the bulb produces a certain number of watts per unit area, it’s analogous to
saying that a magnetic field has a flux density of so many teslas or gauss.
When working with electromagnets, another unit is employed. This is the ampere-turn (At). It is a unit of magnetomotive force. A wire bent into a circle and carrying 1 A of current produces 1 At of
magnetomotive force. If the wire is bent into a loop having 50 turns, and the current stays the same, the resulting magnetomotive force becomes 50 times as great, that is, 50 At.
If the current in the 50-turn loop is reduced to 1/50 A, or 20 mA, the magnetomotive force goes back down to 1 At.
A unit called the gilbert (Gb) is sometimes used to express magnetomotive force. One gilbert is equal to about 0.7958 At; equivalently, 1 At is about 1.2566 Gb. To approximate ampere-turns when the number of gilberts is known, multiply by 0.7958. To approximate gilberts when the number of ampere-turns is known, multiply by 1.2566.
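As a quick sanity check on these units, here is a minimal Python sketch (function names are my own; it assumes the standard definition 1 At ≈ 1.2566 Gb used in the figures above):

```python
# Magnetomotive force of a coil: F = nI, in ampere-turns (At).
# Assumed standard conversion: 1 At is about 1.2566 Gb.

def mmf_ampere_turns(turns, current_amperes):
    return turns * current_amperes

def ampere_turns_to_gilberts(at):
    return at * 1.2566

print(mmf_ampere_turns(50, 1.0))      # 50 At for a 50-turn loop at 1 A
print(mmf_ampere_turns(50, 0.020))    # back down to 1.0 At at 20 mA
print(ampere_turns_to_gilberts(1.0))  # about 1.26 Gb
```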
In a straight wire carrying a steady direct current surrounded by air or by free space (a vacuum), the flux density is greatest near the wire and diminishes with increasing distance from the wire. You ask, “Is there a formula that expresses flux density as a function of distance from the wire?” The answer is yes. Like all formulas in physics, it is perfectly accurate only under idealized conditions.
Consider a wire that is perfectly thin, as well as perfectly straight. Suppose that it carries a current of I amperes. Let the flux density (in teslas) be denoted B. Consider a point P at a distance
r (in meters) from the wire, as measured along the shortest possible route (that is, within a plane perpendicular to the wire). This is illustrated in Fig. 14-3. The following formula applies:
B = 2 × 10^-7 (I/r)
In this formula, the value 2 can be considered mathematically exact to any desired number of significant figures.
As long as the thickness of the wire is small compared with the distance
r from it, and as long as the wire is reasonably straight in the vicinity of the point P at which the flux density is measured, this formula is a good indicator of what happens in real life.
PROBLEM 14-1
What is the flux density in teslas at a distance of 20 cm from a straight, thin
wire carrying 400 mA of direct current?
SOLUTION 14-1
First, convert everything to units in the International System (SI). This means that r = 0.20 m and I = 0.400 A. Knowing these values, plug them directly into the formula:

B = 2 × 10^-7 (I/r)
= 2.00 × 10^-7 (0.400/0.20)
= 4.0 × 10^-7 T
PROBLEM 14-2
In the preceding scenario, what is the flux density Bgauss (in gauss) at point P?
SOLUTION 14-2
To figure this out, we must convert from teslas to gauss. This means that we must multiply the answer from the preceding problem by 10^4:

Bgauss = 4.0 × 10^-7 × 10^4 = 4.0 × 10^-3 G
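Because the formula is a one-liner, results like these are easy to verify with a short script. Here is a minimal Python sketch (the function name is illustrative) that reproduces Problems 14-1 and 14-2:

```python
# Flux density near a long, straight, thin wire: B = 2e-7 * (I/r),
# with B in teslas, I in amperes, and r in meters.

def flux_density_teslas(current_amperes, distance_meters):
    return 2e-7 * current_amperes / distance_meters

b = flux_density_teslas(0.400, 0.20)
print(b)        # 4e-07 T (Problem 14-1)
print(b * 1e4)  # 0.004, that is, 4.0 x 10^-3 G (Problem 14-2)
```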
Fig. 14-3. Flux density varies inversely with the distance from a wire carrying direct current.
Any electric current, or movement of charge carriers, produces a magnetic field. This field can become intense in a tightly coiled wire having many turns and carrying a large electric current. When a
ferromagnetic rod, called a core, is placed inside the coil, the magnetic lines of flux are concentrated in the core, and the field strength in and near the core becomes tremendous. This is the
principle of an electromagnet (Fig. 14-4).
Electromagnets are almost always cylindrical in shape. Sometimes the cylinder is long and thin; in other cases it is short and fat. Whatever the
Fig. 14-4. A simple electromagnet.
ratio of diameter to length for the core, however, the principle is always the
same: The flux produced by the current temporarily magnetizes the core.
You can build a dc electromagnet by taking a large iron or steel bolt (such as a stove bolt) and wrapping a couple of hundred turns of wire around it. These items are available in almost any hardware
store. Be sure the bolt is made of ferromagnetic material. (If a permanent magnet “sticks” to the bolt, the bolt is ferromagnetic.) Ideally, the bolt should be at least 3/8 inch
in diameter and several inches long. You must use insulated or enameled wire, preferably made of solid, soft copper. “Bell wire” works well.
Be sure that all the wire turns go in the same direction. A large 6-V
“lantern battery” can provide plenty of dc to operate the electromagnet. Never leave the coil connected to the battery for more than a few seconds at
a time. And do not—repeat, do not—use an automotive battery for this experiment. The near-short-circuit produced by an electromagnet can cause the acid from such a battery to violently boil out,
and this acid is dangerous stuff.
Direct-current electromagnets have defined north and south poles, just
like permanent magnets. The main difference is that an electromagnet can get much stronger than any permanent magnet. You should see evidence of this if you do the preceding experiment with a large
enough bolt and enough turns of wire. Another difference between an electromagnet and a permanent magnet is the fact that in an electromagnet, the magnetic field exists only as long as the coil
carries current. When the power source is removed, the magnetic field collapses. In some cases, a small amount of residual magnetism remains in the core, but this is much weaker than the magnetism
generated when current flows in the coil.
You might get the idea that the electromagnet can be made far stronger if, rather than using a lantern battery for the current source, you plug the wires into a wall outlet. In theory, this is true.
In practice, you’ll blow the fuse or circuit breaker. Do not try this. The electrical circuits in some buildings are not adequately protected, and a short circuit can create a fire hazard. Also, you
can get a lethal shock from the 117-V utility mains. (Do this experiment in your mind, and leave it at that.)
Some electromagnets use 60-Hz ac. These magnets “stick” to ferromagnetic objects. The polarity of the magnetic field reverses every time the direction of the current reverses; there are 120 fluctuations, or 60 complete north-to-south-to-north polarity changes, every second (Fig. 14-5). If a permanent magnet is brought near either “pole” of an ac electromagnet of the same strength,
there is no net force resulting from the ac electromagnetism because there is an equal amount of attractive and repulsive force between the alternating magnetic field and the steady external field.
However, there
is an attractive force between the core material and the nearby magnet produced independently of the alternating magnetic field resulting from the ac in the coil.
PROBLEM 14-3
Suppose that the frequency of the ac applied to an electromagnet is 600 Hz
instead of 60 Hz. What will happen to the interaction between the alternating magnetic field and a nearby permanent magnet of the same strength?
SOLUTION 14-3
Assuming that no change occurs in the behavior of the core material, the
situation will be the same as is the case at 60 Hz or at any other ac frequency.
Fig. 14-5. Polarity change in an ac electromagnet.
Magnetic Materials
Some substances cause magnetic lines of flux to bunch closer together than they are in the air; other materials cause the lines of flux to spread farther apart. The first kind of material is
ferromagnetic. Substances of this type are, as we have discussed already, “magnetizable.” The other kind of material is called diamagnetic. Wax, dry wood, bismuth, and silver are examples of
substances that decrease magnetic flux density. No diamagnetic material reduces the strength of a magnetic field by anywhere near the factor that ferromagnetic substances can increase it.
The magnetic characteristics of a substance or medium can be quantified in two important but independent ways: permeability and retentivity.
Permeability, symbolized by the lowercase Greek mu (μ), is measured on a scale relative to a vacuum, or free space. A perfect vacuum is assigned, by
convention, a permeability figure of exactly 1. If current is forced through a
wire loop or coil in air, then the flux density in and around the coil is about the same as it would be in a vacuum. Therefore, the permeability of pure air
is about equal to 1. If you place an iron core in the coil, the flux density increases by a factor ranging from a few dozen to several thousand times, depending on the purity of the iron. The
permeability of iron can be as low as about 60 (impure) to as high as about 8,000 (highly refined).
If you use special metallic alloys called permalloys as the core material
in electromagnets, you can increase the flux density, and therefore the local strength of the field, by as much as 1 million (10^6) times. Such substances thus have permeability as great as 10^6.
If, for some reason, you feel compelled to make an electromagnet that
is as weak as possible, you can use dry wood or wax for the core material. Usually, however, diamagnetic substances are used to keep magnetic objects apart while minimizing the interaction between them.
Certain ferromagnetic materials stay magnetized better than others. When
a substance such as iron is subjected to a magnetic field as intense as it can handle, say, by enclosing it in a wire coil carrying a high current, there will be some residual magnetism left when the
current stops flowing in the coil. Retentivity, also sometimes called remanence, is a measure of how well a substance can “memorize” a magnetic field imposed on it and thereby become a permanent magnet.
Retentivity is expressed as a percentage. If the maximum possible flux density in a material is x teslas or gauss and then goes down to y teslas or gauss when the current is removed, the retentivity
Br of that material is given by the following formula:
Br = 100y/x
What is meant by maximum possible flux density in the foregoing definition? This is an astute question. In the real world, if you make an electromagnet with a core material, there is a limit to the flux density that can be generated in that core. As the current in the coil increases, the flux density inside the core goes up in proportion—for awhile. Beyond a certain point, however, the
flux density levels off, and further increases in current do not produce any further increase in the flux density. This condition is called core saturation. When we determine retentivity for a
material, we
are referring to the ratio of the flux density that remains when the magnetomotive force is removed to the flux density at saturation.
As an example, suppose that a metal rod can be magnetized to 135 G when it is enclosed by a coil carrying an electric current. Imagine that this is the maximum possible flux density that the rod can
be forced to have. For any substance, there is always such a maximum; further increasing the current in the wire will not make the rod any more magnetic. Now suppose that the current is shut off
and that 19 G remain in the rod. Then the retentivity Br is
Br = 100 × 19/135 = 100 × 0.14 = 14 percent
Certain ferromagnetic substances have good retentivity and are excellent for making permanent magnets. Other ferromagnetic materials have poor retentivity. They can work well as the cores of
electromagnets, but they do not make good permanent magnets. Sometimes it is desirable to have a substance with good ferromagnetic properties but poor retentivity. This is the case when you want to
have an electromagnet that will operate from dc so that it maintains a constant polarity but that will lose its magnetism when the current is shut off.
If a ferromagnetic substance has poor retentivity, it’s easy to make it work as the core for an ac electromagnet because the polarity is easy to switch. However, if the retentivity is high, the
material is “magnetically sluggish” and has trouble following the current reversals in the coil. This sort of stuff doesn’t function well as the core of an ac electromagnet.
PROBLEM 14-4
Suppose that a metal rod is surrounded by a coil and that the magnetic flux
density can be made as great as 0.500 T; further increases in current cause no further increase in the flux density inside the core. Then the current is removed; the flux density drops to 500 G. What
is the retentivity of this core material?
SOLUTION 14-4
First, convert both flux density figures to the same units. Remember that 1 T = 10^4 G. Thus the flux density is 0.500 × 10^4 = 5,000 G with the current and 500 G without the current. “Plugging in” these numbers gives us this:

Br = 100 × 500/5,000 = 100 × 0.100 = 10.0 percent
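The retentivity formula is equally easy to script. Here is a minimal Python sketch (the function name is mine) that reproduces both worked examples:

```python
# Retentivity Br = 100 * y / x, where x is the saturation flux density
# and y is the residual flux density, both in the same units.

def retentivity_percent(saturation, residual):
    return 100.0 * residual / saturation

print(retentivity_percent(135.0, 19.0))    # about 14 percent
print(retentivity_percent(5000.0, 500.0))  # 10.0 percent (Problem 14-4)
```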
Any ferromagnetic material, or substance whose atoms can be aligned permanently, can be made into a permanent magnet. These are the magnets
you played with as a child (and maybe still play with when you use them
to stick notes to your refrigerator door). Some alloys can be made into stronger permanent magnets than others.
One alloy that is especially suited to making strong permanent magnets
is known by the trade name Alnico. This word derives from the chemical symbols of the metals that comprise it: aluminum (Al), nickel (Ni), and cobalt (Co). Other elements are sometimes added,
including copper and titanium. However, any piece of iron or steel can be magnetized to some extent. Many technicians use screwdrivers that are slightly magnetized so that they can hold onto screws
when installing or removing them from hard-to-reach places.
Permanent magnets are best made from materials with high retentivity. They are made by using the material as the core of an electromagnet for an extended period of time. If you want to magnetize a
screwdriver a little bit so that it will hold onto screws, stroke the shaft of the screwdriver with the end of a bar magnet several dozen times. However, take note: Once you have magnetized a tool,
it is practically impossible to completely demagnetize it.
Suppose that you have a long coil of wire, commonly known as a solenoid,
with n turns and whose length in meters is s. Suppose that this coil carries
a direct current of I amperes and has a core whose permeability is μ. The flux density B in teslas inside the core, assuming that it is not in a state of saturation, can be found using this formula:
B = 4π × 10^-7 (μnI/s)
A good approximation is
B = 1.2566 × 10^-6 (μnI/s)
PROBLEM 14-5
Consider a dc electromagnet that carries a certain current. It measures 20 cm
long and has 100 turns of wire. The flux density in the core, which is known not to be in a state of saturation, is 20 G. The permeability of the core material is 100. What is the current in the coil?
SOLUTION 14-5
As always, start by making sure that all units are correct for the formula that
will be used. The length s is 20 cm, that is, 0.20 m. The flux density B is 20
G, which is 0.0020 T. Rearrange the preceding formula so it solves for I:

B = 1.2566 × 10^-6 (μnI/s)
B/I = 1.2566 × 10^-6 (μn/s)
1/I = 1.2566 × 10^-6 [μn/(sB)]
I = 7.9580 × 10^5 [sB/(μn)]

This is an exercise, but it is straightforward. Derivations such as this are subject to the constraint that we not divide by any quantity that can attain a value of zero in a practical situation. (This is not a problem here. We aren’t concerned with scenarios involving zero current, zero turns of wire, permeability of zero, or coils having zero length.) Let’s “plug in the numbers”:

I = 7.9580 × 10^5 × (0.20 × 0.0020)/(100 × 100)
= 7.9580 × 10^5 × 4.0 × 10^-8
= 0.031832 A = 31.832 mA
This must be rounded off to 32 mA because we are only entitled to claim two significant figures.
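Here is a minimal Python sketch of the solenoid formula (function names mine), evaluated in both directions; it reproduces Problem 14-5:

```python
import math

# Solenoid core flux density: B = 4*pi*1e-7 * (mu * n * I / s),
# with B in teslas, mu the core permeability, n the number of turns,
# I the current in amperes, and s the coil length in meters.

def solenoid_flux_density(mu, turns, current, length):
    return 4 * math.pi * 1e-7 * mu * turns * current / length

def solenoid_current(mu, turns, flux_density, length):
    # The same formula rearranged to solve for I, as in Problem 14-5.
    return flux_density * length / (4 * math.pi * 1e-7 * mu * turns)

i = solenoid_current(mu=100, turns=100, flux_density=0.0020, length=0.20)
print(round(i * 1000))  # about 32 mA
```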
Magnetic Machines
A solenoid, having a movable ferromagnetic core, can do various things. Electrical relays, bell ringers, electric “hammers,” and other mechanical devices make use of the principle of the solenoid.
More sophisticated electromagnets, sometimes in conjunction with permanent magnets, can be used to build motors, meters, generators, and other devices.
Figure 14-6 is a simplified diagram of a bell ringer. Its solenoid is an electromagnet. The core has a hollow region in the center, along its axis, through which a steel rod passes. The coil has
many turns of wire, so the electromagnet is powerful if a substantial current passes through the coil. When there is no current flowing in the coil, the rod is held down by the force of gravity. When
a pulse of current passes through the coil, the rod is pulled forcibly upward. The magnetic force “wants” the ends of the rod,
Fig. 14-6. A bell ringer using a solenoid.
which is the same length as the core, to be aligned with the ends of the core.
However, the pulse is brief, and the upward momentum is such that the rod passes all the way through the core and strikes the ringer plate. Then the steel rod falls back down again to its resting
position, allowing the plate to reverberate. Some office telephones are equipped with ringers that produce this noise rather than conventional ringing, buzzing, beeping, or chirping emitted by most
phone sets. The “gong” sound is less irritating to some people than other attention-demanding signals.
In some electronic devices, it is inconvenient to place a switch exactly where it should be. For example, you might want to switch a communications line from one branch to another from a long distance away. In wireless transmitters, some of the wiring carries high-frequency alternating currents that must be kept within certain parts of the circuit and not routed out to the front panel for switching. A relay makes use of a solenoid to allow remote-control switching.
A drawing and a diagram of a relay are shown in Fig. 14-7. The movable lever, called the armature, is held to one side by a spring when there is no current flowing through the electromagnet. Under
these conditions, terminal X is connected to terminal Y but not to terminal Z. When a sufficient
Fig. 14-7. (a) Pictorial drawing of a simple relay. (b) Schematic symbol for the same relay.
current is applied, the armature is pulled over to the other side. This disconnects terminal X from terminal Y and connects X to Z.
There are numerous types of relays, each used for a different purpose. Some are meant for use with dc, and others are for ac; some will work with either ac or dc. A normally closed relay completes a
circuit when there is no current flowing in its electromagnet and breaks the circuit when current flows. A normally open relay is just the opposite. (Normal in this sense means “no current in the
coil.”) The relay shown in Fig. 14-7 can be used either as a normally open or normally closed relay depending on which contacts are selected. It also can be used to switch a line between two different circuits.
These days, relays are used only in circuits and systems carrying extreme currents or voltages. In most ordinary applications, electronic semiconductor switches, which have no moving parts and can
last far longer than relays, are preferred.
Magnetic fields can produce considerable mechanical forces. These forces can be harnessed to do work. The device that converts dc energy into rotating mechanical energy is a dc motor. In this
sense, a dc motor is a form of transducer. Motors can be microscopic in size or as big as a house. Some tiny motors are being considered for use in medical devices that actually can circulate in the
bloodstream or be installed in body organs. Others can pull a train at freeway speeds.
In a dc motor, the source of electricity is connected to a set of coils producing magnetic fields. The attraction of opposite poles, and the repulsion of like poles, is switched in such a way that a
constant torque, or rotational force, results. The greater the current that flows in the coils, the stronger is the torque, and the more electrical energy is needed. One set of coils, called the
armature coil, goes around with the motor shaft. The other set of coils, called the field coil, is stationary (Fig. 14-8). In some motors, the field coils are replaced by a pair of permanent magnets. The current direction in the armature coil is reversed every half-rotation by the commutator. This keeps the force going in the same angular direction. The shaft is carried along by its own angular momentum so that it doesn’t come to a stop during those instants when the current is being switched in polarity.
Fig. 14-8. Simplified drawing of a dc electric motor. Straight lines represent wires. Intersecting
lines indicate connections only when there is a dot at the point where the lines cross.
An electric generator is constructed somewhat like a conventional motor, although it functions in the opposite sense. Some generators also can operate as motors; they are called motor/generators.
Generators, like motors, are energy transducers of a special sort.
A typical generator produces ac when a coil is rotated rapidly in a strong magnetic field. The magnetic field can be provided by a pair of permanent magnets
(Fig. 14-9). The rotating shaft is driven by a gasoline-powered motor, a turbine, or some other source of mechanical energy. A commutator
can be used with a generator to produce pulsating dc output, which can be filtered to obtain pure dc for use with precision equipment.
Fig. 14-9. A simple type of ac generator.
Magnetic Data Storage
Magnetic fields can be used to store data in various forms. Common media for data storage include magnetic tape and the magnetic disk.
Recording tape is the stuff you find in cassette players. These days, magnetic tape is largely obsolete, but it is still sometimes used for home entertainment, especially high-fidelity (hi-fi) music and home video. It also can
be found in some high-capacity computer data storage systems.
The tape consists of millions of particles of iron oxide attached to a plastic or nonferromagnetic metal strip. A fluctuating magnetic field, produced by the recording head, polarizes these
particles. As the field changes in strength next
to the recording head, the tape passes by at a constant, controlled speed. This produces regions in which the iron oxide particles are polarized in either direction. When the tape is run at the same speed through the recorder in the playback mode, the magnetic fields around the individual particles cause a fluctuating field that is detected by a pickup head. This field has the same pattern of variations as the original field from the recording head.
Magnetic tape is available in various widths and thicknesses for different applications. Thick-tape cassettes don’t play as long as thin-tape ones, but the thicker tape is more resistant to
stretching. The speed of the tape determines the fidelity of the recording. Higher speeds are preferred for music and video and lower speeds for voice.
The data on a magnetic tape can be distorted or erased by external magnetic fields. Therefore, tapes should be protected from such fields. Keep magnetic tape away from permanent magnets or
electromagnets. Extreme heat also can damage the data on magnetic tape, and if the temperature is high enough, physical damage occurs as well.
The era of the personal computer has seen the development of ever-more-compact data storage systems. One of the most versatile is the magnetic disk. Such a disk can be either rigid or flexible.
Disks are available in various sizes. Hard disks (also called hard drives) store the most data and generally are found inside computer units. Diskettes are usually 3.5 inches (8.9 cm) in diameter and
can be inserted into and removed from digital recording/playback machines called disk drives.
The principle of the magnetic disk, on the micro scale, is the same as that of magnetic tape. But disk data is stored in binary form; that is, there are only two different ways that the particles are
magnetized. This results
in almost perfect, error-free storage. On a larger scale, the disk works differently than tape because of the difference in geometry. On a tape, the information is spread out over a long span, and
some bits of data are far away from others. On a disk, no two bits are ever farther apart than the diameter of the disk. Therefore, data can be transferred to or from a disk more rapidly than is
possible with tape.
A typical diskette can store an amount of digital information equivalent
to a short novel. Specialized high-capacity diskettes can store the equivalent of hundreds of long novels or even a complete encyclopedia.
The same precautions should be observed when handling and storing magnetic disks as are necessary with magnetic tape.
Quiz

Refer to the text in this chapter if necessary. A good score is eight correct. Answers are in the back of the book.
1. The geomagnetic field
(a) makes the Earth like a huge horseshoe magnet.
(b) runs exactly through the geographic poles.
(c) makes a compass work.
(d) makes an electromagnet work.
2. A material that can be permanently magnetized is generally said to be
(a) magnetic.
(b) electromagnetic.
(c) permanently magnetic.
(d) ferromagnetic.
3. The magnetic flux around a straight current-carrying wire
(a) gets stronger with increasing distance from the wire.
(b) is strongest near the wire.
(c) does not vary in strength with distance from the wire.
(d) consists of straight lines parallel to the wire.
4. The gauss is a unit of
(a) overall magnetic field strength.
(b) ampere-turns.
(c) magnetic flux density.
(d) magnetic power.
5. If a wire coil has 10 turns and carries 500 mA of current, what is the magnetomotive force in ampere-turns?
(a) 5,000
(b) 50
(c) 5.0
(d) 0.02
6. Which of the following is not generally observed in a geomagnetic storm?
(a) Charged particles streaming out from the Sun
(b) Fluctuations in the Earth’s magnetic field
(c) Disruption of electrical power transmission
(d) Disruption of microwave propagation
7. An ac electromagnet
(a) will attract only other magnetized objects.
(b) will attract iron filings.
(c) will repel other magnetized objects.
(d) will either attract or repel permanent magnets depending on the polarity.
8. A substance with high retentivity is best suited for making
(a) an ac electromagnet.
(b) a dc electromagnet.
(c) an electrostatic shield.
(d) a permanent magnet.
9. A device that reverses magnetic field polarity to keep a dc motor rotating is
(a) a solenoid.
(b) an armature coil.
(c) a commutator.
(d) a field coil.
10. An advantage of a magnetic disk, as compared with magnetic tape, for data storage and retrieval is that
(a) a disk lasts longer.
(b) data can be stored and retrieved more quickly with disks than with tapes.
(c) disks look better.
(d) disks are less susceptible to magnetic fields.
CHAPTER 13
Alternating Current
Direct current (dc) can be expressed in terms of two variables: the polarity
(or direction) and the amplitude. Alternating current (ac) is more complicated. There are additional variables: the period (and its reciprocal, the frequency), the waveform, and the phase.
Definition of Alternating Current
Direct current has a polarity, or direction, that stays the same over a long period of time. Although the amplitude can vary—the number of amperes, volts, or watts can fluctuate—the charge carriers
always flow in the same direction through the circuit. In ac, the polarity reverses repeatedly.
In a periodic ac wave, the kind discussed in this chapter, the mathematical function of amplitude versus time repeats precisely and indefinitely; the same pattern recurs countless times. The period
is the length of time between one repetition of the pattern, or one wave cycle, and the next. This
is illustrated in Fig. 13-1 for a simple ac wave.
Fig. 13-1. A sine wave. The period is the length of time required for one cycle to be completed.
The period of a wave, in theory, can be anywhere from a minuscule fraction of a second to many centuries. Some electromagnetic (EM) fields have periods measured in quadrillionths of a second or smaller. The charged particles held captive by the magnetic field of the Sun reverse their direction
over periods measured in years. Period, when measured in seconds, is
symbolized T.
The frequency, denoted f, of a wave is the reciprocal of the period. That is, f = 1/T, and T = 1/f. In the olden days (prior to the 1970s), frequency was specified in cycles per second, abbreviated cps. High frequencies were expressed in kilocycles, megacycles, or gigacycles, representing thousands, millions, or billions of cycles per second. Nowadays, the standard unit of frequency is known as the hertz, abbreviated Hz. Thus 1 Hz = 1 cps, 10 Hz = 10 cps, and so on.
Higher frequencies are given in kilohertz (kHz), megahertz (MHz), gigahertz (GHz), and terahertz (THz). The relationships are

1 kHz = 1,000 Hz = 10^3 Hz
1 MHz = 1,000 kHz = 10^6 Hz
1 GHz = 1,000 MHz = 10^9 Hz
1 THz = 1,000 GHz = 10^12 Hz
PROBLEM 13-1
The period of an ac wave is 5.000 × 10^-6 s. What is the frequency in hertz? In kilohertz? In megahertz?
SOLUTION 13-1
First, find the frequency fHz in hertz by taking the reciprocal of the period in seconds:
fHz = 1/(5.000 × 10^-6) = 2.000 × 10^5 Hz

Next, divide fHz by 1,000, or 10^3, to get the frequency fkHz in kilohertz:

fkHz = fHz/10^3 = 2.000 × 10^5/10^3 = 200.0 kHz

Finally, divide fkHz by 1,000, or 10^3, to get the frequency fMHz in megahertz:

fMHz = fkHz/10^3 = 0.2000 MHz
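Since f and T are simple reciprocals, a few lines of Python confirm the arithmetic (the function name is mine):

```python
# Frequency is the reciprocal of period: f = 1/T.

def frequency_hz(period_seconds):
    return 1.0 / period_seconds

f = frequency_hz(5.000e-6)
print(f)        # 200000.0 Hz
print(f / 1e3)  # 200.0 kHz
print(f / 1e6)  # 0.2 MHz
```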
If you graph the instantaneous current or voltage in an ac system as a function of time, you get a waveform. Alternating currents can manifest themselves in an infinite variety of waveforms. Here are
some of the simplest ones.
In its purest form, alternating current has a sine-wave, or sinusoidal, nature. The waveform in Fig. 13-1 is a sine wave. Any ac wave that consists of a single frequency has a perfect sine-wave
shape. Any perfect sine-wave current contains one, and only one, component frequency.
In practice, a wave can be so close to a sine wave that it looks exactly like the sine function on an oscilloscope when in reality there are traces of other frequencies present. Imperfections are
often too small to see. Utility ac in the United States has an almost perfect sine-wave shape, with a frequency of 60 Hz. However, there are slight aberrations.
On an oscilloscope, a theoretically perfect square wave would look like a pair of parallel dotted lines, one having positive polarity and the other having negative polarity (Fig. 13-2a). In real
life, the transitions often can be seen as vertical lines (see Fig. 13-2b).
Fig. 13-2. (a) A theoretically perfect square wave. (b) The more common rendition.
A square wave might have equal negative and positive peaks. Then the absolute amplitude of the wave is constant at a certain voltage, current, or power level. Half the time the amplitude is +x, and the other half it is −x volts, amperes, or watts.
Some square waves are asymmetrical, with the positive and negative magnitudes differing. If the length of time for which the amplitude is positive differs from the length of time for which the
amplitude is negative, the wave is not truly square but is described by the more general term rectangular wave.
Some ac waves reverse their polarity at constant but not instantaneous rates. The slope of the amplitude-versus-time line indicates how fast the magnitude is changing. Such waves are called sawtooth
waves because of their appearance.
Fig. 13-3. A fast-rise, slow-decay sawtooth wave.

In Fig. 13-3, one form of sawtooth wave is shown. The positive-going slope (rise) is extremely steep, as with a square wave, but the negative-going slope (fall or decay) is gradual. The period of the wave is the time
between points at identical positions on two successive pulses.
Another form of sawtooth wave is just the opposite, with a gradual positive-going slope and a vertical negative-going transition. This type of wave is sometimes called a ramp (Fig. 13-4). This
waveform is used for scanning in cathode-ray-tube (CRT) television sets and oscilloscopes.
Sawtooth waves can have rise and decay slopes in an infinite number of different combinations. One example is shown in Fig. 13-5. In this case, the positive-going slope has the same steepness as the negative-going slope. This is a triangular wave.
PROBLEM 13-2
Suppose that each horizontal division in Fig. 13-5 represents 1.0 microsecond
(1.0 μs or 1.0 × 10^-6 s). What is the period of this triangular wave? What is the frequency?
SOLUTION 13-2
The easiest way to look at this is to evaluate the wave from a point where it crosses the time axis going upward and then find the next point (to the right or left) where the wave crosses the time axis going upward. This is four horizontal divisions, at least within the limit of our ability to tell by looking at it. The period T is therefore 4.0 μs or 4.0 × 10^-6 s. The frequency is the reciprocal of this: f = 1/T = 1/(4.0 × 10^-6) = 2.5 × 10^5 Hz.
Fig. 13-4. A slow-rise, fast-decay sawtooth wave, also called a ramp wave.
Fig. 13-5. A triangular wave.
Fractions of a Cycle
Scientists and engineers break the ac cycle down into small parts for analysis and reference. One complete cycle can be likened to a single revolution around a circle.
Suppose that you swing a glowing ball around and around at the end of a string at a rate of one revolution per second. The ball thus describes a circle
in space (Fig. 13-6a). Imagine that you swing the ball around so that it is always at the same level; it takes a path that lies in a horizontal plane. Imagine that you do this in a pitch-dark
gymnasium. If a friend stands some distance away with his or her eyes in the plane of the ball’s path, what does your friend see? Only the glowing ball, oscillating back and forth. The ball seems to
move toward the right, slow down, and then reverse its direction,
Fig. 13-6. Swinging ball and string. (a) as seen from above; (b) as seen from some distance away in the plane of the ball’s circular path.
going back toward the left (see Fig. 13-6b). Then it moves faster and faster and
then slower again, reaching its left-most point, at which it turns around again. This goes on and on, with a frequency of 1 Hz, or a complete cycle per second, because you are swinging the ball
around at one revolution per second.
If you graph the position of the ball as seen by your friend with respect to time, the result will be a sine wave (Fig. 13-7). This wave has the same characteristic shape as all sine waves. The standard, or basic, sine wave is described by the mathematical function y = sin x in the (x, y) coordinate plane. The general form is y = a sin bx, where a and b are real-number constants.
Fig. 13-7. Position of ball as seen edge-on as a function of time.
One method of specifying fractions of an ac cycle is to divide it into 360 equal increments called degrees, symbolized ° or deg (but it’s okay to write out the whole word). The value 0° is assigned
to the point in the cycle where the magnitude is zero and positive-going. The same point on the next cycle
is given the value 360°. Halfway through the cycle is 180°; a quarter cycle is 90°; an eighth cycle is 45°. This is illustrated in Fig. 13-8.
Fig. 13-8. A wave cycle can be divided into 360 degrees.
The other method of specifying fractions of an ac cycle is to divide it into exactly 2π, or approximately 6.2832, equal parts. This is the number of radii of a circle that can be laid end to end
around the circumference. One radian, symbolized rad (although you can write out the whole word), is equal to about 57.296°. Physicists use the radian more often than the degree when talking about
fractional parts of an ac cycle.
Sometimes the frequency of an ac wave is measured in radians per second
(rad/s) rather than in hertz (cycles per second). Because there are 2π radians in a complete cycle of 360°, the angular frequency of a wave, in radians per second, is equal to 2π times the frequency in hertz. Angular frequency is symbolized by the lowercase italicized Greek letter omega (ω).
PROBLEM 13-3
What is the angular frequency of household ac? Assume that the frequency
of utility ac is 60.0 Hz.
SOLUTION 13-3
Multiply the frequency in hertz by 2π. If this value is taken as 6.2832, then the angular frequency is

ω = 6.2832 × 60.0 = 376.992 rad/s
This should be rounded off to 377 rad/s because our input data are given only
to three significant figures.
PROBLEM 13-4
A certain wave has an angular frequency of 3.8865 × 10^5 rad/s. What is the frequency in kilohertz? Express the answer to three significant figures.
SOLUTION 13-4
To solve this, first find the frequency in hertz. This requires that the angular frequency, in radians per second, be divided by 2π, which is approximately 6.2832. The frequency fHz is therefore

fHz = (3.8865 × 10^5)/6.2832 = 6.1855 × 10^4 Hz

To obtain the frequency in kilohertz, divide by 10^3, and then round off to three significant figures:

fkHz = 6.1855 × 10^4/10^3 = 61.855 kHz ≈ 61.9 kHz
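Both problems reduce to multiplying or dividing by 2π, as this minimal Python sketch shows (function names are mine):

```python
import math

# Angular frequency: omega = 2*pi*f, and conversely f = omega/(2*pi).

def angular_frequency_rad_s(f_hz):
    return 2 * math.pi * f_hz

def frequency_hz_from_angular(omega_rad_s):
    return omega_rad_s / (2 * math.pi)

print(round(angular_frequency_rad_s(60.0)))       # 377 rad/s (Problem 13-3)
print(frequency_hz_from_angular(3.8865e5) / 1e3)  # about 61.855 kHz (Problem 13-4)
```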
Amplitude also can be called magnitude, level, strength, or intensity. Depending on the quantity being measured, the amplitude of an ac wave can be specified in amperes (for current), volts (for
voltage), or watts (for power).
The instantaneous amplitude of an ac wave is the voltage, current, or power
at some precise moment in time. This constantly changes. The manner in which it varies depends on the waveform. Instantaneous amplitudes are represented by individual points on the wave curves.
The average amplitude of an ac wave is the mathematical average (or mean) of the instantaneous voltage, current, or power evaluated over exactly one wave cycle or any exact whole number of wave cycles. A pure ac sine wave always has an average amplitude of zero. The same is true of a pure ac square wave or triangular wave. It is not generally the case for sawtooth
waves. You can get an idea of why these things are true by carefully looking at the waveforms illustrated by Figs. 13-1 through 13-5. If you know calculus, you know that the average amplitude is the integral of the waveform over one full cycle, divided by the period.
The peak amplitude of an ac wave is the maximum extent, either positive or negative, that the instantaneous amplitude attains. In many waves, the positive and negative peak amplitudes are the same.
Sometimes they differ, however. Figure 13-9 is an example of a wave in which the positive peak amplitude
is the same as the negative peak amplitude. Figure 13-10 is an illustration of a wave that has different positive and negative peak amplitudes.
The peak-to-peak (pk-pk) amplitude of a wave is the net difference between the positive peak amplitude and the negative peak amplitude (Fig. 13-11). Another way of saying this is that the pk-pk
amplitude is equal to the positive peak amplitude plus the absolute value of the negative peak amplitude.
Fig. 13-9. Positive and negative peak amplitudes. In this case, they are equal.
Fig. 13-10. A wave in which the positive and negative peak amplitudes differ.
Fig. 13-11. Peak-to-peak amplitude.
Peak to peak is a way of expressing how much the wave level “swings”
during the cycle.
In many waves, the pk-pk amplitude is twice the peak amplitude. This is the case when the positive and negative peak amplitudes are the same.
Often it is necessary to express the effective amplitude of an ac wave. This
is the voltage, current, or power that a dc source would produce to have the same general effect in a real circuit or system. When you say a wall outlet has 117 V, you mean 117 effective volts. The
most common figure for effective ac levels is called the root-mean-square, or rms, value.
The expression root mean square means that the waveform is mathematically “operated on” by taking the square root of the mean of the square of all its instantaneous values. The rms amplitude is not
the same thing as the average amplitude. For a perfect sine wave, the rms value is equal to 0.707 times the peak value, or 0.354 times the pk-pk value. Conversely, the peak value is 1.414 times the
rms value, and the pk-pk value is 2.828 times the rms value. The rms figures often are quoted for perfect sine-wave sources of voltage, such as the utility voltage or the effective voltage of a radio
signal. For a perfect square wave, the rms value is the same as the peak value,
and the pk-pk value is twice the rms value and twice the peak value. For sawtooth and irregular waves, the relationship between the rms value and the peak value depends on the exact shape of the
wave. The rms value is never more than the peak value for any waveshape.
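For the sine-wave case, the conversion factors quoted above are just powers of the square root of 2. Here is a minimal Python sketch (function names are mine):

```python
import math

# For a perfect sine wave with no dc component:
# rms = peak / sqrt(2) (about 0.707 * peak), peak = sqrt(2) * rms,
# and pk-pk = 2 * peak (about 2.828 * rms).

def sine_rms_from_peak(peak):
    return peak / math.sqrt(2)

def sine_peak_from_rms(rms):
    return rms * math.sqrt(2)

print(sine_peak_from_rms(117.0))      # about 165 V peak for 117 V rms mains
print(2 * sine_peak_from_rms(117.0))  # about 331 V pk-pk
print(sine_rms_from_peak(30.0))       # about 21.2 V rms for a 30-V peak
```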
Sometimes a wave can have components of both ac and dc. The simplest example of an ac/dc combination is illustrated by the connection of a dc source, such as a battery, in series with an ac source,
such as the utility main. Any ac wave can have a dc component along with it. If the dc component exceeds the peak value of the ac wave, then fluctuating or pulsating dc will result. This would happen, for example, if a 200-V dc source were connected in series with the utility output.
Pulsating dc would appear, with an average value of 200 V but with instantaneous values much higher and lower. The waveshape in this case is illustrated by Fig. 13-12.
PROBLEM 13-5
An ac sine wave measures 60 V pk-pk. There is no dc component. What is
the peak voltage?
Fig. 13-12. Composite ac/dc wave resulting from 117-V rms ac in series with 200-V dc.
SOLUTION 13-5
In this case, the peak voltage is exactly half the peak-to-peak value, or 30 V pk.
Half the peaks are +30 V; half are −30 V.
PROBLEM 13-6
Suppose that a dc component of +10 V is superimposed on the sine wave
described in Problem 13-5. What is the peak voltage?
SOLUTION 13-6
This can’t be answered simply, because the absolute values of the positive peak and negative peak voltages differ. In the case of Problem 13-5, the positive peak is +30 V and the negative peak is −30 V, so their absolute values are the same. However, when a dc component of +10 V is superimposed on the wave, both the positive peak and the negative peak voltages change by +10 V. The positive peak voltage thus becomes +40 V, and the negative peak voltage becomes −20 V.
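The bookkeeping in Problems 13-5 and 13-6 amounts to shifting both peaks by the dc component, as this short Python sketch (function name mine) illustrates:

```python
# A dc component shifts both peaks of an ac wave by the dc value.

def peaks_with_dc(pk_pk, dc=0.0):
    peak = pk_pk / 2.0
    return dc + peak, dc - peak  # (positive peak, negative peak)

print(peaks_with_dc(60.0))        # (30.0, -30.0), as in Problem 13-5
print(peaks_with_dc(60.0, 10.0))  # (40.0, -20.0), as in Problem 13-6
```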
Phase Angle
Phase angle is an expression of the displacement between two waves having identical frequencies. There are various ways of defining this. Phase angles are usually expressed as values φ such that 0° ≤ φ < 360°. In radians, this range is 0 ≤ φ < 2π. Once in awhile you will hear about phase angles specified over a range of −180° < φ ≤ +180°. In radians, this range is −π < φ ≤ π. Phase angle figures can be defined only for pairs of waves whose frequencies are the same. If the frequencies differ, the phase changes from moment to moment and cannot be denoted as a specific figure.
Phase coincidence means that two waves begin at exactly the same moment. They are “lined up.” This is shown in Fig. 13-13 for two waves having different amplitudes. (If the amplitudes were the same,
you would see only one wave.) The phase difference in this case is 0°.
If two sine waves are in phase coincidence, the peak amplitude of the resulting wave, which also will be a sine wave, is equal to the sum of the peak amplitudes of the two composite waves. The phase
of the resultant is the same as that of the composite waves.
When two sine waves begin exactly one-half cycle, or 180°, apart, they are said to be in phase opposition. This is illustrated by the drawing of Fig. 13-14.
Fig. 13-13. Two sine waves in phase coincidence.
Fig. 13-14. Two sine waves in phase opposition.
If two sine waves have the same amplitude and are in phase opposition,
they cancel each other out because the instantaneous amplitudes of the two waves are equal and opposite at every moment in time.
If two sine waves have different amplitudes and are in phase opposition, the peak value of the resulting wave, which is a sine wave, is equal to the difference between the peak values of the two
composite waves. The phase of the resultant is the same as the phase of the stronger of the two composite waves.
Suppose that there are two sine waves, wave X and wave Y, with identical frequencies. If wave X begins a fraction of a cycle earlier than wave Y, then wave X is said to be leading wave Y in phase.
For this to be true, X must begin its cycle less than 180° before Y. Figure 13-15 shows wave X leading wave Y by 90°. The difference can be anything greater than 0°, up to but not including 180°.
Leading phase is sometimes expressed as a phase angle φ such that 0° < φ < 180°. In radians, this is 0 < φ < π. If we say that wave X has a phase of +π/2 rad relative to wave Y, we mean that wave X leads wave Y by π/2 rad.
Fig. 13-15. Wave X leads wave Y by 90°.
Suppose that wave X begins its cycle more than 180° but less than 360° ahead of wave Y. In this situation, it is easier to imagine that wave X starts its cycle later than wave Y by some value between
but not including 0° and
180°. Then wave X is lagging wave Y. Figure 13-16 shows wave X lagging wave Y by 90°. The difference can be anything between but not including
0° and 180°.
Lagging phase is sometimes expressed as a negative angle φ such that −180° < φ < 0°. In radians, this is −π < φ < 0. If we say that wave X has a phase of −45° relative to wave Y, we mean that wave X lags wave Y by 45°.
If a sine wave X is leading a sine wave Y by x degrees, then the two waves can be drawn as vectors, with vector X oriented x degrees counterclockwise from vector Y. If wave X lags Y by y degrees,
then X is oriented y degrees clockwise from Y. If two waves are in phase, their vectors overlap (line up).
If they are in phase opposition, they point in exactly opposite directions.
Fig. 13-16. Wave X lags wave Y by 90°.
Figure 13-17 shows four phase relationships between waves X and Y.
Wave X always has twice the amplitude of wave Y, so vector X is always twice as long as vector Y. In part a, wave X is in phase with wave Y. In part b, wave X leads wave Y by 90°. In part c, waves X
and Y are 180° opposite in phase. In part d, wave X lags wave Y by 90°.
In all cases, the vectors rotate counterclockwise at the rate of one complete circle per wave cycle. Mathematically, a sine wave is a vector that goes around and around, just like the ball goes
around and around your head when you put it on a string and whirl it.
In a sine wave, the vector magnitude stays the same at all times. If the waveform is not sinusoidal, the vector magnitude is greater in some directions than in others. As you can guess, there exist an infinite number of variations on this theme, and some of them can get complicated.
PROBLEM 13-7
Suppose that there are three waves, called X, Y, and Z. Wave X leads wave
Y by 0.5000 rad; wave Y leads wave Z by precisely one-eighth cycle. By how many degrees does wave X lead or lag wave Z?
SOLUTION 13-7
To solve this, let’s convert all phase-angle measures to degrees. One radian
is approximately equal to 57.296°; therefore, 0.5000 rad = 57.296° × 0.5000 = 28.65° (to four significant figures). One-eighth of a cycle is equal to 45.00° (that is, 360°/8.000). The phase angles therefore add up, so wave X leads wave Z by 28.65° + 45.00° = 73.65°.
Fig. 13-17. Vector representations of phase. (a) Waves X and Y are in phase; (b) wave X leads wave Y by 90 degrees; (c) waves X and Y are in phase opposition; (d) wave X lags wave Y by 90 degrees.
PROBLEM 13-8
Suppose that there are three waves X, Y, and Z. Wave X leads wave Y by
0.5000 rad; wave Y lags wave Z by precisely one-eighth cycle. By how many degrees does wave X lead or lag wave Z?
SOLUTION 13-8
The difference in phase between X and Y in this problem is the same as that
in the preceding problem, namely, 28.65°. The difference between Y and Z is also the same, but in the opposite sense. Wave Y lags wave Z by 45.00°. This is the same as saying that wave Y leads wave Z by −45.00°. Thus wave X leads wave Z by 28.65° + (−45.00°), which is equivalent to 28.65° − 45.00°, or −16.35°. It is better in this case to say that wave X lags wave Z by 16.35°, or that wave Z leads wave X by 16.35°.
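Problems 13-7 and 13-8 boil down to converting everything to one angular unit and adding signed values; a short Python check (my own sketch, not from the book):

import math

def x_relative_to_z(x_leads_y_rad, y_leads_z_deg):
    # Positive result: X leads Z. Negative result: X lags Z.
    return math.degrees(x_leads_y_rad) + y_leads_z_deg

print(round(x_relative_to_z(0.5000, 45.00), 2))    # Problem 13-7: 73.65
print(round(x_relative_to_z(0.5000, -45.00), 2))   # Problem 13-8: -16.35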
As you can see, phase relationships can get confusing. It’s the same sort of thing that happens when you talk about negative numbers. Which number
is larger than which? It depends on point of view. If it helps you to draw pictures of waves when thinking about phase, then by all means go ahead.
Quiz
Refer to the text in this chapter if necessary. A good score is eight correct. Answers are in the back of the book.
1. Approximately how many radians are in a quarter of a cycle?
(a) 0.7854
(b) 1.571
(c) 3.142
(d) 6.284
2. Refer to Fig. 13-18. Suppose that each horizontal division represents 1.0 ns
(1.0 × 10⁻⁹ s) and that each vertical division represents 1 mV (1.0 × 10⁻³ V). What is the approximate rms voltage? Assume the wave is sinusoidal.
(a) 4.8 mV
(b) 9.6 mV
(c) 3.4 mV
(d) 6.8 mV
3. In the wave illustrated by Fig. 13-18, given the same specifications as those for the previous problem, what is the approximate frequency of this wave?
(a) 330 MHz
(b) 660 MHz
(c) 4.1 × 10⁹ rad/s
(d) It cannot be determined from this information.
4. In the wave illustrated by Fig. 13-18, what fraction of a cycle, in degrees, is represented by one horizontal division?
(a) 60
(b) 90
Fig. 13-18. Illustration for quiz questions 2, 3, and 4.
(c) 120
(d) 180
5. The maximum instantaneous current in a fluctuating dc wave is 543 mA over several cycles. The minimum instantaneous current is 105 mA, also over several cycles. What is the peak-to-peak current
in this wave?
(a) 438 mA
(b) 648 mA
(c) 543 mA
(d) It cannot be calculated from this information.
6. The pk-pk voltage in a square wave is 5.50 V. The wave is ac, but it has a dc component of +1.00 V. What is the instantaneous voltage?
(a) More information is needed to answer this question.
(b) 3.25 V
(c) 1.25 V
(d) 1.00 V
7. Given the situation in the preceding question, what is the average voltage?
(a) More information is needed to answer this question.
(b) 3.25 V
(c) 1.25 V
(d) 1.00 V
8. Suppose that there are two sine waves having identical frequency and that their
vector representations are at right angles to each other. What is the difference in phase?
(a) More information is needed to answer this question.
(b) 90°
(c) 180°
(d) 2π rad
9. A square wave is a special form of
(a) sine wave.
(b) sawtooth wave.
(c) ramp wave.
(d) rectangular wave.
10. An ac wave has a constant frequency f. Its peak voltage Vpk is doubled. What happens to the period T?
(a) It doubles to 2T.
(b) It is reduced to T/2.
(c) It is reduced to 0.707T.
(d) It remains at T.
CHAPTER 12
Direct Current
You now have a solid grasp of physics math, and you know the basics of classical physics. It is time to delve into the workings of things that can’t be observed directly. These include particles, and
forces among them, that make it possible for you to light your home, communicate instantly with people on the other side of the world, and in general do things that would have been considered magical
a few generations ago.
What Does Electricity Do?
When I took physics in middle school, they used 16-millimeter celluloid film projectors. Our teacher showed us several films made by a well-known professor. I’ll never forget the end of one of these
lectures, in which the professor said, “We evaluate electricity not by knowing what it is, but by scrutinizing what it does.” This was a great statement. It really expresses the whole philosophy of
modern physics, not only for electricity but also for all phenomena that aren’t directly tangible. Let’s look at some of the things electricity does.
In some materials, electrons move easily from atom to atom. In others, the electrons move with difficulty. And in some materials, it is almost impossible to get them to move. An electrical conductor is a substance in which the electrons are highly mobile.
The best conductor, at least among common materials, at room temperature is pure elemental silver. Copper and aluminum are also excellent electrical
conductors. Iron, steel, and various other metals are fair to good conductors of
electricity. Some liquids are good conductors. Mercury is one example. Salt water is a fair conductor. Gases are, in general, poor conductors because the atoms or molecules are too far apart to allow
a free exchange of electrons. However, if a gas becomes ionized, it can be a fair conductor of electricity.
Electrons in a conductor do not move in a steady stream like molecules of water through a garden hose. They pass from atom to atom (Fig. 12-1). This happens to countless atoms all the time. As a result, trillions of electrons pass a given point each second in a typical electric circuit.
Fig. 12-1. In an electrical conductor, electrons pass easily from atom to atom. This drawing is greatly simplified.
Imagine a long line of people, each one constantly passing a ball to his or her neighbor on the right. If there are plenty of balls all along the line, and if everyone keeps passing balls along as
they come, the result is a steady stream of balls moving along the line. This represents a good conductor. If the people become tired or lazy and do not feel much like passing the balls along, the rate of flow decreases. The conductor is no longer very good.
If the people refuse to pass balls along the line in the preceding example, the line represents an electrical insulator. Such substances prevent electric currents from flowing, except in very small amounts under certain circumstances.
Most gases are good electrical insulators (because they are poor conductors). Glass, dry wood, paper, and plastics are other examples. Pure water is a good electrical insulator, although it conducts some current when minerals are dissolved in it. Metal oxides can be
good insulators, even though the metal in pure form is a good conductor.
An insulating material is sometimes called a dielectric. This term arises from the fact that it keeps electric charges apart, preventing the flow of electrons that would equalize a charge difference
between two places. Excellent insulating materials can be used to advantage in certain electrical components such as capacitors, where it is important that electrons not be able to flow steadily.
When there are two separate regions of electric charge having opposite polarity (called plus and minus, positive and negative, or + and −) that are close to each other but kept apart by an insulating material, that pair of charges is called an electric dipole.
Some substances, such as carbon, conduct electricity fairly well but not very well. The conductivity can be changed by adding impurities such as clay to a carbon paste. Electrical components made in
this way are called resistors. They are important in electronic circuits because they allow for the control of current flow. The better a resistor conducts, the lower is its resistance; the worse it
conducts, the higher is the resistance.
Electrical resistance is measured in ohms, sometimes symbolized by the uppercase Greek letter omega (Ω). In this book we'll sometimes use the symbol Ω and sometimes spell out the word ohm or ohms, so
that you’ll get used to both expressions. The higher the value in ohms, the greater is the resistance, and the more difficult it is for current to flow. In an electrical system, it is usually
desirable to have as low a resistance, or ohmic value,
as possible because resistance converts electrical energy into heat. This heat is called resistance loss and in most cases represents energy wasted. Thick wires and high voltages reduce the
resistance loss in long-distance electrical lines. This is why gigantic towers, with dangerous voltages, are employed in large utility systems.
Whenever there is movement of charge carriers in a substance, there is an electric current. Current is measured in terms of the number of charge carriers, or
particles containing a unit electric charge, passing a single point in 1 second.
Charge carriers come in two main forms: electrons, which have a unit negative charge, and holes, which are electron absences within atoms and which carry a unit positive charge. Ions can act as
charge carriers, and in some cases, atomic nuclei can too. These types of particles carry whole-number multiples of a unit electric charge. Ions can be positive or negative
in polarity, but atomic nuclei are always positive.
Usually, a great many charge carriers go past any given point in 1 second, even if the current is small. In a household electric circuit, a 100-W light bulb draws a current of about 6 quintillion (6 × 10¹⁸) charge carriers per second. Even the smallest minibulb carries a huge number of charge carriers every second. It is ridiculous to speak of a current in terms of charge carriers per second, so usually it is measured in coulombs per second instead. A coulomb (symbolized C) is equal to approximately 6.24 × 10¹⁸ electrons or holes. A current of 1 coulomb per second (1 C/s) is called an ampere (symbolized A), and this is the standard unit of electric current.
A 60-W bulb in a common table lamp draws about 0.5 A of current.
When a current flows through a resistance—and this is always the case, because even the best conductors have resistance—heat is generated. Sometimes visible light and other forms of energy are
emitted as well. A light bulb is deliberately designed so that the resistance causes visible light
to be generated. However, even the best incandescent lamp is inefficient, creating more heat than light energy. Fluorescent lamps are better; they produce more light for a given amount of current. To
put this another way, they need less current to give off a certain amount of light.
In physics, electric current is theoretically considered to flow from the positive to the negative pole. This is known as conventional current. If you connect a light bulb to a battery, therefore,
the conventional current flows out of the positive terminal and into the negative terminal. However, the electrons, which are the primary type of charge carrier in the wire and the bulb, flow in the
opposite direction, from negative to positive. This is the way engineers usually think about current.
Charge carriers, particularly electrons, can build up or become deficient on objects without flowing anywhere. You’ve experienced this when walking on a carpeted floor during the winter or in a place
where the humidity is
low. An excess or shortage of electrons is created on and in your body. You
acquire a charge of static electricity. It’s called static because it doesn’t go anywhere. You don’t feel this until you touch some metallic object that is connected to an electrical ground or to
some large fixture, but then there is
a discharge, accompanied by a spark and a small electric shock. It is the current, during this discharge, that causes the sensation.
If you were to become much more charged, your hair would stand on end because every hair would repel every other one. Objects that carry the same electric charge, caused by either an excess or a
deficiency of electrons, repel each other. If you were massively charged, the spark might jump several centimeters. Such a charge is dangerous. Static electric (also called electrostatic) charge buildup of this magnitude does not happen with ordinary carpet and shoes, fortunately. However, a device called a Van de Graaff generator, found
in some high-school physics labs, can cause a spark this large. You have to be careful when using this device for physics experiments.
On the grand scale of the Earth's atmosphere, lightning occurs between clouds and between clouds and the surface. This spark is a greatly magnified version of the little spark you get after shuffling around on a carpet. Until the spark occurs, there is an electrostatic charge in the clouds, between different clouds, or between parts of a cloud and the ground. In Fig. 12-2, four types of lightning are shown. The discharge can occur within a single cloud (intracloud lightning, part a), between two different clouds (intercloud lightning, part b), from a cloud to the surface (cloud-to-ground lightning, part c), or from the surface to a cloud (ground-to-cloud lightning, part d). The direction of the current flow in these cases is considered to be the same as the direction in which the electrons move. In cloud-to-ground or ground-to-cloud lightning, the charge on the Earth's surface follows along beneath the thunderstorm cloud like a shadow as the storm is blown along by the prevailing winds.
The current in a lightning stroke can approach 1 million A. However, it takes place only for a fraction of a second. Still, many coulombs of charge are displaced in a single bolt of lightning.
Current can flow only if it gets a “push.” This push can be provided by a buildup of electrostatic charges, as in the case of a lightning stroke. When the charge builds up, with positive polarity
(shortage of electrons) in one place and negative polarity (excess of electrons) in another place, a powerful
Fig. 12-2. (a) Lightning can occur within a single cloud (intracloud),
(b) between clouds (intercloud), or between a cloud and the surface
(c) cloud to ground or (d) ground to cloud.
electromotive force (emf) exists. This effect, also known as voltage or electrical potential, is measured in volts (symbolized V).
Ordinary household electricity has an effective voltage of between 110
and 130 V; usually it is about 117 V. A car battery has an emf of 12 V (6 V
in some older systems). The static charge that you acquire when walking on a carpet with hard-soled shoes can be several thousand volts. Before a discharge of lightning, millions of volts exist.
An emf of 1 V, across a resistance of 1 Ω, will cause a current of 1 A to flow. This is a classic relationship in electricity and is stated generally as
Ohm’s law. If the emf is doubled, the current is doubled. If the resistance is
doubled, the current is cut in half. This law of electricity will be covered in detail a little later.
It is possible to have an emf without having current flow. This is the case just before a lightning bolt occurs and before you touch a metallic object after walking on the carpet. It is also true
between the two prongs of a lamp plug when the lamp switch is turned off. It is true of a dry cell when there
is nothing connected to it. There is no current, but a current can flow if there is a conductive path between the two points.
Even a large emf might not drive much current through a conductor or resistance. A good example is your body after walking around on the carpet. Although the voltage seems deadly in terms of numbers
(thousands), not many coulombs of charge normally can accumulate on an object the size of your body. Therefore, not many electrons flow through your finger, in relative terms, when you touch the
metallic object. Thus you don’t get a severe shock. Conversely, if there are plenty of coulombs available, a moderate voltage,
such as 117 V (or even less), can result in a lethal flow of current. This is why
it is dangerous to repair an electrical device with the power on. The utility power source can pump an unlimited number of coulombs of charge through your body if you are foolish enough to get caught
in this kind of situation.
Electrical Diagrams
To understand how electric circuits work, you should be able to read electrical wiring diagrams, called schematic diagrams. These diagrams use schematic symbols. Here are the basic symbols. Think of them as something like an alphabet in a language such as Chinese or Japanese, where things are represented by little pictures. However, before you get intimidated by this comparison, rest assured that it will be easier for you to learn schematic symbology than it would be to learn Chinese (unless you already know Chinese!).
The simplest schematic symbol is the one representing a wire or electrical conductor: a straight solid line. Sometimes dashed lines are used to represent conductors, but usually, broken lines are drawn to partition diagrams into constituent circuits or to indicate that certain components interact with
each other or operate in step with each other. Conductor lines are almost
always drawn either horizontally across or vertically up and down the page so that the imaginary charge carriers are forced to march in formation like soldiers. This keeps the diagram neat and easy
to read.
When two conductor lines cross, they are not connected at the crossing point unless a heavy black dot is placed where the two lines meet. The dot always should be clearly visible wherever conductors
are to be connected, no matter how many of them meet at the junction.
A resistor is indicated by a zigzaggy line. A variable resistor, such as a rheostat or potentiometer, is indicated by a zigzaggy line with an arrow through it or by a zigzaggy line with an arrow
pointing at it. These symbols are shown in Fig. 12-3.
Fig. 12-3. (a) A fixed resistor.
(b) A two-terminal variable resistor.
(c) A three-terminal potentiometer.
An electrochemical cell is shown by two parallel lines, one longer than the other. The longer line represents the positive terminal. A battery, or combination of cells in series, is indicated by an
alternating sequence of parallel lines, long-short-long-short. The symbols for a cell and a battery are shown in Fig. 12-4.
Meters are indicated as circles. Sometimes the circle has an arrow inside it, and the meter type, such as mA (milliammeter) or V (voltmeter), is written alongside the circle, as shown in Fig. 12-5a.
Sometimes the meter type is indicated inside the circle, and there is no arrow (see Fig. 12-5b). It doesn’t
Fig. 12-4. (a) An electrochemical cell. (b) A battery.
Fig. 12-5. Meter symbols: (a) designator outside; (b) designator inside.
matter which way it’s done as long as you are consistent everywhere in a
given diagram.
Some other common symbols include the lamp, the capacitor, the air-core coil, the iron-core coil, the chassis ground, the earth ground, the alternating-current (ac) source, the set of terminals, and the black box (which can stand for almost anything), a rectangle with the designator written inside. These are shown in Fig. 12-6.
Voltage/Current/Resistance Circuits
Most direct current (dc) circuits can be boiled down ultimately to three major components: a voltage source, a set of conductors, and a resistance. This is shown in the schematic diagram of Fig.
12-7. The voltage of the emf source
Fig. 12-6. More common schematic symbols:
(a) incandescent lamp; (b) fixed-value capacitor;
(c) air-core coil; (d) iron-core coil; (e) chassis ground;
(f) earth ground; (g) ac source; (h) terminals;
and (i) black box.
is called E (or sometimes V); the current in the conductor is called I; the
resistance is called R. The standard units for these components are the volt
(V), the ampere (A), and the ohm (Ω), respectively. Note which characters here are italicized and which are not. Italicized characters represent mathematical variables; nonitalicized characters represent symbols for units.
You already know that there is a relationship among these three quanti- ties. If one of them changes, then one or both of the others also will change.
If you make the resistance smaller, the current will get larger. If you make
Fig. 12-7. A simple dc circuit. The voltage is E, the current is I,
and the resistance is R.
the emf source smaller, the current will decrease. If the current in the circuit increases, the voltage across the resistor will increase. There is a simple arithmetic relationship between these three quantities.
OHM’S LAW
The interdependence among current, voltage, and resistance in dc circuits
is called Ohm’s law, named after the scientist who supposedly first expressed it. Three formulas denote this law:
E = IR     I = E/R     R = E/I
You need only remember the first of these formulas to be able to derive the others. The easiest way to remember it is to learn the abbreviations E for emf, I for current, and R for resistance; then remember that they appear in alphabetical order with the equals sign after the E. Thus E = IR.
It is important to remember that you must use units of volts, amperes, and ohms in order for Ohm's law to work right. If you use volts, milliamperes (mA), and ohms, or kilovolts (kV), microamperes (μA), and megohms (MΩ), you cannot expect to get the right answers. If the initial quantities are given in units other than volts, amperes, and ohms, you must convert to these units and then calculate. After that, you can convert the units back again to whatever you like. For example, if you get 13.5 million ohms as a calculated resistance, you might prefer to say that it is 13.5 megohms. However, in the calculation, you should use the number 13.5 million (or 1.35 × 10⁷) and stick to ohms for the units.
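A few lines of Python (an added sketch; the function name is hypothetical) make this convert-first discipline explicit:

def ohms_law_current(volts, ohms):
    # Correct only if the inputs are already in volts and ohms.
    return volts / ohms   # amperes

kilovolts = 1.5
megohms = 13.5
amperes = ohms_law_current(kilovolts * 1e3, megohms * 1e6)
print(round(amperes * 1e6, 1), "microamperes")   # convert back at the end: 111.1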
The first way to use Ohm’s law is to find current values in dc circuits. In order to find the current, you must know the voltage and the resistance or be able to deduce them.
Refer to the schematic diagram of Fig. 12-8. It consists of a variable dc generator, a voltmeter, some wire, an ammeter, and a calibrated wide-range potentiometer. Actual component values are not
shown here, but they can be assigned for the purpose of creating sample Ohm’s law problems. While calculating the current in the following problems, it is necessary to mentally
“cover up” the meter.
PROBLEM 12-1
Suppose that the dc generator (see Fig. 12-8) produces 10 V and that the potentiometer is set to a value of 10 Ω. What is the current?
SOLUTION 12-1
This is solved easily by the formula I = E/R. Plug in the values for E and R; they are both 10, because the units are given in volts and ohms. Then I = 10/10 = 1.0 A.
PROBLEM 12-2
The dc generator (see Fig. 12-8) produces 100 V, and the potentiometer is set to 10.0 kΩ. What is the current?
Fig. 12-8. Circuit for working Ohm’s law problems.
SOLUTION 12-2
First, convert the resistance to ohms: 10.0 kΩ = 10,000 Ω. Then plug the values in: I = 100/10,000 = 0.0100 A.
The second use of Ohm’s law is to find unknown voltages when the current and the resistance are known. For the following problems, uncover the ammeter and cover the voltmeter scale in your mind.
PROBLEM 12-3
Suppose that the potentiometer (see Fig. 12-8) is set to 100 Ω, and the measured current is 10.0 mA. What is the dc voltage?
SOLUTION 12-3
Use the formula E = IR. First, convert the current to amperes: 10.0 mA = 0.0100 A. Then multiply: E = 0.0100 × 100 = 1.00 V. This is a low, safe voltage, a little less than what is produced by a flashlight cell.
Ohm’s law can be used to find a resistance between two points in a dc cir- cuit when the voltage and the current are known. For the following prob- lems, imagine that both the voltmeter and ammeter
scales in Fig. 12-8 are visible but that the potentiometer is uncalibrated.
PROBLEM 12-4
If the voltmeter reads 24 V and the ammeter shows 3.0 A, what is the value
of the potentiometer?
SOLUTION 12-4
Use the formula R = E/I, and plug in the values directly because they are expressed in volts and amperes: R = 24/3.0 = 8.0 Ω.
You can calculate the power P (in watts, symbolized W) in a dc circuit such as that shown in Fig. 12-8 using the following formula:
P = EI
where E is the voltage in volts and I is the current in amperes. You may not be given the voltage directly, but you can calculate it if you know the current and the resistance.
Remember the Ohm’s law formula for obtaining voltage: E IR. If you
know I and R but don’t know E, you can get the power P by means of this formula:
P (IR) I I 2R
That is, take the current in amperes, multiply this figure by itself, and then multiply the result by the resistance in ohms.
You also can get the power if you aren't given the current directly. Suppose that you're given only the voltage and the resistance. Remember the Ohm's law formula for obtaining current: I = E/R. Therefore, you can calculate power using this formula:
P = E(E/R) = E²/R
That is, take the voltage, multiply it by itself, and divide by the resistance. Stated all together, these power formulas are
P = EI = I²R = E²/R
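Since the three formulas describe the same quantity, they must agree whenever two of E, I, and R are known; this Python sketch (added here as an illustration) computes the power all three ways as a sanity check:

def power_three_ways(e=None, i=None, r=None):
    # Supply any two of voltage e, current i, and resistance r
    # (in volts, amperes, and ohms).
    if e is None: e = i * r
    if i is None: i = e / r
    if r is None: r = e / i
    return e * i, i**2 * r, e**2 / r   # all three results should match

print(power_three_ways(e=12.0, i=0.050))   # (0.6, 0.6, 0.6) watts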
Now we are all ready to do power calculations. Refer once again to
Fig. 12-8.
PROBLEM 12-5
Suppose that the voltmeter reads 12 V and the ammeter shows 50 mA. What
is the power dissipated by the potentiometer?
SOLUTION 12-5
Use the formula P = EI. First, convert the current to amperes, getting I = 0.050 A. Then P = EI = 12 × 0.050 = 0.60 W.
How Resistances Combine
When electrical components or devices containing dc resistance are connected together, their resistances combine according to specific rules. Sometimes the combined resistance is more than that of any of the components or devices alone. In other cases the combined resistance is less than that of any of the components or devices all by itself.
When you place resistances in series, their ohmic values add up to get the total resistance. This is intuitively simple, and it’s easy to remember.
PROBLEM 12-6
Suppose that the following resistances are hooked up in series with each
other: 112 ohms, 470 ohms, and 680 ohms (Fig. 12-9). What is the total resistance of the series combination?
Fig. 12-9. An example of three specific resistances in series.
SOLUTION 12-6
Just add the values, getting a total of 112 + 470 + 680 = 1,262 ohms. You can round this off to 1,260 ohms. It depends on the tolerances of the components—how much their actual values are allowed to vary, as a result of manufacturing processes, from the values specified by the vendor. Tolerance is more of an engineering concern than a physics concern, so we won't worry about that here.
When resistances are placed in parallel, they behave differently than they do in series. In general, if you have a resistor of a certain value and you place other resistors in parallel with it, the
overall resistance decreases. Mathematically, the rule is straightforward, but it can get a little messy.
One way to evaluate resistances in parallel is to consider them as conductances instead. Conductance is measured in units called siemens, sometimes symbolized S. (The word siemens serves both in the singular and the plural sense.) In older documents, the word mho (ohm spelled backwards) is used instead. In parallel, conductances add up in the same way as resistances add in series. If you change all the ohmic values to siemens, you can add these figures up and convert the final answer back to ohms.
The symbol for conductance is G. Conductance in siemens is the reciprocal of resistance in ohms. This can be expressed neatly in the following two formulas. It is assumed that neither R nor G is ever equal to zero:
G = 1/R     R = 1/G
PROBLEM 12-7
Consider five resistors in parallel. Call them R1 through R5, and call the total resistance R, as shown in the diagram of Fig. 12-10. Let R1 = 100 ohms, R2 = 200 ohms, R3 = 300 ohms, R4 = 400 ohms, and R5 = 500 ohms, respectively. What is the total resistance R of this parallel combination?
Fig. 12-10. Five general resistances in parallel.
SOLUTION 12-7
Converting the resistances to conductance values, you get G1 = 1/100 = 0.0100 siemens, G2 = 1/200 = 0.00500 siemens, G3 = 1/300 = 0.00333 siemens, G4 = 1/400 = 0.00250 siemens, and G5 = 1/500 = 0.00200 siemens. Adding these gives G = 0.0100 + 0.00500 + 0.00333 + 0.00250 + 0.00200 = 0.02283 siemens. The total resistance is therefore R = 1/G = 1/0.02283 = 43.80 ohms. Because we're given the input numbers to only three significant figures, we should round this off to 43.8 ohms.
When you have resistances in parallel and their values are all equal, the total resistance is equal to the resistance of any one component divided by the number of components. In a more general sense, the resistances in Fig. 12-10 combine like this:
R = 1/(1/R1 + 1/R2 + 1/R3 + 1/R4 + 1/R5)
If you prefer to use exponents, the formula looks like this:
R = (R1⁻¹ + R2⁻¹ + R3⁻¹ + R4⁻¹ + R5⁻¹)⁻¹
These resistance formulas are cumbersome for some people to work with, but mathematically they represent the same thing we just did in Problem 12-7.
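The conductance method translates directly into code; this Python sketch (my illustration, reproducing Problem 12-7) sums the conductances and takes the reciprocal:

def parallel_resistance(resistances):
    g = sum(1.0 / r for r in resistances)   # total conductance in siemens
    return 1.0 / g                          # back to ohms

print(round(parallel_resistance([100, 200, 300, 400, 500]), 1))   # 43.8 ohms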
Have you ever used those tiny holiday lights that come in strings? If one bulb burns out, the whole set of bulbs goes dark. Then you have to find out which bulb is bad and replace it to get the
lights working again. Each bulb
works with something like 10 V, and there are about a dozen bulbs in the
string. You plug in the whole bunch, and the 120-V utility mains drive just the right amount of current through each bulb.
In a series circuit such as a string of light bulbs, the current at any given point is the same as the current at any other point. An ammeter can be connected in series at any point in the circuit,
and it will always show the same reading. This is true in any series dc circuit, no matter what the components actually are and regardless of whether or not they all have the same resistance.
If the bulbs in a string are of different resistances, some of them will consume more power than others. In case one of the bulbs burns out and its socket is shorted out instead of filled with a
replacement bulb, the current through the whole chain will increase because the overall resistance of the string will go down. This will force each of the remaining bulbs to carry too much current.
Another bulb will burn out before long as a result of this excess current. If it, too, is replaced by a short circuit, the current will be increased still further. A third bulb will blow out almost
right away. At this point it would be wise to buy some new bulbs!
In a series circuit, the voltage is divided up among the components. The sum total of the potential differences across each resistance is equal to the dc power-supply or battery voltage. This is
always true, no matter how large or how small the resistances and whether or not they’re all the same value.
If you think about this for a moment, it’s easy to see why this is true. Look at the schematic diagram of Fig. 12-11. Each resistor carries the same current. Each resistor Rn has a potential
difference En across it equal to the product of the current and the resistance of that particular resistor. These En values are in series, like cells in a battery, so they add together. What if the
En values across all the resistors added up to something more or less than the supply voltage E? Then there would be a “phantom emf” someplace, adding or taking away voltage. However, there can be no
such thing. An emf cannot come out of nowhere.
Look at this another way. The voltmeter V in Fig. 12-11 shows the voltage E of the battery because the meter is hooked up across the battery. The meter V also shows the sum of the En values across the set of resistors simply because the meter is connected across the set of resistors. The meter
Fig. 12-11. Analysis of voltage in a series dc circuit. See text for discussion.
says the same thing whether you think of it as measuring the battery voltage E or as measuring the sum of the En values across the series combination of resistors. Therefore, E is equal to the sum of the En values.
This is a fundamental rule in series dc circuits. It also holds for common utility ac circuits almost all the time.
How do you find the voltage across any particular resistor Rn in a circuit like the one in Fig. 12-11? Remember Ohm's law for finding voltage: E = IR. The voltage is equal to the product of the current and the resistance. Remember, too, that you must use volts, ohms, and amperes when making calculations. In order to find the current in the circuit I, you need to know the total resistance and the supply voltage. Then I = E/R. First find the current in the whole circuit; then find the voltage across any particular resistor.
PROBLEM 12-8
In Fig. 12-11, suppose that there are 10 resistors. Five of them have values
of 10 ohms, and the other 5 have values of 20 ohms. The power source is 15
V dc. What is the voltage across one of the 10-ohm resistors? Across one of the 20-ohm resistors?
SOLUTION 12-8
First, find the total resistance: R = (10 × 5) + (20 × 5) = 50 + 100 = 150 ohms. Then find the current: I = E/R = 15/150 = 0.10 A. This is the current through each of the resistors in the circuit. If Rn = 10 ohms, then
En = I × Rn = 0.10 × 10 = 1.0 V
If Rn = 20 ohms, then
En = I × Rn = 0.10 × 20 = 2.0 V
You can check to see whether all these voltages add up to the supply voltage. There are 5 resistors with 1.0 V across each, for a total of 5.0 V; there are also 5 resistors with 2.0 V across each, for a total of 10 V. Thus the sum of the voltages across the 10 resistors is 5.0 V + 10 V = 15 V.
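The same bookkeeping is easy to automate; here is a Python sketch (added as an illustration) that reproduces Problem 12-8 and confirms the drops sum to the supply:

def series_voltages(supply, resistances):
    i = supply / sum(resistances)          # one current flows through everything
    return i, [i * r for r in resistances]

i, drops = series_voltages(15.0, [10] * 5 + [20] * 5)
print(i)                     # 0.1 A through every resistor
print(round(sum(drops), 1))  # 15.0 V, matching the supply as it must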
Imagine now a set of ornamental light bulbs connected in parallel. This is the method used for outdoor holiday lighting or for bright indoor lighting. You know that it’s much easier to fix a
parallel-wired string of holiday lights if one bulb should burn out than it is to fix a series-wired string. The failure of one bulb does not cause catastrophic system failure. In fact, it might be
awhile before you notice that the bulb is dark because all the other ones will stay lit, and their brightness will not change.
In a parallel circuit, the voltage across each component is always the same and is always equal to the supply or battery voltage. The current drawn by each component depends only on the resistance of that particular device. In this sense, the components in a parallel-wired circuit work independently, as opposed to the series-wired circuit, in which they all interact.
If any branch of a parallel circuit is taken away, the conditions in the other branches remain the same. If new branches are added, assuming that the power supply can handle the load, conditions in
previously existing branches are not affected.
Refer to the schematic diagram of Fig. 12-12. The total parallel resistance in the circuit is R. The battery voltage is E. The current in branch n, containing resistance Rn, is measured by ammeter A and is called In.
The sum of all the In values in the circuit is equal to the total current I drawn from the source. That is, the current is divided up in the parallel circuit, similarly to the way that voltage is divided up in a series circuit.
PROBLEM 12-9
Suppose that the battery in Fig. 12-12 delivers 12 V. Further suppose that
there are 12 resistors, each with a value of 120 ohms in the parallel circuit.
Fig. 12-12. Analysis of current in a parallel dc circuit. See text for discussion.
What is the total current I drawn from the battery?
SOLUTION 12-9
First, find the total resistance. This is easy because all the resistors have the
same value. Divide Rn = 120 by 12 to get R = 10 ohms. Then the current I is found by Ohm's law:
I = E/R = 12/10 = 1.2 A
PROBLEM 12-10
In the circuit of Fig. 12-12, what does the ammeter A say, given the same
component values as exist in the scenario of the preceding problem?
SOLUTION 12-10
This involves finding the current in any given branch. The voltage is 12 V
across every branch; Rn = 120 ohms. Therefore, In, the ammeter reading, is found by Ohm's law:
In = E/Rn = 12/120 = 0.10 A
Let’s check to be sure all the In values add to get the total current I. There are
12 identical branches, each carrying 0.10 A; therefore, the sum is 0.10 12
1.2 A. It checks out.
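For the parallel case, a Python sketch (an added illustration) reproduces Problems 12-9 and 12-10 in a couple of lines:

def branch_currents(supply, resistances):
    # In parallel, every branch sees the full supply voltage.
    return [supply / r for r in resistances]

currents = branch_currents(12.0, [120] * 12)
print(currents[0])              # 0.1 A in each branch
print(round(sum(currents), 2))  # 1.2 A total drawn from the battery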
n nLet’s switch back now to series circuits. When calculating the power in a circuit containing resistors in series, all you need to do is find out the cur- rent I, in amperes, that the circuit is
carrying. Then it’s easy to calculate the power Pn, in watts, dissipated by any particular resistor of value Rn , in ohms, based on the formula P I 2R .
The total power dissipated in a series circuit is equal to the sum of the wattages dissipated in each resistor. In this way, the distribution of power
in a series circuit is like the distribution of the voltage.
PROBLEM 12-11
Suppose that we have a series circuit with a supply of 150 V and three resistors: R1 = 330 ohms, R2 = 680 ohms, and R3 = 910 ohms. What is the power dissipated by R2?
SOLUTION 12-11
Find the current in the circuit. To do this, calculate the total resistance first. Because the resistors are in series, the total resistance is R = 330 + 680 + 910 = 1920 ohms. Therefore, the current is I = 150/1920 = 0.07813 A = 78.1 mA. The power dissipated by R2 is
P2 = I²R2 = 0.07813 × 0.07813 × 680 = 4.151 W
We must round this off to three significant figures, getting 4.15 W.
When resistances are wired in parallel, they each consume power according to the same formula, P = I²R. However, the current is not the same in each resistance. An easier method to find the power Pn dissipated by a resistor of value Rn is by using the formula Pn = E²/Rn, where E is the voltage of the supply. This voltage is the same across every resistor.
In a parallel circuit, the total power consumed is equal to the sum of the wattages dissipated by the individual resistances. This is, in fact, true for any dc circuit containing resistances. Power
cannot come out of nowhere, nor can it vanish.
PROBLEM 12-12
A circuit contains three resistances R1 = 22 ohms, R2 = 47 ohms, and R3 = 68 ohms, all in parallel across a voltage E = 3.0 V. Find the power dissipated
by each resistor.
SOLUTION 12-12
First find E², the square of the supply voltage: E² = 3.0 × 3.0 = 9.0. Then P1 = 9.0/22 = 0.4091 W, P2 = 9.0/47 = 0.1915 W, and P3 = 9.0/68 = 0.1324 W. These should be rounded off to P1 = 0.41 W, P2 = 0.19 W, and P3 = 0.13 W.
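A Python sketch (added illustration) confirms that these branch powers account for all the power drawn from the supply:

def branch_powers(supply, resistances):
    return [supply**2 / r for r in resistances]   # Pn = E²/Rn for each branch

powers = branch_powers(3.0, [22, 47, 68])
print([round(p, 2) for p in powers])   # [0.41, 0.19, 0.13]
print(round(sum(powers), 2))           # 0.73 W total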
Kirchhoff’s Laws
The physicist Gustav Robert Kirchhoff (1824–1887) was a researcher and experimentalist in electricity, back in the time before radio, before electric lighting, and before much was understood about how electric currents flow.
Kirchhoff reasoned that current must work something like water in a network of pipes and that the current going into any point has to be the same as the current going out. This is true for any point in a circuit, no matter how many branches lead into or out of the point (Fig. 12-13).
In a network of water pipes that does not leak and into which no water
is added along the way, the total number of cubic meters going in has to be the same as the total volume going out. Water cannot form from nothing, nor can it disappear, inside a closed system of
pipes. Charge carriers, thought Kirchhoff, must act the same way in an electric circuit.
PROBLEM 12-13
In Fig. 12-13, suppose that each of the two resistors below point Z has a value
of 100 ohms and that all three resistors above Z have values of 10.0 ohms. The current through each 100-ohm resistor is 500 mA (0.500 A). What is the current through any of the 10.0-ohm resistors,
assuming that the current is equally distributed? What is the voltage, then, across any of the 10.0-ohm resistors?
SOLUTION 12-13
The total current into Z is 500 mA + 500 mA = 1.00 A. This must be divided three ways equally among the 10.0-ohm resistors. Therefore, the current through any one of them is 1.00/3 A = 0.333 A = 333 mA. The voltage across any one of the 10.0-ohm resistors is found by Ohm's law: E = IR = 0.333 × 10.0 = 3.33 V.
Fig. 12-13. Kirchhoff's current law. The current entering point Z is equal to the current leaving point Z. In this case, I1 + I2 = I3 + I4 + I5.
The sum of all the voltages, as you go around a circuit from some fixed point and return there from the opposite direction, taking polarity into account, is always zero. At first thought, some people find this strange. Certainly there is voltage in your electric hair dryer, radio, or computer! Yes, there is—between different points in the circuit. However, no single point can have an electrical potential with respect to itself. This is so simple that it's trivial. A point in a circuit is always shorted out to itself.
What Kirchhoff was saying when he wrote his voltage law is that voltage cannot appear out of nowhere, nor can it vanish. All the potential differences must balance out in any circuit, no matter how complicated and no matter how many branches there are.
Consider the rule you've already learned about series circuits: The voltages across all the resistors add up to the supply voltage. However, the polarities of the emfs across the resistors are opposite to that of the battery. This is shown in Fig. 12-14. It is a subtle thing, but it becomes clear when
a series circuit is drawn with all the components, including the battery or
other emf source, in line with each other, as in Fig. 12-14.
Fig. 12-14. Kirchhoff’s voltage law. The sum of the voltages
E + E1 + E2 + E3 + E4 = 0, taking polarity into account.
PROBLEM 12-14
Refer to the diagram of Fig. 12-14. Suppose that the four resistors have values of 50, 60, 70, and 80 ohms and that the current through them is 500 mA (0.500 A). What is the supply voltage E?
SOLUTION 12-14
Find the voltages E1, E2, E3, and E4 across each of the resistors. This is done using Ohm's law. In the case of E1, say, with the 50-ohm resistor, calculate E1 = 0.500 × 50 = 25 V. In the same way, you can calculate E2 = 30 V, E3 = 35 V, and E4 = 40 V. The supply voltage is the sum E = E1 + E2 + E3 + E4 = 25 + 30 + 35 + 40 = 130 V.
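Kirchhoff's voltage law makes a convenient self-check for problems like this one; a Python sketch (my own illustration):

def kvl_check(current, resistances):
    drops = [current * r for r in resistances]
    return sum(drops), drops   # the supply emf must balance the sum of drops

supply, drops = kvl_check(0.500, [50, 60, 70, 80])
print(drops)    # [25.0, 30.0, 35.0, 40.0]
print(supply)   # 130.0 V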
Quiz
Refer to the text in this chapter if necessary. A good score is eight correct. Answers are in the back of the book.
1. Suppose that 5.00 × 10¹⁷ electrical charge carriers flow past a point in 1.00 s. What is the electrical voltage?
(a) 0.080 V
(b) 12.5 V
(c) 5.00 V
(d) It cannot be calculated from this information.
2. An ampere also can be regarded as
(a) an ohm per volt.
(b) an ohm per watt.
(c) a volt per ohm.
(d) a volt-ohm.
3. Suppose that there are two resistances in a series circuit. One of the resistors has a value of 33 kΩ (that is, 33,000 or 3.3 × 10⁴ ohms). The value of the other resistor is not known. The power dissipated by the 33-kΩ resistor is 3.3 W. What is the current through the unknown resistor?
(a) 0.11 A
(b) 10 mA
(c) 0.33 mA
(d) It cannot be calculated from this information.
4. If the voltage across a resistor is E (in volts) and the current through that resistor is I (in milliamperes), then the power P (in watts) is given by the following formula:
(a) P = EI.
(b) P = EI × 10³.
(c) P = EI × 10⁻³.
(d) P = E/I.
5. Suppose that you have a set of five 0.5-W flashlight bulbs connected in parallel across a dc source of 3.0 V. If one of the bulbs is removed or blows out, what will happen to the current through the other four bulbs?
(a) It will remain the same.
(b) It will increase.
(c) It will decrease.
(d) It will drop to zero.
6. A good dielectric is characterized by
(a) excellent conductivity.
(b) fair conductivity.
(c) poor conductivity.
(d) variable conductivity.
7. Suppose that there are two resistances in a parallel circuit. One of the resistors has a value of 100 ohms. The value of the other resistor is not known. The power dissipated by the 100-ohm
resistor is 500 mW (that is, 0.500 W). What
is the current through the unknown resistor?
(a) 71 mA
(b) 25 A
(c) 200 A
(d) It cannot be calculated from this information.
8. Conventional current flows
(a) from the positive pole to the negative pole.
(b) from the negative pole to the positive pole.
(c) in either direction; it doesn’t matter.
(d) nowhere; current does not flow.
9. Suppose that a circuit contains 620 ohms of resistance and that the current in the circuit is 50.0 mA. What is the voltage across this resistance?
(a) 12.4 kV
(b) 31.0 V
(c) 8.06 × 10⁻⁵ V
(d) It cannot be calculated from this information.
10. Which of the following cannot be an electric charge carrier?
(a) A neutron
(b) An electron
(c) A hole
(d) An ion | {"url":"http://kartikowati.blogspot.com/","timestamp":"2014-04-16T07:14:09Z","content_type":null,"content_length":"193867","record_id":"<urn:uuid:fa4b9f6c-ad6e-49ba-96cf-96ad4bbac203>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
bessel function algorithm
Bill Allombert on Fri, 10 Feb 2012 15:26:44 +0100
bessel function algorithm
• To: pari-users@pari.math.u-bordeaux.fr
• Subject: bessel function algorithm
• From: Bill Allombert <Bill.Allombert@math.u-bordeaux1.fr>
• Date: Fri, 10 Feb 2012 15:26:36 +0100
• Delivery-date: Fri, 10 Feb 2012 15:26:44 +0100
• User-agent: Mutt/1.5.20 (2009-06-14)
Hello PARI users,
I forward this post from Marcel Bezerra about the Bessel function, which was misfiled.
From: Marcel Bezerra <gigamaxell@gmail.com>
Date: Fri, 10 Feb 2012 11:38:57 -0200
I'm working with cylindrical electromagnetic waveguides. The conditions I'm
studying now need solutions using Bessel functions with complex order. I
need to calculate them numerically, so I started to look for software
capable of calculating such functions.
I need some info about the algorithms used to calculate Bessel functions. I want to know what algorithm is used to calculate Bessel functions with complex order and argument in PARI/GP.
I found some papers regarding this subject. The first one is Algorithm 644 from Amos. But his paper says his method is applied only for nonnegative order. The second one is from Masao Kodama, which is exactly an algorithm for complex order and argument.
I used software called Mathematica, which showed in an example the Bessel function of order 7.3 + 1.0*I and argument 4.5 - 1.0*I. But they don't make any reference to the algorithm they use. I found out that the PARI/GP software is also capable of dealing with complex order; I tested the same example and got the same result as Mathematica.
I need information about the algorithm used in PARI to calculate Bessel functions, in order to know whether there are any limitations on order, argument, precision, and so on.
Thank you in advance!
Regards from Brazil!
Marcel Bezerra | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-1202/msg00000.html","timestamp":"2014-04-19T19:36:25Z","content_type":null,"content_length":"4606","record_id":"<urn:uuid:ff434a3c-d3e2-4f82-a7fd-e7f9afaff906>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
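(A side note added here, not part of the original thread: one independent way to cross-check such values is mpmath in Python, which evaluates Bessel functions of complex order and argument to arbitrary precision.)

from mpmath import mp, besselj

mp.dps = 30                              # working precision, decimal digits
print(besselj(7.3 + 1.0j, 4.5 - 1.0j))   # J_nu(z) with complex order and argument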
Maximum likelihood estimator for Power-law with Exponential cutoff
For fitting empirical data to a power law, I am aware of the work by Clauset et al. (http://arxiv.org/abs/0706.1062) and how to use maximum likelihood estimation. There also exists a simple maximum likelihood estimator for exponential distributions.
My data seems to be power-law with an exponential cutoff after some time. Is there a closed-form estimator (e.g., maximum likelihood) to estimate the power-law exponent, the exponential rate, and the point where the distribution cuts off into exponential?
Or do I have to determine the cutoff point myself and then use two separate estimators, one for power-law and one for exponential?
Thanks, Chris
power-series probability-distributions estimation-theory
2 Answers
In the case of a power law, $ P(x; \alpha, x_{min}) = \frac{\alpha - 1}{x_{min}} \left( \frac{x}{x_{min}} \right)^{-\alpha}$, the maximum likelihood estimator (MLE) for $\alpha$ is indeed simple if
given the value for $x_{min}$, namely $\hat{\alpha} = 1 + n \cdot \left( \sum_{i=1}^n \ln{(x_i/x_{min})}\right)^{-1}$. However, there is no simple expression for estimating $x_{min}$ as the
likelihood is increasing in $x_{min}$, corresponding to throwing out more and more of the data, so another method is needed (Clauset et al. maximize the similarity between the observed data above $x_
{min}$ and the fitted distribution by using KS statistic).
In the case of a power law with an exponential cut-off, $P(x; \alpha, \lambda, x_{min}) = \frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha,\lambda x_{min})} x^{-\alpha} e^{-\lambda x}$, finding exact expressions is much harder (the derivatives of the log-likelihood involve, among other things, a Meijer G-function, and a closed form for the solution seems unlikely). The estimators of $\lambda$ and $\alpha$ are coupled (due to the normalization constant), so mikitov's idea of finding them sequentially does not work, unfortunately.
We therefore have to use numerical methods. The log-likelihood is $\mathcal{L} = n(1-\alpha)\ln{\lambda} - n\ln{\Gamma(1-\alpha,x_{min}\lambda)} - \alpha\sum_{i=1}^n\ln{x_i} - \lambda\sum_{i=1}^n x_i$. Mathematica's NMaximize seems to do a fairly good job of finding the MLEs:
Clear[\[Lambda], \[Alpha], xmin]
NMaximize[{Length[xs] Log[\[Lambda]^(1 - \[Alpha])/Re@Gamma[1 - \[Alpha], xmin \[Lambda]]] -
    \[Alpha] Total[Log[xs]] - \[Lambda] Total[xs],
  \[Alpha] >= 1, \[Alpha] <= 3, \[Lambda] >= 0, xmin > 0, xmin <= Min[xs]},
 {\[Alpha], \[Lambda], xmin}]
where xs is data with $x_{min} = \min x_i$. This would have to be combined with a KS statistic maximization for $x_{min}$ similar to that for the regular power laws.
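(For readers who prefer Python, here is a rough equivalent sketch under my own assumptions, not a tested drop-in; scipy's incomplete gamma routines require a positive first argument, so mpmath supplies the normalization term.)

import numpy as np
from mpmath import gammainc, log as mplog
from scipy.optimize import minimize

def neg_loglik(params, xs, xmin):
    alpha, lam = params
    # log of Gamma(1 - alpha, lam * xmin), the upper incomplete gamma, via mpmath
    log_norm = float(mplog(gammainc(1 - alpha, lam * xmin)))
    ll = (len(xs) * ((1 - alpha) * np.log(lam) - log_norm)
          - alpha * np.sum(np.log(xs)) - lam * np.sum(xs))
    return -ll

# xs: data array with all values >= xmin; the starting point is arbitrary:
# res = minimize(neg_loglik, x0=[1.5, 0.01], args=(xs, xs.min()),
#                bounds=[(1.0, 3.0), (1e-9, None)])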
Power-law + cut-off seems to correspond to a pdf like $f(x; \alpha, \lambda) = Cx^{-\alpha}e^{-\lambda x}$
Thus, you should find the maximum likelihood estimators of both $\alpha$ and $\lambda$. This can be done sequentially (i.e., once you find the ML estimator of $\alpha$, you insert it in the likelihood function and find $\hat{\lambda}_{ML}$).
They do not seem very difficult to obtain; see Kay's book, first volume, for maybe some help. If there is no closed-form solution, use numerical methods (see http://en.wikipedia.org/wiki/Expectation-maximization_algorithm).
No valid answer? – mikitov Sep 7 '11 at 14:43
Airy Rainbow Simulator
Bow from 0.75mm diameter drops illuminated by a distant point source. Supernumeraries like these are not seen in nature because they are blurred by the finite angular size of the sun and
variations in drop size. AirySim simulation.
When a plane light wave interacts with a water drop, the outgoing wave after a single internal reflection is curved. If the shape of the wave is known, then phase differences along it can be calculated, and thus the intensity of the resulting rainbow and its supernumeraries. The English Astronomer Royal, George Biddell Airy (1801-1892), approximated the scattered wavefront shape with a cubic form and developed an analytic expression for the rainbow intensities in terms of what are now called Airy integrals or functions. Airy's theory gives satisfactory predictions of the observable features of white light rainbows(1) and is computationally far faster than the exact predictions of Mie theory.
AirySim precomputes and stores Airy functions for a whole range of arguments using an ascending series expansion(2). To compute a rainbow for a particular drop size, wavelength and refractive index(3), appropriate values of the Airy functions for each scattering angle are derived by interpolation of the stored values or, where necessary, additional direct computation. White light rainbows are obtained by repeating the calculation for closely spaced wavelengths between 380 and 700 nm and summing the intensities at each angle after weighting them by a spectral solar radiance(4).
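The tabulate-then-interpolate idea is easy to prototype (a sketch only: this is not AirySim's code, it leans on scipy rather than a hand-rolled series expansion, and the mapping from scattering angle to the Airy argument z is deliberately left abstract):

import numpy as np
from scipy.special import airy
from scipy.interpolate import interp1d

# Tabulate Ai(z) once over the argument range of interest ...
z_grid = np.linspace(-12.0, 4.0, 2001)
ai_grid = airy(z_grid)[0]          # scipy's airy() returns (Ai, Ai', Bi, Bi')
ai = interp1d(z_grid, ai_grid)

def rainbow_intensity(z):
    # ... then interpolate per scattering angle. In Airy theory the
    # monochromatic intensity is proportional to Ai(z)^2, where z is the
    # scaled deviation from the geometric rainbow angle (the scaling
    # depends on drop radius, wavelength and refractive index).
    return ai(z) ** 2

A white-light bow would repeat this for closely spaced wavelengths and sum the intensities after weighting by a solar spectrum, as described above.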
When simulations for non-monodisperse droplets are required, AirySim uses a droplet population function normally distributed in radius. All of the above calculations have to be repeated for the different droplet radii and the intensities summed.
Rainbows from the sun rather than plane parallel light are derived by convolving the angular intensities with a disk intensity function.
Representation of colours is always problematic. AirySim uses the CIE and Bruton(5) colour models written for IRIS.
AirySim was produced by Les Cowley and Michael Schroeder.
AirySim is not yet available for download.
(1) Lee, R. L., "Mie theory, Airy theory, and the natural rainbow," Applied Optics 37, 1506-1519 (1998).
(2) Abramowitz M. & Stegun I.A., Handbook of Mathematical Functions, Dover.
(3) Water refractive indices are from IAPWS (International Association for the Properties of Water and Steam), which are in turn based on P. Schiebener, J. Straub, J. M. H. L. Sengers, J. S. Gallagher, "Refractive index of water and steam as function of wavelength, temperature and density," J. Phys. Chem. Ref. Data, 19, 677-717, (1990).
(4) Spectral solar radiances were those used by Raymond Lee(1) and kindly supplied by him in detailed tabular form.
(5) Dan Bruton of Stephen F. Austin State University, "Color Science", http://www.physics.sfasu.edu/astro/color.html | {"url":"http://www.atoptics.co.uk/rainbows/airysim.htm","timestamp":"2014-04-20T10:46:57Z","content_type":null,"content_length":"13222","record_id":"<urn:uuid:7032c875-8c86-4d57-920c-5cb3329e3d61>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
Functions, Limits, and Continuity
Problem : Could the following be the graph of a function f (x) ?
Figure: Is this graph a function f (x)?
No. For each positive x value, the graph contains two values for f (x). Since a function takes on only one value for each x in its domain, this cannot be the graph of a function.
Problem : Find a linear function that passes through the points (1, 3) and (-2, -3).
We substitute
(x_1, y_1) = (1, 3)
(x_2, y_2) = (-2, -3)
into the equation given in this section; the slope is (y_2 - y_1)/(x_2 - x_1) = (-3 - 3)/(-2 - 1) = 2, so we obtain
f (x) = 2(x - 1) + 3 = 2x + 1
Problem : Use a power function to solve the following problem. A mathematician named George mows lawns all summer and manages to save up 3,000 dollars. George decides to invest his earnings in an account that pays an annual interest rate of 7 percent. How much money will George have in his account after 5 years (assuming he does not make any further deposits)?
We may use a power function to describe this situation because we want to study a quantity that is being multiplied by a fixed number each year. The initial value in this problem is 3000 (in
dollars), and the growth rate is 1.07 (per year). Thus the appropriate power function is
f (t) = 3000(1.07)^t
, where t is the number of years from the time the money is invested. Plugging in t = 5, we see that after 5 years George will have
f (5) = 3000(1.07)^5 ≈ 4207.66 dollars | {"url":"http://www.sparknotes.com/math/calcbc1/functionslimitsandcontinuity/problems.html","timestamp":"2014-04-19T10:03:24Z","content_type":null,"content_length":"53439","record_id":"<urn:uuid:fcd430fa-add3-42ef-9b86-ac2cb8374bc3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Dirk Schütz
My research interests include
• Topology of closed 1-forms
• Topological Robotics
• Assembly maps in algebraic K- and L-theory
• Parametrized fixed point theory
For more information on research in topology here in Durham see the
Durham Topology Research Page
Durham Conference on Geometry and Topology
in Honour of John Bolton and Cherry Kearton took place 20-22 June 2010. | {"url":"http://maths.dur.ac.uk/~dma0ds/","timestamp":"2014-04-16T10:35:11Z","content_type":null,"content_length":"5643","record_id":"<urn:uuid:8583dcc8-2da0-4a5d-a517-29fd290b5803>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Satellite-to-Satellite Tracking and Satellite Gravity Gradiometry
• The purpose of satellite-to-satellite tracking (SST) and/or satellite gravity gradiometry (SGG) is to determine the gravitational field on and outside the Earth's surface from given gradients of
the gravitational potential and/or the gravitational field at satellite altitude. In this paper both satellite techniques are analysed and characterized from mathematical point of view.
Uniqueness results are formulated. The justification is given for approximating the external gravitational field by finite linear combination of certain gradient fields (for example, gradient
fields of single-poles or multi-poles) consistent to a given set of SGG and/or SST data. A strategy of modelling the gravitational field from satellite data within a multiscale concept is
described; illustrations based on the EGM96 model are given.
Author: Willi Freeden, Volker Michel, Helga Nutz
URN (permanent link): urn:nbn:de:hbz:386-kluedo-10735
Serie (Series number): Berichte der Arbeitsgruppe Technomathematik (AGTM Report) (236)
Document Type: Preprint
Language of publication: English
Year of Completion: 2001
Year of Publication: 2001
Publishing Institute: Technische Universität Kaiserslautern
Tag: Earth's external gravitational field; clo; fundamental systems; satellite gravity gradiometry; satellite-to-satellite tracking; uniqueness
Faculties / Organisational entities: Fachbereich Mathematik
DDC-Cassification: 510 Mathematik
MSC-Classification (mathematics): 31B05 Harmonic, subharmonic, superharmonic functions
35J05 Laplacian operator, reduced wave equation (Helmholtz equation), Poisson equation [See also 31Axx, 31Bxx]
86A20 Potentials, prospecting
86A30 Geodesy, mapping problems | {"url":"https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1134","timestamp":"2014-04-18T17:31:23Z","content_type":null,"content_length":"20815","record_id":"<urn:uuid:df9ef5d0-9aa5-44a0-8627-28215ffaaa29>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Android Lesson Two: Ambient and Diffuse Lighting
Welcome to the second tutorial for Android. In this lesson, we’re going to learn how to implement Lambertian reflectance using shaders, otherwise known as your standard diffuse lighting. In OpenGL
ES 2, we need to implement our own lighting algorithms, so we will learn how the math works and how we can apply it to our scenes.
Assumptions and prerequisites
Each lesson in this series builds on the lesson before it. Before we begin, please review the first lesson as this lesson will build upon the concepts introduced there.
What is light?
A world without lighting would be a dim one, indeed. Without light, we would not even be able to perceive the world or the objects that lie around us, except via the other senses such as sound and
touch. Light shows us how bright or dim something is, how near or far it is, and what angle it lies at.
In the real world, what we perceive as light is really the aggregation of trillions of tiny particles called photons, which fly out of a light source, bounce around thousands or millions of times,
and eventually reach our eye where we perceive it as light.
How can we simulate the effects of light via computer graphics? There are two popular ways to do it: ray tracing, and rasterisation. Ray tracing works by mathematically tracing actual rays of light
and seeing where they end up. This technique gives very accurate and realistic results, but the downside is that simulating all of those rays is very computationally expensive, and usually too slow
for real-time rendering. Due to this limitation, most real-time computer graphics use rasterisation instead, which simulates lighting by approximating the result. Given the realism of recent games,
rasterisation can also look very nice, and is fast enough for real-time graphics even on mobile phones. Open GL ES is primarily a rasterisation library, so this is the approach we will focus on.
The different kinds of light
It turns out that we can abstract the way that light works and come up with three basic types of lighting:
Ambient lighting
This is a base level of lighting that seems to pervade an entire scene. It is light that doesn’t appear to come from any light source in particular because it has bounced around so much before
reaching you. This type of lighting can be experienced outdoors on an overcast day, or indoors as the cumulative effect of many different light sources. Instead of calculating all of the individual
lights, we can just set a base light level for the object or scene.
Diffuse lighting
This is light that reaches your eye after bouncing directly off of an object. The illumination level of the object varies with its angle to the lighting. Something facing the light head on is lit
more brightly than something facing the light at an angle. Also, we perceive the object to be the same brightness no matter which angle we are at relative to the object. This is otherwise known as
Lambert’s cosine law. Diffuse lighting or Lambertian reflectance is common in everyday life and can be easily seen on a white wall lit up by an indoor light.
Specular lighting
Unlike diffuse lighting, specular lighting changes as we move relative to the object. This gives "shininess" to the object and can be seen on "smoother" surfaces such as glass and other shiny materials.
Simulating light
Just as there are three main types of light in a 3D scene, there are also three main types of light sources: directional, point, and spotlight. These can also be easily seen in everyday life.
Directional lighting
Directional lighting usually comes from a bright source that is so far away that it lights up the entire scene evenly and to the same brightness. This light source is the simplest type as the light
is the same strength and direction no matter where you are in the scene.
Point lighting
Point lights can be added to a scene in order to give more varied and realistic lighting. The illumination of a point light falls off with distance, and its light rays travel out in all directions
with the point light at the center.
Spot lighting
In addition to the properties of a point light, spot lights also have the direction of light attenuated, usually in the shape of a cone.
The math
In this lesson, we’re going to be looking at ambient lighting and diffuse lighting coming from a point source.
Ambient lighting
Ambient lighting is really indirect diffuse lighting, but it can also be thought of as a low-level light which pervades the entire scene. If we think of it that way, then it becomes very easy to
final color = material color * ambient light color
For example, let’s say our object is red and our ambient light is a dim white. Let’s assume that we store color as an array of three colors: red, green, and blue, using the RGB color model:
final color = {1, 0, 0} * {0.1, 0.1, 0.1} = {0.1, 0.0, 0.0}
The final color of the object will be a dim red, which is what you’d expect if you had a red object illuminated by a dim white light. There is really nothing more to basic ambient lighting than that,
unless you want to get into more advanced lighting techniques such as radiosity.
Diffuse lighting – point light source
For diffuse lighting, we need to add attenuation and a light position. The light position will be used to calculate the angle between the light and the surface, which will affect the surface’s
overall level of lighting. It will also be used to calculate the distance between the light and the surface, which determines the strength of the light at that point.
Step 1: Calculate the lambert factor.
The first major calculation we need to make is to figure out the angle between the surface and the light. A surface which is facing the light straight-on should be illuminated at full strength, while
a surface which is slanted should get less illumination. The proper way to calculate this is by using Lambert’s cosine law. If we have two vectors, one being from the light to a point on the surface,
and the second being a surface normal (if the surface is a flat plane, then the surface normal is a vector pointing straight up, or orthogonal to that surface), then we can calculate the cosine by
first normalizing each vector so that it has a length of one, and then by calculating the dot product of the two vectors. This is an operation that can easily be done via OpenGL ES 2 shaders.
Let’s call this the lambert factor, and it will have a range of between 0 and 1.
light vector = light position - object position
cosine = dot product(object normal, normalize(light vector))
lambert factor = max(cosine, 0)
First we calculate the light vector by subtracting the object position from the light position. Then we get the cosine by taking the dot product of the object normal and the light vector. We normalize the light vector, which means scaling it so that it has a length of one. The object normal should already have a length of one. Taking the dot product of two normalized vectors gives you the cosine of the angle between them. Because the dot product can have a range of -1 to 1, we clamp it to a range of 0 to 1.
Here’s an example with an flat plane at the origin and the surface normal pointing straight up toward the sky. The light is positioned at {0, 10, -10}, or 10 units up and 10 units straight ahead. We
want to calculate the light at the origin.
light vector = {0, 10, -10} - {0, 0, 0} = {0, 10, -10}
object normal = {0, 1, 0}
In plain English, if we move from where we are along the light vector, we reach the position of the light. To normalize the vector, we divide each component by the vector length:
light vector length = square root(0*0 + 10*10 + -10*-10) = square root(200) = 14.14
normalized light vector = {0, 10/14.14, -10/14.14} = {0, 0.707, -0.707}
Then we calculate the dot product:
dot product({0, 1, 0}, {0, 0.707, -0.707}) = (0 * 0) + (1 * 0.707) + (0 * -0.707) = 0 + 0.707 + 0 = 0.707
Here is a good explanation of the dot product and what it calculates. Finally, we clamp the range:
lambert factor = max(0.707, 0) = 0.707
OpenGL ES 2's shading language has built-in support for some of these functions, so we don't need to do all of the math by hand, but it can still be useful to understand what is going on.
Step 2: Calculate the attenuation factor.
Next, we need to calculate the attenuation. Real light attenuation from a point light source follows the inverse square law, which can also be stated as:
luminosity = 1 / (distance * distance)
Going back to our example, since we have a distance of 14.14, here is what our final luminosity looks like:
luminosity = 1 / (14.14*14.14) = 1 / 200 = 0.005
As you can see, the inverse square law can lead to a strong attenuation over distance. This is how light from a point light source works in the real world, but since our graphics displays have a
limited range, it can be useful to dampen this attenuation factor so we still get realistic lighting without things looking too dark.
Step 3: Calculate the final color.
Now that we have both the cosine and the attenuation, we can calculate our final illumination level:
final color = material color * (light color * lambert factor * luminosity)
Going with our previous example of a red material and a full white light source, here is the final calculation:
final color = {1, 0, 0} * ({1, 1, 1} * 0.707 * 0.005}) = {1, 0, 0} * {0.0035, 0.0035, 0.0035} = {0.0035, 0, 0}
To recap, for diffuse lighting we need to use the angle between the surface and the light as well as the distance between the surface and the light in order to calculate the final overall diffuse
illumination level. Here are the steps:
//Step one
light vector = light position - object position
cosine = dot product(object normal, normalize(light vector))
lambert factor = max(cosine, 0)
//Step two
luminosity = 1 / (distance * distance)
//Step three
final color = material color * (light color * lambert factor * luminosity)
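Before moving to shader code, here is the same recipe as a small standalone sketch (plain Python rather than GLSL, purely for illustration; the inputs reproduce the worked example above):

import numpy as np

def diffuse(material_rgb, light_rgb, light_pos, surf_pos, surf_normal):
    light_vec = light_pos - surf_pos
    distance = np.linalg.norm(light_vec)
    # Step one: Lambert factor = clamped cosine between normal and light direction.
    lambert = max(np.dot(surf_normal, light_vec / distance), 0.0)
    # Step two: inverse-square attenuation.
    luminosity = 1.0 / (distance * distance)
    # Step three: combine with the material and light colors.
    return material_rgb * (light_rgb * lambert * luminosity)

# Red surface at the origin, white light at {0, 10, -10}:
print(diffuse(np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]),
              np.array([0.0, 10.0, -10.0]), np.zeros(3), np.array([0.0, 1.0, 0.0])))
# -> approximately [0.0035, 0, 0], matching the hand calculation above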
Putting this all into OpenGL ES 2 shaders
The vertex shader
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.
+ "uniform mat4 u_MVMatrix; \n" // A constant representing the combined model/view matrix.
+ "uniform vec3 u_LightPos; \n" // The position of the light in eye space.
+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.
+ "attribute vec3 a_Normal; \n" // Per-vertex normal information we will pass in.
+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.
+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
// Transform the vertex into eye space.
+ " vec3 modelViewVertex = vec3(u_MVMatrix * a_Position); \n"
// Transform the normal's orientation into eye space.
+ " vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0)); \n"
// Will be used for attenuation.
+ " float distance = length(u_LightPos - modelViewVertex); \n"
// Get a lighting direction vector from the light to the vertex.
+ " vec3 lightVector = normalize(u_LightPos - modelViewVertex); \n"
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
+ " float diffuse = max(dot(modelViewNormal, lightVector), 0.1); \n"
// Attenuate the light based on distance.
+ " diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance))); \n"
// Multiply the color by the illumination level. It will be interpolated across the triangle.
+ " v_Color = a_Color * diffuse; \n"
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
+ " gl_Position = u_MVPMatrix * a_Position; \n"
+ "} \n";
There is quite a bit going on here. We have our combined model/view/projection matrix as in lesson one, but we’ve also added a model/view matrix. Why? We will need this matrix in order to calculate
the distance between the position of the light source and the position of the current vertex. For diffuse lighting, it actually doesn’t matter whether you use world space (model matrix) or eye space
(model/view matrix) so long as you can calculate the proper distances and angles.
We pass in the vertex color and position information, as well as the surface normal. We will pass the final color to the fragment shader, which will interpolate it between the vertices. This is also
known as Gouraud shading.
Let’s look at each part of the shader to see what’s going on:
// Transform the vertex into eye space.
+ " vec3 modelViewVertex = vec3(u_MVMatrix * a_Position); \n"
Since we’re passing in the position of the light in eye space, we convert the current vertex position to a coordinate in eye space so we can calculate the proper distances and angles.
// Transform the normal's orientation into eye space.
+ " vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0)); \n"
We also need to transform the normal's orientation. Here we are just doing a regular matrix multiplication like with the position, but if the model or view matrices have been scaled or skewed, this won't work: we'll actually have to undo the effect of the skew or scale by multiplying the normal by the transpose of the inverse of the original matrix. This website best explains why we have to do this.
// Will be used for attenuation.
+ " float distance = length(u_LightPos - modelViewVertex); \n"
As shown before in the math section, we need the distance in order to calculate the attenuation factor.
// Get a lighting direction vector from the light to the vertex.
+ " vec3 lightVector = normalize(u_LightPos - modelViewVertex); \n"
We also need the light vector to calculate the Lambertian reflectance factor.
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
+ " float diffuse = max(dot(modelViewNormal, lightVector), 0.1); \n"
This is the same math as above in the math section, just done in an OpenGL ES 2 shader. The 0.1 at the end is just a really cheap way of doing ambient lighting (the value will be clamped to a minimum
of 0.1).
// Attenuate the light based on distance.
+ " diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance))); \n"
The attenuation math is a bit different than above in the math section. We scale the square of the distance by 0.25 to dampen the attenuation effect, and we also add 1.0 to the modified distance so
that we don’t get oversaturation when the light is very close to an object (otherwise, when the distance is less than one, this equation will actually brighten the light instead of attenuating it).
// Multiply the color by the illumination level. It will be interpolated across the triangle.
+ " v_Color = a_Color * diffuse; \n"
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
+ " gl_Position = u_MVPMatrix * a_Position; \n"
Once we have our final light color, we multiply it by the vertex color to get the final output color, and then we project the position of this vertex to the screen.
The pixel shader
final String fragmentShader =
"precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
+ "varying vec4 v_Color; \n" // This is the color from the vertex shader interpolated across the
// triangle per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = v_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
Because we are calculating light on a per-vertex basis, our fragment shader looks the same as it did in the first lesson — all we do is pass the color directly through. In the next lesson, we'll look at per-pixel lighting.
Per-Vertex versus per-pixel lighting
In this lesson we have focused on implementing per-vertex lighting. For diffuse lighting of objects with smooth surfaces, such as terrain, or for objects with many triangles, this will often be good
enough. However, when your objects don’t contain many vertices (such as our cubes in this example program) or have sharp corners, vertex lighting can result in artifacts as the light level is
linearly interpolated across the polygon; these artifacts also become much more apparent when specular highlights are added to the image. More can be seen at the Wiki article on Gouraud shading.
An explanation of the changes to the program
Besides the addition of per-vertex lighting, there are other changes to the program. We’ve switched from displaying a few triangles to a few cubes, and we’ve also added utility functions to load in
the shader programs. There are also new shaders to display the position of the light as a point, as well as other various small changes.
Construction of the cube
In lesson one, we packed both position and color attributes into the same array, but OpenGL ES 2 also lets us specify these attributes in separate arrays:
// X, Y, Z
final float[] cubePositionData =
// In OpenGL counter-clockwise winding is default. This means that when we look at a triangle,
// if the points are counter-clockwise we are looking at the "front". If not we are looking at
// the back. OpenGL has an optimization where all back-facing triangles are culled, since they
// usually represent the backside of an object and aren't visible anyways.
// Front face
-1.0f, 1.0f, 1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
// R, G, B, A
final float[] cubeColorData =
// Front face (red)
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
New OpenGL flags
We have also enabled culling and the depth buffer via glEnable() calls:
// Use culling to remove back faces.
GLES20.glEnable(GLES20.GL_CULL_FACE);

// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
As an optimization, you can tell OpenGL to eliminate triangles that are on the back side of an object. When we defined our cube, we also defined the three points of each triangle so that they are
counter-clockwise when looking at the “front” side. When we flip the triangle around so we’re looking at the “back” side, the points then appear clockwise. You can only ever see three sides of a cube
at the same time so this optimization tells OpenGL to not waste its time drawing the back sides of triangles.
Later when we draw transparent objects we may want to turn culling back off, as then it will be possible to see the back sides of objects.
We’ve also enabled depth testing. If you always draw things in order from back to front then depth testing is not strictly necessary, but by enabling it not only do you not need to worry about the
draw order (although rendering can be faster if you draw closer objects first), but some graphics cards will also make optimizations which can speed up rendering by spending less time drawing pixels
that will be drawn over anyways.
Changes in loading shader programs
Because the steps to loading shader programs in OpenGL are mostly the same, these steps can easily be refactored into a separate method. We've also added the following calls to retrieve debug info, in case the compilation/link fails:

GLES20.glGetShaderInfoLog(shaderHandle);
GLES20.glGetProgramInfoLog(programHandle);
Vertex and shader program for the light point
There is a new vertex and shader program specifically for drawing the point on the screen that represents the current position of the light:
// Define a simple shader program for our point.
final String pointVertexShader =
"uniform mat4 u_MVPMatrix; \n"
+ "attribute vec4 a_Position; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = u_MVPMatrix \n"
+ " * a_Position; \n"
+ " gl_PointSize = 5.0; \n"
+ "} \n";
final String pointFragmentShader =
"precision mediump float; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = vec4(1.0, \n"
+ " 1.0, 1.0, 1.0); \n"
+ "} \n";
This shader is similar to the simple shader from the first lesson. There's a new property, gl_PointSize, which we hard-code to 5.0; this is the output point size in pixels. It's used when we draw the point using GLES20.GL_POINTS as the mode. We've also hard-coded the output color to white.
Further Exercises
• Try removing the “oversaturation protection” and see what happens.
• There is a flaw with the way the ambient lighting is done. Can you spot what it is?
• What happens if you add a gl_PointSize to the cube shader and draw it using GL_POINTS?
Further Reading
The further reading section above was an invaluable resource to me while writing this tutorial, so I highly recommend reading them for more information and explanations.
Wrapping up
The full source code for this lesson can be downloaded from the project site on GitHub.
A compiled version of the lesson can also be downloaded directly from the Android Market:
Thanks for getting through another big lesson! I learned a lot while writing it, and I hope you learned a lot by following it through as well. Feel free to ask any questions or offer feedback, and
thanks for stopping by!
About the book
Android is booming like never before, with millions of devices shipping every day. In OpenGL ES 2 for Android: A Quick-Start Guide, you’ll learn all about shaders and the OpenGL pipeline, and
discover the power of OpenGL ES 2.0, which is much more feature-rich than its predecessor.
It’s never been a better time to learn how to create your own 3D games and live wallpapers. If you can program in Java and you have a creative vision that you’d like to share with the world, then
this is the book for you.
66 thoughts on “Android Lesson Two: Ambient and Diffuse Lighting”
1. This is the first example that I have found that actually had a good description on how to get lighting working properly. Thank you!
1. Thanks for the feedback, Ryan, I appreciate it! I hope to add some more explanations behind how the lighting works and why we use the math we do.
3. These tutorials are great! they describe every step in detail without assuming that your a mathematician. thanks for this!
4. I have to say these are the best programming guides that I have seen on the net: detailed and comprehensive. Many thanks for publishing such excellent guides!
I will read all of these study guides many times!
The other learning materials attached to your tutorials are also very useful for me!
Thank you very much!
5. Thank you for the great comments and feedback! Within three weeks more articles will hopefully be going live. Sorry to keep everyone waiting.
6. Not only is this one of the very few OGL2.0 android tuts on the net, its a damn GOOD one at that! Especially the math parts with links to detailed information on the principles behind it, makes
it understandable even for a dumbass like me! THANKS!!!!
7. Our universe is described as infinitely large and the atomic universe is infinitely tiny. But what is infinitely massive and little in truly mathematical terms?
8. Thanks for this great series of tutorials, I’ve learned a lot. What is the answer to the question in the execise part? I assume the flaw is that you can’t do seperate ambient color and diffuse
color that way.
Go on with the excellent work
9. Thank you SO much for these tutorials. Please keep them coming.
10. Thanks everyone again for the great feedback. As for the flaw with the ambient lighting, it’s been so long since I wrote this tutorial, but looking at it again, it looks like the flaw is that the
ambient lighting is still attenuated with distance. Maybe that’s not a flaw depending on the effect you’re going for.
11. Thank you very much for the tutorials provided over here. It’s really helping me a total noob to venture into 3D world
One question, what is generic vertex attributes array? When should we enable/disable them?
1. Hi Alvin,
Thanks for the compliments! In OpenGL ES 2.0, there are no longer fixed attributes for specific features, such as color, normal, etc…. so you use generic attributes for these. The meaning of
these attributes is now interpreted in the shaders. In this lesson for example, we use generic vertex attributes to represent our position and color. You also need to call
glEnableVertexAttribArray before you can pass the vertex data through to a shader. It seems that you only need to call this once and not on every draw frame.
Hope this helps!
1. Yup that helped me to understand more. Thanks again!
13. i have learnt lesson one ,it’s pretty good!
1. Thanks!
14. I separated the class into several classes but kept all the matrices and handles, and the program runs OK, with 5 cubes and a point light. But when I compare it to the original effect, I found the light point fails to light the cube.
any suggestion?
1. My guess is that there’s an error with either the shader or the input values. Maybe try from the original source and make your modifications one by one?
1. i get the error,
mLightPosInModelSpace = new float[4];
but in original code its
float[] mLightPosInModelSpace = {0.0f, 0.0f, 0.0f, 1.0f};
15. second question:in your code i found something like this
public static final String pointFragmentShader =
“precision mediump float; \n”
+ “void main() \n”
+ “{ \n”
+ ” gl_FragColor = vec4(1.0, \n”
+ ” 1.0, 1.0, 1.0); \n”
+ “} \n”;
is there a way to get the GLSL code out of the String and reach the same effect?
or is there another way i can create a shader?
1. Yes, you can place them as text files under res/raw, and read them in using the code found here: https://github.com/learnopengles/Learn-OpenGLES-Tutorials/blob/master/android/
There should be an example of that in the last couple of lessons at the source code. Let me know if that helps!
16. Maybe you should mention more about gl_PointSize.
For example I found out you need to set glEnable(GL_POINT_SMOOTH) if you want to make the point size larger than 1 pixel. At least on my desktop implementation. My maximum point size is 63 and
you can find out yours using this code:
GLfloat pointSizeRange[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, pointSizeRange);
GLfloat min = pointSizeRange[0];
GLfloat max = pointSizeRange[1];
1. Thanks for sharing that, Steve!
17. Another really easy way is to store the shader code in a string in res/strings and then import it in the activity with getResources().getString(R.string.yourshadercode);
1. Hi Sam,
True, this could be even easier than the raw resource. I haven’t tried to see what happens to the formatting though with multi-line; does it get preserved decently well?
18. awesome tutorial,
thank you !
20. Great set of tutorials.
Finally managed to make my first OpenGLES2 app running thanks to your tutorials.
1. Sweet!
21. Hi, great tutorial! Just one question… what value do you pass into a_Normal?
1. Hi Joey,
If you imagine that the current vertex is part of a flat plane, representing the underlying surface, then the a_Normal should be a vector pointing directly away from that plane. Let me know
if this illustration from Wolfram helps out.
1. Hi Admin,
Thanks for the reply… I ended up figuring it out, but then I ran into a different problem with directional light, I sent you an email (from contact form), hope that’s okay
22. How can I calculate specular lighting. I simply need a term
1. This looks like a good resource: http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/lighting.php
23. thanks so much. these series of tutorial are great. thanks again!
1. No problem
24. What I did to fix the ambiant light which was depending on the distance :
uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
uniform vec3 u_LightPos;
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
varying vec4 v_Color;
void main()
vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);
vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
float distance = length(u_LightPos - modelViewVertex);
vec3 lightVector = normalize(u_LightPos - modelViewVertex);
float diffuse = max(dot(modelViewNormal, lightVector), 0.0); // remove approx ambient light
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
v_Color = a_Color * (diffuse + 0.3)/2.0; // average between ambient and diffuse
gl_Position = u_MVPMatrix * a_Position;
I hope this is correct
1. Well, as they say there’s more than one way.
25. This tutorial is of very high quality. It pinpoints the essentials of how the user program in Android interacts with OpenGL, and how tasks can be off-loaded to shaders, in a clear and detailed way.
I am making my first homebrew game apps with OpenGL ES2 for learning purpose , these tutorials serve as an excellent starting point for my project.
Thank you so much.
1. Thanks for the great compliments, I really appreciate it and I’d love to see what you come up with and would be happy to help promote as well!
26. It’s a very clear tutorial, congratulations! I especially like your “Different kinds of light” introduction, it’s very useful, thanks for your effort. FipS
28. Hi there,
first off thanks for the tutorial, you really did a great job here.
The one question I have is where we actually “move” our point or set the moving path for it.
At the moment my guess would be in the drawLight method at GLES20.glVertexAttrib3f()?
Maybe i just have a blackout but could someone give me a hint on this please?
Thanks in advance
1. You’re on the right track, we draw the actual point in these lines: https://github.com/learnopengles/Learn-OpenGLES-Tutorials/blob/master/android/AndroidOpenGLESLessons/src/com/learnopengles/
It’s animated at these lines: https://github.com/learnopengles/Learn-OpenGLES-Tutorials/blob/master/android/AndroidOpenGLESLessons/src/com/learnopengles/android/lesson2/LessonTwoRenderer.java
I need to update all of these tutorials to make the source code clearer; it’s been a while.
29. Hi, Thanks for writing these tutorial. But there was something that’s not clear to me.
Here, you said
//Step three
final color = material color * (light color * lambert factor * luminosity)
But in vertex shader, I did not find the light color to be multiplied also.
// Multiply the color by the illumination level. It will be interpolated across the triangle.
+ ” v_Color = a_Color * diffuse;
Is there something missed in my observation?
1. In this case, the light color is assumed to be white (it’s been a while since I touched the code, but this is what it looks like to me).
30. Hi there.
I want to separate some fragments of the scene, so I've made a few classes. I've got a class Triangle, which takes class Shaders as a parameter and describes triangles in the same color (lighted by one shader). Class Shaders describes shaders and takes string codes for shaders as a parameter.
All other parameters are just like the code from these lessons.
My object's color is blue, but when I add light, its color changes to red. Why is that? Does anyone have some ideas? I can share my code by e-mail. Contact me if you think you could help. I will be so glad.
1. I recommend posting a question to StackOverflow with source attached, and you can always share the link here. Sounds like it has something to do with the way the color is multiplied or
assigned to gl_FragColor.
1. Ok. Thank You for answer. So I put my code there:
If anybody could help feel free to mail me: lesniakwojciech at gmail dot com or post a replay if not too long.
31. Thank you very much for these wonderfull tutorials. It has cost me about a week to get through lesson one and two including the necessary mathematical sidetrips. But it was worth it.
One question. You are using triangles to build the faces of the cube. Isn’t there a way to use Quads in Gles2?
I was thinking about an OBJ importer. And it would be much easier if I wouldn't have to convert everything to triangles.
1. There is no GL_QUADS in ES, so you’d need to break each one down into two triangles. I recall a couple others here were working on an OBJ importer — I have to see where they’re at. Please
feel free to post something in the forums as well if you ever want to share some code there.
32. the code has cubeNormalData array, i don’t find it explained in the tutorial.. where are we using it?
1. Good point, the normals themselves were elided to keep the size of the tutorial down. These normals are the source for the normals that we talk about in the tutorial, so when the shaders run
those normals will be used for the lighting. Let me know if that helps out a bit.
33. Hi,
Your articles are really good. But I tried implementing a pyramid using the same code, and I get nothing except a black screen. Are there any debuggers for OpenGL that you are aware of? I searched on the web; the only thing I found was a tracer for Android 4.1, but I have Android 4.0.4. I am finding it really difficult to debug the code. Any help is much appreciated.
1. The tracer unfortunately doesn't work very well even if you have a supported OS. Old-school techniques work better: you can try adding calls to glGetError and also step through the code, and also try turning off depth testing and face culling if you have those enabled. Turn off texturing if you're doing that; eventually, by simplifying things enough, you'll likely find the root cause.
If you're lucky you might also be able to use one of the GPU-specific tools without too much trouble; NVIDIA has some and the other GPU vendors do as well.
34. thanks
35. Can someone please tell me the answer to this question is that in Further Exercises:
There is a flaw with the way the ambient lighting is done. Can you spot what it is?
1. The ambient should probably be done like this:
float diffuse = max(dot(modelViewNormal, lightVector), 0.0);
v_Color = 0.1 + …
Instead of:
float diffuse = max(dot(modelViewNormal, lightVector), 0.1);
36. And what should the entire v_Color line look like?
Like this: v_Color = 0.1 + a_Color * diffuse;?
Because if I do that, all the colors come out faded.
So what is the right way?
1. Yeah, that’s what ambient lighting does.
37. Thank you very much.
38. How can I add 2, 3, ... lights?
Do I have to create all the light variables twice and do twice the work of a single light, or is there a shader trick?
It would be best to show a sequence of code for understanding.
(It may also be per-fragment lighting.)
Thank you for any answers
1. Multiple lights works the same as for one light, you just add the contribution from each light together. One way you can do it is by passing in a uniform array for the lights, and just loop
over that array and add the contribution from each light. There’s an example of this in the source code from my book, available here: http://pragprog.com/titles/kbogla/source_code. The
example is with different types of light but should still help you get an idea of how to do it; just check out the code for “Lighting”.
39. Thank you. Early look at it.
40. Hi,
thanks for the tutorials… i really appreciate the help in learning…
but I’m having a little bit of a problem… for some reason the bottom face of my cubes is transparent… only the bottom face… at first i thought maybe it had to do with the normal, eye level,
lighting position, etc… so I adjusted each of these in turn to see what effect it would have… nothing…
I reproduced the code as closely as i could for tutorials one and two, and i ran through the code on github to see if i could find my error… as far as i can see it is very nearly identical, with
the exception of the getshader methods…
can you possibly point me at where i might be making an error? | {"url":"http://www.learnopengles.com/android-lesson-two-ambient-and-diffuse-lighting/","timestamp":"2014-04-17T21:22:44Z","content_type":null,"content_length":"177079","record_id":"<urn:uuid:12fb7383-1519-4134-9071-990a519b9e00>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
[PD] OT-spectral theory of waveshapers (math)
Charles Henry czhenry at gmail.com
Thu May 25 21:31:40 CEST 2006
I've made some recent progress on a math problem related to
waveshapers...specifically, the simplest one, a cubic polynomial.
All good waveshaper functions should have an odd polynomial expansion (for symmetry about x = 0).
Suppose our transfer function looks like o(x) = x - .001x^3:
we have a dominant linear term and a small coefficient (eps = .001) multiplying x^3, making it non-linear.
This transfer function reveals amplitude dependent additions of odd
harmonic spectra, just as we would expect.
When we look at spectral representations of x^3, the product/convolution relationship of Fourier transforms gives, with g(t) = u(t)^3, G(f) the FT of g, and U(f) the FT of u:
G(f) = ( U * U * U )(f), where * represents convolution in the frequency domain.
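One quick way to experiment with this numerically (my sketch, not from the original post):

import numpy as np

N = 1024
t = np.arange(N) / N
u = np.sin(2*np.pi*8*t) + 0.3*np.sin(2*np.pi*24*t)   # harmonics 1 and 3 of a fundamental at bin 8
g = u**3
spec = np.abs(np.fft.rfft(g))
print(np.nonzero(spec > 1e-6 * spec.max())[0])
# -> [ 8 24 40 56 72]: only odd multiples of the fundamental survive the cubing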
With a little experimentation with the spectrum of u(t), we can see
that iff U(f) is an odd harmonic spectrum, G(f) also is an odd
harmonic spectrum (this is merely conjecture, as I have not proved it
yet). We can examine eigenfunctions of a spectral representation by
looking at a non-linear differential equation,
u''(t) + eig*u(t) - eps*u(t)^3 = 0 (if we add a forcing term on the RHS, we get the Duffing equation)
if we expand u(t) = u0(t) + eps*u1(t) + eps^2*u2(t) + ... (series
expansion of u with respect to eps)
and we put appropriate periodic B.C.'s on some interval [0,T], with a
little work, we can show the eigenvalues remain exactly the same, and
we can derive a series expansion of u, which shows the odd harmonic
spectrum. I'm looking for a good source on this equation in order to
verify my series expansion terms, before writing them all out.
the series goes (as harmonic numbers) 1, 3, 5, 7, 9, etc...
now if we change x^3 to x^5, we get 1, 5, 9, 13, 17 etc...
which is a somewhat different spectrum
so, the idea here is that if we have a function such as arctan (x) as
our waveshaper, we can expand arctan(x) as a Taylor series to know the
coeff. of x^3, x^5 or x^7
or for instance, we could work with x-.001*x^5 as our waveshaper, and
use harmonic intervals of 9/5, 13/5, and so on
I'm currently working on a new tuning for 19-tet guitar based on odd
harmonics. When I turn on the distortion, the open tuning is very
consonant, as one would expect from the mathematical side of things
above. However, it's hard to find chords that really fit in with a uniquely odd harmonic spectrum (it changes the system of harmony). | {"url":"http://lists.puredata.info/pipermail/pd-list/2006-05/038671.html","timestamp":"2014-04-19T07:00:27Z","content_type":null,"content_length":"5002","record_id":"<urn:uuid:1e8c1cfd-6ea1-4273-8957-9fd4425e3e3b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulation Tools Converge On Large RFICs | EE Times
The drive to integrate an entire transceiver on a single chip has spurred a host of technical challenges arising from the myriad complex interactions among various sections of that system,
particularly at the RF end. To overcome those issues, today's RF designer needs expertise in communication and signal theory and must make intelligent trade-offs among such critical parameters as
noise, power, gain and linearity.
Designers must therefore have in their arsenals powerful and well-integrated EDA software that can not only simulate each subsection of the RFIC but also accurately simulate overall chip performance
for verification against wireless standards. That software must be capable of incorporating more recent, advanced techniques for nonlinear circuits with complex modulated RF signals, while also
factoring in the ever-increasing size of the RF circuits.
A typical RFIC in a wireless communication product has performance parameters that must be simulated at the system level (adjacent-channel power ratio, or ACPR), at the subsystem level (spurious-free
dynamic range, or SFDR), at the component level (phase noise) and sometimes at multiple levels (ACPR for the power amplifier and for the entire RF transmitter). Because of those requirements, no
single simulator can provide all performance measures. In addition, the architectural, subsystem and component-level simulations of both the analog/RF and baseband portions of the system should not
be done in isolation. Well-known techniques to achieve whole-system simulation range from dc simulation to harmonic-balance simulation.
Dc simulation: Calculating the dc operating point of a circuit is a prerequisite for other simulations such as ac, transient and harmonic balance. In dc simulation, ac sources are ignored, capacitors
are replaced with open circuits and inductors with short circuits, and nonlinear devices are represented by their Spice models. The simulator uses the Newton-Raphson algorithm to solve Kirchhoff's Current Law (KCL) at each node.
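As a toy illustration of that inner loop (my own sketch, not from the article): a single node where a 1 V source feeds a diode to ground through a resistor, so the KCL residual is f(v) = (1 - v)/R - Is*(exp(v/Vt) - 1):

import math

R, Is, Vt = 1e3, 1e-14, 0.025
v = 0.6                                  # initial guess for the node voltage
for _ in range(50):
    f = (1.0 - v) / R - Is * (math.exp(v / Vt) - 1.0)   # KCL residual at the node
    df = -1.0 / R - (Is / Vt) * math.exp(v / Vt)        # its derivative (a 1x1 Jacobian)
    step = f / df
    v -= step
    if abs(step) < 1e-12:
        break
print(v)   # converges to the dc operating point, about 0.61 V

A real simulator does the same thing with an n-dimensional Jacobian over all circuit nodes, plus damping and limiting to keep the exponentials under control.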
Ac and S-parameter simulations: Ac and small-signal S-parameter simulations first establish the dc operating point. Then nonlinear devices are linearized around their dc operating point by assuming
that the ac source levels do not perturb the dc operating point. Linear devices are represented by their small-signal frequency-domain Y or S parameters. That allows accurate frequency-domain models
for distributed components to be included in the analysis. After each device is represented by its linear model, the Y or S matrix of the overall circuit is calculated at its external ports.
Transient time-domain simulation: Transient simulation is appropriate for such applications as large baseband circuits, startup transients and oscillators. Here, dc bias analysis is performed.
Nonlinear devices are represented by Spice models; linear devices are represented by their lumped-equivalent circuits. Frequency-domain distributed models are either represented by their Y or S
parameters or by rational polynomials.
Finite-difference approximations of the time derivatives at each circuit node convert the system of differential equations into a system of algebraic matrix equations.
This system of equations is then solved in an iterative manner using the Newton-Raphson algorithm such that KCL is satisfied at each circuit node.
The transient time-domain simulation is typically performed at both the component and chip levels. The final verification of an RFIC includes a transistor-level transient simulation of the whole IC.
However, computation time and memory constraints associated with transient time-domain simulation have created the need for other simulation technologies, which will be discussed later.
The convolution simulator: This is an extension to the transient simulator. It allows simulation of frequency-domain models such as microstrip and strip lines in time-domain simulators and also
accounts for high-frequency effects such as skin effect and dispersion.
Convolution works as follows: A finite impulse response (FIR) convolution is performed on distributed models by converting their frequency-domain S or Y parameters to impulse responses and then
convolving the input waveforms with the impulse responses. For frequency-domain models that can be represented accurately using a Laplace or a rational polynomial model, recursive convolution is
used. This is faster and numerically more stable than the FIR convolution.
Harmonic balance
Neither the S-parameter technique nor the transient time-domain technique is applicable to the steady-state solution of nonlinear circuits with multitone excitation. The S-parameter technique is a
linear simulation technique, while the transient technique is not practical for multitone excitation with closely spaced tones. The solution is a frequency-domain nonlinear simulator called the
harmonic balance (HB) simulator.
RFICs typically include frequency up- and/or down-conversions. HB is the ideal technique to analyze systems with multiple, closely spaced independent signals. Linear distributed models can be
accurately modeled at the same time, because HB is a frequency-domain technique.
Nonlinear noise analysis is another unique capability of HB. Spice linear noise analysis cannot predict the noise performance of a circuit with frequency-mixing effects or determine nonlinear
responses to variations in input signal amplitude, such as gain compression. HB can accurately simulate nonlinear noise of mixers and oscillators, including their large-signal effects.
Finally, HB is most useful in the analysis of components or systems that involve intermodulation distortions (IMD) and/or frequency-conversion. Examples include mixer IMD with closely spaced tones,
power amplifiers, load-pull, frequency multipliers, steady-state response of oscillators and system simulation.
The harmonic-balance solution process begins by performing a dc simulation to obtain the dc operating point. The periodic excitation signals are represented by Fourier series with a finite number of
harmonics of each independent tone. Initially, an estimate is made for the voltage spectrum at each circuit node. That spectrum is converted to a time-domain voltage waveform using an inverse FFT.
Time-domain current waveforms at the nonlinear device terminals are computed using their Spice models and the voltage waveforms. The time-domain currents are then converted to a current spectrum at
each terminal using FFT. The current spectrum at each linear device node is computed from S or Y parameters and the voltage spectrum at each node. That provides a first-iteration current spectrum at
each circuit node. The estimated initial voltage spectrum is then adjusted to satisfy KCL at each node. This Newton-Raphson iterative process continues until the difference between the two successive
iterations drops below a predetermined threshold.
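A one-node toy version of that loop (my sketch, using simple relaxation in place of a full Newton step) shows the back-and-forth between spectrum and waveform:

import numpy as np

# KCL for one node: G*v(t) + a*v(t)^3 = is(t); solve for the steady-state
# spectrum V of the node voltage.
N = 64
t = np.arange(N) / N
Is = np.fft.rfft(np.cos(2 * np.pi * t))   # one-tone excitation spectrum
G, a = 1.0, 0.2
V = Is / G                                # initial estimate: the linear solution
for _ in range(200):
    v_time = np.fft.irfft(V)              # spectrum -> time-domain waveform
    I_nl = np.fft.rfft(a * v_time ** 3)   # nonlinear device current -> spectrum
    F = G * V + I_nl - Is                 # KCL residual, per harmonic
    if np.max(np.abs(F)) < 1e-12 * np.max(np.abs(Is)):
        break
    V -= F / G                            # relaxation step (Newton would use the full Jacobian)
print(np.round(np.abs(V[:8]) / (N / 2), 4))   # odd harmonics 1, 3, 5 appear; even ones stay zero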
Krylov subspace solver
With the Newton-Raphson technique that HB simulators use, each iteration requires an inversion of the Jacobian matrix associated with the nonlinear system of equations. When the matrix is factored by
direct methods, memory requirements climb as O(H^2), where H is the number of harmonics.
An alternate approach to the solution of the linear system of equations associated with the Jacobian is to use a Krylov subspace iterative method such as generalized minimum residual (GMRES). This
method has a memory requirement proportional to O(H), not O(H^2), in the context of harmonic balance. Thus the Krylov solver saves on memory requirements for large harmonic-balance problems, with a
corresponding increase in computation speed. This speed makes it practical to use HB for full-chip simulation with multitone excitation.
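The memory trade-off is easy to demonstrate with an off-the-shelf Krylov solver (a generic scipy sketch, not the commercial implementation discussed here): GMRES touches the matrix only through matrix-vector products, so nothing ever has to be factored:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, LinearOperator

n = 20000   # stand-in for (#equations x #harmonics)
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Only A @ v is ever needed, so working storage stays O(n); a direct LU
# factorization of the Jacobian would generally fill in far more entries.
op = LinearOperator((n, n), matvec=lambda v: A @ v)
x, info = gmres(op, b)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means it converged

In practice the iteration count is kept manageable with a preconditioner, which is exactly the role of the DCP, BSP and SCP schemes described later in this article.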
CCT envelope simulation
Unlike traditional communications designs involving sinusoidal modulations, modern wireless applications employ more sophisticated digital RF modulation for more efficient spectrum usage. These
include pi/4 differential quadrature phase-shift keying (DQPSK) and QAM, as used in wideband CDMA (W-CDMA), Edge and GSM standards. Associated with these modulation schemes are new RF specifications,
such as ACPR, error vector magnitude (EVM) and NPR. In addition, components such as phase-locked loops (PLLs) and automatic gain controls (AGCs) must satisfy tight timing specifications for
frequency-and power-level settling, respectively.
Circuit envelope (CE) was developed specifically to provide an efficient simulation technique for transient and complex modulated RF signals. Unlike Spice, it samples the baseband modulation envelope
of the signal instead of its RF carrier. The RF carrier is simultaneously computed in the frequency domain for each envelope time sample. The output is a time-varying spectrum.
CE efficiently analyzes amplifiers, mixers, oscillators and feedback loops in the presence of modulated and transient high-frequency signals, allowing the efficient and accurate analysis of the
sophisticated signals found in today's communication circuits and subsystems. This simulation technology combines the advantages of time- and frequency-domain techniques to overcome the limitations
of harmonic balance and Spice simulators in such applications.
End-to-end simulation
At the architectural level, designers are interested in the entire system performance, from "bits in to bits out." The measurements of interest are the overall bit error rate (BER), EVM, etc., which
are closely related to the performance of the baseband sections of the system.
Simulation of behavioral DSP designs in conjunction with analog/RF circuit designs is critical to the success of the integrated components, devices and subsystems used in wireless modems.
Verification of the impact of real-world analog/RF issues on the DSP algorithms, and vice versa, is vital in making intelligent choices in the trade-offs between performance and circuit complexity.
Today's designs use a mix of analog/RF and dedicated on-chip baseband blocks, and they require high levels of integration at the boundary between the two environments. Co-simulation between baseband
and RF circuits addresses that need. A design environment that supports a mix of simulation engines, signals and models, supporting baseband, RF and analog technologies, provides great value for both
top-down system specification and bottom-up test and validation.
Timed asynchronous data flow signal-processing simulators provide the bridge between baseband simulation and RF circuit simulators, enabling end-to-end communication system simulation for
Many recent improvements in technology have resulted in greater simulation efficiency and robustness. Both time- and frequency-domain simulators can now solve very large and nonlinear RF circuits. To
solve the nodal KCL equations, a set of equations and the associated Jacobian matrix is usually constructed and solved. In circuits with many nodes and harmonics, this matrix tends to get very large
and complex. This is when Krylov subspace solvers become very useful. But if the circuit is also highly nonlinear with many complex off-diagonal terms in the matrix, even the Krylov subspace solver
encounters difficulty. A robust preconditioner is needed to simplify and approximate that matrix to allow the Krylov subspace solver to obtain the final solution.
Two new preconditioners have been developed in addition to the standard Krylov dc preconditioner (DCP). These are the block-select preconditioner (BSP) and the Schur complement preconditioner (SCP).
The dc preconditioner works well for most circuits, but it is not effective on highly nonlinear circuits. The BSP, which adds further nonlinear blocks to the DCP, ensures robust convergence on highly
nonlinear circuits. The BSP uses block selection: It divides the Jacobian matrix into a set of linear and a set of nonlinear blocks, where the nonlinear blocks correspond to the most nonlinear parts
of the circuit. Using the BSP will produce convergence at the cost of additional memory usage.
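The sketch below illustrates the block-select idea in Python with NumPy and SciPy. It is a toy, not Agilent's implementation: the matrix, the block size and the selection of "nonlinear" rows are all made up for illustration. The selected block is factored exactly while the rest of the Jacobian is approximated by its diagonal, and the result is handed to a Krylov solver (GMRES) as a preconditioner.

import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n = 200
# Toy Jacobian: a well-behaved diagonal, strong coupling confined to
# rows/columns 0..39 (standing in for the nonlinear devices), plus a
# little weak coupling everywhere else.
J = np.diag(rng.uniform(1.0, 2.0, n))
nl = np.arange(40)                        # "nonlinear column selection"
J[np.ix_(nl, nl)] += 0.3 * rng.standard_normal((40, 40))
J += 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Block-select-style preconditioner: exact factorization of the selected
# nonlinear block, diagonal approximation for the remaining linear part.
block_inv = np.linalg.inv(J[np.ix_(nl, nl)])
diag_inv = 1.0 / np.diag(J)

def apply_precond(r):
    z = diag_inv * r           # cheap approximation everywhere
    z[nl] = block_inv @ r[nl]  # exact solve on the nonlinear block
    return z

M = LinearOperator((n, n), matvec=apply_precond)
x, info = gmres(J, b, M=M)
print("gmres info:", info, "residual:", np.linalg.norm(J @ x - b))

The memory trade-off the article describes shows up directly here: the exact factor of the selected block is what grows when the nonlinear portion of the circuit gets larger.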
The Schur-complement preconditioner is used for the most strongly nonlinear circuits (Figure 1). While the DCP uses a dc approximation on the entire circuit, the SCP partially applies that
approximation, excluding the most nonlinear parts. Applying SCP requires a more complex sequence of steps that includes an internal Krylov solve at each iteration of the outer Krylov loop. So using
SCP is typically more expensive in terms of memory usage. The addition of a specialized Krylov solver for the SCP improves memory usage and has resulted in improved speed and efficiency.
Comparing the two new preconditioners, the BSP is a simpler technology than the SCP. Sometimes it is faster than the SCP. But in cases where the nonlinear part of the circuit is larger (as
determined by the nonlinear column selection), BSP will start to use a lot of memory and becomes inefficient. This is where SCP becomes more useful.
The size of the problem in harmonic balance depends on the number of equations (which is related to the number of devices and nodes) and the number of frequency harmonics. For a circuit with one
tone, the problem size is equal to the total number of equations per frequency, multiplied by [(2 * # of harmonics) - 1].
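Plugging numbers into this formula is straightforward; in the short Python snippet below, the 400-equations-per-frequency figure is a hypothetical stand-in, since the article does not state the equation count of its test circuit.

# HB unknown count for a one-tone analysis:
# (equations per frequency) * (2 * harmonics - 1).
n_equations = 400                          # per frequency (assumed)
for n_harmonics in (8, 64, 512):
    unknowns = n_equations * (2 * n_harmonics - 1)
    print(f"{n_harmonics:>3} harmonics -> {unknowns:,} unknowns")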
To prove out the new preconditioners, a test was done on a bipolar frequency divider circuit with 133 devices, including 48 nonlinear devices. The number of frequencies (harmonics) was increased from
eight to 512 to increase the nonlinear problem size for the HB Krylov solver. The default dc preconditioner was not able to solve the matrix. BSP and SCP were used, and their results are outlined in
Figure 2.
Note that the BSP resulted in faster simulation and consumed less memory with a smaller problem size. With larger problems, the SCP surpasses the BSP's speed and overall memory performance.
Transient-assisted harmonic balance (TaHB) is another technique that has been used successfully to ensure convergence and solution on very large and highly nonlinear circuits, such as flip-flops in
PLL ICs as well as ring oscillators. The transient simulator is run until steady state is reached, then the transient solution is applied as an initial estimate in Krylov HB and its preconditioners.
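A minimal sketch of the hand-off step is shown below in Python/NumPy. The clipped sinusoid is a synthetic stand-in for whatever settled waveform a transient simulator would actually produce, and the fundamental frequency and harmonic count are assumptions.

import numpy as np

f0 = 1.0e6                  # fundamental frequency (assumed)
n = 256                     # samples across one steady-state period
t = np.arange(n) / (n * f0)

# Stand-in for the settled transient waveform at one circuit node:
# a clipped sinusoid, rich in odd harmonics.
v = np.tanh(3.0 * np.sin(2 * np.pi * f0 * t))

# Fourier-transform the final period; the leading coefficients become
# the initial estimate handed to the Krylov HB iteration.
coeffs = np.fft.rfft(v) / n
n_harm = 8
initial_guess = coeffs[: n_harm + 1]   # DC plus the first n_harm harmonics
for k, c in enumerate(initial_guess):
    print(f"harmonic {k}: magnitude {abs(c):.4f}")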
No single simulation technology can address all the simulation needs in an RFIC. All the simulation technologies we've described should be tightly integrated into a single environment that lets the
designers do a top-down multilevel co-simulation using different levels of model abstractions.
Authors' Note: The authors would like to take this opportunity to acknowledge David Long and Bob Melville for their work in inventing the SCP preconditioner.
About the Authors
Jack Sifri is Agilent EEsof EDA's product manager for RFIC simulation. He has a BSEE and MSEE from UCLA and an MBA from the University of Southern California. Jack can be reached at
Niranjan Kanaglekar is R&D section manager for RF/mixed signal at Agilent EEsof EDA. He has a BEEE from the College of Engineering, Pune, India, and an MSEE and PhD from the Univ. of Massachusetts at
Amherst. Niranjan can be reached at niranjan_kanaglekar@agilent.com.
Posts by Kate
Total # Posts: 1,648
If you received a $10 reward, then you spent between $50 and $99.99 If you received a $20 reward, then you could have spent $100, $125, or $149.99. If your purchase is $98, then you'd be better off
spending $2 more, so that you'll get another $10 gift card. That would ...
Math Correct?
Yes. Since all the angles of a square are 90 degrees, two of them will always add to 180 degrees, which is the definition of supplementary.
Sorry, silly error... 5y=25 y=5 (1,5)
(9x+5y)=34 (8x-2y)=-2 You need to eliminate a variable so first pick either x or y to eliminate. I'll choose y. To eliminate y from the equations, you need to multiply by numbers that will give you
zero y when you add the equations together: 2(9x+5y)=34*2 5(8x-2y)=-2*5 18x...
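Finishing the elimination that was cut off above (the steps are forced by the setup, and they agree with the (1,5) answer already given): 18x+10y=68 and 40x-10y=-10. Adding the two equations eliminates y: 58x=58, so x=1. Substituting into 9x+5y=34: 9+5y=34, 5y=25, y=5. Solution: (1,5).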
college psychology
it affects the temporal lobe of the cerebral cortex
college psychology
What areas of the brain are affected and how are they affected with sleep deprivation?
Well, what I wrote would be considered the appropriate work shown.
You really should write a linear equation: You're right on calculating the rate. It is 4. You can also call that the slope of the line. In y=mx+b form, y=4x+b Since you started at 11 in 2006, 11 is
your y-intercept (b) y=4x+11 Again, since you started at 2006, 2011 is 5 ye...
What exactly is the question? If you have to solve the inequality, then you would subtract 2.2 from both sides, leaving you with: x<2.3
The Product Rule is: (First*Derivative of Second)+(Second*Derivative of First) In your case: First is (x^3-2x) and Second is (5x^2+2x) Try that and write back if you're still having trouble.
Yes they are all correct.
Ratios are usually good to set up like this: 4x+7x+5x=64 in this case. 4x = the shortest side, 7x = the longest side, and 5x = the middle side. Then solve for x. 16x=64 x=4 Then plug 4 back in for x
to see what the individual side lengths are. 4*4=16 for the shortest side.
Substitute 2 in the inequality and see if the sentence is true. 4(2)+8<12 8+8<12 16<12 False, so 2 is not a solution. Do it again with 0 and see what happens. If need more help write back.
Discrete Math
Domain is correct. Range is all real numbers except 5. Since 1/(1+x) can never be 0, the equation will never be 5. Write back if you still have questions.
This question does not make sense, given that 2/5 + 3/4 = more than 1. Is there more information to this question?
Let x = number of gallons of %5 milk 0.05x + 0.01(100) = 0.02(x+100) 0.05x + 1 = 0.02(x+100) 0.05x + 1 = 0.02x + 2 0.03x = 1 x = 33 and 1/3 Write back if you need more explanation
8th Grade Algebra
In a standard deck there are 13 hearts and 4 aces. However, one of the hearts is an ace. So we're talking 13+4=17 cards, but then subtract one (16) for the ace of hearts that would be repeated if
counting the hearts and aces separately. So 16/52 is the probability. This can be...
A=P(1+r)^t A=amount of money at end P=amount of money to start r=interest rate as a decimal t=number of years 20000=P(1+.05)^5 20000=P(1.05^5) 20000/(1.05^5)=P $15670.52=P
8th grade Algebra
(5/100)*(5/100)=25/10000=0.0025 or .25%
The center of the rectangle is equidistant from each vertex. If you make the center of your circle the center of the rectangle, then the four vertices of the rectangle will be equidistant from the
center of the circle as well, making each distance from the vertex of the rectan...
Substitute -12 in for x in the above equation and solve for y. 6y+5(-12)=15 6y-60=15 simplifying 6y=75 adding 60 to both sides y=12.5 dividing both side by 6. k=12.5
A circular tablecloth is draped over a rectangular table so that the center of the cloth is directly above the center of the table. The table is 6 × 2√3 feet, and the cloth has a radius of
2 feet. What is the area of the portion of the table covered by the cloth?
A buffer solution of pH=9.24 can be prepared by dissolving ammonia and ammonium chloride in water. How many moles of ammonium chloride must be added to 1.0 L of .50 M ammonia to prepare the buffer?
What is the [S^-2] in a saturated solution (.1 M) of H2S, in which the pH had been adjusted to 5.00 by the addition of NaOH? For H2S, Ka1= 1.1X10^-7 and Ka2=1.0X10^-14
If you add 2.0 ml of 10.0 M HCl to 500 ml of a .10 M NH3 solution, what is the pH of the resulting solution? For NH3, Kb = 1.8X10^-5
ap chem
What is the molality of a 0.075 m solution of ethyl alcohol in water at 25 degrees Celsius? Assume the density of the solution at such low concentration is equal to that of water at 25 degrees Celsius, 0.997 g/mL.
Research the VISA website. After reviewing all the documents you can find on the web and in libraries, write a 3-page response to the following questions
calculus, limits, l'hopital
Thanks. I made a really stupid error. You've been very helpful in answering my calc questions. I really appreciate it.
calculus, limits, l'hopital
Find the limit as x->0 of (2-2cos(x))/(sin(5x)) Mathematically I got 2/5, but on the graph it appears to be 0.
is that 9 times q times t times 9 times p times t?
i dont get scale drawings how do you work it out?
Minds at work
A person who has a severe overreaction to an intense emotion might be experiencing a a. galvanic skin response b. sympathetic rebound c. suppressed emotion d. parasympathetic rebound if you relearn a
song that you heard as a child, your savings score for relearning the song wou...
Teacher's Aide: Helping Abused Children
I don't mean to be mean on this end but that is a very rude thing to ask anyone. 1st off i'm newly married, 6 months pregnant, and this is not the only thing that i'm going to school for. i actually
have a 91% in my class and this is the only test that i've had...
Teacher's Aide: Helping Abused Children
I forgot to post my answers: 1) B 2) D 3) A 4) A 5) C 6) B 7) D 8) A 9) C 10)C
Teacher's Aide: Helping Abused Children
1. The Stubborn Child Law enacted by Massachusetts in 1646 permitted parents to A. use corporal punishment when children disobeyed them. B. keep children out of school if they were needed for family
work purposes. C. institutionalize a child born with a physical or mental disa...
Let x = the salary of the husband. So Lien earned x+1700. Lien's salary + her husband's = 42100 x + 1700 + x = 42100 2x + 1700 = 42100 2x = 40400 x = 20200 Lien's husband earned $20200, and Lien
earned 1700 + 20200 = $21900.
What are the directions? It's not clear what to do.
Set the denominator equal to zero and solve for x. 3x-14 = 0 3x=14 x=14/3 This is the x-coordinate of the vertical asymptote.
math 5th grade
First I would choose a number divisible by 9 to use as an example. I chose 81. If 8/9 of the customers ordered coffee, that's 8/9*81 = 72 customers. If reg coffee was ordered three times as much as
flavored, then you want to set it up like this: 3x + x = 72 4x = 72 x = 18 ...
It depends a little on what math class you're in how they want you to solve this. But basically, this function is a parabola and you need to find the vertex. x=-b/(2a), where a=-0.5, and b=12 x=-12/
(2*-0.5)=12, this is the x coordinate of the vertex and also the number of ...
First subtract 45-27 = 18 Then subtract 5-8 = -3. Next multiply 3 by -3 = -9 then divide 18 by -9 = -2 finally, the absolute value of -2 = 2.
Repost- Calculus; someone please help
Yes it is the right equation. Good job!
calculus, limits, l'hopital
Here's what's weird: I graphed the equation and from the left the function approaches 1 as x approaches zero, and from the right it approaches e^3. Doesn't that mean the limit does not exist? The
question, however, is multiple choice, with choices of infinity, 0, 3...
calculus, limits, l'hopital
Thank you so much! I didn't think to remove the 6x. So helpful, thank you!!
calculus, limits, l'hopital
Using l'hopital's rule, find the limit as x approaches zero of (e^(6/x)-6x)^(X/2) I know l'hopital's rule, but this is seeming to be nightmare. I just don't seem to get anywhere. I mistyped the first
time I posted this question.
advanced functions
Similarities: Domain and Range the same. Same end behavior, you could say "as x approaches infinity or negative infinity". Differences: They don't have the same reflection characteristics because an
x^4 function can have non-symmetrical bumps in it. They're n...
calculus, limits, l'hopital
Using l'hopital's rule, find the limit as x approaches infinity of (e^(6/x)-6x)^(X/2) I know l'hopital's rule, but this is seeming to be nightmare. I just don't seem to get anywhere.
6.4-y = 6 and 2/5 convert 6 and 2/5 to decimal... 2/5=.4 6.4 - y = 6.4 subtract 6.4 from both sides -y = 0 divide both sides by -1 y = 0
1.24*10^-3 = .00124
algebra word problems
294/14=21 L=6*21=126
8th grade math
935*1.4 mil = P*4.5 mil (mult 935 by 1.4) 1309 mil = P*4.5 mil (divide both sides by 4.5) 290.8888888888888=P So P = $290.89
I need an answer--molecular/electron geometry
THanks so much
I need an answer--molecular/electron geometry
Can anyone help me out with the differences between electron geometry and molecular geometry for SCl6? I think the molecular geometry is octahedral, but I'm not so sure for the electron geometry.
Thanks :) *My earlier post wasn't answered, so I'd appreciate some he...
put it into simple form?
Equivalent. Fraction
What is 6/12 in a equivalent fraction
Molecular and Electron geometry
Can anyone help me out with the differences between electron geometry and molecular geometry for SCl6? I think the molecular geometry is octahedral, but I'm not so sure for the electron geometry.
Thanks :) *My earlier post wasn't answered, so I'd appreciate some he...
if a figure has 48 square units as it's area and 38 units as it's perimeter what would the figure look like?
Molecular and Electron geometry
Can anyone help me out with the differences between electron geometry and molecular geometry for SCl6? I think the molecular geometry is octahedral, but I'm not so sure for the electron geometry.
Thanks :)
The formula for frequency is: frequency = 1 / T, where T = Period. That is e.g. "cycles per second". The formula for time is: T (Period) = 1 / frequency. Note: T = Period and t = Time
SCl6 is used in....?
Thank you very much
SCl6 is used in....?
But elements compose compounds. So a compound, say SCl6, cannot create silicon dioxide. . . . I'm now thoroughly confused.
SCl6 is used in....?
But isn't SCl6 a compound within itself?
SCl6 is used in....?
Does anyone know what SCl6 is used in? Note: I don't mean SF6. *Sulfur Hexachloride* Thank you :)
SCl6, Sulfur Hexachloride
Very helpful, thank you!
SCl6, Sulfur Hexachloride
I have a project on Sulfur Hexachloride, or SCl6, but I can't find any info on it! Any help?
I have a project on Sulfur Hexachloride, or SCl6, but I can't find any info on it! Any help?
you're dumb
When ice melts, it absorbs 0.33 per gram.How much ice is required to cool a 12.0 drink from 72 to 37, if the heat capacity of the drink is 4.18 ? (Assume that the heat transfer is 100 efficient.)
A tennis ball is shot vertically upward from the surface of an atmosphere-free planet with an initial speed of 20.0 m/s. One second later, the ball has an instantaneous velocity in the upward
direction of 15.0 m/s. What is the magnitude of the acceleration due to gravity on th...
Explain why the occupants of a moving car observe objects on the roadside as though they are moving backwards.
public relation
what has the result of what the artist Chris Brown led for the public to say or view him?
does that mean the hard drive size or this Double-layer DVD±RW/CD-RW or something
what does Hard Disk-FDD/HDD mean?
In Triangle SUM, if SU = 10 cm and UM = 15 cm then MS must be less than how many centimeters? 10 15 20 25...... Am I right?
like you might avoid paying your bill in order to purchase something you want
What effect does lacking personal responsibility have on materialism?
Find the perimeter of an isosceles trapezoid with base lengths of 10 and 18 and height of 8.
Find the perimeter of an isosceles trapezoid with base lengths of 10 and 18 and height of 8.
A catalytic converter combines 2.55 g CO with excess O2.What mass of CO2 forms?
11T/28 - T/4 = 1
4x27= partial products: product:
(a) An ideal gas occupies a volume of 1.0 cm3 at 20°C and atmospheric pressure. Determine the number of molecules of gas in the container. i got that right to be 2.5e19 molecules (b) If the pressure
of the 1.0 cm3 volume is reduced to 1.0 10-11 Pa (an extremely good vacuum...
oh, ok. that makes sense. Thanks!
i am getting .15 and .24 and i keep getting it wrong. i dont know what im doing wrong.
A rigid tank contains 0.40 moles of oxygen (O2). Determine the mass (in kg) of oxygen that must be withdrawn from the tank to lower the pressure of the gas from 37 atm to 23 atm. Assume that the
volume of the tank and the temperature of the oxygen are constant during this oper...
An object weighing 325 N in air is immersed in water after being tied to a string connected to a balance. The scale now reads 250 N. Immersed in oil, the object weighs 275 N. (a) Find the density of
the object. (b) Find the density of the oil.
Find all the critical numbers for the function f(x) = cube root of 9-x^2. help please?!
helping abused children
here let me help you: The Stubborn Child Law enacted by Massachusetts in 1646 permitted parents to A. use corporal punishment when children disobeyed them. B. keep children out of school if they were
needed for family work purposes. C. institutionalize a child born with a phys...
Principles of finace
How do you find use the annual net profit with the payables and inventory costs to determine the total annual cost a firm would need to achieve the industry level of operational efficiency?
the ionization energy for atomic hydrogen is 1.31x10^6 J/mol. what is the ionization energy for He^+
A container is filled to a depth of 15.0 cm with water. On top of the water floats a 27.0 cm thick layer of oil with specific gravity 0.800. What is the absolute pressure at the bottom of the container?
the sum of two decimal numbers is 3.9. their difference is 0.9, and their poduct is 3.6 what are these two numbers?
childhood obesity
What are some questions you can ask a child who is overweight?
Organic Chemistry
I just finished an organic chemistry lab experiment on separating and analyzing an unknown acid/neutral compound mixture using extraction, recrystallization, TLC, melting point analysis, and IR
spectrum. In the instructions, "Obtain an IR spectrum for each of your recryst...
Groovy JDK
Class Character
Method Summary

boolean asBoolean() - Coerce a character to a boolean value.
int compareTo(Number right) - Compare a Character and a Number.
int compareTo(Character right) - Compare two Characters.
Number div(Number right) - Divide a Character by a Number.
Number div(Character right) - Divide one Character by another.
Number intdiv(Number right) - Integer Divide a Character by a Number.
Number intdiv(Character right) - Integer Divide two Characters.
boolean isDigit() - Determines if a character is a digit.
boolean isLetter() - Determines if a character is a letter.
boolean isLetterOrDigit() - Determines if a character is a letter or digit.
boolean isLowerCase() - Determine if a Character is lowercase.
boolean isUpperCase() - Determine if a Character is uppercase.
boolean isWhitespace() - Determines if a character is a whitespace character.
Number minus(Number right) - Subtract a Number from a Character.
Number minus(Character right) - Subtract one Character from another.
Number multiply(Number right) - Multiply a Character by a Number.
Number multiply(Character right) - Multiply two Characters.
Character next() - Increment a Character by one.
Number plus(Number right) - Add a Character and a Number.
Number plus(Character right) - Add one Character to another.
Character previous() - Decrement a Character by one.
char toLowerCase() - Converts the character to lowercase.
char toUpperCase() - Converts the character to uppercase.
public boolean asBoolean()
Coerce a character to a boolean value. A character is coerced to false if its character value is equal to 0, and to true otherwise.
Returns: the boolean value

public int compareTo(Number right)
Compare a Character and a Number. The ordinal value of the Character is used in the comparison (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Number.
Returns: the result of the comparison

public int compareTo(Character right)
Compare two Characters. The ordinal values of the Characters are compared (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Character.
Returns: the result of the comparison

public Number div(Number right)
Divide a Character by a Number. The ordinal value of the Character is used in the division (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Number.
Returns: the Number corresponding to the division of left by right

public Number div(Character right)
Divide one Character by another. The ordinal values of the Characters are used in the division (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - another Character.
Returns: the Number corresponding to the division of left by right

public Number intdiv(Number right)
Integer Divide a Character by a Number. The ordinal value of the Character is used in the division (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Number.
Returns: a Number (an Integer) resulting from the integer division operation

public Number intdiv(Character right)
Integer Divide two Characters. The ordinal values of the Characters are used in the division (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - another Character.
Returns: a Number (an Integer) resulting from the integer division operation

public boolean isDigit()
Determines if a character is a digit. Synonym for 'Character.isDigit(this)'.
Returns: true if the character is a digit

public boolean isLetter()
Determines if a character is a letter. Synonym for 'Character.isLetter(this)'.
Returns: true if the character is a letter

public boolean isLetterOrDigit()
Determines if a character is a letter or digit. Synonym for 'Character.isLetterOrDigit(this)'.
Returns: true if the character is a letter or digit

public boolean isLowerCase()
Determine if a Character is lowercase. Synonym for 'Character.isLowerCase(this)'.
Returns: true if the character is lowercase

public boolean isUpperCase()
Determine if a Character is uppercase. Synonym for 'Character.isUpperCase(this)'.
Returns: true if the character is uppercase

public boolean isWhitespace()
Determines if a character is a whitespace character. Synonym for 'Character.isWhitespace(this)'.
Returns: true if the character is a whitespace character

public Number minus(Number right)
Subtract a Number from a Character. The ordinal value of the Character is used in the subtraction (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Number.
Returns: the Number corresponding to the subtraction of right from left

public Number minus(Character right)
Subtract one Character from another. The ordinal values of the Characters are used in the subtraction (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Character.
Returns: the Number corresponding to the subtraction of right from left

public Number multiply(Number right)
Multiply a Character by a Number. The ordinal value of the Character is used in the multiplication (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - a Number.
Returns: the Number corresponding to the multiplication of left by right

public Number multiply(Character right)
Multiply two Characters. The ordinal values of the Characters are used in the multiplication (the ordinal value is the unicode value which for simple character sets is the ASCII value).
Parameters: right - another Character.
Returns: the Number corresponding to the multiplication of left by right

public Character next()
Increment a Character by one.
Returns: an incremented Character

public Number plus(Number right)
Add a Character and a Number. The ordinal value of the Character is used in the addition (the ordinal value is the unicode value which for simple character sets is the ASCII value). This operation will always create a new object for the result, while the operands remain unchanged.
Parameters: right - a Number.
Returns: the Number corresponding to the addition of left and right

public Number plus(Character right)
Add one Character to another. The ordinal values of the Characters are used in the addition (the ordinal value is the unicode value which for simple character sets is the ASCII value). This operation will always create a new object for the result, while the operands remain unchanged.
Parameters: right - a Character.
Returns: the Number corresponding to the addition of left and right

public Character previous()
Decrement a Character by one.
Returns: a decremented Character

public char toLowerCase()
Converts the character to lowercase. Synonym for 'Character.toLowerCase(this)'.
Returns: the lowercase equivalent of the character, if any; otherwise, the character itself.

public char toUpperCase()
Converts the character to uppercase. Synonym for 'Character.toUpperCase(this)'.
Returns: the uppercase equivalent of the character, if any; otherwise, the character itself.
Many 3-D plotting functions produce graphs that use color as another data dimension. For example, surface plots map surface height to color. The color limits control the limits of the color dimension
in a way analogous to setting axis limits.
The axes CLim property controls the mapping of image, patch, and surface CData to the figure colormap. CLim is a two-element vector [cmin cmax] specifying the CData value to map to the first color in
the colormap (cmin) and the CData value to map to the last color in the colormap (cmax).
When the axes CLimMode property is auto, MATLAB® sets CLim to the range of the CData of all graphics objects within the axes. However, you can set CLim to span any range of values. This enables
individual axes within a single figure to use different portions of the figure's colormap. You can create colormaps with different regions, each used by a different axes.
See the caxis command for more information on color limits.
Calculating Color Limits
The key to this example is calculating values for CLim that cause each surface to use the section of the colormap containing the appropriate colors.
To calculate the new values for CLim, you need to know
● The total length of the colormap (CmLength)
● The beginning colormap slot to use for each axes (BeginSlot)
● The ending colormap slot to use for each axes (EndSlot)
● The minimum and maximum CData values of the graphic objects contained in the axes. That is, the values of the axes CLim property determined by MATLAB when CLimMode is auto (CDmin and CDmax).
First, load the two images, define subplot regions, and display them.

im1 = load('cape.mat');
im2 = load('flujet.mat');
ax1 = subplot(1,2,1);
image(im1.X)
ax2 = subplot(1,2,2);
image(im2.X)
Concatenate the two colormaps and install the result.

colormap([im1.map;im2.map])
Obtain the data you need to calculate new values for CLim.
CmLength = length(colormap); % Colormap length
BeginSlot1 = 1; % Beginning slot
EndSlot1 = length(im1.map); % Ending slot
BeginSlot2 = EndSlot1 + 1;
EndSlot2 = CmLength;
CLim1 = get(ax1,'CLim'); % CLim values for each axis
CLim2 = get(ax2,'CLim');
Defining a Function to Calculate CLim Values
Computing new values for CLim involves determining the portion of the colormap you want each axes to use relative to the total colormap size and scaling its Clim range accordingly. You can define a
MATLAB function to do this.
function CLim = newclim(BeginSlot,EndSlot,CDmin,CDmax,CmLength)
% Convert slot number and range
% to percent of colormap
PBeginSlot = (BeginSlot - 1) / (CmLength - 1);
PEndSlot = (EndSlot - 1) / (CmLength - 1);
PCmRange = PEndSlot - PBeginSlot;
% Determine range and min and max
% of new CLim values
DataRange = CDmax - CDmin;
ClimRange = DataRange / PCmRange;
NewCmin = CDmin - (PBeginSlot * ClimRange);
NewCmax = CDmax + (1 - PEndSlot) * ClimRange;
CLim = [NewCmin,NewCmax];
The input arguments are identified in the bulleted list above. The function first computes the percentage of the total colormap you want to use for a particular axes (PCmRange) and then computes the
CLim range required to use that portion of the colormap given the CData range in the axes. Finally, it determines the minimum and maximum values required for the calculated CLim range and returns
these values. These values are the color limits for the given axes.
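One way to see why these formulas work, using the function's own variable names: the colormap maps the CLim interval linearly onto the slots, so a CData value c lands at the fractional position p(c) = (c - NewCmin)/(NewCmax - NewCmin). Requiring p(CDmin) = PBeginSlot and p(CDmax) = PEndSlot gives two linear conditions; subtracting them yields NewCmax - NewCmin = (CDmax - CDmin)/PCmRange, which is exactly ClimRange, and substituting back gives NewCmin = CDmin - PBeginSlot*ClimRange and NewCmax = CDmax + (1 - PEndSlot)*ClimRange, the values the function computes.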
Using the Function
Use the newclim function to set the CLim values of each axes. A statement such as

set(ax1,'CLim',newclim(BeginSlot1,EndSlot1,CLim1(1),CLim1(2),CmLength))

sets the CLim values for the first axes so that its image uses only its assigned section of the colormap. You need to reset the CLim values of the second axes as well:

set(ax2,'CLim',newclim(BeginSlot2,EndSlot2,CLim2(1),CLim2(2),CmLength))
How the Function Works
MATLAB enables you to specify any values for the axes CLim property, even if these values do not correspond to the CData of the graphics objects displayed in the axes. The minimum CLim value is
always mapped to the first color in the colormap and the maximum CLim value is always mapped to the last color in the colormap, whether or not there are really any CData values corresponding to these
colors. Therefore, if you specify values for CLim that extend beyond the object's actual CData minimum or maximum, MATLAB colors the object with only a subset of the colormap.
The newclim function computes values for CLim that map the graphics object's actual CData values to the beginning and ending colormap slots that you specify. It does this by defining a "virtual"
graphics object having the computed CLim values.
Student creates world's largest quantum cluster
(Phys.org) —Australian National University PhD student Seiji Armstrong has made a quantum leap towards next-generation computing.
Working with a team in Tokyo, Seiji has created the largest cluster of quantum systems ever – a milestone on the way to super-powerful, super-fast quantum computers.
"The more quantum systems you have in the cluster, the more powerful your quantum computer will be," he says.
"Previously the world record was 14. But in our experiment we went to more than 10,000 at once."
Each quantum system can encode a quantum 'bit' of information, like the binary system that a traditional computer uses, explains Seiji.
"In today's computers you have 'bits' of information – a bit is a 0 or a 1. A quantum bit is similar but it can also exist in another state – instead of just a 0 or a 1 it can be in what's called a superposition."
That's where it all starts to get a little complicated, but Seiji says it's easier if you think of quantum bits as coins.
"Imagine you have a coin and heads is 0 and tails is 1. When you flip it in the air it's as if the coin is both heads and tails at once. But, if you catch it and look at it, it will be either heads
or tails.
"That's sort of how quantum bits work: you don't know what state they're going to be in until you measure them.
"When you arrange these quantum bits in a cluster, it opens up all the different possibilities and gives you access to this huge computational power."
Seiji says the potential applications of this research are endless.
"Eventually, we'll be able to use these quantum clusters to build quantum communication networks with very fast but also very secure and very powerful transmission lines," he says.
"In a normal computer, if you had 1,000 bits you might be able to solve a bunch of very easy problems. In a quantum computer with 1,000 quantum bits you'd be able to solve way more difficult problems
– problems that classical computers can't solve."
Other applications might be so far advanced we can't even imagine them in today's world.
"Even when traditional computers had become commonplace, no one really saw the internet coming. So who knows where this will lead?"
The research was done while Seiji was at the University of Tokyo as part of a Prime Minister's Australia-Asia Award. Working with a group of experts headed by Professor Akira Furusawa was a pretty
exceptional experience, he says.
"Everybody there had a unique skill and it was a really nice example of all these experts in their fields coming together. Everyone contributed to the experiment in a different way. It was really
exciting stuff."
More information: Ultra-large-scale continuous-variable cluster states multiplexed in the time domain. Shota Yokoyama, Ryuji Ukai, Seiji C. Armstrong, Chanond Sornphiphatphong, Toshiyuki Kaji,
Shigenari Suzuki, Jun-ichi Yoshikawa, Hidehiro Yonezawa, Nicolas C. Menicucci, Akira Furusawa. Nature Photonics (2013) DOI: 10.1038/nphoton.2013.287
1 / 5 (12) Nov 19, 2013
To me A Ceiling Fan Circling Around at the Roof comes to my mind;
Which Blade of the Fan is it?
Is it the Roof OR The Fan Blades??
1.6 / 5 (7) Nov 19, 2013
even when traditional computers had become commonplace, nobody saw the Internet coming
-Well sure they did. Why do you think PCs were promoted so heavily to begin with? The need to replace analog with digital was obvious, and the proliferation of mass-produced machines was essential
for it's establishment.
The Internet was the CAUSE of the pc, not the EFFECT of it. One can revisit the history of hardware and software with this understanding in mind, and begin to appreciate how this was all Orchestrated
to occur. The forced establishent of one common OS. The forced obsolescence of hardware. The forced standardization of components.
The artifice of the pc-vs-apple competition, in case a fatal flaw were to be discovered in either of them. This redundancy is SOP in miltech development, where the end result is so critical that it
warrants such waste.
Computers were forced upon the world against it's will and for the Greater Good as was the railroads, the auto, the airlines, etc.
1.4 / 5 (10) Nov 19, 2013
I wish someone would force an R9 290X on me. Two would be nice.
1.7 / 5 (12) Nov 20, 2013
Orchestrated by whom, aliens? The most powerful government on the planet can't even manage the creation of one web site, despite three years and spending a near infinite amount of money, in web
development terms.
I think you have out metaphysicised even religions with your ridiculous conspiracy theory. The entity that could have social-engineered all that to occur counter to what would have occurred by human
seeking their own interest anyway, would certainly have been omnipotent in nature.
1.7 / 5 (12) Nov 20, 2013
... Let it be known that TheGhostofOtto1923 believes in Creationism and not natural evolution.
2 / 5 (4) Nov 20, 2013
You baiting me nou? Obamas website was sabotaged by insurers and conservatives both of whom abhor change and have much money to lose.
Nobody needed a pc when they first came out. It was much more expensive to computerize small businesses than not. Nobody was using CADD until the large corporations and govt agencies began insisting
on it, and began paying enormous amounts of money to their consultants for it. It took more time, cost more money, and very few people knew how to use it.
Your free markets would never have allowed computerization because it wasn't going to become competitive for a very long time. And yet here we are today with a vast economic infrastructure which
could not function without it.
The Internet could not have caught on if computers had not already become ubiquitous. Yet there they were waiting for something useful to do.
Major Constructs like worldwide abortion and the Internet cannot be happenstance. They are not an afterthought. The Way must be Prepared.
2 / 5 (4) Nov 20, 2013
Why do guys like yourself always ask 'who'? 'Who could possibly be capable of such a thing??' And yet you're easily convinced that one madman could commandeer one of the most powerful and educated
countries in only a few short years, and threaten to conquer the world or destroy it. How is that possible from your perspective?
You're willing to entertain the notion that a liberal conspiracy has subverted the entire western world. Who orchestrated that nou?
There is a great deal that can be anticipated, simply because so much of the human condition is cyclic. Social and economic cycles driven by our tropical repro rate - growth, decay, collapse, rebirth
- have always been inevitable. Collapse has ALWAYS meant war. And war has ALWAYS threatened to destroy civilization, and often did in the ancient world.
Given the inevitability of war, don't you think it is also inevitable that Leaders might begin colluding to preserve their rule by Managing it? You think the mafia invented this?
1.7 / 5 (12) Nov 20, 2013
Obamas website was sabotaged by insurers and conservatives both of whom abhor change and have much money to lose
LOL, your conspiracies are delusional. It was "sabotaged" by the few people trying to use the site. The Obama administration has admitted it was not ready for primetime. They admitted it is a
failure, so its broken nature is not a mystery. Also, the insurance companies love ObamaCare: did you know that if they don't make enough money within the first three years, they receive a bailout
from the law?
1.7 / 5 (12) Nov 20, 2013
Your free markets would never have allowed computerization because it wasn't going to become competitive for a very long time.
It's as if you're just making up non-sense as you type. The entire pc industry started out of a guy's garage,.... and from IBM, tapping another potential market.
The Internet could not have caught on if computers had not already become ubiquitous.
Unbelievable. It's as if you consider the existing state of things and then without having an actual historic perspective you extrapolate retroactively how it would have happened had events emanated
arbitrarily out of your ass.
The "internet" was conceptually Obvious, it is a natural extension of the computer, to link them together. Mainframes and mini-computers had dumb-terminals where dozens of users could work together
within the same system / company / university, before PC's. There was BBS systems before the WWW. My first job was software dev of BBS for a small business, before the internet.
1.7 / 5 (12) Nov 20, 2013
Why do guys like yourself always ask 'who'? 'Who could possibly be capable of such a thing??' And yet you're easily convinced that one madman could commandeer one of the most powerful and
educated countries in only a few short years, and threaten to conquer the world or destroy it. How is that possible from your perspective?
You're willing to entertain the notion that a liberal conspiracy has subverted the entire western world.
You fail to understand the libertarian position. The argument is not one of competence. It's about lack of competence and the dangers of unintended consequences. A conservative does not think any
form of government is capable of improving the human condition, ...thus the advocation of a limited one. IOW, the fears of conservatives is not that 'liberals progressives' will succeed in planning
out a utopian society, after all who wouldn't want that (?),.... the fear is when they inevitably fail and the cost to liberty of that failure.
2.3 / 5 (3) Nov 20, 2013
LOL, you're conspiracies are delusional. It was "sabotaged" by the few people trying to use the site
No, it was intrinsically flawed. Designed from scratch to be unworkable. But why dont you just argue with these guys?
"... calculated sabotage by Republicans at every step.
"That may sound like a left-wing conspiracy theory... But there is a strong factual basis for such a charge... Most Republican governors declined to create their own state insurance exchanges...
congressional Republicans refused repeatedly to appropriate dedicated funds to do all that extra work"
2 / 5 (4) Nov 20, 2013
"AHIP the health insurance industry super lobby... were secretly funneling huge amounts money to the Chamber of Commerce to be spent on advertising designed to convince the public that the
legislation should be defeated... A stunning $102.4 million... 15 months."
"... it was all about the Medical Loss Ratio (MLR)—the provision of the ACA that not only requires the health insurance companies to spend 80 percent of your premium dollars on actual health care
expenditures, but further requires that they refund to their customers any amounts they fail to spend as required by the MLR... The total rebates... $1.1 billion for 2011 alone—clearly motivation for
the insurers to defeat the law"
-But thats only one of the many reasons. The industry is stuffed full of lucre. One indication is the 2008 AIG collapse. Few structural changes have been made since. It is unable to accommodate an
aging pop, pandemic, or AI and robotics which will soon make doctors obsolete. And so its being scrapped.
2.3 / 5 (3) Nov 20, 2013
A conservative does not think any form of government is capable of improving the human condition
No thats what an anarchist thinks. Youre not an anarchist are you?
Unbelievable. It's as if you consider the existing state of things and then without having an actual historic perspective
But I have an actual historical perspective. Mine just happens to make sense.
you extrapolate retroactively how it would have happened
This is commonly called forensics.
had events emanated arbitrarily out of your ass.
My ass has nothing to do with world events.
The entire pc industry started out of a guys garage,.... and from IBM
Yah and how could one possibly become the other?
Mega-scale Constructs like the Internet and the transformation from analog to digital, CANNOT BE incidental and unintended results of anything.
The digital age was ANTICIPATED by turing and others. It was obviously VITAL to the future. It was MADE to happen. It did NOT happen by itself.
1.7 / 5 (12) Nov 20, 2013
My first job was software dev of BBS for a small business, before the internet.
Well Skippy, from the silly things you post you sound like that has been your only job. Changing the typewriter ribbons and stocking the paper closet. But I agree with your being qualified be a
junior member of the BS department, even though your chances of advancement aren't looking to good.
"Skippy"? Is that your lame attempt at talking down to people?
It's easier to type in Jerry-Springer ad-hominems, than counter arguments isn't it.
I have yet to see your new screen name post a single thing of substance.
1.7 / 5 (12) Nov 20, 2013
A conservative does not think any form of government is capable of improving the human condition....
No thats what an anarchist thinks. Youre not an anarchist are you?
Did you deliberately cut off the end of that sentence so you could make that false charge? Here is the rest of my sentence,... ".....thus the advocation of a limited one [gov]."
An anarchist would advocate NO government. Conservatives are strong advocates of several branches of government.
Mega-scale Constructs like the Internet and the transformation from analog to digital, CANNOT BE incidental and unintended results of anything.
Of course I never mentioned it was a random event. Do you have any idea how much profit is made off of the internet? How's that for natural motivation?
1.7 / 5 (12) Nov 20, 2013
The digital age was ANTICIPATED by turing and others. It was obviously VITAL to the future. It was MADE to happen. It did NOT happen by itself.
Yes, as I mentioned above the direction of some tech is obvious, so what? It comes about on a massive scale because of the potential for profit,.... not some secret omnipotent social engineering
governmental committee.
1.7 / 5 (12) Nov 20, 2013
It's easier to type in Jerry-Springer ad-hominems, than counter arguments isn't it.
Considering that you were too stupid to answer to my epistemology question I figured that stupid was the only thing you could understand. Were you lying then? Or are you lying now? Oh, okay you
were lying then and now. Go SIT DOWN and SHUT UP. Let the smart people try to teach you something. Do you like it when the Ira slaps you Skippy? Good, I need the exercise anyway.
OK, I have a feeling I'm dealing with a teenager here. What question did you ask?
1.7 / 5 (12) Nov 20, 2013
@GhostOfOtto1923, ... I'm not surprised that you would fall for the notion that republicans are some how responsible for the ObamaCare debacle,.... despite their stated position all along that IT
WOULD NOT WORK,... If the Obama administration is so stupid as to rely on those opposed to the monstrosity to ensure its success, then there you go.
1 / 5 (4) Nov 20, 2013
A conservative does not think any form of government is capable of improving the human condition, ...thus the advocation of a limited one
-So by the rules of philo word calculating, you are saying that
Conservatives are strong advocates of several branches of government
-despite the inability of any of them to improve the human condition. That is at least clumsy. Why do we have even the ones you favor if NONE of them can improve the human condition?
Do you have any idea how much profit is made off of the internet?
Not possible without PCs which had reached a certain threshold of capability. This level was not reached because they were intrinsically profitable. They were mostly a waste of money for anything but
entertainment. They would not have caught on UNLESS govt and the large corporations had paid enormous amounts of money to support them, and require that their consultants all use them.
Without this seed money nobody in business would have used them.
2.3 / 5 (3) Nov 20, 2013
@GhostOfOtto1923, ... I'm not surprised that you would fall for the notion that republicans are some how responsible for the ObamaCare debacle,.... despite their stated position all along that IT
WOULD NOT WORK,... If the Obama administration is so stupid as to rely on those opposed to the monstrosity to ensure its success, then there you go.
See my above posts on some of the ways they sabotaged it. What makes YOU think it wont work?
It WILL work. Its a matter of national security. The industry will not be allowed to leave itself vulnerable to collapse by refusing to change. Obamacare is inevitable just like social security was.
Look at what it took to eliminate the southern slave culture. That wasnt going to change by itself either even though it wasnt going to be able to compete with industries which were embracing
So it was destroyed in the Only Way possible. Urban renewal if you will.
1.4 / 5 (11) Nov 20, 2013
A conservative does not think any form of government is capable of improving the human condition, ...thus the advocation of a limited one
-So by the rules of philo word calculating, you are saying that
Conservatives are strong advocates of several branches of government
-despite the inability of any of them to improve the human condition. That is at least clumsy. Why do we have even the ones you favor if NONE of them can improve the human condition?
Do you not understand the difference between "form of government", and "branches
of government"?
3 / 5 (2) Nov 20, 2013
So which form of govt is it that has branches you strongly advocate for, if NONE of these forms can improve the human condition? Try making sense this time please.
1.4 / 5 (11) Nov 20, 2013
@Zephir_fan, -It is of no concern to me that you believe me wrt Esteven57's deleted post. - I've been a member since 2007 & have never stated I was "leaving". -based on your above posts, it appears
that you are 'very young' and have only an interest in Jerry-Springer type trolling, and so it would not interest me to engage you further. No offense.
@Ghost, IOW government in general, is not efficient nor effective as a force for economic and societal progress. So even if republicans controlled the white house and congress in the USA, I would
still advocate limited gov, because they would not have any better rational for increasing government control at the expense of liberty. Branches of gov that are necessary for protection of citizens,
private property, and infrastructure.,... justice system, military.
1.4 / 5 (11) Nov 20, 2013
I've been posting since 2007, so there is plenty of evidence; google 'site:phys.org noumenon' knock yourself out, "skippy", or continue thinking whatever you want. You never asked a coherent question
in context of a substantive discussion. Plus, it is clear you're a troll and mentally young.
1 / 5 (9) Nov 21, 2013
I'm calling you a troll because you're calling people "skippy" in an moronic attempt to talk down to people. Once you drop the Jerry-Springer argument style, ad-hominem arguments, and post for
sometime as if you actually appear interested in a subject rather than using that sibject as a vehicle for pointless argument,.... then I may eventually respond,... othereise it is pointless to
engage in your type.
1.5 / 5 (8) Nov 21, 2013
Is it me or is there exactly zero information in this article?
(ignoring usual rant in comments)
Making Mathematics: Support for Students
From 1999-2002, Making Mathematics connected students with professional mathematicians to work on fun, interesting, and challenging math research projects. Working alongside a Making Mathematics
mentor, students experienced and learned the methods that mathematicians use in their work, saw mathematics as a scientific discipline, and saw what mathematicians really do every day.
by Joe Noss
6.1 student at a school near London, England
I used to think of maths (for reasons I don't understand, in the US you only have one?) as being a bit like cookery. It consisted of "recipes": self-contained nucleuses of technique, which gave me the tools to answer particular types of problems.
Dan McGinn (PDF)
Eric Landquist (PDF)
Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific
Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be
interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting.
The short answer is “yes for small averages, but not for long ones”.
The Furstenberg multiple recurrence theorem can be phrased as follows: if ${X = (X, {\mathcal X}, \mu)}$ is a probability space with a measure-preserving shift ${T:X \rightarrow X}$ (which naturally
induces an isomorphism ${\alpha: L^\infty(X) \rightarrow L^\infty(X)}$ by setting ${\alpha a := a \circ T^{-1}}$), ${a \in L^\infty(X)}$ is non-negative with positive trace ${\tau(a) := \int_X a\ d\
mu}$, and ${k \geq 1}$ is an integer, then one has
$\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0.$
In particular, ${\tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0}$ for all ${n}$ in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.
The Host-Kra multiple convergence theorem makes the related assertion that if ${a_0,\ldots,a_{k-1} \in L^\infty(X)}$, then the scalar averages
$\displaystyle \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )$
converge to a limit as ${N \rightarrow \infty}$; a fortiori, the function averages
$\displaystyle \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})$
converge in (say) ${L^2(X)}$ norm.
The space ${L^\infty(X)}$ is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space ${H}$ which is closed under the weak operator topology,
and under taking adjoints. Indeed, one can take ${H}$ to be ${L^2(X)}$, and identify each element ${m}$ of ${L^\infty(X)}$ with the multiplier operator ${a \mapsto ma}$. The operation ${\tau: a \
mapsto \int_X a\ d\mu}$ is then a finite trace for this algebra, i.e. a linear map from the algebra to the scalars ${{\mathbb C}}$ such that ${\tau(ab)=\tau(ba)}$, ${\tau(a^*) = \overline{\tau(a)}}$,
and ${\tau(a^* a) \geq 0}$, with equality iff ${a=0}$. The shift ${\alpha: L^\infty(X) \rightarrow L^\infty(X)}$ is then an automorphism of this algebra (preserving shift and conjugation).
We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system ${(M, \tau, \alpha)}$ to be a von Neumann algebra ${M}$ with a finite trace ${\tau}$ and an
automorphism ${\alpha: M \rightarrow M}$. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:
• (Matrices) ${M = M_n({\mathbb C})}$ is the algebra of ${n \times n}$ complex matrices, with trace ${\tau(a) = \frac{1}{n} \hbox{tr}(a)}$ and shift ${\alpha(a) := UaU^{-1}}$, where ${U}$ is a fixed unitary ${n \times n}$ matrix; a quick verification of the trace axioms for this example appears after this list.
• (Group algebras) ${M = \overline{{\mathbb C} G}}$ is the closure of the group algebra ${{\mathbb C} G}$ of a discrete group ${G}$ (i.e. the algebra of finite formal complex combinations of group
elements), which acts on the Hilbert space ${\ell^2(G)}$ by convolution (identifying each group element with its Kronecker delta function). A trace is given by ${\alpha(a) = \langle a \delta_0, \
delta_0 \rangle_{\ell^2(G)}}$, where ${\delta_0 \in \ell^2(G)}$ is the Kronecker delta at the identity. Any automorphism ${T: G \rightarrow G}$ of the group induces a shift ${\alpha: M \
rightarrow M}$.
• (Noncommutative torus) ${M}$ is the von Neumann algebra acting on ${L^2(({\mathbb R}/{\mathbb Z})^2)}$ generated by the multiplier operator ${f(x,y) \mapsto e^{2\pi i x} f(x,y)}$ and the shifted
multiplier operator ${f(x,y) \mapsto e^{2\pi i y} f(x+\alpha,y)}$, where ${\alpha \in {\mathbb R}/{\mathbb Z}}$ is fixed. A trace is given by ${\alpha(a) = \langle 1, a1\rangle_{L^2(({\mathbb R}/
{\mathbb Z})^2)}}$, where ${1 \in L^2(({\mathbb R}/{\mathbb Z})^2)}$ is the constant function.
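As a quick check that the matrix example really does define a finite trace (an elementary verification): by cyclicity of the matrix trace,

$\displaystyle \tau(ab) = \frac{1}{n} \hbox{tr}(ab) = \frac{1}{n} \hbox{tr}(ba) = \tau(ba), \qquad \tau(a^*) = \frac{1}{n} \hbox{tr}(a^*) = \overline{\tau(a)},$

and ${\tau(a^* a) = \frac{1}{n} \sum_{i,j} |a_{ij}|^2 \geq 0}$, with equality iff ${a = 0}$; moreover the shift preserves the trace, since ${\tau(UaU^{-1}) = \frac{1}{n} \hbox{tr}(UaU^{-1}) = \frac{1}{n} \hbox{tr}(a) = \tau(a)}$, again by cyclicity.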
Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed ${k \geq 1}$ and for a fixed von Neumann dynamical system $
• (Recurrence on average) Whenever ${a \in M}$ is non-negative with positive trace, is it true that$\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots
(\alpha^{(k-1)n} a) ) > 0?$
• (Recurrence on a dense set) Whenever ${a \in M}$ is non-negative with positive trace, is it true that$\displaystyle \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0$for all ${n}$ in a set of
positive upper density?
• (Weak convergence) With ${a_0,\ldots,a_{k-1} \in M}$, is it true that$\displaystyle \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )$converges?
• (Strong convergence) With ${a_1,\ldots,a_{k-1} \in M}$, is it true that$\displaystyle \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})$converges in the Hilbert-Schmidt norm ${\|a\|_{L^2(M)} := \tau(a^* a)^{1/2}}$?
Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.
For ${k=1}$, all four questions can trivially be answered “yes”. For ${k=2}$, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators.
For ${k=3}$, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” results assuming that ${M}$ is ergodic. For general ${k}$, we
have a positive answer to all four questions under the assumption that ${M}$ is asymptotically abelian, which roughly speaking means that the commutators ${[a,\alpha^n b]}$ converge to zero (in an
appropriate weak sense) as ${n \rightarrow \infty}$. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido,
Duvenhage, and Beyers-Duvenhage-Stroh. For the ${k=3}$ result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the
“generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.
In most other situations we have counterexamples to all of these questions. In particular:
• For ${k=3}$, recurrence on average can fail on an ergodic system; indeed, one can even make the average negative. This example is ultimately based on a Behrend example construction and a von
Neumann algebra construction known as the crossed product.
• For ${k=3}$, recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
• For ${k=4}$, weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a
sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
• For ${k=5}$, recurrence on a dense set fails even with the ergodicity hypothesis. This uses a fancier version of the Behrend example due to Ruzsa in this paper of Bergelson, Host, and Kra. This
example only applies for ${k \geq 5}$; we do not know for ${k=4}$ whether recurrence on a dense set holds for ergodic systems.
This will be a more frivolous post than usual, in part due to the holiday season.
I recently happened across the following video, which exploits a simple rhetorical trick that I had not seen before:
If nothing else, it’s a convincing (albeit unsubtle) demonstration that the English language is non-commutative (or perhaps non-associative); a linguistic analogue of the swindle, if you will.
Of course, the trick relies heavily on sentence fragments that negate or compare; I wonder if it is possible to achieve a comparable effect without using such fragments.
A related trick which I have seen (though I cannot recall any explicit examples right now; perhaps some readers know of some?) is to set up the verses of a song so that the last verse is identical to
the first, but now has a completely distinct meaning (e.g. an ironic interpretation rather than a literal one) due to the context of the preceding verses. The ultimate challenge would be to set up a
Möbius song, in which each iteration of the song completely reverses the meaning of the next iterate (cf. this xkcd strip), but this may be beyond the capability of the English language.
On a related note: when I was a graduate student in Princeton, I recall John Conway (and another author whose name I forget) producing another light-hearted demonstration that the English language
was highly non-commutative, by showing that if one takes the free group with 26 generators $a,b,\ldots,z$ and quotients out by all relations given by anagrams (e.g. $cat=act$) then the resulting
group was commutative. Unfortunately I was not able to locate this recreational mathematics paper of Conway (which also treated the French language, if I recall correctly); perhaps one of the
readers knows of it?
In a multiplicative group ${G}$, the commutator of two group elements ${g, h}$ is defined as ${[g,h] := g^{-1}h^{-1}gh}$ (other conventions are also in use, though they are largely equivalent for the
purposes of this discussion). A group is said to be nilpotent of step ${s}$ (or more precisely, step ${\leq s}$), if all iterated commutators of order ${s+1}$ or higher necessarily vanish. For
instance, a group is nilpotent of step ${1}$ if and only if it is abelian, and it is nilpotent of step ${2}$ if and only if ${[[g_1,g_2],g_3]=id}$ for all ${g_1,g_2,g_3}$ (i.e. all commutator elements ${[g_1,g_2]}$ are central), and so forth. A good example of an ${s}$-step nilpotent group is the group of ${(s+1) \times (s+1)}$ upper-triangular unipotent matrices (i.e. matrices with ${1}$s on the diagonal and zeroes below the diagonal), with entries in some ring (e.g. reals, integers, complex numbers, etc.).
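As a quick sanity check of the ${s=2}$ case of this example, the following Python sketch (the entry range and sample count are arbitrary choices) verifies numerically that ${[[a,b],c]}$ is always the identity for ${3 \times 3}$ upper-triangular unipotent integer matrices:

import numpy as np

rng = np.random.default_rng(1)
I3 = np.eye(3, dtype=np.int64)

def rand_unipotent():
    # random 3 x 3 upper-triangular unipotent matrix with small integer entries
    return np.triu(rng.integers(-3, 4, (3, 3)), k=1) + I3

def inv(g):
    # unipotent integer matrices have unipotent integer inverses
    return np.rint(np.linalg.inv(g)).astype(np.int64)

def comm(g, h):
    # group commutator [g,h] = g^{-1} h^{-1} g h
    return inv(g) @ inv(h) @ g @ h

for _ in range(1000):
    a, b, c = rand_unipotent(), rand_unipotent(), rand_unipotent()
    assert np.array_equal(comm(comm(a, b), c), I3)   # 2-step nilpotency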
Another important example of a nilpotent group arises from operations on polynomials. For instance, if ${V_{\leq s}}$ is the vector space of real polynomials of one variable of degree at most ${s}$, then there are two natural affine actions on ${V_{\leq s}}$. Firstly, every polynomial ${Q}$ in ${V_{\leq s}}$ gives rise to a “vertical” shift ${P \mapsto P+Q}$. Secondly, every ${h \in {\bf R}}$ gives rise to a “horizontal” shift ${P \mapsto P(\cdot+h)}$. The group generated by these two shifts is a nilpotent group of step ${\leq s}$; this reflects the well-known fact that a polynomial of degree ${\leq s}$ vanishes once one differentiates more than ${s}$ times. Because of this link between nilpotency and polynomials, one can view nilpotent algebra as a generalisation of polynomial algebra.
Suppose one has a finite number ${g_1,\ldots,g_n}$ of generators. Using abstract algebra, one can then construct the free nilpotent group ${{\mathcal F}_{\leq s}(g_1,\ldots,g_n)}$ of step ${\leq s}$,
defined as the group generated by the ${g_1,\ldots,g_n}$ subject to the relations that all commutators of order ${s+1}$ involving the generators are trivial. This is the universal object in the
category of nilpotent groups of step ${\leq s}$ with ${n}$ marked elements ${g_1,\ldots,g_n}$. In other words, given any other ${\leq s}$-step nilpotent group ${G'}$ with ${n}$ marked elements $
{g'_1,\ldots,g'_n}$, there is a unique homomorphism from the free nilpotent group to ${G'}$ that maps each ${g_j}$ to ${g'_j}$ for ${1 \leq j \leq n}$. In particular, the free nilpotent group is
well-defined up to isomorphism in this category.
In many applications, one wants to have a more concrete description of the free nilpotent group, so that one can perform computations more easily (and in particular, be able to tell when two words in
the group are equal or not). This is easy for small values of ${s}$. For instance, when ${s=1}$, ${{\mathcal F}_{\leq 1}(g_1,\ldots,g_n)}$ is simply the free abelian group generated by ${g_1,\ldots,g_n}$, and so every element ${g}$ of ${{\mathcal F}_{\leq 1}(g_1,\ldots,g_n)}$ can be described uniquely as
$\displaystyle g = \prod_{j=1}^n g_j^{m_j} := g_1^{m_1} \ldots g_n^{m_n} \ \ \ \ \ (1)$
for some integers ${m_1,\ldots,m_n}$, with the obvious group law. Indeed, to obtain existence of this representation, one starts with any representation of ${g}$ in terms of the generators ${g_1,\ldots,g_n}$, and then uses the abelian property to push the ${g_1}$ factors to the far left, followed by the ${g_2}$ factors, and so forth. To show uniqueness, we observe that the group ${G}$ of formal abelian products ${\{ g_1^{m_1} \ldots g_n^{m_n}: m_1,\ldots,m_n \in {\bf Z} \} \equiv {\bf Z}^n}$ is already a ${\leq 1}$-step nilpotent group with marked elements ${g_1,\ldots,g_n}$, and so there must be a homomorphism from the free nilpotent group to ${G}$. Since ${G}$ distinguishes all the products ${g_1^{m_1} \ldots g_n^{m_n}}$ from each other, the free nilpotent group must also.
It is only slightly more tricky to describe the free nilpotent group ${{\mathcal F}_{\leq 2}(g_1,\ldots,g_n)}$ of step ${\leq 2}$. Using the identities
$\displaystyle gh = hg [g,h]; \quad gh^{-1} = ([g,h]^{-1})^{g^{-1}} h^{-1} g; \quad g^{-1} h = h [g,h]^{-1} g^{-1}; \quad g^{-1} h^{-1} = [g,h] h^{-1} g^{-1}$
(where ${g^h := h^{-1} g h}$ is the conjugate of ${g}$ by ${h}$) we see that whenever ${1 \leq i < j \leq n}$, one can push a positive or negative power of ${g_i}$ past a positive or negative power
of ${g_j}$, at the cost of creating a positive or negative power of ${[g_i,g_j]}$, or one of its conjugates. Meanwhile, in a ${\leq 2}$-step nilpotent group, all the commutators are central, and one
can pull all the commutators out of a word and collect them as in the abelian case. Doing all this, we see that every element ${g}$ of ${{\mathcal F}_{\leq 2}(g_1,\ldots,g_n)}$ has a representation
of the form
$\displaystyle g = (\prod_{j=1}^n g_j^{m_j}) (\prod_{1 \leq i < j \leq n} [g_i,g_j]^{m_{[i,j]}}) \ \ \ \ \ (2)$
for some integers ${m_j}$ for ${1 \leq j \leq n}$ and ${m_{[i,j]}}$ for ${1 \leq i < j \leq n}$. Note that we don’t need to consider commutators ${[g_i,g_j]}$ for ${i \geq j}$, since
$\displaystyle [g_i,g_i] = id$
and
$\displaystyle [g_i,g_j] = [g_j,g_i]^{-1}.$
It is possible to show also that this representation is unique, by repeating the previous argument, i.e. by showing that the set of formal products
$\displaystyle G := \{ (\prod_{j=1}^n g_j^{m_j}) (\prod_{1 \leq i < j \leq n} [g_i,g_j]^{m_{[i,j]}}): m_j, m_{[i,j]} \in {\bf Z} \}$
forms a ${\leq 2}$-step nilpotent group, after using the above rules to define the group operations. This can be done, but verifying the group axioms (particularly the associative law) for ${G}$ is
unpleasantly tedious.
Once one sees this, one rapidly loses an appetite for trying to obtain a similar explicit description for free nilpotent groups for higher step, especially once one starts seeing that higher
commutators obey some non-obvious identities such as the Hall-Witt identity
$\displaystyle [[g, h^{-1}], k]^h\cdot[[h, k^{-1}], g]^k\cdot[[k, g^{-1}], h]^g = 1 \ \ \ \ \ (3)$
(a nonlinear version of the Jacobi identity in the theory of Lie algebras), which make one less certain as to the existence or uniqueness of various proposed generalisations of the representations
(1) or (2). For instance, in the free ${\leq 3}$-step nilpotent group, it turns out that for representations of the form
$\displaystyle g = (\prod_{j=1}^n g_j^{m_j}) (\prod_{1 \leq i < j \leq n} [g_i,g_j]^{m_{[i,j]}}) (\prod_{1 \leq i < j < k \leq n} [[g_i,g_j],g_k]^{n_{[[i,j],k]}})$
one has uniqueness but not existence (e.g. even in the simplest case ${n=3}$, there is no place in this representation for, say, ${[[g_1,g_3],g_2]}$ or ${[[g_1,g_2],g_2]}$), but if one tries to
insert more triple commutators into the representation to make up for this, one has to be careful not to lose uniqueness due to identities such as (3). One can paste these in by ad hoc means in the ${s=3}$ case, but the ${s=4}$ case looks more fearsome still, especially now that the quadruple commutators split into several distinct-looking species such as ${[[g_i,g_j],[g_k,g_l]]}$ and ${[[[g_i,g_j],g_k],g_l]}$, which are nevertheless still related to each other by identities such as (3). While one can eventually disentangle this mess for any fixed ${n}$ and ${s}$ by a finite amount
of combinatorial computation, it is not immediately obvious how to give an explicit description of ${{\mathcal F}_{\leq s}(g_1,\ldots,g_n)}$ uniformly in ${n}$ and ${s}$.
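Identity (3) holds in every group, so one can at least check one's conventions numerically; here is a short Python sketch doing so inside the group of ${4 \times 4}$ upper-triangular unipotent integer matrices (the choice of ambient group, entry range, and sample count are all arbitrary):

import numpy as np

rng = np.random.default_rng(2)
I4 = np.eye(4, dtype=np.int64)

def rand_unipotent():
    return np.triu(rng.integers(-2, 3, (4, 4)), k=1) + I4

def inv(g):
    # unipotent integer matrices have unipotent integer inverses
    return np.rint(np.linalg.inv(g)).astype(np.int64)

def comm(g, h):   # [g,h] = g^{-1} h^{-1} g h
    return inv(g) @ inv(h) @ g @ h

def conj(g, h):   # g^h = h^{-1} g h
    return inv(h) @ g @ h

for _ in range(200):
    g, h, k = rand_unipotent(), rand_unipotent(), rand_unipotent()
    lhs = (conj(comm(comm(g, inv(h)), k), h)
           @ conj(comm(comm(h, inv(k)), g), k)
           @ conj(comm(comm(k, inv(g)), h), g))
    assert np.array_equal(lhs, I4)   # the Hall-Witt identity (3)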
Nevertheless, it turns out that one can give a reasonably tractable description of this group if one takes a polycyclic perspective rather than a nilpotent one – i.e. one views the free nilpotent
group as a tower of group extensions of the trivial group by the cyclic group ${{\bf Z}}$. This seems to be a fairly standard observation in group theory – I found it in this book of Magnus, Karrass,
and Solitar, via this paper of Leibman – but seems not to be so widely known outside of that field, so I wanted to record it here.
This is a technical post inspired by separate conversations with Jim Colliander and with Soonsik Kwon on the relationship between two techniques used to control non-radiating solutions to dispersive
nonlinear equations, namely the “double Duhamel trick” and the “in/out decomposition”. See for instance these lecture notes of Killip and Visan for a survey of these two techniques and other related
methods in the subject. (I should caution that this post is likely to be unintelligible to anyone not already working in this area.)
For sake of discussion we shall focus on solutions to a nonlinear Schrödinger equation
$\displaystyle iu_t + \Delta u = F(u)$
and we will not concern ourselves with the specific regularity of the solution ${u}$, or the specific properties of the nonlinearity ${F}$ here. We will also not address the issue of how to justify
the formal computations being performed here.
Solutions to this equation enjoy the forward Duhamel formula
$\displaystyle u(t) = e^{i(t-t_0)\Delta} u(t_0) - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt'$
for times ${t}$ to the future of ${t_0}$ in the lifespan of the solution, as well as the backward Duhamel formula
$\displaystyle u(t) = e^{i(t-t_1)\Delta} u(t_1) + i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt'$
for all times ${t}$ to the past of ${t_1}$ in the lifespan of the solution. The first formula asserts that the solution at a given time is determined by the initial state and by the immediate past,
while the second formula is the time reversal of the first, asserting that the solution at a given time is determined by the final state and the immediate future. These basic causal formulae are the
foundation of the local theory of these equations, and in particular play an instrumental role in establishing local well-posedness for these equations. In this local theory, the main philosophy is
to treat the homogeneous (or linear) term ${e^{i(t-t_0)\Delta} u(t_0)}$ or ${e^{i(t-t_1)\Delta} u(t_1)}$ as the main term, and the inhomogeneous (or nonlinear, or forcing) integral term as an error term.
The situation is reversed when one turns to the global theory, and looks at the asymptotic behaviour of a solution as one approaches a limiting time ${T}$ (which can be infinite if one has global
existence, or finite if one has finite time blowup). After a suitable rescaling, the linear portion of the solution often disappears from view, leaving one with an asymptotic blowup profile solution
which is non-radiating in the sense that the linear components of the Duhamel formulae vanish, thus
$\displaystyle u(t) = - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (1)$
$\displaystyle u(t) = i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (2)$
where ${t_0, t_1}$ are the endpoint times of existence. (This type of situation comes up for instance in the Kenig-Merle approach to critical regularity problems, by reducing to a minimal blowup
solution which is almost periodic modulo symmetries, and hence non-radiating.) These types of non-radiating solutions are propelled solely by their own nonlinear self-interactions from the immediate
past or immediate future; they are generalisations of “nonlinear bound states” such as solitons.
A key task is then to somehow combine the forward representation (1) and the backward representation (2) to obtain new information on ${u(t)}$ itself, that cannot be obtained from either
representation alone; it seems that the immediate past and immediate future can collectively exert more control on the present than they each do separately. This type of problem can be abstracted as
follows. Let ${\|u(t)\|_{Y_+}}$ be the infimal value of ${\|F_+\|_N}$ over all forward representations of ${u(t)}$ of the form
$\displaystyle u(t) = \int_{t_0}^t e^{i(t-t')\Delta} F_+(t') \ dt' \ \ \ \ \ (3)$
where ${N}$ is some suitable spacetime norm (e.g. a Strichartz-type norm), and similarly let ${\|u(t)\|_{Y_-}}$ be the infimal value of ${\|F_-\|_N}$ over all backward representations of ${u(t)}$ of
the form
$\displaystyle u(t) = \int_{t}^{t_1} e^{i(t-t')\Delta} F_-(t') \ dt'. \ \ \ \ \ (4)$
Typically, one already has (or is willing to assume as a bootstrap hypothesis) control on ${F(u)}$ in the norm ${N}$, which gives control of ${u(t)}$ in the norms ${Y_+, Y_-}$. The task is then to
use the control of both the ${Y_+}$ and ${Y_-}$ norms of ${u(t)}$ to gain control of ${u(t)}$ in a more conventional Hilbert space norm ${X}$, which is typically a Sobolev space such as ${H^s}$ or ${L^2}$.
One can use some classical functional analysis to clarify this situation. By the closed graph theorem, the above task is (morally, at least) equivalent to establishing an a priori bound of the form
$\displaystyle \| u \|_X \lesssim \|u\|_{Y_+} + \|u\|_{Y_-} \ \ \ \ \ (5)$
for all reasonable ${u}$ (e.g. test functions). The double Duhamel trick accomplishes this by establishing the stronger estimate
$\displaystyle |\langle u, v \rangle_X| \lesssim \|u\|_{Y_+} \|v\|_{Y_-} \ \ \ \ \ (6)$
for all reasonable ${u, v}$; note that setting ${u=v}$ and applying the arithmetic-geometric inequality then gives (5). The point is that if ${u}$ has a forward representation (3) and ${v}$ has a
backward representation (4), then the inner product ${\langle u, v \rangle_X}$ can (formally, at least) be expanded as a double integral
$\displaystyle \int_{t_0}^t \int_{t}^{t_1} \langle e^{i(t-t')\Delta} F_+(t'), e^{i(t-t'')\Delta} F_-(t'') \rangle_X\ dt'' dt'.$
The dispersive nature of the linear Schrödinger equation often causes ${\langle e^{i(t-t')\Delta} F_+(t'), e^{i(t-t'')\Delta} F_-(t'') \rangle_X}$ to decay, especially in high dimensions. In high
enough dimension (typically one needs five or higher dimensions, unless one already has some spacetime control on the solution), the decay is stronger than ${1/|t'-t''|^2}$, so that the integrand
becomes absolutely integrable and one recovers (6).
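For the record, the deduction of (5) from (6) mentioned above is the one-line computation (using that ${X}$ is a Hilbert space norm): setting ${v=u}$ in (6) gives
$\displaystyle \|u\|_X^2 = |\langle u, u \rangle_X| \lesssim \|u\|_{Y_+} \|u\|_{Y_-} \leq \frac{1}{2} ( \|u\|_{Y_+} + \|u\|_{Y_-} )^2,$
and taking square roots yields (5).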
Unfortunately it appears that estimates of the form (6) fail in low dimensions (for the type of norms ${N}$ that actually show up in applications); there is just too much interaction between past and
future to hope for any reasonable control of this inner product. But one can try to obtain (5) by other means. By the Hahn-Banach theorem (and ignoring various issues related to reflexivity), (5) is
equivalent to the assertion that every ${u \in X}$ can be decomposed as ${u = u_+ + u_-}$, where ${\|u_+\|_{Y_+^*} \lesssim \|u\|_X}$ and ${\|u_-\|_{Y_-^*} \lesssim \|u\|_X}$. Indeed once one has such a decomposition, one obtains (5) by computing the inner product of ${u}$ with ${u=u_++u_-}$ in ${X}$ in two different ways. One can also (morally at least) write ${\|u_+\|_{Y_+^*}}$ as ${\| e^{i(\cdot-t)\Delta} u_+\|_{N^*([t_0,t])}}$ and similarly write ${\|u_-\|_{Y_-^*}}$ as ${\| e^{i(\cdot-t)\Delta} u_-\|_{N^*([t,t_1])}}$.
So one can dualise the task of proving (5) as that of obtaining a decomposition of an arbitrary initial state ${u}$ into two components ${u_+}$ and ${u_-}$, where the former disperses into the past
and the latter disperses into the future under the linear evolution. We do not know how to achieve this type of task efficiently in general – and doing so would likely lead to a significant advance
in the subject (perhaps one of the main areas in this topic where serious harmonic analysis is likely to play a major role). But in the model case of spherically symmetric data ${u}$, one can perform
such a decomposition quite easily: one uses microlocal projections to set ${u_+}$ to be the “inward” pointing component of ${u}$, which propagates towards the origin in the future and away from the
origin in the past, and ${u_-}$ to similarly be the “outward” component of ${u}$. As spherical symmetry significantly dilutes the amplitude of the solution (and hence the strength of the
nonlinearity) away from the origin, this decomposition tends to work quite well for applications, and is one of the main reasons (though not the only one) why we have a global theory for
low-dimensional nonlinear Schrödinger equations in the radial case, but not in general.
The in/out decomposition is a linear one, but the Hahn-Banach argument gives no reason why the decomposition needs to be linear. (Note that other well-known decompositions in analysis, such as the
Fefferman-Stein decomposition of BMO, are necessarily nonlinear, a fact which is ultimately equivalent to the non-complemented nature of a certain subspace of a Banach space; see these lecture notes
of mine and this old blog post for some discussion.) So one could imagine a sophisticated nonlinear decomposition as a general substitute for the in/out decomposition. See for instance this paper of
Bourgain and Brezis for some of the subtleties of decomposition even in very classical function spaces such as ${H^{1/2}(R)}$. Alternatively, there may well be a third way to obtain estimates of the
form (5) that do not require either decomposition or the double Duhamel trick; such a method may well clarify the relative relationship between past, present, and future for critical nonlinear
dispersive equations, which seems to be a key aspect of the theory that is still only partially understood. (In particular, it seems that one needs a fairly strong decoupling of the present from both
the past and the future to get the sort of elliptic-like regularity results that allow us to make further progress with such equations.)
One of the most basic theorems in linear algebra is that every finite-dimensional vector space has a finite basis. Let us give a statement of this theorem in the case when the underlying field is the rational numbers ${{\mathbb Q}}$:
Theorem 1 (Finite generation implies finite basis, infinitary version) Let ${V}$ be a vector space over the rationals ${{\mathbb Q}}$, and let ${v_1,\ldots,v_n}$ be a finite collection of vectors
in ${V}$. Then there exists a collection ${w_1,\ldots,w_k}$ of vectors in ${V}$, with ${1 \leq k \leq n}$, such that
□ (${w}$ generates ${v}$) Every ${v_j}$ can be expressed as a rational linear combination of the ${w_1,\ldots,w_k}$.
□ (${w}$ independent) There is no non-trivial linear relation ${a_1 w_1 + \ldots + a_k w_k = 0}$, ${a_1,\ldots,a_k \in {\mathbb Q}}$ among the ${w_1,\ldots,w_k}$ (where non-trivial means that the ${a_i}$ are not all zero).
In fact, one can take ${w_1,\ldots,w_k}$ to be a subset of the ${v_1,\ldots,v_n}$.
Proof: We perform the following “rank reduction argument”. Start with ${w_1,\ldots,w_k}$ initialised to ${v_1,\ldots,v_n}$ (so initially we have ${k=n}$). Clearly ${w}$ generates ${v}$. If the ${w_i}$ are linearly independent then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the ${w_i}$, say ${w_k}$, is a rational linear combination of the ${w_1,\ldots,w_{k-1}}$. In such a case, ${w_k}$ becomes redundant, and we may delete it (reducing the rank ${k}$ by one). We repeat this procedure; it can only run for at most ${n}$ steps and so terminates with ${w_1,\ldots,w_k}$ obeying both of the desired properties. $\Box$
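The argument is of course just Gaussian elimination in disguise; here is a minimal Python sketch (using exact rational arithmetic via the fractions module; all names are my own, and this greedy variant keeps the earliest vectors rather than deleting from the end as in the proof) that extracts such a subset basis:

from fractions import Fraction

def subset_basis(vectors):
    # returns a linearly independent subset of `vectors` (tuples of rationals)
    # with the same rational span, mirroring the rank reduction argument
    kept, reduced, pivots = [], [], []
    for v in vectors:
        r = [Fraction(x) for x in v]
        for b, p in zip(reduced, pivots):
            if r[p]:
                coef = r[p] / b[p]
                r = [x - coef * y for x, y in zip(r, b)]   # eliminate pivot p
        if any(r):
            kept.append(tuple(v))
            reduced.append(r)
            pivots.append(next(i for i, x in enumerate(r) if x))
        # else: v is a rational combination of the vectors already kept
    return kept

print(subset_basis([(1, 2, 3), (2, 4, 6), (0, 1, 1), (1, 3, 4)]))
# [(1, 2, 3), (0, 1, 1)]; the other two vectors are rational combinations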
In additive combinatorics, one often wants to use results like this in finitary settings, such as that of a cyclic group ${{\mathbb Z}/p{\mathbb Z}}$ where ${p}$ is a large prime. Now, technically
speaking, ${{\mathbb Z}/p{\mathbb Z}}$ is not a vector space over ${{\mathbb Q}}$, because one can only multiply an element of ${{\mathbb Z}/p{\mathbb Z}}$ by a rational number if the denominator of that rational is not divisible by ${p}$. But for ${p}$ very large, ${{\mathbb Z}/p{\mathbb Z}}$ “behaves” like a vector space over ${{\mathbb Q}}$, at least if one restricts attention to the rationals of
“bounded height” – where the numerator and denominator of the rationals are bounded. Thus we shall refer to elements of ${{\mathbb Z}/p{\mathbb Z}}$ as “vectors” over ${{\mathbb Q}}$, even though
strictly speaking this is not quite the case.
On the other hand, saying that one element of ${{\mathbb Z}/p{\mathbb Z}}$ is a rational linear combination of another set of elements is not a very interesting statement: any non-zero element of ${{\mathbb Z}/p{\mathbb Z}}$ already generates the entire space! However, if one again restricts attention to rational linear combinations of bounded height, then things become interesting again. For instance, the vector ${1}$ can generate elements such as ${37}$ or ${\frac{p-1}{2}}$ using rational linear combinations of bounded height, but will not be able to generate such elements of ${{\mathbb Z}/p{\mathbb Z}}$ as ${\lfloor\sqrt{p}\rfloor}$ without using rational numbers of unbounded height.
For similar reasons, the notion of linear independence over the rationals doesn’t initially look very interesting over ${{\mathbb Z}/p{\mathbb Z}}$: any two non-zero elements of ${{\mathbb Z}/p{\mathbb Z}}$ are of course rationally dependent. But again, if one restricts attention to rational numbers of bounded height, then independence begins to emerge: for instance, ${1}$ and ${\lfloor\sqrt{p}\rfloor}$ are independent in this sense.
Thus, it becomes natural to ask whether there is a “quantitative” analogue of Theorem 1, with non-trivial content in the case of “vector spaces over the bounded height rationals” such as ${{\mathbb Z}/p{\mathbb Z}}$, which asserts that given any bounded collection ${v_1,\ldots,v_n}$ of elements, one can find another set ${w_1,\ldots,w_k}$ which is linearly independent “over the rationals up to
some height”, such that the ${v_1,\ldots,v_n}$ can be generated by the ${w_1,\ldots,w_k}$ “over the rationals up to some height”. Of course to make this rigorous, one needs to quantify the two
heights here, the one giving the independence, and the one giving the generation. In order to be useful for applications, it turns out that one often needs the former height to be much larger than
the latter; exponentially larger, for instance, is not an uncommon request. Fortunately, one can accomplish this, at the cost of making the height somewhat large:
Theorem 2 (Finite generation implies finite basis, finitary version) Let ${n \geq 1}$ be an integer, and let ${F: {\mathbb N} \rightarrow {\mathbb N}}$ be a function. Let ${V}$ be an abelian
group which admits a well-defined division operation by any natural number of size at most ${C(F,n)}$ for some constant ${C(F,n)}$ depending only on ${F,n}$; for instance one can take ${V = {\mathbb Z}/p{\mathbb Z}}$ for ${p}$ a prime larger than ${C(F,n)}$. Let ${v_1,\ldots,v_n}$ be a finite collection of “vectors” in ${V}$. Then there exists a collection ${w_1,\ldots,w_k}$ of vectors in ${V}$, with ${1 \leq k \leq n}$, as well as an integer ${M \geq 1}$, such that
□ (Complexity bound) ${M \leq C(F,n)}$ for some ${C(F,n)}$ depending only on ${F, n}$.
□ (${w}$ generates ${v}$) Every ${v_j}$ can be expressed as a rational linear combination of the ${w_1,\ldots,w_k}$ of height at most ${M}$ (i.e. the numerator and denominator of the
coefficients are at most ${M}$).
□ (${w}$ independent) There is no non-trivial linear relation ${a_1 w_1 + \ldots + a_k w_k = 0}$ among the ${w_1,\ldots,w_k}$ in which the ${a_1,\ldots,a_k}$ are rational numbers of height at
most ${F(M)}$.
In fact, one can take ${w_1,\ldots,w_k}$ to be a subset of the ${v_1,\ldots,v_n}$.
Proof: We perform the same “rank reduction argument” as before, but translated to the finitary setting. Start with ${w_1,\ldots,w_k}$ initialised to ${v_1,\ldots,v_n}$ (so initially we have ${k=n}$),
and initialise ${M=1}$. Clearly ${w}$ generates ${v}$ at this height. If the ${w_i}$ are linearly independent up to rationals of height ${F(M)}$ then we are done. Otherwise, there is a non-trivial
linear relation between them; after shuffling things around, we see that one of the ${w_i}$, say ${w_k}$, is a rational linear combination of the ${w_1,\ldots,w_{k-1}}$, whose height is bounded by
some function depending on ${F(M)}$ and ${k}$. In such a case, ${w_k}$ becomes redundant, and we may delete it (reducing the rank ${k}$ by one), but note that in order for the remaining ${w_1,\ldots,w_{k-1}}$ to generate ${v_1,\ldots,v_n}$ we need to raise the height upper bound for the rationals involved from ${M}$ to some quantity ${M'}$ depending on ${M, F(M), k}$. We then replace ${M}$ by ${M'}$ and continue the process. We repeat this procedure; it can only run for at most ${n}$ steps and so terminates with ${w_1,\ldots,w_k}$ and ${M}$ obeying all of the desired properties. (Note
that the bound on ${M}$ is quite poor, being essentially an ${n}$-fold iteration of ${F}$! Thus, for instance, if ${F}$ is exponential, then the bound on ${M}$ is tower-exponential in nature.) $\Box$
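To see the shape of this bound concretely, here is a two-line Python sketch (a crude upper bound only: it pessimistically assumes that every step of the procedure fires, and absorbs the dependence on ${M, F(M), k}$ into a single application of ${F}$):

def height_bound(n, F):
    # after at most n deletion steps, M is bounded by an n-fold iterate of F
    M = 1
    for _ in range(n):
        M = F(M)
    return M

print(height_bound(4, lambda m: 2 ** m))  # 65536: already tower-exponential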
(A variant of this type of approximate basis lemma was used in my paper with Van Vu on the singularity probability of random Bernoulli matrices.)
Looking at the statements and proofs of these two theorems it is clear that the two results are in some sense the “same” result, except that the latter has been made sufficiently quantitative that it
is meaningful in such finitary settings as ${{\mathbb Z}/p{\mathbb Z}}$. In this note I will show how this equivalence can be made formal using the language of non-standard analysis. This is not a
particularly deep (or new) observation, but it is perhaps the simplest example I know of that illustrates how nonstandard analysis can be used to transfer a quantifier-heavy finitary statement, such
as Theorem 2, into a quantifier-light infinitary statement, such as Theorem 1, thus lessening the need to perform “epsilon management” duties, such as keeping track of unspecified growth functions
such as ${F}$. This type of transference is discussed at length in this previous blog post of mine.
In this particular case, the amount of effort needed to set up the nonstandard machinery in order to deduce Theorem 2 from Theorem 1 is too great for this transference to be particularly worthwhile,
especially given that Theorem 2 has such a short proof. However, when performing a particularly intricate argument in additive combinatorics, in which one is performing a number of “rank reduction
arguments”, “energy increment arguments”, “regularity lemmas”, “structure theorems”, and so forth, the purely finitary approach can become bogged down with all the epsilon management one needs to do
to organise all the parameters that are flying around. The nonstandard approach can efficiently hide a large number of these parameters from view, and it can then become worthwhile to invest in the
nonstandard framework in order to clean up the rest of a lengthy argument. Furthermore, an advantage of moving up to the infinitary setting is that one can then deploy all the firepower of an
existing well-developed infinitary theory of mathematics (in this particular case, this would be the theory of linear algebra) out of the box, whereas in the finitary setting one would have to
painstakingly finitise each aspect of such a theory that one wished to use (imagine for instance trying to finitise the rank-nullity theorem for rationals of bounded height).
The nonstandard approach is very closely related to use of compactness arguments, or of the technique of taking ultralimits and ultraproducts; indeed we will use an ultrafilter in order to create the
nonstandard model in the first place.
I will also discuss two variants of both Theorem 1 and Theorem 2 which have actually shown up in my research. The first is that of the regularity lemma for polynomials over finite fields, which came up when studying the equidistribution of such polynomials (in this paper with Ben Green). The second comes up when one is dealing not with a single finite collection ${v_1,\ldots,v_n}$ of vectors,
but rather with a family ${(v_{h,1},\ldots,v_{h,n})_{h \in H}}$ of such vectors, where ${H}$ ranges over a large set; this gives rise to what we call the sunflower lemma, and came up in this recent
paper of myself, Ben Green, and Tamar Ziegler.
This post is mostly concerned with nonstandard translations of the “rank reduction argument”. Nonstandard translations of the “energy increment argument” and “density increment argument” were briefly
discussed in this recent post; I may return to this topic in more detail in a future post.
Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the
technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these
statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem
to the edge of the spectrum.)
In this paper, we establish the analogous result for the singular values of rectangular iid matrices ${M = M_{n,p}}$, or (equivalently) the eigenvalues of the associated covariance matrix ${\frac{1}
{n} M M^*}$. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and those of covariance matrices; for instance, just as the former has asymptotic
spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that
the singular values of a rectangular matrix ${M}$ are essentially the same thing as the eigenvalues of the augmented matrix
$\displaystyle \begin{pmatrix} 0 & M \\ M^* & 0\end{pmatrix}$
after eliminating sign ambiguities and degeneracies. So one can view singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two
diagonal blocks of that matrix have been zeroed out.
The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration
inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly
the deterministic components of the theory, such as eigenvalue variation formulae) carry through without much difficulty. The one place where one has to spend a bit of time to check details is to
ensure that the Erdos-Schlein-Yau delocalisation result (that asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in ${\ell^\infty}$ norm as one could hope to get)
is also true in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform).
As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by
Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distribution which matches one of these distributions up to four moments, which covers virtually all complex distributions with independent real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble.
Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but
without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now
covering the complex Bernoulli ensemble.
In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This
is done by rearranging the order of steps of the original argument carefully.
Assaf Naor and I have just uploaded to the arXiv our paper “Random Martingales and localization of maximal inequalities“, to be submitted shortly. This paper investigates the best constant in
generalisations of the classical Hardy-Littlewood maximal inequality
$\displaystyle | \{ x \in {\mathbb R}^n: \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy > \lambda \} | \leq \frac{C_n}{\lambda} \|f\|_{L^1({\mathbb R}^n)}$
for any ${\lambda > 0}$ and any absolutely integrable ${f: {\mathbb R}^n \rightarrow {\mathbb R}}$, where ${B(x,r)}$ is the Euclidean ball of radius ${r}$ centred at ${x}$, and ${|E|}$ denotes the Lebesgue measure of a
subset ${E}$ of ${{\mathbb R}^n}$. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to Calderón-Zygmund theory. A similar inequality in fact holds
with the Euclidean norm replaced by any other convex norm on ${{\mathbb R}^n}$.
The exact value of the constant ${C_n}$ is only known in ${n=1}$, with a remarkable result of Melas establishing that ${C_1 = \frac{11+\sqrt{61}}{12}}$. Classical covering lemma arguments give the
exponential upper bound ${C_n \leq 2^n}$ when properly optimised (a direct application of the Vitali covering lemma gives ${C_n \leq 5^n}$, but one can reduce ${5}$ to ${2}$ by being careful). In an
important paper of Stein and Strömberg, the improved bound ${C_n = O( n \log n )}$ was obtained for any convex norm by a more intricate covering lemma argument, and the slight improvement ${C_n = O(n)}$ was obtained in the Euclidean case by another argument more adapted to the Euclidean setting that relied on heat kernels. In the other direction, a recent result of Aldaz shows that ${C_n \rightarrow \infty}$ in the case of the ${\ell^\infty}$ norm, and in fact in an even more recent preprint of Aubrun, the lower bound ${C_n \gg_\epsilon \log^{1-\epsilon} n}$ for any ${\epsilon > 0}$
has been obtained in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that ${C_n}$ is in fact uniformly bounded in this case.
Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound ${C_n = O(n \log n)}$ is extremely general, applying to a wide class of metric
measure spaces obeying a certain “microdoubling condition at dimension ${n}$“; and conversely, in such level of generality, it is essentially the best estimate possible, even with additional metric
measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between
Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general localisation principle, which roughly speaking asserts that in order to prove a
maximal inequality over all scales ${r \in (0,+\infty)}$, it suffices to prove such an inequality in a smaller range ${r \in [R, nR]}$ uniformly in ${R>0}$. It is this localisation which ultimately
explains the significance of the ${n \log n}$ growth in the Stein-Strömberg result (there are ${O(n \log n)}$ essentially distinct scales in any range ${[R,nR]}$). It also shows that if one restricts
the radii ${r}$ to a lacunary range (such as powers of ${2}$), the best constant improves to ${O(\log n)}$; if one restricts the radii to an even sparser range such as powers of ${n}$, the best
constant becomes ${O(1)}$.
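As a back-of-the-envelope check of the ${n \log n}$ scale count (a heuristic only: “essentially distinct” is interpreted here as scales separated by the microdoubling ratio ${1+1/n}$), in Python:

import math

# number k of scales with ratio (1 + 1/n) needed to sweep [R, nR]:
# (1 + 1/n)^k = n, i.e. k = log n / log(1 + 1/n), which is about n log n
for n in (10, 100, 1000):
    k = math.log(n) / math.log(1 + 1 / n)
    print(n, round(k), round(n * math.log(n)))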
This is an adaptation of a talk I gave recently for a program at IPAM. In this talk, I gave a (very informal and non-rigorous) overview of Hrushovski’s use of model-theoretic techniques to establish
new Freiman-type theorems in non-commutative groups, and some recent work in progress of Ben Green, Tom Sanders and myself to establish combinatorial proofs of some of Hrushovski’s results.
This is the last reading seminar of this quarter for the Hrushovski paper. Anush Tserunyan continued working through her notes on stable theories. We introduced the key notion of non-forking
extensions (in the context of stable theories, at least) of types when constants are added; these are extensions which are “as generic as possible” with respect to the constants being added. The
existence of non-forking extensions can be used for instance to generate Morley sequences – sequences of indiscernibles which are “in general position” in some sense.
Starting in the winter quarter (Monday Jan 4, to be precise), I will be giving a graduate course on random matrices, with lecture notes to be posted on this blog. The topics I have in mind are
somewhat fluid, but my initial plan is to cover a large fraction of the following:
• Central limit theorem, random walks, concentration of measure
• The semicircular and Marcenko-Pastur laws for bulk distribution
• A little bit on the connections with free probability
• The spectral distribution of GUE and gaussian random matrices; theory of determinantal processes
• A little bit on the connections with orthogonal polynomials and Riemann-Hilbert problems
• Singularity probability and the least singular value; connections with the Littlewood-Offord problem
• The circular law
• Universality for eigenvalue spacing; Erdos-Schlein-Yau delocalisation of eigenvectors and applications
If time permits, I may also cover
• The Tracy-Widom law
• Connections with Dyson Brownian motion and the Ornstein-Uhlenbeck process; the Erdos-Schlein-Yau approach to eigenvalue spacing universality
• Conjectural connections with zeroes of the Riemann zeta function
Depending on how the course progresses, I may also continue it into the spring quarter (or else have a spring graduate course on a different topic – one potential topic I have in mind is dynamics on
nilmanifolds and applications to combinatorics).
EXACT function
This article describes the formula syntax and usage of the EXACT function (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use
functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.) in Microsoft Excel.
Compares two text strings and returns TRUE if they are exactly the same, FALSE otherwise. EXACT is case-sensitive but ignores formatting differences. Use EXACT to test text being entered into a document.
EXACT(text1, text2)
The EXACT function syntax has the following arguments (argument: A value that provides information to an action, an event, a method, a property, a function, or a procedure.):
● Text1 Required. The first text string.
● Text2 Required. The second text string.
The example may be easier to understand if you copy it to a blank worksheet.
1. Select the example in this article. If you are copying the example in Excel Online, copy and paste one cell at a time.
Important: Do not select the row or column headers.
Selecting an example from Help
2. Press CTRL+C.
3. Create a blank workbook or worksheet.
4. In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Online, repeat copying and pasting for each cell in the example.
Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.
After you copy the example to a blank worksheet, you can adapt it to suit your needs.
   A              B
1  First string   Second string
2  word           word
3  Word           word
4  w ord          word

Formula          Description (Result)
=EXACT(A2,B2)    Checks whether the strings in the first row match (TRUE)
=EXACT(A3,B3)    Checks whether the strings in the second row match (FALSE)
=EXACT(A4,B4)    Checks whether the strings in the third row match (FALSE)
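For readers more comfortable with code than with worksheets, the comparison EXACT performs is just a case-sensitive string equality test; here is a Python sketch of the same behaviour (the function name is mine):

def exact(text1, text2):
    # case-sensitive comparison; cell formatting is not part of the text,
    # so it plays no role here either
    return text1 == text2

assert exact("word", "word") is True
assert exact("Word", "word") is False   # case differs
assert exact("w ord", "word") is False  # embedded space differs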
Applies to:
Excel 2010, Excel Web App, SharePoint Online for enterprises, SharePoint Online for professionals and small businesses
clock puzzles
The earliest known clock problem was posed in 1694 by Jacques Ozanam in his Récréations mathématiques et physiques.
Here are two clock puzzles invented by Lewis Carroll:
1. A clock has hour and minute hands of the same length and no numerals on its face. At what time between 6 and 7 o'clock will the time on the clock appear to be the same as the time read on the
reflection of the clock in a mirror?
2. Which has a better chance of giving the right time: a clock that has stopped or one that loses a minute every day?
And here is another from Henry Dudeney's Amusements in Mathematics called "The Club Clock:"
3. One of the big clocks in the Cogitators' Club was found the other night to have stopped just when, as will be seen in the illustration, the second hand was exactly midway between the other two
hands. One of the members proposed to some of his friends that they should tell him the exact time when (if the clock had not stopped) the second hand would next again have been midway between
the minute hand and the hour hand. Can you find the correct time that it would happen?
1. Approximately 27 minutes and 42 seconds (exactly 360/13 minutes) after 6. (A numerical check appears after these answers.)
2. The stopped clock, because it will give the right time twice a day, whereas the other is only correct about once every two years (it must lose a full twelve hours, i.e. 720 minutes at a minute a day, before reading correctly again).
3. The positions of the hands shown in the illustration could only indicate that the clock stopped at 44 min. 51 1143/1427 sec. after eleven o'clock. The second hand would next be "exactly midway
between the other two hands" at 45 min. 52 496/1427 sec. after eleven o'clock. If we had been dealing with the points on the circle to which the three hands are directed, the answer would be 45
min. 22 106/1427 sec. after eleven; but the question applied to the hands, and the second hand would not be between the others at that time, but outside them.
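A quick numerical check of answer 1, in Python (assuming the mirror is vertical, so that reflection sends a hand at a degrees clockwise from 12 to 360 - a degrees; with hands of equal length the reflected face shows a genuine time exactly when reflection swaps the two hands):

t = 360 / 13                    # minutes after 6 o'clock
hour_angle = 180 + 0.5 * t      # hour hand moves 0.5 degrees per minute, starting at 180
minute_angle = 6.0 * t          # minute hand moves 6 degrees per minute

# reflection swaps the two hands, so the mirrored clock shows the same face
assert abs((360 - hour_angle) - minute_angle) < 1e-9
assert abs((360 - minute_angle) - hour_angle) < 1e-9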
Sums in a Triangle
Date: 9/2/95 at 18:30:30
From: Anonymous
Subject: Algebra
Six numbered bottle caps are arranged in a triangle. The sum of the three
corner numbers, 1+6+5, is three more than the sum of the remaining numbers.
How can you rearrange the bottle caps so that the sum of the corner
numbers is:
a. Twice the sum of the remaining numbers?
b. The same as the sum of the remaining numbers?
Date: 9/6/95 at 11:32:48
From: Doctor Ken
Subject: Re: Algebra
To do these problems, we can write an equation. If I were given the first
problem, the one that's already solved, I would use the variable X to represent
the sum of the numbers on the corners, and write down the following equation:
X + (X-3) = 1+2+3+4+5+6
= 21
The X is the numbers on the corners, and the X-3 is the other numbers
(which add up to 3 less than X, hence the x-3).
Then I could solve for X:
2X - 3 = 21
2X = 24
X = 12
So then all you'd have to do is fiddle around and try to find three numbers
that add up to 12: 1+6+5 does fine, and so does 3+4+5 and a bunch of other
How would you write an equation for the next problem?
If the sum of the corner numbers is twice the sum of the remaining numbers,
then the corner numbers are X and the other numbers are X/2, right?
-Doctor Ken, The Geometry Forum
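For completeness, a brute-force check of both parts in Python (caps assumed to be numbered 1 through 6, as the worked example 1+6+5 versus 2+3+4 suggests):

from itertools import combinations

caps = range(1, 7)              # total 1 + 2 + ... + 6 = 21

# (a) corners twice the rest: X + X/2 = 21, so X = 14
print([c for c in combinations(caps, 3) if sum(c) == 14])      # [(3, 5, 6)]

# (b) corners equal to the rest: X + X = 21 has no integer solution,
# so no arrangement works for part (b) with these numbers
print([c for c in combinations(caps, 3) if 2 * sum(c) == 21])  # []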
Geometry of Cuts and Metrics, Algorithms Combin. 15
Results 1 - 10 of 17
- DOC. MATH. J. DMV , 1998
"... We describe a few applications of semide nite programming in combinatorial optimization. ..."
- Ann. Inst. Statist. Math
"... We provide a characterization of the compressed lattice polytopes in terms of their facet defining inequalities and we show that every compressed lattice polytope is affinely isomorphic to a 0/
1-polytope. As an application, we characterize those graphs whose cut polytopes are compressed and discuss ..."
Cited by 17 (1 self)
We provide a characterization of the compressed lattice polytopes in terms of their facet defining inequalities and we show that every compressed lattice polytope is affinely isomorphic to a 0/
1-polytope. As an application, we characterize those graphs whose cut polytopes are compressed and discuss consequences for studying linear programming relaxations in statistical disclosure
limitation.
, 1998
"... Given an incomplete edge-weighted graph, G = (V; E; !), G is said to be embeddable in ! r , or r-embeddable, if the vertices of G can be mapped to points in ! r such that every two adjacent
vertices v i , v j of G are mapped to points x i , x j 2 ! r whose Euclidean distance is equal to t ..."
Cited by 12 (1 self)
Given an incomplete edge-weighted graph G = (V, E, ω), G is said to be embeddable in R^r, or r-embeddable, if the vertices of G can be mapped to points in R^r such that every two adjacent vertices v_i, v_j of G are mapped to points x_i, x_j ∈ R^r whose Euclidean distance is equal to the weight of the edge (v_i, v_j). Barvinok [3] proved that if G is r-embeddable for some r, then it is r̄-embeddable where r̄ = ⌊(√(8|E| + 1) - 1)/2⌋. In this paper we provide a constructive proof of this result by presenting an algorithm to construct such an r̄-embedding.
- J. Reine Angew. Math
"... We construct finitely generated groups with arbitrary prescribed Hilbert space compression α ∈ [0, 1]. For a large class of Banach spaces E (including all uniformly convex Banach spaces), the
E-compression of these groups coincides with their Hilbert space compression. Moreover, the groups that we c ..."
Cited by 11 (0 self)
We construct finitely generated groups with arbitrary prescribed Hilbert space compression α ∈ [0, 1]. For a large class of Banach spaces E (including all uniformly convex Banach spaces), the
E-compression of these groups coincides with their Hilbert space compression. Moreover, the groups that we construct have asymptotic dimension at most 3, hence they are exact. In particular, the
first examples of groups that are uniformly embeddable into a Hilbert space (respectively, exact, of finite asymptotic dimension) with Hilbert space compression 0 are given. 1
- SIAM Journal on Optimization
"... Abstract. We study how the lift-and-project method introduced by Lovász and Schrijver [SIAM J. Optim., 1 (1991), pp. 166–190] applies to the cut polytope. We show that the cut polytope of a
graph can be found in k iterations if there exist k edges whose contraction produces a graph with no K5-minor. ..."
Cited by 7 (4 self)
Abstract. We study how the lift-and-project method introduced by Lovász and Schrijver [SIAM J. Optim., 1 (1991), pp. 166–190] applies to the cut polytope. We show that the cut polytope of a graph can
be found in k iterations if there exist k edges whose contraction produces a graph with no K5-minor. Therefore, for a graph G with n ≥ 4 nodes with stability number α(G), n − 4 iterations suffice
instead of the m (number of edges) iterations required in general and, under some assumption, n − α(G) − 3 iterations suffice. The exact number of needed iterations is determined for small n ≤ 7 by a
detailed analysis of the new relaxations. If positive semidefiniteness is added to the construction, then one finds in one iteration a relaxation of the cut polytope which is tighter than its basic
semidefinite relaxation and than another one introduced recently by Anjos and Wolkowicz [Discrete Appl. Math., to appear]. We also show how the Lovász–Schrijver relaxations for the stable set
polytope of G can be strengthened using the corresponding relaxations for the cut polytope of the graph G ∇ obtained from G by adding a node adjacent to all nodes of G.
- QUARTERLY JOURNAL OF MATHEMATICS OXFORD , 2000
"... The size sz() of an `1-graph = (V; E) is the minimum of n f =tf over all the possible `1-embeddings f into n f -dimensional hypercube with scale t f . The sum of distances between all the pairs
of vertices of is at most sz()dv=2ebv=2c (v = jV j). The latter is an equality if and only if is equic ..."
Cited by 7 (3 self)
The size sz(Γ) of an ℓ1-graph Γ = (V, E) is the minimum of n_f/t_f over all the possible ℓ1-embeddings f into the n_f-dimensional hypercube with scale t_f. The sum of distances between all the pairs of vertices of Γ is at most sz(Γ)⌈v/2⌉⌊v/2⌋ (v = |V|). The latter is an equality if and only if Γ is an equicut graph, that is, admits an ℓ1-embedding f that for any 1 ≤ i ≤ n_f satisfies Σ_{x ∈ V} f(x)_i ∈ {⌈v/2⌉, ⌊v/2⌋}. Basic properties of equicut graphs are investigated. A construction of equicut graphs from ℓ1-graphs via a natural doubling construction is given. It generalizes several well-known constructions of polytopes and distance-regular graphs. Finally, large families of examples, mostly related to polytopes and distance-regular graphs, are presented.
- PREPRINT CAMS 142 ECOLE DES HAUTES ETUDES EN SCIENCES SOCIALES , 2001
"... The classical game of Peg Solitaire has uncertain origins, but was certainly popular by the time of Louis XIV, and was described by Leibniz in 1710. The modern mathematical study of the game
dates to the 1960s, when the solitaire cone was first described by Boardman and Conway. Valid inequalities ov ..."
Cited by 7 (3 self)
The classical game of Peg Solitaire has uncertain origins, but was certainly popular by the time of Louis XIV, and was described by Leibniz in 1710. The modern mathematical study of the game dates to
the 1960s, when the solitaire cone was first described by Boardman and Conway. Valid inequalities over this cone, known as pagoda functions, were used to show the infeasibility of various peg games.
In this paper we study the extremal structure of solitaire cones for a variety of boards, and relate their structure to the well studied metric cone. In particular we give: 1. an equivalence between
the multicommodity flow problem with associated dual metric cone and a generalized peg game with associated solitaire cone; 2. a related NP-completeness result; 3. a method of generating large
classes of facets; 4. a complete characterization of 0-1 facets; 5. exponential upper and lower bounds (in the dimension) on the number of facets; 6. results on the number of facets, incidence and
adjacency relationships and diameter for small rectangular, toric and triangular boards; 7. a complete characterization of the adjacency of extreme rays, diameter, number of 2-faces and edge
connectivity for rectangular toric boards.
- Discrete Applied Mathematics , 1996
"... The classical game of Peg Solitaire has uncertain origins, but was certainly popular by the time of Louis XIV, and was described by Leibniz in 1710. The modern mathematical study of the game
dates to the 1960s, when the solitaire cone was first described by Boardman and Conway. Valid inequalities o ..."
Cited by 5 (0 self)
The classical game of Peg Solitaire has uncertain origins, but was certainly popular by the time of Louis XIV, and was described by Leibniz in 1710. The modern mathematical study of the game dates to
the 1960s, when the solitaire cone was first described by Boardman and Conway. Valid inequalities over this cone, known as pagoda functions, were used to show the infeasibility of various peg games.
In this paper we study the extremal structure of solitaire cones for a variety of boards, and relate their structure to the well studied metric cone. In particular we give: 1. an equivalence between
the multicommodity flow problem with associated dual metric cone and a generalized peg game with associated solitaire cone; 2. a related NP-completeness result; 3. a method of generating large
classes of facets; 4. a complete characterization of 0-1 facets; 5. exponential upper and lower bounds (in the dimension) on the number of facets; 6. results on the number of facets, incidence,
adjacency and ...
"... In this paper we study enumeration problems for polytopes arising from combinatorial optimization problems. While these polytopes turn out to be quickly intractable for enumeration algorithms
designed for general polytopes, tailor-made algorithms using their rich combinatorial features can exhib ..."
Cited by 4 (1 self)
In this paper we study enumeration problems for polytopes arising from combinatorial optimization problems. While these polytopes turn out to be quickly intractable for enumeration algorithms
designed for general polytopes, tailor-made algorithms using their rich combinatorial features can exhibit strong performances. The main engine of these combinatorial algorithms is the use of the
large symmetry group of combinatorial polytopes. Specifically we consider a polytope with applications to the well-known max-cut and multicommodity flow problems: the metric polytope m_n on n nodes.
We prove that for n ≥ 9 the faces of codimension 3 of the metric polytope are partitioned into 15 orbits of its symmetry group. For n ≤ 8, we describe additional upper layers of the face lattice of m_n.
In particular, using the list of orbits of high dimensional faces of m_8, we prove that the description of m_8 given in [9] is complete with 1 550 825 000 vertices and that the Laurent-Poljak
conjecture [14] holds for n ≤ 8. Many vertices of m_9 are computed and additional results on the structure of the metric polytope are presented...
"... Abstract. In this paper, we explore a connection between binary hierarchical models, their marginal polytopes and codeword polytopes, the convex hulls of linear codes. The class of linear codes
that are realizable by hierarchical models is determined. We classify all full dimensional polytopes with ..."
Cited by 3 (3 self)
Abstract. In this paper, we explore a connection between binary hierarchical models, their marginal polytopes and codeword polytopes, the convex hulls of linear codes. The class of linear codes that
are realizable by hierarchical models is determined. We classify all full dimensional polytopes with the property that their vertices form a linear code and give an algorithm that determines them. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1804128","timestamp":"2014-04-17T14:01:03Z","content_type":null,"content_length":"37593","record_id":"<urn:uuid:4bbaedbd-1a8a-46d1-8e40-2f357528c707>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Slideshow: Pumpkin Pi Proves Integral to Bowdoin Math Department
Last Halloween, Amanda Gartside ’12 botched up infinity. This year, the math major is shooting for something a bit more manageable, like pi. Pumpkin pi, that is.
Gartside is one of the student organizers of the Math Department’s annual pumpkin carving event. The informal get-together of students, professors and their families has become a tradition in the
math department, with students attempting to outdo each other with Bowdoin and math-themed jack o’ lanterns.
“It’s as creative as the students are,” says Jennifer Taback, associate professor of mathematics. “One year somebody tried to carve one of the professor’s faces. I don’t think it came out too well …”
Gartside says she looks forward to the event all year. “This was one of the things that attracted me to the math department in the first place, when I was a freshman,” she says. “I saw the department
had a sense of togetherness.”
The pumpkins will be on display on the steps of Searles Hall, the math department’s home. Currently more than 100 students are majoring and minoring in mathematics at Bowdoin.
Love this idea! Hope we get to see the creations! I am a Math major of the class of 1984 now teaching grades 5 and 6 math and loving it!
OOps! Sorry! I responded before the slideshow had a chance to load! Nice | {"url":"http://www.bowdoindailysun.com/2011/10/slideshow-pumpkin-pi-proves-integral-to-bowdoin-math-department/","timestamp":"2014-04-19T22:07:58Z","content_type":null,"content_length":"59846","record_id":"<urn:uuid:d9242589-d641-43d3-97b7-b02e3aa0b13f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to rotate a 64-bit BitBoard?
I am currently programming a chess game using bitboards.
From what I have read on the internet, the best way (at least the most commonly used way) to generate moves for sliding pieces (rooks, bishops, and queens) involves the use of rotated bitboards.
However, I cannot seem to come up with or find an efficient way to rotate bitboards.
For people not into chess programming, what I want to achieve is to transform something like:
Is there any efficient way to do this?
Thank you very much.
Perhaps the best way would be to treat your bitboard as a virtual "2D matrix". If you perform all your calculations based on x and y (where Y is a vertical coordinate from 0-7, and X is
horizontal 0-7), the position in your 1D bit board will be y*8 + x
From there, it's a simple mathematical translation, using the relationship between X and Y
eg, for a translation which rotates 90 degrees right, the bit at the virtual coords [ X, Y ] translates directly to [ 7-Y, X ]
a left-rotation would be the opposite, [ X, Y ] mapped on to [ Y, 7-X ]
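(A quick worked instance, with our numbers rather than the poster's: the bit at [ 3, 2 ] sits at index 2*8 + 3 = 19; under the right-rotation it maps to [ 7-2, 3 ] = [ 5, 3 ], i.e. index 3*8 + 5 = 29.)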
Here's a short program to illustrate (using a nonstandard MSVC++ type, unsigned __int64 as a bitboard.. this snippet may not compile on anything other than MSVC++)
#include <iostream>

typedef unsigned __int64 bitboard;

bool get(bitboard const board, int x, int y)
{
    int pos = y * 8 + x;
    return (board >> pos) & 1;
}

bitboard set(int x, int y, bool val)
{
    int pos = y * 8 + x;
    return static_cast<bitboard>(val) << pos;
}

int xrot(int x, int y)
{
    return 7 - y;
}

int yrot(int x, int y)
{
    return x;
}

bitboard rot(bitboard const from)
{
    bitboard to(0);
    for (int y(0); y < 8; ++y)
        for (int x(0); x < 8; ++x)
            to |= set(xrot(x, y), yrot(x, y), get(from, x, y));
    return to;
}

void print(bitboard const board)
{
    for (int y(0); y < 8; ++y)
    {
        for (int x(0); x < 8; ++x)
            std::cout << get(board, x, y) << " ";
        std::cout << "\n";
    }
}

int main()
{
    bitboard board = static_cast<bitboard>(255) << 32;
    print(board);      // assumed: show the board before rotating (this call appears to have been lost from the post)
    std::cout << "\n\n";
    print(rot(board));
}
If the code doesn't compile for you, then hopefully you'll be able to see what's going on to some extent. It causes the bitboard to 'rotate right' 90 degrees.
Note for the above post - if I were to do that program properly, I would almost certainly wrap the X and Y coords into a class of their own (as I would for the bitboard, which might be better
represented as a std::bitset<64>); that was a bit of a crude C-like example.
Depends on how you define 'efficient'. You're not going to be able to avoid doing about 64 bit comparisons for every rotation.
#include <stdio.h>

void rotbits (unsigned char bits[8]) {
    int i, j;
    unsigned char i_mask, j_mask;

    // Change the direction of the rotation by changing the direction of the masks
    for (i = 0, i_mask = 0x01; i < 8; ++i, i_mask <<= 1)
        for (j = 7, j_mask = 0x80; j > i; --j, j_mask >>= 1)
            // If the bits are not the same, flip both bits
            if (((bits[i] & j_mask) != 0) != ((bits[j] & i_mask) != 0)) {
                bits[i] ^= j_mask;
                bits[j] ^= i_mask;
            }
}

int main (void) {
    unsigned char foo[8] = {0xFF, 0x00, 0x00, 0x00, 0x0F, 0x00, 0x00, 0x00};
    int i;

    for (i = 0; i < 8; ++i) {
        printf ("%2X\n", foo[i]);
    }

    printf ("rotate...\n");
    rotbits (foo);

    for (i = 0; i < 8; ++i) {
        printf ("%2X\n", foo[i]);
    }

    return 0;
}
to Bench82:
thank you for your demonstration, I only need to substitute "long long int" (the equivalent non-standard type in gcc) for "__int64" for the code to compile. This is the algorithm that I
initially came up with, too, but I wasn't sure if this is the most efficient way.
to QuestionC:
thank you for the code. Took me a while to get the algorithm, but I think I get it now.
Depends on how you define 'efficient'. You're not going to be able to avoid doing about 64 bit comparisons for every rotation.
that was what I was afraid of. I was kind of hoping for some kind of magic =) I guess one doesn't exist then.
I want to find the most efficient algorithm because the board has to be rotated 3 times at every node of the minimax tree.
Thank you all for your help.
You might want to look up DarkThought. There is an interesting article called How DarkThought Plays Chess that might be of interest.
[edit] perhaps you can create multiple bitboards that are already rotated and just update them instead of having a single bitboard that you manipulate.
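A minimal sketch of that idea (ours, not from the thread; the struct and function names are made up, and the square numbering y*8 + x plus the right-rotation mapping [x, y] -> [7-y, x] are carried over from Bench82's post):

#include <cstdio>

typedef unsigned long long bitboard;

// index of square sq = y*8 + x after a 90-degree right rotation
inline int rot90(int sq)
{
    int x = sq % 8, y = sq / 8;
    return x * 8 + (7 - y);     // new y = x, new x = 7 - y
}

struct Occupancy
{
    bitboard normal;            // the ordinary board
    bitboard rotated;           // kept equal to the rotation of 'normal'

    Occupancy() : normal(0), rotated(0) {}

    void toggle(int sq)         // place or remove a piece
    {
        normal  ^= bitboard(1) << sq;
        rotated ^= bitboard(1) << rot90(sq);
    }
    void move(int from, int to) { toggle(from); toggle(to); }
};

int main()
{
    Occupancy occ;
    occ.toggle(0);              // drop a piece on square 0
    occ.move(0, 12);            // a move costs two single-bit updates per board
    std::printf("%llx %llx\n", occ.normal, occ.rotated);
}

Each move then touches two bits per board instead of re-rotating all 64, which is the point of keeping the pre-rotated copy in sync.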
thank you, I will look it up
Well... there is one trick, but it's pretty dirty.
class bitboard {
    unsigned char bit[8];
    bool flipbit;
public:
    void lazyflip() { flipbit = !flipbit; }
    unsigned char at (unsigned int x, unsigned int y) {
        if (flipbit)
            return bit[y] & (1 << x);   // the post had (1 >> x); a left shift is what's meant
        return bit[x] & (1 << y);
    }
};
Forgive syntax errors. I am sure the idea is clear from the code.
Depending on the ratio of flips to board accesses, this may be faster.
wow, this is amazing =) never thought of this
however, I think I will stick to manofsteel972's suggestion of maintaining rotated bitboards because the ratio of flips to board accesses is pretty low.
Thank you
There's another dirty trick
For all those who (like me) stumble upon this thread and can't believe there's no alternative to testing every single bit, here's a version that reduces the number of instructions a bit (at the
cost of readability).
#include <stdio.h>

// gcc
typedef unsigned long long bitboard;
// msvc
//typedef unsigned __int64 bitboard;

// rotate bitboard 90° to the left
inline bitboard rotate(register bitboard b) {
    register bitboard t; // temporary

    // reflect b against diagonal line going through bits 1<<7 and 1<<56
    t = (b ^ (b >> 63)) & 0x0000000000000001; b ^= t ^ (t << 63);
    t = (b ^ (b >> 54)) & 0x0000000000000102; b ^= t ^ (t << 54);
    t = (b ^ (b >> 45)) & 0x0000000000010204; b ^= t ^ (t << 45);
    t = (b ^ (b >> 36)) & 0x0000000001020408; b ^= t ^ (t << 36);
    t = (b ^ (b >> 27)) & 0x0000000102040810; b ^= t ^ (t << 27);
    t = (b ^ (b >> 18)) & 0x0000010204081020; b ^= t ^ (t << 18);
    t = (b ^ (b >>  9)) & 0x0001020408102040; b ^= t ^ (t <<  9);

    // reflect b against vertical center line
    t = (b ^ (b >> 7)) & 0x0101010101010101; b ^= t ^ (t << 7);
    t = (b ^ (b >> 5)) & 0x0202020202020202; b ^= t ^ (t << 5);
    t = (b ^ (b >> 3)) & 0x0404040404040404; b ^= t ^ (t << 3);
    t = (b ^ (b >> 1)) & 0x0808080808080808; b ^= t ^ (t << 1);

    return b;
}

void dump(bitboard b) {
    int x, y;
    for (y = 0; y < 8; y++) {
        for (x = 0; x < 8; x++) {
            printf(b & 1 ? " X" : " -");
            b >>= 1;
        }
        printf("\n");   // assumed: end each rank with a newline
    }
}

int main() {
    bitboard test = 0x40F858187E3C1800;
    dump(test);         // assumed: the original prints the board and its rotation;
    printf("\n");       // these calls were lost in the page capture
    dump(rotate(test));
    return 0;
}
If you want to understand why this works, I suggest you
1. Read http://www-cs-faculty.stanford.edu/~knuth/fasc1a.ps.gz, which explains why every one of the bit-twiddling lines above swaps all the bits that are set in the hex constant with those 63,
54, 45, ... positions to the left.
2. Grab a piece of squared paper, mark off 8x8 squares, fill in the corresponding bit positions and convince yourself that the swappings in the two blocks do what the comments say.
3. Convince yourself that the two reflections indeed are the same as the desired rotation.
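One way to see step 3 concretely (our working, not the poster's): the first block reflects against the anti-diagonal, (x, y) -> (7-y, 7-x), and the second block mirrors each rank, (x, y) -> (7-x, y); composed, they give (x, y) -> (y, 7-x), which is exactly the left-rotation mapping [ X, Y ] -> [ Y, 7-X ] quoted earlier in the thread.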
Whoa, ancient thread! | {"url":"http://cboard.cprogramming.com/cplusplus-programming/91892-how-rotate-64-bit-bitboard-printable-thread.html","timestamp":"2014-04-19T21:41:32Z","content_type":null,"content_length":"22527","record_id":"<urn:uuid:6ad6db5b-0700-49aa-88f4-b91cf254c61f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Anonymous on Wednesday, November 14, 2012 at 11:01pm.
I do not know how to get the answer. Kathleen spent $30.50 on two shirts. One shirt cost $3.50 more than the other. How much did each shirt cost? How do you get the answer for X + X + 3.50 = 30.50? I am trying to
explain how to get the answer to a 5th grader.
• math - Ms. Sue, Wednesday, November 14, 2012 at 11:08pm
Two of us gave you lengthy explanations.
• math - Reiny, Wednesday, November 14, 2012 at 11:11pm
explanation for 5th grader without using variables ??
one shirt cost $3.50 more than the other,
so if we reduced the cost of the more expensive one by $3.50, they would have cost the same.
But then we should also reduce the total by $3.50:
--- 30.50 - 3.50 = $27
Since they now cost the same, we simply take half of 27, which is 13.50.
cheaper shirt -- $13.50
more expensive shirt -- 13.50 + 3.50 = $17
total cost = 13.50 + 17 = $30.50
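For the equation route the question mentions (our steps, for reference):
X + X + 3.50 = 30.50
2X + 3.50 = 30.50
2X = 27.00
X = 13.50
so the cheaper shirt is $13.50 and the other is 13.50 + 3.50 = $17.00, matching the arithmetic above.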
math - what is 3m/m (fraction), Written as a mixed fraction | {"url":"http://www.jiskha.com/display.cgi?id=1352952096","timestamp":"2014-04-19T23:17:17Z","content_type":null,"content_length":"9005","record_id":"<urn:uuid:d8539eb3-a97e-4eb6-9779-ee75b668a1d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright © University of Cambridge. All rights reserved.
'Super Shapes' printed from http://nrich.maths.org/
Why do this problem?
This problem
provides an opportunity for pupils to practise using addition and subtraction, and it reinforces their inverse relationship. The problem gives learners chances to explain their working and it also
helps them become familiar with the idea of a symbol (in this case a shape) representing a number.
Possible approach
This problem would make a good starter. You could print off
this sheet
of the problem for children to work from and make sure that paper and/or mini-whiteboards are available for them to jot down any workings.
It would be important to talk about the methods that learners have used in each case and so invite a few children to share their way of working. The method is likely to differ depending on which
problem is being solved and you can use this as an opportunity for the group to reflect on why particular approaches were chosen in each case. Emphasise that the method used is entirely up to the
individual, but you may discuss which could be the most efficient.
Key questions
How did you work this out?
Possible extension
Shape Times Shape
involves multiplication rather than addition and requires much more sophisticated reasoning.
Possible support
You could provide calculators if you want children to focus on the method rather than the arithmetic. | {"url":"http://nrich.maths.org/1056/note?nomenu=1","timestamp":"2014-04-20T03:30:35Z","content_type":null,"content_length":"6076","record_id":"<urn:uuid:b3720503-934c-485b-8684-9fbfee7c31be>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
ROWAN UNIVERSITY
Department of Mathematics
Math 01.520 Topics in Applied Mathematics
Math 01.520
Topics in Applied Mathematics ……………………………………………………………3 S.H.
Catalog Description
This course provides an overview of the mathematical modeling process and includes applications to optimization, dynamical systems, and stochastic processes. Models of specific real-world systems will
be developed and studied using analytical and numerical methods.
(Prerequisite: 1701.231, 1701.502)
This course is intended to provide a sufficient background in linear algebra and matrix theory for students in the program of M.A. in mathematics and those in the program of M.A. in Subject Matter
Teaching Mathematics.
After completing this course, a student will be able to:
1. construct mathematical models of real world systems
2. describe the recursive process for the construction of mathematical models
3. use methods of linear algebra and differential equations toward solving mathematical models
4. describe at least three different types of mathematical models
5. use statistical techniques to estimate model parameters and fit a particular model to available data
6. evaluate the validity and robustness of a mathematical model
1. Mathematical Models and Mathematical Modeling
• The Modeling Process, Dimensional Analysis, and Curve Fitting
2. The Mathematics of Optimization
• One-Variable and Multivariable Optimization, Sensitivity Analysis, Robustness, and Computational Methods (could include applications to transportation, economics, production control, and …)
3. Dynamical Systems
• Steady State Analysis, Discrete and Continuous Time Dynamical Systems, Eigenvalue Methods, Phase Portraits, and Numerical Methods (could include applications to epidemiology, planetary motion,
ecology, and traffic flow.)
4. Stochastic Processes
• Discrete and Continuous Probability Models, Markov Processes, and Monte Carlo Simulation (could include applications to inventory control, operations research, and epidemiology.)
Evaluation of Students: Students will be evaluated based on exams and on individual and/or team projects.
Course Evaluation: The course will be evaluated through student surveys and faculty focus groups within the graduate mathematics program.
Rev.: 10/04 DM | {"url":"http://www.rowan.edu/colleges/csm/departments/math/syllabi/TopicsAppliedMath.grd.htm","timestamp":"2014-04-16T11:29:59Z","content_type":null,"content_length":"4947","record_id":"<urn:uuid:ddb3f378-20b8-403d-9c4e-4472fe188396>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00398-ip-10-147-4-33.ec2.internal.warc.gz"} |
Systems of numeration
Results 11 - 20 of 46
- Journal of Universal Computer Science
"... Abstract: The splitting method was defined by the author in [Margenstern 2002a, Margenstern 2002d]. It is at the basis of the notion of combinatoric tilings. As a consequence of this notion,
there is a recurrence sequence which allows us to compute the number of tiles which are at a fixed distance f ..."
Cited by 5 (5 self)
Abstract: The splitting method was defined by the author in [Margenstern 2002a, Margenstern 2002d]. It is at the basis of the notion of combinatoric tilings. As a consequence of this notion, there is
a recurrence sequence which allows us to compute the number of tiles which are at a fixed distance from a given tile. A polynomial is attached to the sequence as well as a language which can be used
for implementing cellular automata on the tiling. The goal of this paper is to prove that the tiling of hyperbolic 4D space is combinatoric. We give here the corresponding polynomial and, as the
first consequence, the language of the splitting is not regular, as it is the case in the tiling of hyperbolic 3D space by rectangular dodecahedra which is also combinatoric. 1 Key Words: cellular
automata, hyperbolic plane
, 2002
"... This survey paper is aimed to describe a relatively new branch of symbolic dynamics which we call Arithmetic Dynamics. It deals with explicit arithmetic expansions of reals and vectors that have
a “dynamical” sense. This means precisely that they (semi-) conjugate a given continuous (or measure-pres ..."
Cited by 5 (1 self)
This survey paper is aimed to describe a relatively new branch of symbolic dynamics which we call Arithmetic Dynamics. It deals with explicit arithmetic expansions of reals and vectors that have a
“dynamical” sense. This means precisely that they (semi-) conjugate a given continuous (or measure-preserving) dynamical system and a symbolic one. The classes of dynamical systems and their codings
considered in the paper involve: • Beta-expansions, i.e., the radix expansions in non-integer bases; • “Rotational ” expansions which arise in the problem of encoding of irrational rotations of the
circle; • Toral expansions which naturally appear in arithmetic symbolic codings of algebraic toral automorphisms (mostly hyperbolic). We study ergodic-theoretic and probabilistic properties of these
expansions and their applications. Besides, in some cases we create “redundant” representations (those whose space of “digits ” is a priori larger than necessary)
"... We determine the subword complexity of the characteristic functions of a two-parameter family fA n g 1 n=1 of infinite sequences which are associated with the winning strategies for a family of
2-player games. A special case of the family has the form A n = bnffc for all n 2 Z?0 , where ff is a f ..."
Cited by 4 (4 self)
We determine the subword complexity of the characteristic functions of a two-parameter family {A_n} (n = 1, 2, ...) of infinite sequences which are associated with the winning strategies for a family of
2-player games. A special case of the family has the form A_n = ⌊nα⌋ for all n ∈ Z>0, where α is a fixed positive irrational number. The characteristic functions of such sequences have been shown
to have subword complexity n + 1. We show that every sequence in the extended family has subword complexity O(n). Denote by Z≥0 and Z>0 the set of nonnegative integers and positive
integers respectively. Given two heaps of finitely many tokens, we define a 2-player heap game as follows. There are two types of moves: 1. Remove any positive number of tokens from a single heap. 2.
Remove k > 0 tokens from one heap and l > 0 from the other. Here k and l are constrained by the condition: 0 < k ≤ l < sk + t, where s and t are predetermined positive integers. The player who reaches
a stat...
- In Proc. WORDS 2007 , 2009
"... Abstract. For a given numeration system, the successor function maps the representation of an integer n onto the representation of its successor n+1. In a general setting, the successor function
maps the n-th word of a genealogically ordered language L onto the (n+1)-th word of L. We show that, if t ..."
Cited by 4 (3 self)
Abstract. For a given numeration system, the successor function maps the representation of an integer n onto the representation of its successor n+1. In a general setting, the successor function maps
the n-th word of a genealogically ordered language L onto the (n+1)-th word of L. We show that, if the ratio of the number of elements of length n + 1 over the number of elements of length n of the
language has a limit β > 1, then the amortized cost of the successor function is equal to β/(β − 1). From this, we deduce the value of the amortized cost for several classes of numeration systems
(integer base systems, canonical numeration systems associated with a Parry number, abstract numeration systems built on a rational language, and rational base numeration systems).
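(An illustration, not from the abstract: in ordinary base 2 one has β = 2, so the amortized cost is 2/(2 − 1) = 2 digit changes per increment, the familiar amortized bound for a binary counter.)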
, 1999
"... this paper. The fi-integers are defined via a numeration system with base fi, see below. In ..."
- Proceedings of the Number Theory Conference , 1997
"... We establish quantitative refinements of recent results on the occurrence of blocks in digital expansions. Furthermore we extend these results to linear numeration systems. ..."
Cited by 4 (0 self)
We establish quantitative refinements of recent results on the occurrence of blocks in digital expansions. Furthermore we extend these results to linear numeration systems.
, 1991
"... New formulae are presented which express various generalizations of Fibonacci numbers as simple sums of binomial and multinomial coefficients. The equalities are inferred from the special
properties of the representations of the integers in certain numeration systems. ..."
Cited by 4 (0 self)
New formulae are presented which express various generalizations of Fibonacci numbers as simple sums of binomial and multinomial coefficients. The equalities are inferred from the special properties
of the representations of the integers in certain numeration systems.
- IEEE Conference on Microwaves, Communications, Antennas and Electronics Systems (COMCAS) 2011
"... Abstract — A simple algebraic approach to synthesis Fibonacci Switched Capacitor Converters (SCC) was developed. The proposed approach reduces the power losses by increasing the number of target
voltages. The synthesized Fibonacci SCC is compatible with the binary SCC and uses the same switch networ ..."
Cited by 3 (2 self)
Abstract — A simple algebraic approach to synthesis Fibonacci Switched Capacitor Converters (SCC) was developed. The proposed approach reduces the power losses by increasing the number of target
voltages. The synthesized Fibonacci SCC is compatible with the binary SCC and uses the same switch network. This feature is unique, since it provides the option to switch between the binary and
Fibonacci target voltages, increasing thereby the resolution of attainable conversion ratios. The theoretical results were verified by experiments. Index terms — Charge pump, Fibonacci numbers,
redundant number system, signed-digit representation, switched capacitor.
"... Abstract. Odometers or “adding machines ” are usually introduced in the context of positional numeration systems built on a strictly increasing sequence of integers. We generalize this notion to
systems defined on an arbitrary infinite regular language. In this latter situation, if (A, <) is a total ..."
Cited by 2 (2 self)
Abstract. Odometers or “adding machines ” are usually introduced in the context of positional numeration systems built on a strictly increasing sequence of integers. We generalize this notion to
systems defined on an arbitrary infinite regular language. In this latter situation, if (A, <) is a totally ordered alphabet, then enumerating the words of a regular language L over A with respect to
the induced genealogical ordering gives a one-to-one correspondence between N and L. In this general setting, the odometer is not defined on a set of sequences of digits but on a set of pairs of
sequences where the first (resp. the second) component of the pair is an infinite word over A (resp. an infinite sequence of states of the minimal automaton of L). We study some properties of the
odometer like continuity, injectivity, surjectivity, minimality,... We then study some particular cases: we show the equivalence of this new function with the classical odometer built upon a sequence
of integers whenever the set of greedy representations of all the integers is a regular language; we also consider substitution numeration systems as well as the connection with β-numerations.
- Theoret. Comp. Sci , 2000
"... Let fi be a real number ? 1. The digit set conversion between real numbers represented in fixed base fi is shown to be computable by an on-line algorithm, and thus is a continuous function. When
fi is a Pisot number the digit set conversion is computable by an on-line finite automaton. 1 Introdu ..."
Cited by 2 (0 self)
Let β be a real number > 1. The digit set conversion between real numbers represented in fixed base β is shown to be computable by an on-line algorithm, and thus is a continuous function. When β
is a Pisot number the digit set conversion is computable by an on-line finite automaton. In computer arithmetic, on-line computation consists of performing arithmetic operations in
Most Significant Digit First (MSDF) mode, digit serially after a certain latency delay [8]. This allows the pipelining of different operations such as addition, multiplication and division. It is
also appropriate for the processing of real numbers having infinite expansions: it is well known that when multiplying two real numbers, only the left part of the result is significant. To be able to
perform on-line addition, it is necessary to use a redundant number system (see [19], [8]). On the other hand, a function is computable by a finite automaton if it needs only a finite auxiliary
storage me... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=411026&sort=cite&start=10","timestamp":"2014-04-19T02:06:28Z","content_type":null,"content_length":"36427","record_id":"<urn:uuid:124b43ba-3c1f-4ca2-9acc-e3d4fdf4bd31>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Screen Cauchy Riemann lightlike submanifolds.
(English) Zbl 1083.53063
The notion of light-like submanifold has been introduced and studied by K. L. Duggal and A. Bejancu in [Light-like Submanifolds of Semi-Riemannian Manifolds and Applications, Kluwer Academic, 364
(1996; Zbl 0848.53001)]. A result in this article shows that, for an indefinite Kähler manifold, the Cauchy Riemann (CR) light-like submanifolds do not include invariant (complex) and real light-like submanifolds.
The paper under review gives an affirmative answer to the following question: “Are there any light-like submanifolds of an indefinite Kähler manifold which contain invariant (complex) and real
light-like submanifolds ?” The main tool is the notion of Screen Cauchy Riemann (SCR)-light-like submanifolds of an indefinite Kähler manifold. For such submanifolds the authors prove two existence
theorems, show that the class of SCR-light-like submanifolds contains complex and screen real subcases, and find the integrability condition of all the distributions. Totally umbilical proper
SCR-light-like submanifolds are also studied. Some new results on irrotational screen real light-like submanifolds are proved and examples are provided.
53C50 Lorentz manifolds, manifolds with indefinite metrics
53C15 Differential geometric structures on manifolds
53C40 Global submanifolds (differential geometry) | {"url":"http://zbmath.org/?q=an:1083.53063","timestamp":"2014-04-20T20:58:14Z","content_type":null,"content_length":"21798","record_id":"<urn:uuid:f8b8fe93-4e75-486f-ab05-736ea4bef7bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is AD or DC greater?
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50f91579e4b007c4a2ebf056","timestamp":"2014-04-19T22:35:16Z","content_type":null,"content_length":"73957","record_id":"<urn:uuid:e46d6b98-3e0a-4211-884c-192007df384f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Pressure At Surface And Scale Height
Please can someone tell me if my thinking here is right...
I've got a planet with an atmospheric pressure at 6km of 0.5 P[0] and at 8km of 0.4 P[0] (P[0] = pressure at the surface).
I want to work out the scale height of the atmosphere.
Given scale height = λ
and for height above surface = z
P(z) = P(0) e^(-z/λ)
I could rearrange to show the pressure at the surface as:
P(0) = P(z)/e^(-z/λ)
I could then use the relative pressure, assume P(0) = 1 (as it will cancel out shortly) and the height from each of the known quantities, and set them equal to each other like this:
0.4/e^(-8000/λ) = 0.5/e^(-6000/λ)
A little multiplication....
0.4 e^(-6000/λ) = 0.5 e^(-8000/λ)
Take the Log of both sides....
(-6000/λ) log 0.4 = (-8000/λ) log 0.5
But now I'm left with the λ cancelling out if I multiply both sides by λ. I'm sure I've gone wrong here somewhere. Probably something very simple. Can anyone advise? Have I made a simple mistake in
my working or have I gone completely off the reservation and need to start again? I just need to end up with λ = xxx metres.
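For reference, here is one way through the log step (a sketch, not part of the original post; natural logs keep the exponents additive rather than multiplicative):
ln 0.4 - 6000/λ = ln 0.5 - 8000/λ
2000/λ = ln 0.5 - ln 0.4 = ln 1.25 ≈ 0.2231
λ ≈ 2000/0.2231 ≈ 8963 m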
Thank you. | {"url":"http://www.physicsforums.com/showpost.php?p=4240172&postcount=1","timestamp":"2014-04-18T23:21:52Z","content_type":null,"content_length":"9760","record_id":"<urn:uuid:24cd525a-385b-414c-aa02-c1c79e983e7d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 19
- In VizSEC/DMSEC ’04: Proceedings of the 2004 ACM workshop on Visualization and , 2004
"... We describe a framework for managing network attack graph complexity through interactive visualization, which includes hierarchical aggregation of graph elements. Aggregation collapses
non-overlapping subgraphs of the attack graph to single graph vertices, providing compression of attack graph compl ..."
Cited by 39 (4 self)
We describe a framework for managing network attack graph complexity through interactive visualization, which includes hierarchical aggregation of graph elements. Aggregation collapses
non-overlapping subgraphs of the attack graph to single graph vertices, providing compression of attack graph complexity. Our aggregation is recursive (nested), according to a predefined aggregation
hierarchy. This hierarchy establishes rules at each level of aggregation, with the rules being based on either common attribute values of attack graph elements or attack graph connectedness. The
higher levels of the aggregation hierarchy correspond to higher levels of abstraction, providing progressively summarized visual overviews of the attack graph. We describe rich visual representations
that capture relationships among our semantically-relevant attack graph abstractions, and our views
, 2000
"... Recently external memory graph algorithms have received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number
of I/O-efficient graph algorithms have been developed, a number of fundamental problems still remain ..."
Cited by 33 (11 self)
Recently external memory graph algorithms have received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number
of I/O-efficient graph algorithms have been developed, a number of fundamental problems still remain open. In this paper we develop improved algorithms for the problem of computing a minimum spanning
tree of a general graph G = (V, E), as well as new algorithms for the single source shortest paths and the multi-way graph separation problems on planar graphs.
- In Proc. 8th Scandinavian Workshop on Algorithmic Theory, volume 1851 of LNCS , 2000
"... Recently external memory graph algorithms have received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number
of I/O-efficient graph algorithms have been developed, a number of fundamental problems still remain open ..."
Cited by 24 (2 self)
Recently external memory graph algorithms have received considerable attention because massive graphs arise naturally in many applications involving massive data sets. Even though a large number of I
/O-efficient graph algorithms have been developed, a number of fundamental problems still remain open. In this paper we develop an improved algorithm for the problem of computing a minimum spanning
tree of a general graph, as well as new algorithms for the single source shortest paths and the multi-way graph separation problems on planar graphs.
- Journal of Graph Algorithms and Applications
"... Even though a large number of I/O-efficient graph algorithms have been developed, a number of fundamental problems still remain open. For example, no space- and I/O-efficient algorithms are
known for depth-first search or breadth-first search in sparse graphs. In this paper we present two new re ..."
Cited by 24 (15 self)
Even though a large number of I/O-efficient graph algorithms have been developed, a number of fundamental problems still remain open. For example, no space- and I/O-efficient algorithms are known for
depth-first search or breadth-first search in sparse graphs. In this paper we present two new results on I/O-efficient depth-first search in an important class of sparse graphs, namely undirected
embedded planar graphs. We develop a new efficient depth-first search algorithm and show how planar depth-first search in general can be reduced to planar breadth-first search. As part of the first
result we develop the first I/Oefficient algorithm for finding a simple cycle separator of a biconnected planar graph. Together with other recent reducibility results, the second result provides
further evidence that external memory breadth-first search is among the hardest problems on planar graphs. 1
- In Proc. 8th European Symposium on Algorithms (ESA , 2000
"... Abstract. We introduce the tree cross-product problem, which abstracts a data structure common to applications in graph visualization, string matching, and software analysis. We design solutions
with a variety of tradeoffs, yielding improvements and new results for these applications. 1 ..."
Cited by 19 (0 self)
Abstract. We introduce the tree cross-product problem, which abstracts a data structure common to applications in graph visualization, string matching, and software analysis. We design solutions with
a variety of tradeoffs, yielding improvements and new results for these applications. 1
, 2008
"... Keyword search on graph structured data has attracted a lot of attention in recent years. Graphs are a natural “lowest common denominator” representation which can combine relational, XML and
HTML data. Responses to keyword queries are usually modeled as trees that connect nodes matching the keyword ..."
Cited by 19 (1 self)
Keyword search on graph structured data has attracted a lot of attention in recent years. Graphs are a natural “lowest common denominator” representation which can combine relational, XML and HTML
data. Responses to keyword queries are usually modeled as trees that connect nodes matching the keywords. In this paper we address the problem of keyword search on graphs that may be significantly
larger than memory. We propose a graph representation technique that combines a condensed version of the graph (the “supernode graph”) which is always memory resident, along with whatever parts of
the detailed graph are in a cache, to form a multi-granular graph representation. We propose two alternative approaches which extend existing search algorithms to exploit multigranular graphs; both
approaches attempt to minimize IO by directing search towards areas of the graph that are likely to give good results. We compare our algorithms with a virtual memory approach on several real data
sets. Our experimental results show significant benefits in terms of reduction in IO due to our algorithms.
- In 12th Symposium on Graph Drawing (GD , 2004
"... Abstract. Compound-fisheye views are introduced as a method for the display and interaction with large graphs. The method relies on a hierarchical clustering of the graph, and a generalization
of the traditional fisheye view, together with a treemap representation of the cluster tree. 1 ..."
Cited by 10 (1 self)
Abstract. Compound-fisheye views are introduced as a method for the display and interaction with large graphs. The method relies on a hierarchical clustering of the graph, and a generalization of the
traditional fisheye view, together with a treemap representation of the cluster tree. 1
- American Chemical Society , 2002
"... We introduce the base architecture of a software library which combines graphs, hierarchies, and views and describes the interactions between them. Each graph may have arbitrarily many
hierarchies and each hierarchy may have arbitrarily many views. Both the hierarchies and the views can be added ..."
Cited by 5 (3 self)
We introduce the base architecture of a software library which combines graphs, hierarchies, and views and describes the interactions between them. Each graph may have arbitrarily many hierarchies
and each hierarchy may have arbitrarily many views. Both the hierarchies and the views can be added and removed dynamically from the corresponding graph and hierarchy, respectively. The software
library shall serve as a platform for algorithms and data structures on hierarchically structured graphs. Such graphs become increasingly important and occur in special applications, e. g., call
graphs in software engineering or biochemical pathways, with a particular need to manipulate and draw graphs.
"... Architecture recovery is an activity applied to a system whose initial architecture has eroded. When the system is large, the user must use dedicated tools to support the recovery process. We
present Softwarenaut – a tool which supports architecture recovery through interactive exploration and visua ..."
Cited by 5 (4 self)
Architecture recovery is an activity applied to a system whose initial architecture has eroded. When the system is large, the user must use dedicated tools to support the recovery process. We present
Softwarenaut – a tool which supports architecture recovery through interactive exploration and visualization. Classical architecture recovery features, such as filtering and details on demand, are
enhanced with evolutionary capabilities when multi-version information about a subject system is available. The tool allows sharing and discovering the results of previous analysis sessions through a
global repository of architectural views indexed by systems. We present the features of the tool together with the architecture recovery process that it supports using as a case-study ArgoUML, a
well-known open source Java system.
, 2001
"... We present a new algorithm to compute a subset S of vertices of a planar graph G whose removal partitions G into O(N/h) subgraphs of size O(h) and with boundary size O( p h) each. The size of S
is O(N= p h). Computing S takes O(sort(N)) I/Os and linear space, provided that M 56hlog² B. Together with ..."
Cited by 3 (1 self)
We present a new algorithm to compute a subset S of vertices of a planar graph G whose removal partitions G into O(N/h) subgraphs of size O(h) and with boundary size O(√h) each. The size of S
is O(N/√h). Computing S takes O(sort(N)) I/Os and linear space, provided that M ≥ 56h log² B. Together with recent reducibility results, this leads to O(sort(N)) I/O algorithms for breadth-first search
(BFS), depth-first search (DFS), and single source shortest paths (SSSP) on undirected embedded planar graphs. Our separator algorithm does not need a BFS tree or an embedding of G to be given as
part of the input. Instead we argue that "local embeddings" of subgraphs of G are enough. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=91579","timestamp":"2014-04-17T16:23:19Z","content_type":null,"content_length":"37262","record_id":"<urn:uuid:0f58f0c4-7f97-4250-8be2-bc918d2eeff8>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
RSS SPSS Short Course Module 9 Linear Mixed Effects Modeling
Linear Mixed Effects Modeling
1. Mixed Effects Models
Mixed effects models refer to a variety of models which have as a key feature both fixed and random effects.
The distinction between fixed and random effects is a murky one. As pointed out by Gelman (2005), there are several, often conflicting, definitions of fixed effects as well as definitions of random
effects. Gelman offers a fairly intuitive solution in the form of renaming fixed effects and random effects and providing his own clear definitions of each. “We define effects (or coefficients) in a
multilevel model as constant if they are identical for all groups in a population and varying if they are allowed to differ from group to group” (Gelman, p. 21). Other ways of thinking about fixed
and random effects, which may be useful but are not always consistent with one another or those given by Gelman above, are discussed in the next paragraph.
Fixed effects are ones in which the possible values of the variable are fixed. Random effects refer to variables in which the set of potential outcomes can change. Stated in terms of populations,
fixed effects can be thought of as effects for which the population elements are fixed. Cases or individuals do not move into or out of the population. Random effects can be thought of as effects for
which the population elements are changing or can change (i.e. random variable). Cases or individuals can and do move into and out of the population. Another way of thinking about the distinction
between fixed and random effects is at the observation level. Fixed effects assume scores or observations are independent while random effects assume some type of relationship exists between some
scores or observations. For instance, it can be said that gender is a fixed effect variable because we know all the values of that variable (male & female) and those values are independent of one
another (mutually exclusive); and they (typically) do not change. A variable such as high school class has random effects because we can only sample some of the classes which exist; not to mention,
students move into and out of those classes each year.
There are many types of random effects, such as repeated measures of the same individuals; where the scores at each time of measure constitute samples from the same participants among a virtually
infinite (and possibly random) number of times of measure from those participants. Another example of a random effect can be seen in nested designs, where for example; achievement scores of students
are nested within classes and those classes are nested within schools. That would be an example of a hierarchical design structure with a random effect for scores nested within classes and a second
random effect for classes nested within schools. The nested data structure assumes a relationship among groups such that members of a class are thought to be similar to others in their class in such
a way as to distinguish them from members of other classes and members of a school are thought to be similar to others in their school in such a way as to distinguish them from members of other
schools. The example used below deals with a similar design which focuses on multiple fixed effects and a single nested random effect.
2. Linear Mixed Effects Models
Linear mixed effects models simply model the fixed and random effects as having a linear form. Similar to the General Linear Model, an outcome variable is contributed to by additive fixed and random
effects (as well as an error term). Using the familiar notation, the linear mixed effect model takes the form:
y[ij] = β[1]x[1ij] + β[2]x[2ij] + … + β[n]x[nij] + b[i1]z[1ij] + b[i2]z[2ij] + … + b[in]z[nij] + ε[ij]
where y[ij] is the value of the outcome variable for a particular ij case, β[1] through β[n] are the fixed effect coefficients (like regression coefficients), x[1ij] through x[nij] are the fixed
effect variables (predictors) for observation j in group i (usually the first is reserved for the intercept/constant; x[1ij] = 1), b[i1] through b[in] are the random effect coefficients which are
assumed to be multivariate normally distributed, z[1ij] through z[nij] are the random effect variables (predictors), and ε[ij] is the error for case j in group i where each group’s error is assumed
to be multivariate normally distributed.
3. Example Data
The example used for this tutorial is fictional data where the interval scaled outcome variable Extroversion (extro) is predicted by fixed effects for the interval scaled predictor Openness to new
experiences (open), the interval scaled predictor Agreeableness (agree), the interval scaled predictor Social engagement (social), and the nominal scaled predictor Class (classRC); as well as the
random (nested) effect of Class (classRC) within School (schoolRC) as well as the random effect of School (schoolRC). The data contains 1200 cases evenly distributed among 24 nested groups (4 classes
within 6 schools). The data set is available here.
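Spelled out in the notation of the previous section (the subscripting below is ours, added for illustration), the model being fit is:

extro[ij] = β[1] + β[2]open[ij] + β[3]agree[ij] + β[4]social[ij] + β[5]class1[ij] + β[6]class2[ij] + β[7]class3[ij] + b[i1] + b[i2] + ε[ij]

where class1 through class3 are indicator (dummy) variables for classRC (the fourth class serves as the reference category), b[i1] is the random intercept for school i, b[i2] is the random intercept for class nested within school, and ε[ij] is the residual. The variances of the two b terms and of ε are exactly what the Estimates of Covariance Parameters table reports later in the output.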
4. Running the Analysis
Begin by clicking on Analyze, Mixed Models, Linear...
The initial dialogue box is self-explanatory but will not be used in this example, so click the Continue button.
Next, we have the main Linear Mixed Models dialogue box. Here we specify the variables we want included in the model. Using the arrows; move extro to the Dependent Variable box, move classRC and
schoolRC to the Factor(s) box, and move open, agree, and social to the Covariat(s) box. Then click on the Fixed... button to specify the fixed effects.
The fixed effects in a LINEAR mixed effects model are essentially the same as a traditional ordinary least squares linear regression. To specify the fixed effects, use the Add button to move open,
agree, social, and classRC into the Model box. Notice we are not specifying any interaction terms for this model. Then click the Continue button.
Next, click on the Random... button to specify the random effects.
The first thing we need to do is click on the Build nested terms circle (marked with the top, centered red ellipse). Then, highlight / select the classRC factor and use the down arrow button (marked
with the lower, left red ellipse) to move classRC into the Build Term box. Then click the (Within) button (marked with the lower, middle ellipse). Next, highlight / select the schoolRC factor and use
the down arrow button again to move it inside the parentheses created by the (Within) button. Next, click the Add button (marked with a red ellipse inside a green ellipse) to move our nested term
into the Model box. Next, click on the Build terms circle (marked with the green ellipse in the upper left). Then, highlight / select schoolRC factor and use the Add button (marked with the green
ellipse around the red ellipse) to move schoolRC to the Model box. Next, click the Continue button at the bottom of the dialogue box.
Next, click on the Estimation... button.
Next, change the Maximum iterations from the default (100) to 150 (marked with the red soft rectangle). This step is not technically necessary, but it ensures the estimated values match those
produced in R using the lme4 package. Then, click the Continue button.
Next, click on the Statistics... button. While some of the options are not necessary (Case Processing Summary), I generally click all of them.
Next, click on the EM Means... button (Estimated Marginal Means). When the (OVERALL) factor is moved to the Display Means for box, the grand mean will be produced. The classRC factor is present (and
moved to the Display Means for box) because it is the only factor (categorical variable) included in the model as a fixed effect. The other fixed effects are not categorical and thus do not appear
here. Next, click the Continue button.
Next, click on the Save... button. It is generally a good idea to save the Predicted values. The Fixed Predicted Values will be predicted values based solely on the Fixed Effects part of the model;
while the lower Predicted Values & Residuals Predicted values will be the whole model's predicted values. Next, click the Continue button.
Then click the Paste button. Your syntax should match what is below. The reason I recommend pasting the syntax is that it takes quite a few clicks to create one of these types of models and it is
often the case that multiple models are run during a session and changing variables or options is simply easier in the syntax than pointing and clicking back through all the above steps.
Next, highlight / select all the text in the syntax and then click the green 'run' arrow (marked with the red ellipse).
Your output should be the same as what is below.
5. Interpreting the Output.
The Case Processing Summary (above) simply shows that the cases are balanced among the categories of the categorical variables and no cases were excluded.
The next, rather large table contains all the descriptive statistics (only the very top of the table is shown here; below).
The Model Dimension table (below) simply shows the model in terms of which variables (and their number of levels) are fixed and / or random effects and the number of parameters being estimated.
The next table displays fit indices. For each index, the lower the number, the better the model fits the data. Generally I use and recommend the Bayesian Information Criterion (BIC).
The next table contains the results of the Fixed Effects tests; here we see the intercept and the classRC variables appear to be the main contributors.
The next 5 tables do not offer much information and simply show each parameter function (only the first and part of the second tables of the five are shown below).
The next table "Estimates of Fixed Effects" (below) is very important and shows the parameter estimates for the Fixed Effects specified in the model. It should be clear, this table and its
interpretation are exactly like one would expect from a traditional ordinary least squares linear regression. One thing to note is the way SPSS chooses the reference category for categorical
variables. You may have noticed we have been using the classRC and schoolRC variables instead of the original class and school variables in the data set. The RC variables contain the same information
as the original variables; they have simply been ReCoded, or Reverse Coded, so that the output here will match the output produced using the lme4 package in the R programming language. It is important
to know that SPSS (and SAS) automatically choose the category with the highest numerical value (or the last category alphabetically) as the reference category for categorical variables. All packages I
have used in the R programming language choose the reference category in the opposite, more intuitive way: in the lme4 package (and others I've used), the software automatically picks the lowest
numerical value (or the earliest alphabetical letter) as the reference category. This has drastic implications for the intercept estimate and, more troubling, for the predicted values produced by a
model. For example, if this same model is specified with the original variables (not reverse coded), then the Fixed Effects intercept term is 63.049612; you can imagine how different the predicted
values would be in that model compared to this model, where the intercept is 57.383879. Recall from multiple regression that the intercept is interpreted as the mean of the outcome (extro) when all the
predictors have a value of zero. The predictor estimates (coefficients or slopes) are interpreted the same way as the coefficients from a traditional regression. For instance, a one unit increase in
the predictor Openness to new experiences (open) corresponds to a 0.006130 increase in the outcome Extroversion (extro). Likewise, a one unit increase in the predictor Agreeableness (agree) corresponds
to a 0.007736 decrease in the outcome Extroversion (extro). Furthermore, the categorical predictor classRC = 3 has a coefficient of 2.054798, which means the mean Extroversion score of the third group
of classRC (3) is 2.054798 higher than the mean Extroversion score of the last group of classRC (4); classRC (4) was automatically coded as the reference category.
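The same reference-category switch can be reproduced without hand-recoding the data. A small illustration in Python, using patsy's Treatment coding on a made-up four-level factor (not the tutorial's data):

```python
import pandas as pd
from patsy import dmatrix

# Hypothetical four-level factor standing in for class / classRC.
df = pd.DataFrame({"cls": ["c1", "c2", "c3", "c4"]})

# Default (R/lme4-style) treatment coding: first level, c1, is the reference.
print(dmatrix("C(cls)", df))

# SPSS/SAS-style choice: force the last level, c4, to be the reference --
# this is what the hand-made reverse coding (classRC) accomplishes.
print(dmatrix("C(cls, Treatment(reference='c4'))", df))
```

In R itself, relevel(factor, ref = ...) serves the same purpose.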
The next 2 tables simply show the correlation matrix and covariance matrix for the fixed effects estimates. We can see that multicollinearity is not an issue among the predictors because their
correlations (and covariances) are quite low (except, of course, for the categories of the classRC variable, which, as expected, are related).
Next, we have the Estimates of Covariance Parameters table (below), which contains the parameter estimates for the Random Effects. These are variance estimates (with standard errors, Wald Z test
statistics, significance values, and confidence intervals for the variance estimates). Recall the ubiquitous ANOVA summary table, where we generally have a total variance estimate (sums of squares) at
the bottom, just above it a residual or within-groups variance estimate (sums of squares), and above that each treatment or between-groups variance estimate (sums of squares). This table is very much
like that, but the total is not displayed and the residual variance estimate is on top. So, we can quickly calculate the total variance estimate: 95.171929 + 2.883600 + 0.968368 = 99.0239. Then we can
create an R² type of effect size to gauge the importance of each random effect by dividing the effect's variance estimate by the total variance estimate, arriving at a proportion of variance explained
or accounted for by each random effect. This is analogous to an Eta-squared (η²) in standard ANOVA or an R² in regression; in the linear mixed effects situation it is sometimes referred to as an
Intraclass Correlation Coefficient (ICC; Bartko, 1976; Bliese, 2009). For example, we find that the nested effect of classRC within schoolRC is 2.883600 / 99.0239 = 0.02912024; simply stated, that
random nested effect accounts for only 2.9% of the variance of the random effects. However, the random effect for schoolRC alone accounts for 95.171929 / 99.0239 = 0.9611006, or 96%, of the variance of
the random effects. If none of the random effects account for a meaningful amount of variance (i.e., if the residual variance is larger than the random effect variance estimates), then the random
effects should be eliminated from the model and a standard General Linear Model (or Generalized Linear Model) should be fitted (i.e., a model with only the fixed effects). Notice that SPSS does not
calculate the standard errors correctly; therefore, the confidence interval estimates and the results of the Wald Z test are NOT valid. The Wald Z test simply divides the estimate by its standard error
to arrive at a Z-score, which is tested for significance against the standard normal distribution. However, the standard errors do not match the standard errors produced when using the lme4 package in
the R programming language. The good news is that the variance estimates are correct (they do match), so the proportion-of-variance estimates can be correctly computed and used as effect size measures.
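These proportion-of-variance (ICC) calculations are simple enough to script; a minimal Python sketch using the variance components quoted above:

```python
# Variance components from the Estimates of Covariance Parameters table.
residual = 0.968368    # residual variance
school   = 95.171929   # random intercept variance for schoolRC
nested   = 2.883600    # variance for classRC nested within schoolRC

total = residual + school + nested
print(f"total variance       = {total:.4f}")            # 99.0239
print(f"ICC, schoolRC        = {school / total:.7f}")   # ~0.9611006
print(f"ICC, class in school = {nested / total:.7f}")   # ~0.0291202
```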
The next two tables simply show the correlation and covariances for the random effect parameter estimates.
The next three tables in the output are the Random Effects Covariance Structure matrices. They are omitted here because they are redundant: each table simply lists the parameter estimate for each
random effect.
The last part of the output contains tables with the Estimated Marginal means (EM means) for the Grand Mean and ClassRC.
The Grand Mean contrast coefficients table and actual grand mean table (the overall mean of the outcome variable: extro).
The ClassRC variable's contrast coefficients table and mean extroversion (extro) for each group table.
As with most of the tutorials / pages within this site, this page should not be considered an exhaustive review of the topic covered and it should not be considered a substitute for a good textbook.
References / Resources
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, AC-19, 716-723.
Bartko, J. J. (1976). On various intraclass correlation reliability coefficients. Psychological Bulletin, 83, 762-765.
Bates, D., & Maechler, M. (2010). Package 'lme4'. Reference manual for the package.
Bates, D. (2010). Linear mixed model implementation in lme4. Package lme4 vignette.
Bates, D. (2010). Computational methods for mixed models. Package lme4 vignette.
Bates, D. (2010). Penalized least squares versus generalized least squares representations of linear mixed models. Package lme4 vignette.
Bliese, P. (2009). Multilevel modeling in R: A brief introduction to R, the multilevel package and the nlme package.
Draper, D. (1995). Inference and hierarchical modeling in the social sciences. Journal of Educational and Behavioral Statistics, 20(2), 115-147.
Fox, J. (2002). Linear mixed models: An appendix to "An R and S-PLUS companion to applied regression".
Gelman, A. (2005). Analysis of variance -- why it is more important than ever. The Annals of Statistics, 33(1), 1-53.
Hofmann, D. A., Griffin, M. A., & Gavin, M. B. (2000). The application of hierarchical linear modeling to organizational research. In K. J. Klein (Ed.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 467-511). San Francisco, CA: Jossey-Bass.
Raudenbush, S. W. (1995). Reexamining, reaffirming, and improving application of hierarchical models. Journal of Educational and Behavioral Statistics, 20(2), 210-220.
Raudenbush, S. W. (1993). Hierarchical linear models and experimental design. In L. Edwards (Ed.), Applied analysis of variance in behavioral science (pp. 459-496). New York: Marcel Dekker.
Rogosa, D., & Saner, H. (1995). Longitudinal data analysis examples with random coefficient models. Journal of Educational and Behavioral Statistics, 20(2), 149-170.
Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6, 461-464. | {"url":"https://www.unt.edu/rss/class/Jon/SPSS_SC/Module9/M9_LMM/SPSS_M9_LMM.htm","timestamp":"2014-04-17T03:55:47Z","content_type":null,"content_length":"37157","record_id":"<urn:uuid:9d127627-ba42-42c5-9b9b-1437494a99d7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Engineering Mechanics: Dynamics, 13th Edition
by Russell C. Hibbeler
English | 2012 | ISBN: 0132911272 | 768 pages | PDF | 105.3 MB
Engineering Mechanics: Statics, 13th Edition
by Russell C. Hibbeler
English | 2012 | ISBN: 0132915545 | 672 pages | PDF | 73 MB
In his revision of Engineering Mechanics, R.C. Hibbeler empowers students to succeed in the whole learning experience. Hibbeler achieves this by calling on his everyday classroom experience and
his knowledge of how students learn inside and outside of lecture. This text is ideal for civil and mechanical engineering professionals.
| {"url":"http://www.dltobez.biz/ca0/Engineering+Mechanics++Mechanics+13th+Edition","timestamp":"2014-04-21T12:08:54Z","content_type":null,"content_length":"47947","record_id":"<urn:uuid:76789ba7-c3b9-4a67-af70-ac9afe67c855>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
double slit experiment?
The wavefunction that describes the electron is calculated based on the geometry of the experiment and provides a statistical prediction of which screen locations are more probable than others. The
wavefunction is a vector field, and so when two of them are superimposed, the vectors at each point are summed. Vector addition is what makes interference patterns emerge when you would expect to see
two blobs in the double slit experiment.
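To make the superposition point concrete, here is a minimal numerical sketch (Python; the wavelength, slit spacing, and screen distance are illustrative values, not from any particular experiment):

```python
import numpy as np

lam, d, L = 50e-12, 1.0e-6, 1.0        # wavelength, slit spacing, screen distance (m)
x = np.linspace(-2e-3, 2e-3, 2001)     # detector positions on the screen

r1 = np.hypot(L, x - d / 2)            # path length from slit 1
r2 = np.hypot(L, x + d / 2)            # path length from slit 2
k = 2 * np.pi / lam

psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # vector (complex) sum of amplitudes
prob = np.abs(psi) ** 2                # detection probability pattern

# prob oscillates between ~0 and ~4: the fringes. Summing intensities
# instead (|psi1|^2 + |psi2|^2) would give the flat two-blob result.
print(prob.min(), prob.max())
```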
"Particles" interact via forces. They do not interfere with eachother or themselves. The vector field which predicts probable outcomes of an experiment does interfere, which is simply stating that
sometimes the sum of two vectors has a smaller absolute value than the absolute value of either of the vectors.
This doesn't answer your question, its just me being fussy because I'm annoyed that certain popular statements about QM imply incorrect, magical things because of their inaccurate wording.
The wavefunction is formulated with no assumptions about the number of "particles" that are present; on the contrary, it would be much more difficult if particle-particle interactions were
considered. It can be normalized to any number you want, so it shouldn't be surprising when it works as well for low intensities as for high...
The one assumption that all paradoxical and unintuitive scenarios share is the assertion that there is any such thing as "particles" in the first place. So I would say that the "particle" splitting
and taking both paths is an interpretation of the results, as well as the existence of the particle. I know this is not what you wanted to hear. Sorry.
You mention Feynman, and I think he may be the only person who does not characterize this as having the electron split and interfere with itself.
Instead, he prefers the approach where you combine all possible timelines and let them interact. So it is more like the electron is free to go back and forth freely in time before deciding on its
ultimate destination, but all of the forward paths interfere with all of the 'previous' forward paths in this very long 'lifetime' of time-reversals. So when it comes forward again, it needs to avoid
the other paths that it 'tried out' and chose not to take, as if there were another electron that *did* choose to take them.
To make the point in an extreme way, he once proposed that there is only one electron in all the universe, but it has gone back in time often enough to be all of the existing electrons we see now. It
captures a way of looking at quantum mechanics that is less bizarre than a probability function with a collapsing wavefront.
I love this idea. The arrow of time arises from the tendency of thermal systems to maximize disorder, purely because all states are equally likely and the number of maximally disordered states is
astronomically large compared to all others; so there is no reason for it to apply to a single electron.
But since the wavefunction does not consider particle-particle interactions, and would be different if it did so, how can a model in which the particle reverses direction in time and affects itself,
through the electromagnetic force I assume, produce the same predictions? | {"url":"http://www.physicsforums.com/showthread.php?p=4209560","timestamp":"2014-04-19T12:35:52Z","content_type":null,"content_length":"47991","record_id":"<urn:uuid:66b8a4df-f86a-4ecb-b744-25af734925ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
George Pau
Research Interests
Until February 2011, George Pau was a Luis W. Alvarez Postdoctoral Fellow in CCSE, where he worked on developing adaptive schemes for reactive geochemical flow. He is a member of the Society for
Industrial and Applied Mathematics.
"High resolution simulation and characterization of density-driven flow in CO2 storage in saline aquifers'', Advances in Water Resources, 33(4):443-455, 2010. [pdf]
"Numerical studies of density-driven flow in CO2 storage in saline aquifers'', Proceedings of TOUGH Symposium, September 14 -16 2009, Berkeley, California, USA. [pdf]
"A Parallel Second-Order Adaptive Mesh Algorithm for Reactive Flow in Geochemical Systems", Proceedings of TOUGH Symposium, September 14 -16 2009, Berkeley, California, USA. [pdf]
"A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media", Phil. Trans. R. Soc. A 367, 4633-4654, 2009. LBNL Report LBNL-176E. [pdf]
"A General Multipurpose Interpolation Procedure: The Magic Points", Communications on Pure and Applied Analysis 8(1), 383--404, 2009.
"Reduced Basis Method for Nanodevices Simulation", Phys. Rev. B 78, 155425 (2008). LBNL Report LBNL-314E. [pdf]
" Reduced Basis Method for Band Structure Calculations", Phys. Rev. E 76, 046704 (2007). [pdf]
" Feasibility and Competitiveness of a Reduced Basis Approach for Rapid Electronic Structure Calculations in Quantum Chemistry", In High-Dimensional Partial Differential Equations in Science and
Engineering, CRM Proceedings Volume 41, AMS, 2007. | {"url":"https://ccse.lbl.gov/people/pau/index.html","timestamp":"2014-04-19T19:36:28Z","content_type":null,"content_length":"5472","record_id":"<urn:uuid:681a1186-a1a5-4989-a64e-c3f597c912df>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Polish astronomer Nicolaus Copernicus devised a method for determining the sizes of the orbits of planets farther from the sun than Earth. His method involved noting the number of days between
the times that a planet was in the positions labeled A and B in the diagram. Using this time and the number of days in each planet’s year, he calculated c and d.
| {"url":"http://openstudy.com/updates/520bcf72e4b07b0d8c1d8c05","timestamp":"2014-04-21T10:18:22Z","content_type":null,"content_length":"395332","record_id":"<urn:uuid:e956c5fa-9fb2-4186-b1bd-9f155afa185b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
A model for the zero shear viscosity.
Zero shear viscosity, $\eta_0$, being a fundamental property of polymeric materials, has been the subject of intensive studies aimed at elucidating its relationship with the polymeric structure (1-5).
From experimental and theoretical studies the presence of a double regime of viscosity was quickly ascertained, depending on the molecular weight. At low molecular weights the relationship of viscosity
with molecular weight was found to be substantially linear (1, 6), while at molecular weights higher than a critical value, $M_c$, the relationship was found to follow the 3.4 power of molecular
weight. The 3.4 power law equation was first proposed in 1951 by Fox and Flory (3) on the basis of measurements on narrow distribution fractions of polystyrene and polyisobutylene. It has since been
shown to apply to both melts and concentrated solutions for many species of polymers (1, 4).
The presence of a sharp transition between the two regimes at $M = M_c$ has been the subject of particular attention since it turned out to be a characteristic constant of the species in the melt
state; $M_c$ was found equal to about 2-3 $M_e$, where $M_e$ is the average entanglement molecular weight (1). The sharpness of the transition, however, remained quite puzzling.
The nature of the entanglements has often been discussed and criticized in the literature; perhaps a sounder representation is to consider them as being time fluctuating and rather de-localized (1).
More recently a different description of the macromolecular structure has been proposed, the so-called reptation model, introduced by De Gennes (7) and developed by Doi and Edwards (8, 9), where an
entanglement spacing is not specified but an equivalent concept, the tube diameter, has been introduced, which allows an alternative description of the polymeric structure. The entanglements, however,
are considered a key factor controlling not only the melt rheology but also the solid mechanical (10-12) and adhesive properties of polymers (13). For instance, according to Kramer, their relative
values determine crazing versus shear yielding behavior (11, 12).
The increasing evidence of a correlation of material properties to entanglement junctions has stimulated new detailed studies on the correlation between entanglements and molecular structures,
resulting, aside from basic considerations, in a deep reexamination of experimental data and the ensuing extensive compilations of $M_e$ for a broad variety of polymeric species (14, 15).
In this paper we have reexamined the definition of the average number of entanglements per molecule in order to provide a quantitative ground for the zero shear viscosity. It will be shown that a
suitable expression for the average number of entanglements per molecule may allow a simple new model for viscosity, which, by the way, quite naturally predicts the features of the sharp transition
observed for the relationship of viscosity versus molecular weight.
For the moment, we shall restrict our attention to monodisperse polymers.
Let us consider the average number of entanglements per molecule, $n_e$. It can be easily seen from geometric considerations, Fig. 1, that $n_e$ increases according to the equation
$$n_e = \frac{M}{M_e} - 1 \qquad (1)$$
which, in the limit of very high molecular weights, tends to $M/M_e$. This function predicts that the number of entanglements per molecule is 1 when $M = 2M_e$; at this level, indeed, it is considered
that the entanglement-dominated behavior begins for the viscosity, characterized by the well-known power 3.4.
Equation 1, while adequate to represent $n_e$ at high molecular weights, is not fully satisfactory for molecular weights comparable to $M_e$. Indeed, according to this formula, we should expect
$n_e = 0$ for $M = M_e$ and even negative values for still lower molecular weights, a nonsensical expectation from the physical point of view.
A possible way out for this may be found along these lines:
a) Macromolecules may obstruct themselves, when flowing, even at molecular weights equal to or lower than $M_e$. Since the nature of these "effective contacts" or "flow restrictions" is very similar to
that defined by the entanglements, we may count as entanglements all the flow restrictions, even those experienced by short molecules;
b) The entanglements do not show up abruptly at some well-defined molecular weight but appear gradually as a function of molecular weight. Accordingly, we require that the equation defining $n_e$ as a
function of $M$ must be continuous, starting from zero for $M = 0$ and converging to formula (1) at high $M$.
A tentative empirical equation satisfying such requirements is the following one,
$$n_e = \frac{M}{M_e}\, e^{-M_e/M} \qquad (2)$$
This equation may possibly find a theoretical ground; however, without trying to further pursue this goal, we note that it certainly satisfies the above-mentioned requirements, as can be easily seen.
Indeed it tends to zero when $M$ tends to zero and to Eq 1 at high $M$, as can be shown by a series expansion of the exponential factor
$$n_e = \frac{M}{M_e}\exp(-M_e/M) \to \frac{M}{M_e}\left(1 - \frac{M_e}{M}\right) = \frac{M}{M_e} - 1. \qquad (3)$$
The use of Eq 2 to estimate the number of entanglements per molecule, here introduced on purely mathematical grounds, entails some significant differences and novelties over the usually accepted
estimations of entanglement counts, such as Eq 1.
First, it describes a continuous development of entanglements over the whole molecular weight range. Second, it satisfies our requirement of avoiding negative values for $M < M_e$, which cannot be
avoided when using Eq 1. Third, it predicts a number of entanglements differing from zero for $M < M_e$; at $M = M_e$, Eq 2 predicts $1/e = 0.37$ entanglements per molecule. For $M = 2M_e$, Eq 2 gives
an estimation of $n_e$ of 1.21, a value 21% higher than the classical value of 1 obtained from Eq 1. Finally, as mentioned above, $n_e$ estimated by Eq 2 converges to the values estimated by Eq 1 for
molecular weights higher than $nM_e$, when $n$ is higher than about 4.
In conclusion, we can summarize the above considerations by stating that Eq 2, which does not introduce new molecular parameters other than $M_e$, looks formally adequate to estimate the number of
entanglements per molecule over the full range of molecular weights. From the physical point of view, it may be useful to stress again the fact that the above analysis simply means that short
macromolecules experience some restriction to their movements even if they do not form a fluctuating network, as implied in the concept of entanglements.
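As a quick numerical check of the two counting rules, the following sketch (Python; $M_e = 1850$ is the polybutadiene estimate used later in this paper) tabulates Eq 1 against Eq 2 and reproduces the values 0.37 at $M = M_e$ and 1.21 at $M = 2M_e$ quoted above:

```python
import numpy as np

Me = 1850.0                            # entanglement molecular weight (polybutadiene)
M = np.array([0.5, 1.0, 2.0, 4.0, 10.0]) * Me

ne_eq1 = M / Me - 1.0                  # Eq 1: negative below Me
ne_eq2 = (M / Me) * np.exp(-Me / M)    # Eq 2: continuous and positive

for m, n1, n2 in zip(M, ne_eq1, ne_eq2):
    print(f"M = {m:7.0f}   Eq 1: {n1:6.3f}   Eq 2: {n2:6.3f}")
```

The two estimates converge above roughly $4M_e$, as stated in the text.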
Having a formula for estimating the number of entanglements per molecule, we are now ready to reconsider the dependence of the zero shear viscosity, $\eta_0$, on molecular weight for monodisperse
polymers. Let us suppose that the viscosity can be written as the sum of two terms: a) the first one, describing the friction between small molecules, depending on the monomeric friction factor,
$\zeta_0$, and on the total number of chain monomers, $M/m_0$; b) the second one, dealing with the difficulties found by flowing entangled macromolecules, depending on the entanglement friction factor,
$\zeta_e$, and on the number of entanglements per molecule, $n_e$; this last term is taken with the 3.4 power to take into account all the available experimental information, which points to such a
power in the entanglements region. Then we write:
$$\eta_0 = \zeta_0\,\frac{M}{m_0} + \zeta_e^{3.4}\, n_e^{3.4} \qquad (4)$$
where $\eta_0$ is the zero shear viscosity, $M/m_0$ is the number of monomeric units in a chain, i.e., the polymerization degree, and $n_e$ is the number of entanglements per molecule.
The friction coefficients $\zeta_0$ and $\zeta_e^{3.4}$, having the dimension of viscosity, contain, respectively, all the relevant information about the restraints experienced by small molecules
moving over the others and the additional restraints to the movement caused by the entanglements. Some comments on them will be made later.
A virtue of Eq 4, Fig. 2, is that it is able to describe by a single continuous function the behavior of the viscosity of low and high polymeric systems over a full range of molecular weights. At low
$M$, i.e., $M < M_e$, it converges toward the classical linear relationship known for low molecular weight liquids, since the first term is prevailing,
$$\log \eta_0 = \log\left(\zeta_0\, M/m_0\right) \qquad (M < M_e) \qquad (5)$$
and at high $M$, i.e., $M > 4M_e$, toward the law
$$\log \eta_0 = 3.4 \log \zeta_e + 3.4 \log\left(\frac{M}{M_e}\exp(-M_e/M)\right) \qquad (6)$$
which, for $M > 10M_e$, further simplifies, reducing to the classical 3.4 power law of high polymeric materials,
$$\log \eta_0 \cong 3.4 \log \zeta_e + 3.4 \log \frac{M}{M_e} \qquad (7)$$
In the molecular weight region $M_e < M < 4M_e$ the two contributions are comparable, so they both contribute significantly to the viscosity.
Accordingly, we may speak of three flow regimes: a) the monomeric regime, up to $M < M_e$, where Eq 5 applies; b) the transition regime, for $M_e < M < 4M_e$, where the monomeric flow behavior coexists
with an incipient entanglement-like behavior; and finally c) the high polymer or entanglement regime, $M > 4M_e$, where Eq 6 applies. Within this region, we can use the approximate law expressed by
Eq 7 only when the molecular weight is sufficiently high, i.e., higher than $10M_e$.
A second characteristic of Eq 4, which clearly shows up in Fig. 3, is that the viscosity shows a rather abrupt slope variation in a narrow molecular weight range, centered around $2M_e$. This behavior,
well known in the literature (1), comes out naturally from Eq 4, indicating, as expected, that the entanglement regime becomes really effective on the viscosity when, on average, one entanglement per
molecule is present. On this ground, $M_c$ can be suitably defined on the basis of the number of entanglements per molecule.
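A minimal numerical sketch of Eq 4, using the polybutadiene constants estimated later in this paper ($\zeta_0 = 0.031$ Pa·s, $\zeta_e = 0.445$ (Pa·s)$^{1/3.4}$, $M_e = 1850$); the monomer molecular weight $m_0 = 54$ is not stated explicitly in the text, but it is consistent with the $\zeta_0$ values back-calculated from Table 2:

```python
import numpy as np

m0, Me = 54.0, 1850.0          # monomer and entanglement molecular weights
zeta0, zeta_e = 0.031, 0.445   # Pa.s and (Pa.s)**(1/3.4); polybutadiene, 25 C

def eta0(M):
    ne = (M / Me) * np.exp(-Me / M)              # Eq 2
    return zeta0 * M / m0 + (zeta_e * ne)**3.4   # Eq 4

for M in [1_000, 2_000, 3_700, 10_000, 50_000]:
    print(f"M = {M:6d}   eta0 = {eta0(M):12.4f} Pa.s")
```

Plotting $\log \eta_0$ against $\log M$ from this function shows the slope changing from about 1 to about 3.4 in a narrow band around $2M_e$, as described above.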
The friction coefficients $\zeta_0$ and $\zeta_e^{3.4}$, which are characteristic of a given polymeric species and depend on temperature and pressure, can be easily estimated for each polymeric species
by Eqs 5 and 7, by testing on low and high molecular weight polymers respectively, as shown in the next section.
As for the estimation of $\zeta_0$, a particular point has to be considered to get meaningful values of it, which was deeply investigated by Graessley and co-workers (16). Since the monomeric friction
factor reflects the local chain dynamics, it depends on the concentration of chain ends, the effect of which vanishes with increasing molecular weight. This entails that experimental viscosity values
must be corrected in order to establish the real dependence of viscosity on $M$ in the low molecular weight range. Graessley's paper (16) suggests a method to estimate the correction, based on free
volume concepts, which results in the following correcting equation
[Mathematical Expression Omitted] (8)
where $\eta_{corr}$ and $\eta_{exp}$ are the corrected and experimental viscosities for a low polymer of molecular weight $M$; $C_1$ is the WLF constant for the molecular weight $M$, and $C_1^\infty$ is
the value of $C_1$ at high molecular weight. The correction, requiring a preliminary analysis on a set of low molecular weight polymers, is important because only in this way can low polymers be shown
to follow the familiar power law pattern, i.e., with exponent near to unity, that we have taken for granted.
In order to check the applicability of Eq 4 and to estimate the constants of the model, we have taken into account experimental data from literature on selected well-characterized polymers. Details
of characterization, not reported here, may be found in the original papers.
a. The Case of Polybutadiene
We have used the remarkably accurate set of data from Colby, Fetters, and Graessley (17), covering a very extended range of molecular weights, from $1 \times 10^3$ up to $1.65 \times 10^7$. Our
analysis, however, was limited to molecular weights up to 350,000, where the 3.4 power was found to apply; above this value, experimental results were found consistent with a lower power of $M$, about
3, as suggested by the reptation theory. Samples obtained by anionic polymerization were nearly monodisperse and of similar microstructure. $T_g$ was found to be -99 °C and $M_e$ = 1850, based on a
plateau modulus $G_N^0 = 1.20 \times 10^7$ dyn/cm².
In Table 1 [TABULAR DATA FOR TABLE 1 OMITTED], zero shear viscosity data are reported at 25 °C for the molecular weight range where the entanglement regime is supposed to hold, i.e.,
10,000 < $M$ < 350,000. The two last columns report the number of entanglements per molecule, $n_e$, and the entanglement friction factor, $\zeta_e$, estimated respectively by Eqs 2 and 6. As can be
seen, the values of $\zeta_e$ are rather similar, ranging from 0.49 to 0.43. The range could be narrowed even a little more by considering only samples having a well-formed entanglement network, i.e.,
$n_e > 20$, and excluding sample B3, which is at the borderline of the validity of the 3.4 power law. Accordingly, for polybutadiene at 25 °C, $\zeta_e = 0.445 \pm 0.02$ (Pa·s)$^{1/3.4}$. This
indicates that a proper choice of $n_e$ in the model results in a unique estimation of the entanglement friction factor.
In order to estimate $\zeta_0$ we have taken into account four low polymer samples, Table 2, with molecular weights ranging from 1030 to 1420. Using for the viscosity the chain-end-corrected values, as
mentioned at the end of the above paragraph, we obtained for polybutadiene at 25 °C: $\zeta_0 = 0.031 \pm 0.003$ Pa·s. Finally, in order to check the validity of Eq 4 we have compared experimental and
calculated viscosity data for intermediate molecular weights, 1420 < $M$ < 10,500, see Table 3 and Fig. 4, which is the transition region from low to high polymer behavior. This region, where the two
flow regimes mix together, appears the most suitable for a critical test of Eq 4. As can be seen, the estimated viscosity data appear in rather good agreement with the experimental data.
Table 2. Estimation of $\zeta_0$ for Polybutadiene. T: 25 °C.

Sample      M      corr. $\eta$ (Pa·s)   $\zeta_0$ (Pa·s)
CDS-B2    1030           0.6               0.0315
C1        1130           0.7               0.0335
C2        1190           0.61              0.0277
C3        1420           0.85              0.0323

Ref. as in Tab. 1.
[TABULAR DATA FOR TABLE 3 OMITTED]
To conclude, we have shown in Fig. 5 the comparison of experimental values of viscosity with calculated ones over the whole range of molecular weights.
b. Other Remarks
The example of polybutadiene, developed in detail, has shown us how to deal with experimental data in order to extract the parameters of the model. For $\zeta_e$ it may sometimes be more
straightforward to use the relationship of $\eta_0$ versus molecular weight on monodisperse polymers, $\eta_0 = k M^{3.4}$, provided that the viscosity data were obtained on samples of sufficiently
high molecular weight, i.e., $M > 10M_e$. Under this condition
$$\zeta_e = M_e\, k^{1/3.4} \qquad (9)$$
where $k$ is the coefficient of the viscosity-molecular weight relationship in the high entanglement range.
Table 4 [TABULAR DATA FOR TABLE 4 OMITTED] shows examples of such calculations, based on literature data (16-22). The $\zeta_e$ values were estimated considering both the powers of $M$ reported by the
authors, the $\alpha$ values of column 6, and the standard 3.4 power, last column. Usually the resulting values are not too different. In case of too big a discrepancy, we think it preferable to rely
on the values obtained from the power estimated by the experimenters, since the coefficient $k$, from which we get the friction factor, and $\alpha$ are somehow related. In any case, $\zeta_e$ values
for a very broad set of polymers span over two decades, with the average value centering around 1 (Pa·s)$^{1/3.4}$.
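Equation 9 in use is a one-liner; the fit coefficient $k$ below is a made-up illustrative value, not one of the Table 4 entries:

```python
# zeta_e = Me * k**(1/3.4), with k from a high-M power-law fit eta0 = k * M**3.4
Me = 1850.0          # polybutadiene entanglement molecular weight
k = 3.0e-12          # hypothetical fit coefficient from a log-log regression
zeta_e = Me * k**(1 / 3.4)
print(f"zeta_e = {zeta_e:.3f} (Pa.s)**(1/3.4)")
```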
In order to improve our knowledge about $\zeta_e$, it proved fruitful to examine its dependence on temperature. A preliminary analysis, Fig. 6, shows that:
a) $\zeta_e$ has a very similar temperature dependence for a number of polymeric species;
b) the temperature dependence is of WLF type, i.e., it can be rationalized by $T - T_g$.
These statements may help to predict viscosity, since $\zeta_e$ can be easily estimated at any given temperature from plot 6.
As for $\zeta_0$, the above analysis on polybutadiene has indicated that, before getting acceptable results, it is necessary to make relevant corrections to the experimental data. This would require a
more specific study and will be postponed to a later time. For the moment we simply note that, at the same temperature, $\zeta_0$ values for polybutadiene turn out to be about half the value of
$\zeta_e^{3.4}$.
It may be interesting to observe that the present model considers the friction factors as constants, independent of the molecular weight. This is in contrast to what is stated in the literature
(Ref. 1, Ch. 10, Eq 14), where the monomeric friction coefficient is assumed to rise from a value $\zeta_{00}$ for the monomer to an equilibrium value $\zeta_0$ at high molecular weights, as a
consequence of the additional free volume associated with molecular ends. Our approach, Eq 2, explains the small deviations from linearity of the viscosity of low polymers as due to additional
entanglement-type constraints to flow experienced by small molecules even below $2M_e$.
We can summarize the above considerations along these lines:
The average number of entanglements per molecule, $n_e$, was critically reexamined, resulting in a new equation, Eq 2, which applies to monodisperse polymers. The modifications introduced for $n_e$,
although apparently minor from the quantitative point of view, offer in principle a means to describe by a unique continuous function the evolution with molecular weight of the number of entanglements
per molecule.
The analytical equation for $n_e$ was then used to get a description of the zero shear viscosity over an extended range of molecular weights, from very low polymers up to the molecular weights where
the 3.4 power law applies. Two well-known experimental findings turned out to be properly described: the continuous evolution of the viscosity as a function of molecular weight, and the sharp slope
variation when changing from the monomer friction regime to the high polymer, i.e., entanglement, regime.
The parameters appearing in the new viscosity equation, Eq 4, are $m_0$, the monomeric molecular weight; $M_e$, the average molecular weight between entanglements; $\zeta_0$, the monomeric friction
coefficient; and $\zeta_e$, the entanglement friction factor. In particular we underline here the novelty of the introduction of the entanglement friction coefficient, which accounts for the additional
restraints experienced by a macromolecule when moving within the network of randomly distributed macromolecules. The friction factors depend on temperature and pressure. Eqs 5 and 7 may be used for
their quantitative evaluation.
Whereas $\zeta_0$ was estimated only in the particular case of polybutadiene, leaving a deeper analysis to further studies, $\zeta_e$ values for many polymers were shown to follow a WLF-type equation,
i.e., with quite similar values when referred to temperatures equally distant from $T_g$.
Finally, using the friction factors extracted from experimental data at low and high molecular weights, the validity of the general viscosity equation, Eq 4, was tested in the most critical molecular
weight region, i.e., for $M_e < M < 4M_e$, finding good agreement with the experimental data.
1. J. D. Ferry, Viscoelastic Properties of Polymers, 3rd Ed., John Wiley & Sons, New York (1980).
2. T. G. Fox and P. J. Flory, J. Am. Chem. Soc., 70, 2384 (1948).
3. T. G. Fox and P. J. Flory, J. Phys. Colloid Chem., 55, 221 (1951).
4. G. C. Berry and T. G. Fox, Adv. Polym. Sci., 5, 261 (1968).
5. W. W. Graessley, Adv. Polym. Sci., 16, 1 (1974).
6. P. E. Rouse, J. Chem. Phys., 21, 1272 (1953).
7. P. G. De Gennes, Scaling Concepts in Polymer Physics, Cornell University Press, Ithaca, N.Y. (1979).
8. M. Doi and S. F. Edwards, J. Chem. Soc., Faraday Trans. II, 74, 1789, 1802, 1818 (1978).
9. M. Doi and S. F. Edwards, J. Chem. Soc., Faraday Trans. II, 75, 38 (1979).
10. H. H. Kausch, Polymer Fracture, 2nd Ed., Springer-Verlag, Berlin (1987).
11. A. M. Donald and E. J. Kramer, J. Polym. Sci., Polym. Phys. Ed., 20, 899 (1982).
12. A. M. Donald and E. J. Kramer, J. Mater. Sci., 17, 1871 (1982).
13. S. Wu, Polymer Interfaces and Adhesion, Marcel Dekker, New York (1982).
14. S. M. Aharoni, Macromolecules, 19, 426 (1986).
15. S. Wu, J. Polym. Sci., Polym. Phys. Ed., 27, 723 (1989).
16. L. J. Fetters, W. W. Graessley, and A. D. Kiss, Macromolecules, 24, 3136 (1991).
17. R. H. Colby, L. J. Fetters, and W. W. Graessley, Macromolecules, 20, 2226 (1987).
18. J. T. Gotro and W. W. Graessley, Macromolecules, 17, 2767 (1984).
19. D. J. Plazek and V. M. O'Rourke, J. Polym. Sci.: Pt. A-2, 9, 209 (1971); D. J. Plazek and P. Agarwal, J. Appl. Polym. Sci., 22, 2127 (1978).
20. W. M. Prest Jr. and R. S. Porter, Polym. J., 4, 154 (1973).
21. S. H. Wasserman and W. W. Graessley, J. Rheol., 36(4), 543 (1992).
22. G. Locati and L. Gargani, J. Polym. Sci., Polym. Lett. Ed., 11, 95 (1973).
| {"url":"http://www.thefreelibrary.com/A+model+for+the+zero+shear+viscosity.-a054831782","timestamp":"2014-04-18T05:53:33Z","content_type":null,"content_length":"61191","record_id":"<urn:uuid:28741ec5-421f-4f35-8c79-e9da110cf7e1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamics of Economic Well-Being: Labor Force, 1991 to 1993
Source of Data
The SIPP universe is the noninstitutionalized resident population living in the United States. Field representatives interview eligible persons who are at least 15 years of age at the time of the
interview. Not eligible to be in the survey are crew members of merchant vessels, Armed Forces personnel living in military barracks, institutionalized persons, such as correctional facility inmates
and nursing home residents, and United States citizens residing abroad.
The SIPP sample for the 1991 panel is located in 230 Primary Sampling Units (PSUs) each consisting of a county or a group of contiguous counties.
For the 1991 panel, interviewing began in February, March, April, or May of 1991 for four random subsamples, respectively. For the remainder of the panel, interviews for each person occurred every 4
months for a total of 8 interviews. (One round of interviewing all 4 subsamples is called a wave.) At each interview, the reference period was the 4 months preceding the interview month.
Occupants of about 93 percent of all eligible living quarters participated in the first interview of the panel. For later interviews, field representatives interviewed only original sample persons
and persons living with them. We followed respondents who moved during the panel. The Census Bureau automatically designated noninterviewed households at the first wave as noninterviews for
subsequent waves.
We classified a person as interviewed for the entire panel and both calendar years based on the following two definitions:
• Those for whom self, proxy, or imputed responses were obtained for each reference month of all eight interviews for the 1991 panel, and all three interviews for each calendar year; or
• Those for whom self or proxy responses were obtained for the first reference month of the interview period and responses exist for each subsequent month until they were known to have died or
moved to an ineligible address (foreign living quarters, institutions, or military barracks).
Everyone else is considered noninterview.
Some estimates are based on monthly averages from cross-sectional files. Nonresponse rates for the months on the file vary from 8 percent to 21 percent. Some respondents did not respond to some of
the questions. Therefore, the overall nonresponse rate for some items, especially sensitive income and money related items, is higher than the person nonresponse rate.
We used several stages of weight adjustments in the estimation procedure to derive the SIPP longitudinal person weights. We gave each person a base weight equal to the inverse of his/her probability
of selection and applied adjustments to account for noninterviews.
We performed an additional stage of adjustment to longitudinal person weights to reduce the mean square error of the survey estimates by age, sex, race, and ethnicity (Hispanic/non-Hispanic).
Accuracy of Estimates
We base SIPP estimates on a sample. The sample estimates may differ somewhat from the values obtained from administering a complete census using the same questionnaire, instructions, and enumerators.
The difference occurs because a sample survey estimate is subject to two types of errors: nonsampling and sampling. We can provide estimates of the magnitude of the SIPP sampling error, but this is
not true of nonsampling error. The next few sections describe SIPP nonsampling error sources, followed by a discussion of sampling error, its estimation, and its use in data analysis.
Nonsampling Variability. We attribute nonsampling errors to many sources; they include but are not limited to the following:
• Inability to obtain information about all cases in the sample.
• Inability or unwillingness on the part of the respondents to provide correct information.
• Errors made in collection (e.g. recording or coding the data).
• Undercoverage
We used quality control and edit procedures to reduce errors made by respondents, coders, and interviewers.
Undercoverage in SIPP resulted from missed living quarters and missed persons within sample households. It is known that undercoverage varies with age, race, and sex. Generally, undercoverage is
larger for males than for females and larger for Blacks than for non-Blacks. Ratio estimation to independent age- race-sex population controls partially corrects for the bias resulting from survey
undercoverage. However, biases exist in the estimates when persons in missed households or missed persons in interviewed households have character- istics different from those of interviewed persons
in the same age-race-sex group. Further, we did not adjust the independent population controls for under- coverage in the census.
Comparability with Other Estimates. Exercise caution when comparing data from this report with data from other SIPP publications or with data from other surveys. Comparability problems are from
varying seasonal patterns for many characteristics, different nonsampling errors, and different concepts and procedures.
Sampling Variability. Standard errors indicate the magnitude of the sampling error. They also partially measure the effect of some nonsampling errors in response and enumeration, but do not measure
any systematic biases in the data. The standard errors mostly measure the variations that occurred by chance because we surveyed a sample rather than the entire population.
Uses and Computation of Standard Errors
Confidence Intervals. The sample estimate and its standard error enable one to construct confidence intervals, ranges that would include the average result of all possible samples with a known
probability. Approximately 90 percent of the intervals from 1.645 standard errors below the estimate to 1.645 standard errors above the estimate would include the average result of all possible samples.
The average estimate derived from all possible samples is or is not contained in any particular computed interval. However, for a particular sample, one can say with a specified confidence that the
confidence interval includes the average estimate derived from all possible samples.
Hypothesis Testing. One may also use standard errors for hypothesis testing. Hypothesis testing is a procedure for distinguishing between population characteristics using sample estimates. The most
common type of hypothesis tested is (1) the population characteristics are identical versus (2) they are different. One can perform tests at various levels of significance, where a level of
significance is the probability of concluding that the characteristics are different when, in fact, they are identical.
Unless noted otherwise, all statements of comparison in the report passed a hypothesis test at the 0.10 level of significance or better. This means that, for differences cited in the report, the
estimated absolute difference between parameters is greater than 1.645 times the standard error of the difference.
Note that as we perform more tests, more erroneous significant differences will occur. For example, at the 10-percent significance level, if we perform 100 independent hypothesis tests in which there
are no real differences, it is likely that about 10 erroneous differences will occur. Therefore, interpret the significance of any single test cautiously.
Standard Error Parameters and Tables and Their Use
Most SIPP estimates have greater standard errors than those obtained through a simple random sample because we sampled clusters of living quarters for the SIPP. To derive standard errors at a
moderate cost and applicable to a wide variety of estimates, we made a number of approximations. We grouped estimates with similar standard error behavior and developed two parameters (denoted "a"
and "b") to approximate the standard error behavior of each group of estimates. The standard errors we computed from these parameters provide an indication of the order of magnitude of the standard
error for any specific estimate.
Methods for using these parameters and tables for computation of standard errors are given in the following sections. To calculate standard errors for estimates of persons ever participating or
persons participating all of two years, use a = -0.0000483 and b = 8,912. The bases for percentages are found in appropriate text tables.
Standard Errors of Estimated Numbers. Approximate s using the formula,
s = SQRT(a*x^2 + b*x).
Here x is the size of the estimate.
Illustration. As shown in text table E, the 1991 SIPP estimates that approximately 1.7 million labor turnover actions occurred in the retail trade industry in an average month during 1991. The appropriate
"a" and "b" parameters are a = -0.0000483 and b = 8,912.
Using the above formula, the approximate standard error is
s = SQRT((-0.0000483)(1,737,000)^2 + (8,912)(1,737,000)) = 123,832
The 90-percent confidence interval is from 1,533,296 to 1,940,704. Therefore, a conclusion that the average estimate derived from all possible samples lies within a range computed in this way would
be correct for roughly 90 percent of all samples.
Standard Errors of Estimated Percentages. The reliability of an estimated percentage, computed using sample data for both numerator and denominator, depends on the size of the percentage and its
base. Approximate the standard error by the formula:
S = SQRT((b/x)(p)(100-p)).
Here x is the total number of persons in the base of the percentage and p is the percentage (0 <= p <= 100). Illustration. As shown in text table F, the 1991 SIPP estimates that the average monthly
labor turnover rate for men age 25 to 54 was 4.9 percent in 1991. To find the base for the percentage, use text table F. In this example, the base is 39,892,000. The appropriate "b" parameter is b = 8,912.
Using the above formula, the approximate standard error is
S = SQRT((8,912/39,892,000)(4.9)(100-4.9)) = 0.32 percent
The 90-percent confidence interval is from 4.4 to 5.4 percent. Therefore, a conclusion that the average percentage derived from all possible samples lies within a range computed in this way would be
correct for roughly 90 percent of all samples.
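Both computations are mechanical enough to script. Here is a small sketch (mine, not part of the source report) that reproduces the two illustrations from the published parameters; the function and variable names are mine:

    import math

    a, b = -0.0000483, 8912  # generalized-variance parameters given in the text

    def se_number(x):
        # s = SQRT(a*x^2 + b*x), for an estimated number x
        return math.sqrt(a * x**2 + b * x)

    def se_percentage(p, base):
        # S = SQRT((b/base)*p*(100-p)), for an estimated percentage p (0-100)
        return math.sqrt((b / base) * p * (100 - p))

    x = 1737000
    s = se_number(x)
    print(round(s))                                    # ~123,832
    print(round(x - 1.645 * s), round(x + 1.645 * s))  # 90% CI: ~1,533,296 to 1,940,704

    sp = se_percentage(4.9, 39892000)
    print(round(sp, 2))                                # ~0.32 percent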
1 Details on nonresponse and Hispanic controls are in "SIPP 91: Source and Accuracy Statement for the Longitudinal Panel File REVISION," dated October 19, 1994
2 Details on interview-status classification are in "Weighting of Persons for SIPP Longitudinal Tabulations," paper by Judkins, Hubble, Dorsch, McMillen and Ernst in the 1984 Proceedings of the
Survey Research Methods Section, American Statistical Association.
3 Details on patterns of nonresponse are in "Weighting Adjustment for Partial Nonresponse in the 1984 SIPP Panel," paper by Lepkowski, Kalton, and Kasprzyk in the 1989 Proceedings of the Survey
Research Methods Section, American Statistical Association.
4 For more discussion on nonresponse and the existence and control of nonsampling errors in the SIPP, see the Quality Profile for the Survey of Income and Program Participation, May 1990, by T.
Jabine, K. King and R. Petroni. Available from Customer Services, Data User Services Division (301-763-1139).
5 For more details on noninterview adjustment for longitudinal estimates, see Nonresponse Adjustment Methods for Demographic Surveys at the U.S. Bureau of the Census, November 1988, Working Paper
8823, by R. Singh and R. Petroni.
6 More detailed discussions of the population controls are in the SIPP Dynamics of Economic Well-Being: Labor Force and Income, 1990 to 1992, Report P70-40, by Wilfred Masumura and Paul Ryscavage.
Review of Waves and Ray Diagrams
Review of Waves and Ray Diagrams
Light travels as a wave. You should have already encountered waves in your physics course before starting this module. But in case your study of those topics is a distant memory or scary nightmare,
here is a review of some of the key concepts.
Periodic waves traveling through space, such as light waves, are described by their wavelength and period. A wave is periodic if it repeats itself. Some examples of repeating and non-repeating waves
are shown here.
The wavelength λ of a wave is the distance between two repeated points, as shown in the figure above. The period T of a wave is the time it takes for an entire cycle of the wave to pass one point in
space. In the animation below, one complete cycle passes the green line in 5.0s, so the period of the wave is 5.0 s. (The timer may or may not represent real time, depending on your system.) Click on
"replay" to see the animation again.
The frequency f of a wave is the number of cycles completed per unit time. In our animation, the wave travels through one fifth of a cycle in one second, so the frequency would be 1/(5.0 s) = 0.20 Hz.
Frequency and period are just the inverse of each other:
f = 1/T.
The wave speed is the speed at which one point in the cycle travels through space. Since any point on the wave will travel one wavelength of distance in one period of time, the speed v can be found from:
v = λ/T = λf.
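As a quick numeric check of these relations (my sketch; the wavelength value here is assumed purely for illustration, since the page only gives the period):

    T = 5.0           # period in seconds, from the animation
    wavelength = 2.0  # meters; an assumed value, not given on the page

    f = 1.0 / T               # frequency: f = 1/T
    v = wavelength / T        # wave speed: v = wavelength/T
    print(f)                  # 0.2 (Hz)
    print(v, wavelength * f)  # both 0.4 (m/s), confirming v = wavelength * f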
Representing Light
Light can be represented by sine waves, wave fronts, or arrows in the direction of motion, as the animation below illustrates. Click on "replay" to see a given animation again. When you have viewed
animation 1, click on the "2" in the bottom right to continue, then "3", then "4".
A particular representation (waves, wavefronts, and rays) may be better suited for a given context, but all three representations are equally valid ways of illustrating light.
Copyright © 1999 Rensselaer Polytechnic Institute and DJ Wagner. All Rights Reserved.
COMS W4115
Programming Languages and Translators
Lecture 5: Implementing a Lexical Analyzer
February 6, 2013
1. Finite automata
2. Converting an NFA to a DFA
3. Equivalence of regular expressions and finite automata
4. Simulating an NFA
5. The pumping lemma for regular languages
6. Closure and decision properties of regular languages
1. Finite Automata
• Variants of finite automata are commonly used to match regular expression patterns.
• A nondeterministic finite automaton (NFA) consists of
□ A finite set of states S.
□ An input alphabet consisting of a finite set of symbols Σ.
□ A transition function δ that maps S × (Σ ∪ {ε}) to subsets of S. This transition function can be represented by a transition graph in which the nodes are labeled by states and there is a
directed edge labeled a from node w to node v if δ(w, a) contains v.
□ An initial state s[0] in S.
□ F, a subset of S, called the final (or accepting) states.
• An NFA accepts an input string x iff there is a path in the transition graph from the initial state to a final state that spells out x.
• The language defined by an NFA is the set of strings accepted by the NFA.
• A deterministic finite automaton (DFA) is an NFA in which
1. There are no ε moves, and
2. For each state s and input symbol a there is exactly one transition out of s labeled a.
2. Converting an NFA to a DFA
• Every NFA can be converted to an equivalent DFA using the subset construction (Algorithm 3.20, ALSU, pp. 153-154); a code sketch of the construction follows this list.
• Every DFA can be converted into an equivalent minimum-state DFA using Algorithm 3.39, ALSU, pp. 181-183. All equivalent minimum-state DFAs are isomorphic up to state renaming.
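Here is a compact sketch of the subset construction (the representation and names are mine, not the lecture's): the NFA is a dict mapping (state, symbol) pairs to sets of states, with '' standing for ε.

    from collections import deque

    def eps_closure(nfa, states):
        # all states reachable from `states` using epsilon moves alone
        stack, closure = list(states), set(states)
        while stack:
            s = stack.pop()
            for t in nfa.get((s, ''), set()):
                if t not in closure:
                    closure.add(t)
                    stack.append(t)
        return frozenset(closure)

    def subset_construction(nfa, start, alphabet):
        # returns the DFA start state and transition table; a DFA state is a
        # frozenset of NFA states, accepting iff it contains an NFA final state
        d0 = eps_closure(nfa, {start})
        dfa, seen, work = {}, {d0}, deque([d0])
        while work:
            S = work.popleft()
            for sym in alphabet:
                move = set().union(*[nfa.get((s, sym), set()) for s in S])
                T = eps_closure(nfa, move)
                dfa[(S, sym)] = T
                if T not in seen:
                    seen.add(T)
                    work.append(T)
        return d0, dfa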
3. Equivalence of Regular Expressions and Finite Automata
• Regular expressions and finite automata define the same class of languages, namely the regular sets.
• Every regular expression can be converted into an equivalent NFA using the McNaughton-Yamada-Thompson algorithm (Algorithm 3.23, ALSU, pp. 159-161).
• Every finite automaton can be converted into a regular expression using Kleene's algorithm.
4. Simulating an NFA
• Two-stack simulation of an NFA: Algorithm 3.22, ALSU, pp. 156-159.
5. The Pumping Lemma for Regular Languages
• The pumping lemma allows us to prove certain languages, like { a^nb^n | n ≥ 0 }, are not regular; a worked example follows the statement below.
• The pumping lemma. If L is a regular language, then there exists a constant n associated with L such that for every string w in L where |w| ≥ n, we can partition w into three strings xyz (i.e., w
= xyz) such that
□ y is not the empty string,
□ the length of xy is less than or equal to n, and
□ for all k ≥ 0, the string xy^kz is in L.
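As a worked illustration (standard, and not part of the original outline), here is how the lemma shows that L = { a^nb^n | n ≥ 0 } is not regular. Suppose L were regular with pumping constant n, and take w = a^nb^n, so w is in L and |w| ≥ n. In any partition w = xyz with |xy| ≤ n and y nonempty, the prefix xy lies entirely within the leading a's, so y = a^j for some j ≥ 1. Pumping with k = 2 gives xy^2z = a^(n+j)b^n, which is not in L because the letter counts no longer match, contradicting the requirement that xy^kz be in L for all k ≥ 0. Hence L is not regular.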
6. Closure and Decision Properties of Regular Languages
• The regular languages are closed under the following operations:
□ union
□ intersection
□ complement
□ reversal
□ Kleene star
□ homomorphism
□ inverse homomorphism
• Decision properties
□ Given a regular expression r and a string w, it is decidable whether r matches w.
□ Given a finite automaton A, it is decidable whether L(A) is empty.
□ Given two finite automata A and B, it is decidable whether L(A) = L(B).
7. Practice Problems
1. Write down deterministic finite automata for the following regular expressions:
1. (a*b*)*
2. (aa|bb)*((ab|ba)(aa|bb)*(ab|ba)(aa|bb)*)*
3. a(ba|a)*
4. ab(a|b*c)*bb*a
2. Construct a deterministic finite automaton that will recognize all strings of 0's and 1's representing integers that are divisible by 3. Assume the empty string represents 0.
3. Use the McNaughton-Yamada-Thompson algorithm to convert the regular expression a(a|b)*a into a nondeterministic finite automaton.
4. Convert the NFA of (3) into a DFA.
5. Minimize the number of states in the DFA of (4).
8. Reading Assignment
Solution to Bubble Puzzle Pops Out
With a key mathematical insight, a pair of theorists has solved a 5-decades-old puzzle as easily as you might burst a soap bubble with a pin. The new result lets researchers predict whether a bubble
in foam will grow or shrink. More than a mere curiosity, the mathematical relation could aid engineers designing foamy materials, biologists studying the architecture of tissues, and physicists
probing how crystalline grains are arranged within a solid.
Foam looks simple, but researchers can't explain how it evolves as bubbles grow, shrink, and merge--a process known as coarsening. In 1952, famed mathematician John von Neumann deciphered one aspect
of 2-dimensional foams, such as soap bubbles squeezed between glass plates. Whether a bubble grows or shrinks depends on the sum total of the curvature of its faces. But Von Neumann reduced the messy
problem of adding up curvature to the much simpler task of counting a bubble's sides. He proved that, regardless of their sizes or shapes, 2-D bubbles with five or fewer sides shrink, those with
seven or more grow, and those with six remain the same. For half a century, researchers have struggled to extend von Neumann's result to 3 dimensions.
Now, mathematician Robert MacPherson of the Institute of Advanced Study in Princeton, New Jersey, and theoretical materials scientist David Srolovitz of Yeshiva University in New York City have
cracked the problem. What made it so difficult is that bubbles' surfaces can curve in complicated ways like saddles or potato chips. However, MacPherson realized that he could succinctly describe the
curvature using a mathematical concept called the Euler characteristic. When an object is sliced in two, the Euler characteristic is the tally of surfaces revealed minus the number of holes in
them--one for a croquet ball, zero for a hollow tennis ball. "After that insight, we were able to knock out the rest of it relatively quickly," Srolovitz says.
Using the Euler characteristic, MacPherson and Srolovitz also invented an abstract "mean width" that they could calculate for any object regardless of its shape. In 3 dimensions, a bubble's faces
meet at distinct edges, and the researchers found that a bubble will grow if the sum of the lengths of its edges is greater than 6 times its mean width. If the sum of all the edge lengths is smaller,
the bubble will shrink, as the team reports tomorrow in Nature. The researchers have shown that in 2 dimensions their result reduces to von Neumann's rule and have extended the relation to
hypothetical bubbles in 4 or more dimensions.
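In symbols (my transcription of the results as described above, up to prefactor conventions, with γ a surface tension and M a mobility constant): von Neumann's 2-D law reads dA/dt = −2πMγ(1 − n/6) for an n-sided bubble of area A, while the new 3-D relation reads dV/dt = −2πMγ[L(D) − (1/6)·Σ eᵢ(D)], where L(D) is the bubble's mean width and the eᵢ(D) are its edge lengths. In particular, a bubble grows exactly when the total edge length exceeds six times the mean width, matching the description above.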
Other researchers had already developed empirical relations that, on average, tied the growth of a bubble to the number of faces, and the new, exact result might be useful for putting those rules
of thumb on a firmer theoretical foundation, says Sascha Hilgenfeldt, an applied mathematician at Northwestern University in Evanston, Illinois. "It's very satisfying to have this formulation to work
with," he says. "You know you're on safe ground now." James Glazier, a physicist at Indiana University in Bloomington, says the new work is "a beautiful piece of mathematics." He notes, however, that
a tougher problem is describing how the overall structure of the foam develops as bubbles disappear and merge. "We still have many more years of difficult work ahead before we can truly say we
understand coarsening foams."
'equations' Answers By New Users
New answers tagged equations
I don't want to do your work right away, because it won't help you in the future. But I'll try to give you hints. Java elements: the different elements that you might need are the following: the
number n, which is the number of elements in your input array, can be accessed using myInput.length; to iterate with a moving k index, you'll need a for loop. ...
You seem to have indented the "list" with four spaces, making it a block of preformatted text instead. Try this: In a list, equation shows the latex code: 1. \\({e}^{i\pi }+1=0\\)
As far as I can see, you are computing Lagrange polynomials. In the specific case of 3 data points (x_0, y_0), (x_1, y_1), (x_2, y_2) - which in your example are (0, 4), (1, 2), (3, 3) - the
calculation is quite easy. The first term is y_0*l_0(x) = y_0/((x_0-x_1)*(x_0-x_2))*(x^2 - (x_1+x_2)*x + (x_1*x_2)). The other two polynomials can be computed similarly. In ...
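A self-contained version of that three-point computation (my sketch; the question was about Java, but the idea is language-independent, and the names below are mine):

    def lagrange3(pts):
        (x0, y0), (x1, y1), (x2, y2) = pts
        def f(x):
            l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
            return y0 * l0 + y1 * l1 + y2 * l2
        return f

    f = lagrange3([(0, 4), (1, 2), (3, 3)])
    print(f(0), f(1), f(3))  # 4.0 2.0 3.0 -- the curve passes through the data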
Materials Science Help - Slip Systems & Resolved Shear Stress
Also, note that in this case, the tensile axis lies in one of the slip planes. It is obviously normal to the plane's normal direction and the rss is 0 for all slip directions in that plane.
Thanks for the reply again Bavid. Have fixed up my units and am getting a far more reasonable answer now. This is the only point that i'm still a little lost on, which of the cases will have a rss of
0? Breaking it down from what I see:
[1,-2,1] for tensile axis, [0,-1,-1] for slip dir, not normal to each other so rss non-zero.
[1,-2,1] for tensile axis, [1,0,1] for slip dir, not normal to each other so rss non-zero.
[1,-2,1] for tensile axis, [1,-1,0] for slip dir, not normal to each other so rss non-zero.
None of my cos(λ) angles are 90/0 degrees either, so surely none of the slip directions in this have a rss of 0?
Which case am I messing it up with? :s
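A quick numeric check of the geometry under discussion (my sketch; the fcc {111}<110> assignment is my assumption, since the thread does not state the crystal system). The resolved shear stress is rss = σ·cos(φ)·cos(λ), with φ the angle between the tensile axis and the slip-plane normal and λ the angle to the slip direction, so the rss vanishes whenever the tensile axis lies in the slip plane (cos φ = 0), even though, as noted above, none of the cos(λ) values are zero:

    import numpy as np

    def cosang(u, v):
        u, v = np.asarray(u, float), np.asarray(v, float)
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

    axis = [1, -2, 1]
    print(cosang(axis, [1, 1, 1]))  # 0.0: the axis lies in the (111) plane

    # the three slip directions discussed above: cos(lambda) != 0 for all of them
    for d in ([0, -1, -1], [1, 0, 1], [1, -1, 0]):
        print(d, cosang(axis, d))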
Notes on BRST VII: The Harish-Chandra Homomorphism
The Casimir element discussed in the last posting of this series is a distinguished quadratic element of the center infinitesimal character of
defined by
Note: this is not the same thing as the usual (or global) character of a representation, which is a conjugation-invariant function on the group
The Harish-Chandra Homomorphism
The Poincare-Birkhoff-Witt theorem implies that for a simple complex Lie algebra
to decompose
and show that If
Remarkably, it turns out that one gets something much simpler if one composes
corresponding to the mysterious
The composition map
is a homomorphism, known as the Harish-Chandra homomorphism. One can show that the image is invariant under the action of the Weyl group, and the map is actually an isomorphism
It turns out that the ring
To see how things work in the case of
so one has
which is invariant under the Weyl group action
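Concretely, for g = sl(2,C) with standard basis (e, f, h) and the Casimir normalized as Ω = ef + fe + h²/2 (the normalization conventions here are mine, chosen for illustration): using ef = fe + h, one can write Ω = h²/2 + h + 2fe, so the projection onto U(h) sends Ω to h²/2 + h. The ρ-shift h → h − 1 then gives γ(Ω) = (h − 1)²/2 + (h − 1) = h²/2 − 1/2, which is manifestly invariant under the Weyl group action h → −h. On a highest weight module of highest weight λ, the infinitesimal character takes the value χ_λ(Ω) = λ²/2 + λ = ((λ + 1)² − 1)/2, i.e. the evaluation of γ(Ω) at λ + ρ.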
Once one has the Harish-Chandra homomorphism
and the infinitesimal character of an irreducible representation of highest weight
The Casselman-Osborne Lemma
We have computed the infinitesimal character of a representation of highest weight
This space has weight
These two actions are related by the map
It turns out that one can consider the same question, but for the higher cohomology groups
and thus
for some element
For more details about this and a proof of the Casselman-Osborne lemma, see Knapp’s Lie Groups, Lie Algebras and Cohomology, where things are worked out for the case of
So far we have been considering the case of a Cartan subalgebra
In this more general setting, there is a generalization of the Harish-Chandra homomorphism, now taking
2 Responses to Notes on BRST VII: The Harish-Chandra Homomorphism
1. I don’t know anything about the subject, but it sounds like a suitable title for a “The Big Bang Theory” episode.
2. This presentation of Casimirology is really first rate.
W := {(x_1, x_2, x_3) in V | x_2 = (x_1)(x_3)}. Is W a subspace of R^3? (subspace test)
there are certain criteria for "subspace"
do you recall what they are?
I know show 0 vector is in set, closure under addition and closure under multiplication, i have shown the first two but for the last i am unsure on whether i have done it right so just wanted to check.
V l (x_2)=(x_1)(x_3)} that is hard to parse thru, can you clear it up any?
\[ W:= \{(x_{1}, x_{2}, x_{3}) \in V \text{such that} x_{2}=x_{1}x_{3}\} ) \]
sorry wasnt sure on how to put a space after text
that is alot easier to read :)
if you can prove that: \[c_1\vec X+c_2\vec Y\in W\]that should satisfy the addition and multiplcation requirements in one fell swoop
oh ok so i have done it seperately, but under multiplication can it be said that for cu=c(u1,u2,u3) then cu=(cu1,cu2,cu3) and if we let say v= cu that conditions are met? as cu2=cu1cu3 implies v2
=v1v3 if that makes sense.
cu1 cu2 = c^2 u1 u2 right?
yh see thats where i was not sure as then surely that wouldnt work
im getting the indexes a little mixed up i see
sorry its just my latex typing isnt that quick
\[\vec u=c_1\vec x_1+c_2\vec x_2+c_3\vec x_3\] \[\vec u=c_1\vec x_1+c_2\vec x_1\vec x_3+c_3\vec x_3\] \[k\vec u=k(c_1\vec x_1)+k(c_2\vec x_1\vec x_3)+k(c_3\vec x_3)\]
now im getting lost lol ... oy vey
ok i think i got it it let me type it in latex so it is clear.
Suppose \(c \in R \) and \(\mathbf{v} \in V \) \[c\mathbf{v} = c(v_{1}, v_{2}, v_{3}) =(cv_{1}, cv_{2}, cv_{3}) \]Now if we let \(\mathbf{u} \in V \) where \(\mathbf{u} = c\mathbf{v}\) which is
valid as it is given that \( (x_{1}, x_{2}, x_{3}) \in V \) and \(V\) is the vector space \(R^3\) so multiplying by a scalar will still be in the reals. Thus, we have that \[ cv_{2} =cv_{1}cv_{3}
\] and as \(\mathbf{u} = c\mathbf{v}\) it can be shown that \[ u_{2} =u_{1}u_{3} \]
i think that makes sense.
with my limited recollection, that does make sense to me as well. but wouldnt we want: u in W? or is the line definition of v2 = v1v3 sufficient for that ....
im not even sure what that definition would actually entail ... but for the sake of definitions :)
yh you are right. i want u in W not V
by defining u in W, u = cv, and W is of the definition x2 = x1x3; cv2 = c^2 v1v3 for all c seems to defy closure to me
..but knowing me, id want a 2nd opinion :) @.Sam. you any decent at these things?
yh thats the reason i came on here to ask in the first place the c^2 might mean it is not a subspace.
if we could visualize a x1x3 product ... we would have something to compare with
if its a "dot product" 3<1,1,1> 3<4,1,0> ---------- 3(4+1+0) doesnt seem to make sense to me
cross product the only other thing i can thing of for a test
<1,1,1>x<4,1,0> = <-1,4,-3> <3,3,3>x<12,3,0> = <-9,36,-27> = 3<-3,12,-9>
in that case it definately doesnt hold right
thats what im wondering about we can still get <-1,4,-3> back again, by factoring out another 3, and since 3^2 is a real number ... im just not sure if thats acceptable or not :/
<-1,4,-3> , if we are defining the product of x1x3 as a cross, IS in W, and any real scalar of it is as well
hmm i guess i will just have to email my lecturer see what he says
well, good luck with it :) i think ive stared myself into the 50s bracket of an IQ test with this one ....
well thanks for the help anyway, much appreciated
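For the record, a one-line counterexample (mine, not from the thread) settles the question: u = (1, 1, 1) is in W since 1 = 1·1, but 2u = (2, 2, 2) is not, since 2 ≠ 2·2 = 4. So W fails closure under scalar multiplication (and under addition, since u + u = 2u), and is therefore not a subspace of R³.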
Just checking to see if right and how to get arc.
July 11th 2007, 01:48 PM #1
Junior Member
Jun 2007
Just checking to see if right and how to get arc.
Just wondering if I proved the congruency part right and how do I answer the second part? Still confused over previous post. Is the arc 45 degrees? Took forever to get this to upload!
Theorem 2 says that if two chords are parallel, then the arcs between them are congruent. Arcs AE and BF are between the parallel chords AB and EF, so this makes arcs AE and BF congruent. Applying the theorem again, the arcs between EF and CD are congruent, which makes arcs EC and FD congruent, so the total arcs AC and BD are congruent. Since all lines are parallel and all other arcs are congruent, the arcs between AB and CD are congruent.
Let $arc \ CD=x$. Then $arc \ AB=2x$.
$arc \ AC=arc \ BD=105$.
We have $x+2x+2\cdot 105=360\Rightarrow 3x=150\Rightarrow x=50$
Arithmetic on the Abacus: Part 1
If you want to talk about mechanical computing tools, you can’t ignore the abacus. It’s the oldest computing tool in the world; and it’s still very commonly used. It’s also about as different from
the slide rule as you could imagine. The abacus is really fundamentally an addition device; the slide-rule is fundamentally a multiplier. And the slide rule is very complicated – all those different
scales, in logarithmic relationships; the abacus is thoroughly simple – just beads hanging on wires. But don't let that fool you: the abacus is a remarkable device, which is capable of a really
huge number of computations: addition, subtraction, multiplication, division, even square and cube roots.
The abacus is, basically, sort of like a *better* piece of paper. Any kind of numerical calculation that you can do using a piece of paper and a pencil, you can do on an abacus; only it's a whole lot
faster on the abacus.
There are a lot of different variants on the abacus. A few examples with pictures:
1. The Chinese abacus, or suan-pan. Each column on the abacus is split into two “decks”, with five beads on the lower and two on the upper.
2. The Japanese abacus, or Soroban. Each column is split into two decks, with 4 beads on the lower, and one on the upper.
3. The Roman abacus. The basic idea of the roman abacus is similar to the soroban; lower deck with four beads, upper with one. The roman generally had seven columns, plus sometimes a couple of extras
for fractions. The main difference mechanically is that the roman abacus doesn't put the beads on the wires; instead it has its beads just sitting in grooves.
4. The Lee chinese abacus (named after its inventor, Lee Kai-Chen). This looks like two small soroban stacked on top of a suan-pan. The two upper mini-sorobans are used for place-keeping and
sub-calculations, along with sliding markers on the beams separating the decks. The Lee abacus is a really amazing piece of work. Unfortunately, they're quite rare. (I'd love to own one, but I've never
been able to find one for less than $400!)
I'm going to talk about the Chinese abacus, the suan-pan. The main reason that I
prefer the suan-pan is that the way its beads are set 5/2 lets you
simplify some things; you can do things like delay a carry until you're ready;
and it makes some 5's complement stuff easier to do.
If you're interested in the abacus at all, I recommend looking [here][abacus-site]; it's an absolutely wonderful website with information about all of the different kinds of abacuses (abaci?), a
Java applet that simulates the suan-pan abacus, scanned images of books about how to do things on the abacus, information about where to buy yourself an abacus, and more. It's great. The images that
I'll be using were generated using the suan-pan applet on that site.
So, let’s take a look at how to do some simple arithmetic on the Chinese abacus.
First we need to see how to do basic numbers. The abacus is set to zero with all of the beads on the lower deck down against the bottom beam; and all of the beads in the upper deck pushed up against
the upper beam, like so: *(Commenter JuanCarlos pointed out that I messed up the original version of this image; I didn’t line up correctly, and as a result, didn’t have any beads down in the upper
deck of the fifth column, so instead of being 5, it was 0. Thanks for the catch!)*
To read numbers, each column represents one decimal digit. Each bead on the lower rack moved up adds one to the value in the column; each bead on the upper rack moved down adds "5" to the
column value. So in the following image, the columns from left to right read 9 (4 lower + 1\*5 upper); 8 (3 lower + 1\*5 upper), 7, 6, 5, 4, 3, 2, 1:
To add on an abacus, you use basically the same process as adding on paper: you move from right to left, adding numbers in each column. You start by putting the first of two numbers to add in the
columns to the right. Then, for each digit in the second number from right to left, you move beads to represent your addition. To add 1, move one bead from the lower deck up; when all five beads on
the lower rack are up, you move them all down and lower one bead from the upper deck. When both beads on the upper deck are down, you can move both of them back up, and raise one bead from the lower
deck of the next digit to the left.
That will become clearer after an example. Suppose we want to add 47281 + 23153. We’ll start by putting 47281 onto the abacus:
We start at the right-most column. We want to add three there; so we move three beads up on the lower deck:
Now we move to the next digit. To add 5, we can just lower one bead on top. That gives us two lowered beads on the upper deck, which means we need to carry one to the next column. So we raise both
beads on the upper deck of the second column, and raise one bead from the lower deck of the third column. So far, the abacus reads 47334:
In the third digit, we want to add one, so we raise one bead in the third column. The abacus now reads 47434:
We move on to the fourth column. We need to add 3, so we raise three beads on the lower deck. That gives us five raised beads on the lower deck. So we can lower all five beads, and also lower one
bead from the upper deck. The abacus now reads 4(10)434:
With two beads down on the upper deck, we need to carry one to the left. So we shift them up, and add one to the lower deck of the next column, so that we correctly read 50434:
Now, we finally move on to the fifth column. Since the lower deck has 5 beads up, we can lower all of the beads on the lower deck, and one from the upper deck. Then we add two. So we wind up with 70434:
See? It’s basically exactly the same as addition on paper, only we’re moving beads instead of writing down numbers. It’s the same mechanism; right-to-left adding digits, carrying one to the left each
time a digit is 10 or higher.
There's one neat trick that you can use on the abacus to make things easier, based on fives-complement arithmetic. In base 5, adding a single digit n in the *i*th position is equivalent to adding 1 to
the digit in the *i+1*th position, and subtracting (5-n) from the *i*th digit. So, for example, if we have 3241 in base 5, and we want to add 4 to the third digit (2), we can do it by adding one to
the fourth digit, and *subtracting* 5-4=1 from the third digit, giving us 4141.
On the abacus, we can use this trick. The two decks in a single column are effectively two base-5 digits. So adding n to a column is the same as *lowering* one bead from the upper deck of that
column, and *lowering* 5-n beads from the lower deck of that column.
For example, if we’re adding 34 + 53, we’d start with 4 raised beads in the lower deck of the first column; and 3 raised beads in the lower deck of the second column. We want to add 3 to the first
column; we can do that by lowering one bead from the upper deck, and two beads from the lower deck. That basically means adding five and subtracting two – which is adding three. Many things can be
done much faster on the abacus by playing with fives-complement this way.
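The whole right-to-left procedure is easy to mimic in code. Here's a minimal sketch (mine, not from the post) where each column is an (upper, lower) bead pair worth 5·upper + lower, and the bead exchanges show up as the divmod carries:

    def set_number(n, columns=9):
        # represent n as (upper, lower) pairs, least significant column first
        cols = []
        for _ in range(columns):
            d = n % 10
            cols.append((d // 5, d % 5))
            n //= 10
        return cols

    def add(cols, n):
        # add n column by column, exchanging beads exactly as described above
        carry = 0
        for i in range(len(cols)):
            d, n = n % 10, n // 10
            total = 5 * cols[i][0] + cols[i][1] + d + carry
            carry, digit = divmod(total, 10)   # two lowered upper beads -> carry left
            cols[i] = (digit // 5, digit % 5)  # five raised lower beads -> one upper
        return cols

    def read(cols):
        return sum((5 * u + l) * 10**i for i, (u, l) in enumerate(cols))

    abacus = set_number(47281)
    add(abacus, 23153)
    print(read(abacus))  # 70434, matching the worked example above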
[abacus-site]: http://www.ee.ryerson.ca/~elf/abacus/
1. #1 Xanthir, FCD September 19, 2006
Okay, so the Soroban is explicitly base-10; you can only represent up to 9 in each column. You have to carry immediately, though, moving the upper deck as soon as you go past 4 on the lower deck,
and moving the next column as soon as you go past 9. I see how the Suan Pan lets you ‘store’ the carry for a while, so you can do it when convenient.
As for the 5-complement business, I would guess that this is mostly employed to eliminate carrying? Depending on the size of the number you’re adding to a particular digit, you can either do it
forward (normally) or backward (using the 5-complement) to avoid having to mess with carries as often. Can you eliminate all carries with this, or do you just eliminate most?
And one small nitpick: The two registers can’t be thought of as two base-5 numbers, but rather as a pairing of a binary digit and a base-5 digit. Using them together creates a base-10 digit. If
they were both base-5, you’d get a combined base-25 digit.
2. #2 slipstick libby September 19, 2006
Aw, you didn’t get to the other scales on the slide rules: Trig, canned conversions, stats, special purpose rules, and maybe 10^-40 : 10^+40 on the decimal keeper scale at http://
www.antiquark.com/sliderule/sim/n904t/virtual-n904-t.html — large enough to cover lengths from sub-planck scale to beyond the observable universe: http://en.wikipedia.org/wiki/
3. #3 Mark C. Chu-Carroll September 19, 2006
I considered whether to say two base-5 values or to be more precise; I decided that it was reasonable to think about the upper deck as a base-5 digit that just never got incremented above 2.
The 5s complement doesn't actually eliminate carries. Looking back at the article, I wasn't very clear. The 5s complement is actually just a physical motion optimization. In adding 3+3,
instead of sliding up two on lower, down one on upper, down all five on lower, then up 1 on lower, it’s one motion: down one on upper, and down 2 on lower. If you’re good at the abacus, you use
your thumb on the lower deck, and your first finger on the upper, and you just put both fingers on the right beads and push them down at the same time. It’s *very* fast; faster than you even
think about the numbers.
4. #4 JuanCarlos September 19, 2006
The 6th column of the second image (when the “simple arithmetic on the Chinese abacus” begins) red 0 (zero), right?
So the columns from left to right read 9, 8, 7, 6, 5, 0, 4, 3, 2, 1… or am I wrong?
5. #5 Mark C. Chu-Carroll September 19, 2006
You’re right; I made a mistake in the simulator when I snapped the image; there was suppose to be one bead down in the upper deck. I’ll correct it; thanks.
6. #6 Xanthir, FCD September 19, 2006
Ah, I see. It sort of reduces carries within a column (from the lower to the upper register), but not between columns. Still a big labor saver (as you showed).
7. #7 Jivemasta September 19, 2006
I was just wondering, where did you get the abacus program?
8. #8 Mark C. Chu-Carroll September 19, 2006
Sorry, I meant to include a link! There’s a wonderful website full of abacus information, including the Java applet that I used to generate the images, at http://www.ee.ryerson.ca/~elf/abacus/.
9. #9 Stu Savory September 20, 2006
I have a home-made hex/octal/binary abacus, split 7/2. That nerdy enough for you?
10. #10 Patrick September 20, 2006
Nice post! So where does one get an abacus? I saw a few on Amazon and some other sites, but I'm wary of buying something like that online. I'd want to check the construction and stuff, make sure it
ain’t going to fall apart. Any nat’l brick and mortars carry them, or is there a particular brand you recommend?
11. #11 Mark C. Chu-Carroll September 20, 2006
I just tried ordering one from amazon, to see what it’s like. I’ll let you know if it’s any good.
In general, the best way to get a good abacus is to head for a chinese market, if there’s one near where you live. They’ll have an assortment made of different sizes and materials, and their
prices will be better than what you’d typically find online.
I’m also doing some experimenting with building a Lee abacus myself. If I can find cheap materials that turn out well, I’ll post the plans.
12. #12 Doug September 20, 2006
There were counting boards even before the abacus [Jacksonville University].
Charles Seife in ‘ZERO: the biography of a dangerous idea’ discusses the Greek knowledge of 0 as a placeholder, possibly borrowed from Babylonia on pages 37-39. Figure 2 page 15 demonstrates
Babylonian use of 0 while Figure 1 page 14 lists two Greek methods of counting with one style similar to Roman numerals and the other style apparently using a part of the alphabet as numeric
symbols sometime around 300-500 BCE. This is nearly a thousand years before the introduction of Arabic numerals.
Since Arabic numerals were apparently first developed in Hindu India ~ 400 BCE [1], one can speculate if there was some type of Indo-European numeric system throughout all lands the civilizations
of these languages occupied. Both the Greeks and Romans were great engineers which would be cumbersome with only Roman numerals available. Perhaps this was state secret information?
[1] http://en.wikipedia.org/wiki/Arabic_numerals
13. #13 Patrick September 21, 2006
Thanks Mark!
14. #14 BMurray September 21, 2006
I recall regularly watching the cashier at my favourite sushi joint check her math on an electronic calculator with an attached abacus. She’d tally with the electronic calculator part and then
double check the result with the abacus bit.
15. #15 Stephen September 28, 2006
I have problem sheet generators on my web site:
Addition and subtraction:
Of course, you can use multiplication problems for division.
The idea here is that the app is up on a web site somewhere, and you ask your browser to print the results. No installation. No advertisements. Pretty fast.
16. #16 Weiqi Gao October 3, 2006
Your post triggers a flood of childhood memories. I learned arithmetic on the Chinese SuanPan when I was six, before I started elementary school when I learned it on paper.
I’ve written up how abacus addition is really done in a post on my blog:
17. #17 Anonymous July 24, 2010
i’m an elementary school teacher from Greece and i’m planning to integrate the use of the abacus in mathematics teaching next schoolyear. I’m trying to find the conceptual differences between
suan pan and soroban so as to decide which one to use. I would be grateful if someone could help me!
Y. Wolfstahl, M. Yoeli, "An Equivalence Theorem for Labeled Marked Graphs," IEEE Transactions on Parallel and Distributed Systems, vol. 5, no. 8, pp. 886-891, August, 1994.
BibTeX:
@article{ 10.1109/71.298217,
author = {Y. Wolfstahl and M. Yoeli},
title = {An Equivalence Theorem for Labeled Marked Graphs},
journal ={IEEE Transactions on Parallel and Distributed Systems},
volume = {5},
number = {8},
issn = {1045-9219},
year = {1994},
pages = {886-891},
doi = {http://doi.ieeecomputersociety.org/10.1109/71.298217},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RIS (RefWorks/ProCite/RefMan/EndNote):
TY - JOUR
JO - IEEE Transactions on Parallel and Distributed Systems
TI - An Equivalence Theorem for Labeled Marked Graphs
IS - 8
SN - 1045-9219
SP - 886
EP - 891
A1 - Y. Wolfstahl,
A1 - M. Yoeli,
PY - 1994
KW - Petri nets; formal languages; multiprocessing systems; equivalence theorem; labeled marked graphs; structural determinism; sequential language; firing sequences; transitions; concurrent language
VL - 5
JA - IEEE Transactions on Parallel and Distributed Systems
ER -
Petri nets and their languages are a useful model of systems exhibiting concurrent behavior. The sequential language associated with a given Petri net S consists of all possible firing sequences of S, where each element of a firing sequence is a single transition. The concurrent language associated with S consists of all possible concurrent firing sequences of S, where each element of a concurrent firing sequence is a set of transitions. The sequential language and the concurrent language associated with S are denoted by L(S) and π(S), respectively. In this paper, we consider an important special case of Petri nets, called labeled marked graphs. The main result derived in this paper states that if Γ1 and Γ2 are two structurally deterministic labeled marked graphs, then L(Γ1) = L(Γ2) if and only if π(Γ1) = π(Γ2).
[1] J. L. Peterson,Petri Net Theory and the Modeling of Systems. Englewood Cliffs, NJ: Prentice-Hall, 1981.
[2] J. L. Peterson, "Petri nets,"ACM Comput. Surveys, vol. 9, no. 3, pp. 223-252, Sept. 1977.
[3] J. L. Baer, "Techniques to exploit parallelism," inParallel Processing Systems. D. J. Evans, Ed. Cambridge, U.K.: Cambridge University Press, 1984, pp. 76-99.
[4] E. Best and R. Devillers, "Sequential and concurrent behavior in Petri net theory,"Theoretical Comput. Sci., vol. 55, pp. 87-136, 1987.
[5] G. Rozenberg and R. Verraedt, "Subset languages of Petri nets I: The relationship to string languages and normal forms,"Theoretical Comput. Sci., vol. 26, pp. 301-326, 1983.
[6] M. Yoeli, "Specification and verification of asynchronous circuits using marked graphs," inConcurrency and Nets: Advances in Petri Nets, K. Voss, H. J. Genrich, and G. Rozenberg, Eds. New York:
Springer-Verlag, 1987, pp. 605-622.
[7] F. Commoner, A. W. Holt, S. Even, and A. Pnueli, "Marked directed graphs,"J. Comput. Syst. Sci., vol. 5, pp. 511-523, 1971.
[8] Y. Malka and S. Rajsbaum, "Analysis of distributed algorithms based on recurrence relations,"Proc. 5th Workshop on Distributed Algorithms on Graphs (WDAG-5), 1991, pp.
[9] S. Rajsbaum, "Stochastic marked graphs,"Proc. 4th Int. Workshop on Petri Nets and Performance Models (PNPM91), 1991, pp.
[10] C. V. Ramamoorthy and G. S. Ho, "Performance evaluation of asynchronous concurrent systems using Petri nets,"IEEE Trans. Software Eng., vol. 6, pp. 440-449, May 1989.
[11] M. Yoeli and T. Etzion, "Behavioral equivalence of concurrent systems," inApplications and Theory of Petri Nets, A. Pagoni and G. Rozenberg, Eds.Informatik Fachberichte 66, pp. 292-305, 1983.
[12] C. L. Seitz, "System timing," inIntroduction to VLSI Systems, C. Mead and L. Conway, Eds. Reading, MA: Addison-Wesley, 1980, pp. 218-262.
Index Terms:
Petri nets; formal languages; multiprocessing systems; equivalence theorem; labeled marked graphs; structural determinism; sequential language; firing sequences; transitions; concurrent language
Y. Wolfstahl, M. Yoeli, "An Equivalence Theorem for Labeled Marked Graphs," IEEE Transactions on Parallel and Distributed Systems, vol. 5, no. 8, pp. 886-891, Aug. 1994, doi:10.1109/71.298217
Inscribed Polygon Grids
A simple straight edge and a percentage circle offer children access to profound and fundamental learnings in both art and geometry. Making grids from scratch can exercise children's powers of
visualization: the abilities developed through interpreting configurations for their 'hidden' geometric shapes, patterns, symmetries, and other attributes.
When I am teaching, these constructions grow out of children making chords. Counting clockwise by 25 they construct a square; counting by 20, a pentagon; by 30, a 20-pointed star. They learn that
an inscribed polygon is made of chords, that a regular polygon is the result of counting correctly, that the chords and the inscribed angles are congruent. They know these things because they
have MADE them from scratch!
Let's begin our grid with a set of inscribed polygons. First, a pair of squares at eight points:
Here we should be perceiving at least 8 triangles, 2 squares, 1 eight-pointed star, and 1 octagon, all inside 1 circle:
Below, you will see more sets of polygons. There are 2 octagons, not 1, and more sets of different triangles, some congruent, some similar. All rotate around the center of the circle.
When I present polygons, inscribed polygons, or the diameter of a circle, students (children or teachers) seem to know what these are. We don't go into formal definitions, especially in the
context of my introduction, where I talk about two distinct and contrasting kinds of patterns: regular and random.
Regular patterns have motifs that are 'units of repeat'. Though the units themselves are the same, we can vary the pattern by counting them with different numbers, i.e., repetitions: (1212121)
(112112112) (112221122211222).
Random patterns are like camouflage: there is not only a specific or single motif, but the viewer perceives an overall similarity of shapes and colors which, like the
transcendental-numbers-after-the-decimal-sign, cannot be predicted. The motif (unit of repeat) and the pattern (the numbered intervals) are 'random'.
Highlighting the polygons with lines in colors helps children to visualize, to 'seek and find' . Given the opportunity to draw this grid from the initial 8 points on the circumference, using a
straight edge, learners discover elemental geometric properties as did ancient geometers long ago. They begin to experience the quintessential beauty of geometry.
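For anyone who wants to experiment without a percentage circle at hand, here is a small sketch (mine, not from the article) of the counting construction on 100 points; the plotting details are incidental:

    import math
    import matplotlib.pyplot as plt

    def chords_by_counting(k, n=100):
        # visit points 0, k, 2k, ... (mod n) clockwise from the top,
        # returning the closed polygon or star traced out by the chords
        pts, i = [], 0
        while True:
            angle = math.pi / 2 - 2 * math.pi * i / n
            pts.append((math.cos(angle), math.sin(angle)))
            i = (i + k) % n
            if i == 0:
                pts.append(pts[0])  # close the figure
                return pts

    for k in (25, 20, 30):  # square, pentagon, ten-pointed star
        xs, ys = zip(*chords_by_counting(k))
        plt.plot(xs, ys)
    plt.gca().set_aspect('equal')
    plt.show()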
April 26th 2010, 01:17 AM #1
Junior Member
Nov 2009
Any ideas to show that in integers, $xz = yz \Rightarrow x=y$, without division algorithm and without the corresponding property in natural numbers? (Note: z is nonzero.)
In my course we are doing one of those "ground-up" constructions, so we started with a few axioms (Peano?), defined addition and multiplication of natural numbers, put the equivalence relation $
(a,b)\sim(c,d)$ if $a+d=b+c$ on $\mathbb{N} \times \mathbb{N}$, then defined addition and multiplication of the resulting equivalence classes in what I assume is the standard way of doing that here,
but it's late and I'm tired so I don't want to type anymore. Maybe I'll have time to edit in a few hours after a nap.
but its late and I'm tired so I don't want to type anymore. Maybe I'll have time to edit in a few hours after a nap.
My attempt dead-ends me. I compute xz and yz in terms of the equivalence relation definitions (so x, y, and z are equivalence classes as above). I need to show that because xz=yz, it must be true
that a+d = b+c (where [(a,b)]=x and [(c,d)]=y). Best I've been able to do so far is a few lines of manipulation that almost seems just random.
Got it. Had to do a li'l lemma first, which is what I was trying to avoid but gave up and did it another way.
Last edited by cribby; April 26th 2010 at 02:03 PM.
Stable normal bundle
In surgery theory, a branch of mathematics, the stable normal bundle of a differentiable manifold is an invariant which encodes the stable normal (dually, tangential) data. It is also called the Spivak normal bundle, after Michael Spivak (reference below). There are analogs for generalizations of manifolds, notably topological manifolds and Poincaré spaces.
Given an embedding of a manifold in
Euclidean space
, it has a
normal bundle
. The embedding is not unique, but for high dimension it is unique up to homotopy, thus the (class of) the bundle is unique, and called the
stable normal bundle
This construction works for any Poincaré space X: a finite CW-complex admits a stably unique (up to homotopy) embedding in Euclidean space, via general position, and this embedding yields a spherical
fibration over X. For more restricted spaces (notably PL-manifolds and topological manifolds), one gets stronger data.
Construction via classifying spaces
An $n$-manifold $M$ has a tangent bundle, which has a classifying map (up to homotopy)
$\xi \colon M \to BO(n).$
Composing with the inclusion $BO(n) \to BO$ yields (the homotopy class of a classifying map of) the stable tangent bundle; taking the dual yields the stable normal bundle. (Or equivalently, dualizing and then stabilizing.)
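Concretely, a standard identity (stated here for orientation) relates the two constructions: if $i \colon M^m \hookrightarrow \mathbf{R}^{m+k}$ is an embedding with normal bundle $\nu$, then $\tau_M \oplus \nu \cong i^* \tau_{\mathbf{R}^{m+k}} \cong \varepsilon^{m+k}$, a trivial bundle, so $\nu$ is a stable inverse of the tangent bundle and $[\nu] = -[\tau_M]$ in reduced $KO(M)$.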
Why normal?
Stable normal data is used instead of unstable tangential data because generalizations of manifolds have natural stable normal-type structures, but not unstable tangential ones.
A Poincaré space X does not have a tangent bundle, but it does have a well-defined stable spherical fibration, which for a differentiable manifold is the spherical fibration associated to the stable
normal bundle; thus a primary obstruction to X having the homotopy type of a differentiable manifold is that the spherical fibration lifts to a vector bundle.
In classifying space language, the stable spherical fibration $X \to BH$ must lift to $X \to BG$, which is equivalent to the map $X \to B(G/H)$ being null homotopic; recall the fibration sequence
$BG \to BH \to B(G/H).$
Thus the bundle obstruction to the existence of a (smooth) manifold structure is the class $X \to B(G/H)$.
The stable normal bundle is fundamental in
surgery theory
as a primary obstruction:
• For a Poincaré space X to have the homotopy type of a smooth manifold, the map $X \to B(G/H)$ must be null homotopic.
• For a homotopy equivalence $f\colon M \to N$ between two manifolds to be homotopic to a diffeomorphism, it must pull back the stable normal bundle on N to the stable normal bundle on M.
Spivak, Michael, "Spaces satisfying Poincaré duality," Topology (1967), 77–101. MR0214071 (35 #4923).
Real Second-Order Sections
In practice, however, signals are typically real-valued functions of time. As a result, for real filters (§5.1), it is typically more efficient computationally to combine complex-conjugate one-pole
sections together to form real second-order sections (two poles and one zero each, in general). This process was discussed in §6.8.1, and the resulting transfer function of each second-order section is a special case of the biquad section discussed in §B.1.6.
When the two poles of a real second-order section are complex, they form a complex-conjugate pair, i.e., they are located at $z = R e^{\pm j\theta}$, where $R$ denotes the pole radius and $\theta$ the pole angle. In this case, the denominator of Eq. (9.3) can be expressed as $1 - 2R\cos(\theta)z^{-1} + R^2 z^{-2}$,
which is often more convenient for real-time control of resonance tuning and/or bandwidth. A more detailed derivation appears in §B.1.3.
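A minimal time-domain sketch of one such section (my code, not from the text), using the resonator parametrization just given:

    import math

    def biquad(x, R, theta, b0=1.0, b1=0.0, b2=0.0):
        # direct-form real second-order section with poles at R*exp(+/- j*theta):
        # denominator 1 - 2*R*cos(theta)*z^-1 + R^2*z^-2 (keep R < 1 for stability)
        a1, a2 = -2.0 * R * math.cos(theta), R * R
        y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
        for xn in x:
            yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            y.append(yn)
            x2, x1 = x1, xn
            y2, y1 = y1, yn
        return y

    # impulse response of a resonator tuned to theta = pi/4:
    h = biquad([1.0] + [0.0] * 15, R=0.95, theta=math.pi / 4)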
Figures 3.25 and 3.26 (p. ) illustrate filter realizations consisting of one first-order and two second-order filter sections in parallel.
Re: st: Simple slope analysis for non-linear models
From Maarten Buis <maartenlbuis@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Simple slope analysis for non-linear models
Date Mon, 25 Mar 2013 10:30:22 +0100
On Sun, Mar 24, 2013 at 11:40 AM, Ebru Ozturk wrote:
>I followed your suggestion and it seems that I want this derivative E( y |X, y ≥ 0).
> In my model, I have one interaction term (dummy X continuous-ranges from 0 to 10) and other Xs.
This is a sentence full of contradictory statements: Do you have one or
many interaction terms? Is your key variable a single indicator
(dummy) variable, 10 indicator variables, or continuous?
>I would like to plot the values of true interaction effect and implied z-statistic value at each observation. After doing that I would like to show the value and significance of X’s marginal effect at selected values of the moderator Z (low, mean and high). I use Stata 10.
There is no such thing as a "true interaction effect". I presume you
are looking for a cross partial derivative or discrete difference
(depending on whether your variables are continuous or categorical). As
far as I know there is no program that does this kind of computation
for -tobit-, so you'll need the general purpose commands -predictnl-
and -adjust- for that in Stata 10. So, you'll need to look up the
appropriate formulas and probably do the derivatives yourself. For
such computations I often combine doing the derivations by hand and
using <http://www.quickmath.com/>. After those computations you can
feed the results to -predictnl- or -adjust-.
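As an illustrative aside (Python/SymPy rather than Stata, and assuming the standard tobit conditional mean E[y | x, y > 0] = xb + sigma*lambda(xb/sigma), with lambda the inverse Mills ratio), the derivation step can be done symbolically:

import sympy as sp

xb = sp.Symbol('xb', real=True)
sigma = sp.Symbol('sigma', positive=True)
z = xb / sigma
phi = sp.exp(-z**2 / 2) / sp.sqrt(2 * sp.pi)  # standard normal pdf
Phi = (1 + sp.erf(z / sp.sqrt(2))) / 2        # standard normal cdf
Ey = xb + sigma * phi / Phi                   # E[y | x, y > 0]
print(sp.simplify(sp.diff(Ey, xb)))           # marginal effect of the linear index

For an interaction term one would differentiate again with respect to the second variable of interest before feeding the expression to -predictnl-.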
I realize you would have liked the answer to be in the form of a
command rather than some general tips on how to write a program that
does what you want to do. But if no one wrote the program before, then
that is the only answer possible. I could have written the program for
you, but that is too big a time investment on my part. I only do things
like that if I am also interested in that problem. In this case I
consider this way of thinking about interaction terms a dead end, so I
am not going to invest time in it.
> But the problem is in journals I use they never mention which Tobit interpretation they implement so that's why I am struggling.
So they report differences/changes in predicted outcome without
defining what the outcome is? That seems a bit problematic to me.
Hope this helps,
Maarten L. Buis
Reichpietschufer 50
10785 Berlin
| {"url":"http://www.stata.com/statalist/archive/2013-03/msg01080.html","timestamp":"2014-04-16T04:22:58Z","content_type":null,"content_length":"11377","record_id":"<urn:uuid:432feaca-b325-4870-830a-8c8b230a2671>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
James' Empty Blog
People seem to have got very excited over the presentation Chip Knappenberger gave at the Heartland conference, which I am a co-author on. So perhaps it is worth a post. Judith Curry described it as a "Good study with appropriate analysis methods as far as I can tell". But please don't let her endorsement put you off too much :-)
The work presented is a straightforward comparison of temperature trends, both observed and modelled. The goal is to check the consistency of the two - ie, asking the question "are the observations
inconsistent with the models"?
This is approached through a standard null hypothesis significance test, which I've talked about at some length before. The null hypothesis being that the observations are drawn from the distribution defined by the model ensemble. We are considering whether or not this null hypothesis can be rejected (and at what
confidence level). If so, this would tend to cast doubts on either or both of the forced response and the internal variability of the models.
It may be worth emphasising right at the outset that our analysis is almost identical in principle to that presented by Gavin on RC some time ago. In that post, he formed the distribution of model results (over two different intervals) and used this to assess how likely a negative trend would be. Here is his main picture:
He argued (correctly) that if the models described the forced and natural behaviour adequately, a negative 8-year trend was not particularly unlikely, but over 20 years it would be very unlikely,
though not impossible (1% according to his Gaussian fit).
We have extended that basic calculation in a few ways, firstly by considering a more complete range of intervals (to avoid accusations of cherry-picking on the start date). Also, rather than using an
arbitrary threshold of zero trend, we have specifically looked at where the observed trends actually lie (well, we also show where zero lies in the distributions). I don't believe there is anything
remotely sneaky or underhand in the basic premise or method. One subtle difference, which I believe to be appropriate, is to use an equal weighting across models rather than across simulations (which
is what I believe Gavin did). I don't think there is any reason to give one model more weight just because more simulations were performed with it. In practice this barely affect the results. Another
clever trick (not mine, so I can praise it without a hint of boastfulness) is to use not just the exactly matching time intervals from the models to compare to the data, but also to consider other
intervals of equal length but different start months. It so happens that the mean trend of the models is very much constant up to 2020 and of course there were no exciting external events like
volcanoes, so this gives a somewhat larger sample size with which to characterise the model ensemble. For longer trends, these intervals are largely overlapping, so it's not entirely clear how much
better this approach is quantitatively, but it's still a nice idea.
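In outline, the percentile calculation amounts to something like the following sketch (placeholder numbers of my own, not the code actually used):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
model_trends = rng.normal(0.2, 0.21, size=550)  # ensemble of N-year trends (placeholder)
obs_trend = 0.11                                # observed N-year trend (placeholder)

p_empirical = np.mean(model_trends < obs_trend)  # one-sided p-level, empirical
p_gaussian = stats.norm.cdf(obs_trend, model_trends.mean(), model_trends.std())
print(p_empirical, p_gaussian)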
Anyway, without further ado, here are the results. First the surface observations, plotted as their trend overlaying the model distribution:
You should note that our results agree pretty well with Gavin's - over 8 years, the probability of a negative trend is around 15% on this graph, and we don't go to 20y but it's about 1% at 15y and
changing very slowly. So I don't think there is any reason to doubt the analysis.
Then the satellite analyses (compared to the appropriate tropospheric temps, so the y axis is a little different):
And finally a summary of all obs plotted as the cumulative probability (ie one-sided p-level):
As you can see, the surface obs are mostly lowish (all in the lower half), and for several of the years the satellite analyses are really very near the edge indeed.
Note that the observational data points are certainly not independent realisations of the climate trend - they all use overlapping intervals which include the most recent 5 years. Really it's just a
lot of different ways of looking at the same system. (If each trend length were independent, then the disagreement would be striking, as it's not plausible that all 11 different values would lie so
close to the edge, even with the GISS analysis. But no-one is making that argument.)
It is also worth pointing out that this analysis method contradicts the confused and irrelevant calculations that some have previously presented elsewhere in the blogosphere. Contrary to the impression you might get from those links, the surface obs are certainly not outside the symmetric 95% interval (ie below the 2.5% threshold on the above plots),
though you can get just past 5% for HadCRU for particular lengths of trend and a couple of the satellite data points do go below 2.5%, particularly those affected by the super-El-Nino of 1998.
As for the interpretation...well this is where it gets debatable, of course. People may not be entitled to their own facts, but they are entitled to reasonable interpretations of these facts.
Clearly, over this time interval, the observed trends lie towards the lower end of the modelled range. No-one disputes that. But at no point do they go outside it, and the lowest value for any of the
surface obs is only just outside the cumulative 5% level. (Note this would only correspond to a 10% level on a two-sided test). So it would be hard to argue directly for a rejection of the null
hypothesis. On the other hand, it is probably not a good idea to be too blase about it. If the models were wrong, this is exactly what we'd expect to see in the years before the evidence became
indisputable. Another point to note is that the satellite data shows worse agreement with the models, right down to the 1% level at one point, and I find it hard to accept that this issue has really
been fully reconciled.
A shopping list of possible reasons for the results include:
• Natural variability - the obs aren't really that unlikely anyway, they are still within the model range
• Incorrect forcing - eg some of the models don't include solar effects, but some of them do (according to Gavin on that post - I haven't actually looked this up). I don't think the other major
forcings can be wrong enough to matter, though missing mechanisms such as stratospheric water vapour certainly could be a factor, let alone "unknown unknowns"
• Models (collectively) over-estimating the forced response
• Models (collectively) under-estimating the natural variability
• Problems with the obs
I don't think the results are very conclusive regarding these reasons. I do think that the analysis is worth keeping an eye on. Anyone who thinks that even mainstream climate scientists are not
wondering about the apparent/possible slowdown in the warming rate is kidding themself. As I quoted recently:
However, the trend in global surface temperatures has been nearly flat since the late 1990s despite continuing increases in the forcing due to the sum of the well-mixed greenhouse gases (CO2,
CH4, halocarbons, and N2O), raising questions regarding the understanding of forced climate change, its drivers, the parameters that define natural internal variability (2), and how fully these
terms are represented in climate models.
That wasn't some sceptic diatribe, but rather Solomon et al, writing in Science (stratospheric water vapour paper). And there was also the Easterling and Wehner paper (which incidentally also uses a very similar underlying methodology for the model ensemble). Knight et al as
well: "Observations indicate that global temperature rise has slowed in the last decade"
So all those who are hoping to burn me at the stake, please put away your matches.
Yatsugatake, originally uploaded by julesberry2001.
jules is in the foreground and the big peaks of Yatsugatake in the distance. Yatsugatake remains my favourite mountain.
Like our papers, it seems that our team-photos are often the best.
Posted By jules to jules' pics at 5/27/2010 05:42:00 PM
shirakoma ike at sunrise, originally uploaded by julesberry2001.
James says, "the eyepads don't help much when one's wife decides we must get up and photograph the sunrise anyway. "
[BTW - this week's blogged fotos from last weekend's mountain trip (Monday,Tuesday, Wednesday) are all taken with James' wee LX3.]
Posted By jules to jules' pics at 5/25/2010 06:06:00 PM
ipad, originally uploaded by julesberry2001.
Some people on the internets have suggested that ipads are of little practical value. That would be very wrong. In Japanese mountain huts, where the sun rises through the bare windows at 5am, they are not just a luxury but a necessity.
Posted By jules to jules' pics at 5/24/2010 08:50:00 PM
The fall in Japan's population is accelerating pretty much as expected:
Japan's population has entered full-scale decline and shrank by a record 183,000 people over the past year
Of course this is accompanied by a "greying" of the population and reduction in the workforce as a proportion of the total. One rational response might be to encourage immigration to make up the
numbers, but in fact Japan has been kicking out foreigners at quite a rate recently. The reduction in non-Japanese population actually contributed 47,000 to the total decline. This is all part of the
current trend towards isolationism. Perhaps they think they can replace workers with robots.
Meanwhile, the economy has recently been back in deflation, though not by enough to make up for the pay cuts.
Iodake in a blizzard, originally uploaded by julesberry2001.
It was OK as long as we didn't walk into the wind. Unfortunately, down there in the white there is a junction at which we had to do just that. The path being invisible, we walked off the edge of the
mountain in the snow hoping we would land on a path below. More accurately, James walked - perhaps he could even see where he was going - while I, eyes stinging and being blown around like a little
leaf, clung on to him pathetically.
[Iodake summit, part of Yatsugatake, at about 7am and 2700m]
Posted By jules to jules' pics at 5/23/2010 11:37:00 PM
This is the title of an interesting paper by Zaliapin and Ghil that appeared some time ago on the Arxiv and more recently in peer-reviewed form in NPG. It's presented as a criticism of Roe and Baker,
which has already been debunked enough, so after a quick glance I didn't pay too much attention to it at first. I also know the second author to be extremely clever, so I was worried that it might be
really technical. On a second, slower, reading, however, it's actually quite straightforward and very interesting. It also seems rather harsh on R&B, because the criticism applies to a whole host of
similar work.
Z&G start with the basic premise that R&B - and indeed all work of this nature - use: that there's a functional relationship between radiation R (at the top of the atmosphere) and the surface
temperature T, which we can write as R=R(T,a(T)) where the notation indicates that a(T) is the "feedback" term, that is to say, if we replace a(T) with zero then the function returns the
zero-feedback relationship.
We can then perform a Taylor series expansion to investigate how the radiation balance changes with temperature:
DR = dR/dT*DT + dR/da*da/dT*DT + O(DT^2)
(read the real paper for more elegant typesetting)
By writing 1/L0 = dR/dT (L0 is the zero-feedback sensitivity) and defining f=-L0*dR/da*da/dT we arrive at the familiar expression
DR = (1-f)/L0 * DT + O(DT^2)
Now what everyone does at this point is to drop the last term and use the linear approximation, which can be re-arranged to give
DT = L0/(1-f) * DF
exhibiting the well-known singularity for a feedback of f=1.
What Zaliapin and Ghil point out is really startlingly simple and IMO elegant. They merely observe that if f is close to one, the linear truncation was not justified because the quadratic term is now
large enough to matter! Once it is included, the singularity at f=1 goes away, as their Fig 2 shows:
(The feedback factor f cannot be larger than one or the initial equilibrium is unstable, even with the nonlinear term, hence the upper bound on the x-axis is sound.)
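To see this concretely: keeping the quadratic term, DR = (1-f)/L0 * DT + b*DT^2, and solving for DT gives

DT = [-(1-f)/L0 + sqrt((1-f)^2/L0^2 + 4*b*DR)]/(2b)

which tends to the finite value sqrt(DR/b) as f -> 1. A quick numerical check (with illustrative values of my own choosing, not taken from the paper):

import numpy as np

L0, DR, b = 0.3, 3.7, 0.02  # assumed zero-feedback sensitivity, forcing, quadratic coefficient
f = np.linspace(0.0, 1.0, 101)

with np.errstate(divide='ignore'):
    DT_linear = L0 * DR / (1.0 - f)           # blows up as f -> 1
a = (1.0 - f) / L0
DT_quad = (-a + np.sqrt(a**2 + 4.0 * b * DR)) / (2.0 * b)  # finite for all f <= 1
print(DT_quad[-1])                            # sqrt(DR/b), about 13.6 here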
I'm not yet sure how much this really matters. We can still get a high sensitivity so long as the nonlinearity is small. AIUI most models do show a fairly, but not perfectly, linear response over
quite a range of forcing and temperature, and the existence of complex climate models with sensitivities above 6C implies that such high values are at least not a mathematical impossibility. It may,
however, make it harder to justify the sort of pathological "long tail" arguments beloved by some. Of course I've argued against them on a number of grounds already - not least of which is that, from
a policy perspective, we are on really shaky ground if all the calls to action have to be based on highly improbable events that we are pretty confident won't happen irrespective of what mitigation
we do or do not attempt. In any case, the maths is interesting in its own right.
Maybe the last picktur I'll post from "real Japan", this one is of a restaurant in Takayama that we didn't go to. Instead we had a very memorable meal of prime beef tonkatsu*, but I didn't photograph
the probably equally good looking restaurant as I was too busy negotiating our entry to the establishment.
*It was a real treat because normal tonkatsu restaurants in our region of Japan don't serve beef, just pork and jumbo shrimp.
Posted By jules to jules' pics at 5/19/2010 10:52:00 PM
Apparently the next Michelin guide to Tokyo will also cover Yokohama and Kamakura.
When Michelin first brought out a guide to Tokyo a few years ago, there was much brouhaha (also here) about how foreign barbarians couldn't possibly be able to properly judge Japan's uniquely unique
haute cuisine, and they seemed to hand out stars like confetti, especially to the sort of absurdly pretentious places where you need a personal introduction in order to even be admitted into the premises.
I have to wonder what they will find in Kamakura to be worthy of Michelin stars. I mean, I very much enjoy some of the restaurants here - it is far better than you'd find in any normal Japanese town
this size, presumably due to the huge numbers of day-trippers and foreign visitors - but there is nothing that I'd really associate with Michelin stars.
Anyway, I'll be interested to see what they say - and maybe they will have one or two new suggestions for us to try.
Old-timers* in old-time Japan. The streets of "Real Japan" were full of Germans and Brits. I think this was why it did not feel very real to me. Nevertheless, it is certainly worth a visit.
[*actually in-laws!]
Posted By jules to jules' pics at 5/17/2010 08:36:00 PM
Hot on the heels of our paper (which is still languishing in the publishing queue, though published on-line) I was rather surprised to come across another paper recently talking about upper bounds on
climate sensitivity, and the costs of climate change. It is open access, so you can all read it for yourselves. The authors consider the "long tail" of possible temperature change and how this
influences the economic analyses of climate change. They point out that the pathology of Weitzman's result vanishes if an upper bound on climate sensitivity is imposed. They use a Cauchy distribution
for sensitivity, and show that the optimal climate policy is fairly insensitive to where this bound is placed, within the range tested of 20-50C. However, they don't appear to justify why these
bounds should be used, rather than (say) 500C or 500,000C, at which point the results would probably be rather different.
Though the authors appear to not know about our Climatic Change paper, they actually do cite two of our other papers, in a way that I'm not really enthused by. They interpret us as explicitly ruling
out a value for sensitivity greater than 8C, where in fact all of our results are probabilistic and do not arrive at an absolute value (other than any assumed in the prior). But this is only by way
of a throwaway comment at the end of their paper, and isn't in any way central to their argument.
Coincidentally, Myles Allen and co are also going on again about how the Jeffreys' Prior solves all the problems of subjectivity (see here for previous). The whole enterprise appears to be a dead end
to me and as far as I can tell, they haven't actually demonstrated any practical results, but maybe when he has eliminated all other possibilities he will reluctantly come around to embracing the
standard Bayesian interpretation of probability. At least while he is presenting abstruse technical notes on the Arxiv he isn't causing more trouble elsewhere, and it must now be increasingly
difficult for him to defend his previous claims. This could make life a little embarrassing for the next IPCC report if people don't start producing climate sensitivity estimates that are not based on
the now thoroughly discredited uniform prior...
More "Real Japan". Genuine, Edo period (1603 to 1868) vending machine.
Posted By jules to jules' pics at 5/16/2010 09:46:00 PM
Oh, I suppose I shouldn't poke fun. Having got their fingers burnt last year - or perhaps I should say, having had their parade thoroughly rained on - the UK Met Office is declining to offer public
forecasts, but is still doing "experimental" research on seasonal prediction and has obviously given a nod and a wink to the author of this Times article.
Of course we all know how the last "barbecue summer" turned out:
“Well, let’s put it this way. I’ve put my barbecue in the shed,” Dave says. “I don’t want it to get any rustier.”
I'm going to be in the UK for a chunk of the summer, so I hope they have got it right this time.
I am reminded that earlier this year, the BBC put out its weather contract to tender. Rumour has it that the esteemed Piers Corbyn, in a change of focus from his current work in volcano and
earthquake prediction (no I'm not joking on that bit), is putting in a strong bid.
A new bulletin board for climate science has been set up here. Apparently there used to be a climate section on physicsforums but they closed it down, so one participant (Chris Ho-Stuart) has set one
up himself, with the motivation:
Our aim is to support substantive discussion of the science of climate, especially the underlying physics. We focus on ideas that have been published in the mainstream scientific literature. This
still allows for all kinds of competing ideas to be considered, while hopefully avoiding distraction from ideas that have no credible basis.
There is already some discussion of the awful G&T paper there (CHS co-authored the comment), and plenty of space for more discussion...
Well, it's not really news, but since the energiser bunny was boasting about his comment recently, here is ours, newly published for real:
Comment on “Influence of the Southern Oscillation on tropospheric temperature” by J. D. McLean, C. R. de Freitas, and R. M. Carter
And to save you from the effort of clicking the link, here's the abstract:
McLean et al. (2009) (henceforth MFC09) claim that the El Niño–Southern Oscillation (ENSO), as represented by the Southern Oscillation Index (SOI), accounts for as much as 72% of the global
tropospheric temperature anomaly and an even higher 81% of this anomaly in the tropics. They conclude that the SOI is a “dominant and consistent influence on mean global temperatures,” “and
perhaps recent trends in global temperatures.” However, their analysis is inappropriate in a number of ways and overstates the influence of ENSO on the climate system. This comment first briefly
reviews what is understood about the influence of ENSO on global temperatures and then shows that the analysis of MFC09 greatly overestimates the correlation between temperature anomalies and the
SOI by inflating the power in the 2–6 year time window while filtering out variability on longer and shorter time scales. The suggestion in their conclusions that ENSO may be a major contributor
to recent trends in global temperature is not supported by their analysis or any physical theory presented in their paper, especially as the analysis method itself eliminates the influence of
trends on the purported correlations.
Unlike Eli, our comment was so devastating that the original authors were unable to come up with a coherent reply. Oh, ok, that's not so different from Eli's case then!
lavatory, Hida folk village, originally uploaded by julesberry2001.
Here's James exploring some more of "real Japan" on our holiday in April.
Coincidentally, James, after giggling at my pitiful state earlier in the week, is now quite peaky himself. I hope the rest of JUMP don't get it.
...well - it was time we dieted a bit after our recent extravagances...but it is also sad cos we wanted to climb a snowy mountain this weekend, and now we are far too feeble.
Posted By jules to jules' pics at 5/13/2010 06:21:00 PM
So the Nasty Party is back in power (I told you before, this blog is strictly neutral on political matters) with the help of the weaselly LibDems. I don't really blame the latter, they had little
choice given that Lab/Lib was not viable without numerous fringe parties. I don't think there is much chance of it lasting once the economy heads south, but we will wait and see.
The biggest item of interest to me is the possibility of electoral reform, especially as the economy doesn't affect me so much. It has always seemed absurd that the country should have a dominant
Govt supported by only about 35-40% of the voters (which means less than 25% of the electorate by the time the turnout has been accounted for). LDs are supposedly set on "proportional representation"
but I haven't actually seen them specify this in detail (nor have I looked). Cons have offered a referendum on Alternative Vote. I think this would be a very big improvement on the current system and
think that the arguments against are very weak.
First, those in favour of the current system - usually referred to as "first past the post" which seems bizarre when in fact there is no post, and being first to any specific number of votes is not
relevant. Wikipedia calls it "Plurality" which is also a bit cryptic. The supporters seem to believe that what this country needs is "Strong Government", which they claim is best achieved by awarding
over 50% of MPs to the party with the most votes, even if a large majority of the electorate opposes them. To those who say, look what FPTP has done for the UK over the past few decades, I reply,
yes, by all means look at what it has done, and do you really think that it's worth defending? Lurching from a dominant right-wing govt to a dominant ultra-right-wing government every few years just means they spend most of their time trying to ~~undo~~ outdo the damage that their predecessor did. But mostly, I object to its intrinsic unfairness, invitation to tactical voting and the
implication that your vote doesn't matter unless you are in a marginal constituency.
Other possible criticisms of PR are that it may remove the local link from voter to MP, and also that it puts too much influence in the hands of fringe parties like the UKIP, BNP and Greens. On the
former point, that is true of some approaches, but not AV. As for the latter, obviously Stoat would like this, but not many others. However, all reasonable proposals like STV generally include a
threshold that cuts off the loonies, so this seems like generalised scare tactics rather than a sensible argument. And it doesn't apply to AV in the first place.
I'm disappointed that the Electoral Reform Society has chosen to put such a misleading spin on AV, describing AV as "very much like FPTP" and making a set of amazingly weak and duplicated criticisms
in its list of "drawbacks". Yes, AV is not fully proportional, but since no-one advocates fully proportional systems in the first place, this seems somewhat of a straw man. And although there is a
theoretical opportunity for tactical voting (as there is in all systems) this is hardly plausible in reality. As obvious and substantial benefits over FPTP, there are no "wasted votes" for minor
parties, there is a strong incentive to vote honestly rather than tactically, and the outcome would in practice be substantially closer to proportional (as this BBC page shows for the last election).
Their preferred option of STV is not perfect either, but they don't have any list of arguments against it at all!
It should not be overlooked that one large advantage of AV is that it would be simple to implement as a change to the current system. It does not require redrawing boundaries, zoning into regional
groups, and (perhaps more importantly) nationally-controlled party lists of top-up MPs that some systems involve. The latter would be an easy target for the press and other critics to oppose.
I may have mentioned before how Japan and the Japanese continue to find ways of amusing us, even after almost a decade here...
According to Wikipedia:
"'Enoch was right' is a phrase of political rhetoric, employed by the far right, .... The phrase implies criticism of racial quotas, immigration and multiculturalism."
Oh, but that description relates to "In the United Kingdom, particularly in England". Here in Japan, it's merely the dinner-party conversation of the educated internationalised elite at a highly
multicultural gathering.
bowls, originally uploaded by julesberry2001.
Rainbow coalition; same bowl, different light.
..sorry about the cliche - been under the weather today.. but I have discovered empirically that Pocari Sweat is awfully good for alleviating gastroenteritis.
Posted By jules to jules' pics at 5/11/2010 01:22:00 AM
(Japanese) Jungle Crow, originally uploaded by julesberry2001.
Ever since discovering the cool Yellowstone Ravens last year, I've been trying to enpixelate one of our jungle crows.These birds are big, aggressive, audacious scavengers that shape the way rubbish
collection is organised in Japan; it is only put out on the morning of the collection, and covered with netting, or put in crow-proof crates.
It is understandable that their relationship with humans is not that great, and I think this might be why they are hard to photograph. They fly away as soon as they see that you are interested in them.
Anyway, this is the best pic so far. I would have liked to have included the tail and more of the red bridge in the photo, but I wasn't quick enough... it flew away before I could frame a better shot.
Posted By jules to jules' pics at 5/10/2010 04:40:00 PM
Wisteria at Hachimangu, originally uploaded by julesberry2001.
Sometimes the extra height of the bipod can really make the picktur. I'm so lucky to have a bipod while all the Japanese have to carry stepladders around...
I wonder what would happen if I bought a blue camera with a flip-out screen as an accessory for my bipod so the automatic composition system could work while the "arms" are fully extended. Could it
be that the world is even more exciting viewed from 7.5 feet in the air?
Posted By jules to jules' pics at 5/09/2010 09:56:00 PM
Bug, originally uploaded by julesberry2001.
The insects have come back to life!
Posted By jules to jules' pics at 5/07/2010 11:22:00 PM
Even if only half of the stories (eg) are half true, the widespread failures of the electoral system in the UK are a complete disgrace. It sounds more like some tinpot dictatorship in Africa than a
modern "developed" nation.
(FWIW, I gave up trying to maintain my registration on the electoral roll some time ago, it hardly seems fair to vote as a non-taxpaying expat. Besides, the postal system is so broken that I'd been
unable to vote the last few times even when registered and even when living in the UK, due to travelling abroad at the wrong time.)
Gassho houses, Oogimachi, originally uploaded by julesberry2001.
According to the tourist brochures, this is "real Japan".
I'm not so sure.
Posted By jules to jules' pics at 5/06/2010 06:28:00 PM
Wendy, originally uploaded by julesberry2001.
I hope you noticed the frog family sitting on a rock in the pond of moss at the super-special zen temple of Chojuji. We recently acquired (gift from Mother In Law) a garden frog. She is called Wendy.
Putting two and two together, I can only conclude that we must be close to enlightenment.
Posted By jules to jules' pics at 5/05/2010 09:42:00 PM
We've been having the annual "Golden Week" holidays (just a set of public holidays that all occur sequentially) here in Japan, so on Monday we went on our traditional hilltop walk on the Ten'en
Hiking Course around the north end of Kamakura to the famous Zen temple of Kenchoji.
Actually, this pic is an old one - on Monday it was crowded and we walked though fairly briskly. Outside of main holidays and fairly early in the morning it's usually more like this though. Chojuji
was open, unusually, so we went in there too.
The centre of town was absolutely heaving so we quickly retired to our peaceful neighbourhood, and the bicycle maintenance job that has been hanging over me for a couple of months...replacing the
rear bottom bracket on our tandem, which had developed an alarming degree of wobble.
Regular cyclists will probably know that the bottom bracket tends to be a particularly recalcitrant opponent and is prone to seizing in place. Sitting there at the low point of the frame where all
the water and muck collects, it also has a large diameter thread and thus requires a high torque at the best of times. The rear of a tandem in particular has a large load applied with both riders'
forces passing through it. With our tandem being aluminium, the threads in the shell are rather weak and prone to damage - plus effectively irreparable, making it a potentially expensive job to fix.
I read Zen and the Art of Motorcycle Maintenance fairly recently. Although nominally about motorcycles, it is also a pretty good bike maintenance manual (helped by the author's habit of using the term "cycle"). The book's term "gumption trap" applies very well to the problem of a knackered bottom bracket - at least in my case. The cod philosophy is a bit annoying, though.
A few hefty blows with my trusty mallet had got one of the cups loose the day before, but didn't make much of an impression on the other side. I was left scratching my head - and thinking up
strategies for finding and importing a new frame - when in a moment of inspiration I remembered the 6ft roof bar we had for mounting the tandem on a car roof (brought with us to Japan, but never used).
Rather to my surprise, it worked, the threads on the shell are still at least somewhat intact (though a fair amount of powdered metal came out) and the new component is now installed. As the old
saying almost says - if at first you don't succeed - get a bigger spanner!
Chojuji, originally uploaded by julesberry2001.
Chojuji, in Kamakura, is very exclusive, in that it is always shut. But on Monday it was open and for only 300¥ we got to tour the pristine buildings and garden. The front garden was very like part
of Daitokuji in Kyoto - similar moss garden and even similar decoration on the stone paths. Weirdly, SLR photography was forbidden, but anything else was OK, so everyone except me was taking photos.
Posted By jules to jules' pics at 5/04/2010 07:17:00 PM
I'm not going to the EGU meeting this year for a number of reasons, but it seems that increasingly parts of it are coming to me - a development I'm strongly in favour of, of course. There is an
official blog here, and a certain amount of live streaming of lectures and all press conferences can be found via links from here (not so much the ordinary science sessions though). It is still on a
rather limited basis compared to what you see by actually attending, but hopefully will expand in the future.
The title of one of the "Great Debates" caught my eye: "To what extent do humans impact the Earth's climate?" It has just ended - I listened to the second half - and it seems that you can already get
it on demand here.
self portrait, originally uploaded by julesberry2001.
Me - another liccle twit.
[Taken on holidays with inlaws, in Matsumoto, with N80 camera.]
Posted By jules to jules' pics at 5/03/2010 09:07:00 PM
coal tit, Tonotake, originally uploaded by julesberry2001.
A liccle twit, spotted by Helen on the way down the mountain.
Posted By jules to jules' pics at 5/03/2010 01:16:00 AM
It appears that reports of the Hachimangu Ginkgo tree's death were greatly exaggerated. After it blew down a couple of months ago, it was mostly chopped up and taken away, but a chunk of root mass
was left in place and a section of the main trunk was erected nearby (along with a book of condolences for people to sign), and people have been praying in front of it - and photographing it - ever since.
To the right is the site where the tree originally grew, where there is now a small mound with green shoots of recovery. More surprisingly, the section of trunk which was set up at the left is
greening up too! Fresh leaves can clearly be seen on several twigs. I don't think that bit had any roots attached - it looked like it was cut straight across at both ends. I don't know what chance
there is of it re-rooting - obviously some trees can grow from cuttings but this seems an extreme case.
Fuji-san from Tonotake, originally uploaded by julesberry2001.
We'd seen it before, from the top of Tonotake (the huts make you get up very early), so we were prepared to see Fuji-san turn pink at dawn.
Posted By jules to jules' pics at 5/01/2010 12:46:00 AM | {"url":"http://julesandjames.blogspot.co.uk/2010_05_01_archive.html","timestamp":"2014-04-20T15:50:54Z","content_type":null,"content_length":"292286","record_id":"<urn:uuid:15879522-302b-47e2-829b-de5e3a2100fa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Mathematica strange behaviour finding a cubic root
Replies: 5 Last Post: Dec 18, 2012 2:36 AM
Re: Mathematica strange behaviour finding a cubic root
Posted: Dec 17, 2012 2:54 AM
You can't because they are not equal of course. Fractional powers are
defined as
x^a = Exp[a*Log[x]]
where Log is the principal branch of the logarithm. It is impossible to
define a continuous branch of the logarithm in the entire complex plane
so as you go around it there has to be a "jump" somewhere. The usual
choice of the so called principal branch makes the jump take place on
the negative real axis. The two answers that you get to your computation
come from different branches of the logarithm. In fact here is one of
your answers:

Simplify[Exp[(1/3)*Log[1/4]]]

and here is the other:
Simplify[Exp[(1/3)*(Log[1/4] + 2*Pi*I)]]
they are certainly not equal. The reason why you think they are equal is
because you are assuming that
(x^a)^b = x^(a b)
but this is not always true. In fact Mathematica itself can find examples when this does not hold, e.g:
FindInstance[x^(a*b) != (x^a)^b && Element[{x, b}, Reals] && Element[a, Integers], {x, a, b}]
{{x -> -(109/5), a -> 22, b -> -(56/5)}}
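The same principal-branch behaviour can be reproduced in other languages. For instance, an illustrative check with Python's cmath:

import cmath
print(cmath.exp((2 / 3) * cmath.log(-0.5)))  # (-0.31498+0.54556j), the principal branch
print(0.25 ** (1 / 3))                       # 0.62996, the real cube root of (-1/2)^2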
Andrzej Kozlowski
On 16 Dec 2012, at 07:06, sergio_r@mail.com wrote:
> How can I make Mathematica provides the same answer for
> (-1/2)^(2/3) = ((-1/2)^2)^(1/3) ?
> What follows is a Mathematica session:
> In[1]:= (-1/2)^(2/3)
> Out[1]= (-(1/2))^(2/3)
> In[2]:= N[%]
> Out[2]= -0.31498 + 0.545562 I
> In[3]:= ((-1/2)^2)^(1/3)
> Out[3]= 2^(-2/3)
> In[4]:= N[%]
> Out[4]= 0.629961
> Sergio
| {"url":"http://mathforum.org/kb/thread.jspa?threadID=2421015&messageID=7937989","timestamp":"2014-04-20T16:11:10Z","content_type":null,"content_length":"21334","record_id":"<urn:uuid:eaf88df1-387f-4552-818c-6b40999420bc>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
ASA 128th Meeting - Austin, Texas - 1994 Nov 28 .. Dec 02
4pPAb5. Asymptotic determination of the eigenfrequencies of a sphere in a fluid.
G. C. Gaunaurd
Naval Surface Warfare Center, White Oak Detachment, R-34, Silver Spring, MD 20903-5640
Starting from a phase-matching principle that is the acoustical analog of the Bohr--Sommerfeld--Wilson quantization rule of the old ``quantum theory,'' it is analytically shown how to asymptotically
obtain the eigenfrequencies of an insonified sphere immersed in a fluid. This technique was first illustrated by J. B. Keller [cf. Ann. Phys. 4, 180--188 (1958)] and it has been extended by many
authors, notably L. B. Felsen and J. M. Ho, who have renamed it the ``ray-acoustic algorithm.'' It is shown here how the acoustical counterpart of this quantum principle leads to a resonance
condition for the (external) eigenfrequencies of a sphere (rigid, soft, or to some extent, elastic) that exactly coincides with F. W. J. Olver's (1954) classical asymptotic formula for the (complex)
zeros of the spherical Hankel functions. The poles of the scattering amplitude of an elastic sphere fall into two great families, one depending on shape, and the other on elastic composition. The
asymptotic spacings in between the shape-dependent zeros in the (complex) ka plane are shown to reduce to a uniform value, obtained earlier by numerical means, which manifests itself in all the (RST)
``background'' curves of the sphere. [Work supported by NSWC-DD IR Program.] | {"url":"http://www.auditory.org/asamtgs/asa94aus/4pPAb/4pPAb5.html","timestamp":"2014-04-17T07:54:00Z","content_type":null,"content_length":"1947","record_id":"<urn:uuid:43a9d5e9-7e85-4a10-aded-e3ad0cef5a5b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
We need to explore new ways to teach Mathematics to high school students
It’s not a secret that a popular software company offer their employees a percentage of time to contribute to the project of their wish. I think most people when they are asked the question “how
would you spend that time?” would answer: “I’d contribute to this X new project,” “I’d add this Y feature to this Z project”…
They are all very good options that will impact millions of people around the world. But what if we want to impact the future generations of consumers, or even more, the future of computing? I would
use that time to improve education. Recent studies report that nowadays not many people are choosing Computer Science as a major, especially in the U.S. This is a serious problem that caught the
attention of many well-known people, like Bill Gates. Indeed, if we want to progress in this digital era, we need qualified and creative engineers who are prepared to invent and develop the “next big thing”. How could we generate more interest around
Computer Science and, consequently, increase competitiveness? By changing the way Mathematics and, in general, exact sciences are taught in high school.
I don’t know about each country and school in the world, but Mathematics in high school tends to be a subject about calculating things: One of the most prominent goals is that you are able to solve
things like a complicated integral without the slightest error. You don’t even need to know what an integral is, and why integrals are important. Of course, calculating is also important and cannot
be ignored, but now computers can do this kind of computation in less time, and without a single mistake. Moreover, computers are now able to solve the trickiest differential equations, but this does not change the fact that we don’t know how to model some important “real world” problems in the form of differential equations.
That’s the point I’d like to make. We need to focus the teaching of Mathematics around problem-solving skills. Once we’ve modeled a problem, then use a computer or cloud computing service to get the
exact answer. Problem-solving skills are not only essential for engineers, they are useful for professionals in general in their everyday activities. Consequently, I believe that with this change of
mentality Engineering would appeal more to students, there would be less drop-outs after the first year, and there would be more opportunities to success in this field.
What do you think? Do you also think that the method of teaching Mathematics at pre-university level is becoming obsolete? How would you foster creativity and innovation to keep up with the world as
it is now? | {"url":"http://blogs.msmvps.com/dmartin/blog/2011/08/27/we-need-to-explore-new-ways-to-teach-mathematics-to-high-school-students/","timestamp":"2014-04-18T16:16:54Z","content_type":null,"content_length":"15367","record_id":"<urn:uuid:4b569f27-c366-4825-bb20-a5d265a38a1e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
Neural network algorithms that learn in polynomial time from examples and queries
, 1995
"... For manytypes of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992# Cohn,
1994]. We then showhow the same principles may be used to select data for two alternative, statistically-bas ..."
Cited by 529 (10 self)
Add to MetaCart
For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the
techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
, 2000
"... We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. We observe
a number of benefits, the most surprising of which is that a SVM trained on a wellchosen subset of the av ..."
Cited by 203 (1 self)
Add to MetaCart
We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. We observe a
number of benefits, the most surprising of which is that a SVM trained on a well-chosen subset of the available corpus frequently performs better than one trained on all available data. The heuristic
for choosing this subset is simple to compute, and makes no use of information about the test set. Given that the training time of SVMs depends heavily on the training set size, our heuristic not
only offers better performance with fewer data, it frequently does so in less time than the naive approach of training on all available data. 1.
- Mathematics of Neural Networks: Models, Algorithms and Applications , 1997
"... this paper queries which maximize the expected information gain, which are related to the criterion of (Bayes) D-optimality in optimal experimental design. The generalization performance
achieved by maximum information gain queries is by now well understood for single-layer neural networks such as l ..."
Cited by 1 (0 self)
Add to MetaCart
this paper queries which maximize the expected information gain, which are related to the criterion of (Bayes) D-optimality in optimal experimental design. The generalization performance achieved by maximum information gain queries is by now well understood for single-layer neural networks such as linear and binary perceptrons [1, 2, 3]. For multi-layer networks, which are much more widely used in

(This work was partially supported by European Union grant no. ERB CHRX-CT92-0063.)
Optimal control of the propagation of a graph in inhomogeneous media
Deckelnick, Klaus, Elliott, Charles M. and Styles, Vanessa. (2009) Optimal control of the propagation of a graph in inhomogeneous media. SIAM Journal on Control and Optimization, Vol.48 (No.3). pp.
1335-1352. ISSN 0363-0129
We study an optimal control problem for viscosity solutions of a Hamilton–Jacobi equation describing the propagation of a one-dimensional graph with the control being the speed function. The
existence of an optimal control is proved together with an approximate controllability result in the $H^{-1}$-norm. We prove convergence of a discrete optimal control problem based on a monotone
finite difference scheme and describe some numerical results.
Item Type: Journal Article
Subjects: Q Science > QA Mathematics
Divisions: Faculty of Science > Mathematics
Library of Congress Subject Headings (LCSH): Differential equations, Partial -- Research, Eikonal equation, Hamilton-Jacobi equations, Approximation theory
Journal or Publication Title: SIAM Journal on Control and Optimization
Publisher: Society for Industrial and Applied Mathematics
ISSN: 0363-0129
Date: 15 April 2009
Volume: Vol.48
Number: No.3
Page Range: pp. 1335-1352
Identification Number: 10.1137/080723648
Status: Peer Reviewed
Access rights to Published version: Open Access
URI: http://wrap.warwick.ac.uk/id/eprint/2210
| {"url":"http://wrap.warwick.ac.uk/2210/","timestamp":"2014-04-20T21:33:03Z","content_type":null,"content_length":"42577","record_id":"<urn:uuid:10679407-4261-4982-be6d-c2b1fb24e67d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Perfect powers on genus 0 curves (with restrictions)
I suppose a conjecture implies this so there might be an unconditional proof.
Let $F(x,y)=0$ be a curve with infinitely many integral points $(u^m,v^n)$ where $\gcd(u,v)=1$ infinitely often and $m \ge 3,n \ge 2$. Such curves are easy to construct by starting with a
parametrization for example.
For a bivariate polynomial $F$ define $\operatorname{High}(F)$ to be the sum of the highest degree monomials (i.e., $\operatorname{High}(F) = F$ iff $F$ is homogeneous and $\operatorname{High}(2x^
3+3y^3+x+y)=2x^3+3y^3$). Let $\gcd(\operatorname{High}(F),xy)=1$.
Under these conditions is $\operatorname{High}(F)$ not square-free?
This can't be relaxed to $m \ge 2 $
1 Answer
(Added some remarks from my comments)
Suppose that $F=F_1F_2\cdots F_r$ with $F_i$ irreducible. If $F=0$ has infinitely many points of the requested form, then so does one of the factors. Furthermore, $\text{High}(F)=\text
{High}(F_1) \text{High}(F_2)\cdots\text{High}(F_r)$. So in order to look at the question, we may assume that $F$ is irreducible. But then $F$ is even absolutely irreducible, which we
assume from now on.
As $F(x,y)=0$ has infinitely many integral solutions, by Siegel's Theorem the projective closure of this curve has at most $2$ points at infinity. These points are just those with coordinates $(x:y:0)$ with $H(x,y)=0$, where $H=\text{High}(F)$. So the polynomial $H$ has total degree at most $2$ if it is to be separable and not divisible by $x$ nor $y$.
So a counterexample (that is where $H(x,y)$ is not squarefree) would have degree at most $2$.
I believe that degree $2$ can be ruled out. However, degree $1$ seems to amount to solving Pillai's conjecture: Given nonzero integers $A,B,C$, then $Au^m+Bv^n=C$ has only finitely many
integral solutions $u,v,m,n$ with $m,n\ge3$. You have the additional assumption that $u$ and $v$ are relatively prime. I'm sure that this doesn't make the conjecture easier.
You don't use the hypothesis about perfect powers and in particular $m \ge 3$? You don't need this hypothesis to answer the question? – joro Aug 20 '13 at 6:32
You don't mention coprimality at all. You don't need it to answer the question? – joro Aug 20 '13 at 6:59
@joro: Indeed, it doesn't seem to be necessary to use these additional assumptions in order to reduce to degree $2$ which is easy to handle. – Peter Mueller Aug 20 '13 at 7:57
Hm, isn't $x^n-y^n=0$ counterexample to your claim about degree 2?. Points are (t,t). High(x^n-y^n)=x^n-y^n, the degree is n and it is squarefree? – joro Aug 20 '13 at 8:21
Well, but this curve contains a straight line. Trivial examples like this are always possible. However, as your function $\text{High}$ is multiplicative, your question only makes sense
for irreducible polynomials. – Peter Mueller Aug 20 '13 at 9:13
| {"url":"http://mathoverflow.net/questions/120393/perfect-powers-on-genus-0-curves-with-restrictions/139848","timestamp":"2014-04-20T03:53:05Z","content_type":null,"content_length":"57793","record_id":"<urn:uuid:f867927e-5a0f-4237-ad6f-69d8ea230d83>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
limitx->0 sin3x-5sin2x/sin6x-3sinx - Homework Help - eNotes.com
You need to evaluate the limit of the function `(sin 3x-5sin 2x)/(sin 6x-3sin x)` when `x->0`, hence you need to substitute 0 for x in the equation of the function such that:

`lim_(x->0) (sin 3x-5sin 2x)/(sin 6x-3sin x) = (sin0-5sin0)/(sin0-3sin0)`

Using `sin 0 = 0 => lim_(x->0) (sin 3x-5sin 2x)/(sin 6x-3sin x) = 0/0`

The case `0/0` is indeterminate, hence you may use l'Hospital's theorem to solve the limit:

`lim_(x->0) ((sin 3x-5sin 2x)')/((sin 6x-3sin x)') = lim_(x->0) (3cos 3x - 10cos 2x)/(6cos 6x - 3cos x)`

Using cos 0 = 1 yields

`lim_(x->0) (3cos 3x - 10cos 2x)/(6cos 6x - 3cos x) = (3*1 - 10*1)/(6*1 - 3*1)`

`lim_(x->0) (3cos 3x - 10cos 2x)/(6cos 6x - 3cos x) = -7/3`

Hence, evaluating the limit of the function yields `lim_(x->0) (sin 3x-5sin 2x)/(sin 6x-3sin x) = -7/3`.
| {"url":"http://www.enotes.com/homework-help/limitx-gt-y-sin3x-5sin2x-sin6x-3sinx-312169","timestamp":"2014-04-20T22:38:24Z","content_type":null,"content_length":"24820","record_id":"<urn:uuid:71a07f89-fc09-40a0-a0cb-05a89bc4c5e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: clogit prediction in the presence of collinearity
Re: st: clogit prediction in the presence of collinearity
From sjsamuels@gmail.com
To statalist@hsphsun2.harvard.edu
Subject Re: st: clogit prediction in the presence of collinearity
Date Tue, 16 Feb 2010 18:48:44 -1000
Actually my statements:
"In -clogit-, there is an intercept for each pair, it but does not
appear in the prediction. -logit- has only a singleintercept, which
is part of the xb prediction."
are irrelevant for the algorithm Laura gave, as any intercept will cancel out.
However the estimated coefficients from the two commands will differ,
and -logit- will accept some predictors that -clogit- rejects because
they are constant within group.
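A quick numeric illustration of that cancellation, sketched in Python for brevity (the same check works in Stata):

import numpy as np

xb = np.array([0.4, -1.0, 2.3])   # linear predictions within one group
for c in (0.0, 5.0):              # c plays the role of a group intercept
    p = np.exp(xb + c) / np.exp(xb + c).sum()
    print(p)                      # identical for either value of c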
On Tue, Feb 16, 2010 at 11:51 AM, <sjsamuels@gmail.com> wrote:
> Laura, why do you want to manually construct the predictions? Which
> predictions? What problem are you trying to solve? Please be specific
> and illustrate with code and results if you have them.
> In any case, to answer your specific questions:
> 1. Your found "algorithm" is incorrect: The command "clogit x y z" is
> not legal syntax; there is no group variable. If you mean "logit x y
> z", then the algorithm you found does not produce the -pc1-
> prediction. In -clogit-, there is an intercept for each pair, it but
> does not appear in the prediction. -logit- has only a single
> intercept, which is part of the xb prediction. Also, -logit- knows
> nothing about the group variable, whereas -clogit- measures
> associations within groups.
> 2. -predict- after -clogit- works the same way with collinear
> variables as it does without. Collinearity does not affect
> predictions much, if at all, in general. What is your reason for
> thinking otherwise?
> On Mon, Feb 15, 2010 at 10:30 AM, Laura Zoratto
> <laura.zoratto@graduateinstitute.ch> wrote:
>> dear all,
>> would anyone know how Stata predicts the probability of a positive outcome
>> after estimating a clogit, in the presence of collinearity (and the collinear
>> variable does not get dropped)? I found somewhere that the pc1 command in
>> Stata is equivalent to:
>> clogit x y z
>> predict xb, xb
>> gen top=exp(xb)
>> by countrypair, sort: egen bot=total(exp(xb))
>> gen pc1=top/bot
>> But I don't know what it does when there is a collinear variable affecting
>> the results. I need to calculate manually the predicted value...
>> thank you,
>> Laura
> --
> Steven Samuels
> sjsamuels@gmail.com
> 18 Cantine's Island
> Saugerties NY 12477
> USA
> 845-246-0774
Steven Samuels
18 Cantine's Island
Saugerties NY 12477
| {"url":"http://www.stata.com/statalist/archive/2010-02/msg00750.html","timestamp":"2014-04-20T16:00:27Z","content_type":null,"content_length":"9105","record_id":"<urn:uuid:c37e2d92-c77d-4076-98bc-af7abf1bea10>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Relationships of weight and height with age in hybrid Holstein-Friesian/Guzera females
F.E. Madalena^1, R.L. Teodoro^2 and A.P. Madureira^3
^1Department of Animal Sciences, School of Veterinary Sciences, Federal University of Minas Gerais, Caixa Postal 567, 30123-970 Belo Horizonte, MG, Brazil
^2EMBRAPA-Dairy Cattle, Juiz de Fora, MG, Brazil
^3EMBRAPA Arroz e Feijão
Corresponding author: F.E. Madalena
E-mail: fermadal@dedalus.lcc.ufmg.br
Genet. Mol. Res. 2 (3): 271-278 (2003)
Received July 7, 2003
Accepted August 1, 2003
Published September 30, 2003
ABSTRACT. Serial data on live weights, height at withers and the weight/height ratio of 263 cows (3 to 9 years old) and 196 heifers (2 to 5 years old) were studied. The animals were of six red and
white Holstein-Friesian (HF)/Guzera crosses (1/4, 1/2, 5/8, 3/4, 7/8 and ≥31/32 HF-expected gene fraction). Separate analyses were performed for cows and heifers using the Proc Mixed of the SAS
package. Models included the fixed effects of farm, season, reproductive and lactation status, two-factor interactions, quadratic regressions on age and age x crossbred group interaction, as
continuous co-variables and regressions on the HF gene fraction and on breed heterozygosity, plus the animal random effect. Only heifer growth in height and weight/height was linear with age. In all
three traits in both categories the individual additive-dominance model explained the variation between crossbred groups. The breed additive difference was not significant (P > 0.05) for cow and
heifer live weight and for heifer weight/height ratio. Heterosis was significant for all traits except height of cows. Linear and quadratic regression coefficients for cows were, respectively, for
live weight, 35.20 ± 5.23 kg/year and -1.54 ± 0.43 kg/year^2, for withers height, 2.49 ± 0.29 cm/year and -0.15 ± 0.02 cm/year^2 and for weight/height, 0.22 ± 0.04 kg/cm/year and -0.01 ± 0.003 kg/cm
/year^2. Corresponding values for heifers were, for live weight, 153.46 ± 37.06 kg/year and -15.69 ± 4.91 kg/year^2, while only linear coefficients applied to withers height (1.63 ± 0.43 cm/year)
and weight/height (0.16 ± 0.03 kg/cm/year).
Key words: Holstein-Friesian, Guzera, Weight/height ratio, Growth curve
Because a high proportion of feed in Brazilian dairy farms is required for maintenance, cow live weight has a strong negative economic weight on farm profit (Vercesi Filho et al., 2000; Martins et
al., 2003). Descriptions of live weight changes with age are needed for decision making on planning, selection and crossing. Live weight gain in heifers mainly affects the rearing costs of heifers.
Thus, although juvenile and adult weights are biologically related, it is convenient to study them separately for economic considerations and from this perspective there is no need for a unique
growth curve suitable for both categories but rather separate curves may be preferable.
Linear size measurements reflecting skeleton growth are known to be less affected by temporary environmental factors than is live weight, and have been used to describe growth (e.g., Cartwright,
1979). The weight/height ratio is an indicator of body condition (Klosterman et al., 1968; Nelsen et al., 1985) and has been proposed as a selection criterion to improve feed efficiency (Mason et
al., 1957).
Herd life in hybrid dairy herds in Brazil is generally very long (Lemos et al., 1996), so changes in female size up to advanced ages are of interest, calling for polynomial rather than asymptotic
growth curves. Madureira et al. (2002) described crossbreeding effects on live weight and height of heifers and cows, using data from a comprehensive project on strategies of crossbreeding, in which
heifers of six crossbred groups were distributed to 67 co-operator farms for lifetime evaluation (Madalena, 1989, 1993). Here we concentrate on the relationships of those traits with age.
Serial data on live weight, height at withers and the weight/height ratio of 263 cows and 196 heifers were studied (only 232 and 167 animals, respectively for height, as some were not measured). The
animals were of six red and white Holstein-Friesian (HF)/Guzera (Guz) crosses (1/4, 1/2, 5/8, 3/4, 7/8 and ≥31/32 HF-expected gene fraction). The 1/4 cows were sired by 14 Guz bulls, the 5/8 by
eight 5/8 bulls and the other four groups by (the same) 12 HF bulls.
Heifers were born at the Santa Monica Experimental Farm, State of Rio de Janeiro, between March 1977 and December 1981. They were distributed to co-operator private farms at mean age 22 months and
mean weight 220 kg, for lifetime monthly milk recording. With a few exceptions, each farm received a set of six contemporary heifers, one of each crossbred group. The animals were managed according
to each farm’s own criteria, with no interference from the research team. The data of the present study were obtained from 50 farms, with easy access, where the weights and heights could be
recorded. Those farms were located in the States of Minas Gerais, São Paulo and Rio de Janeiro. Rationale and other trial details were described in previous publications (Madalena, 1989, 1993).
There were two portable scales (accuracy 200 g) located at EMBRAPA centers in the States of Minas Gerais and São Paulo. As a rule, dry cows and heifers were fasted for 16 h before weighing
(Madalena, 1964). Withers height was measured with a metric rod.
With some exceptions, the animals were weighed twice a year, between 1981 and 1987, when the recording was stopped because of lack of funding, once in the dry season (April to September) and once in
the rainy season (the other months, but no weighing occurred in January and February). A summary of the available records is shown in Table 1.
Table 1 (summary): 232 cows with 1305 records and 135 heifers with 240 records.
For analytical purposes, farms were grouped in classes according to region and management similarity. Reproductive status was estimated by subtracting a standard 282-day gestation length from the
calving date and the observations were grouped into five classes according to status on day of weighing (first, second and third 92-day gestation period, empty and unknown, for cows dead/sold before
next calving). Four classes of lactation status were considered (up to 100, 101 to 200, >200 days in milk and dry). Separate analyses were performed for cows and heifers using the Proc Mixed of the
SAS package. Models included the fixed effects of farm class of region-management, season at weighing, reproductive and lactation status (in cows), season x reproductive status and season x
lactation status interactions and polynomial regressions on age (to the highest significant power) and age x crossbred group interaction, as continuous co-variables and regressions on the HF gene
fraction and on breed heterozygosity, plus the animal random effect. The later two regressions estimate, respectively, the breed additive difference (g^I, HF-Guz) and heterosis (h^I) (Dickerson,
1969; Madalena, 2001).
Different models were explored by dropping nonsignificant (P > 0.05) polynomial age regression terms and age x crossbred group and other interactions. The goodness of fit of models was assessed by
the likelihood ratio test using the profile likelihood (Littell et al., 1996; Vonesh and Cinchilli, 1997) and by examining the regressions of residuals on age. The goodness of fit of the
additive-dominance model was assessed by comparison with a model in which the crossbred group classification substituted for the g^I-h^I regressions, and whenever the likelihood ratio test was
significant, by an analysis of variance of the crossbred classification effects on the residuals of the regression model. Once a model was decided upon, it was re-run using restricted maximum
likelihood to obtain the final estimates. The Satterthwite option was used to obtain the denominator degrees of freedom for F-tests of fixed effects (Littell et al., 1996).
For a matrix Y of j weights on i animals, the linear mixed model was Y = XB + Zu + e, where B represents the vector of fixed effects and u the vector of random animal effects, X and Z being
incidence matrices, u ~ MVN(0, G), e ~ MVN(0, R) and V(Y) = ZGZ′ + R. A spatial co-variance structure (SP(pow)) and a compound symmetric (CS) structure for R were compared according to Akaike's information
criterion and Schwarz's Bayesian criterion, using restricted maximum likelihood (Littell et al., 1996).
The CS co-variance structure had slightly better fit for all three traits on both categories than did SP and was adopted for all models.
The interactions of crossbred group x age were not significant for any trait in both categories (P > 0.05) and were removed from the models. Quadratic curves for age were sufficient to explain
growth of all three traits of cows and of live weight of heifers, while heifer growth on height and weight/height was linear with age. The corresponding regression coefficients are in Table 2.
Table 2 (footnote): P > 0.05, *P < 0.05, ***P < 0.001, ****P < 0.0001. HF = Holstein-Friesian.
In all three traits in both categories the individual additive-dominance model explained the variation between crossbred groups, as reported by Madureira et al. (2002). The g^I regression was not
significant for live weight of cows and for live weight of heifers and for their weight/height ratio (Table 2). Heterosis was nonsignificant only for height of cows. The effects of season, lactation
and reproductive status are not discussed as the focus was on age relationships, but they are shown in the Appendix.
The quadratic growth curves for the three traits in cows are depicted in Figure 1 and curves for heifers in Figure 2.
The maxima of these curves corresponded to ages 11.4, 8.3 and 12.4 years, respectively, for live weight, withers height and the weight/height ratio, with corresponding values shown in Table 3 for
each crossbred group. However, only the height maximum was within the actual range of ages studied. Oliveira et al. (1994) reported an asymptotic weight of 455 kg for stud Guz cows, but the maturing
rate was lower than in the present study. Perotto et al. (1997) reported estimated asymptotic maxima of 484 and 480 kg for F[1] and 3/4 HF-Guz, which were higher than those in our study.
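(Clarifying note, not in the original text: the ages at the maxima follow directly from the fitted quadratic coefficients, since a curve y = a + b·t + c·t^2 peaks at t = −b/(2c). For cow live weight, 35.20/(2 × 1.54) ≈ 11.4 years, and for withers height, 2.49/(2 × 0.15) ≈ 8.3 years, matching the values reported above; the weight/height figure of 12.4 years presumably reflects the unrounded coefficients.)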
Figure 1. Growth curves of cows of six Holstein-Friesian gene proportions.
Figure 2. Growth curves of heifers of six Holstein-Friesian gene proportions.
Table 3. Ages at maxima of quadratic growth curves (11.4, 8.3 and 12.4 years for live weight, withers height and weight/height, respectively). HF = Holstein-Friesian.
REFERENCES
Cartwright, T.C. (1979). Size as a component of beef production efficiency: cow-calf production. J. Anim. Sci. 48: 974-980.
Dickerson, G. (1969). Experimental approaches in utilizing breed resources. Anim. Breed. Abstr. 37: 191-202.
Klosterman, E.W., Sanford, L.G. and Parker, C.F. (1968). Effect of cow size and condition and ration protein content upon maintenance requirements of mature beef cows. J. Anim. Sci. 27: 242-246.
Lemos, A.M., Teodoro, R.L. and Madalena, F.E. (1996). Comparative performance of six Holstein-Friesian x Guzera grades in Brazil. 9. Stayability, herd life and reasons for disposal. Braz. J. Genet.
19: 259-264.
Littell, R.C., Milliken, G.A., Stroup, W.W. and Wolfinger, R.D. (1996). SAS System for Mixed Models. SAS Institute Inc., Cary, NC, USA.
Madalena, F.E. (1964). Técnicas de determinación del peso vivo en los bovinos. Bol. Téc. Est. Exp. “Dr. Mario A. Cassinoni”, Paysandú, Uruguay, 1: 49-54.
Madalena, F.E. (1989). Cattle breed resource utilization for dairy production in Brazil. Rev. Bras. Genet. 12 (Suppl.): 183-220.
Madalena, F.E. (1993). La Utilización Sostenible de Hembras F1 en la Producción del Ganado Lechero Tropical. Estudio FAO Producción y Sanidad Animal No. 111.
Madalena, F.E. (2001). Consideraciones sobre modelos para la predicción del desempeño de cruzamentos de bovinos. Arch. Latinoam. Prod. Anim. 9: 108-117.
Madureira, A.P., Madalena, F.E. and Teodoro, L.R. (2002). Desempenho comparativo de seis grupos de cruzamento Holandês-Guzerá. 11. Peso e altura de vacas e novilhas. Rev. Bras. Zootec. 31: 658-667.
Martins, G.A., Madalena, F.E., Bruschi, J.H., Costa, J.L. and Monteiro, J.B.N. (2003). Objetivos econômicos de seleção de bovinos de leite para fazenda demonstrativa na Zona da Mata de Minas Gerais.
Rev. Bras. Zootec. 32: 304-314.
Mason, I.L., Robertson, A. and Gjelstad, B. (1957). The genetic connection between body size, milk production and efficiency in dairy cattle. Dairy Res. 24: 135-143.
Nelsen, T.C., Short, R.E., Reynolds, W.L. and Urick, J.J. (1985). Palpated and visually assigned condition scores compared with weight, height and heart girth in Hereford and crossbred cows. J.
Anim. Sci. 60: 363-368.
Oliveira, H.N., Lôbo, R.B. and Pereira, C.S. (1994). Relationships among growth curve parameters, weights and reproductive traits in Guzera beef cows. Proc. 5th Congr. Genet. Apl. Livest. Prod.
Guelph, 19: 189-192.
Perotto, D., Castanho, M.J.P., Rocha, J.L. and Pinto, J.M. (1997). Descrição das curvas de crescimento de fêmeas bovinas Guzerá, Gir, Holandês x Guzerá e Holandês x Gir. Rev. Bras. Zootec. 26:
Vercesi-Filho, A.E., Madalena, F.E., Ferreira, J.J. and Penna, V.M. (2000). Pesos econômicos para seleção de gado de leite. Rev Bras. Zootec. 29: 145-152.
Vonesh, E.F. and Cinchilli, V.M. (1997). Linear and Non-linear Models for the Analysis of Repeated Measurements. Marcel Dekker Inc., New York, NY, USA. | {"url":"http://www.funpecrp.com.br/gmr/year2003/vol3-2/gmr0067_full_text.htm","timestamp":"2014-04-20T05:42:00Z","content_type":null,"content_length":"24937","record_id":"<urn:uuid:c62aaa63-c26b-4db8-b318-a557831a02a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
Midlothian, TX Algebra 1 Tutor
Find a Midlothian, TX Algebra 1 Tutor
...I will also ensure that you become proficient in completing the "dreaded proofs". They are not difficult once you "learn the rules". Mathematics, especially geometry, is a game.
55 Subjects: including algebra 1, reading, chemistry, writing
...Thanks to WyzAnt, I can help kids around me in this area, and I hope I can reach you too. I have always worked with kids and have enjoyed working with children. So, please feel free to
let me know if I can help you.
13 Subjects: including algebra 1, calculus, vocabulary, autism
...I have taught math in high school and am working on completing the requirements to be certified in Secondary math. I have prepared many students in middle school to take the ISEE high school
entrance test. I am also able to work with students in reading, writing and English.
8 Subjects: including algebra 1, grammar, study skills, elementary (k-6th)
...In addition, I also assist students in English and Social studies. While I tutor in many subjects, much of my time is spent in Math and Science. Much of my time as a WyzAnt tutor has been
directed at Algebra, Geometry and SAT Math tutoring.
9 Subjects: including algebra 1, physics, geometry, algebra 2
I have taught Mathematics at the High School level for the previous three years. Classes that I have taught include Algebra 2, Geometry, Precalculus, Calculus and a couple of Engineering courses.
The way that I tutor is by building students' confidence in their abilities, starting with basic proble...
13 Subjects: including algebra 1, chemistry, calculus, physics
| {"url":"http://www.purplemath.com/Midlothian_TX_algebra_1_tutors.php","timestamp":"2014-04-17T01:41:25Z","content_type":null,"content_length":"23987","record_id":"<urn:uuid:21125c95-3cbb-424d-bb53-7624141fb4b5>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
College Point Geometry Tutor
...Besides teaching these courses I have been involved with writing the new Common Core Curriculum and corresponding lessons, and developing shared assessments. Most of my lessons are on PowerPoint
which I will be more than happy to share with my students. While at my current school I was the Internship Coordinator and was a recipient of the Above and Beyond Award given by WABC TV in 2007.
20 Subjects: including geometry, algebra 1, GRE, finance
...I am an experienced Turkish private tutor and also a native speaker of Turkish. Do you need some basic Turkish knowledge for your visit to Turkey? Or do you need a heavy grammar study on
25 Subjects: including geometry, calculus, statistics, logic
...I'm happy to help improve one's English language skills, including reading, writing, grammar, listening and speaking as well as aid in any other English related assignment or task. My tutoring
approach for English and ESL strongly stems from the knowledge and skills that I obtained during the TE...
21 Subjects: including geometry, reading, ESL/ESOL, algebra 1
...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and
will be done in a year. I have a lot of experience tutoring physics and math at all levels.
11 Subjects: including geometry, Spanish, calculus, physics
...I also have extensive background in the psychology of learning, moral development and human development. Let me know if I can be of help. Best, Rhonda Sarrazin I have over 23 years of
successful teaching experience in elementary education teaching grades k-12.
32 Subjects: including geometry, reading, writing, English | {"url":"http://www.purplemath.com/college_point_geometry_tutors.php","timestamp":"2014-04-20T04:00:21Z","content_type":null,"content_length":"24096","record_id":"<urn:uuid:775507e4-4d4a-43ef-a9f1-ffa186562ac8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haskell/Phantom types
Phantom types are a way to embed a language with a stronger type system than Haskell's.
Phantom types
An ordinary type
data T = TI Int | TS String
plus :: T -> T -> T
concat :: T -> T -> T
its phantom type version
data T a = TI Int | TS String
Nothing's changed - just a new argument a that we don't touch. But magic!
plus :: T Int -> T Int -> T Int
concat :: T String -> T String -> T String
Now we can enforce a little bit more!
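A small illustration of how such tagged values might be produced (this sketch is not part of the original page; it assumes smart constructors that fix the phantom parameter):
-- Smart constructors restrict the phantom index, so plus and
-- concat can only combine values with matching tags.
ti :: Int -> T Int
ti = TI
ts :: String -> T String
ts = TS
plus :: T Int -> T Int -> T Int
plus (TI x) (TI y) = TI (x + y)
plus _ _ = error "unreachable for values built with ti"
-- concat would be restricted the same way.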
This is useful if you want to increase the type-safety of your code, but not impose additional runtime overhead:
-- Peano numbers at the type level.
data Zero = Zero
data Succ a = Succ a
-- Example: 3 can be modeled as the type
-- Succ (Succ (Succ Zero)))
type D2 = Succ (Succ Zero)
type D3 = Succ (Succ (Succ Zero))
data Vector n a = Vector [a] deriving (Eq, Show)
vector2d :: Vector D2 Int
vector2d = Vector [1,2]
vector3d :: Vector D3 Int
vector3d = Vector [1,2,3]
-- vector2d == vector3d raises a type error
-- at compile-time:
-- Couldn't match expected type `Zero'
-- with actual type `Succ Zero'
-- Expected type: Vector D2 Int
-- Actual type: Vector D3 Int
-- In the second argument of `(==)', namely `vector3d'
-- In the expression: vector2d == vector3d
-- while vector2d == Vector [1,2,3] works
| {"url":"http://en.m.wikibooks.org/wiki/Haskell/Phantom_types","timestamp":"2014-04-21T09:39:03Z","content_type":null,"content_length":"25346","record_id":"<urn:uuid:49103604-a5c7-4021-ad8a-8c1c9c89ecae>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
root, in mathematics, a solution to an equation, usually expressed as a number or an algebraic formula.
In the 9th century, Arab writers usually called one of the equal factors of a number jadhr (“root”), and their medieval European translators used the Latin word radix (from which derives the
adjective radical). If a is a positive real number and n a positive integer, there exists a unique positive real number x such that x^n = a. This number—the (principal) nth root of a—is written ^n√a
or a^(1/n). The integer n is called the index of the root. For n = 2, the root is called the square root and is written √a. The root ^3√a is called the cube root of a. If a is negative and n is odd,
the unique negative nth root of a is termed principal. For example, the principal cube root of –27 is –3.
If a whole number (positive integer) has a rational nth root—i.e., one that can be written as a common fraction—then this root must be an integer. Thus, 5 has no rational square root because 2^2 is
less than 5 and 3^2 is greater than 5. Exactly n complex numbers satisfy the equation x^n = 1, and they are called the complex nth roots of unity. If a regular polygon of n sides is inscribed in a
unit circle centred at the origin so that one vertex lies on the positive half of the x-axis, the radii to the vertices are the vectors representing the n complex nth roots of unity. If the root
whose vector makes the smallest positive angle with the positive direction of the x-axis is denoted by the Greek letter omega, ω, then ω, ω^2, ω^3, …, ω[^n] = 1 constitute all the nth roots of unity.
For example, ω = −^1/[2] + ^ √( −3 ) /[2], ω^2 = −^1/[2] − ^ √( −3 ) /[2], and ω^3 = 1 are all the cube roots of unity. Any root, symbolized by the Greek letter epsilon, ε, that has the property that
ε, ε^2, …, ε^n = 1 give all the nth roots of unity is called primitive. Evidently the problem of finding the nth roots of unity is equivalent to the problem of inscribing a regular polygon of n sides
in a circle. For every integer n, the nth roots of unity can be determined in terms of the rational numbers by means of rational operations and radicals; but they can be constructed by ruler and
compasses (i.e., determined in terms of the ordinary operations of arithmetic and square roots) only if n is a product of distinct prime numbers of the form 2^h + 1, or 2^k times such a product, or
is of the form 2^k. If a is a complex number not 0, the equation x^n = a has exactly n roots, and all the nth roots of a are the products of any one of these roots by the nth roots of unity.
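(In modern exponential notation, which the article does not use, the nth roots of unity can be written ω^k = cos(2πk/n) + i sin(2πk/n) for k = 0, 1, …, n − 1; for n = 3 this gives ω = −1/2 + i√3/2, in agreement with the cube roots listed above.)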
The term root has been carried over from the equation x^n = a to all polynomial equations. Thus, a solution of the equation f(x) = a[0]x^n + a[1]x^(n − 1) + … + a[n − 1]x + a[n] = 0, with a[0] ≠ 0, is
called a root of the equation. If the coefficients lie in the complex field, an equation of the nth degree has exactly n (not necessarily distinct) complex roots. If the coefficients are real and n
is odd, there is a real root. But an equation does not always have a root in its coefficient field. Thus, x^2 − 5 = 0 has no rational root, although its coefficients (1 and –5) are rational numbers.
More generally, the term root may be applied to any number that satisfies any given equation, whether a polynomial equation or not. Thus π is a root of the equation x sin (x) = 0. | {"url":"http://www.britannica.com/print/topic/509457","timestamp":"2014-04-16T21:55:57Z","content_type":null,"content_length":"11752","record_id":"<urn:uuid:54dbab4b-855d-486e-b2b6-1fc0b4f52019>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
compound interest
03-19-2005 #1
Registered User
Join Date
Mar 2005
compound interest
i have to make a program to print out the following:
Enter a principal amount:
Enter an annual interest rate:
Your first month of interest will be: $5.83
Enter the number of years your money will be in the bank:
You plan to deposit $1000.00 for a term of 30 years.
The total amount of interest you will earn will be $7116.50
Your final balance will be $8116.50.
i have tried numerous things to solve this but everything i do is wrong. my while loop keeps returning extremely large numbers. my code looks like this.
#include <stdio.h>
int main()
double principal, interest_rate, balance, interest;
int months, total_months, total_years;
total_months = 12 * total_years;
months = 1;
balance = interest + principal;
printf("Enter a principal amount:\n");
scanf("%lf", &principal);
printf("Enter an annual interest rate:\n");
scanf("%lf", &interest_rate);
printf("Your first month of interest will be: $%0.2lf\n", interest_rate / 12 * (principal / 100));
printf("Enter the number of years your money will be in the bank:\n");
scanf("%d", &total_years);
printf("You plan to deposit $%0.2lf for a total of %d years.\n", principal, total_years);
while(months <= total_months)
interest = principal + (interest_rate / 12 * (principal / 100));
interest += principal;
printf("The total amount of interest you will earn will be %0.2lf\n", interest - principal);
printf("Your final account balance will be %0.2lf\n", balance);
return 0;
any help will be greatly appreciated
First, obtain the data before performing calculations on it.
months = 1;
printf("Enter a principal amount:\n");
scanf("%lf", &principal);
printf("Enter an annual interest rate:\n");
scanf("%lf", &interest_rate);
printf("Your first month of interest will be: $%0.2lf\n", interest_rate / 12 * (principal / 100));
printf("Enter the number of years your money will be in the bank:\n");
scanf("%d", &total_years);
printf("You plan to deposit $%0.2lf for a total of %d years.\n", principal, total_years);
total_months = 12 * total_years;
balance = interest + principal;
Last edited by Dave_Sinkula; 03-19-2005 at 06:20 PM.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
thanks a lot, that helped make the program print out
Enter a principal amount:
Enter an annual interest rate:
Your first month of interest will be: $5.83
Enter the number of years your money will be in the bank:
You plan to deposit $1000.00 for a total of 30 years.
The total amount of interest you will earn will be 1005.83
Your final account balance will be 3005.83
but my math is wrong in the code, but it seems like it should work.
i'm supposed to get a total interest of 7116.50 and a final balance of 8116.50. any word on fixing this problem?
thanks again or clearin that up for me though
Second, I tried to remove a repeated calculation.
printf("Enter an annual interest rate:\n");
scanf("%lf", &interest_rate);
interest_rate /= 12;
interest_rate /= 100;
printf("Your first month of interest will be: $%.2f\n", principal * interest_rate);
Third, I prefer for loops, but I think you calculate interest something like this.
for ( months = 0; months < total_months; ++months )
interest = balance * interest_rate;
balance += interest;
printf("The total amount of interest you will earn will be %.2f\n", balance - principal);
printf("Your final account balance will be %.2f\n", balance);
Note also the format specifiers for printf.
Last, throwing a couple of printfs into the loop while you are developing code is a good way to learn how to debug your own code.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
i must ask you how do i put printfs in the loop to test it out?
also when i ran the program it came out with the following outcome...
Enter a principal amount:
Enter an annual interest rate:
Your first month of interest will be: $5.83
Enter the number of years your money will be in the bank:
You plan to deposit $1000.00 for a total of 30 years.
The total amount of interest you will earn will be 0.00
Your final account balance will be 1000.00
it doesn't implement the total interest.
i'm sorry for all this, i'm very new to programming and am often very confused/lost.
i guess it is calulating the interest to 0, and i can't understand why. it looks like it should be doing the math given the loop.
about the format specifiers.... i thought i had to use %lf when they are declared as type double.
thanks again
i must ask you how do i put printfs in the loop to test it out?
Just how it sounds...
for ( months = 0; months < total_months; ++months )
interest = balance * interest_rate;
balance += interest;
printf("months = %2d, interest = %.2f, balance = %.2f\n", months, interest, balance);
Post the code of your latest attempt.
[edit]Oh, I forgot I changed this line, too.
balance = principal;
for ( months = 0; months < total_months; ++months )
Last edited by Dave_Sinkula; 03-19-2005 at 07:01 PM.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
hey, it is still coming up with the wrong data at the end, here's the code:
#include <stdio.h>
int main()
double principal, interest_rate, balance, interest;
int months, total_months, total_years;
printf("Enter a principal amount:\n");
scanf("%lf", &principal);
printf("Enter an annual interest rate:\n");
scanf("%lf", &interest_rate);
interest_rate /= 12;
interest_rate /= 100;
printf("Your first month of interest will be: $%0.2lf\n", principal * interest_rate);
printf("Enter the number of years your money will be in the bank:\n");
scanf("%d", &total_years);
printf("You plan to deposit $%0.2lf for a total of %d years.\n", principal, total_years);
total_months = 12 * total_years;
balance = principal;
for ( months = 0; months < total_months; ++months )
interest = balance * interest_rate;
balance += interest;
balance = interest + principal;
printf("The total amount of interest you will earn will be %.2lf\n", balance - principal);
printf("Your final account balance will be %.2lf\n", balance);
return 0;
thanks a lot for all the help.
Last edited by bliznags; 03-19-2005 at 07:13 PM. Reason: missed somehitng
Try removing this line after the for loop.
balance = interest + principal;
The loop calculates balance -- don't discard all that work.
[edit]Use %.2f instead of %0.2lf for doubles in printf.
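(A sketch combining the corrections above; this block was not posted in the original thread:)
#include <stdio.h>
int main(void)
{
double principal, interest_rate, balance, interest;
int months, total_months, total_years;
printf("Enter a principal amount:\n");
scanf("%lf", &principal);
printf("Enter an annual interest rate:\n");
scanf("%lf", &interest_rate);
interest_rate /= 12; /* annual rate -> monthly rate */
interest_rate /= 100; /* percent -> fraction */
printf("Your first month of interest will be: $%.2f\n", principal * interest_rate);
printf("Enter the number of years your money will be in the bank:\n");
scanf("%d", &total_years);
printf("You plan to deposit $%.2f for a total of %d years.\n", principal, total_years);
total_months = 12 * total_years;
balance = principal;
for (months = 0; months < total_months; ++months)
{
interest = balance * interest_rate; /* compound monthly */
balance += interest;
}
printf("The total amount of interest you will earn will be %.2f\n", balance - principal);
printf("Your final account balance will be %.2f\n", balance);
return 0;
}
Equivalently, the whole loop collapses to the closed form balance = principal * pow(1 + interest_rate, total_months) (from math.h), which also yields 8116.50 for the $1000 / 7% / 30-year example.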
Last edited by Dave_Sinkula; 03-19-2005 at 07:40 PM.
7. It is easier to write an incorrect program than understand a correct one.
40. There are two ways to write error-free programs; only the third one works.*
awesome man, that did it, it works perfectly now. thank you so much for the assistance....
if it isnt so much to ask do you think you could take a gander at my other post on the forum to see if you know a solution to it. its somewhat similar to this one, but different in the end... the
link is,
thanks a lot.
Last edited by anonytmouse; 03-19-2005 at 10:27 PM.
Bah that page didn't even mention the pert formula
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
| {"url":"http://cboard.cprogramming.com/c-programming/63231-compound-interest.html","timestamp":"2014-04-16T11:43:22Z","content_type":null,"content_length":"90079","record_id":"<urn:uuid:065a8382-9bdb-426c-99b7-71a923db8185>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Daniel's Challenge Thread
Well the first part comes out from the sum of a geometric series:
Induction proves the second pretty quickly:
Obviously the first part could also be done (and more quickly) by induction, but I feel induction gets rid of the 'why' behind it. I'm sure you're going to show me a nicer way to do the second part
Last edited by Daniel123 (2008-12-25 23:00:29) | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=11034","timestamp":"2014-04-19T05:15:00Z","content_type":null,"content_length":"26143","record_id":"<urn:uuid:e7be8941-8ad4-48ff-8635-1c78af64eb19>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roots of the equation
June 9th 2010, 09:12 AM
Roots of the equation
Please help me through this problem:
(1) Set $y=1/(px+q)$ to find the equation whose roots are $1/(p\alpha+q)$ and $1/(p\beta+q)$
Thank you.
June 9th 2010, 09:28 AM
I don't see what the substitution is for. If we know that a polynomial has only two roots, $1/(p\alpha+q)$ and $1/(p\beta+q)$, each with multiplicity 1, then we can immediately write
$f(x)=k\left(x-\frac{1}{p\alpha+q}\right)\left(x-\frac{1}{p\beta+q}\right), k \in \mathbb{R}, k e 0$
Maybe the question wants it in this form?
$f(x)=k(x-y_{\alpha})(x-y_{\beta}), k \in \mathbb{R}, k e 0$
June 9th 2010, 11:22 AM
The remainder of the question
Sorry; I forgot to type the first half of the question.
(1) Let $\alpha$ and $\beta$ be the roots of the equation $ax^2+bx+c=0$. The rest of the question goes as written in the first post.
June 10th 2010, 12:31 PM
Have you worked this out yet? I'm still not sure what to make of the problem. The word "the" marked in red above can't be right, because there are an infinite number of functions that have those
There's a straightforward albeit messy way to express a new polynomial with those roots in terms of just a,b,c. The function I wrote above in terms of alpha and beta is symmetric in alpha and
beta. Without loss of generality, let $\alpha=\frac{-b+\sqrt{b^2-4ac}}{2a}$ and let $\beta=\frac{-b-\sqrt{b^2-4ac}}{2a}$. Substitute in and you have your equation. (Choice of $k$ is arbitrary;
for simplicity, you can let $k=1$.)
I still don't know what setting $y=1/(px+q)$ is supposed to accomplish, but maybe I could figure it out by looking at an example in your book. Possibly it's something obvious that I'm just not
seeing. If you have the intended solution and wish to post it, I'd be curious to see. | {"url":"http://mathhelpforum.com/algebra/148445-roots-equation-print.html","timestamp":"2014-04-18T21:31:02Z","content_type":null,"content_length":"10537","record_id":"<urn:uuid:8ac16af2-8e61-473b-b0ba-021c5383ff53>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Constrained Optimization and Calibration for Deterministic and Stochastic Simulation Experiments
Seminar Room 1, Newton Institute
Optimization of the output of computer simulators, whether deterministic or stochastic, is a challenging problem because of the typical severe multimodality. The problem is further complicated when
the optimization is subject to unknown constraints, those that depend on the value of the output, so the function must be evaluated in order to determine if the constraint has been violated. Yet,
even an invalid response may still be informative about the function, and thus could potentially be useful in the optimization. We develop a statistical approach based on Gaussian processes and
Bayesian learning to approximate the unknown function and to estimate the probability of meeting the constraints, leading to a sequential design for optimization and calibration.
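A schematic of the sequential design idea described above (an illustrative sketch, not the speaker's code; the toy problem, the variable names, and the expected-improvement-times-probability-of-feasibility criterion are all assumptions):
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
def toy_objective(x):      # function to minimize
    return np.sin(3 * x) + x**2
def toy_constraint(x):     # feasible where c(x) <= 0
    return x - 0.5
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(8, 1))                      # initial design
gp_f = GaussianProcessRegressor().fit(X, toy_objective(X).ravel())
gp_c = GaussianProcessRegressor().fit(X, toy_constraint(X).ravel())
grid = np.linspace(-1, 1, 400).reshape(-1, 1)
mu_f, sd_f = gp_f.predict(grid, return_std=True)
mu_c, sd_c = gp_c.predict(grid, return_std=True)
f_best = toy_objective(X).ravel()[toy_constraint(X).ravel() <= 0].min()
z = (f_best - mu_f) / np.maximum(sd_f, 1e-9)
ei = (f_best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)  # expected improvement
p_feas = norm.cdf(-mu_c / np.maximum(sd_c, 1e-9))        # estimated P(constraint met)
x_next = grid[np.argmax(ei * p_feas)]                    # next point to simulate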
| {"url":"http://www.newton.ac.uk/programmes/DAE/seminars/2011090814301.html","timestamp":"2014-04-20T16:24:46Z","content_type":null,"content_length":"6558","record_id":"<urn:uuid:499ed229-165e-4456-887f-311d65771eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Advogato: Blog for tampe
Compress that for me, please
So, if you are in a habit of analyze lists and tree structures like code, and like pattern matching, you may end up writing ton's of code like.
(def sum-of-leaves
((X . L) (+ (sum-of-leaves X) (sum-of-leaves L)))
(() 0)
( X X))
e.g. if you have a list sum the first and the sum of rest the empty list yields 0 and all non list elements -the leaves, assumed numbers - end up being just counted as the value they represent.
So how should we code this destruction into a primitive lang? One idea I used is to work with the guile VM directly and use a few extra primitives there.
So consider destructuring
(define (f A) (match A ((X 'a (X . L) . U) (list X L U) ...)
we could do this with a stack machine like,
(push A) ;; Stack -> (A)
(explode-cons) ;; Stack -> ((car A) (cdr A))
(set X) ;; Stack -> ((cdr A))
(explode-cons) ;; Stack -> ((cadr A) (cddr A))
(match-eq? 'a) ;; Stack -> ((cddr A))
(explode-cons) ;; Stack -> ((caddr A) (cdddr A))
(explode-cons) ;; Stack -> ((caaddr A) (cdaddr A) (cdddr A))
(match-eq? X) ;; Stack -> ((cdaddr A) (cdddr A))
(set L) ;; Stack -> ((cdddr A))
(set U) ;; Stack -> ()
(list X L U)
;;; note if an explode-cons or eq? fails it will
;;; reset and the next pattern will be tried.
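To make the semantics concrete, the two destructuring primitives could be modelled in plain scheme roughly like this (my sketch, not the actual VM code; the stack is modelled as a list and 'fail stands for the reset-and-try-next-pattern step):
(define (explode-cons stack)
  (if (pair? (car stack))
      (cons (caar stack) (cons (cdar stack) (cdr stack)))
      'fail))
(define (match-eq? v stack)
  (if (equal? v (car stack))
      (cdr stack)
      'fail))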
And what you note is that this is a more compact reduction of pattern matching than doing it with the standard VM of guile. So the end result is that code executed on this VM is both faster and more
compact than using the standard setup. But of course, if we would like to compile this to the native platform, then the standard compilation to pure scheme probably has an edge.
Interestingly though (I'm implementing a prolog on top of guile), this pattern can be generalized to the case where A, the input, is a unifying variable. The destructuring will look almost the same,
but we need to tell the VM that we are in a mode of destructuring a unifying variable. Meaning that if X is not bound we will set that variable to a cons and push two new unbound variables (car A) and
(cdr A) onto the stack. | {"url":"http://www.advogato.org/person/tampe/diary/101.html","timestamp":"2014-04-18T13:12:39Z","content_type":null,"content_length":"6256","record_id":"<urn:uuid:4453cc05-408f-477a-a191-1150ca15b586>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometry Tutors
Voorhees, NJ 08043
Math and Science Tutor for all ages
...I am currently tutoring students in subjects ranging from Chemistry and precalculus, to Geometry and English. Throughout high school and college, I have tutored students of all ages in subjects
such as Spanish, Chemistry, Physiology, Algebra, and Calculus. The...
Offering 10+ subjects including trigonometry | {"url":"http://www.wyzant.com/Maple_Shade_trigonometry_tutors.aspx","timestamp":"2014-04-18T16:45:25Z","content_type":null,"content_length":"60437","record_id":"<urn:uuid:87e71e70-95f4-4fae-ac7a-c95f91baaa93>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
plot simulink in matlab
i have generated a simulink figure and can plot it in matlab using plot(signal). now if i want to take the graph between a time limit such as 0.2s to 0.5s, which command should i write?
There is actually no other command needed than PLOT. You only have to plot different DATA. Example:
t = 0:0.001:1;               % time vector, 1 ms steps
data = sin(10*t);            % example data
plot(t, data)                % full plot
figure
plot(t(11:51), data(11:51))  % from 0.01 s to 0.05 s
Well, maybe you can do it. Say, tout is the vector of all time values and yout is the vector of all signal values; then plot(tout(tout>=0.2 & tout<=0.5), yout(tout>=0.2 & tout<=0.5)) plots just that window. | {"url":"http://www.mathworks.com/matlabcentral/answers/62299","timestamp":"2014-04-17T08:00:19Z","content_type":null,"content_length":"26384","record_id":"<urn:uuid:21e81b9f-d1dd-4a94-8cf8-b55a2633492b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Johns Creek, GA Geometry Tutor
Find a Johns Creek, GA Geometry Tutor
...Not all students learn alike, and sometimes creativity is the key! She has taught over 150 students in the past four years! Abigail was recently the Tutor of the Month for another tutoring
website! "Abigail really knows her stuff.
22 Subjects: including geometry, reading, writing, calculus
...I had an overall GPA of 3.75 throughout 6 years of college, and my math GPA was 4.0 (including Trigonometry). I also worked as a math tutor to other college students. More importantly, I know
how to make learning fun and easy. I have a Master's degree in Business Administration (MBA). I am also a published author.
29 Subjects: including geometry, English, reading, writing
...I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. I can provide references upon request. I look forward to hearing from you.
10 Subjects: including geometry, algebra 1, algebra 2, precalculus
I am a state certified teacher of students in grades K-8. Through over 12 years of teaching children in multiple subject areas, I have discovered that children excel in learning when they are
motivated through positive encouragement, the content is presented in a fun and exciting manner, and multip...
7 Subjects: including geometry, reading, algebra 1, grammar
...Not only am I a certified ESL teacher (by the London Teacher Training School), I have been a student of 4 foreign languages (Arabic, French, Spanish and Chinese) myself - so I know what it is
like to learn a new language and to face the challenge of communicating in a language that is not one's m...
14 Subjects: including geometry, Spanish, statistics, algebra 1
| {"url":"http://www.purplemath.com/johns_creek_ga_geometry_tutors.php","timestamp":"2014-04-20T08:52:26Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:33699a8b-9848-4530-9fa4-7e8250099feb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Newbie feeling very stupid
12-31-2010, 07:50 AM #1
I must have close to 50 books and not one of them tell me what I want to know. Every quilt pattern is preplanned or says "after cutting the strips, sew together..."
THAT'S what I want to know, how do I figure out how many strips of each color do I cut to get the desired number of sets to cut into segments to cut and sew together to get the number of needed
blocks? Is there a book out there somewhere that EXPLAINS how to do calculations?
My dh gave me the gift of a fab calculator, and together we still come out wrong and I end up having to go back to cutting every time.
I know how to figure out the pattern, the number of patches and yardage I need for each fabric in the pattern. Then I'm lost.
Because my sewing space is so limited in my small home, I must get everything cut and in place, take down my craft table and those supplies in order to set up to sew. It is so frustrating when I
come up with wrong answers and have to take the sewing back down, set cutting back up, take down, so repetitive. It would be nice if I had an explanation in detail of how to come up with this
Please don't tell me to just follow the pattern in the book because they are always in sizes that don't apply to my circumstances.There must be some simple equation, but maybe I'm just too dense
to see the obvious. HELP PLEASE!!!
Mabel, this still happens to me occasionally, after 20 years. However, this is why I took algebra and geometry. I'm gathering you need help because you are changing the size of the quilt from the
book/instructions? Then, you need to figure the total number of blocks you will be using. Then, figure the size of each piece in the block by color. So if you need a 4x4 finished size you will
need to cut a 4-1/2x4-1/2 piece for that block. Times, for example, 20 for the total number of blocks. You need 20 4-1/2x4-1/2 pieces. From a 40" strip (without selvedge, averaging) you can get 8
pieces (and rounding up for fudge factor). That means you need 3 strips of that color--really 2-1/2 strips, but hey! Is this any help?
I hate to say this but, the pattern should help you with your problem. Can't you sew the ones you have cut and then cut some more to complete more blocks, lay them out on a bed or someplace and
then see how many more you will need to get your desired size???
Hi Mabel, welcome to the board!
I can't answer your question specifically but I find a lot of books useless as well.
I'm a visual learner and depend on Missouri Star Quilt Company videos (fantastic) and Youtube, along with fellow quilters, classes etc.
Not much help for you but a big WELCOME!! I need to tell my cat,Mabel, that I now know another Mabel LOL :D:D
NYCQuilter said it correctly. Not a real formula, just figure out how big the cut pieces need to be, then find out how many of that size piece you need and how many you can get out of a 40 - 42"
strip of fabric. clear as mud???
I'm the same way....I need to SEE it to understand it...and the Missouri Star Quilt Company has lots of great videos.
Also on this site, go to the links and resources to see tutorials. Watch some of them...I think just listening to the language they use and the things they do will help you to understand how to
calculate better.
I hardly ever use patterns for anything I make - I get out the pencil and paper and diagram it myself - adding in the size of my unfinished squares + the size of my sashing + the size of my
borders = my total. I do that for the width and then I do that for the length...and then I tell myself that a yard of fabric is 35" x 40" (rounding numbers for easy division)...and go from there.
I figure how many pieces I can get across the width of the fabric (WOF) (ex. - need 4.5" squares = divide useable WOF -measurement minus selvages - by the 4.5 to find how many squares you can get
from one WOF strip), then divide the number of squares I need by that number to get the number of strips to cut. In my example, having 41" useable in the WOF I can get 9 4 1/2" squares in each
strip. If I need 60 squares of that particular fabric, divide 60 by 9, so I would need to cut 7 strips to get the needed amount. Hope this is clear, I think we all figure a little differently. A
suggestion - could you put your cutting mat on your ironing board if you need to cut an additional amount?
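To put the method above in one line (a summary added for clarity, not from any poster): pieces per strip = usable width divided by cut size, rounded down; strips to cut = pieces needed divided by pieces per strip, rounded up. With the numbers above: 41 / 4.5 = 9.1, so 9 per strip, and 60 / 9 = 6.7, so 7 strips.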
There is a book called Patchwork without the Mathwork that I found very helpful. I took geometry and algebra both and the math and angles I need for quilting still have me stumped half the time.
Have you tried drawing it out on graph paper? Maybe the visual aid would be helpful.
Definitely no need to feel stupid. I'm a self taught quilter and didn't really have any resources (books, videos, etc), other than trial and error.
I would have killed for the wealth of resources (wisdom, knowledge, skill) that is available out on this site.
Just ask, most are more than willing to help. And there are always a variety of ways to do a process...so just read and listen to the different styles and then pick the one that makes most sense
to you.
I've changed my style over time...things that I use to do one way, I know do another way, only because I feel like I'm a little more advanced than I was when I first started.
| {"url":"http://www.quiltingboard.com/main-f1/newbie-feeling-very-stupid-t86452.html","timestamp":"2014-04-18T04:24:54Z","content_type":null,"content_length":"69473","record_id":"<urn:uuid:e267e480-bd48-49c1-9835-659214954759>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Soap Film Solutions to Plateau’s Problem
January 2014, Volume 24, Issue 1, pp 271-297
Plateau’s problem is to show the existence of an area-minimizing surface with a given boundary, a problem posed by Lagrange in 1760. Experiments conducted by Plateau showed that an area-minimizing
surface can be obtained in the form of a film of oil stretched on a wire frame, and the problem came to be called Plateau’s problem. Special cases have been solved by Douglas, Rado, Besicovitch,
Federer and Fleming, and others. Federer and Fleming used the chain complex of integral currents with its continuous boundary operator, a Poincaré Lemma, and good compactness properties to solve
Plateau’s problem for orientable, embedded surfaces. But integral currents cannot represent surfaces such as the Möbius strip or surfaces with triple junctions. In the class of varifolds, there are
no existence theorems for a general Plateau problem. We use the chain complex of differential chains, a geometric Poincaré Lemma, and good compactness properties of the complex to solve Plateau’s
problem in such generality as to find the first solution which minimizes area taken from a collection of surfaces that includes all previous special cases, as well as all smoothly immersed surfaces
of any genus type, orientable or nonorientable, and surfaces with multiple junctions. Our result holds for all dimensions and codimension-one surfaces in ℝ^n.
Communicated by Steven G. Krantz.
The author was partially supported by the Miller Institute for Basic Research in Science and the Foundational Questions in Physics Institute. This paper was first posted on the arXiv in February,
Within this Article
1. Introduction
2. Differential Chains of Type B
3. Operators
4. Integral Monopole and Dipole Chains
5. Geometric Poincaré Lemma
6. The Volume Functional Used to Compute Area
7. The Part of a Chain in a Compatible Cube
8. Existence of Area Minimizers for Surfaces Spanning a Smoothly Embedded Closed Curve in ℝ^3
9. References
1. Almgren, F.J.: Plateau's Problem, an Invitation to Varifold Geometry. Benjamin, Elmsford (1966)
2. Almgren, F.J.: Existence and regularity almost everywhere of solutions to elliptic variational problems with constraints. Bull. Am. Math. Soc. 81(1), 151–154 (1975)
3. Alt, H.W.: Verzweigungspunkte von H-Flächen. II. Math. Ann. 201, 33–55 (1973)
4. Douglas, J.: Solutions of the problem of Plateau. Trans. Am. Math. Soc. 33, 263–321 (1931)
5. Federer, H.: Geometric Measure Theory. Springer, Berlin (1969)
6. Federer, H., Fleming, W.H.: Normal and integral currents. Ann. Math. 72(3), 458–520 (1960)
7. Fleming, W.H.: On the oriented Plateau problem. Rend. Circ. Mat. Palermo 11(1), 69–90 (1962)
8. Fleming, W.H.: Flat chains over a finite coefficient group. Trans. Am. Math. Soc. 121(1), 160–186 (1966)
9. Gulliver, R.: Regularity of minimizing surfaces of prescribed mean curvature. Ann. Math. 97(2), 275–305 (1973)
10. Harrison, J.: Cartan's magic formula and soap film structures. J. Geom. Anal. 14(1), 47–61 (2004)
11. Harrison, J.: On Plateau's problem for soap films with a bound on energy. J. Geom. Anal. 14(2), 319–329 (2004)
12. Harrison, J.: Operator calculus of differential chains and differential forms. J. Geom. Anal., to appear
13. Harrison, J.: Differential chains, measures, and additive set functions (July 2012)
14. Hardt, R., Simon, L.: Boundary regularity and embedded solutions for the oriented Plateau problem. Bull. Am. Math. Soc. 1(1), 263–265 (1979)
15. Morgan, F.: Geometric Measure Theory: A Beginners Guide. Academic Press, London (1988)
16. Osserman, R.: A proof of the regularity everywhere of the classical solution to Plateau's problem. Ann. Math. 91, 550–569 (1970)
17. Plateau, J.: Experimental and Theoretical Statics of Liquids Subject to Molecular Forces Only. Gauthier-Villars, Paris (1873)
18. Reifenberg, E.R.: Solution of the Plateau problem for m-dimensional surfaces of varying topological type. Acta Math. 80(2), 1–14 (1960)
19. Whitney, H.: Geometric Integration Theory. Princeton University Press, Princeton (1957)
20. Ziemer, W.P.: Integral currents mod 2. Trans. Am. Math. Soc. 105, 496–524 (1962)
21. Ziemer, W.P.: Plateau's problem: an invitation to varifold geometry. Bull. Am. Math. Soc. 75(5), 924–925 (1969)
Soap Film Solutions to Plateau’s Problem
Keywords: Plateau's problem; differential chains; differential forms; chainlets; dipole chains; Dirac chains; Poincaré Lemma; extrusion; retraction; prederivative; pushforward; volume functional; soap films; triple branches; Moebius strips; compactness; minimal sets
MSC: 49Q15; 49J52; 49J99
Author Affiliations
1. Department of Mathematics, University of California, Berkeley, USA
Renton Prealgebra Tutor
Find a Renton Prealgebra Tutor
...I have used my knowledge of physics and mathematics to model and build high power gasdynamic lasers, to model and test hydraulic borehole mining systems, to study Arctic sea ice mechanics, and
to analyze the potential energy savings of emerging energy efficient technologies. And I have tutored p...
21 Subjects: including prealgebra, chemistry, physics, English
...I have helped my former classmates and my younger brother many times with Physics. I have been learning French for more than 6 years. I can also help with programming.
16 Subjects: including prealgebra, chemistry, French, calculus
...I view any tutoring appointment as a contract to which I am obligated, and ask the same from my clients. As a result, I ask for 4 hours notice of a need to cancel or reschedule. Your
satisfaction is what matters to me.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...I am also a certified SCUBA diver who is a great admirer of our Puget Sound! I believe that conservation and sustainability is a life lesson that affects us all! I hope to further discuss my
skills and experience in detail with you soon! My accumulated education experience has given me the skills necessary to be an effective K-6th tutor.
9 Subjects: including prealgebra, reading, writing, algebra 1
...I continue to work in Windows in current projects using Visual Studio. I have been working in MS Windows for over 20 years. I have been coaching students from the community college in C#. I
have found that irrespective of the language used, the challenge in tackling assignments and in understanding programming lies in breaking down problems and thinking logically through the...
16 Subjects: including prealgebra, geometry, algebra 1, algebra 2
Nearby Cities With prealgebra Tutor
Auburn, WA prealgebra Tutors
Bellevue, WA prealgebra Tutors
Burien, WA prealgebra Tutors
Des Moines, WA prealgebra Tutors
Federal Way prealgebra Tutors
Issaquah prealgebra Tutors
Kent, WA prealgebra Tutors
Kirkland, WA prealgebra Tutors
Newcastle, WA prealgebra Tutors
Puyallup prealgebra Tutors
Redmond, WA prealgebra Tutors
Seatac, WA prealgebra Tutors
Seattle prealgebra Tutors
Tacoma prealgebra Tutors
Tukwila, WA prealgebra Tutors
Relationship in Sets using Venn Diagram
The relationship in sets using Venn diagram are discussed below:
• The union of two sets can be represented by Venn diagrams, with the shaded region representing A ∪ B.
A ∪ B when A ⊂ B
A ∪ B when neither A ⊂ B nor B ⊂ A
A ∪ B when A and B are disjoint sets
• The intersection of two sets can be represented by Venn diagram, with the shaded region representing A ∩ B.
A ∩ B when A ⊂ B, i.e., A ∩ B = A
A ∩ B when neither A ⊂ B nor B ⊂ A
A ∩ B = ϕ No shaded part
• The difference of two sets can be represented by Venn diagrams, with the shaded region representing A - B.
A – B when B ⊂ A
A – B when neither A ⊂ B nor B ⊂ A
A – B when A and B are disjoint sets.
Here A – B = A
A – B when A ⊂ B
Here A – B = ϕ
Relationship between the three Sets using Venn Diagram
• If ξ represents the universal set and A, B, C are three subsets of the universal set. Here, all three sets are overlapping sets.
Let us learn to represent various operations on these sets.
A ∪ B ∪ C
A ∩ B ∩ C
A ∪ (B ∩ C)
A ∩ (B ∪ C)
Some important results on number of elements in sets and their use in practical problems.
Now, we shall learn the utility of set theory in practical problems.
If A is a finite set, then the number of elements in A is denoted by n(A).
Let A and B be two finite sets, then two cases arise:
Case 1:
A and B are disjoint.
Here, we observe that there is no common element in A and B.
Therefore, n(A ∪ B) = n(A) + n(B)
Case 2:
When A and B are not disjoint, we have from the figure
(i) n(A ∪ B) = n(A) + n(B) - n(A ∩ B)
(ii) n(A ∪ B) = n(A - B) + n(B - A) + n(A ∩ B)
(iii) n(A) = n(A - B) + n(A ∩ B)
(iv) n(B) = n(B - A) + n(A ∩ B)
(The Venn diagram shows the three disjoint regions A – B, A ∩ B, and B – A.)
Let A, B, C be any three finite sets, then
n(A ∪ B ∪ C) = n[(A ∪ B) ∪ C]
= n(A ∪ B) + n(C) - n[(A ∪ B) ∩ C]
= [n(A) + n(B) - n(A ∩ B)] + n(C) - n [(A ∩ C) ∪ (B ∩ C)]
= n(A) + n(B) + n(C) - n(A ∩ B) - n(A ∩ C) - n(B ∩ C) + n(A ∩ B ∩ C)
[Since, (A ∩ C) ∩ (B ∩ C) = A ∩ B ∩ C]
Therefore, n(A ∪B ∪ C) = n(A) + n(B) + n(C) - n(A ∩ B) - n(B ∩ C) - n(C ∩ A) + n(A ∩ B ∩ C)
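These counting formulas are easy to sanity-check mechanically. A small Python sketch (the three sets are arbitrary examples, not taken from the text above):

A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

# two sets: n(A ∪ B) = n(A) + n(B) - n(A ∩ B)
assert len(A | B) == len(A) + len(B) - len(A & B)

# three sets: the formula derived above
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(B & C) - len(C & A)
       + len(A & B & C))
print(lhs, rhs)   # 7 7 (the two counts agree)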
Johns Creek, GA Geometry Tutor
Find a Johns Creek, GA Geometry Tutor
...Not all students learn alike, and sometimes creativity is the key! She has taught over 150 students in the past four years! Abigail was recently the Tutor of the Month for another tutoring
website! "Abigail really knows her stuff..."
22 Subjects: including geometry, reading, writing, calculus
...I had an overall GPA of 3.75 throughout 6 years of college, and my math GPA was 4.0 (including Trigonometry). I also worked as a math tutor to other college students. More importantly, I know
how to make learning fun and easy. I have a Master's degree in Business Administration (MBA). I am also a published author.
29 Subjects: including geometry, English, reading, writing
...I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. I can provide references upon request. I look forward to hearing from you.
10 Subjects: including geometry, algebra 1, algebra 2, precalculus
I am a state certified teacher of students in grades K-8. Through over 12 years of teaching children in multiple subject areas, I have discovered that children excel in learning when they are
motivated through positive encouragement, the content is presented in a fun and exciting manner, and multip...
7 Subjects: including geometry, reading, algebra 1, grammar
...Not only am I a certified ESL teacher (by the London Teacher Training School), I have been a student of 4 foreign languages (Arabic, French, Spanish and Chinese) myself - so I know what it is
like to learn a new language and to face the challenge of communicating in a language that is not one's m...
14 Subjects: including geometry, Spanish, statistics, algebra 1
Related Johns Creek, GA Tutors
Johns Creek, GA Accounting Tutors
Johns Creek, GA ACT Tutors
Johns Creek, GA Algebra Tutors
Johns Creek, GA Algebra 2 Tutors
Johns Creek, GA Calculus Tutors
Johns Creek, GA Geometry Tutors
Johns Creek, GA Math Tutors
Johns Creek, GA Prealgebra Tutors
Johns Creek, GA Precalculus Tutors
Johns Creek, GA SAT Tutors
Johns Creek, GA SAT Math Tutors
Johns Creek, GA Science Tutors
Johns Creek, GA Statistics Tutors
Johns Creek, GA Trigonometry Tutors
Nearby Cities With geometry Tutor
Alpharetta geometry Tutors
Atlanta geometry Tutors
Berkeley Lake, GA geometry Tutors
Buford, GA geometry Tutors
Decatur, GA geometry Tutors
Duluth, GA geometry Tutors
Dunwoody, GA geometry Tutors
Lawrenceville, GA geometry Tutors
Marietta, GA geometry Tutors
Milton, GA geometry Tutors
Norcross, GA geometry Tutors
Roswell, GA geometry Tutors
Sandy Springs, GA geometry Tutors
Snellville geometry Tutors
Suwanee geometry Tutors
Ratios - Grade 5 Math Questions With Answers
Questions on how to find math ratios in different situations with answers are presented.
1. In the figure below are three different shapes: squares, triangles and circles
Use the above picture to find the following:
A. the ratio of the number of triangles to the number of circles
B. the ratio of the number of circles to the number of squares
C. the ratio of the number of triangles to total number of shapes
D. the ratio of the number of circles to total number of shapes
E. the ratio of the number of circles to number of triangles
2. There are 15 apples, 10 bananas and 5 pears in a basket. Use the given information to find the following:
A. the ratio of the number of bananas to the total number of fruits
B. the ratio of the number of pears to the number of apples
C. the ratio of the number of bananas to the number of apples
D. the ratio of the number of pears to the total number of fruits
E. the ratio of the number of apples to the number of fruits
3. In a school there are 120 boys and 180 girls. 40 of the boys are under 10 years and 140 of the girls are under 10 years. Use the given information to find:
A. the ratio of the number of boys to the number of girls
B. the ratio of the number of boys who are less than 10 to the number of boys who are 10 or older. (in simplest form)
C. the ratio of the number of girls who are less than 10 to the number of boys who are 10 or older. (in simplest form)
D. the ratio of the number of girls who are 10 or older to the total number of pupils. (in simplest form)
E. the ratio of the number of girls who are less than 10 to the total number of pupils who are 10 or older. (in simplest form)
4. a, b and c are the number of blue, yellow and white marbles in a box. a is greater than b and b is greater than c. Answer by true or false the following statements.
A. the ratio of blue marbles to the total number of marbles is less than the ratio of white marbles to the total number of marbles.
B. the ratio of yellow marbles to the total number of marbles is greater than the ratio of blue marbles to the total number of marbles.
C. the ratio of white marbles to the total number of marbles is less than the ratio of blue marbles to the total number of marbles.
Answers to the Above Questions
1.
A. 9 to 6
B. 6 to 3
C. 9 to 18
D. 6 to 18
E. 6 to 9
2.
A. 10 to 30
B. 5 to 15
C. 10 to 15
D. 5 to 30
E. 15 to 30
3.
A. 2 to 3
B. 1 to 2
C. 7 to 4
D. 2 to 15
E. 7 to 6
4.
A. F
B. F
C. T
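Reducing a ratio to its simplest form is just division by the greatest common divisor; a short Python check of two of the answers above (an illustration, not part of the original worksheet):

from math import gcd

def simplest_form(a, b):
    g = gcd(a, b)
    return a // g, b // g

print(simplest_form(140, 80))   # (7, 4)  -> answer 3C
print(simplest_form(40, 300))   # (2, 15) -> answer 3D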
Weak convergence in the intersection of Lebesgue spaces or Sobolev spaces
Let $B:=B_1\cap B_2\cap...\cap B_n$, where each $B_j$ is a reflexive Lebesgue space or Sobolev space (such as $L^4$, $H^1$, etc.) on a domain in $\mathbb{R}^d$. Then $B$ is a Banach space endowed
with the norm $$\|\cdot\| = \|\cdot\|_{B_1}+...+\|\cdot\|_{B_n}.$$ Let $\{f_n\}$ be a sequence in $B$ and $f\in B$.
Is the assertion that $f_n$ weakly converges to $f$ in $B$ equivalent to the assertion that $f_n$ weakly converges to $f$ in each $B_j$?
Note that it's easy to show that the weak convergence in $B$ implies that in each $B_j$. It's the converse that is not obvious. Does anyone know the answer or some reference for it?
Remark. My question is closely related to the following:
Does the reflexivity of every $B_j$ imply the reflexivity of $B$?
To see this, assume $f_n$ converges weakly to $f$ in each $B_j$. Then $f_n$ is a bounded sequence in each $B_j$, and hence is bounded in $B$. If we can prove that $B$ is reflexive, then $f_n$ has
weakly convergent subsequence in $B$. It's easy to show that every weak convergent subsequence of $f_n$ in $B$ must have weak limit $f$, and hence the sequence $f_n$ itself converges weakly to $f$.
Conversely, if for every sequence $f_n$ in $B$, $f_n$ converges weakly in each $B_j$ implies that $f_n$ converges weakly in $B$, then $B$ must be reflexive. I omit the proof of this assertion.
fa.functional-analysis banach-spaces
1 Answer
The answer to both questions is "yes". One way of seeing this is to observe that $B$ embeds isometrically into $B_1 \oplus_1 B_2 \dots \oplus_1 B_n$ via $b \mapsto (b,b,\dots, b)$.
Another way is to use the (obvious) fact that a sequence in a Banach space converges weakly to $x$ if and only if for every subsequence $y_k$ there are convex combinations of $y_k$ that converge in norm to $x$.
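A sketch of the step left implicit here (an elaboration, not the answerer's text): the diagonal map $\Delta : B \to B_1 \oplus_1 \cdots \oplus_1 B_n$, $\Delta(b) = (b, \dots, b)$, satisfies $\|\Delta b\| = \|b\|_{B_1} + \cdots + \|b\|_{B_n} = \|b\|$, so $\Delta$ is an isometry and its image is a closed subspace of the direct sum. A finite direct sum of reflexive spaces is reflexive, and a closed subspace of a reflexive space is reflexive; hence $B$ is reflexive, which by the remark in the question settles both assertions.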
WOW, so simple, so smart. Thank you very much! – Liren Lin Jun 28 '13 at 16:10
Clifton, NJ Algebra 2 Tutor
Find a Clifton, NJ Algebra 2 Tutor
...Two of my high school students were recently named regional finalists in the 2012 Google Science Fair for our project. My qualifications include:
• A PhD in Cancer Biology from Stanford University
• More than fifteen years of professional experience tutoring middle school, high school and college ...
31 Subjects: including algebra 2, English, biochemistry, biostatistics
...I've been programming in Flash and Actionscript for 6 years! I have a degree in physics and a minor in mathematics. I'm also currently working on a masters in applied mathematics & statistics.
83 Subjects: including algebra 2, chemistry, calculus, geometry
...Many of them were admitted to NYC specialized high schools (Stuyvesant, Bronx Science, and Brooklyn Tech) based on their excellent SHSAT scores. College-bound Precalculus students (high school
sophomores and juniors) should seriously think about taking the SAT II - Math Level II in June. Most competitive schools require it and it is also an excellent way to study for the school finals.
11 Subjects: including algebra 2, calculus, algebra 1, geometry
...I believe my extensive schooling can now allow me to share my successes and knowledge with future scholars. As for my tutoring subjects, I am very confident in my ability to tutor: biology,
math, INCLUDING PSAT and SAT Math, and chemistry. I have taken intro to upper level chemistry courses, general biology, and up to calculus II in math.
14 Subjects: including algebra 2, chemistry, biology, ACT Math
...Dickinson High School in Jersey City, NJ. I have also taught Algebra 1, Algebra 2, Geometry and SAT math. I have been a teacher for 8 years.
10 Subjects: including algebra 2, physics, SAT math, algebra 1
Related Clifton, NJ Tutors
Clifton, NJ Accounting Tutors
Clifton, NJ ACT Tutors
Clifton, NJ Algebra Tutors
Clifton, NJ Algebra 2 Tutors
Clifton, NJ Calculus Tutors
Clifton, NJ Geometry Tutors
Clifton, NJ Math Tutors
Clifton, NJ Prealgebra Tutors
Clifton, NJ Precalculus Tutors
Clifton, NJ SAT Tutors
Clifton, NJ SAT Math Tutors
Clifton, NJ Science Tutors
Clifton, NJ Statistics Tutors
Clifton, NJ Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Bloomfield, NJ algebra 2 Tutors
East Orange algebra 2 Tutors
Elmwood Park, NJ algebra 2 Tutors
Garfield, NJ algebra 2 Tutors
Montclair, NJ algebra 2 Tutors
Nutley algebra 2 Tutors
Passaic algebra 2 Tutors
Passaic Park, NJ algebra 2 Tutors
Paterson, NJ algebra 2 Tutors
Rutherford, NJ algebra 2 Tutors
Union City, NJ algebra 2 Tutors
Wallington algebra 2 Tutors
Wayne, NJ algebra 2 Tutors
Weehawken algebra 2 Tutors
Woodland Park, NJ algebra 2 Tutors
Distillation of secret key and entanglement from quantum states
Based on techniques from classical information theory, which we generalise to the quantum domain, we show how to distill secret key between two parties by local operations and public communication
from given states, at an asymptotically optimal rate. By implementing this protocol "coherently", we then show how it actually yields entanglement, at the same rate, proving the long-conjectured
"hashing inequality": it states that from a given quantum state, one can, using one-way LOCC, distill entanglement with rate given by the coherent information.
This is joint work with Igor Devetak; it is contained in eprints quant-ph/0306078 and quant-ph/0307053.
This is a simple linear-time voting algorithm designed by Robert S. Boyer and J Strother Moore in 1980, which is discussed in their paper
MJRTY - A Fast Majority Vote Algorithm
This algorithm decides which element of a sequence is in the majority, provided there is such an element.
Suppose there are n elements (objects/candidates). When the ith element is visited, the elements seen so far can be divided into two groups: a group of k elements in favor of the currently selected candidate, and a group of elements that disagree. After processing all of them, the selected candidate is the only one that can be the majority, if a majority exists at all.
When the pointer moves forward over an element e:
If the counter is 0, we set the current candidate to e and we set the counter to 1.
If the counter is not 0, we increment or decrement the counter according to whether e is the current candidate.
When we are done, the current candidate is the majority element, if there is a majority.
I have written a simple java implementation.
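A minimal two-pass sketch of the algorithm as described (class and method names are my own):

import java.util.List;

public final class MajorityVote {

    // First pass: Boyer-Moore pairing. Returns the only possible majority candidate.
    public static <T> T candidate(List<T> items) {
        T current = null;
        int count = 0;
        for (T e : items) {
            if (count == 0) {
                current = e;
                count = 1;
            } else if (e.equals(current)) {
                count++;
            } else {
                count--;
            }
        }
        return current;
    }

    // Second pass: confirm the candidate really occurs more than n/2 times.
    public static <T> boolean isMajority(List<T> items, T candidate) {
        int occurrences = 0;
        for (T e : items) {
            if (e.equals(candidate)) {
                occurrences++;
            }
        }
        return occurrences > items.size() / 2;
    }
}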
Sometimes ties may occur, and this algorithm by itself does not detect them. For assurance, the candidate returned as the majority is announced as the selected one only if its vote count is greater than n/2. This counting phase can be done as the increments for the candidate happen. This algorithm is really effective when the data is read from a tape. It only identifies a true majority when more than half of the elements are the same value.
There is a descending iterator in the linked list implementation in the Java SDK: a humble private class in LinkedList, and a good example of the adapter pattern. The public method simply calls it up:

public Iterator<E> descendingIterator() {
    return new DescendingIterator();
}
Lumens to watts
Lumens to watts calculator
Luminous flux in lumens (lm) to electric power in watts (W) calculator.
* the predefined luminous efficacy values are typical / average values.
Lumens to watts calculation formula
Energy saving lamps have high luminous efficacy (more lumens per watt).
The power P in watts (W) is equal to the luminous flux Φ_V in lumens (lm), divided by the luminous efficacy η in lumens per watt (lm/W):
P(W) = Φ_V(lm) / η(lm/W)
Lumens to watts table
Lumens | Incandescent light bulb (watts) | Fluorescent / LED (watts)
375 lm | 25 W | 6.23 W
600 lm | 40 W | 10 W
900 lm | 60 W | 15 W
1125 lm | 75 W | 18.75 W
1500 lm | 100 W | 25 W
2250 lm | 150 W | 37.5 W
3000 lm | 200 W | 50 W
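The conversion is a one-liner; a short Python sketch checked against the table above (the function name is illustrative):

def lumens_to_watts(lumens, efficacy_lm_per_w):
    # P (W) = luminous flux (lm) / luminous efficacy (lm/W)
    return lumens / efficacy_lm_per_w

print(lumens_to_watts(900, 15))   # 60.0 W (incandescent at ~15 lm/W)
print(lumens_to_watts(900, 60))   # 15.0 W (fluorescent/LED at ~60 lm/W)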
[SciPy-User] Wave files / PCM question
Nicolau Werneck nwerneck@gmail....
Sun Nov 7 17:50:52 CST 2010
Hi. That is one interesting question. The fact is that integer formats
allow, or better, force you to have one extra possible level for the
negative values (assuming we are using two's complement as
usual). This is one asymmetry we just have to live with.
In practice, when you need symmetry you will not use the 0 (or -128)
level in the case of 8 bits or the -2**15 level in the case of 16
bits. You have to keep that in mind when you are generating a
sinusoid, for example. But if you are generating a PWM signal for
example, you might use it. But of course, it's a very little and
subtle difference.
Myself, I would map the 0 to the 0.0, and normalize the maximum
absolute value (-2**15) to -1 when converting from integer to FP, but
then multiply by 2**15-1 when converting back. Unless you know the
input signal will certainly not have a -2**15 in it, in which case you
can use 2**15-1 both ways. If you map your floating point 0.0 to -0.5,
then round it down when converting to integer, your 0 level will carry a
DC offset of 1, and unless you have a high-pass filter in your DAC output (as is
usually the case), that can cause you trouble. All sorts of trouble,
not just in an electronic output...
It's important to know that your 0 really means the absolute
silence. And it's also important to avoid clipping your signals. So in
general I advise you to simply forget about the possibility of the
-2**15 level. Unless you know what you are doing, in which case you
wouldn't need advice. :)
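A minimal NumPy sketch of this convention (function names are illustrative):

import numpy as np

def pcm16_to_float(x):
    # 0 maps to 0.0, -2**15 maps to -1.0, 2**15-1 maps to just under +1.0
    return np.asarray(x, dtype=np.float64) / 2**15

def float_to_pcm16(x):
    # scale by 2**15-1 so that +/-1.0 stays representable; clip for safety
    y = np.clip(np.asarray(x, dtype=np.float64), -1.0, 1.0)
    return np.round(y * (2**15 - 1)).astype(np.int16)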
Fun fact: in floating point representation there is a "+0" and a "-0",
because the notation is not two's complement; it's a sign, mantissa
and exponent. This is kind of a curse with binary number
representations: if it's not an extra -2**15, it's a dual 0
representation... So don't be upset with wasting the -2**15 level,
because when you work with FP you are also dealing with other kinds of
tiny odd resource wasting too!
Happy hacking! :)
On Sun, Nov 07, 2010 at 11:51:51PM +0100, Dan Goodman wrote:
> Hi all,
> In a linear PCM encoded wave file, the samples are typically stored
> either as unsigned bytes or signed 16 bit integers. Does anyone know
> (and preferably have a solid reference for) the correct conversion for
> both of these types to floats between -1 and 1?
> My assumption would be that no possible values should be wasted, so that
> -1 should correspond to 0 (or -2**15) and +1 should correspond to 255
> (or 2**15-1) for 8 (or 16) bit samples. But this has the odd feature
> that 0 is not represented, as it would have to correspond to 127.5 (or
> -0.5). That doesn't bother me too much, at least in the case of the
> unsigned bytes, but in the case of the signed 16 bit ints, it means that
> the zero of the signed 16 bit int doesn't correspond to the zero of the
> float, and that essentially the signedness of the 16 bit int is more or
> less ignored.
> The alternative is that the signedness is used and +/- 1 corresponds to
> +/- 2**15-1, which would mean that the value -2**15 is never used for 16
> bit LPCM, which seems to violate my intuition about how people used to
> design file formats back in the good old days when everything was very
> efficient.
> So which is it? Waste -2**15 or violate 0=0? I've found web pages that
> seem to suggest both possibilities, but I'm not sure what the definitive
> reference is for this.
> Apologies for slightly offtopic question, although I am using numpy and
> scipy. :)
> Dan
Nicolau Werneck <nwerneck@gmail.com> C3CF E29F 5350 5DAA 3705
http://www.lti.pcs.usp.br/~nwerneck 7B9E D6C4 37BB DA64 6F15
Linux user #460716
"One man's "magic" is another man's engineering. "Supernatural" is a null word. "
-- Robert Heinlein
Supply and Demand (in that order)
Professor Nunes writes (and attaches a chart):
""Stilettoheels" is mistaken.The graph shows the traditional measure of growth as the latest four Qtr average over four qtyr average four qtrs ago. (or average GDP for year t over average GDP for
year t-1).The "depth' of this recession (GDP wise) is still far away from the one in 1981-82 as you mentioned and "Stilettoheels" contested."
I think part of the dispute is whether you count the mini recession that shortly preceded the 1981-82 recession. Real GDP in 1981 Q2 (just before the famous 1981-82 recession) was hardly greater than
it was in 1980 Q1. The 1981-82 recession therefore erased more than 3 years of GDP growth.
If real GDP growth resumes this year, and real GDP stays above my "11 trillion real GDP floor", about the same will have occurred: the 2008-9 recession will have erased less than 4 years real GDP
2 comments:
stilettoheels said...
OK Mulligan and Nunes, I must be dazed and confused but let's do some real GDP arithmetic. Remember Mulligan posted levels not percentages. Levels must be interpreted in context of percentages.
We will do it 2 ways.
1. Cumulative percentage decline from NBER peak to trough.
2. Cumulative percentage decline from GDP peak to trough.
Real GDP quarterly levels are here. NBER peaks are here.
1. 1981.Q3 NBER peak level is 5.3298, 1982.Q4 trough level is 5.1898. Cumulative decline is 2.627%. 2007.Q4 NBER peak level is 11.6207, 2009.Q1 (trough?) level is 11.3409. Cumulative decline is
2. 1981.Q3 GDP peak level is 5.3298,
1982.Q3 trough level is 5.1852. Cumulative decline is 2.713%. 2008.Q2 GDP peak level is 11.7274, 2009.Q1 (trough?) level is 11.3409. Cumulative decline is 3.296%.
The 2007 recession cumulative GDP (not NBER) decline already exceeds the 1981 recession, subject to revisions.
Mulligan's assertion that the level of GDP can fall from either 11.6207 or 11.7274 to 11.0 and the percentage decline is less than either 2.627% or 2.713% is an egregious error.
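(A quick Python check of the arithmetic above, using the quoted levels; this verification is not part of the thread:)

for label, peak, trough in [
    ("1981Q3 -> 1982Q4 (NBER)", 5.3298, 5.1898),
    ("2007Q4 -> 2009Q1 (NBER)", 11.6207, 11.3409),
    ("1981Q3 -> 1982Q3 (GDP)", 5.3298, 5.1852),
    ("2008Q2 -> 2009Q1 (GDP)", 11.7274, 11.3409),
]:
    print(label, format((peak - trough) / peak, ".3%"))
# 2.627%, 2.408%, 2.713%, 3.296%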
Joao Marcus said...
Definition of growth: "A positive change in the level of production of goods and services by a country over a certain period of time".
So recession means negative growth.
When year end comes along the calculation of GDP growth in 2009 will find the average level of GDP in 2009 and compare with the average level in 2008. We can do the same thing quarter by quarter
and find the average, say, of the GDP level between 08.2 and 09.1 (A) and compare that to the average of 07.2 to 08.1 (B). A/B will give you the growth (or fall) in GDP over the period. The
expression "a certain period of time" is usually meant to be 1 year (4 quarters). It provides a smoothed "trend". It is based on this metric that the figure shows that the present recession is
much less deep than the 81-82 recession. It may become deeper but that only the "future" will tell!
[FOM] 502: Embedded Maximal Cliques 5
Harvey Friedman hmflogic at gmail.com
Wed Sep 26 01:21:36 EDT 2012
We have revamped sections 9 and 11. We have made some other minor
upgrades and added polish.
We have posted the revised September 26, 2012 version of the entire
Extended Abstract at my website at
downloadable manuscripts
manuscript 72, Embedded Maximal Cliques and Incompleteness, September
26, 2012, 17 pages.
This supersedes the September 23, 2012 version that was previously
posted on my website.
For the reader's convenience, we repeat the Abstract for the Extended
Abstract here, and the Table of Contents (again changed a little):
Abstract. Every order invariant graph on Q≥0^k has an f embedded
maximal clique, where f is +1 on {0,...,n}. Every order invariant
graph on Q≥0^k has an f embedded maximal clique, where f is +1 on
{0,...,n} extended by the identity on Q>n+1. We give a proof of the
first statement within the usual ZFC axioms for mathematics. We give a
proof of the second statement (and more general statements) that goes
well beyond ZFC, and establish that ZFC does not suffice (assuming ZFC
is consistent). As a consequence of the Gödel completeness theorem,
both statements are Π01. We present a nondeterministic algorithm for
generating finite f embedded cliques. We prove the explicitly Π01
sentence asserting that the algorithm succeeds, by going well beyond
ZFC - and show that ZFC does not suffice (assuming ZFC is consistent).
We also propose a practical form of the algorithm, with the
expectation that actual certificates will be produced. We suggest that
this can be used as a confirmation of the consistency, or pragmatic
consistency, of ZFC and some of its far reaching extensions.
1. Graphs, cliques, embeddings.
2. Order invariant and order theoretic.
3. OIG(J,f).
4. OIG(J,f1,...,fn).
5. OIG characterizations
6. Total embeddings.
7. n-invariance.
8. General conjectures.
9. Finite embedded cliques.
10. Extremely strong statement.
11. Certificates.
I use http://www.math.ohio-state.edu/~friedman/ for downloadable
manuscripts. This is the 502nd in a series of self contained numbered
postings to FOM covering a wide range of topics in f.o.m. The list of
previous numbered postings #1-449 can be found
in the FOM archives at
450: Maximal Sets and Large Cardinals II 12/6/10 12:48PM
451: Rational Graphs and Large Cardinals I 12/18/10 10:56PM
452: Rational Graphs and Large Cardinals II 1/9/11 1:36AM
453: Rational Graphs and Large Cardinals III 1/20/11 2:33AM
454: Three Milestones in Incompleteness 2/7/11 12:05AM
455: The Quantifier "most" 2/22/11 4:47PM
456: The Quantifiers "majority/minority" 2/23/11 9:51AM
457: Maximal Cliques and Large Cardinals 5/3/11 3:40AM
458: Sequential Constructions for Large Cardinals 5/5/11 10:37AM
459: Greedy CLique Constructions in the Integers 5/8/11 1:18PM
460: Greedy Clique Constructions Simplified 5/8/11 7:39PM
461: Reflections on Vienna Meeting 5/12/11 10:41AM
462: Improvements/Pi01 Independence 5/14/11 11:53AM
463: Pi01 independence/comprehensive 5/21/11 11:31PM
464: Order Invariant Split Theorem 5/30/11 11:43AM
465: Patterns in Order Invariant Graphs 6/4/11 5:51PM
466: RETURN TO 463/Dominators 6/13/11 12:15AM
467: Comment on Minimal Dominators 6/14/11 11:58AM
468: Maximal Cliques/Incompleteness 7/26/11 4:11PM
469: Invariant Maximality/Incompleteness 11/13/11 11:47AM
470: Invariant Maximal Square Theorem 11/17/11 6:58PM
471: Shift Invariant Maximal Squares/Incompleteness 11/23/11 11:37PM
472. Shift Invariant Maximal Squares/Incompleteness 11/29/11 9:15PM
473: Invariant Maximal Powers/Incompleteness 1 12/7/11 5:13AM
474: Invariant Maximal Squares 01/12/12 9:46AM
475: Invariant Functions and Incompleteness 1/16/12 5:57PM
476: Maximality, CHoice, and Incompleteness 1/23/12 11:52AM
477: TYPO 1/23/12 4:36PM
478: Maximality, Choice, and Incompleteness 2/2/12 5:45AM
479: Explicitly Pi01 Incompleteness 2/12/12 9:16AM
480: Order Equivalence and Incompleteness
481: Complementation and Incompleteness 2/15/12 8:40AM
482: Maximality, Choice, and Incompleteness 2 2/19/12 7:43AM
483: Invariance in Q[0,n]^k 2/19/12 7:34AM
484: Finite Choice and Incompleteness 2/20/12 6:37AM
485: Large Large Cardinals 2/26/12 5:55AM
486: Naturalness Issues 3/14/12 2:07PM
487: Invariant Maximality/Naturalness 3/21/12 1:43AM
488: Invariant Maximality Program 3/24/12 12:28AM
489: Invariant Maximality Programs 3/24/12 2:31PM
490: Invariant Maximality Program 2 3/24/12 3:19PM
491: Formal Simplicity 3/25/12 11:50PM
492: Invariant Maximality/conjectures 3/31/12 7:31PM
493: Invariant Maximality/conjectures 2 3/31/12 7:32PM
494: Inv Max Templates/Z+up, upper Z+ equiv 4/5/12 4:17PM
495: Invariant Finite Choice 4/5/12 4:18PM
496: Invariant Finite Choice/restatement 4/8/12 2:18AM
497: Invariant Maximality Restated 5/2/12 2:49AM
498: Embedded Maximal Cliques 1 9/18/12 12:43AM
499. Embedded Maximal Cliques 2 9/19/12 2:50AM
500: Embedded Maximal Cliques 3 9/20/12 10:15PM
501: Embedded Maximal Cliques 4 9/23/12 2:16AM
Harvey Friedman
[Numpy-discussion] Arrays of Python Values
Ian Mallett geometrian@gmail....
Fri Jul 23 03:16:41 CDT 2010
So working on the radiosity renderer:
The code now runs fast enough to generate the data required to draw that.
Now, I need to optimize the radiosity calculation, so that it will converge
in a reasonable amount of time. Right now, the individual blocks
("patches") are stored as instances of Python classes in a list.
I'd heard that NumPy supports arrays of Python objects, so, I simply made an
array out of the list, which seemed ok. Unfortunately, there is a custom
sorting operation that sorted the list of by an attribute of each class:
self.patches.sort( lambda x,y:cmp(x.residual_radiance,y.residual_radiance),
reverse=True )
Because I've never used arrays of Python objects (and Googling didn't turn
up any examples), I'm stuck on how to sort the corresponding array in NumPy
in the same way.
Of course, perhaps I'm just trying something that's absolutely impossible,
or there's an obviously better way. I get the feeling that having no Python
objects in the NumPy array would speed things up even more, but I couldn't
figure out how I'd handle the different attributes (or specifically, how to
keep them together during a sort).
What're my options?
Thanks again for the extremely valuable help,
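One standard option, sketched with a structured array: each attribute becomes a named field, so whole records stay together during a sort (field names and values here are illustrative):

import numpy as np

patches = np.array(
    [(0.8, 1.0), (2.5, 3.1), (1.2, 0.7)],
    dtype=[("residual_radiance", "f8"), ("area", "f8")],
)

# argsort on one field, reversed for descending order; indexing with the
# resulting permutation reorders every field consistently
order = np.argsort(patches["residual_radiance"])[::-1]
patches = patches[order]
print(patches["residual_radiance"])   # [ 2.5  1.2  0.8]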
PERSONAL SELLING:
market potential
an estimate of the maximum possible sales of a good or service for an entire industry during a stated time period
sales potential
refers to the maximum market share that a particular firm can achieve under ideal conditions
sales forecast
an estimate of the dollar or unit sales for a specific future period under a proposed marketing plan or program for an individual firm
NOTE: a forecast is what is realistically expected, not what is hoped or desired
How might you predict demand for:
• wind generators in 2025
• Ford Escorts next year
• size 3a rivets made by Acme Mfg.
• refrigerators in two years
• Wendy's menu strips made by VFSign Co.
Controllable Factors
those that are under control of the firm
• pricing
• distribution
• promotion
• product characteristics
• product mix
• account policies
• choice of customers
• etc.
Uncontrollable Factors
environmental elements over which the firm has little, if any, direct control
• economy, interest rates, inflation
• public policy, government regulation
• political conditions
• market factors, changing demographics
• competitors, competitor actions
• supplies, supplier actions
• industry trends
• etc.
Three Basic Approaches:
• Judgmental / Qualitative
• Relational
• Analytical / Quantitative
Judgmental / Qualitative Techniques
• subjective; based on a hunch, intuition
• assume that somebody knows the answer and ask them
• experience based
• subjective: might result in bias
Judgmental / Qualitative Techniques
• Jury of Executive Opinion
• Sales Force Composite
• User's Expectation
• Delphi Techniques
• Scenario Method
Judgmental / Qualitative Techniques
Useful for:
long range forecasting
• e.g., where technological, political, etc. factors play a significant role
when data is limited or non-existent
• e.g., new product launch
Relational Techniques
• assume cause and effect, and cause can be used to predict sales
• if you know one variable, you can forecast the other
Relational Techniques
• leading indicators
□ e.g., housing starts suggest refrigerator sales
□ e.g., births suggest college enrollments
• regression techniques
□ assume a straight line; cannot account non-linear sales
□ in some cases, assumes a causal relationship between time and sales (don't repeat this one on a stats exam!)
□ use a ruler for "eyeball regression"
Analytical / Quantitative Techniques
time series approaches
• assume that historical data can be used to predict future demand
• all we look at is historical data over time
• used to reduce the element of subjectivity
trend: used to describe a time series that is not flat
stationary: used to describe a time series as flat
Analytical / Quantitative Techniques
Four Approaches:
• naive
• cumulative mean
• moving average
• exponential smoothing
NOTE: cumulative mean is mentioned to develop insights into these methods and is generally not a method that is used in practice.
Idea behind what we will be doing:
• we want to smooth the data
• we want to find the pattern in the noise
S[t+1] = S[t]
• cumulative mean looks at all data
• naive approach looks at no data past the present
• forecast for the next period is the same for the last period
• works best when data follows a "random walk" or is very noisy
• best in the short run, not so good in the long run
• does not work with data that is trended or has a clear pattern
• assumes high volatility
S^[t+1] = (S[1] + S[2] + . . . + S[t]) / t
• assumes that all data are equally relevant
• never throw anything out
• not frequently used
period sales forecast
1 16,250 --
2 17,000 16,250
3 20,000 16,625
4 16,000 17,750
5 15,000 17,312
6 17,250 16,850
7 18,000 16,917
8 20,000 17,071
9 -- 17,438
S^[t+1] = (S[t] + S[t-1] + S[t-2] + . . . + S[t-(N-1)]) / N
• we want to try to "average out" the forecast to cancel out noise
• looks for some sort of trend up or down; attempts to smooth out the trend, but always lags behind the trend
small N
the forecast will quickly respond to changes, but we lose the "averaging out" effect which cancels out noise
large N
we get good averaging out of noise, but poor response; sluggish
N is usually chosen by trial and error. Whatever has worked the best in predicting past data is presumed to be the best for predicting the next period.
Note: a "period" is some amount of time. It could be a year, a month, a week, an hour, or a millisecond.
period sales 3-period total forecast
1 16,250
2 17,000
3 20,000 53,250
4 16,000 53,000 17,750 (53,250/3)
5 15,000 51,000 17,667
6 17,250 48,250 17,000
7 18,000 50,250 16,083
8 20,000 55,250 16,750
9 -- 18,417 (55,250/3)
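Both tables are easy to reproduce; a short Python sketch using the sales series above:

sales = [16250, 17000, 20000, 16000, 15000, 17250, 18000, 20000]

# cumulative mean: forecast for period t+1 is the mean of all t observations so far
cum_mean = [sum(sales[:t]) / t for t in range(1, len(sales) + 1)]
print([round(f) for f in cum_mean])
# [16250, 16625, 17750, 17312, 16850, 16917, 17071, 17438]

# 3-period moving average: forecast for period t+1 is the mean of the last N observations
N = 3
moving_avg = [sum(sales[t - N:t]) / N for t in range(N, len(sales) + 1)]
print([round(f) for f in moving_avg])
# [17750, 17667, 17000, 16083, 16750, 18417]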
S^[t+1] = aS[t] + (1-a)S^[t]
a is a smoothing constant
naive forecast acts as though only the most recent observation has any forecasting value; all prior observations are treated as worthless
cumulative mean procedure ignores the age of the observation; all observations are treated as equally relevant, no matter how old the observation
moving average acts as though the last N periods of data are equally useful but that all prior observations are worthless
It might seem reasonable that historical observations gradually lose their value rather than so abruptly as in the moving average.
This idea leads to the concept of weighted moving averages.
• assumes that the most recent data is the most valuable
• assumes that data gradually loses its value over time
• similar to moving average except:
□ most recent sales are weighted more heavily
□ older sales weighted less
S^[t+1] = aS[t ]+ (1-a)S^[t]
a is a smoothing constant
large a
• fast smoothing
• heavy emphasis on new data
• highly responsive but "nervous" to noise
small a
• slow smoothing
• heavier reliance on older data
• sluggish response but calm to noise
The value of the smoothing constant is usually chosen by trial and error. Whatever has worked the best in predicting past data is presumed to be the best for predicting the next period.
a = .8
period sales forecast
1 16,250
2 17,000 16,250 (use naive to seed)
3 20,000 16,850 (.8)(17,000) + (.2)(16,250)
4 16,000 19,370 (.8)(20,000) + (.2)(16,850)
5 15,000 16,774 (.8)(16,000) + (.2)(19,300)
6 17,250 15,335 .
7 18,000 16,867 .
8 20,000 17,773 .
9 -- 19,555 (.8)(20,000) + (.2)(17,773)
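The same check for exponential smoothing, seeded with the naive forecast as in the table:

sales = [16250, 17000, 20000, 16000, 15000, 17250, 18000, 20000]
a = 0.8

forecasts = [sales[0]]                 # naive seed: the forecast for period 2
for obs in sales[1:]:
    forecasts.append(a * obs + (1 - a) * forecasts[-1])
print([round(f) for f in forecasts])
# [16250, 16850, 19370, 16674, 15335, 16867, 17773, 19555]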
st: creating cross tables/ matrices with expected/ observed frequencies
From Nick Darson <nick.darson@googlemail.com>
To statalist@hsphsun2.harvard.edu
Subject st: creating cross tables/ matrices with expected/ observed frequencies from long data set
Date Sat, 22 Sep 2012 10:35:49 +1000
Dear Statalisters,
I would like to create several tables/matrices with expected
frequencies and observations (to be able to carry out a Chi Square
test of how well my model describes the data).
I have the following data set (each person chose from 2 sets, the
first one containing 3 options, the second one containing 4 options),
with ID=individuals (300), set=choice set, option= option, choice=
chosen option (dummy), Prob= expected probability for each individual
based on my random-effects logit model.
ID Set Option Choice Prob
1 1 A 0 0.2
1 1 B 1 0.7
1 1 C 0 0.1
1 2 D 0 0.1
1 2 E 0 0.2
1 2 F 1 0.4
1 2 G 0 0.3
First I would need a cross-table with observed frequencies for the two
choice sets (looking as follows):
      A   B   C
D
E
F
G
I am familiar with the tab command, but a bit lost on how to obtain
the table given my data-set arrangement? Any help would be
Then I would need the same table as above, but with expected (joint)
probabilities instead of observed frequency. For this, I had the
following strategy in mind:
1.) Create 2 matrices, one for each choice set
2.) Transpose one matrix and multiply to obtain joint probability
For 1.), I would need one table with the probabilities listed (column=
Individuals; rows= options) as follows:
ID1 ID2 ID3 …..
A 0.2 …..
B 0.7
C 0.1
(and same table/ matrix for set 2).
How would I do this in Stata? To my understanding, tab and tabstat
only provide summaries, etc., but I want the original values of Prob
listed.
BTW: this is a simplification of the original data set (more sets and
options) and I would like to do several things in one set… therefore,
“keep”/“reshape” combos would not be suitable.
how to integrate this??
April 5th 2009, 12:54 PM #1
I'm trying to integrate the function $x(1-x)$, which is $x-x^2$. If I make u=x^2 or u=x-x^2 for u-substitution, it never turns out right though... please, someone help
April 5th 2009, 12:58 PM #2
u-substitution?! this is power rule!
remember, $\int x^n ~dx = \frac {x^{n + 1}}{n + 1} + C$ ........ (for any constant $n \ne -1$, of course)
here you have $\int x^1~dx - \int x^2~dx$, just integrate each separately using the power rule
April 5th 2009, 12:58 PM #3
No, don't substitute anything
$\int x-x^2 ~dx=\int x ~dx-\int x^2 ~dx$
Then recall this common antiderivative :
$\int x^n ~ dx=\frac{x^{n+1}}{n+1}+C$, for any $n \neq -1$
Edit : woops, too late
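For completeness, carrying the power rule through both terms gives

$\int x(1-x)\,dx = \int x\,dx - \int x^2\,dx = \frac{x^2}{2} - \frac{x^3}{3} + C$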
East Brimfield, MA Math Tutor
Find an East Brimfield, MA Math Tutor
...I have been living in the United States for 13 years, raised two children through Massachusetts school systems and am familiar with the requirements of MCAS, ACT, and SAT, as well as AP French
tests. I have tutored children through K-12 in maths, preparation to SAT, and AP French, and can teach ...
3 Subjects: including algebra 1, French, elementary math
...As an academic teacher and parent of a current high school senior, I have both an educational and parental perspective to help you child and you through this task. I have a Masters degree in
Special Education, with a license in SPED. I have worked with several autistic students during my years in education and also with children in my girl scout troop.
33 Subjects: including calculus, discrete math, differential equations, Aspergers
...Prior to becoming a graduate student, I worked for the Department of Civil Engineering as an undergraduate assistant where my primary job was to help grad students edit potential journal
articles. In the four years since I started working as an editor, I have continued working with graduate stud...
34 Subjects: including prealgebra, precalculus, ACT Math, SAT math
...As far as my tutoring background, I started in high school, when I spent my study halls tutoring student peers who needed the extra help. I then spent time after school volunteering at an elementary
school to help out children who were falling behind in class. Though I did not tutor much during college,...
17 Subjects: including calculus, trigonometry, actuarial science, linear algebra
...Math can be fun, you can learn through games, and sometimes... there is more than one answer to a problem! I was a public school reading tutor in the Success for All program. I passed the
certification for this position through the Worcester Public School system. I have also tutored 4th and 5th grade students in science and reading. I have done prep work with students for the MCAS
12 Subjects: including geometry, algebra 1, elementary (k-6th), probability
Related East Brimfield, MA Tutors
East Brimfield, MA Accounting Tutors
East Brimfield, MA ACT Tutors
East Brimfield, MA Algebra Tutors
East Brimfield, MA Algebra 2 Tutors
East Brimfield, MA Calculus Tutors
East Brimfield, MA Geometry Tutors
East Brimfield, MA Math Tutors
East Brimfield, MA Prealgebra Tutors
East Brimfield, MA Precalculus Tutors
East Brimfield, MA SAT Tutors
East Brimfield, MA SAT Math Tutors
East Brimfield, MA Science Tutors
East Brimfield, MA Statistics Tutors
East Brimfield, MA Trigonometry Tutors
Nearby Cities With Math Tutor
Brimfield, MA Math Tutors
Dudley Hill, MA Math Tutors
East Putnam, CT Math Tutors
East Willington, CT Math Tutors
Lambs Grove, MA Math Tutors
Laurel Hill, CT Math Tutors
Old Furnace, MA Math Tutors
Pomfret Landing, CT Math Tutors
Putnam Heights, CT Math Tutors
Rhodesville, CT Math Tutors
Richardson Corners, MA Math Tutors
Sandersdale, MA Math Tutors
Sawyer District, CT Math Tutors
West Ashford, CT Math Tutors
Westville Lake, OH Math Tutors
[Numpy-discussion] comparing floating point numbers
Ondrej Certik ondrej@certik...
Mon Jul 19 20:31:00 CDT 2010
I was always using something like
abs(x-y) < eps
(abs(x-y) < eps).all()
but today I needed to also make sure this works for larger numbers,
where I need to compare relative errors, so I found this:
and wrote this:
def feq(a, b, max_relative_error=1e-12, max_absolute_error=1e-12):
    a = float(a)
    b = float(b)
    # if the numbers are close enough (absolutely), then they are equal
    if abs(a-b) < max_absolute_error:
        return True
    # if not, they can still be equal if their relative error is small
    if abs(b) > abs(a):
        relative_error = abs((a-b)/b)
    else:
        relative_error = abs((a-b)/a)
    return relative_error <= max_relative_error
Is there any function in numpy, that implements this? Or maybe even
the better, integer based version, as referenced in the link above?
I need this in tests, where I calculate something on some mesh, then
compare to the correct solution projected on some other mesh, so I
have to deal with accuracy issues.
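(For what it's worth, numpy does ship a combined absolute/relative test, np.allclose; note that its criterion |a - b| <= atol + rtol*|b| is asymmetric in b, unlike the symmetric feq above:)

import numpy as np

a = np.array([1.0, 1e10])
b = a * (1 + 1e-13)        # relative error of 1e-13 on every entry
print(np.allclose(a, b, rtol=1e-12, atol=1e-12))   # True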
Functional Analysis question
May 1st 2010, 02:38 AM
Functional Analysis question
I can't work this out
show $c_0$ (with the usual sup norm) is not a Hilbert Space.
my main problem stems from using the $c_0$ space which I don't fully grasp, I know the method for this type of problem so I'm really looking for a suggestion as a what to let x and y equal and
what their norms should look like
May 1st 2010, 07:28 AM
I can't work this out
show $c_0$ (with the usual sup norm) is not a Hilbert Space.
my main problem stems from using the $c_0$ space which I don't fully grasp, I know the method for this type of problem so I'm really looking for a suggestion as a what to let x and y equal and
what their norms should look like
The way to show results like this is to use the parallelogram identity. You can choose almost any two elements of the space to see that they do not satisfy the identity. The easiest choice would
be to take for example x to be the sequence in $c_0$ having a 1 for its first coordinate and 0 for every other coordinate; and take y to be the sequence having a 1 for its second coordinate and 0
for every other coordinate. Then x, y, x+y and x–y all have $c_0$-norm 1 | {"url":"http://mathhelpforum.com/differential-geometry/142415-functional-analysis-question-print.html","timestamp":"2014-04-16T19:06:48Z","content_type":null,"content_length":"5795","record_id":"<urn:uuid:007ac421-5db1-4b8f-baa3-f1da664dab44>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00624-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hirosi Ooguri
Hirosi Ooguri is a leading theorist in high energy physics and works at the interface of elementary particle physics, string theory, and related mathematics. He has made fundamental contributions to
conformal field theories in two dimensions and to topological string theory. He is also widely recognized for his research on geometric description of gauge theory dynamics, including geometric
engineering and the AdS/CFT correspondence.
Ooguri was born in 1962 in Japan and studied physics and mathematics at Kyoto University. After two years in the Graduate School of Kyoto University, at the age of 23, he was offered a tenured
assistant professor position at the University of Tokyo. After spending a year on sabbatical at the Institute for Advanced Study in Princeton, he moved to the University of Chicago as an assistant
professor in physics. A year later, he was lured back to Japan as an associate professor of mathematical physics at the Research Institute for Mathematical Sciences in Kyoto University. In Japan, he
was a co-principal investigator of the interdisciplinary project of physics and mathematics called "Infinite Analysis," funded by the Japan Society for the Promotion of Science. In 1994, he became a
professor of physics at the University of California at Berkeley. Two years later, he also received a joint appointment at the Lawrence Berkeley National Laboratory as a faculty senior scientist. In
2000, he moved to the California Institute of Technology where he is the Fred Kavli Professor of Theoretical Physics and Mathematics.
Ooguri is a member of the Advisory Board and Steering Committee of the Kavli Institute for Theoretical Physics in Santa Barbara. He has also organized two long-term workshops at the Institute and
participated in various other activities there.
Additional Information | {"url":"http://www.kavlifoundation.org/hirosi-ooguri","timestamp":"2014-04-19T00:22:06Z","content_type":null,"content_length":"14181","record_id":"<urn:uuid:1ce7144e-d7fd-4112-8489-1121598e8166>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum field theory in Solovay-land
Constructing quantum field theories is a well-known problem. In Euclidean space, you want to define a certain measure on the space of distributions on R^n. The tricky part is that the detailed
properties of the distributions that you get are sensitive to the dimension of the theory and the precise form of the action.
In classical mathematics, measures are hard to define, because one has to worry about somebody well-ordering your space of distributions, or finding a Hamel basis for it, or some other AC idiocy. I
want to sidestep these issues, because they are stupid, they are annoying, and they are irrelevant.
Physicists know how to define these measures algorithmically in many cases, so that there is a computer program which will generate a random distribution with the right probability to be a pick from
the measure (were it well defined for mathematicians). I find it galling that there is a construction which can be carried out on a computer, which will asymptotically converge to a uniquely defined
random object, which then defines a random-picking notion of measure which is good enough to compute any correlation function or any other property of the measure, but which is not sufficient by
itself to define a measure within the field of mathematics, only because of infantile Axiom of Choice absurdities.
So is the following physics construction mathematically rigorous?
Question: Given a randomized algorithm P which with certainty generates a distribution $\rho$, does P define a measure on any space of distributions which includes all possible outputs with certainty?
This is a no-brainer in the Solovay universe, where every subset S of the unit interval [0,1] has a well defined Lebesgue measure. Given a randomized computation in Solovay-land which will produce an
element of some arbitrary set U with certainty, there is the associated map from the infinite sequence of random bits, which can be thought of as a random element of [0,1], into U, and one can then
define the measure of any subset S of U to be the Lebesgue measure of the inverse image of S under this map. Any randomized algorithm which converges to a unique element of U defines a measure on U.
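To fix notation (the symbol $f_P$ here is mine, introduced only for this write-up): if $f_P : [0,1] \to U$ denotes the map from the random bit stream to the output of the algorithm, the measure in question is the pushforward of Lebesgue measure $\lambda$,
$$\mu_P(S) = \lambda\big(f_P^{-1}(S)\big) = \lambda\{\, x \in [0,1] : f_P(x) \in S \,\}, \qquad S \subseteq U,$$
and in the Solovay universe the right-hand side is defined for every subset $S$, with no measurability hypothesis needed.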
Question: Is it trivial to de-Solovay this construction? Is there a standard way of converting an arbitrary convergent random computation into a measure, that doesn't involve a detour into logic
or forcing?
The same procedure should work for any random algorithm, or for any map, random or not.
EDIT: (in response to Andreas Blass) The question is how to translate the theorems one can prove when every subset of U gets an induced measure into the same theorems in standard set theory. You get
stuck precisely in showing that the set of measurable subsets of U is sufficiently rich (even though we know from Solovay's construction that they might as well be assumed to be everything!)
The most boring standard example is the free scalar field in a periodic box with all side lengths L. To generate a random field configuration, you pick every Fourier mode $\phi(k_1,...,k_n)$ as a
Gaussian with inverse variance $k^2/L^d$, then take the Fourier transform to define a distribution on the box. This defines a distribution, since the convolution with any smooth test function gives a
sum in Fourier space which is convergent with probability one. So in Solovay land, we are free to conclude that it defines a measure on the space of all distributions dual to smooth test functions.
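As a minimal computational sketch of this sampling step (Python with numpy is my choice here; the discrete normalization and the decision to zero out the divergent constant mode are my assumptions, not part of the description above):

import numpy as np

def sample_free_field(n=64, L=1.0, d=2, rng=None):
    # One sample of the discretized free field on an n^d periodic grid:
    # each Fourier mode is Gaussian with inverse variance k^2/L^d, i.e.
    # variance L^d/k^2; the divergent k=0 mode is set to zero by fiat.
    rng = np.random.default_rng() if rng is None else rng
    # The FFT of real white noise already satisfies phi(-k) = conj(phi(k)),
    # so filtering it by the mode amplitudes keeps the field real.
    noise_hat = np.fft.fftn(rng.standard_normal((n,) * d))
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    k2 = sum(np.meshgrid(*([k ** 2] * d), indexing="ij"))
    amp = np.zeros_like(k2)
    amp[k2 > 0] = np.sqrt(L ** d / k2[k2 > 0])
    return np.fft.ifftn(amp * noise_hat).real

Smearing the returned array against a lattice test function and refining the grid is the finite approximation of the convergence claim above.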
But the random free field is constructed in recent papers of Sheffield and coworkers by a much more laborious route, using the exact same idea, but with a serious detour into functional analysis to
show that the measure exists (see for instance theorem 2.3 in http://arxiv.org/PS_cache/math/pdf/0312/0312099v3.pdf). This kind of thing drives me up the wall, because in a Solovay universe, there is
nothing to do--- the maps defined are automatically measurable. I want to know if there is a meta-theorem which guarantees that Sheffield stuff had to come out right without any work, just by knowing
that the Solovay world is consistent.
In other words: is the construction (pick a random Gaussian free field by choosing each Fourier component as a random Gaussian of appropriate width and
Fourier transforming) considered a rigorous construction of a measure, without any further rigamarole?
EDIT IN RESPONSE TO COMMENTS: I realize that I did not specify what is required from a measure to define a quantum field theory, but this is well known in mathematical physics, and also explicitly
spelled out in Sheffield's paper. I realize now that it was never clearly stated in the question I asked (and I apologize to Andreas Blass and others who made thoughtful comments below).
For a measure to define a quantum field theory (or a statistical field theory), you have to be able to compute reasonably arbitrary correlation functions over the space of random distributions. These
correlation functions are averages of certain real valued functions on a randomly chosen distribution--- not necessarily polynomials, but for the usual examples, they always are. By "reasonably
arbitrary" I actually mean "any real valued function except for some specially constructed axiom of choice nonsense counterexample". I don't know what these distribtions look like a-priory, so
honestly, I don't know how to say anything at all about them. You only know what distributions you get out after you define the measure, generate some samples, and seeing what properties they have.
But in Solovay-land (a universe where every subset S of [0,1] is forced to have Lebesgue measure equal to the probability that a randomly chosen real number happens to be an element of S) you don't
have to know anything. The moment you have a randomized algorithm that converges to an element of some set of distributions U, you can immediately define a measure, and the expectation value of any
real valued function on U is equal to the integral of this function over U against that measure. This works for any function and any distribution space, without any topology or Borel Sets, without
knowing anything at all, because there are no measurability issues--- all the subsets of [0,1] are measurable. Then once you have the measure, you can prove that the distributions are continuous
functions, or have this or that singularity structure, or whatever, just by studying different correlation functions. For Sheffield, the goal was to show that the level sets of the distributions are
well defined and given by a particular SLE in 2d, but whatever. I am not hung up on 2d, or SLE.
If one were to suggest that this is the proper way to do field theory, and by "one" I mean "me", then one would get laughed out of town. So one must make sure that there isn't some simple way to
de-Solovay such a construction for a general picking algorithm. This is my question.
EDIT (in response to a comment by Qiaochu Yuan): In my view, operator algebras are not a good substitute for measure theory for defining general Euclidean quantum fields. For Euclidean fields,
statistical fields really, you are interested in any question one can ask about typical picks from a statistical distribution, for example "What is the SLE structure of the level sets in 2d" (Sheffield's
problem), "What is the structure of the discontinuity set"? "Which nonlinear functions of a given smeared-by-a-test-function-field are certainly bounded?" etc, etc. The answer to all these questions
(probably even just the solution to all the moment problems) contains all the interesting information in the measure, so if you have some non-measure substitute, you should be able to reconstruct the
measure from it, and vice-versa. Why hide the measure? The only reason would be to prevent someone from bring up set-theoretic AC constructions.
For the quantities which can be computed by a stochastic computation, it is traditional to ignore all issues of measurability. This is completely justified in a Solovay universe where there are no
issues of measurability. I think that any reluctance to use the language of measure theory is due solely to the old paradoxes.
mp.mathematical-physics lo.logic
4 If you really want to sidestep all the set-theoretic issues, why are you using measure theory as a conceptual framework at all? – Qiaochu Yuan Jun 26 '11 at 2:14
5 One can effectively replace a measure space by a suitable algebra of random variables on it (see for example en.wikipedia.org/wiki/Abelian_von_Neumann_algebra and
terrytao.wordpress.com/2010/02/10/245a-notes-5-free-probability), and it is possible that a suitable generalization of this construction may produce a "generalized measure theory" suitable for QFT. I have no idea if it's
expected that this works, but my point is the assumption that measure theory is a reasonable framework for QFT seems to be the assumption you should be, but aren't, challenging. – Qiaochu Yuan
Jun 26 '11 at 2:39
6 It's unclear to me why you want a measure defined on all subsets of $U$. As Andreas explained, your randomized algorithm does define a probability measure on some $\sigma$-algebra of subsets of
$U$. Is there any reason to believe this $\sigma$-algebra is insufficient? – François G. Dorais♦ Jun 26 '11 at 20:16
3 Sufficient for what? Please add some substance to your questions... – François G. Dorais♦ Jun 27 '11 at 4:42
12 Dear Ron Maimon: if you cannot refrain from using language like "dinky" or "infantile" or "this answer is too trivial", then it might be best if you found another place to ask your questions. –
S. Carnahan♦ Jun 27 '11 at 15:31
3 Answers
I don't know anything about the space of all distributions dual to smooth test functions, but do know a fair bit about computable measure theory (from a certain perspective).
First, you mention that you have a computable algorithm which generates a probability distribution. I believe you are saying that you have a computable algorithm from $[0,1]$ (or
technically the space of infinite binary sequences) to some set $U$ where $U$ is the space of distributions of some type.
Say your map is $f$. How are you describing the element $f(x) \in U$? In computable analysis, there is a standard way to talk about these things. We can describe each element of $U$ with
an infinite code (although each element has more than one code). Then $f$ works as follows: It reads the bits of $x$; from those bits, it starts to write out the code for $f(x)$. The
more bits of $x$ known, the more bits of the code for $f(x)$ known.
(Note, not every space has such a nice encoding. If the space isn't separable, there isn't a good way to describe each object while still preserving the important properties, namely the
topology. In your example above, say, is the space of distributions dual to smooth test functions a separable space, maybe in a weak topology? Does the encoding you use for
elements of $U$ generate the same topology?)
The important property of such a computable map is that it must be continuous (in the topology generated by the encoding, but these usually coincide with the topology of the space). Since
$f$ is continuous, we know we can induce a Borel measure on $U$ as follows. If $S$ is an open set then $f^{-1}(S)$ is open and $\mu(f^{-1}(S))$ is known. Similarly for any Borel set,
hence you have a Borel measure.
Borel measures are sufficient for most applications I can think of (you can integrate continuous functions and from them, define and integrate the L^p functions), but once again, I don't
know anything about your applications.
Also, if the function $f$ doesn't always converge to a point in $U$, but only does so almost everywhere, the function $f$ is not continuous, but it is still fairly nice and I believe stuff
can be said about the measure, although I need to think about it.
Update: If $f$ converges with probability one, then the set of input points that $f$ converges on is a measure one $G_{\delta}$ set, in particular it is Borel. The function remains
continuous on that domain (in the restricted topology). Hence there is still an induced Borel measure on the target space. (Take a Borel set; map it back. It is Borel on the restricted
domain, and hence Borel on [0,1]).
Update: Also, I am assuming that your algorithm directly computes the output from the input. I will give an example of what I mean. Say one wants to compute a real number. To compute it
directly, I should be able to ask the algorithm to give me that number to $n$ decimal places with an error bound of $1/10^n$. An indirect algorithm works as follows: The computer just
gives me a sequence of approximations that converge to the number. The computer may say $0,0,0,...$ so I think it converges to 0, but at some point it starts to change to $1,1,1,...$. I
can never be sure if my approximation is close to the final answer. Even if your algorithm is of the indirect type, it doesn't matter for your applications. It will still generate a Borel
map, albeit a more complex one than continuous, and hence it will generate a Borel measure on the target space. (The almost everywhere concerns are similar; they also go up in complexity,
but are still Borel.) Without knowing more about your application it is difficult for me to say much specific to your case.
Am I correct in my understanding of your construction, especially the computable side of it? For example, is this the way you describe the computable map from $[0,1]$ to $U$?
On a more general note, much of measure theory has been developed in a set theoretic framework. This isn't very helpful with computable concerns. But using various other definitions of
measures, one is able to once again talk about measure theory with an eye to what can and cannot be computed.
I hope this helps, and that I didn't just trivialize your question.
Yes on everything. The only topology I was thinking about is what you call the topology generated by the encoding. The only sticking point left is the one you mention about
almost-everywhere convergence. – Ron Maimon Jun 28 '11 at 5:37
I made some edits to the post addressing the almost everywhere convergence. (They are marked Update.) – Jason Rute Jun 28 '11 at 18:36
Thank you for explaining this constructive measure theory business. Although your answer is self-contained, I was wondering if you can add a literature pointer, just for my own
edification. For the specific questions: I didn't think about topology on the space of distributions, because Solovay guarantees that the measure will be defined on all subsets without
worrying about topology. But, as you pointed out, there is the implicit topology in the statement that the random-picking algorithm converges. This allows you to easily make a countable
dense set in the support. – Ron Maimon Jul 1 '11 at 1:49
Ron, there are two books: Computability in Analysis and Physics by Pour-El and Richards and Computable Analysis by Klaus Weihrauch. The first has less material, but might be easier to
read. The second is heavy on notation. The first also has a section on Fourier transforms which may be of interest to you. – Jason Rute Jul 4 '11 at 15:21
The question is not very clear, but the paragraph following it suggests that you might mean the following. Suppose we have an operation $P$ that takes as input an infinite sequence of binary
digits (or, almost equivalently, a number in $[0,1]$) and always produces an output in some set $U$ (of distributions). Does this induce a measure defined on all subsets of $U$? In general
(with or without Solovay, and regardless of what the elements of $U$ are), such a $P$ induces a measure on some subsets of $U$, namely those whose inverse-image under $P$ is Lebesgue
measurable. In Solovay's model where all sets of reals are Lebesgue measurable, the induced measure is thus defined on all subsets of $U$. In a universe where not all sets of reals are
Lebesgue measurable, the natural induced measure on $U$ will not, in general, be defined on all subsets of $U$. For example, $P$ might be the identity map of $[0,1]$. Or, if you insist on $U$
being a set of distributions, $P$ could send each $x\in[0,1]$ to the Dirac delta-distribution concentrated at $x$.
Instead of asking whether the natural induced measure on $U$ is defined on all subsets, one could ask (and maybe you meant to ask) whether there is a reasonable extension of this measure to
all subsets of $U$. In that case, I'd like to know what would count as reasonable.
The question is as follows: you can define the random free field (for example) by picking all Fourier components as Gaussian random with width $k^2/L^2$. This is a complete definition in
Solovay universe. Is it a complete definition in the standard universe? – Ron Maimon Jun 26 '11 at 14:26
It seems very unlikely to me that you can extend the measure in the presence of choice to arbitrary subsets, precisely because the measure has certain translation properties which should
allow a Vitali set (although I didn't do an explicit construction). – Ron Maimon Jun 26 '11 at 14:34
1 In answer to your first question, I think that what you propose is a complete definition in the sense that nothing more would need to be said to specify which measure you mean. The measure
it defines would not, however, have as its domain the collection of all subsets of $U$. – Andreas Blass Jun 26 '11 at 15:20
1 In answer to your second question: Your mention of translation invariance looks like at least the beginning of saying "what would count as reasonable". There are no translation-invariant
extensions of Lebesgue measure to all sets (in the standard universe where AC holds), but there might be extensions that are not translation-invariant. This is one reason why I would want
to know what sort of extensions you're willing to consider. – Andreas Blass Jun 26 '11 at 15:22
The construction of theorem 2.3 in the Sheffield paper is what I want to avoid. This is a theorem of Gross which is cited, which constructs a probability measure which has the properties
of the random picking measure. This theorem is trivial in Solovay-land, because the definition in the statement of the theorem automatically constructs the measure, but it is obviously
considered nontrivial by Sheffield et al. Is there a way to transfer the trivial proof to the usual universe, and avoid this Gross thing? – Ron Maimon Jun 26 '11 at 18:36
I don't know anything about the Solovay land, but I can say a little about the random functions and this may be related to what you're going for.
People have for a while been considering random functions which are generated in this way in the context of nonlinear dispersive equations. Probably the most interesting examples are the Gibbs
measures associated to infinite dimensional Hamiltonian systems like nonlinear Schrödinger or wave equations. See for example these slides of Jim Colliander:
which has an outline of the technicalities to determine on which Banach space your measure will be supported. More can be found from this lecture of Gigliola Staffilani:
Now, if you want to put a measure on an infinite dimensional space (like the space of distributions), its support will necessarily be extremely thin even if you do get a dense subset. So for
instance if you start with some $f \in L^2$ and randomize its Fourier coefficients to make a random function $f^\omega$, you get an increased integrability $f^\omega \in L^p$ for all
$p < \infty$ almost surely. There are other measures you can use besides Gaussian measures where the same phenomenon will occur (random $\pm 1$'s will also do the trick by the well-known
Khintchine's inequality).
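Written out (the notation here is mine): the randomization takes $f = \sum_n a_n e^{inx} \in L^2$ to
$$f^\omega(x) = \sum_n \varepsilon_n(\omega)\, a_n e^{inx},$$
with $(\varepsilon_n)$ independent standard Gaussians or random signs, and Khintchine's inequality yields $f^\omega \in L^p$ almost surely for every $p < \infty$.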
Using Gibbs measures (or just ad hoc randomizations like randomized Fourier coefficients), people have been able to establish almost surely globally defined flows for random data which is
"supercritical" when measured in a Sobolev space. The Gibbs measures just come with the nice feature of being invariant under the flow provided you can construct the flow. In physical space,
the random data looks much better than a typical element in the Banach space. For this reason, you can show solutions to nonlinear evolution equations exist almost surely and even establish
some kind of almost sure well-posedness when you deterministically would not have such a result. But there is a limit to what can be achieved with this freedom. In particular, if your
variances not only fail to decay but even grow with the frequency, then it can be very difficult or simply impossible to construct a solution to the equation. The example you gave of variance
like $k^2$, for instance, is likely to be too large at frequency infinity -- i.e. too irregular -- for any sensible solution to a familiar nonlinear evolution equation to exist, even though,
of course, it will make sense as a measure on the space of distributions and have support in some negative Sobolev space you can explicitly compute. Since Fourier multipliers applied to the
random Fourier series will also be random Fourier series, you will also be able to solve linear PDE with the random data with no difficulty, and these solutions will also possess higher
integrability than you would ever sensibly ask for. But if the equation you have in mind is the cubic nonlinear Schrödinger and you're insistent about very large data, what you're asking for
could likely be hopeless for reasons much more serious than this axiom of choice stuff. You have to pay attention to the space because even cubing -- let alone solving the equation -- may be ill-defined.
I inverted the variance by accident--- I meant inverse variance is k^2. This is the typical free bosonic quantum field variance, and it is the same as a Boltzmann distribution for an elastic
sheet. The inverse k^2 variance is still irregular at high frequencies at high dimensions, it is only continuous (Brownian) in 1d, and somewhat regular in dimension 2. – Ron Maimon Jun 28
'11 at 12:50
I am aware of the issues with nonlinear functions of quantum fields. Those need to be dealt with by using the appropriate renormalization at the computational stage. I just wanted to make
sure that given a solution to the computational problem (admittedly the more difficult one), that there are no further AC type difficulties in defining the theory. – Ron Maimon Jun 28 '11 at
| {"url":"http://mathoverflow.net/questions/68825/quantum-field-theory-in-solovay-land/68996","timestamp":"2014-04-19T14:57:16Z","content_type":null,"content_length":"95548","record_id":"<urn:uuid:cecad06a-3af5-4949-9c5b-6a0ec935f70f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Re:st: Re: fixed effects vs random effects
From "Rodrigo A. Alfaro" <ralfaro76@hotmail.com>
To <statalist@hsphsun2.harvard.edu>
Subject Re: Re:st: Re: fixed effects vs random effects
Date Sat, 3 Feb 2007 23:28:54 -0500
(1) You are using IV in the third step of HT!! Then you could use -ivreg2-,
but the approach is different: FE or FD instead of RE!!
(2) Below is an example of doing HT "by hand"... it requires an improvement
for the unbalanced case using the formulae of the manual; after that you could
add as many instruments as you wish in the third step.
(3) I didn't ask for your R2's. I would like to know why you say that some X
has small within variation. Pretend x1 is that variable, then I would like
to see
-xtsum x1- in order to understand what you mean by small variation.
(4) I would like to point out that mergers are treated differently by
different authors. For example, you have banks A and B, they merge and what
you have... bank A or B is no longer in the dataset, but this does not mean
that B has left the market; also, you are over-estimating the effect of A!!!
I read about this long ago; it is not easy to think about... but some authors
suggest keeping A and B separate, dividing the new A, and, depending on the
case, maybe considering a new C that hits the market. You should search how
they deal with mergers... it is not the same as a randomly unbalanced panel.
/***************** Example *******************************/
/// Setting
qui {
webuse psidextract, clear
local tv "lwage wks south smsa ms exp exp2 occ ind union"
local ti "fem blk ed"
keep `tv' `ti' id t
sort id t
tsset id t
foreach i of varlist `tv' {
by id: egen double `i'_m=mean(`i')
gen double `i'_dm = `i'-`i'_m
}
}
/// First step
reg lwage_dm wks_dm south_dm smsa_dm ms_dm exp_dm exp2_dm ///
occ_dm ind_dm union_dm, noc
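* within residual variance; the df used here is N*(T-1) = 595*6 = 3570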
sca sig_e2=e(rss)/3570
mat beta=e(b)
mat colnames beta = wks_m south_m smsa_m ms_m exp_m exp2_m ///
occ_m ind_m union_m
mat score double xbm_w = beta
gen double di = lwage_m - xbm_w
mat colnames beta = wks south smsa ms exp exp2 occ ind union
mat score double xb_w = beta
/// Second step
reg di fem blk ed (wks south smsa ms fem blk)
predict double zg, xb
reg di fem blk ed (wks_m south_m smsa_m ms_m fem blk) if t==7
predict double zg2, xb
/// Error and theta
g double fit1=lwage - xb_w - zg2
by id: gen double fit2=sum(fit1)
by id: replace fit2=(fit2[_N]/7)^2
sum fit2, meanonly
sca s2=r(sum)/595
sca sig_u2=(s2-sig_e2)/7
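* theta is the GLS quasi-demeaning weight; theta=0 gives pooled OLS, theta=1 the within estimator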
gen double theta=1-sqrt(sig_e2/(sig_e2+7*sig_u2))
/// GLS
foreach i of varlist `tv' {
gen `i'_g=`i'-theta*`i'_m
}
foreach i of varlist `ti' {
gen `i'_g=(1-theta)*`i'
}
/// More Instruments (for AM)
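* Amemiya-MaCurdy: each period's value of a time-varying regressor enters as a separate instrument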
foreach i of varlist wks south smsa ms exp exp2 occ ind union {
forvalues k=1/7 {
gen aux=0
replace aux=`i' if t==`k'
by id: egen `i'_t`k'=sum(aux)
drop aux
}
}
/// Third step
reg lwage_g wks_g south_g smsa_g ms_g exp_g exp2_g ///
occ_g ind_g union_g fem_g blk_g ed_g ///
(*_dm south_m smsa_m occ_m ind_m fem blk), nohead
xthtaylor lwage wks south smsa ms exp exp2 occ ind ///
union fem blk ed, endog(wks ms exp exp2 union ed)
reg lwage_g wks_g south_g smsa_g ms_g exp_g exp2_g ///
occ_g ind_g union_g fem_g blk_g ed_g ///
(*_dm south_t* smsa_t* occ_t* ind_t* fem blk), nohead
xthtaylor lwage wks south smsa ms exp exp2 occ ind ///
union fem blk ed, endog(wks ms exp exp2 union ed) am
/***************** End Example ****************************/
----- Original Message -----
From: "tabreez shams" <tabreezsp@yahoo.com>
To: <statalist@hsphsun2.harvard.edu>; <tabreezsp@yahoo.com>
Sent: Saturday, February 03, 2007 7:50 AM
Subject: Re: Re:st: Re: fixed effects vs random effects
Dear All,
Thanks to all who have contributed to my query on FE
vs RE as means of addressing both unobserved effect
and endogeneity where regressors are less
time-variant. Some comments to your suggestions are in
Kit Baum: Presence of more than one endogenous
variable complicates estimating the equation using
xtivreg2 (which is considered to be single equation
model). An alternative resort could be estimating the
equations using 3sls with reg3. However, I am not sure
if this 3sls is recommended if the number of equations
increases. Moreover, the equations in the system need
to be identified and hence require adjustment to make
the system identifiable! Comments on this issue will
be helpful.
Daniel Hoechle: I was not aware of this Driscoll-Kraay
estimation procedure. Thanks for introducing this to
me. I would much appreciate a copy of the macro
including the 2SLS estimator.
Rodrigo: I am a novice at programming and hence
may not be able to follow your suggestion on adjusting
HT for including instruments in the third step. Yet, I
tried the standard one with xthtaylor, but this
requires at least one time-invariant endogenous and one
time-invariant exogenous variable, whereas I have only one endogenous
time-invariant variable. For your interest, I am
including the following information regarding my data
and results:
1.The value of R-square with FE estimation is: within
= 0.4727 , between = 0.0628 , and sigma u =
0.39716817, sigma e = .21452491 and rho = .7741.
2.My sample is constructed on the basis of top 300 US
firms at the end of 2004 and the data is unbalanced as
firms merged or were acquired over the sample period
1997 to 2004.
Further query: I shall highly appreciate any comments
on to what extent pooled OLS with the cluster-by-firm-id
option is able to address the unobserved effect.
Thank you once again for your time and understanding
and helpful comments. Have a nice week end!
PhD candidate
Accounting and Finance Dept.
Monash University
--- Daniel Hoechle <daniel.hoechle@gmail.com> wrote:
> Hi,
> This is a really interesting discussion. I think
> Rodrigo is right in
> saying that the standard Fama-MacBeth procedure is
> not appropriate
> here because of the endogeneity problem. However,
> Antoni Sureda
> provides a version of the Fama-MacBeth approach
> which is based on the
> IV-estimator rather than on the OLS estimator. His
> -fmivreg- program
> is available from
> What could also be an interesting path to proceed is
> to estimate the
> regression model with Driscoll-Kraay standard
> errors. Why? If the
> panel is unbalanced then the panel might well be a
> microeconometric
> panel and these panels are likely to be
> cross-sectionally dependent
> (due to things like social norms, neighborhood
> effects, and all sorts
> of behavioral biases). Because Driscoll-Kraay
> standard errors are
> heteroscedasticity consistent and robust to very
> general forms of
> temporal and cross-sectional dependence, they might
> be interesting in
> this respect. I implemented the Driscoll-Kraay
> estimator for use with
> both balanced and unbalanced panels in my -xtscc-
> program (in Stata
> type -net search xtscc-). Unfortunately, however,
> the -xtscc- program
> currently does not allow for estimation of IV
> regression models but it
> would be straightforward to generalize the -xtscc-
> program such that
> it includes the 2SLS estimator. Please let me know
> if this would be of
> interest for anyone.
> Finally, if there is no cross-sectional dependence,
> then I think it
> could be a simple but tractable way to estimate the
> regression model
> by aid of the 2SLS estimator with panel-robust
> ("clustered" or Rogers)
> standard errors. Monte Carlo simulations have shown
> that panel-robust
> standard errors are robust to subject specific fixed
> effects.
> Best,
> Dan
> *
> * For searches and help try:
> *
> http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-02/msg00082.html","timestamp":"2014-04-16T07:16:21Z","content_type":null,"content_length":"14296","record_id":"<urn:uuid:d1466c54-ab66-4d57-a0d7-fb403a620b08>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |