Angles in Regular Hexagons and Octagons
A regular polygon is a polygon that is equilateral (all its sides are equal) and equiangular (all its angles are equal).
Angles in regular hexagons:
A regular hexagon is a six-sided polygon in which all the sides are equal and all the angles are equal.
Let's calculate the sum of the angles of a regular hexagon:
We will use the formula for the sum of the interior angles of a polygon, $180\times (n-2)$, where $n$ is the number of sides:
$180\times (6-2)=180\times 4=720$
Then we will calculate the value of an angle in the regular hexagon:
Since all the angles are equal, we simply divide the total by the number of angles in the regular hexagon: $720^\circ\div 6=120^\circ$.
Pay attention: the sum of the internal angles of any regular hexagon will always be $720^\circ$, and the size of each angle $120^\circ$.
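The same formula, $180\times (n-2)$, also covers the octagon named in the title. A short Python check (the function name is our own, not part of the lesson):

```python
# Sum of interior angles of a regular n-gon, and the size of each angle.
def interior_angles(n):
    total = 180 * (n - 2)
    return total, total / n

print(interior_angles(6))  # hexagon: (720, 120.0)
print(interior_angles(8))  # octagon: (1080, 135.0)
```

So a regular octagon has an angle sum of 1080 degrees and each angle measures 135 degrees.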
General Chemistry 1 & 2
Learning Objectives
By the end of this section, you will be able to:
• Explain the process of measurement
• Identify the three basic parts of a quantity
• Describe the properties and units of length, mass, volume, density, temperature, and time
• Relate mass, volume, and density of a substance
• Identify the prefixes, units of measurement, and unit conversions in chemistry
• Perform basic unit calculations and conversions in the metric and other unit systems
Measurements provide the macroscopic information that is the basis of most of the hypotheses, theories, and laws that describe the behavior of matter and energy in both the macroscopic and
microscopic domains of chemistry. Every measurement provides three kinds of information: the size or magnitude of the measurement (a number); a standard of comparison for the measurement (a unit);
and an indication of the uncertainty of the measurement. While the number and unit are explicitly represented when a quantity is written, the uncertainty is an aspect of the measurement result that
is more implicitly represented and will be discussed later.
The number in the measurement can be represented in different ways, including decimal form and scientific notation. (Scientific notation is also known as exponential notation; a review of this topic
can be found in Appendix B.) For example, the maximum takeoff weight of a Boeing 777-200ER airliner is 298,000 kilograms, which can also be written as 2.98 × 10^5 kg. The mass of the average mosquito
is about 0.0000025 kilograms, which can be written as 2.5 × 10^−6 kg.
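As a quick aside (not part of the text), Python can render the same two masses in exponential notation with the `e` format specifier:

```python
# Decimal form vs. scientific (exponential) notation for the two masses.
takeoff_mass_kg = 298_000
mosquito_mass_kg = 0.0000025

print(f"{takeoff_mass_kg:.2e}")   # 2.98e+05, i.e. 2.98 × 10^5
print(f"{mosquito_mass_kg:.1e}")  # 2.5e-06, i.e. 2.5 × 10^-6
```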
Units, such as liters, pounds, and centimeters, are standards of comparison for measurements. When we buy a 2-liter bottle of a soft drink, we expect that the volume of the drink was measured, so it
is two times larger than the volume that everyone agrees to be 1 liter. The meat used to prepare a 0.25-pound hamburger is measured so it weighs one-fourth as much as 1 pound. Without units, a number
can be meaningless, confusing, or possibly life threatening. Suppose a doctor prescribes phenobarbital to control a patient’s seizures and states a dosage of “100” without specifying units. Not only
will this be confusing to the medical professional giving the dose, but the consequences can be dire: 100 mg given three times per day can be effective as an anticonvulsant, but a single dose of 100
g is more than 10 times the lethal amount.
We usually report the results of scientific measurements in SI units, an updated version of the metric system, using the units listed in Table 2. Other units can be derived from these base units. The
standards for these units are fixed by international agreement, and they are called the International System of Units or SI Units (from the French, Le Système International d’Unités). SI units have
been used by the United States National Institute of Standards and Technology (NIST) since 1964.
| Property Measured | Name of Unit | Symbol of Unit |
|---|---|---|
| length | meter | m |
| mass | kilogram | kg |
| time | second | s |
| temperature | kelvin | K |
| electric current | ampere | A |
| amount of substance | mole | mol |
| luminous intensity | candela | cd |
Table 2. Base Units of the SI System
Sometimes we use units that are fractions or multiples of a base unit. Ice cream is sold in quarts (a familiar, non-SI base unit), pints (0.5 quart), or gallons (4 quarts). We also use fractions or
multiples of units in the SI system, but these fractions or multiples are always powers of 10. Fractional or multiple SI units are named using a prefix and the name of the base unit. For example, a
length of 1000 meters is also called a kilometer because the prefix kilo means “one thousand,” which in scientific notation is 10^3 (1 kilometer = 1000 m = 10^3 m). The prefixes used and the powers
to which 10 are raised are listed in Table 3.
| Prefix | Symbol | Factor | Example |
|---|---|---|---|
| femto | f | 10^−15 | 1 femtosecond (fs) = 1 × 10^−15 s (0.000000000000001 s) |
| pico | p | 10^−12 | 1 picometer (pm) = 1 × 10^−12 m (0.000000000001 m) |
| nano | n | 10^−9 | 4 nanograms (ng) = 4 × 10^−9 g (0.000000004 g) |
| micro | µ | 10^−6 | 1 microliter (μL) = 1 × 10^−6 L (0.000001 L) |
| milli | m | 10^−3 | 2 millimoles (mmol) = 2 × 10^−3 mol (0.002 mol) |
| centi | c | 10^−2 | 7 centimeters (cm) = 7 × 10^−2 m (0.07 m) |
| deci | d | 10^−1 | 1 deciliter (dL) = 1 × 10^−1 L (0.1 L) |
| kilo | k | 10^3 | 1 kilometer (km) = 1 × 10^3 m (1000 m) |
| mega | M | 10^6 | 3 megahertz (MHz) = 3 × 10^6 Hz (3,000,000 Hz) |
| giga | G | 10^9 | 8 gigayears (Gyr) = 8 × 10^9 yr (8,000,000,000 yr) |
| tera | T | 10^12 | 5 terawatts (TW) = 5 × 10^12 W (5,000,000,000,000 W) |
Table 3. Common Unit Prefixes
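Because each prefix is just a power of ten, converting a prefixed value to base units is a single multiplication. A small Python sketch (the dictionary and function names are ours, not from the text):

```python
# SI prefixes as powers of ten; converting prefixed values to base units.
PREFIX_FACTOR = {"f": 1e-15, "p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3,
                 "c": 1e-2, "d": 1e-1, "k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}

def to_base_units(value, prefix):
    return value * PREFIX_FACTOR[prefix]

print(to_base_units(1, "k"))   # 1 km   -> 1000.0 m
print(to_base_units(7, "c"))   # 7 cm   -> 0.07 m
print(to_base_units(2, "m"))   # 2 mmol -> 0.002 mol
```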
Need a refresher or more practice with scientific notation? Visit this site to go over the basics of scientific notation.
SI Base Units
The initial units of the metric system, which eventually evolved into the SI system, were established in France during the French Revolution. The original standards for the meter and the kilogram
were adopted there in 1799 and eventually by other countries. This section introduces four of the SI base units commonly used in chemistry. Other SI units will be introduced in subsequent chapters.
The standard unit of length in both the SI and original metric systems is the meter (m). A meter was originally specified as 1/10,000,000 of the distance from the North Pole to the equator. It is now
defined as the distance light in a vacuum travels in 1/299,792,458 of a second. A meter is about 3 inches longer than a yard (Figure 1); one meter is about 39.37 inches or 1.094 yards. Longer
distances are often reported in kilometers (1 km = 1000 m = 10^3 m), whereas shorter distances can be reported in centimeters (1 cm = 0.01 m = 10^−2 m) or millimeters (1 mm = 0.001 m = 10^−3 m).
Figure 1. The relative lengths of 1 m, 1 yd, 1 cm, and 1 in. are shown (not actual size), as well as comparisons of 2.54 cm and 1 in., and of 1 m and 1.094 yd.
The standard unit of mass in the SI system is the kilogram (kg). A kilogram was originally defined as the mass of a liter of water (a cube of water with an edge length of exactly 0.1 meter). It is
now defined by a certain cylinder of platinum-iridium alloy, which is kept in France (Figure 2). Any object with the same mass as this cylinder is said to have a mass of 1 kilogram. One kilogram is
about 2.2 pounds. The gram (g) is exactly equal to 1/1000 of the mass of the kilogram (10^−3 kg).
Figure 2. This replica prototype kilogram is housed at the National Institute of Standards and Technology (NIST) in Maryland. (credit: National Institute of Standards and Technology)
Temperature is an intensive property. The SI unit of temperature is the kelvin (K). The IUPAC convention is to use kelvin (all lowercase) for the word, K (uppercase) for the unit symbol, and neither
the word “degree” nor the degree symbol (°). The degree Celsius (°C) is also allowed in the SI system, with both the word “degree” and the degree symbol used for Celsius measurements. Celsius degrees
are the same magnitude as those of kelvin, but the two scales place their zeros in different places. Water freezes at 273.15 K (0 °C) and boils at 373.15 K (100 °C) by definition, and normal human
body temperature is approximately 310 K (37 °C). The conversion between these two units and the Fahrenheit scale will be discussed later in this chapter.
The SI base unit of time is the second (s). Small and large time intervals can be expressed with the appropriate prefixes; for example, 3 microseconds = 0.000003 s = 3 × 10^−6 s and 5 megaseconds =
5,000,000 s = 5 × 10^6 s. Alternatively, hours, days, and years can be used.
Derived SI Units
We can derive many units from the seven SI base units. For example, we can use the base unit of length to define a unit of volume, and the base units of mass and length to define a unit of density.
Volume is the measure of the amount of space occupied by an object. The standard SI unit of volume is defined by the base unit of length (Figure 3). The standard volume is a cubic meter (m^3), a cube
with an edge length of exactly one meter. To dispense a cubic meter of water, we could build a cubic box with edge lengths of exactly one meter. This box would hold a cubic meter of water or any
other substance.
A more commonly used unit of volume is derived from the decimeter (0.1 m, or 10 cm). A cube with edge lengths of exactly one decimeter contains a volume of one cubic decimeter (dm^3). A liter (L) is
the more common name for the cubic decimeter. One liter is about 1.06 quarts.
A cubic centimeter (cm^3) is the volume of a cube with an edge length of exactly one centimeter. The abbreviation cc (for cubic centimeter) is often used by health professionals. A cubic centimeter
is also called a milliliter (mL) and is 1/1000 of a liter.
Figure 3. (a) The relative volumes are shown for cubes of 1 m^3, 1 dm^3 (1 L), and 1 cm^3 (1 mL) (not to scale). (b) The diameter of a dime is compared relative to the edge length of a 1-cm^3 (1-mL) cube.
We use the mass and volume of a substance to determine its density. Thus, the units of density are defined by the base units of mass and length.
The density of a substance is the ratio of the mass of a sample of the substance to its volume. The SI unit for density is the kilogram per cubic meter (kg/m^3). For many situations, however, this is
an inconvenient unit, and we often use grams per cubic centimeter (g/cm^3) for the densities of solids and liquids, and grams per liter (g/L) for gases. Although there are exceptions, most liquids
and solids have densities that range from about 0.7 g/cm^3 (the density of gasoline) to 19 g/cm^3 (the density of gold). The density of air is about 1.2 g/L. Table 4 shows the densities of some
common substances.
| Solids | Density | Liquids | Density | Gases (at 25 °C and 1 atm) | Density |
|---|---|---|---|---|---|
| ice (at 0 °C) | 0.92 g/cm^3 | water | 1.0 g/cm^3 | dry air | 1.20 g/L |
| oak (wood) | 0.60–0.90 g/cm^3 | ethanol | 0.79 g/cm^3 | oxygen | 1.31 g/L |
| iron | 7.9 g/cm^3 | acetone | 0.79 g/cm^3 | nitrogen | 1.14 g/L |
| copper | 9.0 g/cm^3 | glycerin | 1.26 g/cm^3 | carbon dioxide | 1.80 g/L |
| lead | 11.3 g/cm^3 | olive oil | 0.92 g/cm^3 | helium | 0.16 g/L |
| silver | 10.5 g/cm^3 | gasoline | 0.70–0.77 g/cm^3 | neon | 0.83 g/L |
| gold | 19.3 g/cm^3 | mercury | 13.6 g/cm^3 | radon | 9.1 g/L |
Table 4. Densities of Common Substances
While there are many ways to determine the density of an object, perhaps the most straightforward method involves separately finding the mass and volume of the object, and then dividing the mass of
the sample by its volume. In the following example, the mass is found directly by weighing, but the volume is found indirectly through length measurements.
[latex]\text{density} = \frac{\text{mass}}{\text{volume}}[/latex]
Example 1
Calculation of Density
Gold—in bricks, bars, and coins—has been a form of currency for centuries. In order to swindle people into paying for a brick of gold without actually investing in a brick of gold, people have
considered filling the centers of hollow gold bricks with lead to fool buyers into thinking that the entire brick is gold. It does not work: Lead is a dense substance, but its density is not as great
as that of gold, 19.3 g/cm^3. What is the density of lead if a cube of lead has an edge length of 2.00 cm and a mass of 90.7 g?
The density of a substance can be calculated by dividing its mass by its volume. The volume of a cube is calculated by cubing the edge length.
[latex]\text{volume of lead cube}=2.00\text{cm}\times2.00\text{cm}\times2.00\text{cm}=8.00\text{cm}^3[/latex]
[latex]\text{density}=\frac{\text{mass}}{\text{volume}}=\frac{90.7\text{ g}}{8.00\text{ cm}^3}=11.3\text{ g/cm}^3[/latex]
(We will discuss the reason for rounding to the first decimal place in the next section.)
Check Your Learning
(a) To three decimal places, what is the volume of a cube (cm^3) with an edge length of 0.843 cm?
(b) If the cube in part (a) is copper and has a mass of 5.34 g, what is the density of copper to two decimal places?
Answer: (a) 0.599 cm^3; (b) 8.91 g/cm^3
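The arithmetic above can be replayed in a few lines of Python (the function name is ours, not from the text):

```python
# Density of a cube-shaped sample: volume = edge^3, density = mass / volume.
def cube_density(mass_g, edge_cm):
    volume_cm3 = edge_cm ** 3
    return volume_cm3, mass_g / volume_cm3

v_pb, d_pb = cube_density(90.7, 2.00)    # Example 1: the lead cube
print(round(v_pb, 2), round(d_pb, 1))    # 8.0 11.3

v_cu, d_cu = cube_density(5.34, 0.843)   # Check Your Learning: copper
print(round(v_cu, 3), round(d_cu, 2))    # 0.599 8.91
```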
To learn more about the relationship between mass, volume, and density, use this interactive simulator to explore the density of different materials, like wood, ice, brick, and aluminum.
Example 2
Using Displacement of Water to Determine Density
This PhET simulation illustrates another way to determine density, using displacement of water. Determine the density of the red and yellow blocks.
When you open the density simulation and select Same Mass, you can choose from several 5.00-kg colored blocks that you can drop into a tank containing 100.00 L water. The yellow block floats (it is
less dense than water), and the water level rises to 105.00 L. While floating, the yellow block displaces 5.00 L water, a mass of water (5.00 kg) equal to the mass of the block. The red block sinks (it is more
dense than water, which has density = 1.00 kg/L), and the water level rises to 101.25 L.
The red block therefore displaces 1.25 L water, an amount equal to the volume of the block. The density of the red block is:
[latex]\text{density}=\frac{\text{mass}}{\text{volume}}=\frac{5.00\;\text{kg}}{1.25\;\text{L}}=4.00 \text{kg/L}[/latex]
Note that since the yellow block is not completely submerged, you cannot determine its density from this information. But if you hold the yellow block on the bottom of the tank, the water level rises
to 110.00 L, which means that it now displaces 10.00 L water, and its density can be found:
[latex]\text{density}=\frac{\text{mass}}{\text{volume}}=\frac{\text{5.00 kg}}{\text{10.00 L}}=0.500 \text{kg/L}[/latex]
Check Your Learning
Remove all of the blocks from the water and add the green block to the tank of water, placing it approximately in the middle of the tank. Determine the density of the green block.
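The displacement arithmetic used in this example is a one-liner; here is a minimal sketch using the simulation values described above (the function name is ours):

```python
# Density from water displacement: displaced volume = level after - level before.
def displacement_density(mass_kg, level_before_L, level_after_L):
    displaced_L = level_after_L - level_before_L
    return mass_kg / displaced_L

print(displacement_density(5.00, 100.00, 101.25))  # red block: 4.0 kg/L
print(displacement_density(5.00, 100.00, 110.00))  # yellow block, held under: 0.5 kg/L
```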
Key Concepts and Summary
Measurements provide quantitative information that is critical in studying and practicing chemistry. Each measurement has an amount, a unit for comparison, and an uncertainty. Measurements can be
represented in either decimal or scientific notation. Scientists primarily use the SI (International System) or metric systems. We use base SI units such as meters, seconds, and kilograms, as well as
derived units, such as liters (for volume) and g/cm^3 (for density). In many cases, we find it convenient to use unit prefixes that yield fractional and multiple units, such as microseconds (10^−6
seconds) and megahertz (10^6 hertz), respectively.
Key Equations
• [latex]\text{density}=\frac{\text{mass}}{\text{volume}}[/latex]
Chemistry End of Chapter Exercises
1. Is one liter about an ounce, a pint, a quart, or a gallon?
2. Is a meter about an inch, a foot, a yard, or a mile?
3. Indicate the SI base units or derived units that are appropriate for the following measurements:
(a) the length of a marathon race (26 miles 385 yards)
(b) the mass of an automobile
(c) the volume of a swimming pool
(d) the speed of an airplane
(e) the density of gold
(f) the area of a football field
(g) the maximum temperature at the South Pole on April 1, 1913
4. Indicate the SI base units or derived units that are appropriate for the following measurements:
(a) the mass of the moon
(b) the distance from Dallas to Oklahoma City
(c) the speed of sound
(d) the density of air
(e) the temperature at which alcohol boils
(f) the area of the state of Delaware
(g) the volume of a flu shot or a measles vaccination
5. Give the name and symbol of the prefixes used with SI units to indicate multiplication by the following exact quantities.
(a) 10^3
(b) 10^−2
(c) 0.1
(d) 10^−3
(e) 1,000,000
(f) 0.000001
6. Give the name of the prefix and the quantity indicated by the following symbols that are used with SI base units.
(a) c
(b) d
(c) G
(d) k
(e) m
(f) n
(g) p
(h) T
7. A large piece of jewelry has a mass of 132.6 g. A graduated cylinder initially contains 48.6 mL water. When the jewelry is submerged in the graduated cylinder, the total volume increases to 61.2 mL.
(a) Determine the density of this piece of jewelry.
(b) Assuming that the jewelry is made from only one substance, what substance is it likely to be? Explain.
8. Visit this PhET density simulation and select the Same Volume Blocks.
(a) What are the mass, volume, and density of the yellow block?
(b) What are the mass, volume and density of the red block?
(c) List the block colors in order from smallest to largest mass.
(d) List the block colors in order from lowest to highest density.
(e) How are mass and density related for blocks of the same volume?
9. Visit this PhET density simulation and select Custom Blocks and then My Block.
(a) Enter mass and volume values for the block such that the mass in kg is less than the volume in L. What does the block do? Why? Is this always the case when mass < volume?
(b) Enter mass and volume values for the block such that the mass in kg is more than the volume in L. What does the block do? Why? Is this always the case when mass > volume?
(c) How would (a) and (b) be different if the liquid in the tank were ethanol instead of water?
(d) How would (a) and (b) be different if the liquid in the tank were mercury instead of water?
10. Visit this PhET density simulation and select Mystery Blocks.
(a) Pick one of the Mystery Blocks and determine its mass, volume, density, and its likely identity.
(b) Pick a different Mystery Block and determine its mass, volume, density, and its likely identity.
(c) Order the Mystery Blocks from least dense to most dense. Explain.
Celsius (°C)
unit of temperature; water freezes at 0 °C and boils at 100 °C on this scale
cubic centimeter (cm^3 or cc)
volume of a cube with an edge length of exactly 1 cm
cubic meter (m^3)
SI unit of volume
density
ratio of mass to volume for a substance or object
kelvin (K)
SI unit of temperature; 273.15 K = 0 °C
kilogram (kg)
standard SI unit of mass; 1 kg = approximately 2.2 pounds
length
measure of one dimension of an object
liter (L)
(also, cubic decimeter) unit of volume; 1 L = 1,000 cm^3
meter (m)
standard metric and SI unit of length; 1 m = approximately 1.094 yards
milliliter (mL)
1/1,000 of a liter; equal to 1 cm^3
second (s)
SI unit of time
SI units (International System of Units)
standards fixed by international agreement in the International System of Units (Le Système International d’Unités)
unit
standard of comparison for measurements
volume
amount of space occupied by an object
Answers for Chemistry End of Chapter Exercises
2. about a yard
4. (a) kilograms; (b) meters; (c) kilometers/second; (d) kilograms/cubic meter; (e) kelvin; (f) square meters; (g) cubic meters
6. (a) centi-, × 10^−2; (b) deci-, × 10^−1; (c) giga-, × 10^9; (d) kilo-, × 10^3; (e) milli-, × 10^−3; (f) nano-, × 10^−9; (g) pico-, × 10^−12; (h) tera-, × 10^12
8. (a) 8.00 kg, 5.00 L, 1.60 kg/L; (b) 2.00 kg, 5.00 L, 0.400 kg/L; (c) red < green < blue < yellow; (d) red < green < blue < yellow; (e) If the volumes are the same, then the density is directly proportional to the mass.
10. (a), (b) Answers will be one of the following. A/yellow: mass = 65.14 kg, volume = 3.38 L, density = 19.3 kg/L, likely identity = gold. B/blue: mass = 0.64 kg, volume = 1.00 L, density = 0.64 kg/L, likely identity = apple. C/green: mass = 4.08 kg, volume = 5.83 L, density = 0.700 kg/L, likely identity = gasoline. D/red: mass = 3.10 kg, volume = 3.38 L, density = 0.920 kg/L, likely identity = ice. E/purple: mass = 3.53 kg, volume = 1.00 L, density = 3.53 kg/L, likely identity = diamond. (c) B/blue/apple (0.64 kg/L) < C/green/gasoline (0.700 kg/L) < D/red/ice (0.920 kg/L) < E/purple/diamond (3.53 kg/L) < A/yellow/gold (19.3 kg/L)
What is a concrete example?
1 Answer
A concrete example is an example that can be touched or sensed as opposed to an abstract example which can't be.
Let's say that I'm trying to describe addition.
An abstract example of addition is something like this:
When we add, we're taking the value of one set and increasing it by the value of another set to achieve a sum.
Now here's a concrete example:
When we add the numbers 1 and 2, we can take 1 coin to represent the one and 2 coins to represent the 2 and put them together - so we count the coins... 1, 2, 3... 3 coins is the sum of 1 coin added
to 2 coins.
The specific basis of trigonometric functions in the problem of approximate solution of integral equations with the kernel of the kind K(x-t)
In this paper we will deal with the approximate solution of Fredholm's and Volterra's equations with the kernel of the kind K(x-t). We shall use the known algorithm for the search of the approximate
solution in the form of a linear combination of preassigned basic functions
φ(x) ≈ ∑_{k=0}^{n} c_k φ_k(x),
with the help of Galerkin's method.
The principal matter of the paper is the choice of the specific basis {φ_k(x)}, which:
1. possesses high approximate properties, i.e., makes possible to find the approximate solution with a good accuracy, but with a small number of basic functions;
2. makes possible (by using the inner properties of the functions φ_k(x)) to easily transform the double integral in Galerkin's algorithm to a simple (of multiplicity 1) integral;
3. reduces the problem to a system of equations with a reducible matrix, i.e., allows the algorithm to be parallelized into two independent subsystems of equations when the kernel is K(|x-t|).
In the Appendix we illustrate the use of the specific basis of functions by solving the integral Peierls equation.
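To make the Galerkin construction concrete, here is a minimal numerical sketch for a Fredholm equation of the second kind with a difference kernel. The interval [0, π], the sine basis, and the test kernel K(s) = 0.3 cos(s) are illustrative assumptions of ours, not the specific basis constructed in the paper:

```python
# Galerkin sketch for phi(x) - ∫_0^pi K(x - t) phi(t) dt = f(x),
# with basis phi_k(x) = sin((k+1) x) and trapezoid quadrature.
import numpy as np

def trapezoid(y, x):
    # Trapezoid rule on a uniform grid.
    h = x[1] - x[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

def galerkin_solve(kernel, f, n_basis=4, n_quad=1001):
    xs = np.linspace(0.0, np.pi, n_quad)
    basis = [np.sin((k + 1) * xs) for k in range(n_basis)]
    # (K phi_k)(x) = ∫_0^pi K(x - t) phi_k(t) dt, evaluated on the grid.
    K_basis = [np.array([trapezoid(kernel(x - xs) * b, xs) for x in xs])
               for b in basis]
    # Galerkin system: sum_k <phi_j, phi_k - K phi_k> c_k = <phi_j, f>.
    A = np.array([[trapezoid(bj * (bk - Kbk), xs)
                   for bk, Kbk in zip(basis, K_basis)] for bj in basis])
    rhs = np.array([trapezoid(bj * f(xs), xs) for bj in basis])
    c = np.linalg.solve(A, rhs)
    return xs, sum(ck * bk for ck, bk in zip(c, basis))

# Manufactured check: with K(s) = 0.3 cos(s), phi(x) = sin(x) solves the
# equation for f(x) = (1 - 0.15 pi) sin(x), since ∫_0^pi cos(x-t) sin(t) dt
# equals (pi/2) sin(x).
xs, phi = galerkin_solve(lambda s: 0.3 * np.cos(s),
                         lambda x: (1 - 0.15 * np.pi) * np.sin(x))
```

Because the exact solution lies in the span of the basis, the Galerkin solution reproduces it up to quadrature error.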
How to Convert a Bread Recipe to Be Made with Preferment - ChainBaker
Any leavened dough can be made with a preferment.
Whether you should or should not make any recipe with a preferment is up to you. I have spoken about the benefits of using preferments in a previous video which you can watch here.
There are detailed explanations on all yeasted preferments and the use for each of them. All the specific benefits, and practical tips. You will learn everything about preferments in that video. In
this video I will only demonstrate HOW to convert a recipe to be made with a preferment.
A good knowledge of baker’s percentage also known as baker’s math, and dough hydration is essential. Without those you cannot make or adjust recipes reliably. Click here for a full detailed
explanation on that.
In short, a preferment is made by mixing a portion of the total ingredients of a bread dough recipe and leaving them to ferment for several hours. It improves the flavour, texture, and keeping
quality of bread while reducing bulk fermentation time.
In a regular bread recipe 10% – 20% of the total flour would be prefermented. There are exceptions as you could easily preferment 100% of the total flour in a bread dough.
You can find recipes using all the different preferments in various breads in my Breads with Preferment playlist. There are around 30 examples from loaves made with 10% prefermented flour to pizza
dough made with 100% prefermented flour. Sweet buns, ciabatta, rye bread, focaccia, baguettes, etc.
To convert a recipe to one that is made with a preferment first you must know the exact amount of ingredients. The example that I give is a regular loaf made with 500g flour, 300g water, 6g yeast,
and 10g salt.
It is a 60% hydration dough with 2% salt, and 1.2% yeast.
No matter the amount of ingredients in the recipe that you want to convert, baker’s math never changes. The prefermented flour and the ingredients added to the prefermented flour will have certain
percentages that you can use to calculate the weights accordingly. That is why baker’s percentage is important.
If you want to make the bread with poolish.
Poolish is a preferment with a hydration of 100%. Meaning that it contains the same amount of water and flour. Yeast is normally between 0.08% – 0.1% in relation to the flour in the poolish.
In the video I make this exact recipe. 20% of the total flour is prefermented.
So, we know that the total amount of flour is 500g. 20% of 500g is 100g.
That is the amount of flour in the preferment. To increase or decrease the amount of prefermented flour simply adjust the percentage up or down and calculate from the total flour.
Since a poolish has a hydration of 100% that means the amount of water we need to add to it is 100g.
As mentioned, yeast content should be around 0.08% – 0.1%. I went with 0.1%. 0.1% of 100g is 0.1g. My scales are not even able to pick this miniscule amount of weight up and that is why I normally
just tell people to add a tiny pinch of yeast. A small pinch does correspond to 0.1g most of the time.
If your scales can weigh 0.1g, then use that option. Otherwise, a couple of tries making a preferment will give you a good idea on how big of a pinch you need with your fingers to make the preferment
rise in time. If you watch any of my recipes that use preferments you will get a good idea on what 0.1g looks like. Alternatively weigh 1g of yeast and divide it into 10 equal parts.
With all that out of the way, we have a preferment made with 100g flour, 100g water, and 0.1g yeast. The whole dough was made of 500g flour, 300g water, 6g yeast, and 10g salt. All that is left to do is
to subtract the preferment ingredients from the whole recipe and you will end up with the ingredients you have to mix the preferment with to make the final dough.
In this case:
500g flour – 100g flour = 400g flour.
300g water – 100g water = 200g water.
6g yeast – 0.1g yeast = 5.9g yeast.
There is no salt in this preferment, so no subtraction needed.
If you want to make bread with biga.
Biga is normally a low hydration preferment of around 50% – 60%. That is the only difference between biga and poolish. So, to convert a recipe to biga the only extra step you will have to take is to
calculate the amount of water in relation to the amount of flour in the biga.
If we use the previous example of prefermenting 20% of the total flour which is 100g, then if the biga we choose is 60% hydration that will mean the amount of water in it would be 60g because 60% of
100g is 60g.
The amount of yeast is the same. All that is left is to subtract the biga ingredients from the whole recipe.
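The poolish and biga conversions above follow the same recipe arithmetic, differing only in the preferment's hydration. A small Python sketch (the function and variable names are our own; quantities are rounded to 0.1 g, as in the worked example):

```python
# Convert a straight-dough recipe to one made with a poolish or biga.
# hydration=1.0 gives a poolish; around 0.5-0.6 gives a biga.
def convert(flour, water, yeast, salt, pref_pct, hydration, pref_yeast_pct=0.001):
    pf = round(flour * pref_pct, 1)       # prefermented flour
    pw = round(pf * hydration, 1)         # water in the preferment
    py = round(pf * pref_yeast_pct, 1)    # ~0.1% yeast relative to pf
    preferment = {"flour": pf, "water": pw, "yeast": py}
    # Subtract the preferment ingredients from the whole recipe.
    final = {"flour": round(flour - pf, 1), "water": round(water - pw, 1),
             "yeast": round(yeast - py, 1), "salt": salt}
    return preferment, final

poolish, dough = convert(500, 300, 6, 10, pref_pct=0.20, hydration=1.0)
print(poolish)  # {'flour': 100.0, 'water': 100.0, 'yeast': 0.1}
print(dough)    # {'flour': 400.0, 'water': 200.0, 'yeast': 5.9, 'salt': 10}

biga, dough = convert(500, 300, 6, 10, pref_pct=0.20, hydration=0.60)
print(biga)     # {'flour': 100.0, 'water': 60.0, 'yeast': 0.1}
```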
Converting a recipe to be made with a sponge.
Sponge is different in that it contains all the yeast of the recipe and is normally used for enriched doughs. The calculations are still the same and you can find a detailed description and all the
specifics in my Preferment Explanation along with all other preferments.
To convert a recipe to pate fermentee.
This is the simplest one if you bake bread regularly as you do not need to calculate anything except the total amount of preferment, so you know how much to pinch off and leave for later. There is no
need to mix the preferment separately as it is made by taking a piece of fully mixed dough, leaving it to ferment and adding it to the following day’s dough. This can be repeated indefinitely.
The most important part of adjusting any recipe and making your own recipes is understanding baker’s percentage, dough hydration, and the preferments. Take just a little time to watch my videos on
those topics and I guarantee you will instantly become a more confident baker. It is super easy to understand, and I do try to explain it as simply as possible.
Blindfolding is a sample re-use technique. It allows calculating Stone-Geisser's Q² value (Stone, 1974; Geisser, 1974), which represents an evaluation criterion for the cross-validated predictive
relevance of the PLS path model.
We have discontinued support for blindfolding in SmartPLS 4 and removed the algorithm. The blindfolding method does not provide an out-of-sample assessment of predictive power. However, the
PLSpredict procedure and the cross-validated predictive ability test (CVPAT) provide the results required for an out-of-sample predictive power assessment (for further explanations, see Hair et al.,
2022). [PLSpredict](/documentation/algorithms-and-techniques/predict) and the [CVPAT](/documentation/algorithms-and-techniques/cvpat) have been implemented in SmartPLS and we recommend using these
methods instead of blindfolding.
Besides evaluating the magnitude of the R² values as a criterion of predictive accuracy, researchers may desire to also examine Stone-Geisser’s Q² value (Stone, 1974; Geisser, 1974) as a
criterion of predictive relevance. The Q² value of latent variables in the PLS path model is obtained by using the blindfolding procedure.
Blindfolding is a sample re-use technique, which systematically deletes data points and provides a prognosis of their original values. For this purpose, the procedure requires an omission distance D.
A value for the omission distance D between 5 and 12 is recommended in the literature (e.g., Hair et al., 2017). An omission distance of seven (D=7) implies that every seventh data point of a latent
variable's indicators will be eliminated in a single blindfolding round. Since the blindfolding procedure has to omit and predict every data point of the indicators used in the measurement model of
the selected latent variable, an omission distance of D=7 results in seven blindfolding rounds. Hence, the number of blindfolding rounds always equals the omission distance.
In the first blindfolding round, the procedure starts with the first data point and omits every D-th data point of a latent variable's indicators. Then, the procedure estimates the PLS path model by
using the remaining data points. The omitted data represent missing values and are treated accordingly (e.g., by mean value replacement or pairwise deletion). The PLS-SEM results are then used to
predict the omitted data points. The differences between the omitted data points and the predicted ones are the prediction errors. The sum of squared prediction errors is used to calculate the Q²
value. Blindfolding is an iterative process. In the second blindfolding round, the algorithm starts with the second data point, omits every D-th data point and continues as described before. After D
blindfolding rounds, every data point has been omitted and predicted.
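As a sketch of the omission pattern just described (this is our own illustration, not SmartPLS internals; the function name and toy data size are assumptions), round r of D omits every D-th data point starting at position r, so after D rounds every point has been omitted exactly once:

```python
# Illustrative sketch of the blindfolding omission pattern (not SmartPLS code).
# The Q² value is then computed from the sum of squared prediction errors over
# all rounds (see Hair et al., 2017, for the full definition).

def blindfolding_rounds(n_points, D):
    """Return, for each of the D rounds, the indices of the omitted points."""
    return [[i for i in range(n_points) if i % D == r] for r in range(D)]

rounds = blindfolding_rounds(n_points=20, D=7)
assert len(rounds) == 7                         # number of rounds equals D
omitted_once = sorted(i for r in rounds for i in r)
assert omitted_once == list(range(20))          # each point omitted exactly once
```

The assertions confirm the property stated above: the number of rounds equals the omission distance, and each data point is omitted and predicted exactly once.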
When PLS-SEM exhibits predictive relevance, it predicts the data points of the indicators well. A Q² value larger than zero for a certain endogenous latent variable indicates that the PLS path model
has predictive relevance for this construct. For detailed explanations of the blindfolding procedure, see Hair et al. (2017).
Blindfolding Settings in SmartPLS
Default value of the omission distance: 7
The systematic pattern of data point elimination and prediction in the blindfolding procedure depends on the omission distance (D). The user must select a value for D when running the blindfolding
procedure. Suggested values of D are between 5 and 12.
An omission distance of five, for example, implies that every fifth data point of the target construct's indicators is eliminated in a single blindfolding round. Since the blindfolding procedure has
to omit and predict every data point of the indicators used in the measurement model of a certain latent variable, it comprises five blindfolding rounds. Hence, the number of blindfolding rounds
always equals the omission distance D.
It is important to note that the omission distance has to be chosen so that the number of observations in the data set divided by the omission distance D is not an integer. If the number of
observations divided by D results in an integer, the procedure would delete full observations (i.e., entire rows of the data set). Hence, the number of observations used per blindfolding round would
be smaller than the number of observations in the original data set. However, the goal of the blindfolding procedure is to use all observations for prediction and, thus, not to delete entire
observations per blindfolding round. For this reason, the number of observations in the original data set divided by the omission distance D must not be an integer.
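This divisibility requirement can be expressed as a simple check; the function below is an illustrative sketch (the name is ours, not a SmartPLS API):

```python
# D is a valid omission distance only if it does not evenly divide the number
# of observations, so no blindfolding round deletes entire rows of the data set.

def valid_omission_distance(n_observations, D):
    return n_observations % D != 0

assert valid_omission_distance(201, 7)        # 201 / 7 is not an integer -> OK
assert not valid_omission_distance(210, 7)    # 210 / 7 = 30 -> choose another D
```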
• Geisser, S. (1974). A Predictive Approach to the Random Effects Model, Biometrika, 61(1): 101-107.
• Hair, J. F., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. (2017). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM) (2 ed.). Sage: Thousand Oaks.
• Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2022). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM) (3 ed.). Thousand Oaks, CA: Sage.
• Stone, M. (1974). Cross-Validatory Choice and Assessment of Statistical Predictions, Journal of the Royal Statistical Society, 36(2): 111-147.
Cite correctly
Please always cite the use of SmartPLS!
Ringle, Christian M., Wende, Sven, & Becker, Jan-Michael. (2024). SmartPLS 4. Bönningstedt: SmartPLS. Retrieved from https://www.smartpls.com | {"url":"https://smartpls.com/documentation/algorithms-and-techniques/blindfolding/","timestamp":"2024-11-12T16:06:52Z","content_type":"text/html","content_length":"668860","record_id":"<urn:uuid:937a363b-10c0-4542-b0e2-99ba0c8477f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00195.warc.gz"} |
Simple multiplication of 2 numbers practice worksheet. What is the value of 8 multiplied by 2?
This is a fun multiplication exercise for kids in Grade/Class 1. It has 10 blocks for practicing multiplication of numbers.
There are many multiplication worksheets available with us for free download and practice.
Download the Multiplication of Numbers worksheet 8 as a PDF for practice.
You can find the links below. | {"url":"https://www.sckool.io/simple-multiply-8/","timestamp":"2024-11-12T18:55:48Z","content_type":"text/html","content_length":"244353","record_id":"<urn:uuid:576b76ea-2fdb-46f1-a95e-f05cc0b3f602>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00676.warc.gz"} |
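The sample question above can be checked directly; this tiny snippet is just an illustration of the worksheet's arithmetic:

```python
# The worksheet's sample question: the value of 8 multiplied by 2.
answer = 8 * 2
assert answer == 16
```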
System Dynamic Simulation | techenware
The great majority of processes we observe in the world consist of continuous changes. However, when we try to analyze these processes it often makes sense to divide a continuous process into
discrete parts to simplify the analysis. Discrete Event Modeling techniques approximate continuous real-world processes with non-continuous events that you define.
Here are some examples of events:
• a customer arrives at a shop,
• a truck finishes unloading,
• a conveyor stops,
• a new product is launched,
• inventory levels reaches a certain threshold, etc.
In discrete event modeling the movement of a train from point A to point B would be modeled with two events, namely a departure event and an arrival event. The actual movement of the train would be
modeled as a time delay (interval) between the departure and arrival events. This doesn't mean however that you can't model the train as moving. In fact, you can produce visually continuous
animations for logically discrete events.
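The train example above can be sketched as a minimal discrete-event loop using a priority queue; the event names and the 42-time-unit delay are illustrative assumptions, not taken from any particular simulation tool:

```python
import heapq

# Minimal discrete-event sketch of the train example: the movement from A to B
# is represented as a time delay between a departure event and an arrival event.
travel_time = 42.0                      # assumed delay between the two events
events = []                             # priority queue of (time, description)
heapq.heappush(events, (0.0, "train departs A"))
heapq.heappush(events, (travel_time, "train arrives B"))

log = []
while events:                           # process events in increasing time order
    log.append(heapq.heappop(events))

assert log == [(0.0, "train departs A"), (42.0, "train arrives B")]
```

Nothing "moves" between the two events: the continuous process is approximated entirely by the time gap, exactly as described above.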
The term Discrete Event is however mainly used in the narrower sense to denote "Process-Centric" modeling that suggests representing the system being analyzed as a sequence of operations being
performed on entities (transactions) of certain types such as customers, documents, parts, data packets, vehicles, or phone calls. The entities are passive, but can have attributes that affect the
way they are handled or may change as the entity flows through the process. Process-centric modeling is a medium-low abstraction level modeling approach. Although each object is modeled individually
as an entity, typically the modeler ignores many “physical level” details, such as exact geometry, accelerations, and decelerations. Process-centric modeling is used widely in the manufacturing,
logistics, and healthcare fields.
Discrete event modeling techniques should be used only when the system under analysis can naturally be described as a sequence of operations. It is not always clear for any given modeling project
which of the three modeling paradigms is best. For example, if it is easier to describe the behavior of each individual entity than trying to put together a global workflow, agent based modeling may
be the way to go. Similarly, if you are interested in aggregates and not in individual unit interaction, system dynamics may be applied. Our logic tool solution supports all three modeling
approaches, so you can experiment with the abstraction levels and modeling approach without needing multiple tools.
SD Modeling
The SD modeling tool supports the design and simulation of feedback structures (stock and flow diagrams and decision rules, including array variables, a.k.a. "subscripts") in a way most SD modelers are used to.
You can:
• Define stock and flow variables one by one or using a “flow tool”
• Use automatic “code completion” in formulas
• Define “shadow” variables for better readability of your model
• Use table functions (look up tables) with step, linear, or spline interpolation
• Define dimensions of both enumeration and range types
• Define sub-dimensions and sub-ranges
• Define array variables with an arbitrary number of dimensions
• Use multiple formulas for different parts of an array variable
• Use both SD-specific and standard Java mathematical functions
Solution of centrally symmetric problem on propagation of perturbations in rocks during camouflet explosion
Gornyi Zhurnal
ArticleName Solution of centrally symmetric problem on propagation of perturbations in rocks during camouflet explosion
DOI 10.17580/gzh.2024.04.03
ArticleAuthor Sednev V. A., Kopnyshev S. L., Sednev A. V.
Academy of State Fire Prevention Service, EMERCOM of Russia, Moscow, Russia
V. A. Sednev, Professor, Doctor of Engineering Sciences, Sednev70@yandex.ru
S. L. Kopnyshev, Senior Researcher, Associate Professor, Candidate of Engineering Sciences
Bauman Moscow State Technical University, Moscow, Russia
A. V. Sednev, Student
Abstract: The enormous energy contained in explosives is increasingly being used in various fields of human activity. Especially effective is the use of camouflet charges—buried
explosives which produce a destructive effect without visible disturbances on the ground surface. Despite the simplicity and accessibility of camouflet blasting, the safety
of such work is important. A camouflet explosion eliminates air shock waves and harmful effects of the explosion products on the environment, but leads to strong seismic
vibrations. The blasting-induced seismic impact on nearby objects can initiate loss of their load-bearing capacity, damage and destruction; therefore, when choosing locations for
camouflet charges, it is required to assess the peak particle velocity (PPV) in the enclosing medium during blasting, and to determine the distances at which the PPV is lower than the maximum allowable values set for the
objects under consideration. The article presents the solution of a centrally symmetric problem on the particle velocity field in a continuous elastoplastic medium during a camouflet
explosion, under the assumption of vibrationless motion and incompressibility of the medium in the plastic and elastic domains. Dependences are obtained for determining the
dimensions of the expansion zones and the plastic deformation of the medium. The solution is based on the "camouflet equation" previously obtained by the authors—the relation to
find the pressure on the contact surface of a spherical cavity expanding due to internal pressure.
Keywords: Rocks, soils, pressure, safety, camouflet explosion, elastoplastic medium, particle velocity field, expansion
References
1. Kutuzov B. N. Blasting methods. Vol. 2. Blasting in mining and industry : Textbook. 2nd ed. Moscow : Gornaya kniga, 2011. 512 p.
2. Gorodnichenko V. I., Dmitriev A. P. Basis of mining : Textbook. 2nd ed. Moscow : Gornaya kniga, 2016. 443 p.
3. Komashchenko V. I., Atrushkevich V. A., Kachurin N. M., Stas G. V. The effectiveness of borehole charges in the destruction of rocks by explosion. Ustoychivoe razvitie gornykh
territoriy. 2019. Vol. 11, No. 2(40). pp. 191–198.
4. Komashchenko V. I. Application of advanced initiation and design of borehole charges to improve rock fragmentation quality. Ustoychivoe razvitie gornykh territoriy. 2015. Vol. 7,
No. 2(24). pp. 12–17.
5. Trubetskoy K. N., Kaplunov D. R. (Eds.). Mining : Terminology reference book. 5th revised and enlarged edition. Moscow : Gornaya kniga, 2016. 635 p.
6. Bovt A. N., Lovetskiy E. E., Selyakov V. I. et al. Mechanical action of camouflet explosion. Moscow : Nedra, 1990. 184 p.
7. Chadwick P., Cox A. D., Hopkins H. G. Mechanics of deep underground explosions. London, 1964. 300 p.
8. Orlenko L. P. (Ed.). Physics of blast. In two volumes. Third enlarged and revised edition. Moscow : Fizmatlit, 2004. 1471 p.
9. Man K., Liu X., Song Z. Blasting vibration monitoring scheme and its application. Journal of Vibroengineering. 2021. Vol. 23, No. 7. pp. 1640–1651.
10. Xu S., Li Y., Liu J., Zhang F. Optimization of blasting parameters for an underground mine through prediction of blasting vibration. Journal of Vibration and Control. 2019. Vol.
25, Iss. 9. pp. 1585–1595.
11. Ammon C. J., Velasco A. A., Lay T., Wallace T. C. Foundations of Modern Global Seismology. 2nd ed. London : Academic Press, 2020. 586 p.
12. Armaghani D. J., Kumar D., Samui P., Hasanipanah M., Roy B. A novel approach for forecasting of ground vibrations resulting from blasting : Modifed particle swarm optimization
coupled extreme learning machine. Engineering with Computers. 2021. Vol. 37, Iss. 4. pp. 3221–3235.
13. Wang J., Yin Y., Esmaieli K. Numerical Simulations of Rock Blasting Damage Based on Laboratory Scale Experiments. Journal of Geophysics and Engineering. 2018. Vol. 15, Iss. 6.
pp. 2399–2417.
14. Chkalova O. N. Fundamentals of scientific research : Tutorial. Kiev : Vishcha shkola, 1978. 118 p.
15. Ruzavin G. I. Methods of scientific research. Moscow : Mysl, 1975. 237 p.
16. Chernukha N. A. Structural Analysis of Buildings at Explosive Actions in SCAD. Inzhenerno-stroitelnyi zhurnal. 2014. No. 1. pp. 12–22.
17. Sednev V. A., Kopnyshev S. L., Sednev A. V. Determination of camouflet explosion parameters. Journal of Applied Mechanics and Technical Physics. 2023. Vol. 64, No. 6. pp.
18. Bell J. F. Mechanics of Solids. Berlin : Springer, 1973. Vol. I: The Experimental Foundations of Solid Mechanics. 813 p.
19. Sednev V. A., Kopnyshev S. L. The model of structural materials and soils behavior under the impact of dynamic loads. Journal of Machinery Manufacture and Reliability. 2018. No.
2. pp. 82–87.
20. Sednev V. A., Kopnyshev S. L. Theoretical bases of the requirements substantiation to physical stability of hydrotechnical structures and other energetics objects at external
dynamic impact. Problemy bezopasnosti i chrezvychaynykh situatsiy. 2018. No. 6. pp. 43–62.
21. Sednev V. A., Kopnyshev S. L. The model of spherical cavity expansion in the elastoplastic environment with its hardening. Journal of Machinery Manufacture and Reliability.
2018. No. 4. pp. 105–113.
22. Sednev V. A., Kopnyshev S. L., Sednev A. V. Research of process stages and justification of mathematical model of spherical cavity expansion in soils and rocks. Ustoychivoe
razvitie gornykh territoriy. 2020. Vol. 12, No. 2(44). pp. 302–314.
23. Ishlinskiy A. Yu., Zvolinskiy N. V., Stepanenko N. Z. Dynamics of soil masses. Transactions of the USSR Academy of Sciences. Earth Science Sections. 1954. Vol. 95, No. 4.
24. Shemyakin E. I. Expansion of gas cavity in incompressible elastoplastic medium. (Studying effect of explosion on soil). Journal of Applied Mechanics and Technical Physics. 1961.
No. 5. pp. 91–99.
25. Kharlanyuk L. F., Kopnyshev S. L. Buried explosion pervasion dynamics : Analysis of dynamics of inelastic medium in applied problems on impact and explosion. Moscow, 2009. 165
26. Samul V. I. The elements of theory of elasticity and plasticity : Tutorial. 2nd revised edition. Moscow : Vysshaya shkola, 1982. 264 p.
27. Struzhanov V. V., Burmasheva N. V. Theory of elasticity—Basic provisons : Tutorial. Yekaterinburg : Izdatelstvo Uralskogo universiteta, 2019. 204 p.
28. Tsytovich N. A. Mechanics of soils: Full course. Series: Classics of engineering thinking. Construction. Moscow : Lenand, 2022. 640 p.
29. Available at: https://docs.cntd.ru/document/573219717 (accessed: 30.03.2024).
Language: Russian
(Solved) - The sample space listing the eight simple events that are possible... (1 Answer) | Transtutors
The sample space listing the eight simple events that are possible when a couple has three children is {bbb, bbg, bgb, bgg, gbb, gbg, ggb, ggg}. After identifying the sample space for a couple
having four children, find the probability of getting one girl and three boys (in any order).
Identify the sample space for a couple having four children.
1 Approved Answer
Navnath G
Solution: Let B denote a boy and G a girl. For a couple having four children, the possible combinations form the sample space...
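The truncated solution can be completed computationally; this sketch (ours, not the original answerer's code) enumerates the 16 four-child outcomes and counts those with exactly one girl:

```python
from itertools import product

# Sample space for four children: every length-4 string over {B, G},
# i.e. 2**4 = 16 equally likely outcomes.
sample_space = ["".join(p) for p in product("BG", repeat=4)]
assert len(sample_space) == 16

# Exactly one girl and three boys, in any order: GBBB, BGBB, BBGB, BBBG.
favorable = [s for s in sample_space if s.count("G") == 1]
assert len(favorable) == 4

probability = len(favorable) / len(sample_space)
assert probability == 0.25              # 4/16 = 1/4
```

So the requested probability of one girl and three boys (in any order) is 4/16 = 1/4.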
Who Invented the Computer? Journey of the Modern Computer
Who Invented the Computer?
A) Charles Babbage
B) Alan Turing
C) John von Neumann
D) Konrad Zuse
A) Charles Babbage
More Details on Invented the Computer
1. Charles Babbage is often regarded as the “father of the computer” for his conceptualization and design of the Analytical Engine, a mechanical general-purpose computer.
2. Alan Turing contributed significantly to the development of computing with his concept of the Turing Machine, laying the theoretical foundation for modern computers.
3. John von Neumann’s work on the Von Neumann Architecture established the basic framework for how computers are organized and operated, influencing computer design and programming.
4. Konrad Zuse created the Z3 computer, considered the world’s first programmable digital computer, pioneering early computing technologies.
Exploring Computer Invention: Frequently Asked Questions (FAQs)
1. Who is considered the inventor of the modern computer?
□ Charles Babbage is credited with conceptualizing the first mechanical general-purpose computer, known as the Analytical Engine, in the 19th century.
2. What was Charles Babbage’s contribution to computing?
□ Babbage’s designs laid the groundwork for modern computing concepts, such as programmability and mechanical computation, despite never fully realizing the Analytical Engine during his
3. How did Alan Turing contribute to the development of computers?
□ Turing’s theoretical work laid the foundation for computer science and artificial intelligence. His concept of the Turing Machine provided a theoretical model for general-purpose computation.
4. What is the significance of the Turing Machine?
□ In 1936, Alan Turing proposed the Turing Machine, a hypothetical device that serves as the theoretical foundation for modern computers. It showcases the concept that a machine adhering to
specific rules could execute any computation, highlighting its versatility in computing tasks.
5. What role did John von Neumann play in the advancement of computing?
□ Von Neumann’s work on the Von Neumann Architecture, outlined in the 1940s, introduced the concept of stored-program computers, which became the standard architecture for most modern
6. Who created the first programmable digital computer, and what was it called?
□ Konrad Zuse, a German engineer, developed the Z3 computer in the 1940s. It is considered the world’s first functional programmable digital computer.
7. What were some early challenges faced by computer pioneers like Babbage and Zuse?
□ Early computer pioneers encountered challenges including limited technology, insufficient funding, and social barriers to the acceptance of their innovative ideas. However, despite these
obstacles, their groundbreaking work established the cornerstone of modern computing technology. | {"url":"https://govtjobsup2date.com/who-invented-the-computer","timestamp":"2024-11-13T14:21:16Z","content_type":"text/html","content_length":"82867","record_id":"<urn:uuid:fac2e635-c029-4a57-8513-3e8aa8d2310d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00797.warc.gz"} |
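The Turing Machine idea from question 4 can be illustrated with a toy simulator; the machine below (a bit-flipper), its state names, and its tape alphabet are our own example, not a historical construction:

```python
# Toy Turing machine that flips every bit on its tape, then halts on the blank.
# transitions: (state, read_symbol) -> (write_symbol, head_move, next_state)
transitions = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),    # "_" is the blank symbol: stop here
}

def run(tape_input):
    tape = list(tape_input) + ["_"]     # finite tape plus one blank cell
    state, head = "flip", 0
    while state != "halt":
        write, move, state = transitions[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

assert run("0110") == "1001"            # every bit inverted
```

Even this tiny rule table shows the core idea: a machine that only reads a symbol, writes a symbol, and moves its head can carry out a well-defined computation.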
Mathematics and Physics of Disordered Systems | EMS Press
Mathematics and Physics of Disordered Systems
• Michael Baake
Universität Bielefeld, Germany
• Werner Kirsch
FernUniversität Hagen, Germany
• Hajo Leschke
Universität Erlangen-Nünberg, Germany
• Leonid Pastur
Université Paris 7, Denis Diderot, Paris, France
It was the aim of this workshop to bring together researchers from different fields, working on various aspects of the theory of disordered systems in theoretical and mathematical physics. On the one
hand, this means triggering an interaction between researchers working on these phenomena using methods from theoretical physics and those working with mathematical tools on the subject, as well as
scientists mainly interested in the mathematical methods themselves. On the other hand, the conference also tried to bring together researchers working on related but well distinguished physical
phenomena like Anderson localization in random systems, the theory of aperiodic order, and random matrix theory and its applications (e.g., in quantum chaos).
To do justice to this mix of people with different background, about half the talks were survey talks (with an extended time for the speaker), giving an introduction to the corresponding topic in
general as well as to recent results. We found it important to give enough time for physicists to explain their ideas in this way. About a similar number of mathematical and physical review talks
were given. It was a nice experience that there was always a very lively and intense discussion after the talks and in many cases even during the talks. Many of these discussions were continued
outside the lecture hall in the free time in the afternoons and the evenings. In this way, many interactions started during the week, in particular between people who would otherwise not have met.
An important topic in the meeting was the theory of “quasiperiodic” order in various forms. Consequently, a survey talk on quasicrystals was scheduled. It triggered interesting discussions with
various other disciplines. One highlight was the demonstration of how Delone sets and Schrödinger operators defined on them form a natural bridge between the world of perfect (periodic) order
and that of stochastic phenomena. Though this is still in its infancy, it became clear that a high potential for unified approaches is still to be unraveled.
One of the main topics of the conference was the theory of Anderson localization for random Schrödinger operators from different points of view. There were reviews by theoretical physicists on the
subject explaining basic ideas about Anderson localization, its applications to the quantum Hall effect and the supersymmetric approach to this field. There was also special emphasis on the
phenomenon of “weak localization” both from physicists and mathematicians.
Another interesting topic was the explanation of the Aizenman–Molchanov method to prove Anderson localization in the continuum. This extension of the method is rather new and is considered to be very
important for future developments in the area. In the field of Anderson localization/delocalization, there were also presentations of new developments. There was also a survey talk on threshold
phenomena for the random Landau Hamiltonian which is not only an interesting result by itself, but may also lead to progress toward a proof for the existence of delocalized states.
A number of talks were devoted to the theory of quantum Hall conductance. Both the physical theory and recent mathematical progress were explained. Presumably this topic encouraged the highest amount
of discussion among the various disciplines. The phenomena are not yet understood from a physical point of view, even the model itself is under discussion.
The theory of random matrices is a discipline that evolved rather independently of the theory of random Schrödinger operators although there are obvious intersections between these fields. It was
therefore an important task of the conference to bring together people from these fields. As a result of this attempt there are joint research papers in preparation triggered by the meeting, e.g., on
Hamiltonians on random graphs which can be interpreted both as examples of random matrices and as discrete Schrödinger operators.
The use of poster presentations and the official allocation of one evening session to this proved successful in that it sparked lots of discussions that went on during the meeting. It became evident
that some results can actually profit from such a presentation. In view of the fact that time for talks is limited, it might be a reasonable alternative, long used by other sciences, and some more
wall space might help.
The unique atmosphere of the institute did its magic once again, and the combination with the excellent library makes Oberwolfach still one of the best possible places for meetings such as this one,
also in view of the recent competition from places like BIRS in Canada.
Cite this article
Michael Baake, Werner Kirsch, Hajo Leschke, Leonid Pastur, Mathematics and Physics of Disordered Systems. Oberwolfach Rep. 1 (2004), no. 2, pp. 1167–1232
DOI 10.4171/OWR/2004/22 | {"url":"https://ems.press/journals/owr/articles/705","timestamp":"2024-11-02T14:21:20Z","content_type":"text/html","content_length":"87086","record_id":"<urn:uuid:7fd5022e-4bb2-4dc7-b9c8-954a6a95768e>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00415.warc.gz"} |
New York is a big city. And when you stop to think about all the wild numbers around it, it seems even bigger! Read on
Normally, skyscrapers are big buildings for which you have to reach your head really far back to see the top. But the Newby-McMahon “skyscraper” in
Bedtime Math fan Yuri R. S. asked us, how many pictures are taken every year around the world? When you add up all the devices
Ever wonder who thought up some ideas that we now do every day? Like brushing your teeth with a toothbrush and toothpaste, for example. It
Our longtime fan Ajax L. just told us that October is also Inktober, where the challenge is to draw an ink drawing every day this
Bedtime Math fan Dillon M. asked us, how do we measure the distance from the Earth to the Sun? Read on to find out the
When we count out money to pay for something, usually we don’t study it very carefully – we just add it up. But money has
Why does a guinea pig need a suit of armor? You never know! So read on, and suit up with the math in pet armor.
Bedtime Math fan Lisa B. asked us, how many ice cream bars and popsicles does a truck sell in a day? It all depends on | {"url":"https://bedtimemath.org/category/daily-math/history/","timestamp":"2024-11-07T23:23:29Z","content_type":"text/html","content_length":"101964","record_id":"<urn:uuid:fc8ab69b-403e-4dd7-bca5-47ee9ab40763>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00168.warc.gz"} |
For Prospective Students
What is the difference between CS 145 and CS 135?
CS 145 is an advanced-level 1A Faculty of Mathematics core course, like Math 145 and Math 147. It is aimed at the top students in the Faculty of Mathematics. It covers the concepts that CS 135 does,
but in a different order and at a more rapid pace, using more abstractions and fewer (usually different) examples, and taking more care to illustrate connections between CS and mathematics. It also
introduces a number of enrichment topics that are not covered in CS 135.
What do I need in order to take CS 145?
The main qualifications for CS 145 are ability to reason and think abstractly, and enthusiasm for learning.
Good English reading and listening comprehension skills, and the ability to take notes, are also important.
Why should I take CS 145 if it is not required?
You should take it if you enjoy problem-solving and challenges, if you prefer being pointed in the right direction to being led in the right direction, and if you can take initiative in learning
instead of waiting to be told what to do.
Will a CS 145 credit give me any advantage in the future?
A mark of 70 or higher in CS 145 enables you to take CS 146 (other students, including those who have completed CS 135 with an excellent record, require instructor consent for CS 146). But taking CS
145 and CS 146 will not allow you to take other courses earlier than if you had taken CS 135 and CS 136. It is best to think of CS 145 and CS 146 as enrichment opportunities rather than as vehicles
for more rapid advancement.
What if I am not a CS major?
In the past, a significant fraction of the CS 145 class (sometimes more than half) have not been CS majors. Good students in other majors (even outside the Faculty of Mathematics) can benefit from a
more mathematical treatment of this material.
How will I know that CS 145 is right for me?
This is tricky, because some students will have the right qualities and not realize it, while others will think they do but don't.
One indicator is good marks in all high school courses, not just math and science. Unfortunately, grading standards vary by teacher and by school, and high marks are not necessarily an indicator of
ability. Some students, as Paul Lockhart says, are "just very good at following directions". Low marks could also be due to poor assessment, inadequate motivation, or resentment of makework.
Standardized tests and nationwide math contests offer more consistency, but are sometimes too based on the rote application of technique or on prior knowledge of the class of problems (such as can be
gained by coaching).
You do not need to be an excellent, or even average, coder. There are several key ideas in the course, and if these key ideas are understood and applied, students will be very successful. The choice
of programming language is also intended to "level the playing field" between novice programmers and more experienced programmers.
If you can think logically, apply deductive and inductive reasoning, manage your time well, pay attention to details, and most of all, not be afraid to try out things that might not work as intended,
this course is for you.
Do I need prior experience in programming to take CS 145?
No. In fact, prior experience can be a drawback if it closes your mind to new ways of doing things. CS 145 is unlike anything you will have seen in high school. The following course, CS 146, moves
towards more conventional notions of computing, but it does so with the perspectives and skills gained in CS 145. You should read this brief note for students with prior computing experience.
How do I get into CS 145?
Pre-enrollment for Math core courses takes place in the summer. At that time, students who score above a certain threshold on the Euclid math contest (80 for Fall 2024), Canadian Senior Math Contest
(CSMC) (45 for Fall 2024), or who have earned 50 in the Senior Canadian Computing Competition (CCC) may pre-enroll themselves into CS 145. Students who do not meet the above cutoffs can request to be
put on a waitlist through the Math Undergrad Office (MUO).
After the course selection period (mid-to-late June), there will be a sample assignment sent to all interested students in the course. The objective of the sample assignment is to help students to
make an informed decision about enrolling in the course. This sample assignment will not be graded and will not be counted in any way in the course grade. Students may contact the instructor if they
have additional questions or wish to verify their understanding of the sample assignment.
If you wish to ask any questions regarding your particular situation, please e-mail the instructor.
It is best to apply early so that you can be transferred into CS 145 before the second phase of pre-enrollment (for elective courses) later in the summer. Transfers can be effected later, but become
more difficult.
What if I start in CS 145 but decide it is not for me?
You can transfer from CS 145 to CS 135 at any time up to the end of the sixth week of classes. The submission/grading mechanisms and marking rubrics are different in CS 135, so there will be more
work for you to do to catch up after the transfer. This is best done, if you are going to do it, as soon as possible.
What if I start in CS 135 and decide it is too easy and want to switch into CS 145?
CS 135 deliberately starts off slowly and carefully, and ramps up later in the term; CS 145 starts off more rapidly, in part to give students enough information to make up their minds about it, and
in part because the first midterm is scheduled after only a few weeks of lecture. The transfer is thus difficult, but it is possible (if there is room); the earlier the better. | {"url":"https://student.cs.uwaterloo.ca/~cs145/prospective.shtml","timestamp":"2024-11-03T03:25:19Z","content_type":"text/html","content_length":"12109","record_id":"<urn:uuid:bddc2047-b17e-47fd-9e2a-fa6d15aa7b86>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00015.warc.gz"} |
Excitation spectra of many-body systems by linear response: General theory and applications to trapped condensates
We derive a general linear-response many-body theory capable of computing excitation spectra of trapped interacting bosonic systems, e.g., depleted and fragmented Bose-Einstein condensates (BECs). To
obtain the linear-response equations we linearize the multiconfigurational time-dependent Hartree for bosons (MCTDHB) method, which provides a self-consistent description of many-boson systems in
terms of orbitals and a state vector (configurations), and is in principle numerically exact. The derived linear-response many-body theory, which we term LR-MCTDHB, is applicable to systems with
interaction potentials of general form. For the special case of a δ interaction potential we show explicitly that the response matrix has a very appealing bilinear form, composed of separate blocks
of submatrices originating from contributions of the orbitals, the state vector (configurations), and off-diagonal mixing terms. We further give expressions for the response weights and density
response. We introduce the notion of the type of excitations, useful in the study of the physical properties of the equations. From the numerical implementation of the LR-MCTDHB equations and
solution of the underlying eigenvalue problem, we obtain excitations beyond available theories of excitation spectra, such as the Bogoliubov-de Gennes (BdG) equations. The derived theory is first
applied to study BECs in a one-dimensional harmonic potential. The LR-MCTDHB method contains the BdG excitations and, also, predicts a plethora of additional many-body excitations which are out of
the realm of standard linear response. In particular, our theory describes the exact energy of the higher harmonic of the first (dipole) excitation not contained in the BdG theory. We next study a
BEC in a very shallow one-dimensional double-well potential. We find with LR-MCTDHB low-lying excitations which are not accounted for by BdG, even though the BEC has only little fragmentation and,
hence, the BdG theory is expected to be valid. The convergence of the LR-MCTDHB theory is assessed by systematically comparing the excitation spectra computed at several different levels of theory.
ASJC Scopus subject areas
• Atomic and Molecular Physics, and Optics
UMass Lowell Center for Atmospheric Research / Digisonde DPS
The temporal and spatial variation in ionospheric structures have often frustrated the efforts of communications and radar system operators who base their frequency management decisions on monthly
mean predictions of radio propagation in the high frequency (short-wave) band. The University of Massachusetts Lowell's Center for Atmospheric Research (UMLCAR) has produced a low power miniature
version of its Digisonde^TM sounders, the Digisonde^TM Portable Sounder (DPS), capable of making measurements of the overhead ionosphere and providing real-time on-site processing and analysis to
characterize radio signal propagation to support communications or surveillance operations.
The system compensates for a low power transmitter (300 W vs. 10 kW for previous systems) by employing intrapulse phase coding, digital pulse compression and Doppler integration. The data
acquisition, control, signal processing, display, storage and automatic data analysis functions have been condensed into a single multi-tasking, multiple processor computer system, while the analog
circuitry has been condensed and simplified by the use of reduced transmitter power, wide bandwidth devices, and commercially available PC expansion boards. The DPS is shown in the composite Figure
1-1 (with the integrated transceiver package shown in Figure 1-1A, and one of the four crossed magnetic dipole receive antennas in Figure 1-1B).
Figure 1-1A Digisonde^TM Portable Sounder
Figure 1-1B Magnetic Loop Turnstile Antenna
Noteworthy new technology involved in this system includes:
□ Electronically switched active crossed loop receiving antenna
□ Commercially sourced 10 MIPS TMS 320C25 digital signal processor (DSP)
□ 4 million sample DSP buffer memory
□ 71 to 110 MHz digital synthesizer on a 4"x5" card
□ Compact DC-DC converters allowing operation on one battery
□ Four-channel high speed (1 million 12-bit samples/sec) digitizer board
□ A 160 Mbits/sec parallel data bus between the digitizer and the DSP
□ A proprietary multi-tasking operating system for remote interaction via a modem connection without suspending system operation
□ Direct digital synthesized coherent oscillators
□ 21 dB signal processing gain from phase coded pulse compression
□ 21 dB additional signal processing gain from coherent Doppler integration
□ Automatic ionospheric layer identification and parameter scaling by an embedded expert system
The availability of a small low power ionosonde that could be operated on-site wherever a high frequency (HF) radio or radar was in use, would greatly increase the value of the information produced
by the instrument since it would become available to the end user immediately.
One of the chief applications for the real-time data currently provided by digital ionospheric sounders is to manage the operation of HF radio channels and networks. Since many HF radios are operated
at remote locations (i.e., aircraft, boats, land vehicles of all sorts, and remote sites where telephone service is unreliable) the major obstacle to making practical use of the ionospheric sounder
data and associated computed propagation information is the dissemination of this data to a data processing and analysis site. Since HF is often used where no alternative communications link exists,
or is held in reserve in case primary communication is lost, it is not practical to assume that a communications link exists to make centrally tabulated real-time ionospheric data available to the
user. Furthermore, local measurements are superior to measurements at sites of opportunity in the user s general region of the globe since extreme variations in ionospheric properties are possible
even over short distances, especially at high latitudes [Buchau et al., 1985; Buchau and Reinisch, 1991] or near the sunset or sunrise terminator.
However, for most applications, the size, weight, power consumption and cost of a conventional ionospheric sounder have made local measurements impractical. Therefore the availability of a small, low
cost sounder is a major improvement in the usefulness of ionospheric sounder data. Shrinking the conventional 1 to 50 kW pulse sounders to a portable, battery operated 100 to 500 W system requires
the application of substantial signal processing gain to compensate for the 20 dB reduction in transmitter power. Furthermore, a compact portable package requires the use of highly integrated
control, data acquisition, timing, data processing, display and storage hardware.
The objective of the DPS development project was to develop a small vertical incidence (i.e., monostatic) ionospheric sounder which could automatically collect and analyze ionospheric measurements at
remote operating sites for the purpose of selecting optimum operating frequencies for obliquely propagated communication or radar propagation paths. Intermediate objectives assumed to be necessary to
produce such a capability were the development of optimally efficient waveforms and of functionally dense signal generation, processing and ancillary circuitry. Since the need for an embedded general
purpose computer was a given imperative, real-time control software was developed to incorporate as many functions as was feasible into this computer rather than having to provide additional
circuitry and components to perform these functions. The DPS duplicates all of the functions of its predecessor the Digisonde^TM 256 [Bibl et al., 1981] and [Reinisch, 1987] in a much smaller, low
power package. These include the simultaneous measurement of seven observable parameters of reflected (or in oblique incidence, refracted) signals received from the ionosphere:
1) Frequency
2) Range (or height for vertical incidence measurements)
3) Amplitude
4) Phase
5) Doppler Shift and Spread
6) Angle of Arrival
7) Wave Polarization
Because the physical parameters of the ionospheric plasma affect the way radio waves reflect from or pass through the ionosphere, it is possible by measuring all of these observable parameters at a
number of discrete heights and discrete frequencies to map out and characterize the structure of the plasma in the ionosphere. Both the height and frequency dimensions of this measurement require
hundreds of individual measurements to approximate the underlying continuous functions. The resulting measurement is called an ionogram and comprises a seven-dimensional measurement of signal
amplitude vs. frequency and vs. height as shown in Figure 1-2 (due to the limitations of current software only five may be displayed at a time). Figure 1-2 is a five-dimensional display, with
sounding frequency as the abscissa, virtual reflection height (simple conversion of time delay to range assuming propagation at 3x10^8 m/sec) as the ordinate, signal amplitude as the spot (or pixel)
intensity, Doppler shift as the color shade and wave polarization as the color group (the blue-green-grey scale or "cool" colors showing extraordinary polarization, the red-yellow-white scale or
"hot" colors showing ordinary polarization).
Figure 1-2 Five-Dimensional Ionogram
Another objective of the DPS development was to store the data created by the system in an easily accessible format (e.g., DOS formatted personal computer files), while maintaining compatibility with
the existing base of Digisonde^TM sounder analysis software in use at the UMLCAR and at over 40 research institutes around the world. This objective often competed with the additional objective of
providing an easily accessible and simply understood standard data format to facilitate the development of novel post-processing analysis and display programs.
Ionospheric Propagation of Electromagnetic Waves
An ionospheric sounder uses basic radar techniques to detect the electron density (equal to the ion density since the bulk plasma is neutral) of ionospheric plasma as a function of height. The
ionospheric plasma is created by energy from the sun transferred by particles in the solar wind as well as direct radiation (especially ultra-violet and x-rays). Each component of the solar emissions
tends to be deposited at a particular altitude or range of altitudes and therefore creates a horizontally stratified medium where each layer has a peak density and to some degree, a definable width,
or profile. The shape of the ionized layer is often referred to as a Chapman function [Davies, 1989] which is a roughly parabolic shape somewhat elongated on the top side. The peaks of these layers
usually form between 70 and 300 km altitude and are identified by the letters D, E, F1 and F2, in order of their altitude.
By scanning the transmitted frequency from 1 MHz to as high as 40 MHz and measuring the time delay of any echoes (i.e., apparent or virtual height of the reflecting medium) a vertically transmitting
sounder can provide a profile of electron density vs. height. This is possible because the relative refractive index of the ionospheric plasma is dependent on the density of the free electrons (N
[e]), as shown in Equation 1-1 (neglecting the geomagnetic field):
m^2(h) = 1 - k(N[e]/f^2)    (1-1)
where k = 80.5, N[e] is electrons/m^3, and f is in Hz [Davies, 1989; Chen, 1987].
The behavior of the plasma changes significantly in the presence of the Earth's magnetic field. An exhaustive derivation of m [Davies, 1989] results in the Appleton Equation for the refractive index,
which is one of the fundamental equations used in the field of ionospheric propagation. This equation clearly shows that there are two values for refractive index, resulting in the splitting of a
linearly polarized wave incident upon the ionosphere, into two components, known as the ordinary and extraordinary waves. These propagate with a different wave velocity and therefore appear as two
distinct echoes. They also exhibit two distinct polarizations, approximately right hand circular and left hand circular, which aid in distinguishing the two waves.
When the transmitted frequency is sufficient to drive the plasma at its resonant frequency there is a total internal reflection. The plasma resonance frequency (f[p]) is defined by several constants,
e the charge of an electron, m the mass of an electron, e[o] the permittivity of free space, but only one variable, N[e] electron density in electrons/m^3 [Chen, 1987]:
f[p]^2 = N[e]e^2/(4p^2 e[o]m) = kN[e]    (1-2)
A typical number for the F-region (200 to 400 km altitude) is 10^12 electrons/m^3, so the plasma resonance frequency would be 9 MHz. The value of m in Equation 1 1 approaches 0 as the operating
frequency, f, approaches the plasma frequency. The group velocity of a propagating wave is proportional to m, so m = 0 implies that the wave slows down to zero which is obviously required at some
point in the process of reflection since the propagation velocity reverses.
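The reflection condition described above can be sketched numerically. The snippet below is a minimal illustration (not part of the DPS software; the function names are my own) that evaluates Equations 1-1 and 1-2 for the typical F-region density quoted above:

```python
import math

K = 80.5  # constant from Equation 1-1 (f in Hz, N[e] in electrons/m^3)

def plasma_frequency(ne):
    """Plasma resonance frequency f_p in Hz for electron density ne (m^-3)."""
    return math.sqrt(K * ne)

def refractive_index_sq(ne, f):
    """mu^2 from Equation 1-1, neglecting the geomagnetic field."""
    return 1.0 - K * ne / f ** 2

ne = 1e12                      # typical F-region density quoted in the text
fp = plasma_frequency(ne)
print(f"f_p = {fp / 1e6:.2f} MHz")        # close to the 9 MHz figure above
print(refractive_index_sq(ne, 12e6) > 0)  # 12 MHz penetrates this layer
```

A wave at f > f[p] sees mu^2 > 0 and keeps propagating; as f approaches f[p], mu^2 goes to zero and the wave is totally reflected.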
The total internal reflection from the ionosphere is similar to reflection of radio frequency (RF) energy from a metal surface in that the re-radiation of the incident energy is caused by the free
electrons in the medium. In both cases the wave penetrates to some depth. In a plasma the skin depth (the depth into the medium at which the electric field is 36.8% of its incident amplitude) is
defined by:
d = l[0] / (2p[(f[p]/f)^2 - 1]^(1/2))    (1-3)
where l[0] is the free space wavelength.
The major difference between ionospheric reflection and reflection from a metallic surface is that the latter has a uniform electron density while the ionospheric density increases roughly
parabolically with altitude, with densities starting at essentially zero at stratospheric altitudes and rising to a peak at about 200 to 400 km. In the case of a metal there is no region where the
wave propagates below the resonance frequency, while in the ionosphere the refractive index and therefore the wave velocity change with altitude until the plasma resonance frequency is reached. Of
course if the RF frequency is above the maximum plasma resonance frequency the wave is never reflected and can penetrate the ionosphere and propagate into outer space. Otherwise what happens on a
microscopic scale at the surface of a metal and on a macroscopic scale at the plasma resonance in the ionosphere is very similar in that energy is re-radiated by electrons which are responding to the
incident electric field.
Coherent Integration
During the 1960s and 1970s several variations in sounding techniques started moving significantly beyond the basic pulse techniques developed in the 1930s. First was the coherent integration of
several pulses transmitted at the same frequency. Two signals are coherent if, having a phase and amplitude, they are able to be added together (e.g., one radar pulse echo received from a target
added to the next pulse echo received from the same target, thousandths of a second later) in such a way that the sum may be zero (if the two signals are exactly out of phase with each other) or
double the amplitude (if they are exactly in phase). Coherent integration of N signals can provide a factor of N improvement in power. This technique was first used in the Digisonde^TM 128 [Bibl and
Reinisch, 1975].
In ionospheric sounding, the motion of the ionosphere often makes it impossible to integrate by simple coherent summation for longer than a fraction of a second, although it is not rare to receive
coherent echoes for tens of seconds. However, with the application of spectral integration (which is a byproduct of the Fourier transform used to create a Doppler spectrum) it is possible to
coherently integrate pulse echoes for tens of seconds under nearly all ionospheric conditions [Bibl and Reinisch, 1978]. The integration may progress for as long a time as the rate of change of phase
remains constant (i.e., there is a constant Doppler shift, Df). The Digisonde^TM 128PS, and all subsequent versions perform this spectral integration.
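The idea of spectral integration can be demonstrated with a toy calculation. The pulse count, pulse rate, and Doppler shift below are illustrative values only (the Doppler is deliberately chosen to fall exactly on a DFT bin), not DPS operating parameters:

```python
import cmath, math

N, prf, doppler = 128, 50.0, 3.125   # pulses, pulse rate (Hz), Doppler (Hz)

# Echo phasors from successive pulses: a constant Doppler shift makes the
# phase rotate at a constant rate from pulse to pulse.
echoes = [cmath.exp(2j * math.pi * doppler * n / prf) for n in range(N)]

# Plain coherent summation: the rotating phase cancels the sum almost
# completely, so simple integration fails for a moving reflector.
plain = abs(sum(echoes))

# Spectral integration: the DFT bin matching the Doppler shift re-aligns
# the phases before summing, recovering the full factor-of-N gain.
def dft_bin(x, k):
    n = len(x)
    return sum(v * cmath.exp(-2j * math.pi * k * i / n) for i, v in enumerate(x))

k = round(doppler * N / prf)         # index of the matching Doppler bin
spectral = abs(dft_bin(echoes, k))
print(plain, spectral)               # plain is near 0, spectral is near 128
```

With N = 128 pulses the integration gain is 10 log10(128), about 21 dB, the Doppler-integration figure listed in the feature summary above.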
Additional detail on this topic is contained in Chapter 2 in this section.
Coded Pulses to Facilitate Pulse Compression Radar Techniques
A third general technique to improve on the simple pulse sounder is to stretch out the pulse by a factor of N, thus increasing the duty cycle so the pulse contains more energy without requiring a
higher power transmitter (power x time = energy). However, to maintain the higher range resolution of the simple short pulse the pulse can be bi-phase, or phase reversal modulated with a phase code
to enable the receiver to create a synthetic pulse with the original (i.e., that of the short pulse) range resolution. A network of sounders using a 13-bit Barker Code were operated by the U.S. Navy
in the 1960s.
The critical factor in the use of pulse compression waveforms for any radar type measurement is the correlation properties of the internal phase code. Phase codes proposed and experimented with
included the Barker Code [Barker, 1953], Huffman Sequences [Huffman, 1962], Convoluted Codes [Coll, 1961], Maximal Length Sequence Shift Register Codes (M-codes) [Sarwate and Pursley, 1980], or
Golay's Complementary Sequences [Golay, 1961], which have been implemented in the VHF mesospheric sounding radar at Ohio State University [Schmidt et al., 1979] and in the DPS. The internal phase
code alternative has just recently become economically feasible with the availability of very fast microprocessor and signal processor ICs. Barker Coded pulses have been implemented in several
ionospheric sounders to date, but until the DPS was developed there have been no other successful implementations of Complementary Series phase codes in ionospheric sounders.
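The defining property of a Complementary (Golay) pair, that the sidelobes of the two autocorrelation functions cancel exactly when summed, can be checked directly on the 16-chip pair the DPS modulates onto alternating pulses (the codes are listed later in this chapter). This sketch is my own illustration, not DPS firmware:

```python
# The DPS 16-chip Complementary Code pair, mapped 1 -> +1 and 0 -> -1.
A = [1 if b == "1" else -1 for b in "1101111010001011"]  # odd pulses
B = [1 if b == "1" else -1 for b in "1101111001110100"]  # even pulses

def autocorr(x):
    """Aperiodic autocorrelation for non-negative lags."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(n)]

combined = [a + b for a, b in zip(autocorr(A), autocorr(B))]
print(combined)   # [32, 0, 0, ..., 0]: all sidelobes cancel exactly
```

Zero summed sidelobes are what allow the matched filter to compress the long coded pulse back down to the range resolution of a single chip without self-clutter.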
The European Incoherent Scatter radar in Tromso, Norway [VanEiken, 1991 and 1993] and an over-the-horizon (OTH) HF radar used the Complementary Series codes. However, most major radar systems
including all currently active OTH radars opted for the FM/CW chirp technique, due to its resistance to Doppler induced leakage and its compatibility with analog pulse compression processing
techniques. Basically, the chirp waveform avoids the need for extremely fast digital processing capabilities, since only the final stage is performed digitally, while the pulse compression is best
performed entirely digitally. Even at the modest bandwidths used for ionospheric sounding, this digital capability was until recently, much more expensive and cumbersome than the special synthesizers
required for chirpsounding.
Another new development in the 1970s was the coherent multiple receiver array [Bibl and Reinisch, 1978] which allows angle of arrival (incidence angle) to be deduced from phase differences between
antennas by standard interferometer techniques. Given a known operating frequency, and known antenna spacing, by measuring the phase or phase difference on a number of antennas, the angle of arrival
of a plane wave can be deduced. This interferometry solution is invalid, however, if there are multiple sources contributing to the received signal (i.e., the received wave therefore does not have a
planar phase front). This problem can be overcome in over 90% of the cases as was first shown with the Digisonde^TM 256 [Reinisch et al., 1987] by first isolating or discriminating the multiple
sources in range, then in the Doppler domain (i.e., isolating a plane wavefront) before applying the interferometry relationships.
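A single-baseline version of the interferometry relationship can be written out as follows. The baseline length, frequency, and arrival angle are arbitrary illustrative values; a real system solves over several baselines, and only after the range and Doppler isolation described above:

```python
import math

c = 3e8
f, d = 5e6, 30.0        # assumed: 5 MHz sounding frequency, 30 m baseline
lam = c / f

def phase_difference(theta_deg):
    """Phase lag between two antennas for a plane wave at zenith angle theta."""
    return 2 * math.pi * d * math.sin(math.radians(theta_deg)) / lam

def angle_of_arrival(dphi):
    """Invert the interferometer equation for a single plane-wave source."""
    return math.degrees(math.asin(dphi * lam / (2 * math.pi * d)))

dphi = phase_difference(10.0)
print(angle_of_arrival(dphi))   # recovers the 10 degree arrival angle
```

With d equal to half a wavelength, as here, the phase difference stays within one cycle over the whole hemisphere, so the inversion is unambiguous; it remains valid only for a single plane wavefront, which is exactly why the sources must be isolated in range and Doppler first.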
Except for the FM/CW chirpsounder which operates well on transmitter power levels of 10 to 100 W (peak power) the above techniques and cited references typically employ a 2 to 30 kW peak power pulse
transmitter. This power is needed to get sufficient signal strength to overcome an atmospheric noise environment which is typically 20 to 50 dB (CCIR Noise Tables) above thermal noise (defined as
kTB, the theoretical minimum noise due to thermal motion, where k = Boltzmann's constant, T = temperature in K, and B = system bandwidth in Hz). More importantly, however, since ionogram
measurements require scanning of the entire propagating band of frequencies in the 0.5 to 20 MHz RF band (up to 45 MHz for oblique measurements), the sounder receiver will encounter broadcast
stations, ground-to-air communications channels, HF radars, ship-to-shore radio channels and several very active radio amateur bands which can add as much as 60 dB more background interference.
Therefore, the sounder signal must be strong enough to be detectable in the presence of these large interfering signals.
To make matters worse, a pulse sounder signal must have a broad bandwidth to provide the capability to accurately measure the reflection height, therefore the receiver must have a wide bandwidth,
which means more unwanted noise is received along with the signal. The noise is distributed quite evenly over bandwidth (i.e., white), while interfering signals occur almost randomly (except for
predictably larger probabilities in the broadcast bands and amateur radio bands) over the bandwidth. Thus a wider-bandwidth receiver receives proportionally more uniformly distributed noise and the
probability of receiving a strong interfering signal also goes up proportionally with increased bandwidth.
The DPS transmits only 300 W of pulsed RF power but compensates for this low power by digital pulse compression and coherent spectral (Doppler) integration. The two techniques together provide about
30 dB of signal processing gain (up to 42 dB for the bi-static oblique waveforms) thus for vertical incidence measurements the system performs equivalently with a simple pulse sounder of 1000 times
greater power (i.e., 300 kW).
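The power-equivalence claim is straightforward decibel arithmetic, and is easy to verify:

```python
p_tx = 300.0                # DPS peak transmitter power, W
gain_db = 30.0              # combined pulse-compression + Doppler gain

# 30 dB is a factor of 10^(30/10) = 1000 in power.
equivalent = p_tx * 10 ** (gain_db / 10)
print(f"{equivalent / 1e3:.0f} kW")   # 300 kW, the figure quoted above
```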
Additional detail on this topic is contained in Chapter 2 in this section.
Current Applications of Ionospheric Sounding
Current applications of ionospheric sounders fall into two categories:
a. Support of operational systems, including shortwave radio communications and OTH radar systems. This support can be in the form of predictions of propagating frequencies at given times and
locations in the future (e.g., over the ensuing month) or the provision of real-time updates (updated as frequently as every 15 minutes) to detect current conditions such that system operating
parameters can be optimized.
b. Scientific research to enable better prediction of ionospheric conditions and to understand the plasma physics of the solar-terrestrial interaction of the Earth's atmosphere and magnetic field
with the solar wind.
There has been considerable effort in producing global models of ionospheric densities, temperature, chemical constitution, etc, such that a few sounder measurements could calibrate the models and
improve the reliability of global predictions. It has been shown that if measurements are made within a few hundred kilometers of each other, the correlation of the measured parameters is very high
[Rush, 1978]. Therefore a network of sounders spaced by less than 500 km can provide reliable estimates of the ionosphere over a 250 km radius around them.
The areas of research pursued by users of the more sophisticated features of the Digisonde^TM sounders include polar cap plasma drift, auroral phenomena, equatorial spread-F and plasma irregularity
phenomena, and sporadic E-layer composition [Buchau et al., 1985; Reinisch 1987; and Buchau and Reinisch 1991]. There may be some driving technological needs (e.g., commercial or military uses) in
some of these efforts, but many are simply basic research efforts aimed at better understanding the manifestations of plasma physics provided by nature.
Requirements for a Small Flexible Sounding System
The detailed design and synthesis of a RF measurement system (or any electronic system) must be based on several criteria:
a. The performance requirements necessary to provide the needed functions, in this case scientific measurements of electron densities and motions in the ionosphere.
b. The availability of technology to implement such a capability.
c. The cost of purchasing or developing such technology.
d. The risk involved in depending on certain technologies, especially if some of the technology needs to be developed.
e. The capabilities of the intended user of the system, and its expected willingness to learn to use and maintain it; i.e., how complicated can the operation be before the user will give up and
not try to learn it.
The question of what technology can be brought to bear on the realization of a new ionospheric sounder was answered in a survey of existing technology in 1989, when the portable sounder development
started in earnest. This survey showed the following available components, which showed promise in creating a smaller, less costly, more powerful instrument. Many of these components were not
available when the last generation of Digisondes^TM (circa 1980) was being developed:
Solid-state 300 W MOSFET RF power transistors
High-speed high precision (12, 14 and 16 bit) analog to digital (A D) converters
High-speed high precision (12 and 16 bit) digital to analog (D A) converters
Single chip Direct Digital Synthesizers (DDS)
Wideband (up to 200 MHz) solid state op amps for linear feedback amplifiers
Wideband (4 octaves, 2 32 MHz) 90° phase shifters
Proven Digisonde^TM 256 measurement techniques
Very fast programmable DSP (RISC) ICs
Fast, single board, microcomputer systems and supporting programming languages
Many of these components are inexpensive and well developed because they feed a mass market industry. The MOSFET transistors are used in Nuclear Magnetic Resonance medical imaging systems to provide
the RF power to excite the resonances. The high speed D A converters are used in high resolution graphic video display systems such as those used for high performance workstations. The DDS chips are
used in cellular telephone technology, in which the chip manufacturer, Qualcomm, is an industry leader. The DSP chips are widely used in speech processing, voice recognition, image processing
(including medical instrumentation). And of course, fast microcomputer boards are used by many small systems integrators which end up in a huge array of end user applications ranging from cash
registers to scientific computing to industrial process controllers.
The performance parameters were well known at the beginning of the DPS development, since several models of ionospheric pulse sounders had preceded it. The frequency range of 1 to 20 MHz for vertical
sounding was an accepted standard, and 2 to 30 MHz was accepted as a reasonable range for oblique incidence measurements. It was well known that radio waves of greater than 30 MHz often do propagate
via skywave paths; however, most systems relying on skywave propagation don't support these frequencies, so interest in this frequency band would only be limited to scientific investigations. A
required power level in the 5 to 10 kW range for pulse transmitters had provided good results in the past. The measurement objectives were to simultaneously measure all seven observable parameters
outlined at Paragraph 107 above in order to characterize the following physical features:
The height profile of electron density vs. altitude
Position and spatial extent of irregularity structures, gradients and waves
Motion vectors of structures and waves
As mentioned in the section above dealing with Current Applications of Ionospheric Sounding (Paragraph 127 et seq. above), the accurate measurement of all of the parameters, except frequency (it
being precisely set by the system and need not be measured) depends heavily on the signal to noise ratio of the received signal. Therefore vertical incidence ionospheric sounders capable of acquiring
high quality scientific data have historically utilized powerful pulse transmitters in the 2 to 30 kW range. The necessity for an extremely good signal to noise ratio is demanded by the sensitivity
of the phase measurements to the random noise component added to the signal level. For instance, to measure phase to 1 degree accuracy requires a signal to noise ratio better than 40 dB (assuming a
Gaussian noise distribution which is actually a best case), and measurement of amplitude to 10% accuracy requires over 20 dB signal to noise ratio. Of course, it is desirable that these measurements
be immune to degradation from noise and interference and maintain their high quality over a large frequency band. This requires that at the lower end of the HF band the system's design has to
overcome absorption, noise and interference, and poor antenna performance and still provide at least a 20 to 40 dB signal to noise ratio.
METHODOLOGY, THEORETICAL BASIS AND IMPLEMENTATION
The VIS/DPS borrows several of the well proven measurement techniques used by the Digisonde^TM 256 sounder described in [Bibl, et al, 1981; Reinisch et al., 1989] and [Reinisch, 1987], which has been
produced for the past 12 years by the UMLCAR. The addition of digital pulse compression in the DPS makes the use of low power feasible, the implementation in software of processes that were
previously implemented in hardware results in a much smaller physical package, and the high level language control software and standard PC-DOS (i.e., IBM/PC) data file formats provide a new level of
flexibility in system operation and data processing.
A technical description of the DPS (sounder unit and receive antennas sub-systems) are contained in Section 2 of this manual.
Coherent Phase Modulation and Pulse Compression
The DPS is able to be miniaturized by lengthening the transmitted pulse beyond the pulse width required to achieve the desired range resolution where the radar range resolution is defined as,
DR = c/2b, where b is the system bandwidth, or    (1-4)
DR = cT/2, for a simple rectangular pulse waveform, with T being the width of the rectangular pulse
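Equation 1-4 translates directly into code. The 30 kHz bandwidth and 33.3 microsecond pulse width below are assumed illustrative figures, not quoted DPS specifications:

```python
c = 2.998e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Equation 1-4: DR = c / (2b)."""
    return c / (2 * bandwidth_hz)

def pulse_resolution(pulse_width_s):
    """Rectangular-pulse form: DR = c * T / 2."""
    return c * pulse_width_s / 2

print(range_resolution(30e3) / 1e3)     # ~5 km for a 30 kHz bandwidth
print(pulse_resolution(33.3e-6) / 1e3)  # ~5 km for the matching pulse width
```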
The longer pulse allows a small low voltage solid state amplifier to transmit an amount of energy equal to that transmitted by a high power pulse transmitter (energy = power x time, and power = V^2/
R) without having to provide components to handle the high voltages required for tens of kilowatt power levels. The time resolution of the short pulse is provided by intrapulse phase modulation using
programmable phase codes (user selectable and firmware expandable), the Complementary Codes, and M-codes are standard. The use of a Complementary Code pulse compression technique is described in this
chapter, which shows that at 300 W of transmitter power the expected measurement quality is the same as that of a conventional sounder of about 500 kW peak pulse power.
The transmitted spread spectrum signal s(t) is a biphase (180° phase reversal) modulated pulse. As illustrated in Figure 1-3, bi-phase modulation is a linear multiplication of the binary spreading
code p(t) (a.k.a. a chipping sequence, where each code bit is a "chip") with a carrier signal sin(2πf[0]t), or in complex form exp[j2πf[0]t], to create a transmitted signal,
s(t) = p(t) exp[j2πf[0]t] (1-5)
Figure 1-3 Generation of a Bi-phase Modulated Spread Spectrum Waveform
Notation throughout this chapter will use s(t) as the transmitted signal, r(t) the received signal and p(t) as the chip sequence. Functions r[1](t) and r[2](t) will be developed to describe the
signal after various stages of processing in the receiver.
The term chip is used rather than bit because in spread spectrum communications many chips are required to transmit one bit of message information, so a distinct term had to be adopted. Figure 1-4
on the following page depicts the modulation of a sinusoidal RF carrier signal by a binary code (notice that the code is a zero-mean signal, i.e., centred around 0 volts amplitude). Since the mixer
in Figure 1-3 can be thought of as a mathematical multiplier, the code creates a 180° (π radian) phase shift in the sinusoidal carrier whenever p(t) is negative, since −sin(ωt) = sin(ωt + π).
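As a sketch of Equation 1-5, the snippet below multiplies a ±1 chip sequence with a complex carrier. The sample rate, carrier frequency, chip rate and chip values are arbitrary illustration values, not DPS parameters:

```python
import numpy as np

# Bi-phase modulation per Eq. 1-5: s(t) = p(t) * exp[j*2*pi*f0*t].
# All numeric parameters below are assumed for illustration only.
fs = 1.0e6                    # sample rate, Hz (assumed)
f0 = 50.0e3                   # carrier frequency, Hz (assumed)
chip_rate = 10.0e3            # chips per second (assumed)
chips = np.array([1, -1, 1, 1, -1, 1, -1, -1])   # p(t) as +/-1 values

samples_per_chip = int(fs / chip_rate)
p = np.repeat(chips, samples_per_chip)           # chip sequence vs. time
t = np.arange(p.size) / fs
carrier = np.exp(2j * np.pi * f0 * t)            # exp[j*2*pi*f0*t]

s = p * carrier                                  # transmitted signal s(t)

# A chip of -1 flips the carrier phase by 180 degrees (pi radians):
phase_jump = np.angle(s[samples_per_chip] / carrier[samples_per_chip])
```

Since the chips are ±1, the envelope of s(t) is constant and only its phase carries the code, which is what makes the waveform attractive for a saturating power amplifier.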
The binary spreading code is identical to a stream of data bits except that it is designed such that it forms a pattern with uniquely desirable autocorrelation function characteristics as described
later in this chapter. The 16-bit Complementary Code pair used in the DPS is 1-1-0-1-1-1-1-0-1-0-0-0-1-0-1-1 modulated onto the odd-numbered pulses and 1-1-0-1-1-1-1-0-0-1-1-1-0-1-0-0 modulated onto
the even-numbered pulses. This pattern of phase modulation chips is such that the frequency spectrum of such a signal (as shown in Figure 1-4) is uniformly spread over the signal bandwidth, thus the
term "spread spectrum". In fact, it is interesting to note that the frequency spectrum content of the spread spectrum signal used by the DPS is identical to that of the higher peak power, simple
short pulse used by the Digisonde^TM 256, even though the physical pulse is 8 times longer. Since they have the same bandwidth, Equation 1-4 would suggest that they have the same range resolution. It
will be shown later in this chapter, that the ability of the Digisonde^TM 256 and the DPS to determine range (i.e., time delay), phase, Doppler shift and angle of arrival is also identical between
the two systems, even though the transmitted waveforms appear to be vastly different.
Figure 1-4 Spectral Content of a Spread-Spectrum Waveform
Since the transmitted signal would obscure the detection of the much weaker echo in a monostatic system the transmitted pulse must be turned off before the first E-region echoes arrive at the
receiver which, as shown in Figure 1-5, is about T[E] = 600 µsec after the beginning of the pulse. Also, since the receiver is saturated when the transmitter pulse comes on again, the pulse
repetition frequency is limited by the longest time delay (listening interval) of interest, which is at least 5 msec, corresponding to reflections from 750 km altitude. To meet these constraints, a
533 µsec pulse made up of eight 66.67 µsec phase code chips (15 000 chips/sec) is selected which allows detection of ionospheric echoes starting at 80 km altitude. To avoid excessive range
ambiguity, a highest pulse repetition frequency of 200 pps is chosen, which allows reception of the entire pulse from a virtual height of 670 km (the pulse itself is 80 km long) altitude before the
next pulse is transmitted. This timing captures all but the highest multihop F-region echoes which are of little interest. Under conditions where higher unambiguous ranges, and therefore longer
receiver listening intervals, are desired 100 pps or 50 pps can be selected under software control.
Figure 1-5 Natural Timing Limitations for Monostatic Vertical Incidence Sounding
The key to the pulse compression technique lies in the selection of a spreading function, p(t), which possesses an autocorrelation function appropriate for the application. The ideal autocorrelation
function for any remote sensing application is a Dirac delta function (or instantaneous impulse, δ(t)), since this would provide perfect range accuracy and infinite resolution. However, since the
Dirac delta function has infinite instantaneous power and infinite bandwidth, the engineering tradeoffs in the design of any remote sensing system mainly involve how far one can afford to deviate
from this ideal (or how much one can afford to spend in more closely approximating this ideal) and still achieve the accuracy and resolution required. More to the point, for a discussion of a
discrete time digital system such as the DPS, the ideal signal is a complex unit impulse function, with the phase of the impulse conveying the RF phase of the received signal. The many different
pulse compression codes all represent some compromise in achieving this ideal, although each code has its own advantages, limitations, and trade-offs. The autocorrelation function as applied to code
compression in the VIS/DPS is defined as:
R(k) = Σ[n] p(n) p(n+k) (1-6)
Therefore the ideal as described above is R(k) = δ(k). (Several examples of autocorrelation functions of the codes described in this Section can be seen in Figures 1-9 through 1-13.)
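Equation 1-6 can be checked numerically for the 16-chip pair listed earlier (code bit 0 mapped to −1). The sketch below shows that each code alone has nonzero autocorrelation sidelobes, while the sum of the pair's autocorrelations is a perfect impulse of height 2M = 32:

```python
import numpy as np

# Autocorrelation check (Eq. 1-6) for the DPS's 16-chip complementary pair.
bits_a = [1,1,0,1,1,1,1,0,1,0,0,0,1,0,1,1]
bits_b = [1,1,0,1,1,1,1,0,0,1,1,1,0,1,0,0]
pa = np.array([1 if b else -1 for b in bits_a])
pb = np.array([1 if b else -1 for b in bits_b])

# Aperiodic autocorrelation R(k) = sum_n p(n) p(n+k), all lags
Ra = np.correlate(pa, pa, mode="full")
Rb = np.correlate(pb, pb, mode="full")

Rsum = Ra + Rb        # sidelobes cancel; only the zero-lag peak survives
zero_lag = pa.size - 1
```

The single nonzero value of Rsum at zero lag is what makes the pair "mathematically perfect" for pulse compression.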
For ionospheric applications, the received spread-spectrum coded signal, r(t), may be a superposition of several multipath echoes (i.e., echoes which have traveled over various propagation paths
between the transmitter and receiver) reflected at various ranges from various irregular features in the ionosphere. The algorithm used to perform the code compression operates on this received
multipath signal, r(t), which is an attenuated and time delayed (possibly with multiple time delays) replica of the transmitted signal s(t) (from Equation 1-5), which can be represented as:
r(t) = Σ[i] a[i] s(t−t[i]) or (1-7)
r(t) = Σ[i] a[i] p(t−t[i]) exp[j2πf[0]t − jφ[i]]
where Σ indicates that the P multipath signals sum linearly at the receive antenna, a[i] is the amplitude of the ith multipath component of the signal, and t[i] is the propagation delay associated with
multipath i. The carrier phase φ[i] of each multipath could be expressed in terms of the carrier frequency and the time delay t[i]; however, since the multiple carriers (from the various multipath
components) cannot be resolved, while the delays in the complex code modulation envelope can be, a separate term, φ[i], is used. Next, when the carrier is stripped off of the signal, this RF phase
term will be absorbed into a complex-valued amplitude coefficient a[i] rather than being carried as a purely real amplitude with a separate phase factor.
Figure 1-6 Conversion to Baseband by Undersampling
By down-converting to a baseband signal (a digital technique is shown in Figure 1-6), the carrier signal can be stripped away, leaving only the superposed code envelopes delayed by P multiple
propagation paths. Figure 1-6 presents one way to strip the carrier off a phase modulated signal. This is the screen display on a digital storage oscilloscope looking at the RF output from the DPS
system operating at 3.5 MHz. Notice that the horizontal scan spans 2 msec, which if the oscilloscope was capable of presenting more than 14 000 resolvable points, would display 7 000 cycles of RF.
The sample clock in the digital storage scope is not synchronized to the DPS, however, the digital sampling remains coherent with the RF for periods of several milliseconds. The analog signal is
digitized at a rate such that each sample is made an integer number of cycles apart (i.e., at the same phase point) and therefore looks like a DC level until the phase modulation creates a sudden
shift in the sampled phase point. Therefore the 180º phase reversals made on the RF carrier show up as DC level shifts, replicating the original modulating code exactly. The more hardware intensive
method of quadrature demodulation with hardware components (mixers, power splitters and phase shifters) can be found in any communications systems textbook, such as [Peebles, 1979]. After removing
the carrier, the modified r(t), now represented by r[1](t) becomes:
r[1](t) = Σ[i] a[i] p(t−t[i]) (1-8)
where the carrier phase of each of the multipath components is now represented by a complex amplitude a[i] which carries along the RF phase term, originally defined by φ[i] in Equation 1-7, for each
multipath. Since the pulse compression is a linear process and contributes no phase shift, the real and imaginary (i.e., in-phase and quadrature) components of this signal can be pulse compressed
independently by cross-correlating them with the known spreading code p(t). The complex components can be processed separately because the pulse compression (Equation 1-9B) is linear and the code
function, p(n), is all real. Therefore the phase of the cross-correlation function will be the same as the phase of r[1](t).
The classical derivation of matched filter theory [e.g., Thomas, 1964] creates a matched filter by first reversing the time axis of the function p(t) to create a matched filter impulse response
h(t) = p(−t). Implementing the pulse compression as a linear system block (i.e., a "black box" with impulse response h(t)) will again reverse the time axis of the impulse response function by
convolving h(t) with the input signal. If neither reversal is performed (they effectively cancel each other) the process may be considered to be a cross-correlation of the received signal, r(t),
with the known code function, p(t). Either way, the received signal r[2](n) after matched filter processing becomes:
r[2](n) = r[1](n) * h(n) = r[1](n) * p(−n) (1-9A)
or by substituting Equation 1-8 and writing out the discrete convolution, we obtain the cross-correlation approach,
r[2](n) = Σ[i=1..P] a[i] Σ[k=1..M] p(k−t[i]) p(k−n) = Σ[i=1..P] M a[i] δ(n−t[i]) (1-9B)
where n is the time domain index (as in the sample number, n, which occurs at time t = nT where T is the sampling interval), P is the number of multipaths, k is the auxiliary index used to perform
the convolution, and M is the number of phase code chips. The last expression in Equation 1-9B, the δ(n−t[i]), is only true if the autocorrelation function of the selected code, p(t), is an ideal unit
impulse or "thumbtack" function (i.e., it has a value of M at correlation lag zero, while it has a value of zero for all other correlation lags). So, if the selected code has this property, then the
function r[2](n) in Equation 1-9B is the impulse response of the propagation path, which has a value a[i] (the complex amplitude of multipath signal i) at each time n = t[i] (the propagation delay
attributable to multipath i).
Figure 1-7 Illustration of Complementary Code Pulse Compression
Figure 1-7 illustrates the unique implementation of Equation 1-9B employed for compression of Complementary Sequence waveforms. A 4-bit code is used in this figure for ease of illustration, but
arbitrarily long sequences can be synthesized (the DPS's Complementary Code is 8 chips long). It is necessary to transmit two encoded pulses sequentially, since the Complementary Codes exist in
pairs, and only the pairs together have the desired autocorrelation properties. Equation 1-8 (the received signal without its sinusoidal carrier) is represented by the input signal shown in the upper
left of Figure 1-7. The time delay shifts (indexed by n in Equation 1-9B) are illustrated by shifting the input signal by one sample period at a time into the matched filter. The convolution shifts
(indexed by k in Equation 1-9B) sequence through a multiply-and-accumulate operation with the four ±1 tap coefficients. The accumulated value becomes the output function r[2](n) for the current value
of n. The two resulting expressions for Equation 1-9B (an r[2](n) expression for each of the two Complementary Codes) are shown on the right with the amplitude M = 4 clearly expressed. The non-ideal
approximation of a delta function, δ(n−t[i]), is apparent from the spurious +a and −a amplitudes. However, by summing the two r[2](n) expressions resulting from the two Complementary Codes, the spurious
terms are cancelled, leaving a perfect delta function of amplitude 2M.
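The compression of Equations 1-8 and 1-9B can be sketched with a short complementary pair. The 4-chip codes, the echo delay and the complex amplitude below are invented illustration values (the codes are a standard 4-chip Golay pair, assumed here to stand in for the pair of Figure 1-7):

```python
import numpy as np

# Pulse compression of one delayed echo with each code of a complementary
# pair, then summing the two outputs (Eqs. 1-8, 1-9B). All values assumed.
code_a = np.array([1, 1, 1, -1])        # 4-chip complementary pair
code_b = np.array([1, 1, -1, 1])
delay = 5                               # echo delay in chip periods (assumed)
amp = 0.5 * np.exp(1j * 0.7)            # complex echo amplitude a_i (assumed)

def echo(code, n=16):
    r = np.zeros(n, dtype=complex)
    r[delay:delay + code.size] = amp * code   # r1(t) = a_i * p(t - t_i)
    return r

def compress(r, code):
    # cross-correlate the received signal with the known code (Eq. 1-9B);
    # slicing keeps lags 0 .. n-1
    return np.correlate(r, code, mode="full")[code.size - 1:]

out = compress(echo(code_a), code_a) + compress(echo(code_b), code_b)
```

The summed output is zero everywhere except a single peak of 2M·a[i] = 8·a[i] at the echo delay, reproducing the "perfect delta function" behaviour described above.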
The amplitude coefficient M in Equation 1-9B is tremendously significant! It is what makes spread-spectrum techniques practical and useful. The M means that a signal received at a level of 1 µV would
result in a compressed pulse of amplitude M µV, a gain of 20 log[10](M) dB. Unfortunately, the benefits of all of that gain are not fully realized because the RMS amplitude of the random noise
(which is incoherently summed by Equation 1-9B) received with the signal goes up by a factor of √M. However, this still represents a power gain (since power = amplitude^2) equal to M, or
10 log[10](M) dB. The √M coefficient for the incoherent summation of multiple independent noise samples is developed more thoroughly in the following section on Coherent Spectral Integration, but the
factor-of-M increase for the coherent summation of the signal is clearly illustrated in Figure 1-7.
The next concern is whether the pulse compression process is still valid when multiple signals are superimposed on each other, as occurs when multipath echoes are received. It should be, since
Equation 1-9B and the free-space propagation phenomenon are linear processes, so the output of the process for multiple inputs should be the same as the sum of the outputs for each input signal
treated independently. This linearity property is illustrated in Figure 1-8. Two 4-chip input signals, one three times the amplitude of the other, are overlapped by two chips at the upper left of
the illustration. After pulse compression, as seen in the lower right, the two resolved components still display a 3:1 amplitude ratio and are separated by two chip periods.
Figure 1-8 Resolution of Overlapping Complementary Coded Pulses
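The linearity argument can be checked the same way. Assuming the same illustrative 4-chip pair and two invented echoes with a 3:1 amplitude ratio offset by two chips, the compressed output shows two clean peaks in the same ratio:

```python
import numpy as np

# Two overlapping echoes (3:1 amplitude, two chips apart) compressed with
# a 4-chip complementary pair, in the spirit of Figure 1-8. All numeric
# values are assumed for illustration.
code_a = np.array([1, 1, 1, -1])
code_b = np.array([1, 1, -1, 1])
paths = [(4, 3.0), (6, 1.0)]            # (delay in chips, amplitude)

def received(code, n=16):
    r = np.zeros(n)
    for delay, amp in paths:            # echoes superpose linearly
        r[delay:delay + code.size] += amp * code
    return r

def compress(r, code):
    return np.correlate(r, code, mode="full")[code.size - 1:]

out = compress(received(code_a), code_a) + compress(received(code_b), code_b)
```

Because compression is linear, the output is simply the sum of the two single-echo responses: a peak of 2M·3 = 24 at the first delay and 2M·1 = 8 at the second, with nothing in between.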
The phase of the received signal is detected by quadrature sampling; but how is the complex quantity a[i], or a[i] exp[−jφ[i]], related to the RF phase φ[i] of each individual multipath component? It
can be shown that this phase represents the phase of the original RF signal components exactly. As shown in Equations 1-10 and 1-11, the down-conversion (frequency translation) of r(t) by an
oscillator exp[−j2πf[0]t] results in:
r[1](t) = r(t) exp[−j2πf[0]t] = Σ[i=0..P] a[i] exp[−jφ[i]] p(t−t[i]) (1-10)
r[1](t) = Σ[i=0..P] a[i] p(t−t[i]), where a[i] = a[i] exp[−jφ[i]] is a complex amplitude (1-11)
This signal maintains the parameter φ[i], which is the original phase of each RF multipath component. Note that the oscillator is defined as having zero phase offset.
Alternative Pulse Compression Codes
Due to many possible mechanisms the pulse compression process will have imperfections, which may cause energy reflected from any given height to leak or spill into other heights to some degree. This
leakage is the result of channel induced Doppler, mathematical imperfection of the phase code (except in the Complementary Codes which are mathematically perfect) and/or imperfection in the phase and
amplitude response of the transmitter or receiver. Several codes were simulated and analyzed for leakage from one height to another and for tolerance to signal distortion caused by band-limiting
filters. All of the pulse compression algorithms used are cross-correlations of the received signal with a replica of the unit-amplitude code known to have been sent. Therefore, since Equation 1-9B
represents a cross-correlation of p(k) with itself (the unit-amplitude function p(t) is cross-correlated with its complex-amplitude-weighted version), it is the leakage properties of the
autocorrelation functions which are of interest.
The autocorrelation functions of several different codes were computed (either on a PC or a VAX computer) and are shown in the following figures:
a. Complementary Series (Figure 1-9)
b. Periodic M-codes (Figure 1-10)
c. Non-periodic M-codes (Figure 1-11)
d. Barker Codes (Figure 1-12)
e. Kasami Sequence Codes (Figure 1-13)
Figure 1-9 Autocorrelation Function of the Complementary Series
Figure 1-10 Autocorrelation Function of a Periodic Maximal Length Sequence
Figure 1-11 Autocorrelation Function of a Non-Periodic Maximal Length Sequence
Figure 1-12 Autocorrelation Function of the Barker Code
Figure 1-13 Autocorrelation Function of the Kasami Sequence
Since the Complementary Series pairs do not leak energy into any other height bin, this phase code scheme seemed optimum and was chosen for the DPS's vertical incidence measurement mode in order to
provide the maximum possible dynamic range in the measurement. If there were too much leakage (for instance at a 20 dB level), then stronger echoes would create a "leakage noise floor" in which weaker
echoes could not be detected. The autocorrelation function of the Maximal Length Sequence (M-code) is particularly good since for M = 127, the leakage level is over 40 dB lower than the correlation
peak and the correlation peak provides over 20 dB of SNR enhancement. However, since these must be implemented as a continuous transmission (100% duty cycle) they are not suitable for vertical
incidence monostatic sounding. Therefore the M-code is the code of choice for oblique incidence bi-static sounding, where the transmitter need not be shut off to provide a listening interval.
The M-codes, which provide the basic structure of the oblique waveform, all have a length of M = (2^N − 1). The attractive property of the M-codes is their autocorrelation function, shown in Figure
1-10. This type of function is often referred to as a "thumbtack". As long as the code is repeated at least a second time, the value of the cross-correlation function at lag values other than zero is
−1, while the value at zero is M. However, if the M-code is not repeated a second time, i.e., if it is a pulsed signal with zero amplitude before and after the pulse, the correlation function looks
more like Figure 1-11. The characteristics of Figure 1-11 also apply if the second repetition is modulated in phase, frequency, amplitude, code number or time shift (i.e., starting chip). So to achieve
the "clean" correlation function with M-codes (depicted in Figure 1-10), the identical waveform must be cyclically repeated (i.e., periodic).
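The contrast between the periodic and non-periodic M-code autocorrelations (Figures 1-10 and 1-11) can be reproduced with a short maximal length sequence. The 3-stage LFSR below (length M = 7, polynomial x³ + x + 1) is only an illustration, not the DPS's 127-chip code:

```python
import numpy as np

# Generate a length-7 m-sequence with a 3-stage Fibonacci LFSR
# (feedback taps chosen for x^3 + x + 1; an illustrative short code).
def mseq_length7():
    state = [1, 0, 0]
    bits = []
    for _ in range(7):                  # one full period, M = 2^3 - 1
        bits.append(state[-1])
        feedback = state[-1] ^ state[0]
        state = [feedback] + state[:-1]
    return np.array([1 - 2 * b for b in bits])   # map bits to +/-1

s = mseq_length7()
M = s.size

# Periodic (cyclically repeated) autocorrelation: M at lag 0, -1 elsewhere
periodic = np.array([np.dot(s, np.roll(s, k)) for k in range(M)])

# Non-periodic (single pulsed code) autocorrelation: larger sidelobes
aperiodic = np.correlate(s, s, mode="full")
```

The periodic case reproduces the clean thumbtack of Figure 1-10, while the single-pulse case shows the larger spurious sidelobes of Figure 1-11, which is why the M-code must run at 100% duty cycle.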
The problem with the M-codes is that if any of the multipath signal components starts or ends during the acquisition of one code record, then there are zero amplitude samples (for that
multipath component) in the matched filter as the code is being pulse compressed. If this happens, the imperfect cancellation of code amplitude at correlation lag values other than zero
(which is illustrated by Figure 1-11) will occur. In order to obtain the thumbtack pulse compression, the matched filter must always be filled with samples from either the last code repetition, the
current code repetition or the next code repetition (with no significant change), since these sample values are necessary to make the code compression work. "Priming" the channel with 5 msec of
signal before acquiring samples at the receiver ensures that all of the multipath components will have preceding samples to keep the matched filter loaded. Similarly, an extra code repetition after
the end of the last one makes the synchronization less critical.
This "priming" becomes costly, however, for when it is desired to switch frequencies, antennas, polarizations, etc., the propagation path(s) have to be primed again. The 75% duty cycle waveform (X = 3)
allows these multiplexed operations to occur, but as a result, only 8.5 msec out of each 20 msec of measurement time is spent actually sampling received signals. The 100% duty cycle waveform (X = 4)
does not allow multiplexed operation, except that it will perform an O polarization coherent integration time (CIT) immediately after an X polarization CIT has been completed. Since the simultaneity
of the O/X multiplexed measurement is not so critical (the amplitude of these two modes fade independently anyway), this is essentially still a simultaneous measurement. Because the 100% mode
performs an entire CIT without changing any parameters, it can continuously repeat the code sequence and therefore the channel need only be primed before sampling the very first sample of each CIT.
After this subsequent code repetitions are primed by the previous repetition.
Even though the Complementary Code pairs are theoretically perfect, the physical realization of this signal may not be perfect. The Complementary Code pairs achieve zero leakage by producing two
compressed pulses (one from each of the two codes) which have the same absolute amplitude spurious correlation peaks (or leakage) at each height, but all except the main correlation peak are inverted
in phase between the two codes. Therefore, simply by adding the two pulse compression outputs, the leakage components disappear. Since the technique relies on the phase distance of the propagation
path remaining constant between the sequential transmission of the two coded pulses, the phase change vs. time caused by any movement in the channel geometry (i.e., Doppler shift imposed on the
signal) can cause imperfect cancellation of the two complex amplitude height profile records. Therefore, the Complementary Code is particularly sensitive to Doppler shifts since channel induced phase
changes which occur between pulses will cause the two pulse compressions to cancel imperfectly, while with most other codes we are only concerned with channel induced phase changes within the
duration of one pulse. However, if given the parameters of the propagation environment, we can calculate the maximum probable Doppler shift, and determine if this yields acceptable results for
vertical incidence sounding.
With 200 pps, the time interval between one pulse and the next is 5 msec. If one pulse is phase modulated with the first of the Complementary Codes, while the next pulse has the second phase code,
the interval over which motions on the channel can cause phase changes is only 5 msec. The degradation in leakage cancellation is not significant (i.e., less than 15 dB) until the phase has changed
by about 10 degrees between the two pulses. The Doppler induced phase shift is:
Δφ = 2πT f[D] radians (1-12)
where f[D] is the Doppler shift in Hz and T is the time between pulses.
The Doppler shift can be calculated as:
f[D] = f[0]v[r]/c, or for a 2-way radar propagation path,
f[D] = 2f[0]v[r]/c (1-13)
where f[0] is the operating frequency and v[r] is the radial velocity of the reflecting surface toward or away from the sounder transceiver. The radial velocity is defined as the projection of the
velocity of motion (v) on the unit amplitude radial vector (r) between the radar location and the moving object or surface, which in the ionosphere is an isodensity surface. This is the scalar
product of the two vectors:
v[r] = v·r = |v|cos(θ) (1-14)
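The 10° limit worked out in the next paragraph can be checked numerically from Equations 1-12 and 1-13. Since Equation 1-13 needs an operating frequency, f[0] = 5.2 MHz is assumed below purely to reproduce the quoted figures:

```python
import math

# Doppler shift producing a 10-degree phase change between pulses 5 msec
# apart (Eq. 1-12), and the corresponding radial velocity (Eq. 1-13).
c = 3.0e8            # speed of light, m/s
T = 5.0e-3           # interpulse period at 200 pps, s
f0 = 5.2e6           # operating frequency, Hz (assumed for illustration)

dphi = math.radians(10.0)        # tolerable interpulse phase change, rad
fD = dphi / (2 * math.pi * T)    # Eq. 1-12 solved for f_D  -> about 5.6 Hz
vr = fD * c / (2 * f0)           # Eq. 1-13 solved for v_r  -> about 160 m/s
```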
A phase change of 10° in 5 msec would require a Doppler shift of about 5.5 Hz, or 160 m/sec radial velocity (roughly half the speed of sound), which seldom occurs in the ionosphere except in the
polar cap region. The 8-chip complementary phase code pulse compression and coherent summation of the two echo profiles provide a 16-fold increase in signal amplitude and a 4-fold increase in noise
amplitude, for a net signal processing gain of 12 dB. The 127-chip Maximal Length Sequence provides a 127-fold increase in amplitude and a net signal processing gain of 21 dB. The Doppler integration,
as described later, can provide another 21 dB of SNR enhancement, for a total signal processing gain of 42 dB, as shown by the following discussion.
Coherent Doppler (Spectral or Fourier) Integration
The pulse compression described above occurs with each pulse transmitted, so the 12 to 21 dB SNR improvement (for 8-bit complementary phase codes or 127-bit M-codes respectively) is achieved without
even sending another pulse. However, if the measurement can be repeated phase coherently, the multiple returns can be coherently integrated to achieve an even more detectable or "cleaner" signal.
This process is essentially the same as averaging, but since complex signals are used, signals of the same phase are required if the summation is going to increase the signal amplitude. If the phase
changes by more than 90° during the coherent integration then continued summation will start to decrease the integrated amplitude rather than increase it. However, if transmitted pulses are being
reflected from a stationary object at a fixed distance, and the frequency and phase of the transmitted pulses remain the same, then the phase and amplitude of the received echoes will stay the same.
The coherent summation of N echo signals causes the signal amplitude to increase by N, while the incoherent summation of the noise in the signal results in an increase in the noise amplitude of only
√N. Therefore with each N pulses integrated, the SNR increases by a factor of √N in amplitude, which is a factor of N in power. This improvement is called signal processing gain
and is best defined in decibels (to avoid the confusion of whether it is an amplitude ratio or a power ratio) as:
Processing Gain = 20 log[10] {(S[p]/Q[p])/ (S[i]/Q[i])} (1-15)
where S[i] is the input signal amplitude, Q[i] the input noise amplitude, S[p] the processed signal amplitude, and Q[p] the processed noise amplitude. Q is chosen for the random variable to represent
the noise amplitude, since N would be confusing in this discussion. This coherent summation is similar to the pulse compression processing described in the preceding section, where N, the number of
pulses integrated is replaced by M, the number of code chips integrated.
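The N-fold signal growth versus √N noise growth can be illustrated with a small Monte-Carlo sketch; the signal level, noise statistics, N and trial count below are arbitrary assumptions:

```python
import numpy as np

# Coherent integration gain (Eq. 1-15): a constant echo phasor summed over
# N pulses grows by N, while zero-mean complex Gaussian noise grows by
# only sqrt(N), for a power gain of about 10*log10(N) dB.
rng = np.random.default_rng(0)
N = 64                                   # pulses integrated (assumed)
signal = 1.0 + 0.0j                      # constant echo phasor per pulse
trials = 4000                            # noise realizations averaged

noise = (rng.standard_normal((trials, N)) +
         1j * rng.standard_normal((trials, N))) / np.sqrt(2)   # unit RMS

integrated_signal = abs(signal * N)                  # coherent: N-fold
integrated_noise = np.sqrt(np.mean(
    np.abs(noise.sum(axis=1)) ** 2))                 # incoherent: ~sqrt(N)

gain_db = 20 * np.log10((integrated_signal / integrated_noise) /
                        (abs(signal) / 1.0))         # Eq. 1-15
```

With N = 64 the expected gain is 10·log10(64), about 18 dB, and the simulated value comes out within a fraction of a dB of that.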
Another perspective on this process is achieved if the signal is normalized during integration, as is often done in an FFT algorithm to avoid numeric overflow. In this case S[p] is nearly equal to
S[i], but the noise amplitude has been averaged. Thus by invoking the central limit theorem [Freund, 1967, or any basic text on probability], we would expect that as long as the input noise is a zero
mean (i.e., no DC offset) Gaussian process, the averaged RMS noise amplitude, σ[np] (p for processed), will approach zero as the integration progresses, such that after N repetitions:
σ[np]^2 = σ[ni]^2 / N (the variance represents power) (1-16)
Since the SNR can be improved by a variable factor of N, one would think we could use arbitrarily weak transmitters for almost any remote sensing task and just continue integrating until the desired
signal to noise ratio (SNR) is achieved. In practical applications the integration time limit occurs when the signal undergoes (or may undergo, in a statistical sense) a phase change of 90°. However,
if the signal is changing phase linearly with time (i.e., has a frequency shift, Δω), the integration time may be extended by Doppler integration (also known as spectral integration, Fourier
integration, or frequency domain integration). Since the Fourier transform applies the whole range of possible phase shifts needed to keep the phase of a frequency shifted signal constant, a coherent
summation of successive samples is achieved even though the phase of the signal is changing. The unity amplitude phase shift factor, e^(−jωt), in the Fourier Integral (shown as Equation 1-17) varies
the phase of the signal r(t) as a function of time during integration. At the frequency ω which stabilizes the phase of the component of r(t) with frequency ω over the interval of integration
(i.e., makes r(t)e^(−jωt) coherent), the value of the integral increases with time rather than averaging to zero, thus creating an amplitude peak in the Doppler spectrum at the Doppler line which
corresponds to ω:
F[r(t)] = R(ω) = ∫ r(t) e^(−jωt) dt (1-17)
Does this imply that an arbitrarily small transmitter can be used for any remote sensing application, since we can just integrate long enough to clearly see the echo signal? To some extent this is
true. There is no violation of conservation of energy in this concept since the measurement simply takes longer at a lower power; however, in most real world applications, the medium or environment
will change or the reflecting surface will move such that a discontinuous phase change will occur. Therefore a system must be able to detect the received signal before a significant movement (e.g., a
quarter to a half of a wavelength) has taken place. This limits the practical length of integration that will be effective.
The discrete time (sampled data) processing looks very similar (as shown in Equation 1-18). For a signal with a constant frequency offset (i.e., phase is changing linearly with time) the integration
time can be extended very significantly, by applying unity amplitude complex coefficients before the coherent summation is performed. This stabilizes the phase of a signal which would otherwise drift
constantly in phase in one direction or the other (a positive or negative frequency shift), by adding or subtracting increasingly larger phase angles from the signal as time progresses. Then when the
phase shifted complex signal vectors are added, they will be in phase as long as that set of "stabilizing" coefficients progress negatively in phase at the same rate as the signal vector is
progressing positively. The Fourier transform coefficients serve this purpose since they are unity amplitude complex exponentials (or phasors), whose only function is to shift the phase of the
signal, r(n), being analyzed.
Since the Digisonde^TM sounders have always done this spectral integration digitally, the following presentation will cover only discrete time (sampled data rather than continuous signal notation)
Fourier analysis.
F[r[n]] = R[k] = Σ[n=0..N−1] r[n] exp[−j2πnk/N] (1-18)
where r[n] is the sampled data record of the received signal at one certain range bin, n is the pulse number upon which the sample r[n] was taken, T is the time period between pulses, N is the number
of pulses integrated (number of samples r[n] taken), and k is the Doppler bin number or frequency index. Since a Doppler spectrum is computed for each range sampled, we can think of the Fourier
transforms as F[56](ω) or F[192](ω), where the subscripts signify with which range bin the resulting Doppler spectra are associated.
By processing every range bin first by pulse compression (12 to 21 dB of signal processing gain) then by coherent integration, all echoes from each range have gained 21 to 42 dB of processing gain
(depending on the waveform used and the length of integration) before any attempt is made to detect them.
Further explanation of Equation 1-18, which can be gathered from any good reference on the Discrete Fourier Transform, such as [Oppenheim & Schafer, Prentice-Hall, 1975], follows. The total
integration time is NT, where T is the sampling period (in the DPS, the time period between transmitted pulses). The frequency spacing between Doppler lines, i.e., the Doppler resolution, is
2π/NT rad/sec (or 1/NT Hz) and the entire Doppler spectrum covers 2π/T rad/sec (with complex input samples this is ±π/T, but with real input samples the positive and negative halves of the spectra
are mirror image replicas of each other, so only π/T rad/sec are represented).
What is coherently integrated by the Fourier transformation in the DPS (as in any pulse-Doppler radar) is the time sequence of complex echo amplitudes received at the same range (or height) that is,
at the same time delay after each pulse is transmitted. Figure 1-14 shows memory buffers with range or time delay vertically and pulse number (typically 32 to 128 pulses are transmitted) horizontally
which hold the received samples as they are acquired by the digitizer. After each pulse is transmitted, one column is filled from the bottom up at regular sampling intervals, as the echoes from
progressively higher heights are received (33.3 µsec/5 km). These columns of samples are referred to as height profiles, which are not to be confused with electron density profiles, but rather mirror
the radar terminology of a "slant range profile" (range becomes height for vertical incidence sounding) which is simply the time record of echoes resulting from a transmitted pulse. A height profile
is simply a column of numeric samples which may or may not represent any reflected energy (i.e., they may contain only noise).
Figure 1-14 Eight Coherent Parallel Buffers for Simultaneous Integration of Spectra
Complex Windowing Function
With T (the sampling period between subsequent samples of the same coherent process, i.e., the same hardware parameters) defined by the measurement program, the first element of the Discrete Fourier
Transform (i.e., the amplitude of the DC component) will have a spectral width of 1/NT. This spectral resolution may be so wide that all Doppler shifts received from the ionosphere fall into this one
line. For instance, in the mid-latitudes it is very rare to see Doppler shifts of more than 3 Hz, yet with a ±50 Hz spectrum of 16 lines, the Doppler resolution is 6.25 Hz, so a 3 Hz Doppler shift
would still appear to show "no movement". For sounding, it would be much more interesting if instead of a DC Doppler line, a +3.125 Hz and a −3.125 Hz line were produced, such that even very fine
Doppler shifts would indicate whether the motion was up or down. The DC line is a seemingly unalterable characteristic of the FFT method of computing the Discrete Fourier Transform, yet with a true
DFT algorithm the Fourier transform coefficients can be chosen such that the centre of the Doppler lines analyzed can be placed wherever the designer desires them to be. Since the DSP could no
longer keep up with real-time operation if the DFT algorithm were used, another solution had to be found. What was needed was a ½ Doppler line shift which would be correct for any value of N or T.
Because the end samples in the sampled time domain function are random, a tapering window had to be used to hold the spurious response of the Doppler spectrum more than 40 dB down (to keep the SNR high
enough not to degrade the phase measurement beyond 1°). Therefore a Hanning function, H(n), which is a real function, was chosen and implemented early in the DPS development. The reader is referred
to [Oppenheim and Schafer, 1975] for the definition and applications of the Hanning function. The solution to achieving the ½ Doppler line shift was to make the Hanning function amplitudes complex,
with a phase rotation of 180° during the entire time domain sampling period NT. The new complex Hanning weighting function is applied simply by performing complex rather than real multiplications.
This implements a single-sideband frequency conversion of ½ Doppler line before the FFT is performed. In the following equation, each received multipath signal has only one spectral component (k = D
[i]) such that it can be represented as a[i] exp[−j2πnD[i]]:
r(n) = {Σ a[i] exp[−j2π(nD[i])]} |H(n)| exp[−j2π(n/2NT)] =
= |H(n)| Σ a[i] exp[−j2π(nD[i] + n/2NT)]    (1-19)
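The complex Hanning weighting can be sketched as follows (a minimal illustration, not the DSP code; D is taken in cycles per sample so the half-line shift over N samples is exp[−jπn/N]). A zero-Doppler echo, which an ordinary real window would place on the DC line, now splits its energy equally between the two lines straddling zero:

```python
import numpy as np

# Sketch (not the DPS source): complex Hanning weights that shift the
# Doppler analysis grid by half a line, so zero Doppler falls midway
# between two lines instead of on a DC line.
N = 32
n = np.arange(N)
hanning = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # real Hanning taper
half_shift = np.exp(-1j * np.pi * n / N)          # 180 deg rotation over N samples
w = hanning * half_shift                          # complex Hanning window

# A zero-Doppler (DC) echo now straddles bins 0 and N-1 symmetrically:
r = np.ones(N, dtype=complex)                     # constant echo amplitude
F = np.fft.fft(r * w)
print(abs(abs(F[0]) - abs(F[-1])))                # ~0: energy split evenly
```

Because the window's phase rotates by exactly 180° over NT, bins 0 and N−1 receive mirror-image (conjugate) contributions, so their magnitudes match exactly.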
Multiplexing
The next pulse need not be transmitted at the same frequency, or received on the same antenna with the same polarization. With the DPS it is possible to "go off" and measure
something else, then come back later and transmit the same frequency, antenna and polarization combination and fill the second column of the coherent integration buffer, as long as the data from each
coherent measurement is not intermingled (all samples integrated together must be from the same coherent statistical process). In this way, several coherent processes can be integrated at the same
time. Figure 1-14 shows eight coherent buffers, independently collecting the samples for two different polarizations and four antennas. This can be accomplished by transmitting one pulse for each
combination of antenna and polarization while maintaining the same frequency setting (to also integrate a second frequency would require eight more buffers), in which case, each subsequent column in
each array will be filled after each eight pulses are transmitted and received. This multiplexing continues until all of the buffers are filled with the desired number of pulse echo records. The DPS
can keep track of 64 separate buffers, and each buffer may contain up to 32 768 complex samples. The term "pulse" is used generically here. For Complementary Coded waveforms a "pulse" actually
requires two pulses to be sent, and for 127-chip M-codes the pulse becomes a 100% duty cycle, or CW, waveform. However, in both cases, after each pulse compression, one complex amplitude synthesized
pulse, r2(n) in Equation 1-9, exists which is equivalent to a 67 µsec rectangular pulse and can be placed into the coherent buffer.
The full buffers now contain a record of the complex amplitude received from each range sampled. Most of these ranges have no echo energy; only externally generated manmade and natural noise or
interference from radio transmitters. If a particular ionospheric layer is providing an echo, each height profile will have significant amplitude at the height corresponding to that layer. By Fourier
transforming each row of the coherent buffer a Doppler spectrum describing the radial velocity of that layer will be produced. Notice that the sampling frequency at that layer is less than or equal
to the pulse repetition frequency (on the order of 100 Hz).
After the sequence of N pulses is processed, the pulse compression and Doppler integration have resulted in a Doppler spectrum stored in memory on the DSP card for each range bin, each antenna, each
polarization, and each frequency measured (a maximum of 4 million simultaneously integrated samples). The program now scans through each spectrum and selects the single largest amplitude per height. This
amplitude is converted to a logarithmic magnitude (dB units) and placed into a new one-dimensional array representing a height profile containing only the maximum amplitude echoes. This technique of
selecting the maximum Doppler amplitude at each height is called the modified maximum method, or MMM. If the MMM height profile array is plotted for each frequency step made, this results in an
ionogram display, such as the one shown in Figure 1-15.
Figure 1-15 VI Ionogram Consisting of Amplitudes of Maximum Doppler Lines
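The MMM selection step can be sketched as follows (array sizes and amplitude values are illustrative only, not DPS internals):

```python
import numpy as np

# Sketch of the modified maximum method (MMM): for each height, keep only
# the largest-magnitude Doppler line and store it in dB.
rng = np.random.default_rng(1)
n_heights, n_doppler = 128, 16
spectra = (rng.standard_normal((n_heights, n_doppler))
           + 1j * rng.standard_normal((n_heights, n_doppler)))
spectra[50, 3] = 200.0                       # one strong echo at height bin 50

max_amp = np.max(np.abs(spectra), axis=1)    # largest Doppler line per height
mmm_profile_db = 20 * np.log10(max_amp)      # logarithmic magnitude (dB)
print(int(np.argmax(mmm_profile_db)))        # height bin holding the echo
```

Repeating this per frequency step and stacking the resulting height profiles side by side produces an ionogram display such as Figure 1-15.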
Angle of Arrival Measurement Techniques
Figure 1-16 Angle of Arrival Interferometry
The DPS system uses two distinct techniques for determining the angle of arrival of signals received on the four antenna receiver array, an aperture resolution technique using digital beamforming
(implemented as an on-site real-time capability) and a super-resolution technique which is accomplished when the measurement data is being analyzed, in post-processing. Both techniques utilize the
basic principle of interferometry, which is illustrated in Figure 1-16. This phenomenon is based on the free space path length difference between a distant source and each of some number of receiving
antennas. The phase difference (Δφ) between antennas is proportional to this free space path difference (Δl), based on the fraction of a wavelength represented by Δl.
Δl = d sin θ and
Δφ = (2πΔl)/λ = (2πd sin θ)/λ    (1-20)
where θ is the zenith angle, d is the separation between antennas in the direction of the incident signal (i.e., in the same plane as θ is measured), and λ is the free space wavelength of the RF
signal. This relationship is used to compute the phase shifts required to coherently combine the four antennas for signals arriving in a given beam direction, and this relationship (solved for θ) is
also the basis of determining angle of arrival directly from the independent phase measurements made on each antenna.
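Equation 1-20 is easy to check numerically. The sketch below assumes c = 3e8 m/s and uses 17.32 m and 34.64 m projected separations (consistent with the 17.3 m and 34.6 m spacings of Figure 1-17):

```python
import numpy as np

# Assumed values: c = 3e8 m/s; 17.32 m and 34.64 m are projected antenna
# separations consistent with the spacings quoted for Figure 1-17.
c = 3e8
lam = c / 4.33e6              # ~69.28 m free-space wavelength at 4.33 MHz
theta = np.radians(30.0)      # zenith angle of the oblique beam

dphis = [np.degrees(2 * np.pi * d * np.sin(theta) / lam)
         for d in (17.32, 34.64)]          # Equation 1-20
print([round(p, 1) for p in dphis])        # ~[45.0, 90.0] degrees
```

This reproduces the 45° and 90° phase shifts quoted in Example A further below.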
Figure 1-17 shows the physical layout of the four receiving antennas. The various separation distances of 17.3, 34.6, 30 and 60 m are repeated in six different azimuthal planes (i.e., there is six-way
symmetry in this array) and therefore the Δφ's computed for one direction also apply to five other directions. This six-way symmetry is exploited by defining the six azimuthal beam directions
along the six axes of symmetry of the array, making the beamforming computations very efficient. Section 3 of this manual contains detailed information for the installation of receive antenna arrays.
Figure 1-17 Antenna Layout for 4-Element Receiver Antenna Array
Digital Beamforming
At the end of the previous section it was shown that after completing a multiplexed coherent integration there is an entire Doppler spectrum stored for each height, each antenna, each frequency and
each polarization measured. All of these Doppler lines are available to the beamforming algorithm. In addition, the DSP software stores the complex amplitudes of the maximum Doppler line at each
height (i.e., the height profile in MMM format, an array of 128 or 256 heights) separately for each antenna. By setting a threshold (typically 6 dB above the noise floor), the heights
containing significant echo amplitude can quickly be determined. These are the heights for which beam amplitudes will be computed and a beam direction (the beam which creates the largest amplitude at
that height) declared. Due to spatial decorrelation (an interference pattern across the ground) of the signals received at the four antennas, it is possible that the peak amplitude in each of the
four Doppler spectra will not appear in the same Doppler line. Therefore, to ensure that the same Doppler line is used for each antenna (using different Doppler lines would negate the significance of
any phase difference seen between antennas), only Antenna #1's spectra are used to determine which Doppler line position will be used for beamforming at each height processed.
At each height where an echo is strong enough to be detected, the four complex amplitudes are passed to a C function (beam_form) where seven beams are formed by phase shifting the four complex
samples to compensate for the additional path length in the direction of each selected beam. If a signal has actually arrived from near the centre of one of the beams formed, then after the phase
shifting, all four signals can be summed coherently, since they now have nearly the same phase, so that the beam amplitude of the sum is roughly four times each individual amplitude. The farther the
true beam direction is away from a given beam centre the farther the phase of the four signals drift apart and the smaller the summed amplitude. However, in the DPS system the beams are so wide that
even at the higher frequencies the signal azimuth may deviate more than 30° from the beam centres and the four amplitudes will still sum constructively [Murali, 1993].
The technique for finding the angle of arrival is then simply to compare the amplitude of the signal on each beam and declare the direction as the beam centre of the strongest beam. Therefore the
accuracy of this technique is limited to 30° in azimuth and 15° in elevation angle (the six azimuth beams are separated by 60° and the oblique beams are normally set 30° away from the vertical beam);
as opposed to the Drift angle of arrival technique described in the next section which obtains accuracies approaching 1°. There may be some question about the amplitude of the sidelobes of these
beams, but it is really immaterial (computation of the array pattern for 10 MHz is shown in [Murali, 1993]). The fundamental principle of this technique is that there is no direction which can create
a larger amplitude in a given beam than the direction of the centre of that beam. Therefore, detecting the direction by selecting the beam with the largest amplitude can never be an incorrect thing
to do. One has to avoid thinking of the beam as excluding echoes from other directions and realize that all that is needed is that a beam favours echoes more as their angle of arrival becomes closer
to the centre of that beam. In fact with a four element array the summed amplitude in a wrong direction may be nearly as strong as it is in the correct beam, however, given that the same four complex
amplitudes are used as input it cannot be stronger.
The DPS forms seven beams, one overhead (0° zenith angle) and six oblique beams (the nominal 30° zenith angle can be changed by the operator) centred at North and South directions and each 60° in
between. Using the same four complex samples (at one reflection height at a time) seven overlapping beams are formed, one overhead (for which the phase shifting required on each antenna is 0°) and
six beams each separated by 60° in azimuth and tipped 30° from vertical. If one of the off-vertical beams is found to produce the largest amplitude, the displayed echo on the ionogram is color coded
as an oblique reception.
The phase shifts required to sum echoes into each of the seven beams depend on four variables:
a. the signal wavelength,
b. the antenna geometry (separation distance and orientation),
c. the azimuth angle of arrival, and
d. the zenith angle of arrival.
The antenna weighting coefficients are unity amplitude with a phase which is the negative of the extra phase delay caused by the propagation delay, thereby removing the extra phase delay. The phase
delay for antenna i resulting from arrival angle spherical coordinates (θ[j], φ[j]), which correspond to the direction of beam j, is described (using Equation 1-20) by the following:
ΔΦ[ij] = (2π sin θ[j]/λ) d'[ij]    (1-21)
where ΔΦ[ij] is the phase difference between antenna i's signal and antenna 1's signal, θ[j] is the zenith angle (0 for overhead), and d'[ij] is the projection of the antenna separation distance
(from antenna i to antenna 1) upon the wave propagation direction. The parameter d' is dependent on the antenna positions, which can be placed on a Cartesian coordinate system with the central
antenna, antenna 1, at the origin and the X axis toward the North and the Y axis toward the West. With this definition the azimuth angle φ is 0° for signals arriving from the North and:
d'[ij] = (x[i] cos φ[j] + y[i] sin φ[j])    (1-22)
Since antenna 1 is defined as the origin, x[1] and y[1] are always zero, so ΔΦ[1j] is always zero. This makes antenna 1 the phase reference point which defines the phase of signals on the other antennas.
The correction coefficients b[ij] are unit amplitude phase conjugates of the propagation induced phase delays:
b[ij] = 1.0 ∠ −ΔΦ(f, x[i], y[i], θ[j], φ[j]) = 1 ∠ −ΔΦ[ij]    (1-23)
Because they are frequency dependent, these correction factors must be computed at the beginning of each CIT when the beamforming mode of operation has been selected. A full description as well as
some modeling and testing results were reported by [Murali, 1993].
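The beamforming sum can be sketched as below. This is not the actual beam_form C function; the antenna coordinates and amplitudes are illustrative only:

```python
import numpy as np

# Sketch of the beamforming step: phase-conjugate weights per Equations
# 1-21 to 1-23 applied to four complex samples from one height.
c = 3e8
lam = c / 5e6                     # 60 m wavelength at 5 MHz

# Antenna 1 at the origin (phase reference); X toward North, Y toward West.
xy = np.array([[0.0, 0.0], [30.0, 17.32], [-30.0, 17.32], [0.0, -34.64]])

def beam_weights(zen_deg, az_deg):
    """Unit-amplitude conjugate weights for one beam (Eqs 1-21 to 1-23)."""
    zen, az = np.radians(zen_deg), np.radians(az_deg)
    d_proj = xy[:, 0] * np.cos(az) + xy[:, 1] * np.sin(az)   # Eq 1-22
    dphi = 2 * np.pi * np.sin(zen) * d_proj / lam            # Eq 1-21
    return np.exp(-1j * dphi)                                # Eq 1-23

# An echo arriving from zenith 30 deg, azimuth 0 deg (North) places the
# conjugate of that beam's weights onto the four antennas:
samples = 800.0 * np.conj(beam_weights(30.0, 0.0))

matched = abs(np.sum(samples * beam_weights(30.0, 0.0)))     # north beam
vertical = abs(np.sum(samples * beam_weights(0.0, 0.0)))     # overhead beam
print(round(matched, 1), matched > vertical)
```

In the matched beam the four phase-corrected samples add in phase, so the sum is four times the per-antenna amplitude (3200 here), while the mismatched vertical beam sums to less.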
│ Example A.: │
│ │
│ Given the antenna geometry shown in Figure 1-17, at an operating frequency of 4.33 MHz (λ = 69.28 m), a beam in the eastward direction and 30° off vertical would, according to Equation 1-20, │
│ require a phase shift of 90° on antenna 4, 45° on antennas 2 and 3, and 0° on antenna 1. If an echo is received from that direction it would be received on the four antennas as four complex │
│ amplitudes at the height corresponding to the height (or more precisely, the range, since there may be a horizontal component to this distance) of the reflecting source feature. Therefore, a │
│ single number per antenna can be analyzed by treating one echo height at a time, and by selecting only one (the maximum) complex Doppler line at that height and that antenna. Assume that the │
│ following four complex amplitudes have been received on a DPS system at, for instance, a height of 250 km. This is represented (in polar notation) as: │
│ │
│ Antenna 1: 830 ∠ 135° │
│ │
│ Antenna 2: 838 ∠ 42° │
│ │
│ Antenna 3: 832 ∠ 182° │
│ │
│ Antenna 4: 827 ∠ 179° │
│ │
│ To these sampled values apply the phase corrections mentioned above, producing: │
│ │
│ Antenna 1: 830 ∠ 135° or −586 + j586 │
│ │
│ Antenna 2: 838 ∠ 132° or −561 + j623 │
│ │
│ Antenna 3: 832 ∠ 137° or −608 + j567 │
│ │
│ Antenna 4: 827 ∠ 134° or −574 + j594 │
│ │
│ East Beam (sum of above) = −2329 + j2370 (3323 ∠ 134.5° in polar form) │
│ │
│ Since the sum is roughly four times the signal amplitude on each antenna there has been a coherent signal enhancement for this received echo because it arrived from the direction of the beam. │
│ It is interesting to note here, that these same four amplitudes could have been phase shifted corresponding to another beam direction in which case they would not add up in-phase. The DPS │
│ does this seven times at each height, using the same four samples, then detects which beam results in the greatest amplitude at that height. Of course at a different height another source may │
│ appear in a different beam, so the beamforming must be computed independently at each height. │
Although the received signal is resolved in range/height before beamforming, the beamforming technique is not dependent on isolating a signal source before performing the angle of arrival
calculations. If two sources exist in a single Doppler line (the amplitude of the Doppler line can be thought of as a linear superposition of the two signal components), then some of each of them
will contribute to an enhanced amplitude in their corresponding beam direction. Conversely, the Drift technique assumes that the incident radio wave is a plane wave (thus requiring isolation of any
multiple sources).
Drift Mode Super-Resolution Direction Finding
By analyzing the spatial variation of phase across the receiver aperture, using Equation 1 20, the two-dimensional angle of arrival (zenith angle and azimuth angle) of a plane wave can be determined
precisely using only three antennas. The term super-resolution applies to the ability to resolve distinct closely spaced points when the physical dimensions (in this case, the 60 m length of one side
of the triangular array) of the aperture used is insufficient to resolve them (from a geometric optics standpoint). Therefore, the use of interferometry provides super-resolution. This is required
for the Drift measurements because the beam resolution achievable with a 60 m aperture at 5 MHz is about 60°, while 5° or better is required to measure plasma velocities accurately. Using
beamforming to achieve a 5° angular resolution at 5 MHz would require an aperture dimension of 600 m, which would have to be filled with on the order of 100 receiving antenna elements. Therefore the
Drift technique described here is a tremendous savings in system complexity. The Drift mode concept appears at first glance to be similar to the beamforming technique, but it is a fundamentally
different process.
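The interferometric solve of Equation 1-20 for the zenith angle can be sketched as follows (values are illustrative, and the 2π phase ambiguity is ignored by keeping d sin θ below one wavelength):

```python
import numpy as np

# Sketch of solving Equation 1-20 for the zenith angle from a measured
# phase difference (illustrative values, not DPS data).
c = 3e8
lam = c / 5e6                  # 60 m wavelength at 5 MHz
d = 60.0                       # baseline in the plane of incidence, metres

theta_true = np.radians(12.0)
dphi = 2 * np.pi * d * np.sin(theta_true) / lam       # forward: Eq 1-20

theta_est = np.degrees(np.arcsin(dphi * lam / (2 * np.pi * d)))
print(round(theta_est, 3))
```

The round trip recovers the 12° zenith angle exactly; in practice the angular precision is set by the phase measurement accuracy on each antenna.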
The Drift mode depends on a single echo source being isolated such that its phase is not contaminated by another echo (from a different direction but possibly arriving with the same time delay). This
technique works amazingly well because at a given time, the overhead ionosphere tends to drift uniformly in the same direction with the same velocity. This means that each off-vertical echo will have
a Doppler shift proportional to the radial velocity of the reflecting plasma and to cos α, where α is the angle between the position vector (radial vector from the observation site to the plasma
structure) and velocity vector of the plasma structure, as presented in Equation 1-14. Therefore, for a uniform Drift velocity the sky can be segmented into narrow bands (e.g., tens of bands) based
on the value of cos α, which correspond to particular ranges of Doppler shifts [Reinisch et al, 1992]. These bands are shown in Figure 1-18 as the hyperbolic dashed lines [Scali, 1993] which indicate
at what angle of arrival the Doppler line number should change if the whole sky is drifting at the one velocity just calculated by the DDA program. In other words, the agreement of the Doppler
transitions with the boundaries specified by the uniform drift assumption is a test of the validity of the assumption for the particular data being analyzed.
Both isolating the sources of different radial velocities and resolving echoes having different ranges (into 10 km height bins), results in very effective isolation of multiple sources into separate
range/Doppler bins. If multiple sources exist at the same height they are usually resolved in the Doppler spectrum computed for that height, because of the sorting effect which the uniform motion has
on the radial velocities. If the resolution is sufficient that a range/Doppler bin holds signal energy from only one source, the phase information in this Doppler line can be treated as a sample of
the phase front of a plane wave. Even though many coherent echoes have been received from different points in the sky, the energy from these other points is not represented in the complex amplitude
of the Doppler line being processed. This is important because the angle of arrival calculation is accomplished with standard interferometry (i.e., solving Equation 1-20 for θ), which assumes no
multiple wave interference (i.e., a perfect plane wave).
Figure 1-18 Radial Velocity Bands as Defined by Doppler Resolution
A fundamental distinction between the Drift mode and beamforming mode is that in the Drift mode the angle of arrival calculation is applied for each Doppler line in each spectrum at each height
sampled, not just at the maximum amplitude Doppler line. A data dependent threshold is applied to try to avoid solving for locations represented by Doppler lines that contain only noise, but even
with the threshold applied the resulting angle of arrival map may be filled with echo locations which result from echoes much weaker than the peak Doppler line amplitudes. In beamforming, only the
echoes representing the dominant source at each height are stored on tape, therefore no other source echoes are recoverable from the recorded data.
It has been found that vertical velocities are roughly 1/10th the magnitude of horizontal velocities [Reinisch et al, 1991]. Since the horizontal velocities from echoes directly overhead result in
zero radial velocity to the station, the Drift technique works best in a very rough, or non-uniform ionosphere, such as that found in the polar cap regions or the equatorial regions, because they
provide many off-vertical echoes.
For a smooth spherically concentric (with the surface of the earth) ionosphere all the echoes will arrive from directly overhead and the resulting Drift skymaps will show a single source location at
zenith angle = 0°. For horizontal gradients or tilts within that spherically concentric uniform ionosphere, however, the single source point would move in the direction of the ΔN/N (N as in Equation
1-1) gradient (the local electron density gradient), one degree per degree of tilt, so the Drift measurement can provide a straightforward measurement of ionospheric tilt.
Resolution of source components by first isolating multiple echoes in range then in Doppler spread (velocity distribution) combined with interferometer principles is a powerful technique in
determining the angle of arrival of superimposed multipath signals.
High Range Resolution (HRR) Stepped Frequency Mode
The phase of an echo from a target, or the phase of a signal after passing through a propagation medium is dependent on three things:
1. the absolute phase of the transmitted signal;
2. the transmitted frequency (or free space wavelength); and
3. the phase distance, d, where:
d = ∫ µ(f,x,y,z) dl    (1-24)
is the line integral over the propagation path, scaled by the refractive index if the medium is not free space. If the first two factors, the transmitted phase and frequency, can be controlled very
precisely, then measuring the received phase at two different frequencies makes it possible to solve for the propagation distance with an accuracy proportional to the accuracy of the phase
measurement, which in turn is proportional to the received SNR. This is often referred to as the dφ/df technique. The two measurements form a set of linear equations with two equations and two
unknowns, the absolute transmitted phase and the phase distance. If there are several "propagation path distances", as is the case in a multipath environment, then measurement at several wavelengths
can provide a measure of each separate distance. However, instead of using a large set of linear equations, the phase of the echoes is analyzed as a function of frequency, which can be done very
efficiently with a Fast Fourier Transform. The basic relations describing the phase of an echo signal are:
φ(f) = −2πf t[p] = −2πd/λ = −2π(f/c)d    (1-25)
where d is the propagation path length in metres (the phase path described in Equation 1-24), f is in Hz, φ in radians, λ in metres, and t[p] is the propagation delay in seconds. Note that the first
expression casts the propagation delay in terms of time delay (# of cycles of RF), the second in terms of distance (# of wavelengths of RF), and the third relates frequency and distance using c.
For monostatic radar measurements the distance d is twice the range R, so Equation 1-25 becomes:
φ(f) = −4πR/λ = −4π(f/c)R    (1-26)
If a series of N RF pulses is transmitted, each changed in frequency by Δf, one can measure the phases of the echoes received from a reflecting surface at range R. It is clear from Equation 1-26
that the received phase will change linearly with frequency at a rate directly determined by the magnitude of R. Using Equation 1-26 one can express the received phase from each pulse (indexed by i)
in this stepped frequency pulse train:
φ[i](f[i]) = −2πf[i]t[p] = −4πf[i](R/c)    (1-27)
where the transmitted frequency f[i] can be represented as:
f[i] = f[0] + iΔf    (1-28)
a start frequency plus some number of incremental steps.
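The linear phase-versus-frequency relation of Equations 1-27 and 1-28 can be sketched as follows (all parameter values are illustrative; a real measurement yields the phases modulo 2π, so they are shown unwrapped here):

```python
import numpy as np

# Sketch of range recovery from a stepped-frequency pulse train
# (Equations 1-27 and 1-28); parameter values are illustrative.
c = 3e8
R = 210e3                      # true reflection range, metres
f0, df, N = 4.0e6, 1.0e3, 16   # start frequency, step size, pulse count
fi = f0 + np.arange(N) * df    # Equation 1-28

# Equation 1-27 phases (unwrapped; a real measurement gives them mod 2*pi)
phase = -4 * np.pi * fi * R / c

slope = np.polyfit(fi - fi.mean(), phase, 1)[0]   # d(phi)/df, fit about mean
R_est = -slope * c / (4 * np.pi)
print(round(R_est / 1e3, 3))                      # km
```

The fitted slope is −4πR/c, so the range is recovered directly from the rate of phase change with frequency.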
Two Frequency Precision Ranging
This measurement forms the basis of the DPS's Precision Group Height mode. By making use of simultaneous (multiplexed) operation at multiple frequencies (i.e., multiplexing or interlacing the
frequency of operation during a coherent integration time (CIT)), it is possible to measure the phases of echoes from a particular height at two different frequencies. If these frequencies are close
enough that they are reflected at the same height, then the phase difference between the two frequencies determines the height of the echo.
The following development of the two frequency ranging approach leads to a general theory (not expounded here) covering FM/CW ranging and stepped frequency radar ranging. Using Equation 1-26, a two
frequency measurement of φ allows the direct computation of R, by:
φ[2] − φ[1] = 4πR(f[1] − f[2])/c = 4πRΔf/c    (1-29)
R = c(φ[2] − φ[1])/4πΔf    (1-30)
It is easy to see from Equation 1-29 that if the range is such that RΔf/c is greater than 1/2 then the magnitude of φ[2] − φ[1] will exceed 2π, which is usually not discernible in a phase
measurement, and therefore causes an ambiguity. This ambiguity interval (D[A] for distance) is
D[A] = (1/2)c/Δf = c/2Δf    (1-31)
│ Example B.: │
│ │
│ The measured phase is (φ[2] − φ[1]) = π/8 while Δf = 1 kHz, then R = 9.375 km. │
│ │
│ In the example above with Δf = 1 kHz, the ambiguous range D[A] is 150 km. Since a 0 km reflection height must certainly give the same phase for any two frequencies (i.e., 0°), then given │
│ that the ambiguity interval is 150 km, then for this value of Δf, the phase difference must again be zero at 150, 300, 450 km etc, since 0 km is one of the equal phase points, and all other │
│ ranges giving a phase difference of 0° are spaced from it by 150 km. If the phase measurements φ[2] and φ[1] were taken after successive pulses at a time delay corresponding to a range of 160 │
│ km (at least one sample of the received echo must be made during each pulse width, i.e., at a rate equal to or greater than the system bandwidth, see Equation 1-4), one would conclude that │
│ there is an extra 2π in the phase difference and that the true range is 159.375 km, not 9.375 km. Therefore, the measurement must be designed such that the raw range resolution of the │
│ transmitted pulse is sufficient to resolve the ambiguity in the dφ/df measurement. │
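Example B can be checked numerically with Equations 1-30 and 1-31 (assuming c = 3e8 m/s):

```python
import math

# Numerical check of Example B via Equations 1-30 and 1-31 (c = 3e8 m/s).
c = 3e8
d_phi = math.pi / 8            # measured phase difference phi2 - phi1, rad
d_f = 1e3                      # frequency separation, Hz

R = c * d_phi / (4 * math.pi * d_f)   # Equation 1-30: 9375 m
D_A = c / (2 * d_f)                   # Equation 1-31: 150 km ambiguity
print(R / 1e3, D_A / 1e3)             # km
```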
The validity of the two-frequency precision ranging technique is lost if there is more than one source of reflection within the resolution of the radar pulse. The phase of the received pulse will be
the complex vector sum of the multiple overlapping echoes, and therefore any phase changes (φ[i]) will be partially influenced by each of the multiple sources and will not correctly represent the
range to any of them. Therefore, in the general propagation environment where there may be multiple echo sources (objects producing a reflection of RF energy back to the transmitter), or for
multipath propagation to and from one or more sources, many frequency steps are needed to resolve the different components influencing φ[i]. This "many step" approach can be performed in discrete
frequency steps, as in the DPS's HRR mode, or by a continuous linear sweep, as done in a chirpsounder described in [Haines, 1994].
Signal Flow Through the DPS Transmitter and Receiver
Signal flow through the DPS Transmitter Exciter
The transmitted code is generated on the transmitter exciter card (XMT) by selecting and clocking out the phase code bits stored in a ROM on the XMT card (Section 5 (Hardware Description) describes
the functions of the various system components in detail). These bits are offset and balanced such that their positive and negative swings are equal. Then they are applied to a double balanced mixer
along with the 70.08 MHz signal from the oscillator (OSC) card. This multiplication process results in either a 0° or 180° phase shift, since multiplication of a sine wave by −1 is the same as
performing a phase inversion: −sin(t) = sin(t ± π). This modulated 70.08 MHz signal is then filtered by a linear phase surface acoustic wave (SAW) filter, split into phase quadrature (to
enable selection of circular transmitter polarization), and mixed with the variable local oscillator from the Frequency Synthesizer (SYN) card. The mixing process (a passive diode double balanced
mixer is used) effectively multiplies the two input signals (along with some non-linear distortion products) which produces a sum and difference frequency at the output:
y(t) = sin(a)sin(b) = 0.5[cos(a − b) − cos(a + b)]    (1-32)
The variable local oscillator signal ranges from 71 MHz to 115 MHz, which mixed with 70.08 MHz creates a 1 to 45 MHz difference frequency (a 140 to 185 MHz sum frequency is also produced but is
low-pass filtered out of the final signal), which is amplified and sent to the RF power amplifier chassis. The RF amplifier boosts the signal level to be applied to the antenna(s) for transmission.
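The mixer identity of Equation 1-32 is easy to verify numerically. In this sketch the 70.08 MHz IF is from the text, while the 100 MHz LO value is just an example within the synthesizer's range:

```python
import numpy as np

# Numerical check of the product-to-sum identity of Equation 1-32.
t = np.linspace(0.0, 1e-6, 1000)
a = 2 * np.pi * 70.08e6 * t          # IF phase
b = 2 * np.pi * 100.0e6 * t          # local oscillator phase (example value)

lhs = np.sin(a) * np.sin(b)                      # mixer output
rhs = 0.5 * (np.cos(a - b) - np.cos(a + b))      # difference and sum terms
print(float(np.max(np.abs(lhs - rhs))))          # ~0
```

The cos(a − b) term is the wanted difference frequency (29.92 MHz here); the cos(a + b) sum term is what the low-pass filter removes.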
Signal Flow Through the DPS Receiver Antennas
The receive loop antennas (Figure 1-1B) are sensitive to the horizontal magnetic field component of the received signal, and can be phased to favour either the right hand circular or left hand
circular polarization. The two loop antennas are oriented at a 90° angle to each other and detect the same peak of the incident circularly polarized wave, separated by exactly a quarter of an RF
cycle. Therefore, if the phase of the signal on one antenna is shifted by 90° the sum of the two signals has either double the amplitude or zero amplitude depending on the sense of the circular
polarization. This is a linear process and therefore treats each of the multipath components independently. For instance if there is one O polarized echo at 250 km and an X polarized echo at 200 km,
the fact that the X polarized energy is rejected has no effect on the reception of the O polarized energy. The received signal which is applied to the receivers is the sum of the signals from the two
crossed antennas after shifting one by ±90° with a broadband quadrature phase shifter. The 90° phase shift can be expressed in an equation using the phasor exp[±jπ/2], so using the form of Equation
1-6:
r(t) = Σ{aᵢ·p(t−τᵢ)·exp[j2πf₀t − jφᵢ] + aᵢ·p(t−τᵢ)·exp[j2πf₀t − jφᵢ − jπ/2]·exp[±jπ/2]}
     = 2Σ aᵢ·p(t−τᵢ)·exp[j2πf₀t − jφᵢ]  if the last term is exp[+jπ/2], OR
     = 0  if the last term is exp[−jπ/2]   (1-33)
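The doubling-or-cancelling behaviour of Equation 1-33 can be illustrated with complex phasors. In this sketch (an illustration with arbitrary frequency, not DPS code), the wave appears on the second loop a quarter cycle behind the first; shifting that loop by +90° before summing doubles the signal, while −90° nulls it:

```python
import numpy as np

t = np.linspace(0, 1e-4, 1000)
f0 = 1e5
carrier = np.exp(1j * 2 * np.pi * f0 * t)     # signal on loop 1
lagged = carrier * np.exp(-1j * np.pi / 2)    # loop 2: a quarter cycle behind

# Shift loop 2 by +90 deg before summing: the phasors align and add.
same_sense = carrier + lagged * np.exp(+1j * np.pi / 2)
# Shift by -90 deg instead: the phasors oppose and cancel.
opposite_sense = carrier + lagged * np.exp(-1j * np.pi / 2)

print(np.allclose(same_sense, 2 * carrier))   # True: doubled amplitude
print(np.allclose(opposite_sense, 0))         # True: rejected polarization
```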
200 μs before each waveform is transmitted, the DPS can shift the signal from one of the receive loops by ±90° under control of the DPS software, thus switching sensitivity from left circular
polarization to right circular polarization. In the DPS, the signals from the four crossed loop receive antennas are fed into the antenna switch box, which either selects one signal to feed to the
single receiver card or combines all four in phase. In the DPS-4 (four-channel receiver variant), one receiver is dedicated to each receive antenna (one receive antenna is the sum of the two crossed
elements, but since the two elements are combined in the field and fed to the system on a single coax there is only one signal from each crossed loop assembly). Therefore, in a DPS-4, four signals
from the antennas are simply passed through the antenna switch box to the four receivers in which case the only functions of the antenna switch box are to switch in a calibration signal from the
transmitter exciter card and to apply the DC power to the receiver antenna preamplifiers via the coaxial cables.
Received Signal Flow through the DPS Receiver
The received wideband RF signal from the antenna switch is fed to the receiver (RCV) card where it is first stepped up in voltage 2:1 in a transformer to increase the impedance from 50 to 200 Ω for a better match to the high input impedance (about 1 kΩ) preamplifier. Based on the level of one of the receiver gain control bits, which in turn responds to a manual setting in the DPS hardware setup
file (the Hi_Noise parameter) the gain through this amplifier is either 6 dB or 15 dB. Since the maximum achievable output swing from this amplifier is about 8 Vp-p the maximum allowable input
voltage is therefore 4 or 1.5 V (at the antenna preamplifier output) respectively for the two different gain settings. Considering the 2:1 step-up, this means that if the wideband input from the
receive antennas is over 0.7 Vp-p the lower gain setting must be used. The 8 Vp-p maximum output of the preamplifier is reduced to 5 Vp-p by a 33 Ω resistor which matches the highest allowed input to
the passive diode mixer (the 23 dBm LO level double balanced mixer allows a maximum of 20 dBm input). The remainder of the receiver applies successively more gain and filtering (the bandwidth narrows
down to 20 kHz after seven stages of tuning), and outputs the received signal at a fixed 225 kHz intermediate frequency (IF).
Signal Flow through the Digitizer
The reason for selecting exactly 225 kHz as the last IF frequency is that there is an integral number of cycles in the time period that corresponds to a 10 km height interval (66.667 μs). This means that, if spaced by 66.667 μs, samples of the IF signal (which has a period of 4.444 μs) will represent baseband samples of the received envelope amplitude, since:
15 cycles of 225 kHz = 66.6667 μs = 10 km radar range.
For instance, if a constant amplitude coherent sine wave carrier were received directly on the current receiver frequency, samples of the IF would have a constant amplitude. The only problem is that
without being synchronized to the peaks of this sine wave it is possible that all of the samples of the IF will occur at zero crossings of the received signal. This apparent problem is avoided by the
use of quadrature sampling.
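The arithmetic above can be demonstrated with a short NumPy sketch (the 1.234 rad phase and the 32-bin record length are arbitrary illustrative choices, not DPS parameters): sampling a 225 kHz sine in pairs 1.1111 μs apart, repeated every 66.667 μs, yields a constant complex envelope no matter where the zero crossings fall.

```python
import numpy as np

f_if = 225e3                        # 225 kHz intermediate frequency
dt_pair = 1 / (4 * f_if)            # 1.1111 us: 90 deg spacing within a pair
dt_range = 15 / f_if                # 66.667 us: 15 IF cycles = 10 km range bin
phase = 1.234                       # arbitrary: no sync to the carrier peaks

def sig(t):
    return np.sin(2 * np.pi * f_if * t + phase)

starts = np.arange(32) * dt_range   # 32 consecutive range bins
i_samples = sig(starts)             # in-phase samples
q_samples = sig(starts + dt_pair)   # quadrature samples, a quarter cycle later

# The complex pairs form a constant-amplitude baseband envelope.
envelope = np.abs(i_samples + 1j * q_samples)
print(np.allclose(envelope, 1.0))   # True
```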
The more standard quadrature sampling approach [Peebles, 1979] is to use a 90° phase shifter to produce a quadrature Local Oscillator and down-convert the IF to a complex (two channel) baseband.
However, in the DPS since very fast analog-to-digital (A/D) converters were available inexpensively, the signal was simply sampled as pairs at 90° (1.1111 μs) intervals. This pair of samples is then repeated at the desired sampling interval, 16.6667 μs for 2.5 km delay intervals, 33.3333 μs for 5 km or 66.6667 μs for 10 km intervals [Bibl, K., 1988]. The samples at 2.5 km or 5 km intervals are not equal in phase, since 3.75 and 7.5 cycles respectively have passed between the complex sample pairs. However, at the 10 km interval, exactly 15 cycles have passed. Adjacent 2.5 or 5
km samples within a received pulse should have the same phase since they are sampling the continuation of the coherent transmitted pulse. In order to correct the 90° and 180° phase errors made by the
3.75 or 7.5 cycle sampling interval, an efficient numeric correction brings these samples back into phase. The 90° and 180° phase correction is simply a matter of inverting the sign for 180° or
swapping the real and imaginary samples and inverting the real sample for the 90° shift. No complex multiplications are required but this does add another level of "bookkeeping" to the signal
processing algorithms.
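The 90° and 180° corrections described above amount to multiplying a complex sample by −j (or −1) without any actual complex multiply. A sketch of the swap-and-negate bookkeeping, assuming samples are held as separate real and imaginary values (the function names are mine, not from the DPS code):

```python
import numpy as np

def rotate_minus_90(re, im):
    """Multiply (re + j*im) by -j using only a swap and one sign flip."""
    return im, -re

def rotate_180(re, im):
    """Multiply (re + j*im) by -1 using only sign flips."""
    return -re, -im

z = 3.0 + 4.0j
re, im = rotate_minus_90(z.real, z.imag)
print(np.isclose(complex(re, im), z * -1j))  # True: same as a complex multiply
re, im = rotate_180(z.real, z.imag)
print(complex(re, im) == -z)                 # True
```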
Signal Flow through the DSP Card
From here, the next step is to cross-correlate the received samples with the known phase code, as was described in the above section on Coherent Phase Modulation and Pulse Compression. The known
phase code is either ± 1 for each code chip, therefore the cross multiplication required in the correlation process is in reality only addition or subtraction. However, with a modern signal
processor, the pipelined multiplication process is faster than addition due to the on-chip hardware multiplier and automatic sequencing of address pointers, so as implemented, the multiplications by ±1 are carried out explicitly. Another interesting detail in this algorithm is that the real samples and the imaginary samples are pulse compressed independently of each other. The two resulting range profiles are then
combined into complex samples which represent the phase and amplitude of the original RF signal at the height/range corresponding to the correlation time lag of the cross-correlation function. As is evident from Equation 1-9, this is a linear process and therefore superimposed signals at different time delays can be detected without distorting each other, as was shown by Figure 1-8.
Another interesting feature of the DPS's pulse compression algorithm is a technique to avoid the M² processing load penalty inherent in the pulse compression operation when the phase code chips are double sampled (5 km sample period, making the pulse duration 16 samples) or quadruple sampled (2.5 km intervals, making the pulse duration 32 samples). Since the phase transitions are always 66.667 μs apart, we can "decimate" the input record by taking every 2nd or 4th sample and then cross-correlating it with an 8-sample matched filter rather than a 16 or 32 sample matched filter. The full 4
times over-sampled resolution can be restored by successively taking each fourth sample but starting one sample higher each time. Then after performing the four cross-correlation functions,
interleave the four pulse compressed records back into a new 4 times over-sampled output record. A quantitative analysis of the savings in processing steps is presented next.
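Before the operation counts, the decimate–compress–interleave bookkeeping itself can be sanity-checked with a small NumPy sketch (toy sizes and a random 8-chip code, not the DPS implementation): correlating the decimated sub-records with the chip-rate code and interleaving reproduces the fine-grid correlation computed directly, with a fraction of the multiplies.

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=8)       # short +/-1 phase code (toy size)
x = np.zeros(64)                             # 2x over-sampled received record
delay = 11                                   # echo delay in fine samples
x[delay:delay + 16] += np.repeat(code, 2)    # each chip spans two samples

# Reference: fine-grid correlation against the chip-rate code, done directly.
lags = np.arange(len(x) - 16)
ref = np.array([sum(x[k + 2 * i] * code[i] for i in range(8)) for k in lags])

# DPS-style: decimate into even/odd sub-records, compress each with the short
# 8-pt code, then interleave the two compressed records back together.
out = np.empty_like(ref)
for start in (0, 1):
    sub = x[start::2]
    comp = np.correlate(sub, code, mode="full")[len(code) - 1:]
    out[start::2] = comp[: len(out[start::2])]

print(np.array_equal(out, ref))              # True: identical output
print(int(np.argmax(out)) == delay)          # True: peak at the echo delay
```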
When the phase code chips are double sampled (5 km sample period) or quadruple sampled (2.5 km intervals) the M² increased processing load required for a cross-correlation is avoided by independently performing the pulse compression of the odd-numbered and even-numbered samples (for 5 km spacing, or each fourth sample for 2.5 km sample spacing, since the signal's range resolution is only 10 km) and reconstructing the finer resolution profile after compression. In addition, the savings obtained by processing the real record and imaginary record simultaneously is analyzed. The number of operations required to cross-correlate a 256 sample complex data record (e.g., a 256 sample height profile), using 5 km sampling intervals, and the 127-length maximal-length sequence code are as follows:
1) Cross correlating the 2 times over-sampled record:
256-pt complex record convolved with 254-pt MF 260,096 multiplications
260,096 additions
Knowing that the real and imaginary samples are independent and that the phase code itself is all real, the complex multiplications (i.e., the cross-terms) can be done away with, resulting in:
2) Two 256-pt real records convolved with 254-pt MF 130,048 multiplications
130,048 additions
By pulse compressing only every other sample in a double over-sampled record then going back and compressing the every other sample skipped the first time:
3) Four 128-pt real records convolved with 127-pt MF 65,024 multiplications
65,024 additions
With the much shorter Complementary Codes, the pulse compression computational load is greatly reduced, since only an 8-pt MF is used. Using the same real pulse compression algorithm and skipping
every other sample, the Complementary Code processing load is:
4) Eight 128-pt real records (the 8 sub-records are: real and
imaginary samples, odd and even height numbers, then code 1
and code 2) convolved with a 16-pt filter
16,384 multiplications
16,384 additions
Implemented in the TMS320C25 16-bit fixed point processor, these pulse compression algorithms run at about 10 000 multiplications and additions (they are done in parallel) per millisecond, so these
pulse compressions with 20 msec between repetitions of the 127-length codes and 10 msec between Complementary Code pairs are easily performed in real time (e.g., one waveform is entirely processed
before the next waveform repetition is finished).
A faster way to perform the matched filter convolution is described by Oppenheim and Schafer [Oppenheim & Schafer, 1976], which uses Fourier transforms. This is based on the Fourier transform identity:
S(ω) = F(ω)·H(ω), which is an identical expression to:
F[s(t)] = F[f(t) ∗ h(t)]   (1-34)
This identity says that multiplication in the frequency domain accomplishes a convolution in the time domain, if the transformed function (the product of the two functions, S(ω) in Equation 1-34) is transformed back to the time domain. This would reduce the compression of the 127-chip waveform (sampled twice per code chip) from 65 000 operations to about 4500 operations (N·log₂(N) for N = 512 points). This algorithm change has not been implemented. To incorporate this algorithm the samples must be doubled again, since the code repeats at an interval other than a power of two, to accommodate the cyclic nature of the convolutional code compression algorithm. Furthermore, the sampling rate must always be 60 000 samples/sec (the 2.5 km resolution mode) to preclude aliasing.
Regardless of how it is performed the Complementary Code pulse compression provides 12 dB of SNR improvement and the M-codes (only useful in a bi-static measurement) provide 21 dB of SNR improvement.
In addition to that, the coherent Doppler integration described above provides another 9 to 21 dB of SNR improvement.
The pulse compression and Doppler integration have resulted in a Doppler spectrum stored in memory on the DSP card for each range bin. The program now scans through each spectrum and selects the
largest amplitude. This amplitude is converted to a logarithmic magnitude (dB units) and placed into a one-dimensional array representing a time-delay profile of any echoes. This one dimensional
array is called a height profile, or range profile, and if plotted for each frequency step made, results in an ionogram display, such as the one shown in Figure 1-17. The 11 520 amplitudes shown as
individual pixels on the height vs. frequency display are the amplitude of the maximum Doppler line from the spectrum at each height and frequency. Therefore, the ionogram shown, covering 9 MHz in
100 kHz steps is the result of 737 280 separate samples, and 23 040 separate Doppler spectra (11 520 O polarization and 11 520 X polarization).
References
Barker R.H., "Group Synchronizing of Binary Digital Systems", Communication Theory, London, pp. 273-287, 1953
Bibl, K. and Reinisch B.W., "Digisonde 128P, An Advanced Ionospheric Digital Sounder", University of Lowell Research Foundation, 1975.
Bibl, K and Reinisch B.W., "The Universal Digital Ionosonde", Radio Science, Vol. 13, No. 3, pp 519-530, 1978.
Bibl K., Reinisch B.W., Kitrosser D.F., "General Description of the Compact Digital Ionospheric Sounder, Digisonde 256", University of Lowell Center for Atmos Rsch, 1981.
Bibl K., Personal Communication, 1988.
Buchau, J. and Reinisch B.W., "Electron Density Structures in the Polar F Region", Advanced Space Research, 11, No. 10, pp 29-37, 1991.
Buchau, J., Weber E.J. , Anderson D.N., Carlson H.C. Jr, Moore J.G., Reinisch B.W. and Livingston R.C., "Ionospheric Structures in the Polar Cap: Their Origin and Relation to 250 MHz Scintillation",
Radio Science, 20, No. 3, pp 325-338, May-June 1985.
Bullett T., Doctoral Thesis, University of Massachusetts, Lowell, 1993.
Chen, F., "Plasma Physics and Nuclear Engineering", Prentice-Hall, 1987.
Coll D.C., "Convoluted Codes", Proc of IRE, Vol. 49, No 7, 1961.
Davies, K., "Ionospheric Radio", IEE Electromagnetic Wave Series 31, 1989.
Golay M.S., "Complementary Codes", IRE Trans. on Information Theory, April 1961.
Huffman D. A., "The Generation of Impulse-Equivalent Pulse Trains", IRE Trans. on Information Theory, IT-8, Sep 1962.
Haines, D.M., "A Portable Ionosonde Using Coherent Spread Spectrum Waveforms for Remote Sensing of the Ionosphere", UMLCAR, 1994.
Hayt, W. H., "Engineering Electromagnetics", McGraw-Hill, 1974.
Murali, M.R., "Digital Beamforming for an Ionospheric HF Sounder", University of Massachusetts, Lowell, Masters Thesis, August 1993.
Oppenheim, A. V., and R. W. Schafer, "Digital Signal Processing", Prentice Hall, 1976.
Peebles, P. Z., "Communication System Principles", Addison-Wesley, 1979.
Reinisch, B.W., "New Techniques in Ground-Based Ionospheric Sounding and Studies", Radio Science, 21, No. 3, May-June 1987.
Reinisch, B.W., Buchau, J. and Weber, E.J., "Digital Ionosonde Observations of the Polar Cap F Region Convection", Physica Scripta, 36, pp. 372-377, 1987.
Reinisch, B. W., et al., "The Digisonde 256 Ionospheric Sounder", World Ionosphere/Thermosphere Study (WITS) Handbook, Vol. 2, Ed. by C. H. Liu, December 1989.
Reinisch, B.W., Haines, D.M. and Kuklinski, W.S., "The New Portable Digisonde for Vertical and Oblique Sounding," AGARD-CP-502, February 1992.
Rush, C.M., "An Ionospheric Observation Network for use in Short-term Propagation Predictions", Telecomm, J., 43, p 544, 1978.
Sarwate D.V. and Pursley M.B., "Crosscorrelation Properties of Pseudorandom and Related Sequences", Proc. of the IEEE, Vol 68, No 5, May 1980.
Scali, J.L., "Online Digisonde Drift Analysis", User's Manual, University of Massachusetts Lowell Center for Atmospheric Research, 1993.
Schmidt G., Ruster R. and Czechowsky, P., "Complementary Code and Digital Filtering for Detection of Weak VHF Radar Signals from the Mesosphere", IEEE Trans on Geoscience Electronics, May 1979.
Wright, J.W. and Pitteway M.L.V., "Data Processing for the Dynasonde", J. Geophys. Rsch, 87, p 1589, 1986.
Bayesian Coin Flips
In this blog post, we will look at the coin flip problem in a bayesian point of view. Most of this information is already widely available through the web, but I want to write it up anyways, so I can
go into more involved bayesian concepts in future posts.
Let's say we flip a coin and get $h$ heads and $t$ tails; the probability of this outcome follows a binomial distribution:
$$ P(D|\theta) = {_{h+t}}C_{h}\theta^{h}(1-\theta)^{t} $$
where $D$ is the event of getting $h$ heads and $t$ tails, $\theta$ is the probability of heads, and $1-\theta$ is the probability of tails. Let's say we want to flip the conditional probability using
Bayes' theorem:
$$ P(\theta|D) = \dfrac{P(D|\theta)P(\theta)}{P(D)} $$
Why do we want to write the conditional probability this way?
The conditional probability, $P(\theta|D)$, treats the probability of heads, $\theta$, as a random variable. It is the probability of $\theta$, given that we observed the event $D$. To make speaking
of these probabilities easier, they are given names:
• $P(\theta)$: the prior
• $P(\theta|D)$: the posterior
• $P(D|\theta)$: the likelihood
For example, let's say we flipped some coins and observed 3 heads and 5 tails ($D$ is the event of 3 heads and 5 tails); the posterior allows us to obtain the probabilities $P(\theta=0.1|D)$ or $P(\theta=0.7|D)$, etc. The posterior gives us probabilities for all possible values of $\theta$ (the probability of heads).
Next, lets look at the prior, $P(\theta)$, this is the probability of $\theta$ before any coin flips. In other words, this is the measure of the belief before we perform the experiment. For the coin
flipping example, we normally come across coins that have $\theta=0.5$, so our prior should center around 0.5. For now, let's pick a beta distribution with $\alpha=2$ and $\beta=2$ as our prior:
$$ P(\theta) = \dfrac{1}{B(\alpha,\beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1} $$
where $B(\alpha, \beta)$ is the Beta Function. This prior is centered at 0.5 and is lower for all other values. Let's graph the beta prior distribution:
Show Code
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
# configure style
mpl.rc('text', usetex=True)
mpl.rc('font', size=26)
sns.set_context("talk", rc={"figure.figsize": (12, 8)}, font_scale=1.5)
current_palette = sns.color_palette()
def plot_prior(alpha, beta, ax=None):
    x = np.linspace(0, 1, 1000)
    y = scipy.stats.beta.pdf(x, alpha, beta)
    if not ax:
        fig, ax = plt.subplots()
    ax.plot(x, y)
    ax.set_xlabel(r"$\theta$", fontsize=20)
    ax.set_ylabel(r"$P(\theta)$", fontsize=20)
    ax.set_title("Prior: BetaPDF({},{})".format(alpha, beta))
plot_prior(alpha=2, beta=2)
The maximum of our prior is centered at 0.5 and is lower for other values. This means that we normally see coins which are fair, but do not rule out that there is a chance that the coin could be biased.
The last thing we need to get the posterior is the denominator of bayes theorem, $P(D)$, which is the probability of the event happening. In general, this is calculated by integrating over all the
possible values of $\theta$:
$$ P(D) = \int_0^1 P(D|\theta)P(\theta)d\theta $$
Normally this integral would not be possible to do analytically, but since our prior is a beta distribution and our likelihood is a binomial distribution, this integral would be worked out to be:
$$ P(D) = {_{h+t}}C_{h}\dfrac{B(h+\alpha, t+\beta)}{B(\alpha, \beta)} $$
For other priors, the integral cannot be computed analytically, and other techniques are used to get the posterior, which I will get into in a future blog post.
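Because the beta prior is conjugate here, the closed form for $P(D)$ can be checked against direct numerical integration. A quick sketch using `scipy`, with illustrative counts (3 heads, 5 tails):

```python
import numpy as np
from scipy import integrate, stats
from scipy.special import beta as B, comb

h, t = 3, 5                 # observed heads and tails
alpha_, beta_ = 2, 2        # Beta(2, 2) prior

# P(D) = integral over theta of likelihood * prior
def integrand(th):
    return stats.binom.pmf(h, h + t, th) * stats.beta.pdf(th, alpha_, beta_)

numeric, _ = integrate.quad(integrand, 0, 1)

# Closed form: C(h+t, h) * B(h+alpha, t+beta) / B(alpha, beta)
closed = comb(h + t, h) * B(h + alpha_, t + beta_) / B(alpha_, beta_)

print(np.isclose(numeric, closed))   # True
```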
Putting $P(D)$, the prior, and likelihood together into Bayes' theorem to get the posterior: $$ P(\theta|D) = \dfrac{1}{B(h+\alpha, t+\beta)}\theta^{h+\alpha-1}(1-\theta)^{t+\beta-1} $$
Recall, for our example, $\alpha=2$ and $\beta=2$, thus the posterior becomes:
$$ P(\theta|D) = \dfrac{1}{B(h+2, t+2)}\theta^{h+1}(1-\theta)^{t+1} $$
Let's create a function in Python to plot the posterior:
def plot_posterior(heads, tails, alpha, beta, ax=None):
    x = np.linspace(0, 1, 1000)
    y = scipy.stats.beta.pdf(x, heads + alpha, tails + beta)
    if not ax:
        fig, ax = plt.subplots()
    ax.plot(x, y)
    ax.set_xlabel(r"$\theta$", fontsize=20)
    ax.set_ylabel(r"$P(\theta|D)$", fontsize=20)
    ax.set_title("Posterior after {} heads, {} tails, "
                 "Prior: BetaPDF({},{})".format(heads, tails, alpha, beta))
Let's say we flipped the coin 17 times and observed 5 heads and 12 tails; our posterior becomes:
plot_posterior(heads=5, tails=12, alpha=2, beta=2)
With 5 heads and 12 tails, our belief of the possible values of $\theta$ shifts to the left, suggesting that $\theta$ is more likely to be lower than $0.5$. Now let's say we flipped 75 times and
observed 50 heads and 25 tails:
plot_posterior(heads=50, tails=25, alpha=2, beta=2)
With that many heads, the posterior shifts to the right, implying that $\theta$ is higher. Notice that the distribution for 75 flips is narrower than for the 17 flips. With 75 flips, we have a clearer picture of what the value of $\theta$ should be.
What would happen when we choose other priors?
We'll explore how to handle non-beta priors in a future blog post. Right now, let's look at what happens when we choose different beta priors. Let's say we come from a world where coins are not 50-50, but are biased toward a bigger $\theta$:
plot_prior(alpha=20, beta=4)
Let's see what happens to the posterior when we flip a coin and get:
• 4 heads 5 tails
• 20 heads 20 tails
• 50 heads 49 tails
• 75 heads 74 tails
• 400 heads 399 tails
fig, axes = plt.subplots(5)
flips = [(4, 5), (20, 20), (50, 49), (75, 74), (400, 399)]
for i, flip in enumerate(flips):
    plot_posterior(heads=flip[0], tails=flip[1], alpha=20, beta=4, ax=axes[i])
for ax in axes:
    ax.label_outer()  # hide inner tick labels on the stacked subplots
When we only flip 9 coins in the first case, our belief does not change much and is still skewed. But as we flip more coins and get a 50-50 distribution of heads and tails, our belief changes to a distribution around $\theta=0.5$, i.e. a fair coin. Since our prior was so skewed, even with 400 heads and 399 tails, the peak of the posterior distribution is still not 0.5.
In the next post, we will look at what happens when we have a non-beta prior, and use priors that are more strange. Specifically, we will look at situations where $P(D)$ cannot be solved analytically, and we must switch to other methods to obtain the posterior distribution.
Balancing Equations Worksheet Answers About Chemistry - Equations Worksheets
Balancing Equations Worksheet Answers About Chemistry
Balancing Equations Worksheet Answers About Chemistry – Expressions and Equations Worksheets are created to assist children in learning faster and more effectively. These worksheets are interactive
and questions based on the sequence of operations. These worksheets make it simple for children to grasp complex concepts as well as simple concepts in a short time. These PDF resources are
completely free to download and may be used by your child in order to practise maths equations. These resources are helpful for students who are in the 5th through 8th grades.
Get Free Balancing Equations Worksheet Answers About Chemistry
These worksheets can be used by students between 5th and 8th grades. These two-step word puzzles are constructed using fractions and decimals. Each worksheet contains ten problems. You can access
them through any website or print source. These worksheets can be a wonderful way to practice rearranging equations. Alongside practicing rearranging equations, they also aid your student in
understanding the basic properties of equality and the inverse of operations.
The worksheets are intended for students in the fifth and eighth grades. They are great for students who have difficulty learning with calculating percentages. There are three types of problems you
can choose from. There is the option to either work on single-step problems which contain whole numbers or decimal numbers, or to employ words-based techniques for fractions and decimals. Each page
is comprised of 10 equations. These worksheets on Equations are suitable for students from 5th to 8th grade.
These worksheets can be a wonderful source for practicing fractions and other aspects of algebra. You can pick from several kinds of challenges with these worksheets. You can select a word-based or a
numerical one. It is crucial to select the problem type, because every problem is different. Each page will have ten challenges that make them a fantastic aid for students who are in 5th-8th grade.
These worksheets will help students comprehend the relationships between numbers and variables. These worksheets help students test their skills at solving polynomial equations, and discover how to
use equations in their daily lives. These worksheets are a great method to understand equations and formulas. They will help you learn about the different types of mathematical equations and the
various kinds of mathematical symbols used to represent them.
These worksheets are extremely beneficial to students in the beginning grade. These worksheets will help them learn how to graph and solve equations. These worksheets are ideal to practice polynomial
variables. These worksheets will help you factor and simplify the process. There is a fantastic set of equations, expressions and worksheets for kids at any grade. The most effective way to learn
about equations is to complete the work yourself.
There are numerous worksheets available that teach quadratic equations. Each level comes with its own worksheet. These worksheets are a great way to solve problems to the fourth level. After you’ve
completed an amount of work it is possible to work on solving different kinds of equations. Then, you can take on the same problems. For example, you might discover a problem that uses the same axis
as an elongated number.
Gallery of Balancing Equations Worksheet Answers About Chemistry
49 Balancing Chemical Equations Worksheets with Answers
Practice Coding with the exercise "Probability for Dummies"
Burt studies math but is too dumb to derive fancy formulas, so he solves his homework problems using random numbers.
Given an American roulette wheel with 38 possible outcomes (0-36 or 00), what are the odds of landing on at least m different numbers in n spins?
The probability rounded to the nearest percent.
1 ≤ m ≤ 38
m ≤ n ≤ 50
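One way to solve it "like Burt" is a Monte Carlo sketch (an illustration in the spirit of the problem statement, not the reference solution): simulate many runs of n spins on a 38-slot wheel and count how often at least m distinct numbers appear.

```python
import random

def estimate(m: int, n: int, trials: int = 100_000) -> int:
    """Estimate P(at least m distinct outcomes in n spins), rounded to a percent."""
    hits = 0
    for _ in range(trials):
        seen = {random.randrange(38) for _ in range(n)}  # distinct results
        if len(seen) >= m:
            hits += 1
    return round(100 * hits / trials)

print(estimate(1, 1))   # 100 -- one spin always produces one distinct number
```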
But I Am Not A Math Player... | Red Chip Poker
I see this often, thankfully from my opponents. I like asking students questions like “How often will you flop a set with 77?” Drawing upon their dismal recent history of flopping sets they answer,
“About one in twenty times.” This is a pretty important statistic to misremember, and it will massively color how you play that hand.
There are two kinds of math at the table, facts that you recall and those numbers that you calculate. Statistics like flopping a set (1 in 8 times) or flopping a pair with unpaired cards (30% of the
time) must simply be memorized. The other kind of math at the table is calculations, I will argue those are easier because one procedure will cover a large number of situations.
The most famous and useful calculations in Hold’em revolve around the Rule of Two and Rule of Four. I have recently derived some add-on’s to these core rules that allow me to estimate equities within
a few percentage easily at the tables.
Let’s go through the mental dialog where this math saved me from making a bad call in a tempting spot.
We are five to the flop, there is $100 in the pot, and we are on the button with a lovely suited Ace. The flop comes down a Jack and a Ten of our flush suit, plus an offsuit Deuce. We have the nut flush draw in a multi-way pot.
We just happen to remember that Flushes come in 20% of the time on each card or about 40% by the river. We usually just remember this because over the years we have frequently thought “Nine flush
outs multiplied by two is about 20% on the next card.”
We like our spot here.
There is a check and someone bets full pot, then someone else min-raises to $200. This second player is very straight-forward so he has a big hand. We only have $350 back. Remembering statistics will
no longer help us in this unique situation; we need to do some math along with our poker feel.
Flush draws get a lot of their value from fold equity. Do we have any of that? We only have the ability to put $150 on top. The second player just min-raised to $200, we doubt $150 more is going to
ever fold him out. We just lost one of the most potent weapons of the flush draw, now we are going to have to show Villain a flush to get this pot.
We normally can just say “We get a flush 40% of the time by the River.” But we have some special information here. Using our poker feel, we suspect we are up against top pair from the first player
and either top two pair or a set from the second player. We are plenty happy for the top pair player to be in the pot, but the Two Pair or set is a real problem. We can make our flush and easily lose
to the redraw.
How often do we win against these specified hands? The memorized statistics are not going to help here, we need another procedure. Rule of Four is not enough to account for these redraws and they are
In my new book, Poker Work Book for Math Geeks, I introduce some very useful additions to the Rule of Four to account for the redraw. Without explanation of the modification, let’s try it.
All of this money is eventually getting in, so really we are getting two cards for the cost of the rest of our stack. Nine outs times four is 36%. Villain very likely has Two Pair or a set. The
redraw requires us to give back 20% of our total equity. This means we take a 7% loss of equity. This puts us at 29% equity. Flopzilla says it is really 29.8%. This is a fine estimate for what we
need to do.
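That arithmetic can be wrapped in a tiny helper — a sketch of the Rule of Four with the 20% redraw give-back described above (the function name is mine, and the 20% discount is the book's heuristic, not an exact equity calculation):

```python
def rule_of_four_with_redraw(outs: int, redraw_discount: float = 0.20) -> float:
    """Two-card equity estimate: outs * 4 percent, minus the redraw give-back."""
    raw = outs * 4                     # Rule of Four: ~4% equity per out
    return raw * (1 - redraw_discount)

# Nine flush outs against a likely set/two-pair redraw:
print(rule_of_four_with_redraw(9))     # 28.8 -- close to Flopzilla's 29.8%
```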
We expect to get back 30% of the final pot. Under the best case scenario all three of us put money in and we basically break even. Realistically, many times the original raiser will fold and then we
are a huge dog.
As much as we love nut flush draws, without fold equity and with Villain’s likely redraw, we are at best breaking even and often going in as a big dog. There is no way to make money, we are going to
fold here. | {"url":"https://redchippoker.com/but-i-am-not-a-math-player/","timestamp":"2024-11-05T08:53:31Z","content_type":"text/html","content_length":"187152","record_id":"<urn:uuid:c1a0409f-d869-4f29-866d-4063f6123987>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00008.warc.gz"} |
Free Must-Have Statistics Books For Data Scientists
If you're going to be a data scientist, you're going to need to be really good at statistics. And we know a lot of our readers love getting FREE Statistics books to get started or brush up on their
stats knowledge.
Well, we've got you covered. We've been putting together a list of the best FREE Statistics books for Data Scientist in this post.
In this post we bring you all the FREE Statistics books written for Data Science that we've found, categorised by sub-topic so you can find what you're looking for easily.
We've recently updated this blog post and we'll be adding more FREE books on Statistics for anybody wanting to improve their statistical and data analysis skills and learn new concepts.
Bookmark this page, enjoy and don't forget to share!
Here are some of the best FREE ebooks on Statistics any Data Scientist needs to read.
To get the book you're interested in, click on the images of the books and you'll be taken to a page where you can read or download a copy of the book.
Since we know some of these books are must-haves and some of our Data Ninjas love having a paper copy of the books for their library, we've included links for those of you interested in having a hard copy.
Disclosure: The FREE ebooks were free to download at the time of posting but other links in this post may contain affiliate links. As Amazon Associates we may earn from qualifying purchases.
You can find further details in our T&Cs
FREE General Statistics Books
Probabilistic Programming and Bayesian Methods for Hackers
The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three
chapters on probability theory before it introduces what Bayesian inference is. Unfortunately, due to the mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples.
This can leave the user with a ‘so-what’ feeling about Bayesian inference. In fact, this was the author’s own prior opinion.
"After some recent success of Bayesian methods in machine-learning competitions, I decided to investigate the subject again. Even with my mathematical background, it took me three straight days of
reading examples and trying to put the pieces together to understand the methods. There was simply not enough literature bridging theory to practice. The problem with my misunderstanding was the
disconnect between Bayesian mathematics and probabilistic programming."
"That being said, I suffered then so the reader would not have to now. This book attempts to bridge the gap."
Computational And Inferential Thinking
Ani Adhikari and John DeNero
Computational and Inferential Thinking is an introductory text for data science that explores foundational concepts in data processing and statistics using modern programming tools. Ideas are
illustrated by real-world data sets and examples.
While rigorous in presentation, this text does not expect prior experience in computing, calculus, or linear algebra.
Introduction to Probability
Joseph K. Blitzstein and Jessica Hwang
This book will give you a great introduction to probability and a strong foundation for understanding statistics, randomness and uncertainty. It does so by offering many intuitive explanations,
diagrams and practice problems.
At the end of each section the authors explain how to explore the ideas in the chapter using R.
Computer Age Statistical Inference
Bradley Efron and Trevor Hastie
This book takes us on a journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories – Bayesian,
frequentist, Fisherian – individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural
networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The book integrates methodology and algorithms with statistical inference, and ends with speculation on the
future direction of statistics and data science.
Introduction to Probability
Charles M. Grinstead and J. Laurie Snell
This text is designed for an introductory probability course at the university level for undergraduates in mathematics, the physical and social sciences, engineering, and computer science.
It presents a thorough treatment of probability ideas and techniques necessary for a firm understanding of the subject. The text is also recommended for use in discrete probability courses. The
material is organized so that the discrete and continuous probability discussions are presented in a separate, but parallel, manner.
This organization does not overemphasize an overly rigorous or formal view of probability and therefore offers some strong pedagogical value. Hence, the discrete discussions can sometimes serve to
motivate the more abstract continuous probability discussions.
A First Course On Design And Analysis Of Experiments
This text, for students needing to prepare and analyze experimental data, gives a balanced presentation of the design and analysis of experiments, teaching students when to use various designs, how
to analyze the results, and how to recognize design options. The book is also fully oriented towards the use of statistical software in analyzing experiments, and the companion web site offers data
sets for most of the exercises in the text.
Putting It All Together - Essays On Data Analysis
What is a data analysis? What makes for a successful data analysis? These are difficult questions that even long-time practitioners have difficulty answering. The way that we have thought about data
analysis to date has been focused on the data and the statistical tools that we employ to produce results. But data analysis is about more than those things, and developing an understanding of the
things "outside" the data is critical to characterizing the actual process of data analysis, the process that data analysts go through every day.
This book attempts to draw a more complete picture of the data analysis process and presents a new view about what makes for a successful data analysis. It is presented in a completely non-technical
and highly readable style that should be of interest to practitioners and managers in data analysis.
Collaborative Statistics
Barbara Illowsky and Susan Dean
This book is intended for introductory statistics courses being taken by students at two– and four–year colleges who are majoring in fields other than math or engineering.
Intermediate algebra is the only prerequisite. The book focuses on applications of statistical knowledge rather than the theory behind it. The text is named Collaborative Statistics because students
learn best by doing. In fact, they learn best by working in small groups. The old saying “two heads are better than one” truly applies here.
FREE Statistics Books for Data Science
Theory and Applications For Advanced Text Mining
Edited by Shigeaki Sakurai
Due to the growth of computer technologies and web technologies, we can easily collect and store large amounts of text data in the belief that these data contain useful knowledge.
Text mining techniques have been studied intensively since the late 1990s in order to extract knowledge from these data. Even though many important techniques have been developed, the text mining
research field continues to expand to meet the needs arising from various application fields. This book is composed of 9 chapters introducing advanced text mining techniques, ranging from relation
extraction to techniques for under-resourced languages.
This book will give new knowledge in the text mining field and help many readers open new research fields.
Advanced Statistical Computing
The journey from statistical model to useful output has many steps, most of which are taught in other books and courses.
The purpose of this book is to focus on one particular aspect of this journey: the development and implementation of statistical algorithms. It's often nice to think about statistical models and
various inferential philosophies and techniques, but when the rubber meets the road, we need an algorithm and a computer program implementation to get the results we need from a combination of our
data and our models. This book is about how we fit models to data and the algorithms that we use to do so. Examples are given using the R programming language.
Advanced Linear Models For Data Science
Linear models are the cornerstone of statistical methodology. Perhaps more than any other tool, advanced students of statistics, biostatistics, machine learning, data science, econometrics, etcetera
should spend time learning the finer grain details of this subject.
In this book, we give a brief, but rigorous treatment of advanced linear models. It is advanced in the sense that it is of a level that an introductory PhD student in statistics or biostatistics would
see. The material in this book is standard knowledge for any PhD in statistics or biostatistics.
Students will need a fair amount of mathematical prerequisites before trying to undertake this class. First come multivariate calculus and linear algebra, especially the latter, since much of the
early parts of linear models are direct applications of linear algebra results applied in a statistical context. In addition, some basic proof based mathematics is necessary to follow the proofs. In
addition, some regression models and mathematical statistics are needed.
Modeling With Data
Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different
problems, how to create and debug statistical models, and how to run an analysis and evaluate the results.
Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and
computationally intensive procedures.
He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods.
Klemens's accessible survey describes these models in a unified and non-traditional manner, providing alternative ways of looking at statistical concepts that often befuddle students. The book
includes nearly one hundred sample programs of all kinds.
FREE Books for Programming Statistics in R
A Little Book Of R For Time Series
This is a simple introduction to time series analysis using the R statistics software (have you spotted the pattern yet?). It includes instruction on how to read and plot time series, time series
decomposition, forecasting, and ARIMA models
A Little Book Of R For Multivariate Analysis
A Little Book of R for Multivariate Analysis is a simple introduction to multivariate analysis using the R statistics software.
It covers topics such as reading and plotting multivariate data, principal components analysis, and linear discriminant analysis.
It's only 49 pages long and you can read it online or download it as a pdf.
Practical Regression And Anova Using R
This book is not for beginners. It presumes some knowledge of basic statistical theory and practice, such as estimation, hypothesis testing, and confidence intervals, along with a basic knowledge of
data analysis. Some linear algebra and calculus is also required.
Introduction to Statistical Thought
The book is intended as an upper level undergraduate or introductory graduate textbook in statistical thinking with a likelihood emphasis for students with a good knowledge of calculus and the
ability to think abstractly. "Statistical thinking" means a focus on ideas that statisticians care about as opposed to technical details of how to put those ideas into practice. The book does contain
technical details, but they are not the focus. "Likelihood emphasis" means that the likelihood function and likelihood principle are unifying ideas throughout the text.
Another unusual aspect is the use of statistical software as a pedagogical tool. That is, instead of viewing the computer merely as a convenient and accurate calculating device, the book uses
computer calculation and simulation as another way of explaining and helping readers understand the underlying concepts. The book is written with the statistical language R embedded throughout.
Forecasting Principles And Practice
Rob J. Hyndman and George Athanasopoulos
Forecasting is required in many situations. Deciding whether to build another power generation plant in the next five years requires forecasts of future demand. Scheduling staff in a call centre next
week requires forecasts of call volumes. Stocking an inventory requires forecasts of stock requirements. Telecommunication routing requires traffic forecasts a few minutes ahead.
Whatever the circumstances or time horizons involved, forecasting is an important aid in effective and efficient planning. This textbook provides a comprehensive introduction to forecasting methods
and presents enough information about each method for readers to use them sensibly. Examples use R with many data sets taken from the authors' own consulting experience.
From Algorithms to Z-Scores: Probabilistic And Statistical Modeling In Computer Science
The materials here form a textbook for a course in mathematical probability and statistics for computer science students.
Computer science examples are used throughout, in areas such as: computer networks; data and text mining; computer security; remote sensing; computer performance evaluation; software engineering;
data management; etc.
The R statistical/data manipulation language is used throughout. Since this is a computer science audience, a greater sophistication in programming can be assumed. It is recommended that the R
tutorial, R for Programmers, be used as a supplement.
Throughout the units, mathematical theory and applications are interwoven, with a strong emphasis on modelling.
An Introduction To Statistical Learning With Applications In R
Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani
This book provides an introduction to statistical learning methods.
It is aimed for upper level undergraduate students, masters students and Ph.D. students in the non-mathematical sciences.
The book also contains a number of R labs with detailed explanations on how to implement the various methods in real life settings, and should be a valuable resource for a practicing data scientist.
R Programming For Data Science
This book brings the fundamentals of R programming to you, using the same material developed as part of the industry-leading Johns Hopkins Data Science Specialization. The skills taught in this book
will lay the foundation for you to begin your journey learning data science.
Data Analysis And Graphics Using R
John Maindonald and W. John Braun
Introducing the R system, covering standard regression methods, then tackling more advanced topics, this book guides users through the practical, powerful tools that the R system provides. The
emphasis is on hands-on analysis, graphical display, and interpretation of data.
The many worked examples, from real-world research, are accompanied by commentary on what is done and why.
Assuming basic statistical knowledge and some experience with data analysis (but not R), the book is ideal for research scientists, final-year undergraduate or graduate-level students of applied
statistics, and practicing statisticians. It is both for learning and for reference. This third edition expands upon topics such as Bayesian inference for regression, errors in variables, generalized
linear mixed models, and random forests.
FREE Books for Statistics and Machine Learning
Statistical Learning With Sparsity
Trevor Hastie, Robert Tibshirani, Martin Wainwright
During the past decade there has been an explosion in computation and information technology. With it has come vast amounts of data in a variety of fields such as medicine, biology, finance, and
marketing. This book describes the important ideas in these areas in a common conceptual framework.
Hope you found this list of FREE Statistics Books for Data Science helpful.
If you're looking for more FREE Data Science Books we also have the following posts.
| {"url":"https://www.chi2innovations.com/blog/free-stats-books/","timestamp":"2024-11-09T13:56:42Z","content_type":"text/html","content_length":"880305","record_id":"<urn:uuid:899f2835-c93a-4a7b-b293-a815d1985f7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00470.warc.gz"}
Practice (2) - Math All Star
Practice (2)
How many ordered triples of positive integers $(a, b, c)$ satisfy $$\left\{\begin{array}{ll}ab + bc &= 44\\ ac + bc &=23\end{array}\right.$$
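A brute-force enumeration settles this one quickly (a sketch; the bound of 44 is safe because every variable is a positive factor in a product summing to 44 or 23):

```python
# Enumerate positive-integer triples satisfying ab + bc = 44 and ac + bc = 23.
solutions = [
    (a, b, c)
    for a in range(1, 45)
    for b in range(1, 45)
    for c in range(1, 45)
    if a * b + b * c == 44 and a * c + b * c == 23
]
print(solutions)  # [(1, 22, 1), (21, 2, 1)] -> 2 ordered triples
```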
How many ordered pairs of integers $(x, y)$ satisfy $0 < x < y$ and $\sqrt{1984} = \sqrt{x} + \sqrt{y}$?
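This one can be checked exactly in integers (a sketch: squaring gives $1984 = x + y + 2\sqrt{xy}$, so $xy$ must be a perfect square):

```python
from math import isqrt

# sqrt(1984) = sqrt(x) + sqrt(y)  iff  x + y + 2*sqrt(xy) = 1984
# with xy a perfect square.
pairs = []
for x in range(1, 1984):
    for y in range(x + 1, 1984 - x):   # need x + y < 1984
        r = isqrt(x * y)
        if r * r == x * y and x + y + 2 * r == 1984:
            pairs.append((x, y))
print(pairs)  # [(31, 1519), (124, 1116), (279, 775)] -> 3 ordered pairs
```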
Two of the three sides of a triangle are 20 and 15. Which of the following numbers is not a possible perimeter of the triangle?
Mr. Patrick teaches math to 15 students. He was grading tests and found that when he graded everyone's test except Payton's, the average grade for the class was 80. After he graded Payton's test, the
class average became 81. What was Payton's score on the test?
A collection of circles in the upper half-plane, all tangent to the $x$-axis, is constructed in layers as follows. Layer $L_0$ consists of two circles of radii $70^2$ and $73^2$ that are externally
tangent. For $k\ge1$, the circles in $\bigcup_{j=0}^{k-1}L_j$ are ordered according to their points of tangency with the $x$-axis. For every pair of consecutive circles in this order, a new circle is
constructed externally tangent to each of the two circles in the pair. Layer $L_k$ consists of the $2^{k-1}$ circles constructed in this way. Let $S=\bigcup_{j=0}^{6}L_j$, and for every circle $C$
denote by $r(C)$ its radius. What is \[\sum_{C\in S} \frac{1}{\sqrt{r(C)}}?\]
Amelia needs to estimate the quantity $\frac{a}{b} - c$, where $a, b,$ and $c$ are large positive integers. She rounds each of the integers so that the calculation will be easier to do mentally. In
which of these situations will her answer necessarily be greater than the exact value of $\frac{a}{b} - c$?
A box contains 2 red marbles, 2 green marbles, and 2 yellow marbles. Carol takes 2 marbles from the box at random; then Claudia takes 2 of the remaining marbles at random; and then Cheryl takes the
last 2 marbles. What is the probability that Cheryl gets 2 marbles of the same color?
Integers $x$ and $y$ with $x>y>0$ satisfy $x+y+xy=80$. What is $x$?
On a sheet of paper, Isabella draws a circle of radius $2$, a circle of radius $3$, and all possible lines simultaneously tangent to both circles. Isabella notices that she has drawn exactly $k \ge
0$ lines. How many different values of $k$ are possible?
The parabolas $y=ax^2 - 2$ and $y=4 - bx^2$ intersect the coordinate axes in exactly four points, and these four points are the vertices of a kite of area $12$. What is $a+b$?
A league with $12$ teams holds a round-robin tournament, with each team playing every other team exactly once. Games either end with one team victorious or else end in a draw. A team scores $2$
points for every game it wins and $1$ point for every game it draws. Which of the following is NOT a true statement about the list of $12$ scores?
What is the value of $a$ for which $\frac{1}{\text{log}_2a} + \frac{1}{\text{log}_3a} + \frac{1}{\text{log}_4a} = 1$?
What is the minimum number of digits to the right of the decimal point needed to express the fraction $\frac{123456789}{2^{26}\cdot 5^4}$ as a decimal?
The zeros of the function $f(x) = x^2-ax+2a$ are integers. What is the sum of the possible values of $a$?
Isosceles triangles $T$ and $T'$ are not congruent but have the same area and the same perimeter. The sides of $T$ have lengths $5$, $5$, and $8$, while those of $T'$ have lengths $a$, $a$, and $b$.
Which of the following numbers is closest to $b$?
A circle of radius r passes through both foci of, and exactly four points on, the ellipse with equation $x^2+16y^2=16.$ The set of all possible values of $r$ is an interval $[a,b).$ What is $a+b?$
For each positive integer $n$, let $S(n)$ be the number of sequences of length $n$ consisting solely of the letters $A$ and $B$, with no more than three $A$s in a row and no more than three $B$s in a
row. What is the remainder when $S(2015)$ is divided by $12$?
Rational numbers $a$ and $b$ are chosen at random among all rational numbers in the interval $[0,2)$ that can be written as fractions $\frac{n}{d}$ where $n$ and $d$ are integers with $1 \le d \le
5$. What is the probability that \[(\text{cos}(a\pi)+i\text{sin}(b\pi))^4\] is a real number?
What is the value of $2-(-2)^{-2}$ ?
Isaac has written down one integer two times and another integer three times. The sum of the five numbers is 100, and one of the numbers is 28. What is the other number?
David, Hikmet, Jack, Marta, Rand, and Todd were in a $12$-person race with $6$ other people. Rand finished $6$ places ahead of Hikmet. Marta finished $1$ place behind Jack. David finished $2$ places
behind Hikmet. Jack finished $2$ places behind Todd. Todd finished $1$ place behind Rand. Marta finished in $6^{th}$ place. Who finished in $8^{th}$ place?
The Tigers beat the Sharks $2$ out of $3$ times they played. They then played $N$ more times, and the Sharks ended up winning at least $95\%$ of all the games played. What is the minimum possible
value for $N$?
Back in 1930, Tillie had to memorize her multiplication facts from $0 \times 0$ to $12 \times 12$. The multiplication table she was given had rows and columns labeled with the factors, and the
products formed the body of the table. To the nearest hundredth, what fraction of the numbers in the body of the table are odd?
A regular 15-gon has $L$ lines of symmetry, and the smallest positive angle for which it has rotational symmetry is $R$ degrees. What is $L+R$ ?
What is the value of $(625^{\log_5 2015})^{\frac{1}{4}}$ ? | {"url":"https://www.mathallstar.org/Practice/SearchByCompetition?Competitions=2","timestamp":"2024-11-15T02:23:06Z","content_type":"text/html","content_length":"59450","record_id":"<urn:uuid:099eb28d-7c51-4566-83cd-10526918b3e2>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00395.warc.gz"} |
Is Information-Theoretic Topology-Hiding Computation Possible?
Topology-hiding computation (THC) is a form of multi-party computation over an incomplete communication graph that maintains the privacy of the underlying graph topology. Existing THC protocols
consider an adversary that may corrupt an arbitrary number of parties, and rely on cryptographic assumptions such as DDH. In this paper we address the question of whether information-theoretic THC
can be achieved by taking advantage of an honest majority. In contrast to the standard MPC setting, this problem has remained open in the topology-hiding realm, even for simple “privacy-free”
functions like broadcast, and even when considering only semi-honest corruptions. We uncover a rich landscape of both positive and negative answers to the above question, showing that what types of
graphs are used and how they are selected is an important factor in determining the feasibility of hiding topology information-theoretically. In particular, our results include the following.
- We show that topology-hiding broadcast (THB) on a line with four nodes, secure against a single semi-honest corruption, implies key agreement. This result extends to broader classes of graphs, e.g., THB on a cycle with two semi-honest corruptions.
- On the other hand, we provide the first feasibility result for information-theoretic THC: for the class of cycle graphs, with a single semi-honest corruption.
- Given the strong impossibilities, we put forth a weaker definition of distributional-THC, where the graph is selected from some distribution (as opposed to worst-case). We present a formal separation between the definitions, by showing a distribution for which information-theoretic distributional-THC is possible, but even topology-hiding broadcast is not possible information-theoretically with the standard definition.
- We demonstrate the power of our new definition via a new connection to adaptively secure low-locality MPC, where distributional-THC enables parties to “reuse” a secret low-degree communication graph even in the face of adaptive corruptions.
Original language: English (US)
Title of host publication: Theory of Cryptography - 17th International Conference, TCC 2019, Proceedings
Editors: Dennis Hofheinz, Alon Rosen
Publisher: Springer
Pages: 502-530
Number of pages: 29
ISBN (Print): 9783030360290
State: Published - 2019
Event: 17th International Conference on Theory of Cryptography, TCC 2019 - Nuremberg, Germany
Duration: Dec 1 2019 → Dec 5 2019

Publication series
Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11891 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference: 17th International Conference on Theory of Cryptography, TCC 2019
Country/Territory: Germany
City: Nuremberg
Period: 12/1/19 → 12/5/19
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
| {"url":"https://nyuscholars.nyu.edu/en/publications/is-information-theoretic-topology-hiding-computation-possible","timestamp":"2024-11-11T17:13:10Z","content_type":"text/html","content_length":"55413","record_id":"<urn:uuid:49833c3d-929e-4335-a1ad-566602b93e74>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00601.warc.gz"}
ECE 515 - Control System Theory & Design
Homework 1 - Due: 01/25
Problem 1
Which of the following are vector spaces over \(\mathbb{R}\) (with respect to standard addition and scalar multiplication)? Justify your answers.
a. The set of real valued \(n \times n\) matrices with nonnegative entries where \(n\) is a given positive integer.
b. The set of rational functions of the form \(\dfrac{p(s)}{q(s)}\) where \(p\) and \(q\) are polynomials in the complex variable \(s\) and the degree of \(q\) does not exceed a given fixed positive
integer \(k\).
c. The space \(L^2\left(\mathbb{R}, \mathbb{R}\right)\) of square-integrable functions, i.e., functions \(f : \mathbb{R} \to \mathbb{R}\) with the property that
\[ \int \limits _{-\infty} ^{\infty} f^2 (t) dt < \infty \]
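For intuition (an illustrative aside, not part of the problem statement): the function \(f(t) = e^{-|t|}\) belongs to this space, since

\[ \int \limits _{-\infty} ^{\infty} e^{-2|t|} dt = 2 \int \limits _{0} ^{\infty} e^{-2t} dt = 1 < \infty \]

while the constant function \(f(t) = 1\) does not.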
Problem 2
Let \(A\) be the linear operator in the plane corresponding to the counter-clockwise rotation around the origin by some given angle \(\theta\). Compute the matrix of \(A\) relative to the standard
basis in \(\mathbb{R}^2\).
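A numeric cross-check for whatever matrix you derive (a sketch using only the Python standard library; recall that the \(j\)-th column of the matrix is the image of the \(j\)-th standard basis vector):

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) counter-clockwise by theta radians,
    via multiplication by e^{i*theta} in the complex plane."""
    z = complex(x, y) * complex(math.cos(theta), math.sin(theta))
    return (z.real, z.imag)

theta = math.pi / 6                 # 30 degrees, an example angle
col1 = rotate(1.0, 0.0, theta)      # first column: image of e1
col2 = rotate(0.0, 1.0, theta)      # second column: image of e2
print(col1, col2)
```

Comparing these columns against your hand-derived matrix at a few angles catches sign and ordering mistakes quickly.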
Problem 3
Let \(A: X \to Y\) be a linear transformation.
a. Prove that \(\dim N (A) + \dim R(A) = \dim X\) (the sum of the dimension of the nullspace of \(A\) and the dimension of the range of \(A\) equals the dimension of \(X\)).
b. Now assume that \(X = Y\). It is not always true that \(X\) is a direct sum of \(N(A)\) and \(R(A)\). Find a counterexample demonstrating this. Also, describe a class of linear transformations
(as general as you can think of) for which this statement is true.
Problem 4
Consider the standard RLC circuit, except now allow its characteristics \(R, L\) and \(C\) to vary with time. Starting with the same non-dynamic physical laws as in class (\(q = CV_c\) for the
capacitor charge, \(\varphi = LI\) for the inductor flux), derive a dynamical model of this circuit. It should take the form:
\[ \dot{x} = A(t) x + B(t) u \]
Problem 5
Three employees — let’s call them Alice, Bob, and Cheng — received their end-of-the-year bonuses which their boss calculated as a linear combination of three performance scores: leadership,
communication, and work quality. The coefficients (weights) in this linear combination are the same for all three employees, but the boss doesn’t disclose them. Alice knows that she got the score of
4 for leadership, 4 for communication, and 5 for work quality. Bob’s scores for the same categories were 3, 5, and 4, and Cheng’s scores were 5, 3, and 3. The bonus amounts are $18000 for Alice,
$16000 for Bob, and $14000 for Cheng. The employees are now curious to determine the unknown coefficients (weights).
a. Set up this problem as solving a linear equation of the form \(Ax = b\) for the unknown vector \(x\).
b. Calculate the unknown weights. It’s up to you whether you use part (a) for this or do it another way.
c. Are the weights that you computed unique? Explain why or why not. | {"url":"https://courses.grainger.illinois.edu/ece515/sp2024/homework/hw01.html","timestamp":"2024-11-13T22:07:04Z","content_type":"application/xhtml+xml","content_length":"34424","record_id":"<urn:uuid:a2f69914-ea90-45a5-b5c9-e2e04e701429>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00012.warc.gz"} |
Solve The Multiplication Fact In Each Box. Next Write 2 within Worksheets Relating Multiplication And Division
Solve The Multiplication Fact In Each Box. Next Write 2 within Worksheets Relating Multiplication And Division is the latest worksheet that you can find. This worksheet was uploaded on March 04, 2020
by admin in Worksheets.
Here is the Solve The Multiplication Fact In Each Box. Next Write 2 within Worksheets Relating Multiplication And Division that exist. Ensure you apply it purposively for education. The
multiplication can be very valuable especially for kids to perform. Download the particular Solve The Multiplication Fact In Each Box. Next Write 2 within Worksheets Relating Multiplication And
Division under!
Applying free multiplication worksheets is a superb technique to add some selection to your homescho
Solve The Multiplication Fact In Each Box. Next Write 2 within Worksheets Relating Multiplication And Division in your computer by clicking resolution image in Download by size:. Don't forget to rate
and comment if you like this worksheet. | {"url":"https://www.printablemultiplication.com/worksheets-relating-multiplication-and-division/solve-the-multiplication-fact-in-each-box-next-write-2-within-worksheets-relating-multiplication-and-division/","timestamp":"2024-11-06T15:44:26Z","content_type":"text/html","content_length":"63546","record_id":"<urn:uuid:9b12943c-4f52-42f2-b7f1-09d3ce395914>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00395.warc.gz"} |
What is Computational Fluid Dynamics Software? 3 Essential Features
Computational fluid dynamics (CFD) is a method many engineers, scientists, and researchers use to mathematically model and solve momentum, energy, and mass transport fluid flow problems.
Thanks to modern computers and the exponential boom in computational power, CFD software brings fluid flow testing to an engineer’s computer screen. Computational fluid dynamics software can be used
in a variety of industries and for many applications, allowing engineers to test, simulate, and solve physics problems and equations that follow physical laws.
Here’s why computational fluid dynamics software has become so popular.
Why CFD Software?
Engineers, scientists, and researchers use computational fluid dynamics software to predict and simulate fluid flow, heat transfer, species transport, chemical reactions, particle transport and
rigid-body dynamics, and other physical processes that would otherwise be too time consuming, complex, and expensive to investigate using experimental approaches.
For example, in the pharmaceutical industry, biomanufacturing processes which produce biologic drugs are an important area of study. These biologic drugs are typically produced by living organisms
within stirred tank bioreactors, requiring a continuous supply of sparged oxygen.
However, oxygen transfer in bioreactors is governed by complex fluid mechanics, which causes challenges when it comes to scaling up from lab-scale, tabletop bioreactors with small operating volumes
to production-sized bioreactors that can hold up to tens of thousands of liters.
CFD software helps solve problems like this by providing a physics-based approach to modeling oxygen transfer, predicting fluid flow behavior—like oxygen transfer at multiple vessel scales, in this
example—so manufacturers can more effectively and accurately scale up production.
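To make the "physics-based approach" concrete, here is a deliberately tiny sketch: a one-dimensional finite-difference model of oxygen diffusing from a gas interface into a liquid column. It is illustrative only; the parameter values are rough, and it is not how any commercial CFD package (or real bioreactor model) is implemented.

```python
# Explicit finite-difference march of the 1D diffusion equation
#   dc/dt = D * d^2c/dx^2
# for a normalized oxygen concentration c(x, t) in a short liquid column.
D = 2.0e-9               # oxygen diffusivity in water, m^2/s (approximate)
L = 0.01                 # 1 cm column
n = 51                   # grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / D   # r = D*dt/dx^2 = 0.4 <= 0.5 keeps the scheme stable

c = [0.0] * n
c[0] = 1.0               # saturated concentration held at the gas interface

for _ in range(2000):                     # march forward in time
    new = c[:]
    for i in range(1, n - 1):
        new[i] = c[i] + D * dt / (dx * dx) * (c[i - 1] - 2 * c[i] + c[i + 1])
    new[-1] = new[-2]                     # zero-flux condition at the bottom
    c = new
# c now holds a smooth profile decaying away from the interface.
```

Real CFD solvers do the same kind of thing in three dimensions, on unstructured meshes, with turbulence and mass-transfer models and far better numerics, which is exactly why dedicated software exists.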
Interested in learning more about this example?
Check out our academic paper with Bristol-Myers Squibb on the topic.
The benefits can be summed up in three simple ways:
1. Save time
2. Save money
3. Solve more complex physics problems
To further understand why CFD software works, let’s look at how it has evolved—and how it should work today.
History of Computational Fluid Dynamics Software
Scientists and researchers have always been fascinated with fluid dynamics, and through the years many have tried to mathematically describe the motion of fluids. One could argue that the roots of
computational fluid dynamics software stretch all the way back to the 17th century, when Sir Isaac Newton tried to quantify and predict fluid flow phenomena through his Newtonian physical equations.

Fast forward to the early 20th century—when many agree the modern definition of CFD begins. This was when mathematical and numerical methods were being improved upon and refined. By the mid-1900s,
these models and methods could be integrated to generate numerical solutions based on hand calculations. These calculations transitioned to computer-based computations as early computers became available.
Ultimately, the development of computational fluid dynamics software for commercial use began in the 1980s after Boeing, NASA, and other organizations released codes to the public. These codes have
been continuously improved upon—and transformed completely—ever since. CFD software has truly come a long way.
Today, it’s available for laptops, desktops, GPU clusters, or even directly on the cloud on various platforms including Windows and Linux. Here’s how modern CFD works.
How CFD Software Works: 3 Essential Features
Not all computational fluid dynamics software is created equal. To be able to solve complex problems faster, three things need to be true about the solution you deploy:
1. Modern Algorithms
Modern computational fluid dynamics software is based on Lattice Boltzmann algorithms, which solve the time-dependent Navier-Stokes equations.
Unlike legacy CFD tools that describe time-average/steady-state flow fields using low-fidelity turbulence models, Lattice Boltzmann is a mesoscopic modeling tool that describes the space-time
dynamics of a probability distribution function across phase-space.
This approach, which is inherently time-dynamic, enables superior turbulence modeling capabilities, faster run times, and scales more favorably with increasing system complexity.
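The collide-and-stream update on a distribution function can be illustrated with a deliberately tiny sketch. The Python toy below is a one-dimensional, two-velocity (D1Q2) lattice Boltzmann scheme for pure diffusion, far simpler than the three-dimensional Navier-Stokes solvers described here; the grid size and relaxation time are illustrative assumptions, but the characteristic structure is visible: a local collision step that relaxes each population toward equilibrium, followed by a streaming step that shifts populations along their lattice directions.

```python
# Toy 1-D lattice Boltzmann scheme (D1Q2) for pure diffusion.
# Two populations per node: one moving right, one moving left.
N = 101                        # number of lattice nodes (illustrative choice)
tau = 1.0                      # relaxation time (illustrative; must exceed 0.5)
f_right = [0.0] * N
f_left = [0.0] * N
f_right[N // 2] = 0.5          # a unit of mass starts at the center node,
f_left[N // 2] = 0.5           # split evenly between the two populations

def step(f_right, f_left):
    # Collision: relax each population toward its equilibrium w_i * rho = rho/2.
    rho = [r + l for r, l in zip(f_right, f_left)]
    f_right = [r - (r - 0.5 * d) / tau for r, d in zip(f_right, rho)]
    f_left = [l - (l - 0.5 * d) / tau for l, d in zip(f_left, rho)]
    # Streaming: shift each population one node along its travel direction
    # (periodic wrap at the edges keeps the bookkeeping trivial).
    return [f_right[-1]] + f_right[:-1], f_left[1:] + [f_left[0]]

for _ in range(20):
    f_right, f_left = step(f_right, f_left)

rho = [r + l for r, l in zip(f_right, f_left)]
print(sum(rho))                # mass is conserved: still 1.0; the peak has spread
```

Production solvers use three-dimensional lattices (e.g., D3Q19) and recover the Navier-Stokes equations in the macroscopic limit, but the structure of the update loop is the same.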
2. Modern GPU Architectures
For modern algorithms to work, they need to be paired with modern graphics processing unit (GPU) architectures.
GPUs provide significant computational power, allowing users to maximize the amount of science they can do per unit man-hour.
Grid fineness informs physics complexity—which is what modern algorithms provide—and speed informs practicality. That’s where GPUs come in. GPU-based algorithms help users model transient,
three-dimensional physics in real-time.
The problem? Many “modern” CFD solutions are coded for CPUs (central processing units). So if you want more accurate results faster, you need CFD software that has been coded specifically with
GPU-based algorithms.
3. Minimal User Setup
Finally, CFD software is only as useful as it is usable. It's no good if it takes too long to learn how to add geometries, set up models, and generate results. You need something that can help you
start visualizing your simulation after just minutes of processing time. Here are capabilities that indicate ease of setup:
• Import Low-Quality or Imperfect Geometry Files
When it comes to setting up the simulation, your tool should allow you to focus on what matters without getting bogged down by CAD shape healing or meshing complex geometries.
• Use Existing & Custom Fluid Models
By itself, the combination of a Lattice Boltzmann CFD solver and high-resolution meshes should provide the base fluid model for your simulations that you can then layer and enrich with additional,
extraordinary physics. This helps you solve more problems faster, because you already have a starting place.
• Analyze Data In Real-Time
Imagine how much faster you could get results if you could analyze data as it's generated in real time by the solver. With modern CFD software that offers an integrated post-processing suite, you can do exactly that.

Computational fluid dynamics software has come a long way. Modern CFD software that runs Lattice Boltzmann algorithms on GPUs has transformed the industry, helping engineers, scientists, and
researchers in the pharmaceutical space and beyond solve more complex physics problems—faster and at an accuracy that rivals experimental data.
Build advanced fluid models in minutes, predict real-time dynamics with precision, and solve more complex fluid flow problems faster with M-Star CFD—CFD software for the real world.
Directional Derivatives and the Gradient
In Partial Derivatives, we introduced the partial derivative. A function \(z=f(x,y)\) has two partial derivatives: \(∂z/∂x\) and \(∂z/∂y\). These derivatives correspond to each of the independent
variables and can be interpreted as instantaneous rates of change (that is, as slopes of a tangent line). For example, \(∂z/∂x\) represents the slope of a tangent line passing through a given point
on the surface defined by \(z=f(x,y),\) assuming the tangent line is parallel to the \(x\)-axis. Similarly, \(∂z/∂y\) represents the slope of the tangent line parallel to the \(y\)-axis. Now we
consider the possibility of a tangent line parallel to neither axis.
Directional Derivatives
We start with the graph of a surface defined by the equation \(z=f(x,y)\). Given a point \((a,b)\) in the domain of \(f\), we choose a direction to travel from that point. We measure the direction
using an angle \(θ\), which is measured counterclockwise in the \(xy\)-plane, starting at zero from the positive \(x\)-axis (Figure \(\PageIndex{1}\)). The distance we travel is \(h\) and the
direction we travel is given by the unit vector \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\mathbf j}.\) Therefore, the \(z\)-coordinate of the second point on the graph is given by \(z=f(a+h
\cos θ,b+h\sin θ).\)
Figure \(\PageIndex{1}\): Finding the directional derivative at a point on the graph of \(z=f(x,y)\). The slope of the blue arrow on the graph indicates the value of the directional derivative at
that point.
We can calculate the slope of the secant line by dividing the difference in \(z\)-values by the length of the line segment connecting the two points in the domain. The length of the line segment is \
(h\). Therefore, the slope of the secant line is
\[m_{sec}=\dfrac{f(a+h\cos θ,b+h\sin θ)−f(a,b)}{h}\]
To find the slope of the tangent line in the same direction, we take the limit as \(h\) approaches zero.
Definition: Directional Derivatives
Suppose \(z=f(x,y)\) is a function of two variables with a domain of \(D\). Let \((a,b)∈D\) and define \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\mathbf j}\). Then the directional derivative
of \(f\) in the direction of \(\vecs u\) is given by
\[D_{\vecs u}f(a,b)=\lim_{h→0}\dfrac{f(a+h \cos θ,b+h\sin θ)−f(a,b)}{h} \label{DD}\]
provided the limit exists.
Equation \ref{DD} provides a formal definition of the directional derivative that can be used in many cases to calculate a directional derivative.
Note that since the point \((a, b)\) is chosen randomly from the domain \(D\) of the function \(f\), we can use this definition to find the directional derivative as a function of \(x\) and \(y\).
That is,
\[D_{\vecs u}f(x,y)=\lim_{h→0}\dfrac{f(x+h \cos θ,y+h\sin θ)−f(x,y)}{h} \label{DDxy}\]
Example \(\PageIndex{1}\): Finding a Directional Derivative from the Definition
Let \(θ=\arccos(3/5).\) Find the directional derivative \(D_{\vecs u}f(x,y)\) of \(f(x,y)=x^2−xy+3y^2\) in the direction of \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\mathbf j}\).
Then determine \(D_{\vecs u}f(−1,2)\).
First of all, since \(\cos θ=3/5\) and \(θ\) is acute, this implies
\[\sin θ=\sqrt{1−\left(\dfrac{3}{5}\right)^2}=\sqrt{\dfrac{16}{25}}=\dfrac{4}{5}. \nonumber\]
Using \(f(x,y)=x^2−xy+3y^2,\) we first calculate \(f(x+h\cos θ,y+h\sin θ)\):
\[\begin{align*} f(x+h\cos θ,y+h\sin θ)&=(x+h\cos θ)^2−(x+h\cos θ)(y+h\sin θ)+3(y+h\sin θ)^2 \\ &=x^2+2xh\cos θ+h^2\cos^2 θ−xy−xh\sin θ−yh\cos θ−h^2\sin θ\cos θ+3y^2+6yh\sin θ+3h^2\sin^2 θ \\ &=x^2+2xh\left(\dfrac{3}{5}\right)+\dfrac{9h^2}{25}−xy−\dfrac{4xh}{5}−\dfrac{3yh}{5}−\dfrac{12h^2}{25}+3y^2+6yh\left(\dfrac{4}{5}\right)+3h^2\left(\dfrac{16}{25}\right) \\ &=x^2−xy+3y^2+\dfrac{2xh}{5}+\dfrac{9h^2}{5}+\dfrac{21yh}{5}. \end{align*}\]
We substitute this expression into Equation \ref{DD} with \(a = x\) and \(b = y\):
\[\begin{align*} D_{\vecs u}f(x,y)&=\lim_{h→0}\dfrac{f(x+h\cos θ,y+h\sin θ)−f(x,y)}{h}\\ &=\lim_{h→0}\dfrac{(x^2−xy+3y^2+\dfrac{2xh}{5}+\dfrac{9h^2}{5}+\dfrac{21yh}{5})−(x^2−xy+3y^2)}{h}\\ &=\lim_
{h→0}\dfrac{\dfrac{2xh}{5}+\dfrac{9h^2}{5}+\dfrac{21yh}{5}}{h}\\ &=\lim_{h→0}\dfrac{2x}{5}+\dfrac{9h}{5}+\dfrac{21y}{5}\\&=\dfrac{2x+21y}{5}. \end{align*}\]
To calculate \(D_{\vecs u}f(−1,2),\) we substitute \(x=−1\) and \(y=2\) into this answer (Figure \(\PageIndex{2}\)):
\[ D_{\vecs u}f(−1,2)=\dfrac{2(−1)+21(2)}{5}=\dfrac{−2+42}{5}=8. \nonumber\]
Figure \(\PageIndex{2}\): Finding the directional derivative in a given direction \(\vecs u\) at a given point on a surface. The plane is tangent to the surface at the given point \((−1,2,15).\)
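Equation \ref{DD} lends itself to a quick numerical check. The short Python sketch below is a rough check, not part of the text; the step size \(h\) is an arbitrary small value. It approximates the limit with a finite difference and recovers the value \(8\) computed above.

```python
import math

# f(x, y) = x^2 - xy + 3y^2, as in Example 1.
def f(x, y):
    return x**2 - x*y + 3*y**2

def directional_derivative(f, a, b, theta, h=1e-6):
    # Finite-difference approximation of the limit in the definition.
    return (f(a + h*math.cos(theta), b + h*math.sin(theta)) - f(a, b)) / h

theta = math.acos(3/5)           # so cos(theta) = 3/5 and sin(theta) = 4/5
print(directional_derivative(f, -1, 2, theta))   # ≈ 8.0, matching the example
```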
An easier approach to calculating directional derivatives that involves partial derivatives is outlined in the following theorem.
Directional Derivative of a Function of Two Variables
Let \(z=f(x,y)\) be a function of two variables \(x\) and \(y\), and assume that \(f_x\) and \(f_y\) exist. Then the directional derivative of \(f\) in the direction of \(\vecs u=(\cos θ)\,\hat{\
mathbf i}+(\sin θ)\,\hat{\mathbf j}\) is given by
\[D_{\vecs u}f(x,y)=f_x(x,y)\cos θ+f_y(x,y)\sin θ. \label{DD2v}\]
Applying the definition of a directional derivative stated above in Equation \ref{DD}, the directional derivative of \(f\) in the direction of \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\
mathbf j}\) at a point \((x_0, y_0)\) in the domain of \(f\) can be written
\[D_{\vecs u}f(x_0, y_0)=\lim_{t→0}\dfrac{f(x_0+t \cos θ,y_0+t\sin θ)−f(x_0,y_0)}{t}.\]
Let \(x=x_0+t\cos θ\) and \(y=y_0+t\sin θ,\) and define \(g(t)=f(x,y)\). Since \(f_x\) and \(f_y\) both exist, we can use the chain rule for functions of two variables to calculate \(g′(t)\):
\[g′(t)=\dfrac{∂f}{∂x}\dfrac{dx}{dt}+\dfrac{∂f}{∂y}\dfrac{dy}{dt}=f_x(x,y)\cos θ+f_y(x,y)\sin θ.\]
If \(t=0,\) then \(x=x_0\) and \(y=y_0,\) so
\[g′(0)=f_x(x_0,y_0)\cos θ+f_y(x_0,y_0)\sin θ\]
By the definition of \(g′(t),\) it is also true that
\[g′(0)=\lim_{t→0}\dfrac{g(t)−g(0)}{t}=\lim_{t→0}\dfrac{f(x_0+t\cos θ,y_0+t\sin θ)−f(x_0,y_0)}{t}.\]
Therefore, \(D_{\vecs u}f(x_0,y_0)=f_x(x_0,y_0)\cos θ+f_y(x_0,y_0)\sin θ\).
Since the point \( (x_0,y_0) \) is an arbitrary point from the domain of \(f\), this result holds for all points in the domain of \(f\) for which the partials \(f_x\) and \(f_y\) exist.
Therefore, \[D_{\vecs u}f(x,y)=f_x(x,y)\cos θ+f_y(x,y)\sin θ.\]
Example \(\PageIndex{2}\): Finding a Directional Derivative: Alternative Method
Let \(θ=\arccos (3/5).\) Find the directional derivative \(D_{\vecs u}f(x,y)\) of \(f(x,y)=x^2−xy+3y^2\) in the direction of \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\mathbf j}\).
Then determine \(D_{\vecs u}f(−1,2)\).
First, we must calculate the partial derivatives of \(f\):
\[\begin{align*}f_x(x,y)&=2x−y \\ f_y(x,y)&=−x+6y, \end{align*}\]
Then we use Equation \ref{DD2v} with \(θ=\arccos (3/5)\):
\[\begin{align*} D_{\vecs u}f(x,y)&=f_x(x,y)\cos θ+f_y(x,y)\sin θ \\ &=(2x−y)\dfrac{3}{5}+(−x+6y)\dfrac{4}{5} \\ &=\dfrac{6x}{5}−\dfrac{3y}{5}−\dfrac{4x}{5}+\dfrac{24y}{5} \\ &=\dfrac{2x+21y}{5}. \end{align*}\]
To calculate \(D_{\vecs u}f(−1,2),\) let \(x=−1\) and \(y=2\):
\[D_{\vecs u}f(−1,2)=\dfrac{2(−1)+21(2)}{5}=\dfrac{−2+42}{5}=8.\]
This is the same answer obtained in Example \(\PageIndex{1}\).
Exercise \(\PageIndex{1}\):
Find the directional derivative \(D_{\vecs u}f(x,y)\) of \(f(x,y)=3x^2y−4xy^3+3y^2−4x\) in the direction of \(\vecs u=(\cos \dfrac{π}{3})\,\hat{\mathbf i}+(\sin \dfrac{π}{3})\,\hat{\mathbf j}\) using
Equation \ref{DD2v}.
What is \(D_{\vecs u} f(3,4)\)?
Calculate the partial derivatives and determine the value of \(θ\).
\(D_{\vecs u}f(x,y)=\dfrac{6xy−4y^3−4}{2}+\dfrac{(3x^2−12xy^2+6y)\sqrt{3}}{2}\)
\(D_{\vecs u}f(3,4)=\dfrac{72−256−4}{2}+\dfrac{(27−576+24)\sqrt{3}}{2}=−94−\dfrac{525\sqrt{3}}{2}\)
If the vector that is given for the direction of the derivative is not a unit vector, then it is only necessary to divide by the norm of the vector. For example, if we wished to find the directional
derivative of the function in Example \(\PageIndex{2}\) in the direction of the vector \(⟨−5,12⟩\), we would first divide by its magnitude to get \(\vecs u\). This gives us \(\vecs u=⟨−\frac{5}{13},\frac{12}{13}⟩\). Then
\[ \begin{align*} D_{\vecs u}f(x,y)&=f_x(x,y)\cos θ+f_y(x,y)\sin θ \\ &=−\dfrac{5}{13}(2x−y)+\dfrac{12}{13}(−x+6y) \\ &=−\dfrac{22}{13}x+\dfrac{77}{13}y \end{align*}\]
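This normalization step is easy to automate. The sketch below uses illustrative helper names; the partial derivatives are those of Example \(\PageIndex{2}\). It divides the given direction vector by its magnitude before applying the formula.

```python
import math

def fx(x, y): return 2*x - y      # partials of f(x, y) = x^2 - xy + 3y^2
def fy(x, y): return -x + 6*y

def directional_derivative(x, y, v):
    # Normalize v first: the formula requires a unit direction vector.
    norm = math.hypot(v[0], v[1])
    return fx(x, y) * (v[0] / norm) + fy(x, y) * (v[1] / norm)

# <-5, 12> has magnitude 13, so u = <-5/13, 12/13>:
print(directional_derivative(1, 1, (-5, 12)))   # 55/13 ≈ 4.2308
```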
The right-hand side of Equation \ref{DD2v} is equal to \(f_x(x,y)\cos θ+f_y(x,y)\sin θ,\) which can be written as the dot product of two vectors. Define the first vector as \(\vecs ∇f(x,y)=f_x(x,y)\,
\hat{\mathbf i}+f_y(x,y)\,\hat{\mathbf j}\) and the second vector as \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\mathbf j}\). Then the right-hand side of the equation can be written as the
dot product of these two vectors:
\[D_{\vecs u}f(x,y)=\vecs ∇f(x,y)⋅\vecs u. \label{gradDirDer}\]
The first vector in Equation \ref{gradDirDer} has a special name: the gradient of the function \(f\). The symbol \(∇\) is called nabla and the vector \(\vecs ∇f\) is read “del \(f\).”
Definition: The Gradient
Let \(z=f(x,y)\) be a function of \(x\) and \(y\) such that \(f_x\) and \(f_y\) exist. The vector \(\vecs ∇f(x,y)\) is called the gradient of \(f\) and is defined as
\[\vecs ∇f(x,y)=f_x(x,y)\,\hat{\mathbf i}+f_y(x,y)\,\hat{\mathbf j}. \label{grad}\]
The vector \(\vecs ∇f(x,y)\) is also written as “grad \(f\).”
Example \(\PageIndex{3}\): Finding Gradients
Find the gradient \(\vecs ∇f(x,y)\) of each of the following functions:
1. \(f(x,y)=x^2−xy+3y^2\)
2. \(f(x,y)=\sin 3 x \cos 3y\)
For both parts a. and b., we first calculate the partial derivatives \(f_x\) and \(f_y\), then use Equation \ref{grad}.
a. \( f_x(x,y)=2x−y\) and \(f_y(x,y)=−x+6y\), so
\[\begin{align*} \vecs ∇f(x,y)&=f_x(x,y)\,\hat{\mathbf i}+f_y(x,y)\,\hat{\mathbf j}\\&=(2x−y)\,\hat{\mathbf i}+(−x+6y)\,\hat{\mathbf j}.\end{align*}\]
b. \( f_x(x,y)=3\cos 3x \cos 3y\) and \(f_y(x,y)=−3\sin 3x \sin 3y\), so
\[\begin{align*} \vecs ∇f(x,y)&=f_x(x,y)\,\hat{\mathbf i}+f_y(x,y)\,\hat{\mathbf j} \\ &=(3\cos 3x \cos 3y)\,\hat{\mathbf i}−(3\sin 3x \sin 3y)\,\hat{\mathbf j}. \end{align*}\]
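Analytic gradients like these can be sanity-checked with central finite differences; the sketch below (the step size \(h\) is an arbitrary small value) does so for both functions of Example \(\PageIndex{3}\).

```python
import math

def grad(f, x, y, h=1e-6):
    # Central-difference approximation of (f_x, f_y) at (x, y).
    return ((f(x + h, y) - f(x - h, y)) / (2*h),
            (f(x, y + h) - f(x, y - h)) / (2*h))

fa = lambda x, y: x**2 - x*y + 3*y**2            # part a.
fb = lambda x, y: math.sin(3*x)*math.cos(3*y)    # part b.

gxa, gya = grad(fa, 2.0, -1.0)
print(gxa, gya)    # ≈ (5, -8), matching (2x - y, -x + 6y) at (2, -1)

gxb, gyb = grad(fb, 0.4, 0.7)
print(abs(gxb - 3*math.cos(1.2)*math.cos(2.1)),   # both differences
      abs(gyb + 3*math.sin(1.2)*math.sin(2.1)))   # are tiny
```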
Exercise \(\PageIndex{2}\)
Find the gradient \(\vecs ∇f(x,y)\) of \(f(x,y)=\dfrac{x^2−3y^2}{2x+y}\).
Calculate the partial derivatives, then use Equation \ref{grad}.
\(\vecs ∇f(x,y)=\dfrac{2x^2+2xy+6y^2}{(2x+y)^2}\,\hat{\mathbf i}−\dfrac{x^2+12xy+3y^2}{(2x+y)^2}\,\hat{\mathbf j}\)
The gradient has some important properties. We have already seen one formula that uses the gradient: the formula for the directional derivative. Recall from The Dot Product that if the angle between
two vectors \(\vecs a\) and \(\vecs b\) is \(φ\), then \(\vecs a⋅\vecs b=‖\vecs a‖‖\vecs b‖\cos φ.\) Therefore, if the angle between \(\vecs ∇f(x_0,y_0)\) and \(\vecs u=(\cos θ)\,\hat{\mathbf i}+(\sin θ)\,\hat{\mathbf j}\) is \(φ\), we have
\[D_{\vecs u}f(x_0,y_0)=\vecs ∇f(x_0,y_0)⋅\vecs u=\|\vecs ∇f(x_0,y_0)\|‖\vecs u‖\cos φ=\|\vecs ∇f(x_0,y_0)\|\cos φ.\]
The \(‖\vecs u‖\) disappears because \(\vecs u\) is a unit vector. Therefore, the directional derivative is equal to the magnitude of the gradient evaluated at \((x_0,y_0)\) multiplied by \(\cos φ\).
Recall that \(\cos φ\) ranges from \(−1\) to \(1\).
If \(φ=0,\) then \(\cos φ=1\) and \(\vecs ∇f(x_0,y_0)\) and \(\vecs u\) both point in the same direction.
If \(φ=π\), then \(\cos φ=−1\) and \(\vecs ∇f(x_0,y_0)\) and \(\vecs u\) point in opposite directions.
In the first case, the value of \(D_{\vecs u}f(x_0,y_0)\) is maximized and in the second case, the value of \(D_{\vecs u}f(x_0,y_0)\) is minimized.
We can also see that if \(\vecs ∇f(x_0,y_0)=\vecs 0\), then
\[ D_{\vecs u}f(x_0,y_0)=\vecs ∇f(x_0,y_0)⋅\vecs u=0\]
for any vector \(\vecs u\). These three cases are outlined in the following theorem.
Properties of the Gradient
Suppose the function \(z=f(x,y)\) is differentiable at \((x_0,y_0)\) (Figure \(\PageIndex{3}\)).
1. If \(\vecs ∇f(x_0,y_0)=\vecs 0\), then \(D_{\vecs u}f(x_0,y_0)=0\) for any unit vector \(\vecs u\).
2. If \(\vecs ∇f(x_0,y_0)≠\vecs 0\), then \(D_{\vecs u}f(x_0,y_0)\) is maximized when \(\vecs u\) points in the same direction as \(\vecs ∇f(x_0,y_0)\). The maximum value of \(D_{\vecs u}f(x_0,y_0)
\) is \(\|\vecs ∇f(x_0,y_0)\|\).
3. If \(\vecs ∇f(x_0,y_0)≠\vecs 0\), then \(D_{\vecs u}f(x_0,y_0)\) is minimized when \(\vecs u\) points in the opposite direction from \(\vecs ∇f(x_0,y_0)\). The minimum value of \(D_{\vecs u}f
(x_0,y_0)\) is \(−\|\vecs ∇f(x_0,y_0)\|\).
Figure \(\PageIndex{3}\): The gradient indicates the maximum and minimum values of the directional derivative at a point.
Note: Gradient indicates direction of steepest ascent
Since the gradient vector points in the direction within the domain of \(f\) that corresponds to the maximum value of the directional derivative, \(D_{\vecs u}f(x_0,y_0)\), we say that the gradient
vector points in the direction of steepest ascent or most rapid increase in \(f\), that is, at any given point, the gradient points in the direction with the steepest uphill slope.
Example \(\PageIndex{4}\): Finding a Maximum Directional Derivative
Find the direction for which the directional derivative of \(f(x,y)=3x^2−4xy+2y^2\) at \((−2,3)\) is a maximum. What is the maximum value?
The maximum value of the directional derivative occurs when \(\vecs ∇f\) and the unit vector point in the same direction. Therefore, we start by calculating \(\vecs ∇f(x,y\)):
\[f_x(x,y)=6x−4y \; \text{and}\; f_y(x,y)=−4x+4y \nonumber\]
\[\vecs ∇f(x,y)=f_x(x,y)\,\hat{\mathbf i}+f_y(x,y)\,\hat{\mathbf j}=(6x−4y)\,\hat{\mathbf i}+(−4x+4y)\,\hat{\mathbf j}. \nonumber\]
Next, we evaluate the gradient at \((−2,3)\):
\[\vecs ∇f(−2,3)=(6(−2)−4(3))\,\hat{\mathbf i}+(−4(−2)+4(3))\,\hat{\mathbf j}=−24\,\hat{\mathbf i}+20\,\hat{\mathbf j}. \nonumber\]
The gradient vector gives the direction of the maximum value of the directional derivative.
The maximum value of the directional derivative at \((−2,3)\) is \(\|\vecs ∇f(−2,3)\|=4\sqrt{61}\) (see the Figure \(\PageIndex{4}\)).
Figure \(\PageIndex{4}\): The maximum value of the directional derivative at \((−2,3)\) is in the direction of the gradient.
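The claim that the gradient direction maximizes \(D_{\vecs u}f\) can be tested by brute force: sample many unit directions and keep the largest directional derivative. In the sketch below (the sampling resolution is an arbitrary choice), the winner agrees with \(\vecs ∇f(−2,3)=−24\,\hat{\mathbf i}+20\,\hat{\mathbf j}\) in both direction and value.

```python
import math

gx, gy = -24, 20                  # gradient of 3x^2 - 4xy + 2y^2 at (-2, 3)
grad_norm = math.hypot(gx, gy)    # = 4*sqrt(61)

best_theta, best_D = 0.0, -float("inf")
samples = 100000
for k in range(samples):
    t = 2*math.pi*k / samples
    D = gx*math.cos(t) + gy*math.sin(t)   # D_u f = grad f . u
    if D > best_D:
        best_theta, best_D = t, D

print(best_D, grad_norm)                  # ≈ 31.24 for both
print(best_theta, math.atan2(gy, gx) % (2*math.pi))   # maximizing angle
                                          # ≈ the angle of the gradient
```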
Exercise \(\PageIndex{3}\):
Find the direction for which the directional derivative of \(g(x,y)=4x−xy+2y^2\) at \((−2,3)\) is a maximum. What is the maximum value?
Evaluate the gradient of \(g\) at point \((−2,3)\).
The gradient of \(g\) at \((−2,3)\) is \(\vecs ∇g(−2,3)=\hat{\mathbf i}+14\,\hat{\mathbf j}\). This gives the direction of the maximum value of the directional derivative at the point \((−2,3)\).
The maximum value of the directional derivative is \(\|\vecs ∇g(−2,3)\|=\sqrt{197}\).
Figure \(\PageIndex{5}\) shows a portion of the graph of the function \(f(x,y)=3+\sin x \sin y\). Given a point \((a,b)\) in the domain of \(f\), the maximum value of the directional derivative at
that point is given by \(\|\vecs ∇f(a,b)\|\). This would equal the rate of greatest ascent if the surface represented a topographical map. If we went in the opposite direction, it would be the rate
of greatest descent.
Figure \(\PageIndex{5}\): A typical surface in \(\mathbb R^3\). Given a point on the surface, the directional derivative can be calculated using the gradient.
When using a topographical map, the steepest slope is always in the direction where the contour lines are closest together (Figure \(\PageIndex{6}\)). This is analogous to the contour map of a
function, assuming the level curves are obtained for equally spaced values throughout the range of that function.
Figure \(\PageIndex{6}\): Contour map for the function \(f(x,y)=x^2−y^2\) using level values between \(−5\) and \(5\).
Gradients and Level Curves
Recall that if a curve is defined parametrically by the function pair \((x(t),y(t)),\) then the vector \(x′(t)\,\hat{\mathbf i}+y′(t)\,\hat{\mathbf j}\) is tangent to the curve for every value of \(t
\) in the domain. Now let’s assume \(z=f(x,y)\) is a differentiable function of \(x\) and \(y\), and \((x_0,y_0)\) is in its domain. Let’s suppose further that \(x_0=x(t_0)\) and \(y_0=y(t_0)\) for
some value of \(t\), and consider the level curve \(f(x,y)=k\). Define \(g(t)=f(x(t),y(t))\) and calculate \(g′(t)\) on the level curve. By the chain rule,

\[g′(t)=f_x(x(t),y(t))x′(t)+f_y(x(t),y(t))y′(t).\]

But \(g′(t)=0\) because \(g(t)=k\) for all \(t\). Therefore, on the one hand,

\[f_x(x(t),y(t))x′(t)+f_y(x(t),y(t))y′(t)=0;\]

on the other hand,

\[f_x(x(t),y(t))x′(t)+f_y(x(t),y(t))y′(t)=\vecs ∇f(x,y)⋅⟨x′(t),y′(t)⟩.\]

Therefore,

\[\vecs ∇f(x,y)⋅⟨x′(t),y′(t)⟩=0.\]
Thus, the dot product of these vectors is equal to zero, which implies they are orthogonal. However, the second vector is tangent to the level curve, which implies the gradient must be normal to the
level curve, which gives rise to the following theorem.
Gradient Is Normal to the Level Curve
Suppose the function \(z=f(x,y)\) has continuous first-order partial derivatives in an open disk centered at a point \((x_0,y_0)\). If \(\vecs ∇f(x_0,y_0)≠\vecs 0\), then \(\vecs ∇f(x_0,y_0)\) is
normal to the level curve of \(f\) at \((x_0,y_0)\).
We can use this theorem to find tangent and normal vectors to level curves of a function.
Example \(\PageIndex{5}\): Finding Tangents to Level Curves
For the function \(f(x,y)=2x^2−3xy+8y^2+2x−4y+4,\) find a tangent vector to the level curve at point \((−2,1)\). Graph the level curve corresponding to \(f(x,y)=18\) and draw in \(\vecs ∇f(−2,1)\)
and a tangent vector.
First, we must calculate \(\vecs ∇f(x,y):\)
\[f_x(x,y)=4x−3y+2 \;\text{and}\; f_y=−3x+16y−4 \;\text{so}\; \vecs ∇f(x,y)=(4x−3y+2)\,\hat{\mathbf i}+(−3x+16y−4)\,\hat{\mathbf j}.\]
Next, we evaluate \(\vecs ∇f(x,y)\) at \((−2,1):\)
\[\vecs ∇f(−2,1)=(4(−2)−3(1)+2)\,\hat{\mathbf i}+(−3(−2)+16(1)−4)\,\hat{\mathbf j}=−9\,\hat{\mathbf i}+18\,\hat{\mathbf j}.\]
This vector is orthogonal to the curve at point \((−2,1)\). We can obtain a tangent vector by reversing the components and multiplying either one by \(−1\). Thus, for example, \(−18\,\hat{\mathbf i}
−9\,\hat{\mathbf j}\) is a tangent vector (see the following graph).
Figure \(\PageIndex{7}\): Tangent and normal vectors to \(2x^2−3xy+8y^2+2x−4y+4=18\) at point \((−2,1)\).
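Both facts from this example can be verified numerically: the gradient and the tangent vector are orthogonal, and stepping a short distance along the tangent changes \(f\) only to second order (the step size below is an arbitrary small value).

```python
f = lambda x, y: 2*x**2 - 3*x*y + 8*y**2 + 2*x - 4*y + 4

grad = (-9, 18)        # gradient at (-2, 1), from the example
tangent = (-18, -9)    # components reversed, one sign flipped

dot = grad[0]*tangent[0] + grad[1]*tangent[1]
print(dot)             # 0: the vectors are orthogonal

# Moving a small distance h along the unit tangent leaves f(x, y) = 18
# unchanged to first order:
h = 1e-4
n = (tangent[0]**2 + tangent[1]**2) ** 0.5
print(f(-2 + h*tangent[0]/n, 1 + h*tangent[1]/n) - 18)   # O(h^2), tiny
```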
Exercise \(\PageIndex{4}\):
For the function \(f(x,y)=x^2−2xy+5y^2+3x−2y+3\), find the tangent to the level curve at point \((1,1)\). Draw the graph of the level curve corresponding to \(f(x,y)=8\) and draw \(\vecs ∇f(1,1)\)
and a tangent vector.
Calculate the gradient at point \((1,1)\).
\(\vecs ∇f(x,y)=(2x−2y+3)\,\hat{\mathbf i}+(−2x+10y−2)\,\hat{\mathbf j}\)
\(\vecs ∇f(1,1)=3\,\hat{\mathbf i}+6\,\hat{\mathbf j}\)
Tangent vector: \(6\,\hat{\mathbf i}−3\,\hat{\mathbf j}\) or \(−6\,\hat{\mathbf i}+3\,\hat{\mathbf j}\)
The fact that the gradient of a surface always points in the direction of steepest increase/decrease is very useful, as illustrated in the following example.
Example \(\PageIndex{6}\): The flow of water downhill
Consider the surface given by \(f(x,y)= 20-x^2-2y^2\). Water is poured on the surface at \((1,\frac{1}{4})\). What path does it take as it flows downhill?
Let \(\vecs r (t) = \langle x(t), y(t)\rangle\) be the vector--valued function describing the path of the water in the \(xy\)-plane; we seek \(x(t)\) and \(y(t)\). We know that water will always flow
downhill in the steepest direction; therefore, at any point on its path, it will be moving in the direction of \(-\vecs\nabla f\). (We ignore the physical effects of momentum on the water.) Thus \(\
vecs r'(t)\) will be parallel to \(\vecs\nabla f\), and there is some constant \(c\) such that \(c\vecs\nabla f(t) = \vecs r'(t) = \langle x'(t), y'(t)\rangle\).
We find \(\vecs\nabla f(x,y) = \langle -2x, -4y\rangle\) and write \(x'(t)\) as \(\frac{dx}{dt}\) and \(y'(t)\) as \(\frac{dy}{dt}\). Then
\[\begin{align*} c\vecs\nabla f(t) &= \langle x'(t), y'(t)\rangle \\ \langle -2cx, -4cy \rangle & = \left\langle \frac{dx}{dt}, \frac{dy}{dt}\right\rangle. \end{align*}\]
This implies
\[-2cx = \frac{dx}{dt} \quad \text{and} \quad -4cy =\frac{dy}{dt}, \text{ i.e.,}\]
\[c = -\frac{1}{2x}\frac{dx}{dt} \quad \text{and} \quad c =-\frac{1}{4y}\frac{dy}{dt}.\]
As \(c\) equals both expressions, we have
\[\frac{1}{4y}\frac{dy}{dt} = \frac{1}{2x}\frac{dx}{dt}.\]
To find an explicit relationship between \(x\) and \(y\), we can integrate both sides with respect to \(t\). Recall from our study of differentials that \( \frac{dx}{dt}dt = dx\). Thus:
\[\begin{align*} \int \frac{1}{4y}\frac{dy}{dt}\ dt &= \int \frac{1}{2x}\frac{dx}{dt}\ dt \\ \int\frac{1}{4y}\ dy &= \int \frac{1}{2x}\ dx \\ \frac14\ln|y| &= \frac 12\ln|x| +C_1\\ \ln|y| &= 2\ln|x| +C_1\\ \ln|y| &= \ln |x^2|+C_1. \end{align*}\]
Now raise both sides as a power of \(e\):
\[\begin{align*} |y| &= e^{\ln |x^2|+C_1}\\ |y| &= e^{\ln x^2}e^{C_1}\\ y &= \pm e^{C_1}x^2. \end{align*}\]
Then \[ y = Cx^2, \quad \text{where}\; C = \pm e^{C_1} \; \text{or} \; C = 0.\]
As the water started at the point \((1,\frac{1}{4})\), we can now solve for \(C\):
\[C(1)^2 = \frac14 \quad \Rightarrow \quad C = \frac14.\]
Figure \(\PageIndex{8}\): A graph of the surface along with the path in the \(xy\)-plane with the level curves.
Thus the water follows the curve \(y=\frac{x^2}{4}\) in the \(xy\)-plane. The surface and the path of the water is graphed in Figure \(\PageIndex{8}\). In part (b) of the figure, the level curves of
the surface are plotted in the \(xy\)-plane, along with the curve \(y=\frac{x^2}{4}\). Notice how the path intersects the level curves at right angles. As the path follows the gradient downhill, this
reinforces the fact that the gradient is orthogonal to level curves.
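The analytic path can be cross-checked by integrating the steepest-descent direction numerically. The Euler sketch below (the step size and step count are illustrative choices) follows \(−\vecs\nabla f\) from \((1,\frac14)\) and stays close to the curve \(y=\frac{x^2}{4}\) throughout.

```python
# f(x, y) = 20 - x^2 - 2y^2, so -grad f = (2x, 4y): the downhill direction.
def neg_grad(x, y):
    return 2*x, 4*y

x, y = 1.0, 0.25        # starting point (1, 1/4)
dt = 1e-4               # Euler step size (illustrative)
max_dev = 0.0
for _ in range(5000):
    gx, gy = neg_grad(x, y)
    x, y = x + dt*gx, y + dt*gy
    max_dev = max(max_dev, abs(y - x**2/4))

print(x, y, max_dev)    # the deviation from y = x^2/4 stays small throughout
```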
Three-Dimensional Gradients and Directional Derivatives
The definition of a gradient can be extended to functions of more than two variables.
Definition: Gradients in 3D
Let \(w=f(x, y, z)\) be a function of three variables such that \(f_x, \, f_y\), and \(f_z\) exist. The vector \(\vecs ∇ f(x,y,z)\) is called the gradient of \(f\) and is defined as
\[\vecs ∇f(x,y,z)=f_x(x,y,z)\,\hat{\mathbf i}+f_y(x,y,z)\,\hat{\mathbf j}+f_z(x,y,z)\,\hat{\mathbf k}.\label{grad3d}\]
\(\vecs ∇f(x,y,z)\) can also be written as grad \(f(x,y,z).\)
Calculating the gradient of a function in three variables is very similar to calculating the gradient of a function in two variables. First, we calculate the partial derivatives \(f_x, \, f_y,\) and
\(f_z\), and then we use Equation \ref{grad3d}.
Example \(\PageIndex{7}\): Finding Gradients in Three Dimensions
Find the gradient \(\vecs ∇f(x,y,z)\) of each of the following functions:
1. \(f(x,y,z)=5x^2−2xy+y^2−4yz+z^2+3xz\)
2. \(f(x,y,z)=e^{−2z}\sin 2x \cos 2y\)
For both parts a. and b., we first calculate the partial derivatives \(f_x,f_y,\) and \(f_z\), then use Equation \ref{grad3d}.
a. \(f_x(x,y,z)=10x−2y+3z\), \(f_y(x,y,z)=−2x+2y−4z\), and \( f_z(x,y,z)=3x−4y+2z\), so
\[\begin{align*} \vecs ∇f(x,y,z)&=f_x(x,y,z)\,\hat{\mathbf i}+f_y(x,y,z)\,\hat{\mathbf j}+f_z(x,y,z)\,\hat{\mathbf k} \\ &=(10x−2y+3z)\,\hat{\mathbf i}+(−2x+2y−4z)\,\hat{\mathbf j}+(3x−4y+2z)\,\hat{\mathbf k}. \end{align*}\]
b. \(f_x(x,y,z) =2e^{−2z}\cos 2x \cos 2y\), \( f_y(x,y,z)=−2e^{−2z} \sin 2x \sin 2y\), and \(f_z(x,y,z)=−2e^{−2z}\sin 2x \cos 2y\), so
\[\begin{align*} \vecs ∇f(x,y,z)&=f_x(x,y,z)\,\hat{\mathbf i}+f_y(x,y,z)\,\hat{\mathbf j}+f_z(x,y,z)\,\hat{\mathbf k} \\&=(2e^{−2z}\cos 2x \cos 2y)\,\hat{\mathbf i}+(−2e^{−2z}\sin 2x \sin 2y)\,\hat{\mathbf j}+(−2e^{−2z}\sin 2x \cos 2y)\,\hat{\mathbf k} \\ & =2e^{−2z}(\cos 2x \cos 2y \,\hat{\mathbf i}−\sin 2x \sin 2y\,\hat{\mathbf j}−\sin 2x \cos 2y\,\hat{\mathbf k}). \end{align*}\]
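As a quick sanity check (not part of the original text), the analytic gradients in Example \(\PageIndex{7}\) can be compared against a numerical gradient computed by central finite differences. The helper `num_grad` below is illustrative, not a standard library function.

```python
# Numerically verify the gradients of Example 7 with central differences.
import math

def num_grad(f, p, h=1e-6):
    """Approximate the gradient of f at point p by central differences."""
    grad = []
    for k in range(len(p)):
        q_plus = list(p); q_plus[k] += h
        q_minus = list(p); q_minus[k] -= h
        grad.append((f(*q_plus) - f(*q_minus)) / (2 * h))
    return grad

# Part a: f(x,y,z) = 5x^2 - 2xy + y^2 - 4yz + z^2 + 3xz
f_a = lambda x, y, z: 5*x**2 - 2*x*y + y**2 - 4*y*z + z**2 + 3*x*z
grad_a = lambda x, y, z: (10*x - 2*y + 3*z, -2*x + 2*y - 4*z, 3*x - 4*y + 2*z)

# Part b: f(x,y,z) = e^{-2z} sin 2x cos 2y
f_b = lambda x, y, z: math.exp(-2*z) * math.sin(2*x) * math.cos(2*y)

p = (1.0, -2.0, 3.0)
print(num_grad(f_a, p))   # close to grad_a(*p) = (23, -18, 17)
```

Agreement of the finite-difference values with the formulas derived above confirms the partial-derivative computations.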
Exercise \(\PageIndex{5}\):
Find the gradient \(\vecs ∇f(x,y,z)\) of \(f(x,y,z)=\dfrac{x^2−3y^2+z^2}{2}\) and find its gradient vector at the point \( (-1, 2, 3) \).
\(\vecs ∇f(x,y,z)=x\,\hat{\mathbf i}−3y\,\hat{\mathbf j}+z\,\hat{\mathbf k}\)
\(\vecs ∇f(-1,2,3)=-\hat{\mathbf i}−6\,\hat{\mathbf j}+3\,\hat{\mathbf k}\)
the gradient of a function of three variables is normal to the level surface
Suppose the function \(w=f(x,y,z)\) has continuous first-order partial derivatives in an open sphere centered at a point \((x_0,y_0,z_0)\). If \(\vecs ∇f(x_0,y_0,z_0)≠\vecs 0\), then \(\vecs ∇f(x_0,y_0,z_0)\) is normal to the level surface of \(f\) at \((x_0,y_0,z_0)\).
Figure \(\PageIndex{9}\) shows the gradient vectors at various points on a level surface of the function in Exercise \(\PageIndex{5}\). The points shown on the level surface are: \( (-1, 2, 3) \), \(
(3, -2, -1) \), \( (0, \sqrt{\frac{2}{3}}, 0) \), \( (2, \sqrt{\frac{10}{3}}, 2) \), and \( (2, \sqrt{\frac{10}{3}}, -2) \).
Figure \(\PageIndex{9}\): A level surface of the function \(f(x,y,z)=\dfrac{x^2−3y^2+z^2}{2}\) for \(C = -1\) along with various points on this level surface and the corresponding gradient vectors.
Note how these gradient vectors are normal to this level surface.
The directional derivative can also be generalized to functions of three variables. To determine a direction in three dimensions, a vector with three components is needed. This vector is a unit
vector, and the components of the unit vector are called directional cosines. Given a three-dimensional unit vector \(\vecs u\) in standard form (i.e., the initial point is at the origin), this
vector forms three different angles with the positive \(x\)-, \(y\)-, and \(z\)-axes. Let's call these angles \(α,β,\) and \(γ\). Then the directional cosines are given by \(\cos α,\cos β,\) and \(\cos γ\). These are the components of the unit vector \(\vecs u\); since \(\vecs u\) is a unit vector, it is true that \(\cos^2 α+\cos^2 β+\cos^2 γ=1.\)
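To illustrate (this snippet is not part of the original text): normalizing any nonzero direction vector yields its directional cosines, and their squares sum to 1.

```python
# Directional cosines of an arbitrary direction vector: the components of
# the normalized vector are (cos a, cos b, cos g), with squares summing to 1.
import math

v = (-1.0, 2.0, 2.0)                     # any nonzero direction vector
norm = math.sqrt(sum(c * c for c in v))  # here norm = 3
cos_a, cos_b, cos_g = (c / norm for c in v)
print(cos_a**2 + cos_b**2 + cos_g**2)    # 1.0 up to roundoff
```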
Definition: Directional Derivative of a Function of Three variables
Suppose \(w=f(x,y,z)\) is a function of three variables with a domain of \(D\). Let \((x_0,y_0,z_0)∈D\) and let \(\vecs u=\cos α\,\hat{\mathbf i}+\cos β\,\hat{\mathbf j}+\cos γ\,\hat{\mathbf k}\) be
a unit vector. Then, the directional derivative of \(f\) in the direction of \(u\) is given by
\[D_{\vecs u}f(x_0,y_0,z_0)=\lim_{t→0}\dfrac{f(x_0+t \cos α,y_0+t\cos β,z_0+t\cos γ)−f(x_0,y_0,z_0)}{t}\]
provided the limit exists.
We can calculate the directional derivative of a function of three variables by using the gradient, leading to a formula that is analogous to Equation \ref{DD2v}.
Directional Derivative of a Function of Three Variables
Let \(f(x,y,z)\) be a differentiable function of three variables and let \(\vecs u=\cos α\,\hat{\mathbf i}+\cos β\,\hat{\mathbf j}+\cos γ\,\hat{\mathbf k}\) be a unit vector. Then, the directional
derivative of \(f\) in the direction of \(\vecs u\) is given by
\[D_{\vecs u}f(x,y,z)=\vecs ∇f(x,y,z)⋅\vecs u=f_x(x,y,z)\cos α+f_y(x,y,z)\cos β+f_z(x,y,z)\cos γ. \label{DDv3}\]
The three angles \(α,β,\) and \(γ\) determine the unit vector \(\vecs u\). In practice, we can use an arbitrary (nonunit) vector, then divide by its magnitude to obtain a unit vector in the desired direction.
Example \(\PageIndex{8}\): Finding a Directional Derivative in Three Dimensions
Calculate \(D_{\vecs v}f(1,−2,3)\) in the direction of \(\vecs v=−\,\hat{\mathbf i}+2\,\hat{\mathbf j}+2\,\hat{\mathbf k}\) for the function
\[ f(x,y,z)=5x^2−2xy+y^2−4yz+z^2+3xz. \nonumber\]
First, we find the magnitude of \(\vecs v\):
\[‖\vecs v‖=\sqrt{(−1)^2+(2)^2+(2)^2}=3. \nonumber\]
Therefore, \(\dfrac{\vecs v}{‖\vecs v‖}=\dfrac{−\hat{\mathbf i}+2\,\hat{\mathbf j}+2\,\hat{\mathbf k}}{3}=−\dfrac{1}{3}\,\hat{\mathbf i}+\dfrac{2}{3}\,\hat{\mathbf j}+\dfrac{2}{3}\,\hat{\mathbf k}\)
is a unit vector in the direction of \(\vecs v\), so \(\cos α=−\dfrac{1}{3},\cos β=\dfrac{2}{3},\) and \(\cos γ=\dfrac{2}{3}\). Next, we calculate the partial derivatives of \(f\):
\[\begin{align*} f_x(x,y,z)& =10x−2y+3z \\ f_y(x,y,z)&=−2x+2y−4z \\ f_z(x,y,z)&=−4y+2z+3x, \end{align*} \]
then substitute them into Equation \ref{DDv3}:
\[\begin{align*} D_{\vecs v}f(x,y,z)&=f_x(x,y,z)\cos α+f_y(x,y,z)\cos β+f_z(x,y,z)\cos γ \\ &=(10x−2y+3z)(−\dfrac{1}{3})+(−2x+2y−4z)(\dfrac{2}{3})+(−4y+2z+3x)(\dfrac{2}{3}) \\ &=−\dfrac{10x}{3}+\dfrac{2y}{3}−\dfrac{3z}{3}−\dfrac{4x}{3}+\dfrac{4y}{3}−\dfrac{8z}{3}−\dfrac{8y}{3}+\dfrac{4z}{3}+\dfrac{6x}{3} \\ &=−\dfrac{8x}{3}−\dfrac{2y}{3}−\dfrac{7z}{3}. \end{align*}\]
Last, to find \(D_{\vecs v}f(1,−2,3),\) we substitute \(x=1,\, y=−2\), and \(z=3:\)
\[\begin{align*} D_{\vecs v}f(1,−2,3)&=−\dfrac{8(1)}{3}−\dfrac{2(−2)}{3}−\dfrac{7(3)}{3} \\ &=−\dfrac{8}{3}+\dfrac{4}{3}−\dfrac{21}{3} \\&=−\dfrac{25}{3}. \end{align*}\]
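The arithmetic in Example \(\PageIndex{8}\) can be checked numerically by forming the gradient at the point and dotting it with the unit vector (an added illustration, not part of the original text):

```python
# Check Example 8: D_v f(1,-2,3) = grad f(1,-2,3) . (v/|v|) = -25/3.
import math

def f_grad(x, y, z):
    # partial derivatives of f(x,y,z) = 5x^2 - 2xy + y^2 - 4yz + z^2 + 3xz
    return (10*x - 2*y + 3*z, -2*x + 2*y - 4*z, 3*x - 4*y + 2*z)

v = (-1.0, 2.0, 2.0)
norm = math.sqrt(sum(c * c for c in v))   # 3
u = tuple(c / norm for c in v)            # (-1/3, 2/3, 2/3)
g = f_grad(1, -2, 3)                      # (23, -18, 17)
D = sum(gi * ui for gi, ui in zip(g, u))
print(D)   # about -25/3 = -8.333...
```

Computing the directional derivative as a dot product this way avoids expanding the full symbolic expression when only a single point is needed.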
Exercise \(\PageIndex{6}\):
Calculate \(D_{\vecs v}f(x,y,z)\) and \(D_{\vecs v}f(0,−2,5)\) in the direction of \(\vecs v=−3\,\hat{\mathbf i}+12\,\hat{\mathbf j}−4\,\hat{\mathbf k}\) for the function
\[f(x,y,z)=3x^2+xy−2y^2+4yz−z^2+2xz.\nonumber \]
First, divide \(\vecs v\) by its magnitude, calculate the partial derivatives of \(f\), then use Equation \ref{DDv3}.
\(D_{\vecs v}f(x,y,z)=−\dfrac{3}{13}(6x+y+2z)+\dfrac{12}{13}(x−4y+4z)−\dfrac{4}{13}(2x+4y−2z)\)
\(D_{\vecs v}f(0,−2,5)=\dfrac{384}{13}\)
• A directional derivative represents a rate of change of a function in any given direction.
• The gradient can be used in a formula to calculate the directional derivative.
• The gradient indicates the direction of greatest change of a function of more than one variable.
Key Equations
• directional derivative (two dimensions)
\(D_{\vecs u}f(a,b)=\lim_{h→0}\dfrac{f(a+h\cos θ,b+h\sin θ)−f(a,b)}{h}\)
\(D_{\vecs u}f(x,y)=f_x(x,y)\cos θ+f_y(x,y)\sin θ\)
\(D_{\vecs u}f(x,y)=\vecs ∇f(x,y) \cdot \vecs u\), where \(\vecs u\) is a unit vector in the \(xy\)-plane
• gradient (two dimensions)
\(\vecs ∇f(x,y)=f_x(x,y)\,\hat{\mathbf i}+f_y(x,y)\,\hat{\mathbf j}\)
• gradient (three dimensions)
\(\vecs ∇f(x,y,z)=f_x(x,y,z)\,\hat{\mathbf i}+f_y(x,y,z)\,\hat{\mathbf j}+f_z(x,y,z)\,\hat{\mathbf k}\)
• directional derivative (three dimensions)
\(D_{\vecs u}f(x,y,z)=\vecs ∇f(x,y,z)⋅\vecs u=f_x(x,y,z)\cos α+f_y(x,y,z)\cos β+f_z(x,y,z)\cos γ\)
directional derivative
the derivative of a function in the direction of a given unit vector
gradient
the gradient of the function \(f(x,y)\) is defined to be \(\vecs ∇f(x,y)=(∂f/∂x)\,\hat{\mathbf i}+(∂f/∂y)\,\hat{\mathbf j},\) which can be generalized to a function of any number of independent variables
• Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
• Example \(\PageIndex{6}\) is adapted from Apex Calculus by Gregory Hartman (Virginia Military Institute)
• Edited and expanded by Paul Seeburger (Monroe Community College)
Gap Opening by Extremely Low-mass Planets in a Viscous Disk
By numerically integrating the compressible Navier-Stokes equations in two dimensions, we calculate the criterion for gap formation by a very low mass (q ~ 10^-4) protoplanet on a fixed orbit in a
thin viscous disk. In contrast with some previously proposed gap-opening criteria, we find that a planet can open a gap even if the Hill radius is smaller than the disk scale height. Moreover, in the
low-viscosity limit, we find no minimum mass necessary to open a gap for a planet held on a fixed orbit. In particular, a Neptune-mass planet will open a gap in a minimum mass solar nebula with
suitably low viscosity (α <~ 10^-4). We find that the mass threshold scales as the square root of viscosity in the low mass regime. This is because the gap width for critical planet masses in this
regime is a fixed multiple of the scale height, not of the Hill radius of the planet.
The Astrophysical Journal
Pub Date:
May 2013
□ hydrodynamics;
□ methods: numerical;
□ planet-disk interactions;
□ planets and satellites: formation;
□ protoplanetary disks;
□ Astrophysics - Earth and Planetary Astrophysics;
□ Physics - Computational Physics;
□ Physics - Fluid Dynamics
ApJ accepted
MaffsGuru.com - Making maths enjoyable: Equivalent fractions
CE 100
Special Topics in Civil Engineering
Units to be based upon work done, any term
Special problems or courses arranged to meet the needs of first-year graduate students or qualified undergraduate students. Graded pass/fail.
Ae/APh/CE/ME 101 abc
Fluid Mechanics
9 units (3-0-6) | first, second, third terms
Prerequisites: APh 17 or ME 11 abc, and ME 12 or equivalent, ACM 95/100 or equivalent (may be taken concurrently).
Fundamentals of fluid mechanics. Microscopic and macroscopic properties of liquids and gases; the continuum hypothesis; review of thermodynamics; general equations of motion; kinematics; stresses;
constitutive relations; vorticity, circulation; Bernoulli's equation; potential flow; thin-airfoil theory; surface gravity waves; buoyancy-driven flows; rotating flows; viscous creeping flow; viscous
boundary layers; introduction to stability and turbulence; quasi one-dimensional compressible flow; shock waves; unsteady compressible flow; and acoustics.
Instructors: Bae, Pullin, Colonius
Ae/AM/CE/ME 102 abc
Mechanics of Structures and Solids
9 units (3-0-6) | first, second, third terms
Prerequisites: ME 12 abc.
Introduction to continuum mechanics: kinematics, balance laws, constitutive laws with an emphasis on solids. Static and dynamic stress analysis. Two- and three-dimensional theory of stressed elastic
solids. Wave propagation. Analysis of rods, plates and shells with applications in a variety of fields. Variational theorems and approximate solutions. Elastic stability.
Instructors: Lapusta, Rosakis, Pellegrino
CE/Ae/AM 108
Computational Mechanics
9 units (3-5-1) | first, second terms
Prerequisites: Ae/AM/ME/CE 102 abc or Ae/GE/ME 160 ab, or instructor's permission.
Numerical methods and techniques for solving initial boundary value problems in continuum mechanics (from heat conduction to statics and dynamics of solids and structures). Finite difference methods,
direct methods, variational methods, finite elements in small strains and at finite deformation for applications in structural mechanics and solid mechanics. Solution of the partial differential
equations of heat transfer, solid and structural mechanics, and fluid mechanics. Transient and nonlinear problems. Computational aspects and development and use of finite element code. Not offered
ME/CE/Ge/ESE 146
Computational Methods for Flow in Porous Media
9 units (3-0-6) | second term
Prerequisites: ME 11 abc, ME 12 abc, ACM 95/100, ACM 106 ab (may be taken concurrently).
This course covers physical, mathematical and simulation aspects of single and two-phase flow and transport through porous media. Conservation equations for multiphase, multicomponent flow. Modeling
of fluid mechanical instabilities such as viscous fingering, gravity fingering and gravity-driven convection. Coupling fluid flow with chemical reactions. Coupling single phase flow with
poromechanics. Numerical methods for elliptic equations: finite volume methods, two-point flux approximations, finite difference, spectral method. Numerical methods for hyperbolic equations:
high-order explicit methods, implicit method. Applications in hydrology, geological CO2 sequestration and induced seismicity, among others will be demonstrated.
Instructor: Fu
AM/CE/ME 150 abc
Graduate Engineering Seminar
1 unit | each term
Students attend a graduate seminar each week of each term and submit a report about the attended seminars. At least four of the attended seminars each term should be from the Mechanical and Civil
Engineering seminar series. Students not registered for the M.S. and Ph.D. degrees must receive the instructor's permission. Graded pass/fail.
Instructor: Staff
AM/CE 151
Dynamics and Vibration
9 units (3-0-6) | third term
Equilibrium concepts, conservative and dissipative systems, Lagrange's equations, differential equations of motion for discrete single and multi degree-of-freedom systems, natural frequencies and
mode shapes of these systems (Eigenvalue problem associated with the governing equations), phase plane analysis of vibrating systems, forms of damping and energy dissipated in damped systems,
response to simple force pulses, harmonic and earthquake excitation, response spectrum concepts, vibration isolation, seismic instruments, dynamics of continuous systems, Hamilton's principle, axial
vibration of rods and membranes, transverse vibration of strings, beams (Bernoulli-Euler and Timoshenko beam theory), and plates, traveling and standing wave solutions to motion of continuous
systems, Rayleigh quotient and the Rayleigh-Ritz method to approximate natural frequencies and mode shapes of discrete and continuous systems, frequency domain solutions to dynamical systems,
stability criteria for dynamical systems, and introduction to nonlinear systems and random vibration theory.
Instructor: Staff
CE 160 ab
Structural and Earthquake Engineering
9 units (3-0-6) | second, third terms
Matrix structural analysis of the static and dynamic response of structural systems, Newmark time integration, Newton-Raphson iteration methodology for the response of nonlinear systems, stability of
iteration schemes, static and dynamic numerical analysis of planar beam structures (topics include the development of stiffness, mass, and damping matrices, material and geometric nonlinearity
effects, formulation of a nonlinear 2-D beam element, uniform and nonuniform earthquake loading, soil-structure interaction, 3-D beam element formulation, shear deformations, and panel zone
deformations in steel frames, and large deformation analysis), seismic design and analysis of steel moment frame and braced frame systems, steel member behavior (topics include bending, buckling,
torsion, warping, and lateral torsional buckling, and the effects of residual stresses), reinforced concrete member behavior (topics include bending, shear, torsion, and PMM interaction), and seismic
design requirements for reinforced concrete structures. Not offered 2023-24.
Ae/CE 165 ab
Mechanics of Composite Materials and Structures
9 units (2-2-5) | first, second terms
Prerequisites: Ae/AM/CE/ME 102 a.
Introduction and fabrication technology, elastic deformation of composites, stiffness bounds, on- and off-axis elastic constants for a lamina, elastic deformation of multidirectional laminates
(lamination theory, ABD matrix), effective hygrothermal properties, mechanisms of yield and failure for a laminate, strength of a single ply, failure models, splitting and delamination. Experimental
methods for characterization and testing of composite materials. Design criteria, application of design methods to select a suitable laminate using composite design software, hand layup of a simple
laminate and measurement of its stiffness and thermoelastic coefficients. Not offered 2023-24.
ME/CE/Ge 174
Mechanics of Rocks
9 units (3-0-6) | third term
Prerequisites: Ae/Ge/ME 160 a.
Basic principles of deformation, strength, and stressing of rocks. Elastic behavior, plasticity, viscoelasticity, viscoplasticity, creep, damage, friction, failure mechanisms, shear localization, and
interaction of deformation processes with fluids. Engineering and geological applications.
Instructor: Lapusta
CE 180
Experimental Methods in Earthquake Engineering
9 units (1-5-3) | first term
Prerequisites: AM/CE 151 abc or equivalent.
Laboratory work involving calibration and performance of basic transducers suitable for the measurement of strong earthquake ground motion, and of structural response to such motion. Study of
principal methods of dynamic tests of structures, including generation of forces and measurement of structural response. Not offered 2023-24.
CE 200
Advanced Work in Civil Engineering
6 or more units as arranged | any term
A faculty mentor will oversee a student proposed, independent research or study project to meet the needs of graduate students. Graded pass/fail. The consent of a faculty mentor and a written report
is required for each term.
CE 201
Advanced Topics in Civil Engineering
9 units (3-0-6) | third term
The faculty will prepare courses on advanced topics to meet the needs of graduate students.
Instructor: Andrade
Ae/AM/CE/ME 214
Computational Solid Mechanics
9 units (3-5-1) | second term
Prerequisites: ACM 100 ab or equivalent; CE/AM/Ae 108 ab or equivalent or instructor's permission; Ae/AM/CE/ME 102 abc or instructor's permission.
This course focuses on the analysis of elastic thin shell structures in the large deformation regime. Problems of interest include softening behavior, bifurcations, loss of stability and
localization. Introduction to the use of numerical methods in the solution of solid mechanics and multiscale mechanics problems. Variational principles. Finite element and isogeometric formulations
for thin shells. Time integration, initial boundary value problems. Error estimation. Accuracy, stability and convergence. Iterative solution methods. Adaptive strategies. Not offered 2023-24.
Ae/CE 221
Space Structures
9 units (3-0-6) | first term
This course examines the links between form, geometric shape, and structural performance. It deals with different ways of breaking up a continuum, and how this affects global structural properties;
structural concepts and preliminary design methods that are used in tension structures and deployable structures. Geometric foundations, polyhedra and tessellations, surfaces; space frames, examples
of space frames, stiffness and structural efficiency of frames with different repeating units; sandwich plates; cable and membrane structures, form-finding, wrinkle-free pneumatic domes, balloons,
tension-stabilized struts, tensegrity domes; deployable and adaptive structures, coiled rods and their applications, flexible shells, membranes, structural mechanisms, actuators, concepts for
adaptive trusses and manipulators.
Instructor: Pellegrino
AM/CE/ME 252
Linear and Nonlinear Waves in Structured Media
9 units (2-1-6) | third term
The course will cover the basic principles of wave propagation in solid media. It will discuss the fundamental principles used to describe linear and nonlinear wave propagation in continuum and
discrete media. Selected recent scientific advancements in the dynamics of periodic media will also be discussed. Students learn the basic principles governing the propagation of waves in discrete
and continuum solid media. These methods can be used to engineer materials with predefined properties and to design dynamical systems for a variety of engineering applications (e.g., vibration
mitigation, impact absorption and sound insulation). The course will include an experimental component, to test wave phenomena in structured media. Not offered 2023-24.
Ae/AM/CE/ME/Ge 265 ab
Static and Dynamic Failure of Brittle Solids and Interfaces, from the Micro to the Mega
9 units (3-0-6) | second term
Prerequisites: Ae/AM/CE/ME 102 abc (concurrently) or equivalent and/or instructor's permission.
Linear elastic fracture mechanics of homogeneous brittle solids (e.g. geo-materials, ceramics, metallic glasses); small scale yielding concepts; experimental methods in fracture, fracture of
bi-material interfaces with applications to composites as well as bonded and layered engineering and geological structures; thin-film and micro-electronic components and systems; dynamic fracture
mechanics of homogeneous engineering materials; dynamic shear dominated failure of coherent and incoherent interfaces at all length scales; dynamic rupture of frictional interfaces with application
to earthquake source mechanics; allowable rupture speed regimes and connections to earthquake seismology and the generation of tsunamis. Only one term will be offered in 2023-24.
Instructor: Rosakis
CE 300
Research in Civil Engineering
Hours and units by arrangement
Research in the field of civil engineering. By arrangements with members of the staff, properly qualified graduate students are directed in research.
Published Date: Aug. 2, 2023
Fifth (Fraction)
Auslan Sign for fifth (fraction)
This is the second part of a two-part sign, the first being the number of fifths (e.g., 1 for 1/5); then angle the hand downwards at the wrist while changing the handshape to sign five.
As seen in Dictionaries
Related Signs
ninth (fraction)
third (fraction)
thirty five
forty five
five hundred
fifty five
two hundred
one hundred
twenty five
Identifying Productivity Drivers by Modeling Work Units Using Partial Data
Todd L. Graves^* and Audris Mockus^+
^*National Institute of Statistical Sciences and ^+Bell Laboratories
We describe a new algorithm for estimating a model for an independent variable which is not directly observed, but which represents one set of marginal totals of a positive sparse two-way table whose
other margin and zero pattern are known.
The application which inspired the development of this algorithm arises in software engineering. It is desired to know what factors affect the effort required for a developer to make a change to the
software: for instance, to identify difficult areas of the code, measure changes in the code difficulty through time, and evaluate the effectiveness of development tools. Unfortunately, measurements
of effort for changes are not available in historical data. We model change effort using a developer's total monthly effort and information about which changes she worked on in each month. We produce
measures of uncertainty in the estimated model coefficients using the jackknife by omitting one developer at a time. We illustrate a few specific applications of our tool, and present a simulation
study which speaks well of the reliability of the results our algorithm produces. In short, this algorithm allows analysts to identify if and how much impact a promising software engineering tool or
practice has on coding effort.
A particularly important quantity related to software is the cost of making a change to it. Large software systems are maintained using change management systems, which record information about all
the changes programmers made to the software. These systems record a number of measurements on each change, such as its size and type, which one expects to be related to the effort required to make
the change. In this paper we discuss the methodology and algorithms we have developed to assess the influence of these factors on software change effort.
Since change management data do not include measurements of the effort that was required to make a change, it is at first difficult to imagine estimating models for change effort. The effort
estimates are often obtained for very large changes like new releases, but it is important to study effort at the level of individual changes, because at coarser aggregation levels, qualities affecting
the effort of individual changes are likely not to be estimable because of aggregation. Therefore, our methodology makes use of two types of known information: the total amount of effort that
developer expends in a month, and the set of months during which a developer worked on a given change.
This gives rise to an ill-posed problem in which developers' monthly efforts represent the known column marginal totals of several two-way tables, and we also know which entries in these sparse
tables are strictly positive. Also, we postulate the form of a model for the unobservable row marginal totals (amounts of effort for each change) as a function of several observed covariates. Our
algorithm then iterates between improving the coefficients in the model for effort using imputed values of change effort, and adjusting the imputed values so that they are consistent with the known
measurements of aggregated effort. We have called this algorithm MAIMED, for Model-Assisted Iterative Margin Estimation via Disaggregation. Uncertainty estimates for the model coefficients can then
be obtained using the jackknife (Efron, 1982), in which each replication omits one of the two-way tables (developers). We anticipate that this methodology will be useful in other sorts of problems
where individuals produce units of output, cost information is available only in units of time, and where the variables affecting productivity operate on the level of the output units.
This methodology has enormous potential for serving as a foundation for a variety of historical studies of software change effort. One example from this research is our finding that the code under
study ``decayed'' in that changes to the code grew harder to make at a rate of 20% per year. We were also able to measure the extent to which changes which fix faults in software are more difficult
than comparably sized additions of new functionality: fault fixes can be as much as 80% more difficult. (Difficulty of fixing bugs relative to new feature addition depends strongly on the software
project under study, and the results from our analyses are consistent with the opinions of developers on those projects.) Other findings include evidence that a development tool makes developers more
productive, and confirmation that a collection of subsystems believed to be more difficult than others in fact required measurably more effort to make an average change.
We present recommendations on using the estimation method in practice, discuss alternative approaches and present a simulation study to confirm accuracy, sensitivity to outliers, and convergence of
the method.
The next section defines the estimation algorithm. §3 contains more detailed discussion of the data available in the software engineering context, and describes three sample data analyses using this
algorithm. §4 describes our simulation study, §5 explains why more familiar approaches such as the EM algorithm were prohibitively difficult, §6 describes the software used to perform the estimation,
and §7 gives our conclusions. We include an appendix §8 which describes some theoretical insight into the convergence properties of the algorithm.
In this section we discuss the procedure to estimate models for change effort. Consider the data corresponding to a single developer d. These data may be thought of as a non-negative two-way table with cells Y[ijd], the (unobserved) effort developer d spent on change i during month j. Here i ranges over the M work units (here, changes to the code), and j ranges over the time periods (here, months) over which effort data are available. Known are the zero pattern, i.e., the set of cells (i, j) for which Y[ijd] > 0, since change data record when work on a change begins and ends, and the column sums sum_i Y[ijd], the developer's total effort in month j.
The statistical model for the row sums takes the form

g( E( sum_j Y[ijd] ) ) = X[id]' beta,     (1)

where X[id] is a vector of covariates predicting change effort, beta is a vector of coefficients, and g is a link function. (Since the quantities to be predicted are positive, generalized linear models which lead to multiplicative models are appropriate.)
The estimation procedure is inspired by iterative proportional fitting (IPF; see Deming and Stephan, 1940), a technique for estimating cell values in a multi-way table of probabilities when marginal
totals are known and a sample from the population is available. IPF alternates (in the two-way case) between rescaling the rows so that the cells sum to the correct row totals, and performing the
same operation on the columns, until a normally rapid convergence. In our problem only one of the margins is known, we postulate a model for the quantities given by the other margin, and knowledge of
the zero pattern takes the place of the sample. After generating initial guesses at the cell values in the table, our algorithm alternates between making the cell values consistent with the column
sums, and fitting a model to the resultant row sums, then rescaling cell values to be consistent with the model's fitted values.
To be more precise, we first generate initial values for the cells in the two-way tables. A natural approach is to divide each column total evenly across all the nonzero entries in that column:

Y[ijd](0) = (sum_i' Y[i'jd]) / #{i' : Y[i'jd] > 0} for each nonzero cell (i, j),

where we have denoted by Y[ijd](t) the estimate of Y[ijd] after the t-th iteration of the algorithm (it will be necessary to allow the argument t to take on fractional values because the cell values are updated twice within each iteration). Alternative initialization methods are considered in §4.3.
The following steps are then repeated until convergence.
1. Generate row sums from the current estimate of the cell values: S[id](t) = sum_j Y[ijd](t).
2. Fit the statistical model given by (1) to these row sums, using each of the tables (one for each developer d), and denote the fitted values by F[id](t).
3. Rescale the cell values so that the row sums are equal to the model's fitted values: Y[ijd](t + 1/2) = Y[ijd](t) × F[id](t) / S[id](t).
4. Finally rescale the cell values so that they agree with the known column sums: Y[ijd](t + 1) = Y[ijd](t + 1/2) × (sum_i' Y[i'jd]) / (sum_i' Y[i'jd](t + 1/2)).
Convergence is obtained when the improvement in the error measure in the model fitting is negligible. In practice, we have found that the error measure decreases at each iteration, and that
convergence is faster with a Poisson generalized linear model with a log link (i.e. a multiplicative model) than for an additive model.
At convergence, the algorithm reports the coefficients estimated in the model fitting during the last iteration.
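The iteration above can be sketched on toy data. In the sketch below (illustrative only, not the authors' implementation), a single multiplicative coefficient c in the model effort_i ≈ c × size_i stands in for the full regression model (1), the data for one developer are invented, and all variable names are our own.

```python
# Minimal MAIMED-style iteration for a single developer on synthetic data.
import math

# Zero pattern: Z[i][j] = 1 if change i was open in month j.
Z = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
E = [40.0, 60.0, 50.0]      # known monthly effort totals (column sums)
size = [10.0, 30.0, 20.0]   # covariate: size of each change

M, T = len(Z), len(Z[0])

# Initialization: split each column total evenly over its nonzero cells.
Y = [[0.0] * T for _ in range(M)]
for j in range(T):
    n_j = sum(Z[i][j] for i in range(M))
    for i in range(M):
        if Z[i][j]:
            Y[i][j] = E[j] / n_j

c = None
for _ in range(200):
    # Step 1: row sums (current change-effort estimates).
    S = [sum(Y[i]) for i in range(M)]
    # Step 2: fit the one-parameter multiplicative model effort = c * size
    # by least squares on the log scale.
    c = math.exp(sum(math.log(S[i] / size[i]) for i in range(M)) / M)
    fitted = [c * size[i] for i in range(M)]
    # Step 3: rescale rows toward the model's fitted values.
    for i in range(M):
        r = fitted[i] / S[i]
        Y[i] = [y * r for y in Y[i]]
    # Step 4: rescale columns back to the known monthly totals.
    for j in range(T):
        col = sum(Y[i][j] for i in range(M))
        for i in range(M):
            Y[i][j] *= E[j] / col

print(c, [sum(Y[i]) for i in range(M)])
```

After each pass, the table is exactly consistent with the known monthly totals (step 4), while the imputed change efforts are pulled toward the fitted model (step 3); in the full algorithm, step 2 fits the regression in (1) across all developers' tables.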
2.2 Jackknife
Since these regression coefficients are obtained using an iterative algorithm and a number of fitted regressions, it is necessary to measure their uncertainties using a nonparametric method which
takes the estimation procedure into account. The jackknife (see, for instance, Efron, 1982) supplies a method of estimating uncertainties in estimated parameters by rerunning the estimation algorithm
once for each data point in the sample, each time leaving one of the data points out. A collection of estimated parameters results, and the degree of difference between these parameters is related to
the sampling variability in the overall estimate. For example, suppose that the statistic of interest is denoted by thetahat and is computed from n observations. What is required is an estimate of its standard error. Let thetahat[(i)] denote the estimate computed with the ith observation deleted, and let thetahat[(.)] denote the mean of these n leave-one-out estimates. Set

se[jack] = sqrt( ((n-1)/n) * sum_i ( thetahat[(i)] - thetahat[(.)] )^2 ).
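As a concrete illustration (ours, not part of the original analysis), the leave-one-out recipe can be written directly:

```python
import numpy as np

def jackknife_se(data, stat):
    """Jackknife standard error of stat(data): recompute the statistic n
    times, each time with one observation deleted, then combine the spread
    of the leave-one-out estimates."""
    data = np.asarray(data)
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
```

For the sample mean, this reproduces the usual s/sqrt(n) standard error.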
An alternative to the jackknife is the bootstrap (for which see Efron and Tibshirani, 1993, for example). Owing to the time-consuming iterative algorithm, we preferred the jackknife since it required
only one additional run of the algorithm for each of the developers. For the purposes of resampling, we have only one datapoint for each developer, because omitting some of a developer's changes
leaves some months in which the total effort of the changes is less than the observed monthly effort. Omitting some months will break up changes whose lifetimes span those months, making it
impossible to define the total effort for such a change.
3. Change Data and their Analysis
Change history is a promising new tool for measuring and analyzing software evolution (Mockus et al, 1999). Change histories are automatically recorded by source code control and change management
systems (see §3.1). Every change to every part of the source code is recorded by the change control system, providing a repository of software modifications which is large, detailed, and uniform over
time. Obtaining data for software engineering research relevant to an industrial environment can be difficult and costly; change management data are collected automatically.
In our studies, the data represent changes to the source code of 5ESS (Martersteck and Spencer, 1985), which is a large, distributed, high availability software product. The product was developed
over two decades, has tens of millions of lines of source code, and has been changed several hundred thousand times. The source code is mostly written in the C programming language, augmented by the
Specification and Description Language (SDL). Tools used to process and analyze software change data are described in Mockus et al, 1999.
The change management (CM) data include information on each change made to the code, its size and content, submission time, and developer. Also available are financial support system (FSS) data,
which record amounts of effort spent each month by each developer. Because developers tend to work on multiple changes during most months, and because 14 percent of changes start and end in different
months, it is impossible to recover effort measurements for individual changes exactly from these FSS data. In fact, developers' monthly efforts rarely stray far from their natural value of one
month, so in most applications we have used unit monthly efforts.
3.1 Change data
The extended change management system (ECMS; see, for example, Midha, 1997), and the source code control system (SCCS; Rochkind, 1975) were used to manage the source code. The ECMS groups atomic
changes (``deltas'') recorded by SCCS into logical changes referred to as Maintenance Requests (MRs). The open time of the MR is recorded in ECMS. Also available are records of the MR completion
time; we use the time of the last delta in that MR. We used the textual abstract that the developer wrote when working on an MR to infer that change's purpose (Mockus and Votta, 1998). In addition to
the three primary reasons for changes (repairing faults, adding new functionality, and improving the structure of the code; see, for example, Swanson, 1976), we defined a class for changes that
implement code inspection suggestions since this class was easy to separate from others and had distinct size and interval properties (Mockus and Votta, 1998).
The SCCS database records each delta as a tuple including the actual source code that was changed, login of the developer, MR number, date, time, numbers of lines added, deleted, and unchanged.
There are a number of practical issues when modeling change effort. Some are common to all statistical problems, some are specific to software engineering, and some are specific to the MAIMED fitting procedure.
Here we omit the traditional validity checks in regression analysis such as collinearity among predictors and dwell mostly on issues related to software engineering and the MAIMED procedure. Two
particularly important issues are including essential predictors in the model and choosing subsets of the data.
In our studies of software data, we have found that several variables should always be included in the regression models and that their effects on effort are always present. One can include other
variables to test whether they have important effects.
First among variables that we recommend always including in the model is a developer effect. Other studies (for example, Curtis, 1981) have found substantial variation in developer productivity. Even
when we have not found significant differences and even though we do not believe that estimated development coefficients constitute a reliable method of rating developers, we have left developer
coefficients in the model. The interpretation of estimated developer effects is problematic. Not only could differences appear because of differing developer abilities, but the seemingly less
productive developer could be the expert on a particularly difficult area of the code, or that developer could have more extensive duties outside of writing code.
Naturally, the size of a change has a strong effect on the effort required to implement it. A large number of measures are available which measure something closely related to size, and the analyst
should generally include exactly one of these in the model. We have typically chosen the number of atomic changes (deltas) that were part of the MR as the measure of the size of an MR. This measure
seems to be slightly better for this purpose than other size measures like the total number of lines added and deleted in the MR. Another useful measure of size is the span of the change, i.e., the
number of files it touches. A large span tends to increase the effort for a change even if other measures of size are constant.
We found that the purpose of the change (as estimated using the techniques of Mockus and Votta, 1998) also has a strong effect on the effort required to make a change. In most of our studies, changes
which fix bugs are more difficult than comparably sized additions of new code. Difficulty of changes classified as ``cleanup'' varies across different parts of the code, while implementing
suggestions from code inspections is easy.
Another important problem when using the MAIMED methodology is choosing the set of developers whose changes to include in the analysis. The 5ESS™ system has been changed by a large number of developers, and we
recommend restricting attention to a subset of these, with the subset chosen in order to yield the sharpest possible estimates of the parameters of interest.
Variability in project size and developer capability and experience are the largest sources of variability in software development (see, for example, Curtis, 1981). The effects of tools and process
are often smaller by an order of magnitude. To obtain the sharpest results on the effect of a given tool in the presence of developer variability, it is important to have observations of the same
developer changing files both using the tool and performing the work without the aid of the tool.
To reduce the large, naturally occurring inter-developer variability, it is important to select developers who make similar numbers of changes.
To avoid confounding tool effects (see §3.3.2) with developers, it is imperative to choose developers who make similar numbers of changes with and without the tool. When estimating code-base effects
(see §3.3.3), it is important to select developers who make similar numbers of changes on each type of product.
Given the considerable size of the version history data available, both tasks are often easy: in the tool effect study we selected developers who made between 300 and 500 MRs in the six year period
between 1990 and 1995 and had similar numbers (more than 40) of MRs done with and without the tool.
We used relatively large samples of MRs in all three examples below. When comparing subsystems, we used 6985 MRs over 72 months to estimate 18 developer coefficients and 6 additional parameters. When
analyzing the tool benefits, we used 3438 MRs over 72 months for 9 developers and five additional parameters.
3.3 Results of Data Analyses
The specific model we fit in the modeling stage of the algorithm was a generalized linear model (McCullagh and Nelder, 1989) of effort. MR effort was modeled as having a Poisson distribution, with the logarithm of its mean a linear function of the developer, the size of the change, and the cause of the change.
The covariate ``Cause'' was different in each of three studies summarized below. The decay study (Graves and Mockus, 1998) investigated the effects of calendar time, the tool usage study considered
cost savings from using a tool, and the subsystem study investigated effects of different source code bases.
3.3.1 Code Decay
Beyond developer, size, and type of change, another interesting measurement on a change which we have found to be a significant contributor to the change's difficulty is the date of the change
(Graves and Mockus, 1998). We were interested to see if there was evidence that the code was getting harder to change, or decaying, as discussed in Belady and Lehman (1976), Parnas (1994), and Eick
et al (1999). There was statistically significant evidence of a decay effect: we estimated that in our data, a change begun a year later than an otherwise similar change would require 20% more effort
than the earlier change. The fitted model included a linear term in the date of the change in addition to the developer, size, and type effects. The estimated developer coefficients, together with their jackknife standard errors (see §2.2), indicated that there was no statistically significant evidence that the developers were different from each other. Although we sometimes find
quotes in the literature (Curtis, 1981) regarding the maximal productivity ratio for developers ranging from 10 times to 100 times, we were not surprised to see a much smaller ratio in this
situation. For one thing, we selected developers in a way which helps ensure that they are relatively uniform, by requiring that they completed large numbers of changes. One should also expect a
great deal of variation in these ratios across studies, because they are based on the extreme developers in the studies and hence are subject to enormous variability. Also, we believe that there are
a number of tasks beyond coding (requirements, inspections, architecture, project management) that take substantial time from developers involved in such tasks. Consequently, the coding productivity
does not necessarily reflect the total productivity or the overall value of the developer to an organization.
We defined the decay effect as the coefficient of the change's open date in the fitted model; its estimated value corresponds to the 20% annual increase in effort reported above.
3.3.2 Tool usage effects
This section tests our hypothesis that the use of a tool reduces the effort needed to make a subset of changes. The tool in this study was the Version Editor, which was designed to make it easier to
edit software which is simultaneously developed in several different versions. It does this by hiding all lines of code which belong to versions the developer is not currently working on (for details
see Atkins et al, 1999). There were three types of changes:
1. a ``control'' set of changes which modified only files which had no versioning information. Here usage of the tool should have had no effect. Roughly half of all the changes fell into this
category; they will be denoted as CONTROL;
2. changes where the tool should have an effect (i.e. those modifying files with versioning information) and the tool was used (denoted as USED);
3. changes where tool should have an effect and it was not used (denoted as NOT).
We fit the model (Equation 5), estimated standard errors using the jackknife method, and obtained the results summarized in Table 1.
The penalty for failing to use the tool is the ratio of the exponentiated coefficients for the NOT and USED categories of changes.
The type of a change was a significant predictor of the effort required to make it, as bug fixes were 26% more difficult than comparably sized additions of new functionality. Improving the structure of the code, the third primary reason for change, was of comparable difficulty to adding new code, as was a fourth class of changes, implementing code inspection suggestions.
The coefficient of the size of the change (the exponent of the number of deltas in Equation 3) was estimated to be 0.19. That is, the size of a change did not have a particularly strong effect on the effort required to make it.
Table 1: Estimated coefficients and their precision. By definition, the coefficient of the reference category is zero.
│ estimate │ 0.17 │ 0.31 │ -0.34 │ -0.32 │ 0.29 │ -0.11 │
│ std err │ 0.14 │ 0.11 │ 0.53 │ 0.53 │ 0.19 │ 0.21 │
3.3.3 Subsystem effects
In the last example we considered the influence of code-base on effort. The 5ESS switch (the product under discussion) is broken into a number of relatively independent subsystems (each several
million lines of code) that belong to and are changed by separate organizations. This product is updated by adding new features or capabilities that can then be sold to customers. An average feature
contains tens to hundreds of MRs.
A development manager identified six subsystems that tend to have higher cost per feature than another six similar sized subsystems. To identify whether the effort was higher even at the MR level, we
chose 18 developers who made similar numbers of changes to high- and low-cost subsystems. We fit the model (6); the estimates are summarized in Table 2.
In this case we used an additional predictor, the number of files touched by the MR, to measure its size. The factor ``LOW'' indicates that the change was made to a low-cost-per-feature subsystem.
Table 2: Estimated coefficients and precision for the code-base effects. By definition, the coefficient of the reference category is zero.
│ estimate │ 0.08 │ 0.16 │ -0.107 │ 0.233 │ 0.204 │ 0.249 │
│ std err │ 0.06 │ 0.07 │ 0.016 │ 0.03 │ 0.08 │ 0.07 │
4. Simulation
To provide confirmation that the effort modeling methodology gives adequate results, we conducted a simulation experiment. We simulated data from a specified model and measured to what extent the
model coefficients are recoverable using the MAIMED algorithm. In particular, we study the influence of the initialization method, and the effect of incorrectly specified monthly efforts (§4.3). We
also studied what happens to estimated coefficients when we fit models without one of the essential variables, or with an extra variable which does not itself affect effort but which is correlated
with one of the predictor variables (§4.4).
The simulation is in part a bootstrap analysis of the data. It begins with a model fit to a set of data. We used the model from the tool usage example (Equation 5) and the estimated coefficients from
Table 1.
Each replication in the experiment consists of the following. We resample with replacement a set of developers. The set of developers in the simulated data consists of a random number of copies of
each of the developers in the true data set. We then construct a collection of MRs for a synthetic developer by resampling MRs of the corresponding actual developer. More precisely, we do the following:
1. Resample with replacement a set of developers. The following steps are iterated over each developer in the resulting set.
2. Choose an MR at random from the developer's collection. This MR has associated with it a size, type, and other covariates (ignore its open or close time).
3. Derive the MR's open date by assuming it was opened at the time the developer's previous MR was closed (use time zero if this is the developer's first MR).
4. Adjust the open date for this MR by generating a time shift using parametric bootstrap (for details see §4.1).
5. Compute the expected value of the effort for this synthetic MR using the covariates for the resampled MR and the estimated model.
6. Generate the effort for this MR according to the exponential distribution with the computed mean. The close date of this MR is then the open date, plus this effort value.
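The resampling loop for a single synthetic developer can be sketched as follows. This is our own sketch with hypothetical field and coefficient names; step 4, the overlap shift, is omitted here for brevity.

```python
import math
import random

def synthesize_mrs(real_mrs, coef, rng=None):
    """Generate one synthetic developer's MR stream by resampling real MRs,
    chaining open times, and drawing exponential efforts from the model mean.
    real_mrs: list of dicts of covariates, e.g. {"log_size": 1.4, "bug": 1}.
    coef: hypothetical model coefficients on the log scale."""
    rng = rng or random.Random(0)
    t, synthetic = 0.0, []
    for _ in range(len(real_mrs)):
        mr = dict(rng.choice(real_mrs))             # step 2: resample a real MR
        mr["open"] = t                              # step 3: open at prev. close
        mean = math.exp(coef["intercept"]
                        + coef["log_size"] * mr["log_size"]
                        + coef["bug"] * mr["bug"])  # step 5: expected effort
        mr["close"] = t = t + rng.expovariate(1.0 / mean)  # step 6: exp. effort
        synthetic.append(mr)
    return synthetic
```

Chaining each open time to the previous close time gives a stream with no overlap; the parametric overlap shifts of §4.1 are then applied on top of this.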
This simulation procedure preserves the correlations between all the variables in the model, except for time. In the simulations, the covariates and effort for MRs are independent of the covariates
and effort of the previous MRs. Since the actual MRs do overlap we also generated overlaps based on the actual MR overlap data (see §4.1).
We repeated this procedure to generate 100 synthetic data sets, and estimated parameters for the correct model with these data sets. We checked the sensitivity to initialization method and to
incorrectly specified monthly efforts. Finally, we use our algorithm to fit models on these data when we misspecify the model by omitting an important variable or by adding an unnecessary variable.
(The reader may find it strange that we simulate exponential data, which have constant coefficient of variation, and model these data as if they were Poisson, which have constant variance to mean
ratio. This is intentional; we believe from other studies that the true data have constant coefficient of variation, but the larger changes are not only more variable, but also more important to
model correctly. We believe coefficients from a Gamma model are too dependent on the smaller observations.)
4.1 MR overlap distribution
Overlap between MRs is an important component of the software engineering problem. One third of actual MRs have the property that they are closed after a subsequent MR is opened. To produce a
realistic simulation, we had to reproduce MR overlap. Number the MRs for a single developer in order of their open times O[i], so that O[i] <= O[i+1] for all i, and let C[i] be the closing time of MR i. The overlap is defined as

OVL[i] = C[i] - O[i+1].

When OVL[i] > 0, MR i is still open when the next MR starts; when O[i+1] > C[i], there is a gap, or skip, between the two MRs. The empirical distributions of skips and overlaps are shown in Figure 1; we fitted parametric distributions to them for use in the simulation.
Figure 1: Empirical distribution of MR overlap. The dotted line shows actual distribution and the solid line shows parametric distribution used to simulate the data.
The overlap distribution indicates that when the overlap occurs, MRs are likely to be started at almost the same time. The skip distribution is fairly heavy tailed which is probably caused by
vacation or other developer's activities not directly related to changing the code.
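In code, the overlap sequence for one developer's MRs, ordered by open time, is simply (an illustrative helper of our own):

```python
def overlaps(opens, closes):
    """OVL[i] = C[i] - O[i+1]: positive values are overlaps (the next MR
    starts before MR i closes); negative values are skips (idle gaps
    between consecutive MRs)."""
    return [c - o_next for c, o_next in zip(closes[:-1], opens[1:])]
```

For opens [0, 2, 7] and closes [3, 6, 9] this yields [1, -1]: the first pair of MRs overlaps by one time unit, and a one-unit skip separates the second pair.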
In all of the following, the hundred data sets were simulated from the model in Equation (5) with coefficients as in Table 1. Overlap of MRs was generated as described in §4.1. Monthly efforts were set to one (as in Equation 4) to reflect the frequent situation in practice when the actual monthly effort is not available or unreliable. The algorithm in all cases is run for thirty iterations for each data set.
Table 3 shows the means of the estimated coefficients and their standard deviations.
Table 3: Estimated coefficients for the correct model with
incorrect monthly efforts. All coefficients are rounded
to two significant digits.
│ true │ 0.17 │ 0.31 │ -0.34 │ -0.32 │ 0.29 │ -0.11 │
│ mean │ 0.087 │ 0.17 │ -0.15 │ -0.22 │ 0.16 │ -0.043 │
│ stdev │ 0.049 │ 0.075 │ 0.18 │ 0.28 │ 0.11 │ 0.097 │
The estimates are biased because the monthly efforts are set to one to reflect practical situations when the actual monthly efforts are not known. Not surprisingly, setting all the monthly efforts to
be equal causes all coefficients to be estimated to be closer to zero. Using unit monthly efforts is thus a conservative procedure which will generally err on the side of pronouncing too many factors
to be unrelated to effort. In particular, the HAND coefficient is underestimated, suggesting that if the exact monthly efforts were known the effect of the tool usage would have been even larger.
First we check the sensitivity with respect to alternative initialization methods and with respect to variations in monthly effort.
4.3 Sensitivity to initialization method
The first initialization method in Equation (2) divides monthly effort evenly among MRs open that month. We also used an alternative initialization method which divides monthly effort proportionally
to the time each MR is open during that month:

Y[ijd](0) = E[jd] * w[ijd] / sum_i w[ijd],

where w[ijd] is the length of time MR i (done by developer d) was open during month j.
The results are identical (up to two significant digits) to the results in Table 3, i.e., the initialization method does not affect the values of the estimated coefficients.
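A sketch of the weighted initialization (names are ours): each column total is split in proportion to the time each MR was open during that month.

```python
import numpy as np

def init_weighted(col_sums, open_frac):
    """open_frac[i, j]: fraction of month j during which MR i was open
    (0 if not open).  Returns initial cell values whose column sums equal
    the known monthly efforts col_sums.  Assumes every month has at least
    one open MR."""
    w = np.asarray(open_frac, dtype=float)
    return w / w.sum(axis=0) * np.asarray(col_sums, dtype=float)
```

The even split of Equation (2) is the special case in which every open MR gets the same weight within a month.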
In the simulated data (more so than in practice) the monthly efforts vary across months. In most cases monthly effort data is not available, so we substitute unit effort. In other situations such
information is available (see, e.g., Graves and Mockus, 1998). In the simulation we have the advantage of knowing the monthly MR efforts exactly, and we can use this information in the initialization. The total monthly efforts are then no longer unity but rather the true simulated values. Table 4 shows the results.
Table 4: Estimated coefficients for correct model with
correct monthly efforts and alternative initialization
method. t-values (for testing the hypothesis that the mean
estimated coefficients from simulation are equal to their
true values) are obtained by subtracting the coefficients
used in simulation. The final row lists the standard errors
obtained by the jackknife for comparison to these standard deviations.
│ true │ 0.17 │ 0.31 │ -0.34 │ -0.32 │ 0.29 │ -0.11 │
│ mean │ 0.18 │ 0.34 │ -0.61 │ -0.59 │ 0.31 │ -0.10 │
│ stdev │ 0.079 │ 0.18 │ 0.64 │ 0.87 │ 0.19 │ 0.23 │
│ t-value │ 0.77 │ 1.64 │ -4.21 │ -3.05 │ 1.05 │ 0.32 │
│ jk stderr │ 0.14 │ 0.11 │ 0.53 │ 0.53 │ 0.19 │ 0.21 │
As expected, the results are much closer to the actual values. The only exceptions are coefficients for the rarely occurring cleanup and inspection MRs. Those two coefficients appear consistently
below the values used in the simulation.
Typically such behavior would not cause a problem in practice (here, the cleanup and inspection coefficients were not significantly different from the new coefficient), because the uncertainty
estimates will be large for coefficients estimated on the basis of a small number of such MRs. There were 15778 inspection and 30538 cleanup MRs (4.5% and 8.7%, respectively) out of a total of 348204 MRs in the 100 generated samples. However, we recommend against using predictors that occur rarely in the data and which might be correlated with individual developers. Confounding helps
explain the bias problem in this example, because the developer with the least productive coefficient has the smallest percentage, and the most productive developer has the highest percentage of
inspection and cleanup MRs.
4.4 Fitting the misspecified model
To test the behavior of the estimator under model misspecification we fitted the model omitting the size predictor and a set of models with correlated predictors.
Table 5 shows the estimates obtained by omitting the size predictor. The estimates are not significantly different from the ones obtained using the correct model in Table 3.
Table 5: Estimated coefficients for incorrect model with
incorrect monthly efforts.
│ mean │ -- │ 0.15 │ -0.15 │ -0.20 │ 0.16 │ -0.067 │ │
│ stdev │ -- │ 0.075 │ 0.18 │ 0.27 │ 0.11 │ 0.095 │ │
We also ran a sequence of tests by adding a predictor correlated with the size covariate. Since the logarithm of the size was used in the model, the collinear covariate was correlated with the logarithm of size. A medium correlation of 0.5 and high correlations of 0.8 and 0.95 were used. The results are in Table 6.
Table 6: Estimated coefficients for the incorrect model (predictor collinear with the logarithm of size).
│ │ │ extra │ │ │ │ │ │
│ E(0.5) │ 0.091 │ -0.0055 │ 0.167 │ -0.15 │ -0.22 │ 0.15 │ -0.043 │
│ Stdev(0.5) │ 0.056 │ 0.041 │ 0.075 │ 0.18 │ 0.28 │ 0.11 │ 0.097 │
│ E(0.8) │ 0.085 │ 0.0021 │ 0.17 │ -0.15 │ -0.22 │ 0.16 │ -0.043 │
│ Stdev(0.8) │ 0.082 │ 0.058 │ 0.076 │ 0.18 │ 0.28 │ 0.11 │ 0.097 │
│ E(0.95) │ 0.091 │ -0.0035 │ 0.17 │ -0.15 │ -0.22 │ 0.15 │ -0.044 │
│ Stdev(0.95) │ 0.15 │ 0.11 │ 0.075 │ 0.18 │ 0.28 │ 0.11 │ 0.097 │
The results are as expected -- the collinear coefficient is not significantly different from zero and the model coefficients are not significantly different from those in Table 3. The collinear
predictor increases uncertainty of the size coefficient, but does not affect the other estimates.
5. Related work
It is possible to think of this problem as a missing data problem, tempting one to attack it with the EM algorithm or data augmentation (see Tanner, 1993, for either). These approaches have serious
difficulties with this problem. Taking EM as an example, it is necessary to specify distributions of the monthly effort expended on changes in such a way that is practical to compute conditional
expectations given row sums and the parameters that determine mean change effort. Our two rescaling steps have the feel of an expectation step, but it is unclear whether one could construct a set of
distributions under which they would be. Data augmentation has similar problems.
The problem of estimating network traffic from edge counts, considered in Vardi (1996), has some similarities with the present problem. There, numbers of traversals of each edge in a network are
available, but the traffic for complete paths (sequences of connected edges) is of interest. Both problems deal with nonnegative quantities, and in each case linear combinations of quantities of
interest are observed, since edge traffic is the sum over all the paths that traverse that edge.
The effort problem adds another layer of difficulty to the network problem, in which it is desired to recover a vector X given a vector Y of sums of the elements of X, satisfying the equation Y=AX,
for some known matrix of zeroes and ones A. The difficulty is in the fact that there are more X's than Y's. In the effort estimation problem we seek the row sums of an unobservable table of efforts, given its column sums. Auxiliary modeling information relates to the desired collection of row sums, while in the network problem, it is the unsummed values which can potentially be modeled.
6. Software
Although the algorithm is fairly simple to implement, we describe a set of functions in S-PLUS that would help practitioners use the methodology more readily. The code contains five S-PLUS functions:
maimed: the main wrapper function which prepares the data objects for initialization and fitting. The main parameters are:
data: the data frame containing a list of the MRs with required fields: ``name''-- developer name, ``open'' and ``close'' -- date specified in fractional months (the first MR must have value
0 in the ``open'' field), as well as additional covariates as needed;
formula: the vector of predictor names to be used when performing the fitting. The predictors are taken from the dataframe data. The first predictor should always be ``name'' since
differences among developers represent the largest variation in the effort required to complete an MR;
effmat: optional reported effort matrix where rows correspond to distinct developer names and columns to months. If the matrix is not provided it is assumed to contain unit efforts in each month;
calculateEffort: boolean variable indicating whether to calculate the effort matrix based on the intervals of the MRs as in Equation (9);
weighted: the boolean variable indicating whether to initialize the algorithm weighting MRs by their open time during a month as in Equation (8);
jk: the boolean variable indicating whether or not to perform jackknife estimation of errors;
maimed.validate: computes standard deviations and t-statistics from the jackknife estimates obtained with maimed;
maimed.initial: default initialization function which distributes effort equally among MRs open in a particular month;
maimed.initial2: alternative initialization function which distributes effort proportionally to the length of time MRs are open;
maimed.initial3: initialization function which calculates monthly effort by adding lengths of time open for each MR in a month;
maimed.fit: runs the fitting algorithm.
Following is a detailed example analysis with some practical considerations. First, we create a data frame data using information retrieved from change management database using, for example, the
SoftChange system (Mockus et al, 1999). Each MR should occupy a row in this data frame with the following fields: name containing the developer's name or login; bug, containing 1 if the change is a bug fix (otherwise 0); clean containing 1 if the change is perfective; new containing 1 if the change is adaptive; ndelta containing the size of the change in number of deltas; open and close representing the timestamps of the first and the last delta (or the open and close times for an MR) in unix time format (seconds since 1970); and additional factors ToolA and ToolB indicating whether tool A and/or tool B were used to complete this MR.
First we select only MRs done by developers that completed at least 20 MRs with and without the tool. This might be done in the following way:
usage.table <- table(data$name, data$usedX);
# keep developer names with more than 20 MRs in each usage category
name.subset <- names(table(data$name))[usage.table[,1] > 20 & usage.table[,2] > 20];
data.new <- data[match(data$name, name.subset, nomatch=0) > 0,];
Now the dataframe data.new contains only developers that worked with and without the tool and hence we will be able to make sharper comparisons. In the next step we transform the open and close
fields into fractional months starting at zero:
sec.per.month <- 3600*24*365.25/12;  # seconds in an average month
data.new$open <- data.new$open/sec.per.month;
data.new$close <- data.new$close/sec.per.month - min(data.new$open);
data.new$open <- data.new$open - min(data.new$open);
We may also transform the size of the change by a logarithm:
data.new$logndelta <- log(data.new$ndelta+1);
Finally we may run the maimed algorithm to estimate the coefficients for several models. One of the models might be

log E(effort) = c[name] + beta*logndelta + b1*bug + b2*clean + b3*ToolA + b4*ToolB.

To fit the model we run:
model <- maimed (data.new, formula=c("name","logndelta","bug","clean","ToolA","ToolB"))
Notice that since in this example the categorization had three types of changes (bug, clean, new) we included only the first two in the model. The remaining type (new) is used as the basis for comparison with the first two. If all the indicators were included we would get an overdetermined model.
Once we find a subset of interesting models we estimate the significance of the coefficients via jackknife:
model <- maimed (data.new, formula=c("name","logndelta","bug","clean","ToolA","ToolB"),jk=T)
The result contains N+1 estimates of the coefficients, obtained by running the maimed algorithm on the full dataset and then once with each of the N developers removed. The significance values may then be calculated as follows:
maimed.validate(model)
This command may produce the following output:
estimate stdev t-statistic p-value
lognumdelta 0.69179651 0.12414385 5.5725394 2.736186e-05
bug 0.63789480 0.18284625 3.4886950 2.621708e-03
clean -0.32523246 0.57564448 -0.5649884 5.790559e-01
ToolA -1.40700987 0.19134298 -7.3533393 7.966553e-07
ToolB 0.03172900 0.17362807 0.1827412 8.570436e-01
Developer1 -2.81395695 0.20994002 -13.4036235 8.338685e-11
Developer2 -2.08035974 0.18109041 -11.4879622 1.014959e-09
Developer3 -2.85884880 0.11088977 -25.7809968 1.110223e-15
Developer4 -2.85802540 0.17252158 -16.5661905 2.419842e-12
Developer5 -2.84639737 0.21325331 -13.3474946 8.934120e-11
Developer6 -2.58167088 0.19006092 -13.5833860 6.696665e-11
Developer7 -2.64266728 0.21459470 -12.3146901 3.322902e-10
We see that bug fixes are significantly harder than new code, and that tool A significantly reduces the change effort. Cleanup (perfective) changes are not significantly easier than new code changes, and tool B does not significantly change the effort. Hence for practical purposes the model may be rewritten using the estimates above as

effort = e^c[developer] * (ndelta+1)^0.69 * e^(0.64*I(bug)) * e^(-1.41*I(ToolA)).

Here I(characteristic) is equal to 1 if the change has that characteristic, and otherwise equals 0. This implies that, for example, changes with 18 deltas are 50 percent harder than changes with 10 deltas (18^0.69/10^0.69 = 1.5), and bug fixes are 88 percent harder than new code changes (e^0.64 is approximately 1.9).
The constant multiplier is included with the developer coefficients. The ratio of the highest and the lowest developer coefficients is around two (e^-2.08/e^-2.86 = 2.18).
7. Conclusions
This paper introduced a method for quantifying the extent to which properties of changes to software affect the effort necessary to implement them. The problem is difficult because measurements of
the response variable, effort, are available only at an aggregated level. To solve this difficulty we propose an algorithm which imputes disaggregated values of effort, which can be improved by using
a statistical model. The method may be applied to the problem of estimating a model for the row sums given only the column sums and zero pattern of an arbitrary sparse non-negative table.
We provided three examples of data analyses in which we used the new algorithm to address important questions in software engineering: monitoring degradation of the code as manifested by increased
effort, quantifying the benefits of a development tool, and identifying subsystems of code which are especially difficult.
We explored the properties of the algorithm through simulation. The simulations demonstrated that given enough data, the algorithm can recover the parameters of a true model, and that the algorithm
still performs well when an important variable is left out of the analysis or when an extra variable correlated with one of the true predictors is included in the model. Also, the algorithm is
conservative in the frequent practical situation in which aggregated effort values must be assumed to be one unit of effort per month. Further theoretical justification for the algorithm appears in the appendix.
We also described how to use software which implements the algorithm (it is available for download) for the benefit of practitioners.
8. Appendix: Convergence conditions
In this appendix we provide some theoretical results regarding the convergence properties of the algorithm. We present two results characterizing the fixed points of the algorithm.
Each iteration of the method represents a nonlinear transformation of the entries in the tables, of the row sums, and of the fitted coefficients. Without loss of generality assume that all rows (changes) and all columns (months) have at least one positive entry. In the notation that follows we will omit the dependence on the developer d, and we will often suppress the dependence on the iteration t as well. For example, recall that Y[ij](t) denotes the entry in the i-th row and j-th column at iteration t. Let
Definition 8.1 Rows i and k are called zero connected if there exists a column j such that Y[ij]>0 and Y[kj]>0. Rows i and k are called n connected if there exists a row l such that i,l and l,k are at most (n-1) connected (i.e., if i,l are n[1] connected and l,k are n[2] connected for some n[1], n[2] <= n-1). Rows i and k are called connected if there exists a finite n such that i,k are n connected. A table is called connected if all pairs of its rows are connected.
Note that if i,k are not connected, then the entries in row i (except for j such that Y[ij]=0 or Y[ij]=C[j]) can be changed independently of the entries in row k. The connection relation factorizes the table into connected subtables, which can be analyzed independently since they affect each other only through the model fitting procedure.
It is of interest to know when the algorithm stops.
Theorem 8.2
Let Y[ij] be entries in the table, with C[j] and R[i] defined as above. The Y[ij] represent a fixed point of the algorithm if and only if one of these three conditions is satisfied for each entry Y[ij]:
1. Y[ij]=0;
2. Y[ij]=C[j];
3. k connected to i.
Proof: It is easy to see that the algorithm does not modify entries where Y[ij]=0 or Y[ij]=C[j]. Consider subtables defined by connected rows. Without loss of generality consider only one such subtable. We first prove the ``if'' part. Since the ratios are equal, Equation (10) simplifies to
i.e., the entries represent a fixed point of the algorithm. Now we prove the ``only if'' part. Since the entries of W are strictly positive, the eigenvectors corresponding to eigenvalue 1 form a one-dimensional space (see the first Frobenius theorem in, e.g., Karlin and Taylor, 1975, p. 543). Consequently, W has rank one. Since the diagonal entries of W are all 1 (using Equation (10) and the fixed point condition Y[ij](t+1)=Y[ij]), all entries of W must be equal to 1.
Denote the transformation on rows performed by the algorithm as R[i], and the MLE estimate of the model as P.
Theorem 8.3
Let Y[ij] be entries in a table which correspond to a fixed point of the algorithm, with row values R[i]. Let Q have a bounded third partial derivative tensor. Also, let
1. the model be Gaussian with a single mean parameter, so that the deviance is the sum of squared differences between fitted and original values, and so that the fitting procedure is linear;
2. the table be connected.
Then the deviance in a neighborhood of the fixed point is larger than at the fixed point. That is, let z[ij] be deviations to add to the table of Y's (and denote the resulting deviations in the row values by z[i]; z denotes the vector of the z[i]'s). There exists a neighborhood of the fixed point such that for all such z within it, the deviance is strictly larger than at the fixed point.
Proof: Note that the constraints of the problem require
Expand Q around the fixed point; Equation (12) may be rewritten as:
Because the fitting step involves taking averages, Theorem 8.2 implies that
The transition was due to the fact that in the simple theorem setting P is an average, so
To proceed further, we need the first order term of the transformation Q. Denote Y[ij] in row i and column j (
In the first transition we assumed that the table is connected, so (PR)[k]/(PR)[i]=R[k]/R[i]. In the second transition we used the fact that the projection is an average. Equation (12) is therefore
where we have written R[i]=R[k], and W=(w[k]^i)[i,k].
Let W, with
First, note that W is a stochastic matrix. Because W is stochastic, the magnitude of an eigenvalue may not exceed 1. It is also true that W is aperiodic and irreducible. It is obviously aperiodic, since all elements in W are non-negative and the diagonal elements are positive. Suppose W were reducible, i.e., that there exists a set of indexes U such that the rows i in U and l outside U in the effort table are not connected. This implies that
and this bound is independent of
Because Q has a bounded third derivative, it is possible to choose the neighborhood so that the third-order remainder of Equation (12) is smaller in absolute value than the quadratic term, so that Equation (12) is strictly positive for all
Although the conditions of the theorem are restrictive, we believe they may be relaxed substantially. In particular, the result should be true for an arbitrary Gaussian or Poisson model and also for
disconnected tables.
We thank George Schmidt, Janis Sharpless, Harvey Siy, Mark Ardis, David Weiss, Alan Karr, Iris Dowden, and interview subjects for their help and valuable suggestions. This research was supported in
part by NSF grants SBR-9529926 and DMS-9208758 to the National Institute of Statistical Sciences.
D. Atkins, T. Ball, T. Graves, and A. Mockus, ``Using version control data to evaluate the effectiveness of software tools,'' in 1999 International Conference on Software Engineering (submitted),
(Los Angeles, CA), May 1999.
L. A. Belady and M. M. Lehman, ``A model of large program development,'' IBM Systems Journal, pp. 225-252, 1976.
B. Boehm, Software Engineering Economics.
Prentice-Hall, 1981.
B. Curtis, ``Substantiating programmer variability,'' Proceedings of the IEEE, vol. 69, p. 846, July 1981.
W. E. Deming and F. F. Stephan, ``On a least-squares adjustment of a sampled frequency table when the expected marginal totals are known,'' Annals of Mathematical Statistics, vol. 11, pp. 427-444, 1940.
B. Efron, The Jackknife, the Bootstrap and Other Resampling Plans.
Philadelphia, PA: Society for Industrial and Applied Mathematics, 1982.
B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap.
New York: Chapman and Hall, 1993.
S. G. Eick, T. L. Graves, A. F. Karr, J. S. Marron, and A. Mockus, ``Does code decay? assessing the evidence from change management data,'' IEEE Trans. Soft. Engrg., 1998.
To appear.
T. L. Graves and A. Mockus, ``Inferring change effort from configuration management data,'' in Metrics 98: Fifth International Symposium on Software Metrics, (Bethesda, Maryland), pp. 267-273,
November 1998.
S. Karlin and H. M. Taylor, A First Course in Stochastic Processes, 2nd Edition.
New York: Academic Press, 1975.
K. Martersteck and A. Spencer, ``Introduction to the 5ESS(TM) switching system,'' AT&T Technical Journal, vol. 64, pp. 1305-1314, July-August 1985.
P. McCullagh and J. A. Nelder, Generalized Linear Models, 2nd ed.
New York: Chapman and Hall, 1989.
A. K. Midha, ``Software configuration management for the 21st century,'' Bell Labs Technical Journal, vol. 2, no. 1, Winter 1997.
A. Mockus, S. G. Eick, T. Graves, and A. F. Karr, ``On measurement and analysis of software changes,'' tech. rep., Bell Labs, Lucent Technologies, 1999.
A. Mockus and L. G. Votta, ``Identifying reasons for software changes using historic databases,''
Submitted to ACM Transactions on Software Engineering and Methodology.
D. L. Parnas, ``Software aging,'' in Proceedings 16th International Conference On Software Engineering, (Los Alamitos, California), pp. 279-287, IEEE Computer Society Press, 16 May 1994.
M. Rochkind, ``The source code control system,'' IEEE Trans. on Software Engineering, vol. 1, no. 4, pp. 364-370, 1975.
G. Seber, Linear Regression Analysis.
New York: Wiley, 1977.
E. B. Swanson, ``The dimensions of maintenance,'' in 2nd Conf. on Software Engineering, (San Francisco, California), pp. 492-497, 1976.
M. A. Tanner, Tools for Statistical Inference.
New York, Berlin, Heidelberg: Springer-Verlag, 1993.
Y. Vardi, ``Network tomography: Estimating source destination traffic intensities from link data,'' JASA, vol. 91, no. 433, pp. 365-377, 1996.
Strictly speaking, we do not assume a Poisson distribution, because effort values need not be integers. The only critical part of the Poisson assumption is that the variance of a random effort is proportional to its mean.
\angle DFM$ respectively. Prove that,
Hint: In this question we will use the following properties:
If a ray bisects an angle, then it divides that angle into two equal parts.
An exterior angle of a triangle is equal to the sum of the two opposite interior angles. The sum of all the angles of a triangle is always equal to ${180^ \circ }$. Two triangles are said to be similar if the corresponding angles of the two triangles are congruent and the lengths of corresponding sides are proportional. Two triangles are said to be congruent if all the sides of one triangle are equal to the corresponding sides of the other triangle and the corresponding angles are equal.
Complete step-by-step answer:
Given, $DE\parallel GF$
Ray $EG$and ray $FG$ are bisectors of $\angle DEF$and$\angle DFM$.
Means $EG$and $FG$divides $\angle DEF$and$\angle DFM$respectively in two equal parts. So,
$ \Rightarrow \angle DEG = \angle GEF = \dfrac{1}{2}\angle DEF$ ……..$1$
Also, $\angle DFG = \angle GFM = \dfrac{1}{2}\angle DFM$……. $2$
Since, $DE\parallel GF$
So, $\angle EDF = \angle DFG$ ………$3$ ($\because $we know that alternate interior angles are equal to each other)
Also we can write, $\angle EDF = \dfrac{1}{2}\angle DFM$$.....4$
Now, consider $\vartriangle DEF$
We know that the exterior angle of a triangle is equal to the sum of opposite two interior angles.
$\angle DFM = \angle DEF + \angle EDF$
From equation $4$, $\angle EDF = \dfrac{1}{2}\angle DFM$. We can also write this equation as $2\angle EDF = \angle DFM$. Therefore,
$ \Rightarrow 2\angle EDF = \angle DEF + \angle EDF$
We get, $\angle EDF = \angle DEF$
From equation $1$, $\angle DEG = \dfrac{1}{2}\angle DEF$. We can also write this equation as $2\angle DEG = \angle DEF$
$ \Rightarrow \angle EDF = 2\angle DEG$
So we get $\angle DEG = \dfrac{1}{2}\angle EDF$.
Hence proved.
Given, $DE\parallel GF$
$\angle DEG = \angle EGF$…… $5$ ($\because $We know that alternate interior angles are equal to each other)
Now, from equation $1$, $\angle DEG = \angle GEF$
Therefore, $\angle GEF = \angle EGF$
Since, in $\vartriangle EGF$, the sides opposite to equal angles are also equal,
$ \Rightarrow EF = FG$
Hence, proved
Note: While solving this question one should remember all the properties of angles and triangles, e.g., if a triangle has the same three angles as another then the lengths of its sides are proportional (and vice versa), and a bisector always cuts an angle into two equal parts. Also take care while doing the calculations.
Understanding Decimals: Working through Decimal Basics
Decimals are an essential part of our everyday lives, whether we’re balancing our budget, measuring ingredients for a recipe, or calculating distances on a map. In this blog, we’ll explore the basics
of decimals, including what they are, how they work, and why they’re important.
What are Decimals?
Decimals are a way of expressing parts of a whole. They are numbers that have a value between two whole numbers. For example, between 0 and 1, 0.5 would be a decimal between these two numbers. It is more than 0 but less than 1. You can also represent decimals as fractions: 0.5 would be ½. This can be taken one step further, and the fraction can be represented as a percentage: 0.5 or ½ can be written as 50%.
Decimals are a mix of whole numbers and fractions, all bundled up with a dot called a decimal point. The numbers to the left of the decimal point are whole numbers – like 1, 2, 3, and so on,
including units, tens, hundreds, and even thousands. But wait, there's more! On the right side of that dot, you've got the fractions – tenths, hundredths, and thousandths. It's like having a
mini-fraction party right there in your number!
Now, why does all this matter? Understanding where those numbers belong is key to solving all sorts of math puzzles. Whether you’re calculating how much pizza each friend gets or figuring out how
fast your car is going, mastering the place value of decimals is your secret weapon.
Types of Decimals
There are different types of decimals:
Terminating decimals
Non-terminating decimals
Recurring decimals
Non-recurring decimals
To understand decimals better, let’s break down some key concepts:
Place Value: Each digit in a decimal number has a place value determined by its position relative to the decimal point. Moving from left to right, the place values are decreasing powers of 10: units, tenths, hundredths, thousandths, and so on. Here's a visual example to make it easier to understand!
Hundreds – Tens – Ones – POINT – Tenths – Hundredths – Thousandths
Let’s take a look at an example: 32.14
3: 3 Tens or 30
2: 2 Ones or 2
1: 1 Tenth or 0.1
4: 4 Hundredths or 0.04
Operations with Decimals: Just like with whole numbers, you can do addition, subtraction, multiplication, and division with decimals. Remember to align the decimal points when adding or subtracting,
and to multiply and divide as if the decimal point isn’t there. Take a look at our other blogs to learn how to multiply and divide decimals!
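If you like to check your work with a computer, the "line up the decimal points" and "multiply as if the decimal point isn't there" ideas show up in this tiny Python sketch (the numbers are just examples; Python's built-in decimal module tracks decimal digits exactly):

```python
from decimal import Decimal

# Addition: line up the decimal points, then add (3.25 + 1.40 = 4.65).
total = Decimal("3.25") + Decimal("1.4")

# Multiplication: 325 x 14 = 4550, then place the decimal point
# three places from the right (2 digits + 1 digit), giving 4.550.
product = Decimal("3.25") * Decimal("1.4")
```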
Decimal fractions: These can be compared to a bridge between whole numbers and fractions, offering a precise way to represent parts of a whole using the familiar decimal notation. Picture this: you've
got your whole numbers on the left side of the decimal point, representing complete units. But on the right side, that’s where the magic happens – each digit after the decimal point represents a
fraction of that whole, whether it’s tenths, hundredths, or even tinier fractions like thousandths. Decimal fractions are everywhere, from dividing up a pizza into equal slices to measuring the exact
amount of ingredients for your favorite recipe.
Why are Decimals Important?
Decimals are so important in different real-life situations:
Spending Money: From calculating taxes to managing budgets, decimals help us deal with money accurately.
Measurements: Whether it’s measuring length, weight, or volume, decimals provide precise measurements.
Science and Engineering: Fields like physics, chemistry, and engineering rely heavily on decimal notation for calculations and measurements.
Getting a good grip on decimals is like unlocking a superpower for tackling everyday challenges. Once you’ve got the hang of decimal notation, place value, and how to work with them, you’ll find
yourself breezing through all sorts of numerical tasks, both in your day-to-day life and in more complex situations.
So, next time you encounter a decimal, remember: it’s just a way of expressing a part of a whole, and with a little practice, you’ll master the art of decimals in no time!
Enhance your math skills with professional math tutors
Boost your math knowledge by making use of the learning resources of Step Up Academy Tutoring Center. Our math tutors will give you individual lessons in a single specialty and you will understand in
a more personalized manner the math concepts and your grades will therefore increase. Besides that, we can assist in the subject of examination preparation, languages, and science as well. It’s up to
you whether it be one-on-one or online tutoring, we have the perfect schedule options available to meet any of your needs. We take great care to give you constructive one-to-one help with homework or
for example, in regard to the upcoming exams. Let us together make your academic ambitions come true!
Top 30 Data Science Intern Interview Questions You Need to Know
Data science is an ever-expanding field, and landing an internship can be a pivotal move toward establishing a thriving career. Whether you’re a beginner or shifting into data science from a
different industry, thorough preparation for your interview is crucial. In this blog, we’ll explore the top 30 interview questions commonly asked for data science internships. We’ll not only
present these questions but also offer in-depth answers, practical examples, and professional insights to help you confidently excel in your interview and secure the position.
If you’re preparing for a data science role, consider enrolling in H2K Infosys Data Science using Python Online Training to strengthen your knowledge and skills. This course offers Data science
training with placement to give you a competitive edge in the job market.
As a Data Science intern, you will be expected to demonstrate foundational knowledge of key concepts and tools used in the industry. These interviews typically cover areas like statistics, machine
learning, Python programming, and data manipulation. Employers often assess your ability to analyze data, draw meaningful insights, and apply machine learning techniques to solve problems.
The following guide presents the top 30 interview questions that will help you land that coveted internship, with examples and explanations for better clarity.
What is Data Science, and why is it important?
Data Science involves the extraction of meaningful insights from vast datasets using statistical methods, algorithms, and machine learning models. It helps organizations make informed decisions,
predict trends, and optimize operations. For example, companies like Amazon use data science to recommend products to customers based on past behavior.
Explain the difference between supervised and unsupervised learning.
In supervised learning, the model is trained on labeled data, meaning that the output is known (e.g., classification tasks). In contrast, unsupervised learning deals with unlabeled data, where the
model tries to identify patterns or groups (e.g., clustering).
What are outliers? How can they be detected and treated?
Outliers are extreme values that differ significantly from the rest of the data. They can be detected using statistical tests, visualizations (box plots, scatter plots), or Z-scores. Treatment
involves removing them, transforming data, or capping their values.
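For example, a z-score check in NumPy (a sketch with made-up numbers; the |z| > 2 cutoff and the 5th/95th-percentile capping are common but arbitrary choices):

```python
import numpy as np

data = np.array([10, 12, 11, 13, 12, 95, 11, 10])   # 95 looks suspicious
z = (data - data.mean()) / data.std()

outliers = data[np.abs(z) > 2]                       # detect: |z| > 2
cleaned = data[np.abs(z) <= 2]                       # treat: remove
capped = np.clip(data, np.percentile(data, 5),       # treat: cap (winsorize)
                 np.percentile(data, 95))
```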
Describe the bias-variance tradeoff in machine learning.
The bias-variance tradeoff refers to the balance between a model’s complexity and its accuracy. High bias (underfitting) leads to oversimplified models, while high variance (overfitting) causes
models to be too complex. A good model achieves an optimal balance between bias and variance.
What is cross-validation, and why is it important?
Cross-validation is a technique for evaluating model performance by splitting the data into training and test sets multiple times. It prevents overfitting and ensures the model generalizes well to
new data. K-Fold Cross-Validation is a popular method.
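The splitting logic behind K-fold can be sketched in a few lines (in practice a library routine such as scikit-learn's KFold would be used; this hand-rolled version just shows the mechanics):

```python
def kfold_splits(n, k):
    """Return k (train_idx, test_idx) pairs covering all n samples:
    each sample appears in exactly one test fold."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, test))
        start += size
    return splits

# 10 samples, 5 folds -> 5 train/test splits of 8/2 samples each.
splits = kfold_splits(10, 5)
```

A model is trained on each train set and scored on the matching test fold; the k scores are then averaged.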
Key Concepts and Practical Examples
Explain what overfitting is and how to avoid it.
Overfitting occurs when a model learns the noise in the data instead of the signal, resulting in poor performance on new data. It can be avoided by using techniques like regularization (L1/L2
penalties), pruning decision trees, and cross-validation.
What is the difference between data normalization and standardization?
Normalization scales data to a range (e.g., 0 to 1), while standardization scales data based on its mean and standard deviation, ensuring a mean of 0 and a standard deviation of 1. Standardization is
preferred when the algorithm assumes normally distributed data.
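The two transformations in NumPy (toy data; in a real pipeline the scaler would be fit on training data only):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

normalized = (x - x.min()) / (x.max() - x.min())   # min-max -> range [0, 1]
standardized = (x - x.mean()) / x.std()            # z-scores: mean 0, std 1
```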
How do you select important features for a dataset?
Feature selection can be done using techniques like Recursive Feature Elimination (RFE), Lasso regression, and Tree-based algorithms (e.g., Random Forest feature importance). These methods help in
identifying and keeping only the most relevant features.
What is a confusion matrix?
A confusion matrix is used to evaluate the performance of a classification algorithm. It displays true positives, false positives, true negatives, and false negatives. From this matrix, you can
calculate metrics like accuracy, precision, recall, and F1-score.
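The counts and the derived metrics can be computed by hand (a sketch with made-up labels; libraries such as scikit-learn provide the same thing via confusion_matrix):

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)

precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
```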
Explain the concept of p-value in statistical tests.
The p-value measures the probability of obtaining results as extreme as those observed, assuming the null hypothesis is true. A p-value less than 0.05 typically indicates statistical significance,
meaning the observed effect is unlikely to have occurred by chance.
Python-Specific Questions for Data Science
What libraries are used in Python for Data Science?
Common libraries include:
• NumPy: For numerical operations.
• Pandas: For data manipulation and analysis.
• Matplotlib/Seaborn: For data visualization.
• Scikit-learn: For machine learning algorithms.
How would you handle missing data in a dataset?
Missing data can be handled by:
• Removing rows/columns with missing values.
• Imputation (filling missing values with mean, median, or mode).
• Using algorithms that can handle missing values, such as decision trees.
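The first two options look like this in pandas (toy DataFrame; mean imputation is shown for the numeric column only):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25.0, np.nan, 31.0, 40.0],
                   "city": ["NY", "LA", None, "SF"]})

dropped = df.dropna()        # remove rows with any missing value

filled = df.copy()           # mean imputation for the "age" column
filled["age"] = filled["age"].fillna(filled["age"].mean())
```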
Explain what a Jupyter Notebook is.
Jupyter Notebooks are open-source web applications that allow users to create and share documents containing live code, equations, visualizations, and narrative text. They are widely used in data
science for exploratory analysis.
What is the difference between a list and a tuple in Python?
A list is mutable, meaning its elements can be changed after creation, while a tuple is immutable and cannot be altered once defined. Lists are more flexible, but tuples are more efficient for fixed
collections of items.
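A short demonstration of the difference:

```python
nums = [1, 2, 3]        # list: mutable
nums[0] = 99            # allowed

point = (4, 5)          # tuple: immutable
mutated = True
try:
    point[0] = 0        # raises TypeError
except TypeError:
    mutated = False
```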
Industry-Relevant Case Studies and Challenges
Can you walk us through a data science project you’ve worked on?
In this question, describe a project in detail, including problem definition, data collection, preprocessing, modeling, and evaluation. Highlight your use of tools like Python, Pandas, Scikit-learn,
and any machine learning models you deployed.
How would you explain machine learning to a non-technical person?
Machine learning is about teaching computers to learn from data without being explicitly programmed. For example, a recommendation system like Netflix learns what shows you like based on your past
viewing history and suggests similar shows.
Crucial questions:
What is a p-value in hypothesis testing?
The p-value measures the probability that the observed data would occur by random chance. A p-value less than 0.05 typically indicates statistical significance, suggesting that the null hypothesis
can be rejected.
How does a decision tree work?
A decision tree splits the data based on feature values to create a tree-like model of decisions. At each node, the dataset is split into two or more homogeneous sets based on the most significant
What is K-Nearest Neighbors (KNN), and how does it work?
KNN is a simple algorithm that classifies data points based on the majority class of their nearest neighbors. It calculates the distance between points using metrics like Euclidean distance.
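A from-scratch sketch of the idea (toy 2-D points; real code would typically use scikit-learn's KNeighborsClassifier):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by the majority label among the k nearest
    training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
```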
What is linear regression, and how does it work?
Linear regression models the relationship between a dependent variable and one or more independent variables using a straight line. It assumes linearity between variables.
What is a random forest, and how does it differ from a decision tree?
Random forest is an ensemble method that combines multiple decision trees to improve accuracy and reduce overfitting. It aggregates the predictions of the individual trees (majority vote for classification, averaging for regression) to make a final prediction.
What is logistic regression, and when is it used?
Logistic regression is used for binary classification problems, where the outcome is a probability between 0 and 1. It applies a logistic function to linear regression outputs to constrain
predictions within this range.
How do you evaluate the performance of a machine learning model?
Common metrics include accuracy, precision, recall, F1-score, ROC-AUC, and mean squared error for regression. Choose metrics based on the model and task.
What is gradient descent, and how does it work?
Gradient descent is an optimization algorithm used to minimize the cost function. It iteratively updates the model parameters in the direction of the negative gradient of the cost function.
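A bare-bones example fitting y = 2x + 1 by gradient descent on the mean squared error (the learning rate and iteration count are arbitrary choices):

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0                         # true line: slope 2, intercept 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    pred = w * X + b
    grad_w = 2 * np.mean((pred - y) * X)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w                      # step against the gradient
    b -= lr * grad_b
```

After enough iterations, w and b converge to the true slope and intercept.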
Explain principal component analysis (PCA).
PCA is a dimensionality reduction technique that transforms a large set of variables into a smaller one by finding new variables (principal components) that maximize variance while minimizing
information loss.
What is a support vector machine (SVM)?
SVM is a supervised learning algorithm used for classification and regression. It finds a hyperplane that best separates data points into classes while maximizing the margin between them.
How do you deal with imbalanced datasets?
Techniques include:
• Resampling the dataset (undersampling the majority class or oversampling the minority class).
• Using algorithms designed for imbalanced data, like weighted decision trees.
• Using performance metrics such as precision, recall, and F1-score instead of accuracy.
Explain clustering and list some popular clustering algorithms.
Clustering is an unsupervised learning method used to group data points based on similarity. Popular algorithms include K-means, DBSCAN, and hierarchical clustering.
What is a time series, and how is it different from other data?
A time series is a sequence of data points collected at consistent time intervals. It differs from other data because it incorporates a temporal component, meaning the order of data points matters.
What is A/B testing?
A/B testing is a statistical method used to compare two versions of a variable (e.g., a webpage) to determine which one performs better. It is widely used in marketing and product development.
Mastering these data science intern interview questions will help you tackle any challenge thrown at you during interviews. As you prepare, make sure to practice coding, review key concepts, and work
on real-world projects. Hands on learning is essential, which is why H2K Infosys Data Science using Python Online Training is designed to give you both theoretical knowledge and practical experience.
By enrolling in this program, you’ll benefit from:
• Comprehensive data science training with placement opportunities.
• Access to Free data analyst training and placement resources.
• The chance to earn a data science certification online free, boosting your credentials.
Key Takeaways
• Data Science interviews often test your knowledge of statistics, machine learning, and Python programming.
• Be prepared to answer questions about real-world projects and industry-relevant problems.
• Hands-on experience is critical to success, so ensure you’re working on real datasets and applying machine learning models regularly.
Call to Action:
Ready to advance your career in Data Science? Enroll in H2K Infosys’s Data Science using Python Online Training for a comprehensive learning experience that includes data science training with
placement assistance. Don’t miss the opportunity to learn from industry experts and secure a Data science certification online free. Sign up now!
Efficiency of Frequency-Domain Filtering - Electrical Engineering Textbooks
To determine for what signal and filter durations a time- or frequency-domain implementation would be the most efficient, we need only count the computations required by each. For the time-domain,
difference-equation approach, we need
. The frequency-domain approach requires three Fourier transforms, each requiring
computations for a length-
FFT, and the multiplication of two spectra (
computations). The output-signal-duration-determined length must be at least
. Thus, we must compare
Exact analytic evaluation of this comparison is quite difficult (we have a transcendental equation to solve). Insight into this comparison is best obtained by dividing by
With this manipulation, we are evaluating the number of computations per sample. For any given value of the filter's order
, the right side, the number of frequency-domain computations, will exceed the left if the signal's duration is long enough. However, for filter durations greater than about 10, as long as the input
is at least 10 samples, the frequency-domain approach is faster so long as the FFT's power-of-two constraint is advantageous.
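The counting above can be made concrete with a small sketch. The exact constants depend on how real operations are tallied; as an assumption here (not the chapter's exact formulas), take the difference-equation cost to be (Nx + q)(2q + 1) — q + 1 multiplies and q adds for each of the Nx + q outputs — and the frequency-domain cost to be three length-K transforms at roughly K log2 K operations each, plus 6K for multiplying the spectra, with K = Nx + q:

```python
import math

def ops_time(nx, q):
    """Difference-equation filtering: about 2q+1 operations per output,
    for nx + q outputs (assumed cost model)."""
    return (nx + q) * (2 * q + 1)

def ops_freq(nx, q):
    """Frequency-domain filtering: three length-K transforms at roughly
    K log2 K operations each, plus 6K to multiply the spectra."""
    k = nx + q
    return 3 * k * math.log2(k) + 6 * k
```

Under this accounting, short filters favor the time domain while longer filters favor the frequency domain, which is the qualitative tradeoff described above.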
The frequency-domain approach is not yet viable; what will we do when the input signal is infinitely long? The difference equation scenario fits perfectly with the envisioned digital filtering
structure, but so far we have required the input to have limited duration (so that we could calculate its Fourier transform). The solution to this problem is quite simple: Section the input into
frames, filter each, and add the results together. To section a signal means expressing it as a linear combination of length-
non-overlapping "chunks." Because the filter is linear, filtering a sum of terms is equivalent to summing the results of filtering each term.
As illustrated in Figure 1, note that each filtered section has a duration longer than the input. Consequently, we must literally add the filtered sections together, not just butt them together.
Figure 1. The noisy input signal is sectioned into length-48 frames, each of which is filtered using frequency-domain techniques. Each filtered section is added to other outputs that overlap to
create the signal equivalent to having filtered the entire input. The sinusoidal component of the signal is shown as the red dashed line.
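The sectioning idea can be illustrated directly in code (a sketch only: direct convolution stands in for the FFT-based filtering of each section, and the function names are illustrative):

```javascript
// Direct convolution of a signal with a length-(q+1) filter h.
function convolve(x, h) {
  const y = new Array(x.length + h.length - 1).fill(0);
  for (let n = 0; n < x.length; n++) {
    for (let k = 0; k < h.length; k++) {
      y[n + k] += x[n] * h[k];
    }
  }
  return y;
}

// Overlap-add: section the input into length-Ns chunks, filter each
// section separately, and add the (overlapping) filtered sections.
function overlapAdd(x, h, Ns) {
  const y = new Array(x.length + h.length - 1).fill(0);
  for (let start = 0; start < x.length; start += Ns) {
    const section = x.slice(start, start + Ns);
    const ySec = convolve(section, h);   // each section's output is longer
    for (let k = 0; k < ySec.length; k++) {
      y[start + k] += ySec[k];           // literally add, not butt together
    }
  }
  return y;
}

const h = [0.25, 0.5, 0.25]; // a toy length-3 filter
const x = Array.from({ length: 10 }, (_, n) => Math.sin(0.3 * n));
const direct = convolve(x, h);
const sectioned = overlapAdd(x, h, 4);
console.log(direct.every((v, i) => Math.abs(v - sectioned[i]) < 1e-12)); // true
```

Because the filter is linear, the summed section outputs reproduce the result of filtering the entire input at once.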
Computational considerations reveal a substantial advantage for a frequency-domain implementation over a time-domain one. The number of computations for a time-domain implementation essentially remains constant whether we section the input or not. Thus, the number of computations for each output is $2q+1$. In the frequency-domain approach, computation counting changes because we need only compute the filter's frequency response $H(k)$ once, which amounts to a fixed overhead. We need only compute two DFTs and multiply them to filter a section. Letting $N_s$ denote a section's length, the number of computations for a section amounts to $(N_s+q)\log_2(N_s+q) + 6(N_s+q)$. In addition, we must add the filtered outputs together; the number of terms to add corresponds to the excess duration of the output compared with the input ($q$). The frequency-domain approach thus requires

$$\frac{(N_s+q)\log_2(N_s+q) + 6(N_s+q) + q}{N_s}$$

computations per output value. For even modest filter orders, the frequency-domain approach is much faster.
Show that as the section length increases, the frequency domain approach becomes increasingly more efficient.
Let $N$ denote the input's total duration. The time-domain implementation requires a total of $N(2q+1)$ computations, or $2q+1$ computations per input value. In the frequency domain, we split the input into $\frac{N}{N_s}$ sections, each of which requires $\frac{(N_s+q)\log_2(N_s+q) + 6(N_s+q) + q}{N_s}$ computations per input in the section. Because the section's overhead terms are divided by $N_s$ to find the number of computations per input value in the entire input, this overhead decreases as $N_s$ increases. For the time-domain implementation, it stays constant.
Note that the choice of section duration is arbitrary. Once the filter is chosen, we should section so that the required FFT length is precisely a power of two: Choose $N_s$ so that $N_s + q$ is a power of two.
Implementing the digital filter shown in the A/D block diagram with a frequency-domain implementation requires some additional signal management not required by time-domain implementations.
Conceptually, a real-time, time-domain filter could accept each sample as it becomes available, calculate the difference equation, and produce the output value, all in less than the sampling interval $T_s$. Frequency-domain approaches don't operate on a sample-by-sample basis; instead, they operate on sections. They filter in real time by producing $N_s$ outputs for the same number of inputs faster than $N_s T_s$. Because they generally take longer to produce an output section than the sampling interval duration, we must filter one section while accepting into memory the next section to be filtered. In programming, the operation of building up sections while computing on previous ones is known as buffering. Buffering can also be used in time-domain filters but isn't required.
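A minimal sketch of such buffering (the class and callback names are illustrative): incoming samples accumulate in one buffer while each completed section is handed off for filtering.

```javascript
// Accumulate samples into length-Ns sections; when a section is full,
// hand it off (e.g., to a frequency-domain filtering routine) and start
// filling the next one.
class SectionBuffer {
  constructor(Ns, onSectionReady) {
    this.Ns = Ns;
    this.onSectionReady = onSectionReady;
    this.buffer = [];
  }
  push(sample) {
    this.buffer.push(sample);
    if (this.buffer.length === this.Ns) {
      this.onSectionReady(this.buffer); // filter this completed section...
      this.buffer = [];                 // ...while new samples accumulate here
    }
  }
}

let sectionsFiltered = 0;
const buf = new SectionBuffer(48, () => { sectionsFiltered++; });
for (let n = 0; n < 100; n++) buf.push(Math.sin(0.3 * n));
console.log(sectionsFiltered); // 2 full sections from 100 samples
```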
We want to lowpass filter a signal that contains a sinusoid and a significant amount of noise. The example shown in Figure 1 shows a portion of the noisy signal's waveform. If it weren't for the overlaid sinusoid, discerning the sine wave in the signal would be virtually impossible. One of the primary applications of linear filters is noise removal: preserve the signal by matching the filter's passband with the signal's spectrum and greatly reduce all other frequency components that may be present in the noisy signal.
A smart Rice engineer has selected a FIR filter having a unit-sample response corresponding to a period-17 sinusoid: $h(n) = \frac{1}{17}\left(1 + \cos\frac{2\pi(n-8)}{17}\right)$, $0 \le n \le 16$, which makes $q = 16$. Its frequency response (determined by computing the discrete Fourier transform) is shown in Figure 2. To apply, we can select the length of each section so that the frequency-domain filtering approach is maximally efficient: Choose the section length $N_s$ so that $N_s + q$ is a power of two. To use a length-64 FFT, each section must be 48 samples long. Filtering with the difference equation would require 33 computations per output while the frequency domain requires a little over 16; this frequency-domain implementation is over twice as fast! Figure 1 shows how frequency-domain filtering works.
Figure 2. The figure shows the unit-sample response of a length-17 Hanning filter on the left and the frequency response on the right. This filter functions as a lowpass filter having a cutoff
frequency of about 0.1.
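These counts can be checked numerically. The operation-count constants below, $\frac{K}{2}\log_2 K$ per length-$K$ FFT and $6K$ to multiply two spectra, are assumptions of this sketch:

```javascript
// Per-output computation counts for a length-17 FIR filter (order q = 16)
// applied with section length Ns = 48, i.e., a length-64 FFT (K = Ns + q).
const q = 16;
const Ns = 48;
const K = Ns + q; // 64, a power of two

// Time domain: q + 1 multiplies and q adds per output sample.
const timeDomainPerOutput = 2 * q + 1; // 33

// Frequency domain per section: two length-K FFTs at (K/2)*log2(K) each,
// 6*K operations to multiply the spectra, and q adds to overlap-add.
const perSection = K * Math.log2(K) + 6 * K + q;
const freqDomainPerOutput = perSection / Ns; // "a little over 16"

console.log(timeDomainPerOutput, freqDomainPerOutput.toFixed(2)); // 33 16.33
```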
We note that the noise has been dramatically reduced, with a sinusoid now clearly visible in the filtered output. Some residual noise remains because noise components within the filter's passband
appear in the output as well as the signal.
Note that when compared to the input signal's sinusoidal component, the output's sinusoidal component seems to be delayed. What is the source of this delay? Can it be removed?
The delay is not computational delay here (the plot shows the first output value is aligned with the filter's first input), although in real systems this is an important consideration. Rather, the delay is due to the filter's phase shift: A phase-shifted sinusoid is equivalent to a time-delayed one: $\cos(2\pi fn - \phi) = \cos\left(2\pi f\left(n - \frac{\phi}{2\pi f}\right)\right)$. All filters have phase shifts. This delay could be removed if the filter introduced no phase shift. Such filters do not exist in analog form, but digital ones can be programmed, though not in real time. Doing so would require the output to emerge before the input arrives!
What is a Volt-Ratio Box? - Definition & Explanation - Circuit Globe
Definition: The volt-ratio box measures high voltage. The construction of the volt-ratio box is very simple. It consists of a simple resistive potential divider which has many tappings on the input side. The whole arrangement of the volt-ratio box is placed inside a wooden box. The volt-ratio box gives an accurate result of the measured voltage.
The operation of the volt-ratio box is simple. The volt-ratio box is used for measuring voltages higher than 1.8 volts. It is also used to step down a high voltage for measurement.

The arrangement of the volt-ratio box is shown in the figure below. The voltages are marked on the internal tappings of the volt-ratio box along with the multiplying factor of the potentiometer scale. The electromotive force which is to be measured is applied to the input terminals of the volt-ratio box. The potential difference between the two points is measured through the potentiometer.
If the voltage measured by the potentiometer is v and the multiplying factor of the volt-ratio box is k, then the value of the measured voltage is V = v × k volts.
Example – A voltage to be measured is applied to the terminals of the volt-ratio box, and the potentiometer reads a value of 0.825. The value of the unknown voltage is then:

V = 0.825 × (300/1.5) = 165 volts.
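The arithmetic above can be verified with a short script (the function name is just for illustration):

```javascript
// Measured voltage from a potentiometer reading and the volt-ratio box's
// multiplying factor k (here the 300 V tapping against a 1.5 V
// potentiometer range, so k = 300 / 1.5 = 200).
function measuredVoltage(potReading, tappingVoltage, potRange) {
  const k = tappingVoltage / potRange; // multiplying factor
  return potReading * k;
}

console.log(measuredVoltage(0.825, 300, 1.5)); // ≈ 165 volts
```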
During high voltage measurement, current passes through the resistance R shown in the figure above. A voltage drop occurs across this resistance during the measurement. The resistance acts as a load on the measured voltage source and hence consumes power. A resistance of high value is used in the volt-ratio box for low power consumption.

Usually, a resistance of 100 Ω/volt or 200 Ω/volt is used in the volt-ratio box, which allows a maximum of 5 to 10 mA of current to pass through it.
Note: The multiplying factor in the volt-ratio box shows the number of times the value of the voltage increases.
Visiblespace: Art, Science and Culture
In 1986, the physicist Fritz Haake identified a significant research question in quantum mechanics: ‘whether quantum chaos can be more than a mere transient mimicry of classical chaos’ (Haake et al. 1987).
The concept persisted in trying to detect whether there was some unique spatial experiment that could deal with this problem. Before the 2009 article entitled ‘Quantum Signatures of Chaos in a Kicked Top’, there was no real way of knowing whether chaos could exist in a quantum world of subatomic particles: as the momentum or position cannot be precisely known, the unique conditions of classical chaos would seemingly not be able to translate to the quantum world.
For example, when drops of ink are added to water the ink bifurcates and the particles are characterized
by complex, aperiodic trajectories that diverge exponentially as a function of initial separation. ‘This description of states and time evolution is fundamentally incompatible with quantum mechanics,
where conjugate observables such as position and momentum cannot take on well defined values at the same time’ (Chaudhury et al. 2009).
Scientists have revisited the conundrum by asking the question not only whether quantum chaos could mimic chaos but also whether chaos assists the quantum world. If you think of the spin of an
electron as being analogous to a microscopic spinning top that spins erratically as it loses momentum, it is in these erratic areas that we find quantum chaos taking place.
This implies that the nature of matter not only has to contend with its own classical form of chaos but also at its core it has a unique form, quantum chaos. Two chaoses existing simultaneously in
the same space, a complex chaos of chaos, a chaos pregnant with chaos – with Heisenberg policing the space between; resembling Heimdall, who protected the fabled rainbow bridge between the human
world and Asgard, prohibiting classical chaos from entering to contaminate the quantum world. Chaos affecting chaos, a tautology of chaos but the similarities, are an illusion of cultural
transference where everything appears to be the same but belongs to different worlds both unique and polluted by metaphorical analogies.
Chaudhury, S., Smith, A., Anderson, B. E., Ghose, S. and Jessen, P. S. (2009),‘Quantum signatures of chaos in a kicked top’, Nature, 461:7265, pp. 768–71.
Haake, F., Kuś, M. and Scharf, R. (1987),‘Classical and quantum chaos for a kicked top’, Zeitschrift für Physik B Condensed Matter, 65:3, pp. 381–95.
Paul Thomas, Jan Andruszkiewicz visualising dynamic signature of chaos entanglement
How many solutions does this system of equations have? - Ask Spacebar
y = -3x + 7
y = -3x - 6
a. infinitely many
b. 2
c. 1
d. 0
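Both equations have the same slope (-3) but different y-intercepts, so the lines are parallel and never intersect: the system has no solutions, option d. A quick check, with the general rule commented:

```javascript
// y = -3x + 7 and y = -3x - 6: equal slopes, different intercepts.
const line1 = { m: -3, b: 7 };
const line2 = { m: -3, b: -6 };

// For y = m1*x + b1 and y = m2*x + b2, an intersection requires
// (m1 - m2) * x = b2 - b1. With m1 === m2 and b1 !== b2 there is no x.
function countSolutions(l1, l2) {
  if (l1.m !== l2.m) return 1;          // distinct slopes: one intersection
  return l1.b === l2.b ? Infinity : 0;  // same line vs. parallel lines
}

console.log(countSolutions(line1, line2)); // 0
```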
Gah! I Don't Know
Give the correct answer to the sum
Fail to give the correct answer
There's only around 5 seconds to both read and calculate the answer
The contestant sits in a round sledge at the top of a runway. To begin the game, Popcorn pushes the sledge down the track.
As the contestant is sliding along, several boards are turned around at the side of the runway. Each board has a number or mathematical sign on, so together all the boards make up a long sum. The
contestant has to quickly work out the equation as they reach the end of the track.
When they stop at the end the contestant must tell Yoshichi the answer to the sum. If they've got the answer right they win the game. However if they get the answer wrong, or take too long working it
out, the end of the runway will collapse and the contestant will be dropped into a pit of powder or mud below.
Optimization Algorithms
An important pillar of my work is developing practical optimization algorithms for nonconvex optimization problems. I believe that there is an urgent need for practical algorithms, in particular in the context of nonconvex data-driven optimization problems for which off-the-shelf software is not available. I have worked on (i) developing exact combinatorial optimization algorithms for learning from high-dimensional data such as genomics data and (ii) deriving faster first-order methods for specialized nonconvex optimization problems such as those arising in optimal power flow.
Exact Combinatorial Optimization for High-Dimensional Learning
Modern data often counts many more features than measured observations. Such high-dimensional data sets are particularly prevalent in genomics and other areas of computational biology. Cancer data sets, an important example, comprise the expression levels of many thousands of genes but typically concern only a few hundred individuals. Learning with high-dimensional data is particularly challenging as most classical learning methods tend to fit the noise instead of the signal.
Simple and highly regularized learning methods are required here. Sparse linear models with a small number of nonzero coefficients are a popular choice when confronted with high-dimensional data. It
is indeed often the case that all but very few features in high-dimensional data sets are truly relevant. As an example, most cancer types can be predicted based on a very limited number of genetic
markers among all those measured. Equally important, sparse linear models are far more interpretable than are most other models. Fast heuristic sparse learning methods such as Lasso consequently rank
among the most cited algorithms in the learning community.
Unfortunately, heuristic sparse methods tend to include many more irrelevant features than their exact counterparts do. I developed scalable learning methods to find the best sparse regressor, i.e.,
$$\min_{\Vert w \Vert_0 \leq k} ~~ \frac12 \Vert Y -X w \Vert^2 + \frac{1}{2\gamma} \Vert w \Vert^2$$
or similarly the best classifier exactly using integer optimization. Moore's law, taken together with a comparable improvement in modern integer optimization solvers, represents a true revolution in this context. By exploiting an exact convex dual formulation, exact sparse models can indeed be found using modern integer optimization for data sets counting up to 100,000 observations and features, which is two orders of magnitude larger than previously possible.
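As a toy illustration of the objective above (a sketch only, not the branch-and-bound method itself): for sparsity $k = 1$, each single-feature support admits a closed-form ridge solution, so the best support can be found by enumeration.

```javascript
// min over supports |S| <= k of  0.5*||Y - X_S w||^2 + ||w||^2 / (2*gamma).
// For k = 1, the optimal weight on a single feature x is
//   w* = (x.Y) / (x.x + 1/gamma),  found by setting the derivative to zero.
const gamma = 100;
const dot = (a, b) => a.reduce((s, v, i) => s + v * b[i], 0);

function bestSingleFeature(X, Y) {
  let best = { j: -1, obj: Infinity, w: 0 };
  X.forEach((x, j) => {
    const w = dot(x, Y) / (dot(x, x) + 1 / gamma);
    const resid = Y.map((y, i) => y - w * x[i]);
    const obj = 0.5 * dot(resid, resid) + (w * w) / (2 * gamma);
    if (obj < best.obj) best = { j, obj, w };
  });
  return best;
}

// Y depends only on the first feature; the exact method should select it.
const X = [
  [1, 2, 3, 4, 5, 6], // relevant feature
  [1, 0, 1, 0, 1, 0], // irrelevant
  [2, 1, 2, 1, 2, 1], // irrelevant
];
const Y = [2, 4, 6, 8, 10, 12]; // = 2 * (first feature)
console.log(bestSingleFeature(X, Y).j); // 0
```

Real instances require branch-and-bound because enumerating all supports is exponential in the number of features.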
Related publications
Optimal First-Order Algorithms
Optimization formulations associated with learning problems based on huge data sets remain however out of reach for exact nonconvex optimization methods. Practical necessity dictates at such huge
scale the use of first-order methods which only use simple gradient information to determine a local optimizer. Most convex optimization problems are well studied and have associated optimal
first-order methods. In stark contrast, even for simple classes of nonconvex functions such as all smooth nonconvex functions no optimal first-order method is known. I made significant progress in
how to design faster first-order methods to solve nonconvex optimization problems to local optimality. More significantly, such faster first-order methods are not determined with pen and paper but
rather found in a computer-assisted fashion by solving an auxiliary optimization formulation $$\min_{a\in \mathcal A} \max_{f\in \mathcal F} ~P_N(f, a)$$ where the decision variable $a \in \mathcal
A$ is best thought of as characterizing a particular first-order algorithm whereas the decision variable $f$ represents a particular function from a class of interest $\mathcal F$, e.g., all smooth
nonconvex functions. The performance of an algorithm $a$ on a function $f$ is denoted here as $P_{N}(f, a)$ and may for instance represent the norm of the gradient of the best iterate observed among
the first $N$ iterates. I believe that automated discovery of better optimization methods for specialized classes of optimization problems using computer assisted tools is a promising research
direction as it allows for fast tailored methods to be determined algorithmically rather than analytically.
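As a concrete baseline for such analyses, plain gradient descent can be run while tracking a performance measure of the kind $P_N(f, a)$ described above, the smallest gradient norm among the first $N$ iterates (a sketch on a toy function, with an assumed $1/L$ step size):

```javascript
// Plain gradient descent on a smooth function, tracking the performance
// measure above: the smallest gradient norm among the first N iterates.
function gradientDescent(grad, x0, stepSize, N) {
  let x = x0.slice();
  let bestGradNorm = Infinity;
  for (let t = 0; t < N; t++) {
    const g = grad(x);
    bestGradNorm = Math.min(bestGradNorm, Math.hypot(...g));
    x = x.map((xi, i) => xi - stepSize * g[i]);
  }
  return { x, bestGradNorm };
}

// Toy smooth function f(x) = 0.5*(x1^2 + 10*x2^2), gradient [x1, 10*x2];
// step size 1/L with smoothness constant L = 10.
const grad = ([a, b]) => [a, 10 * b];
const { bestGradNorm } = gradientDescent(grad, [3, 1], 0.1, 200);
console.log(bestGradNorm < 1e-6); // true
```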
Related publications
• Das Gupta, S., B.P.G. Van Parys, and E.K. Ryu (2023). “Branch-and-bound performance estimation programming: A unified methodology for constructing optimal optimization methods”. In: Mathematical
Programming. Appeared also in Quanta Magazine. Link
• Das Gupta, S., B. Stellato, and B.P.G. Van Parys (2022). “Exterior-point optimization for nonconvex learning”. In: SIAM Journal of Optimization. Submitted. Link
Understanding the Bubble Sort Algorithm in JavaScript - Cloudsoft Zone
October 7, 2023
Are you curious about how sorting works in programming? One of the simplest and most straightforward sorting algorithms is the Bubble Sort. In this blog post, we’ll break down the Bubble Sort
algorithm in JavaScript using plain and simple language.
What is Bubble Sort?
Bubble Sort is a sorting algorithm that works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items, and swapping them if they are in the wrong order. This
process continues until no swaps are needed, indicating that the list is sorted.
How Bubble Sort Works
Now, let’s dive deeper into the inner workings of the Bubble Sort algorithm when implemented in JavaScript.
Initialization: To start, we define a function called bubbleSort that takes an array arr as its parameter. We also initialize two variables: len, which represents the length of the array, and swapped
, a boolean flag.
function bubbleSort(arr) {
var len = arr.length;
var swapped;
Main Loop with a Do-While: The core of the Bubble Sort algorithm is a loop that continues until no more swaps are needed. We use a do-while loop for this purpose.
do {
swapped = false;
Iterating through the Array: Within the loop, we use a for loop to iterate through the elements of the array.
for (var i = 0; i < len - 1; i++) {
Comparing and Swapping: For each pair of adjacent elements, we compare them. If the element at the current position i is greater than the element at i + 1, we swap them.
if (arr[i] > arr[i + 1]) {
  // Swap elements
  var temp = arr[i];
  arr[i] = arr[i + 1];
  arr[i + 1] = temp;
  swapped = true;
}
This swapping process continues as we move through the array. If any swaps are made during a pass through the array, the swapped flag is set to true.
Optimization: The reason we use the swapped flag is to optimize the algorithm. If no swaps were made during a pass, it means the array is already sorted, and we can exit the loop early. This
optimization reduces unnecessary iterations.
Repeat Until Sorted: The do-while loop continues until no more swaps are needed. This means that the largest elements “bubble up” to the end of the array, and we repeat the process, ignoring the last
sorted element in each subsequent pass.
Sorted Array: Once the loop exits because no more swaps are required, the array is sorted in ascending order.
} while (swapped);
JavaScript Implementation
Now, let’s see how to implement Bubble Sort in JavaScript. Here’s a simple example:
function bubbleSort(arr) {
  var len = arr.length;
  var swapped;
  do {
    swapped = false;
    for (var i = 0; i < len - 1; i++) {
      if (arr[i] > arr[i + 1]) {
        // Swap elements
        var temp = arr[i];
        arr[i] = arr[i + 1];
        arr[i + 1] = temp;
        swapped = true;
      }
    }
  } while (swapped);
  return arr;
}

var myArray = [64, 34, 25, 12, 22, 11, 90];
bubbleSort(myArray);
console.log(myArray); // [11, 12, 22, 25, 34, 64, 90]
This JavaScript function bubbleSort takes an array as input and sorts it using the Bubble Sort algorithm.
Understanding how Bubble Sort works in JavaScript involves the iterative comparison and swapping of adjacent elements in an array until the entire array is sorted. The algorithm is simple to grasp
and serves as a fundamental concept in sorting algorithms. By following this step-by-step explanation, you can better appreciate the mechanics behind Bubble Sort in JavaScript. Happy coding!
Search result: Catalogue data in Spring Semester 2020
Data Science Master
Core Courses
Data Management
Number Title Type ECTS Hours Lecturers
261-5110-00L Optimization for Data Science W 8 credits 3V + 2U + B. Gärtner, D. Steurer
Abstract: This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in data science.

Learning objective: Understanding the theoretical guarantees (and their limits) of relevant optimization methods used in data science. Learning general paradigms to deal with optimization problems arising in data science.

Content: This course provides an in-depth theoretical treatment of optimization methods that are particularly relevant in machine learning and data science.

In the first part of the course, we will first give a brief introduction to convex optimization, with some basic motivating examples from machine learning. Then we will analyse classical and more recent first and second order methods for convex optimization: gradient descent, projected gradient descent, subgradient descent, stochastic gradient descent, Nesterov's accelerated method, Newton's method, and Quasi-Newton methods. The emphasis will be on analysis techniques that occur repeatedly in convergence analyses for various classes of convex functions. We will also discuss some classical and recent theoretical results for nonconvex optimization.

In the second part, we discuss convex programming relaxations as a powerful and versatile paradigm for designing efficient algorithms to solve computational problems arising in data science. We will learn about this paradigm and develop a unified perspective on it through the lens of the sum-of-squares semidefinite programming hierarchy. As applications, we are discussing non-negative matrix factorization, compressed sensing and sparse linear regression, matrix completion and phase retrieval, as well as robust estimation.

Prerequisites / Notice: As background, we require material taught in the course "252-0209-00L Algorithms, Probability, and Computing". It is not necessary that participants have actually taken the course, but they should be prepared to catch up if necessary.
VLOOKUP With MATCH In Google Sheets
What Is VLOOKUP With MATCH In Google Sheets?
The VLOOKUP with MATCH in Google Sheets is a VLOOKUP formula with the MATCH() supplied as its dynamic index argument. The formula helps overcome a limitation of the VLOOKUP(): it does not work correctly once we insert or remove one or more columns to or from the lookup range.
Users can apply the VLOOKUP with MATCH formula in Google Sheets to perform a two-way lookup by matching row and column-wise.
For example, the source dataset holds a list of employees, and their designation and employee ID data.
We should update the designation of the employee cited in cell E2 and display the outcome in cell F2, with the evaluation based on the source dataset.
Then, we can utilize the VLOOKUP() containing the MATCH() in the target cell to fetch the required data, according to VLOOKUP with MATCH In Google Sheets explained earlier.
Individually, the VLOOKUP and MATCH functions in Google Sheets work the same way as the Excel VLOOKUP function and Excel MATCH function.
The MATCH() searches for an exact match for the phrase “Designation” in the range A1:C1, containing the column names of the source dataset. Since the match is found in cell B1, the function returns 2
, which is the relative position of the search value in the search range.
Next, the VLOOKUP() looks for an exact match for the value “Scott Cook” in the first column of the lookup range A2:C11. Since it locates a match in cell A6, the function returns the value in the cell
where row 6 and column 2 (MATCH() output) meet, which is the cell B6 value, Specialist.
The last arguments in the MATCH() and VLOOKUP() indicate that the two functions aim to find the exact matches for the corresponding search values.
Furthermore, the formula VLOOKUP with MATCH in Google Sheets returns the correct output only when the search value in MATCH() is the same phrase as that used in the source dataset's column headings. For instance, we might use the term Designation in the source dataset and the term Desig in the table where we must update the concerned employee's designation. In this case, the MATCH() search phrase and the column heading in the source dataset do not match, leading to the formula returning an error value.
Key Takeaways
• The VLOOKUP with MATCH in Google Sheets is a formula where we use the MATCH() to supply a dynamic index argument value to the VLOOKUP function. It ensures the VLOOKUP function returns an
error-free output even when we add or remove columns to or from the lookup range.
• The VLOOKUP with MATCH formula in Google Sheets helps when we must look up a value using the VLOOKUP() in a dynamic lookup range.
• We can utilize the VLOOKUP with MATCH function in Google Sheets as an individual formula. However, implementing the formula with other inbuilt functions, such as IF and IFERROR, makes it quite powerful.
The problems with the VLOOKUP() are as follows:
• The index argument in the VLOOKUP() is static or hard-coded. In other words, it is typically a fixed number. Then, in the case when the VLOOKUP() output should be the return value from another
column, we will have to update the index argument accordingly.
• Adding or deleting columns to or from the lookup range makes the VLOOKUP() return an incorrect or error value. The reason is that the position of the column from where the function must fetch the
return value changes. And since the index argument value is not dynamic, the function will refer to a column different from the intended one.
VLOOKUP MATCH Formula
The VLOOKUP-MATCH formula in Google Sheets has the following syntax:

=VLOOKUP(search_key, range, MATCH(column_heading_search_key, column_headings_range, search_type), is_sorted)

where:
• search_key: The data point to look for in the first column of the search range.
• range: The maximum and minimum bounds of the cell range to search.
• MATCH(): The function output is the index argument value. It is the index of the column in the search range holding the return value, with the index being a positive integer. The MATCH()
arguments are as follows:
□ column_heading_search_key: The value we aim to look for in the column headings of the source dataset.
□ column_headings_range: The 1-D array of the column headings of the source dataset we aim to search.
Please note that if the supplied range has a height and width of more than 1, the MATCH() output will be the #N/A error.
• search_type: The value indicates the way we aim to search.
• 1: It is the default value. The MATCH() considers the search range values to be in ascending order, and its output is the largest value below or equal to the column_heading_search_key.
• 0: The MATCH() finds an exact match. In this case, the range need not be necessarily sorted.
• -1: The MATCH() considers that the search range values are in descending order, and its output is the smallest value more than or equal to the column_heading_search_key.
• is_sorted: The value indicates if the VLOOKUP() must find an exact or approximate match.
• FALSE or 0: It indicates an exact match search, and it is the recommended value.
• TRUE or 1: It indicates an approximate match search, and it is the default value if we omit the is_sorted argument.
Please note that in the case of an approximate match, we must sort our search key range in ascending order. Otherwise, we may get an incorrect return value.
Furthermore, we must provide all the argument values when applying VLOOKUP with MATCH in Google Sheets, except for the last ones in the two functions being optional.
How To Use VLOOKUP With MATCH In Google Sheets?
The steps to use the VLOOKUP with MATCH formula in Google Sheets are as follows:
1. Click the cell where we aim to show the result.
2. Type =VLOOKUP( in the cell. [Alternatively, type =V or =VL and click the function name VLOOKUP from the suggestions to choose it.]
3. Enter the first two arguments, separated by a comma. Next, enter a comma and type MATCH(. Otherwise, type M or MA and click the function name MATCH from the suggestions to choose it. Next, update
the MATCH() argument values, separated by commas, and close the bracket. Finally, enter a comma and the last argument of VLOOKUP() to supply the VLOOKUP With MATCH In Google Sheets arguments, and
close the bracket.
4. Press Enter to secure the formula output.
The following illustrations explain the practical methods of applying VLOOKUP With MATCH in Google Sheets.
Example #1
The source dataset contains a list of tech companies and their market cap data.
The requirement is to find the market cap value for the specified tech company in cell E1, based on the source dataset, and showcase the result in cell E2.
Then, considering the concept of VLOOKUP with MATCH in Google Sheets explained previously, we can apply the VLOOKUP-MATCH() in the target cell to get the desired outcome.
Step 1: Choose the target cell E2 and enter the VLOOKUP().
Once we enter the function name, we will see it listed as a suggestion. Click the function name to view its arguments list.
Next, enter the first argument value followed by a comma.
Next, enter the second argument value and a comma.
The third argument value is the MATCH(). So, on entering the function name, we will see the function listed as a suggestion. Click it to view its arguments list.
Enter the MATCH()’s arguments, separated by commas and close the bracket, as depicted below.
Next, enter a comma and the VLOOKUP()’s last argument value to supply all the VLOOKUP with MATCH in Google Sheets arguments.
Finally, close the bracket to complete the expression.
Step 2: Press Enter to view the value that VLOOKUP with MATCH in Google Sheets returns.
First, the MATCH() searches the value Market Cap ($) for an exact match in the range A1:B1. It then returns the value’s relative place in the specified range, 2, as the output.
Next, the VLOOKUP() searches the value Amazon for an exact match in the first column of the search range A2:B11. It finds the exact match in cell A6. Next, it considers the MATCH() output value of 2
as the required column index value. So, the VLOOKUP() returns the value in the cell where row 6 and column 2 meet, which is the cell B6 value of $1.931 T, as the required output.
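The two-step logic of Example #1 (MATCH finds the column heading, VLOOKUP finds the row) can be mirrored in plain Python. The table below is a shortened, partly invented stand-in for the source dataset; only the Amazon figure is taken from the article:

```python
# Header row plus data rows, mimicking the A1:B11 layout.
headers = ["Tech Company", "Market Cap ($)"]
rows = [
    ["Apple",  "$2.950 T"],
    ["Amazon", "$1.931 T"],
    ["Nvidia", "$1.790 T"],
]

def vlookup_match(search_key, headers, rows, wanted_header):
    # MATCH step: find the 1-based index of the wanted column heading.
    col = headers.index(wanted_header) + 1      # raises if absent, like #N/A
    # VLOOKUP step: exact match on the first column, then return the
    # value where the matched row and the matched column intersect.
    for row in rows:
        if row[0] == search_key:
            return row[col - 1]
    raise ValueError("#N/A")

print(vlookup_match("Amazon", headers, rows, "Market Cap ($)"))  # → $1.931 T
```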
Example #2
We have a list of item codes and their quarterly inventory level data.
The task is to display the quarterly inventory data for the item code, cited in cell A11, in the order of the quarters specified in the second table. Assume the range B11:E11 is the target cells.
Step 1: Select cell B11, enter the following expression, and press Enter.
Next, utilizing the fill handle, execute the formula in the remaining target cells.
We shall see the cell E11 formula logic to know how the formula works.
First, the MATCH() looks for an exact match of the value Q3 – Inventory Level (Cartons) in the range A1:E1, which it finds in cell D1. Thus, the function returns 4, which is the search value’s
relative position in the search range.
Next, the VLOOKUP() looks for an exact match of the value JVJ_004 in the range A2:E7, which it finds in cell A5. Next, it takes the MATCH() output value of 4 as the required column index value. So,
the VLOOKUP() returns the value in the cell where row 5 and column 4 intersect, which is the cell D5 value 1580, as the required output.
Please note that we use absolute references for specific cell references, as they should stay the same in all the target cells’ formulas. Just like absolute references in Excel, this makes it easy to apply the formula in all the remaining target cells once we enter it in the first target cell.
On the other hand, we could have applied VLOOKUP() in the target cells. However, we would have had to enter the formula in each target cell individually, as the index argument value would be static.
Since we supply the MATCH() as the index argument to the VLOOKUP(), the argument value is dynamic, helping us copy the formula in the required cells without editing.
Example #3
The source dataset shows the Teams A and B employees’ salaries.
The aim is to update the salary of the employee, cited in cell L2, based on the team and month values specified in cells L3:L4. We must fetch the required data according to the source dataset and
show the outcome in cell L5.
Furthermore, if the VLOOKUP() returns an error value, an error message should be displayed on executing the formula in the target cell.
Step 1: Select cell L5, enter the following formula, and press Enter.
=IFERROR(VLOOKUP(L2,IF(L3="Team A",A4:D8,F4:I8),IF(L3="Team A",MATCH(L4,A3:D3,0),MATCH(L4,F3:I3,0)),0),"Invalid Input Or Data Not Available.")
The VLOOKUP() contains IF() functions, which work like the Excel IF function, as its range and index argument values. The two IF() functions check whether the specified team is Team A. If the condition held in the first IF(), it would return the TRUE value, the range A4:D8 in the first dataset. However, since the condition is false, the function returns the FALSE value, the range F4:I8 in the second dataset.
On the other hand, assume the condition is true in the second IF(). Then, its output is the MATCH() that finds an exact match for the Mar month to get its relative position in the range A3:D3 in the
first dataset. However, in this case, the condition is false. So, the function output is the MATCH() that finds an exact match for the Mar month to get its relative position in the range F3:I3 in the
second dataset, which is 4.
Thus, based on the two IF()s, we get the required VLOOKUP()’s range and index argument values. So, now the VLOOKUP() looks for the employee name Gladys Hubbard in the first column of the search range
F4:I8, which in this case is F7. Next, the MATCH() returns the required column index, which is 4. Thus, the VLOOKUP() returns the value in the cell at the intersection of row 7 and column 4 of the
second dataset, which is the cell I7 value of $30,100.
Finally, the IFERROR(), which follows the same logic as the Excel IFERROR function, checks whether the VLOOKUP() output is an error value. Since it is not, the IFERROR() returns the VLOOKUP() return value as the output, $30,100.
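The IF()/IFERROR() wrapping in Example #3 can be approximated the same way. The two tiny team tables here are hypothetical stand-ins for the article's salary datasets, and only Gladys Hubbard's name is taken from the article:

```python
# Hypothetical stand-ins for the Team A and Team B salary tables.
team_a = {"headers": ["Name", "Jan", "Feb", "Mar"],
          "rows": [["Alma Reyes", 100, 110, 120]]}
team_b = {"headers": ["Name", "Jan", "Feb", "Mar"],
          "rows": [["Gladys Hubbard", 290, 295, 301]]}

def salary(employee, team, month):
    table = team_a if team == "Team A" else team_b      # the IF() step
    try:
        col = table["headers"].index(month)             # the MATCH() step
        for row in table["rows"]:                       # the VLOOKUP() step
            if row[0] == employee:
                return row[col]
        raise ValueError("#N/A")
    except ValueError:
        return "Invalid Input Or Data Not Available."   # the IFERROR() step

print(salary("Gladys Hubbard", "Team B", "Mar"))  # → 301
print(salary("Nobody", "Team B", "Mar"))          # → the error message
```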
Important Things To Note
• The VLOOKUP() and MATCH() in the formula VLOOKUP with MATCH in Google Sheets are case-insensitive.
• The two functions in the VLOOKUP-MATCH formula in Google Sheets return the #N/A error if they fail to identify a match for the specified lookup value.
• We can use the Wildcard characters in the two functions in the VLOOKUP-MATCH formula in Google Sheets for returning values based on partial matches.
Frequently Asked Questions (FAQs)
1. What is the main advantage of using VLOOKUP with Match in Google Sheets?
The main advantage of using VLOOKUP with MATCH in Google Sheets is that we can supply the index argument value as a dynamic value to the VLOOKUP(). For that, we specify the MATCH() as the index
argument value.
Thus, if we add or delete columns from the search range, the index argument value adjusts accordingly, leading to the VLOOKUP() returning the appropriate error-free return value.
For example, the following source dataset contains invoice numbers, dates, and the order delivery status.
Further, we update the order delivery status for the invoice number, cited in cell E2, based on the source dataset using the VLOOKUP() in the target cell F2.
However, assume we insert a new column to show the order quantity data for the invoice numbers between the invoice date and order delivery status data in the source dataset.
In this case, the VLOOKUP() output becomes incorrect: all the inputs to the function remain the same, and the static index argument now points at the revised column 3, so the function returns the wrong column's value.
We can overcome the issue by using the MATCH() as the index argument value in the VLOOKUP().
Step 1: Select the target cell G2, enter the VLOOKUP() containing the MATCH(), and press Enter.
The MATCH() looks for an exact match for the phrase Order Delivery Status in the range A1:D1, which is in cell D1. So, the function returns 4 as the specified value’s relative position in the given
search range.
Next, the VLOOKUP() looks for an exact match of the invoice number, cited in cell F2, in the search range A2:D11, which is in cell A7. So, the function returns the value in the cell at the
intersection of row 7 and column 4, which is the cell D7 value, Pending.
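The FAQ's point (a hard-coded index breaks when a column is inserted, while a header lookup survives) can be demonstrated directly. The three-column invoice table below is invented for illustration:

```python
headers = ["Invoice No.", "Invoice Date", "Order Delivery Status"]
rows = [["INV-001", "01/05", "Delivered"],
        ["INV-002", "02/05", "Pending"]]

def lookup(key, index):                      # VLOOKUP with a static index
    for row in rows:
        if row[0] == key:
            return row[index - 1]

def lookup_by_header(key, header):           # VLOOKUP with a MATCH-style index
    return lookup(key, headers.index(header) + 1)

print(lookup("INV-002", 3))                                  # → Pending
# Insert an "Order Qty" column between the date and status columns:
headers.insert(2, "Order Qty")
for row in rows:
    row.insert(2, 10)
print(lookup("INV-002", 3))                                  # → 10 (wrong!)
print(lookup_by_header("INV-002", "Order Delivery Status"))  # → Pending
```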
2. When to use VLOOKUP with MATCH in Google Sheets?
We can use VLOOKUP with MATCH in Google Sheets when we know that we shall be adding or removing columns of data to or from the concerned lookup range. So, the supplied index argument value should be
dynamic, which the formula ensures.
3. Does VLOOKUP with MATCH in Google Sheets work the same as that in Excel?
Yes, VLOOKUP with MATCH works the same way in Google Sheets as it does in Excel.
Download Template
This article must be helpful to understand VLOOKUP With MATCH In Google Sheets, with its formula and examples. You can download the template here to use it instantly.
Recommended Articles
This has been a guide to What Is VLOOKUP with MATCH in Google Sheets. We explain how to use the formula in Google Sheets with examples & points to remember. You can learn more from the following articles –
Flocking with informed agents
Two similar Laplacian-based models for swarms with informed agents are proposed and analyzed analytically and numerically. In these models, each individual adjusts its velocity to match that of its
neighbors and some individuals are given a preferred heading direction towards which they accelerate if there is no local velocity consensus. The convergence to a collective group swarming state with
constant velocity is analytically proven for a range of parameters and initial conditions. Using numerical computations, the ability of a small group of informed individuals to accurately guide a
swarm of uninformed agents is investigated. The results obtained in one of our two models are analogous to those found for more realistic and complex algorithms for describing biological swarms,
namely, that the fraction of informed individuals required to guide the whole group is small, and that it becomes smaller for swarms with more individuals. This observation in our simple system
provides insight into the possibly robust dynamics that contribute to biologically effective collective leadership and decision-making processes. In contrast with the more sophisticated models
mentioned above, we can describe conditions under which convergence to consensus is ensured.
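As a rough illustration of the mechanism the abstract describes (agents matching their neighbors' velocities while an informed agent accelerates toward a preferred heading), here is a one-dimensional Python toy with all-to-all neighbors. It is not the authors' Laplacian model, and every parameter value is arbitrary:

```python
# 1-D toy: all-to-all neighbours, one informed agent preferring velocity 1.0.
n, steps, dt = 10, 2000, 0.1
alignment, conviction = 1.0, 0.5
v = [(-1) ** i * 0.5 for i in range(n)]      # alternating initial velocities

for _ in range(steps):
    mean_v = sum(v) / n
    new_v = []
    for i, vi in enumerate(v):
        dv = alignment * (mean_v - vi)       # relax toward the group mean
        if i == 0:                           # the single informed agent
            dv += conviction * (1.0 - vi)    # accelerate toward its heading
        new_v.append(vi + dt * dv)
    v = new_v

print([round(vi, 2) for vi in v])            # the group ends up near 1.0
```

Even with a single informed agent out of ten, the whole group is eventually steered to the preferred velocity, echoing the small-informed-fraction result discussed above.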
Classification: 93C15
Keywords: Particle systems, flocking
Author's affiliations:
Felipe Cucker; Cristián Huepe
author = {Felipe Cucker and Cristi\'an Huepe},
title = {Flocking with informed agents},
journal = {MathematicS In Action},
pages = {1--25},
publisher = {Soci\'et\'e de Math\'ematiques Appliqu\'ees et Industrielles},
volume = {1},
number = {1},
year = {2008},
doi = {10.5802/msia.1},
mrnumber = {2519063},
zbl = {1163.93306},
language = {en},
url = {https://msia.centre-mersenne.org/articles/10.5802/msia.1/}
Felipe Cucker; Cristián Huepe. Flocking with informed agents. MathematicS In Action, Volume 1 (2008) no. 1, pp. 1-25. doi : 10.5802/msia.1. https://msia.centre-mersenne.org/articles/10.5802/msia.1/
Continuity and Differentiability Class 12 MCQ Questions with Answer
Class 12 Mathematics Chapter 5 Continuity and Differentiability MCQ Question with Answer
Continuity and Differentiability Class 12 MCQs are one of the best ways to prepare for the CBSE Class 12 Board exam. If you want a complete grasp of the concepts or want to improve your score, there is no method except constant practice. Students can improve their speed and accuracy by doing more MCQs on Continuity and Differentiability Class 12, which will help them all through their board exam.
Continuity and Differentiability Class 12 MCQ Questions with Answer
Class 12 Maths MCQs with answers are given here for Chapter 5, Continuity and Differentiability. These MCQs are based on the latest CBSE board syllabus and the latest Class 12 Mathematics syllabus. By solving these Class 12 MCQs, you will be able to review all of the concepts in the chapter quickly and get ready for the Class 12 annual exam.

Learn Continuity and Differentiability Class 12 MCQs with answers (PDF free download) according to the latest CBSE and NCERT syllabus. Students should prepare for the examination by solving the CBSE Class 12 Mathematics Continuity and Differentiability MCQs with answers given below.
Question 1. The number of points at which the function f(x) = 1/(x − [x]) is not continuous is
(a) 1
(b) 2
(c) 3
(d) none of these
Question 2. The value of c in Rolle’s theorem for the function f(x) = x³ – 3x in the interval [0, √3] is
(a) 1
(b) –1
(c) 3/2
(d) 1/3
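A worked check for Question 2, assuming the interval is [0, √3] (which is what Rolle's theorem requires for this function, since f must take equal values at both endpoints):

```latex
f(x) = x^3 - 3x, \qquad f(0) = 0, \qquad f(\sqrt{3}) = 3\sqrt{3} - 3\sqrt{3} = 0,
\qquad f'(c) = 3c^2 - 3 = 0 \;\Rightarrow\; c = \pm 1,
\qquad c = 1 \in (0, \sqrt{3}).
```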
Question 3. The function f(x) = e^|x| is
(a) continuous everywhere but not differentiable at x = 0
(b) continuous and differentiable everywhere
(c) not continuous at x = 0
(d) none of these
Question 4. The function f(x) = (4 − x²)/(4x − x³) is
(a) discontinuous at only one point
(b) discontinuous at exactly two points
(c) discontinuous at exactly three points
(d) none of these
Question 5. The function f(x) = |x| + |x – 1| is
(a) continuous at x = 0 as well as at x = 1.
(b) continuous at x = 1 but not at x = 0.
(c) discontinuous at x = 0 as well as at x = 1.
(d) continuous at x = 0 but not at x = 1.
Question 6. The value of c in Rolle’s Theorem for the function f(x) = ex sin x, in [0, π] is
(a) π/6
(b) π/4
(c) π/2
(d) 3π/4
Question 7. Let f(x) = |sin x|. Then
(a) f is everywhere differentiable
(b) f is everywhere continuous but not differentiable at x = nπ, n ∈ Z
(c) f is everywhere continuous but not differentiable at x = (2n + 1)π/2, n ∈ Z
(d) none of these
Question 8. If f(x) = x sin(1/x), where x ≠ 0, then the value of the function f at x = 0, so that the function is continuous at x = 0, is
(a) 0
(b) –1
(c) 1
(d) None of these
Question 9. The function f(x) = [x], where [x] denotes the greatest integer function, is continuous at
(a) 4
(b) –2
(c) 1
(d) 1.5
Question 10. The function f : R → R given by f(x) = – |x – 1| is [CBSE 2020 (65/2/1)]
(a) continuous as well as differentiable at x = 1
(b) not continuous but differentiable at x = 1
(c) continuous but not differentiable at x = 1
(d) neither continuous nor differentiable at x = 1
Question 11. The value of c in Mean Value Theorem for the function f(x) = x(x – 2), x ∈ [1, 2] is
(a) 3/2
(b) 2/3
(c) 1/2
(d) 7/4
Question 12. If u = sin⁻¹(2x/(1 + x²)) and v = tan⁻¹(2x/(1 − x²)), then du/dv is
(a) 1/2
(b) x
(c) (1 − x²)/(1 + x²)
(d) 1
Question 13. If y = log √(tan x), then the value of dy/dx at x = π/4 is
(a) 0
(b) 1
(c) 1/2
(d) ∞
Question 14. Differential coefficient of sec(tan⁻¹x) w.r.t. x is
(a) x/√(1 + x²)
(b) x/(1 + x²)
(c) x√(1 + x²)
(d) 1/√(1 + x²)
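Answers like Question 14 can be sanity-checked numerically. The Python snippet below (not part of the original worksheet) compares a central-difference derivative of sec(tan⁻¹x) with the closed form x/√(1 + x²):

```python
import math

def f(x):
    # sec(t) = 1 / cos(t), so sec(arctan x) = 1 / cos(arctan x)
    return 1.0 / math.cos(math.atan(x))

def numeric_derivative(g, x, h=1e-6):
    # Central difference: error is O(h^2)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    closed_form = x / math.sqrt(1 + x * x)
    assert abs(numeric_derivative(f, x) - closed_form) < 1e-5
print("d/dx sec(arctan x) matches x/sqrt(1+x^2) at the sampled points")
```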
Question 15. The function f(x) = (x − 1)/(x(x² − 1)) is discontinuous at
(a) exactly one point
(b) exactly two points
(c) exactly three points
(d) no point
Question 16. If y = Ae^(5x) + Be^(−5x), then d²y/dx² is equal to
(a) 25 y
(b) 5 y
(c) –25 y
(d) 15 y
Question 17. For the curve √x + √y = 1, the value of dy/dx at (1/4, 1/4) is
(a) 1/2
(b) 1
(c) –1
(d) 2
Question 18. The set of points where the functions f given by f (x) = |x – 3| cos x is differentiable is
(a) R
(b) R – {3}
(c) (0, ∞)
(d) none of these
Question 19. If f′(1) = 2 and y = f(logₑx), then dy/dx at x = e is
(a) 0
(b) 1
(c) e
(d) 2/e
Question 20. If f(x) = 2x and g(x) = x²/2 + 1, then which of the following can be a discontinuous function?
(a) f(x) + g(x)
(b) f(x) – g(x)
(c) f(x) . g(x)
(d) g(x)/f(x)
Anyone preparing for the CBSE Class 12 Board Exam should work through these MCQs. Students who will appear in the CBSE Class 12 Mathematics Board Exam are advised to practice as many questions as possible, apart from the sample papers they have most likely already solved. These Continuity and Differentiability Class 12 MCQs have been prepared by subject specialists themselves.
Fill in the blanks
Question 1. d/dx sec(tan⁻¹x) = _____________ .
Questions 2. If f(x) = |cos x|, then f′(π/4) = _____________ .
Questions 3. If f(x) = (x + 1), then (d/dx) fof (x) = _____________ .
Question 4. The function f(x) = (2 − x²)/(9x − x³) is discontinuous at exactly _____________ points.
Question 5. If cos(xy) = k, where k is a constant and xy ≠ nπ, n ∈ Z, then dy/dx is equal to _____________ .
Question 6. The number of points of discontinuity of f defined by f(x) = |x| – |x + 1| is _____________.
Questions 7. The number of points at which the function f(x)=1/log|x| is discontinuous is _________ .
Question 8. If y = tan–1 x + cot–1 x, x ∈ R, then dy/dx is equal to _____________ .
You can easily get good marks if you study with the help of the Class 12 Continuity and Differentiability MCQs. We trust that the information provided is useful for you. The NCERT MCQ Questions for Class 12 Continuity and Differentiability PDF free download would without a doubt produce positive results.

We hope the information shared above regarding MCQs on Continuity and Differentiability Class 12 with Answers has been helpful to you. If you have any questions regarding the CBSE Class 12 Mathematics Solutions MCQs PDF, write a comment below and we will get back to you as soon as possible.
Frequently Asked Question (FAQs)
How many MCQ questions are there in Class 12 Chapter 5 Mathematics?
In Class 12 Chapter 5 Mathematics, we have provided 28 Important MCQ Questions, But in the future, we will add more MCQs so that you can get good marks in the Class 12 exam.
Can we score good marks in Class 12 Mathematics with the help of Continuity and Differentiability MCQ Questions?
Yes, practicing MCQ questions is one of the best strategies to improve your preparation for the CBSE Board Exam. It also helps gauge a student’s basic understanding of each chapter. So, you can score good marks in the Class 12 Mathematics exam.
On the Interaction between Player Heterogeneity and Partner Heterogeneity in Two-way Flow Strict Nash Networks
This paper brings together analyses of two-way flow Strict Nash networks under exclusive player heterogeneity assumption and exclusive partner heterogeneity assumption. This is achieved through
examining how the interactions between these two assumptions influence important properties of Strict Nash networks. Built upon the findings of Billand et al (2011) and Galeotti et al (2006), which
assume exclusive partner heterogeneity and exclusive player heterogeneity respectively, I provide a proposition that generalizes the results of these two models by stating that: (i) Strict Nash
network consists of multiple non-empty components as in Galeotti et al (2006), and (ii) each non-empty component is a branching or Bi network as in Billand et al (2011). This proposition requires
that a certain restriction on link formation cost (called Uniform Partner Ranking), which encloses exclusive partner heterogeneity and exclusive player heterogeneity as a specific case, is satisfied.
In addition, this paper shows that value heterogeneity plays a relatively less important role in changing the shapes of Strict Nash networks. | {"url":"http://coalitiontheory.net/content/interaction-between-player-heterogeneity-and-partner-heterogeneity-two-way-flow-strict-nash","timestamp":"2024-11-10T04:41:30Z","content_type":"application/xhtml+xml","content_length":"33357","record_id":"<urn:uuid:e4a33224-1fc1-4922-b462-9908e7b3d331>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00005.warc.gz"} |
Multi-point functional central limit theorem for Wigner Matrices
Preprint | Submitted | English
Consider the random variable $\mathrm{Tr}( f_1(W)A_1\dots f_k(W)A_k)$ where $W$ is an $N\times N$ Hermitian Wigner matrix, $k\in\mathbb{N}$, and choose (possibly $N$-dependent) regular functions
$f_1,\dots, f_k$ as well as bounded deterministic matrices $A_1,\dots,A_k$. We give a functional central limit theorem showing that the fluctuations around the expectation are Gaussian. Moreover, we
determine the limiting covariance structure and give explicit error bounds in terms of the scaling of $f_1,\dots,f_k$ and the number of traceless matrices among $A_1,\dots,A_k$, thus extending the
results of [Cipolloni, Erdős, Schröder 2023] to products of arbitrary length $k\geq2$. As an application, we consider the fluctuation of $\mathrm{Tr}(\mathrm{e}^{\mathrm{i} tW}A_1\mathrm{e}^{-\mathrm
{i} tW}A_2)$ around its thermal value $\mathrm{Tr}(A_1)\mathrm{Tr}(A_2)$ when $t$ is large and give an explicit formula for the variance.
Cite this
J. Reker, “Multi-point functional central limit theorem for Wigner Matrices,” arXiv. .
E7 (mathematics)
In mathematics, E7 is the name of several closely related Lie groups, linear algebraic groups or their Lie algebras e7, all of which have dimension 133; the same notation E7 is used for the corresponding root lattice, which has rank 7. The designation E7 comes from the Cartan–Killing classification of the complex simple Lie algebras, which fall into four infinite series labeled An, Bn, Cn, Dn, and five exceptional cases labeled E6, E7, E8, F4, and G2. The E7 algebra is thus one of the five exceptional cases.

The fundamental group of the (adjoint) complex form, compact real form, or any algebraic version of E7 is the cyclic group Z/2Z, and its outer automorphism group is the trivial group. The dimension of its fundamental representation is 56.

There is a unique complex Lie algebra of type E7, corresponding to a complex group of complex dimension 133. The complex adjoint Lie group E7 of complex dimension 133 can be considered as a simple real Lie group of real dimension 266. This has fundamental group Z/2Z, has maximal compact subgroup the compact form (see below) of E7, and has an outer automorphism group of order 2 generated by complex conjugation.

As well as the complex Lie group of type E7, there are four real forms of the Lie algebra, and correspondingly four real forms of the group with trivial center (all of which have an algebraic double cover, and three of which have further non-algebraic covers, giving further real forms), all of real dimension 133, as follows:

- The compact form (which is usually the one meant if no other information is given), which has fundamental group Z/2Z and trivial outer automorphism group.
- The split form, EV (or E7(7)), which has maximal compact subgroup SU(8)/{±1}, fundamental group cyclic of order 4 and outer automorphism group of order 2.
- EVI (or E7(-5)), which has maximal compact subgroup SU(2)·SO(12)/(center), fundamental group non-cyclic of order 4 and trivial outer automorphism group.
Kickstarting R - Contingency tables
How do I get a crosstab?
You've been locked in a room with a PC containing the data for 248 subjects and they won't let you have lunch until you have crosstabulated all the demographic data. It's almost noon and you only
have R. You hesitantly try
> table(infert$education,infert$parity)
and you get a very sparse tabulation of the parity (number of births) by educational attainment. You try the enhanced version of this function,
> xtabs(~education+parity,data=infert)
and are faced with a slightly more informative display. Unfortunately, you know that Bronwyn will want to know what percentage of women who completed high school had 2 or fewer children and Hans will
have to have a chi-squared test for every contingency table. Let's see what can be done. R follows the precepts of a bunch of brilliant people at Bell Labs in making statistics modular. That is,
individual functions do fairly simple, general things very well, and intelligently combining the modules will do almost anything that you want. The beginner's problem is usually figuring out what the
heck are the functions that will do the particular things that they want. We'll use the example data frame infert provided with R to illustrate how to build on that. First, let's find and retrieve
the data.
> data()
freeny Freeny's Revenue Data
infert Secondary infertility matched case-control study
iris Edgar Anderson's Iris Data as data.frame
> data(infert)
A quick summary of the data will reveal that parity ranges from 1-6. This will have to be reduced to two categories. That's pretty easy to do by assigning the output of a logical comparison.
> gt2<-infert$parity>2
> table(infert$education,gt2)
The observant reader may ask why the comparison "greater than" was used rather than "less than or equal to". Convenience is the answer. By default, R orders factors, and FALSE (0) is less than TRUE
(1). Using "greater than" here gets the factors "right way round", rather than having "more than 2" in the first column and "less than or equal to 2" in the second. When factors are coded as labels,
they are ordered alphabetically. You can explicitly order factors if you wish.
This is still a pretty laconic table which will have to be explained. Putting the dimnames in will help.
> table(infert$education,gt2,dnn=c("Education","Parity"))
It would also be nice if there were some descriptive labels rather than just "FALSE" and "TRUE". The really useful function ifelse() will do the trick.
> gt2<-ifelse(infert$parity>2,"Over 2","2 or less")
> table(infert$education,gt2,dnn=c("Education","Parity"))
Notice how the labels have been doctored so that they will be in the conventional order. Now we have a reasonable looking contingency table, but what about Bronwyn's percentages and Hans'
chi-squares? We're going to have to go a bit beyond what table() will do to get output that will satisfy them. Let's go through the function format.xtab().
First, we check that the minimal data is there, then get the base table from which to derive the rest of the information. In order to calculate the percentages, we'll need the row and column sums.
These can be calculated in one hit by using apply(). Next up come the row and column names. Here, formatC() pops up. Plain old format() would have formatted each set of labels to the length of the
longest label plus 1, but if we want a neat table, we want all of the labels to be the same length. Also notice that the fieldwidth has been given the default value of 10, allowing the user to shrink
or expand the columns. dnn is given a default value if none was passed, and we're ready to go.
First the variable names (dnn) and the column names, then each of the rows, starting with the cell counts and row counts, the cell row percentages and the overall row percentages and then the cell
column percentages. After that, the column counts and grand total and the column percentages. Finally, if a chi-square test was ordered by including the argument chisq=T, the rather complicated bit
at the bottom to print out the values of the chi-square test will do its stuff. It would be simpler just to run the chi-square test and let it print itself, but we would then get variables labeled as
v1 and v2, which might be confusing. You'll also notice when you run this function that chisq.test() warns you that some of the cells have smaller than recommended counts. You may wish to recode
educational attainment to two categories as an exercise.
But wait, I want more than two dimensions!
Contingency tables with more than two dimensions can be pretty difficult to interpret, and the chi-square test will only handle 2D at present anyway. However, that's no reason to get spooked. In the
same file as format.xtab() is the xtab() function.
If you pass it a two element formula, it will act just like format.xtab().
If you ask for more than two dimensions, it will print out hierarchical counts and percentages of all levels of variables starting at the last one in the formula. When it gets to the first two, it
will print out 2D contingency tables for those variables. It gets silly pretty quickly. Both table() and ftable() will also display multi-way crosstabulations.
This is also an introduction to the use of recursion, in which a function calls itself until whatever test you have set is satisfied. In this case, the function stops calling itself when there are at
most two variables to be crosstabulated.
Get the nachos, you deserve it.
For more information, see Introduction to R: Frequency tables from factors.
(FAQ) Astronomical Calculations for the Amateur
[This FAQ is limited to questions about calculating planetary positions and related problems of spherical astronomy. Other areas of interest, such as calculations of telescope optics, are beyond the
bounds of this document].
Calculation of astronomical events is a vast field with literature stretching back centuries, even to ancient times. This "frequently asked questions" list is directed toward the amateur astronomer
who is looking for starting points. If you become familiar with the first two books recommended below, you will be well on your way. You will, in fact, have surpassed the author of the FAQ!
1. What is a good source of books and software? The Willmann-Bell (http://www.willbell.com/) printed catalog has a large section on "Computational Astronomy", as well as many other astronomy books,
atlases and telescope-making supplies: Willmann-Bell Inc
PO Box 35025
Richmond VA 23235
Monday-Friday, 9AM-5PM Eastern time
800-825-STAR (order only)
24 hour fax: 804-272-5920
If you have access to a good library, books under the subject headings "Spherical Astronomy" and "Celestial Mechanics" would be the places to start.
2. What is the best beginner's book? Astronomical Algorithms by Jean Meeus, Willmann-Bell, Second Edition 1998, $24.95.
Although it requires some study, this is the closest thing to a "cookbook" approach I have seen. Better than that, it explains and makes comprehensible many difficult concepts, and has many
worked examples and illustrations. It is not restricted to elementary problems, but treats many advanced topics. No calculus is required.
Beginners face two obstacles before they can calculate anything useful: (1) they must learn to convert between civil and astronomical dates and times (a task made more difficult by the fact that
the Earth's rate of rotation is variable), and (2) they must learn a number of translations between coordinate systems (Sun-centered to Earth-centered to location-centered, as well as ecliptic to
equatorial to horizon) and the application of corrections for precession and nutation and parallax. This is why questions such as "How do I predict the location of the moon?" do not have simple
answers. You must know how to do (1) and (2) before you can start on the moon.
The proper order of corrections and coordinate conversions had previously been very confusing for me, but Meeus gave me everything I needed to overcome these obstacles.
He covers the basics of time and coordinate transformations, corrections for precession and nutation, and for the observer's true "topocentric" location as offset from the center of the Earth.
For any given time, you can predict the positions of the Sun, Moon and planets and derive all the normal phenomena of the almanac. You can derive physical ephemerides (that is, the orientation of
the objects as seen through a telescope) for the Sun, Moon, Jupiter, Mars and Saturn's rings. He provides both low-precision and high-precision techniques for charting Jupiter's four largest
moons. The Keplerian techniques of dealing with the orbits of new bodies such as comets and asteroids are also given.
A software supplement was available for the first edition, but this is no longer the case.
3. How much computer power does it take to perform these calculations? Modern personal computers, especially those with floating point hardware, are very capable machines. Calculating the position
of all the planets several different ways, using Meeus' techniques, takes my 68040 a small fraction of a second. Performance on a PowerPC or Pentium would be stunning.
4. What is a more advanced reference work? Explanatory Supplement to the Astronomical Almanac, edited by P.K. Seidelmann, University Science Books 1992, 752 pages, $65 (available from
"Completely Revised and Rewritten", so make sure you get the 1992 edition.
This explains how the data in the annual "Astronomical Almanac" is produced. It is also a high-quality spherical astronomy text with many references to the current research literature. If you've
read Meeus and want "more", this is the logical next step.
Note that it contains very few worked examples and the math is much more advanced than in Meeus. Some of the chapters deal with issues of the professional astronomer that will not usually concern
the amateur. Examples: plate tectonic motion can cause an observing site to shift its position several centimeters per year. Ocean tidal pressure on the continental shelves, and atmospheric
pressure above the continents, can cause elevation to vary by similar amounts.
Note also that they use a different method of calculating planetary positions than does Meeus.
5. Are there any relevant periodicals for amateurs? Sky & Telescope magazine has an astronomical computing column.
Astronomy publishes programs from time to time.
Willmann-Bell sells back issues of Celestial Computing, "A Journal for Personal Computers and Celestial Mechanics", dated from 1988 through 1992, edited by David Eagle. This is no longer published.
The Association of Lunar and Planetary Observers (A.L.P.O.) has a Computing Section and an electronic journal called The Digital Lens: http://www.m2c3.com/alpocs/
6. Where are online sources of algorithms? Sky & Telescope maintains an archive of program sources which have appeared in the magazine: http://www.skypub.com/software/software.html
Unfortunately, these consist of uncommented BASIC listings. Pseudo-code articles would be of greater use to those trying to understand the calculations. Astronomy magazine provides a small set of
BASIC programs: http://www.kalmbach.com/astro/Bytes/Bytes.html Keith Burnett (kburnett@btinternet.com) maintains an "Approximate astronomical positions" web page containing algorithms and many
links: http://www.btinternet.com/~kburnett/kepler/
http://www.stargazing.net/kepler/ Paul Schlyter (pausch@saaf.se) has a "Calculating Planetary Positions" web page at: http://hotel04.ausys.se/pausch/comp/ppcomp.html Sites listed in the next
topic also have software.
7. Where are online sources of data? There are astronomical amounts of data online. Try these web sites as starting points:
8. What commercial and shareware programs are available? [Readers: I have not been paying attention to announcements of these programs in sci.astro.amateur. Anyone who has such or knows of same,
please e-mail me the info and I will include descriptions here.] The emphasis is not on "planetarium" or charting programs, but on ephemeris-generating software. Obviously, these categories overlap.
□ The freeware ephemeris program "ephem" for PC by Elwood Charles Downey (and VGA `Watch' plots by J.D. McDonald) is available at: ftp://ftp.funet.fi/pub/astro/progs/pc/solar/ephem423.exe (self
extracting archive.) The same site carries many other ephemeris programs also for other platforms.
(Nov 15 1997) There is a Web page for the Motif version at: http://www.clearskyinstitute.com/xephem/xephem.html
□ (Dec 7 1995) Dave Lane, Nova Astronomics (dlane@ap.stmarys.ca) says I have recently completed a freeware program which might interest you. It's called the "Windows Ephemeris Tool" and it
calculates tables of positions (and other data) for comets and asteroids.
It's available at: http://fox.nstn.ca/~ecu/ecu.html
□ (Jun 1 1996) Stephen Tonkin (sft@aegis1.demon.co.uk) says: I am very impressed with a program called ASTROWIN, sometimes referred to as ASTROMEUSS (It uses Meeus' algorithms). It is simple,
fast and accurate. Text-only output. I use it a lot.
This is for DOS and Windows, and is on the web at: ftp://ftp.demon.co.uk/pub/misc/astronomy/winmeuss.exe Caution: there is another program called ASTROWIN for astrology.
□ Willmann-Bell sells several software supplements which have ephemeris capabilities. See their catalog ([1] above) for details.
□ (Jan 31 1997) Bill Arnet (billa@znet.com) maintains links to planetarium programs that can be found on the net at: http://www.seds.org/billa/astrosoftware.html
9. How do I convert right ascension and declination to altitude and azimuth? Given the hour angle H of the object with right ascension RA and declination DEC, and the observer's latitude LAT:
azimuth = atan2(sin(H), cos(H) * sin(LAT) - tan(DEC) * cos(LAT))
altitude = asin(sin(LAT) * sin(DEC) + cos(LAT) * cos(DEC) * cos(H)) where "atan2(x,y)" is the C-library function equivalent to "atan(x/y)" but evaluated in the correct quadrant.
Bill Owen (wmo@wansor.jpl.nasa.gov) offers the following comments: For the azimuth, it might be better to multiply both numerator and denominator by cos(DEC). Granted that the answer should turn
out the same either way, since 0/something = something else/infinity, but you'll avoid the overflow that would otherwise result when you compute tan(DEC) near the poles.
Also, the formula you have here is zero when you're looking south. Although there are different conventions, the most common one reckons azimuth eastward from *north*.
Combine these nits, and the formula I use is: azimuth = atan2 (-sin(H)*cos(DEC), cos(LAT)*sin(DEC) - sin(LAT)*cos(DEC)*cos(H) )
10. What's the hour angle? Given an object with right ascension RA and the observer's longitude LONG, and the sidereal time at Greenwich ST: H = ST - LONG - RA where LONG is positive to the west and
ST is represented as an angle. If you measure longitude to the east: H = ST + LONG - RA.
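The formulas in questions 9 and 10 translate directly into a short script. This is an illustrative sketch (the function names are mine, not from any of the references above): angles are in degrees, azimuth is reckoned eastward from north per Bill Owen's convention, and longitude is taken positive to the west.

```python
import math

def hour_angle_deg(sidereal_deg, longitude_deg, ra_deg):
    # H = ST - LONG - RA, longitude positive to the west (question 10)
    return (sidereal_deg - longitude_deg - ra_deg) % 360.0

def equatorial_to_horizontal(hour_angle, dec, lat):
    # Bill Owen's atan2 form: azimuth measured eastward from north; no
    # overflow near the poles because tan(DEC) never appears.
    H, d, p = (math.radians(a) for a in (hour_angle, dec, lat))
    az = math.atan2(-math.sin(H) * math.cos(d),
                    math.cos(p) * math.sin(d) - math.sin(p) * math.cos(d) * math.cos(H))
    alt = math.asin(math.sin(p) * math.sin(d) + math.cos(p) * math.cos(d) * math.cos(H))
    return math.degrees(az) % 360.0, math.degrees(alt)

# Object crossing the meridian (H = 0) at declination 0, observer at latitude 40 N:
az, alt = equatorial_to_horizontal(0.0, 0.0, 40.0)   # az = 180 (due south), alt = 50
```

For real work you would first correct the catalog right ascension and declination for precession and nutation, as Meeus describes.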
11. What's the sidereal time? Everything seems to depend on something else, doesn't it? Better get the Meeus book described in [2] above.
12. How do I predict the ocean tides? This is not commonly done by amateurs. The Explanatory Supplement has a small section on the subject and the method seems quite complex.
13. How do I calculate the date of Easter? Many people know the formula: Easter is the first Sunday after the first full Moon following the vernal equinox. Caution! This is "astronomical Easter", and
it is usually but not always the same day as "ecclesiastical Easter", which is the date used by the churches and printed on calendars. "Ecclesiastical Easter" is determined by a formula codified
many years ago.
Here is the method published in the Explanatory Supplement. Perform integer math and drop all remainders. It is valid for any Gregorian year "Y": C = Y / 100
N = Y - 19 * (Y / 19)
K = (C - 17) / 25
I = C - C / 4 - (C - K) / 3 + 19 * N + 15
I = I - 30 * (I / 30)
I = I - (I / 28) * (1 - (I / 28) * (29 / (I + 1)) * ((21 - N) / 11))
J = Y + Y / 4 + I + 2 - C + C / 4
J = J - 7 * (J / 7)
L = I - J
M = 3 + (L + 40) / 44
D = L + 28 - 31 * (M / 4) "M" is the month number (3 -> March, 4 -> April) and "D" is the day of the month.
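As a sanity check, the recipe above translates line for line into code (a sketch; `//` below is the same integer division with dropped remainder that the text prescribes):

```python
def gregorian_easter(year):
    # Explanatory Supplement algorithm, valid for any Gregorian year.
    c = year // 100
    n = year - 19 * (year // 19)
    k = (c - 17) // 25
    i = c - c // 4 - (c - k) // 3 + 19 * n + 15
    i = i - 30 * (i // 30)
    i = i - (i // 28) * (1 - (i // 28) * (29 // (i + 1)) * ((21 - n) // 11))
    j = year + year // 4 + i + 2 - c + c // 4
    j = j - 7 * (j // 7)
    l = i - j
    month = 3 + (l + 40) // 44          # 3 = March, 4 = April
    day = l + 28 - 31 * (month // 4)
    return month, day

# gregorian_easter(2000) -> (4, 23); gregorian_easter(2024) -> (3, 31)
```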
There is a short BASIC program at http://www.skypub.com/software/software.html See also the informative Royal Observatory leaflet on Easter at: http://www.rog.nmm.ac.uk/leaflets/easter/
easter.html There is an HTML Ecclesiastical Calendar generator at: http://cssa.stanford.edu/~marcos/ec-cal.html See also the Calendar FAQ at: http://www.tondering.dk/claus/calendar.html Tidbits:
the pattern of Gregorian Easter days, one year to the next, repeats in a cycle 5,700,000 years long. March 22 is the earliest date of Easter, April 25 is the latest, and April 19 is the most common.
14. How fast does that comet (or asteroid) move? From Harald Lang (lang@math.kth.se).
The current speed of a body like a comet orbiting the sun, or in a hyperbolic or parabolic orbit, is: 2 * pi * sqrt(2/r - (1-e)/q) AU/year
where r is the current distance in AU to the sun, q is the perihelion distance in AU, and e is the eccentricity of the orbit.
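A quick way to check the formula is to apply it to a circular orbit at 1 AU (e = 0, q = r = 1 AU), which should reproduce the Earth's mean orbital speed of roughly 29.8 km/s. A sketch (the constants and names here are mine):

```python
import math

AU_KM = 1.495978707e8          # kilometers per astronomical unit
YEAR_S = 365.25 * 86400.0      # seconds per Julian year

def orbit_speed_au_yr(r, q, e):
    # v = 2*pi*sqrt(2/r - (1 - e)/q), with r and q in AU, result in AU/year
    return 2.0 * math.pi * math.sqrt(2.0 / r - (1.0 - e) / q)

v = orbit_speed_au_yr(1.0, 1.0, 0.0)   # circular orbit at 1 AU: v = 2*pi AU/yr
v_km_s = v * AU_KM / YEAR_S            # roughly 29.8 km/s, Earth's mean speed
```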
15. How do I find my longitude and latitude? Here are some sites that give longitude and latitude information.
It has been suggested to me that the following precisions are appropriate for the applications shown: 100 miles for most skyviewing work, 2 miles for accurately predicting Iridium flares, 50 feet
for occultation work.
This document is archived at:
Self-referential method and apparatus for creating stimulus representations that are invariant under systematic transformations of sensor states
The inventive method and apparatus include sensory devices that invariantly represent stimuli in the presence of processes that cause systematic sensor state transformations. Such processes include:
1) alterations of the device's detector, 2) changes in the observational environment external to the sensory device and the stimuli, and 3) certain modifications of the presentation of the stimuli
themselves. A specific embodiment of the present invention is an intelligent sensory device having a “front end” comprised of such a representation “engine”. The detectors of such a sensory device
need not be recalibrated, and its pattern analysis module need not be retrained, in order to account for the presence of the above-mentioned transformative processes. Another embodiment of the
present invention is a communications system that encodes messages as representations of signals. The message is not corrupted by signal transformations due to a wide variety of processes affecting
the transmitters, receivers, and the channels between them.
[0001] This application claims the benefit of priority from copending provisional application Ser. No. 60/235,695 filed on Sep. 27, 2000, entitled Self-Referential Method and Apparatus For
Creating Stimulus Representations That Are Invariant Under Systematic Transformations Of Sensor States, which is hereby incorporated by reference in its entirety.
[0002] A portion of this disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by
anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office Patent files or records, but otherwise reserves all copyright rights whatsoever.
[0003] The present invention relates generally to a method and apparatus that senses stimuli, creates internal representations of them, and uses its sensor data and their representations to
understand aspects of the nature of the stimuli (e.g., recognize them). More specifically, the present invention is a method and apparatus that represents sensed stimuli in a manner that is invariant
under systematic transformations of the device's sensor states. This device need not recalibrate its detector and/or retrain its pattern analysis module in order to account for sensor state
transformations caused by extraneous processes (e.g., processes affecting the condition of the device's detectors, the channel between the stimuli and the device, and the manner of presentation of
the stimuli themselves).
[0004] Most intelligent sensory devices contain pattern recognition software for analyzing the state of the sensors that detect stimuli in the device's environment. This software is usually
“trained” to classify a set of sensor states that are representative of the “unknown” sensor states to be subsequently encountered. For instance, an optical character recognition (OCR) device might
be trained on letters and numbers in images of printed pages. Or, a speech recognition device may be trained to recognize the spoken words of a particular speaker. After these devices have been
trained, their performance may be degraded if the correspondence between the stimuli and sensor states is altered by factors extrinsic to the stimuli of interest. For example, the OCR device may be
“confused” by distortions of pixel patterns due to a derangement of the camera's optical/electronic path, or it may be unfamiliar with pixel intensity changes due to altered intensity of illumination
of the printed page. Similarly, the speech recognition device may be compromised if the microphone's output signal is altered by changes in the microphone's internal response characteristics, or it
may fail to recognize words if the frequency spectrum of sound is altered by changes in the transfer function of the “channel” between the speaker's lips and the microphone. These processes
systematically deform the sensor states elicited by stimuli and thereby define a mapping of sensor states onto one another. If such transformations map one of the sensor states in the training set
onto another one (e.g., the pixel intensity pattern of one letter is mapped onto that of another letter), the pattern recognition software will misclassify the corresponding stimuli. Likewise, the
device will not recognize a stimulus in the training set if its original sensor state has been transformed into one outside of the training set.
[0005] These problems can be addressed by periodically recalibrating the device's detector to account for sensor state transformations caused by changed conditions. For example, the device
can be exposed to a stimulus consisting of a test pattern that produces a known sensor state under “normal” conditions. The observed differences between the actual sensor state and ideal sensor state
for this test stimulus can be used to correct subsequently encountered sensor states. Alternatively, the device's pattern analysis (e.g. pattern recognition) module can be retrained to recognize the
transformed sensor states. These procedures must be implemented after each change in observational conditions in order to account for time-dependent distortions. Because the device may not be able to
detect the presence of such a change, it may be necessary to recalibrate or retrain it at short fixed intervals. However, this will decrease the device's duty cycle by frequently taking it
“off-line”. Furthermore, the recalibration or retraining process may be logistically impractical in some applications (e.g., computer vision and speech recognition devices at remote locations).
[0006] A similar problem occurs when the fidelity of electronic communication is degraded due to distortion of the signal as it propagates through the transmitter, receiver, and the channel
between them. Most communications systems attempt to correct for these effects by periodically transmitting calibration data (e.g., test patterns) so that the receiver can characterize the distortion
and then compensate for it by “unwarping” subsequently received signals. As mentioned above, these techniques may be costly because they periodically take the system “off-line” or otherwise reduce
its efficiency.
[0007] The present invention substantially overcomes the disadvantages of prior sensory devices by providing a novel self-referential method and apparatus for creating stimulus
representations that are invariant under systematic transformations of sensor states. Because of the invariance of the stimulus representations, the device effectively “filters out” the effects of
sensor state transformations caused by extraneous processes (e.g., processes affecting the condition of the sensory device, the channel between the stimulus and the sensory device, and the manner of
presentation of the stimulus itself). This means that the device can use these invariant representations to understand the nature of the stimuli (e.g., to recognize them), without explicitly
accounting for the transformative processes (e.g., without recalibrating the device's detector and without retraining its pattern recognition module).
[0008] The behavior of this device mimics some aspects of human perception, which is remarkably invariant when raw signals are distorted by a variety of changes in observational conditions.
This has been strikingly illustrated by experiments in which subjects wore goggles creating severe geometric distortions of the observed scene. For example, the visual input of some subjects was
warped non-linearly, inverted, and/or reflected from right to left. Although the subjects initially perceived the distortion, their perceptions of the world returned to the pre-experimental baseline
after several weeks of constant exposure to familiar stimuli seen through the goggles. For example, lines reported to be straight before the experiment were initially perceived to be warped, but
these lines were once again reported to be straight after several weeks of viewing familiar scenes through the distorting lenses. Similar results were observed when the goggles were removed at the
end of the experiment. Namely, the world initially appeared to be distorted in a manner opposite to the distortion due to the lenses, but eventually no distortion was perceived. These experiments
suggest that humans utilize recent sensory experiences to adaptively “recalibrate” their perception of subsequent sensory data. There are many other examples of how our percepts are often invariant
under changed observational conditions. For example, human observers are not usually confused by a different intensity of illumination of a scene. Although the raw sensory state of the observer is
altered by this change, this is usually not attributed to changed intrinsic properties of the stimulus of interest (e.g., the scene). Similarly, humans perceive the information content of ordinary
speech to be remarkably invariant, even though the signal may be transformed by significant alterations of the speaker's voice, the listener's auditory apparatus, and the channel between them. Yet
there is no evidence that the speaker and listener exchange calibration data in order to characterize and compensate for these distortions. Rather, these observations suggest that the speech signal
is redundant in the sense that listeners extract the same content from multiple acoustic signals that are transformed versions of one another. Finally, it is worth noting the tendency of different
persons to share the same perceptions of the world, despite obvious differences in their sensory organs and processing pathways. This “universality” of perception may also be due to the apparent
ability of each individual to “filter out” the effects of systematic sensor state transformations, including the transformations relating his/her sensor states to those of other individuals.
[0009] The present invention is a sensory method and apparatus that creates stimulus representations that are invariant in the presence of processes that remap its sensor states. These
representations may share the following properties of human percepts: immediately after the onset of such a process, they may be affected, but they eventually adapt to the presence of sensor state
transformations and return to the form that would have been produced in the absence of the transformative process. In order to see how to design such a device, consider any process that
systematically alters the correspondence between the stimuli and the sensor states. For example, consider: 1) changes in the performance of the device's detectors (e.g., drifting gain of a detector
circuit or distortion of an electronic image in a camera), 2) alterations of observational conditions that are external to the detectors and the stimuli (e.g., different intensity of a scene's
illumination or different positioning of the detectors with respect to the stimuli), 3) systematic modifications of the presentation of the stimuli themselves (e.g., systematic warping of printed
pages or systematic morphing of a voice). Because of such changes, a stimulus that formerly resulted in sensor state x will now induce another sensor state x′. Let the array of numbers x
corresponding to a sensor state be the coordinates of that state on the manifold of possible sensor states. In this language, the above-mentioned processes systematically transform the absolute
coordinates of the sensor state associated with each stimulus. However, certain relationships between the coordinates of a collection of sensor states may remain invariant in the presence of such a
process. This is analogous to the fact that the physical rotation or translation of a collection of particles in a plane does not affect the relationships among the members of the collection, even
though the absolute coordinates of each particle are transformed. For example, Euclidean coordinate geometry can be used to describe the relative positions of such particles in terms of a “natural”
internal coordinate system (or scale) that is rooted in the collection's intrinsic structure; i.e., the coordinate system that originates at the collection's center of “mass” and is oriented along
its principal moments of “inertia”. Such a self-referential description is invariant under global rotations and translations that change the absolute coordinates of each particle. This suggests the
following strategy: if we describe stimuli in terms of the relationships among their sensor states, we may be able to represent them in a way that is not affected by the above-described
transformative processes. Specifically, we show that a sufficiently dense collection of sensor states in a time series has a locally defined structure that can be used to describe the relationship
between each sensor state and the whole time series. Because this description is referred to the local structure of the collection of sensor states in the time series, it is invariant under any
linear or non-linear transformations of all of the states in the collection. Now consider a specific embodiment of the invention that uses this method and apparatus to describe stimuli in terms of
recently encountered stimuli. If a sufficient time has elapsed since the onset of a transformative process, each stimulus will be represented by the relationship between its transformed sensor state
and a collection of recently encountered transformed sensor states. The resulting representation will identical to the one that would have been derived in the absence of the transformative process:
namely, the representation describing the relationship between the corresponding untransformed sensor state and the collection of recently encountered untransformed sensor states. Furthermore, the
stimulus will be represented in the same way as it was before the onset of the transformative process, as long as both representations were referred to collections of sensor states (transformed and
untransformed) that were produced by the same sets of stimuli. In essence, the temporal stability of this type of stimulus representation is due to the stability of the device's recent “experience”
(i.e., the stability of the set of recently encountered stimuli to which descriptions are referred). Immediately after the onset of a transformative process, the representation of a stimulus may
drift during the transitional period when the device is referring its description to a mixed collection of untransformed and transformed sensor states. However, as in the human case, the
representation of each stimulus will eventually revert to its baseline form when the collection of recently encountered states is entirely comprised of transformed sensor states. In sensory devices of
this type, the sensor signal is represented by a non-linear function of its instantaneous level at each time, with the form of this scale function being determined by the collection of signal levels
encountered during a certain time period (e.g., during a recent period of time) [Levin, D. N., “Time-dependent signal representations that are independent of sensor calibration”, Journal of the
Acoustical Society of America, Vol. 108, p. 2575, 2000; Levin, D. N., “Stimulus representations that are invariant under invertible transformations of sensor data”, Proceedings of the Society of
Photoelectronic Instrumentation Engineers, Vol. 4322, pp. 1677-1688, 2001; Levin, D. N., “Universal communication among systems with heterogeneous ‘voices’ and ‘ears’ ”, Proceedings of the
International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet, Scuola Superiore G. Reiss Romoli S.p.A., L'Aquila, Italy, Aug. 6-12, 2001].
This rescaled signal is invariant if the signal levels at all relevant times are invertibly transformed by the same distortion. This is because the relationship between each untransformed
signal level and the scale derived from the collection of untransformed signal levels is the same as the relationship between the corresponding transformed signal level and the scale derived from the
collection of transformed signal levels. This can be understood in the context of the above-described analogy, involving the positions of particles in a plane. Each particle's position with respect
to the collection's intrinsic coordinate system or scale is invariant under rigid rotations and translations that change all particle coordinates in an extrinsic coordinate system. This is because
each particle and the collection's intrinsic coordinate system are rotated and translated in the same manner. According to the present invention, the signal levels detected by the sensory device in a
suitable time period have an intrinsic structure that defines a non-linear coordinate system (or scale) on the manifold of possible signal levels. The “location” of the currently detected signal
level with respect to this intrinsic coordinate system is invariant under any invertible transformation (linear or non-linear) of the entire signal time series. This is because the signal level at
any time and the scale function at the same time point are transformed in a manner that leaves the rescaled signal level unchanged.
[0010] As suggested above, the task of representing stimuli in an invariant fashion can be reduced to the mathematical task of describing sensor state relationships that are not affected by
systematic transformations on the sensor state manifold. Now, assume that the change in observational conditions defines a one-to-one transformation of the sensor states. This requirement simply
excludes processes (e.g., a change in the spectral content of scene illumination) that make it possible to distinguish previously indistinguishable stimuli or that obscure the difference between
previously distinguishable stimuli. Such a process has exactly the same effect on sensor state coordinates as a change of the coordinate system on the manifold (x→x′) in the absence of the process. This
is analogous to the fact that the physical rotation of an array of particles in a plane has the same effect on their coordinates as the inverse rotation of the axes of the coordinate system.
Therefore, the task of finding sensor state relationships that are independent of transformative processes is mathematically equivalent to the task of describing sensor state relationships in a
coordinate-independent manner. In other words, the relationships among the sensor states must be described in a manner that is independent of the coordinate system used to label them. In specific
embodiments of the invention, differential tensor calculus and differential geometry are used to provide the mathematical machinery for deriving such coordinate-independent descriptions of a time
series of points on a manifold.
[0011] The features of the present invention which are believed to be novel are set forth with particularity in the appended claims. The invention, together with further objects and
advantages thereof, may best be understood by reference to the following description in conjunction with the accompanying drawings.
[0012] FIG. 1 is a pictorial diagram of a specific embodiment of a sensory device, according to the present invention, in which energy from a stimulus is detected and processed by a sensor
to produce a sensor state characterized by an array of numbers x. The method and apparatus described in the present invention are used to generate a stimulus representation s from the sensor state x
and sensor states encountered at chosen time points, and that representation is then subjected to higher level analysis (e.g., pattern recognition). The detectors, processing unit, representation
generator, and analysis module are connected to a computer that is comprised of components selected from the group consisting of a central processing unit, memory unit, display unit, mouse, and
[0013] FIG. 2 is a pictorial illustration of a specific embodiment of a path x(u) (0≦u≦1) between a reference sensor state x0 and a sensor state of interest. If vectors ha can be
defined at each point along the path, each line segment δx can be decomposed into its components δsa along the vectors at that point;
[0014] FIG. 3a is a pictorial illustration of an untransformed signal x(t) describing a long succession of identical pulses that are uniformly spaced in time;
[0015] FIG. 3b is a pictorial illustration of the signal representation S(t) that results from applying the rescaling method in Section II.A either to the signal in FIG. 3a or to the
transformed version of that signal in FIG. 3c;
[0016] FIG. 3c is a pictorial illustration of the signal obtained by subjecting the signal in FIG. 3a to the distortion: x′(x)=g1ln(1+g2x) where g1=0.5 and g2=150;
[0017] FIG. 4a is a pictorial illustration of the signal obtained by digitizing the acoustic signal of the word “door”, uttered by an adult male speaker of American English. A 40 ms segment
in the middle of the 334 ms signal is shown, with time given in ms. The horizontal lines show signal amplitudes that have rescaled values equal to s=±50n for n=1, 2, . . . ;
[0018] FIG. 4b is a pictorial illustration of the signal S(t) (in units of μs) obtained by rescaling the signal in FIG. 4a, with the parameter ΔT=10 ms;
[0019] FIG. 4c is a pictorial illustration of the non-linear function x′(x) that was used to transform the signal in FIG. 4a into the one in FIG. 4d;
[0020] FIG. 4d is a pictorial illustration of the transformed version of the signal in FIG. 4a, obtained by applying the non-linear transformation in FIG. 4c;
[0021] FIG. 4e is a pictorial illustration of the signal obtained by rescaling the signal in FIG. 4d with the parameter ΔT=10 ms;
[0022] FIGS. 5a-c are pictorial illustrations of the effects of an abrupt change in the transformation of the signal;
[0023] FIG. 5a is a pictorial illustration of the non-linear signal transformation;
[0024] FIG. 5b is a pictorial illustration of the signal obtained by applying the transformation in FIG. 4c to the first half (167 ms) of the signal excerpted in FIG. 4a and by applying the
transformation in FIG. 5a to the second half of that signal;
[0025] FIG. 5c is a pictorial illustration of the signal obtained by rescaling the signal in FIG. 5b, using the parameter ΔT=10 ms;
[0026] FIGS. 6a-b are pictorial illustrations of the effect of noise on the rescaling process;
[0027] FIG. 6a is a pictorial illustration of the signal derived from the signal in FIG. 4d by adding white noise with amplitudes randomly chosen from a uniform distribution between −200
and +200;
[0028] FIG. 6b is a pictorial illustration of the signal obtained by rescaling the signal in FIG. 6a with ΔT=10 ms;
[0029] FIGS. 7a-d are pictorial illustrations of the results obtained with speaker #1 and listener #1;
[0030] FIG. 7a is a pictorial illustration of the time course of the parameter g, which describes the state of speaker #1's vocal apparatus, during a particular utterance. Time is in
[0031] FIG. 7b is a pictorial illustration of the spectrogram of the sound produced by speaker #1 during the utterance described by g(t) in FIG. 7a. Time is in ms;
[0032] FIG. 7c is a pictorial illustration of the curve swept out by the third, fourth, and fifth cepstral coefficients of the spectra produced by speaker #1's vocal tract when it
passed through all of its possible configurations (i.e., when the parameter g passed through all of its possible values);
[0033] FIG. 7d is a pictorial illustration of the sensor signal (left figure) induced in listener #1 when speaker #1 uttered the sound produced by the sequence of vocal apparatus
configurations in FIG. 7a. Here, x denotes the instantaneous position of the sound spectrum's cepstral coefficients with respect to a convenient coordinate system along the curve in FIG. 7c. Time is
in seconds. The right figure is the rescaled representation of the raw sensory signal on the left;
[0034] FIGS. 8a-b are pictorial illustrations of the results obtained with speaker #1 and listener #2;
[0035] FIG. 8a is a pictorial illustration of the curve swept out by the second, third, and sixth DCT coefficients of the spectra produced by speaker #1's vocal tract, when it passed
through all of its possible configurations;
[0036] FIG. 8b is a pictorial illustration of the sensor state (left figure) induced in listener #2 when speaker #1 uttered the sound produced by the sequence of vocal apparatus
configurations in FIG. 7a. Here, x′ denotes the instantaneous position of the sound spectrum's DCT coefficients with respect to a convenient coordinate system along the curve in FIG. 8a. Time is in
seconds. The right figure is the rescaled representation of the sensor signal on the left;
[0037] FIGS. 9a-c are pictorial illustrations of the results obtained with speaker #2 and listener #2;
[0038] FIG. 9a is a pictorial illustration of the spectrogram produced when speaker #2 uttered the sound described by the “gesture” function g(t) in FIG. 7a. Time is in ms;
[0039] FIG. 9b is a pictorial illustration of the curve swept out by the second, third, and sixth DCT coefficients of the spectra produced by speaker #2's vocal tract when it passed
through all of its possible configurations (i.e., when the parameter g passed through all of its possible values);
[0040] FIG. 9c is a pictorial illustration of the sensor signal (left figure) produced in listener #2 when speaker #2 uttered the sound produced by the sequence of vocal apparatus
configurations in FIG. 7a. Here, x′ denotes the instantaneous position of the spectrum's DCT coefficients with respect to a convenient coordinate system along the curve in FIG. 9b. Time is in
seconds. The right figure is the rescaled representation of the sensor signal on the left;
[0041] FIG. 10a is a pictorial illustration of the simulated trajectory of recently encountered sensor states x(t). The speed of traversal of each trajectory segment is indicated by the
dots, which are separated by equal time intervals. The nearly horizontal and vertical segments are traversed in the left-to-right and bottom-to-top directions, respectively. The graph depicts the
range −5≦xk≦5;
[0042] FIG. 10b is a pictorial illustration of the local preferred vectors ha that were derived from the data in FIG. 10a by means of the method and apparatus in Section II.B. The nearly
horizontal and vertical lines denote vectors that are oriented to the right and upward, respectively;
[0043] FIG. 10c is a pictorial illustration of the level sets of s(x), which shows the intrinsic coordinate system or scale derived by applying the method and apparatus in Section II.B to
the data in FIG. 10a. The nearly vertical curves are loci of constant s1 for evenly spaced values between −11 (left) and 12 (right); the nearly horizontal curves are loci of constant s2 for evenly
spaced values between −8 (bottom) and 8 (top);
[0044] FIG. 11 is a pictorial illustration of the coordinate-independent representation (right figure) of a grid-like array of sensor states (left figure), obtained by using FIG. 10c to
rescale those sensor states;
[0045] FIG. 12a is a pictorial illustration of the simulated trajectory of recently encountered sensor states x(t) that are related to those in FIG. 10a by the coordinate transformation in
Eq.(25). The speed of traversal of each trajectory segment is indicated by the dots, which are separated by equal time intervals. The nearly horizontal and vertical segments are traversed in the
left-to-right and bottom-to-top directions, respectively. The graph depicts the range −5≦xk≦5;
[0046] FIG. 12b is a pictorial illustration of the local preferred vectors ha that were derived from the data in FIG. 12a by means of the method in Section II.B. The nearly horizontal and
vertical lines denote vectors that are oriented to the right and upward, respectively;
[0047] FIG. 12c is a pictorial illustration of the level sets of s(x), which shows the intrinsic coordinate system or scale that was derived by applying the method and apparatus in Section
II.B to the data in FIG. 12a. The vertical curves are loci of constant s1 for evenly spaced values between −12 (left) and 11 (right); the horizontal curves are loci of constant s2 for evenly spaced
values between −9 (bottom) and 7 (top);
[0048] FIG. 13 is a pictorial illustration of the coordinate-independent representation (right figure) of an array of sensor states (left figure), obtained by rescaling the sensor states by
means of FIG. 12c. The panel on the left was created by subjecting the corresponding left panel in FIG. 11 to the coordinate transformation in Eq.(25). Notice that the right panel is nearly identical
to the one in FIG. 11, thereby confirming the fact that these representations are invariant under the coordinate transformation;
[0049] FIG. 14a is a pictorial illustration of the simulated trajectory of recently encountered sensor states x(t). The speed of traversal of each trajectory segment is indicated by the
dots, which are separated by equal time intervals. The nearly horizontal and vertical segments are traversed in the left-to-right and bottom-to-top directions, respectively. The graph depicts the
range −10≦xk≦10;
[0050] FIG. 14b is a pictorial illustration of the level sets of s(x), which shows the intrinsic coordinate system or scale derived by applying the method and apparatus in Section III to
the data in FIG. 14a. The nearly vertical curves are loci of constant s1 for evenly spaced values between −16 (left) and 16 (right); the nearly horizontal curves are loci of constant s2 for evenly
spaced values between −16 (bottom) and 16 (top);
[0051] FIG. 15 is a pictorial illustration of the coordinate-independent representation (right figure) of a grid-like array of sensor states (left figure), obtained by using FIG. 14b to
rescale those sensor states;
[0052] FIG. 16a is a pictorial illustration of the simulated trajectory of recently encountered sensor states x(t) that are related to those in FIG. 14a by the coordinate transformation in
Eq.(27). The speed of traversal of each trajectory segment is indicated by the dots, which are separated by equal time intervals. The nearly horizontal and vertical segments are traversed in the
left-to-right and bottom-to-top directions, respectively. The graph depicts the range −10≦xk≦10;
[0053] FIG. 16b is a pictorial illustration of the level sets of s(x), which shows the intrinsic coordinate system or scale that was derived by applying the method and apparatus in Section
III to the data in FIG. 16a. The vertical curves are loci of constant s1 for evenly spaced values between −24 (left) and 10 (right); the horizontal curves are loci of constant s2 for evenly spaced
values between −22 (bottom) and 16 (top);
[0054] FIG. 17 is a pictorial illustration of the representation (right figure) of an array of sensor states (left figure), obtained by rescaling the sensor states by means of FIG. 16b. The
panel on the left was created by subjecting the corresponding left panel in FIG. 15 to the coordinate transformation in Eq.(27). Notice that the right panel is nearly identical to the one in FIG. 15,
thereby confirming the fact that these rescaled representations are invariant under the coordinate transformation; and
[0055] FIG. 18 is a pictorial illustration of the system for communicating information in the form of representations that are self-referentially encoded and decoded by the transmitter and
receiver, respectively. The inverse representation generator finds the transmitter state x that corresponds to the representation s to be communicated. The state x controls the energy waveform that
is transmitted by the broadcasting unit of the transmitter. After the energy traverses a channel, it is detected and processed by the receiver to create the receiver state x′. The representation
generator in the receiver decodes x′ as the representation s.
[0056] In this written description, the use of the disjunctive is intended to include the conjunctive. The use of definite or indefinite articles is not intended to indicate cardinality. In
particular, a reference to “the” object or thing or “an” object or “a” thing is intended to also describe a plurality of such objects or things.
[0057] It is to be further understood that the title of this section of the specification, namely, “Detailed Description of the Invention” relates to Rules of the U.S. Patent and Trademark
Office, and is not intended to, does not imply, nor should be inferred to limit the subject matter disclosed herein or the scope of the invention.
I. Coordinate-independent Descriptions of Sensor States
[0058] One specific embodiment of the present invention is a sensory method and apparatus having a number of detectors that are sensitive to various features of stimuli (FIG. 1). For
example, these detectors could respond to electromagnetic energy at various wavelengths, or they could respond to mechanical energy in the form of vibrations of the adjacent medium (e.g., air or
water). These detectors may send their output to a processing unit that combines them in a possibly non-linear fashion. For example, in an imaging system, the processing units may determine the
coordinates of a particular image feature. In a speech recognition system, the processing units could compute parameters characterizing aspects of the short-term Fourier spectrum of a microphone's
signal. Let the device's sensor state x denote the entire array of numbers xk (k=1, . . . , N, N≧1) that form the output of the processing unit.
[0059] Our goal is to create a description of the sensor states that is independent of the x coordinate system, which we happen to be using to label them. In other words, the same
description must result if we used another (x′) coordinate system to label the sensor states. Such a coordinate-independent description can be created with the help of coordinate-independent ways of
identifying: 1) a reference sensor state (x0), 2) a path x(u) (0≦u≦1) through the manifold of sensor states that connects the reference sensor state to the sensor state of interest (x(0)=x0, x(1)=x), 3) N linearly-independent contravariant vectors ha (a=1, . . . , N) at each point along the path (Levin, D. N., Method and apparatus for measurement, analysis,
characterization, emulation, and translation of perception, U.S. Pat. No. 5,860,936, Jan. 19, 1999; Levin, D. N., Method and apparatus for measurement, analysis, characterization, emulation, and
translation of perception, U.S. Pat. No. 6,093,153, Jul. 25, 2000; Levin, D. N., A differential geometric description of the relationships among perceptions, Journal of Mathematical Psychology, Vol.
44, pp. 241-284, 2000). Here, a vector h is said to be contravariant if it transforms as h′_k = Σ_l (∂x′_k/∂x_l) h_l under the change of coordinate systems x→x′. If the foregoing conditions are met, each infinitesimal segment δx along the path can be decomposed into its components δsa along the vectors ha (FIG. 2):

δx = Σ_{a=1, . . . , N} ha δsa   (Eq. 1)

[0060] Note that δs is a coordinate-independent (scalar) quantity because δx and ha are contravariant vectors. Therefore, if the components δs are integrated over the specified path connecting x0 and x, the result is a coordinate-independent description of the sensor state x:

s = ∫_{x0}^{x} δs   (Eq. 2)
[0061] The next two sections show how the information required for this type of description (a reference state, paths connecting it to other sensor states, and the vectors ha) can be
derived from the local structure of a database of sensor states encountered at chosen time points.
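The decomposition in Eqs. (1)-(2) amounts to solving a small linear system for each segment of a discretized path and accumulating the resulting components. The sketch below (Python with NumPy; the function name, the midpoint evaluation of the vectors, and the path discretization are our own illustrative choices, not taken from the patent) shows one way this could be done:

```python
import numpy as np

def representation(path, h_of_x):
    """Integrate Eqs. (1)-(2) along a discretized path.

    path   : (M, N) array of points x(u), from the reference state
             path[0] = x0 to the sensor state of interest path[-1] = x.
    h_of_x : callable returning an (N, N) matrix whose columns are the
             local contravariant vectors ha at a point x.
    """
    s = np.zeros(path.shape[1])
    for k in range(len(path) - 1):
        dx = path[k + 1] - path[k]            # segment δx
        xm = 0.5 * (path[k] + path[k + 1])    # evaluate the ha mid-segment
        ds = np.linalg.solve(h_of_x(xm), dx)  # Eq. (1): δx = Σa ha δsa
        s += ds                               # Eq. (2): s = ∫ δs
    return s

# Straight path from the origin, with each ha twice a coordinate basis
# vector, so every component δsa is half the corresponding δxk.
path = np.linspace(0.0, 1.0, 11)[:, None] * np.array([1.0, 2.0])
s = representation(path, lambda x: 2.0 * np.eye(2))  # ≈ [0.5, 1.0]
```

Because δs is solved from the local vectors rather than read off a fixed coordinate grid, applying a smooth invertible change of coordinates to both the path and the vectors leaves s unchanged, which is the invariance described in the text.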
II. Sensor State Manifolds Having Local Directionality
[0062] In this Section, we discuss the specific embodiment of the invention in which the vectors ha are directly derived from nearby sensor states encountered in a chosen time interval. For
the sake of simplicity, this is first illustrated for one-dimensional (N=1) manifolds of sensor states. Then, we show how a similar procedure can be used to handle manifolds of any dimension.
[0063] II.A. One-dimensional Sensor State Manifolds Having Local Directionality
[0064] Consider the specific embodiment of the invention in which x is the number that characterizes the state of the device's sensor when it is exposed to a stimulus. For example, x could
represent the intensity of a pixel at a certain spatial location in a digital image of a scene, or it could represent the amplitude of the output signal of a microphone. Suppose that the device has
been exposed to a time-dependent series of stimuli, which produce sensor states x(t), where t denotes time, and let X be the sensor signal at time T. In this paragraph, we show how to rescale the
signal level at this particular time point. The exact same procedure can be used to rescale the signal level at other times, thereby deriving a representation of the entire signal time series.
Suppose that x(t) passes through all of the signal levels in [0, X] at one or more times during a chosen time interval of length ΔT (e.g. T−ΔT≦t<T). Here, ΔT is a parameter that can be chosen freely, although it influences the adaptivity and noise sensitivity of the method (see below). At each y ∈ [0, X], define the value of the function h(y) to be

h(y) = ⟨dx/dt⟩_y   (Eq. 3)

[0065] where the right side denotes the derivative averaged over those times in T−ΔT≦t<T when x(t) passes through the value y. If h(y) is non-vanishing for all y ∈ [0, X], it can be used to compute the scale function s(x) on this interval

s(x) = ∫_0^x dy/h(y)   (Eq. 4)
[0066] The quantity S=s(X) can be considered to represent the level of the untransformed signal X at time T, after it has been non-linearly rescaled by means of the function s(x).
Now consider the signal related to the untransformed signal by the time-independent transformation x→x′=x′(x). The transformation x′(x) could be the result of a time-independent
distortion (linear or non-linear) that affects the signal as it propagates through the detector and other circuits of the sensory device, as well as through the channel between the stimulus and the
sensory device. Furthermore, suppose that x→x′ is invertible (i.e., x′(x) is monotonic), and suppose that it preserves the null signal (i.e., x′(0)=0). As mentioned earlier, the requirement of
invertibility is relatively weak. It simply means that the distortion does not compromise the sensory device's ability to distinguish between signal levels. The transformed signal x′(t)=x′[x(t)] has the value X′=x′(X) at t=T. During T−ΔT≦t<T, x′(t) passes through each of the values in [0, X′], because of our assumption that x(t) attains all of the values in [0, X] during that time interval. Therefore, for each y′ ∈ [0, X′], the process in Eq.(3) can be applied to the transformed signal in order to define the function h′(y′) at time T

h′(y′) = ⟨dx′/dt⟩_{y′}   (Eq. 5)

[0067] where the right side denotes the derivative averaged over those times in T−ΔT≦t<T when x′(t) passes through the value y′. By substituting x′(t)=x′[x(t)] in Eq.(5), using the chain rule of differentiation, and noting that x(t) passes through the value y when x′(t) passes through the value y′=x′(y), we find

h′(y′) = (dx′/dx)|_y h(y).

[0068] The function h′(y′) is non-vanishing for y′ ∈ [0, X′] because the monotonicity of x′(x) implies dx′/dx≠0. This means that the process in Eq.(4) can be used to compute a scale function s′(x′) on this interval

s′(x′) = ∫_0^{x′} dy′/h′(y′)   (Eq. 6)
[0069] The quantity S′=s′(X′) represents the level of the transformed signal X′ at time T, after it has been rescaled by means of a function s′(x′), which was derived from x′(t) just
as s(x) was derived from x(t). Because of our assumption that x=0 transforms into x′=0, a change of variables (y→y′) in Eq.(4) implies s′(x′)=s(x) and, therefore, S′=S.
This means that the rescaled value of a signal is invariant under the signal transformation x→x′. In other words, the rescaled value S of the undistorted signal level at time T, computed from
recently encountered undistorted signal levels, will be the same as the rescaled value S′ of the distorted signal level at time T, computed from recently encountered distorted signal levels. Now, the
above procedure can be followed in order to rescale the signal levels at times other than T. The resulting time series of rescaled signal levels S(t), which the sensory device derives from the
untransformed signal x(t) in this way, will be identical to the time series of rescaled signal levels S′(t), which the sensory device derives from the transformed signal x′(t). Note that the scale
function defined by Eq.(4) is the same as that defined by Eqs.(1, 2) in the special case of a one-dimensional sensor state manifold. From this more general perspective, h(y) is the contravariant
vector identified at each point on the one-dimensional sensor state manifold, and the null signal is the reference sensor state in each relevant coordinate system. Notice that the forms of the scale
functions s(x) and s′(x′) (and of h(y) and h′(y′)) will usually be time-dependent because they are computed from the time course of previously encountered signals. At some times, the sensory device
may be unable to compute a rescaled signal level. This will happen if the scale function in Eq.(4) does not exist because the quantity h(y) vanishes for some y ∈ [0, X] or if the function
h(y) cannot even be computed at some values of y because these signal levels were not encountered recently. Because of the monotonicity of x′(x), a signal invariant at such times cannot be computed
from either the untransformed or transformed signals. The inability to compute signal invariants at some time points means that the number of independent signal invariants (i.e., the number of time
points at which S(t) can be computed) may be less than the number of degrees of freedom in the raw signal from which the invariants were computed (i.e., the number of time points at which the signal
x(t) is measured). The above-mentioned particle analogy suggests that this is not surprising. Note that there are a number of linear relationships among the coordinates of the particles when they are
expressed in the collection's “center-of-mass” coordinate system. For example, their sum vanishes. Therefore, the number of independent invariants (i.e., the number of independent particle positions
in the intrinsic coordinate system) is less than the number of degrees of freedom of the particle collection (i.e., the number of particle locations in an extrinsic coordinate system). This is
because some of the collection's degrees of freedom were used to define the intrinsic coordinate system itself.
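The particle analogy invoked here is easy to verify directly. In the sketch below (Python with NumPy; entirely our own illustration), particle positions are re-expressed in the collection's center-of-mass coordinate system: the intrinsic coordinates are unchanged by a rigid translation of the whole collection, and they obey a linear relationship (their sum vanishes) because some degrees of freedom were spent defining the origin:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = rng.normal(size=(5, 2))        # 5 particle positions, extrinsic coords

def intrinsic(positions):
    """Positions relative to the collection's center of mass."""
    return positions - positions.mean(axis=0)

shift = np.array([3.0, -1.0])      # rigid translation of the collection
assert np.allclose(intrinsic(p), intrinsic(p + shift))  # invariant
assert np.allclose(intrinsic(p).sum(axis=0), 0.0)       # sum vanishes
```

The second assertion is the linear relationship mentioned in the text: two of the collection's ten extrinsic coordinates (the centroid) were spent defining the intrinsic origin, so fewer independent invariants remain than raw degrees of freedom.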
[0070] It is useful to illustrate these results with a simple example. Suppose the untransformed signal x(t) is a long periodic sequence of triangular shapes, like those in FIG. 3a. For
example, if the sensor state represents the intensity of a pixel in a digital image of a scene, FIG. 3a might be its response to a series of identical objects passing through the scene at a constant
rate. Alternatively, if the sensor state represents the amplitude of a microphone's output, FIG. 3a might be its response to a series of uniformly spaced identical pulses. Let a and b be the slopes
of the lines on the left and right sides, respectively, of each shape; FIG. 3a shows the special case: a=0.1 and b=−0.5 (measured in inverse time units). If we choose ΔT to be an
integral number of periods of x(t), it is easy to see from Eqs.(3, 4) that the untransformed signal implies h(y)=(a+b)/2 and S(t)=s[x(t)]=2x(t)/(a+b) at each
point in time. FIG. 3b shows S(t), which is the untransformed signal after it has been rescaled at each time point as dictated by its earlier time course. Now, consider the transformed signal that is
related to the untransformed signal by any of the following non-linear functions: x′(x)=g1ln(1+g2x) where g2>0. For example, if g1=0.5 and g2=150, the transformed signal x′
(t) looks like FIG. 3c. For instance, in the above-mentioned examples, this could represent the effect of a non-linear change in the gain of the detector (pixel intensity detector or microphone).
When Eq.(5) is used to compute h′(y′) from the transformed signal, the result is:

h′(y′) = (1/2)(a+b) g1 g2 e^{−y′/g1}   (Eq. 7)

[0071] at each point in time. Then, Eq.(6) shows that the rescaled version of the transformed signal is

S(t) = s′[x′(t)] = 2(e^{x′(t)/g1} − 1) / (g2(a+b))   (Eq. 8)
[0072] Substituting x′(t)=x′[x(t)] into Eq.(8) shows that S′(t)=S(t). In other words, the rescaled signal S′(t), which is derived from the transformed signal x′(t),
is the same as the rescaled signal S(t), which is derived from the untransformed signal x(t). This is because the effect of the invertible signal transformation on the signal level at any given time
(x(t)→x′(t)) is compensated by its effect on the form of the scale function at that time (s(x)→s′(x′)). Notice that s(x) and s′(x′) (as well as h(y) and h′(y′)) happen to be time-independent in this
particular example, and this implies that x(t) and x′(t) are rescaled in a time-independent fashion. This is because, in order to simplify the calculation, x(t) was chosen to be periodic and ΔT
was chosen to be an integral number of these periods. In the general case, the scale functions depend on time in a manner dictated by the earlier time course of the signal. However, identical
self-scaled signals (i.e., S(t)=S′(t)) will still be derived from the untransformed and transformed signals, as demonstrated by the proof at the beginning of this Section.
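This worked example can also be checked numerically. The sketch below (Python with NumPy; the level grid, the crossing-based estimate of the averaged derivative, and the tolerances are our own implementation choices, not taken from the patent) rescales a long triangular-pulse train and its logarithmically distorted version via Eqs. (3)-(4), and confirms that the two rescaled signals agree up to discretization error:

```python
import numpy as np

def rescale(x, dt, n_levels=200):
    """Rescale a sampled signal via Eqs. (3)-(4): h(y) is dx/dt averaged
    over the times when x(t) crosses level y, s is the integral of 1/h,
    and the rescaled signal is S(t) = s(x(t))."""
    dxdt = np.gradient(x, dt)
    levels = np.linspace(0.0, x.max(), n_levels + 2)[1:-1]  # interior levels
    h = np.array([dxdt[np.nonzero(np.diff(np.sign(x - y)))[0]].mean()
                  for y in levels])                         # Eq. (3)
    s = np.cumsum((levels[1] - levels[0]) / h)              # Eq. (4)
    return np.interp(x, levels, s, left=0.0)

dt = 0.01
phase = np.arange(0.0, 120.0, dt) % 12.0                    # ten periods
# Triangular pulses with slopes a = 0.1 (rise) and b = -0.5 (fall).
x = np.where(phase < 10.0, 0.1 * phase, 1.0 - 0.5 * (phase - 10.0))
xp = 0.5 * np.log(1.0 + 150.0 * x)     # distortion x'(x) = g1 ln(1 + g2 x)

S, Sp = rescale(x, dt), rescale(xp, dt)
# Both approximate 2 x(t)/(a+b) = -5 x(t), and they agree with each other.
assert np.max(np.abs(S - Sp)) < 0.1 * np.max(np.abs(S))
```

The rescaled signal computed from the distorted time series matches the one computed from the undistorted series because the distortion of each signal level is compensated by the distortion of the scale function, in line with the closed-form result of Eq. (8).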
[0073] In the above discussion, the null signal was taken to be the reference sensor state x0, and the signal transformation was assumed to preserve the null signal. In general, any sensor
state can be taken to be the reference sensor state, as long as the reference sensor state x0′ used to rescale a transformed signal time series is the transformed version of the reference state x0
used to rescale the untransformed signal time series: i.e., as long as x0′=x′(x0). In mathematical terms, this means that the reference state must be chosen in a coordinate-independent manner.
For example, the reference sensor state could be chosen to be the sensor state that is the local maximum of a function defined to be the number of times each sensor state is encountered in a chosen
time interval. Alternatively, prior knowledge may be used to choose the reference state. For instance, as described above, we may know that the null sensor state always corresponds to the same
stimulus, and, therefore, it can be chosen to be the reference state. For example, this might be the case if the transformations of interest are due to changes in the intensity of a scene's
illumination or alterations of the gain of a microphone circuit. Finally, the reference sensor state may be chosen to be the sensor state produced by a user-determined stimulus that is “shown” to the
sensory device. Recall that the reference sensor state serves as the origin of the scale function used to rescale other sensor states. Therefore, this last procedure is analogous to having a choir
leader play a note on a pitch pipe in order to “show” each singer the origin of the desired musical scale. Notice that stimulus representations that are referred to different reference sensor states
will reflect different “points of view”. For example, suppose that a device is observing a glass of beverage. It will “perceive” the glass to be half full or half empty if it uses reference sensor
states corresponding to an empty glass or a full glass, respectively.
[0074] As mentioned previously, Eq.(1) can be used to find δs only if h(y) is well defined and non-vanishing at each y. In other words, this method requires that the sensor states
encountered in a chosen time interval determine a non-vanishing one-dimensional vector at each point of the sensor state manifold; i.e., the sensor states encountered in the chosen time interval must
impose some directionality and scale at each point. In Sections III and IV, we show how this requirement can be relaxed if the history of sensor states is used to define a coordinate-independent way
of moving vectors on the manifold (i.e., a way of “parallel transporting” them). In that case, the manifold need only have well-defined directionality and scale at one point. The vector defined at
that point can then be moved to all other points on the manifold in order to define vectors h(y) there.
[0075] Finally, in the above discussion, it was assumed that the sensory device encountered either a time series of untransformed signal levels or the corresponding time series of
transformed signals and that these were related by a time-independent transformation. Now, suppose that there is the sudden onset of a process that causes transformation of subsequently encountered
sensor states, and suppose that the rescaling of the signal is determined by signal levels encountered in the most recent period of length ΔT. During a transitional period of length ΔT after the transformation's onset, the sensory device will record a mixture of untransformed and transformed signal levels (e.g., a mixture of the shapes in FIGS. 3a and 3c). During this transition, the device's scale function will evolve from the form derived from untransformed signals to the form derived from transformed signals (e.g., from s(x) to s′(x)), and during this transitional period the transformed sensor states may be represented differently than the signals at corresponding times in the untransformed time series. However, once ΔT time units have elapsed since the
transformation's onset, the device's scale function will be wholly derived from a time series of transformed sensor states. Thereafter, transformed signal levels will again be represented in the same
way as the signals at corresponding times in the untransformed time series. Like a human, the system adapts to the presence of the transformation after a period of adjustment.
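For illustration only (this is not the claimed apparatus), the one-dimensional rescaling described above can be sketched numerically. The sketch assumes NumPy; the function name rescale_1d and the histogram-based estimate of h(y), as the mean observed speed |dx/dt| near each signal level y, are our own simplifying assumptions about a one-dimensional reading of Eqs.(1-2):

```python
import numpy as np

def rescale_1d(x, dt, x0, nbins=50):
    """Illustrative sketch (not the claimed apparatus): estimate h(y) as the
    mean observed speed |dx/dt| near each signal level y, then integrate
    1/h(y) from the reference level x0 to obtain the scale function s(x).
    Returns the rescaled series s(x(t))."""
    x = np.asarray(x, dtype=float)
    v = np.abs(np.diff(x)) / dt                 # speeds along the trajectory
    lev = 0.5 * (x[:-1] + x[1:])                # level at which each speed occurred
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(lev, edges) - 1, 0, nbins - 1)
    h = np.full(nbins, np.nan)
    for b in range(nbins):
        m = idx == b
        if m.any():
            h[b] = v[m].mean()
    good = ~np.isnan(h)                         # fill unvisited bins by interpolation
    h = np.interp(np.arange(nbins), np.flatnonzero(good), h[good])
    centers = 0.5 * (edges[:-1] + edges[1:])
    s_grid = np.cumsum((edges[1] - edges[0]) / h)   # s up to an additive constant
    s_grid -= np.interp(x0, centers, s_grid)        # fix the origin: s(x0) = 0
    return np.interp(x, centers, s_grid)
```

Because ds = dx/h(x) is unchanged when x is remapped by a monotonic transformation, the rescaled series derived from x(t) and from a transformed copy of x(t) should nearly coincide, up to binning error, mirroring the adaptation behavior described above.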
[0076] II.B. Multidimensional Sensor State Manifolds Having Local Directionality
[0077] In this section, we describe the specific embodiment of the invention in which the above approach is generalized to sensory devices with multiple detectors. Let the device's sensor
state be represented by an array of numbers x (xk, k=1, . . . , N where N≥1), and let x(t) be the time series of sensor states encountered in a chosen time interval (e.g., the most recent time interval of length ΔT). This function describes a trajectory that crosses the sensor state manifold. We now show how these data can be used to define local vectors ha(x) in a manner that is independent of the coordinate system. Consider a point x that has multiple trajectory segments passing through it in at least N different directions, where N is the manifold's dimension. The time derivatives of the segments passing through x form a collection of contravariant vectors ĥi at x:

ĥ_i = dx/dt |_{t=t_i}   (Eq. 9)
[0079] where ti denotes the ith time at which the trajectory passed through x. These quantities can be used to define N vectors at x if they tend to fall into clusters oriented along different directions in the manifold. To see this, pick an integer C ≥ N and partition the indices i into C non-empty sets labeled Sc, where c=1, . . . , C. Next, compute the N×N covariance matrix Mc of the vectors corresponding to each set of indices:

M_c = (1/N_c) Σ_{i∈S_c} ĥ_i ĥ_i   (Eq. 10)

[0079] where Nc is the number of indices in Sc. Each of these matrices transforms as a tensor with two contravariant indices, and the determinant of each matrix |Mc| transforms as a scalar density of weight equal to minus two; namely, if coordinates on the manifold are transformed as x→x′, then

|M_c| → |M′_c| = |∂x′/∂x|² |M_c|   (Eq. 11)

[0080] Next, compute E, which is defined to be the sum of powers of these determinants:

E = Σ_c |M_c|^p   (Eq. 12)
[0081] where p is some real positive number. Equation 11 implies that E transforms as a scalar density of weight −2p. In other embodiments of the present invention, E is defined differently; e.g., as another quantity that transforms as a scalar density of some weight. Now tabulate the values of E for all possible ways of partitioning the set of vectors ĥi into C non-empty sets, and find the partition that results in the smallest value of E. This partition will tend to group the vectors into subsets with minimal matrix determinants. Therefore, the vectors in each group will tend to be linearly dependent or nearly linearly dependent, and they will tend to form a cluster that is oriented in one direction. Next, compute the vectors hc at x by finding the average vector in each part of the optimal partition:

h_c = (1/N_c) Σ_{i∈S_c} ĥ_i   (Eq. 13)

[0082] Because the ĥi are contravariant vectors, the hc will also transform as contravariant vectors as long as they are partitioned in the same manner in any coordinate system.
However, because E transforms by a positive multiplicative factor, the same partition minimizes it in any coordinate system. Therefore, the optimal partition is independent of the coordinate system,
and the hc are indeed contravariant vectors. Finally, the indices of the hc can be relabeled so that the corresponding determinants |Mc| are in order of ascending magnitude. This
ordering is also coordinate-independent because these determinants transform by a positive multiplicative factor (Eq.(11)). As a result, if the foregoing computations are done in any coordinate
system, the same vectors hc will be created, and these vectors provide a coordinate-independent characterization of the directionality of the trajectories passing through x.
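To make Eqs.(9-13) concrete, the following brute-force sketch (our own illustrative code, assuming NumPy; not the claimed apparatus, and exponential in the number of vectors, so feasible only for small inputs) searches all partitions of the velocity vectors at a point, minimizes E, and returns the cluster-mean vectors hc in order of ascending |Mc|:

```python
import numpy as np
from itertools import product

def principal_vectors(hhat, C, p=1.0):
    """Illustrative sketch of Eqs. 9-13: partition the M trajectory
    velocities hhat (an M x N array) at a point into C non-empty groups,
    minimizing E = sum_c |M_c|^p where M_c is the group's outer-product
    average (Eq. 10), then return the group-mean vectors h_c (Eq. 13)
    ordered by ascending |M_c|.  Brute force over all labelings."""
    M, N = hhat.shape
    best = None
    for labels in product(range(C), repeat=M):
        if len(set(labels)) < C:
            continue                        # every group must be non-empty
        dets, means = [], []
        for c in range(C):
            g = hhat[np.array(labels) == c]
            Mc = g.T @ g / len(g)           # Eq. 10: (1/N_c) sum of outer products
            dets.append(abs(np.linalg.det(Mc)))
            means.append(g.mean(axis=0))    # Eq. 13: group-mean vector
        E = sum(d ** p for d in dets)       # Eq. 12
        if best is None or E < best[0]:
            best = (E, dets, means)
    _, dets, means = best
    order = np.argsort(dets)                # coordinate-independent ordering
    return [means[c] for c in order]
```

On vectors clustered along two directions, the minimal-E partition separates the clusters, so the returned means point along the two underlying directions.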
[0083] The first N vectors that are linearly independent can be defined to be the ha in Eq.(1). These can be used to compute δs, the coordinate-independent representation of any line element passing through x. Once we have specified a path connecting a reference state x0 to any sensor state x, Eq.(2) can be integrated to create a coordinate-independent representation s of that state. The path must be completely specified because the integral in Eq.(2) may be path-dependent. To see this, note that Eq.(1) can be inverted to form:

δs_a = h̃_a · δx   (Eq. 14)

[0084] where the covariant vectors h̃_a are found by solving

Σ_{a=1,…,N} h̃_{ak} h_a^l = δ_k^l

[0085] and δ_k^l is the Kronecker delta function. It follows from Eq.(2) that each component of s is a line integral of h̃_a for a=1, . . . , N. Stokes' theorem shows that these line integrals will be path-dependent unless the "curl" of h̃_a vanishes:

∂h̃_{ak}/∂x^l − ∂h̃_{al}/∂x^k = 0   (Eq. 15)
[0086] Because this may not be true for some sensor state manifolds, we must create a coordinate-independent way of specifying a path from x0 to any point x on the manifold. In one specific
embodiment of this invention, the path is determined in the following manner: first, generate a “type 1” trajectory through x0 by moving along the local h1 direction at x0 and then moving along the
h1 direction at each subsequently encountered point. Next, generate a “type 2” trajectory through each point on the type 1 trajectory by moving along the local h2 direction at that point and at each
subsequently-encountered point. Continue in this fashion until a type N trajectory has been generated through each point on every trajectory of type N-1. Because of the linear independence of the ha
at each point, the collection of points on type n trajectories (1≦n≦N) comprises an n-dimensional subspace of the manifold. Therefore, each point on the manifold lies on a type N trajectory and
can be reached from x0 by traversing the following type of path: a segment of the type 1 trajectory, followed by a segment of a type 2 trajectory, . . . , followed by a segment of a type N trajectory. This
path specification is coordinate-independent because the quantities ha transform as contravariant vectors. Therefore, if Eq.(2) is integrated along this “canonical” path, the resulting value of s
provides a coordinate-independent description of the sensor state x in terms of the recently-encountered sensor states; i.e., a description that is invariant in the presence of processes that remap
sensor states.
[0087] In order to illustrate this process, consider a manifold on which the sensor states move in several characteristic directions at every point. For example, imagine a large plane on
which the sensor states have been observed to move at a fixed speed along the two directions of an invisible Cartesian grid. Or, consider a large sphere on which the sensor states move along
invisible longitudes and latitudes at constant polar and azimuthal angular speeds, respectively. Equation (13) can be used to derive local vectors (called “north” and “east”) from the observed evolution
of sensor states in the vicinity of each point. We can then create an east-west trajectory through a convenient reference point x0 by moving away from it in locally specified east and west
directions. Next, we can create north-south trajectories through each point on the east-west trajectory by moving away from it in the locally specified north and south directions. Each point on the
manifold can then be represented by sa, which is related to the distances traversed along the two types of trajectories in order to reach x from x0. In the above-mentioned planar manifold example,
this process may represent each point in a Cartesian coordinate system. On the other hand, in the spherical manifold example, each point may be represented by its longitude and its latitude (up to
constant scale factors and a shift of the origin). In each case, the resulting representation does not depend on which coordinate system was originally used to record the evolution of sensor states
and to derive local vectors from this data.
[0088] Strictly speaking, the vectors ha must be computed in the above-described manner at every point x on each path in Eq.(2). This means that the trajectory x(t) of previously
encountered sensor states must cover the manifold very densely so that it passes through every point x at least N times. However, this requirement can be relaxed for most applications. Specifically,
suppose that the ha are only computed at a finite collection of sample points on the manifold, and suppose that these vectors are computed from derivatives of trajectories passing through a very
small neighborhood of each sample point (not necessarily passing through the sample point itself). Furthermore, suppose that values of ha between the sample points are estimated by parametric or
non-parametric interpolation (e.g., splines or neural nets, respectively). This method of computation will be accurate as long as the spacing between the sample points is small relative to the
distance over which the directionality of the manifold varies. This must be true in all relevant coordinate systems; i.e., in coordinate systems corresponding to the transformative effects of all
interesting processes that remap the device's sensor states. Some circumstances may prevent the derivation of the ha at a sufficiently dense set of sample points on the manifold. For example, suppose
that there is no unique way of partitioning the ĥi at each point in order to minimize E, or suppose that the hc (Eq.(13)) associated with a minimal value of E do not contain N linearly
independent members. These results would indicate that the temporal course of sensor states x(t) does not endow the manifold with sufficient directionality. However, in this situation, it may still
be possible to create coordinate-independent representations of stimuli by means of the methods in Sections III and IV, which only require that the manifold have intrinsic directionality at a single
point. The vectors at that point can then be moved (parallel-transported) to other points on the manifold.
[0089] Note that certain exceptional points on the manifold may be connected to x0 by more than one of the above-described “canonical” paths. For example, on the above-mentioned spherical
manifold, the “north” pole may be connected to a reference point on the “equator” by multiple “canonical” paths. Specifically, the north pole can be reached by moving any distance along the equator
(a possible east-west trajectory), followed by a movement of one-quarter of a great circle along the corresponding longitude (a possible “north-south” trajectory). Such exceptional points have
multiple coordinate-independent representations (i.e., multiple s “coordinates”).
III. Sensor State Manifolds that Support Parallel Transport
[0090] In this section, we describe a specific embodiment of the invention in which the temporal course of sensor states x(t) has sufficient internal structure to define a method of moving
vectors (“parallel transporting” them) across the manifold. It may be possible to define parallel transport rules on a manifold, even in the absence of local directionality at every point, the
property that was required to implement the method in Section II. As long as a manifold supports parallel transport and has directionality at a single point, vectors ha can be moved across the
manifold from that point in order to define vectors ha at all other points in a coordinate-independent manner. Then, Eqs.(1-2) can be used to create coordinate-independent descriptions of sensor
states. Roughly speaking, the vectors ha can be considered to intrinsically “mark” the manifold at one point. The parallel transport process makes it possible to “carry” this information across the
manifold and make analogous “marks” at other points.
[0091] As in Section II.B, consider a device that has one or more detectors. Let the sensor state be represented by an array of numbers x (xk, k=1, . . . , N where N≧1), and let x
(t) be the time series of sensor states encountered in a chosen time interval. As before, this function describes a trajectory that crosses the sensor state manifold. According to the methods of
affine-connected differential geometry, any vector can be moved across this manifold in a coordinate-independent manner if one can define a local affine connection Γlmk(x), which is a quantity transforming as:

Γ′_{lm}^k = Σ_{r,s,t=1,…,N} (∂x′^k/∂x^r)(∂x^s/∂x′^l)(∂x^t/∂x′^m) Γ_{st}^r + Σ_{n=1,…,N} (∂x′^k/∂x^n)(∂²x^n/∂x′^l ∂x′^m)   (Eq. 16)

[0092] Specifically, given any contravariant vector V at x, consider the array of numbers V+δV, where:

δV^k = −Σ_{l,m=1,…,N} Γ_{lm}^k V^l δx^m   (Eq. 17)
[0093] It can be shown that V+δV transforms as a contravariant vector at the point x+δx, as long as the affine connection transforms as shown in Eq.(16). The vector V+δV at x+δx is said to be the result of parallel transporting V along δx. Our task is to use the time series of sensor states x(t) to derive an affine connection on the sensor
state manifold. Then, given a set of vectors ha at just one point on the manifold (e.g., at the reference sensor state x0), we will be able to use the affine connection to populate the entire
manifold with parallel-transported versions of those vectors. These can be used in Eqs.(1-2) to derive a coordinate-independent representation of any sensor state.
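Equation (17) can be illustrated with a short numerical sketch (our own code, assuming NumPy; not the claimed apparatus). It transports a vector step by step along a discrete path, given any connection field; the flat-plane connection in polar coordinates is included as a hypothetical example:

```python
import numpy as np

def parallel_transport(V, path, Gamma):
    """Illustrative sketch of Eq. 17: carry a contravariant vector V along a
    discrete path (a list of points) on a manifold whose affine connection
    is given by Gamma(x) -> array of shape (N, N, N), indexed [k, l, m].
    Each step applies dV^k = -Gamma^k_lm V^l dx^m."""
    V = np.array(V, dtype=float)
    for a, b in zip(path[:-1], path[1:]):
        dx = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
        V = V - np.einsum('klm,l,m->k', Gamma(np.asarray(a)), V, dx)
    return V

def polar_gamma(x):
    """Hypothetical example: the flat-plane connection in polar (r, theta)
    coordinates, for which straight lines are geodesics."""
    r = x[0]
    G = np.zeros((2, 2, 2))
    G[0, 1, 1] = -r          # Gamma^r_{theta theta}
    G[1, 0, 1] = 1.0 / r     # Gamma^theta_{r theta}
    G[1, 1, 0] = 1.0 / r     # Gamma^theta_{theta r}
    return G
```

Transporting the radial unit vector once around the circle r=1 returns it, up to discretization error, as expected on a flat manifold.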
[0094] Consider a point x that is on at least N(N+1)/2 trajectory segments. Each of these segments can be divided into infinitesimal line elements dx that correspond to equal
infinitesimal time intervals. These line elements transform as contravariant vectors. Therefore, we can look for affine connections that parallel transport a given line element along itself into the
next line element on the same trajectory segment. In other words, we can look for affine connections for which a given trajectory segment is locally geodesic. Equation 17 shows that such an affine connection Γ̂lmk must satisfy the following N constraints:

δdx^k = −Σ_{l,m=1,…,N} Γ̂_{lm}^k dx^l dx^m   (Eq. 18)

[0095] where dx+δdx represents the trajectory's line element at x+dx. Now consider any collection of N(N+1)/2 of the trajectory segments at x. An affine connection that makes all of these trajectory segments locally geodesic must satisfy N²(N+1)/2 linear constraints like those in Eq.(18). Because a symmetric affine connection (Γlmk = Γmlk) has N²(N+1)/2 components, one and only one symmetric connection satisfies these equations unless they happen to be inconsistent (no solutions) or redundant (multiple solutions). Notice that if Γ̂lmk is a solution of these equations in one coordinate system, then Γ̂′lmk is a solution of the corresponding equations in any other coordinate system, where Γ̂lmk and Γ̂′lmk are related by Eq.(16). Therefore, if these equations have a unique solution in one coordinate system, there is a unique solution of the corresponding equations in any other coordinate system, and these solutions are related by Eq.(16). Now, consider all collections of N(N+1)/2 trajectory segments that have a unique solution to these equations; i.e., all collections that are locally geodesic with respect to one and only one symmetric affine connection at x. Let Γlmk be the average of the affine connections computed from these subsets of trajectory segments:

Γ_{lm}^k = (1/N_T) Σ_{i=1,…,N_T} Γ̂_{lm}^k(i)   (Eq. 19)
[0096] where Γ̂lmk(i) is the symmetric affine connection that makes the ith collection of trajectory segments locally geodesic and NT is the number of such collections. The quantity Γlmk transforms as shown by Eq.(16) because each contribution to the right side of Eq.(19) transforms in that way. Therefore, Γlmk can be defined to be the affine
connection at point x on the sensor state manifold. Notice that it may be possible to derive an affine connection from the sensor state time series even if the local trajectory segments are not
oriented along any particular “principal” directions. In other words, this method is more generally applicable than the method in Section II, which required that the trajectory segments be clustered
along preferred directions at each point.
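The construction of Eqs.(18-19) amounts to a small linear solve, sketched below (our own illustrative code, assuming NumPy; not the claimed apparatus). Given exactly N(N+1)/2 segments, each contributing its velocity dx and its velocity change d2x over one time step, the unique symmetric connection that makes all of them locally geodesic is recovered:

```python
import numpy as np

def connection_from_geodesics(segments):
    """Illustrative sketch of Eq. 18 (not the claimed apparatus): given
    exactly N(N+1)/2 trajectory segments through a point, each a pair
    (dx, d2x) of the local velocity and its change over one time step,
    solve  d2x^k = -Gamma^k_lm dx^l dx^m  for the unique symmetric
    connection making every segment locally geodesic.
    Returns Gamma with shape (N, N, N), indexed [k, l, m]."""
    N = len(segments[0][0])
    pairs = [(l, m) for l in range(N) for m in range(l, N)]
    # One row per segment; off-diagonal components appear twice by symmetry.
    A = np.array([[dx[l] * dx[m] * (1 if l == m else 2) for (l, m) in pairs]
                  for dx, _ in segments])
    Gamma = np.zeros((N, N, N))
    for k in range(N):           # the constraints decouple by upper index k
        b = np.array([-d2x[k] for _, d2x in segments])
        sol = np.linalg.solve(A, b)   # unique unless the segments are degenerate
        for j, (l, m) in enumerate(pairs):
            Gamma[k, l, m] = Gamma[k, m, l] = sol[j]
    return Gamma
```

Manufacturing three segments from a known symmetric connection and solving recovers that connection exactly.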
[0097] Now, suppose that a reference sensor state x0 and N linearly independent reference vectors ha at x0 can be defined on the manifold in a coordinate-independent manner. Several ways of
defining x0 were outlined in Section II.A. Section II.B described coordinate-independent techniques that could be used to derive reference vectors from sensor states encountered in the vicinity of x0
in a chosen time period. Alternatively, the device may have prior knowledge of certain vectors at x0 that are known to be numerically invariant under all relevant coordinate transformations, and
these could be identified as the reference vectors. Or, the device's operator could choose the reference vectors and “show” them to the device by exposing it to the corresponding stimulus changes.
Once the reference sensor state and reference vectors have been determined, the affine connection can be used to parallel transport these vectors to any other point x on the manifold. The resulting
vectors at x will depend on the path that was used to create them if the manifold has non-zero curvature; i.e., if the curvature tensor Blmnk is non-zero at some points, where:

B_{lmn}^k = −∂Γ_{lm}^k/∂x^n + ∂Γ_{ln}^k/∂x^m + Σ_{i=1,…,N} (Γ_{im}^k Γ_{ln}^i − Γ_{in}^k Γ_{lm}^i)   (Eq. 20)
[0098] Because this tensor will not vanish in many cases, the path connecting x0 and x must be completely specified in a coordinate-independent manner. In one specific embodiment of the
present invention, such a path can be prescribed in the following fashion. Generate a trajectory through x0 by repeatedly parallel transferring the vector h1 along itself, and call this trajectory a
type 1 geodesic. Next, parallel-transfer all of the vectors ha along this trajectory. Now, generate a type 2 geodesic through each point of this geodesic by repeatedly parallel transferring the
vector h2 along itself. Then, parallel-transfer all of the vectors ha along each of these geodesics, and generate a type 3 geodesic through each point on each type 2 geodesic by repeatedly parallel
transferring the vector h3 along itself. Continue in this manner until type N geodesics have been generated through each point on each type N-1 geodesic. Because of the linear independence of the
vectors ha at x0, the parallel transported ha will also be linearly independent. It follows that the collection of points on all trajectories of type n comprises an n-dimensional subspace of the
manifold, and the type N trajectories will reach every point on the manifold. This means that any point x can be reached from x0 by following a “canonical” path consisting of a segment of the type 1
geodesic, followed by a segment of a type 2 geodesic, . . . , followed by a segment of a type N geodesic. This path specification is coordinate-independent because it is defined in terms of a
coordinate-independent operation: namely, the parallel transport of vectors. After the ha have been “spread” to the rest of the manifold along these paths, a coordinate-independent representation s
of any point x can be generated by integrating Eq.(2) along the “canonical” path between x0 and x.
[0099] In order to visualize the entire procedure described above, consider a manifold containing a single point x0 at which vectors ha are locally specified; e.g., a plane or a small patch
of a sphere that is intrinsically “marked” at x0 with two “pointers” oriented in preferred directions on the sensor state manifold (called the “north” and “east” directions). Equation (19) can be used
to derive the affine connection at each point from the observed evolution of sensor states through it. For instance, if the sensor states move at constant speed along straight lines in the plane or
along great circles on the sphere, Eq.(19) leads to the usual parallel transport rules of Riemannian geometry on a plane or sphere. The resulting affine connection can be used to parallel transport
the east pointer along itself in order to create a “east-west” geodesic through x0. We can then parallel transport the north pointer to create a new north pointer at each point along this east-west
geodesic. Finally, we can parallel transport each north pointer along itself in order to create a north-south geodesic through it. Each point on the manifold can then be represented by sa, which
represents the number of parallel transport operations (east or west, followed by north or south) that are required to reach it from x0. If the manifold is a plane with the above-described straight
sensor trajectories and if the “north”/“east” pointers at x0 are orthogonal, sa will represent each point in a Cartesian coordinate system. On the other hand, if the manifold is a sphere with the
above-described great circular trajectories of sensor states, sa will represent each point by its longitude and latitude. In each case, the resulting representation does not depend on which
coordinate system was originally used to record sensor states and to derive the affine connection.
[0100] Strictly speaking, the affine connection Γlmk must be computed from sensor state data at every point on the path used in Eq.(2). This means that the trajectory x(t) of sensor states encountered in a chosen time interval must cover the manifold densely so that it passes through each of these points at least N(N+1)/2 times. However, this requirement can be relaxed for most applications. Specifically, suppose that Γlmk is only computed at a finite collection of sample points on the manifold, and suppose that it is computed from trajectory segments passing through a very small neighborhood of each sample point (not necessarily through the point itself). Furthermore, suppose that values of Γlmk at intervening points are estimated by parametric or
non-parametric interpolation (e.g., splines or neural nets, respectively). This method of computation will be accurate as long as the distance between sample points is small relative to the distance
over which the locally geodesic affine connection changes. This must be true in all relevant coordinate systems; i.e., in coordinate systems that describe sensor states recorded in the presence of
all expected transformative processes. If those transformations remap the sensor states in a relatively smooth fashion, the sampling and interpolation of the affine connection are likely to be
accurate in all relevant coordinate systems, as long as they are accurate in one of them.
[0101] As mentioned previously, the above-described method is more generally applicable than the one in Section II.B. From a mathematical standpoint, this is because manifolds with
well-defined directionality at each point (i.e., those in Section II.B) are a subset of manifolds that support parallel transport (i.e., those considered in this Section). To illustrate this
statement, consider a vector V at point x on a manifold with directionality at each point. V has certain components when expressed as a linear combination of the vectors ha at x. At any other point
on the manifold, we can define the parallel-transported version of V to be the same linear combination of the vectors ha at that point. It can be shown that this is equivalent to choosing the affine
connection to be:

Γ_{lm}^k = −Σ_{a=1,…,N} h̃_{al} ∂h_a^k/∂x^m   (Eq. 21)
[0102] The part of this expression that is symmetric in the lower two indices also constitutes an affine connection on the manifold. Thus, manifolds with local directionality have more than
enough “structure” to support parallel transport.
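Equation (21) can be checked numerically with the following sketch (our own code, assuming NumPy; not the claimed apparatus). The frame field cart_frame below is a hypothetical example: the constant Cartesian basis of the flat plane written in polar components, for which Eq.(21) reproduces the familiar flat-plane connection in polar coordinates:

```python
import numpy as np

def connection_from_frame(h, x, eps=1e-5):
    """Illustrative sketch of Eq. 21: given a frame field h(x) -> (N, N)
    array whose rows h[a] hold the components h_a^k, compute
        Gamma^k_lm = -sum_a htilde_{a l} * d h_a^k / d x^m,
    where htilde is the dual frame (sum_a htilde_{ak} h_a^l = delta_k^l).
    Frame derivatives are taken by central finite differences."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    Hinv = np.linalg.inv(np.asarray(h(x)))      # Hinv[k, a] = htilde_{a k}
    dH = np.zeros((N, N, N))                    # dH[a, k, m] = d h_a^k / d x^m
    for m in range(N):
        xp, xm = x.copy(), x.copy()
        xp[m] += eps
        xm[m] -= eps
        dH[:, :, m] = (np.asarray(h(xp)) - np.asarray(h(xm))) / (2 * eps)
    return -np.einsum('la,akm->klm', Hinv, dH)  # Gamma[k, l, m]

def cart_frame(x):
    """Hypothetical frame: the constant Cartesian basis (e_x, e_y)
    written in polar (r, theta) components."""
    r, th = x
    return np.array([[np.cos(th), -np.sin(th) / r],
                     [np.sin(th),  np.cos(th) / r]])
```

Keeping components constant in this frame is ordinary Cartesian parallel transport, so the resulting connection matches the polar-coordinate connection of the flat plane.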
IV. Sensor State Manifolds that Support Metrics
[0103] In this section, we describe a specific embodiment of the invention in which the time series of sensor states x(t) imposes a Riemannian metric on the manifold, even in the absence of
local directionality at every point, the property that was required to implement the method in Section II.B. This metric can then be used to define parallel transport rules. As long as the manifold
has sufficient directionality to define vectors ha at a single point, those vectors can be parallel transported in order to define vectors ha at all other points. Then, Eqs.(1-2) can be used to
create coordinate-independent descriptions of sensor states.
[0104] As in Section II.B and Section III, consider a device that has one or more detectors. Let the sensor state be represented by an array of numbers x (xk, k=1, . . . , N where N≥1), and let x(t) be the sensor state time series. Consider a point x that is on at least N(N+1)/2 trajectory segments. Each of these segments defines an infinitesimal line element dx = x(t+dt) − x(t), where t is the time at which the trajectory segment passed through x and dt is an infinitesimal time interval. Now consider one of these line elements, and look for metrics that assign unit length to it. Such a metric ĝkl must satisfy the following constraint:

Σ_{k,l=1,…,N} ĝ_{kl} dx^k dx^l = 1   (Eq. 22)
[0105] Next, consider any collection containing N(N+1)/2 of the line elements at x. A metric that assigns unit length to all of these line elements must satisfy N(N+1)/2 linear
constraints like the one in Eq.(22). Because a metric has N(N+1)/2 components, one and only one metric satisfies these equations unless they happen to be inconsistent (no solutions) or redundant
(multiple solutions). If these equations have a unique solution in one coordinate system, there is a unique solution of the corresponding equations in any other coordinate system, and these solutions
define the same covariant tensor in the different coordinate systems. This is a consequence of the fact that each line element dx transforms as a contravariant vector. Now, consider all collections
of N(N+1)/2 line elements that have a unique solution to these equations. Let gkl be the average of the metrics computed from these subsets of line elements:

g_{kl} = (1/N_L) Σ_{i=1,…,N_L} ĝ_{kl}(i)   (Eq. 23)
[0106] where ĝkl(i) is the metric that assigns unit length to the ith collection of line elements and NL is the number of such collections. Note that sets of line elements for which
Eq.(22) has no solution or multiple solutions do not contribute to Eq.(23). The quantity gkl transforms as a covariant tensor because each contribution to the right side of Eq.(23) transforms in that
way. Therefore, gkl can be defined to be the metric at point x on the sensor state manifold. Notice that it may be possible to derive such a metric from the sensor state time series in a chosen
interval even if the local trajectory segments are not oriented along any particular “principal” directions. In other words, this method is more generally applicable than the method in Section II.B,
which required that the trajectory segments be clustered along preferred directions at each point.
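Equations (22-23) amount to a linear solve for the N(N+1)/2 independent components of the metric. The following sketch (our own illustrative code, assuming NumPy; not the claimed apparatus) recovers the unique metric that assigns unit length to N(N+1)/2 independent line elements:

```python
import numpy as np

def metric_from_elements(dxs):
    """Illustrative sketch of Eq. 22: given exactly N(N+1)/2 line elements
    dx through a point, each assumed to have unit metric length, solve
        sum_kl g_kl dx^k dx^l = 1
    for the symmetric metric g_kl.  Unique when the constraints are
    independent."""
    N = len(dxs[0])
    pairs = [(k, l) for k in range(N) for l in range(k, N)]
    # One row per line element; off-diagonal components appear twice.
    A = np.array([[dx[k] * dx[l] * (1 if k == l else 2) for (k, l) in pairs]
                  for dx in dxs])
    sol = np.linalg.solve(A, np.ones(len(dxs)))
    g = np.zeros((N, N))
    for j, (k, l) in enumerate(pairs):
        g[k, l] = g[l, k] = sol[j]
    return g
```

Normalizing a few directions to unit length under a known metric and solving recovers that metric exactly.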
[0107] The above-derived metric can now be used to define parallel transport on the sensor state manifold. For example, in one specific embodiment of the invention, the affine connection is
determined to be the following quantity that preserves the metrically computed lengths of vectors during parallel transport:

Γ_{lm}^k = (1/2) Σ_{n=1,…,N} g^{kn} (∂g_{mn}/∂x^l + ∂g_{nl}/∂x^m − ∂g_{lm}/∂x^n)   (Eq. 24)

[0108] where gkn is the contravariant tensor that is the inverse of gkl. Other definitions of the affine connection are also possible and are used in other embodiments of the invention.
Now, suppose that a reference state x0, together with N linearly independent vectors ha at x0, can be defined on the sensor state manifold. Specific embodiments of the method and apparatus for doing
this were described in Sections II.A and III. The above-described affine connection can be used to parallel transport these vectors to any other point x on the manifold. The resulting vectors at x
will depend on the path that was used to create them if the manifold has non-zero curvature. Therefore, in general, the path connecting x0 and x must be completely specified in a
coordinate-independent manner. In one specific embodiment of the present invention, such a path is prescribed as it was in Section III. Namely, we can define a “canonical” path to x that follows a
specific sequence of geodesics, which were created by parallel transport of the vectors ha at x0. Then, a coordinate-independent representation s of any sensor state x can be generated by integrating
Eq.(2) along the canonical path between x0 and x.
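Equation (24) is the standard metric-compatible (Levi-Civita) connection, and it can be sketched numerically with finite-difference metric derivatives (our own illustrative code, assuming NumPy; not the claimed apparatus). The flat-plane metric in polar coordinates is included as a hypothetical example:

```python
import numpy as np

def christoffel(g, x, eps=1e-5):
    """Illustrative sketch of Eq. 24: the metric-compatible connection
        Gamma^k_lm = 0.5 * g^kn (d_l g_mn + d_m g_nl - d_n g_lm)
    from a metric field g(x) -> (N, N) array, with metric derivatives
    taken by central finite differences."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ginv = np.linalg.inv(np.asarray(g(x)))
    dg = np.zeros((N, N, N))              # dg[k, l, m] = d g_kl / d x^m
    for m in range(N):
        xp, xm = x.copy(), x.copy()
        xp[m] += eps
        xm[m] -= eps
        dg[:, :, m] = (np.asarray(g(xp)) - np.asarray(g(xm))) / (2 * eps)
    Gamma = np.zeros((N, N, N))
    for k in range(N):
        for l in range(N):
            for m in range(N):
                Gamma[k, l, m] = 0.5 * sum(
                    ginv[k, n] * (dg[m, n, l] + dg[n, l, m] - dg[l, m, n])
                    for n in range(N))
    return Gamma

def polar_metric(x):
    """Hypothetical example: the flat-plane metric in polar coordinates."""
    r, th = x
    return np.array([[1.0, 0.0], [0.0, r * r]])
```

For the polar metric this reproduces the same connection components used in the earlier plane-and-sphere illustrations.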
[0109] As in Section III, the metric and the affine connection must be computed from sensor state data at every point on the paths used in Eq.(2). This means that the previously-encountered
sensor state trajectory x(t) must cover the manifold densely so that it passes through each of these points at least N(N+1)/2 times. However, this requirement can usually be relaxed by computing
the metric at a finite collection of sample points from trajectory segments passing through a very small neighborhood of each sample point (not necessarily through the point itself). Then, the values
of gkl at intervening points can be estimated by parametric or non-parametric interpolation (e.g., splines or neural nets, respectively). As before, this method of computation will be accurate as
long as the distance between sample points is small relative to the distance over which the metric changes.
[0110] In the specific embodiments of the present invention in this Section and in Sections II and III, the quantities hc, Γlmk, and gkl are computed by averaging over the ĥi,
Γ̂lmk(i), and ĝkl(i), respectively, and these quantities are computed from trajectory data in a chosen time interval; e.g., the time interval between t−ΔT and t. In
other words, data from that epoch is weighted by unity, and data from prior times (before t−ΔT) and subsequent times (after t) is weighted by zero. In other specific embodiments of the present
invention, Eqs. 13, 19, and 23 are applied to data from each of multiple epochs (e.g., epochs demarcated by t−NΔT and t−(N−1)ΔT, where N is any integer) in order to compute hc(N), Γlmk(N),
and gkl(N) for each epoch. Then, hc, Γlmk, and gkl can be computed by taking a weighted sum of hc(N), Γlmk(N), and gkl(N). For example, the weighting factor w(N) could become smaller as the
magnitude of N increases.
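A minimal sketch of this multi-epoch averaging follows. The exponential form of w(N) is only one illustrative choice satisfying the stated requirement that the weights shrink as the magnitude of N increases:

```python
import numpy as np

def multi_epoch_estimate(per_epoch, decay=0.5):
    """Combine per-epoch estimates (e.g., g_kl(N) for N = 0, 1, 2, ...,
    most recent epoch first) with normalized weights w(N) = decay**N.
    Works for scalars, vectors, or tensors of any shape."""
    per_epoch = np.asarray(per_epoch, dtype=float)
    w = decay ** np.arange(len(per_epoch))
    w = w / w.sum()
    # weighted sum over the epoch axis
    return np.tensordot(w, per_epoch, axes=1)
```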
V. Tests with Simulated Data
[0111] V.A. One Dimensional Sensor State Manifolds
[0112] V.A.1. Acoustic Waveforms of Human Speech
[0113] In this Section, the mathematical properties of the present invention are further illustrated by applying a specific embodiment of it to acoustic waveforms of human speech. An adult
male American uttered English words with speed and loudness that were characteristic of normal conversation. These sounds were digitized with 16 bits of depth at a sample rate of 11.025 kHz. FIG. 4a
shows a 40 ms segment of digitized signal (x(t)), located at the midpoint of the 334 ms signal corresponding to the word “door”. FIG. 4b shows the “s representation” (i.e., the rescaled signal S(t))
that was derived from FIG. 4a by the method of Section II.A. The value of S was determined at each time point by a scale function s(x), which was derived from the previous 10 ms of signal (i.e., ΔT=10 ms). These scale functions are shown by the horizontal lines in FIG. 4a, which denote values of x corresponding to s=±50n for n=1, 2, . . . FIG. 4d shows the signal that
was derived from FIG. 4a by means of the non-linear transformation (x′(x)) shown in FIG. 4c. FIG. 4e is the rescaled signal that was derived from FIG. 4d with the parameter ΔT chosen to be 10 ms.
Although there are significant differences between the “raw” signals in FIGS. 4a and 4d, their s representations (FIGS. 4b and 4e) are almost identical, except for a few small discrepancies that can
be attributed to the discrete methods used to compute derivatives. Thus, the s representation was invariant under a non-linear signal transformation, as expected from the derivation in Section II.A.
It is interesting to note that this result is apparent when one listens to the sounds represented in FIG. 4. Although all four signals in FIG. 4 sound like the word “door”, there is a clear
difference between the sounds of the two “raw” signals, and there is no perceptible difference between the sounds of their rescaled representations. In general, the rescaled signals sound like the
word “door”, uttered by a voice degraded by slight “static”.
[0114] The above example suggests how dynamic rescaling might be used to enable universal communication among systems with a variety of transmitters and receivers. To see this, imagine that
FIGS. 4a and 4d are the signals in the detector circuits of two receivers, which are “listening” to the same transmission. The non-linear transformation that relates these raw signals (FIG. 4c) could
be due to differences in the receivers' detector circuits (e.g., their gain curves), or it could be due to differences in the channels between the receivers and the transmitter, or it could be due to
a combination of these mechanisms. As long as both receivers use rescaling to “decode” the detected signals, they will derive the same information content (i.e., the same function S(t)) from them. If
one of the receivers is part of the system that originated the transmission (i.e., if this system is “listening” to its own transmission), then the information in the signal's s representation will
be faithfully communicated to the other receiver, despite the fact that it has different “ears” than the transmitting system. Alternatively, imagine that FIGS. 4a and 4d are the signals in a single
receiver, when it detects the broadcasts from two different transmitters. In this case, the non-linear transformation that relates these signals could be due to differences in the “voices” (i.e., the
transmission characteristics) of the two transmitters. As long as the receiver “decodes” the detected signals by rescaling, it will derive the same information content (i.e., the same S(t)) from
them. In other words, it will “perceive” the two transmitters to be broadcasting the same message in two different “voices”. As mentioned above, the transmitters will derive the same information
content as the receivers if they “listen” to their own transmissions and then rescale them. In this way, systems with heterogeneous transmitters and receivers can communicate accurately without using
calibration procedures to measure their transmission and reception characteristics.
[0115] Some comments should be made about technical aspects of the example in FIG. 4. The dynamically rescaled signals in FIGS. 4b and 4e were computed by a minor variant of the method in
Section II.A. Specifically, we assumed that all signal transformations were monotonically positive, and we restricted the contributions to Eq.(3) and Eq.(5) to those time points at which the signal
had a positive time derivative as it passed through the values y and y′, respectively. The rescaled signal is still invariant because monotonically positive transformations do not change the sign of
the signal's time derivative, and, therefore, the functions h(y) and h′(y′) were still constructed from time derivatives at identical collections of time points. At each time point, we attempted to
compute the rescaled signal from the signal time derivatives encountered during the most recent 10 ms (ΔT=10 ms). At some times, the signal could not be rescaled because the signal level
at that time was not attained during the previous 10 ms, and, therefore, there were no contributions to the right side of Eq.(3) for some values of y. For example, this happened at t˜163, 174, and
185 ms in FIG. 4. As mentioned in Section II.A, this occurs at identical time points when rescaling is applied to the untransformed signal (e.g., FIG. 4a) and to any transformed version of it (e.g.,
FIG. 4d). This means that the s representations of all of these signals are non-existent at identical time points and that at all other times they exist and have the same values. Therefore, this
phenomenon does not corrupt the invariance of the signal's s representation, although it does reduce its information content. In this experiment, the s representation could be computed at 92% of all
time points.
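Since Eqs.(2-3) are not reproduced in this excerpt, the Python sketch below gives our hedged reading of the rescaling procedure, including the positive-derivative variant described above: h(y) is estimated as the mean positive time derivative of x(t) at signal level y, and s(x) is the cumulative integral of dy/h(y) anchored at the reference level x=0. The toy signal and monotone transformation are illustrative:

```python
import numpy as np

def rescale(x, n_bins=200):
    """Hedged sketch of dynamic rescaling.  Under a monotone transformation
    x' = f(x) with f(0) = 0 and f' > 0, h scales by f'(x) and dy by the
    same factor, so s (and hence S(t) = s(x(t))) is unchanged."""
    dx = np.gradient(x)
    pos = dx > 0                              # positive-slope crossings only
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    bins = np.clip(np.digitize(x[pos], edges) - 1, 0, n_bins - 1)
    # h(y): mean positive time derivative at each signal level bin
    h = np.array([dx[pos][bins == b].mean() if np.any(bins == b) else np.nan
                  for b in range(n_bins)])
    ds = np.where(np.isfinite(h) & (h > 0), width / h, 0.0)
    s_edges = np.concatenate([[0.0], np.cumsum(ds)])
    s_centers = 0.5 * (s_edges[:-1] + s_edges[1:])
    s0 = np.interp(0.0, centers, s_centers)   # anchor s(0) = 0
    return np.interp(x, centers, s_centers) - s0

# toy signal and a monotone "sensor" transformation with f(0) = 0
t = np.linspace(0.0, 10.0, 20000)
x = np.sin(2 * np.pi * t)
f = lambda u: u + 0.3 * u ** 3
S_raw, S_transformed = rescale(x), rescale(f(x))
```

Away from the poorly sampled signal extremes, S_raw and S_transformed agree closely, mirroring the invariance of FIGS. 4b and 4e.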
[0116] FIG. 5 shows what happened when the nature of the signal transformation changed abruptly. The signal in FIG. 5b was derived by applying the non-linear transformation in FIG. 4c to
the first half (i.e., the first 167 ms) of the signal excerpted in FIG. 4a and by applying the non-linear transformation in FIG. 5a to the second half of that signal. FIG. 5c shows the s
representation derived by rescaling FIG. 5b with ΔT=10. Comparison of the latter to FIG. 4b shows that the s representation was invariant except during the time period 167 ms≦t≦177
ms. These discrepancies can be understood in the following way. During this time interval, the rescaled signal in FIG. 5c was derived from a mixed collection of signal levels, some of which were
transformed as in FIG. 4c and some of which were transformed as in FIG. 5a. This violates the proof of invariance (Section II.A), which assumed the time-independence of the signal transformation.
Notice the transitory nature of this corruption of the s representation. The rescaled signals in FIGS. 4b and 5c became identical again, once sufficient time (ΔT) elapsed for the transformation
to become time-independent over the time interval utilized by the rescaling procedure. In other words, the rescaling process was able to adapt to the new form of the transformation and thereby
“recover” from the disturbance. This adaptive behavior resembles that of the human subjects of the goggle experiments mentioned in Summary of the Invention.
[0117] FIG. 6 illustrates the effect of noise on the rescaling procedure. FIG. 6a was derived from FIG. 4d by adding white noise, chosen from a uniform distribution of amplitudes between
−200 and +200. This causes a pronounced hiss to be superposed on the word “door” when the entire 334 ms sound exemplified by FIG. 6a is played. FIG. 6b is the s representation, derived by
rescaling FIG. 6a with ΔT=10 ms. Comparison of FIGS. 6b, 4e, and 4b shows that the noise has caused some degradation of the invariance of the s representation. This is expected because
additive noise ruins the invertibility of the transformations relating FIGS. 6a, 4d, and 4a, thereby violating the proof of the invariance of S in Section II.A. The noise sensitivity of the s
representation can be decreased by increasing ΔT, because this increases the number of contributions to the right side of Eq.(3), which tends to “average out” the effects of noise. However, such
an increase in ΔT means that more time is required for the rescaling process to adapt to a sudden change in the signal transformation.
[0118] V.A.2. Spectra of Synthetic Speech-like Sounds
[0119] In the previous Section, a specific embodiment of the invention was demonstrated by applying it to time-domain human speech waveforms that were related to one another by invertible
transformations. However, these transformations incorporated the effects of a relatively small range of speakers' “voices” and listeners' “ears”. For example, signals related by such transformations
did not mimic voices with a significant range of pitches. A much wider range of speech signals can be created by transforming a sound's short-term Fourier spectra with multidimensional non-linear
transformations. In this Section, we demonstrate that, if the speech spectra produced by different speakers and/or detected by different listeners are related by such transformations, they will have
the same rescaled representation. For computational simplicity, we consider synthetic speech-like signals that are generated by a “glottis” and “vocal tract” controlled by a single degree of freedom.
These signals mimic the “one-dimensional speech” produced by multiple muscles whose motion is determined by the value of a single time-dependent parameter. The same approach can be applied to human
speech signals, which are produced by a vocal apparatus with multiple degrees of freedom, by utilizing the specific embodiments of the invention described in Sections II.B, III, and IV.
[0120] The “1D speech” signals were generated by a standard linear prediction (LP) model. In other words, the signals' short-term Fourier spectra were equal to the product of an “all pole”
transfer function and a glottal excitation function. The transfer function had six poles: two real poles and four complex poles (forming two complex conjugate pairs). The resulting speech spectra
depended on the values of eight real quantities, six that described the positions of the poles and two that described the pitch and amplitude (“gain”) of the glottal excitation. Each of these
quantities was a function of a single parameter (g), which itself depended on time. These eight functions described the nature of the speaker's “voice”, in the sense that they defined the 1D manifold
of all spectra that the speaker could produce as g ranged over all of its possible values. The actual sound produced at any given time was determined by these eight functions, together with the value
of g(t). The latter function defined the “articulatory gesture” of the speaker, in the sense that it determined how the speaker's vocal apparatus was configured at each time. In a musical analogy,
the g-dependent functions of the LP model would describe the range of possible states of a musical instrument played with one finger, and the function g(t) would describe the motions of the
musician's finger as it configures the instrument during a particular tune. In these examples, we considered speakers who produced “voiced” speech sounds that were driven by regular glottal impulses.
However, it is straightforward to apply the same methods to “unvoiced” speech sounds that are driven by noise-like glottal excitation functions. The pitch of the first speaker's voice was taken to be
constant and equal to 200 Hz.
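The LP synthesis just described can be sketched as follows. The pole positions, gain, and frame length below are illustrative assumptions, not the values used in the experiments:

```python
import numpy as np

def lp_voice_frame(poles, gain=1.0, pitch_hz=200, sr=10_000, n=1000):
    """Hedged sketch of "all-pole" LP synthesis: a voiced glottal
    excitation (impulse train at the given pitch) passed through
    H(z) = gain / prod_k (1 - p_k z^-1)."""
    a = np.real(np.poly(poles))          # A(z) coefficients: 1, a1, ..., a6
    x = np.zeros(n)
    x[:: sr // pitch_hz] = 1.0           # glottal impulse train
    y = np.zeros(n)
    for t in range(n):                   # direct-form IIR recursion
        acc = gain * x[t]
        for k in range(1, len(a)):
            if t - k >= 0:
                acc -= a[k] * y[t - k]
        y[t] = acc
    return y

# two real poles and two complex-conjugate pairs (all inside the unit circle)
poles = [0.95, 0.70,
         0.90 * np.exp(1j * 2 * np.pi * 700 / 10_000),
         0.90 * np.exp(-1j * 2 * np.pi * 700 / 10_000),
         0.85 * np.exp(1j * 2 * np.pi * 2200 / 10_000),
         0.85 * np.exp(-1j * 2 * np.pi * 2200 / 10_000)]
frame = lp_voice_frame(poles)
```

In the patent's model, each of these pole, pitch, and gain quantities would be a function of the single articulatory parameter g.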
[0121] The first listener's “ears” were described by his/her method of detecting and processing the time domain speech signals. The above-described signals were digitized at 10 kHz, and
then short-term Fourier spectra were produced from the signals in a 10 ms Hamming window that was advanced in increments of 5 ms. FIG. 7b shows the spectrogram that resulted from the signal generated
by the first speaker's “voice”, when it went through the series of configurations described by the “gesture” function in FIG. 7a. The spectrum at each time point was parameterized by cepstral
coefficients, which were generated by the discrete cosine transformation (DCT) of the log of the spectral magnitude, after the spectral magnitude had been averaged in equally spaced 600 Hz bins. The
listener described in this paragraph (listener #1) was assumed to detect only the third, fourth, and fifth cepstral coefficients of each spectrum. The cepstral coefficients from each short-term
spectrum defined a single point in this three-dimensional space. Each of these points fell on a curve defined by the cepstral coefficients corresponding to all possible configurations of the
speaker's vocal apparatus (i.e., all possible values of g). The precise shape of this curve depended on the nature of the speaker's voice (specified by the g-dependence of the speech model's poles
and other parameters). FIG. 7c shows the configuration of this curve for the voice of the speaker described in the previous paragraph. A convenient coordinate system (denoted by x) was established on
this curve by projecting each of its points onto a connected array of chords that hugged the curve. The “raw” sensor signal for a specific utterance consisted of the temporal sequence of coordinates
x(t) that were generated as the cepstrum traversed that curve. Because the spectrogram in FIG. 7b was generated by an oscillatory g(t), previously generated spectra (and cepstra) were revisited from
time to time, and the corresponding cepstral coefficients moved back and forth along the curve in FIG. 7c. The left side of FIG. 7d shows the oscillatory sensory signal x(t) that was generated in
this way.
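Listener #1's front end can be sketched as follows. The exact windowing and coefficient conventions are assumptions; in particular, the DCT indices here are 0-based, whereas the text's "third, fourth, and fifth" is 1-based:

```python
import numpy as np

def binned_cepstrum(frame, sr=10_000, bin_hz=600, keep=(2, 3, 4)):
    """Hedged sketch of listener #1's "ears": magnitude spectrum of a
    Hamming-windowed frame, averaged in equal 600 Hz bins, log, then
    DCT-II; `keep` selects the retained coefficients."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    n_bins = int(freqs[-1] // bin_hz) + 1
    binned = np.array([spec[(freqs >= b * bin_hz) & (freqs < (b + 1) * bin_hz)].mean()
                       for b in range(n_bins)])
    logmag = np.log(binned + 1e-12)
    # DCT-II via an explicit cosine matrix (avoids a scipy dependency)
    m = np.arange(n_bins)
    C = np.cos(np.pi * np.outer(m, 2 * m + 1) / (2 * n_bins))
    return (C @ logmag)[list(keep)]

# a 10 ms frame at 10 kHz: 100 samples of a synthetic "voiced" sound
t = np.arange(100) / 10_000
frame = np.sin(2 * np.pi * 700 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)
cep = binned_cepstrum(frame)
```

Listener #2's model differs only in skipping the log (DCT of the binned magnitude itself) and in the coefficients retained.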
[0122] The ears of a second listener were modeled in the following manner. The second listener was assumed to compute the short-term Fourier spectra of the time domain signal, as described
above. However, instead of calculating the cepstrum of each spectrum, the second listener was assumed to compute the DCT of its magnitude (not its log magnitude), after it (the spectral magnitude)
had been averaged in equally spaced 600 Hz bins. This listener detected only the second, third, and sixth of these DCT coefficients. The voice of the above-described speaker was characterized by the
curve (FIG. 8a) defined by the spectral DCT coefficients of all possible sounds that could be generated by the vocal apparatus (i.e., all possible values of g). As before, a convenient coordinate
system (x′) was established on this curve by projecting each of its points onto a connected array of chords that hugged the curve. The left side of FIG. 8b is the sensor signal x′(t) that was induced
in listener #2 by the sound in FIG. 7b. Note that the x and x′ coordinate systems could bear any relationships to the curves in FIGS. 7c and 8a, respectively, and need not have any definite or
known relationship to one another, except that x=0 and x′=0 must correspond to the same sound (e.g., the same value of g). In this example, this condition was satisfied by defining the
x and x′ coordinate systems so that x=x′=0 corresponded to the first short-term spectrum in the utterance in FIG. 7b. Alternatively, this could be arranged by having both listeners hear
any single sound produced by speaker #1 and agree to originate their coordinate systems at the corresponding point on the speaker's “voice” curve; this is analogous to having a choir leader play
a pitch pipe in order to establish a common origin of the musical scale among the singers. Finally, the raw sensory signal in each listener, x(t) and x′(t), was processed by rescaling with ΔT=500 ms. The results are shown on the right sides of FIGS. 7d and 8b, respectively. Notice the similarity between these s representations despite the differences between the sensor signals, x
(t) and x′(t), from which they were created. This means that the two listeners created the same rescaled representation of the utterance, despite the dramatic differences in their “ear” mechanisms
(FIGS. 7c and 8a). The rescaled representations were the same because the sensor signals, x(t) and x′(t), were related to one another by an invertible transformation that preserved the null
amplitude. This was true because each listener was sensitive to the spectral changes produced by all changes in g, and, therefore, each sensor signal was invertibly related to g(t). Furthermore, for
the same reason, any other gesture function g̃(t) that is invertibly related to the function in FIG. 7a will generate an utterance with the rescaled representation in the right panel of
FIG. 7d. In other words, the utterances that are produced by these “different” gesture functions will be internally represented as the same message uttered in two different tones of voice. Finally,
notice that the rescaled representation of g1(t)≡g(t)−g(0) is identical to the rescaled representations of x(t) and x′(t). This is expected because g1(t) is invertibly related to each of these sensor
signals in a way that transforms g1=0 into x=x′=0. This means that the speaker creates identical internal representations of both the spoken sound and the “motor” signal that
controls the configuration of the vocal apparatus.
[0123] The voice of a second speaker was modeled by choosing different g-dependent functions for the 8 quantities in the LP model of the vocal apparatus. Specifically, the glottal pitch was
set equal to 125 Hz, and the poles of the vocal tract transfer function were chosen to be significantly different functions of g than for the first voice. FIG. 9a shows the spectrogram produced by
this second “voice” when it made the “articulatory gesture” in FIG. 7a. FIG. 9b is the curve in DCT coefficient space induced in listener #2 by this voice, when it produced all possible spectra
(i.e., spectra corresponding to all possible values of g). FIGS. 8a and 9b show that listener #2 characterized the first and second voices by dramatically different curves in DCT coefficient
space. The left side of FIG. 9c depicts the sensor signal x′(t) induced in listener #2 by the utterance in FIG. 9a. As before, the origin of the x′ coordinate system along the curve in FIG. 9b
was chosen to correspond to the first sound spectrum emitted by the speaker. Finally, the right side of FIG. 9c is the rescaled representation of this raw sensor signal. Notice that there is no
significant difference between the rescaled representations in FIGS. 9c and 8b, despite the fact that they were derived from the utterances of different voices and corresponded to raw sensor signals
from different spectrograms (FIGS. 9a and 7b). This is because these sensor signals are related by an invertible transformation. Such a transformation exists because each sensor signal is invertibly
related to the same gesture function (i.e., g(t) in FIG. 7a).
[0124] V.B. Multidimensional Sensor State Manifolds Having Local Directionality
[0125] In this section, we demonstrate the specific embodiment of the present invention in Section II.B by applying it to simulated data on a two-dimensional sensor state manifold. Let x=(x1, x2) represent the state of the device's sensor. For example, these numbers might be the coordinates of a specific feature being tracked in a time series of digital images, or they could be
the amplitudes or frequencies of peaks in the short-term Fourier spectrum of an audio signal. Suppose that FIG. 10a represents the trajectories of the sensor states that were previously encountered
by the system. Notice that these lines tend to be oriented in nearly horizontal or vertical directions, thereby endowing the manifold with directionality at each point. We used these data to compute
the local vectors ha on a uniform grid of sample points that was centered on the origin and had spacing equal to two units. To do this, we considered a small neighborhood of each sample point, and
the time derivative of each trajectory segment traversing the neighborhood was computed at equal time intervals. Then, Eqs.(9-13) with p=1 were applied in order to derive local vectors from
the collection of time derivatives at each sample point. The resulting vectors ha, shown in FIG. 10b, were then interpolated in order to estimate the vectors at intervening points. As expected, these
vectors reflect the horizontal and vertical orientations of the trajectories from which they were derived. Finally, Eqs.(1-2) were applied to these ha in order to compute the coordinate-independent
representation sa of each sensor state on the manifold, relative to the reference state which was chosen to be x0=(0,0). The result is shown in FIG. 10c, which depicts the level sets of the
scale function sa(x) that is intrinsic to the sensor state history in FIG. 10a. FIG. 11 shows how an “image” of sensor states in the x coordinate system is represented in the s coordinate system.
[0126] Next, we considered what would have happened if the same device had “experienced” sensor states shown in FIG. 12a. These trajectories are related to those in FIG. 10a by the following non-linear transformation:

x2 → 0.2 − 0.2x1 + x2 − 0.01x1² + 0.02x2² + 0.01x1x2  (Eq. 25)
[0127] For example, suppose that the sensor state is the location of a feature in a digital image. Equation (25) could represent the way the sensor states are transformed by a distortion of
the optical/electronic path within the camera or by a distortion of the surface on which the camera is focused (e.g., distortion of a printed page). The procedure outlined above was used to compute
the local vectors on a uniform grid of sample points. FIG. 12b shows the resulting vectors, which are oriented along the principal directions apparent in FIG. 12a. Next, interpolation was used to
estimate the ha at intervening points, and Eqs.(1-2) were used to compute the coordinate-independent representation sa of each sensor state on the manifold, relative to the reference sensor state
which was chosen to be x0=(0.1,0.2). Notice that we have assumed prior knowledge of the transformed position of the reference sensor state. In other words, we have assumed that we have the
prior knowledge necessary to identify this state both before and after the onset of the process, which remaps the sensor states. The result of this calculation is shown in FIG. 12c, which depicts the
level sets of the functions sa(x), the scale function inherent to the sensor state data in FIG. 12a. These functions were used to compute the sa representation of the transformed version of the
“image” in the left panel of FIG. 11. The transformed image and its sa representation are shown in FIG. 13. Comparison of FIG. 11 and FIG. 13 shows that the sa representations of the untransformed
and transformed image are nearly identical. Thus, the invented method and apparatus make it possible to maintain invariant representations of stimuli in the presence of unknown invertible
transformations of sensor states, such as the one in Eq.(25). The tiny discrepancies between FIG. 11 and FIG. 13 can be attributed to errors in the interpolation of the ha, which is due to the
coarseness of the grid on which ha was sampled. This error can be reduced if the distance between sample points can be decreased. This is possible if the device is allowed to experience a denser set
of sensor states (i.e., more trajectory segments than shown in FIGS. 10a and 12a) so that even tiny neighborhoods contain enough data to compute the ha.
[0128] V.C. Multidimensional Sensor State Manifolds Supporting Parallel Transport
[0129] In this section, we demonstrate the specific embodiment of the present invention in Section III by applying it to simulated data on a two-dimensional sensor state manifold. Let x=(x1, x2) represent the state of the device's sensor. For example, these numbers might be the coordinates of a specific feature being tracked in digital images, or they could be the amplitudes
and/or frequencies of peaks in the short-term Fourier spectrum of a microphone's output signal. Suppose that FIG. 14a represents the trajectories of the sensor states that were previously encountered
by the system. Notice that these happen to be straight lines that are traversed at constant speed. Equations 18-19 were used to compute the affine connection on a uniform grid of sample points that
was centered on the origin and had spacing equal to two units. To do this, we considered a small square neighborhood of each sample point with dimensions equal to 1.6 units. Each trajectory segment
that traversed the neighborhood was divided into line elements traversed in equal time intervals. Next, we considered any three pairs of such line elements, where each pair consisted of two adjacent
line elements on a trajectory segment. Then, we asked if there was a unique affine connection Γ̂lmk that parallel transported each line element into the other line element of
the same pair. The affine connection at the sample point was set equal to the average of the quantities Γ̂lmk that were derived for all possible triplets of paired line
elements in the neighborhood. Triplets of paired line elements for which there was no unique solution (e.g., multiple solutions or no solution) did not contribute to this average. In this way, we
derived an affine connection for which the neighboring trajectory segments were geodesic in an average sense. In this particular case, all components of the resulting affine connection equaled zero;
i.e., the x coordinate system is a geodesic coordinate system of a flat manifold. This result is expected because a vanishing affine connection is the only one that parallel transports equally long
line elements of straight lines into one another.
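The underlying parallel-transport operation can be sketched numerically. The code below integrates dv^k/dt = −Γ^k_{lm} v^l dx^m/dt along a discretized path; as a sanity check it uses the flat plane in polar coordinates, a coordinate system with a non-vanishing connection, for which transport around a closed loop must return the original vector:

```python
import numpy as np

def parallel_transport(v0, path, gamma):
    """Transport v0 along a discretized path (an (N, d) array of points)
    using the update v^k -= Γ^k_{lm}(x) v^l dx^m at each step.
    gamma(x) returns the (d, d, d) array of connection components."""
    v = np.array(v0, dtype=float)
    for i in range(len(path) - 1):
        dx = path[i + 1] - path[i]
        v = v - np.einsum('klm,l,m->k', gamma(path[i]), v, dx)
    return v

def gamma_polar(x):
    # Christoffel symbols of the flat plane in polar coordinates (r, th)
    r = x[0]
    g = np.zeros((2, 2, 2))
    g[0, 1, 1] = -r                       # Γ^r_{θθ}
    g[1, 0, 1] = g[1, 1, 0] = 1.0 / r     # Γ^θ_{rθ} = Γ^θ_{θr}
    return g

theta = np.linspace(0.0, 2 * np.pi, 5001)
path = np.stack([np.ones_like(theta), theta], axis=1)  # unit circle
v_end = parallel_transport([1.0, 0.0], path, gamma_polar)
```

In the patent's case the x coordinate system turned out to be geodesic with a vanishing connection, so transport there leaves every vector's components unchanged.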
[0130] The reference state was chosen to be the origin of the x coordinate system (x0=0), and the method in Section II.B (Eq.(13)) was used to compute local vectors from the
directionality of nearby trajectory segments. Specifically, we considered the trajectory segments passing through a small square neighborhood of x0 with dimensions equal to 1.0. Local vectors ĥi were found by calculating the time derivatives along these trajectory segments at the equally spaced time points shown in FIG. 14a (Eq.(9)). We then looked at all possible ways of partitioning this
collection of vectors into two subsets and found the partition with the minimal value of E (Eq.(12) with p=1). Finally, the average vector in each of these partitions was computed in order to
find the principal vectors hc at x0. As explained in Section II.B, these are the directions in which the local trajectory segments tend to be oriented. In this example, these vectors were:
h1 = (0.488, −0.013) and h2 = (0.064, 0.482)  (Eq. 26)
[0131] This result expresses the fact that the trajectory segments in FIG. 14a tend to be oriented in nearly horizontal and vertical directions.
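The exhaustive-partition search described above can be sketched as follows. Since Eq.(12) is not reproduced in this excerpt, the p=1 cost below (summed absolute deviation of each vector from its subset mean) is an assumption:

```python
import numpy as np
from itertools import combinations

def principal_vectors(derivs):
    """Partition a small collection of local velocity vectors into two
    subsets, choosing the split that minimizes a p=1-style cost, and
    return the two subset means as the principal vectors h1, h2."""
    derivs = np.asarray(derivs, dtype=float)
    n = len(derivs)
    best_cost, best = np.inf, None
    for r in range(1, n // 2 + 1):
        for subset in combinations(range(n), r):
            rest = [i for i in range(n) if i not in subset]
            a, b = derivs[list(subset)], derivs[rest]
            cost = np.abs(a - a.mean(0)).sum() + np.abs(b - b.mean(0)).sum()
            if cost < best_cost:
                best_cost, best = cost, (a.mean(0), b.mean(0))
    return best

# velocity vectors clustered around nearly horizontal/vertical directions
rng = np.random.default_rng(0)
horiz = np.array([0.5, 0.0]) + 0.02 * rng.standard_normal((4, 2))
vert = np.array([0.0, 0.5]) + 0.02 * rng.standard_normal((4, 2))
h1, h2 = principal_vectors(np.vstack([horiz, vert]))
```

The exhaustive search is feasible only for small neighborhoods; for larger collections a clustering heuristic would be needed.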
[0132] The affine connection was used to “spread” these vectors throughout the manifold by parallel transporting them along type 1 and type 2 geodesics. Then, Eqs.(1-2) were used to compute
the values of s that comprise the coordinate-independent representation of each sensor state x. The results are shown in FIG. 14b, which depicts the level sets of s(x). Because of the flat nature of
this particular manifold, the s coordinate system is related to the x coordinate system by an affine transformation. FIG. 15 shows how an “image” of sensor states in the x coordinate system is
represented in the s coordinate system.
[0133] Next, we considered what would have happened if the same device had “experienced” sensor states shown in FIG. 16a. These trajectories are related to those in FIG. 14a by the
following non-linear transformation:
x2 → x2 − 0.01x1² + 0.02x2² + 0.01x1x2  (Eq. 27)
[0134] For example, suppose that x is the location of a feature in a digital image. Equation (27) could represent the way the sensor states are transformed by a distortion of the optical/
electronic path within the camera or by a distortion of the surface on which the camera is focused (e.g., distortion of a printed page). The exact same procedure as outlined above was used to compute
the affine connection on a uniform grid of sample points. The resulting affine connection was non-vanishing at each sampled point, and smooth interpolation was used to estimate its values at
intervening points. Next, the above-described procedure was used to compute the principal directions of the trajectories at the reference point (2.5, 0). Notice that we have chosen the transformed
sensor state that corresponds to the untransformed reference sensor state x0=0. In other words, we have assumed that we have the prior knowledge necessary to identify this reference sensor
state in a coordinate-independent fashion. The preferred directions at this point are:
h1 = (0.483, 0.025) and h2 = (0.058, 0.486)  (Eq. 28)
[0135] Notice that these vectors are almost the same as those in Eq.(26), as expected, because Eq.(27) implies that ∂x′k/∂xl=δlk at x=0. The small discrepancies between
Eq.(26) and Eq.(28) can be attributed to the finite breadth of the neighborhood of x0 that was used to compute these vectors. The affine connection was used to “spread” these vectors throughout the
manifold along type 1 and type 2 geodesics, and the s representation of each point in the manifold was computed by means of Eq.(1-2). FIG. 16b shows the level sets of the resulting function s(x). The
function s(x) was used to compute the s representation of the transformed version of the sensor state “image” in the left panel of FIG. 15. The transformed image and its s representation are shown in
FIG. 17. Comparison of FIGS. 15 and 17 shows that the s representations of these images are nearly identical. In other words, these representations are invariant with respect to the process that
transforms the sensor states by Eq.(27), and, therefore, they are suitable for analysis by a pattern analysis program. The tiny discrepancies between FIGS. 17 and 15 can be attributed to errors in
the interpolation of the affine connection, which is due to the coarseness of the grid on which the affine connection was sampled. This error can be reduced if the distance between sample points can
be decreased. This is possible if the device is allowed to experience a denser set of sensor states (i.e., more trajectory segments than shown in FIG. 16a) so that even tiny neighborhoods contain
enough data to compute the affine connection.
VI. Discussion
[0136] The sensor states of a device for detecting stimuli may be invertibly transformed by extraneous physical processes. Examples of such processes include: 1) alterations of the internal
characteristics of the device's sensory apparatus, 2) changes in observational conditions that are external to the sensory device and the stimuli, and 3) certain systematic modifications of the
presentation of the stimuli themselves. It is clearly advantageous for the device to create an internal representation of each stimulus that is unaffected by these sensor state transformations. Then,
it will not be necessary to recalibrate the device's detector (or to retrain its pattern analysis module) in order to account for the effects of these processes. In other words, it is advantageous
for a sensory device to encode stimuli in a way that reflects their intrinsic properties and is independent of the above-mentioned extrinsic factors.
[0137] As discussed in earlier Sections, certain relationships among a collection of sensor states may be preserved in the presence of such a transformation, even though each sensor state
is individually transformed. It is mathematically possible to characterize these relationships and to use them to generate stimulus representations that will be invariant under such sensor state
transformations. This can be done by exploiting the “natural” intrinsic structure of the collection of sensor states encountered during a chosen time period. Specifically, such a sensor state time
series may have a local internal structure that makes it possible to determine vectors ha at each point in the collection. In essence, these vectors establish local coordinate systems, which are
analogous to the global “center-of-mass” coordinate system of a collection of particles (see the Summary of the Invention). If the representation of each sensor state is referred to these local
coordinate systems of the collection, it will be invariant under any local transformations of sensor states. Therefore, such representations will not be affected by the presence of processes
causing such sensor state transformations. This is the basis for one embodiment of the present invention, which includes a non-linear signal processing technique for identifying the “part” of a
time-dependent signal that is invariant under signal transformations. This form of the signal is found by rescaling the signal at each time, in a manner that is determined by signal levels in a
chosen time interval (e.g., during the most recent time period of length ΔT). The rescaled signal (called its s representation) is unchanged if the original signal time series is subjected to any
time-independent invertible transformation.
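The one-dimensional version of this rescaling can be sketched as follows. This is an illustrative reconstruction in Python, not the patent's exact procedure: the local scale h at each signal level is estimated here as the time-averaged |dx/dt| observed whenever the signal passes through that level, and s is the cumulative integral of 1/h over levels, referred to the signal's starting level. The function `rescale` and all numerical parameters are hypothetical choices.

```python
import numpy as np

def rescale(x, dt, n_bins=200):
    # Estimate the local scale h at each signal level as the average |dx/dt|
    # observed when the signal passes through that level, then integrate 1/h
    # over levels to form the s representation, referred to the starting level.
    v = np.abs(np.gradient(x, dt))                      # |dx/dt| at each sample
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    h = np.array([v[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(n_bins)])
    h = np.where(np.isnan(h), np.nanmean(h), h)         # fill any empty bins
    s_of_bin = np.cumsum((edges[1] - edges[0]) / h)     # s at each level bin
    return s_of_bin[idx] - s_of_bin[idx[0]]

dt = 0.005
t = np.arange(0.0, 40 * np.pi, dt)
x = np.sin(t)                      # sensor state time series of one device
y = x + 0.3 * x**3                 # same states after an invertible transform
s_x, s_y = rescale(x, dt), rescale(y, dt)
print(np.max(np.abs(s_x - s_y)))   # small: the two s representations agree
```

Because a monotonically increasing transform y = f(x) scales both the line element dx and the local scale h by f′(x), the ratio ds = dx/h is unchanged, which is why the two representations agree up to discretization error.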
[0138] A specific embodiment of the present invention consists of multiple sensory devices that have different sensors but represent stimuli in the same way. Specifically, any two sensory
devices will create identical rescaled representations of an evolving stimulus as long as there is an invertible mapping between their sensor state time series. The latter condition will be satisfied
by a wide variety of sensory devices. Specifically, consider any sensory device that is designed to sense a stimulus that has d degrees of freedom; i.e., a stimulus whose configurations define a
d-dimensional manifold. Furthermore, assume that there is a time-independent invertible mapping between these stimulus configurations and a d-dimensional manifold of sensor states in the device. This
is a weak assumption, which simply means that the device is sensitive to all degrees of freedom of the stimulus. It follows that there will be a time-independent invertible mapping between the sensor
state time series of any two such devices as they both observe the same evolving stimulus. Therefore, the rescaled representations of those evolving sensor states will be identical and can be
subjected to the same pattern analysis. Notice that, because the sensor state time series of such a device is invertibly related to the time series of evolving stimulus configurations, it has the
same rescaled representation as the stimulus configuration time series. In this sense, such a device encodes “inner” properties of the time series of the stimulus configurations themselves; i.e.,
properties that are independent of the nature of the observing device or the conditions of observation. As an illustration, consider a computer vision device that is designed to detect the
expressions of a particular face, and suppose that those expressions form a 2D manifold. For instance, this would be the case if each facial expression is defined by the configuration of the mouth
and eyes and if these are controlled by two parameters. An example of such a device is a computer vision system V in which the sensor state x consists of a particular pair of 2D Fourier components of
the face. As long as each change of facial expression causes a change in this pair of Fourier components, there will be a time-independent invertible mapping between x and the manifold of facial
expressions (i.e., between x and the two parameters controlling the expressions). Now, consider another computer vision system V′, which computes the coefficients of 16 particular Bessel functions in
the 2D Bessel expansion of the facial image. Each face will correspond to a point in a 16-D space of Bessel coefficients, and all possible facial expressions will lie on a 2D subspace of that space.
Now, suppose that the sensory state x′ of system V′ consists of the coordinates of each face in some convenient coordinate system on this 2D subspace. As long as each change of facial expression
causes a change in the set of 16 Bessel coefficients, there will be a time-independent invertible mapping between x′ and the manifold of facial expressions. It follows that there will be a
time-independent invertible mapping between the sensor state x of system V and the sensor state x′ of system V′, as they both observe any time series of facial expressions. Therefore, these two
vision systems will derive identical rescaled representations of each face, despite the dramatic difference in their detectors. More generally, any two sensory devices, which are built with
completely different numbers and types of detectors, will “see” the world in the same way, as long as each device is sensitive to the same degrees of freedom of the stimulus and as long as each
device rescales its detectors' output. Furthermore, any such device will produce identical rescaled representations of two different stimuli (e.g., S and S′) whose time-dependent configurations are
related by a time-independent invertible mapping. To see this, recall that there is a time-independent invertible mapping between the time series of S configurations and the time series of sensor
states x(t) produced in the device by S. Likewise, there is an invertible mapping between the time series of S′ configurations and the time series of sensor states x′(t) when the device observes S′. It
follows that there is a time-independent invertible mapping between x(t) and x′(t), and, therefore, these time series have identical rescaled representations. As an example, suppose that one of
above-described computer vision systems (e.g., V) was exposed to a time series of expressions of face F, and, on another occasion, it was exposed to a time series of expressions of a different face
F′. Further, suppose that the two time series depicted similar sequences of facial expressions in the sense that there was a time-independent invertible mapping between the two parameters controlling
F and the two analogous parameters controlling F′. It follows that the vision system would produce identical rescaled representations of the F and F′ time series.
[0139] Note that there is more than one way to interpret the fact that two stimulus time series produce different time series of sensor states, which lead to the same time series of
rescaled representations. For example, suppose the above-described vision system V observes two evolving facial stimuli, F and F′, that have identical time series of rescaled representations but
different time series of raw sensor states, x(t) and x′(t). Without additional information, the device may not be able to tell whether the differences between the two sensor state time series were
due to: 1) physical differences in the stimuli themselves; 2) the presence of a process that affected the device's detector or the “channel” between it and the face. For example, the above-described
computer vision device may not be able to determine if x(t) and x′(t) were produced by: 1) two different faces that evolved through analogous facial expressions; 2) the same face that underwent the
same sequence of expressions, first in the absence and then in the presence of some transformative process (e.g., the absence and presence of an image-warping lens). Similarly, suppose the device
recorded two sensor state time series that differed by a scale factor but had identical rescaled representations. The device could attribute the sensor state differences to: 1) a change in the
complexion of the observed face; 2) a change in the gain of the device's camera or a change in the illumination of the face. Of course, humans can suffer from illusions due to similar confusions.
Like a human, the device could distinguish between these possibilities only if it had additional information about the likelihood of various processes that might cause the transformation between the
observed sensor states or if it was able to observe additional degrees of freedom of the stimulus.
[0140] In the above discussion it was assumed that the sensor states in a given time series were remapped by a time-independent invertible transformation. Now, consider the effects of the
sudden onset of a process that invertibly transforms the sensor states. Suppose that each sensor state is represented in terms of the intrinsic properties of the collection of sensor states
encountered in the most recent ΔT time units. After the onset of the transformative process, there will be a transitional period of length ΔT, during which the device's stimulus
representations will not be the same as those derived from the corresponding time series of untransformed sensor states. This is because these representations are referred to a mixture of transformed
and untransformed sensor states. However, once the sensor state “database” is dominated by transformed data (i.e., once ΔT time units have elapsed), the representation of each stimulus will
return to the form that is derived from the untransformed sensor state time series. This is because the description of each subsequently encountered sensor state will be referred to the properties of
a collection of transformed sensor states. The time interval ΔT should be long enough so that the sensor states observed within it populate the sensor state manifold with sufficient density to
derive sensor state representations (see the discussions of this issue in Sections II, III, and IV). Specifically, there must be enough sensor state trajectories near each point to endow the manifold
with local structure (local vectors, affine connection, or metric). Thus, like a human, the described device must have sufficient “experience” in order to form stimulus representations. Increasing ΔT will also tend to decrease the noise sensitivity of the method, because it increases the amount of signal averaging in the determination of the local structure. Within these limitations, ΔT
should be chosen to be as short as possible so that the device rapidly adapts to changing observational conditions.
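The transitional behavior described above can be illustrated with a deliberately simple special case: a sliding-window gain normalization, which is self-referential rescaling restricted to the family of pure gain transformations x → a·x. The function name and all constants below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def windowed_norm(x, w):
    # Rescale each sample by the mean absolute signal level over the most
    # recent w samples -- a minimal "sliding time window" (ΔT) version of
    # self-referential rescaling, invariant under gain changes x -> a*x.
    out = np.empty_like(x)
    for i in range(len(x)):
        lo = max(0, i - w + 1)
        out[i] = x[i] / np.mean(np.abs(x[lo:i + 1]))
    return out

t = np.arange(0.01, 60.0, 0.01)
x = np.sin(2 * np.pi * t)
gain = np.where(t < 30.0, 1.0, 5.0)   # detector gain jumps at t = 30
y = gain * x
w = 500                                # window of 5 s plays the role of ΔT
r_x, r_y = windowed_norm(x, w), windowed_norm(y, w)

err = np.abs(r_x - r_y)
before = err[t < 30.0].max()
after = err[t > 40.0].max()           # well past the transitional period
print(before, after)                  # both near zero; transient in between
```

After the gain step at t = 30, the two representations disagree only during a transitional period of roughly the window length, and then coincide again once the window is dominated by transformed data.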
[0141] Notice that, if the representation at each point in time is derived from sensor states encountered in a “sliding time window” (e.g., the most recent time interval of length ΔT),
a given sensor state may be represented in different ways at different times. This is because the two representations may be referred to different collections of recently encountered sensor states.
In other words, the representation of an unchanged stimulus may be time-dependent because the representations are derived from the device's recent “experience” and that experience may be
time-dependent. Conversely, a given stimulus will be represented in the same way at two different times as long as the two descriptions are referred to collections of stimuli having the same average
local properties (i.e., the same ha). To visualize this, consider the location of a particle in the center-of-mass coordinate systems of two different clusters of
particles in a plane. The two descriptions of the particle's location will be the same, as long as the two collections have the same center-of-mass coordinate systems. In other words, the two
representations of the particle's location are identical as long as these descriptions are referred to particle collections with the same average properties. Similarly, the stability of the average
local properties of the recently encountered sensor states will stabilize the representation of individual stimuli. If this type of temporal stability is important, stimulus representations should be
derived from collections of sensor states that are sufficiently large to have stable average properties. This may put a lower bound on the length of the time period (e.g., ΔT) during which those
sensor states are collected.
[0142] The devices comprising the present invention behave in some respects like the human subjects of “goggle” experiments. Those experiments suggest that the properties of recently
experienced sensory data strongly influence the way a subject's percepts are constructed from subsequent sensory data. Specifically, each subject's perception of stimuli returned to the pre-goggle
baseline, after a period of adjustment during which he/she was exposed to familiar stimuli seen through the goggles. In a similar fashion, the invented device's recent “experience” with the external
world determines the way it internally represents subsequently encountered stimuli. In particular, the device represents stimuli just as it did before the onset of a transformative process, after a
period of adjustment during which it encounters stimuli with average properties similar to those encountered earlier.
[0143] Some technical comments should be made about the specific embodiments of the invention in Sections III and IV. In those embodiments, an affine connection was directly derived from
sensor states x(t) encountered in a chosen time interval, or it was derived from a metric that was directly related to those sensor states. In either case, the affine connection was used to populate
the entire manifold with vectors, which were parallel transported versions of reference vectors at a reference sensor state (e.g., reference vectors derived from the manifold's directionality at the
reference sensor state). These vectors were then utilized to create coordinate-independent representations of sensor states. These methods have some advantages with respect to the approach in Section
II, which required that local vectors be directly derived from x(t) at every point on the manifold. First of all, there may be points on the manifold at which such vectors cannot be derived because x
(t) may not endow the manifold with directionality there. However, it may still be possible to determine an affine connection at such points, and it can then be used to transport vectors to those
locations from some other point at which vectors have already been determined. Thus, the method based on affine-connected geometry is more generally applicable. The approach in Section III (i.e.,
deriving a parallel transport operation directly from sensor state trajectories) is particularly advantageous because it tends to represent the most commonly encountered sensor state trajectories as
geodesics (the generalization of the straight lines of Euclidean geometry). In other words, the stimulus tends to evolve along the direction created by parallel transporting the most recently
observed line element of x(t) along itself. Thus, such a device has a simple rule that provides some “intuition” about the likely evolution of a changing stimulus. In contrast, a device based on the
approach in Section II (i.e., deriving local directionality from sensor state trajectories) has no “intuition” about the likely course of stimulus evolution. It “knows” a set of preferred directions
at each point on the manifold, but it cannot use the past behavior of the stimulus to predict which direction it will follow in the future. Similarly, a device based on the methodology in Section IV
(the derivation of a metric from sensor state trajectories) “knows” that the average speed of sensor state evolution is unity because of Eq.(22). However, the device has no way to predict the direction
of stimulus evolution because the stimulus trajectories may not resemble the geodesics of the derived affine connection. In this sense, the specific device embodiments described in Section III are
more “intelligent” than those discussed in Sections II and IV.
[0144] Notice the important role played by time in all of the above-described methods. Specifically, it establishes local scales along the trajectories through each point on the manifold.
In Section II, it sets the scale of the quantities ĥ1 (Eq.(9)) that are used to derive the local vectors Ha at each point. In Section III, it sets the scale of the line elements that are
parallel transported into one another in an average sense. These trajectory-dependent scales are just sufficient to derive an affine connection at each point. Without this temporal scale, the affine
connection would not be fully defined by the sensor state trajectories. Finally, in Section IV, time sets the scale of each line element, thereby making it possible to derive a metric.
[0145] It is worth mentioning a number of other embodiments of the invention, based on variations and generalizations of the described method and apparatus. For example, in Section II, the
principal vectors at each point could be chosen by using a different clustering criterion (e.g., by using a functional form for E differing from that given by Eq.(12)). In Section IV, there are other
ways of using a metric to define an affine connection than that shown in Eq.(24). Furthermore, the path of integration in Eq.(2) could be specified differently than in Sections II-IV. Different
values of s will be associated with these different path prescriptions if the manifold has non-vanishing torsion and/or curvature. Finally, in other embodiments, the invented method and apparatus can
utilize multiple applications of the rescaling process. To see this, note that the family of all signals x(t) that rescale to a given function S(t) can be considered to form an equivalence class. If
such a class includes a given signal, it also includes all invertible transformations of that signal. Signals can be assigned to even larger equivalence classes of all signals that produce the same
result when rescaling is applied N times in succession, where N ≥ 2. Successive applications of rescaling may eventually create a function that is not changed by further applications of the procedure (i.e., the serial rescaling process may reach a fixed “point”). For example, it is easy to show that, if the self-referential scale of a signal is time-independent (i.e., if h(y) and s(x)
are time-independent), it will rescale to such a fixed point. Such a signal is loosely analogous to music, in the sense that musical compositions are also based on a time-independent scale (e.g., the
equally tempered scale of Western music).
[0146] One embodiment of the present invention is a sensory device with a “front end” that creates stimulus representations, which are not affected by processes that transform the device's
sensor states. The output of such a representation “engine” can be subjected to higher level analysis (e.g., pattern recognition) without recalibrating the device's detector and without modifying
the pattern analysis algorithms in order to account for the sensor state transformations. For example, specific embodiments of the invention are computer vision devices that tolerate: 1) variations
in the optical/electronic paths of their cameras, 2) alterations of the optical environment and changes in the orientation and position of the camera with respect to the scene, 3) systematic
distortions of the stimuli in the scene. Other specific embodiments of the invention are speech recognition devices that tolerate: 1) drifting responses of the microphone and circuitry for detecting
sound, 2) changes in the acoustic environment and alterations of the acoustic “channel” between the speaker and the microphone, 3) systematic distortions of the spoken words due to the changing
condition of the speaker or, possibly, due to changes in the identity of the speaker. Note the following attractive feature of such devices: the device can successfully adapt to a large change in
observational conditions without any loss of data, as long as the change occurs at a sufficiently slow rate. Specifically, if the change occurs in small steps separated by relatively long time
intervals, each increment will cause a small distortion of the stimulus representations during a transitional period before the representations revert to their baseline forms. If the pattern analysis
software can tolerate these small temporary distortions, it will continue to recognize stimuli correctly, even though the cumulative change of observational conditions may be large over a long time
period. In essence, the device is able to “keep up” with a slow pace of transformative change by continually making the adjustments that are necessary to maintain invariant stimulus representations.
In contrast, the conventional method of explicit calibration would require that the device be taken “off-line” multiple times during this period in order to expose it to a test pattern.
[0147] In Section V.A.2, self-referential rescaling was demonstrated by applying it to synthetic speech-like signals produced by a variety of “voices” and detected by a variety of “ears.”
These experiments showed that the utterance of any one speaker produced the same rescaled representations in listeners with different ears (FIGS. 7 and 8). Likewise, identical rescaled
representations were induced in any one listener by the utterances of two speakers, who sought to transmit the same message (FIGS. 8 and 9). The listener-independence and speaker-independence of the
rescaled representations is quite general, even though it was demonstrated in the context of a specific family of voice and ear models. As long as each listener is sensitive to the differences
between any two configurations of a speaker's vocal apparatus, there will be an invertible mapping between those configurations and the sensor states produced in the listener. Therefore, if the
speaker's utterance is heard by two different listeners with this sensitivity, their sensor states will be invertibly related to one another and, consequently, have identical rescaled
representations. Similarly, assume that there is an invertible transformation between the vocal configurations of two speakers when they utter the same message. For example, this might happen because
one speaker mimics the other in a consistent fashion or because both speakers “read” from the same “text” in a consistent manner. Then, the sensor signals induced in a listener by the two speakers
will also be invertibly related. This is because these sensor signals are invertibly related to vocal configurations, which are themselves invertibly related. It follows that the listener will
construct an identical rescaled representation of each speaker's utterance. Finally, as mentioned in Section V.A.2, because the vocal apparatus configurations are invertibly related to the resulting
sensor signals, the “gesture” parameter controlling the time series of vocal configurations (i.e., g(t)) will have the same rescaled representation as the utterance itself. If this “gesture”
parameter is taken to be the “motor” signal in the speaker, this result is consistent with the “motor” theory of speech perception.
[0148] Although the experiments in Section V.A.2 were performed with 1D speech signals, it is straightforward to generalize the methodology to signals produced by models with multiple
degrees of freedom. For example, consider the spectra generated by a vocal apparatus with two degrees of freedom. Each spectrum will correspond to a point on a 2D subspace (i.e., a sheet-like
surface) in the space of spectral parameters (e.g., cepstral coefficients), and each utterance will be characterized by a trajectory on this 2D surface. Sections II.B, III, and IV describe several
techniques for rescaling signals with two (or more) degrees of freedom. It may be computationally practical to apply this technique to human speech that is generated by a vocal apparatus with a
relatively small number of degrees of freedom. For the reasons cited previously, such a specific embodiment of the present invention would generate the same internal (rescaled) representation of any
given utterance by a wide variety of speakers. Therefore, a speech recognition device with such a “front end” may not need extensive retraining when the speaker's voice or certain other conditions
are changed. Furthermore, the adaptive nature of the rescaling process might enable it to account for coarticulation during human speech. Recall that the manner in which each sound (i.e., each parameterized spectrum) is rescaled may depend on the nature of recently encountered sounds. It could also depend on the nature of sounds to be encountered in the near future, if the interval ΔT is defined to include times after the sound to be rescaled. In other words, the rescaled representation of each sound spectrum depends on its acoustic context (defined by the endpoints of ΔT),
similar to the contextual dependence of speech perception that is the hallmark of the coarticulation phenomenon. Finally, the foregoing considerations make it tempting to speculate that the human
brain itself decodes speech signals by constructing some type of rescaled version of speech spectra. This could account in part for the ease of speech communication involving a variety of speakers,
listeners, and acoustic environments.
[0149] Another specific embodiment of the present invention is a communications system. In this system, information is communicated in the form of representations that are encoded in
transmitted energy and decoded from received energy by means of the above-described self-referential method and apparatus. Because the message is encoded as signal components that are invariant under
invertible transformations, its content is not influenced by the specific configurations of the receiver's sensor, the transmitter's broadcasting module, or the channel between them. As shown in FIG.
18, the transmitter is assumed to have state x that controls the waveform of the energy to be transmitted (e.g., controls the excitation of the antenna). The transmitter determines the time series of
transmitter states x(t) that is represented by a time series S(t), which constitutes the information to be communicated. This determination is the function of the “inverse representation generator”
in FIG. 18. The transmitter then uses the determined x(t) to control its transmission. The transmitted energy is detected and processed by the receiver to produce the time series of receiver states
x′(t). We assume that there is an invertible correspondence between the transmitter states and the receiver states; i.e., the correspondence x ↔ x′ is one-to-one. This implies that the transmitter does not distinguish
between transmissions that are indistinguishable at the receiver, and the receiver does not distinguish between transmissions that the transmitter does not distinguish. This is true in a variety of
circumstances. For example, suppose that x and x′ are the short-term Fourier spectra of time-dependent baseband signals in the antennas of the transmitter and receiver, respectively. Then, they will
be related in a one-to-one fashion if the “channel” between the transmitter and receiver is characterized by any linear time-independent transfer function that has non-vanishing Fourier components
and a sufficiently short temporal dispersion (e.g., OFDM). The receiver decodes the received signal by determining the time series of representations of the receiver states. Because this process is
coordinate-independent, it produces the same time series of representations S(t) that was encoded in the transmission. For example, if the transmitter seeks to communicate the information in FIG. 3b,
it could encode this information as the transmitter states in FIG. 3a. Even if the channel non-linearly distorts the signal to produce the receiver states in FIG. 3c, the receiver will decode the
transmission to recover the message in FIG. 3b. In one embodiment, the invention can be used to establish universal communication among heterogeneous transmitters and receivers, whose states differ
by unknown invertible mappings. Such a communication system resembles speech in the sense that: 1) the same information is carried by signals that are related to one another by a wide variety of
transformations; 2) the transmitter and receiver need not explicitly characterize the transformation, which remains unknown; 3) if the nature of the transformation changes, faithful communication
resumes after a period of adaptation. In this type of communications system, the “instructions” for encoding and decoding each message increment are contained in the transmitted and received signals,
respectively. In this sense, the communications signal is similar to music that inherently contains information about the musical scale and the key of upcoming bars. This communication process is
also illustrated by the following analogy. Suppose that one individual sought to visually communicate numbers to another individual as the coordinates of a particle at changing positions in a plane,
and suppose that the coordinate system of the receiving party differed from that of the transmitting party by an unknown rotation and/or translation. The transmitter could encode information as the
particle's coordinates in the internal coordinate system of the P most recently displayed particle positions (i.e., the coordinate system that originates at the collection's “center-of-mass” and is
oriented along the principal axes of its inertia tensor). Information will be transmitted faithfully, because the receiving party can also compute the particle's coordinates in the collection's
internal coordinate system. Notice that this method of communication will be accurate even if there are time-dependent changes in the distribution and internal coordinate system of the P most
recently displayed particles. This is because the transmitter and receiver utilize the same changing collection to encode and decode each subsequent message increment, respectively. Therefore, the
technique does not require stability of the intrinsic structure of the “stimulus” collection, the property that was required by the previously described sensory devices in order to ensure temporally
stable stimulus representations. The only requirement for accurate communication is that the collection of transmitter states from earlier transmissions must densely populate the part of the manifold
to be used for subsequent message increments.
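The particle-coordinate analogy above can be sketched numerically. In this hedged illustration (the helper `internal_coords` and all numerical values are the author's choices), the "message" is a particle position expressed in the intrinsic frame of a point cloud, and it survives an unknown rotation and translation of everything the receiving party sees. Since principal axes are determined only up to sign, absolute coordinates are compared.

```python
import numpy as np

def internal_coords(p, cloud):
    # Coordinates of point p in the intrinsic frame of a 2D point cloud:
    # origin at the cloud's center of mass, axes along the principal axes
    # of its covariance (inertia) tensor.
    c = cloud.mean(axis=0)
    cov = np.cov((cloud - c).T)
    _, axes = np.linalg.eigh(cov)          # columns = principal directions
    return (p - c) @ axes

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 2)) * [3.0, 1.0]   # anisotropic "stimulus" cloud
p = np.array([1.0, 2.0])                         # position encoding the message

# Unknown rigid motion between the transmitter's and receiver's frames.
th = 0.7
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
shift = np.array([5.0, -2.0])
coords_tx = internal_coords(p, cloud)
coords_rx = internal_coords(p @ R.T + shift, cloud @ R.T + shift)
print(np.abs(coords_tx) - np.abs(coords_rx))  # near zero, up to axis reflections
```

The same coordinates are recovered on both sides because the cloud's center of mass and inertia tensor transform covariantly with the unknown rigid motion.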
[0150] Humans have a remarkable ability to perceive the intrinsic constancy of a stimulus even though its “appearance” is changing due to extraneous factors. This phenomenon has been the
subject of philosophical discussion since the time of Plato, and it has also intrigued modern neuroscientists. A specific embodiment of the present invention is a sensory device that represents
stimuli invariantly in the presence of processes that systematically transform its sensor states. These stimulus representations are invariant because they encode “inner” properties of the time
series of the stimulus configurations themselves; i.e., properties that are independent of the nature of the observing device or the conditions of observation. Perhaps human perception of stimulus
constancy is due to a similar appreciation of the “inner” structure of experienced stimulus time series. A significant evolutionary advantage would accrue to organisms that developed this ability.
VII. Embodiments as a Sensory Device
[0151] VII.A. Stimuli
[0152] Stimuli emit and/or reflect energy that causes one or more of the device's detectors to produce a signal. Specific embodiments of the present invention detect stimuli that may be
external to the sensory device and/or stimuli that may be internal to it. Examples of external stimuli include “scenes” containing a variety of animate subjects (e.g., humans or other living matter)
and/or inanimate objects (e.g., naturally occurring parts of a “landscape”, manufactured items, etc.). Internal stimuli that may affect the device's detectors include components measuring the
position and/or orientation and/or motion of the device with respect to its environment, components measuring the position and/or orientation and/or motion of parts of the device (e.g., its
detectors) relative to the rest of the device, components measuring the internal state of any of the device's parts (including the detectors, processing units, representation generator, etc.).
[0153] VII.B. Energy
[0154] Specific embodiments of the present invention detect electromagnetic energy emitted or reflected by a stimulus. The energy may have frequencies in any part of the electromagnetic
spectrum, including the radio, microwave, infrared, optical, ultraviolet, and/or x-ray parts of the spectrum. This energy may be transmitted from the stimulus to the device's detectors through any
type of medium, including empty space, earth's atmosphere, wave-guides, wires, and optical fibers. The energy from the stimulus may also be transmitted by pressure variations and/or movements in a
gaseous, liquid, or solid medium (e.g., acoustic or mechanical vibrations).
[0155] VII.C. Detectors
[0156] One or more detectors that are part of the sensor module of the device may detect the energy emitted and/or reflected from stimuli. Specific embodiments of the present invention
utilize detectors including radio antennas, microwave antennas, infrared and optical cameras, and media sensitive to ultraviolet and/or X-ray energy. Other examples of detectors include microphones,
hydrophones, pressure transducers, devices for measuring translational and angular position, devices for measuring translational and angular velocity, devices for measuring translational and angular
acceleration, and devices for measuring electrical voltage and/or electrical current. The output of the detectors may be saved or recorded in a memory device (e.g., the memory module of a computer or
in the weights of a neural network). In specific embodiments of the present invention, the recorded detector signals may be used to determine a time series of synthetic (“imaginary”) detector signals
that is also recorded in a memory device. For example, the synthetic detector signals may form a path (in the space of possible detector signals) connecting detector signals from observed stimuli to
one another or connecting them to a synthetic detector signal corresponding to a “template” stimulus. In the following, “detector output” refers to the output of the device's detectors produced by
stimuli and to synthetic detector signals.
[0157] VII.D. Processing Units
[0158] In specific embodiments of the present invention, the detector output signals may be combined in linear or non-linear fashion by the processing units. This processing could be done
by general-purpose central processing units that utilize serial software programs and/or parallel software programs (e.g., programs with neural net architecture). The processing units could also
utilize specialized computer hardware (e.g., array processors), including neural network circuits. Examples of such signal processing include filtering, convolution, Fourier transformation,
decomposition of signals along specific basis functions, wavelet analysis, dimensional reduction, parameterization, linear or non-linear rescaling of time, image formation, and image reconstruction.
The processed signals are saved in a memory device (e.g., the memory module of a computer or in the weights of a neural network). In specific embodiments of the present invention, the recorded
processed signals may be used to determine a time series of synthetic (“imaginary”) processed signals that is also saved in a memory device. For example, the synthetic processed signals may form a
path (in the space of possible processed signals) connecting the processed signals from observed stimuli to one another or to a synthetic processed signal corresponding to a “template” stimulus. In
the following, “processed signal” refers to the output of the signal processor produced by stimuli and to synthetic processed signals.
[0159] VII.E. Sensor State
[0160] A sensor state is a set of numbers that comprises the processed signal. In specific embodiments of the present invention, possible sensor states include: pixel values at one or more
locations in a digital image, numbers characterizing one or more aspects of a transformed image (e.g., filtered image, convolved image, Fourier transformed image, wavelet transformed image,
morphologically transformed image, etc.), numbers characterizing the locations and/or intensities of one or more specific features of an image or a transformed image, numbers characterizing a time
domain signal at certain times, numbers characterizing one or more aspects of a transformed time domain signal (e.g., a filtered signal, a convolved signal, a Fourier transformed signal, a wavelet
transformed signal, etc.), numbers characterizing the locations and/or intensities of one or more features of a time domain signal or a transformed time domain signal, and/or numbers characterizing
the parameterization of the time-domain signal.
[0161] VII.F. Representation Generator
[0162] Specific embodiments of the present invention may have one or more representation generators. Each representation generator may be implemented on a general-purpose central processing
unit and/or on specialized processors (e.g., array processors, neural network circuits) with software having serial and/or neural network architecture. The input of a representation generator
includes the time series of sensor states x(t) encountered in a chosen time interval, as well as certain prior knowledge mentioned below. The output of the representation generator includes the time
series of coordinate-independent sensor state representations S(t), as well as the input time series of sensor states x(t). The input and output of the representation generator may also include the
time series of detector signals from which the sensor states were created. At any time, the representation generator will utilize the input information to identify in a coordinate-independent fashion
one or more of the following features on the sensor state manifold: a reference sensor state x0, reference vectors h0a at the reference sensor state, vectors ha at all other points of interest on the
manifold, and a path connecting x0 to any other point of interest on the manifold. A representation generator will use the procedure denoted by Eqs.(1-2) to create s, a coordinate-independent
representation of any point of interest x on the manifold. One or more representation generators may also receive inputs, which are representations S(t) produced by one or more other representation
generators, and use these inputs to create other functions S′(t) that constitute representations of the input representations.
[0163] VII.F.1. Reference State
[0164] A representation generator may identify the reference sensor state to be a coordinate-independent feature of the time series of sensor states x(t) encountered in a chosen time
interval. For example, in one specific embodiment of the present invention, it could identify the reference state as the local maximum of the function defined by the number of times each sensor state
is encountered during a specific time period. Such a state may be identified by explicit computational processes or by neural networks, which are designed to find such a state from the time series of
sensor states x(t). Prior knowledge could also be used to identify the reference sensor state. For example, the device may choose the reference state to be a specific sensor state that is known a
priori to remain invariant under all relevant coordinate transformations (i.e., in the presence of all expected transformative processes). Or, the device could identify the reference state to be the
sensor state produced by a specific stimulus that the device's operator “shows” to the device at specific times.
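The histogram-based choice of reference state described in this paragraph can be sketched as follows; the two-dimensional manifold, the bin count, and the simulated dwell pattern are illustrative assumptions, not features fixed by the text.

```python
import numpy as np

def reference_state(states, bins=10):
    """Pick the reference sensor state as the most frequently visited
    region of a 2-D sensor state trajectory (the local maximum of the
    visit-count function).  The estimator is one of several the text
    permits; all concrete choices here are illustrative."""
    hist, xe, ye = np.histogram2d(states[:, 0], states[:, 1], bins=bins)
    i, j = np.unravel_index(np.argmax(hist), hist.shape)
    # return the center of the most-visited bin
    return np.array([(xe[i] + xe[i + 1]) / 2, (ye[j] + ye[j + 1]) / 2])

# A trajectory that dwells near (0, 0) with occasional excursions:
rng = np.random.default_rng(0)
traj = np.vstack([rng.normal(0.0, 0.1, (500, 2)),   # dwell near the origin
                  rng.uniform(-1, 1, (100, 2))])    # excursions
x0 = reference_state(traj)   # close to the origin
```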
[0165] VII.F.2. Reference Vectors at the Reference Sensor State
[0166] A representation generator may identify the reference vectors h0a at the reference sensor state as coordinate-independent features of the time series of sensor states encountered
during a chosen time interval. For example, in one specific embodiment of the present invention, it could identify these vectors to be the most characteristic values of dx/dt
[0167] when the sensor state trajectory x(t) is in the vicinity of the reference sensor state (e.g., Section II.B). These vectors could be identified by explicit computational processes or
by neural networks, which are designed to find such vectors from the history of previously encountered states x(t). Prior knowledge could also be used to identify the vectors h0a at the reference
sensor state. For example, the device may choose these vectors to be specific vectors that are known a priori to remain invariant under all relevant coordinate transformations (i.e., in the presence
of all expected transformative processes). Or, the device could identify these vectors with the sensor state changes produced by specific stimulus changes that the device's operator “shows” to the
device at specific times.
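One plausible reading of "most characteristic values of dx/dt" is sketched below: take the principal directions of the finite-difference velocities observed near x0. The trajectory, the neighborhood radius, and the SVD-based estimator are assumptions made for this example, not details fixed by the text.

```python
import numpy as np

def local_velocities(traj, x0, radius, dt=1.0):
    """Finite-difference velocities dx/dt at trajectory samples near x0."""
    v = np.diff(traj, axis=0) / dt
    near = np.linalg.norm(traj[:-1] - x0, axis=1) < radius
    return v[near]

def reference_vectors(traj, x0, radius=0.3):
    """Estimate the reference vectors h0a as the principal directions of
    the velocity cloud near the reference state x0 (one of several
    estimators the text permits)."""
    v = local_velocities(traj, x0, radius)
    _, _, vt = np.linalg.svd(v - v.mean(axis=0), full_matrices=False)
    return vt   # rows are unit vectors h0a, ordered by significance

# A trajectory oscillating through the origin, mostly along the x-axis:
t = np.linspace(0, 4 * np.pi, 400)
traj = np.column_stack([np.sin(t), 0.05 * np.sin(3 * t)])
h = reference_vectors(traj, x0=np.zeros(2))
# The dominant estimated direction is (nearly) the x-axis.
```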
[0168] VII.F.3. Vectors at Other Points on the Sensor State Manifold
[0169] In specific embodiments of the present invention, the representation generator may identify vectors at other points on the manifold by any coordinate-independent means, including the following.
[0170] VII.F.3.a. Sensor State Manifolds Having Local Directionality
[0171] A representation generator may identify the vectors ha at any given point of interest with coordinate-independent features of the time series of sensor states encountered in a chosen
time interval. For example, it could identify these vectors to be the most characteristic values of dx/dt
[0172] when the sensor state trajectory x(t) is in the vicinity of the point of interest (e.g., Section II.B). These vectors could be identified by explicit computational processes or by
neural networks, which are designed to find such vectors from the sensor states x(t) encountered in a chosen time interval. The values of these vectors at a collection of closely spaced points may be
interpolated in order to estimate their values at other points on the manifold. In specific embodiments of the present invention, the interpolation process could be implemented with parametric
techniques (e.g., splines) or by neural networks. The representation generator can use these vectors to specify a particular path connecting the reference state to any sensor state of interest. For
example, such a path can be specified by requiring it to consist of N or fewer segments (N being the manifold's dimension), where each segment is directed along the local vector ha with one
particular value of index a and where these index values are encountered along the path in a predetermined order that does not repeat (e.g., in order of ascending values of a, as in Section II.B).
The procedure in Eqs.(1-2) may be applied to this path and to the vectors ha along it to generate the coordinate-independent representation s of a sensor state of interest. The values of s
corresponding to predetermined values of x may be computed in this manner. The values of s at intervening values of x may be computed by interpolation between the predetermined values of x. In
specific embodiments of the present invention, the interpolation may be performed by parametric means (e.g., splines) or by neural network means.
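The segment-wise path construction can be illustrated in the simplest possible setting, where the vector fields ha are constant over the region of interest; since Eqs.(1-2) are not reproduced in this excerpt, the computation below is only a schematic stand-in. It also shows the sense in which s is coordinate-independent: transforming states and vectors together leaves s unchanged.

```python
import numpy as np

def represent(x, x0, h):
    # Reach x from x0 by N straight segments, the a-th segment directed
    # along the local vector h[a], index values taken in ascending order.
    # With vector fields constant over the region (an assumption made
    # here for brevity), the segment lengths s[a] solve
    #   x - x0 = sum_a s[a] * h[a].
    H = np.column_stack(h)
    return np.linalg.solve(H, x - x0)

# Two reference directions rotated 45 degrees from the coordinate axes:
h = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([-1.0, 1.0]) / np.sqrt(2)]
s = represent(np.array([0.0, np.sqrt(2)]), np.zeros(2), h)   # -> [1, 1]

# Transforming states and vectors together (here by a 90-degree rotation)
# leaves the representation unchanged:
R = np.array([[0.0, -1.0], [1.0, 0.0]])
s_rot = represent(R @ np.array([0.0, np.sqrt(2)]), R @ np.zeros(2),
                  [R @ v for v in h])
```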
[0173] VII.F.3.b. Sensor State Manifolds that Support Parallel Transport
[0174] In specific embodiments of the present invention, the representation generator may use the trajectory of sensor states x(t) encountered in a chosen time interval in order to derive
coordinate-independent parallel transport rules in a portion of the sensor state manifold. For example, in specific embodiments of the present invention, such rules may be derived from the
requirement that the sensor state trajectory segments in that part of the manifold be geodesic or approximately geodesic (e.g., in an average or statistical sense; see Section III). These parallel
transport rules (e.g., the corresponding affine connection) may be identified by explicit computational processes or by neural networks, which are designed to find such rules from the states x(t)
encountered in a chosen time interval. The parallel transport rules at a collection of closely spaced points may be interpolated in order to estimate the parallel transport rules at other points on
the manifold. In specific embodiments of the present invention, the interpolation process may be implemented with parametric techniques (e.g., splines) or by neural networks. The resulting parallel
transport operation on the manifold may be implemented by explicit computational processes (e.g., Section III) or by a neural network. The representation generator may use the parallel transport
rules and the reference vectors at the reference sensor state to specify a particular path connecting the reference state to any sensor state of interest and to determine vectors ha along the path.
For example, in some embodiments of the present invention, the procedure in Section III can be used to specify such a path. Alternatively, in other embodiments of the present invention, the procedure
in Section III can be modified by creating and following N or fewer connected geodesic segments, each segment corresponding to a different vector index a and the segments being connected in a
predetermined order of indices that differs from the ascending order used in Section III. The procedure in Eqs.(1-2) may be applied to this path and to the vectors ha along it in order to generate
the coordinate-independent representation s of a sensor state of interest. The values of s corresponding to predetermined values of x may be computed in this manner. The values of s at intervening
values of x may be computed by interpolation between the predetermined values of x. In specific embodiments of the present invention, the interpolation may be performed by parametric means (e.g.,
splines) or by neural network means.
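Since Section III is not reproduced in this excerpt, a worked instance of parallel transport with a known affine connection may still be useful: the flat plane written in polar coordinates, where a transported vector must keep its Cartesian direction. The connection coefficients and the Euler integration are standard differential geometry, not details taken from the text.

```python
import numpy as np

def transport(v, n=1000):
    """Parallel-transport a vector along the unit circle (r = 1) through a
    quarter turn, using the flat-plane connection in polar coordinates
    (Gamma^r_tt = -r, Gamma^t_rt = Gamma^t_tr = 1/r) and Euler
    integration of  dv^a/ds + Gamma^a_bc u^b v^c = 0."""
    ds = (np.pi / 2) / n
    vr, vt = v
    for _ in range(n):
        # path tangent u = (0, 1): unit angular speed at r = 1
        dvr = -(-1.0) * 1.0 * vt   # -Gamma^r_tt * u^t * v^t
        dvt = -(1.0) * 1.0 * vr    # -Gamma^t_tr * u^t * v^r
        vr, vt = vr + dvr * ds, vt + dvt * ds
    return np.array([vr, vt])

# A radially pointing vector at theta = 0 ends up pointing along -theta
# at theta = pi/2, i.e. it kept its Cartesian direction (+x), as flat
# geometry demands.
v_end = transport(np.array([1.0, 0.0]))
```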
[0175] VII.F.3.c. Sensor State Manifolds that Support a Metric
[0176] In one specific embodiment of the present invention, the representation generator may use the trajectory of sensor states x(t) encountered in a chosen time interval in order to
derive a coordinate-independent metric operation in a portion of the manifold. For example, in some embodiments of the present invention, such a metric operation may be derived from the requirement
that the local sensor state trajectory segments traversed in unit time intervals have approximately unit length (e.g., in an average or statistical sense; see Section IV). This metric operation
(e.g., the corresponding metric tensor) may be identified by explicit computational processes or by neural networks, which are designed to find such a metric operation from the sensor states x(t)
encountered in a chosen time interval. The metric operation at a collection of closely spaced points may be interpolated in order to estimate the metric operation at other points on the manifold. In
specific embodiments of the present invention, the interpolation process may be implemented with parametric techniques (e.g., splines) or by neural networks. The computation of length from the
resulting metric operation may be implemented by explicit computational processes or by a neural network. The metric operation may be used to derive parallel transport rules on the manifold by
requiring each segment on each geodesic of the parallel transport process be parallel transported into a segment with equal metric length on the same geodesic. In some specific embodiments of the
present invention, the parallel transport rules may also be required to parallel transport any vector into another vector with equal metric length (e.g., Eq.(24)). The resulting parallel transport
process may be derived from the metric operation and/or implemented by explicit computational processes (e.g., Section IV) or by a neural network. The representation generator may use the parallel
transport rules and the reference vectors at the reference sensor state to specify a particular path connecting the reference state to any sensor state of interest and to derive vectors ha at points
along the path. For example, in one embodiment of the present invention, the procedure in Section IV can be used to specify such a path and the vectors on it. Alternatively, in other embodiments, the
procedure in Section IV can be modified by creating and following N or fewer geodesic segments, each segment corresponding to a different vector index a and the segments being connected in a
predetermined order of indices that differs from the ascending order used in Section IV. The procedure in Eqs.(1-2) may be applied to this path and to the vectors ha along it to generate the
coordinate-independent representation s of a sensor state of interest. The values of s corresponding to predetermined values of x may be computed in this manner. The values of s at intervening values
of x may be computed by interpolation between the predetermined values of x. In specific embodiments of the present invention, the interpolation may be performed by parametric means (e.g., splines) or
by neural network means.
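The requirement that unit-time trajectory segments have approximately unit metric length suggests a least-squares estimator. The sketch below fits a single constant metric tensor to synthetic velocity data; the text allows position-dependent metrics, and constancy is assumed here only to keep the example short.

```python
import numpy as np

def fit_metric(velocities):
    """Find a symmetric 2x2 metric g such that observed per-unit-time
    steps v have squared length v^T g v ~= 1, by linear least squares:
    each sample contributes  v1^2*g11 + 2*v1*v2*g12 + v2^2*g22 = 1."""
    v1, v2 = velocities[:, 0], velocities[:, 1]
    A = np.column_stack([v1 ** 2, 2 * v1 * v2, v2 ** 2])
    g11, g12, g22 = np.linalg.lstsq(A, np.ones(len(velocities)),
                                    rcond=None)[0]
    return np.array([[g11, g12], [g12, g22]])

# Velocities drawn from an ellipse: speed 2 along x, speed 0.5 along y.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
v = np.column_stack([2.0 * np.cos(theta), 0.5 * np.sin(theta)])
g = fit_metric(v)   # g = diag(1/4, 4): every observed step has unit length
```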
[0177] Note is made of the fact that specific embodiments of the present invention may contain more than one of the above-described representation generators. Each of these may receive
input that consists of a time series of sensor states x(t) and/or a time series of representations S(t), generated by one or more other representation generators. The latter time series can be
processed as a sensor state time series.
[0178] Note is also made of the fact that, in specific embodiments of the inventive method and apparatus, the sensor states encountered in a predetermined time interval endow the sensor
state manifold with local structure (e.g., vectors ha, parallel transport operation, and/or metric operation). In a portion of the manifold, this structure may vary over distances greater than a
scale |Δx| and may not vary significantly over shorter distances. In preferred embodiments of the invention, the local structure at any sample sensor state may be derived from the
sensor states encountered in a small neighborhood of the sample sensor state during a predetermined time interval. The size of the small neighborhood may be less than |Δx| divided
by a small positive integer, and the spacing between the sample sensor states may be less than |Δx|. The local structure at sensor states between a collection of sample sensor
states may be estimated by interpolating among its values at the sample sensor states, by means of parametric or non-parametric (e.g., neural network) interpolation techniques. The coordinate-independent representation of a sensor state of interest may be estimated by performing the sum corresponding to Eq.(2), in which the magnitude of each small displacement δx is less than the
local value of |Δx|. The local vectors at each path point (Eq.(1)) may be estimated to be the vectors at a point that is separated from the path point by a distance less than the
local value of |Δx|, or they may be estimated by the above-described interpolation procedure.
[0179] VII.G. Higher Level Analysis of Stimulus Representations
[0180] In specific embodiments of the present invention, the output of the representation generators, including the sensor states and detector signals at predetermined time points, may form
the input of hardware and/or software modules that perform higher level analysis. Such analysis may identify aspects of the nature of the stimuli (e.g., pattern recognition and/or pattern classification).
VIII. Embodiments as a Communication System
[0181] VIII.A. Transmitter
[0182] VIII.A.1. Inverse Representation Generator
[0183] In one specific embodiment of the present invention, the input of the inverse representation generator consists of the representations S(t) to be communicated at chosen time points
and, possibly, the transmitter states x(t) in chosen time intervals. This generator determines the transmitter states in other time intervals so that the resulting transmitter state time series is
represented by S(t) at the said chosen time points. In one specific embodiment of the present invention, the representation is determined by a process that produces the same representation from the
transmitter state time series as it produces from an invertible transformation of the transmitter state time series. The determined time series of transmitter states x(t) controls the transmission of
energy by the broadcasting unit of the transmitter.
[0184] VIII.A.2. Broadcasting Unit
[0185] In a specific embodiment of the present invention, the broadcasting unit uses the above-mentioned time series of transmitter states x(t) to control the energy that it transmits to
the receiver. The broadcasting unit may first subject the values of x to a variety of linear and non-linear transformations before transmission. At the receiver, the output of the representation generator comprises the input of the analysis module that subjects it to analysis (e.g., pattern recognition
and classification). The output of the analysis module, as well as the output of the representation generator, may be displayed to the operator of the receiver. In specific embodiments, the receiver
may include mechanisms that account for the fact that multiple users may be transmitting and receiving energy simultaneously (e.g., the mechanisms of TDMA, FDMA, CDMA, etc.).
[0186] Each of the above-mentioned steps may be performed by one or more general-purpose central processing units and/or special computer hardware units (e.g., array processors, neural
network circuits) that utilize serial software programs and/or parallel software programs (e.g., programs with neural net architecture). Any suitable computer, which would include monitor, mouse,
keyboard, RAM, ROM, disc drive, and communication ports, can be used to implement the inventive method and apparatus.
IX. Embodiments as a “Speech” Recognition Device
[0187] IX.A. Sources of Speech Stimuli or Speech-like Stimuli
[0188] The speech stimuli or speech-like stimuli may be produced by humans, other animals, or machines (including a machine comprising a part of the apparatus described in this invention).
[0189] IX.B. Energy and Medium
[0190] In specific embodiments of the present invention, the energy emitted by the above-described sources may be carried by pressure variations and/or movements in a gaseous, liquid, or
solid medium (e.g., acoustic or mechanical vibrations). It may also be carried by electromagnetic fields with frequencies in the audio, radio, microwave, infrared, optical, ultraviolet, and/or x-ray
part of the electromagnetic spectrum. These fields may occur in a variety of media, including empty space, earth's atmosphere, wave-guides, wires, and optical fibers.
[0191] IX.C. Detectors
[0192] One or more detectors that are part of the sensor module of the device may detect the energy of the stimuli. In specific embodiments of the present invention, such detectors may
include microphones, hydrophones, pressure transducers, devices for measuring electrical voltage and electrical current, radio antennas, microwave antennas, infrared and optical cameras, and media
sensitive to ultraviolet and/or X-ray energy.
[0193] IX.D. Processing Units
[0194] The signals from the detectors may be combined in linear or non-linear fashion by the processing units. In specific embodiments of the present invention, such signal processing may
include filtering, convolution, Fourier transformation, decomposition of signals along specific basis functions, wavelet analysis, parameterization, dimensional reduction, and linear or non-linear
rescaling of time. For example, in specific embodiments of the present invention, the time-dependent detector signals within any given time interval may be used to derive a set of parameters that
constitutes a “feature vector”. For instance, the time-dependent detector signal may be multiplied by any “windowing” function such as a Hamming or Hanning window. The resulting weighted data may
then be subjected to a Fourier transformation or a wavelet transformation, or these data may be projected onto any other set of basis functions. The “spectrum” produced by such a transformation may
be further processed by averaging it over suitable intervals in the space of the transformation indices (e.g., the indices of the utilized basis functions, such as the frequency of the Fourier basis
functions). A cepstrum may then be derived from the processed spectrum. Alternatively, the time-dependent detector signals in any given time interval may be used to derive linear prediction
coefficients, which may be used to derive the positions of the poles of an associated filter transfer function. In this way, the time-dependent data in each time interval may be used to derive a
“feature vector” from some or all of the time-dependent data and/or some or all of the associated spectral values and/or some or all of the associated cepstral values and/or some or all of the linear
prediction coefficients and/or some or all of the transfer function pole positions and/or other quantities derived from the time-dependent data in the given time interval. In specific embodiments of
the present invention, the feature vector may be processed by determining a subspace of the space containing the feature vector and by determining a procedure for projecting the feature vector into
the subspace. The feature vector may then be assigned the coordinates of its projection in any convenient coordinate system defined in this subspace. For example, that subspace may be a piece-wise
linear subspace comprised of an aggregation of portions of hyper-planes in the space containing the feature vector.
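The windowing, spectrum, and cepstrum recipe above can be sketched in a few lines of numerical code; the frame length, coefficient count, and log floor are choices made for this illustration, not values fixed by the text.

```python
import numpy as np

def feature_vector(frame, n_cep=12):
    """One feature vector per frame: window the time-domain samples
    (Hamming), Fourier-transform them, and keep low-order real-cepstrum
    coefficients of the log magnitude spectrum."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    log_spec = np.log(spectrum + 1e-10)   # small floor avoids log(0)
    cepstrum = np.fft.irfft(log_spec)     # real cepstrum
    return cepstrum[:n_cep]

# A 30 ms frame of a synthetic 200 Hz tone sampled at 8 kHz:
fs, f0 = 8000, 200.0
t = np.arange(int(0.030 * fs)) / fs
frame = np.sin(2 * np.pi * f0 * t)
fv = feature_vector(frame)
```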
[0195] In specific embodiments of the present invention, one or more of the above-mentioned processing steps may be performed by general-purpose central processing units and/or special
computer hardware units (e.g., array processors, neural network circuits) that utilize serial software programs and/or parallel software programs (e.g., programs with neural net architecture).
[0196] IX.E. Sensor State
[0197] The sensor state is the set of numbers that the processing units create from the detector signals induced by a given stimulus. In specific embodiments of the present invention,
sensor states include numbers characterizing the time domain signal in a chosen time interval, numbers characterizing one or more aspects of the processed time domain signal (e.g., a filtered signal,
a convolved signal, a Fourier transformed signal, a wavelet transformed signal, etc.), and/or numbers characterizing the locations and/or intensities of one or more features of the time domain signal
or the processed time domain signal. For example, the sensor state may consist of the feature vector described in Section IX.D, or it may consist of the coordinates of the feature vector's projection
in the lower dimensional subspace that was described in IX.D.
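A minimal sketch of projecting feature vectors into a lower-dimensional subspace follows, using a single linear (PCA-style) subspace; the text also permits piece-wise linear aggregations of hyper-plane portions, which this example does not attempt.

```python
import numpy as np

def fit_subspace(features, dim):
    """Fit a linear subspace to a collection of feature vectors: the
    span of the top singular directions about the mean."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:dim]          # rows of vt[:dim] span the subspace

def sensor_state(fv, mean, basis):
    """The sensor state is the coordinates of the feature vector's
    projection in the subspace."""
    return basis @ (fv - mean)

# 100 ten-dimensional feature vectors that really vary in 2 directions:
rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))
mean, basis = fit_subspace(feats, dim=2)
x = sensor_state(feats[0], mean, basis)
# Because the data lie in the fitted plane, the projection is lossless:
# basis.T @ x + mean recovers feats[0].
```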
[0198] IX.F. Representation Generator
[0199] The device may have one or more representation generators. Each representation generator may be implemented on general-purpose central processing units and/or on specialized
processors (e.g., array processors, neural network circuits) with software having serial and/or parallel (e.g., neural network) architecture. The input of a representation generator includes the time
series of sensor states x(t) in a chosen time interval, as well as certain prior knowledge. The output of the representation generator includes the time series of coordinate-independent sensor state
representations S(t), as well as the input time series of sensor states x(t). The input and output of a representation generator may also include the time series of detector signals from which sensor
states are created, as well as a description of the subspace of feature space onto which feature vectors are projected in order to produce sensor states. In specific embodiments of the present
invention, at any time, the representation generator will utilize the input information to identify in a coordinate-independent fashion one or more of the following features on the sensor state
manifold: a reference sensor state x0, reference vectors h0a at the reference state, vectors ha at all other points of interest on the manifold, and a path connecting x0 to any other point of
interest on the manifold. In specific embodiments of the present invention, these features on the sensor state manifold may be identified as described in VII.F and other Sections. In specific
embodiments of the present invention, a representation generator will use the procedures described in Sections II, III, and IV to create s, a coordinate-independent representation of any point of
interest x on the manifold. One or more representation generators may also receive inputs, which are representations S(t) produced by one or more other representation generators, and use these inputs
to create other functions S′(t) that constitute representations of the input representations.
[0200] IX.G. Higher Level Analysis of Stimulus Representations
[0201] In specific embodiments of the present invention, the output of the representation generators may form the input of hardware and/or software modules that perform higher level
analysis. Such analysis may include pattern recognition and pattern classification. For example, this module may associate the output of the representation generators with a sequence of phonemic
features and/or phonemes and/or allophones and/or demisyllables and/or syllables and/or words and/or phrases and/or sentences. The analysis module may recognize the voices of certain speakers by
associating each speaker with the characteristics of the output of the representation generator, including the characteristics of the subspace of feature space onto which feature vectors are
projected in order to produce sensor states.
X. Embodiments as a Stimulus Translation Device
[0202] In one specific embodiment of the present invention, the stimuli of stimulus source S are “translated” into the stimuli of stimulus source S′ by the following process. The stimuli to
be translated are produced by S, and the method and apparatus of earlier Sections are used to find a time series of scale values S(t) corresponding to a time series of sensor states x(t) derived from
the S stimuli. These scale values are determined from sensor states recorded in a chosen time interval, these sensor states being produced by the S stimuli to be translated (i.e., x(t)) and,
possibly, by other stimuli produced by S. The next step is to find a time series of sensor states x′(t) on the sensor state manifold of S′ that implies the same time series of scale values S(t) when
it is subjected to the process in earlier Sections. The scale values of the determined time series x′(t) are derived from sensor states in a chosen time interval, these sensor states including the
sensor states of the translated stimuli (i.e., x′(t)) and, possibly, the sensor states produced by other S′ stimuli. Next, one finds a time series of feature vectors that corresponds to the
determined time series of sensor states x′(t). Then, a time series of S′ stimuli is found that is characterized by the derived time series of feature vectors. For example, suppose that the stimulus
sources are two speakers (S and S′) and the stimuli are acoustic waveforms of their speech. The time series of feature vectors may be a time series of: 1) Fourier spectra or 2) cepstra or 3) wavelet
representations or 4) linear prediction coefficients or 5) positions of poles corresponding to linear prediction coefficients. Then, a translated acoustic waveform from S′ may be synthesized by
inverting: 1) the short-term Fourier analysis or 2) the cepstral analysis or 3) the wavelet analysis or 4) the linear prediction analysis or 5) the linear prediction pole position analysis,
respectively. The synthesized acoustic waveform from S′ is the utterance of S, after it has been translated into the speech of S′.
[0203] In a specific embodiment of the present invention as a speech translation device, the following process is used to determine the above-described time series of sensor states x′(t).
First, a non-message sample of speech from speaker S is used to create a scale function sNM(x) on the “voice” manifold of sensor states x produced by that speaker, using the procedure described in
Section VIII and earlier Sections. Similarly, a non-message sample of speech from speaker S′ is used to create a scale function sNM′(x′) on the “voice” manifold of sensor states x′ produced by that
speaker. The scale function sNM(x) is used to derive s(t)=sNM[x(t)] from the sensor state time series x(t), produced by the S utterance to be translated. Then, the scale function
sNM′(x′) is used to find x′(t) such that s(t)=sNM′[x′(t)].
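By way of illustration only, the translation step of paragraphs [0202]-[0203] may be sketched for a one-dimensional sensor-state manifold. In this sketch the scale functions are modeled as empirical rank maps built from the non-message samples; this is an assumed stand-in for the scale functions of the earlier Sections, and all function and variable names are illustrative.

```python
from bisect import bisect_left

def make_scale_function(sample):
    """Build a monotone scale function s_NM and its inverse from a
    non-message sample of one-dimensional sensor states.  Here the
    scale value of x is its rank within the sorted sample (an
    empirical-CDF stand-in for the scale functions of the earlier
    Sections)."""
    xs = sorted(sample)
    n = len(xs)
    def s_nm(x):
        return bisect_left(xs, x) / n
    def s_nm_inv(s):
        return xs[min(int(s * n), n - 1)]
    return s_nm, s_nm_inv

def translate(x_t, sample_S, sample_Sprime):
    """Map a time series x(t) of speaker S into sensor states x'(t) of
    speaker S' carrying the same scale values: s(t) = s_NM[x(t)], and
    x'(t) is chosen so that s_NM'[x'(t)] = s(t)."""
    s_nm, _ = make_scale_function(sample_S)
    _, s_nm_inv = make_scale_function(sample_Sprime)
    return [s_nm_inv(s_nm(x)) for x in x_t]
```

For example, translate([0, 2], [0, 1, 2, 3], [10, 11, 12, 13]) maps the S states 0 and 2 to the S′ states occupying the same ranks, namely 10 and 12.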
[0204] In all of the specific embodiments of the present invention, each step may be performed by one or more general-purpose central processing units and/or special computer hardware units
(e.g., array processors, neural network circuits) that utilize serial software programs and/or parallel software programs (e.g., programs with neural net architecture). Any suitable computer, which
would include monitor, mouse, keyboard, RAM, ROM, disc drive, and communication ports, can be used to implement the inventive method and apparatus.
[0205] Specific embodiments of a method and apparatus for creating stimulus representations according to the present invention have been described for the purpose of illustrating the manner
in which the invention may be made and used. It should be understood that implementation of other variations and modifications of the invention and its various aspects will be apparent to those
skilled in the art, and that the invention is not limited by the specific embodiments described. It is therefore contemplated to cover by the present invention any and all modifications, variations,
or equivalents that fall within the true spirit and scope of the basic underlying principles disclosed and claimed herein.
[0206] This application includes a computer program appendix listing (in compliance with 37 C.F.R. § 1.96) containing source code for a specific embodiment illustrating how the present
inventive method and apparatus may be implemented. The computer program appendix listing is submitted herewith on one original and one duplicate compact disc (in compliance with 37 C.F.R. § 1.52(e))
designated respectively as Copy 1 and Copy 2, and labeled in compliance with 37 C.F.R. § 1.52(e)(6).
[0207] Each listed computer program and/or file was created using the programming language Mathematica 3.0 (Wolfram Research, Inc., Urbana, Ill.), running on a Macintosh PowerBook G3
computer with the Mac OS 8.1 operating system.
[0208] All the material in this computer program appendix listing on compact disc is hereby incorporated in its entirety herein by reference, and identified by the following table of file
names, creation/modification date, and size in bytes:

NAMES OF FILES                      CREATED/MODIFIED   SIZE IN BYTES
1DExample.SP.1.txt                  03-Feb-01          12,000
1DSpCoord.2.32_1DSyn.3.31.NL.tx     15-Jun-01          12,000
1DSpCoord.2.32_1DSyn.3.31.txt       15-Jun-01          12,000
1DSyn.3.31.coord.txt                15-Jun-01          24,000
1DSyn.3.31.txt                      15-Jun-01          24,000
1DTransform.txt                     23-Oct-00          12,000
Disp2DHMap2.0_2.0_1.25.txt          12-Oct-00          36,000
DisplayMap3.0_2.4_2.2.txt           23-Aug-00          36,000
HMapParameterFile.txt               12-Oct-00          12,000
Make1DMap.3.19.txt                  04-Oct-00          12,000
Make2DHMap.2.0.txt                  11-Oct-00          36,000
MakeMap2.4.txt                      16-Aug-00          36,000
MapParameterFile.txt                22-Aug-00          12,000
SI.3.1.txt                          23-Apr-01          24,000
SI.3.1a_1DSpCo.2.32.NL.txt          15-Jun-01          24,000
SI.3.1a_1DSpCo.2.32.txt             15-Jun-01          24,000
SIoP.6.4_SI3.1.Door.CBrA.txt        16-May-01          12,000
SIoP.6.4_SI3.1.Door.CrA.txt         08-May-01          12,000
SRRec4.6.txt                        08-Nov-00          36,000
SRRec5.5.txt                        09-Nov-00          36,000
SRRec6.1_1A.txt                     16-Nov-00          48,000
SRRec6.1_1AB.txt                    16-Nov-00          48,000
VETraj.1.0_1DSyn.3.31.NL.txt        17-Jun-01          12,000
VETraj.1.0_1DSyn.3.31.txt           17-Jun-01          12,000
1. A method of detecting and processing time-dependent signals from a stimulus source, the method comprising the steps of:
a) detecting with a detector the signal energy from stimuli from the stimulus source at predetermined time points;
b) processing the output signals of the detector to produce a sensor state x(t) at each time point t in a collection of predetermined time points, said sensor state x(t) including one or more numbers;
c) saving in computer memory said output signals of the detector and said sensor states x(t) at each time point in the collection;
d) processing the saved sensor states x(t) to produce a representation of each sensor state x of a predetermined collection of sensor states in the space of possible sensor states, each said
representation including one or more numbers;
e) saving in computer memory said sensor states in said predetermined collection of sensor states and said representations of the sensor states in said predetermined collection of sensor states;
f) processing at least one of the saved output signals of the detector and the sensor states in said predetermined collection and corresponding representations to determine aspects of the nature
of the stimuli producing the sensor states in said predetermined collection.
2. The method according to claim 1 wherein said processing of said saved sensor states x(t) has the property that said representation of each said sensor state x is approximately the same as the
representation of the transformed said sensor state x′=x′(x), said representation of said transformed sensor state x′ being produced from the transformed time series of said saved sensor
states x′(t)=x′[x(t)], x′(x) being a transformation on the space of possible sensor states.
3. The method according to claim 2 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
4. A method of detecting and processing time-dependent signals from a stimulus source, the method comprising the steps of:
a) detecting with a detector the signal energy from stimuli from the stimulus source at predetermined time points;
b) processing the output signals of the detector to produce a sensor state x(t) at each time point t in a collection of predetermined time points, said sensor state x(t) including one or more numbers;
c) saving in computer memory said output signals of the detector and said sensor states x(t) at each time point of said collection;
d) determining a reference sensor state x0 in the space of possible sensor states;
e) determining one or more reference vectors h0a at the reference sensor state x0, the reference vector label a having integer values and each said reference vector having one or more dimensions;
f) processing at least one of said saved sensor states x(t) and said reference sensor state x0 and said reference vectors h0a to determine one or more preferred vectors ha at each sensor state x
in a predetermined collection of sensor states, each said preferred vector having one or more dimensions;
g) processing at least one of said saved sensor states x(t) and said reference sensor state x0 and said reference vectors h0a to determine paths in the space of possible sensor states, each said
path connecting the reference sensor state x0 to a sensor state of interest in a predetermined collection of sensor states;
h) determining the representation
s = ∫_{x0}^{x} δs
of each sensor state x in a predetermined collection of sensor states, said integral being along the path connecting x0 to x, δs at each sensor state on said path satisfying
δx = Σ_{a=1,…,N} ha δsa,
δx being a small displacement along the path at said sensor state on said path, ha denoting the preferred vectors near said sensor state on said path, and N being the number of dimensions of the
space of possible sensor states;
i) saving in computer memory said sensor states in said predetermined collection of sensor states and said representations of the sensor states in said predetermined collection of sensor states;
j) processing at least one of the saved output signals of the detector and the sensor states in said predetermined collection and corresponding representations to determine aspects of the nature
of the stimuli producing the sensor states in said predetermined collection.
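By way of illustration only, the line integral of step h) may be approximated numerically: discretize the path from x0 to x into small displacements δx and, at each step, solve δx = Σa ha δsa for the components δsa. The following sketch assumes a two-dimensional sensor-state space, a straight-line path, and user-supplied preferred vectors; these are illustrative choices, not requirements of the claim.

```python
def solve2(h1, h2, dx):
    """Solve dx = h1*ds1 + h2*ds2 for (ds1, ds2) by Cramer's rule."""
    det = h1[0]*h2[1] - h1[1]*h2[0]
    ds1 = (dx[0]*h2[1] - dx[1]*h2[0]) / det
    ds2 = (h1[0]*dx[1] - h1[1]*dx[0]) / det
    return ds1, ds2

def representation(x0, x, h_of, n_steps=100):
    """Accumulate s = integral of δs along the straight path from x0
    to x, where δx = h1 δs1 + h2 δs2 and h_of(p) returns the preferred
    vectors (h1, h2) near the point p."""
    s = [0.0, 0.0]
    for k in range(n_steps):
        # midpoint of the k-th path segment
        t = (k + 0.5) / n_steps
        p = (x0[0] + t*(x[0]-x0[0]), x0[1] + t*(x[1]-x0[1]))
        dx = ((x[0]-x0[0])/n_steps, (x[1]-x0[1])/n_steps)
        h1, h2 = h_of(p)
        ds1, ds2 = solve2(h1, h2, dx)
        s[0] += ds1
        s[1] += ds2
    return tuple(s)
```

With constant preferred vectors h1 = (1, 0) and h2 = (1, 1), the representation of x = (2, 3) relative to x0 = (0, 0) is (−1, 3), since −1·h1 + 3·h2 = (2, 3).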
5. The method according to claim 4 wherein the stimulus source is at least one of a stimulus source external to the device that detects and processes the signal energy from stimuli and a stimulus
source that is a part of the device that detects and processes the signal energy from stimuli.
6. The method according to claim 4 wherein the stimulus source produces at least one stimulus selected from the group consisting of an electromagnetic stimulus, auditory stimulus and mechanical stimulus.
7. The method according to claim 4 wherein the energy produced by the stimulus source is carried by a medium selected from the group consisting of empty space, earth's atmosphere, wave-guide, wire,
optical fiber, gaseous medium, liquid medium, and solid medium.
8. The method according to claim 4 wherein the detector is selected from the group consisting of radio antenna, microwave antenna, infrared camera, optical camera, ultraviolet detector, X-ray
detector, microphone, hydrophone, pressure transducer, translational position detector, angular position detector, translational motion detector, angular motion detector, electrical voltage detector,
and electrical current detector.
9. The method according to claim 4 wherein a sensor state is produced by processing the output signals of the detector using a method selected from the group consisting of a linear procedure,
non-linear procedure, filtering procedure, convolution procedure, Fourier transformation procedure, procedure of decomposition along basis functions, wavelet analysis, dimensional reduction
procedure, parameterization procedure, and procedure for rescaling time in one of a linear and non-linear manner.
10. The method according to claim 4 wherein said reference sensor state x0 in the space of possible sensor states is determined by processing said saved sensor states x(t), said processing having the
property that the transformed reference sensor state x0′=x′(x0) is approximately determined by processing the transformed saved sensor states x′(t)=x′[x(t)], x′(x) being a
transformation on the space of possible sensor states.
11. The method according to claim 4 wherein said reference sensor state x0 in the space of possible sensor states is determined to be a sensor state that is a local maximum of a function on the space
of possible sensor states, the value of said function at a sensor state being determined to be the number of times said sensor state appears in the collection of saved sensor states in a
predetermined time interval.
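By way of illustration only, the occurrence-count rule of claim 11 may be sketched for discrete (hashable) sensor states; the sketch returns the global maximum of the occurrence count, which is in particular a local maximum:

```python
from collections import Counter

def reference_state(saved_states):
    """Choose the reference sensor state x0 as the sensor state that
    occurs most often among the saved sensor states in the chosen
    time interval."""
    counts = Counter(saved_states)
    x0, _ = counts.most_common(1)[0]
    return x0
```

For continuous-valued sensor states the same idea would be applied to a binned or kernel-smoothed occurrence count; that extension is not shown here.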
12. The method according to claim 4 wherein said reference sensor state x0 in the space of possible sensor states is determined as a sensor state produced by a stimulus determined by a user.
13. The method according to claim 4 wherein said reference vectors h0a at the reference sensor state x0 are determined by processing a predetermined collection of said saved sensor states x(t), the
saved sensor states in said predetermined collection being in a small neighborhood of the sensor state x0.
14. The method according to claim 13 wherein said processing has the property that the transformed reference vectors
h0a′ = (∂x′/∂x) h0a
at the transformed reference sensor state x0′=x′(x0) are approximately produced by said processing of the transformed sensor states in the predetermined collection x′(t)=x′[x
(t)], x′(x) being a transformation on the space of possible sensor states.
15. The method according to claim 14 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
16. The method according to claim 13 wherein the reference vectors h0a at the reference sensor state x0 are determined to be
h0a = Σ_{j=1,…,NΔT} wj h0a(j),
h0a(j) being the reference vectors determined at said reference sensor state x0 from said saved sensor states in a predetermined time interval with label j, NΔT being the number of said
predetermined time intervals, and wj being a predetermined number depending on j.
17. The method according to claim 4 wherein determining said reference vectors h0a at the reference sensor state x0 further includes the steps of:
a) determining an approximate value of the time derivative
ĥi = (dx/dt)|ti
at each time point ti of a predetermined collection of the time points at which sensor states x(t) have been saved, i being an integer label and said sensor states x(ti) at said time points being
in a small neighborhood of the reference sensor state x0;
b) partitioning the values of the indices i into C non-empty partitioning sets labeled Sc, c=1,..., C, C being a predetermined integer;
c) determining the value of E for each possible way of creating the partitioning sets Sc, E depending on the quantities ĥi and on the partitioning sets Sc;
d) determining hc, the principal vectors at said x0,
hc = (1/N′c) Σ_{i∈Sc} ĥi,
N′c being a predetermined number dependent on c and Sc being the partitioning sets that lead to the smallest value of E; and
e) determining the reference vectors h0a at said reference sensor state x0 to be a predetermined subset of said principal vectors at x0.
18. The method according to claim 17 wherein the quantity E for each collection of partitioning sets Sc is
E = Σ_{c=1,…,C} |Mc|^p,
|Mc| being the determinant of Mc, Mc being given by
Mc = (1/Nc) Σ_{i∈Sc} ĥi ĥi,
Nc being a predetermined number dependent on c, and p being a predetermined real positive number.
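By way of illustration only, the partitioning of claims 17-18 may be sketched by brute force for two-dimensional derivative estimates ĥi, with C = 2, p = 1, and the weights N′c taken equal to Nc (assumed choices). Each candidate Mc is the average of the outer products ĥi ĥi over a partitioning set, and the winning partition minimizes E = Σc |Mc|^p.

```python
from itertools import product

def outer_avg(vectors):
    """Mc = (1/Nc) * sum of the 2x2 outer products of the vectors."""
    n = len(vectors)
    m = [[0.0, 0.0], [0.0, 0.0]]
    for (a, b) in vectors:
        m[0][0] += a*a; m[0][1] += a*b
        m[1][0] += b*a; m[1][1] += b*b
    return [[e / n for e in row] for row in m]

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def best_partition(h_hats, C=2, p=1.0):
    """Brute-force the partition of the derivative estimates into C
    non-empty sets minimizing E, and return E together with the
    principal vectors hc (the per-set averages) of the winner."""
    best = None
    for labels in product(range(C), repeat=len(h_hats)):
        sets = [[h for h, c in zip(h_hats, labels) if c == k]
                for k in range(C)]
        if any(not s for s in sets):
            continue  # partitioning sets must be non-empty
        E = sum(abs(det2(outer_avg(s))) ** p for s in sets)
        if best is None or E < best[0]:
            best = (E, sets)
    E, sets = best
    principal = [tuple(sum(v[d] for v in s) / len(s) for d in (0, 1))
                 for s in sets]
    return E, principal
```

With derivative estimates clustered along two directions, e.g. (1, 0), (1.1, 0.05), (0, 1), (−0.05, 0.9), the minimizing partition groups the nearly parallel estimates together, since a set of nearly parallel vectors yields a nearly singular Mc.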
19. The method according to claim 17 wherein determining said reference vectors h0a at the reference sensor state x0 further includes the steps of:
a) ordering said principal vectors hc so that the corresponding quantities |Mc| are in order of ascending magnitude, |Mc| being the determinant of Mc, Mc being given by
Mc = (1/Nc) Σ_{i∈Sc} ĥi ĥi,
and Nc being a predetermined number dependent on c; and
b) determining the reference vectors at said reference sensor state x0 to be the first N principal vectors that are linearly independent, N being the number of dimensions of the space of possible
sensor states.
20. The method according to claim 4 wherein each said reference vector at x0 is determined to be a directed line segment in the space of possible sensor states, said directed line segment connecting
two or more sensor states, said two or more sensor states being produced by two or more stimuli that are determined by a user.
21. The method according to claim 4 wherein the processing of at least one of said saved sensor states x(t), said reference sensor state x0, and said reference vectors h0a to determine one or more
preferred vectors ha at each sensor state x in said predetermined collection of sensor states has the property that the preferred vectors
ha′ = (∂x′/∂x) ha
at the transformed said sensor state x′=x′(x) are approximately produced by the processing of at least one of the transformed time series of said saved sensor states x′(t)=x′[x
(t)] and the transformed said reference sensor state x0′=x′(x0) and the transformed said reference vectors
h0a′ = (∂x′/∂x) h0a
at the transformed said reference sensor state, x′(x) being a transformation on the space of possible sensor states.
22. The method according to claim 21 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
23. The method according to claim 4 wherein the preferred vectors ha at each sensor state x in said predetermined collection of sensor states are determined to be
ha = Σ_{j=1,…,NΔT} wj ha(j),
ha(j) being the preferred vectors determined at said sensor state x by processing at least one of said reference sensor state x0 and said saved sensor states in a predetermined time interval with
label j, NΔT being the number of said predetermined time intervals, and wj being a predetermined number depending on j.
24. The method according to claim 4 wherein the processing of at least one of said saved sensor states x(t) and said reference sensor state x0 and said reference vectors h0a to determine the path
x̃(τ), 0 ≤ τ ≤ 1, in the space of possible sensor states, said path connecting the reference sensor state x0 = x̃(0) to a sensor state of interest x̃(1), has the property that an approximation of the transformed path x̃′(τ)=x′[x̃(τ)] is produced by processing at least one of the time
series of transformed said saved sensor states x′(t)=x′[x(t)] and the transformed said reference sensor state x0′=x′(x0) and the transformed said reference vectors
h0a′ = (∂x′/∂x) h0a
at the transformed said reference sensor state, where x′(x) is a transformation on the space of possible sensor states.
25. The method according to claim 24 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
26. The method according to claim 4 wherein determining aspects of the nature of stimuli further includes the steps of:
a) determining the representations s(t) of each of the said saved sensor states x(t);
b) determining another time series of sensor states to be said time series of representations s(t);
c) processing said another time series of sensor states to determine the representations of each of the representations of a predetermined collection of sensor states in the space of said saved
sensor states x(t); and
d) processing at least one of the saved output signals of the detector and the sensor states in said predetermined collection and their representations and the representations of their
representations to determine aspects of the nature of the stimuli producing said sensor states in said predetermined collection.
27. The method according to claim 4 wherein at least one step is performed by a general-purpose computer performing the computations of a software program, said software program having an
architecture selected from the group consisting of a serial architecture, parallel architecture, and neural network architecture.
28. The method according to claim 4 wherein at least one step is performed by a computer hardware circuit, said circuit having an architecture selected from the group consisting of a serial
architecture, parallel architecture, and neural network architecture.
29. A method of detecting and processing time-dependent signals from a stimulus source, the method comprising the steps of:
a) detecting with a detector the signal energy from stimuli from said stimulus source at predetermined time points;
b) processing the output signals of the detector to produce a sensor state x(t) at each time point t in a collection of predetermined time points, said sensor state x(t) including one or more numbers;
c) saving in computer memory said output signals of the detector and said sensor states x(t) at each of said predetermined time points;
d) determining a reference sensor state x0 in the space of possible sensor states;
e) determining one or more preferred vectors ha at each sensor state x in a predetermined collection of sensor states by processing a predetermined collection of said saved sensor states, the
sensor states in said predetermined collection being in a small neighborhood of said sensor state x and said preferred vectors having one or more dimensions;
f) processing at least one of said reference sensor state x0 and said preferred vectors ha to determine paths in the space of possible sensor states, each said path being between the reference
sensor state x0 and a sensor state x in a predetermined collection of sensor states;
g) determining the representation
s = ∫_{x0}^{x} δs
of each sensor state x in a predetermined collection of sensor states, said integral being along said path connecting x0 to x, δs at each sensor state on said path satisfying
δx = Σ_{a=1,…,N} ha δsa,
δx
being a small displacement along the path at said sensor state on said path, ha denoting the preferred vectors near said sensor state on said path, and N being the number of dimensions of the
space of possible sensor states;
h) saving in computer memory said sensor states in said predetermined collection of sensor states and said representations of the sensor states in said predetermined collection of sensor states;
i) processing at least one of the saved output signals of the detector and the sensor states in said predetermined collection and corresponding representations to determine aspects of the nature
of the stimuli producing the sensor states in said predetermined collection.
30. The method according to claim 29 wherein said preferred vectors ha at said sensor state x are determined by processing a predetermined collection of said saved sensor states, the sensor states x
(t) in said predetermined collection being in a small neighborhood of the sensor state x and said processing having the property that the transformed preferred vectors
ha′ = (∂x′/∂x) ha
at the transformed sensor state x′=x′(x) are approximately produced by said processing of the transformed said saved sensor states in the predetermined collection x′(t)=x′[x(t)
], x′(x) being a transformation on the space of possible sensor states.
31. The method according to claim 30 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
32. The method according to claim 29 wherein determining the preferred vectors ha at each sensor state x in said predetermined collection of sensor states further includes the steps of:
a) determining an approximate value of a time derivative
ĥi = (dx/dt)|ti
at each time point ti of a collection of predetermined time points at which sensor states x(t) have been saved, i being an integer label and said sensor states x(ti) at said predetermined time
points being in a small neighborhood of said sensor state x;
b) partitioning the values of the indices i into C non-empty partitioning sets labeled Sc, c=1,..., C, C being a predetermined integer;
c) determining the value of E for all possible ways of creating the partitioning sets Sc, E depending on the quantities ĥi and on the partitioning sets Sc;
d) determining hc, the principal vectors at said sensor state x,
hc = (1/N′c) Σ_{i∈Sc} ĥi,
N′c being a predetermined number dependent on c and Sc being the partitioning sets that lead to the smallest value of E;
e) determining the preferred vectors ha at x to be a predetermined subset of the principal vectors at x.
33. The method according to claim 32 wherein the quantity E is
E = Σ_{c=1,…,C} |Mc|^p,
|Mc| being the determinant of Mc, Mc being given by
Mc = (1/Nc) Σ_{i∈Sc} ĥi ĥi,
Nc being a predetermined number dependent on c, and p being a predetermined real positive number.
34. The method according to claim 32 wherein determining the preferred vectors ha at each sensor state x in said predetermined collection of sensor states further includes the steps of:
a) ordering said principal vectors hc so that the corresponding quantities |Mc| are in order of ascending magnitude, |Mc| being the determinant of Mc, Mc being given by
Mc = (1/Nc) Σ_{i∈Sc} ĥi ĥi,
and Nc being a predetermined number dependent on c; and
b) determining the preferred vectors ha at x to be the first N principal vectors that are linearly independent, N being the number of dimensions of the space of possible sensor states.
35. The method according to claim 29 wherein the preferred vectors ha at each sensor state x in a predetermined collection of sensor states are determined as
ha = Σ_{j=1,…,NΔT} wj ha(j),
ha(j) being the preferred vectors determined at said sensor state x from said saved sensor states in a predetermined time interval with label j, NΔT being the number of said predetermined
time intervals, and wj being a predetermined number depending on j.
36. The method according to claim 29 wherein determining said path connecting x0 to x further includes the steps of:
a) determining a type m trajectory through x0, m being a predetermined integer, by moving across a space of possible sensor states along a direction of at least one of said preferred vector hm
near x0 and minus one times said preferred vector hm near x0, and then moving across the space of possible sensor states along a direction of at least one of said preferred vector hm near each
subsequently encountered sensor state and minus one times said preferred vector hm near each subsequently encountered sensor state, and repeating this last procedure a predetermined number of times;
b) determining a type n trajectory through each sensor state on each trajectory of a last-determined type, n being an integer unequal to any of the indices labeling previously determined
trajectories, by moving across the space of possible sensor states along a direction of at least one of said preferred vector hn near said each sensor state and minus one times said preferred
vector hn near said each sensor state, and then moving across the space of possible sensor states along a direction of at least one of said preferred vector hn near each subsequently encountered
sensor state and minus one times said preferred vector near each subsequently encountered sensor state, and repeating this last procedure a predetermined number of times;
c) performing step (b) until said sensor state x has been reached by the last-determined trajectory; and
d) determining said path connecting the reference sensor state x0 to said sensor state x to be a path containing at most one segment of each type of said determined trajectories, said segments
being connected in the order in which said determined trajectory types were determined.
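By way of illustration only, the path construction of claim 36 may be sketched in the flat case of constant, orthogonal preferred vectors, where marching greedily along each trajectory type in turn yields a path containing at most one segment per type. The greedy distance criterion below is an assumed simplification; for non-orthogonal or spatially varying preferred vectors it is not exact.

```python
def build_path(x0, x, h1, h2, eps=0.01, max_steps=10000):
    """Construct a path from x0 to x as at most one segment of each
    trajectory type: march along +/- h1 in steps of size eps until no
    further h1-step reduces the distance to x, then march along
    +/- h2 the same way (flat 2-D case, constant preferred vectors)."""
    def dist2(p):
        return (p[0] - x[0])**2 + (p[1] - x[1])**2
    path = [x0]
    for h in (h1, h2):
        for _ in range(max_steps):
            p = path[-1]
            cands = [(p[0] + s*eps*h[0], p[1] + s*eps*h[1])
                     for s in (+1, -1)]
            q = min(cands, key=dist2)
            if dist2(q) >= dist2(p):
                break  # this trajectory type cannot get closer
            path.append(q)
    return path
```

The returned list of sensor states traces one segment of the type-1 trajectory followed by one segment of the type-2 trajectory, in the order in which the trajectory types were determined.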
37. A method of detecting and processing time-dependent signals from a stimulus source, the method comprising the steps of:
a) detecting with a detector the signal energy from stimuli from the stimulus source at predetermined time points;
b) processing the output signals of the detector to produce a sensor state x(t) at each time point t in a collection of predetermined time points, said sensor state x(t) including one or more numbers;
c) saving in computer memory said output signals of the detector and said sensor states x(t) at each of said predetermined time points;
d) determining a reference sensor state x0 in the space of possible sensor states;
e) determining one or more reference vectors h0a at the reference sensor state x0, each said reference vector having one or more dimensions;
f) determining a parallel transport operation at each sensor state x in a predetermined collection of sensor states by processing a predetermined collection of said saved sensor states, the saved
sensor states in said predetermined collection being in a small neighborhood of said sensor state x and said parallel transport operation transporting vectors across the space of possible sensor
states near x;
g) processing at least one of said saved sensor states x(t), said reference sensor state x0, said reference vectors h0a and said parallel transport operation to determine one or more preferred
vectors ha at each sensor state x in a predetermined collection of sensor states, each said preferred vector having one or more dimensions;
h) processing at least one of said saved sensor states x(t), the reference sensor state x0, said reference vectors h0a and said parallel transport operation to determine paths across the space of
possible sensor states, each said path being between the reference sensor state x0 and a sensor state x in a predetermined collection of sensor states;
i) determining the representation
s = ∫_{x0}^{x} δs
of each sensor state x in a predetermined collection of sensor states, said integral being along said path connecting x0 to x, δs at each sensor state on said path satisfying
δx = Σ_{a=1,…,N} ha δsa,
δx being a small displacement along the path at said sensor state on said path, ha denoting the preferred vectors near said sensor state on said path, and N being the number of dimensions of
the space of possible sensor states;
j) saving in computer memory said sensor states in said predetermined collection of sensor states and said representations of the sensor states in said predetermined collection of sensor states;
k) processing at least one of the saved output signals of the detector and the sensor states in said predetermined collection and corresponding representations to determine aspects of the nature
of the stimuli producing the sensor states in said predetermined collection.
38. The method according to claim 37 wherein the parallel transport operation that is determined by processing said predetermined collection of saved sensor states x(t) in a small neighborhood of
said sensor state x parallel transports a vector V at x along a line segment δx at x into a parallel transported vector Ṽ at a destination sensor state x+δx and the
parallel transport operation at the transformed said sensor state x′=x′(x), said parallel transport operation being determined by processing the transformed said saved sensor states x′(t)=x′[x(t)] in a small neighborhood of the transformed said sensor state x′, approximately parallel transports the transformed said vector
V′ = (∂x′/∂x) V
at x′ along the transformed said line segment
δx′ = (∂x′/∂x) δx
at x′ into the transformed said parallel transported vector
Ṽ′ = (∂x′/∂x) Ṽ
at the transformed said destination sensor state x′(x+δx), x′(x) being a transformation on the space of possible sensor states.
39. The method according to claim 38 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
40. The method according to claim 37 wherein determining said parallel transport operation at said predetermined sensor state x further includes the steps of:
a) determining three said saved sensor states within a small neighborhood of said predetermined sensor state x, said three saved sensor states being saved at times within a predetermined time interval;
b) determining a pair of line segments, the first said line segment connecting the earlier two of said three sensor states, said earlier two sensor states being saved at times earlier than the
time at which the last saved sensor state of the three sensor states was saved and said line segment being directed from the sensor state at the earlier time to the sensor state at the later
time, and the second said line segment connecting the later two of said three sensor states, said later two sensor states being saved at times later than the time at which the first saved sensor
state of the three sensor states was saved and said line segment being directed from the sensor state at the earlier time to the sensor state at the later time;
c) determining zero or more additional line segment pairs in the neighborhood of said predetermined sensor state x; and
d) determining a parallel transport operation at said predetermined sensor state x that transports vectors along paths through the space of possible sensor states near x and that parallel
transports the first line segment in each said line segment pair along itself into a line segment that approximates the second line segment of the same line segment pair.
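By way of illustration only, the construction of claims 40-41 may be sketched in one dimension, where each consecutive triple of saved sensor states yields one line segment pair (dx, dx + δdx) and one connection estimate Γ̂(i) = −δdx / dx². The sketch averages the estimates over all triples, i.e., it assumes the predetermined weight WΓ equals NΓ.

```python
def connection_1d(states):
    """Estimate the 1-D affine connection Gamma from a saved
    trajectory.  Each consecutive triple (x_a, x_b, x_c) gives the
    segment pair dx = x_b - x_a and dx + ddx = x_c - x_b, and the
    relation ddx = -Gamma * dx * dx yields one estimate per triple;
    the estimates are averaged (assuming W_Gamma = N_Gamma)."""
    gammas = []
    for a, b, c in zip(states, states[1:], states[2:]):
        dx = b - a
        ddx = (c - b) - (b - a)
        if dx != 0:
            gammas.append(-ddx / (dx * dx))
    return sum(gammas) / len(gammas)
```

For a trajectory generated so that each displacement shrinks by exactly −Γ dx² with Γ = 2, every triple reproduces the same estimate and the average recovers Γ.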
41. The method according to claim 37 wherein determining said parallel transport operation at said predetermined sensor state x further includes the steps of:
a) determining three said saved sensor states within a small neighborhood of said predetermined sensor state x, said three saved sensor states being saved at times within a predetermined time interval;
b) determining a pair of line segments, the first said line segment connecting the earlier two of said three sensor states, said earlier two sensor states being saved at times earlier than the
time at which the last saved sensor state of the three sensor states was saved and said line segment being directed from the sensor state at the earlier time to the sensor state at the later
time, and the second said line segment connecting the later two of said three sensor states, said later two sensor states being saved at times later than the time at which the first saved sensor
state of the three sensor states was saved and said line segment being directed from the sensor state at the earlier time to the sensor state at the later time;
c) determining a collection of zero or more additional line segment pairs in the neighborhood of said predetermined sensor state x;
d) determining one or more collections, each said collection labeled by an integer i and containing one or more said line segment pairs for which there is a unique Γ̂^k_{lm}(i) that satisfies
(55)  δdx^k = −Σ_{l,m=1,…,N} Γ̂^k_{lm}(i) dx^l dx^m
for each line segment pair, dx and dx+δdx, in said collection, N being the number of dimensions of the space of possible sensor states;
e) determining the affine connection Γ^k_{lm} at said predetermined sensor state x,
(56)  Γ^k_{lm} = (1/W_Γ) Σ_{i=1,…,N_Γ} Γ̂^k_{lm}(i),
N_Γ being the number of said collections of line segment pairs for which there is a unique said Γ̂^k_{lm}(i) and W_Γ being a predetermined number; and
f) determining the parallel transport operation at said predetermined sensor state x so that the vector V at x is parallel transported along the line segment δx at x into the vector V+δV at x+δx, δV being
(57)  δV^k = −Σ_{l,m=1,…,N} Γ^k_{lm} V^l δx^m,
and N being the number of dimensions of the space of possible sensor states.
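The transport rule stated in step (f) above, δV^k = −Σ Γ^k_{lm} V^l δx^m, can be sketched numerically. This is an illustrative implementation only, not code from the patent; the function and variable names are invented:

```python
import numpy as np

def parallel_transport_step(V, dx, Gamma):
    """Transport a vector V along a small line segment dx using the
    affine connection Gamma, via delta V^k = -sum_{l,m} Gamma^k_{lm} V^l dx^m.
    Gamma has shape (N, N, N) with Gamma[k, l, m] = Gamma^k_{lm}.
    Returns the transported vector V + delta V at x + dx."""
    dV = -np.einsum('klm,l,m->k', Gamma, V, dx)
    return V + dV

# With a vanishing connection (flat space) the vector is unchanged.
Gamma = np.zeros((3, 3, 3))
V = np.array([1.0, 2.0, 3.0])
dx = np.array([0.1, 0.0, 0.0])
assert np.allclose(parallel_transport_step(V, dx, Gamma), V)
```

Repeating this step along successive small segments transports a vector along an entire path, which is how the trajectory constructions in claims 43 and 52 would use it.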
42. The method according to claim 37 wherein said parallel transport operation at said predetermined sensor state x parallel transports the vector V at x along the line segment δx at x into the vector V+δV at x+δx, δV being
(58)  δV^k = Σ_{j=1,…,N_ΔT} w_j δ(V(j))^k,
δV(j) having the property that the parallel transport operation determined from said saved sensor states in a predetermined time interval with label j parallel transports the vector V at x along the line segment δx at x into the vector V+δV(j) at x+δx, N_ΔT being the number of said predetermined time intervals, and w_j being a predetermined number depending on j.
43. The method according to claim 37 wherein determining the path connecting the reference sensor state x0 to the sensor state x further includes the steps of:
a) determining a type m trajectory through x0, m being a predetermined integer, by parallel transporting the reference vector h0m along a direction of at least one of itself and minus one times
itself, and parallel transporting the resultant vector along a direction of at least one of itself and minus one times itself and repeating the last procedure a predetermined number of times;
b) parallel transporting the reference vectors h0a along said type m trajectory to produce preferred vectors ha at each sensor state on said trajectory;
c) determining a type n trajectory through each sensor state on each trajectory of a last-determined type, n being an integer unequal to any of the indices labeling any of the previously
determined trajectories, by parallel transporting said preferred vector hn at each sensor state on said each trajectory along a direction of at least one of itself and minus one times itself and
parallel transporting the resultant vector along a direction of at least one of itself and minus one times itself and repeating this last procedure a predetermined number of times;
d) parallel transporting said preferred vectors ha located at each sensor state on each trajectory of a next to last-determined type along said type n trajectory that passes through said each
sensor state in order to produce preferred vectors ha at each sensor state on said type n trajectory;
e) performing steps (c) and (d) until said predetermined sensor state x has been reached by a determined trajectory and by said process of parallel transporting the preferred vectors ha; and
f) determining said path connecting the reference sensor state x0 to said sensor state x to be a path containing at most one segment of each type of said determined trajectories, said segments
being connected in the order in which said determined trajectory types were determined.
44. A method of detecting and processing time-dependent signals from a stimulus source, the method comprising the steps of:
a) detecting with a detector the signal energy from stimuli from the stimulus source at predetermined time points;
b) processing the output signals of the detector to produce a sensor state x(t) at each time point t in a collection of predetermined time points, said sensor state x(t) including one or more numbers;
c) saving in computer memory said output signals of the detector and said sensor states x(t) at each of said predetermined time points;
d) determining a reference sensor state x0 in the space of possible sensor states;
e) determining one or more reference vectors h0a at the reference sensor state x0, each said reference vector having one or more dimensions;
f) determining a metric operation at each sensor state x in a collection of predetermined sensor states by processing a predetermined collection of said saved sensor states, the saved sensor
states in said predetermined collection being in a small neighborhood of said sensor state x and said metric operation assigning lengths to vectors near x;
g) determining a parallel transport operation at each sensor state x in a predetermined collection of sensor states, said parallel transport operation transporting vectors across the space of
possible sensor states near x;
h) processing at least one of said saved sensor states x(t), said reference sensor state x0, said reference vectors h0a, said metric operation and said parallel transport operation to determine
one or more preferred vectors ha at each sensor state x in a predetermined collection of sensor states, each said preferred vector having one or more dimensions;
i) processing at least one of said saved sensor states x(t), said reference sensor state x0, said reference vectors h0a, said metric operation and said parallel transport operation to determine
paths across the space of possible sensor states, each said path being between the reference sensor state x0 and a sensor state x in a predetermined collection of sensor states;
i) determining the representation
(59)  s = ∫_{x0}^{x} δs
of each sensor state x in a predetermined collection of sensor states, said integral being along said path connecting x0 to x, δs at each sensor state on said path satisfying
(60)  δx = Σ_{a=1,…,N} h_a δs^a,
δx
being a small displacement along the path at said sensor state on said path, ha denoting the preferred vectors near said sensor state on said path, and N being the number of dimensions of the
space of possible sensor states;
j) saving in computer memory said sensor states in said predetermined collection of sensor states and said representations of the sensor states in said predetermined collection of sensor states;
k) processing at least one of the saved output signals of the detector and the sensor states in said predetermined collection and corresponding representations to determine aspects of the nature
of the stimuli producing the sensor states in said predetermined collection.
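The representation step of claim 44 integrates δs along a path, decomposing each path increment δx in the basis of preferred vectors h_a. A minimal discretized sketch follows; the function names and the midpoint evaluation of the preferred vectors are my assumptions, not the patent's:

```python
import numpy as np

def representation_along_path(path, preferred_vectors):
    """Accumulate the representation s = sum of delta_s along a discretized
    path of sensor states, where each increment satisfies
    delta_x = sum_a h_a * delta_s^a, i.e. delta_s = H^{-1} delta_x with
    H the matrix whose columns are the preferred vectors h_a."""
    s = np.zeros(path.shape[1])
    for x0, x1 in zip(path[:-1], path[1:]):
        H = preferred_vectors(0.5 * (x0 + x1))  # evaluate h_a at the midpoint
        s += np.linalg.solve(H, x1 - x0)
    return s

# With Cartesian basis vectors as the h_a, s reduces to x - x0.
path = np.array([[0.0, 0.0], [0.3, 0.1], [0.6, 0.4]])
assert np.allclose(representation_along_path(path, lambda x: np.eye(2)),
                   path[-1] - path[0])
```

With nontrivial preferred vectors (e.g. parallel-transported frames), the accumulated s depends on the geometry of the sensor-state space rather than on the raw coordinates, which is the point of the construction.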
45. The method according to claim 44 wherein said metric operation at said predetermined sensor state x assigns a length to a vector V at x that is the same as the length assigned to the transformed
said vector
(61)  V′ = (∂x′/∂x) V
at the transformed said sensor state x′=x′(x) by the metric at x′, said metric at x′ being determined by processing the transformed saved sensor states x′(t)=x′[x(t)],
each x(t) being one of the saved sensor states in said predetermined collection of saved sensor states near x and x′(x) being a transformation on the space of possible sensor states.
46. The method according to claim 45 wherein said transformation x′(x) is an invertible transformation on the space of possible sensor states.
47. The method according to claim 44 wherein determining the metric operation at each said predetermined sensor state x further includes the steps of:
a) determining two said saved sensor states within a small neighborhood of said predetermined sensor state x, said saved sensor states being saved at times within a predetermined time interval;
b) determining a line segment connecting said two sensor states, said line segment being directed from the sensor state at the earlier time to the sensor state at the later time;
c) determining zero or more additional line segments in the neighborhood of x; and
d) determining a metric operation at said predetermined sensor state x that assigns metric lengths to vectors at said predetermined sensor state x and that assigns approximately the same metric
length to each of said line segments.
48. The method according to claim 44 wherein determining the metric operation at said predetermined sensor state x further includes the steps of:
a) determining two said saved sensor states within a small neighborhood of said predetermined sensor state x, said saved sensor states being saved at times within a predetermined time interval;
b) determining a line segment connecting said two sensor states, said line segment being directed from the sensor state at the earlier time to the sensor state at the later time;
c) determining zero or more additional line segments in the neighborhood of said predetermined sensor state x;
d) determining one or more collections of said line segments, each collection labeled with i and containing one or more such line segments for which there is a unique ĝ_{kl}(i) that satisfies
(62)  Σ_{k,l=1,…,N} ĝ_{kl}(i) dx^k dx^l = |dλ|²
for each line segment dx in said collection, |dλ|² being a predetermined number, and N being the number of dimensions of the space of possible sensor states;
e) determining the metric tensor g_{kl} at said predetermined sensor state x,
(63)  g_{kl} = (1/W_g) Σ_{i=1,…,N_g} ĝ_{kl}(i),
N_g being the number of said collections of line segments at said predetermined sensor state x for which there is a unique said ĝ_{kl}(i) and W_g being a predetermined number; and
f) determining the metric operation at said predetermined sensor state x so that the vector V at x is assigned the metric length |V|
(64)  |V|² = Σ_{k,l=1,…,N} g_{kl} V^k V^l.
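Steps (d)–(f) of claim 48 amount to fitting a symmetric tensor g_{kl} so that every observed line segment dx receives the same squared length |dλ|². One way to sketch this is an ordinary least-squares fit over the independent components of g; this is an illustrative reconstruction, not code from the patent:

```python
import numpy as np

def fit_metric(segments, dlambda_sq):
    """Least-squares estimate of a symmetric metric tensor g_kl such that
    sum_{k,l} g_kl dx^k dx^l is approximately dlambda_sq for every
    line segment dx (one segment per row of `segments`)."""
    N = segments.shape[1]
    # Independent unknowns: the N*(N+1)/2 upper-triangle components of g.
    idx = [(k, l) for k in range(N) for l in range(k, N)]
    # Off-diagonal components appear twice in the quadratic form.
    A = np.array([[dx[k] * dx[l] * (1.0 if k == l else 2.0) for k, l in idx]
                  for dx in segments])
    b = np.full(len(segments), float(dlambda_sq))
    coeffs = np.linalg.lstsq(A, b, rcond=None)[0]
    g = np.zeros((N, N))
    for c, (k, l) in zip(coeffs, idx):
        g[k, l] = g[l, k] = c
    return g

# Unit-length segments in several directions recover the identity metric.
segs = np.array([[1.0, 0.0], [0.0, 1.0], [0.5 ** 0.5, 0.5 ** 0.5]])
assert np.allclose(fit_metric(segs, 1.0), np.eye(2))
```

At least N(N+1)/2 segments in sufficiently distinct directions are needed for the fit to be unique, which matches the claim's restriction to collections for which a unique ĝ_{kl}(i) exists.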
49. The method according to claim 44 wherein said metric operation at said predetermined sensor state x assigns the metric length |V| to the vector V at x, |V| being
65 &LeftBracketingBar; V &RightBracketingBar; 2 = ∑ j = 1, … ⁢ , N Δ ⁢ ⁢ T ⁢ w j ⁢ &LeftBracketingBar; V ⁢ ( j ) &RightBracketingBar; 2,
|V(j)| having the property that the metric operation determined from said saved sensor states in a predetermined time interval with label j assigns the metric length |V(j)&
verbar; to the vector V at x, N&Dgr;T being the number of said predetermined time intervals, and wj being a predetermined number depending on j.
50. The method according to claim 44 wherein said parallel transport operation at said predetermined sensor state x parallel transports a line segment at x along itself into a parallel transported
line segment, said parallel transported line segment having approximately the same metric length as said line segment at x.
51. The method according to claim 44 wherein determining said parallel transport operation at said predetermined sensor state x further includes the steps of:
a) determining the affine connection Γ^k_{lm} at said predetermined sensor state x
(66)  Γ^k_{lm} = (1/2) Σ_{n=1,…,N} g^{kn} (∂g_{mn}/∂x^l + ∂g_{nl}/∂x^m − ∂g_{lm}/∂x^n),
g^{kl} being the contravariant tensor that is the inverse of g_{kl}, g_{kl} satisfying
(67)  |V|² = Σ_{k,l=1,…,N} g_{kl} V^k V^l,
and |V| being the length assigned to a vector V at x by the metric operation at x, and N being the number of dimensions of the space of possible sensor states; and
b) determining the parallel transport operation at said predetermined sensor state x so that a vector V at x is parallel transported along a line segment δx at x into the vector V+δV at x+δx, δV being
(68)  δV^k = −Σ_{l,m=1,…,N} Γ^k_{lm} V^l δx^m,
and N being the number of dimensions of the space of possible sensor states.
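The affine connection in step (a) of claim 51 is the standard Christoffel symbol of the metric. A sketch that evaluates it by central finite differences of a metric field follows; the function name, step size, and test metric are illustrative assumptions, not from the patent:

```python
import numpy as np

def christoffel(metric, x, h=1e-5):
    """Affine connection Gamma^k_{lm} of a metric field at point x:
    Gamma^k_{lm} = (1/2) sum_n g^{kn} (d_l g_mn + d_m g_nl - d_n g_lm),
    with partials estimated by central finite differences of step h.
    `metric` maps an N-vector x to the (N, N) covariant tensor g_kl."""
    N = len(x)
    dg = np.zeros((N, N, N))  # dg[l, m, n] = d g_mn / d x^l
    for l in range(N):
        e = np.zeros(N)
        e[l] = h
        dg[l] = (metric(x + e) - metric(x - e)) / (2.0 * h)
    g_inv = np.linalg.inv(metric(x))
    # T[n, l, m] = d_l g_mn + d_m g_nl - d_n g_lm
    T = dg.transpose(2, 0, 1) + dg.transpose(1, 2, 0) - dg
    return 0.5 * np.einsum('kn,nlm->klm', g_inv, T)

# Polar coordinates x = (r, theta) with metric diag(1, r^2): the nonzero
# symbols are Gamma^r_{theta theta} = -r and Gamma^theta_{r theta} = 1/r.
polar = lambda x: np.diag([1.0, x[0] ** 2])
G = christoffel(polar, np.array([2.0, 0.5]))
assert abs(G[0, 1, 1] + 2.0) < 1e-6
assert abs(G[1, 0, 1] - 0.5) < 1e-6
```

The resulting Γ feeds directly into the parallel transport rule of step (b).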
52. The method according to claim 44 wherein determining the path connecting the reference sensor state x0 to the sensor state x further includes the steps of:
a) determining a type m trajectory through x0, m being a predetermined integer, by parallel transporting the reference vector h0m along a direction of at least one of itself and minus one times
itself and parallel transporting the resultant vector along a direction of at least one of itself and minus one times itself and repeating the last procedure a predetermined number of times;
b) parallel transporting the reference vectors h0a along said type m trajectory to produce preferred vectors ha at each sensor state on said trajectory;
c) determining a type n trajectory through each sensor state on each trajectory of a last-determined type, n being an integer unequal to any of the indices labeling any of the previously
determined trajectories, by parallel transporting said preferred vector hn at each sensor state on said each trajectory along a direction of at least one of itself and minus one times itself and
parallel transporting the resultant vector along a direction of at least one of itself and minus one times itself and repeating this last procedure a predetermined number of times;
d) parallel transporting said preferred vectors ha located at each sensor state on each trajectory of a next to last-determined type along said type n trajectory that passes through said each
sensor state in order to produce preferred vectors ha at each sensor state on said type n trajectory;
e) repeating steps (c)-(d) until said predetermined sensor state x has been reached by a determined trajectory and by said process of parallel transporting the preferred vectors ha; and
f) determining said path to be a path connecting the reference sensor state x0 to said sensor state x, said path containing at most one segment of each type of said determined trajectories, said
segments being connected in the order in which said determined trajectory types were determined.
53. A method of translating stimuli from stimulus source S to stimuli from stimulus source S̄ by determining a time series of stimuli from said stimulus source S that produces a time series of sensor states, said sensor states determined by processing the output of a detector of energy from said stimuli from S, and determining a time series of stimuli from said stimulus source S̄ that produces a time series of sensor states, said sensor states determined by processing the output of a detector of energy from said stimuli from S̄, said time series of sensor states produced by said stimuli from S being related to said time series of sensor states produced by said stimuli from S̄ by an invertible transformation between possible sensor states produced by processing possible output of said detector of energy from S and possible sensor states produced by processing possible output of said detector of energy from S̄.
54. A method of translating stimuli from one stimulus source S into stimuli from another stimulus source S̄, the method comprising the steps of:
a) detecting with a detector the signal energy from stimuli from said source S at predetermined time points;
b) processing the output signals of the detector to produce a sensor state x(t) at each time point t in a collection of predetermined time points, said sensor state x(t) including one or more numbers;
c) saving in computer memory said sensor state x(t) at each time point in said collection;
d) processing said saved sensor states x(t) to produce a representation of each sensor state x of a predetermined collection of sensor states in the space of possible sensor states, each said
representation including one or more numbers;
e) using a computer to save said sensor states x in said predetermined collection of sensor states and to save said representations of the sensor states in said predetermined collection of sensor states;
f) detecting with a detector the signal energy from stimuli from said source S̄ at predetermined time points;
g) processing the output signals of said detector to produce a sensor state x̄(t) at each time point t in a collection of predetermined time points, said sensor state x̄(t) including one or more numbers;
h) saving in computer memory said sensor state x̄(t) at each time point in said collection;
i) processing said saved sensor states x̄(t) to produce a representation of each sensor state x of a predetermined collection of sensor states in the space of sensor states produced by possible stimuli from source S̄, each said representation including one or more numbers;
j) using a computer to save said sensor states x̄ in said predetermined collection of sensor states and to save said representations of the sensor states in said predetermined collection of sensor states; and
k) determining a time series of stimuli from said stimulus source S that produces a time series of sensor states, and determining a time series of stimuli from said stimulus source S̄ that produces a time series of sensor states, the sensor state at each time in said time series of sensor states produced by said stimuli from S having the same representation as the sensor state at the same said time in the time series of sensor states produced by said stimuli from S̄.
55. The method according to claim 54 wherein the stimulus source S produces at least one of electromagnetic signals, auditory signals, and mechanical signals and wherein the stimulus source
S̄ produces at least one of electromagnetic signals, auditory signals, and mechanical signals.
56. The method according to claim 54 wherein the energy of the stimuli from S is carried by a medium selected from the group consisting of empty space, the earth's atmosphere, wave guide, wire,
optical fiber, gaseous medium, liquid medium, and solid medium, and wherein the energy of the stimuli from S̄ is carried by a medium selected from the group consisting of empty space,
the earth's atmosphere, wave guide, wire, optical fiber, gaseous medium, liquid medium, and solid medium.
57. The method according to claim 54 wherein the energy of the stimuli from source S is detected by detectors selected from the group consisting of a radio antenna, microwave antenna, infrared
camera, optical camera, ultraviolet detector, X-ray detector, microphone, hydrophone, pressure transducer, translational position detector, angular position detector, translational motion detector,
angular motion detector, electrical voltage detector, and electrical current detector and wherein the energy of the stimuli from source S̄ is detected by detectors selected from the
group consisting of a radio antenna, microwave antenna, infrared camera, optical camera, ultraviolet detector, X-ray detector, microphone, hydrophone, pressure transducer, translational position
detector, angular position detector, translational motion detector, angular motion detector, electrical voltage detector, and electrical current detector.
58. The method according to claim 54 wherein the representation of a sensor state x determined by processing a time series of sensor states x(t) is approximately the same as the representation of the
transformed sensor state x′=x′(x), determined by processing the transformed time series of sensor states x′(t)=x′[x(t)], x′(x) being an invertible transformation on the space
of possible sensor states.
59. A method of communicating information from a transmitter to a receiver, the method comprising the steps of:
a) determining information to be communicated from the transmitter to the receiver, said information consisting of a collection of number arrays, each said number array including one or more numbers;
b) saving in computer memory said information;
c) determining a transmitter state x(t) at each time point t in a predetermined collection of time points, x(t) at each said time point being one or more numbers, the processing of said time
series of transmitter states determining a representation of each transmitter state in a predetermined collection of said transmitter states, each said representation including one or more
numbers, and the representations of the transmitter states in another predetermined collection of said transmitter states being said information;
d) saving in computer memory said transmitter states x(t) at each time point t in said predetermined collection of time points;
e) using the transmitter to transmit energy, said energy transmission being controlled by said determined time series of transmitter states;
f) detecting with a detector of the receiver energy transmitted by the transmitter at a set of predetermined time points;
g) processing the output of the detector of the receiver to produce a time series of receiver states x̄(t) at predetermined time points t, x̄(t) at each said time point t being one or more numbers;
h) saving in computer memory said receiver states x̄(t) at each of said predetermined time points t;
i) processing said saved receiver states x̄(t) to produce a representation of each saved receiver state in a predetermined collection of said saved receiver states;
j) saving in computer memory said representations of the saved receiver states in said collection;
k) processing the saved representations to determine said information; and
l) saving in computer memory said information.
60. The method according to claim 59 wherein the transmitter transmits at least one of electromagnetic signals, auditory signals and mechanical signals.
61. The method according to claim 59 wherein the energy transmitted by the transmitter is carried by a medium selected from the group consisting of empty space, the earth's atmosphere, wave-guide,
wire, optical fiber, gaseous medium, liquid medium, and solid medium.
62. The method according to claim 59 wherein the receiver detectors are selected from the group consisting of radio antenna, microwave antenna, infrared camera, optical camera, ultraviolet detector,
X-ray detector, microphone, hydrophone, pressure transducer, translational position detector, angular position detector, translational motion detector, angular motion detector, electrical voltage
detector, and electrical current detector.
63. The method according to claim 59 wherein the representation of a transmitter state x determined by a time series of transmitter states x(t) is approximately the same as the representation of the
transformed transmitter state x′=x′(x), determined by the transformed time series of transmitter states x′(t)=x′[x(t)], x′(x) being an invertible transformation on the space
of possible transmitter states.
64. The method according to claim 59 wherein the representation of a receiver state x̄ determined by a time series of receiver states x̄(t) is approximately the same as the representation of the transformed receiver state x̄′=x′(x̄), determined by the transformed time series of receiver states x̄′(t)=x′[x̄(t)], x′(x̄) being an invertible transformation on the space of possible receiver states.
Patent History
Publication number: 20020065633
Filed: Sep 25, 2001
Publication Date: May 30, 2002
Patent Grant number: 6687657
Inventor: David N. Levin (Chicago, IL)
Application Number: 09962768
Hydrostatic Fluid Calculators | List of Hydrostatic Fluid Calculators
This page lists online Hydrostatic Fluid calculators — tools that perform calculations on the concepts and applications of hydrostatic fluids. These calculators are useful for everyone and save time on the complex procedures involved in obtaining results. You can also download, share, and print the list of Hydrostatic Fluid calculators with all the formulas.
The electrical resistivity of rough thin films: A model based on electron reflection at discrete step edges
The effect of the surface roughness on the electrical resistivity of metallic thin films is described by electron reflection at discrete step edges. A Landauer formalism for incoherent scattering
leads to a parameter-free expression for the resistivity contribution from surface mound-valley undulations that is additive to the resistivity associated with bulk and surface scattering. In the
classical limit where the electron reflection probability matches the ratio of the step height h divided by the film thickness d, the additional resistivity Δρ = (3/2)/(g₀d) × ω/ξ, where g₀ is
the specific ballistic conductance and ω/ξ is the ratio of the root-mean-square surface roughness divided by the lateral correlation length of the surface morphology. First-principles non-equilibrium
Green's function density functional theory transport simulations on 1-nm-thick Cu(001) layers validate the model, confirming that the electron reflection probability is equal to h/d and that the
incoherent formalism matches the coherent scattering simulations for surface step separations ≥ 2 nm. Experimental confirmation is done using 4.5–52 nm thick epitaxial W(001) layers, where ω = 0.25–1.07 nm and ξ = 10.5–21.9 nm are varied by in situ annealing. Electron transport measurements at 77 and 295 K indicate a linear relationship between Δρ and ω/(ξd), confirming the model
predictions. The model suggests a stronger resistivity size effect than predictions of existing models by Fuchs [Math. Proc. Cambridge Philos. Soc. 34, 100 (1938)], Sondheimer [Adv. Phys. 1, 1
(1952)], Rossnagel and Kuan [J. Vac. Sci. Technol., B 22, 240 (2004)], or Namba [Jpn. J. Appl. Phys., Part 1 9, 1326 (1970)]. It provides a quantitative explanation for the empirical parameters in
these models and may explain the recently reported deviations of experimental resistivity values from these models.
The effect of surfaces on electron transport in thin films has attracted great interest over many decades, for both its technological importance and the underlying physics of mesoscopic systems.^1–19
The Fuchs-Sondheimer (FS) model,^1 first proposed in 1938 and extended by various researchers,^20–28 is still the best known and most widely used analytical approach to describe the resistivity due
to electron surface scattering. It is a classical model based on the Boltzmann transport equation, incorporating surface scattering as a boundary condition, and employing a phenomenological
scattering specularity parameter p which represents the probability for specular (rather than diffuse) electron reflection from a surface. Due to its simplicity and versatility, the FS model has been
widely used to fit measured thin film resistivity data.^6,10,24,29–31 However, the use of the single parameter p to describe the electron surface scattering has resulted in ambiguity, as the
understanding regarding the physical parameters that determine p is limited. More specifically, some studies indicate that p is affected by the surface chemistry, reporting that the surface
scattering specularity of Cu(001) decreases during oxidation^32 or when adding metallic cap layers,^33 including Ta,^34 Ni,^35 or Ti.^14 Some specularity is retained for insulating cap layers, which
is attributed to the low surface density of states.^14 Other studies primarily attribute changes in p to the surface morphology, e.g., adatoms and surface vacancies, which disturb the smooth surface
potential and increase the resistivity, corresponding to a scattering specularity p=0.34 from measurements on evaporated silver thin films,^26 and p=0.29 from first principles simulations.^9,36
In addition, these atoms/vacancies aggregate on the surface and form clusters which results in larger scale undulations that develop during thin film deposition and are commonly observed as surface
mound features. The clusters may exhibit atomically smooth surface potentials and therefore do not alter the specularity, but nevertheless increase the resistivity by causing deviations in the
thickness and scattering at discrete atomic-height surface steps.^7 Consequently, many recent studies describe the effects of surface chemistry and atomic roughness with the specularity parameter p
within the framework of the FS model, while the scattering from larger scale undulations is commonly referred to as the surface roughness effect and is evaluated by introducing additional parameters^
8,27,34 as discussed in the following paragraph.
The surface roughness of narrow conductors contributes to the resistivity increase associated with electron scattering and may be the cause for the incorrect resistivity prediction by the FS model
for narrow conductors. More specifically, the reported measured resistivity of thin films < 20 nm is consistently higher than the prediction from the FS model,^30,34,37–39 suggesting that a single
parameter p may be insufficient to correctly describe the resistivity vs thickness dependence. As a consequence, multiple models have been developed which explicitly treat surface roughness as a
contributor to the resistivity. Namba^23 has considered the resistivity increase due to the variation of the film thickness around its mean and derived an expression that describes the resistivity as
a function of the experimentally measurable root-mean-square (RMS) surface roughness ω. However, this roughness effect is secondary and the model typically underestimates the measured resistivity
increase.^13,30,39 Kuan et al.^27,34 extended the approximate version of the FS model with an empirical factor S that accounts for surface roughness effects and linearly increases the surface
scattering contribution to the resistivity. This model has been successfully employed to describe sub-20nm measured resistivity data,^13 yet the physical meaning of the empirical parameter S is not
completely clear, since expressing S as a function of ω involves two additional parameters (S=αω^β) and the fitting is not unique.^13 When applying these FS based models to fit experimental data,
the adjustable parameters raise concerns, as it is not evident whether the improved fit is the result of the correct physics or due to the larger fitting flexibility.^40 Alternatively, multiple
quantum mechanical^3–5,41 and quasi-classical^12,19,42,43 models have been developed that describe the thin film resistivity as a function of surface morphology, abandoning any adjustable parameters.
These models do not include surface chemistry effects and therefore cannot completely replace the phenomenological FS model. Nevertheless, they make significant achievements towards an emerging
parameter-free surface scattering model. For example, one major discovery is a 1/d^2 dependence in the resistivity due to surface roughness without bulk scattering,^3,4,19,40,41,43 which completely
contradicts the zero resistivity prediction by the FS model. These quantum mechanical approaches describe the semi two-dimensional electron transport sandwiched between flat surfaces with
perturbations, where the perturbations are in principle a function of the complete description of the surface structure which, however, in practice is approximated using Gaussian^3 or Delta^41
functions. The surface roughness is accounted for as a perturbation which is assumed to be small, that is, the surface root-mean-square roughness ω is small in comparison to the film thickness, and
the lateral correlation length ξ is also often approximated to be small, in comparison to the Fermi wavelength^43 or the mean free path,^4 in order to simplify the mathematical treatment and derive
the 1/d^2 dependency.^3,4,19,41,43 Correspondingly, these quantum mechanical models describe the effect due to atomic–level roughness while neglecting larger scale roughness, i.e., surface mound
features. That is, both the FS model and the more advanced quantum-mechanical treatments account for the effect of small-scale atomic-level roughness using the specularity parameter p or explicit
expressions, respectively, but they neglect the effect of larger-scale surface roughness on the thin film resistivity. This motivates the present investigation which focuses on exactly that: The
resistivity increase associated with surface mounds of a thin film.
In this paper, we present a model that describes the effect of the surface roughness on the thin film resistivity. It uses no empirical roughness parameters but describes the resistivity contribution
due to surface mounds in terms of the measurable surface morphology parameters ω and ξ. The model provides an additive resistivity term without accounting for atomic-level surface roughness/defects,
as the latter can be accounted for with the widely used FS model or more advanced quantum mechanical approaches. The new model describes the surface roughness as an assembly of atomic-height steps
which cause discrete local scattering events corresponding to finite transmission probabilities that are summed using the Landauer formalism.^44,45 The resulting expression for the resistivity
contains an additive term that accounts for the surface roughness effect and is independent of the specific surface scattering specularity. The derived model is validated using a combination of
first-principles transport simulations on Cu layers containing atomic-height steps and electron transport measurements on epitaxial tungsten thin films with variable thickness and surface morphology.
A. Landauer formula for incoherent scattering
The Landauer formalism is a powerful and versatile approach to describe electron transport in mesoscopic systems. It was first developed in an analysis of current flow within a system with electrical
field variations around localized scattering centers.^44 In this analysis, the conductance due to a series of planar obstacles in a one dimensional chain is calculated by the transmission and
reflection probabilities. This approach to treat the conductance as a transmission problem has been widely accepted and adapted, and was generalized to describe arbitrary scattering centers,
many-channel transport, ballistic transport, and contact resistance.^45–52 Within this framework, the conductance G, which is the inverse of the resistance R, of a general 1-D system can be expressed
through its transmission probabilities T[i] of each transverse mode i, as^48,51,53

$G = \frac{2e^2}{h}\sum_{i=1}^{M}T_i = \frac{2e^2}{h}MT,$ (1)
where M is the total number of transverse modes, and T≤1 is the average transmission probability of electron waves traveling through the system. The system being studied can be either a single
scattering center or a scattering region containing many scattering centers, but needs to be in reflectionless contact with two perfect conductors.^48,53 A scattering-free region has a transmission
probability T=1 and is referred to as a ballistic conductor with a conductance G=G[0] =g[0]A, where A is the cross-sectional area and the material property g[0] is the ballistic conductance per
unit area. The ballistic conductance G[0] and its inverse, the ballistic resistance R[0], are independent of the length l of the conductor. However, when the conductor contains a distribution of
incoherent scatterers, the transmission probability decreases with increasing length, which causes the total resistance R=R[0]+R[Ω] to increase linearly with l. This additional component is the
usual ohmic resistance R[Ω]=ρl/A,^53 where ρ is the resistivity due to these distributed scatterers, while the length-independent ballistic part R[0] =G[0]^−1 behaves like a contact resistance R
[0]. Substituting the corresponding net conductance in the form G=(G[0]^−1+ρl/A)^−1 in (1) then yields

$T = \frac{1}{1 + g_0\,\rho\,l}.$ (2)
Equation (2) is valid in the ballistic regime where the conductor length l is smaller than the length l[ph] at which energy dissipation occurs and transport becomes diffusive.^40,53 The total
transmission probability of a system with two incoherent scattering regions in series becomes^46,53

$T = \frac{T_1 T_2}{1 - R_1' R_2},$ (3)

where T[1,2] and R[1,2]=1 − T[1,2] are the transmission and reflection probabilities of regions 1 and 2, and the prime symbol denotes the reversal of the propagation direction. Rearrangement of this equation leads to $T^{-1}-1 = (T_1'/T_1)(T_2^{-1}-1) + (T_1^{-1}-1)$, which can be immediately generalized to N scattering regions

$\frac{1}{T}-1 = \sum_{i=1}^{N}\left(\prod_{j=1}^{i-1}\frac{T_j'}{T_j}\right)\left(\frac{1}{T_i}-1\right).$ (4)

Microscopically, scattering should be time-reversal invariant, i.e., T[j]′ = T[j], so Eq. (4) simplifies to

$\frac{1}{T}-1 = \sum_{i=1}^{N}\left(\frac{1}{T_i}-1\right).$ (5)
We note that Eq. (5) for the overall transmission probability T is derived for the case of ballistic transport, not for diffusive transport. Nevertheless, Eq. (5) is still applicable because a
diffusive conductor can be divided into small ballistic segments with lengths less than the dissipation length l[ph]. The ohmic resistance of the entire conductor can then be obtained from an
incoherent sum over all segments, $R_\Omega=\sum_{i=1}^{N}R_{\Omega,i}$, where the ohmic resistance of each segment $R_{\Omega,i}=(g_0A)^{-1}(T_i^{-1}-1)$ is related to the well-defined transmission probability T[i] of that segment, which in turn can be calculated as the T in Eq. (5). The resulting ohmic resistance in the diffusive limit, $R_\Omega=\rho l/A=(g_0A)^{-1}\sum_{i=1}^{N}(T_i^{-1}-1)$, then matches the ballistic expression in Eq. (5).
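The incoherent sum rule of Eq. (5) and the segment-wise ohmic resistances can be illustrated with a short numerical sketch (the segment transmissions and cross section below are illustrative values, not data from this work):

```python
# Incoherent Landauer cascade: 1/T - 1 = sum_i (1/T_i - 1)   [Eq. (5)]
# Per-segment ohmic resistance: R_i = (g0*A)^-1 * (1/T_i - 1)
g0 = 1.08e15          # ballistic conductance per unit area of bulk Cu, Ohm^-1 m^-2
A = 10e-9 * 100e-9    # illustrative cross section: 10 nm x 100 nm, m^2

T_segments = [0.95, 0.90, 0.85, 0.92]   # illustrative segment transmissions

# Total transmission from the incoherent sum rule, Eq. (5)
T_total = 1.0 / (1.0 + sum(1.0 / T - 1.0 for T in T_segments))

# Summing the per-segment resistances reproduces the cascade result
R_from_sum = sum((1.0 / (g0 * A)) * (1.0 / T - 1.0) for T in T_segments)
R_from_T = (1.0 / (g0 * A)) * (1.0 / T_total - 1.0)

print(f"T_total = {T_total:.4f}")       # ~0.70 for these values
print(f"R_ohmic = {R_from_T:.3f} Ohm")
```

The two resistance evaluations agree identically, which is the content of the incoherent-sum argument above.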
B. Electron transmission at surface steps, at terraces, and in the bulk
We describe the rough surface of a thin film as a series of atomic-height steps which are separated by flat terraces, as shown in the schematic in Fig. 1. The “flat” terraces may contain adatoms,
advacancies, impurities, or surface reconstructions that result in diffuse electron surface scattering, but are otherwise described by a single plane corresponding to a low index crystalline facet.
Correspondingly, the sections of the thin film between the surface steps exhibit bulk and surface scattering, identical to a flat layer. In addition, however, there are discrete electron scattering
events at each surface step which cause the increased resistivity due to surface roughness. We note that the scattering at the steps is assumed to be incoherent, because the carriers' phase is
randomized from bulk and surface scattering between the steps; thus, the Landauer formula for incoherent scattering can be applied. This assumption is further discussed in Sec. III.
Let us consider a film of length l with N steps and N terraces along the transport direction. Denote the terrace lengths by l[1], l[2], …l[N], electron transmission probabilities at each step by η
[1], η[2], … η[N] and those in the terraces between steps by ϕ[1], ϕ[2], …ϕ[N]. Note however that the number of modes differs from one side of the step to the other, making transmission through the
steps direction dependent, i.e., η[i]≠ η[i]′. More specifically, transmission through “down steps” is accompanied by a reduction of the number of transverse modes, which results in an average
transmission probability smaller than unity, while travel through “up steps” causes an increase in the number of transverse modes and, in turn, nearly complete transmission as discussed in more
detail below. However, an up step becomes a down step when the propagation direction is reversed. Therefore, considering the product $\prod_{j=1}^{i-1}T_j'/T_j$ in Eq. (4), each reverse up-step term $\eta_{j_1}'$ is the inverse of some other forward down-step term $\eta_{j_2}$. Correspondingly, the product becomes unity if there are equal numbers of up and down steps. In this paper, we consider conduction through “macroscopically flat” films, that is, their initial and final thicknesses are identical and their roughness is smaller than their thickness. Correspondingly, the product in Eq. (4) is unity and Eq. (5) can be applied, yielding

$\frac{1}{T}-1 = \sum_{i=1}^{N}\left(\frac{1}{\eta_i}-1\right) + \sum_{i=1}^{N}\left(\frac{1}{\phi_i}-1\right).$ (6)
Separating the sums over 1/η[i] and 1/ϕ[i] and expressing the contribution due to scattering between steps ϕ[i] in terms of the resistivity ρ[ff] of a flat film without steps with Eq. (2) yield

$\frac{1}{T}-1 = g_0\,\rho_{ff}\,l + \sum_{i=1}^{N}\left(\frac{1}{\eta_i}-1\right)$ (7)

and, applying Eq. (2) once more to the total transmission,

$\rho = \rho_{ff} + \frac{1}{g_0 l}\sum_{i=1}^{N}\left(\frac{1}{\eta_i}-1\right).$ (8)
This equation shows that the total resistivity of a thin film can be expressed by the sum of the resistivity ρ[ff] of a flat film and a term that accounts for surface roughness. Here, ρ[ff] includes
bulk and surface scattering and therefore accounts for effects associated with atomic-level point defects and surface chemistry, including adatoms, advacancies, impurities, and surface
reconstructions. ρ[ff] can be modeled by the Fuchs-Sondheimer model or any other model of choice. In contrast, the latter term in Eq. (8) evaluates the contribution to the resistivity from the
surface roughness, which is described as a series of steps. This latter term is the primary focus of this study. To quantitatively understand this term, we note that g[0]=e^2D(E[F])v[F]/4 for a
Fermi liquid,^52 where D(E[F]) is the density of states at the Fermi energy E[F] and v[F] is the corresponding average Fermi velocity. By comparing with the corresponding expression for the bulk
resistivity ρ[0]^−1=e^2D(E[F])v[F]^2τ[0]/3, where τ[0] is the bulk relaxation time, we find the relation g[0]=3/(4ρ[0]v[F]τ[0]). Using this relation, we can rearrange Eq. (8) in the absence of
surface scattering (ρ[ff]=ρ[0]) as

$\frac{1}{\tau} = \frac{1}{\tau_0} + \frac{4v_F}{3l}\sum_{i=1}^{N}\left(\frac{1}{\eta_i}-1\right),$ (9)
where τ is the net relaxation time. This equation matches the well-known Matthiessen's rule, indicating an additional scattering rate due to surface steps. This last term is also quite intuitive: the
prefactor is the average inverse transit time of an electron through the structure (where 4/3 appears from the average over angles relative to the length of the conductor), and the sum of (1/η[i] −
1) is the probability that the electron scatters during transit.
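The equivalence of Eq. (8) and its Matthiessen form can be verified numerically; the sketch below uses illustrative Cu-like parameters (not values fitted in this work) together with the relation g[0]=3/(4ρ[0]v[F]τ[0]):

```python
# Check: the Matthiessen form reproduces Eq. (8) when rho_ff = rho_0
# and g0 = 3/(4 rho_0 vF tau_0). All numbers are illustrative.
rho0 = 1.7e-8     # bulk resistivity, Ohm m
vF = 1.57e6       # Fermi velocity, m/s
tau0 = 3.6e-14    # bulk relaxation time, s
g0 = 3.0 / (4.0 * rho0 * vF * tau0)   # ballistic conductance per area

l = 1e-6          # conductor length, m
S = 0.5           # illustrative value of sum_i (1/eta_i - 1)

rho_eq8 = rho0 + S / (g0 * l)                       # Eq. (8) with rho_ff = rho_0
inv_tau = 1.0 / tau0 + (4.0 * vF / (3.0 * l)) * S   # Matthiessen form of Eq. (8)
rho_eq9 = rho0 * tau0 * inv_tau                     # rho scales as 1/tau

print(rho_eq8, rho_eq9)  # the two expressions agree
```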
We next consider the magnitudes for the transmission probability η[i]. For this purpose, we consider electron transport through a perfect film with flat surfaces and thickness d and width w. It is a
ballistic conductor with a conductance G[0] =g[0]wd. Let us consider an atomically sharp down-step of height h, such that the film thickness is reduced to d–h. The overall film conductance is now
limited by the number of conduction channels in the thinner section and is therefore G[1] =g[0]w(d − h). This reduction of conductance from G[0] to G[1] can be interpreted as electrons being
reflected at the atomic step. The electron transmission probability at this down-step becomes

$\eta = \frac{G_1}{G_0} = \frac{d-h}{d} = 1-\frac{h}{d},$ (10)
where d is the thickness of the source section and h is the height of the down-step. Following a corresponding argument, the transmission probability for up-steps is unity, i.e., η=1, as the
overall number of available conduction channels is not affected by increasing the film thickness on one side. However, we note that any variation in potential (e.g., caused by a surface step) in a
quantum mechanical system causes a non-zero reflection probability and, correspondingly, η for an up-step is slightly less than unity. The reflection from up-steps is a local effect and therefore
does not scale with h, as quantified by first-principles transport simulations (see also Sec. III) indicating an h-independent reduction in the conductance of 7×10^−5 Ω^−1 per nm width, which
corresponds to a relatively small reflection probability of, e.g., 3% or 1% for a 2 or 5nm thick Cu layer, respectively. This reflection could be explicitly included in our model, but is neglected
because (i) it is a relatively small contribution and (ii) time-reversal symmetry indicates that the total transmissions from a thin to a thick layer and a thick to a thin layer are identical. The
latter point effectively means that the small reflection at an up-step equally affects the down step and, correspondingly, η in Eq. (10) is the transmission from a pair of down and up steps. This
point and particularly the validity of Eq. (10) are verified using first-principles transport simulations, as described in Sec. III.
Combining Eqs. (8) and (10) and assuming incoherent scattering from neighboring steps yield

$\rho = \rho_{ff} + \frac{1}{2g_0 l}\sum_{i=1}^{N}\frac{h_i}{d-h_i}.$ (11)

Here, the factor 1/2 comes from η=1 for all up-steps, that is, 50% of all steps. We focus on the case where the step height is small in comparison to the film thickness, i.e., h ≪ d, which leads to

$\rho = \rho_{ff} + \frac{1}{2g_0 d\,l}\sum_{i=1}^{N}h_i.$ (12)
Equation (12) indicates that the resistivity contribution Δρ=ρ – ρ[ff] from the surface roughness is proportional to 1/d. Importantly, we note that the additional resistivity is only dependent on
the geometry of the conductor and the material property g[0] which is easy to calculate for any metal because it depends only on the Fermi velocity and the density of states at the Fermi level, as
discussed above. We demonstrate the accuracy and explore the limits of this simple expression in Sec. III using explicit first-principles transport simulations.
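A numerical sketch of the exact step sum, Eq. (11), versus its small-step limit, Eq. (12), with illustrative film parameters (not data from this work) shows that the two differ by only ~h/d:

```python
# Resistivity from discrete surface steps: exact vs. small-step limit.
# Up-steps transmit fully; the factor 1/2 counts only the down-steps.
g0 = 1.08e15      # ballistic conductance per unit area (bulk Cu), Ohm^-1 m^-2
d = 20e-9         # film thickness, m
l = 1e-6          # conductor length, m
rho_ff = 3.0e-8   # illustrative flat-film resistivity, Ohm m

h_step = 0.18e-9  # one atomic step height (~1 ML), m
N = 400           # number of steps along the conductor
heights = [h_step] * N

# Exact form, Eq. (11): 1/eta - 1 = h/(d - h) for each down-step
rho_exact = rho_ff + (1.0 / (2 * g0 * l)) * sum(h / (d - h) for h in heights)

# Small-step limit, Eq. (12): h << d
rho_approx = rho_ff + (1.0 / (2 * g0 * d * l)) * sum(heights)

print(rho_exact, rho_approx)  # differ by ~h/d, i.e., ~1% of the step term here
```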
C. Extension to a rough 2D surface
We now extend the above expression derived for 1D transport perpendicular to discrete surface steps to a more general quasi-continuum 2D surface, represented by a surface height function z(x,y), with
the transport along the x axis. For the special case in which z is independent of y, Eq. (12) for 1D transport applies and the sum $\sum_{i=1}^{N}h_i$ for discrete steps is replaced by an integral $\int_0^l|\partial z/\partial x|\,dx$ that describes the quasi-continuous surface height. In the more general case of randomly oriented surface steps, the steps are described as a linear combination of steps perpendicular and parallel to the transport direction, evaluated by $\partial z/\partial x$ and $\partial z/\partial y$, respectively. Scattering at the latter does not affect the electron momentum along the transport direction and is therefore neglected. We also note that the resistance caused by one single atomic step is small, so the redistribution of the current density over y at different x is negligible. Consequently, the film is modeled by a series of resistors in parallel, distributed over y, such that $(\int dy)/\rho = \int[dy/\rho(y)]$, where $\rho(y)=\rho_{ff}+(2g_0 d l)^{-1}\int|\partial z(x,y)/\partial x|\,dx$. Noting that the last term is smaller than ρ[ff], we obtain to first order

$\rho = \rho_{ff} + \frac{1}{2g_0 d\,lw}\int_0^{w}\!\!\int_0^{l}\left|\frac{\partial z(x,y)}{\partial x}\right|dx\,dy,$ (13)

where w is the width of the film.
The latter term in this expression contains no phenomenological or empirical parameter, such that the resistivity associated with the surface roughness can be directly determined from the ballistic
conductance g[0], the layer thickness d, and the surface profile z(x,y) which is, at least in principle, directly measurable.
For the purpose of direct comparison with experiment, it is convenient to express the integral in Eq. (13) in terms of the root-mean-square (RMS) surface roughness ω and the lateral correlation
length ξ of the surface roughness. These two parameters are commonly determined when quantitatively analyzing the surface morphology of thin films, as described in more detail in Sec. IV. We consider
a surface with a hexagonal close-packed array of identical conical mounds, each with a height H and a radius r. For that case, Eq. (13) becomes $\rho=\rho_{ff}+\sqrt{3}H/(6g_0 d r)$. Furthermore, ξ=r for this surface morphology, since ξ can be defined as the separation at which the height-height correlation function reaches its maximum, and ω is calculated without considering the empty space in the close-packed plane, yielding $H\approx 3\sqrt{2}\,\omega$, where the systematic error in the H to ω ratio is <5%. Using these relationships, the total resistivity for a film with conical mounds becomes

$\rho = \rho_{ff} + \frac{\sqrt{6}}{2}\,\frac{\omega}{g_0\,\xi\,d}.$ (14)

Here, the proportionality factor $\sqrt{6}/2$ is determined by the geometric assumption of a hexagonal close-packed mound array and may thus be different for different surface morphologies. However, the
expected variations are not large as, for example, a square mound array would lead to a correction of just 16%. More importantly, the proportionalities predicted by Eq. (14) are independent of the
surface morphology model. In particular, the additional resistivity Δρ due to the surface roughness is inversely proportional to the layer thickness and the specific ballistic conductance, and
proportional to the ratio ω/ξ, which corresponds to the average surface slope.
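To give a sense of the magnitude predicted by Eq. (14), the sketch below evaluates the roughness term using representative values from the W(001) experiments in Sec. IV (the fitted specific ballistic conductance at 295K and the morphology of the as-deposited 5nm film):

```python
# Surface-roughness resistivity term of Eq. (14):
#   delta_rho = (sqrt(6)/2) * omega / (g0 * xi * d)
import math

g0 = 1.1e14       # fitted specific ballistic conductance at 295 K, Ohm^-1 m^-2
omega = 0.67e-9   # RMS roughness of the as-deposited d = 5 nm W(001) film, m
xi = 13e-9        # lateral correlation length, m
d = 5e-9          # film thickness, m

delta_rho = (math.sqrt(6) / 2) * omega / (g0 * xi * d)
print(f"delta_rho = {delta_rho * 1e8:.1f} microOhm cm")  # prints 11.5
```

This magnitude is consistent with the linear trend of the measured Δρ in Fig. 4.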
Non-equilibrium Green's function (NEGF) density functional theory (DFT) calculations are performed to confirm the above-described step model, to verify the step transmission probability η for an actual atomistic system, and to investigate possible effects from coherent scattering at neighboring steps. The simulations use as a material system a 6 monolayer (ML) thick Cu(001) layer,
which was primarily chosen because of the relatively simple Cu Fermi surface and the s-character of the conduction band that provides an accurate electronic structure with a limited-size localized
atomic orbital basis set. A scattering region is placed between two electrodes that are semi-infinite along the [100] transport direction. Both the scattering region and the electrodes have a height
of 6 ML corresponding to 3a=11Å where a=3.615Å is the experimental room-temperature Cu lattice constant, and a width of a along [010], but with periodic boundary conditions such that they
represent a thin film that is infinite in the [010] direction. The corresponding supercell size is (L+1.5a×2)×a×10a, where L=(1–17)a is the length of the scattering region, 1.5a on each
side of the scattering region corresponds to 3 ML of each electrode that is included as a buffer layer, and 10a along [001] provides space for the six ML of the layer plus a vacuum of 7a above/below
the Cu layer. The electronic structure is calculated with SIESTA,^54 using a Γ-centered 12×18×1 k-point mesh for the electrodes, and a 1×18×1 mesh for the scattering region. All calculations
use single-zeta basis with polarization orbitals,^55–58 an energy shift of 0.02Ry, a norm-conserving pseudopotential for copper that includes all core electrons up to the 3p electrons, and the
Perdew-Burke-Ernzerhof (PBE) exchange correlation functional.^59,60 Electron smearing is carried out with a Fermi-Dirac occupation function with a temperature of 100K. The electronic transport
properties are calculated using the TRANSIESTA^61 code with zero bias. Green's functions of the electrodes are determined with 32 points on the complex contour. After the TRANSIESTA calculations, the
transport coefficients are calculated with a 1×255×1 k-point mesh. All computational parameters, including the k-point mesh density, basis set, energy shift, and points on the complex contour,
are chosen such that the presented transmission probabilities are numerically converged to within 1%. The calculated ballistic conductance for the 6 ML thin film along [100] is 0.996×10^15 Ω^−1m^
−2. This value is 7% smaller than the ballistic conductance for bulk Cu, g[0]=1.08×10^15 Ω^−1m^−2, calculated using a comparable computational parameter set. The latter value compares well to a
reported g[0]=1.10×10^15 Ω^−1m^−2 based on the Fermi velocity and density of states of bulk Cu,^52 consistent with the overall computational accuracy of±1%.
The electron transmission probability at a surface step is calculated using a scattering region where the left and right sides of the thin film are misaligned, forming an up-step on the top surface
and a down-step at the bottom surface, as illustrated with a schematic in Fig. 2. This is done by raising the right half of the scattering region as well as the right electrode by an integer number
of monolayers (ML), and correcting for the stacking sequence such that all Cu atoms both in the scattering region and in the two electrodes occupy sites of the same fcc lattice. This geometry with
both an up-step and a down-step on opposite surfaces is chosen because it removes any ambiguity associated with defining the thickness, as the thickness remains constant throughout the simulated
configuration. In addition, including an up-step within the calculation of a down-step corrects to first order the approximation of η=1 for up-steps, such that the model presented in Sec. II should
become valid even for the case of very thin layers where the reflection at up-steps is not negligible. We note that this geometry illustrated in the inset of Fig. 2 is designed to quantify the
scattering at a local surface step, while the overall rough surface that is considered in the model in Sec. II is represented in Fig. 1. The conductance is calculated as a function of step height h,
from h=0 ML for a perfectly smooth layer to h=1 ML for a single monolayer step to h=6 ML for a completely misaligned thin film to h=9 ML for a discontinuous film.
Figure 2 is a plot of the calculated conductance per area g(h), in units of Ω^−1m^−2, as a function of the step height h, for the simulated 6 ML-thick Cu (001) layer. The right y-axis shows the
corresponding transmission probability η(h), which is determined from the ratio of the calculated conductance g(h) divided by the conductance of the perfectly smooth (h=0) film, where the latter
has a g(h=0)=0.996×10^15 Ω^−1m^−2. Introducing a single atomic-height step (h=1 ML) results in a reduction of g by 17% to 0.825×10^15 Ω^−1m^−2, corresponding to η=0.829. g decreases
with increasing step height, reaching 0.189×10^15 Ω^−1m^−2 for h=6 ML. This latter case (h=6 ML) corresponds to the two sides of the thin film being nominally completely misaligned, such that
the conductance should (classically) be zero. In contrast, the calculated η=18.9%, which is attributed to tunneling that is facilitated by the wave function extending well beyond half an inter-planar spacing above the Cu(001) surface. In fact, even h=7 ML leads to a finite η=0.045, while h≥8 ML results in a negligible η<0.0003. The calculated g(h) data exhibit an
approximately linear dependency, as illustrated by the dashed line through the data points in Fig. 2, which is obtained using a linear curve fitting with a y-intercept fixed at η=1. This linear
relationship is in perfect agreement with Eq. (10), confirming our assumptions that lead to the simple prediction of the electron transmission probability at a step η=1 – h/d. We note, however,
that the x-axis intercept indicates an effective thickness d=7.25 ML which is considerably larger than the nominal thickness d[o]=6 ML. This difference is again attributed to the wavefunction
extending well (∼1 ML) above the surface. This difference of the order of one ML becomes negligible for larger (experimentally more realistic) layer thicknesses.
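The quoted conductances can be checked directly against the linear relation of Eq. (10); the small discrepancy with the quoted η=0.829 merely reflects the three-digit rounding of the g values:

```python
# Transmission at a 1 ML step from the reported first-principles conductances
g_flat = 0.996e15   # conductance per area at h = 0, Ohm^-1 m^-2
g_1ml = 0.825e15    # conductance per area at h = 1 ML, Ohm^-1 m^-2

eta_direct = g_1ml / g_flat   # direct ratio, ~0.828

d_eff = 7.25                  # effective thickness from the linear fit, ML
eta_fit = 1.0 - 1.0 / d_eff   # Eq. (10) with the effective thickness, ~0.862

print(round(eta_direct, 3), round(eta_fit, 3))
```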
Figure 3 is a plot of the calculated total transmission probability T of two-step structures which are investigated to quantify the possible correlation between neighboring steps. The simulated
structures exhibit two steps which are each one monolayer high (h=1 ML) and are separated by a distance l, as illustrated in the schematics in the inset. As before (in Fig. 2), each step is formed
by the misalignment of a 6-ML-thick film, and therefore corresponds to a pair of an up-step and a down-step on opposite surfaces. Two configurations are investigated, namely, a structure where the
two mismatch junctions are in the same direction, as illustrated in Fig. 3 in blue, and a structure where the mismatch of the two junctions is in the opposite direction, as illustrated in red. The
corresponding data points are plotted as blue x's and red circles, respectively, showing the calculated total transmission probability T through the two junctions as a function of their separation l.
If the steps are separated by only a single lattice constant, l=a=3.615Å, the calculated transmission probability is 0.774 and 0.758 for the blue and red configurations, respectively. Increasing
l results in a decrease in the transmission, which converges to T=0.74±0.01 for l≥6a, as indicated by the dashed horizontal line in Fig. 3. We attribute the higher T at small l to direct
tunneling across the terrace between the step-junctions. The transmission probability exhibits some oscillations even for l>6a, which is attributed to coherent scattering at the two step junctions.
The positions of the blue maxima match the red minima and vice versa, with an oscillation wavelength that increases with l and reaches ∼2nm for l>10a. This is considerably larger than the bulk Cu
Fermi wavelength of 4.6Å, suggesting that the oscillations are attributed to standing waves within the terrace with length l. We note that coherent scattering within these structures is possible
because these calculations are done without any incoherent scattering centers, i.e., there are no phonons, point defects, or surface irregularities on the terrace. In contrast, Sec. II assumes the
presence of incoherent scattering events, which is a prerequisite for the validity of Eq. (5) and subsequent derivations. To illustrate this point, we apply the incoherent formalism to the two-step
problem. Equation (3) for incoherent scattering simplifies for the case of two identical and symmetric steps to T=η/(2 − η), where η is the transmission at a single step. There are two ways to
extract η from the data presented in Fig. 2. Direct calculation of a 1-ML-step yields η=0.829, while the linear fit η=1−h/d with h/d=1/7.25 yields η=0.862. Both values have merits: the former
corresponds to the actual 1-ML-high step used in the step-interaction calculations presented in Fig. 3, while the latter value has a considerably smaller computational uncertainty as it is obtained
from multiple calculated configurations. These multiple configurations also reduce “noise” associated with specific ratios of the electron wavelength with a particular step height or thin film
thickness, which reduces the uncertainty of the latter value. Correspondingly, we use the two values to define a range η=0.829–0.862 for the transmission probability for a 1-ML-high step, and
calculate the corresponding transmission across two sequential incoherent 1-ML-high steps to be T=0.708–0.757. This range corresponds to T=0.73±0.02 for the incoherent prediction, which is in
excellent agreement with the calculated T=0.74, indicating that the incoherent scattering formalism is applicable even for this somewhat extreme case of completely coherent scattering at two
neighboring steps of a very thin (6 ML=11Å) film. We note that the 2.7% uncertainty in the incoherent prediction T=0.73±0.02 leads to a larger relative uncertainty of 10% in the predicted
resistivity, since according to Eq. (2) the resistivity is proportional to (1/T–1). However, for the case of a more realistic system which includes incoherent scattering both in the bulk and at the
terrace surfaces, the incoherent formalism becomes increasingly more valid which is expected to reduce both the uncertainty in the predicted T and any possible deviation between T for coherent and
incoherent scattering.
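The incoherent two-step prediction quoted above can be reproduced directly; for two identical symmetric steps, Eq. (3) with T[1]=T[2]=η and R[1]′=R[2]=1−η gives T=η²/(1−(1−η)²)=η/(2−η):

```python
# Incoherent transmission through two identical symmetric steps, from Eq. (3)
def two_step_T(eta):
    return eta / (2.0 - eta)

T_low = two_step_T(0.829)    # direct 1-ML-step value of eta
T_high = two_step_T(0.862)   # linear-fit value of eta

print(round(T_low, 3), round(T_high, 3))  # prints 0.708 0.757
```

This reproduces the quoted incoherent range T=0.708–0.757.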
In summary, the results in Figs. 2 and 3 validate the model presented in Sec. II. In particular, they support (i) the assumption that the transmission probability of a surface step can be accurately
approximated from the ratio of the step height over the layer thickness, as defined in Eq. (10), and (ii) the assumption that scattering at neighboring steps is well described within an incoherent
scattering formalism.
Transport measurements were performed in order to evaluate the prediction of Eq. (14) for the thin film resistivity as a function of the root mean square surface roughness ω and the lateral
correlation length ξ of the surface morphology. We have chosen as an experimental model system epitaxial W(001) films, primarily because the high melting point facilitates epitaxial (single-crystal)
growth of thin continuous layers on insulating substrates down to thicknesses of 4nm,^62,63 and we have previously developed in situ annealing procedures that allow variations in the surface
roughness with negligible changes in the crystalline quality.^39 4.5–52nm thick W(001) films were deposited on MgO(001) substrates in a three chamber ultrahigh vacuum DC magnetron sputter deposition
system with a base pressure<10^−9Torr following the procedure in Ref. 62. After deposition, they were transported without breaking vacuum to the analysis chamber maintained at a base pressure of 10
^−9Torr for in situ resistivity measurements by a linear 4-point probe, as described in Ref. 35. In order to vary the surface morphology, some of the samples were annealed at a base pressure<10^
−7Torr at 1000°C for 2h followed by another in situ resistivity measurement. After the samples were removed from the vacuum, they were dropped into liquid N[2] within 2 s, followed by
4-point-probe measurements at 77K in liquid N[2]. The layer thickness and surface roughness were determined by x-ray reflectivity (XRR) analyses according to the procedure described in Ref. 14.
The layer microstructure was extensively characterized by various x-ray diffraction methods, similar to the procedures described in Refs. 62, 64, and 65 for epitaxial
ScN(001), Sc[1–][x]Al[x]N(001), and W(001) layers, confirming that there is no significant change in the structural quality of the samples during annealing for d≤20nm, as previously reported in
Ref. 62. The surface morphology is quantitatively analyzed by extracting the height-height correlation function H(r) from atomic force micrographs (AFM) following the procedure described in Refs. 66
and 67. The RMS surface roughness ω and the lateral correlation length ξ are then obtained by fitting of the H(r) data assuming a self-affine surface morphology^68 using eight micrographs for each
sample, as previously reported in Ref. 39. A total of nine W(001) layers with thickness d=4.5–52nm are investigated, yielding a range of surface morphologies with ω=0.25–1.07nm and ξ=
10.5–21.9nm. Annealing causes a reduction in ω, for example, from 0.67±0.05nm for d=5nm to 0.29±0.03nm for d=4.5nm or from 1.07±0.11nm for d=48nm to 0.25±0.03nm for d=52nm,
while ξ remains nearly constant at 13 and 20nm, respectively. That is, annealing considerably reduces ω (by a factor of 2–4) while barely affecting ξ (which increases by only 6%–14%).^39 This provides the
ideal sample set to test the presented model, which predicts a resistivity change as a function of the ratio ω/ξ.
To check the validity of our model, we compare the measured resistivity with the prediction from Eq. (14). For this purpose, the flat film resistivity ρ[ff] is set equal to the expected ρ[FS]
calculated from the classical Fuchs-Sondheimer model^1
where λ=15.5nm is the reported bulk mean free path at room temperature,^69 p is the specularity parameter of the two surfaces which is set to zero (completely diffuse scattering) according to
Ref. 63, and the bulk resistivity ρ[bulk]=5.6 and 1.08 μΩcm at 295 and 77K, respectively, is obtained from the measured resistivity of thick samples (320 and 390nm) which is corrected for by the
relatively small 1.8% and 8.3% resistivity size effect in these samples at 295 and 77K. The mean free path at 77K, λ=80.4nm, is determined from the room temperature mean free path and the fact
that the product λ × ρ[bulk] is temperature independent. The flat film resistivity determined with Eq. (15) is significantly smaller than the measured total resistivity ρ. We attribute the difference
Δρ=ρ–ρ[ff] to the surface roughness effect. It corresponds to the measured resistivity contribution due to surface roughness and is plotted in Fig. 4 as a function of ω/(ξd), where all three
values, ω, ξ, and d, are directly obtained from AFM and XRR measurements. The plot includes Δρ of the as-deposited and annealed W(001) layers denoted by triangles and squares, respectively, measured
at two temperatures, 77K and 295K, as indicated by the blue and red color. The dashed lines are the result from linear curve fitting through the origin, which is done independently for 77 and
295K. The measured data are well described by the dashed lines suggesting that Δρ is proportional to ω/(ξd), as predicted by Eq. (14). That is, our measured resistivity data support the model
developed in Sec. II. We note that the thinnest samples have the largest experimental uncertainty in Δρ, as indicated by the plotted error bars in Fig. 4. This may, in turn, explain the relatively
large scatter from the linear trend for the data points at ω/(ξd)=0.0049, which are obtained from the thinnest sample with d=4.5nm. The fitting provides values for the slopes of (8.3±0.8) × 10^−15 Ωm^2 and (11.0±1.2) × 10^−15 Ωm^2, for 77 and 295K, respectively, from which a specific ballistic conductance of (1.5±0.2) × 10^14 Ω^−1m^−2 and (1.1±0.1)×10^14 Ω^−1m^−2 can be
determined by directly applying Eq. (14) while neglecting the substrate-layer roughness. We note that the plotted Δρ is systematically lower at 77K than at 295K, corresponding to a 24% lower slope.
This observation deviates from our model that predicts a temperature independent roughness effect. We attribute this deviation to the uncertainty in the determination of ρ[ff], which requires values
for the bulk resistivity and the bulk electron mean free path as shown in Eq. (15). In particular, a small error in the low-temperature bulk resistivity due to residual impurities leads to a
systematic error in ρ[ff] that could explain the 24% difference in the slope. More importantly, the exact slope is affected by the value of the bulk mean free path, which is taken as λ=15.5nm from
Ref. 69. However, there is no consensus in the literature regarding the most appropriate value for λ of W. For example, fitting our data with another reported λ=38nm which corresponds to a product
λ × ρ[bulk]=2.1×10^−15 Ωm^2,^70 yields a specific ballistic conductance of 9.88×10^14 Ω^−1m^−2. This value is close to the reported first-principles predictions for the ballistic conductance
for W along the [100] and [110] transport directions of 9.5×10^14 and 8.7×10^14 Ω^−1m^−2, respectively.^63 We emphasize here again that the primary conclusion from the experimental measurements
is the linear relationship in Fig. 4 that confirms our new model and particularly the resistivity prediction of Eq. (14) which includes no phenomenological or empirical parameters.
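The numerical chain of this section can be sketched as follows. The simple first-order form ρ[ff] = ρ[bulk][1 + 3(1 − p)λ/8d] is assumed for the Fuchs-Sondheimer baseline (the analysis above may use the full FS integral), and the Δρ data pairs fed to the fit are placeholders for illustration, not the measured values:

```python
# Bulk parameters for W quoted in the text
lam_295 = 15.5                 # nm, bulk mean free path at 295 K (Ref. 69)
rho_295, rho_77 = 5.6, 1.08    # uOhm*cm, bulk resistivity at 295 K / 77 K

# lambda * rho_bulk is temperature independent -> mean free path at 77 K
lam_77 = lam_295 * rho_295 / rho_77
print(round(lam_77, 1))        # -> 80.4 (nm), as quoted in the text

# First-order Fuchs-Sondheimer flat-film resistivity, p = 0 (diffuse)
def rho_ff(rho_bulk, lam, d, p=0.0):
    return rho_bulk * (1.0 + 3.0 * (1.0 - p) * lam / (8.0 * d))

# Size effect of the thick 295 K reference sample (d = 320 nm)
print(round(100.0 * (rho_ff(rho_295, lam_295, 320.0) / rho_295 - 1.0), 1))  # -> 1.8 (%)

# Least-squares line through the origin: slope = sum(x*y) / sum(x*x)
def fit_through_origin(x, y):
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Hypothetical (omega/(xi*d), delta_rho) pairs, for illustration only
x = [0.001, 0.002, 0.004]      # 1/nm
y = [0.9, 1.7, 3.6]            # uOhm*cm
print(fit_through_origin(x, y))
```

The slope obtained this way, together with the model's prefactor, is what converts the measured Δρ vs ω/(ξd) trend into a specific ballistic conductance.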
In this paper, we have presented in Sec. II a new explicit expression for the effect of surface roughness on the resistivity of a thin film, have provided in Sec. III first-principles calculations
that validate the assumptions for incoherent scattering and the electron reflection probability at surface steps used in the development of the new model, and have presented in Sec. IV results from
experimental transport measurements that support the derived expression for the resistivity. The primary focus of this Sec. V is to discuss how the new derived expression relates to existing models
for the resistivity of thin films.
The exact expression of the surface roughness effect from our model is presented in Eq. (13), which expresses the resistivity due to surface roughness in terms of the average surface slope, while the
approximate Eq. (14) uses instead the experimentally easier accessible values of the surface roughness ω and lateral correlation length ξ. In both equations, the contribution from the surface
roughness is an additive term to the resistivity of a flat layer ρ[ff], where the latter includes the resistivity due to bulk scattering as well as scattering at surface imperfections including
adatoms, surface reconstructions, impurities, and the chemical interaction with an adjacent layer. That is, our model explicitly separates resistivity effects due to (i) diffuse electron scattering
from surfaces and (ii) surface roughness. This is in direct contrast to the conventional Fuchs-Sondheimer (FS) model,^1,20 which accounts for both effects using a single phenomenological specularity
parameter p, as shown in Eq. (15). In order to directly compare the FS model with our new prediction, we express the conventional specularity parameter as a sum p=p[s]+Δp, where p[s] accounts for
electron scattering at a flat surface and Δp accounts for the surface roughness. Correspondingly, applying the FS model expression from Eq. (15), the total thin film resistivity
ρ = ρ[ff] − (3Δpλ/8d)ρ[bulk], (16)
where the flat-film resistivity ρ[ff]=ρ[bulk][1+3(1 − p[s])λ/8d]. Now we solve for Δp by setting this resistivity from Eq. (16) equal to the expression in Eq. (14) from our new model, cancelling
the flat-film resistivity ρ[ff] and using ρ[bulk]=1/(g[0]λ) leads to
We note that Δp is negative. This is because surface roughness causes an increase in the resistivity, which corresponds within the conventional FS model to an increase in diffuse surface scattering
and therefore a reduction in the phenomenological specularity parameter p. That is, previous studies that determined the surface scattering specularity by fitting measured resistivity data with the
FS model may have reported a p-value that is lowered by the surface roughness. More problematic is the fact that a surface that causes completely diffuse surface scattering (p[s]=0) but also
exhibits some roughness (Δp<0) yields resistivity vs d data that corresponds to a negative p, which is outside the limits 0≤p≤1 of the FS model. This may explain the reported discrepancy
between measured thin film resistivities and the FS model prediction.^30,34,37,38
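The unphysical negative p can be made concrete with a short numeric illustration. The first-order FS form ρ = ρ[bulk][1 + 3(1 − p)λ/8d] is assumed here, and all numbers are illustrative: a fully diffuse (p[s] = 0) film that additionally carries a roughness contribution Δρ yields an apparent specularity below zero when the total resistivity is inverted with the FS expression alone:

```python
def fs_resistivity(rho_bulk, lam, d, p):
    """First-order Fuchs-Sondheimer thin-film resistivity."""
    return rho_bulk * (1.0 + 3.0 * (1.0 - p) * lam / (8.0 * d))

def apparent_specularity(rho_total, rho_bulk, lam, d):
    """Invert the first-order FS expression for the specularity p."""
    return 1.0 - (rho_total / rho_bulk - 1.0) * 8.0 * d / (3.0 * lam)

rho_bulk, lam, d = 5.6, 15.5, 10.0    # uOhm*cm, nm, nm (illustrative)
rho_flat = fs_resistivity(rho_bulk, lam, d, p=0.0)   # fully diffuse surfaces
delta_rho = 2.0                        # hypothetical roughness contribution
p_app = apparent_specularity(rho_flat + delta_rho, rho_bulk, lam, d)
print(p_app)   # negative -> outside the physical range 0 <= p <= 1
```

Without the roughness term the inversion correctly returns p = 0; the extra Δρ is what pushes the fitted specularity out of its physical range.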
A well-known extension to the FS model which has the goal to resolve this discrepancy has been proposed by Kuan et al.,^27,34 who introduced an empirical parameter S to account for surface roughness
effects such that the FS prediction becomes
In this expression, the contribution from surface roughness on the resistivity is dependent on the surface scattering specularity in such a way that the effect of roughness becomes negligible for a
surface with a completely specular (p=1) scattering. In contrast, the surface roughness contribution in our model is an additive term which is independent of the surface specularity. This becomes
evident when setting Eqs. (14) and (18) equal, applying ρ[ff]=ρ[bulk][1+3(1 − p)λ/8d] and solving for the roughness factor S, which is explicitly expressed based on our model by
This indicates that S is a direct function of the surface scattering specularity and diverges for completely specular surface scattering.
An earlier attempt to account for the surface roughness effect was made by Namba,^23 who considered the effect of variations in the film thickness on the resistivity. More specifically, the (rough)
thin film resistivity is expressed as an integral over the flat film resistivity ρ[ff](d + Δd) where Δd is the deviation from the average thickness d and varies with position x. Applying the FS model
to the Namba approach and assuming that the RMS roughness ω is much smaller than d, we derive
ρ ≈ ρ[ff] + 3(1 − p)ρ[bulk]λω^2/(8d^3), (20)
where ρ[ff]=ρ[bulk][1+3(1 − p)λ/8d] is the flat film resistivity for a constant d. Thus, the second term in Eq. (20) corresponds to the resistivity contribution from the surface roughness
according to Namba. This term is proportional to (1/d)(ω^2/d^2), while our model in Eq. (14) predicts a (1/d)(ω/ξ) proportionality. Since ω/d is small, the roughness effect described by Namba is only
a secondary effect in comparison to our model. This is because, contrary to our model, the surface roughness in the Namba model does not cause additional electron scattering, such that the
resistivity becomes independent of the lateral correlation length ξ. This is in clear contradiction to our expression which predicts a considerably stronger impact of the surface roughness on the
thin film resistivity.
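The secondary nature of the Namba term can be seen by comparing the two scalings directly. Prefactors of order one are dropped here; only the functional forms (1/d)(ω/ξ) and (1/d)(ω^2/d^2) are compared, with typical values for the W(001) samples:

```python
def step_scattering_scaling(omega, xi, d):
    """Roughness term of the present model, up to a prefactor: omega/(xi*d)."""
    return omega / (xi * d)

def namba_scaling(omega, d):
    """Namba thickness-fluctuation term, up to a prefactor: omega^2/d^3."""
    return omega**2 / d**3

# Typical values for the W(001) layers of Sec. IV
omega, xi, d = 1.0, 15.0, 20.0    # nm

ratio = step_scattering_scaling(omega, xi, d) / namba_scaling(omega, d)
print(ratio)   # about 27: step scattering dominates when omega << d
```

Since ω/d is small for reasonably flat films, the step-edge scattering term exceeds the thickness-fluctuation term by more than an order of magnitude for these parameters.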
We note that existing quantum models^3,4,41,43 also provide expressions for the resistivity as a function of surface morphology. However, as summarized in the Introduction section, these models focus
on the effect of small-scale atomic-level roughness while neglecting the effect of larger-scale surface roughness which is the focus of our study. Therefore, these models provide expressions for ρ
[ff], which is within this paper defined as a “known” resistivity of a flat surface that contains atomic-level roughness and can be described with the FS model or these more advanced quantum models.
Correspondingly, these models are compatible with our derivation for the effect of surface roughness but cannot be directly compared with it since they focus on a distinctly different effect which
results, within our framework, in an additive resistivity term.
In summary, in this section we have directly compared our expression for the resistivity due to surface roughness with predictions from existing models. More specifically, we find that (i) surface
roughness causes a decrease in the apparent specularity parameter p within the FS model, which may become negative and, thus, unphysical, (ii) the roughness factor S within the model of Kuan et al.
is a function of ω/ξ, which is consistent with our prediction, but S tends to infinity for specular surface scattering, suggesting a possible divergence of this model with our prediction for p≈1,
(iii) the Namba model predicts a resistivity due to roughness which is independent of the lateral length scale for roughness and is relatively small in comparison to our prediction, and (iv) existing
quantum models complement our model by providing expressions for ρ[ff].
We have presented a new model that predicts the impact of surface roughness on the resistivity of a thin film. This is done by analyzing electron scattering at discrete surface step edges. This
resistivity is found to be additive to the flat film resistivity, and can be expressed in terms of a transmission probability at each step by applying a Landauer formalism. The transmission
probability at a single step decreases linearly with the step height, as confirmed with NEGF-DFT simulations of six-monolayer-thick Cu(001) layers. The simulations also show that the transmission
across a two-step structure converges to the expected incoherent transmission across two individual steps with increasing step-step separation, indicating that the incoherent scattering approximation
is applicable to layers with reasonably flat surfaces. Generalizing the transport model to 2D surfaces yields a parameter-free expression [Eq. (13)] for the resistivity due to surface roughness. It
is proportional to the average surface slope and inversely proportional to the film thickness and can be expressed, for direct comparison with experiments, in terms of the RMS surface roughness ω and
lateral correlation length ξ, showing a linear dependence Δρ ∼ ω/(ξd). Experimental validation of this proportionality is done by measurements of the surface morphology and resistivity of the
annealed and as-deposited epitaxial W(001) films. They reveal a significant resistivity contribution Δρ due to the surface roughness ranging from 0.3 to 11.2 μΩcm. Plotting the measured Δρ at both
295 and 77K against the measured ω/(ξd) suggests a linear relationship, confirming the model's prediction. Finally, our result is compared to previously developed surface roughness models, providing
a quantitative explanation of their empirical parameters as well as revealing the limitations of existing models. This work provides a parameter-free method to evaluate the surface roughness
contribution to the thin film resistivity.
This research was funded by SRC, MARCO, and DARPA through the FAME STARnet center. The authors also acknowledge the NSF under Grant Nos. 1712752 and 1740271. Computational resources were provided by
the Center for Computational Innovations at RPI. The authors thank Professor Hong Guo for fruitful discussions on electron transport simulations.
© 2018 Author(s).
Tensile strength of paper | pulp paper mill
What is the tensile strength of paper?
The tensile strength is the maximum stress a strip of paper can withstand before breaking. It is one of the most important basic physical properties of paper and paperboard. Tensile strength differs with fiber direction: since the fiber orientation is dissimilar between the machine direction (MD) and the cross direction (CD), tensile strength is measured in both directions. The machine direction is the direction in which the paper web runs on the machine, whereas the cross direction is perpendicular to the paper sheet running on the machine during papermaking. Tensile strength is greater in the machine direction than in the cross direction. It is calculated as the force per unit width and expressed in N/m.
The tensile strength test of a paper sheet is similar to that of other materials, but the way the result is expressed is different. In most cases tensile strength is expressed in terms of load per unit cross-sectional area, whereas in the paper industry it is stated in terms of load per unit width of the test specimen. A low tensile strength indicates lower paper quality, and it then needs to be increased by improving the contributing factors.
Relation of the tensile strength of paper
Tensile strength is used to find out how resistant paper is to a web break. The strength, length, and bonding of the fibers, the degree of fiber refining, and the fiber direction are the main sources of the tensile strength of paper. It also depends on the quality and quantity of the fillers used. Tensile strength is a significant factor for many applications, such as printing, converting, and packaging papers.
Tensile index and its calculation
Tensile index is defined as tensile strength divided by basis weight and is expressed in Nm/g.
Tensile Strength = N/m
Basis Weight = g/m^2
Hence, Tensile index (TI) = (N/m)/(g/m^2) = Nm/g
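The calculation can be sketched with a short worked example. All numbers here are illustrative, and the sheet thickness is a hypothetical value needed only for the optional conversion to a conventional stress in MPa:

```python
def tensile_index(tensile_strength_n_per_m, basis_weight_g_per_m2):
    """Tensile index in Nm/g = (N/m) / (g/m^2)."""
    return tensile_strength_n_per_m / basis_weight_g_per_m2

# Example: a strip that breaks at 98 N over a 15 mm test width
force_n, width_m = 98.0, 0.015
ts = force_n / width_m                      # tensile strength in N/m
print(round(ts, 1))                         # -> 6533.3 N/m

# Tensile index for an 80 g/m^2 sheet
print(round(tensile_index(ts, 80.0), 1))    # -> 81.7 Nm/g

# Converting to a conventional stress (MPa) additionally requires the
# sheet thickness, here a hypothetical 100 micrometres
thickness_m = 100e-6
print(round(ts / thickness_m / 1e6, 1))     # -> 65.3 MPa
```

Note that the last conversion is why paper strength is normally reported per unit width: the stress value depends on a thickness that varies with sheet grade.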
Tensile strength test machine
Several types of tensile strength testing apparatus are available, working on horizontally or vertically oriented specimens. Five types of tensile strength testers are used in the paper industry: rigid crosshead type, pendulum type, inclined plane type, hydraulic type, and spring type. Among them, the pendulum type tester is the most commonly used.
19 Responses to Tensile strength of paper
1. I am very keen to learn more about Paper & Pulp. What should be the minimum and maximum Tensile strength of Kraft liner board?
How is Cobb of a kraft paper tested? What is Cobb TS & BS? What should be the minimum and maximum SCT of a Kraft Liner board?
I will appreciate if there is any short courses available so that I could join to learn more.
Thanks & best regards
2. Dear saleem,
Tensile strength depends upon the type of paper. Upto my knowledge cobb can be tested by pouring water on the paper and checking the dewatering of it up to 45 to 60 sec of minimum time. If the
water or moisture detected at the lower end of paper, then the quality of the paper is not upto expected regarding kraftliner paper. T. S the breaking or tearing point of a paper when maximum
force is applied to tear it. That max force noted as T.S of paper can be tested by machines. B.S. Can be checked as same above. It is the maximum burst strength factor which the produced paper
can with stand
3. Is there any way to manually measure the tensile strength of paper? We are conducting an investigative research about the relationship between the the strength of paper made from carabao grass
and its relative “cooking” time. It would mean a lot to our study. Thank you!
4. if tensile strength of a paper is 98N/15mm, what would be in MPa
□ I am looking for same answer …
Have you figured it out ??
☆ 1MPa = 1000000Pa = 1000000 N/m^2
Now you can calculate easily
○ Thanks for your reply,
Tensile strength of paper is expressed in N/m (Force per unit length) everywhere.
I am looking for a way to convert the tensile strength of paper to N/m^2 (Force per unit area).
I assume this is because the paper strength might not be proportional to its cross sectional area. It should calculated separately for each paper weight
○ Hello,
if tensile strength is 0.521 Mpa then what would be in N/m.
Width =50.8 mm
could u please calculate and advice
□ Sir plz calculate the tensile index of white printing paper….grammmage 59.2 g/m2
Tensile strength 33.58 n/mm2
5. if tensile strength of a paper is N/15mm to MPa.
Thanks & regards
6. Can we run machine speed at more than papers breaking length. Both while paper making and converting operations?
7. what the standard tensile , tear , burst and folding for writing and printing paper in N/m , mN , kPa , N0 and what is the standard tear index for 60 g/m2 in N m/g , mN m2/g , kPa m2/g
respectively .
Best regards
8. Dear Sir
We are looking for Tensile Strength Tester for Paper and Plastic to be used in Laboratory
Please quote qty 1 pc
Kindly give catalogue and data sheet
Our company is PT Adika Safeta Tunggal in Indonesia
9. I need to find the force to punch 100 sheets of A4 size paper using a 5mm dia pin. If the punching force calculation is perimeter x thickness x shear strength. How much is the shear strength of
the paper?
10. I need to understand that why do VFD supplier propose “MultiDrive” configuration i.e.with common DC source especially for dryer section.
Is there some kind of regeneration taking place in dryer section during running?
if yes,how do we calculate it to calculate ROI?
11. what is The Tear Index of JK Easy Copier Paper.(A4) 70gsm?
12. Can any one confirm if the tensile of MD is lower and CD wise its OK, then how we can improve it on machine.
13. How to convert MPa to N/m?
Please help me!!!!
14. Hello may I ask, what is the standard level of tensile strength of a paper, how do we consider if its strong or weak? Thank you very much
This entry was posted in Paper Properties and tagged Tensile index, tensile strength, tensile strength of paper, tensile strength test.
How do you solve log(x + 10) - log(x) = 2log(5)? | Socratic
1 Answer
$\log \left(x + 10\right) - \log x = 2 \log 5$
Our first step is to rewrite the equation using some laws of logarithms, specifically, $\log A - \log B = \log \left(\frac{A}{B}\right)$. This law allows us to rewrite the left-hand side of the
equation as
$\log \left(\frac{x + 10}{x}\right) = 2 \log 5$.
Another law of logarithms, $A \log B = \log {B}^{A}$, allows us to rewrite the right-hand side equivalently as
$\log \left(\frac{x + 10}{x}\right) = \log {5}^{2}$
$= \log 25$
Now, you didn't specify a base for the $\log$ function here, so I will assume that $\log$ means base-2 logarithm. Still, whether the base is 2, 10, e, or whatever, it actually doesn't matter... the
answer will be the same. You'll see why in a moment.
Raise 2 to both sides:
${2}^{\log} \left(\frac{x + 10}{x}\right) = {2}^{\log} 25$
The exponential and the logarithm are inverse functions, so the base-2 exponential and the base-2 logarithm cancel:
$\frac{x + 10}{x} = 25$
From here, we just need to use some simple algebra, multiplying both sides by $x$:
$x + 10 = 25 x$
and then subtracting $x$ from both sides:
$10 = 24 x$
And then simplify to arrive at our final answer $x$:
$x = \frac{5}{12}$
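As a quick numeric check of this answer (base-10 logs are used here, but any base gives the same result):

```python
import math

x = 5 / 12
lhs = math.log10(x + 10) - math.log10(x)   # log(x + 10) - log(x)
rhs = 2 * math.log10(5)                    # 2 log(5) = log(25)
print(abs(lhs - rhs) < 1e-12)              # -> True
```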
Question #0462f | Socratic
1 Answer
You actually don't have enough information to solve this problem.
The idea here is that when a gas is collected over water, the total pressure of the sample will include the vapor pressure of water at the temperature at which the gas was collected.
Since no mention of this vapor pressure was made, you will have to either look it up or calculate it using the Antoine equation. But then again, you'd need to look up the Antoine constants for water, so I'll just use the known value without actually calculating it.
At ${26.0}^{\circ} \text{C}$, water has a vapor pressure of about $\text{25.137 mmHg}$.
Now, according to Dalton's Law of partial pressures, the total pressure of a mixture of gases represents the sum of the partial pressures of each individual component of said mixture.
In your case, this can be written as
$P_\text{total} = P_\text{water} + P_\text{gas}$
This means that the pressure of the gas is equal to
$P_\text{gas} = P_\text{total} - P_\text{water}$
$P_\text{gas} = 751.5\ \text{mmHg} - 25.137\ \text{mmHg} = 726.4\ \text{mmHg}$
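The subtraction can be checked in a couple of lines (the water vapor pressure is the looked-up value quoted above):

```python
p_total = 751.5    # mmHg, total pressure measured over water at 26.0 C
p_water = 25.137   # mmHg, tabulated vapor pressure of water at 26.0 C
p_gas = p_total - p_water
print(round(p_gas, 1))   # -> 726.4 mmHg
```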
On The Approximation Of Conjugate Of Functions Belonging To The Generalized Lipschitz Class By Euler-matrix Product Summability Method Of Conjugate Series Of Fourier Series
In this paper, a new theorem on the approximation of conjugate of functions belonging to the generalized Lipschitz class $Lip\left(\xi(t),p \right) $ by Euler-Matrix product summability method of
conjugate series of Fourier series has been obtained.
generalized Lipschitz class, conjugate series of Fourier series, product summability method, Euler mean, matrix mean.
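For reference, the two standard definitions involved can be sketched as follows (these are the commonly used forms; the paper's own equation numbering and normalization may differ). A $2\pi$-periodic function $f \in L^p$ belongs to $Lip\left(\xi(t),p\right)$ when its integral modulus of continuity is dominated by $\xi(t)$, and the Euler $(E,q)$ mean of a series with partial sums $s_n$ is the binomial average:

```latex
\left( \int_{0}^{2\pi} \lvert f(x+t) - f(x) \rvert^{p} \, dx \right)^{1/p}
  = O\!\big(\xi(t)\big), \qquad p \ge 1,
\qquad
E_{n}^{q} = \frac{1}{(1+q)^{n}} \sum_{k=0}^{n} \binom{n}{k}\, q^{\,n-k}\, s_{k},
\qquad q > 0.
```

A product summability method such as the Euler-matrix mean then applies a lower triangular matrix transform to the sequence of Euler means $E_{n}^{q}$.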
Copyright (c) 2022 Jitendra Kumar Kushwaha, Krishna Kumar
This work is licensed under a Creative Commons Attribution 4.0 International License.
Ratio Mathematica - Journal of Mathematics, Statistics, and Applications. ISSN 1592-7415; e-ISSN 2282-8214.
Playing CartPole with the Actor-Critic method | TensorFlow Core
This tutorial demonstrates how to implement the Actor-Critic method using TensorFlow to train an agent on the Open AI Gym CartPole-v0 environment. The reader is assumed to have some familiarity with
policy gradient methods of (deep) reinforcement learning.
Actor-Critic methods
Actor-Critic methods are temporal difference (TD) learning methods that represent the policy function independent of the value function.
A policy function (or policy) returns a probability distribution over actions that the agent can take based on the given state. A value function determines the expected return for an agent starting
at a given state and acting according to a particular policy forever after.
In the Actor-Critic method, the policy is referred to as the actor that proposes a set of possible actions given a state, and the estimated value function is referred to as the critic, which
evaluates actions taken by the actor based on the given policy.
In this tutorial, both the Actor and Critic will be represented using one neural network with two outputs.
In the CartPole-v0 environment, a pole is attached to a cart moving along a frictionless track. The pole starts upright and the goal of the agent is to prevent it from falling over by applying a
force of -1 or +1 to the cart. A reward of +1 is given for every time step the pole remains upright. An episode ends when: 1) the pole is more than 15 degrees from vertical; or 2) the cart moves more
than 2.4 units from the center.
The problem is considered "solved" when the average total reward for the episode reaches 195 over 100 consecutive trials.
Import necessary packages and configure global settings.
pip install gym[classic_control]
pip install pyglet
# Install additional packages for visualization
sudo apt-get install -y python-opengl > /dev/null 2>&1
pip install git+https://github.com/tensorflow/docs > /dev/null 2>&1
import collections
import gym
import numpy as np
import statistics
import tensorflow as tf
import tqdm
from matplotlib import pyplot as plt
from tensorflow.keras import layers
from typing import Any, List, Sequence, Tuple
# Create the environment
env = gym.make("CartPole-v1")
# Set seed for experiment reproducibility
seed = 42
tf.random.set_seed(seed)
np.random.seed(seed)
# Small epsilon value for stabilizing division operations
eps = np.finfo(np.float32).eps.item()
The model
The Actor and Critic will be modeled using one neural network that generates the action probabilities and Critic value respectively. This tutorial uses model subclassing to define the model.
During the forward pass, the model will take in the state as the input and will output both action probabilities and critic value \(V\), which models the state-dependent value function. The goal is
to train a model that chooses actions based on a policy \(\pi\) that maximizes expected return.
For CartPole-v1, there are four values representing the state: cart position, cart velocity, pole angle, and pole angular velocity. The agent can take two actions: push the cart left (0) or push it right (1).
Refer to Gym's Cart Pole documentation page and Neuronlike adaptive elements that can solve difficult learning control problems by Barto, Sutton and Anderson (1983) for more information.
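To make the actor head's output concrete before the code: it emits unnormalized logits for the two actions, which become a probability distribution via softmax and are then sampled. A minimal pure-Python sketch of that idea (the logit values are invented for illustration; the real model uses tf.nn.softmax and tf.random.categorical):

```python
import math
import random

def softmax(logits):
    """Convert unnormalized logits into a probability distribution."""
    # Subtracting the max logit keeps exp() numerically stable.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Equal logits -> equal chance of pushing left (0) or right (1)
probs = softmax([0.0, 0.0])
print(probs)  # [0.5, 0.5]

# Sample an action from the distribution, as tf.random.categorical does conceptually
action = random.choices([0, 1], weights=probs)[0]
```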
class ActorCritic(tf.keras.Model):
  """Combined actor-critic network."""

  def __init__(
      self,
      num_actions: int,
      num_hidden_units: int):
    """Initialize."""
    super().__init__()

    self.common = layers.Dense(num_hidden_units, activation="relu")
    self.actor = layers.Dense(num_actions)
    self.critic = layers.Dense(1)

  def call(self, inputs: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor]:
    x = self.common(inputs)
    return self.actor(x), self.critic(x)
num_actions = env.action_space.n # 2
num_hidden_units = 128
model = ActorCritic(num_actions, num_hidden_units)
Train the agent
To train the agent, you will follow these steps:
1. Run the agent on the environment to collect training data per episode.
2. Compute expected return at each time step.
3. Compute the loss for the combined Actor-Critic model.
4. Compute gradients and update network parameters.
5. Repeat 1-4 until either success criterion or max episodes has been reached.
1. Collect training data
As in supervised learning, in order to train the actor-critic model, you need to have training data. However, in order to collect such data, the model would need to be "run" in the environment.
Training data is collected for each episode. Then at each time step, the model's forward pass will be run on the environment's state in order to generate action probabilities and the critic value
based on the current policy parameterized by the model's weights.
The next action will be sampled from the action probabilities generated by the model, which would then be applied to the environment, causing the next state and reward to be generated.
This process is implemented in the run_episode function, which uses TensorFlow operations so that it can later be compiled into a TensorFlow graph for faster training. Note that tf.TensorArrays were
used to support Tensor iteration on variable length arrays.
# Wrap Gym's `env.step` call as an operation in a TensorFlow function.
# This would allow it to be included in a callable TensorFlow graph.

@tf.numpy_function(Tout=[tf.float32, tf.int32, tf.int32])
def env_step(action: np.ndarray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
  """Returns state, reward and done flag given an action."""

  state, reward, done, truncated, info = env.step(action)
  return (state.astype(np.float32),
          np.array(reward, np.int32),
          np.array(done, np.int32))
def run_episode(
    initial_state: tf.Tensor,
    model: tf.keras.Model,
    max_steps: int) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor]:
  """Runs a single episode to collect training data."""

  action_probs = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
  values = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
  rewards = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)

  initial_state_shape = initial_state.shape
  state = initial_state

  for t in tf.range(max_steps):
    # Convert state into a batched tensor (batch size = 1)
    state = tf.expand_dims(state, 0)

    # Run the model to get action probabilities and critic value
    action_logits_t, value = model(state)

    # Sample next action from the action probability distribution
    action = tf.random.categorical(action_logits_t, 1)[0, 0]
    action_probs_t = tf.nn.softmax(action_logits_t)

    # Store critic values
    values = values.write(t, tf.squeeze(value))

    # Store probability of the action chosen
    action_probs = action_probs.write(t, action_probs_t[0, action])

    # Apply action to the environment to get next state and reward
    state, reward, done = env_step(action)
    state.set_shape(initial_state_shape)

    # Store reward
    rewards = rewards.write(t, reward)

    if tf.cast(done, tf.bool):
      break

  action_probs = action_probs.stack()
  values = values.stack()
  rewards = rewards.stack()

  return action_probs, values, rewards
2. Compute the expected returns
The sequence of rewards for each timestep \(t\), \(\{r_{t}\}^{T}_{t=1}\) collected during one episode is converted into a sequence of expected returns \(\{G_{t}\}^{T}_{t=1}\) in which the sum of
rewards is taken from the current timestep \(t\) to \(T\) and each reward is multiplied with an exponentially decaying discount factor \(\gamma\):
\[G_{t} = \sum^{T}_{t'=t} \gamma^{t'-t}r_{t'}\]
Since \(\gamma\in(0,1)\), rewards further out from the current timestep are given less weight.
Intuitively, expected return simply implies that rewards now are better than rewards later. In a mathematical sense, it is to ensure that the sum of the rewards converges.
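As a worked example of the formula above (the numbers are chosen purely for illustration): with rewards \([1, 1, 1]\) and \(\gamma = 0.5\), working backwards gives \(G_3 = 1\), \(G_2 = 1 + 0.5 \cdot 1 = 1.5\), and \(G_1 = 1 + 0.5 \cdot 1.5 = 1.75\). The same backward accumulation in plain Python:

```python
def discounted_returns(rewards, gamma):
    """Accumulate discounted reward sums from the last timestep backwards."""
    returns = []
    discounted_sum = 0.0
    for r in reversed(rewards):
        discounted_sum = r + gamma * discounted_sum
        returns.append(discounted_sum)
    return returns[::-1]  # restore original time order

print(discounted_returns([1, 1, 1], gamma=0.5))  # [1.75, 1.5, 1.0]
```

This mirrors what get_expected_return below does with TensorFlow ops, before the optional standardization step.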
To stabilize training, the resulting sequence of returns is also standardized (i.e. to have zero mean and unit standard deviation).
def get_expected_return(
    rewards: tf.Tensor,
    gamma: float,
    standardize: bool = True) -> tf.Tensor:
  """Compute expected returns per timestep."""

  n = tf.shape(rewards)[0]
  returns = tf.TensorArray(dtype=tf.float32, size=n)

  # Start from the end of `rewards` and accumulate reward sums
  # into the `returns` array
  rewards = tf.cast(rewards[::-1], dtype=tf.float32)
  discounted_sum = tf.constant(0.0)
  discounted_sum_shape = discounted_sum.shape
  for i in tf.range(n):
    reward = rewards[i]
    discounted_sum = reward + gamma * discounted_sum
    discounted_sum.set_shape(discounted_sum_shape)
    returns = returns.write(i, discounted_sum)
  returns = returns.stack()[::-1]

  if standardize:
    returns = ((returns - tf.math.reduce_mean(returns)) /
               (tf.math.reduce_std(returns) + eps))

  return returns
3. The Actor-Critic loss
Since you're using a hybrid Actor-Critic model, the chosen loss function is a combination of Actor and Critic losses for training, as shown below:
\[L = L_{actor} + L_{critic}\]
The Actor loss
The Actor loss is based on policy gradients with the Critic as a state dependent baseline and computed with single-sample (per-episode) estimates.
\[L_{actor} = -\sum^{T}_{t=1} \log\pi_{\theta}(a_{t} | s_{t})[G(s_{t}, a_{t}) - V^{\pi}_{\theta}(s_{t})]\]
• \(T\): the number of timesteps per episode, which can vary per episode
• \(s_{t}\): the state at timestep \(t\)
• \(a_{t}\): chosen action at timestep \(t\) given state \(s\)
• \(\pi_{\theta}\): the policy (Actor) parameterized by \(\theta\)
• \(V^{\pi}_{\theta}\): the value function (Critic), also parameterized by \(\theta\)
• \(G = G_{t}\): the expected return for a given state, action pair at timestep \(t\)
A negative term is added to the sum since the idea is to maximize the probabilities of actions yielding higher rewards by minimizing the combined loss.
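For intuition, the sum can be computed by hand for a toy two-step episode (all numbers invented for illustration): suppose the chosen actions had probabilities 0.9 and 0.5 and advantages +1.0 and −0.5.

```python
import math

action_probs = [0.9, 0.5]   # pi(a_t | s_t) for the chosen actions
advantages = [1.0, -0.5]    # G - V at each timestep

# L_actor = -sum_t log pi(a_t | s_t) * advantage_t
actor_loss = -sum(math.log(p) * adv
                  for p, adv in zip(action_probs, advantages))  # ≈ -0.2412
```

Minimizing this loss pushes up the probability of the first action (positive advantage) and pushes down the second (negative advantage).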
The Advantage
The \(G - V\) term in our \(L_{actor}\) formulation is called the Advantage, which indicates how much better an action is given a particular state over a random action selected according to the
policy \(\pi\) for that state.
While it's possible to exclude a baseline, this may result in high variance during training. And the nice thing about choosing the critic \(V\) as a baseline is that it is trained to be as close as possible to \(G\), leading to a lower variance.
In addition, without the Critic, the algorithm would try to increase probabilities for actions taken on a particular state based on expected return, which may not make much of a difference if the
relative probabilities between actions remain the same.
For instance, suppose that two actions for a given state would yield the same expected return. Without the Critic, the algorithm would try to raise the probability of these actions based on the
objective \(J\). With the Critic, it may turn out that there's no Advantage (\(G - V = 0\)), and thus no benefit gained in increasing the actions' probabilities and the algorithm would set the
gradients to zero.
The Critic loss
Training \(V\) to be as close as possible to \(G\) can be set up as a regression problem with the following loss function:
\[L_{critic} = L_{\delta}(G, V^{\pi}_{\theta})\]
where \(L_{\delta}\) is the Huber loss, which is less sensitive to outliers in data than squared-error loss.
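Concretely, with threshold \(\delta\) the Huber loss is quadratic (\(\tfrac{1}{2}e^2\)) for small residuals \(|e| \le \delta\) and linear (\(\delta(|e| - \tfrac{1}{2}\delta)\)) beyond that, which is what makes it less sensitive to outliers than squared error. A quick hand check with \(\delta = 1\):

```python
def huber(error, delta=1.0):
    """Piecewise Huber loss for a single residual."""
    if abs(error) <= delta:
        return 0.5 * error ** 2                 # quadratic near zero
    return delta * (abs(error) - 0.5 * delta)   # linear in the tails

print(huber(0.5))  # 0.125 (same as half the squared error)
print(huber(3.0))  # 2.5   (half the squared error would be 4.5)
```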
huber_loss = tf.keras.losses.Huber(reduction=tf.keras.losses.Reduction.SUM)
def compute_loss(
    action_probs: tf.Tensor,
    values: tf.Tensor,
    returns: tf.Tensor) -> tf.Tensor:
  """Computes the combined Actor-Critic loss."""

  advantage = returns - values

  action_log_probs = tf.math.log(action_probs)
  actor_loss = -tf.math.reduce_sum(action_log_probs * advantage)

  critic_loss = huber_loss(values, returns)

  return actor_loss + critic_loss
4. Define the training step to update parameters
All of the steps above are combined into a training step that is run every episode. All steps leading up to the loss function are executed with the tf.GradientTape context to enable automatic differentiation.
This tutorial uses the Adam optimizer to apply the gradients to the model parameters.
The sum of the undiscounted rewards, episode_reward, is also computed in this step. This value will be used later on to evaluate if the success criterion is met.
The tf.function context is applied to the train_step function so that it can be compiled into a callable TensorFlow graph, which can lead to 10x speedup in training.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)


@tf.function
def train_step(
    initial_state: tf.Tensor,
    model: tf.keras.Model,
    optimizer: tf.keras.optimizers.Optimizer,
    gamma: float,
    max_steps_per_episode: int) -> tf.Tensor:
  """Runs a model training step."""

  with tf.GradientTape() as tape:

    # Run the model for one episode to collect training data
    action_probs, values, rewards = run_episode(
        initial_state, model, max_steps_per_episode)

    # Calculate the expected returns
    returns = get_expected_return(rewards, gamma)

    # Convert training data to appropriate TF tensor shapes
    action_probs, values, returns = [
        tf.expand_dims(x, 1) for x in [action_probs, values, returns]]

    # Calculate the loss values to update our network
    loss = compute_loss(action_probs, values, returns)

  # Compute the gradients from the loss
  grads = tape.gradient(loss, model.trainable_variables)

  # Apply the gradients to the model's parameters
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

  episode_reward = tf.math.reduce_sum(rewards)

  return episode_reward
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1723775141.663718 57252 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
5. Run the training loop
Training is executed by running the training step until either the success criterion or maximum number of episodes is reached.
A running record of episode rewards is kept in a queue. Once 100 trials are reached, the oldest reward is removed at the left (tail) end of the queue and the newest one is added at the head (right).
A running sum of the rewards is also maintained for computational efficiency.
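The fixed-length queue behaviour described above can be sketched with collections.deque (reward values invented for illustration; a small maxlen is used so the eviction is visible): once maxlen is reached, each append silently drops the oldest entry, so the mean over the deque is a rolling average of the most recent rewards.

```python
import collections
import statistics

episodes_reward = collections.deque(maxlen=3)  # tutorial uses maxlen=100
for r in [10, 20, 30, 40]:
    episodes_reward.append(r)  # appending the 4th reward evicts the 1st

print(list(episodes_reward))             # [20, 30, 40]
print(statistics.mean(episodes_reward))  # 30
```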
Depending on your runtime, training can finish in less than a minute.
min_episodes_criterion = 100
max_episodes = 10000
max_steps_per_episode = 500
# `CartPole-v1` is considered solved if the average reward is >= 475 over 100
# consecutive trials
reward_threshold = 475
running_reward = 0
# The discount factor for future rewards
gamma = 0.99
# Keep the last episodes reward
episodes_reward: collections.deque = collections.deque(maxlen=min_episodes_criterion)
t = tqdm.trange(max_episodes)
for i in t:
  initial_state, info = env.reset()
  initial_state = tf.constant(initial_state, dtype=tf.float32)
  episode_reward = int(train_step(
      initial_state, model, optimizer, gamma, max_steps_per_episode))

  episodes_reward.append(episode_reward)
  running_reward = statistics.mean(episodes_reward)

  t.set_postfix(
      episode_reward=episode_reward, running_reward=running_reward)

  # Show the average episode reward every 10 episodes
  if i % 10 == 0:
    pass  # print(f'Episode {i}: average reward: {avg_reward}')

  if running_reward > reward_threshold and i >= min_episodes_criterion:
    break

print(f'\nSolved at episode {i}: average reward: {running_reward:.2f}!')
6%|▌ | 578/10000 [01:00<16:29, 9.52it/s, episode_reward=500, running_reward=478]
Solved at episode 578: average reward: 477.60!
CPU times: user 2min 4s, sys: 18.7 s, total: 2min 23s
Wall time: 1min
After training, it would be good to visualize how the model performs in the environment. You can run the cells below to generate a GIF animation of one episode run of the model. Note that additional
packages need to be installed for Gym to render the environment's images correctly in Colab.
# Render an episode and save as a GIF file
from IPython import display as ipythondisplay
from PIL import Image
render_env = gym.make("CartPole-v1", render_mode='rgb_array')
def render_episode(env: gym.Env, model: tf.keras.Model, max_steps: int):
  state, info = env.reset()
  state = tf.constant(state, dtype=tf.float32)
  screen = env.render()
  images = [Image.fromarray(screen)]

  for i in range(1, max_steps + 1):
    state = tf.expand_dims(state, 0)
    action_probs, _ = model(state)
    action = np.argmax(np.squeeze(action_probs))

    state, reward, done, truncated, info = env.step(action)
    state = tf.constant(state, dtype=tf.float32)

    # Render screen every 10 steps
    if i % 10 == 0:
      screen = env.render()
      images.append(Image.fromarray(screen))

    if done:
      break

  return images
# Save GIF image
images = render_episode(render_env, model, max_steps_per_episode)
image_file = 'cartpole-v1.gif'
# loop=0: loop forever, duration=1: play each frame for 1ms
images[0].save(
    image_file, save_all=True, append_images=images[1:], loop=0, duration=1)
import tensorflow_docs.vis.embed as embed
embed.embed_file(image_file)
Next steps
This tutorial demonstrated how to implement the Actor-Critic method using Tensorflow.
As a next step, you could try training a model on a different environment in Gym.
For additional information regarding Actor-Critic methods and the CartPole problem, you may refer to the following resources:
For more reinforcement learning examples in TensorFlow, you can check the following resources: | {"url":"https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic","timestamp":"2024-11-12T00:23:34Z","content_type":"text/html","content_length":"237869","record_id":"<urn:uuid:d713d9d8-322a-4b5e-867c-31af69f8aad2>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00012.warc.gz"} |
Inconsistencies and Incompatibilities Require Changes to Relativity
(written twenty years ago)
As Einstein vehemently maintained, it is the very nature of light (now most completely described by quantum theories) that is at the heart of the formalism of SR. But now Quantum Electrodynamics
encompasses the explanation of electron energy exchanges and has, therefore, supplanted classical electrodynamics which was developed by Maxwell for the detailed explanation of related phenomena. In
Lorentz’s mathematical precursor of SR the derivation was based on an electron theory which forced equations of electrodynamics to be invariant under uniform relative motions (in his case with
respect to an ether). Einstein also demonstrated the invariance of Maxwell’s equations with respect to a Lorentz transformation as signally significant. One should certainly have expected two thus
intricately related theories for which a complimentary symmetry could be established between scientific domains, to have exhibited complimentary formalisms. Far from being the case, however, SR is
based exclusively on a simple set of algebraic equations relating classical parameters in the state vector of an observed object (or event on the object) to that what would be observed by another in
a simple one-to-one mapping assuming deterministic projection. QM on the other hand is embodied in Schrödinger’s complex differential operator equation (or equivalently in Heisenberg’s matrix
mechanics) for which solutions have no direct classical analog. Their only consistent meaning involves interpretation of a product of the solution with its complex conjugate to form a probability
density function from which ‘expected’ classical parameter values for the observable state vector can be calculated. These differences are somewhat understandable since SR provides the analogy of
coordinate conversion and Schrödinger’s approach is more directly analogous to electromagnetic field equations, but SR was validated with respect to those classical equations which Schrödinger
replaced. However, the basic postulate of SR is that all of the fundamental laws of physics must be invariant under Lorentz transformations and this most basic equation of QM isn’t – although a
Klein-Gordon version does provide that nominal covariance. Furthermore, the assumption of a deterministic projection as assumed by SR is inherently incompatible with indeterminism as demanded by QM.
Thus, the formalisms and methodologies of the two theories have been totally unique from inception onward. Both of the theories are considered to be firmly based on confirmed observations, but they
embrace different conceptions of what even constitutes an observer or an observation. SR credulously embraces a sentient framework endowed with capabilities to assess space and time values for any
event occurring within a space/time cone encompassing all past and future events of the entire universe that can in any way be causally related within the particular reference frame. QM, on the other
hand, addresses observation with extreme suspicion; it assumes the very act of observation to be no less significant in many cases than the action that is being observed, and in general to be
associated with an uncertainty in determination of the objects of observation. SR employs extreme realism to the extent that ‘real’ contraction is assumed even after its having been shown by Penrose
and others that such contractions cannot be observed. (Actually, a second transformation is required in determining contemporaneous observations whose results can be considered on a par with what are
called ‘observables’ in QM. Lorentz transformations provide four-dimensional correspondences with what are noncontemporaneous events on a rigid structure in the ‘other’ frame assuming observations
equivalent to what would be the case if light sources were stationary with respect to the observer.) Correspondence requires a further transformation of the ‘field of vision’ to obtain the
contemporaneous ‘visual’ observation prediction for another observer. Thus Einstein’s interpretation of the results of the Lorentz transformations (invoking a unique space/time metric) presupposes
the existence of an intermediate level of reality beneath (visual) observation with related confusing terminologies involving ‘actual’ and ‘observed’ as essential to an interpretation of experiments.
QM, on the other hand, has been interpreted almost exclusively over the last three quarters of a century according to a Copenhagen Interpretation embracing a most extreme form of positivism,
sometimes called ‘logical empiricism,’ that denies that there can even be meaning to concepts for which direct observation cannot be obtained. Thus, QM requires a direct correlation with
‘observation’ in a sense other than could be supported by any of the currently accepted interpretations of an ‘observation’ in relativity. Furthermore, alternative observations by separate (or even
the same) observers of an event on an object in QM can only be statistically correlated since each observation involves its own inherently unique state variations. SR assumes inherently unique
coordinate realities for relatively moving coincident observers, but the vehicle of observation (a ‘ray’ of light) is according to Einstein’s interpretation shared by the two observers to accommodate
the precise anticipation of an observation by third parties using a velocity addition formula. The essential philosophical differences of these theories, therefore, result in each entirely refuting
the validity of the approach taken by the other.
If the geological theory of plate tectonics were to be applied by analogy to stresses accumulating between adjacent domains within our scientific ‘World View,’ it would surely indicate that we are
overdue for seismic activity along this fault line. Or as Kuhn would say, “In fact, however, step by step their deep divergences and incoherencies emerge increasingly within the scientific community,
but people do not see them until finally the confusion becomes so great that the situation breaks down.”
It is inevitable that physical theories should be continually replaced, but a completely smooth evolution of their domains does not occur. This is partly, of course, because of incommensurabilities
that Kuhn has identified with alternative theoretical paradigms that are as inevitable as change itself, but in addition, human loyalty tends to weigh more heavily than objective thinking or the
surpassing value of sincerity ought to accommodate. In deference to William of Occam it should be acknowledged that it is much simpler not to rock a floating boat to obtain a marginally better
oarsman and, therefore, to be replaced, a theory must offend much more than mere philosophy. But, inevitably, change does occur. There are many reasons why theories are ultimately replaced – why
alternative, even though mathematically equivalent, interpretations must be changed. Experimental data may accumulate which cannot (or which can only awkwardly) be accounted for by the original theory.
Analogies of terminology may become so completely absurd that previously unquestioned relationships must either now be demonstrated to be independent of the analogy or withdrawn. More comprehensive
formalization accommodates a merger of theoretical treatments of various theories that were formerly appropriate but only within a more restricted scientific domain. Finally, it seems altogether
fitting that as (and if) the philosophy of science matures, the credibility of older, philosophically unsound, theories may completely erode, even if only gradually, until they are completely abandoned.
All these factors favoring change are present in varying degrees in the current situation; it seems to the author that they are more especially damaging to SR than to QM as currently interpreted by
Cramer, although this is not the conventional wisdom. At any rate, there have been considerably more revolutionary developments since SR was originally conceived than is the case for QM, even though only slightly more years have transpired.
Amps to Volts Calculators
Amps to Volts Calculators: In the world of electrical engineering and DIY electronics, converting between current (measured in Amperes or Amps) and voltage (measured in Volts) is a common task. One
of the tools that can simplify this process is an Amps to Volts Calculator. In this article, we will explore what these calculators are, how they work, and their advantages and disadvantages.
What is Amps to Volts?
Amps to Volts refers to the process of converting electrical current measured in amperes to voltage. This conversion is crucial in understanding electrical systems and circuits. The relationship
between current (I), voltage (V), and resistance (R) in a circuit is defined by Ohm's Law, which states that:
V = I × R
• V is the voltage in volts
• I is the current in amperes
• R is the resistance in ohms
What is an Amps to Volts Calculator?
An Amps to Volts Calculator is a tool designed to convert current (measured in amps) to voltage. It simplifies the process of calculating the voltage in an electrical circuit when the current and
resistance are known. This type of calculator is particularly useful for electricians, engineers, and hobbyists working on electrical projects.
How to Use an Amps to Volts Calculator
Using an Amps to Volts Calculator is straightforward. Here’s a step-by-step guide:
1. Input the Current: Enter the value of the current in amperes.
2. Input the Resistance: Enter the value of the resistance in ohms.
3. Calculate: Click the 'Calculate' button to obtain the voltage.
The calculator will use the formula V = I × R to compute the voltage. The result will be displayed almost instantly, making it easy to perform quick calculations.
What is the Formula for Amps to Volts Conversion?
The formula used for converting amps to volts is derived from Ohm's Law. The basic formula is:
V = I × R
• V = Voltage (Volts)
• I = Current (Amperes)
• R = Resistance (Ohms)
This formula is fundamental in electrical engineering and helps in designing and analyzing electrical circuits.
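The formula can be expressed directly in code. This is a minimal sketch; the function and variable names are illustrative and not tied to any particular calculator:

```python
def volts_from_amps(current_amps, resistance_ohms):
    """Ohm's law: V = I * R."""
    return current_amps * resistance_ohms

# A 2 A current through a 5-ohm resistor corresponds to 10 V
print(volts_from_amps(2.0, 5.0))  # → 10.0
```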
Advantages and Disadvantages of Amps to Volts Calculators
Advantages:
• Simplicity: These calculators provide a simple and quick way to perform conversions.
• Accuracy: They reduce the chances of manual calculation errors.
• Convenience: Easily accessible online and in app forms, making them handy for on-the-go calculations.
• Efficiency: Saves time compared to manual calculations.
Disadvantages:
• Dependence: Relying too much on calculators may reduce one’s problem-solving skills.
• Accuracy of Input: The accuracy of the result is dependent on the correct input values. Incorrect values will lead to incorrect results.
• Limited Scope: These calculators are useful for simple conversions but may not handle complex scenarios involving varying resistance or non-linear components.
Additional Information
Amps to Volts Calculators are just one part of the broader toolkit used by engineers and electricians. They are often used in conjunction with other tools like multimeters and oscilloscopes to ensure
accurate measurements and to troubleshoot electrical issues. Understanding how to interpret the results and the limitations of these calculators is essential for effective electrical work.
Frequently Asked Questions
What is the difference between Amps and Volts?
Amps (amperes) measure the amount of electrical current flowing through a circuit, while Volts measure the electrical potential difference or force that drives the current through the circuit.
Essentially, amps indicate the flow of electricity, and volts indicate the pressure pushing that flow.
Can I use an Amps to Volts Calculator for AC circuits?
Yes, you can use an Amps to Volts Calculator for AC circuits, provided you have the resistance value and the current measurement. However, for AC circuits, you might need to consider additional
factors such as impedance, which includes both resistance and reactance.
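For the AC case, a hedged sketch using Python's built-in complex numbers; the impedance and current values below are made-up illustrative figures:

```python
# AC form of Ohm's law: V = I * Z, where Z = R + jX combines
# resistance R and reactance X into a single complex impedance.
Z = complex(50, 30)   # example: 50-ohm resistance, 30-ohm reactance
I = 2.0               # current magnitude in amps
V = I * Z             # complex voltage
print(abs(V))         # voltage magnitude in volts
```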
Are there any mobile apps for Amps to Volts calculations?
Yes, there are numerous mobile apps available for Amps to Volts calculations. These apps often offer additional features such as saving calculation history, unit conversions, and integration with
other electrical engineering tools.
What should I do if the calculator gives unexpected results?
If the calculator provides unexpected results, check the input values for accuracy. Ensure that you have entered the correct current and resistance values. Also, verify that you are using the
calculator properly and that it is suitable for your specific calculation needs.
Minimax-robust prediction problem for stochastic sequences with stationary increments and cointegrated sequences
Keywords: Stochastic sequence with stationary increments, cointegrated sequence, minimax-robust estimate, mean square error, least favorable spectral density, minimax-robust spectral characteristic
The problem of optimal estimation of the linear functionals $A\xi=\sum_{k=0}^{\infty}a(k)\xi(k)$ and $A_N\xi=\sum_{k=0}^{N}a(k)\xi(k)$ which depend on the unknown values of a stochastic
sequence $\xi(m)$ with stationary $n$th increments is considered. Estimates are obtained which are based on observations of the sequence $\xi(m)+\eta(m)$ at points of time $m=-1,-2,\ldots$, where the
sequence $\eta(m)$ is stationary and uncorrelated with the sequence $\xi(m)$. Formulas for calculating the mean-square errors and spectral characteristics of the optimal estimates of the functionals
are derived in the case of spectral certainty, where spectral densities of the sequences $\xi(m)$ and $\eta(m)$ are exactly known. These results are applied for solving extrapolation problem for
cointegrated sequences. In the case where spectral densities of the sequences are not known exactly, but sets of admissible spectral densities are given, the minimax-robust method of estimation is
applied. Formulas that determine the least favorable spectral densities and minimax spectral characteristics are proposed for some special classes of admissible densities.
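As background, the minimax-robust approach referred to here is usually formalized by a saddle-point condition; the following is a standard sketch of that criterion, with notation assumed rather than quoted from the paper. Writing $\Delta(f,g;h)$ for the mean-square error of an estimate with spectral characteristic $h$ when the true spectral densities are $(f,g)$, the least favorable densities $(f_0,g_0)$ and the minimax spectral characteristic $h_0$ satisfy

```latex
\Delta(f_0, g_0; h_0)
  = \max_{(f,g) \in D} \min_{h} \Delta(f, g; h)
  = \min_{h} \max_{(f,g) \in D} \Delta(f, g; h),
```

so that $h_0$ minimizes the worst-case mean-square error over the given class $D$ of admissible spectral densities.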
How to Cite
Luz, M., & Moklyachuk, M. (2015). Minimax-robust prediction problem for stochastic sequences with stationary increments and cointegrated sequences. Statistics, Optimization & Information Computing, 3
(2), 160-188. https://doi.org/10.19139/soic.v3i2.132
Proofs and Logic
This course is MAT 2071, Introduction to Proofs and Logic, taking place in the Fall 2015 semester with Professor Reitz. We will be using this website in a variety of ways this semester – as a
central location for information about the course (assignments, review sheets, policies, and so on), a place to write about the work we are doing, to ask and answer questions, to post examples of our
work, and to talk about logic, proofs, mathematics, reality and so on.
Getting Started
Anyone on the internet can look around the site and see what we are doing, and even leave a comment on one of the pages. However, only registered users can create new posts and participate in the
discussion boards.
How do I register?
You will need to do two things:
1. If you have not used the OpenLab before, you must first create an account. You will need access to your citytech email address for this. Detailed instructions for signing up on the OpenLab can
be found here.
2. Once you have created an account on the OpenLab, log in and then join this particular course, 2015 Fall – MAT 2071 Proofs and Logic – Reitz. To do this, first click the “Course Profile” link at
the top left of this page (just under the picture). Then click the “Join Now” button, which should appear just underneath the picture.
Problems with the OpenLab or with your CityTech email:
Please let me know if you run into any problems registering or joining our course (send me an email, jreitz@citytech.cuny.edu). I also wanted to give you two resources to help out in the process:
1. For problems with your citytech email account, contact the Student Computing Helpdesk, either in person, by phone, or by email:
Student Computing Helpdesk
Location: Namm First Floor – Information Booth
Hours: TBD (usually 9am – 5pm Mon-Fri)
Phone: 718.260.4900
E-mail: Studenthelpdesk@citytech.cuny.edu
Their website also contains tutorials and FAQ on common problems
2. For problems registering for the OpenLab, contact the OpenLab support team, either by email at openlab@citytech.cuny.edu, or by following this link.
The Stacks project
Definition 59.57.1. Let $G$ be a topological group.
1. A $G$-module, sometimes called a discrete $G$-module, is an abelian group $M$ endowed with a left action $a : G \times M \to M$ by group homomorphisms such that $a$ is continuous when $M$ is
given the discrete topology.
2. A morphism of $G$-modules $f : M \to N$ is a $G$-equivariant homomorphism from $M$ to $N$.
3. The category of $G$-modules is denoted $\text{Mod}_ G$.
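A standard reformulation, sketched here for context (it is not part of the definition above): because $M$ carries the discrete topology, continuity of the action $a$ is equivalent to openness of all stabilizers,

```latex
a \text{ is continuous}
  \iff \operatorname{Stab}_G(m) = \{\, g \in G \mid a(g, m) = m \,\}
  \text{ is open in } G \text{ for all } m \in M.
```

In particular, when $G$ is profinite (the main case of interest in Galois cohomology) this says $M = \bigcup_U M^U$, the union running over the open subgroups $U \subseteq G$.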
Comments (2)
Comment #3200 by Dario Weissmann on
Typo: Directly above definition 53.56.2 the category of R-modules should be denoted Mod_R instead of mod_R.
Comment #3304 by Johan on
Thanks, fixed here.
{sets} -- A lightweight constraint programming language based on ROBDDs
2007 Articles
Constraint programming is a step toward ideal programming: you merely define the problem domain and the constraints the solution must meet and let the computer do the rest. Many constraint
programming languages have been developed; the majority of them employ iterative constraint propagation over the problem variables. While such an approach solves many problems and can handle very
rich data types, it is often too inefficient to be practical. To address this problem, we developed a constraint programming language called {sets} that uses reduced ordered binary decision diagrams
(ROBDDs) as the solution engine. Providing a minimal syntax, the language can be used to solve many finite problems that fit the constraint programming paradigm. The minimal syntax and simple
semantics of the language enable the user to create libraries customized for a specific problem domain. {sets} is particularly useful in problems where an efficient search algorithm yet know to exist
or can not be developed due to time constraints. As long as the solution domain is finite and discrete, {sets} should be able to describe the problem and search for a solution. We describe the {sets}
language through a series of examples, show how it is compiled into C++ code that uses a public-domain ROBDD library, and compare the performance of this language with other constraint languages.
Also Published In
Proceedings of the IADIS International Conference on Applied Computing: Salamanca, Spain, 18-20 February 2007.
International Association for Development of the Information Society
Published Here
September 22, 2011
Quadibloc 2002E RA, RC, and RR
This section involves three variants of Quadibloc 2002E which involve changes to the combiner portions of either the new type rounds (based on Quadibloc 2002) or to the core rounds.
Quadibloc 2002E RA
Another variation on Quadibloc 2002E U operates on 128-bit blocks. Here, the two stretches of eight new type rounds are replaced by stretches of twelve new type rounds, and, in addition, the twelve
new type rounds are modified to use three different combiners operating on the rightmost 32 bits of the block. To preserve invertibility by changing only the key schedule, the order in which the
combiners are used is reversed in the second stretch of twelve new type rounds.
The three different types of combiners alternate three different ways in which the 64 bits of input from the new type round f-function is concealed within the encipherment of the last 32 bits of the
block, thus leading to the name of this variant, Quadibloc 2002E RA (Rotating Ambiguity).
The first combiner shown is identical to that in Quadibloc 2002E U, and it folds the 64 bits of input into modifying a 32 bit subblock by having four, rather than two, Feistel rounds within the
f-function. The second combiner folds the 64 bits of input into a 32-bit value by using the second 32 bits of the input as two subkeys to encipher the first 32 bits. The third combiner folds the 64
bits of input into modifying a 32 bit subblock by having the combiner consist of four, rather than two, Feistel rounds.
It is intended that having successive rounds differing in this important aspect of how multiple f-function outputs produce the same result should remove an important regularity that a cryptanalyst
would need to exploit.
The additional key material required, over and above that needed by the original Quadibloc 2002E, is as follows:
• One hundred and sixty-eight 32-bit subkeys, K497 through K664
• Twenty-four 64-bit subkeys the bytes of which are the outputs of a 4 of 8 code, EK29 through EK52 (exchange keys)
• Sixteen 32-bit subkeys the bytes of which are the outputs of a 4 of 8 code, SEK1 through SEK16 (short exchange keys)
• Eight 16-bit subkeys the bytes of which are the outputs of a 4 of 8 code, TEK1 through TEK8 (tiny exchange keys)
• One hundred and ninety-two subkey pools, each comprising sixteen 32-bit subkeys, SP1 through SP192 (subkey pools)
• Fourteen S-boxes containing 256 8-bit elements, forming a permutation of the values from 0 to 255, SB17 through SB30 (bijective S-boxes)
• Eight S-boxes containing 256 16-bit elements, having no special properties, SR3 through SR10 (random S-boxes)
This key material is generated after the key material in the Quadibloc 2002E key schedule, making the order of subkey generation the following:
• Subkeys K1 through K192
• S-boxes SB1 and SB2
• Subkeys LK1 through LK22
• S-boxes SB3 and SB4
• Subkeys EK1 through EK12
• Subkeys K193 through K240
• S-box SB5
• Subkeys K241 through K496
• S-boxes SB6 through SB11
• Subkeys EK13 through EK20
• S-boxes SB12 through SB15
• Subkeys EK21 through EK28
• S-box SB16
• S-boxes SR1 and SR2
• Subkeys K497 through K616
• S-boxes SR3 and SR4
• S-box SB17
• Subkeys K617 through K664
• Subkey pools SP1 through SP192
• S-boxes SB18 through SB22
• S-boxes SR5 through SR8
• Subkeys SEK1 through SEK16
• S-box SB23
• Subkeys TEK1 through TEK8
• S-boxes SB24 through SB26
• S-boxes SR9 and SR10
• S-boxes SB27 through SB29
thus, note that while the first new type round in Quadibloc 2002E U uses subkeys 497 through 501 and 577 and 578, the first new type round in Quadibloc 2002E RA uses subkeys 497 through 501 and 617
and 618, since now there are twenty-four instead of sixteen new type rounds, so there are 120 rather than 80 subkeys of the type that are used five per round, and 48 instead of 32 subkeys of the type
that are used two per round.
For decipherment, the order of short exchange keys SEK1 through SEK16 is reversed, and their bits are complemented, and the order of tiny exchange keys TEK1 through TEK8 is reversed, and their bits
are complemented. Also, the order of subkeys K497 through K616 is reversed, and the order of pairs of subkeys within K617 through K664 is reversed, reflecting the larger number of new type subkeys in
Quadibloc 2002E RA as against Quadibloc 2002E U.
Quadibloc 2002E RC
The main combiner in the core rounds can be replaced by another design, as illustrated below:
Here, SB16 and EK13 through EK20 are eliminated from the cipher, along with the bit swap and substitution operations that use them, and LK23 through LK38 are added to Quadibloc 2002E to produce
Quadibloc 2002E RC (Revised Combiner).
In the key schedule, LK23 through LK38 are generated when EK13 through EK20 would have been.
For decipherment, the order of keys LK23 through LK38 is reversed.
Quadibloc 2002E RR
Combining Quadibloc 2002E RA with a modified version of Quadibloc 2002E RC yields Quadibloc 2002E RR (Rotating Revised). As in Quadibloc 2002E RC, LK23 through LK38 are added, this time after EK13
through EK20, as those exchange keys are now retained, unlike the case in Quadibloc 2002E RC. In addition, S-boxes SR11 and SR12 are added to the key schedule, and eight 32-bit keys K665 through K672.
The order of subkey generation in Quadibloc 2002E RR is as follows:
• Subkeys K1 through K192
• S-boxes SB1 and SB2
• Subkeys LK1 through LK22
• S-boxes SB3 and SB4
• Subkeys EK1 through EK12
• Subkeys K193 through K240
• S-box SB5
• Subkeys K241 through K496
• S-boxes SB6 through SB11
• Subkeys EK13 through EK20
• Subkeys LK23 through LK38
• Subkeys K665 through K672
• S-boxes SR11 and SR12
• S-boxes SB12 through SB15
• Subkeys EK21 through EK28
• S-box SB16
• S-boxes SR1 and SR2
• Subkeys K497 through K616
• S-boxes SR3 and SR4
• S-box SB17
• Subkeys K617 through K664
• Subkey pools SP1 through SP192
• S-boxes SB18 through SB22
• S-boxes SR5 through SR8
• Subkeys SEK1 through SEK16
• S-box SB23
• Subkeys TEK1 through TEK8
• S-boxes SB24 through SB26
• S-boxes SR9 and SR10
• S-boxes SB27 through SB29
The modified form of the combiner from Quadibloc 2002E RC used in Quadibloc 2002E RR is as illustrated below:
The modified combiner appears asymmetric, but the swap halves operations on the left half of the block in the core rounds ensure that both halves of the block do become modified.
Note that in the second Feistel round of the combiner, in order to obtain greater variation than that provided by the simple XOR of subkeys, the roles of SB12 and SB13 have been exchanged, and thus
these two S-boxes need to be switched in the deciphering key schedule. Note that S-box SB16 and EK13 through EK20 are returned to the key schedule; here, they, combined with the order in which the
32-bit subblocks of the outputs of the XORs with the subkeys LK23 through LK38 are used, allow the four rounds to use their subkeys in reverse order as well, to further differentiate the two stages.
The order of subkeys K665 through K672 must be reversed for deciphering; as well, the order of subkeys LK23 through LK38 must be reversed as in Quadibloc 2002E RC, and as in Quadibloc 2002E itself,
the order of subkeys EK13 through EK20 must be reversed and their bits must be inverted; and the other subkey changes for deciphering are as in Quadibloc 2002E RA.
One-Step Subtraction Equations
What are One-Step Subtraction Equations?
A One-Step Subtraction Equation is a type of equation where the operation involved is subtraction, and solving the equation requires only one operation. The goal is to find the value of the variable
(usually represented as x).
Consider the equation:
x - 4 = 9
In this equation, we need to find the value of x that makes the equation true. The operation involved is subtraction: 4 is being subtracted from x, and our job is to solve for x.
Process of Finding the Solution:
1. Understand the equation:
□ You have x, which is the unknown number.
□ Subtracting 4 from x gives 9.
2. Undo the subtraction:
□ To solve for x, you need to undo the subtraction by performing the opposite operation, which is addition. This step isolates the variable.
□ Add 4 to both sides of the equation:
x - 4 + 4 = 9 + 4
Simplifying this gives:
x = 13
3. Check your solution:
□ Substitute x = 13 back into the original equation to verify:
13 - 4 = 9
9 = 9
Since both sides are equal, your solution is correct!
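The same steps can be sketched in a few lines of code (the function name is illustrative):

```python
def solve_x_minus_a_equals_b(a, b):
    """Solve x - a = b by adding a to both sides: x = b + a."""
    return b + a

x = solve_x_minus_a_equals_b(4, 9)
print(x)           # → 13
assert x - 4 == 9  # substitute back to check the solution
```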
Key Concept
Although the equation involves subtraction, you solve it by performing the opposite operation (addition) to isolate the variable. It's called a One-Step Equation because only one operation (addition
in this case) is required to find the solution.
This study is devoted to the readiness of future mathematics teachers to develop pupils' mathematical abilities. The development of pupils' mathematical abilities is one of the most important problems in learning mathematics, and many Russian and foreign scholars have researched mathematical abilities. Tutoring support provided for young teachers and graduates of the Kazan Federal University pedagogical department revealed a number of issues: novice teachers engage "average" students, do not implement an individual approach, and have difficulties both in determining pupils' mathematical abilities and in applying methods for their development. Consequently, the purpose of this study was to identify ways to improve the training of future mathematics teachers in terms of developing students' mathematical abilities. Analysis of the literature showed a distinction between ordinary "school" abilities to assimilate, reproduce, and apply mathematical knowledge on the one side, and creative mathematical abilities associated with designing an original product on the other. In this regard, questionnaires were drawn up to establish the ideas of KFU students, as future mathematics teachers, about the current state and the possibilities of developing pupils' mathematical abilities. The study determined that the success of future teachers' training depends on many factors; a considerable place must be given both to developing students' mathematical abilities themselves and to developing their pedagogical competencies.
3. Direct Model-Theoretic Semantics
The model-theoretic semantics for SWRL is a straightforward extension of the semantics for OWL given in the OWL Semantics and Abstract Syntax document [OWL S&AS]. The basic idea is that we define
bindings, extensions of OWL interpretations that also map variables to elements of the domain. A rule is satisfied by an interpretation iff every binding that satisfies the antecedent also satisfies
the consequent. The semantic conditions relating to axioms and ontologies are unchanged, e.g., an interpretation satisfies an ontology iff it satisfies every axiom (including rules) and fact in the ontology.
3.1. Interpreting Rules
From the OWL Semantics and Abstract Syntax document we recall that, given a datatype map D, an abstract OWL interpretation is a tuple of the form
I = <R, EC, ER, L, S, LV>
where R is a set of resources, LV ⊆ R is a set of literal values, EC is a mapping from classes and datatypes to subsets of R and LV respectively, ER is a mapping from properties to binary relations
on R, L is a mapping from typed literals to elements of LV, and S is a mapping from individual names to elements of EC(owl:Thing). To handle the built-in relations, we augment the datatype map to map
the built-in relations to tuples over the appropriate sets. That is, op:numeric-add is mapped into the triples of numeric values that correctly interpret numeric addition.
Note that allowing the datatype map to vary allows different implementations of SWRL to implement different built-in relations. It is suggested that if a SWRL implementation implements a particular
datatype, then it implement the built-ins for that datatype from Section 8.
Given an abstract OWL interpretation Ι, a binding B(Ι) is an abstract OWL interpretation that extends Ι such that S maps i-variables to elements of EC(owl:Thing) and L maps d-variables to elements of
LV respectively. An atom is satisfied by an interpretation Ι under the conditions given in the Interpretation Conditions Table, where C is an OWL DL description, D is an OWL DL data range, P is an
OWL DL individual-valued property, Q is an OWL DL data-valued property, f is a built-in relation, x, y are variables or OWL individuals, and z is a variable or an OWL data value.
Interpretation Conditions Table
│ Atom │Condition on Interpretation │
│C(x) │S(x) ∈ EC(C) │
│D(z) │S(z) ∈ EC(D) │
│P(x,y) │<S(x),S(y)> ∈ ER(P) │
│Q(x,z) │<S(x),L(z)> ∈ ER(Q) │
│sameAs(x,y) │S(x) = S(y) │
│differentFrom(x,y) │S(x) ≠ S(y) │
│builtIn(f,z1,...,zn) │<S(z1),...,S(zn)> ∈ D(f) │
Note that this interpretation of the built-in relations is very permissive. It is not necessary for a built-in relation to have a fixed arity, nor is it an error to use a built-in relation with a
fixed arity with the wrong number of arguments. For example builtIn(op:numeric-add ?x 5) is simply unsatisfiable, and not a syntax error.
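As a sketch of this reading (my own toy construction, not any particular SWRL implementation), a built-in is just a set of value tuples, and an atom with the wrong number of arguments simply fails to be in the relation rather than raising an error:

```python
# D maps op:numeric-add to the relation {(x, y, z) : x = y + z};
# here we just test tuple membership directly instead of materializing it.
def in_numeric_add(args):
    return len(args) == 3 and args[0] == args[1] + args[2]

print(in_numeric_add((8, 3, 5)))  # True: <8,3,5> correctly interprets numeric addition
print(in_numeric_add((8, 5)))     # False: wrong arity, so the atom is unsatisfiable
```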
A binding B(Ι) satisfies an antecedent A iff A is empty or B(Ι) satisfies every atom in A. A binding B(Ι) satisfies a consequent C iff C is not empty and B(Ι) satisfies every atom in C. A rule is
satisfied by an interpretation Ι iff for every binding B such that B(Ι) satisfies the antecedent, B(Ι) also satisfies the consequent.
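These satisfaction conditions can be sketched concretely over a finite domain (a toy reduction of my own: real SWRL interpretations need not be finite, and the predicate and individual names below are invented):

```python
from itertools import product

def satisfies(atom, binding, interp):
    """An atom P(args) holds iff the tuple of bound values is in ER(P)."""
    pred, args = atom
    values = tuple(binding.get(a, a) for a in args)  # variables map to values; individuals pass through
    return values in interp[pred]

def rule_satisfied(antecedent, consequent, variables, domain, interp):
    """True iff every binding satisfying the antecedent also satisfies the consequent."""
    for values in product(domain, repeat=len(variables)):
        binding = dict(zip(variables, values))
        if all(satisfies(a, binding, interp) for a in antecedent):
            # per the definition above, an empty consequent is never satisfied
            if not consequent or not all(satisfies(c, binding, interp) for c in consequent):
                return False
    return True

# hasParent(?x,?y) ∧ hasBrother(?y,?z) -> hasUncle(?x,?z)
interp = {
    "hasParent":  {("mary", "john")},
    "hasBrother": {("john", "bill")},
    "hasUncle":   {("mary", "bill")},
}
print(rule_satisfied(
    antecedent=[("hasParent", ("?x", "?y")), ("hasBrother", ("?y", "?z"))],
    consequent=[("hasUncle", ("?x", "?z"))],
    variables=["?x", "?y", "?z"],
    domain=["mary", "john", "bill"],
    interp=interp,
))  # True
```

Removing the `("mary", "bill")` fact from `hasUncle` makes `rule_satisfied` return `False`: the one binding that satisfies the antecedent no longer satisfies the consequent.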
Note that rule annotations have no semantic consequences and neither do the URI references associated with rules. This is different from the situation for OWL itself, where annotations do have semantic consequences.
The semantic conditions relating to axioms and ontologies are unchanged. In particular, an interpretation satisfies an ontology iff it satisfies every axiom (including rules) and fact in the
ontology; an ontology is consistent iff it is satisfied by at least one interpretation; an ontology O[2] is entailed by an ontology O[1] iff every interpretation that satisfies O[1] also satisfies O | {"url":"https://www.daml.org/2004/04/swrl/direct.html","timestamp":"2024-11-13T13:06:38Z","content_type":"application/xhtml+xml","content_length":"7688","record_id":"<urn:uuid:e1467b9f-f8f1-4df1-9b60-6afad6ad4729>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00659.warc.gz"} |
Can someone assist with SPSS correlation and regression analysis for clinical trials? | Pay Someone To Do My SPSS Assignment
Can someone assist with SPSS correlation and regression analysis for clinical trials? In the electronic version of the spreadsheet paper, a subset of clinical trials, selected for the correlation and
regression analysis, contains 12 lines of data, thus is represented for each trial. Studies are drawn from the published online version of Figure 2.2 to illustrate the results. Because these trial
data are not included in the meta-analysis, they are represented for analysis in this paper. The authors conducted a meta-analysis for every 10 trials that were selected for the correlation and
regression analysis, each on both methods. In analyzing each trial in this manuscript, the authors obtained several comparisons that the authors could consider, and two basic models are built: The
base model provides the exact model which derives the results of the fitting procedure, and the addition and subtraction methods provide the values of the coefficients. The equation used is shown in
Fig. 3.1 for each trial. There was no evidence for a significant or significant effect on age for any trial at all. The data from one trial (Fig 3.1) are indicated in the text. Fig. 3.1 Note that to
obtain samples from each trial, a subgroup method for every trial of each study is also provided. To obtain a general meta-analysis (to obtain 95% confidence), a number of subgroups methods currently
available (including random effects in random regression models and likelihood-ratio systems) are needed. Overall, in analyzing the data obtained by EMRs, the authors concluded that
further steps should be a matter of personal judgment. The authors in conclusion agreed that the method of regression analysis provides some evidence that the increased risks observed from
the above-mentioned inclusion of four or five models in the model (in fact, model 1) and the higher total exposure of only four or five models in the model (in fact, model 2) are substantial. In this
manuscript, however, the authors evaluated for the potential effects of further steps to obtain more detailed statistical evaluation of included models by the meta-analysis. It is discussed whether
the results of similar analyses can be followed from different authorings.
A narrative summary summarizing what the different authorings had to say can be found in “Ethics of meta analysis” (Fig 1.1). Note that further comparisons did not occur, as the paper was already
decided for later evaluation. The statistical and methodological properties of each trial are tabulated in Table 3.1. Finally, the discussion regarding significance of statistical differences is
listed in the last column which is presented in the next section. Table 3.1 Summary of Statistical Derivation of Meta-Analysis Results from EMRs It will be shown from the above discussion that in
addition to the methods specified, the additional steps can provide clear conclusions on this topic. The present approach can also clearly provide the information on the important statistical
considerations. How is this paper published? The paper is published in PhysCan someone assist with SPSS correlation and regression analysis for clinical trials? **Ewerle & Holinsch** Departments of
Pharmacy Analysis and Statistics, Division of Pharmacy Dynamics, University of Malaga College of Pharmacy, Germany **Information availability** **in accordance with hospital policy** ### Methodology
Source: e-Publit(3), 2018. ###### Methods **Introduction** Over the past ten years, e-Publit(3) has provided the context that guided the development of e-PC software for SPSS-calculable clinical
trials. However, there are important limitations to this new software tool. For example, by utilizing the same data database from SPSS, the investigators are able to perform the treatment comparisons
between the different inclusion and exclusion tests. This extension of the software allows the investigators not to use different time frames to review multiple DUSCA-EPCs simultaneously, and is
limited to patient self- reports. In addition to this limitation, according to our research, in certain clinical trials, a single database of patients is used to select the DUSCA-EPCs according to
expectations.
Another limitation relates to the performance requirements of its data source. In order to provide the clinical data to the authors of the DUSCA-EPCs, their analysis needs are required to be
reliable. For example, since data about the DUSCA e-PROMs have been obtained from 100071594, a prospective application is aimed at validation the DUSCA or a clinical drug discovery application so
that EBCD implementation will be provided to the investigators. This also does not seem to be expected at least for studies with higher risk of bias. For this purpose, and without the limitation to
routine clinical trials, several time frames may not be available for the authors of the EPCs, such as the ones used by our software tool. However, if the authors only knew the DUSCA-EPCs
individually, there could be significant impact of the time frame into DUSCA-EPCs based analyses. Thus, as a whole, there is no need for using a single database for individual sample data; however,
instead, the authors have to share the data that follow to assess the impact of differences in time frames between the samples collected in the respective DUSCA-EPCs. How
to transfer those multiple data points to DUSCA-EPCs using automated algorithms is described in more detail in a further study. ###### Data Sources In other words, in future analysis, our data source
will gradually increase in number, the number of the experiments and the number of follow-ups. Once all the data are in be processed, the authors will get accessCan someone assist with SPSS
correlation and regression analysis for clinical trials? Author Information J. K. Lee, E. A. Leff, N. Kogelman, K. A. Neumeyer, and D.W. Sim and F. Martin [Ozdemid, Mich.
, August 9, 2015] incorporated a report of SPSS, a program of SPSS, to give background data for clinical trial quality and the evaluation of effect following drug interactions with different types of
drugs as a function of the number of molecules in SPSS. On-time changes in the dose of a new drug are shown as Pearson correlation coefficients. The data used to calculate this coefficient was
obtained after 5 years of follow-up. Additional data about multiple dosing was obtained from previous years in literature review. The authors provide information about the scientific consensus using
SPSS, the SPSS programs, SPSS Reference Manual, and SPSS User Manual which is included in this CURE report. Therefore the approach proposed by SPSS is based on common general approaches used by
clinicians based on clinical records, clinical laboratory tests, and analysis on log files. The design for this report can be found on the SPSS website entry 5 and above. (The more advanced models
used here for prediction are: model by SPSS user manual, SPSS reference manual, and SPSS reference file). For this, we report a four-dimensional model including every model of the individual,
including any of the covariables we have on the regression rank and the SPSS coefficient. The overall classification model described here was obtained by multiplying model by each covariate. Thus to
measure statistical significance the correlation coefficients averaged over all possible “doses” to do so is different from the above. Models are important not only for the prediction of patients’
perception of treatment treatment, but also for the treatment that takes place prior to drug therapy, so the predictive model are useful because they are related to predictors of response in future
trials. Even though we can estimate those predictive relations directly we cannot expect them to be useful either globally as we have needed individual regression methods (as discussed below, all are
included in this CURE report). Models are also interesting because they show that the SPSS Sorensen test is more effective than the SPSS SPSS tests and hence they are likely to detect drug
interactions better: the SPSS analysis of simulated data instead of the SPSS regression is the simplest procedure to predict the corresponding relation of the model to a particular treatment
response. However, we have no evidence to show SPSS is useful the SPSS models are not adequate to predict the SPSS coefficients when one of the coefficients returns about to the minimum. In Table 4 we present the examples of the Sorensen and SPSS models that we used. | {"url":"https://spsshelponline.com/can-someone-assist-with-spss-correlation-and-regression-analysis-for-clinical-trials","timestamp":"2024-11-09T00:17:39Z","content_type":"text/html","content_length":"168466","record_id":"<urn:uuid:8f4d89af-fbd5-45f7-a06e-e96b59962c53>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00067.warc.gz"}
US10631796B2 - Brake system and medical apparatus including the same
- Google Patents
US10631796B2 - Brake system and medical apparatus including the same - Google Patents
Brake system and medical apparatus including the same Download PDF
Publication number
US10631796B2 (application US15/963,328 / US201815963328A)
United States
Prior art keywords
lever unit
rail unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
Other versions
Pil Yong OH
Eun Hye Seo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US15/963,328 priority Critical patent/US10631796B2/en
Publication of US20180242930A1 publication Critical patent/US20180242930A1/en
Application granted granted Critical
Publication of US10631796B2 publication Critical patent/US10631796B2/en
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical
□ 238000003384 imaging method Methods 0.000 claims description 16
□ 239000000463 material Substances 0.000 claims description 5
□ 230000007423 decrease Effects 0.000 claims description 3
□ 238000002591 computed tomography Methods 0.000 description 12
□ 230000005855 radiation Effects 0.000 description 7
□ 230000003028 elevating effect Effects 0.000 description 6
□ 230000008878 coupling Effects 0.000 description 4
□ 238000010168 coupling process Methods 0.000 description 4
□ 238000005859 coupling reaction Methods 0.000 description 4
□ 238000003780 insertion Methods 0.000 description 4
□ 230000037431 insertion Effects 0.000 description 4
□ 238000013459 approach Methods 0.000 description 2
□ 238000006243 chemical reaction Methods 0.000 description 2
□ 230000000694 effects Effects 0.000 description 2
□ 230000001965 increasing effect Effects 0.000 description 2
□ 230000005856 abnormality Effects 0.000 description 1
□ 230000015556 catabolic process Effects 0.000 description 1
□ 238000006731 degradation reaction Methods 0.000 description 1
□ 238000002059 diagnostic imaging Methods 0.000 description 1
□ 239000012530 fluid Substances 0.000 description 1
□ 238000009434 installation Methods 0.000 description 1
□ 230000003902 lesion Effects 0.000 description 1
□ 238000000034 method Methods 0.000 description 1
□ 230000008520 organization Effects 0.000 description 1
□ 229920001296 polysiloxane Polymers 0.000 description 1
□ 238000002601 radiography Methods 0.000 description 1
□ 238000003325 tomography Methods 0.000 description 1
□ XLYOFNOQVPJJNP-UHFFFAOYSA-N water Substances O XLYOFNOQVPJJNP-UHFFFAOYSA-N 0.000 description 1
☆ A—HUMAN NECESSITIES
☆ A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
☆ A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
☆ A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
☆ A61B6/10—Safety means specially adapted therefor
☆ A61B6/102—Protection against mechanical damage, e.g. anti-collision devices
☆ A61B6/105—Braking or locking devices
☆ A—HUMAN NECESSITIES
☆ A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
☆ A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
☆ A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
☆ A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
☆ A61B6/03—Computed tomography [CT]
☆ A61B6/032—Transmission computed tomography [CT]
☆ A—HUMAN NECESSITIES
☆ A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
☆ A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
☆ A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
☆ A61B6/44—Constructional features of apparatus for radiation diagnosis
☆ B—PERFORMING OPERATIONS; TRANSPORTING
☆ B60—VEHICLES IN GENERAL
☆ B60T—VEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR
PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
☆ B60T1/00—Arrangements of braking elements, i.e. of those parts where braking effect occurs specially for vehicles
☆ B60T1/12—Arrangements of braking elements, i.e. of those parts where braking effect occurs specially for vehicles acting otherwise than by retarding wheels, e.g. jet action
☆ B60T1/14—Arrangements of braking elements, i.e. of those parts where braking effect occurs specially for vehicles acting otherwise than by retarding wheels, e.g. jet action directly on
☆ F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
☆ F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
☆ F16D—COUPLINGS FOR TRANSMITTING ROTATION; CLUTCHES; BRAKES
☆ F16D63/00—Brakes not otherwise provided for; Brakes combining more than one of the types of groups F16D49/00 - F16D61/00
☆ F16D63/008—Brakes acting on a linearly moving member
☆ F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
☆ F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
☆ F16D—COUPLINGS FOR TRANSMITTING ROTATION; CLUTCHES; BRAKES
☆ F16D65/00—Parts or details
☆ F16D65/14—Actuating mechanisms for brakes; Means for initiating operation at a predetermined position
☆ A—HUMAN NECESSITIES
☆ A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
☆ A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
☆ A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
☆ A61B6/44—Constructional features of apparatus for radiation diagnosis
☆ A61B6/4429—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
☆ A61B6/4435—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a
rigid structure
☆ A61B6/4447—Tiltable gantries
☆ A—HUMAN NECESSITIES
☆ A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
☆ A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
☆ A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
☆ A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts;
specially adapted for specific clinical applications
☆ A61B6/502—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts;
specially adapted for specific clinical applications for diagnosis of breast, i.e. mammography
☆ F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
☆ F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
☆ F16D—COUPLINGS FOR TRANSMITTING ROTATION; CLUTCHES; BRAKES
☆ F16D2121/00—Type of actuator operation force
☆ F16D2121/14—Mechanical
☆ F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
☆ F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
☆ F16D—COUPLINGS FOR TRANSMITTING ROTATION; CLUTCHES; BRAKES
☆ F16D2121/00—Type of actuator operation force
☆ F16D2121/18—Electric or magnetic
☆ F16D2121/24—Electric or magnetic using motors
☆ F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
☆ F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
☆ F16D—COUPLINGS FOR TRANSMITTING ROTATION; CLUTCHES; BRAKES
☆ F16D2125/00—Components of actuators
☆ F16D2125/18—Mechanical mechanisms
☆ F16D2125/58—Mechanical mechanisms transmitting linear movement
☆ F16D2125/64—Levers
☆ F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
☆ F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
☆ F16D—COUPLINGS FOR TRANSMITTING ROTATION; CLUTCHES; BRAKES
☆ F16D2125/00—Components of actuators
☆ F16D2125/18—Mechanical mechanisms
☆ F16D2125/58—Mechanical mechanisms transmitting linear movement
☆ F16D2125/66—Wedges
□ Apparatuses consistent with exemplary embodiments relate to a brake system and a medical apparatus including the same.
□ Radiation imaging apparatuses are imaging systems which radiate radiation, for example, X-rays onto an object such as the whole or a part of the human body or another object to obtain images
from, such as an internal material, structure, or organization of a baggage.
□ Radiation imaging apparatuses are used as medical imaging systems for detecting an abnormality such as a lesion inside a human body, used as an image capture device to check an internal
structure of an object or a component, or used as a scanning device to scan baggage at the airport.
□ Radiation imaging apparatuses include a computed tomography (CT) scanner.
□ a CT scanner surrounds a moving object, continuously irradiates radiation on the moving object from all directions (i.e., around 360 degrees) and detects rays passing through the object to
obtain a plurality of cross-sectional images of the object.
□ the CT scanner continuously irradiates radiation on the object from the beginning to the end of scanning to obtain consecutive cross-sectional images thereof.
□ a body of the CT scanner may be operated to be tilted.
□ the body of the CT scanner may rotate at a high speed and irradiate radiation to the object while tilting at a predetermined angle.
□ the quality of images obtained using X-rays may be negatively impacted.
□ One or more exemplary embodiments provide a tomograph capable of obtaining clear images of an object by preventing the movement of a body.
□ a medical apparatus including a body rotatably provided to perform computed tomography (CT) scan, a base frame configured to support the body, a rail unit mounted on an outer surface of the
body, and a brake system mounted on the base frame and configured to perform braking of a rotation of the body.
□ the brake system includes a lever unit rotatably provided on a rotational shaft, and a wedge connected to one side of the lever unit and having tapered both sides.
□ An inner surface of the rail unit is formed to be tapered to correspond to an outer surface of the wedge.
□ When the wedge pressurizes the inner surface of the rail unit, the body may not move.
□ the rotational shaft may be provided more adjacent to one side of the lever unit than the other side of the lever unit.
□ the lever unit may receive a driving force from a motor and rotate on the rotational shaft.
□ a gear part connected to a driving gear part connected to the motor may be provided on the other side of the lever unit.
□ the driving force of the motor may be transferred to the lever unit while being amplified by a gear ratio of the driving gear part to the gear part.
□ the gear part may be engaged with a connection gear part, and the connection gear part may be engaged with the driving gear part.
□ the brake system may further include an elastic member which supplies an elastic force to the lever unit to allow the one side of the lever unit to be separate from the rail unit.
□ the elastic member may be located more adjacent to the other side of the lever unit than the one side of the lever unit and may transfer the elastic force to a bottom surface of the lever
□ a protrusion may be provided on the bottom surface of the lever unit, and the elastic member may be mounted on the protrusion.
□ a frictional pad may be mounted on the outer surface of the wedge.
□ the brake system may further include a base plate mounted on the base frame and the lever unit is rotatably mounted on the base plate.
□ a fixing bracket may be mounted on the base plate, and the rotational shaft may pass through the fixing bracket and the lever unit.
□ a hole may be formed in the base plate, and the wedge may pass through the hole and may be inserted into the rail unit.
□ the rail unit may include a bottom part and side parts provided on both sides of the bottom part to face each other, and a distance between the both side parts may become farther from the
bottom part toward ends of the side parts.
□ a tomograph including a body rotatably provided to perform a CT scan, a base frame configured to support the body, a rail unit mounted on an outer surface of the body, and a brake system
mounted on one of the body and the base frame and configured to prevent the movement of the body.
□ the brake system includes a lever unit rotatably provided on a rotational shaft and a wedge connected to the lever unit and configured to be inserted into the rail unit. An outer surface of
the wedge and an inner surface of the rail unit are formed to be tapered.
□ the brake system may further include a motor configured to transfer a driving force through the other side of the lever unit.
□ a gear part may be provided on the other side of the lever unit, the motor and the other side of the lever unit may be connected by a connection gear part.
□ the driving force of the motor may be transferred to the lever unit while being amplified by a gear ratio of the connection gear part to the gear part.
□ the rotational shaft may be configured more adjacent to one side of the lever unit than the other side of the lever unit.
□ a frictional member may be mounted on the outer surface of the wedge.
□ the frictional member may include rubber.
□ the brake system may further include an elastic member configured to supply an elastic force to allow the wedge to pressurize the inner surface of the rail unit.
□ a brake system which may perform braking of a tiltable body including a lever unit rotatably provided on a rotational shaft, a wedge which has a tapered outer surface, is provided on one side
of the lever unit, and halts the movement of the body by pressurizing one side of the body, and a motor which transfers a driving force through the other side of the lever unit.
□ the rotational shaft is located more adjacent to the one side of the lever unit than the other side of the lever unit.
□ the brake system may further include an elastic member which provides an elastic force to the lever unit to allow the wedge to pressurize the one side of the body.
□ a frictional member may be provided on the outer surface of the wedge.
□ a gear part may be provided on the other side of the wedge, and the driving force of the motor may be transferred to the lever unit while being amplified by a gear ratio of a connection gear
part connected to the motor to the gear part.
□ a brake apparatus provided on a body of a medical apparatus, the brake apparatus including: a rail unit; a lever unit configured to rotate with respect to a rotational shaft; an elastic
member provided on the lever unit at a first end of the lever unit and configured to provide an elastic force to rotate the lever unit; a wedge provided on a second end opposite to the first
end of the lever unit, configured to be inserted into the rail unit according to rotation of the lever unit and configured to apply braking pressure on the rail unit.
□ the wedge may include an outer surface including: a first side of the wedge; and a second side of the wedge opposite to the first side of the wedge wherein the first and the second sides of
the wedge are tapered.
□ An inner surface of the rail unit may be tapered to correspond to the outer surface of the wedge.
□ the wedge may be configured to apply pressure to the inner surface of the rail unit and configured to halt a movement of the body of the medical apparatus.
□ the brake apparatus may further include a motor configured to transfer a driving force to the lever unit.
□ the lever unit may include a gear part provided at the first end of the lever unit and connected to a driving gear part of the motor.
□ a gear ratio of the driving gear part to the gear part may be configured to amplify the driving force of the motor transferred to the lever unit.
□ the gear part of the lever unit may be engaged with a connection gear part, and wherein the connection gear part is engaged with the driving gear part of the motor.
□ the wedge may be configured to halt at least one of a linear movement and a rotation of the body of the medical apparatus.
□ the rotational shaft may be provided closer to the second end of the lever unit than the first end of the lever unit.
□ the elastic member may be configured to provide the elastic force to the lever unit and to pressurize the rail unit with the wedge.
□ the elastic member may be provided closer to the first end of the lever unit than the second end of the lever unit.
□ the wedge may include a frictional member provided on an outer surface of the wedge.
□ the lever unit may include a protrusion provided on a bottom surface of the lever unit, and the protrusion may be fitted in the elastic member.
□ the wedge may include a trapezoidal cross-sectional shape, and an inner surface of the rail unit may have a shape corresponding to the trapezoidal cross-sectional shape of the wedge.
□ a medical apparatus including: a body configured to perform a scan; a base frame configured to support the body; and a brake apparatus configured to perform braking of a movement of the body,
wherein the brake apparatus includes: a rail unit attached to one of the body and the base frame; a lever unit provided on the other one of the body and the base frame and configured to
rotate with respect to a rotational shaft; a motor configured to provide a driving force to the lever unit; an elastic member provided at a first end portion of the lever unit and configured
to provide an elastic force; and a wedge provided on a second end portion opposite to the first end portion of the lever unit and configured to perform braking of the body.
□ Opposite surfaces of the wedge may be tapered, and an inner surface of the rail unit may have a shape corresponding to an outer surface including the tapered opposite surfaces of the wedge.
□ the elastic member may be configured to supply the elastic force to rotate the lever unit to apply braking pressure to the rail unit via the wedge.
□ a frictional pad may be provided on an outer surface of the wedge.
□ the lever unit may be rotatably provided on the rotational shaft, and the rotational shaft may be provided closer to the second end portion of the lever unit than the first end portion of the
lever unit.
□ a brake apparatus provided on a body of a tomograph which performs a computed tomography (CT) scan on an object
□ the brake apparatus including: a lever unit rotatably provided on a rotational shaft to rotate with respect to the rotational shaft; an elastic member provided at a first end of the lever
unit and configured to supply an elastic force; a wedge provided on a second end opposite to the first end of the lever unit and configured to perform braking; and a rail unit configured to
perform the braking by engaging with the wedge, wherein the wedge is configured to apply braking pressure to the rail unit due to the elastic force and configured to halt tilting of the body.
□ the elastic member may be configured to supply the elastic force to allow the wedge to apply pressure to the rail unit.
□ a brake apparatus provided on a medical apparatus including a body performing a scan and a base frame supporting the body, the brake apparatus including: a rail unit attached to one of the
body and the base frame; and a lever unit provided on the other one of the body and the base frame, the lever unit configured to engage with the rail unit, wherein the lever unit includes: a
base plate attached to the other one of the body and the base frame; a bracket body provided on the base plate; a lever configured to rotate with respect to a rotational shaft inserted
through the bracket body; an elastic member provided at a first end portion of the lever and configured to provide an elastic force to rotate the lever; and a wedge provided on a second end
portion opposite to the first end portion of the lever unit and configured to perform braking of the body with respect to the base frame by contacting the rail unit according to rotation of
the lever.
□ the brake apparatus may further include a motor configured to transfer a driving force to the lever unit.
□ the lever may include a gear part provided at the first end of the lever and connected to a driving gear part of the motor.
□ FIG. 1 is a perspective view of a medical apparatus according to an exemplary embodiment
□ FIG. 2 is a side view of a body according to an exemplary embodiment
□ FIGS. 3A and 3B are perspective views of a brake system according to an exemplary embodiment
□ FIG. 4 is an exploded perspective view of the brake system according to an exemplary embodiment
□ FIG. 5 is a cross-sectional view illustrating a wedge and a rail unit according to an exemplary embodiment
□ FIGS. 6 and 7 are side views of the brake system according to an exemplary embodiment
□ FIG. 8 is a schematic view of the brake system according to an exemplary embodiment
□ FIG. 9 is a schematic view illustrating parts of the wedge part and the rail unit according to an exemplary embodiment
□ FIG. 10 is a view of a mammographic apparatus including a brake system according to an exemplary embodiment
□ FIGS. 11 and 12 are views of the brake system according to an exemplary embodiment.
□ FIGS. 13 and 14 are views of a brake system according to still another exemplary embodiment.
□ FIG. 1 is a perspective view of a medical apparatus 1 according to an exemplary embodiment.
□ FIG. 2 is a side view of a body 2 according to an exemplary embodiment.
□ the medical apparatus 1 includes the body 2 and an examination stand 3 .
□ the body 2 may include an opening 20 in a center part and an X-ray generator 21 and an X-ray detector 22 disposed thereinside to be opposite to each other.
□ An object 300 located on the examination stand 3 may be inserted into the opening 20 to perform tomography scanning.
□ the body 2 includes a stator 23 and a rotor 24 in which the opening 20 is installed in the center thereof.
□ the rotor 24 may be rotatably provided inside the stator 23 .
□ the X-ray generator 21 may be provided on one side of the rotor 24
□ the X-ray detector 22 may be provided on the other side of the rotor 24 .
□ the X-ray generator 21 and the X-ray detector 22 may be provided to be opposite to each other.
□ the X-ray detector 22 may directly receive X-rays which pass through the object 300 or X-rays which are radiated to the periphery of the object 300 and do not reach the object 300 , and may
detect the X-rays by conversion into electric signals.
□ the medical apparatus 1 may further include an image processor which reads and generates images from the electric signals stored in the X-ray detector 22 , image-processes the generated
images, or generates other images using the generated images. Also, the medical apparatus 1 may further include a controller for controlling whether X-rays are radiated or not.
□ the body 2 may be supported by a base frame 4 .
□ the base frame 4 may be provided on both left and right sides of the body 2 .
□ the base frame 4 may be mounted outside the stator 23 .
□ the body 2 may be mounted on the base frame 4 to be tiltable. As shown in FIG. 2 , the body 2 may be provided to be tiltable to allow a front side thereof to move up and down.
□ a brake system (or a brake apparatus) 5 may be provided between the body 2 and the base frame 4 .
□ One of an outer surface of the body 2 and the base frame 4 may include a rail unit 50 and the other may include the brake system 5 .
□ the rail unit 50 may be provided on the outer surface of the body 2 and the brake system 5 may be provided on one side of the base frame 4 .
□ a portion of the brake system 5 may be provided to be insertable into the rail unit 50 .
□ the rail unit 50 is provided to have a predetermined curvature in such a way that the rail unit 50 moves together with the body 2 and the portion of the brake system 5 inserted into the rail
unit 50 may vary in position along the rail unit 50 when the body 2 tilts.
□ the rail unit 50 may be provided to have a proper curvature and length depending on a tilt angle and shape of the body 2 .
□ the rail unit 50 may be a part of a circle having a radius R and having a rotational center O as shown in FIG. 2 .
□ as the radius R increases, the occurrence of vibrations of the body may be reduced and the braking property of the brake system 5 may be improved.
□ a range of tilt angles (θ1 + θ2) may be reduced.
□ the body 2 may be provided to operate at a tilt angle of about 60°.
□ a tilt angle θ1 to allow the front of the body 2 to face downward and a tilt angle θ2 to allow the front of the body 2 to face upward may each be provided to be about 30°.
□ the radius R may be present within a range of 400 mm to 440 mm. More specifically, the radius R may be about 420 mm.
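Given the stated radius and tilt range, the length of rail unit the brake traverses follows from simple arc geometry. A minimal sketch using the figures quoted above (R about 420 mm, a total tilt range of about 60°):

```python
import math

# Values taken from the description: the rail unit is an arc of radius R
# (400-440 mm, about 420 mm) about the rotational center O, and the body
# tilts about 30 degrees in each direction (60 degrees total).
R_mm = 420.0
tilt_total_deg = 60.0

# Arc length swept along the rail unit: s = R * theta (theta in radians)
arc_length_mm = R_mm * math.radians(tilt_total_deg)
print(f"rail arc length ≈ {arc_length_mm:.0f} mm")  # ≈ 440 mm
```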
□ the body 2 may be fixed not to move while tilting even when the rotor 24 rotates.
□ a detailed configuration of the brake system 5 will be described below.
□ the examination stand 3 includes a supporter 30 and a transfer unit 31 .
□ the transfer unit 31 may be slidably provided above the supporter 30 .
□ the transfer unit 31 slides to be inserted into the body 2 through the opening 20 .
□ the object 300 may be located between the X-ray generator 21 and the X-ray detector 22 .
□ the rotor 24 rotates on the object 300 , and images of the object 300 from various angles may be taken using X-rays generated by the X-ray generator 21 .
□ FIGS. 3A and 3B are perspective views of the brake system 5 according to an exemplary embodiment.
□ FIG. 4 is an exploded perspective view of the brake system 5 .
□ the brake system 5 is provided on one surface (i.e., a surface facing the body 2 ) of the base frame 4 to allow a part of a lever unit 51 to be insertable into the rail unit 50 provided on
the outer surface of the body 2 . As the body 2 tilts, a position of the lever unit 51 with respect to the rail unit 50 may vary.
□ the rail unit 50 may include a bottom part 500 and side parts 501 and 502 extending or protruding from the bottom part 500 .
□ the side parts 501 and 502 may be provided on both sides of the bottom part 500 to be opposite to each other.
□ a sliding part 503 in which the lever unit 51 is inserted and moves may be formed by the bottom part 500 and the side parts 501 and 502 .
□ the bottom part 500 may include a plurality of coupling holes 500 a .
□ the rail unit 50 may be mounted on the outer surface of the body 2 by a coupling member passing through the plurality of coupling holes 500 a.
□ Inner surfaces of the side parts 501 and 502 may be provided to be tapered. Specifically, a greater distance between the inner surfaces of the side parts 501 and 502 may be formed farther
away from the bottom part 500 . That is, a distance D 1 between ends of the side parts 501 and 502 located farthest from the bottom part 500 may be formed to be greater than a distance D 2
between parts of the side parts 501 and 502 adjacent to the bottom part 500 .
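The taper described above (distance D1 at the open ends of the side parts greater than distance D2 near the bottom part) can be reduced to a taper half-angle per side. A minimal sketch; all dimensions below are illustrative assumptions, none appear in the description:

```python
import math

# Illustrative (not from the description) dimensions of the rail unit:
# D1 = distance between the side-part ends farthest from the bottom part,
# D2 = distance between the side parts adjacent to the bottom part,
# h  = height of the side parts (depth of the sliding part 503).
D1_mm, D2_mm, h_mm = 24.0, 20.0, 15.0

# Each inner surface is inclined from the vertical by the taper half-angle:
taper_half_angle = math.degrees(math.atan((D1_mm - D2_mm) / (2 * h_mm)))
print(f"taper half-angle per side ≈ {taper_half_angle:.1f}°")
```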
□ the brake system 5 may include the lever unit 51 , an elastic member 52 which provides the lever unit 51 with an elastic force, and a driving source 53 capable of driving the lever unit 51 .
□ the lever unit 51 , the elastic member 52 , and the driving source 53 may be provided on a base plate 57 .
□ the base plate 57 may be mounted on the surface of the base frame 4 facing the body 2 .
□ the lever unit 51 includes a first lever part 510 and a second lever part 511 .
□ the second lever part 511 may be provided to extend or protrude from one side (i.e., a bottom side facing the rail unit 50 ) of the first lever part 510 to form an approximate right angle.
□ the second lever part 511 may be provided adjacent to the one end (i.e. a first end/a first side) of the first lever part 510 .
□ a wedge 512 may be provided on an end of the second lever part 511 .
□ a hole 570 may be formed in one side of the base plate 57 . The wedge 512 may pass through the hole 570 and may be inserted into the sliding part 503 of the rail unit 50 .
□ the wedge 512 may be provided to correspond to a shape of an inner surface of the rail unit 50 which forms the sliding part 503 .
□ Outer surfaces 512 a and 512 b of the wedge 512 may be formed to correspond to tapered shapes of the inner surfaces of the side parts 501 and 502 of the rail unit 50 . That is, a distance
between the outer surfaces 512 a and 512 b which face each other may be provided to become smaller toward an end of the wedge 512 (i.e., an end provided closer to the rail unit 50 ). Due to a
rotation of the first lever part 510 , a bottom surface 512 c of the wedge 512 may be in contact with the bottom part 500 of the rail unit 50 according to a rotational position of the first
lever part 510 .
□ the elastic member 52 may be mounted on one side (i.e., a bottom side) of the first lever part 510 .
□ a protrusion 513 may be further provided on the one side of the first lever part 510 , and the elastic member 52 may be mounted on the protrusion 513 . That is, the elastic member 52 is
engaged with the first lever part 510 by the protrusion 513 being inserted in the elastic member according to the exemplary embodiment.
□ the protrusion 513 may be provided adjacent to the other end (i.e., a second side/a second end) of the first lever part 510 .
□ the elastic member 52 may fit on the protrusion 513 and may be located between one surface of the first lever part 510 and one surface of the base plate 57 , thereby providing a bottom
surface of the first lever part 510 with an elastic force.
□ the elastic member 52 may apply the elastic force to allow the bottom surface at the other end (i.e. the second end) of the first lever part 510 to become farther from the base plate 57 .
□ the wedge 512 may be pushed toward the rail unit 50 and apply braking pressure to the body 2 .
□ the elastic member 52 is provided adjacent to the other end (i.e., the second end opposite to the first end where the wedge 512 is located) of the first lever part 510 and provides the bottom
surface of the first lever part 510 with the elastic force.
□ an installation position and a pressurization position of the elastic member 52 are not limited thereto.
□ the elastic member 52 may be provided adjacent to the one end (i.e., the first end where the wedge 512 is located) of the first lever part 510 and may apply the elastic force to a top surface of the first end of the first lever part 510 to perform braking.
□ hereinafter, a case in which the elastic member 52 is provided on the other end of the first lever part 510 (i.e., the second end opposite to the first end where the wedge 512 is located) and applies the elastic force to the bottom surface of the first lever part 510 will be described.
□ a gear part 514 may be provided on the other end (i.e., the second end) of the first lever part 510 in the exemplary embodiment.
□ the gear part 514 may be engaged with a gear part 541 connected to a motor 53 that will be described below.
□ a gear ratio of the gear part 514 provided on the other end of the first lever part 510 to the gear part 541 connected to the motor 53 is suitably adjusted to amplify and transfer a driving
force from the motor 53 to the first lever part 510 .
□ a rotational shaft 552 may be provided between the second lever part 511 and the protrusion 513 .
□ a through hole 516 may be formed in the first lever part 510 , and the rotational shaft 552 may be provided to pass through the through hole 516 .
□ the through hole 516 may be formed to pass through between the second lever part 511 and the protrusion 513 . Accordingly, the rotational shaft 552 may be provided to extend in a widthwise
direction of the first lever part 510 .
□ the rotational shaft 552 may be provided closer to the one end of the first lever part 510 provided with the second lever part 511 than the other end of the first lever part 510 .
□ the driving force applied on the other end of the first lever part 510 may be transferred to the one end of the first lever part 510 while being amplified.
□ the driving force of the motor 53 is transferred to the other end of the first lever part 510
□ the force transferred to the other end of the first lever part 510 is transferred to the one end of the first lever part 510 while being amplified.
□ the wedge 512 applies pressure to the inner surface of the rail unit 50 using the amplified force, thereby improving the braking performance of the brake system 5 with respect to the body 2 .
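The two amplification stages described above (gear ratio between the motor-side gearing and the gear part 514, then the lever-arm ratio set by placing the rotational shaft 552 closer to the wedge end) multiply. A minimal sketch; the function name and all numeric values are illustrative assumptions, not data from the description:

```python
# Sketch of the two force-amplification stages the description relies on.
def wedge_force(motor_force, gear_ratio, drive_arm, wedge_arm):
    """Force available at the wedge after both amplification stages.

    gear_ratio: torque amplification of the gearing between motor and lever,
    drive_arm:  distance from the rotational shaft to the driven (second) end,
    wedge_arm:  distance from the rotational shaft to the wedge (first) end.
    Because the shaft sits closer to the wedge end, drive_arm > wedge_arm
    and the lever multiplies the force by drive_arm / wedge_arm.
    """
    lever_ratio = drive_arm / wedge_arm
    return motor_force * gear_ratio * lever_ratio

# e.g. 10 N of motor-side force, 3:1 gearing, 60 mm vs 20 mm lever arms
print(wedge_force(10.0, 3.0, 60.0, 20.0))  # -> 90.0 (N at the wedge)
```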
□ the base plate 57 may be provided with a fixing bracket 55 including a hole 551 into which the rotational shaft 552 is insertable.
□ the fixing bracket 55 may be coupled with the base plate 57 using a coupling member.
□ a bracket body 550 of the fixing bracket 55 may be provided to protrude from the one surface of the base plate 57 , and the hole 551 into which the rotational shaft 552 is insertable may be
formed in the bracket body 550 .
□ the rotational shaft 552 is inserted into the through hole 516 formed in the first lever part 510 and the hole 551 formed in the fixing bracket 55 , thereby fastening the lever unit 51 to the base plate 57 to be rotatable on the rotational shaft 552 .
□ the motor 53 may be provided on the one side of the base plate 57 .
□ the driving force of the motor 53 may be transferred to the lever unit 51 through a link between the gear part 514 and the gear part 541 .
□ the lever unit 51 receives the driving force of the motor 53 and moves to allow the other end of the first lever part 510 to approach the base plate 57 .
□ the other end of the first lever part 510 approaches the base plate 57
□ the one end of the first lever part 510 becomes farther from the base plate 57 and the wedge 512 provided on the end of the second lever part 511 located on the one end of the first lever
part 510 may become separate from the inner surface of the rail unit 50 .
□ the driving force of the motor 53 may be transferred to the lever unit 51 through a driving gear part 530 provided on the motor 53 and a connection gear unit 54 .
□ the base plate 57 may be provided with a mounting bracket 56 to mount the connection gear unit 54 thereon to be rotatable.
□ the mounting bracket 56 may include a bracket body 560 provided with a space 562 , in which the connection gear unit 54 is accommodated to prevent a rotation of the connection gear unit 54 from being interfered with by other parts, and a bracket cover 561 which covers an opening formed in one side of the mounting bracket 56 .
□ the mounting bracket 56 may include a shaft insertion hole 560 a into which a rotational shaft 563 is insertable.
□ the shaft insertion hole 560 a may be formed in the bracket body 560 .
□ the rotational shaft 563 may pass through the shaft insertion hole 560 a and an insertion hole 542 formed in the connection gear unit 54 , thereby mounting the connection gear unit 54 on the
mounting bracket 56 to be rotatable.
□ a first connection gear part 540 engaged with the driving gear part 530 is formed on one side of the connection gear unit 54
□ a second connection gear part 541 engaged with the gear part 514 formed on the other end of the first lever part 510 may be formed on the other side of the connection gear unit 54 .
□ the connection gear unit 54 is formed to be bent in such a way that the first connection gear part 540 is formed on one end and the second connection gear part 541 is formed on the other end.
□ extending directions of the teeth of the gear part 514 and the second connection gear part 541 may intersect with each other.
□ FIG. 5 is a cross-sectional view illustrating the wedge 512 and the rail unit 50 according to an exemplary embodiment.
□ frictional members (e.g., frictional pads) may be provided on the wedge 512 .
□ the frictional members 515 a and 515 b may be mounted on the outer surfaces 512 a and 512 b of the wedge 512 and may increase a frictional force between the wedge 512 and the inner surface of
the rail unit 50 .
□ the frictional members 515 a and 515 b may include a material having a high frictional force such as rubber and silicone.
□ the frictional members 515 a and 515 b are provided on the outer surfaces 512 a and 512 b of the wedge 512 to increase the frictional force between the wedge 512 and the rail unit 50 ,
thereby improving the braking performance of the brake system 5 .
□ the frictional members 515 a and 515 b are provided on the outer surfaces 512 a and 512 b of the wedge 512 .
□ the exemplary embodiment is not limited thereto.
□ the frictional members may be provided on the bottom surface 512 c of the wedge 512 .
□ frictional members may be provided on the inner surface of the rail unit 50 .
□ FIGS. 6 and 7 are side views of the brake system 5 according to an exemplary embodiment.
□ the first connection gear part 540 may rotate in another direction B.
□ the gear part 514 engaged with the second connection gear part 541 may rotate in a downward direction C 1 and the first lever part 510 may rotate with respect to the rotational shaft 552 such
that the one end of the first lever part 510 moves closer to the base plate 57 .
□ the second lever part 511 may move in an upward direction C 2 , and the wedge 512 connected to the second lever part 511 may become separate from the inner surface of the rail unit 50 . Because the wedge 512 does not interfere with the rail unit 50 , the body 2 mounted with the rail unit 50 may tilt to allow the front thereof to move up and down.
□ the first connection gear part 540 may rotate in the direction A.
□ the gear part 514 engaged with the second connection gear part 541 may rotate in the upward direction C 2 and the first lever part 510 may rotate on the rotational shaft 552 to become farther
from the base plate 57 .
□ the second lever part 511 may move in the downward direction C 1 , and the wedge 512 connected to the second lever part 511 may be in contact with the inner surface of the rail unit 50 .
□ the wedge 512 may apply pressure to the inner surface of the rail unit 50 to perform braking.
□ the frictional force between the outer surfaces 512 a and 512 b of the wedge 512 and the inner surface of the rail unit 50 and the force of the wedge 512 applied to the inner surface of the
rail unit 50 may fix the rail unit 50 not to move.
□ the body 2 may be fixed not to tilt. As described above, because the body 2 is fixed not to move by the brake system 5 , even when the rotor 24 rotates to perform radiography, the body 2 does
not move, thereby obtaining definite images of the object 300 .
□ FIG. 8 is a schematic view of the brake system 5 according to an exemplary embodiment.
□ FIG. 9 is a schematic view illustrating parts of the wedge 512 and the rail unit 50 according to an exemplary embodiment.
□ the lever unit 51 of the brake system 5 applies pressure to the rail unit 50 via the wedge 512 , thereby fixing the rail unit 50 not to move.
□ the first lever part 510 may receive the driving force of the motor 53 and may rotate with respect to the rotational shaft 552 .
□ the first lever part 510 may receive the elastic force of the elastic member 52 and may rotate with respect to the rotational shaft 552 . Even when the driving force of the motor 53 is not
transferred to the first lever part 510 , the elastic member 52 is located on the bottom surface of the first lever part 510 in such a way that the wedge 512 mounted on the second lever part
511 may apply pressure to the inner surface of the rail unit 50 .
□ the first lever part 510 rotates with respect to the rotational shaft 552 due to one of the driving force of the motor 53 and the elastic force of the elastic member 52 , thereby allowing the
wedge 512 to apply pressure to the inner surface of the rail unit 50 .
□ the bottom surface 512 c of the wedge 512 may apply pressure to the bottom part 500 of the rail unit 50 .
□ the outer surfaces 512 a and 512 b of the wedge 512 may apply braking pressure to the inner surface of the rail unit 50 .
□ the wedge 512 may apply pressure to the rail unit 50 with a greater force than a case in which the outer surfaces 512 a and 512 b of the wedge 512 are provided to be perpendicular to the bottom surface 512 c of the wedge 512 or the inner surface of the rail unit 50 is provided to be perpendicular to the bottom part 500 of the rail unit 50 .
□ a force applied by the wedge 512 in a vertical direction to the bottom part 500 of the rail unit 50 at one point P of the rail unit 50 may be referred to as W.
□ a normal force at the point P may be designated as N
□ a frictional force at the point P may be designated as f.
□ a force applied to the point P may be expressed below.
□ W = (f/μ)·cos θ − f·sin θ (where the frictional force satisfies f = μN).
□ the wedge 512 may pressurize the rail unit 50 to fix the body 2 not to tilt.
□ the wedge 512 may apply pressure to the rail unit 50 so as to be fixed and not allow the body 2 to move when θ is greater than 70°. As described above, even when the driving force of the motor 53 is not transferred, the movement of the body 2 may be prevented while taking images due to the fixing configuration which allows the wedge 512 to apply braking pressure to the rail unit 50 due to the elastic force of the elastic member 52 .
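The condition above (braking maintained once θ exceeds about 70°) follows from a standard wedge self-locking analysis: taking W = (f/μ)·cos θ − f·sin θ with f = μN, braking holds without any external force once tan θ ≥ 1/μ. A minimal sketch; the friction coefficient is an assumed value chosen so the threshold lands near the 70° figure, not a value from the description:

```python
import math

# Self-locking condition for the wedge: W = (f/mu)*cos(theta) - f*sin(theta)
# becomes non-positive (no external force needed) once tan(theta) >= 1/mu.
def self_locking_angle_deg(mu):
    """Smallest taper angle theta (degrees) at which the wedge self-locks."""
    return math.degrees(math.atan(1.0 / mu))

# Assumed friction coefficient for a rubber/silicone frictional pad;
# with mu ≈ 0.364 the threshold comes out to about 70 degrees.
mu = 0.364
print(f"self-locking for theta >= {self_locking_angle_deg(mu):.1f}°")
```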
□ the tapered shapes of the inner surface of the rail unit 50 and the outer surfaces 512 a and 512 b of the wedge 512 in the brake system 5 allow the force of the wedge 512 applied to the inner
surface of the rail unit 50 to be increased. Also, an output of the motor 53 may be transferred to the lever unit 51 while being amplified using the leverage effect and the gear ratio between
the gears located between the motor 53 and the lever unit 51 , thereby improving the performance of the brake system 5 .
□ the frictional members 515 a and 515 b are attached to the outer surfaces 512 a and 512 b of the wedge 512 , thereby increasing the frictional force with the inner surface of the rail unit 50
and preventing the degradation of the performance of the brake system 5 , which occurs due to a decrease in the frictional force between the wedge 512 and the inner surface of the rail unit
50 , which is caused by fluids such as water and oil therebetween.
□ the one side of the lever unit 51 receives the elastic force from the elastic member 52 , thereby fixing the wedge 512 so as not to be pushed away from the inner surface of the rail unit 50 even when the motor 53 is not driven.
□ the body 2 does not move even when the rotor 24 rotates at a high speed, thereby obtaining definite images of the object 300 and improving the quality of the medical apparatus 1 .
□ FIG. 10 is a view of a mammographic apparatus 7 including the brake system 6 according to an exemplary embodiment.
□ FIGS. 11 and 12 are views of the brake system 6 according to an exemplary embodiment.
□ the brake system 6 may be provided to prevent a body (not shown) of the mammographic apparatus 7 from falling.
□ the body of the mammographic apparatus 7 may be provided to be movable up and down along a stand 70 .
□ the body may move along an elevating shaft 73 disposed on the stand 70 and extending upward and downward.
□ the body may be connected to the elevating shaft 73 via a mobile panel 72 .
□ the mobile panel 72 is mounted on the elevating shaft 73 and moves up and down along the elevating shaft 73 , thereby allowing the body to move up and down along the elevating shaft 73 .
□ the mobile panel 72 may be connected to a cable 700 operated by a driving source 75 and may move up and down as the cable 700 rotates clockwise or counterclockwise, respectively.
□ Pulleys 701 which respectively receive a driving force from the driving source 75 may be provided on a top and a bottom of the stand 70 .
□ the cable 700 may be wound on the pulleys 701 and may rotate together with the pulleys 701 .
□ the mobile panel 72 may include the brake system 6 .
□ the stand 70 may include a rail unit 74 extending upward and downward.
□ the rail unit 74 , similar to the rail unit 50 , may include a bottom part and side parts extending from the bottom part.
□ the side parts may be provided on both sides of the bottom part to face each other. Inner surfaces of the side parts may be provided to be tapered. That is, a greater distance between the
inner surfaces of the side parts may be formed farther from the bottom part.
□ the brake system 6 may include a lever unit 62 and an elastic member 64 .
□ the lever unit 62 may be mounted on the mobile panel 72 by a mounting bracket 63 .
□ the lever unit 62 may be disposed to extend upward and downward to be parallel to the elevating shaft 73 .
□ the lever unit 62 may be rotatably provided on a rotational shaft 630 which passes through the mounting bracket 63 and the lever unit 62 .
□ the lever unit 62 rotates with respect to the rotational shaft 630 .
□ the cable 700 may be mounted on one side of the lever unit 62 .
□ a wedge 65 which is insertable into the rail unit 74 may be provided on the other side opposite to the one side of the lever unit 62 .
□ the wedge 65 may be formed to allow an outer surface thereof to correspond to a tapered shape of an inner surface of the rail unit 74 .
□ the lever unit 62 may include a first lever part 620 and a second lever part 621 ; the cable 700 may be mounted on an end of the first lever part 620 and the wedge 65 may be located on an end of the second lever part 621 .
□ the end of the first lever part 620 is tilted to be positioned close to the mobile panel 72 due to the cable 700 mounted on the end of the first lever part 620 , and the end of the second
lever part 621 may be provided to be positioned away from the mobile panel 72 as shown in FIG. 11 .
□ the wedge 65 located on the end of the second lever part 621 may be separate from the mobile panel 72 and the rail unit 74 .
□ An elastic member container 610 may be provided on one side of the mobile panel 72 .
□ the elastic member container 610 may accommodate the elastic member 64 .
□ the elastic member container 610 may be located to allow the elastic member 64 accommodated in the elastic member container 610 to push away the first lever part 620 from the mobile panel 72
□ the elastic member 64 may maintain a state of being pressurized by the first lever part 620 while being accommodated in the elastic member container 610 .
□ the mobile panel 72 on which the body is mounted may descend due to the weights of the body and the mobile panel.
□ a state in which the elastic member 64 is pressurized by the first lever part 620 may be released.
□ the elastic member 64 may supply the elastic force to pressurize the first lever part 620 to become separate from the mobile panel 72 .
□ the lever unit 62 rotates with respect to the rotational shaft 630 in such a way that the wedge 65 provided on the end of the second lever part 621 may be inserted into the rail unit 74 . Due
to a descent of the body and the mobile panel 72 , the wedge 65 may slide along the rail unit 74 .
□ the wedge 65 may slide on the rail unit 74 for a certain distance and then may be fixed by a frictional force between the wedge 65 and the rail unit 74 and a force of the wedge 65 to
pressurize the inner surface of the rail unit 74 , thereby halting the descent of the mobile panel 72 on which the body is mounted.
□ FIGS. 13 and 14 are views of a brake system 8 according to an exemplary embodiment.
□ the brake system 8 may be provided to halt a rotation of a brake shaft 91 .
□ the brake system 8 may be provided in a driving device 9 which increases or decreases a length of a column on which an X-ray generator is mounted.
□ driving of the driving device 9 may be halted by the brake system 8 .
□ a device on which the brake system 8 is mounted is not limited thereto.
□ the driving device 9 includes a rotational body 90 and the brake shaft 91 mounted on the rotational body 90 , and the rotational body 90 receives a driving force through the cable 92 .
□ the brake system 8 may operate and halt a rotation of the rotational body 90 when the cable 92 breaks.
□ the brake system 8 may include a lever unit 80 and an elastic member 83 .
□ the lever unit 80 may be rotatably provided on a lever-rotational shaft 81 . That is, the lever unit 80 rotates with respect to the lever-rotational shaft 81 .
□ one side of the lever unit 80 is designated as a first lever part 800 and the other side thereof is designated as a second lever part 801 .
□ the lever-rotational shaft 81 may be located closer to an end of the second lever part 801 than an end of the first lever part 800 as shown in FIG. 13 .
□ the elastic member 83 may be located on a side of the first lever part 800 .
□ the elastic member 83 may be provided to apply pressure to the first lever part 800 when there is no external force.
□ one side of the first lever part 800 may be pushed by the cable 92 .
□ the elastic member 83 may be compressed by the first lever part 800 .
□ a case in which the first lever part 800 receives a downward force from the cable 92 located above the first lever part 800 , while the elastic member 83 is located below the first lever part 800 , will be described .
□ Positions and force transfer directions of the cable 92 , the first lever part 800 , and the elastic member 83 are not limited thereto.
□ a wedge 82 may be provided on a side of the second lever part 801 .
□ the wedge 82 may be provided to be in contact with an outer surface 910 of the brake shaft 91 according to rotation of the lever unit 80 .
□ At least one of the outer surface 910 of the brake shaft 91 and an outer surface of the lever unit 80 may be surrounded with a material having a high friction coefficient.
□ the wedge 82 is separated from the outer surface 910 of the brake shaft 91 .
□ the brake shaft 91 and the rotational body 90 may rotate clockwise or counterclockwise.
□ the force which is applied to the first lever part 800 may be removed.
□ the first lever part 800 may rotate on the lever-rotational shaft 81 due to an elastic force of the elastic member 83 to allow the end of the first lever part 800 to face upward.
□ the second lever part 801 may move toward the brake shaft 91 and may be in contact with the outer surface 910 of the brake shaft 91 .
□ a rotation speed of the brake shaft 91 may gradually slow down and the brake shaft 91 may stop rotating due to a frictional force with the outer surface of the wedge 82 .
□ the brake shaft 91 stops rotating, thereby causing the rotational body 90 , on which the brake shaft 91 is mounted, to stop rotating as well .
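The braking sequence above (friction between the wedge 82 and the outer surface 910 gradually slowing the brake shaft 91 to a stop) can be approximated with rigid-body dynamics: a constant friction torque τ brings a shaft of inertia I spinning at angular speed ω to rest in t = Iω/τ. A sketch with purely hypothetical numbers, since the disclosure gives no torque or inertia values:

```python
def stop_time_s(inertia_kg_m2: float, omega_rad_s: float,
                friction_torque_Nm: float) -> float:
    """Time for a shaft decelerating under a constant friction torque
    to come to rest: t = I * omega / tau (rigid-body dynamics)."""
    return inertia_kg_m2 * omega_rad_s / friction_torque_Nm

# Hypothetical numbers for illustration only: a 0.02 kg*m^2 shaft at
# 30 rad/s braked by a constant 1.2 N*m friction torque.
print(stop_time_s(0.02, 30.0, 1.2))  # ~0.5 s
```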
□ a tomograph in accordance with one embodiment of the present invention obtains definite images of an object by preventing a body from moving by using a brake system.
A brake apparatus provided on a body of a medical apparatus, the brake apparatus including: a rail unit; a lever unit configured to rotate with respect to a rotational shaft; an elastic member
provided on the lever unit at a first end of the lever unit and configured to provide an elastic force to rotate the lever unit; a wedge provided on a second end opposite to the first end of the
lever unit, configured to be inserted into the rail unit according to rotation of the lever unit and configured to apply braking pressure on the rail unit.
This application is a Continuation of application Ser. No. 14/862,261, filed Sep. 23, 2015, which claims priority from Korean Patent Application No. 10-2015-0009676, filed on Jan. 21, 2015 in the
Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
BACKGROUND
1. Field
Apparatuses consistent with exemplary embodiments relate to a brake system and a medical apparatus including the same.
2. Description of the Related Art
Radiation imaging apparatuses are imaging systems which radiate radiation, for example, X-rays, onto an object such as the whole or a part of the human body or another object to obtain images of its interior, such as the internal material, structure, or organization of baggage.
Radiation imaging apparatuses are used as medical imaging systems for detecting an abnormality such as a lesion inside a human body, used as an image capture device to check an internal structure
of an object or a component, or used as a scanning device to scan baggage at the airport.
Radiation imaging apparatuses include a computed tomography (CT) scanner. A CT scanner surrounds a moving object, continuously irradiates radiation on the moving object from all directions (i.e.,
around 360 degrees) and detects rays passing through the object to obtain a plurality of cross-sectional images of the object. The CT scanner continuously irradiates radiation on the object from
the beginning to the end of scanning to obtain consecutive cross-sectional images thereof.
To obtain clear images of various parts of the object, a body of the CT scanner may be operated to be tilted. The body of the CT scanner may rotate at a high speed and irradiate radiation to the
object while tilting at a predetermined angle. Here, when the body of the CT scanner is moving while tilting, the quality of images obtained using X-rays may be negatively impacted.
One or more exemplary embodiments provide a tomograph capable of obtaining clear images of an object by preventing the movement of a body.
Additional aspects of the inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the
inventive concept.
In accordance with an aspect of an exemplary embodiment, there is provided a medical apparatus including a body rotatably provided to perform computed tomography (CT) scan, a base frame
configured to support the body, a rail unit mounted on an outer surface of the body, and a brake system mounted on the base frame and configured to perform braking of a rotation of the body. The
brake system includes a lever unit rotatably provided on a rotational shaft, and a wedge connected to one side of the lever unit and having both sides tapered. An inner surface of the rail unit
is formed to be tapered to correspond to an outer surface of the wedge.
When the wedge pressurizes the inner surface of the rail unit, the body may not move.
The rotational shaft may be provided more adjacent to one side of the lever unit than the other side of the lever unit.
The lever unit may receive a driving force from a motor and rotate on the rotational shaft.
A gear part connected to a driving gear part connected to the motor may be provided on the other side of the lever unit.
The driving force of the motor may be transferred to the lever unit while being amplified by a gear ratio of the driving gear part to the gear part.
The gear part may be engaged with a connection gear part, and the connection gear part may be engaged with the driving gear part.
The brake system may further include an elastic member which supplies an elastic force to the lever unit to allow the one side of the lever unit to be separate from the rail unit.
The elastic member may be located more adjacent to the other side of the lever unit than the one side of the lever unit and may transfer the elastic force to a bottom surface of the lever unit.
A protrusion may be provided on the bottom surface of the lever unit, and the elastic member may be mounted on the protrusion.
A frictional pad may be mounted on the outer surface of the wedge.
The brake system may further include a base plate mounted on the base frame and the lever unit is rotatably mounted on the base plate.
A fixing bracket may be mounted on the base plate, and the rotational shaft may pass through the fixing bracket and the lever unit.
A hole may be formed in the base plate, and the wedge may pass through the hole and may be inserted into the rail unit.
The rail unit may include a bottom part and side parts provided on both sides of the bottom part to face each other, and a distance between the two side parts may increase from the bottom part toward the ends of the side parts.
In accordance with an aspect of another exemplary embodiment, there is provided a tomograph including a body rotatably provided to perform a CT scan, a base frame configured to support the body,
a rail unit mounted on an outer surface of the body, and a brake system mounted on one of the body and the base frame and configured to prevent the movement of the body. The brake system includes
a lever unit rotatably provided on a rotational shaft and a wedge connected to the lever unit and configured to be inserted into the rail unit. An outer surface of the wedge and an inner surface
of the rail unit are formed to be tapered.
The brake system may further include a motor configured to transfer a driving force through the other side of the lever unit.
A gear part may be provided on the other side of the lever unit, the motor and the other side of the lever unit may be connected by a connection gear part.
The driving force of the motor may be transferred to the lever unit while being amplified by a gear ratio of the connection gear part to the gear part.
The rotational shaft may be configured more adjacent to one side of the lever unit than the other side of the lever unit.
A frictional member may be mounted on the outer surface of the wedge.
The frictional member may include rubber.
The brake system may further include an elastic member configured to supply an elastic force to allow the wedge to pressurize the inner surface of the rail unit.
In accordance with an aspect of yet another exemplary embodiment, there is provided a brake system which may perform braking of a tiltable body including a lever unit rotatably provided on a
rotational shaft, a wedge which has a tapered outer surface, is provided on one side of the lever unit, and halts the movement of the body by pressurizing one side of the body, and a motor which
transfers a driving force through the other side of the lever unit. The rotational shaft is located more adjacent to the one side of the lever unit than the other side of the lever unit.
The brake system may further include an elastic member which provides an elastic force to the lever unit to allow the wedge to pressurize the one side of the body.
A frictional member may be provided on the outer surface of the wedge.
A gear part may be provided on the other side of the lever unit, and the driving force of the motor may be transferred to the lever unit while being amplified by a gear ratio of a connection gear part connected to the motor to the gear part.
In accordance with an aspect of yet another exemplary embodiment, there is provided a brake apparatus provided on a body of a medical apparatus, the brake apparatus including: a rail unit; a
lever unit configured to rotate with respect to a rotational shaft; an elastic member provided on the lever unit at a first end of the lever unit and configured to provide an elastic force to
rotate the lever unit; a wedge provided on a second end opposite to the first end of the lever unit, configured to be inserted into the rail unit according to rotation of the lever unit and
configured to apply braking pressure on the rail unit.
The wedge may include an outer surface including: a first side of the wedge; and a second side of the wedge opposite to the first side of the wedge wherein the first and the second sides of the
wedge are tapered.
An inner surface of the rail unit may be tapered to correspond to the outer surface of the wedge.
The wedge may be configured to apply pressure to the inner surface of the rail unit and configured to halt a movement of the body of the medical apparatus.
The brake apparatus may further include a motor configured to transfer a driving force to the lever unit.
The lever unit may include a gear part provided at the first end of the lever unit and connected to a driving gear part of the motor.
A gear ratio of the driving gear part to the gear part may be configured to amplify the driving force of the motor transferred to the lever unit.
The gear part of the lever unit may be engaged with a connection gear part, and wherein the connection gear part is engaged with the driving gear part of the motor.
The wedge may be configured to halt at least one of a linear movement and a rotation of the body of the medical apparatus.
The rotational shaft may be provided closer to the second end of the lever unit than the first end of the lever unit.
The elastic member may be configured to provide the elastic force to the lever unit and to pressurize the rail unit with the wedge.
The elastic member may be provided closer to the first end of the lever unit than the second end of the lever unit.
The wedge may include a frictional member provided on an outer surface of the wedge.
The lever unit may include a protrusion provided on a bottom surface of the lever unit, and the protrusion may be fitted in the elastic member.
The wedge may include a trapezoidal cross-sectional shape, and an inner surface of the rail unit may have a shape corresponding to the trapezoidal cross-sectional shape of the wedge.
In accordance with an aspect of yet another exemplary embodiment, there is provided a medical apparatus including: a body configured to perform a scan; a base frame configured to support the
body; and a brake apparatus configured to perform braking of a movement of the body, wherein the brake apparatus includes: a rail unit attached to one of the body and the base frame; a lever unit
provided on the other one of the body and the base frame and configured to rotate with respect to a rotational shaft; a motor configured to provide a driving force to the lever unit; an elastic
member provided at a first end portion of the lever unit and configured to provide an elastic force; and a wedge provided on a second end portion opposite to the first end portion of the lever
unit and configured to perform braking of the body.
Opposite surfaces of the wedge may be tapered, and an inner surface of the rail unit may have a shape corresponding to an outer surface including the tapered opposite surfaces of the wedge.
The elastic member may be configured to supply the elastic force to rotate the lever unit to apply braking pressure to the rail unit via the wedge.
A frictional pad may be provided on an outer surface of the wedge.
The lever unit may be rotatably provided on the rotational shaft, and the rotational shaft may be provided closer to the second end portion of the lever unit than the first end portion of the
lever unit.
In accordance with an aspect of yet another exemplary embodiment, there is provided a brake apparatus provided on a body of a tomograph which performs a computed tomography (CT) scan on an
object, the brake apparatus including: a lever unit rotatably provided on a rotational shaft to rotate with respect to the rotational shaft; an elastic member provided at a first end of the lever
unit and configured to supply an elastic force; a wedge provided on a second end opposite to the first end of the lever unit and configured to perform braking; and a rail unit configured to
perform the braking by engaging with the wedge, wherein the wedge is configured to apply braking pressure to the rail unit due to the elastic force and configured to halt tilting of the body.
The elastic member may be configured to supply the elastic force to allow the wedge to apply pressure to the rail unit.
In accordance with an aspect of yet another exemplary embodiment, there is provided a brake apparatus provided on a medical apparatus including a body performing a scan and a base frame
supporting the body, the brake apparatus including: a rail unit attached to one of the body and the base frame; and a lever unit provided on the other one of the body and the base frame, the
lever unit configured to engage with the rail unit, wherein the lever unit includes: a base plate attached to the other one of the body and the base frame; a bracket body provided on the base
plate; a lever configured to rotate with respect to a rotational shaft inserted through the bracket body; an elastic member provided at a first end portion of the lever and configured to provide
an elastic force to rotate the lever; and a wedge provided on a second end portion opposite to the first end portion of the lever unit and configured to perform braking of the body with respect
to the base frame by contacting the rail unit according to rotation of the lever.
The brake apparatus may further include a motor configured to transfer a driving force to the lever unit.
The lever may include a gear part provided at the first end of the lever and connected to a driving gear part of the motor.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or other aspects of the disclosure will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying
drawings of which:
FIG. 1 is a perspective view of a medical apparatus according to an exemplary embodiment;
FIG. 2 is a side view of a body according to an exemplary embodiment;
FIGS. 3A and 3B are perspective views of a brake system according to an exemplary embodiment;
FIG. 4 is an exploded perspective view of the brake system according to an exemplary embodiment;
FIG. 5 is a cross-sectional view illustrating a wedge and a rail unit according to an exemplary embodiment;
FIGS. 6 and 7 are side views of the brake system according to an exemplary embodiment;
FIG. 8 is a schematic view of the brake system according to an exemplary embodiment;
FIG. 9 is a schematic view illustrating parts of the wedge part and the rail unit according to an exemplary embodiment;
FIG. 10 is a view of a mammographic apparatus including a brake system according to an exemplary embodiment;
FIGS. 11 and 12 are views of the brake system according to an exemplary embodiment; and
FIGS. 13 and 14 are views of a brake system according to still another exemplary embodiment.
DETAILED DESCRIPTION
Hereinafter, exemplary embodiments will be described in detail with reference to the attached drawings.
FIG. 1 is a perspective view of a medical apparatus 1 according to an exemplary embodiment. FIG. 2 is a side view of a body 2 according to an exemplary embodiment.
Referring to FIGS. 1 and 2, the medical apparatus 1 includes the body 2 and an examination stand 3. The body 2 may include an opening 20 in a center part and an X-ray generator 21 and an X-ray
detector 22 disposed thereinside to be opposite to each other. An object 300 located on the examination stand 3 may be inserted into the opening 20 to perform tomography scanning.
The body 2 includes a stator 23 and a rotor 24 in which the opening 20 is installed in the center thereof. The rotor 24 may be rotatably provided inside the stator 23. The X-ray generator 21 may
be provided on one side of the rotor 24, and the X-ray detector 22 may be provided on the other side of the rotor 24. The X-ray generator 21 and the X-ray detector 22 may be provided to be
opposite to each other.
When a current is supplied to the stator 23, the rotor 24 rotates thereinside, X-rays generated by the X-ray generator 21 are radiated onto the object 300, and X-rays passing through the object
300 may be detected by the X-ray detector 22.
The X-ray detector 22 may directly receive X-rays which pass through the object 300 or X-rays which are radiated to the periphery of the object 300 and do not reach the object 300, and may detect
the X-rays by conversion into electric signals. The medical apparatus 1 may further include an image processor which reads and generates images from the electric signals stored in the X-ray
detector 22, image-processes the generated images, or generates other images using the generated images. Also, the medical apparatus 1 may further include a controller for controlling whether
X-rays are radiated or not.
The body 2 may be supported by a base frame 4. The base frame 4 may be provided on both left and right sides of the body 2. The base frame 4 may be mounted outside the stator 23. To obtain
definite images of the object 300, the body 2 may be mounted on the base frame 4 to be tiltable. As shown in FIG. 2, the body 2 may be provided to be tiltable to allow a front side thereof to
move up and down.
A brake system (or a brake apparatus) 5 may be provided between the body 2 and the base frame 4. One of an outer surface of the body 2 and the base frame 4 may include a rail unit 50 and the
other may include the brake system 5. In the exemplary embodiment, the rail unit 50 may be provided on the outer surface of the body 2 and the brake system 5 may be provided on one side of the
base frame 4. A portion of the brake system 5 may be provided to be insertable into the rail unit 50.
As shown in FIG. 2, the rail unit 50 is provided to have a predetermined curvature in such a way that the rail unit 50 moves together with the body 2 and the portion of the brake system 5
inserted into the rail unit 50 may vary in position along the rail unit 50 when the body 2 tilts. The rail unit 50 may be provided to have a proper curvature and length depending on a tilt angle
and shape of the body 2.
The rail unit 50 may be a part of a circle having a radius R and having a rotational center O as shown in FIG. 2. When the radius R increases, the occurrence of vibrations of the body may be
reduced and the braking property of the brake system 5 may be improved. However, when the radius R of the body 2 increases, a range of tilt angles (θ1+θ2) may be reduced. To obtain definite
images of the object 300 , the body 2 may be provided to operate at a tilt angle of about 60°. For example, based on when the body 2 does not tilt, that is, a front and rear of the body 2 are
located in parallel to a bottom surface, a tilt angle θ1 to allow the front of the body 2 to face downward and a tilt angle θ2 to allow the front of the body 2 to face upward may be provided to
be about 30°, respectively. Here, the radius R may be present within a range of 400 mm to 440 mm. More specifically, the radius R may be about 420 mm.
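Given the figures quoted above (a radius R of about 420 mm and a total tilt range θ1+θ2 of about 60°), the minimum arc the rail unit 50 must span follows from the arc-length relation s = R·θ. A small sketch using those values:

```python
import math

def rail_arc_length_mm(radius_mm: float, tilt_range_deg: float) -> float:
    """Minimum arc length of the curved rail unit so the wedge stays
    engaged over the full tilt range: s = R * theta (theta in radians)."""
    return radius_mm * math.radians(tilt_range_deg)

# Values stated in the text: R = 420 mm, tilt range 30 + 30 = 60 degrees.
print(round(rail_arc_length_mm(420.0, 60.0), 1))  # ~439.8 mm
```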
Due to the brake system 5, the body 2 may be fixed not to move while tilting even when the rotor 24 rotates. A detailed configuration of the brake system 5 will be described below.
The examination stand 3 includes a supporter 30 and a transfer unit 31. The transfer unit 31 may be slidably provided above the supporter 30. When the object 300 is located on the transfer unit
31, the transfer unit 31 slides to be inserted into the body 2 through the opening 20. When the object 300 is inserted into the opening 20, the object 300 may be located between the X-ray
generator 21 and the X-ray detector 22. The rotor 24 rotates on the object 300, and images of the object 300 from various angles may be taken using X-rays generated by the X-ray generator 21.
Hereinafter, an exemplary embodiment in which the rail unit 50 is provided on the outer surface of the body 2 and the brake system 5 is provided on one side of the base frame 4 will be described.
FIGS. 3A and 3B are perspective views of the brake system 5 according to an exemplary embodiment. FIG. 4 is an exploded perspective view of the brake system 5.
Referring to FIGS. 3A, 3B and 4, the brake system 5 is provided on one surface (i.e., a surface facing the body 2) of the base frame 4 to allow a part of a lever unit 51 to be insertable into the
rail unit 50 provided on the outer surface of the body 2. As the body 2 tilts, a position of the lever unit 51 with respect to the rail unit 50 may vary.
The rail unit 50 may include a bottom part 500 and side parts 501 and 502 extending or protruding from the bottom part 500. The side parts 501 and 502 may be provided on both sides of the bottom
part 500 to be opposite to each other. A sliding part 503 in which the lever unit 51 is inserted and moves may be formed by the bottom part 500 and the side parts 501 and 502. The bottom part 500
may include a plurality of coupling holes 500 a. The rail unit 50 may be mounted on the outer surface of the body 2 by a coupling member passing through the plurality of coupling holes 500 a.
Inner surfaces of the side parts 501 and 502 may be provided to be tapered. Specifically, a greater distance between the inner surfaces of the side parts 501 and 502 may be formed farther away
from the bottom part 500. That is, a distance D1 between ends of the side parts 501 and 502 located farthest from the bottom part 500 may be formed to be greater than a distance D2 between parts
of the side parts 501 and 502 adjacent to the bottom part 500.
The brake system 5 may include the lever unit 51, an elastic member 52 which provides the lever unit 51 with an elastic force, and a driving source 53 capable of driving the lever unit 51. The
lever unit 51, the elastic member 52, and the driving source 53 may be provided on a base plate 57. The base plate 57 may be mounted on the surface of the base frame 4 facing the body 2.
The lever unit 51 includes a first lever part 510 and a second lever part 511. The second lever part 511 may be provided to extend or protrude from one side (i.e., a bottom side facing the rail
unit 50) of the first lever part 510 to form an approximate right angle. The second lever part 511 may be provided adjacent to the one end (i.e. a first end/a first side) of the first lever part
510. A wedge 512 may be provided on an end of the second lever part 511. A hole 570 may be formed in one side of the base plate 57. The wedge 512 may pass through the hole 570 and may be inserted
into the sliding part 503 of the rail unit 50.
The wedge 512 may be provided to correspond to a shape of an inner surface of the rail unit 50 which forms the sliding part 503. Outer surfaces 512 a and 512 b of the wedge 512 may be formed to
correspond to tapered shapes of the inner surfaces of the side parts 501 and 502 of the rail unit 50. That is, a distance between the outer surfaces 512 a and 512 b which face each other may be
provided to become smaller toward an end of the wedge 512 (i.e., an end provided closer to the rail unit 50). Due to a rotation of the first lever part 510, a bottom surface 512 c of the wedge
512 may be in contact with the bottom part 500 of the rail unit 50 according to a rotational position of the first lever part 510.
The elastic member 52 may be mounted on one side (i.e., a bottom side) of the first lever part 510.
A protrusion 513 may be further provided on the one side of the first lever part 510, and the elastic member 52 may be mounted on the protrusion 513. That is, the elastic member 52 is engaged
with the first lever part 510 by the protrusion 513 being inserted in the elastic member according to the exemplary embodiment. The protrusion 513 may be provided adjacent to the other end (i.e.,
a second side/a second end) of the first lever part 510.
In detail, the elastic member 52 may fit on the protrusion 513 and may be located between one surface of the first lever part 510 and one surface of the base plate 57, thereby providing a bottom
surface of the first lever part 510 with an elastic force. The elastic member 52 may apply the elastic force to allow the bottom surface at the other end (i.e. the second end) of the first lever
part 510 to become farther from the base plate 57. When the bottom surface at the other end (i.e., the second end) of the first lever part 510 becomes farther from the base plate 57, in turn, the
wedge 512 may be pushed toward the rail unit 50 and apply braking pressure to the body 2.
In the above described exemplary embodiment, the elastic member 52 is provided adjacent to the other end (i.e., the second end opposite to the first end where the wedge 512 is located) of the
first lever part 510 and provides the bottom surface of the first lever part 510 with the elastic force. However, an installation position and a pressurization position of the elastic member 52
are not limited thereto. For example, the elastic member 52 may be provided adjacent to the one end (i.e., the first end where the wedge 512 is located) of the first lever part 510 and may apply
the elastic force to a top surface of the first end of the first lever part 510 to perform braking. Hereinafter, a case in which the elastic member 52 is provided on the other end of the first lever
part 510 (i.e., the second end opposite to the first end where the wedge 512 is located) and applies the elastic force to the bottom surface of the first lever part 510 will be described.
A gear part 514 may be provided on the other end (i.e., the second end) of the first lever part 510 in the exemplary embodiment. The gear part 514 may be engaged with a gear part 541 connected to
a motor 53 that will be described below. A gear ratio of the gear part 514 provided on the other end of the first lever part 510 to the gear part 541 connected to the motor 53 is suitably
adjusted to amplify and transfer a driving force from the motor 53 to the first lever part 510.
A rotational shaft 552 may be provided between the second lever part 511 and the protrusion 513. A through hole 516 may be formed in the first lever part 510, and the rotational shaft 552 may be
provided to pass through the through hole 516. The through hole 516 may be formed to pass through between the second lever part 511 and the protrusion 513. Accordingly, the rotational shaft 552
may be provided to extend in a widthwise direction of the first lever part 510.
The rotational shaft 552 may be provided closer to the one end of the first lever part 510 provided with the second lever part 511 than the other end of the first lever part 510. In the exemplary
embodiment, due to a leverage effect, the driving force applied on the other end of the first lever part 510 may be transferred to the one end of the first lever part 510 while being amplified.
When the driving force of the motor 53 is transferred to the other end of the first lever part 510, the force transferred to the other end of the first lever part 510 is transferred to the one
end of the first lever part 510 while being amplified. Accordingly, the wedge 512 applies pressure to the inner surface of the rail unit 50 using the amplified force, thereby improving the
braking performance of the brake system 5 with respect to the body 2.
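The two-stage amplification described here (the gear ratio at the second end of the first lever part 510, then the lever-arm ratio about the rotational shaft 552, which sits closer to the wedge end) multiplies the motor force before it reaches the wedge 512. A sketch with hypothetical numbers, since the disclosure gives no specific ratios:

```python
def braking_force(motor_force: float, gear_ratio: float,
                  drive_arm: float, wedge_arm: float) -> float:
    """Force delivered at the wedge: the motor force is first amplified
    by the gear ratio at the driven end of the lever, then again by the
    lever-arm ratio about the rotational shaft (shaft closer to the
    wedge end, so drive_arm > wedge_arm)."""
    return motor_force * gear_ratio * (drive_arm / wedge_arm)

# Hypothetical numbers for illustration only: 10 N at the motor,
# a 3:1 gear reduction, and lever arms of 80 mm and 20 mm.
print(braking_force(10.0, 3.0, 80.0, 20.0))  # 120.0 N
```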
The base plate 57 may be provided with a fixing bracket 55 including a hole 551 into which the rotational shaft 552 is insertable. The fixing bracket 55 may be coupled with the base plate 57
using a coupling member. A bracket body 550 of the fixing bracket 55 may be provided to protrude from the one surface of the base plate 57, and the hole 551 into which the rotational shaft 552 is
insertable may be formed in the bracket body 550 . The rotational shaft 552 is inserted into the through hole 516 formed in the first lever part 510 and the hole 551 formed in the fixing bracket 55 ,
thereby fastening the lever unit 51 to the base plate 57 to be rotatable on the rotational shaft 552.
The motor 53 may be provided on the one side of the base plate 57. The driving force of the motor 53 may be transferred to the lever unit 51 through a link between the gear part 514 and the gear
part 541. The lever unit 51 receives the driving force of the motor 53 and moves to allow the other end of the first lever part 510 to approach the base plate 57. When the other end of the first
lever part 510 approaches the base plate 57, the one end of the first lever part 510 becomes farther from the base plate 57 and the wedge 512 provided on the end of the second lever part 511
located on the one end of the first lever part 510 may become separate from the inner surface of the rail unit 50.
As an example, when the motor 53 and the lever unit 51 are provided to extend in parallel due to spatial constraints of the brake system 5, the driving force of the motor 53 may be transferred to
the lever unit 51 through a driving gear part 530 provided on the motor 53 and a connection gear unit 54.
The base plate 57 may be provided with a mounting bracket 56 to mount the connection gear unit 54 thereon to be rotatable. The mounting bracket 56 may include a bracket body 560 provided with a
space 562, in which the connection gear unit 54 is accommodated to prevent a rotation of the connection gear unit 54 from being interfered with other parts, and a bracket cover 561 which covers
an opening formed in one side of the mounting bracket 56.
The mounting bracket 56 may include a shaft insertion hole 560a into which a rotational shaft 563 is insertable. The shaft insertion hole 560a may be formed in the bracket body 560. The rotational shaft 563 may pass through the shaft insertion hole 560a and an insertion hole 542 formed in the connection gear unit 54, thereby mounting the connection gear unit 54 on the mounting bracket 56 to be rotatable.
A first connection gear part 540 engaged with the driving gear part 530 is formed on one side of the connection gear unit 54, and a second connection gear part 541 engaged with the gear part 514
formed on the other end of the first lever part 510 may be formed on the other side of the connection gear unit 54. When the motor 53 and the lever unit 51 are located in parallel, the connection
gear unit 54 is formed to be bent in such a way that the first connection gear part 540 is formed on one end and the second connection gear part 541 is formed on the other end. Here, extending
directions of the teeth of the gear part 514 and the second connection gear part 541 may intersect with each other.
FIG. 5 is a cross-sectional view illustrating the wedge 512 and the rail unit 50 according to an exemplary embodiment.
Referring to FIG. 5, frictional members (e.g., frictional pads) 515a and 515b formed of a material having a high friction coefficient may be provided on the outer surfaces 512a and 512b of the wedge 512. The frictional members 515a and 515b may be mounted on the outer surfaces 512a and 512b of the wedge 512 and may increase a frictional force between the wedge 512 and the inner surface of the rail unit 50. For example, the frictional members 515a and 515b may include a material having a high frictional force such as rubber and silicone.
The frictional members 515a and 515b are provided on the outer surfaces 512a and 512b of the wedge 512 to increase the frictional force between the wedge 512 and the rail unit 50, thereby improving the braking performance of the brake system 5.
In the above described exemplary embodiment, the frictional members 515a and 515b are provided on the outer surfaces 512a and 512b of the wedge 512. However, the exemplary embodiment is not limited thereto. For example, the frictional members may be provided on the bottom surface 512c of the wedge 512. Also, frictional members may be provided on the inner surface of the rail unit 50.
FIGS. 6 and 7 are side views of the brake system 5 according to an exemplary embodiment.
As shown in FIG. 6, when the driving gear part 530 rotates in a direction A due to the motor 53, the first connection gear part 540 may rotate in another direction B. When the first connection
gear part 540 rotates in the other direction B, the gear part 514 engaged with the second connection gear part 541 may rotate in a downward direction C1 and the first lever part 510 may rotate
with respect to the rotational shaft 552 such that the one end of the first lever part 510 moves closer to the base plate 57. Here, the second lever part 511 may move in an upward direction C2,
and the wedge 512 connected to the second lever part 511 may become separate from the inner surface of the rail unit 50. Because the wedge 512 does not interfere with the rail unit 50, the body 2 mounted with the rail unit 50 may tilt to allow the front thereof to move up and down.
On the other hand, as shown in FIG. 7, when the driving gear part 530 rotates in the direction B due to the motor 53, the first connection gear part 540 may rotate in the direction A. When the
first connection gear part 540 rotates in the direction A, the gear part 514 engaged with the second connection gear part 541 may rotate in the upward direction C2 and the first lever part 510
may rotate on the rotational shaft 552 to become farther from the base plate 57. Here, the second lever part 511 may move in the downward direction C1, and the wedge 512 connected to the second
lever part 511 may be in contact with the inner surface of the rail unit 50.
When the bottom side of the first lever part 510 is pressed at one end by the driving force transferred from the motor 53, the wedge 512 may apply pressure to the inner surface of the rail unit 50 to perform braking. The frictional force between the outer surfaces 512a and 512b of the wedge 512 and the inner surface of the rail unit 50, together with the force of the wedge 512 applied to the inner surface of the rail unit 50, may fix the rail unit 50 not to move. Because the rail unit 50 is fixed by the wedge 512, the body 2 may be fixed not to tilt. As described above, because the body 2 is fixed not to move by the brake system 5, even when the rotor 24 rotates to perform radiography, the body 2 does not move, thereby obtaining definite images of the object 300.
FIG. 8 is a schematic view of the brake system 5 according to an exemplary embodiment. FIG. 9 is a schematic view illustrating parts of the wedge 512 and the rail unit 50 according to an
exemplary embodiment.
Referring to FIGS. 8 and 9, the lever unit 51 of the brake system 5 applies pressure to the rail unit 50 via the wedge 512, thereby fixing the rail unit 50 not to move. In the exemplary
embodiment, to allow the wedge 512 to apply braking pressure to the inner surface of the rail unit 50, the first lever part 510 may receive the driving force of the motor 53 and may rotate with
respect to the rotational shaft 552.
The first lever part 510 may receive the elastic force of the elastic member 52 and may rotate with respect to the rotational shaft 552. Even when the driving force of the motor 53 is not
transferred to the first lever part 510, the elastic member 52 is located on the bottom surface of the first lever part 510 in such a way that the wedge 512 mounted on the second lever part 511
may apply pressure to the inner surface of the rail unit 50.
As described above, the first lever part 510 rotates with respect to the rotational shaft 552 due to one of the driving force of the motor 53 and the elastic force of the elastic member 52,
thereby allowing the wedge 512 to apply pressure to the inner surface of the rail unit 50.
Hereinafter, a fixing configuration is described in which, when the driving force of the motor 53 is not transferred to the lever unit 51, the body 2 does not move due to the elastic force of the elastic member 52 and the shape of the wedge 512.
Due to the elastic force transferred from the elastic member 52, the bottom surface 512c of the wedge 512 may apply pressure to the bottom part 500 of the rail unit 50. As shown in FIG. 9, due to the shapes of the outer surfaces 512a and 512b of the wedge 512 and the inner surface of the rail unit 50, the outer surfaces 512a and 512b of the wedge 512 may apply braking pressure to the inner surface of the rail unit 50. In the exemplary embodiment, the wedge 512 may apply pressure to the rail unit 50 with a greater force than a case in which the outer surfaces 512a and 512b of the wedge 512 are perpendicular to the bottom surface 512c of the wedge 512 or the inner surface of the rail unit 50 is perpendicular to the bottom part 500 of the rail unit 50.
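The advantage of the tapered surfaces can be illustrated with a generic frictionless statics sketch: for a symmetric wedge pressed down with force W, each face inclined at angle a from vertical carries a normal force N = W/(2 sin a), which grows as the taper becomes shallower, whereas fully vertical faces would carry no normal force from W at all. The taper angle and preload below are assumed numbers, not values from the embodiment:

```python
import math

def face_normal_force(W, taper_deg):
    """Frictionless estimate of the normal force on each tapered face of a
    symmetric wedge pressed down with force W. taper_deg is the inclination
    of each face from vertical; vertical equilibrium gives 2*N*sin(a) = W."""
    a = math.radians(taper_deg)
    return W / (2.0 * math.sin(a))

# Assumed values: 100 N elastic preload on the wedge, 10-degree taper.
print(round(face_normal_force(100.0, 10.0), 1))  # 287.9 N per face
```

A shallower taper amplifies the face pressure further, which is consistent with the greater braking force the text attributes to the tapered shapes.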
As shown in FIG. 8 illustrating the first lever part 510 from the side, a force applied by the wedge 512 in a vertical direction to the bottom part 500 of the rail unit 50 at one point P of the
rail unit 50 may be referred to as W. A normal force at the point P may be designated as N, and a frictional force at the point P may be designated as f. Such frictional force may be expressed as
f=μN, in which μ indicates a friction coefficient.
When the reaction force applied in a linear direction from the rotational shaft 552, at which the first lever part 510 is fixed, to the point P is designated as T, the side of the rail unit 50 on which the point P is located is designated as G, and the angle formed by T and G is designated as θ, the forces applied at the point P may be expressed as below.
μN − T cos θ = 0
−W + N − T sin θ = 0
Arranging these with respect to W, W = (T/μ)cos θ − T sin θ.
Here, when W is smaller than 0, it is possible to fix the rail unit 50 by pressurizing the point P.
Dividing the condition (T/μ)cos θ − T sin θ < 0 by T eliminates T:
(1/μ)cos θ − sin θ < 0
Transposing sin θ to the right side and dividing both sides by cos θ gives:
(1/μ) < (sin θ/cos θ)
Because sin θ/cos θ is tan θ, when tan θ > 1/μ, the wedge 512 may pressurize the rail unit 50 to fix the body 2 not to tilt.
For example, when a friction coefficient μ is 0.4, the wedge 512 may apply pressure to the rail unit 50 to be fixed and not allow the body 2 to move when θ is greater than 70°. As described
above, even when the driving force of the motor 53 is not transferred, the movement of the body 2 may be prevented while taking images due to the fixing configuration which allows the wedge 512
to apply braking pressure to the rail unit 50 due to the elastic force of the elastic member 52.
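The self-locking condition derived above (tan θ > 1/μ) is easy to check numerically; the short sketch below only restates that inequality and the μ = 0.4 example:

```python
import math

def locks(mu, theta_deg):
    """Self-locking condition from the derivation: tan(theta) > 1/mu."""
    return math.tan(math.radians(theta_deg)) > 1.0 / mu

# Threshold angle for mu = 0.4: theta must exceed atan(1/0.4) ≈ 68.2 degrees,
# consistent with the "greater than 70 degrees" figure in the text.
threshold = math.degrees(math.atan(1.0 / 0.4))
print(round(threshold, 1))  # 68.2
print(locks(0.4, 70.0))     # True  -> wedge holds the rail unit fixed
print(locks(0.4, 60.0))     # False -> wedge would slip
```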
The tapered shapes of the inner surface of the rail unit 50 and the outer surfaces 512a and 512b of the wedge 512 in the brake system 5 allow the force of the wedge 512 applied to the inner surface of the rail unit 50 to be increased. Also, an output of the motor 53 may be transferred to the lever unit 51 while being amplified using the leverage effect and the gear ratio between the gears located between the motor 53 and the lever unit 51, thereby improving the performance of the brake system 5.
The frictional members 515a and 515b attached to the outer surfaces 512a and 512b of the wedge 512 increase the frictional force with the inner surface of the rail unit 50 and prevent the degradation of braking performance that would otherwise occur when fluids such as water or oil between the wedge 512 and the rail unit 50 reduce the frictional force. Because the one side of the lever unit 51 receives the elastic force from the elastic member 52, the wedge 512 is fixed so as not to be pushed away from the inner surface of the rail unit 50 even when the motor 53 is not driven.
Due to the configuration of the brake system 5 as described above, the body 2 does not move even while the rotor 24 rotates at a high speed, thereby obtaining definite images of the object 300 and improving the quality of the medical apparatus 1.
Hereinafter, another exemplary embodiment in which a brake system 6 having the structural characteristics described above is applied to another apparatus will be described.
FIG. 10 is a view of a mammographic apparatus 7 including the brake system 6 according to an exemplary embodiment. FIGS. 11 and 12 are views of the brake system 6 according to an exemplary embodiment.
Referring to FIGS. 10 to 12, the brake system 6 may be provided to prevent a body (not shown) of the mammographic apparatus 7 from falling. The body of the mammographic apparatus 7 may be
provided to be movable up and down along a stand 70.
The body may move along an elevating shaft 73 disposed on the stand 70 and extending upward and downward. The body may be connected to the elevating shaft 73 via a mobile panel 72. The mobile
panel 72 is mounted on the elevating shaft 73 and moves up and down along the elevating shaft 73, thereby allowing the body to move up and down along the elevating shaft 73.
The mobile panel 72 may be connected to a cable 700 operated by a driving source 75 and may move up and down as the cable 700 rotates clockwise or counterclockwise, respectively. Pulleys 701
which respectively receive a driving force from the driving source 75 may be provided on a top and a bottom of the stand 70. The cable 700 may be wound on the pulleys 701 and may rotate together
with the pulleys 701. The mobile panel 72 may include the brake system 6. The stand 70 may include a rail unit 74 extending upward and downward. The rail unit 74, similar to the rail unit 50, may
include a bottom part and side parts extending from the bottom part. The side parts may be provided on both sides of the bottom part to face each other. Inner surfaces of the side parts may be
provided to be tapered. That is, the distance between the inner surfaces of the side parts may increase farther from the bottom part.
The brake system 6 may include a lever unit 62 and an elastic member 64. The lever unit 62 may be mounted on the mobile panel 72 by a mounting bracket 63. Here, the lever unit 62 may be disposed
to extend upward and downward to be parallel to the elevating shaft 73. The lever unit 62 may be rotatably provided on a rotational shaft 630 which passes through the mounting bracket 63 and the
lever unit 62. The lever unit 62 rotates with respect to the rotational shaft 630.
The cable 700 may be mounted on one side of the lever unit 62. A wedge 65 which is insertable into the rail unit 74 may be provided on the other side, opposite to the one side, of the lever unit 62. The wedge 65 may be formed to allow an outer surface thereof to correspond to a tapered shape of an inner surface of the rail unit 74. When an upper part of the lever unit 62 based on the
rotational shaft 630 is designated as a first lever part 620 and a lower part of the lever unit 62 is designated as a second lever part 621, the cable 700 may be mounted on an end of the first
lever part 620 and the wedge 65 may be located on an end of the second lever part 621. The end of the first lever part 620 is tilted to be positioned close to the mobile panel 72 due to the cable
700 mounted on the end of the first lever part 620, and the end of the second lever part 621 may be provided to be positioned away from the mobile panel 72 as shown in FIG. 11. The wedge 65
located on the end of the second lever part 621 may be separate from the mobile panel 72 and the rail unit 74.
An elastic member container 610 may be provided on one side of the mobile panel 72. The elastic member container 610 may accommodate the elastic member 64. The elastic member container 610 may be
located to allow the elastic member 64 accommodated in the elastic member container 610 to push away the first lever part 620 from the mobile panel 72.
When the mobile panel 72 normally ascends and descends due to the cable 700, the elastic member 64 may maintain a state of being pressurized by the first lever part 620 while being accommodated
in the elastic member container 610.
When the cable 700 breaks while the mammographic apparatus 7 is being used as shown in FIG. 12, the mobile panel 72 on which the body is mounted may descend due to the weights of the body and the mobile panel. When the cable 700 breaks, the state in which the elastic member 64 is pressurized by the first lever part 620 is released, and the elastic member 64 supplies the elastic force that pushes the first lever part 620 away from the mobile panel 72. The lever unit 62 rotates with respect to the rotational shaft 630 in such a way that the wedge 65 provided on the end of the second lever part 621 is inserted into the rail unit 74. Due to the descent of the body and the mobile panel 72, the wedge 65 may slide along the rail unit 74 for a certain distance and then may be fixed by the frictional force between the wedge 65 and the rail unit 74 and the force of the wedge 65 pressurizing the inner surface of the rail unit 74, thereby halting the descent of the mobile panel 72 on which the body is mounted. Thereby, even when the cable 700 breaks, it is possible to prevent an accident such as a rapid descent of the body.
FIGS. 13 and 14 are views of a brake system 8 according to an exemplary embodiment.
Referring to FIGS. 13 and 14, the brake system 8 may be provided to halt a rotation of a brake shaft 91. As an example, the brake system 8 may be provided in a driving device 9 which increases or
decreases a length of a column on which an X-ray generator is mounted. When a cable 92 provided in the driving device 9 breaks, driving of the driving device 9 may be halted by the brake system 8. A device on which the brake system 8 is mounted is not limited thereto.
Hereinafter, a case in which the driving device 9 includes a rotational body 90 and the brake shaft 91 mounted on the rotational body 90 and the rotational body 90 receives a driving force
through the cable 92 will be described. The brake system 8 may operate and halt a rotation of the rotational body 90 when the cable 92 breaks.
The brake system 8 may include a lever unit 80 and an elastic member 83. The lever unit 80 may be rotatably provided on a lever-rotational shaft 81. That is, the lever unit 80 rotates with
respect to the lever-rotational shaft 81. Based on the lever-rotational shaft 81, one side of the lever unit 80 is designated as a first lever part 800 and the other side thereof is designated as
a second lever part 801. The lever-rotational shaft 81 may be located closer to an end of the second lever part 801 than an end of the first lever part 800 as shown in FIG. 13.
The elastic member 83 may be located on a side of the first lever part 800. The elastic member 83 may be provided to apply pressure to the first lever part 800 when there is no external force.
When the driving device 9 normally operates, one side of the first lever part 800 may be pushed by the cable 92. Here, the elastic member 83 may be compressed by the first lever part 800.
Hereinafter, a case in which the first lever part 800 receives a force which allows the first lever part 800 to face downward by the cable 92 located above the first lever part 800 and the
elastic member 83 is located below the first lever part 800 will be described. Positions and force transfer directions of the cable 92, the first lever part 800, and the elastic member 83 are not
limited thereto.
A wedge 82 may be provided on a side of the second lever part 801. The wedge 82 may be provided to be in contact with an outer surface 910 of the brake shaft 91 according to rotation of the lever
unit 80. At least one of the outer surface 910 of the brake shaft 91 and an outer surface of the lever unit 80 may be surrounded with a material having a high friction coefficient.
When the cable 92 does not break and the driving device 9 normally operates, the wedge 82 is separated from the outer surface 910 of the brake shaft 91. Here, the brake shaft 91 and the
rotational body 90 may rotate clockwise or counterclockwise.
When the cable 92 breaks, the force which is applied to the first lever part 800 may be removed. When the force which is applied to the first lever part 800 is removed, the first lever part 800
may rotate on the lever-rotational shaft 81 due to an elastic force of the elastic member 83 to allow the end of the first lever part 800 to face upward. Here, the second lever part 801 may move
toward the brake shaft 91 and may be in contact with the outer surface 910 of the brake shaft 91.
A rotation speed of the brake shaft 91 may gradually slow down, and the brake shaft 91 may stop rotating due to a frictional force with the outer surface of the wedge 82. When the brake shaft 91 stops rotating, the rotational body 90, on which the brake shaft 91 is mounted, also stops rotating.
As described above, even when the cable 92 of the driving device 9 breaks, the rotation of the rotational body 90 is halted by the brake system 8, thereby preventing the column on which the X-ray
generator is mounted from rapidly descending.
As is apparent from the above description, a tomograph in accordance with one embodiment of the present invention obtains definite images of an object by using a brake system to prevent a body from moving.
While exemplary embodiments have been particularly shown and described above, it would be appreciated by those skilled in the art that various changes may be made therein without departing from
the principles and spirit of the inventive concept, which is defined in the following claims.
Claims (12)
What is claimed is:
1. An X-ray imaging apparatus comprising:
a stand;
a body provided to be movable along the stand;
a cable provided to allow the body to be movable; and
a brake apparatus provided to halt a movement of the body,
wherein the brake apparatus comprises:
a lever unit configured to rotate with respect to a rotational shaft and including a first end and a second end opposite to the first end; and
a wedge provided on the second end of the lever unit, and
wherein the wedge is provided to halt the movement of the body when the cable breaks.
2. The X-ray imaging apparatus of claim 1, wherein the stand includes a rail unit, and
wherein the wedge is inserted into the rail unit when the cable breaks and the movement of the body is halted by a frictional force generated between the wedge and the rail unit.
3. The X-ray imaging apparatus of claim 2, wherein the brake apparatus further comprises an elastic member provided on the first end of the lever unit and configured to provide an elastic force
to rotate the lever unit.
4. The X-ray imaging apparatus of claim 3, wherein when the cable breaks, a state in which the elastic member is pressurized by the first end of the lever unit is released, and the lever unit
rotates so that the wedge is inserted into the rail unit.
5. The X-ray imaging apparatus of claim 2, wherein the body moves upward and downward along the stand, and
wherein the rail unit is formed on the stand to extend along a movement direction of the body.
6. The X-ray imaging apparatus of claim 1, further comprising:
pulleys rotatably provided to wind the cable; and
a driving source provided to rotate the pulleys.
7. The X-ray imaging apparatus of claim 3, further comprising a mobile panel provided to couple the body and the stand,
wherein the body is movably coupled to the stand via the mobile panel, and
wherein the mobile panel is provided with an elastic member container to accommodate the elastic member.
8. An X-ray imaging apparatus comprising:
a column on which an X-ray generator is mounted;
a driving device rotatably provided to increase or decrease a length of the column;
a cable coupled to the driving device to drive the driving device; and
a brake apparatus provided to halt driving of the driving device,
wherein the brake apparatus comprises:
a lever unit configured to rotate with respect to a rotational shaft and including a first end and a second end opposite to the first end; and
a wedge provided on the second end of the lever unit, and
wherein the wedge is provided to halt a rotation of the driving device by contacting the driving device when the cable breaks.
9. The X-ray imaging apparatus of claim 8, wherein the driving device comprises:
a brake shaft; and
a rotational body configured to rotate with respect to the brake shaft, and
wherein the wedge is provided to halt a rotation of the rotational body by contacting the brake shaft when the cable breaks.
10. The X-ray imaging apparatus of claim 9, wherein at least one of an outer surface of the brake shaft and an outer surface of the wedge is surrounded with a material having a high friction coefficient.
11. The X-ray imaging apparatus of claim 9, wherein the brake apparatus further comprises an elastic member provided on the first end of the lever unit and configured to provide an elastic force
to rotate the lever unit.
12. The X-ray imaging apparatus of claim 11, wherein when the cable breaks, a state in which the elastic member is pressurized by the first end of the lever unit is released, and the lever unit
rotates so that the wedge contacts the brake shaft.
US15/963,328 2015-01-21 2018-04-26 Brake system and medical apparatus including the same Expired - Fee Related US10631796B2 (en)
Priority Applications (1)
Application Number Priority Date Filing Date Title
US15/963,328 US10631796B2 (en) 2015-01-21 2018-04-26 Brake system and medical apparatus including the same
Applications Claiming Priority (4)
Application Number Priority Date Filing Date Title
KR1020150009676A KR101622539B1 (en) 2015-01-21 2015-01-21 Brake apparatus and medical apparatus having the same
KR10-2015-0009676 2015-01-21
US14/862,261 US9974497B2 (en) 2015-01-21 2015-09-23 Brake system and medical apparatus including the same
US15/963,328 US10631796B2 (en) 2015-01-21 2018-04-26 Brake system and medical apparatus including the same
Related Parent Applications (1)
Application Number Title Priority Date Filing Date
US14/862,261 Continuation US9974497B2 (en) 2015-01-21 2015-09-23 Brake system and medical apparatus including the same
Legal Events
Date Code Title Description
FEPP Fee payment procedure Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP Information on status: patent application and granting procedure in general Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STCF Information on status: patent grant Free format text: PATENTED CASE
FEPP Fee payment procedure Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
LAPS Lapse for failure to pay maintenance fees Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STCH Information on status: patent discontinuation Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
FP Lapsed due to failure to pay maintenance fee Effective date: 20240428
Python matplotlib – 1
In this article:
1. A simple graphic with pyplot
The simplest graphic with matplotlib is produced by plotting with the pyplot interface:
import matplotlib.pyplot as myplot
myplot.plot([1, 5], [2, 12])
myplot.show()
This shows a graphic like the one below:
From the image we can see that pyplot draws a line between the two points (1,2) and (5,12)
2. Using lists for graphic coordinates
Arguments for “plot” in myplot.plot([1, 5], [2, 12]) are 2 lists:
l1 = [1, 5]
l2 = [2, 12]
The same graphic is obtained using:
import matplotlib.pyplot as myplot
l1 = [1, 5]
l2 = [2, 12]
myplot.plot(l1, l2)
In this case the line is drawn between:
point 1, with coordinates:
x = the first element of list l1, i.e. l1[0]
y = the first element of list l2, i.e. l2[0]
and point 2, with coordinates:
x = the second element of list l1, i.e. l1[1]
y = the second element of list l2, i.e. l2[1]
3. A graphic with many lines
If lists l1 and l2 have more elements, plot will draw lines between the corresponding points. For example:
import matplotlib.pyplot as myplot
l1 = [1, 3, 5]
l2 = [2, 4, 12]
myplot.plot(l1, l2)
This code will draw:
In this case the lines connect the points with coordinates:
(1,2), (3,4), (5,12)
that is, (l1[0],l2[0]), (l1[1],l2[1]), (l1[2],l2[2])
4. Using tuples for graphic coordinates
The same graphic is obtained if we use tuples, i.e. code:
import matplotlib.pyplot as myplot
t1 = (1, 3, 5)
t2 = (2, 4, 12)
myplot.plot(t1, t2)
Lists or tuples can have more elements; it is important that they have the same number of elements.
5. What happens when lists or tuples for coordinates have different sizes?
If we try with lists of different lengths, for example the code:
import matplotlib.pyplot as myplot
l1 = [1, 3, 5]
l2 = [2, 4, 7, 12]
myplot.plot(l1, l2)
this will fail with error:
ValueError: x and y must have same first dimension, but have shapes (3,) and (4,)
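A simple guard (my own suggestion, not part of the original article) avoids this ValueError by checking the lengths before calling plot:

```python
# Guard against mismatched coordinate lists before plotting (illustrative sketch)
l1 = [1, 3, 5]
l2 = [2, 4, 7, 12]

if len(l1) != len(l2):
    print(f"cannot plot: {len(l1)} x-values but {len(l2)} y-values")
else:
    import matplotlib.pyplot as myplot
    myplot.plot(l1, l2)
    myplot.show()
```

With the lists above, the guard triggers and nothing is plotted.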
6. Save graphic in a file
The command myplot.show() actually displays the resulting graphic.
Using savefig(), the graphic is instead saved to a file, for example:
import matplotlib.pyplot as myplot
l1 = [1, 3, 5]
l2 = [2, 4, 12]
myplot.plot(l1, l2)
myplot.savefig(r'C:\tmp\fig1.png')
Here the savefig method is used with only one parameter, the file path.
The savefig method has many more parameters; see its documentation for details.
The above code will save the graphic in the file C:\tmp\fig1.png
Five Minute Summary: Treatment Effects in Interactive Fixed Effects Models
\(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\indicator}[1]{ \mathbf{1}\{#1\} }\)
In my recent chapter in the Handbook of Labor, Human Resources, and Population Economics, I included proofs of results from Goodman-Bacon (Journal of Econometrics, 2021) and Sun and Abraham (Journal
of Econometrics, 2021) basically with the idea of trying to write these results down in similar notation. I didn’t include the result from de Chaisemartin and d’Haultfoeuille (American Economic
Review, 2020) just due to space limitations, but we are building on that result in a couple of recent papers, and I write this sort of proof just infrequently enough that I have to figure it out over
and over. I’m going to just include the proof for the oft-considered case with staggered treatment adoption, no anticipation, and no units treated in the first period. I’m also using the same
notation I always use – if it’s confusing, check out my handbook chapter. And, just to be clear, I’m not inventing anything here, just putting down a proof of a nice result in a familiar notation for reference.
The main assumption underlying all of this is the following parallel trends assumption:
For all \(g \in \mathcal{G}\), and \(t=2,\ldots,\mathcal{T}\),
\[\E[\Delta Y_t(0) | G=g] = \E[\Delta Y_t(0)]\]
which says that the path of untreated potential outcomes is the same for all groups across all time periods.
The interest here centers on interpreting \(\alpha\) from the following regression
\[Y_{it} = \theta_t + \eta_i + \alpha D_{it} + e_{it}\]
Panel data versions of FWL-type arguments imply that we can remove the time- and unit- fixed effects by
\[\ddot{Y}_{it} = \alpha \ddot{D}_{it} + \ddot{e}_{it}\]
where the notation indicates double-demeaning each of the variables, so, for example,
\[\ddot{D}_{it} = D_{it} - \bar{D}_i - \E[D_t] + \frac{1}{\mathcal{T}} \sum_{s=1}^{\mathcal{T}} \E[D_s]\]
Now, population versions of FWL arguments imply that we can write
\[\alpha = \frac{\displaystyle \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it} Y_{it}]}{\displaystyle \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it}^2]}\]
Two properties of double-demeaned random variables are useful below:
\[\E[\ddot{D}_{it}] = 0 \qquad \textrm{and} \qquad \sum_{t=1}^{\mathcal{T}} \ddot{D}_{it} = 0\]
These are easy results to show (see, for example, my handbook chapter mentioned above for more details). Next, notice that, under staggered treatment adoption, \(\ddot{D}_{it}\) is fully determined
by a unit’s group and knowledge of \(t\). In particular, notice that,
\[D_{it} = \indicator{G_i \leq t} \qquad \textrm{and} \qquad \bar{D}_i = \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \indicator{G_i \leq t} = \frac{\mathcal{T} - G_i + 1}{\mathcal{T}}\]
Thus, define the function \(v(g,t) = \indicator{g \leq t} - \frac{\mathcal{T} - g + 1}{\mathcal{T}}\); this implies that \(D_{it} - \bar{D}_i = v(G_i,t)\). Next, define the function \(h(g,t) = v(g,t) - \displaystyle \sum_{g'\in \mathcal{G}} v(g',t) p_{g'}\), and notice that \(\E[D_t] - \displaystyle \frac{1}{\mathcal{T}} \sum_{s=1}^{\mathcal{T}} \E[D_s] = \E\big[ D_{it} - \bar{D}_i\big] = \E[v(G,t)]\). This implies that \(\ddot{D}_{it} = h(G_i,t)\), which gives us an easy way to switch between working with \(\ddot{D}_{it}\) and groups.
To show the result, most of the work will be for the numerator in the expression for \(\alpha\) above, and, in particular, notice that
\[\begin{aligned}
\frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it} Y_{it}]
&= \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it} Y_{it}] - \underbrace{\frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it} Y_{iG_i-1}]}_{=0} \\
&= \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \E[h(G_i,t) (Y_{it} - Y_{iG_i-1})] \\
&= \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \sum_{g \in \mathcal{G}} \E[h(g,t) (Y_{it} - Y_{ig-1}) | G=g] \, p_g \\
&= \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \sum_{g \in \mathcal{G}} h(g,t) \E[(Y_{it} - Y_{ig-1}) | G=g] \, p_g - \underbrace{\frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \sum_{g \in \mathcal{G}} h(g,t) \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] \, p_g}_{=0} \\
&= \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \sum_{g \in \mathcal{G}} h(g,t) \Big( \E[(Y_{it} - Y_{ig-1}) | G=g] - \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] \Big) \, p_g
\end{aligned}\]
where the first equality holds by the property that \(\displaystyle \sum_{t=1}^{\mathcal{T}} \ddot{D}_{it} = 0\), the second equality holds by the definition of \(h\) and by combining terms, the
third equality holds by the law of iterated expectations, we show that the extra term in the fourth equality is equal to 0 below, and the last equality holds by combining terms. Combining this with
the denominator in the FWL expression for \(\alpha\), we have that
\[\alpha = \sum_{t=1}^{\mathcal{T}} \sum_{g \in \mathcal{G}} \frac{h(g,t)}{\sum_{s=1}^{\mathcal{T}} \E[h(G,s)^2]} \Big( \E[(Y_{it} - Y_{ig-1}) | G=g] - \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] \Big) \, p_g\]
Note that the previous result is a decomposition in the sense that everything is computable, and \(\alpha\) will be exactly equal to the term on the right hand side (\(\hat{\alpha}\) will be equal to
the sample analogue of the term on the RHS).
It’s also interesting to separate the previous expression based on whether a particular period is a post-treatment or a pre-treatment period. In particular, just by splitting the sum above (and noticing that the inside term is equal to 0 for the never-treated group), we have that
\[\begin{aligned}
\alpha &= \sum_{g \in \bar{\mathcal{G}}} \sum_{t=g}^{\mathcal{T}} p_g \frac{h(g,t)}{\sum_{s=1}^{\mathcal{T}} \E[h(G,s)^2]} \Big( \E[(Y_{it} - Y_{ig-1}) | G=g] - \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] \Big) \\
& + \sum_{g \in \bar{\mathcal{G}}} \sum_{t=1}^{g-1} p_g \frac{h(g,t)}{\sum_{s=1}^{\mathcal{T}} \E[h(G,s)^2]} \Big( \E[(Y_{it} - Y_{ig-1}) | G=g] - \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] \Big)
\end{aligned}\]
Next, let’s impose parallel trends. In particular, under parallel trends \(\E[(Y_{it} - Y_{ig-1}) | G=g] - \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] = ATT(g,t)\) for \(t \geq g\) (i.e.,
post-treatment periods for group \(g\)), and \(\E[(Y_{it} - Y_{ig-1}) | G=g] - \E[(Y_{it} - Y_{ig-1}) | G=\mathcal{T}+1] = 0\) for \(t < g\) (i.e., pre-treatment periods for group \(g\)). Then,
\[\alpha = \sum_{g \in \bar{\mathcal{G}}} \sum_{t=g}^{\mathcal{T}} \underbrace{p_g \frac{h(g,t)}{\sum_{s=1}^{\mathcal{T}} \E[h(G,s)^2]}}_{w(g,t)} ATT(g,t)\]
where \(\bar{\mathcal{G}}\) denotes the set of all groups excluding \(G=\mathcal{T}+1\) (the never-treated group). This says that, under parallel trends, \(\alpha\) is equal to a weighted average of
group-time average treatment effects. To conclude, let’s show some interesting properties of the weights, \(w(g,t)\). Consider the numerator of the the weights,
\[\begin{aligned}
\sum_{g \in \bar{\mathcal{G}}} \sum_{t=g}^{\mathcal{T}} h(g,t) p_g
&= \sum_{g \in \bar{\mathcal{G}}} \sum_{t=1}^{\mathcal{T}} h(g,t) \indicator{g \leq t} p_g \\
&= \sum_{t=1}^{\mathcal{T}} \sum_{g \in \mathcal{G}} h(g,t) \indicator{g \leq t} \indicator{g < \mathcal{T}+1} p_g \\
&= \sum_{t=1}^{\mathcal{T}} \E[h(G,t) \indicator{G \leq t}] \\
&= \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it} D_{it}] = \sum_{t=1}^{\mathcal{T}} \E[\ddot{D}_{it}^2]
\end{aligned}\]
This implies that
\[\sum_{g \in \bar{\mathcal{G}}} \sum_{t=g}^{\mathcal{T}} w(g,t) = 1\]
or, in other words, the weights sum to 1. This is a good property for the weights to have. It is possible to discuss the weights in more detail though. I think it is fair to see the denominator in
the weights as a normalizing constant. The \(p_g\) term indicates that, at least for this component of the weights, larger groups will tend to be given more weight. The most interesting term in the
weights is \(h(g,t)\), and, for example, it is possible for \(h(g,t)\) to be negative (which would make \(w(g,t)\) negative as well). Recall that
\[h(g,t) = \indicator{g \leq t} - \frac{\mathcal{T} - g + 1}{\mathcal{T}} - \E[D_t] + \frac{1}{\mathcal{T}} \sum_{s=1}^{\mathcal{T}} \E[D_s]\]
Also, notice that, for all the group-times that get non-zero weight, \(\indicator{g \leq t} = 1\), and the last term is constant across \(g\) and \(t\). This means that the most interesting terms are the two middle ones. Group-times that get negative weights (or the smallest weights) would be ones where \(\displaystyle \frac{\mathcal{T}-g+1}{\mathcal{T}}\) is large (this would be the case for early-treated groups) and where \(\E[D_t]\) is large (this would be large for later treated periods). This discussion suggests that, in a very simple case where \(\mathcal{T}=3\) and \(\mathcal{G} = \{2,3,4\}\), the \(ATT(g,t)\) at risk of having negative weights is \(ATT(g=2,t=3)\).
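To make that last claim concrete, here is a minimal Python sketch computing \(h(g,t)\) and the weights \(w(g,t)\) for \(\mathcal{T}=3\) and \(\mathcal{G}=\{2,3,4\}\). The group shares below are my own illustrative choice (a small early-treated group and a large group treated in the last period), not numbers from the derivation; with these shares the weight on \(ATT(2,3)\) is negative even though the weights still sum to 1.

```python
from fractions import Fraction

T = 3                   # calendar periods 1..T
groups = [2, 3, 4]      # group 4 = T + 1, i.e. never treated
p = {2: Fraction(1, 10), 3: Fraction(6, 10), 4: Fraction(3, 10)}  # illustrative shares

def v(g, t):
    # v(g,t) = 1{g <= t} - (T - g + 1)/T
    return (1 if g <= t else 0) - Fraction(T - g + 1, T)

def h(g, t):
    # h(g,t) = v(g,t) - E[v(G,t)]
    return v(g, t) - sum(p[gp] * v(gp, t) for gp in groups)

denom = sum(p[g] * h(g, t) ** 2 for g in groups for t in range(1, T + 1))

# weights on the post-treatment (t >= g) group-time average treatment effects
w = {(g, t): p[g] * h(g, t) / denom
     for g in groups if g <= T
     for t in range(g, T + 1)}

print({k: float(val) for k, val in w.items()})   # w[(2, 3)] is negative here
print(float(sum(w.values())))                    # weights sum to 1
```

Exact rational arithmetic via Fraction makes it easy to verify that the weights sum to exactly 1.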
Permutations in Floral Arrangements
How many possible flower arrangements are there?
Kate has 10 different types of flowers but she wants to make a floral arrangement with only 7 of them.
The permutation formula shows that when Kate has 10 different types of flowers and wants a floral arrangement with only 7 of them, there are 604800 possible arrangements.
Permutation is a mathematical concept that deals with arranging objects or elements in a particular order. In this case, Kate has 10 different types of flowers but she only wants to use 7 of them for
her floral arrangement.
To calculate the number of possible arrangements, we use the permutation formula nPr = n! / (n - r)!, where n is the total number of objects and r is the number of objects to be arranged.
Therefore, for Kate's situation:
10P7 = 10! / (10 - 7)! = 10! / 3! = 604800
So, there are a total of 604800 possible arrangements that Kate can create using 7 out of her 10 different types of flowers.
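The same count can be checked with Python's standard library (a quick sketch; math.perm requires Python 3.8+):

```python
import math

n, r = 10, 7
# nPr = n! / (n - r)!
print(math.perm(n, r))                              # 604800
print(math.factorial(n) // math.factorial(n - r))   # same value
```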
How do you calculate Planck's constant?
1 Answer
Energy of a light photon = Planck's constant x Frequency
Planck's Constant = Energy of light photon / Frequency
Frequency = Speed of Light/ wavelength
Planck's Constant = Energy of light Photon x Wavelength of light / speed of light
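A quick numerical sketch of that last relation (the wavelength and photon energy below are illustrative values I chose, not data from the answer):

```python
c = 299_792_458       # speed of light, m/s
wavelength = 500e-9   # 500 nm light, m (illustrative)
E = 3.97e-19          # measured photon energy at that wavelength, J (illustrative)

# Planck's constant = photon energy x wavelength / speed of light
h = E * wavelength / c
print(h)   # close to the accepted 6.626e-34 J*s
```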
Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization
We propose a first order interior point algorithm for a class of non-Lipschitz and nonconvex minimization problems with box constraints, which arise from applications in variable selection and
regularized optimization. The objective functions of these problems are continuously differentiable typically at interior points of the feasible set. Our algorithm is easy to implement and the
objective function value is reduced monotonically along the iteration points. We show that the worst-case complexity for finding an $\epsilon$ scaled first order stationary point is $O(\epsilon^{-2})
$. Moreover, we develop a second order interior point algorithm using the Hessian matrix, and solve a quadratic program with ball constraint at each iteration. Although the second order interior
point algorithm costs more computational time than that of the first order algorithm in each iteration, its worst-case complexity for finding an $\epsilon$ scaled second order stationary point is
reduced to $O(\epsilon^{-3/2})$. An $\epsilon$ scaled second order stationary point is an $\epsilon$ scaled first order stationary point.
Department of Applied Mathematics, The Hong Kong Polytechnic University, July, 2012
Cincinnati, OH 2023-07-17
Ron's notes:
"Thanks so much for Bob Isaacs for feedback on the couple of alternative versions, and for cleaning up the A1.
The title: Leonard Nimoy’s last public words from Feb 23, 2015:
“A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP”
Vulcan hand signs on allemandes are optional.
Written Feb 28, 2015"
The Various Loads Used to Rate Reciprocating Compressors (Part One)
A note from Robert X. Perez:
Welcome back to Compressor University!
We are constantly asked to push our machinery a little harder. The days of underloaded, overdesigned machines have gone by way of the dinosaurs, eight-track tapes and slide rules. After numerous
process nudges, prods and rerates, we are finding some machines are operating against the proverbial wall. When we go too far, our machinery begins talking to us by failing prematurely or, in extreme
cases, failing catastrophically.
Judiciously determining the safe and reliable operating limits of process machinery is one of the most critical responsibilities of machinery professionals. Ultimately, this function requires us to
weigh process throughput against machine life or welfare. Many years of working in production environments have taught me that it is always better to operate at lower reliable rates than to operate
at higher rates that can lead to upsets and outages in order to maximize process profit. In other words, slow and steady is better than fast and reckless. Finding the operating point that satisfies
the process folks without adversely affecting machinery life is the key to profitable, carefree production.
In the next three installments of Compressor University, Atkins, Hinchliff and McCain discuss compressor load ratings that can limit processes that use reciprocating gas compressors. They will walk
you through the definitions developed to protect compressors from various types of overload conditions. By the end of their articles, the reader should understand the history of rod loads and how
they are computed. Remember that it is ultimately the user/owner's responsibility to select the proper loading limit criteria for his situation. Those who work with reciprocating compressors on a
regular basis should keep all three parts of this article for future reference.
Reciprocating compressors are usually rated in terms of horsepower, speed and rod load. Horsepower and speed are easily understood; however, the term rod load is interpreted differently by various
users, analysts, OEMs, etc. Rod load is one of the most widely used, but least understood, reciprocating compressor descriptors in industry. Typical end users know that rod load is a factor used to
"rate" a compressor, but they do not generally have a good understanding of how this rating is developed and how to utilize it for machinery protection.
These three articles discuss the various definitions of rod load, including historical and current API-618 definitions, manufacturer's ratings and various user interpretations. It also explains that
there are load limits based on the running gear (moving parts such as pistons, rods, crosshead, crankthrow, etc.) as well as load limits based on the stationary components (frame, crosshead guide,
The basic kinematics and forces acting on a slider-crank mechanism will be reviewed to provide a better understanding of the various definitions used. Analytical results and field rod load
measurements will be compared to illustrate the various factors that influence rod load on typical compressor installations.
Basic Theory
Consider the typical double-acting compressor cylinder geometry illustrated in Figure 1. The loads (forces) that are generally of concern include the piston rod loads, the connecting rod loads, the crosshead pin loads, the crankpin loads, and the frame loads. As the crankshaft undergoes one revolution, all of these loads vary from minimum to maximum values. The loads are generated by both gas and inertia forces.
Gas Loads
As the compressor piston moves to compress gas, the differential pressures acting on the piston and stationary components result in gas forces illustrated in Figure 2. An ideal pressure versus time
diagram for a typical double acting compressor cylinder is shown in Figure 3. The pressures acting on the piston faces (head end and crank end) result in forces on the piston rod. The force acting on
the piston rod due to the cylinder pressures alone alternates from tension to compression during the course of each crankshaft revolution.
It is straightforward to compute the net force on the piston rod due to pressure. A plot of this force versus crank angle for the ideal P-T diagram is shown in Figure 4. The forces due to pressure
also act (equal and opposite) on the stationary components.
The maximum compression force due to pressure occurs when the head end is at discharge pressure, and the maximum tensile force due to pressure occurs when the crank end is at discharge pressure. Therefore, the equation in Figure 2 is often evaluated at the extremes in Equations 1 and 2:

Maximum compression load = P[Discharge] × A[HE] − P[Suction] × A[CE]      (1)

Maximum tension load = P[Discharge] × A[CE] − P[Suction] × A[HE]      (2)

where A[HE] is the head-end piston area and A[CE] is the crank-end area (the head-end area minus the rod cross-sectional area).
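As a hedged numerical sketch of these gas-load extremes, the bore, rod diameter, and pressures below are assumed illustration values, not figures from the article:

```python
import math

bore = 0.20     # m, piston diameter (assumed)
rod_d = 0.05    # m, piston rod diameter (assumed)
P_s = 2.0e6     # Pa, suction pressure (assumed)
P_d = 5.0e6     # Pa, discharge pressure (assumed)

A_he = math.pi * bore ** 2 / 4           # head-end piston area
A_ce = A_he - math.pi * rod_d ** 2 / 4   # crank-end area (rod subtracts)

# Maximum compression: head end at discharge, crank end at suction
F_comp = P_d * A_he - P_s * A_ce
# Maximum tension: crank end at discharge, head end at suction
F_tens = P_d * A_ce - P_s * A_he
print(F_comp, F_tens)   # newtons
```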
Consider a more realistic pressure versus time diagram as shown in Figure 5. Line pressure refers to the pressure at the line side of the pulsation bottle (suction or discharge). Flange pressure
refers to the pressure at the cylinder flange.
As shown, the in-cylinder discharge pressure exceeds the nominal discharge line pressure, and the in-cylinder suction pressure is less than the nominal suction line pressure due to several effects:
1. Pressure drop due to valve and cylinder passage losses (typically 2 to 10 percent)
2. Pressure drop due to pulsation control devices (typically <1 percent)
3. Pulsation at cylinder valves (typically <7 percent)
4. Valve dynamics (inertia, stiction, flutter, etc.)
API-618 specifies that the internal pressures must be computed, but does not define any calculation procedure. There are several methods for accounting for the non-ideal effects. One common method is
to model the valve as an orifice and then the pressure drop though the valve (valve loss) is proportional to the square of the piston velocity (flow). This is illustrated in Figure 5. Theoretically,
it would be more accurate to use the results of the valve dynamics analysis coupled with the digital pulsation simulation to model the instantaneous pressure at the valves. This is not practical to
do until all of the piping and valve details are known. In any case, the difference should be small provided the losses are within the typical values listed above.
Because of these effects, the forces due to differential pressures are higher on both the running gear and the stationary components than those calculated based on nominal line pressures. However,
Equations 1 and 2 are still applicable as long as the appropriate pressures (discharge pressure higher than nominal discharge pressure, suction pressure lower than nominal line pressure) are used. If
the nominal pressures at the suction and discharge cylinder flanges are used for P[Suction] and P[Discharge], then these tension and compression forces represent the term flange loads as interpreted
by some users. Equations 1 and 2 are easy to evaluate and for many years were the basis for rating "rod loads" of reciprocating compressors.
Of course, for the general non-ideal compressor cylinder, the maximum discharge pressure on the head-end will not necessarily occur at the same instant that the minimum suction pressure occurs on the
crank-end and vice versa. Therefore, it is common to evaluate the gas forces versus crank angle at discrete steps (e.g. every 5 or 10 degrees). The history of these types of calculations is discussed
below, but computing the instantaneous force due to differential gas pressures is easily accomplished with computer-based software. If the actual in-cylinder pressures are used and the extremes are
evaluated, these forces are then the gas loads referred to in the API specifications.
Piston Rod Loads
The basic slider-crank mechanism is illustrated in Figure 6. The exact equation for the position of the crosshead with respect to the x-direction shown is Equation 3:

x = r(1 − cos θ) + L[1 − √(1 − (r/L)² sin² θ)]      (3)

where r is the crank radius, L is the connecting rod length, and θ is the crank angle. The piston (crosshead) motion is usually approximated using the first two harmonics of the Taylor series in Equation 4:

x ≈ r[(1 − cos θ) + (r/4L)(1 − cos 2θ)]      (4)
The piston rod loads can be evaluated by considering the free body diagram in Figure 7. The forces acting on the piston rod are the gas forces due to differential pressures acting on head end and
crank end piston areas plus the inertia forces due to the reciprocating mass.
If the reference point is chosen as the crosshead end of the piston rod, then the reciprocating weight will include the piston rod and the piston assembly (piston, rings, rider bands, etc.). The reciprocating inertial force (F = ma) can be computed using Equation 5:

F[inertia] = m[recip] · r ω² [cos θ + (r/L) cos 2θ]      (5)

where m[recip] is the reciprocating mass and ω is the crankshaft angular speed.
The combined piston rod load is the sum of the gas force and the inertial force. In accordance with API-618, this value is routinely calculated in the design stage and used along with the rod area at
the minimum cross-section to compute tensile and compressive stresses in the piston rods. The stress in the piston rod is one factor to consider in the design, and in some cases it may be the
limiting factor or the "weakest link in the chain." However, this load is not the rod load to which API-618 refers.
Crosshead Pin Loads
The free body diagram for the system including the crosshead pin is shown in Figure 8. Here the mass of the crosshead assembly (crosshead, balance weights, crosshead shoes, etc.) must be considered,
but the same equations apply. The combination of the gas loads and inertia loads evaluated at the crosshead pin in the direction of piston motion are the "combined rod loads" to which API-618 refers.
This load does not consider side forces on the crosshead or the 1/3 of the connecting rod weight that is usually considered to be reciprocating. Thus, rod load by API definition is not really a rod
load, but actually a pin load.
Crankpin Loads
If the loads and torques throughout the system are evaluated, then the rotating and reciprocating inertias as well as the side forces are included. Equations are applied for computing x and y
components of crankpin and wrist pin loads, crank throw torques, main bearing loads, etc. The typical output of the computer program used to evaluate these loads is shown in Figure 9. All of these
loads are typically considered in the design stage. Different OEMs evaluate the loads per their own experience.
API guidelines are discussed in Part Two, which will be featured next month.
Originally presented at the 2005 Gas Machinery Conference in Covington, KY, October 2-5, 2005
Neural Networks Algorithms
This chapter presents the architectures and algorithms used to train neural networks. It explains the model of a neuron and the structures of neural networks, including single-layer feedforward networks, multilayer feedforward networks, recurrent networks, and radial basis function networks. The sections below explain the training of artificial neural networks, covering both supervised and unsupervised learning. This chapter also discusses some advanced neural network learning methods and problems addressed using neural networks.
2.2 Models of a Neuron
A neuron is an information-processing unit that is fundamental to the operation of a neural network. Figure 2.1 shows the model for a neuron. We may identify three basic elements of the neuron model,
as described here:
1. A set of synapses or connecting links, each of which is characterized by a weight or strength of its own. Specifically, a signal xj at the input of synapse j connected to neuron k is multiplied by the synaptic weight wkj. It is important to make a note of the manner in which the subscripts of the synaptic weight wkj are written. The first subscript refers to the neuron in question and the second subscript refers to the input end of the synapse to which the weight refers; the reverse of this notation is also used in the literature. The weight wkj is positive if the associated synapse is excitatory; it is negative if the synapse is inhibitory.
2. An adder for summing the input signals, weighted by the respective synapses of the neuron; the operations described here constitute a linear combiner.
3. An activation function for limiting the amplitude of the output of a neuron. The activation function is also referred to in the literature as a squashing function in that it squashes (limits) the
permissible amplitude range of the output signal to some finite value. Typically, the normalized amplitude range of the output of a neuron is written as the closed unit interval [0, 1] or
alternatively [-1, 1].
The model of a neuron shown in Fig. 2.1 also includes an externally applied threshold θk that has the effect of lowering the net input of the activation function. On the other hand, the net input of the activation function may be increased by employing a bias term rather than a threshold; the bias is the negative of the threshold.
Figure 2.1 Nonlinear model of a neuron.
In mathematical terms, we may describe a neuron k by writing the following pair of equations:

uk = wk1x1 + wk2x2 + … + wkpxp        (2.1)

yk = φ(uk − θk)        (2.2)

where x1, x2, …, xp are the input signals; wk1, wk2, …, wkp are the synaptic weights of neuron k; uk is the linear combiner output; θk is the threshold; φ(·) is the activation function; and yk is the output signal of the neuron. The use of the threshold θk has the effect of applying an affine transformation to the output uk of the linear combiner in the model of Fig. 2.1, as shown by

vk = uk − θk        (2.3)

In particular, depending on whether the threshold θk is positive or negative, the relationship between the effective internal activity level or activation potential vk of neuron k and the linear combiner output uk is modified in the manner illustrated in Fig. 2.2. Note that as a result of this affine transformation, the graph of vk versus uk no longer passes through the origin.
Figure 2.2. Affine transformation produced by the presence of a threshold.
The threshold θk is an external parameter of artificial neuron k. We may account for its presence as in Eq. (2.2). Equivalently, we may formulate the combination of Eqs. (2.1) and (2.2) as follows:

vk = Σ (j = 0 to p) wkj xj        (2.4)

yk = φ(vk)        (2.5)

In Eq. (2.4) we have added a new synapse, whose input is

x0 = −1

and whose weight is

wk0 = θk

We may therefore reformulate the model of neuron k as in Fig. 2.3a. In this figure, the effect of the threshold is represented by doing two things: (1) adding a new input signal fixed at −1, and (2) adding a new synaptic weight equal to the threshold θk. Alternatively, we may model the neuron as in Fig. 2.3b,
Figure 2.3. Two other nonlinear models of a neuron.
Where the combination of fixed input x0 = +1 and weight wk0 = bk accounts for the bias bk. Although the models in Fig. 2.1 and 2.3 are different in appearance, they are mathematically equivalent.
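As an illustrative sketch (not part of the original text), the neuron model just described can be written in a few lines of Python. The logistic sigmoid used here as φ(·) is one common choice of squashing function:

```python
import math

def neuron_output(x, w, theta, phi=lambda v: 1.0 / (1.0 + math.exp(-v))):
    """y_k = phi(u_k - theta_k): weighted sum of inputs, then activation.

    x: input signals x_1..x_p; w: synaptic weights w_k1..w_kp;
    theta: threshold; phi defaults to the logistic (sigmoid) function.
    """
    u = sum(wj * xj for wj, xj in zip(w, x))  # linear combiner output u_k
    return phi(u - theta)                     # activation limits the amplitude

def neuron_output_threshold_as_weight(x, w, theta):
    """Equivalent form: a fixed input x_0 = -1 carries the threshold as w_k0."""
    return neuron_output([-1.0] + list(x), [theta] + list(w), 0.0)

out_a = neuron_output([0.5, -0.2], [1.0, 2.0], 0.1)
out_b = neuron_output_threshold_as_weight([0.5, -0.2], [1.0, 2.0], 0.1)
assert abs(out_a - out_b) < 1e-12  # the two formulations agree
```

Because the two formulations are algebraically identical, either can be used when implementing the models of Figs. 2.1 and 2.3.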
2.3 Neural Network Structures
The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network. We may therefore speak of learning algorithms (rules)
used in the design of neural networks as being structured.
In general, we may identify four different classes of network architectures:
2.3.1 Single-Layer Feedforward Networks
A layered neural network is a network of neurons organized in the form of layers. In the simplest form of a layered network, we just have an input layer of source nodes that projects onto an output
layer of neurons (computation nodes), but not vice versa. In other words, this network is strictly of a feedforward type. It is illustrated in Fig. 2.4 for the case of four nodes in both the input
and output layers. Such a network is called a single-layer network, with the designation "single layer" referring to the output layer of computation nodes (neurons). In other words, we do not count
the input layer of source nodes, because no computation is performed there.
Figure 2.4. Feedforward network with a single layer of neurons
The perceptron can be trained by adjusting the weights of the inputs with supervised learning. In this learning technique, the patterns to be recognised are known in advance, and a training set of input values is already classified with the desired output. Before commencing, the weights are initialised with random values. Each training example is then presented to the perceptron in turn. For every input set, the output from the perceptron is compared to the desired output. If the output is correct, no weights are altered. However, if the output is wrong, we have to determine which of the patterns we would like the result to be, and adjust the weights on the currently active inputs towards the desired result.
Perceptron Convergence Theorem:
The perceptron algorithm finds a linear discriminant function in a finite number of iterations if the training set is linearly separable. [Rosenblatt 1962] [2].
The learning algorithm for the perceptron can be improved in several ways to increase efficiency, but the algorithm lacks usefulness as long as it is only possible to classify linearly separable patterns.
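The training procedure described above can be sketched as follows. This is an illustrative implementation of the classical perceptron learning rule; the learning rate, epoch limit, and the AND example are choices made here, not taken from the text:

```python
def train_perceptron(examples, lr=0.1, epochs=100):
    """examples: list of (inputs, target) pairs with target in {0, 1}."""
    p = len(examples[0][0])
    w = [0.0] * p      # synaptic weights (random initialisation is also common)
    theta = 0.0        # threshold, adjusted like a weight on a fixed input -1
    for _ in range(epochs):
        errors = 0
        for x, target in examples:
            y = 1 if sum(wj * xj for wj, xj in zip(w, x)) - theta > 0 else 0
            if y != target:
                errors += 1
                delta = lr * (target - y)
                # adjust weights on the currently active inputs
                w = [wj + delta * xj for wj, xj in zip(w, x)]
                theta -= delta  # threshold moves opposite to the weights
        if errors == 0:  # convergence, guaranteed for linearly separable data
            break
    return w, theta

# Logical AND is linearly separable, so training converges.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, theta = train_perceptron(and_data)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) - theta > 0 else 0
         for x, _ in and_data]
assert preds == [t for _, t in and_data]
```

A non-separable problem such as XOR would exhaust the epoch limit without converging, which is exactly the limitation noted above.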
2.3.2 Multilayer Feedforward Networks
The second class of a feedforward neural network distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units.
The function of the hidden neurons is to intervene between the external input and the network output. By adding one or more hidden layers, the network acquires a global perspective despite its local
connectivity by virtue of the extra set of synaptic connections and the extra dimension of neural interactions (Churchland and Sejnowski, 1992) [10]. The ability of hidden neurons to extract
higher-order statistics is particularly valuable when the size of the input layer is large.
The source nodes in the input layer of the network supply respectively elements of the activation pattern (input vector), which constitute the input signals applied to the neurons (computation nodes)
in the second layer (i.e., the first hidden layer). The output signals of the second layer are used as inputs to the third layer, and so on for the rest of the network. Typically, the neurons in each
layer of the network have as their inputs the output signals of the preceding layer only. The set of output signals of the neurons in the output (final) layer of the network constitutes the overall
response of the network to the activation pattern supplied by the source nodes in the input (first) layer. The architectural graph of Fig. 2.5 illustrates the layout of a multilayer feedforward
neural network for the case of a single hidden layer. For brevity the network of Fig. 2.5 is referred to as a 4-4-2 network in that it has 4 source nodes, 4 hidden nodes, and 2 output nodes. As
another example, a feedforward network with p source nodes, h1 neurons in the first hidden layer, h2 neurons in the second hidden layer, and q neurons in the output layer, say, is referred to as a p-h1-h2-q network.
Figure 2.5. Fully connected feedforward network with one hidden layer.
The neural network of Fig 2.5 is said to be fully connected in the sense that every node in each layer of the network is connected to every other node in the adjacent forward layer. If, however, some
of the communication links (synaptic connections) are missing from the network, we say that the network is partially connected. A form of partially connected multilayer feedforward network of
particular interest is a locally connected network. An example of such a network with a single hidden layer is presented in Fig. 2.6. Each neuron in the hidden layer is connected to a local (partial)
set of source nodes that lies in its immediate neighborhood; such a set of localized nodes feeding a neuron is said to constitute the receptive field of the neuron. Likewise, each neuron in the
output layer is connected to a local set of hidden neurons. The network of Fig. 2.6 has the same number of source nodes, hidden nodes, and output nodes as that of Fig. 2.5. However, comparing these two networks, we see that the locally connected network of Fig. 2.6 has a specialized structure.
Figure 2.6. Partially connected feedforward network.
The threshold function of the units is modified to be a function with a continuous derivative, the sigmoid function. The use of the sigmoid function gives the extra information necessary for the network to implement the back-propagation training algorithm. Back-propagation works by finding the squared error (the error function) of the entire network, and then calculating the error term for each of the output and hidden units by using the output from the previous neuron layer. The weights of the entire network are then adjusted in dependence on the error term and the given learning rate. Training continues on the training set until the error function reaches a certain minimum. If the minimum is set too high, the network might not be able to correctly classify a pattern. But if the minimum is set too low, the network will have difficulties in classifying noisy patterns.
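A minimal sketch of the ideas above, assuming the logistic sigmoid as the threshold function: its derivative is continuous, σ′(v) = σ(v)(1 − σ(v)), which is what makes gradient-based weight adjustment possible. The example trains a single output unit on the squared error; a full multilayer back-propagation pass repeats the same error-term calculation layer by layer, which is not shown here:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def sigmoid_deriv(v):
    s = sigmoid(v)
    return s * (1.0 - s)  # the continuous derivative back-propagation relies on

def backprop_step(x, w, target, lr=0.5):
    """One gradient step for a single sigmoid output unit on the squared
    error E = 0.5 * (target - y)^2."""
    v = sum(wj * xj for wj, xj in zip(w, x))
    y = sigmoid(v)
    delta = (target - y) * sigmoid_deriv(v)  # error term for the output unit
    return [wj + lr * delta * xj for wj, xj in zip(w, x)]

w = [0.2, -0.1]
x, target = [1.0, 0.5], 1.0
for _ in range(500):
    w = backprop_step(x, w, target)
y = sigmoid(sum(wj * xj for wj, xj in zip(w, x)))
assert abs(target - y) < 0.1  # the squared error shrinks toward its minimum
```

Note how the updates become small as y approaches the target: the derivative term σ(v)(1 − σ(v)) vanishes at the extremes, which is one reason a stopping minimum on the error function is used in practice.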
2.3.3 Recurrent Networks
A recurrent neural network distinguishes itself from a feedforward neural network in that it has at least one feedback loop. For example, a recurrent network may consist of a single layer of
neurons with each neuron feeding its output signal back to the inputs of all the other neurons, as illustrated in the architecture graph of Fig. 2.7. In the structure depicted in this figure there
are no self-feedback loops in the network; self-feedback refers to a situation where the output of a neuron is fed back to its own input. The presence of feedback loops has a profound impact on the
learning capability of the network, and on its performance. Moreover, the feedback loops involve the use of particular branches composed of unit-delay elements (denoted by z-1), which result in a
nonlinear dynamical behavior by virtue of the nonlinear nature of the neurons. Nonlinear dynamics plays a key role in the storage function of a recurrent network.
Figure 2.7. Recurrent network with hidden neurons.
2.3.4 Radial Basis Function Networks
The radial basis function (RBF) network constitutes another way of implementing arbitrary input/output mappings. The most significant difference between the MLP and the RBF network lies in the processing element nonlinearity. While the processing element in the MLP responds to the full input space, the processing element in the RBF network is local, normally a Gaussian kernel in the input space. Hence, it only responds to inputs that are close to its center; i.e., it has basically a local response.
Figure 2.8. Radial Basis Function (RBF) network.
The RBF network is also a layered net with the hidden layer built from Gaussian kernels and a linear (or nonlinear) output layer (Fig. 2.8). Training of the RBF network is done normally in two stages [Haykin, 1994] [11]:
First, the centers xi are adaptively placed in the input space using competitive learning or k-means clustering [Bishop, 1995] [12], which are unsupervised procedures. Competitive learning is explained later in the chapter. The variances of each Gaussian are chosen as a percentage (30 to 50%) of the distance to the nearest center. The goal is to cover adequately the input data distribution. Once the RBF centers are located, the second-layer weights wi are trained using the LMS procedure.
RBF networks are easy to work with, they train very fast, and they have shown good properties both for function approximation and classification. The problem is that they require lots of Gaussian kernels in high-dimensional spaces.
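The forward pass described above — Gaussian kernels in the hidden layer feeding a linear output layer — can be sketched as follows. The centers, widths and weights here are invented for illustration; in practice they come from clustering and LMS training as noted:

```python
import math

def rbf_output(x, centers, sigmas, weights):
    """Forward pass of a single-output RBF network.

    Each hidden unit responds only near its center, via the Gaussian kernel
    exp(-||x - c||^2 / (2 sigma^2)); the output layer is a linear combiner.
    """
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
        for c, s in zip(centers, sigmas)
    ]
    return sum(w * a for w, a in zip(weights, activations))

centers = [[0.0, 0.0], [1.0, 1.0]]
sigmas = [0.5, 0.5]
weights = [1.0, -1.0]
near_first = rbf_output([0.0, 0.0], centers, sigmas, weights)
far_away = rbf_output([5.0, 5.0], centers, sigmas, weights)
assert near_first > 0.9      # dominated by the nearby kernel
assert abs(far_away) < 1e-6  # local response: distant inputs barely activate
```

The local response is both the strength and the weakness noted above: coverage of a high-dimensional input space requires many such kernels.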
2.4 Training an Artificial Neural Network
Once a network has been structured for a particular application, that network is ready to be trained. To start this process the initial weights are chosen randomly. Then the training, or learning, begins.
There are two approaches to training - supervised and unsupervised. Supervised training involves a mechanism of providing the network with the desired output, either by manually "grading" the network's performance or by providing the desired outputs with the inputs. Unsupervised training is where the network has to make sense of the inputs without outside help.
The vast bulk of networks utilize supervised training. Unsupervised training is used to perform some initial characterization on inputs. However, in the full-blown sense of being truly self-learning, it is still just a shining promise that is not fully understood, does not completely work, and thus is relegated to the lab.
2.4.1 Supervised Training
In supervised training, both the inputs and the outputs are provided. The network then processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights which control the network.
This process occurs over and over as the weights are continually tweaked. The set of data which enables the training is called the "training set." During the training of a network the same set of data is processed many times as the connection weights are ever refined.
The current commercial network development packages provide tools to monitor how well an artificial neural network is converging on the ability to predict the right answer. These tools allow the training process to go on for days, stopping only when the system reaches some statistically desired point, or accuracy. However, some networks never learn. This could be because the input data does not contain the specific information from which the desired output is derived. Networks also don't converge if there is not enough data to enable complete learning. Ideally, there should be enough data so that part of the data can be held back as a test. Many layered networks with multiple nodes are capable of memorizing data. To monitor the network to determine if the system is simply memorizing its data in some nonsignificant way, supervised training needs to hold back a set of data to be used to test the system after it has undergone its training. (Note: memorization is avoided by not having too many processing elements.)
If a network simply can't solve the problem, the designer then has to review the inputs and outputs, the number of layers, the number of elements per layer, the connections between the layers, the summation, transfer, and training functions, and even the initial weights themselves. Those changes required to create a successful network constitute a process wherein the "art" of neural networking occurs.
Another part of the designer's creativity governs the rules of training. There are many laws (algorithms) used to implement the adaptive feedback required to adjust the weights during training. The most common technique is backward-error propagation, more commonly known as back-propagation. These various learning techniques are explored in greater depth later in this report.
Yet, training is not just a technique. It involves a "feel," and conscious analysis, to ensure that the network is not overtrained. Initially, an artificial neural network configures itself with the general statistical trends of the data. Later, it continues to "learn" about other aspects of the data which may be spurious from a general viewpoint.
When finally the system has been correctly trained, and no further learning is needed, the weights can, if desired, be "frozen." In some systems this finalized network is then turned into hardware so that it can be fast. Other systems don't lock themselves in but continue to learn while in production use.
2.4.2 Unsupervised Training
The other type of training is called unsupervised training. In unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaption.
At the present time, unsupervised learning is not well understood. This adaption to the environment is the promise which would enable science-fiction types of robots to continually learn on their own as they encounter new situations and new environments. Life is filled with situations where exact training sets do not exist. Some of these situations involve military action, where new combat techniques and new weapons might be encountered. Because of this unexpected aspect to life and the human desire to be prepared, there continues to be research into, and hope for, this field. Yet, at the present time, the vast bulk of neural network work is in systems with supervised learning. Supervised learning is achieving results.
One of the leading researchers into unsupervised learning is Teuvo Kohonen [13], an electrical engineer at the Helsinki University of Technology. He has developed a self-organizing network, sometimes called an autoassociator, that learns without the benefit of knowing the right answer. It is an unusual-looking network in that it contains one single layer with many connections. The weights for those
connections have to be initialized and theinputs have to be normalized. The neurons are set up to compete in awinner-take-all fashion. | {"url":"https://docest.com/doc/23499/neural-networks-algorithms","timestamp":"2024-11-13T20:59:18Z","content_type":"text/html","content_length":"40905","record_id":"<urn:uuid:cfe92942-28cb-4838-90df-d12d411ea9bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00235.warc.gz"} |
2 Digit Addition With And Without Regrouping Worksheets
Let’s make sure we understand both methods.
Two-digit addition without regrouping: This is when you add two numbers together and don’t need to carry any value over to the next column.
For example, let’s say you have the problem:
  25
+ 34
Here’s how you solve it:
a. First, you add the numbers in the ones place (the right column). That’s 5 (from 25) plus 4 (from 34). What does that equal? It’s 9!
b. Then you add the numbers in the tens place (the left column). That’s 2 (from 25) plus 3 (from 34). What does that equal? It’s 5!
c. So the answer is 59!
Two-digit addition with regrouping: This is when you add two numbers together and the sum is 10 or more, so you have to carry a value over to the next column.
For example, let’s say you have the problem:
  27
+ 48
Here’s how you solve it:
a. First, you add the numbers in the ones place (the right column). That’s 7 (from 27) plus 8 (from 48). What does that equal? It’s 15!
b. 15 is a two-digit number, so we have to regroup. We write down the 5 in the ones place, and carry the 1 (the tens place of 15) over to the tens column.
c. Then, you add the numbers in the tens place, including the carried number. That’s 1 (carried) plus 2 (from 27) plus 4 (from 48). What does that equal? It’s 7!
d. So the answer is 75!
For the worksheet, you just follow these steps for each problem. Whether you need to regroup or not depends on whether the numbers you're adding together in the ones or tens place make a number 10 or bigger. I hope this helps!
Printable 2 Digit Addition With Regrouping Worksheet
Answer Key
Printable 2 Digit Addition Without Regrouping Worksheet
Answer Key | {"url":"https://www.worksheetsgo.com/2-digit-addition-with-and-without-regrouping-worksheets/","timestamp":"2024-11-02T08:15:54Z","content_type":"text/html","content_length":"123151","record_id":"<urn:uuid:11576206-17d0-4a31-a17e-305478a2e18f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00175.warc.gz"} |
Polynomial Inference
Univariate Polynomial Inference by Monte Carlo Message Length Approximation
Leigh J. Fitzgibbon, David L. Dowe, & Lloyd Allison, Nineteenth International Conference on Machine Learning (ICML-2002), Sydney, Australia, 8-12 July 2002
Abstract: We apply the Message from Monte Carlo (MMC) algorithm to inference of univariate polynomials. MMC is an algorithm for point estimation from a Bayesian posterior sample. It partitions the
posterior sample into sets of regions that contain similar models. Each region has an associated message length (given by Dowe's MMLD approximation) and a point estimate that is representative of
models in the region. The regions and point estimates are chosen so that the Kullback-Leibler distance between models in the region and the associated point estimate is small (using Wallace's FSMML
Boundary Rule). We compare the MMC algorithm's point estimation performance with Minimum Message Length [12] and Structural Risk Minimisation on a set of ten polynomial and non-polynomial functions
with Gaussian noise. The orthonormal polynomial parameters are sampled using reversible jump Markov chain Monte Carlo methods. | {"url":"https://allisons.org/ll/Publications/2002-ICML/","timestamp":"2024-11-13T21:34:24Z","content_type":"text/html","content_length":"3079","record_id":"<urn:uuid:6f9f1c21-0b09-4e63-b838-8ec3858063ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00490.warc.gz"} |
Methodology document: Usual intakes from food for energy, nutrients and other dietary components (2004 and 2015 Canadian Community Health Survey - Nutrition)
January 2021
Table of Contents
Health Canada would like to acknowledge and thank the individuals who have contributed to this work. The production of these intake estimates was a joint venture between Health Canada and Statistics
Canada. Subject-matter experts from the Bureau of Food Surveillance and Science Integration, Food Directorate, and the Office of Nutrition Policy and Promotion at Health Canada and from the Centre
for Population Health Data and Health Analysis Division at Statistics Canada produced the usual intake data table and this methodology document.
List of abbreviations
AI: Adequate Intake
AMDR: Acceptable Macronutrient Distribution Range
CCHS: Canadian Community Health Survey
CDRR: Chronic Disease Risk Reduction Intake
CV: coefficient of variation
DRI: Dietary Reference Intake
EAR: Estimated Average Requirement
IOM: Institute of Medicine
n: sample size
NCI: National Cancer Institute
OC: oral contraceptive
SD: standard deviation
SE: standard error
SIDE: Software for Intake Distribution Estimation
UL: Tolerable Upper Intake Level
1.0 Introduction
Health Canada's Usual intakes from food for energy, nutrients and other dietary components is published on the Government of Canada's Open Government portal. These intake estimates were generated
using data collected from Canadians in the 2004 and 2015 Canadian Community Health Survey (CCHS)-Nutrition as a joint venture with Statistics Canada. To optimize the usage of the data, it is
recommended that users refer to the Table Footnotes (Appendix A) and also read The Reference Guide to Understanding and Using the Data - 2015 Canadian Community Health Survey - Nutrition^Footnote 1
published by Health Canada in June 2017. This reference guide includes an overview of the 2015 CCHS-Nutrition, including descriptions of the survey sample, how the survey was conducted and survey
components. Further, the reference guide introduces the Dietary Reference Intakes (DRI), the nutrient reference standards used to assess diets by age-sex groups.
This methodology document is a reference for those who will use the 2004 and the 2015 CCHS-Nutrition usual intake data to guide nutrition‐related program and policy decisions. It will be of
particular benefit to provincial ministries of health, researchers and graduate students, policy makers and analysts, public health professionals, epidemiologists, dietitians, the food industry, and
the health media.
The summary data table presents the distribution of usual intakes of 41 dietary components as described in Appendix B. Data are provided for 16 age-sex groups at the national, regional and provincial
levels. Data used for producing the estimates were obtained from the 2004 and 2015 CCHS-Nutrition Share Files. The nutrient intakes represent food consumption only. A methodology for combining data
on nutrient intakes derived from food with data on vitamin and mineral supplements is being explored by Health Canada. Because supplements may make meaningful contributions to nutrient intakes,
inferences about the prevalence of nutrient excess or inadequacy based on intakes from food alone may respectively underestimate or overestimate the prevalence based on total nutrient intakes from
both food and supplements.
Results are presented for 13 geographical areas: Canada excluding the territories, the 10 provinces, the Atlantic Region and the Prairie Region. Data from the four Atlantic Provinces and the three
Prairie Provinces were combined into the Atlantic Region and the Prairie Region, respectively.
Recognizing that the smoking of tobacco affects vitamin C requirements, estimates are provided for the intake of vitamin C by smoking status.
The next section describes the methodology used to produce the intake estimates and how we addressed computational problems that were encountered. The guide does not provide any interpretation or
draw conclusions. Readers are encouraged to consult The Reference Guide to Understanding and Using the Data - 2015 Canadian Community Health Survey - Nutrition^Footnote 1, for examples of how to
interpret the 2015 CCHS-Nutrition data.
2.0 The 2015 CCHS-Nutrition: Estimation of population usual intake distribution
2.1 Introduction
One of the goals of the 2015 CCHS-Nutrition was to estimate distributions of usual intake from food for energy, several nutrients and other food components at the national, provincial and regional
levels for 16 DRI age-sex groups. To accomplish this, data from two dietary recalls were collected concerning the amount and types of foods consumed in the 24 hours preceding the interview: one
recall for all respondents and a second recall only from a representative subsample of the group. Using data from the first dietary recall produces a measure of daily intake (i.e. the quantity of
nutrients or food eaten in a given day). Data from both the first and second recalls can be used to produce a population-level estimate of usual intake (i.e. the long-term average of the daily
The variability in intakes among a group on a given day reflects both variability in intake within specific individuals (who may have eaten more or less than usual on that day) as well as between
different individuals (who habitually have higher or lower intakes). To obtain an estimate of a population's usual intake distribution from daily intake data, one must fit a measurement error model
that reduces the effect of the within‐individual variance while measuring the between‐individual variance. Several methods are available to estimate a population's usual intake distributions from
daily intake data. Following the release of 2015 CCHS-Nutrition, Statistics Canada recommended the use of the National Cancer Institute (NCI) method for estimation of usual intakes.
Three main types of estimates can be obtained from usual intake distributions including: (i) the mean usual intake; (ii) the percentage of the population having a usual intake under (or over) a given
threshold (cut‐off); and (iii) percentiles of the distribution.
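As an illustrative sketch (using simulated values, not CCHS data), the three types of estimates can be computed from a sample drawn from an estimated usual intake distribution. The lognormal distribution and the cut-off value below are invented for the example:

```python
import numpy as np

# Hypothetical usual intakes (mg/day) for one age-sex group; in practice
# these values come from the fitted usual intake distribution.
rng = np.random.default_rng(7)
usual_intakes = rng.lognormal(mean=4.5, sigma=0.3, size=10_000)

cutoff = 75.0  # illustrative threshold; not a real EAR or UL

mean_intake = usual_intakes.mean()                 # (i) mean usual intake
pct_below = 100 * (usual_intakes < cutoff).mean()  # (ii) % below a cut-off
p25, p50, p75 = np.percentile(usual_intakes, [25, 50, 75])  # (iii) percentiles

print(f"mean={mean_intake:.1f} mg, %<cutoff={pct_below:.1f}, "
      f"P25={p25:.1f}, median={p50:.1f}, P75={p75:.1f}")
```

Note that these summaries only have the intended interpretation when applied to the usual intake distribution, not to single-day intakes, which is why the measurement error modelling described below matters.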
The goal of this document is to summarize the:
• Rationale for providing updated estimates for the 2004 CCHS-Nutrition
• Methodology used to estimate usual intake distributions for the 2015 CCHS-Nutrition
• Estimation of iron inadequacy using the full probability method
• Estimation of usual intake for caffeine
• Guidance for comparing intake estimates between cycles 2004 and 2015
• Table footnotes
2.2. Rationale for updating estimates for 2004
Since the release of the data from 2004 CCHS-Nutrition, novel statistical methods to estimate usual intakes from self-report dietary assessments have become available. In 2016, a joint technical
working group comprised of statisticians from Health Canada and Statistics Canada evaluated existing statistical methods for estimating the usual dietary intake and recommended the use of the NCI
method^Footnote 2 ^,^Footnote 3 for analysis of the 2015 CCHS-Nutrition data. This was a shift from 2004 CCHS-Nutrition (Cycle 2.2), where usual intake estimates were calculated using the Iowa State
University (ISU) method^Footnote 4 which uses the Software for Intake Distribution Estimation (SIDE). Usual intake estimates for the 2004 CCHS-Nutrition have been recalculated using the NCI method in
order to facilitate comparisons. The summary data table available on Canada's Open Government Portal presents estimates for both years (2004 and 2015) of the survey. Users are cautioned against
comparing the 2015 CCHS-Nutrition usual intake estimates to those published in the three volumes of the 2004 CCHS-Nutrition Compendium of Nutrient Intake Tables^Footnote 5 due to differences in usual
intake estimation methodology.
2.3 Methodology for estimating usual intake
2.3.1 Usual intake estimation with the National Cancer Institute method
Estimated distributions of usual nutrient intakes for the 2015 CCHS-Nutrition were computed using the NCI method^Footnote 2^,^Footnote 3. Despite increased computational time compared with other
available methods, the NCI method has advantages as it can be used to estimate intake of both ubiquitously and episodically consumed nutrients and foods, can include covariates in the model, and
accounts for the correlation between probability of consumption and amount consumed.
The NCI method was developed on the premise that usual intake is equal to the probability of consumption on any given day multiplied by the average amount consumed on a "consumption day". There are
slight differences in how the method is applied for dietary components that are consumed by nearly everyone, nearly every day (i.e. ubiquitously consumed) compared with those that are episodically
consumed, on a few days. The approach for ubiquitously consumed components (sometimes referred to as the one-part or amount-only model) assumes a probability of consumption of 1 and requires an
estimation of the amount consumed using linear regression on a transformed scale with a person-specific random effect. The more complex estimation of episodically consumed components (referred to as
the two-part model) requires a model that estimates (1) the probability of consuming a food component using logistic regression also with a person-specific random effect and (2) the amount consumed
using a non-linear mixed model. Each part of this two-part model may include multiple or no covariates. For the two-part models, if the person-specific random effects of the two parts are correlated,
the two-part correlated model is selected. Otherwise, the two-part uncorrelated model is fit. For either the one-part or the two-part models, the next step is to estimate each individual's linear
predictor(s), generate random effects using 100 pseudo-persons for each individual, add random effects to the linear predictors and back-transform the amount estimate to the original scale and
finally estimate mean, standard deviation and percentiles empirically.
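The premise behind the two-part model — usual intake equals the probability of consumption on any given day multiplied by the average amount consumed on a consumption day — can be illustrated with a toy simulation. This is not the NCI estimation procedure itself (which fits mixed-effects regressions on a transformed scale); all distributions and parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons = 1_000

# Person-specific random effects (invented): the probability of consuming
# the component on a given day, and the average amount (mg) consumed on a
# consumption day.
p_consume = rng.beta(2, 5, size=n_persons)
amount = rng.lognormal(3.0, 0.4, size=n_persons)

# Premise of the two-part model:
# usual intake = P(consumption on a day) x E[amount | consumption day]
usual = p_consume * amount

# A single observed day adds within-person variation on top of that:
eats_today = rng.binomial(1, p_consume)
day_noise = rng.lognormal(0.0, 0.5, size=n_persons)  # rough day-to-day spread
one_day = eats_today * amount * day_noise

assert one_day.std() > usual.std()  # one-day intakes are more variable
assert (one_day == 0).mean() > 0.5  # many zero days despite positive usual intake
```

The simulation also shows why a measurement error model is needed: single-day recalls are both more variable and more often zero than the usual intakes underlying them.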
Training materials on the use of the NCI method to estimate usual intake distribution characteristics from the 2015 CCHS-Nutrition data are available from Statistics Canada (contact Client Services,
Health Statistics Division, Statistics Canada at 613-951-1746 or by email at STATCAN.hd-ds.STATCAN@canada.ca). The National Cancer Institute has developed SAS macros for implementation of the NCI
method, which are available online.
2.3.2 Application of the NCI method in the 2004 and 2015 CCHS-Nutrition
The NCI method was originally developed for analysis of the United States National Health and Nutrition Examination Survey (NHANES), which has a different design than the one used in the
CCHS-Nutrition surveys. Application of the NCI method to CCHS-Nutrition was investigated^Footnote 6 and various statistical considerations were noted. In particular, questions relating to the choice
of model (one- or two-part), the method to remedy outliers, and the choice of covariates were investigated and are discussed below.
Selection of one- or two-part model
The decision of whether to implement the one- or two-part model was based on the following scenarios^Footnote 3:
• If less than 5% of the 24-hour recalls (unweighted) had zero intake of a nutrient, then the one-part model was used.
• If greater than 10% of the 24-hour recalls had zero intake of a nutrient, the two-part model was fit twice: once including the correlation between person-specific random effects, and once assuming the correlation to be zero. If the correlation was found to be significant, the correlated model was selected; otherwise, the uncorrelated model was chosen.
• If between 5% and 10% of the 24-hour recalls had zero intake of a nutrient, then both the one- and two-part models were fit and the model with the best fit was chosen for further analysis. The
  best-fitting model was chosen by examining the significance of the correlation coefficient through implementation of Fisher's z-transformation between the two-part models, as in the previous step.
  If neither of the two-part models converged, the amount-only model was fit. In this case, a warning note may appear stating that the estimated distribution might be right-shifted compared to the
  true distribution.
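The decision rules above can be summarized as a small selection function. The sketch below is an illustrative Python re-implementation of the logic only; the actual analysis was carried out with the NCI SAS macros, and the function name and arguments here are invented for clarity:

```python
def choose_nci_model(pct_zero_recalls, correlation_p=None, two_part_converged=True):
    """Schematic of the NCI one-/two-part model selection rules.

    pct_zero_recalls   -- unweighted % of 24-hour recalls with zero intake
    correlation_p      -- p-value (Fisher z-test) for the correlation between
                          person-specific random effects in the two-part model
    two_part_converged -- whether at least one two-part model converged
    """
    if pct_zero_recalls < 5:
        return "one-part"
    if not two_part_converged:
        # Neither two-part model converged: fall back to the amount-only model
        # (with the caveat that the distribution may be right-shifted).
        return "amount-only"
    # Above 10% zeros (and, per the text, in the 5-10% band as well) the choice
    # between the correlated and uncorrelated two-part models rests on the
    # significance of the random-effect correlation.
    if correlation_p is not None and correlation_p < 0.05:
        return "two-part (correlated)"
    return "two-part (uncorrelated)"
```

For example, caffeine for females 9 to 13 (30.6% zeros, p ≤ 0.0001) would land on the correlated two-part model under these rules.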
Intake estimates for all dietary components, except caffeine, were computed with a one-part model, as less than 5% of 24-hour recalls had zero intake of a nutrient. For caffeine, the two-part model
was used for individuals under 30 years of age since more than 10% of recalls had zero intake. For adult groups over 30 years of age, the proportion of zero intakes ranged between 6.41% and 10.24%,
thus both the one- and two-part models were fit for these groups.
Once the model was chosen, the ratio of within-between variance components was used to evaluate other statistical assumptions, including choice of covariates and outliers. Large values of the
within-between variance ratio suggest instability of model parameter estimates, and lead to a larger adjustment of the one-day intakes to the usual intakes. As a result, estimation of percentiles
of the usual intake distribution and prevalence of inadequacy may be impacted. To ensure model accuracy, the effect of covariates, outliers and survey weights was evaluated when computing usual
intakes.
Since estimates of usual intakes by age-sex group are desired at the national, regional and provincial levels, province was included as a covariate in the model. Initial analysis using the NCI method
indicated non-convergence of some age-sex groups in some provinces, thus data from the 2004 CCHS-Nutrition (Cycle 2.2) was included to increase the sample size and then provide usual intake estimates
using the NCI method for the 2004 survey. As a result, parameter estimates were obtained from a dataset with 2004 and 2015 CCHS-Nutrition combined, and survey year was also included as a covariate.
As per the NCI User Guide, covariates for sequence of recall and weekend/weekday were also included.
Pooling vs. Stratification
Computation was done for usual intakes for each age-sex group separately, using a stratified approach. While the NCI method provides the option to pool groups, an initial analysis using the root
survey weight indicated large differences in the estimated ratio of within-between variance components for different age-sex groups. Previous research has noted that pooling is not appropriate in
situations where the within-between ratios are much different, since usual intake distributions could be biased in such cases^Footnote 6. Hence, for the analysis of CCHS - Nutrition data, usual
intakes were obtained by stratification of each age-sex group, while pooling over survey year and province within each stratum.
In cases where the difference between Day 1 and Day 2 intakes was abnormally large (i.e. ratio of within- to between- variation >10), analyses were conducted to look for potential outliers. In such
cases, the Day 2 value was removed as Day 1 values are considered more reliable. Day 1 recall is less likely to be biased due to learning curve or change in diet since the respondent is aware that an
upcoming recall will take place. The impact of outlier removal on the within-between variation was determined on the basis of ±3, ±2.5 or ±2 standard deviations (SD) away from the mean distribution
of difference between Day 1 and Day 2 values. The scenario that resulted in the greatest improvement in within-between variations with the fewest outliers removed was selected. For those dietary
components with outliers identified, the total number of outliers is summarized below:
Table 1: Summary of outliers by dietary components
Dietary Component | DRI Age-Sex Group | Threshold | Number of recalls removed
Percentage of total energy intake from fats | 19 to 30 years, females | 3 SD | 4
Percentage of total energy intake from monounsaturated fats | 19 to 30 years, females | 3 SD | 8
Sodium (mg/d) | 19 to 30 years, males | 2 SD | 39
Potassium (mg/d) | 31 to 50 years, males | 2 SD | 63
Percentage of total energy intake from linolenic fatty acid | 9 to 13 years, females | 3 SD | 11
Percentage of total energy intake from linolenic fatty acid | 14 to 18 years, males | 3 SD | 18
Data source: Statistics Canada, 2015 Canadian Community Health Survey - Nutrition; 2004 Canadian Community Health Survey Nutrition (cycle 2.2) Share files
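The ±k SD screening described above, applied to the distribution of Day 1 minus Day 2 differences, can be sketched as follows. This is an illustrative Python stand-in, not the SAS code actually used; the function name is invented:

```python
from statistics import mean, stdev

def flag_day2_outliers(day1, day2, k=3.0):
    """Return indices of respondents whose Day 1 - Day 2 intake difference
    lies more than k standard deviations from the mean difference.
    Flagged respondents would have their Day 2 recall removed, since
    Day 1 values are considered more reliable."""
    diffs = [d1 - d2 for d1, d2 in zip(day1, day2)]
    m, s = mean(diffs), stdev(diffs)
    return [i for i, d in enumerate(diffs) if abs(d - m) > k * s]
```

In the actual analysis, this screen was run at k = 3, 2.5 and 2, and the scenario giving the greatest improvement in the within-between variance ratio with the fewest removals was kept.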
SAS macros
Analyses were completed using SAS Macros Version 2.1, specifically the MIXTRAN and DISTRIB macros. General documentation on the NCI method, including user guides and specific examples, is also
available online.
To perform the usual intake calculations and model selection steps mentioned previously, the NCI univariate macros MIXTRAN and DISTRIB were implemented in a systematic way, as described below.
The MIXTRAN macro transforms the data and fits the nonlinear mixed model. This macro permits the use of covariates in the model fitting procedure and outputs the parameter estimates needed to
calculate distributions of usual intake.
The DISTRIB macro uses the parameters estimated by MIXTRAN to estimate the usual intake distributions through simulation. This macro can also provide the estimated percentage of the population whose
usual intake falls below or above a certain value, a feature used to provide estimates above or below DRI values (i.e. EARs, AIs and ULs).
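In spirit, DISTRIB's cut-point feature reduces to computing the weighted share of the simulated usual-intake distribution on one side of a DRI value. A minimal illustrative re-implementation (not the macro itself; the function name is invented):

```python
def pct_below(usual_intakes, weights, cutoff):
    """Weighted percentage of simulated usual intakes below a cut-off
    (e.g., an EAR), mimicking what the DISTRIB macro reports."""
    total = sum(weights)
    below = sum(w for x, w in zip(usual_intakes, weights) if x < cutoff)
    return 100.0 * below / total
```

The percentage above a cut-off (for ULs or AIs) is simply 100 minus this quantity.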
The MIXTRAN macro was used extensively to evaluate the ratio of within-between variances, to determine the presence of outliers and to evaluate differences in pooling and stratification. Once the
final model was chosen, the DISTRIB macro was used to calculate percentiles and prevalence of inadequacy for a particular dietary component. The exception to this procedure was iron, whose estimation
procedure is summarized in Section 2.3.4.
Convergence criteria for the MIXTRAN macro
As the NCI method uses a numerical optimization method to find a solution, the default convergence criterion (gconv = 1e-8) was originally used for the analysis of all dietary components (Appendix B).
In most cases, the default criterion provided feasible solutions with the shortest amount of computational time. For 2% of DRI age-sex groups, the default convergence criterion did not provide a
feasible solution, so more stringent convergence criteria (gconv = 1e-12 for potassium with males 9 to 13 years old, and 1e-10 for the remaining) were used for certain dietary components (Table 2),
at the expense of additional run time. In general, it is recommended to use the default convergence criterion in the MIXTRAN macro when computing usual intakes.
Table 2: Summary of dietary components and age-sex groups
with non-default convergence criteria
Dietary Component Age-Sex Groups
Females: 19 to 30 years old
Females: 31 to 50 years old
Females & Males Combined: 1 to 3 years old
Males: 9 to 13 years old
Potassium Females: 14 to 18 years old
Females: 19 to 30 years old
Males: 31 to 50 years old
Energy Females & Males Combined: 1 to 3 years old
Females & Males Combined: 1 to 3 years old
Males: 14 to 18 years old
Males: 71 years and over
Females: 71 years and over
Females: 9 to 13 years old
Vitamin C
Males: 14 to 18 years old
Calcium Females & Males Combined: 4 to 8 years old
Moisture Females & Males Combined: 1 to 3 years old
Total Sugars Females: 14 to 19 years old
2.3.3 Measuring sampling variability with bootstrap replication
The CCHS-Nutrition surveys have a complex design, implying that no mathematical formula exists to calculate the sampling variability directly. Instead, it is necessary to use a replication method to
estimate this variance, and the most convenient method is bootstrap replication. Statistics Canada has provided bootstrap replicate weights to estimate the variance from complex survey sampling designs.
For simple estimates such as totals, ratios or regression parameters, it is possible to estimate the sampling variability by using the bootstrap weights with a survey procedure, such as SUDAAN,
STATA, or PROC SURVEYMEANS in SAS. These procedures properly account for the complex survey design in the estimation of standard errors. To obtain an estimate, the parameter of interest is calculated
(e.g. total, ratio) for each of the 500 replicates and then the variance between the 500 values is computed. This is the method used to estimate the average dietary component intake using day one
recalls only. For estimates related to distributions of usual intake, this process must be repeated when using the NCI method. Thus, it is necessary to estimate the parameters of interest with the
NCI method for each replicate (using each bootstrap weight) and then calculate the variance between each of the 500 estimates.
For some survey procedures, the variance of the 500 replicates compares each estimate with the mean of the 500 bootstraps (the bootstrap mean). The root estimate (the estimate calculated using the
original survey weight) is also available from the data. Typically, since the number of replicates is large (500), the bootstrap mean will converge to the root mean estimate. However, since the NCI
method may fail for some of the 500 replicates, it is possible that not all of the 500 distribution estimates will be available to calculate the bootstrap mean estimates. For this reason, when
calculating the variance from the bootstrap estimates, each replicate is compared with the root estimate and not with the bootstrap mean. As such, some of the bias caused by failing replicates is
mitigated by the estimation procedure.
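This variant of the bootstrap variance estimator, which compares each replicate with the root estimate rather than the bootstrap mean, can be sketched as follows. This is an illustrative Python version; one reasonable handling of failed replicates (skipping them) is assumed here:

```python
import math

def bootstrap_se(root_estimate, replicate_estimates):
    """Bootstrap standard error computed against the root estimate.

    Replicates where the NCI method failed are passed as None and skipped,
    so they do not distort the comparison the way a bootstrap mean over an
    incomplete set of replicates would."""
    ok = [r for r in replicate_estimates if r is not None]
    if not ok:
        raise ValueError("no successful bootstrap replicates")
    return math.sqrt(sum((r - root_estimate) ** 2 for r in ok) / len(ok))
```

With B = 500 replicates, each parameter of interest (percentile, prevalence of inadequacy, etc.) would be re-estimated once per bootstrap weight and fed to a function like this.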
More specifically for usual intakes, let θ̂ denote the root estimate (the estimate calculated using the original survey weight), and let θ̂^(b), b = 1, 2, …, B (where B = 500), denote the estimate of the parameter from each of the B bootstrap replicates. The bootstrap standard error of θ̂ is given in Figure 1:
Figure 1.
SE(θ̂) = √[ (1/B) Σ_{b=1}^{B} ( θ̂^(b) − θ̂ )² ]
Figure 1 - Text Description
Formula showing how the standard error of the estimates was measured using the bootstrap method. According to this formula, the standard error of an estimate is equal to the square root of the mean of the squared differences between the root estimate θ̂ and each of the B bootstrap estimates θ̂^(b), b = 1, 2, …, B (B = 500 for the case of CCHS-Nutrition).
2.3.4 Estimation of iron inadequacy using the full probability method
The distribution of iron requirements for menstruating females and other age-sex groups is not normally distributed, nor necessarily symmetric. Therefore, the full probability approach^Footnote 7 is
required for the estimation of iron inadequacy as opposed to the EAR cut-point method. For all age-sex groups, the iron requirement distributions from the Institute of Medicine's (IOM) report on the
DRIs: The Essential Guide to Nutrient Requirements^Footnote 8 Appendix G was used to estimate inadequacy. For the three DRI age-sex groups of menstruating females aged between 14 and 50 years, the
iron requirement distributions of mixed populations, which assumes 17% oral contraceptive (OC) users and 83% non-OC users, were used to estimate inadequacy^Footnote 8. For females 51 to 70 years and
71+ years, the iron requirement distributions for the post-menopausal population were used.
Tables of the risk of inadequate intake for specified ranges of the usual intake of iron, which are provided in the IOM report, were used for calculating iron inadequacy. The following summarizes how
the full probability method to estimate iron inadequacy was implemented:
• The NCI method was used to estimate the usual intake distribution for iron. For each DRI age-sex group, the MIXTRAN macro was run separately with the covariates survey year (cycle), province,
weekend/weekday and sequence of 24-hour recall, similar to other nutrients. For females 9 to 13, 19 to 30 and 31 to 50 years old, other covariates pertaining to female health had sufficient
sample size and were considered to improve model fit. In particular, for females 9 to 13, the variable "Have you begun having menstrual cycles (periods) yet?" was considered; while for females 19
to 30 and 31 to 50, the covariate "In the past month, did you take birth control pills, including for reasons other than birth control?" was used. For females 31 to 50, the birth control
covariate was significant (p=0.0012) and was included in the final model. For females 9 to 13 and 19 to 30 years old, these covariates were not significant (p=0.1552 and p=0.1400 respectively)
and were removed from the final model. Individuals with missing covariate values were excluded from the final model for females 31 to 50 years old.
• In all cases, once the model was finalized, the parameter estimates from MIXTRAN were included in the DISTRIB macro to compute usual intake distributions for iron. Within the DISTRIB macro, the
dataset corresponding to estimates of the pseudo-individuals (mcsim) was obtained, which considers iron usual intakes for 100 simulated individuals from each respondent.
• From Appendix G of the IOM report^Footnote 8 on the DRIs for iron, Tables G5, G6 and G7 were used to determine the risk values. For females aged 14 to 18 years and menstruating women, the tables
for the mixed adolescent and adult populations were used. Finally, for females 51-70 and 71+, tables for the post-menopausal requirements were used.
• As an example, for the mixed adolescent population, intakes below the minimum value of 4.49 mg/d are assumed to have 100% probability of inadequacy (risk = 1.0). Those with intakes above or equal
  to the maximum value of 14.39 mg/d are assumed to have zero risk of inadequacy. For intakes between these two extremes, the risk of inadequacy is calculated as 100 minus the midpoint of the
  percentile range of the requirement distribution corresponding to that intake, expressed as a percentage.
• The weighted average of these simulated risk values over all respondents within the DRI age-sex group was the estimate of the iron inadequacy for that age-sex group.
• Since covariates were included to improve estimates in some age-sex groups, a different approach was used to calculate the usual intake distribution for adult males and females 19 years and
older. In these two groups, results from each of the four stratified MIXTRAN runs (e.g. females 19 to 30, females 31 to 50, females 51 to 70 and females 71+) were obtained. The 100 simulated
pseudo-individuals from each of these DRI age-sex groups were found using the DISTRIB macro, and the risk associated with each of the 100 pseudo-individuals was calculated, as outlined in the
previous step. Finally, the simulated data from the four gender-specific age-sex groups were "stacked" and the prevalence of inadequacy for the entire adult group was estimated by gender.
• Standard errors for the estimates were calculated with the probability approach using the bootstrap method, as described in Section 2.3.3.
• For additional information on iron estimation and the full probability method, consult the Health Canada publication Reference Guide to Understanding and Using the Data - 2015 Canadian Community
Health Survey - Nutrition, Appendix 4^Footnote 1.
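The full probability steps above amount to a risk lookup followed by a weighted average. In the sketch below, the 4.49 and 14.39 mg/d cut-offs come from the text, but the intermediate risk table is a made-up stand-in for the IOM Appendix G tables — real analyses must use the published values, and all names here are illustrative:

```python
# Hypothetical (intake upper bound, risk) pairs standing in for an IOM
# Appendix G requirement table; NOT the real published values.
RISK_TABLE = [(6.0, 0.85), (8.0, 0.50), (11.0, 0.20), (14.39, 0.05)]

def iron_risk(intake, lo=4.49, hi=14.39, table=RISK_TABLE):
    """Probability of inadequacy for one simulated usual intake (mg/d)."""
    if intake < lo:
        return 1.0   # below the minimum requirement: certain inadequacy
    if intake >= hi:
        return 0.0   # at or above the maximum requirement: zero risk
    for upper, risk in table:
        if intake < upper:
            return risk
    return 0.0

def prevalence_of_inadequacy(intakes, weights):
    """Weighted average risk over simulated pseudo-individuals: this is the
    full-probability estimate of iron inadequacy for an age-sex group."""
    total = sum(weights)
    return sum(iron_risk(x) * w for x, w in zip(intakes, weights)) / total
```

In the actual procedure, the intakes come from the 100 pseudo-individuals per respondent output by the DISTRIB macro, and the weights are the survey weights.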
2.3.5 Estimation of usual intake distribution for caffeine
The analysis of caffeine differed from other nutrients, primarily since intake varies depending on the age group considered. To be consistent with Health Canada guidance, information on the usual
intake of caffeine is provided for individuals aged 4 years and older. Unlike other nutrients, the usual intake of caffeine was analyzed using the two-part NCI model, since caffeine was found to be
an episodically consumed nutrient for some Canadians.
The percentage of 24-hour recalls with zero intake was larger than 10% for many age-sex groups (Table 3), hence the correlated and uncorrelated models were fit (see section 2.3.2). For individuals
over 30 years of age, the percentage of zero intake was between 5% and 10% approximately, thus all three NCI models were fit - correlated, uncorrelated and amount-only model. For all models, the same
covariates were used: survey year, province, weekend/weekday, and sequential effect of the 24-hour recall. In addition, the parameter estimates from MIXTRAN which were obtained using the original
survey weight became starting values for the subsequent bootstrap runs. As part of MIXTRAN, Box-Cox transformations were used to transform the data to normality. However, for children 4 to 8 years,
males 9 to 13 years and females 9 to 13 years old, the log-transformation (lambda = 0) was used. The amount-only model was not considered in the final analysis since all of the two-part models converged.
Table 3 - Summary of statistical considerations for usual intake of caffeine by age/sex group
Age-Sex Percentage of zero intake (%) No. of outliers removed in final model Final p-value for correlated model Two-part Model Used
Females & Males combined: 4 to 8 years 36.0 1 0.75 Uncorrelated
Males: 9 to 13 years 32.3 0 0.40 Uncorrelated
Females: 9 to 13 years 30.6 0 ≤0.0001 Correlated
Males: 14 to 18 years 27.4 0 0.29 Uncorrelated
Females: 14 to 18 years 31.2 0 0.52 Uncorrelated
Males: 19 to 30 years 20.1 0 0.005 Correlated
Females: 19 to 30 years 19.5 1 0.61 Uncorrelated
Males: 31 to 50 years 9.9 8 0.17 Uncorrelated
Females: 31 to 50 years 10.2 15 0.03 Correlated
Males: 51 to 70 years 6.8 10 0.003 Correlated
Females: 51 to 70 years 7.0 11 0.12 Uncorrelated
Males: 71 years and older 6.4 6 0.02 Correlated
Females 71 years and older 6.9 7 N/A Uncorrelated
Data Source: Statistics Canada, 2015 Canadian Community Health Survey - Nutrition; 2004 Canadian Community Health Survey Nutrition (cycle 2.2) Share files
N/A - Unable to calculate correlated p-value since the correlated model did not converge: uncorrelated model fitted
The following procedure was used in choosing which two-part model provided the best fit. When the Fisher's Z-transformation of the estimated correlation between the random effects in the correlated
model differed statistically from zero at the 5% significance level, then the correlated model was used. Otherwise, the uncorrelated model was fit. Correlated models were used to estimate usual
intakes for females 9 to 13 years, males 19 to 30 years, females 31 to 50 years, males 51 to 70 years and males 71 years and older. For females 71 years and older, the correlated model did not
converge using the root survey weight, thus the uncorrelated model was used for analysis. The uncorrelated model was fit for all other DRI groups (Table 3).
By implementing the outlier detection strategy, described in Section 2.3.2, the resulting ratio of within-person to between-person variation was found to be smaller than 10 in all DRI age-sex groups.
No outliers were removed using this method. Another outlier detection strategy was used to search for possible violations to the normality assumption^Footnote 9. In particular, the method computes a
Box-Cox transformation of the original non-zero intake values and flags extreme values satisfying one of two criteria: i) those below the 25^th percentile minus 2.5 times the interquartile range of
the transformed distribution; and ii) values which were above the 75^th percentile plus 2.5 times the interquartile range. Table 3 lists the number of outliers removed for each DRI age-sex group
using this method.
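The second screening rule — flagging transformed non-zero intakes below the 25th percentile minus 2.5 times the IQR, or above the 75th percentile plus 2.5 times the IQR — can be sketched as follows. A log transform is used here as the lambda = 0 special case of Box-Cox mentioned in the text; the full method would estimate lambda from the data, and the function name is invented:

```python
import math
from statistics import quantiles

def flag_caffeine_outliers(nonzero_intakes, k=2.5):
    """Flag transformed non-zero intakes outside [Q1 - k*IQR, Q3 + k*IQR]."""
    t = [math.log(x) for x in nonzero_intakes]  # lambda = 0 Box-Cox case
    q1, _, q3 = quantiles(t, n=4)               # exclusive-method quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(t) if v < lo or v > hi]
```

Table 3 lists how many recalls this kind of screen removed for each DRI age-sex group.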
In addition to the specific DRI age-sex groups, usual intake distributions of caffeine for males 19 years and older and females 19 years and older were also calculated. A distinct approach was used
for these combined age groups because each individual age-sex group required different models (Table 3). Based on their respective model, for each individual, 100 simulated pseudo-individuals were
outputted by the NCI method using the DISTRIB macro. Finally, the simulated pseudo-individuals obtained were "stacked" and the distribution of usual intakes for both adult gender groups was estimated.
Standard errors for the caffeine estimates were calculated using the bootstrap method, as described in Section 2.3.3. For the males 19+ group, 46 bootstrap replicates failed, compared with 80 failed
replicates for the females 19+ group.
2.3.6 Data source
The datasets used to generate estimates were the 2004 and 2015 Canadian Community Health Survey - Nutrition Share Files, which consist of all respondents who agreed to share their responses with the
survey share partners. About 96% of respondents agreed to share their responses^Footnote 1.
Excluded from the dataset were respondents with null intakes (zero total intake from food) or invalid intakes, breastfed children and pregnant or breastfeeding women. Day one and day two recalls were
used. Three respondents with day two recalls who did not have a corresponding day one recall were excluded. Analysis was performed on provincial, regional (Atlantic and Prairies) and national levels
for all age-sex groups other than children aged between 0 and 1 year. Analysis was also performed for the aggregated age-sex groups: males 19+ and females 19+ years of age.
2.4. Comparing 2015 and 2004 nutrient intake estimates
One of the objectives of the 2015 CCHS-Nutrition was to assess whether changes in dietary intake have occurred since the 2004 CCHS-Nutrition. To meet this objective, the percentage of the population
above or below relevant DRI reference values in 2004 and 2015 were compared. This was done using t-tests where the mean change between 2004 and 2015 is compared to 0 and where the estimate of
variance of that change comes from the bootstrap repetitions. The p-values presented in the summary data table were not adjusted for multiple comparisons.
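The between-year comparison can be sketched as follows. A normal approximation is used for the two-sided p-value, which is reasonable given the large number of bootstrap replicates; the exact reference distribution used in the actual analysis is not specified beyond "t-tests", and the function name is invented:

```python
from statistics import NormalDist

def compare_years(est_2004, est_2015, se_diff):
    """Test whether the 2015 - 2004 change in an estimate differs from zero.

    se_diff is the bootstrap standard error of the change, obtained from
    the paired bootstrap replicates of the two survey years."""
    diff = est_2015 - est_2004
    z = diff / se_diff
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return diff, z, p
```

As the text notes, these p-values are not adjusted for multiple comparisons, so many tests run side by side will produce some small p-values by chance alone.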
When interpreting between-year comparisons and before drawing conclusions, it is essential to consider that the data are not adjusted for differences in methodology. Differences in intakes between
the two survey years may reflect changes in consumption patterns, changes in the nutrient composition of foods and/or changes in survey methodology among other potential explanations. Please refer to
The Reference Guide to Understanding and Using the Data- 2015 Canadian Community Health Survey- Nutrition Section 4^Footnote 1 for detailed discussions of what differed between survey years and
potential implications. A number of potential differences in data collection and processing for the 2015 CCHS-Nutrition are likely to have affected intake estimates. Three of the major differences were:
• changes in the nutrient databases beyond reformulation of food products by manufacturers, for example, filling in nutrient values that were 'missing';
• use of an updated model booklet in the interview to estimate amounts consumed;
• the addition of quality checks during the interview when a large amount was entered, thereby allowing any necessary revisions to be made in the presence of the respondent.
In 2015, vastly fewer post-data-collection manual sizing edits were required (68 in 2015 vs. 22,000 in 2004), suggesting the quality and quantity control improvements implemented for 2015 were highly effective.
Appendix A - Table footnotes
The following footnotes apply to the summary data table:
1. The survey excludes from its target population those living in the three territories, individuals living on reserves, residents of institutions, full‐time members of the Canadian Armed Forces and
residents of certain remote regions.
2. The table excludes pregnant and breastfeeding females, subject to another set of nutritional recommendations. The sample of pregnant and breastfeeding females is not large enough to allow for
reliable estimates at the provincial level.
3. Sample size is based on the first 24‐hour recall (first day of interview) only.
4. Intakes are based on food consumption only. Intakes from vitamin and mineral supplements are not included. Inferences about the prevalence of nutrient excess or inadequacy based on intakes from
food alone may respectively underestimate or overestimate the prevalence based on total nutrient intakes from both food and supplements.
5. The intake distribution (percentiles and percentage above or below a cut‐off when applicable) was adjusted using the National Cancer Institute (NCI) Method as described in Tooze JA, Midthune D,
Dodd KW, et al.: A new statistical method for estimating the usual intake of episodically consumed foods with application to their distribution. J Am Diet Assoc 2006;106: 1575-1587 and Tooze JA,
Kipnis V, Buckman DW, et al.: A mixed-effects model approach for estimating the distribution of usual intake of nutrients: the NCI method. Stat Med 2010; 29: 2857-2868
6. Bootstrapping techniques were used to produce the coefficient of variation (CV) and the standard error (SE).
7. AMDR is the Acceptable Macronutrient Distribution Range, expressed as a percentage of total energy intake. Intakes inside the range (shown in the AMDR columns) are associated with a reduced risk
of chronic disease while providing adequate intakes of essential nutrients. For further information on AMDR in assessing population groups, see the Health Canada publication Reference Guide to
Understanding and Using the Data - 2015 Canadian Community Health Survey- Nutrition, Section 2.2.6 page 28^Footnote 1.
8. EAR is the Estimated Average Requirement. In the context of reporting results in a population-based survey such as the 2004 and 2015 CCHS-Nutrition, the primary use of the EAR is to estimate the
prevalence of inadequacy of some nutrients in a group. For further information on EAR and how to interpret the prevalence of inadequacy in a population see the Health Canada publication The
Reference Guide to Understanding and Using the Data - 2015 Canadian Community Health Survey - Nutrition, Section 2.2.2, page 24^Footnote 1.
9. AI is the Adequate Intake. The level of intake at the AI (shown in the AI columns) is the recommended average daily intake level based on observed or experimentally determined approximations or
estimates of nutrient intake by a group or groups of apparently healthy people that are assumed to be adequate. It is developed when an EAR cannot be determined. The percentage of the population
having a usual intake above the AI (shown in the %>AI columns) almost certainly meets their needs. The adequacy of intakes below the AI cannot be assessed, and should not be interpreted as being
inadequate. For further information on AI and how to interpret the prevalence of inadequacy in a population, see the Health Canada publication Reference Guide to Understanding and Using the Data
- 2015 Canadian Community Health Survey - Nutrition, Section 2.2.4, pages 25-26^Footnote 1.
10. UL is the Tolerable Upper Intake Level. The level of intake at the UL (shown in the UL columns) is the highest average daily intake level that is likely to pose no risk of adverse health effects
to almost all individuals in the general population. For further information on UL and how to interpret the prevalence of intakes above the UL in a population, see the Health Canada publication
The Reference Guide to Understanding and Using the Data - 2015 Canadian Community Health Survey - Nutrition, Section 2.2.5, page 28^Footnote 1. In 2017, the Guiding Principles for Developing
Dietary Reference Intakes Based on Chronic Disease recommended that the UL be retained in the expanded DRI model, but that it should characterize toxicological risk^Footnote 10.
11. The Chronic Disease Risk Reduction Intake (CDRR) is the lowest level of intake for which there is sufficient strength of evidence to characterize a chronic disease risk reduction. For more
detailed understanding of the CDRR and its interpretation when assessing intakes of particular nutrients, consult the 2017 National Academies report, Guiding Principles for Developing Dietary
Reference Intakes Based on Chronic Disease^Footnote 10.
12. For a more detailed understanding of DRIs and their interpretation when assessing intakes of particular nutrients, consult the summary of the series of publications on DRIs published by the
Institute of Medicine: Dietary Reference Intakes: The Essential Guide to Nutrient Requirements, (2006)^Footnote 8.
13. For more detailed understanding of DRIs and their interpretation when assessing intakes of sodium and potassium, consult the Dietary Reference Intakes for Sodium and Potassium, 2019^Footnote 11.
14. Data on trans-fat intake cannot be obtained from the 2004 and 2015 CCHS-Nutrition datasets and therefore are not reported separately. However, the estimates for percent energy from total fat
comprise all fats, including trans-fats. Note that the estimates provided for energy intake from the individual types of fat will not add up to the estimates provided for total fat due to
measurement error as well as the lack of data on trans-fat intake.
15. In terms of precision, the estimate 0.0 with a standard error of 0.0 refers to a standard error smaller than 0.1%.
16. Data with a coefficient of variation (CV) from 16.6% to 33.3% are identified as follows: (E) use with caution.
17. Data with a coefficient of variation (CV) greater than 33.3% with a 95% confidence interval entirely between 0 and 3% are identified as follows: <3 interpret with caution.
18. Data with a coefficient of variation (CV) greater than 33.3% were suppressed due to extreme sampling variability and are identified as follows: (F) too unreliable to be published.
19. Comparisons between the 2004 and 2015 CCHS-Nutrition were calculated using paired t-tests without adjustment for multiple comparisons.
20. Data are not adjusted for differences in methodology between the 2004 and 2015 CCHS-Nutrition. For additional information on what differed between years and potential implications, please refer
to section 2.4 of this methodology document.
Appendix B - List of dietary components
Dietary Components
Energy and macronutrients
Total energy intake
Total carbohydrates
Percentage of total energy intake from carbohydrates
Total sugars
Percentage of total energy intake from sugars
Total fats
Percentage of total energy intake from fats
Total saturated fats
Percentage of total energy intake from saturated fats
Total monounsaturated fats
Percentage of total energy intake from monounsaturated fats
Total polyunsaturated fats
Percentage of total energy intake from polyunsaturated fats
Linoleic acid
Percentage of total energy intake from linoleic acid
Linolenic acid^Footnote a
Percentage of total energy intake from linolenic acid
Protein^Footnote b
Percentage of total energy intake from protein
Total dietary fibre^Footnote c
Vitamin A^Footnote d
Vitamin B6
Vitamin B12
Vitamin C
Vitamin C - by smoking status
Vitamin D
Folacin^Footnote e
Naturally occurring folate
Minerals
Iron^Footnote f
Phosphorus
Other Dietary Components
Moisture^Footnote g
Mechanical Energy Conversion
Question Video: Mechanical Energy Conversion Physics • First Year of Secondary School
A ball with an initial velocity of 10 m/s rolls along a curved surface, as shown in the diagram. The mass of the ball is 100 g. Assume that the only energy conversions that take place are between the
kinetic energy and the gravitational potential energy of the ball and calculate the speed of the ball at different positions to the nearest meter per second. Find the magnitude of 𝑣₁. Find the
magnitude of 𝑣₂. Find the magnitude of 𝑣₃. Find the magnitude of 𝑣₄.
Video Transcript
A ball with an initial velocity of 10 meters per second rolls along a curved surface as shown in the diagram. The mass of the ball is 100 grams. Assume that the only energy conversions that take
place are between the kinetic energy and the gravitational potential energy of the ball. And calculate the speed of the ball at different positions to the nearest meter per second. Find the magnitude
of 𝑣 one. Find the magnitude of 𝑣 two. Find the magnitude of 𝑣 three. Find the magnitude of 𝑣 four.
Okay, so in this question, we need to find out the magnitudes of 𝑣 one, 𝑣 two, 𝑣 three, and 𝑣 four, which are the velocities of the ball at different points along the curve, the curve of course being
the red line here. So one of the important things that we’ve been told is that the only conversions in energy that take place are between the kinetic energy and the gravitational potential energy of
the ball.
In other words, we only need to deal with those two types of energy. We don’t need to worry about friction between the ball and the surface, for example, or anything like that. So that makes life a
lot easier for us.
Now another piece of information we’ve been given is that the mass of the ball is 100 grams. Let’s call this mass 𝑚. And we’ll say that that is equal to 100 grams. However, this is not the most
useful piece of information for us in this form. And the reason for that is that we first need to convert it to standard units.
Now the standard unit of mass is the kilogram. So we need to convert this mass from grams to kilograms. To do this, we can recall that one kilogram is equivalent to 1000 grams. So what we can do here
is to divide both sides of the equation by 10. And this way, on the left-hand side, we’ll be left with 0.1 kilograms. And on the right, we’ll be left with 100 grams, which is exactly what we needed
to convert into kilograms. So we can say that this mass, the mass of the ball, which is 100 grams, is instead equal to 0.1 kilograms.
And at this point, we can move on because we’ve now converted it to standard units. Okay, so at this point, let’s consider the ball when it’s first at this position here. Now at this position, we
know the height of the ball above the ground. We know that this height is 25 meters. And we know the speed of the ball. It’s 10 meters per second.
Therefore, with this information, we can work out both the gravitational potential energy of the ball in this position and its kinetic energy. We’ll see whether this is useful in a second. But first
let’s recall that the gravitational potential energy of an object, GPE, is given by multiplying the mass of the object 𝑚 by the gravitational field strength of the Earth 𝑔 by the height above the
ground ℎ.
And in this case, the height above the ground, like we said earlier, is 25 meters. So at this position, which we’ll call position zero, the ball has a gravitational potential energy, which we’ll call
GPE sub zero.
And this gravitational potential energy at position zero is equal to the mass of the ball, which is 0.1 kilograms, multiplied by the gravitational field strength of the Earth, which we can recall is
9.8 meters per second squared, multiplied by the height above the ground, which is 25 meters. And the good thing about this expression is that we’re using the standard units for mass, which is
kilograms, the standard units for gravitational field strength, which is meters per second squared, and the standard units for the height above the ground, which is meters.
Therefore, the answer that we get for the gravitational potential energy is also going to be in its standard unit, which is the joule. So when we evaluate the right-hand side of the equation, we find
that this evaluates to 24.5 joules. And this is the gravitational potential energy of the ball in position zero.
We can, therefore, move on to working out the kinetic energy of the ball. And to do this, we’ll recall that the kinetic energy of an object, KE, is given by multiplying half by the mass of that
object 𝑚 by the velocity of the object 𝑣 squared. So in position zero, the kinetic energy, which we’ll call KE sub zero, is equal to half multiplied by the mass of the ball, which once again is 0.1
kilograms, multiplied by the square of the velocity of the ball, which is 10 meters per second. And this is the velocity that we’re using, the velocity in position zero.
Also, yet again, we’ve used the standard unit for mass, which is kilograms, and the standard unit for velocity, which is meters per second. So the kinetic energy we find is also going to be in
joules. So evaluating the right-hand side, we find that the kinetic energy of the ball in position zero is five joules. And at this point, we know both the gravitational potential energy of the ball
and the kinetic energy of the ball at position zero.
Now since the question tells us that these are the only two types of energy that we need to worry about, we can therefore work out the total energy of the ball at position zero. So the total energy
of the ball at position zero, which we’ll call 𝐸 sub zero, is equal to the gravitational potential energy at position zero plus the kinetic energy at position zero. And this happens to be 24.5
joules, which is the gravitational potential energy, plus five joules, which is the kinetic energy. And so the total energy of the ball in this position is 29.5 joules.
Now why is this relevant? Well, it’s because we can use the law of conservation of energy. What conservation of energy tells us is that energy is neither created nor destroyed. In fact, it can only
be converted from one form to another. But if energy cannot be created or destroyed, then the total energy of the ball must remain the same throughout its journey. In other words, the total energy of
the ball in position zero is the same as the total energy of the ball in position one and position two and position three and position four and anywhere else along the curve.
Now let’s think about this carefully. We’re not saying that the gravitational potential energy stays the same throughout or the kinetic energy stays the same throughout. In fact, the gravitational
potential energy and kinetic energy individually change all the time. But what we’re saying is that the total energy or the sum of the gravitational potential energy and the kinetic energy has to
stay the same throughout.
So in this case, what happens is, say, for example, we start here at position zero. And the ball is moving along and it starts going towards position one. Well, in this case, it’s moving downwards
along the slope. So it’s losing height. And we see that it goes from 25 meters to 15 meters above the ground. So it’s losing gravitational potential energy. But whatever gravitational potential
energy loses, it gains as kinetic energy. And this is how the total energy of the ball stays constant. And this is something that we’ll be able to exploit in order to work out the values of 𝑣 one, 𝑣
two, 𝑣 three, and 𝑣 four.
Now at this point, we don’t need to know the gravitational potential energy and kinetic energy at position zero. Instead, what we do need to know is the total energy of the ball. So let’s put a
little box around it. And let’s look at position one first of all.
Now as we’ve already said, the total energy of the ball stays the same. So we can write down an equation that tells us that the gravitational potential energy of the ball in position one this time
plus the kinetic energy of the ball in position one is equal to the energy, the total energy, in position one.
But as we’ve already said, the total energy stays the same throughout its journey. So 𝐸 sub one is the same as 𝐸 sub zero. So GPE sub one plus KE sub one is equal to 𝐸 sub zero. And we can substitute
in the expressions for the gravitational potential energy and the kinetic energy. So 𝑚𝑔 multiplied by the height at position one, which we’ll call ℎ sub one, plus half 𝑚 multiplied by the velocity at
position one, which we’ll call 𝑣 sub one squared, is equal to 𝐸 sub zero.
Now in this equation, we already know the value of 𝑚 as well as 𝑔 as well as ℎ one. We’ve been given ℎ one here. And we know 𝑚 once again. We don’t know what 𝑣 one is. But we do know what 𝐸 zero is.
In other words, there’s only one unknown in this equation. That’s 𝑣 one. So we can rearrange to find out what 𝑣 one is.
Firstly, we’ll write down the value of 𝐸 sub zero over here to give us a little bit more space to work with. And then what we’ll do is start to rearrange. Firstly, we can subtract the value of 𝑚𝑔ℎ
sub one from both sides of the equation. That way, 𝑚𝑔ℎ sub one cancels on the left. And what we’re left with is that half multiplied by 𝑚 multiplied by 𝑣 one squared is equal to 𝐸 naught minus 𝑚𝑔ℎ
sub one.
Then what we can do is to multiply both sides of the equation by two over 𝑚. This way, the half cancels with the two in the numerator. And the mass on the left-hand side also cancels. So we’re only
left with 𝑣 one squared on the left-hand side. And then at this point, all we need to do is to take the square root of both sides. On the left, that just leaves us with 𝑣 one. And on the right, we’re
left with an expression to help us find 𝑣 one.
Now it’s important to note that this expression is in terms of ℎ one, the height at position one. And this is really useful because we can then apply this to position two, three, and four, simply by
changing the values one here and here to two or three or four. For now though, let’s keep it as 𝑣 one because that’s what we’re trying to find out first.
So 𝑣 one is equal to the square root of two over 𝑚 multiplied by 𝐸 naught minus 𝑚𝑔ℎ one. Let’s plug in all the values on the right-hand side. 𝑣 one is equal to the square root of two divided by the mass, which is 0.1, multiplied by the quantity in parentheses: 29.5 joules, which is 𝐸 naught, minus 𝑚 times 𝑔 times ℎ one. And at this point, we can evaluate this to find that 𝑣 one is equal to 17.20 meters per second.
However, we need to give our answer to the nearest meter per second. So we need to round to this value just before the decimal. Now the value after the decimal is a two. So the seven is going to stay
as a seven. It’s not going to round up. Hence, to the nearest meter per second, the value of 𝑣 one is 17. And this is our answer to the first part of the question.
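The energy-conservation recipe used here can be checked numerically. In this sketch, h₁ and h₂ are the heights given in the transcript, while h₃ = 20 m and h₄ = 0 m are assumptions read off the diagram (they are not stated in the text):

```python
import math

m = 0.1    # mass of the ball in kg (100 g converted to kilograms)
g = 9.8    # gravitational field strength in m/s^2
v0 = 10.0  # speed at position zero in m/s
h0 = 25.0  # height at position zero in m

# Total mechanical energy at position zero, conserved along the curve
E0 = m * g * h0 + 0.5 * m * v0**2   # 24.5 J + 5 J = 29.5 J

def speed_at(h):
    """Speed at height h, from E0 = m*g*h + (1/2)*m*v^2."""
    return math.sqrt((2 / m) * (E0 - m * g * h))

# h1 and h2 come from the transcript; h3 and h4 are assumed (diagram).
heights = {"v1": 15.0, "v2": 10.0, "v3": 20.0, "v4": 0.0}
for name, h in heights.items():
    print(name, "=", round(speed_at(h)), "m/s")   # 17, 20, 14, 24
```

Rounding each speed to the nearest meter per second reproduces the answers worked out step by step in the transcript.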
Let’s now move on to finding 𝑣 sub two. And we can do this by replacing the one in this equation with two. Now we can see that the value of ℎ two, the height at position two, is 10 meters. So we plug
that value in and keep everything else the same. And we find that 𝑣 two ends up being 19.84 meters per second.
But once again, we need to get our answer to the nearest meter per second. So we’re looking at rounding this value here. Now the number after it is an eight. Eight is larger than five. So this nine
is going to round up. In other words, this 19 is going to become a 20. So to the nearest meter per second, our answer is 20. And this is our answer to the second part of the question.
Now we can repeat this process for 𝑣 three to give a value of 14.07 meters per second, which, to the nearest meter per second, is simply 14, and repeat it once again for 𝑣 four, which happens to be
24.28 meters per second. Now once again, to the nearest meter per second, this rounds to be 24. And at this point, we found the velocity of the ball in all four positions along the curve. So we’ve
reached the end of our question. | {"url":"https://www.nagwa.com/en/videos/723141829797/","timestamp":"2024-11-04T07:40:37Z","content_type":"text/html","content_length":"266211","record_id":"<urn:uuid:5705b202-f310-4da0-9fab-8cd12aea8256>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00237.warc.gz"} |
Directory macros/latex/contrib/mismath
Miscellaneous mathematical macros - The mismath package
The package provides some mathematical macros to typeset:
• mathematical constants e, i, pi in upright shape (automatically) as recommended by ISO 80000-2,
• vectors with nice arrows and adjusted norm,
• tensors in sans serif bold italic shape (ISO recommendation),
• some standard operator names,
• several commands with useful aliases,
• improved spacings in mathematical formulas,
• systems of equations and small matrices,
• displaymath in double columns for lengthy calculations with short expressions.
Installation:
• run LaTeX on mismath.ins, you obtain the file mismath.sty,
• if then you run pdfLaTeX on mismath.dtx you get the file mismath.pdf which is also in the archive,
• put the files mismath.sty and mismath.pdf in your TeX Directory Structure.
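A minimal usage sketch (assuming a standard LaTeX distribution; it relies only on the automatic ISO-style behavior described above and invokes no package-specific macros, since their names are not listed here):

```latex
\documentclass{article}
\usepackage{mismath}
\begin{document}
% Per the package description, e, i and \pi in the formula below
% should be typeset upright automatically (ISO 80000-2):
\[ e^{i\pi} + 1 = 0 \]
\end{document}
```

Compile with pdfLaTeX after installing mismath.sty as above.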
Antoine Missier
Email: antoine.missier@ac-toulouse.fr
Released under the LaTeX Project Public License v1.3 or later. See http://www.latex-project.org/lppl.txt
Download the contents of this package in one zip archive (439.2k).
mismath – Miscellaneous mathematical macros
The package provides some mathematical macros to typeset:
• mathematical constants e, i, π in upright shape (automatically) as recommended by ISO 80000-2,
• vectors with nice arrows and adjusted norm (and tensors),
• tensors in sans serif bold italic shape,
• some standard operator names,
• improved spacings in mathematical formulas,
• systems of equations and small matrices,
• displaymath in double columns for lengthy calculations.
Package mismath
Version 3.1 2024-06-16
Licenses The LaTeX Project Public License 1.3
Copyright 2019–2024 Antoine Missier
Maintainer Antoine Missier
Contained in TeXLive as mismath
MiKTeX as mismath
Topics Maths | {"url":"https://www.ctan.org/tex-archive/macros/latex/contrib/mismath","timestamp":"2024-11-09T22:14:03Z","content_type":"text/html","content_length":"16708","record_id":"<urn:uuid:b6d70fd5-c2ac-4048-af7c-451274900184>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00783.warc.gz"} |
Net Present Value (non-periodic cash flows)
Calculates the Net Present Value (NPV) of a series of non-periodic cash flows F[0] F[1] F[2] ... occurring at dates d[0] d[1] d[2] ... .
`NPV = F_0+F_1/(1+r)^((d_1-d_0)/365)+F_2/(1+r)^((d_2-d_0)/365)+F_3/(1+r)^((d_3-d_0)/365)+...` (see explanations below)
If cash flows are periodic (e.g., annually), this calculator would be more adapted NPV with periodic cash flows.
How to use this calculator ?
Field "date format"
mm/dd/yyyy is the US format with the following rules,
• the day is written in two digits: input 11/02/2023 and not 11/2/2023.
• the month is also written in two digits: input 03/22/2023 and not 3/22/2023.
• the year is written in four digits: input 12/10/2024 and not 12/10/24.
The dd/mm/yyyy format is the European format, which follows the same rules as above except that the day and the month are reversed.
Field "Dates / cash flows"
• Input a single cash flow per line, preceded by its date.
Example of valid cash flow schedule,
02/03/2025 -12000
05/06/2025 6000
10/08/2026 6000
• The first line corresponds likely, though not necessarily, to the initial investment (input a negative value in this case as for all outgoing cash flows).
This first date is the origin date that is used to discount all others inputted cash flows.
• The origin date must be earlier than all other dates (otherwise an error message is displayed).
For other cash flows, the order of entry does not have to be chronological. The result table will display the cash flows in the order entered and not chronologically.
Field "discount rate"
Input an annual discount rate without the percentage symbol. For example, input 2.5 for 2.5%.
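As a sketch of the computation this calculator performs, in Python. Note two assumptions: the example schedule is read here in dd/mm/yyyy format, and the rate is passed as a fraction (0.05 for 5%), unlike the form field, which takes 2.5 for 2.5%:

```python
from datetime import date

def npv(rate, dated_flows):
    """Net Present Value of non-periodic cash flows.
    `dated_flows` is a list of (date, amount) pairs; the first pair's
    date is the origin d0, and each flow is discounted over
    (d - d0)/365 years at the annual `rate` (a fraction, e.g. 0.05)."""
    d0 = dated_flows[0][0]
    return sum(amount / (1 + rate) ** ((d - d0).days / 365)
               for d, amount in dated_flows)

# The example schedule above, read as dd/mm/yyyy, at a 5% annual rate
flows = [(date(2025, 3, 2), -12000.0),   # initial investment (outgoing)
         (date(2025, 6, 5), 6000.0),
         (date(2026, 8, 10), 6000.0)]
print(round(npv(0.05, flows), 2))
```

At a 0% rate the NPV is just the plain sum of the flows, and the origin flow is never discounted, which gives two quick sanity checks on the function.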
See also
NPV with periodic cash flows calculator
Investment Calculators
Finance Calculators | {"url":"https://www.123calculus.com/en/npv-custom-periods-page-2-60-610.html","timestamp":"2024-11-03T10:28:22Z","content_type":"text/html","content_length":"18474","record_id":"<urn:uuid:a5b24e75-d8ab-4318-aaff-218fbb658ee7>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00057.warc.gz"} |
Website Detail Page
published by the National Council of Teachers of Mathematics
This item is a unit of study for high school students on the topic of data analysis. It consists of nine lessons in which users interpret the slope and y-intercept of least squares
regression lines in the context of real-life data.
This resource is part of a larger collection of lessons, labs, and activities developed by the National Council of Teachers of Mathematics (NCTM).
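To make "least squares regression line" concrete, here is a minimal computation of the slope and y-intercept from paired data. This is not code from the unit itself (the lessons use interactive applets); it is the standard textbook formula:

```python
# Least-squares line fit: slope and intercept minimizing the sum of
# squared vertical residuals for paired data (xs, ys).
def least_squares(xs, ys):
    n = len(xs)
    mx = sum(xs) / n                              # mean of x
    my = sum(ys) / n                              # mean of y
    sxx = sum((x - mx) ** 2 for x in xs)          # spread of x
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Example: points lying exactly on y = 2x + 1
slope, intercept = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```

Interpreting the slope (rate of change) and intercept (constant term) of such a fitted line in context is exactly the skill the nine lessons target.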
Subjects:
- Classical Mechanics (Motion in One Dimension)
- Mathematical Tools (Statistics)
- Other Sciences (Mathematics)
Levels:
- High School
- Lower Undergraduate
Resource Types:
- Instructional Material (Activity, Lesson/Lesson Plan, Unit of Instruction)
Intended Users:
- Educators
- Learners
Formats:
- text/html
- application/java
Access Rights:
Free access
© 2008 National Council of Teachers of Mathematics
graph, graphical analysis, graphing, linear regression, regression analysis, statistics
Record Creator:
Metadata instance created November 12, 2008 by Caroline Hall
Record Updated:
August 17, 2016 by Lyle Barbato
Last Update
when Cataloged:
February 14, 2008
Other Collections:
AAAS Benchmark Alignments (2008 Version)
9. The Mathematical World
9B. Symbolic Relationships
• 6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease
steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase
or decrease in steps, or do something different from any of these.
• 9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another.
9C. Shapes
• 6-8: 9C/M4. The graphic display of numbers may help to show patterns such as trends, varying rates of change, gaps, or clusters that are useful when making predictions about the
phenomena being graphed.
11. Common Themes
11C. Constancy and Change
• 9-12: 11C/H9. It is not always easy to recognize meaningful patterns of change in a set of data. Data that appear to be completely irregular may be shown by statistical analysis to
have underlying trends or cycles. On the other hand, trends or cycles that appear in data may sometimes be shown by statistical analysis to be easily explainable as being attributable
only to randomness or coincidence.
12. Habits of Mind
12D. Communication Skills
• 9-12: 12D/H7. Use tables, charts, and graphs in making arguments and claims in oral, written, and visual presentations.
12E. Critical-Response Skills
• 9-12: 12E/H1. Notice and criticize claims based on the faulty, incomplete, or misleading use of numbers, such as in instances when (1) average results are reported but not the amount
of variation around the average, (2) a percentage or fraction is given but not the total sample size, (3) absolute and proportional quantities are mixed, or (4) results are reported
with overstated precision.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.2 Reason abstractly and quantitatively.
High School — Functions (9-12)
Interpreting Functions (9-12)
• F-IF.6 Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.
• F-IF.7.a Graph linear and quadratic functions and show intercepts, maxima, and minima.
High School — Statistics and Probability (9-12)
Interpreting Categorical and Quantitative Data (9-12)
• S-ID.2 Use statistics appropriate to the shape of the data distribution to compare center (median, mean) and spread (interquartile range, standard deviation) of two or more different
data sets.
• S-ID.3 Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers).
• S-ID.6.a Fit a function to the data; use functions fitted to data to solve problems in the context of the data.
• S-ID.6.b Informally assess the fit of a function by plotting and analyzing residuals.
• S-ID.7 Interpret the slope (rate of change) and the intercept (constant term) of a linear model in the context of the data.
• S-ID.8 Compute (using technology) and interpret the correlation coefficient of a linear fit.
Making Inferences and Justifying Conclusions (9-12)
• S-IC.1 Understand statistics as a process for making inferences about population parameters based on a random sample from that population.
• S-IC.4 Use data from a sample survey to estimate a population mean or proportion; develop a margin of error through the use of simulation models for random sampling.
Using Probability to Make Decisions (9-12)
• S-MD.1 (+) Define a random variable for a quantity of interest by assigning a numerical value to each event in a sample space; graph the corresponding probability distribution using
the same graphical displays as for data distributions.
• S-MD.4 (+) Develop a probability distribution for a random variable defined for a sample space in which probabilities are assigned empirically; find the expected value.
• S-MD.7 (+) Analyze decisions and strategies using probability concepts (e.g., product testing, medical testing, pulling a hockey goalie at the end of a game).
Common Core State Writing Standards for Literacy in History/Social Studies, Science, and Technical Subjects 6—12
Text Types and Purposes (6-12)
• 1. Write arguments focused on discipline-specific content. (WHST.9-10.1)
• 2. Write informative/explanatory texts, including the narration of historical events, scientific procedures/ experiments, or technical processes. (WHST.9-10.2)
ComPADRE is beta testing Citation Styles!
(National Council of Teachers of Mathematics, Reston, 2008), WWW Document, (https://illuminations.nctm.org/unit.aspx?id=6509).
Illuminations: Least Squares Regression (National Council of Teachers of Mathematics, Reston, 2008), <https://illuminations.nctm.org/unit.aspx?id=6509>.
Illuminations: Least Squares Regression. (2008, February 14). Retrieved November 7, 2024, from National Council of Teachers of Mathematics: https://illuminations.nctm.org/unit.aspx?id=
National Council of Teachers of Mathematics. Illuminations: Least Squares Regression. Reston: National Council of Teachers of Mathematics, February 14, 2008. https://
illuminations.nctm.org/unit.aspx?id=6509 (accessed 7 November 2024).
Illuminations: Least Squares Regression. Reston: National Council of Teachers of Mathematics, 2008. 14 Feb. 2008. 7 Nov. 2024 <https://illuminations.nctm.org/unit.aspx?id=6509>.
@misc{ Title = {Illuminations: Least Squares Regression}, Publisher = {National Council of Teachers of Mathematics}, Volume = {2024}, Number = {7 November 2024}, Month = {February 14,
2008}, Year = {2008} }
%T Illuminations: Least Squares Regression %D February 14, 2008 %I National Council of Teachers of Mathematics %C Reston %U https://illuminations.nctm.org/unit.aspx?id=6509 %O text/html
%0 Electronic Source %D February 14, 2008 %T Illuminations: Least Squares Regression %I National Council of Teachers of Mathematics %V 2024 %N 7 November 2024 %8 February 14, 2008 %9 text
/html %U https://illuminations.nctm.org/unit.aspx?id=6509
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the
Citation Source Information
area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ. | {"url":"https://psrc.aapt.org/items/detail.cfm?ID=8293&Standards=1","timestamp":"2024-11-07T12:31:34Z","content_type":"text/html","content_length":"37981","record_id":"<urn:uuid:513ffcb8-c8fb-4d7a-8c39-89b66d10ef45>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00582.warc.gz"} |
Journal Article
Hidden flavor symmetries of SO(10) GUT
The Yukawa interactions of the SO(10) GUT with fermions in 16-plets (as well as with singlets) have certain intrinsic ("built-in") symmetries which do not depend on the model parameters. Thus, the
symmetric Yukawa interactions of the 10 and 126 dimensional Higgses have intrinsic discrete $Z_2\times Z_2$ symmetries, while the antisymmetric Yukawa interactions of the 120 dimensional Higgs have a
continuous SU(2) symmetry. The couplings of SO(10) singlet fermions with fermionic 16-plets have $U(1)^3$ symmetry. We consider a possibility that some elements of these intrinsic symmetries are the
residual symmetries, which originate from the (spontaneous) breaking of a larger symmetry group $G_f$. Such an embedding leads to the determination of certain elements of the relative mixing matrix
$U$ between the matrices of Yukawa couplings $Y_{10}$, $Y_{126}$, $Y_{120}$, and consequently, to restrictions of masses and mixings of quarks and leptons. We explore the consequences of such
embedding using the symmetry group conditions. We show how unitarity emerges from group properties and obtain the conditions it imposes on the parameters of embedding. We find that in some cases the
predicted values of elements of $U$ are compatible with the existing data fits. In the supersymmetric version of SO(10) such results are renormalization group invariant. | {"url":"https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_2351153","timestamp":"2024-11-10T21:48:54Z","content_type":"application/xhtml+xml","content_length":"40940","record_id":"<urn:uuid:154287f7-7e5d-4200-8117-18517cf7e666>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00196.warc.gz"} |
Bit Error Rate (BER) Calculator - Savvy Calculator
Bit Error Rate (BER) is a critical metric in the field of data communication and digital signal processing. It measures the ratio of the number of bits received incorrectly to the total number of
bits received, providing insight into the quality and reliability of a digital communication channel. In this article, we will walk you through how to use our HTML-based Bit Error Rate calculator,
explaining the formula, providing examples, and addressing frequently asked questions.
How to Use
To use the Bit Error Rate Calculator, you need to enter two variables: the number of bits received in error (EB) and the total number of bits received (NB). The calculator will then determine the BER
for you.
The formula for calculating the Bit Error Rate (BER) is:
BER = EB / NB
• BER: Bit Error Rate
• EB: Number of bits received in error
• NB: Total number of bits received
Let’s consider an example to demonstrate the application of the Bit Error Rate formula. Suppose you are monitoring a digital communication channel, and over a period, you receive 50 bits in error out
of a total of 10,000 bits. Using the formula:
BER = 50 / 10,000 = 0.005
The Bit Error Rate in this case is 0.005 or 0.5%.
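The same computation is a one-liner in code; a small sketch (the function name and the guard against an empty sample are ours, not the calculator's):

```python
def bit_error_rate(errored_bits, total_bits):
    """BER = EB / NB, the fraction of received bits that were in error."""
    if total_bits <= 0:
        raise ValueError("total_bits must be positive")
    return errored_bits / total_bits

ber = bit_error_rate(50, 10_000)   # the worked example above
print(ber)            # 0.005
print(f"{ber:.1%}")   # 0.5%
```

In practice the guard matters: measuring BER over zero received bits is undefined, so a monitoring tool should reject that case rather than divide by zero.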
Q1: Why is BER important in data communication?
A1: Bit Error Rate is crucial in data communication as it quantifies the accuracy and reliability of data transmission. It helps engineers and network operators ensure the quality of their
communication channels and detect potential issues.
Q2: What is an acceptable BER value?
A2: The acceptable BER value varies depending on the specific application. In high-reliability systems like satellite communication, an extremely low BER may be required (e.g., 1 in 10^12). In less
critical applications, a higher BER might be acceptable.
Q3: How can I reduce the BER in a communication system?
A3: You can reduce BER by improving the signal-to-noise ratio, using advanced error correction techniques, and minimizing channel interference.
Q4: Can I use the BER calculator for wireless communication systems?
A4: Yes, the BER calculator is applicable to various communication systems, including wireless. It helps assess the quality of the received data in wireless networks.
The Bit Error Rate (BER) is an essential concept for anyone working in data communication, digital signal processing, or network engineering. By using the provided formula and our HTML-based
calculator, you can easily determine the BER of your communication channel. Understanding BER is crucial for maintaining reliable data transmission, and our tool simplifies the process, making it
accessible to professionals and enthusiasts alike.
Leave a Comment | {"url":"https://savvycalculator.com/bit-error-rate-ber-calculator","timestamp":"2024-11-04T13:45:47Z","content_type":"text/html","content_length":"143241","record_id":"<urn:uuid:0435450c-7e8a-40a5-965e-f066c8e9b570>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00249.warc.gz"} |
Search for | AFGC Wiki
B4. Specificities of the seismic analysis
Spectral response – Specific case of earthquakes: The principle of the method is, for a given seismic direction, to construct the maximum responses from the
loading spectrum at all points, mode by mode, then, to accum... | {"url":"https://wiki.afgc.asso.fr/search?term=&page=2","timestamp":"2024-11-12T02:33:41Z","content_type":"text/html","content_length":"56676","record_id":"<urn:uuid:f41febcb-1ed2-45a8-8fac-7c06405f83c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00354.warc.gz"} |
The main purpose of "Kac" is to compute fusion rules for RCFT’s related to Wess-Zumino-Witten models on arbitrary group manifolds. This includes semi-simple groups and non-simply connected group
manifolds. The latter are described algebraically in terms of Simple Currents. The program can also compute fusion rules for most coset conformal field theories.
The algorithm is based on the Verlinde formula applied to a modular transformation matrix S. The latter is computed using the Kac-Peterson formula, combined with the orbit Lie algebra formalism
developed in collaboration with J. Fuchs and C. Schweigert (building on earlier work with S. Yankielowicz).
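Kac's sources are not shown here, but the core idea can be illustrated on a textbook special case: applying the Verlinde formula to the standard modular S-matrix of SU(2) at level k (Kac itself handles arbitrary affine Lie algebras via the Kac-Peterson formula):

```python
import math

def su2_fusion(k):
    """Fusion coefficients N[a][b][c] of SU(2) level k, from the
    Verlinde formula N_ab^c = sum_m S_am S_bm S_cm / S_0m applied to
    the (real) S-matrix S_ab = sqrt(2/(k+2)) sin(pi(a+1)(b+1)/(k+2)).
    Labels a = 0..k are twice the spin of the primary field."""
    n = k + 2
    S = [[math.sqrt(2 / n) * math.sin(math.pi * (a + 1) * (b + 1) / n)
          for b in range(k + 1)] for a in range(k + 1)]
    return [[[round(sum(S[a][m] * S[b][m] * S[c][m] / S[0][m]
                        for m in range(k + 1)))
              for c in range(k + 1)]
             for b in range(k + 1)]
            for a in range(k + 1)]

N = su2_fusion(2)   # SU(2) level 2
print(N[1][1])      # spin-1/2 x spin-1/2 -> [1, 0, 1]: identity + spin-1
```

The sums come out as integers up to floating-point noise, which is why a simple `round` recovers the fusion multiplicities.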
Some additional features:
1. Computation of spectra of WZW-models (or untwisted affine Lie algebras) and of all their simple current invariants;
2. Computation of "spectra" for twisted affine Lie algebras;
3. C=1 orbifold and N=0,1,2 minimal model spectra;
4. Computation of boundary and crosscap formulas for simple current invariants, using the formulas developed in collaboration with J. Fuchs, C. Schweigert, L. Huiszoon and J. Walcher
5. Boundaries and crosscaps for some exceptional invariants;
6. Computation of annulus, Moebius and Klein bottle coefficients;
7. Computation of open and closed string spectra, including tadpole cancellation;
8. Computation of spectra for twisted affine Lie algebras;
9. Partial computation of modular invariant partition functions using Galois and Quasi-Galois symmetries of S;
10. Computation of higher indices of all representations of all simple Lie-algebras (For the exceptional algebras Index files are needed as input). Put them in a directory ~/Library/Kac. In
combination with FORM the program can be used to compute characters of Lie-algebra representations. For this purpose Kac generates FORM input file named Xr.characters in the directory ~/Library/
Kac (X=A,...,G and r is the rank of the algebra).
Please report any problems to Bert Schellekens.
The "basic commandline versions" should work on any system with the correct CPU, as indicated. The other versions require the "readline" library.
Version 7
Version 8
1. Version 8.08067 for linux (Compiled on Scientific Linux 6.8)
Manual (still under construction) | {"url":"https://www.nikhef.nl/~t58/Site/Kac.html","timestamp":"2024-11-05T04:32:33Z","content_type":"application/xhtml+xml","content_length":"28854","record_id":"<urn:uuid:509e2f2f-3cdd-4299-9880-1ea8fffa63a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00201.warc.gz"} |
Probabilistic Temporal Networks with Ordinary Distributions: Theory, Robustness and Expected Utility
Most existing works in Probabilistic Simple Temporal Networks (PSTNs) base their frameworks on well-defined, parametric probability distributions.
Under the operational contexts of both strong and dynamic control, this paper addresses the robustness measure of PSTNs, i.e. the execution success probability, where the probability distributions of the contingent durations are ordinary, not necessarily parametric nor symmetric (e.g. histograms, PERT), as long as they can be discretized.
In practice, one would obtain ordinary distributions by considering empirical observations (compiled as histograms), or even hand-drawn by field experts.
In this new realm of PSTNs, we study and formally define concepts such as degree of weak/strong/dynamic controllability, robustness under a predefined dispatching protocol, and introduce the concept
of PSTN expected execution utility.
We also discuss the limitations of existing controllability levels, and propose new levels within dynamic controllability, to better characterize dynamically controllable PSTNs based on practical complexity considerations.
We propose a novel fixed-parameter pseudo-polynomial time computation method to obtain both the success probability and expected utility measures.
We apply our computation method to various PSTN datasets, including realistic planetary exploration scenarios in the context of the Mars 2020 rover. Moreover, we propose additional original
applications of the method. | {"url":"https://www.rombio.be/blog/probabilistic-temporal-networks-with-ordinary-distributions","timestamp":"2024-11-05T00:59:13Z","content_type":"text/html","content_length":"16287","record_id":"<urn:uuid:2e7b9788-000a-4d43-bd0c-d475487f84ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00064.warc.gz"} |
Distributed Arithmetic for FIR Filters
Distributed Arithmetic Overview
Distributed Arithmetic (DA) is a widely used technique for implementing sum-of-products computations without the use of multipliers. Designers frequently use DA to build efficient Multiply-Accumulate
Circuitry (MAC) for filters and other DSP applications.
The main advantage of DA is its high computational efficiency. DA distributes multiply and accumulate operations across shifters, lookup tables (LUTs), and adders in such a way that conventional
multipliers are not required.
The coder supports DA in HDL code generated for several single-rate and multirate FIR filter structures for fixed-point filter designs. (See Requirements and Considerations for Generating Distributed
Arithmetic Code. )
This section briefly summarizes of the operation of DA. Detailed discussions of the theoretical foundations of DA appear in these publications.
• Meyer-Baese, U., Digital Signal Processing with Field Programmable Gate Arrays, Second Edition, Springer, pp 88–94, 128–143.
• White, S.A., Applications of Distributed Arithmetic to Digital Signal Processing: A Tutorial Review. IEEE ASSP Magazine, Vol. 6, No. 3.
In a DA realization of a FIR filter structure, a sequence of input data words of width W is fed through a parallel to serial shift register. This feedthrough produces a serialized stream of bits. The
serialized data is then fed to a bit-wide shift register. This shift register serves as a delay line, storing the bit serial data samples.
The delay line is tapped (based on the input word size W), to form a W-bit address that indexes into a lookup table (LUT). The LUT stores the possible sums of partial products over the filter
coefficients space. A shift and adder (scaling accumulator) follow the LUT. This logic sequentially adds the values obtained from the LUT.
A table lookup is performed sequentially for each bit (in order of significance starting from the LSB). On each clock cycle, the LUT result is added to the accumulated and shifted result from the
previous cycle. For the last bit (MSB), the table lookup result is subtracted, accounting for the sign of the operand.
This basic form of DA is fully serial, operating on one bit at a time. If the input data sequence is W bits wide, then a FIR structure takes W clock cycles to compute the output. Symmetric and
asymmetric FIR structures are an exception, requiring W+1 cycles, because one additional clock cycle is required to process the carry bit of the preadders.
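As an illustration, the fully serial procedure above can be modeled in a few lines of software. The following Python sketch is a behavioral model only, not the generated HDL, and the names `build_lut` and `da_fir` are invented for the example. It forms a LUT address from bit j of every tap, shifts and adds the lookup results, and subtracts the MSB term to handle the two's-complement sign:

```python
def build_lut(coeffs):
    """Precompute all 2**N possible partial sums of the coefficients.
    Address bit k selects whether coeffs[k] contributes to the sum."""
    n = len(coeffs)
    return [sum(coeffs[k] for k in range(n) if (addr >> k) & 1)
            for addr in range(2 ** n)]

def da_fir(taps, coeffs, width):
    """Bit-serial DA: one LUT lookup per bit of the input word.
    taps: current delay-line contents as 'width'-bit two's-complement words."""
    lut = build_lut(coeffs)
    acc = 0
    for j in range(width):                        # LSB first, as described above
        addr = 0
        for k, x in enumerate(taps):
            addr |= ((x >> j) & 1) << k           # bit j of every tap forms the address
        term = lut[addr] << j                     # shift-and-add (scaling accumulator)
        acc += -term if j == width - 1 else term  # MSB lookup is subtracted (sign bit)
    return acc

# Cross-check against a direct multiply-accumulate.
coeffs  = [3, -1, 4, 2]
samples = [5, -3, 2, -7]
width   = 8
taps    = [x & (2 ** width - 1) for x in samples]   # two's-complement bit patterns
assert da_fir(taps, coeffs, width) == sum(c * x for c, x in zip(coeffs, samples))
```

Because each LUT entry is linear in its address bits, the bit-serial accumulation equals the direct multiply-accumulate exactly; the final assertion checks this for one set of taps and coefficients.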
Improving Performance with Parallelism
The inherently bit serial nature of DA can limit throughput. To improve throughput, the basic DA algorithm can be modified to compute more than one bit-sum at a time. The number of simultaneously
computed bit sums is expressed as a power of two called the DA radix. For example, a DA radix of 2 (2^1) indicates that one bit-sum is computed at a time. A DA radix of 4 (2^2) indicates that two bit-sums are computed at a time, and so on.
To compute more than one bit-sum at a time, the coder replicates the LUT. For example, to perform DA on two bits at a time (radix 4), the odd bits are fed to one LUT and the even bits are
simultaneously fed to an identical LUT. The LUT results corresponding to odd bits are left-shifted before they are added to the LUT results corresponding to even bits. This result is then fed into a
scaling accumulator that shifts its feedback value by two places.
Processing more than one bit at a time introduces a degree of parallelism into the operation, which can improve performance at the expense of area. The DARadix property lets you specify the number of
bits processed simultaneously in DA.
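Assuming, per the text, that the fully serial form needs W cycles per output and that a DA radix of 2^L consumes L bits per cycle, the resulting cycle count for a non-symmetric structure can be sketched as:

```python
import math

def da_cycles(input_width, da_radix):
    """Clock cycles per output for a non-symmetric DA FIR:
    the radix sets how many input bits are consumed each cycle."""
    bits_per_cycle = int(math.log2(da_radix))
    return math.ceil(input_width / bits_per_cycle)

print(da_cycles(16, 2))    # fully serial: 16 cycles
print(da_cycles(16, 4))    # two bits at a time: 8 cycles
print(da_cycles(16, 256))  # eight bits at a time: 2 cycles
```

Symmetric and asymmetric structures add one extra cycle for the preadder carry, as noted earlier.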
Reducing LUT Size
The size of the LUT grows exponentially with the order of the filter. For a filter with N coefficients, the LUT must have 2^N values. For higher-order filters, LUT size must be reduced to reasonable levels. To reduce the size, you can subdivide the LUT into several LUTs, called LUT partitions. Each LUT partition operates on a different set of taps. The results obtained from the partitions are summed to produce the final result.
For example, for a 160 tap filter, the LUT size is (2^160)*W bits, where W is the word size of the LUT data. You can achieve a significant reduction in LUT size by dividing the LUT into 16 LUT
partitions, each taking 10 inputs (taps). This division reduces the total LUT size to 16*(2^10)*W bits.
Although LUT partitioning reduces LUT size, the architecture uses more adders to sum the LUT data.
The DALUTPartition property lets you specify how the LUT is partitioned in DA.
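The storage arithmetic in the 160-tap example is easy to reproduce. In this illustrative sketch, the 18-bit LUT word size is an assumption chosen for the example:

```python
def lut_bits(taps, lut_word, partitions=1):
    """Total LUT storage in bits: 'partitions' tables, each
    addressed by taps/partitions input bits."""
    taps_per_partition = taps // partitions
    return partitions * (2 ** taps_per_partition) * lut_word

W = 18                                   # assumed LUT data word size
full = lut_bits(160, W)                  # 2**160 * 18 bits: not implementable
split = lut_bits(160, W, partitions=16)  # 16 * 2**10 * 18 = 294,912 bits
print(split)  # 294912
```

The partitioned figure fits comfortably in FPGA block RAM, at the cost of the extra adder tree that sums the 16 partition outputs.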
Requirements and Considerations for Generating Distributed Arithmetic Code
The coder lets you control how DA code is generated using the DALUTPartition and DARadix properties (or equivalent Generate HDL tool options). Before using these properties, review these general
requirements, restrictions, and other considerations for generation of DA code.
Supported Filter Types
The coder supports DA in HDL code generated for these single-rate and multirate FIR filter structures:
• direct form (dfilt.dffir or dsp.FIRFilter)
• direct form symmetric (dfilt.dfsymfir or dsp.FIRFilter)
• direct form asymmetric (dfilt.dfasymfir or dsp.FIRFilter)
• dsp.FIRDecimator
• dsp.FIRInterpolator
Fixed-Point Quantization Required
Generation of DA code is supported only for fixed-point filter designs.
Specifying Filter Precision
The data path in HDL code generated for the DA architecture is optimized for full precision computations. The filter casts the result to the output data size at the final stage. If your filter object
is set to use full precision data types, numeric results from simulating the generated HDL code are bit-true to the output of the original filter object.
If your filter object has customized word or fraction lengths, the generated DA code may produce numeric results that are different than the output of the original filter object.
Coefficients with Zero Values
DA ignores taps that have zero-valued coefficients and reduces the size of the DA LUT accordingly.
Considerations for Symmetric and Asymmetric Filters
For symmetric and asymmetric FIR filters:
• A bit-level preadder or presubtractor is required to add tap data values that have coefficients of equal value and/or opposite sign. One extra clock cycle is required to compute the result
because of the additional carry bit.
• The coder takes advantage of filter symmetry. This symmetry reduces the DA LUT size substantially, because the effective filter length for these filter types is halved.
Holding Input Data in a Valid State
Partitioned distributed arithmetic architectures implement internal clock rates higher than the input rate. In such filter implementations, there are N cycles (N >= 2) of the base clock for each
input sample. You can specify how many clock cycles the test bench holds the input data values in a valid state.
• When you select Hold input data between samples (the default), the test bench holds the input data values in a valid state for N clock cycles.
• When you clear Hold input data between samples, the test bench holds input data values in a valid state for only one clock cycle. For the next N-1 cycles, the test bench drives the data to an
unknown state (expressed as 'X') until the next input sample is clocked in. Forcing the input data to an unknown state verifies that the generated filter code registers the input data only on the
first cycle.
Distributed Arithmetic via generatehdl Properties
Two properties specify distributed arithmetic options to the generatehdl function:
• DALUTPartition — Number and size of lookup table (LUT) partitions.
• DARadix — Number of bits processed in parallel.
You can use the helper function hdlfilterdainfo to explore possible partitions and radix settings for your filter.
For examples, see
Distributed Arithmetic Options in the Generate HDL Tool
The Generate HDL tool provides several options related to DA code generation.
• The Architecture pop-up menu, which lets you enable DA code generation and displays related options.
• The Specify folding drop-down menu, which lets you directly specify the folding factor, or set a value for the DARadix property.
• The Specify LUT drop-down menu, which lets you directly set a value for the DALUTPartition property. You can also select an address width for the LUT. If you specify an address width, the coder
uses input LUTs as required.
The Generate HDL tool initially displays default DA-related option values that correspond to the current filter design. For the requirements for setting these options, see DALUTPartition and DARadix.
To specify DA code generation using the Generate HDL tool, follow these steps:
1. Design a FIR filter (using Filter Designer, Filter Builder, or MATLAB^® commands) that meets the requirements described in Requirements and Considerations for Generating Distributed Arithmetic
2. Open the Generate HDL tool.
3. Select Distributed Arithmetic (DA) from the Architecture pop-up menu.
When you select this option, the related Specify folding and Specify LUT options are displayed below the Architecture menu. This figure shows the default DA options for a direct form FIR filter.
4. Select one of these options from the Specify folding drop-down menu.
□ Folding factor (default): Select a folding factor from the drop-down menu to the right of Specify folding. The menu contains an exhaustive list of folding factor options for the filter.
□ DA radix: Select the number of bits processed simultaneously, expressed as a power of 2. The default DA radix value is 2, specifying processing of one bit at a time, or fully serial DA. If
desired, set the DA radix field to a nondefault value.
5. Select one of these options from the Specify LUT drop-down menu.
□ Address width (default): Select from the drop-down menu to the right of Specify LUT. The menu contains an exhaustive list of LUT address widths for the filter.
□ Partition: Select, or enter, a vector specifying the number and size of LUT partitions.
6. Set other HDL options as required, and generate code. Invalid or illegal values for LUT Partition or DA Radix are reported at code generation time.
Viewing Detailed DA Options
As you interact with the Specify folding and Specify LUT options you can see the results of your choice in three display-only fields: Folding factor, Address width, and Total LUT size (bits).
In addition, when you click the View details hyperlink, the coder displays a report showing complete DA architectural details for the current filter, including:
• Filter lengths
• Complete list of applicable folding factors and how they apply to the sets of LUTs
• Tabulation of the configurations of LUTs with total LUT Size and LUT details
This figure shows a typical report.
DA Interactions with Other HDL Options
When Distributed Arithmetic (DA) is selected in the Architecture menu, some other HDL options change automatically to settings that correspond to DA code generation:
• Coefficient multipliers is set to Multiplier and disabled.
• FIR adder style is set to Tree and disabled.
• Add input register (in the Ports pane) is selected and disabled. (An input register, used as part of a shift register, is used in DA code.)
• Add output register (in the Ports pane) is selected and disabled. | {"url":"https://se.mathworks.com/help/hdlfilter/distributed-arithmetic-for-fir-filters.html","timestamp":"2024-11-11T02:49:27Z","content_type":"text/html","content_length":"84719","record_id":"<urn:uuid:72ad57eb-73d5-4378-a3f1-2bf8618e4123>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00154.warc.gz"} |
How would I make this kind of system?
So basically I am bored and decided to make a card system.
Basically where you open up a pack of cards and it gives you a random one depending on their rarity.
So let's say I have something like this:
local cards = {
    ProCard = {
        rarity = 30
    },
    noobCard = {
        rarity = 50
    },
    ExtremeCard = {
        rarity = 0.5
    }
}
And I want to loop through all the cards and pick one depending on its rarity.
Basically the rarer it is, the less chance you will get it (like a loot box lol).
How would I do this?
1 Like
Well, the way you want it is not the way everyone makes something like that. But it’s close. Here is a small code which will give you the results you want.
local cards = {
    ProCard = {
        rarity = 30
    },
    noobCard = {
        rarity = 50
    },
    ExtremeCard = {
        rarity = 0.5
    }
}

local Max = 100

function PickRandomCard(Max)
    local Random = math.random(1,Max)
    local Rarity,MaxRarity
    for i,v in pairs(cards) do
        if v.rarity > MaxRarity then
            MaxRarity = v.rarity
            Rarity = v
        end
    end
    return Rarity
end
The higher the rarity number, the lower the chance of getting the card
EDIT: I fixed the error I think, try now
1 Like
Well how do I get it so it will pick a random one, depending on how rare it is?
Cause I can already get how high the number is.
I just need to know how to pick the actual card, depending on its rarity.
1 Like
It will pick a random card depending on it’s rarity. Read the code it does exactly what you want
1 Like
I played around with it for a little while.
But it is not picking doubles.
This is what I have right now:
local cards = {
    ProCard = {
        rarity = 51
    },
    noobCard = {
        rarity = 50
    },
    ExtremeCard = {
        rarity = 10
    }
}

local Max = 100

function PickRandomCard(Max)
    local Random = math.random(1,Max)
    local Rarity = 0
    local maxrarity = 0
    for i,v in pairs(cards) do
        if v.rarity > maxrarity then
            maxrarity = v.rarity
            Rarity = v
            print(maxrarity)
        end
    end
    if maxrarity == cards.ExtremeCard.rarity then
        warn('LEGENDARY CARD!!')
    end
end

while true do
    PickRandomCard(Max)
    wait(1)
end
And basically it always prints 50 then 51. It never does 50 then 50 again, or 51 then 51. So any way to fix this?
Also dont ask why I’m making it run multiple times. I just want to see what it will give me
The difference is in this:
if maxrarity == cards.ExtremeCard.rarity then
warn('LEGENDARY CARD!!')
You use == operator, of course it will be way too rare so use < operator instead
1 Like
It's still doing the thing where it always prints 50 then 51, and never doubles, like 50 and 50, etc.
1 Like
You are doing a loop inside a loop; get rid of that, and it will be better. In fact, I designed this function to be used with return. The way you used it just broke the entire purpose of what I did.
1 Like
Assuming you want to throw in probability distribution, we can write a function to calculate that.
local cards = {
    ProCard = 30,
    noobCard = 50,
    ExtremeCard = 0.5
}

local function createDistribution(cards)
    local distribution = {}
    local total = 0
    for card, rarity in pairs(cards) do
        total = total + rarity
    end

    local accumulate = 0
    for card, rarity in pairs(cards) do
        accumulate = accumulate + rarity
        distribution[card] = accumulate / total
    end

    return distribution
end
Then we can make a simple function to draw a card using this distribution.
local function drawCard(distribution)
    local rand = math.random()
    for card, prob in pairs(distribution) do
        if rand <= prob then
            return card
        end
    end
end
Here is the usage:
local distribution = createDistribution(cards)
local card = drawCard(distribution)
Please do note, the higher the rarity value, the more common the card (it occupies a larger portion of the probability distribution). If you want to make higher rarity values mean a card is less
common, you could invert the rarity in the createDistribution function, for example by changing total = total + rarity to total = total + 1/rarity and accumulate = accumulate + rarity to accumulate =
accumulate + 1/rarity .
Here is the output:
local cards = {
    ProCard = 30,
    noobCard = 50,
    ExtremeCard = 10
}

local function createDistribution(cards)
    local distribution = {}
    local total = 0
    for card, rarity in pairs(cards) do
        total = total + rarity
    end

    local accumulate = 0
    for card, rarity in pairs(cards) do
        accumulate = accumulate + rarity
        distribution[card] = accumulate / total
    end

    return distribution
end

local function drawCard(distribution)
    local rand = math.random()
    for card, prob in pairs(distribution) do
        if rand <= prob then
            return card
        end
    end
end

local distribution = createDistribution(cards)
local dict = {}
for n = 0, 100 do
    local card = drawCard(distribution)
    if dict[card] then
        dict[card] = dict[card] + 1
    else
        dict[card] = 0
    end
end

for i, v in next, dict do
    print(i, v / 100, "\n")
end
ProCard 0.37
noobCard 0.51
ExtremeCard 0.1
2 Likes
Ok, I fixed the code. Should work fine now, sorry for the errors.
local cards = {
    ProCard = {
        rarity = 30
    },
    noobCard = {
        rarity = 50
    },
    ExtremeCard = {
        rarity = 0.5
    }
}

local Max = 100

function PickRandomCard(Max)
    local Random = math.random(1,Max)
    local Rarity
    for i,v in pairs(cards) do
        if v.rarity > Random then
            Rarity = v
        end
    end
    return Rarity
end
2 Likes
Yeah this works perfectly.
Thank you @towerscripter1386 also for the help!
I noticed that it pulls the same card a lot sometimes. I got the noobCard 15 times lol.
I also noticed that yes it did give me 1 single extreme card so yes it is very rare lol!
2 Likes
As a side note, while it’s possible to store additional information with the card rarity (like the table structure you initially proposed), it’s generally better to keep the data structure simple and
specific to its purpose to avoid unnecessary complexity. This follows the single responsibility principle, making your code more maintainable and scalable. For instance, a separate table or
dictionary could be used to store the card stats, keeping the draw chance and card stats distinct.
To summarize that using code:
Not good
local cards = {
    ProCard = {
        rarity = 30
    },
    noobCard = {
        rarity = 50
    },
    ExtremeCard = {
        rarity = 0.5
    }
}
Good because it adheres to one principle and takes up less space in memory:
local cards = {
    ProCard = 30,
    noobCard = 50,
    ExtremeCard = 0.5
}
1 Like
Well yes, true. The reason I did this is because I was also going to have other values, so that's why.
Also I changed it so it loops through cards in a folder in ServerStorage.
But for some reason it keeps picking the legendary card instead of picking the common card.
And I have no idea why this is happening.
Here's my code.
I didn't really change anything. It should still be picking the smallest value the least often.
local cards = game:GetService('ServerStorage'):FindFirstChild('Cards'):GetChildren()
local cardFolder = game:GetService('ServerStorage'):FindFirstChild('Cards')

-- get right cards
local function createDistribution(cards)
    -- get the cards so we can draw from cards later
    local distribution = {}
    local total = 0
    for card, item in pairs(cards) do
        total = total + item.Rarity.Value
    end

    local accumulate = 0
    for card, item in pairs(cards) do
        accumulate = accumulate + item.Rarity.Value
        local cardName = item.Name
        local cardDetails = {
            card = item,
            rarityValue = accumulate / total
        }
        distribution[cardName] = cardDetails
    end

    return distribution
end

local function drawCard(distribution) -- draw a card from the cards.
    local rand = math.random()
    for card, prob in pairs(distribution) do
        if rand <= prob.rarityValue then
            return card -- gets the card!
        end
    end
end

local function getCard()
    local distribution = createDistribution(cards)
    local card = drawCard(distribution)
    return card
end

local function displayCards(cards,player)
    local debris = game:GetService('Debris')
    local ui = player.PlayerGui:FindFirstChild('DisplayCards')
    local sf = ui:FindFirstChild('SF')
    local template = script:FindFirstChild('CardTemplate') -- the template of cards
    ui.Enabled = true
    for _, card in pairs(cards) do -- loop through the cards and display them
        local cardItem = cardFolder:FindFirstChild(card) -- get the card
        local Item = template:Clone() -- make the template
        Item.Visible = true
        -- set values
        Item.Rarity.Text = cardItem.Rarity.Value
        Item.ItemName.Text = cardItem.Name
        Item.Parent = sf
    end
    ui.Enabled = false
end

local cards = {getCard(),getCard(),getCard()}
local cards = {getCard(),getCard(),getCard(),getCard(),getCard(),getCard(),getCard(),getCard(),getCard(),getCard()}
I need the place file with the cards because the code I sent earlier works.
Here ya go.
CardGameLol.rbxl (56.9 KB)
The game file.
1 Like
Here is the solution
CardGameLol.rbxl (57.4 KB)
1 Like
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed. | {"url":"https://devforum.roblox.com/t/how-would-i-make-this-kind-of-system/2334436","timestamp":"2024-11-06T23:51:36Z","content_type":"text/html","content_length":"65223","record_id":"<urn:uuid:287d26f5-1abe-4bd8-808d-3777aba2357b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00016.warc.gz"} |
What is a Statistic? A Plain English Explanation
What is a Statistic?
Statistics are everywhere. From the news to the classroom, we are constantly surrounded by data that can help us gain a better understanding of any given topic. But exactly what is a statistic? In
its simplest form, it is a fact or piece of data from a study of a large quantity of numerical data. For example, the statement “the statistics show that the crime rate has increased” is referring to
facts or numbers collected from research involving crime rates. More precisely, a statistic is a small piece of data from a portion of a population. It’s the opposite of a parameter — which is census
data that surveys everyone. Think of it this way:
• If you calculate something (e.g., an average) from part of a data set, that’s a statistic.
• If you know something about 10% of people, such as their favorite TV show, that’s a statistic also.
• If you survey everyone in the United States to get their voting preference, that’s a parameter. Parameters contain all of the information. And all the information is rarely known, which is why we
need statistics.
An important difference between parameters and statistics is that a statistic is a variable that can vary from sample to sample, while a parameter is a constant that does not change.
| | Statistic | Parameter |
|---|---|---|
| Description | Describes a sample. | Describes a population. |
| Calculation | Calculated from a sample. | Calculated from the whole population. |
| Value | Variable, depending on the sample. | Fixed. |
The most common statistic is the mean. It represents the average of a dataset.
Other common statistics include the median and the mode. The median is the middle value in a sorted dataset, while the mode refers to the most commonly occurring value. These measures also provide
insights into the central tendency of the data, but in different ways compared to the mean.
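For instance, Python's standard library computes all three measures directly; the dataset here is made up for illustration:

```python
from statistics import mean, median, mode

data = [2, 3, 3, 5, 7, 10]
print(mean(data))    # 5
print(median(data))  # 4.0 (average of the two middle values, 3 and 5)
print(mode(data))    # 3 (the most frequently occurring value)
```

Note how the three measures of central tendency can disagree for the same data, which is why reporting more than one is often useful.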
Other statistics you’ll often come across include:
Statistics are drawn from populations, but a “population” doesn’t necessarily mean a physical count of bodies. It can be a collection of just about anything you can count, from galaxies in the sky to
a count of trials in an experiment. Thus a statistic can be any quantity calculated from sample values. A couple of examples:
• The average height of a man in the world is 5’10” [2]. This is a statistic because it describes a population, in this case the population of men in the world. In real life, we never survey
everyone — asking all 3+ billion men in the world isn’t possible, so when you read a fact like this about the “world”, it is always a statistic.
• The probability of flipping a coin and getting tails is 50%. This is a statistic because it tells us the probability of an event happening, in this example the event of flipping a coin and
getting tails. We aren’t endlessly flipping coins until infinity, or flipping all of the coins in the world — we are using a small amount of coins and flipping them a set amount.
Here are some other examples of statistics:
• The unemployment rate in the United States averaged 5.72% from 1948 to 2023.
• The average life expectancy for a man in the United States is 77.28 years.
• The median household income in France is $61,020.
What is a statistic used for?
Statistics helps us to understand the data that is collected about us and the world. For example, the UPS database is 17 terabytes — about as large as a database containing every book in the Library
of Congress [1]. All of that data is meaningless without a way to interpret it, which is where statistics comes in.
Statistical methods are used in many areas such as economics, finance, health care and marketing. For example, statistics can be used to analyze market trends in banking and finance, evaluate patient
outcomes in healthcare, or determine the best crops to grow in agriculture.
Statistical methods also allow us to compare different types of data so that we can make more accurate predictions about future events or outcomes. For example, let’s say we wanted to know whether an
increase in advertising spending would lead to an increase in sales for an organization. We could use methods such as regression analysis in order to determine this with greater accuracy than just
relying on intuition alone.
Statistics also enables us to draw conclusions from data that would otherwise be impossible or too costly to measure directly, such as testing a new drug on everyone in the world; this allows us to
explore new avenues of research that would not be possible without statistics.
Stats come in three types:
• Descriptive Statistics. Describe data. Includes sample mean or sample median. Order statistics are a subset of descriptive statistics. They tell you something about how the data is ordered. For
example, measurements like the sample minimum. You know the order is #1. Also includes charts and graphs. Anything that describes data is descriptive statistics.
• Estimators. Used to guess at a parameter, in other words something about a population. Often taken from descriptive stats. For example, if you know the sample mean you can use it to guess what the population mean is. Used in inferential statistics. Inferential stats is just a "best guess" about something, based on data.
• Test Statistics, which are used in null hypothesis testing. That’s where you take a known fact about a population and then test that fact to see if it is true or not. A “population” could be real
people in a trial. Or it could be TVs in a factory. Which test statistic you use depends on what kind of data you have. Some examples of test stats: t score, and chi-square.
A statistic can be more than one type. For example, the sample standard deviation can be used as a descriptive statistic to describe the standard deviation of a sample. It can be used as an
estimator: To estimate the population standard deviation. And it can be used to test a theory (a hypothesis).
Origin of “statistic”
The word statistic indirectly comes from the medieval Latin word status, for a political state although there is also a closely related word in German (statistik) which is also used in a political
sense. "Statistik" was popularized by German political scientist Gottfried Achenwall (1719-1772) in his "Vorbereitung zur Staatswissenschaft" (1748).
According to Leiden University, it’s difficult to know exactly when the word ceased to have a meaning close to a “political state” and became more of a mathematical term. The first time the word was
used in the Oxford English Dictionary is in 1770, in W. Hooper’s translation of Bielfield’s Elementary Universal Education:
“The science, that is called statistics, teaches us what is the political arrangement of all the modern states of the known world.”
The Online Etymology Dictionary states that the first recorded time the word meant “numerical data collected and classified” was 1829 and the abbreviated form stats first appeared in 1961. Webster’s
1828 dictionary defines statistics as:
A collection of facts respecting the state of society, the condition of the people in a nation or country, their health, longevity, domestic economy, arts, property and political strength, the
state of the country, &c.
What is the Difference Between Inferential and Descriptive Statistics?
Inferential means that you can infer (make predictions) from the data, while descriptive means that you just describe the data. Let's say you worked every week last month and received four paychecks: $100, $105, $110, and $120. Here are some examples of descriptive statistics about your pay (describing and summarizing the set of data). Some options available to you:
• Find the mean (the average) = ($100 + $105 + $110 + $120) / 4 = $108.75. You earned an average of $108.75 per week. You could also calculate other statistics about your pay like the median, range or standard deviation.
• Make a bar graph:
You could also make other charts like a pie chart, a line graph or a stem plot and you could also describe the shapes of those distributions (i.e. bell-shaped, skewed, or uniform).
Examples of inferential statistics about your pay (making predictions): Perhaps the most obvious inference you can make from your pay is that there’s an upwards trend. It looks like it’s going up by
$5 per week, so you can expect to earn $125 in week 5. You can quantify this trend by:
• Inserting a trendline: this is easy to do in Microsoft Excel (instructions can be found here). You could also draw a rough line by hand–grab a ruler, draw a pencil line, and make your predictions
based on where the line is going. For your pay, the line is going upwards (it should have a positive slope).
• If you want an equation for the trendline, you can perform regression analysis, enabling you to easily predict what you’ll earn next, week, next month, or next year.
• More complex inferential statistics include hypothesis testing, where you take raw data and use a known model to verify the accuracy of your predictions. For this pay check example, you might
compare your pay to the average pay of someone else working in your particular field.
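As a sketch of the regression idea, the least-squares trendline through the four paychecks can be computed directly in Python, and it confirms the week-5 prediction of $125:

```python
# Least-squares trendline through the four weekly paychecks.
weeks = [1, 2, 3, 4]
pay = [100, 105, 110, 120]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(pay) / n  # 108.75, the same mean computed above
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, pay)) \
        / sum((x - mean_x) ** 2 for x in weeks)
intercept = mean_y - slope * mean_x
print(slope, intercept)       # 6.5 92.5
print(slope * 5 + intercept)  # 125.0, the predicted week-5 pay
```

Excel's trendline feature uses the same least-squares math under the hood; here the fitted slope of $6.50/week happens to land the week-5 prediction exactly on the $125 eyeball estimate.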
What is a Statistic: Notation
In general, stats notation is in Roman letters, a-z (parameters have Greek letters or uppercase Roman). If some letters look the same, look closely: for example, look for the small p and large P. Usually, if you see a large letter (i.e. P), it's a parameter. Small letters usually mean it's a stat.
| Measurement | Statistic (Roman or lowercase) | Parameter (Greek or uppercase) |
|---|---|---|
| Population Proportion | p | P |
| Data Elements | x | X |
| Population Mean | x̄ | μ |
| Standard Deviation | s | σ |
| Variance | s^2 | σ^2 |
| Number of elements | n | N |
| Correlation Coefficient | r | ρ |
What is a Statistic: Data & Variables
You might think that data is a list of numbers. However, in statistics, “Data” means something a little different; Data contains the who and what about something (the “something” could be anything
from a book in a bookstore to a batting average to a choice about elections).
Data can include numerals that carry meaning. For example, 1453767142 is the ISBN for the Practically Cheating Statistics Handbook. For ISBNs, the "What" is the name of the book (The Practically Cheating Statistics Handbook), and for book sales the "Who" could be the person who ordered the book, or it could be the purchase orders themselves (as opposed to the individuals who placed those orders).
You might be familiar with variables from algebra, like “x” or “y.” They stand for something (usually a number that you plug-in to solve an equation). In statistics, variables are broken down into
two types: numerical or quantitative variables and categorical variables.
• Numerical variables are the variables you’re most familiar with: numbers. For example, those “x” and “y” variables in algebra stand for a number.
• Categorical variables are variables that aren't numbers: they are descriptive. For example, sex (male or female), occupation, school district, state, and dog breeds are all types of categorical variables.
[Image caption: Breeds of dog are categorical variables (this particular dog is a bergamasco). How many dogs is numeric (the picture has one dog).]
What is a Statistic: Vital Statistics
Vital statistics can mean one of three things, as far as stats goes:
1. National Vital Statistics System (run by the CDC): a government database that keeps records of births, deaths, marriages, divorces, and fetal deaths.
2. Bust-waist-hip measurements: measurements for clothes fitting, usually listed on a clothing size chart.
3. Vital signs: Blood pressure and other body measurements taken by health professionals. For example, blood pressure and pulse are used to measure the health of your heart.
What is an Estimator?
An estimator (or estimate) is a statistic that’s used to approximate a population parameter. While there are several types of estimators, the word “estimator” on its own usually refers to a point
estimate. A point estimate is a single value (as opposed to an interval, like a confidence interval). For example, the mean is a point estimate. The two characteristics of point estimates that are
arguably most important are:
• Bias: whether the estimator tends to underestimate or overestimate a parameter. For example, state inspectors in Houston, Texas, found that one in five gas pumps wasn't calibrated correctly. An incorrectly calibrated pump could cost a consumer up to 18 cubic inches per five gallons pumped. The bias meant that consumers were consistently overcharged. We can avoid bias by using unbiased estimators such as the sample mean, which gives an expected value equal to the true population parameter [3].
• Sampling variability: Your average bathroom scale probably goes up and down like a yo-yo, stating a slightly different weight every time you get on it. Your weight might range from 158.1 one minute to 161.2 the next. However, if you take a large enough sample (say, 30 measurements), your actual weight would probably be close to the mean of these readings. The wide spread of weights (anywhere from 158.1 to 161.2) is one example of variability between samples. Sampling variability is usually measured in terms of standard error: the larger the standard error, the larger the sampling variability.
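The standard error just mentioned is the sample standard deviation divided by the square root of the sample size; a quick sketch with hypothetical scale readings (these numbers are invented for illustration):

```python
import math
import statistics

readings = [158.1, 160.4, 159.2, 161.2, 158.8, 160.0]  # hypothetical weigh-ins

n = len(readings)
s = statistics.stdev(readings)  # sample standard deviation
se = s / math.sqrt(n)           # standard error of the mean

# A wider spread, or a smaller sample, would give a larger standard error.
print(round(se, 3))  # 0.462
```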
What is a Statistic in the National Vital Statistics System?
The National Vital Statistics System is a database run by the Centers for Disease Control (CDC). Although the individual states are responsible for actually registering the data, the CDC collaborates
with the National Center for Health Statistics, the National Cancer Institute and the Census Bureau to keep an up-to-date database of life and death statistics for the United States. Included are:
births, deaths, marriages, divorces, and fetal deaths, along with several additional related programs.
1. Bock, Velleman, & DeVeaux, Stats: Modeling the World – Chapter 1: Stats Starts Here.
2. Kokoska, Stephen (2015). Introductory Statistics: A Problem-Solving Approach (2nd ed.). New York: W. H. Freeman and Company.
Orange Data Mining - Discretize
Converts numeric attributes to categorical.
Outputs
• Data: dataset with discretized values
The Discretize widget discretizes numeric variables.
1. Set default method for discretization.
2. Select variables to set specific discretization methods for each. Hovering over a variable shows intervals.
3. Discretization methods
□ Keep numeric keeps the variable as it is.
□ Remove removes variable.
□ Natural binning finds nice thresholds for the variable's range of values, for instance 10, 20, 30 or 0.2, 0.4, 0.6, 0.8. We can set the desired number of bins; the actual number will depend
on the interval.
□ Fixed width uses a user-defined bin width. Boundaries of bins will be multiples of width. For instance, if the width is 10 and the variable's values range from 35 to 68, the resulting bins
will be <40, 40-50, 50-60, >60. This method does not work for time variables. If the width is too large (resulting in a single interval) or too small (resulting in more than 100 intervals),
the variable is removed.
□ Time interval is similar to Fixed width, but for time variables. We specify the width and a time unit, e.g. 4 months or 3 days. Bin boundaries will be multiples of the interval; e.g. with 4
months, bins will always include Jan-Mar, Apr-Jun, Jul-Sep and Oct-Dec.
□ Equal-frequency splits the attribute into a given number of intervals with approximately the same number of instances.
□ Equal-width evenly splits the range between the smallest and the largest observed value.
□ Entropy-MDL is a top-down discretization invented by Fayyad and Irani, which recursively splits the attribute at a cut maximizing information gain, until the gain is lower than the minimal
description length of the cut. This discretization can result in an arbitrary number of intervals, including a single interval, in which case the variable is discarded as useless (removed).
□ Custom allows entering an increasing, comma-separated list of thresholds. This is not applicable to time variables.
□ Use default setting (enabled for particular settings and not default) sets the method to specified as "Default setting".
4. The CC button sets the method for the currently selected variables to Custom, using their current thresholds. This allows for manual editing of automatically determined bins.
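The two simplest methods above, equal width and equal frequency, can be sketched library-free in a few lines. This is an illustration of the threshold logic only, not Orange's actual implementation:

```python
def equal_width_edges(values, k):
    """k - 1 interior thresholds splitting [min, max] into k equal-width bins."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / k
    return [lo + i * step for i in range(1, k)]

def equal_freq_edges(values, k):
    """Thresholds putting roughly the same number of values into each bin."""
    ordered = sorted(values)
    n = len(ordered)
    return [ordered[(i * n) // k] for i in range(1, k)]

ages = [29, 34, 35, 40, 41, 43, 44, 49, 52, 54, 57, 58, 60, 63, 67, 71]
print(equal_width_edges(ages, 3))  # [43.0, 57.0]
print(equal_freq_edges(ages, 3))   # [43, 57]
```

With this particular sample the two methods happen to pick the same thresholds; on skewed data they typically diverge, which is exactly why the widget offers both.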
In the schema below, we took the Heart disease data set and
• discretized age to a fixed interval of 10 (years),
• max HR to approximately 6 bins (the closest match were 7 bins with a width of 25),
• removed Cholesterol,
• and used entropy-mdl for the remaining variables, which resulted in removing rest SBP and in two intervals for ST by exercise and major vessels colored. | {"url":"https://orangedatamining.com/widget-catalog/transform/discretize/","timestamp":"2024-11-10T02:24:20Z","content_type":"text/html","content_length":"80593","record_id":"<urn:uuid:0a5cd1dd-8569-47dc-a4c3-4f4a14a8cf7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00641.warc.gz"} |
Geometric Langlands, Khovanov Homology, String Theory
Edward Witten on rich mathematical secrets and surprises
Edward Witten at the 2013 Prospects in Theoretical Physics Program at the Institute
In 2006, Edward Witten, Charles Simonyi Professor in the School of Natural Sciences, cowrote with Anton Kapustin a 225-page paper, “Electric-Magnetic Duality and the Geometric Langlands Program,” on
the relation of part of the geometric Langlands program to ideas of the duality between electricity and magnetism.
Some background about the Langlands program: In 1967, Robert Langlands, now Professor Emeritus in the School of Mathematics, wrote a seventeen-page handwritten letter to André Weil, a Professor at
the Institute at the time, in which he proposed a grand unifying theory that relates seemingly unrelated concepts in number theory, algebraic geometry, and the theory of automorphic forms. A typed
copy of the letter, made at Weil’s request for easier reading, circulated widely among mathematicians in the late 1960s and 1970s, and for more than four decades, mathematicians have been working on
its conjectures, known collectively as the Langlands program.
Witten spoke about his experience writing the paper with Kapustin and his thoughts about future directions in mathematics and physics in an interview that took place in November 2014 on the occasion
of Witten’s receipt of the 2014 Kyoto Prize in Basic Sciences for his outstanding contributions to mathematical science through his exploration of superstring theory. The following excerpts are drawn
from a slightly edited version of the interview conducted by Hirosi Ooguri, Member (1988–89) and Visiting Professor (2015) in the School of Natural Sciences, which was published in the May 2015 issue
of Notices of the American Mathematical Society (www.ams.org/notices/201505/ rnoti-p491.pdf).^1
It was very hard to write a paper about it. It took about a year. For that year, I felt like someone who had discovered the meaning of life and couldn’t explain it to anybody else. And in a sense, I
still feel that way for the following reason. Physicists with a background in string theory or gauge theory dualities can understand my paper with Kapustin on geometric Langlands, but for most
physicists, this topic is too detailed to be really exciting. On the other hand, it is an exciting topic for mathematicians but difficult to understand because too much of the quantum field theory
and string theory background is unfamiliar (and difficult to formulate rigorously). That paper with Kapustin may unfortunately remain mysterious to mathematicians for quite some time.
I think it’s actually very difficult to see what advance in the near term could make the gauge theory interpretation of geometric Langlands accessible for mathematicians. That’s actually one reason
why I’m excited about Khovanov homology. My approaches to Khovanov homology and to geometric Langlands use many of the same ingredients, but in the case of Khovanov homology, I think it is quite
feasible that mathematicians could understand this approach in the near future if they get excited about it. I believe it will be more accessible. If I had to bet, I think I have a decent chance to
live to see gauge theory and Khovanov homology recognized and appreciated by mathematicians, and I think I’d have to be lucky to see that in the case of gauge theory and the geometric Langlands
correspondence—just a personal guess. A lot of things that number theorists like have appeared in physics, and some have even appeared in my own work. Plenty has been found to show that the physics
theories that we work on as string theorists are interesting in number theory. These theories know something about number theory, but personally I don’t see an opportunity to really make contact in a
structural way with number theory in the foreseeable future. I can’t even formulate what it would mean to make such contact, so I can’t even properly tell you what we can’t do, but I think the time
is not right to do it. Anyway, that's why I personally concentrated on geometric Langlands rather than on number theory, and geometric Langlands was hard enough. It was a lot of work to understand it,
but I think that having understood it, many things that mathematicians do involving geometric aspects of representation theory are much more accessible as part of physics. . . . In the last few years
physicists working on supersymmetric gauge theories in four dimensions and their cousins in six dimensions have made several discoveries involving the role of conformal field theory at the critical
level, so the time may well be right to resolve this point.
In the last twenty years, not only has this interaction of math and physics continued to be very rich, but it has developed in such diversity that very frequently exciting things are done which I
myself am able to understand embarrassingly little about, because the field is expanding in so many directions.
I am sure that this is going to continue and I believe the reason it will continue is that quantum field theory and string theory, I believe, somehow have rich mathematical secrets. When some of
these secrets come to the surface, they often come as surprises to physicists, because we do not really understand string theory properly as physics—we do not understand the core ideas behind it. At
an even more basic level, the mathematicians are still not able to fully come to grips with quantum field theory and therefore things coming from it are surprises. So for both of those reasons, I
think that the physics and math ideas generated are going to be surprising for a long time.
I think there are definitely exciting opportunities for young people to come and help explain what it all means. We don’t understand this properly. We got a wider perspective in the 1990s when it
became clear that the different string theories are unified by nonperturbative dualities and that string theory in some sense is inherently quantum mechanical.
But we’re still studying many different aspects of a subject whose core underlying principles are not clear. As long as that is true, there are opportunities for even bigger discoveries by today’s
young people. But if I could tell you exactly what direction you had to go in, I would be there.
1 The interview originally appeared in the December 2014 issue of Kavli IPMU News, the news publication of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) of the
University of Tokyo, and was conducted by Kavli IPMU’s Principal Investigator Hirosi Ooguri, Fred Kavli Professor of Theoretical Physics and Mathematics and Founding Director of the Walter Burke
Institute for Theoretical Physics at the California Institute of Technology. | {"url":"https://www.ias.edu/ideas/2015/witten-interview","timestamp":"2024-11-07T21:59:17Z","content_type":"text/html","content_length":"74486","record_id":"<urn:uuid:e655b75d-1671-40ea-bfa2-03cee9d96e1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00822.warc.gz"} |
M/J Grade 7 Mathematics
General Course Information and Notes
Version Description
The benchmarks in this course are mastery goals that students are expected to attain by the end of the year. To build mastery, students will continue to review and apply earlier grade-level
benchmarks and expectations.
General Notes
In grade 7, instructional time will emphasize five areas: (1) recognizing that fractions, decimals and percentages are different representations of rational numbers and performing all four operations
with rational numbers with procedural fluency; (2) creating equivalent expressions and solving equations and inequalities; (3) developing understanding of and applying proportional relationships in
two variables; (4) extending analysis of two- and three-dimensional figures to include circles and cylinders and (5) representing and comparing categorical and numerical data and developing
understanding of probability.
Curricular content for all subjects must integrate critical-thinking, problem-solving, and workforce-literacy skills; communication, reading, and writing skills; mathematics skills; collaboration
skills; contextual and applied-learning skills; technology-literacy skills; information and media-literacy skills; and civic-engagement skills.
English Language Development ELD Standards Special Notes Section:
Teachers are required to provide listening, speaking, reading and writing instruction that allows English language learners (ELL) to communicate information, ideas and concepts for academic success
in the content area of Mathematics. For the given level of English language proficiency and with visual, graphic, or interactive support, students will interact with grade level words, expressions,
sentences and discourse to process or produce language necessary for academic success. The ELD standard should specify a relevant content area concept or topic of study chosen by curriculum
developers and teachers which maximizes an ELL's need for communication and social skills. To access an ELL supporting document which delineates performance definitions and descriptors, please click
on the following link:
General Information
Course Number: 1205040
Course Path:
Abbreviated Title: M/J GRADE 7 MATH
Course Type: Core Academic Course
Course Level: 2
Course Status: State Board Approved
Educator Certifications
One of these educator certification options is required to teach this course.
Classical Education - Restricted (Elementary and Secondary Grades K-12)
Section 1012.55(5), F.S., authorizes the issuance of a classical education teaching certificate, upon the request of a classical school, to any applicant who fulfills the requirements of s. 1012.56
(2)(a)-(f) and (11), F.S., and Rule 6A-4.004, F.A.C. Classical schools must meet the requirements outlined in s. 1012.55(5), F.S., and be listed in the FLDOE Master School ID database, to request a
restricted classical education teaching certificate on behalf of an applicant.
Student Resources
Vetted resources students can use to learn the concepts and skills in this course.
Original Student Tutorials
Educational Games
Fraction Quiz:
Test your fraction skills by answering questions on this site. This quiz asks you to simplify fractions, convert fractions to decimals and percentages, and answer algebra questions involving
fractions. You can even choose difficulty level, question types, and time limit.
Type: Educational Game
Estimator Quiz:
In this activity, students are quizzed on their ability to estimate sums, products, and percentages. The student can adjust the difficulty of the problems and how close they have to be to the actual
answer. This activity allows students to practice estimating addition, multiplication, or percentages of large numbers. This activity includes supplemental materials, including background information
about the topics covered, a description of how to use the application, and exploration questions for use with the java applet.
Type: Educational Game
Educational Software / Tool
Arithmetic Quiz:
In this activity, students solve arithmetic problems involving whole numbers, integers, addition, subtraction, multiplication, and division. This activity allows students to track their progress in
learning how to perform arithmetic on whole numbers and integers. This activity includes supplemental materials, including background information about the topics covered, a description of how to use
the application, and exploration questions for use with the java applet.
Type: Educational Software / Tool
Lesson Plan
Holidays that Celebrate America:
In this lesson plan, students will explore the history and meaning behind various patriotic holidays and make personal connections with those holidays including, Constitution Day, Memorial Day,
Veteran’s Day, Patriot Day, President’s Day, Independence Day, and Medal of Honor Day.
Type: Lesson Plan
Perspectives Video: Experts
Using Statistics to Estimate Lionfish Population Size:
It's impossible to count every animal in a park, but with statistics and some engineering, biologists can come up with a good estimate.
Type: Perspectives Video: Expert
Tow Net Sampling to Monitor Phytoplankton Populations:
How do scientists collect information from the world? They sample it! Learn how scientists take samples of phytoplankton not only to monitor their populations, but also to make inferences about the rest of the ecosystem!
Type: Perspectives Video: Expert
Managing Lionfish Populations:
Invasive lionfish are taking a bite out of the ecosystem of Biscayne Bay. Biologists are looking for new ways to remove them, including encouraging recreational divers to bite back!
Type: Perspectives Video: Expert
Perspectives Video: Professional/Enthusiasts
Problem-Solving Tasks
Speed Trap:
The purpose of this task is to allow students to demonstrate an ability to construct boxplots and to use boxplots as the basis for comparing distributions.
Type: Problem-Solving Task
Haircut Costs:
This problem could be used as an introductory lesson to introduce group comparisons and to engage students in a question they may find amusing and interesting.
Type: Problem-Solving Task
The Titanic 1:
This task asks students to calculate probabilities using information presented in a two-way frequency table.
Type: Problem-Solving Task
Running around a track II:
The goal of this task is to model a familiar object, an Olympic track, using geometric shapes. Calculations of perimeters of these shapes explain the staggered start of runners in a 400 meter race.
Type: Problem-Solving Task
Running around a track I:
In this problem, geometry is applied to a 400 meter track to find the perimeter of the track.
Type: Problem-Solving Task
Paper Clip:
In this task, a typographic grid system serves as the background for a standard paper clip. A metric measurement scale is drawn across the bottom of the grid and the paper clip extends in both
directions slightly beyond the grid. Students are given the approximate length of the paper clip and determine the number of like paper clips made from a given length of wire.
Type: Problem-Solving Task
How thick is a soda can? (Variation II):
This problem solving task asks students to explain which measurements are needed to estimate the thickness of a soda can. Multiple solution processes are presented.
Type: Problem-Solving Task
Archimedes and the King's Crown:
This problem solving task uses the tale of Archimedes and the King of Syracuse's crown to determine the volume and mass of gold and silver.
Type: Problem-Solving Task
In this resource, students will determine the volumes of three different shaped drinking glasses. They will need prior knowledge with volume formulas for cylinders, cones, and spheres, as well as
experience with equation solving, simplifying square roots, and applying the Pythagorean theorem.
Type: Problem-Solving Task
Discounted Books:
This purpose of this task is to help students see two different ways to look at percentages both as a decrease and an increase of an original amount. In addition, students have to turn a verbal
description of several operations into mathematical symbols. This requires converting simple percentages to decimals as well as identifying equivalent expressions without variables.
Type: Problem-Solving Task
Equivalent Expressions?:
Students are asked to determine if two expressions are equivalent and explain their reasoning.
Type: Problem-Solving Task
Guess My Number:
This problem asks the students to represent a sequence of operations using an expression and then to write and solve simple equations. The problem is posed as a game and allows the students to
visualize mathematical operations. It would make sense to actually play a similar game in pairs first and then ask the students to record the operations to figure out each other's numbers.
Type: Problem-Solving Task
Miles to Kilometers:
In this task students are asked to write two expressions from verbal descriptions and determine if they are equivalent. The expressions involve both percent and fractions. This task is most
appropriate for a classroom discussion since the statement of the problem has some ambiguity.
Type: Problem-Solving Task
Students are asked to determine the change in height in inches when given a constant rate of change in centimeters. The answer is rounded to the nearest half inch.
Type: Problem-Solving Task
Eight Circles:
Students are asked to find the area of a shaded region using a diagram and the information provided. The purpose of this task is to strengthen student understanding of area.
Type: Problem-Solving Task
Floor Plan:
The purpose of this task is for students to translate between measurements given in a scale drawing and the corresponding measurements of the object represented by the scale drawing. If used in an
instructional setting, it would be good for students to have an opportunity to see other solution methods, perhaps by having students with different approaches explain their strategies to the class.
Students who can only solve this by first converting the linear measurements will have a hard time solving problems where only area measures are given.
Type: Problem-Solving Task
Comparing Freezing Points:
In this task, students answer a question about the difference between two temperatures that are negative numbers.
Type: Problem-Solving Task
Coupon Versus Discount:
In this task, students are presented with a real-world problem involving the price of an item on sale. To answer the question, students must represent the problem by defining a variable and related
quantities, and then write and solve an equation.
Type: Problem-Solving Task
Operations on the Number Line:
The purpose of this task is to help solidify students' understanding of signed numbers as points on a number line and to understand the geometric interpretation of adding and subtracting signed
numbers. There is a subtle distinction between a fraction and a rational number. Fractions are always positive, and when thinking of the symbol a/b as a fraction, it is possible to interpret it as "a" equal-sized pieces, where "b" pieces make one whole.
Type: Problem-Solving Task
Repeating Decimal as Approximation:
The student is asked to complete a long division which results in a repeating decimal, and then use multiplication to "check" their answer. The purpose of the task is to have students reflect on the
meaning of repeating decimal representation through approximation.
Type: Problem-Solving Task
Sharing Prize Money:
Students are asked to determine how to distribute prize money among three classes based on the contribution of each class.
Type: Problem-Solving Task
Art Class, Variation 1:
Students are asked to use ratios and proportional reasoning to compare paint mixtures numerically and graphically.
Type: Problem-Solving Task
Chess Club:
This problem includes a percent increase in one part with a percent decrease in the remaining and asks students to find the overall percent change. The problem may be solved using proportions or by
reasoning through the computations or writing a set of equations.
Type: Problem-Solving Task
Comparing Years:
Students are asked to make comparisons among the Egyptian, Gregorian, and Julian methods of measuring a year.
Type: Problem-Solving Task
Finding a 10% Increase:
5,000 people visited a book fair in the first week. The number of visitors increased by 10% in the second week. How many people visited the book fair in the second week?
Type: Problem-Solving Task
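The book-fair question reduces to one multiplication; here is a sketch of the arithmetic using integer operations:

```python
week1 = 5000

increase = week1 * 10 // 100  # 10% of 5,000 is 500 extra visitors
week2 = week1 + increase

print(week2)  # 5500
```

Equivalently, a 10% increase is multiplication by 1.10, which gives the same 5,500.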
Music Companies, Variation 2:
This problem has multiple steps. In order to solve the problem it is necessary to compute: the value of the TunesTown shares; the total value of the BeatStreet offer of 20 million shares at $25 per
share; the difference between these two amounts; and the cost per share of each of the extra 2 million shares MusicMind offers to equal to the difference.
Type: Problem-Solving Task
Coffee by the Pound:
Students will answer questions about unit price of coffee, make a graph of the information, and explain the meaning of constant of proportionality/slope in the given context.
Type: Problem-Solving Task
Comparing Snow Cones:
Students will just be learning about similarity in this grade, so they may not recognize that it is needed in this context. Teachers should be prepared to give support to students who are struggling
with this part of the task. To simplify the task, the teacher can just tell the students that based on the slant of the truncated conical cup, the complete cone would be 14 in tall and the part that
was sliced off was 10 inches tall. (See solution for an explanation.) There is a worthwhile discussion to be had about parts (c) and (e). The percentage increase is smaller for the snow cones than it
was for the juice treats. The snow cones have volume which is equal to those of the juice treats plus the volume of the dome, which is the same in both cases. Adding the same number to two numbers in
a ratio will always make their ratio closer to one, which in this case means that the ratio - and thus percentage increase - would be smaller.
Type: Problem-Solving Task
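The claim at the end of this description, that adding the same positive number to both parts of a ratio moves it toward 1, is easy to spot-check numerically (the numbers below are arbitrary, not from the task):

```python
a, b, c = 3.0, 2.0, 5.0  # arbitrary positive numbers with a > b

before = a / b             # 1.5
after = (a + c) / (b + c)  # 8 / 7, roughly 1.143, closer to 1

assert abs(after - 1) < abs(before - 1)
print(before, after)
```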
Students are asked to determine which sale option results in the largest percent decrease in cost.
Type: Problem-Solving Task
Selling Computers:
The sales team at an electronics store sold 48 computers last month. The manager at the store wants to encourage the sales team to sell more computers and is going to give all the sales team members
a bonus if the number of computers sold increases by 30% in the next month. How many computers must the sales team sell to receive the bonus? Explain your reasoning.
Type: Problem-Solving Task
Stock Swaps, Variation 2:
Students are asked to solve a problem using proportional reasoning in a real world context to determine the number of shares needed to complete a stock purchase.
Type: Problem-Solving Task
Stock Swaps, Variation 3:
Students are asked to solve a multistep ratio problem in a real-world context.
Type: Problem-Solving Task
Tax and Tip:
After eating at your favorite restaurant, you know that the bill before tax is $52.60 and that the sales tax rate is 8%. You decide to leave a 20% tip for the waiter based on the pre-tax amount. How
much should you leave for the waiter? How much will the total bill be, including tax and tip?
Type: Problem-Solving Task
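A sketch of the arithmetic in this task, with the 8% tax and the 20% tip both computed on the $52.60 pre-tax amount:

```python
bill = 52.60

tip = bill * 0.20  # tip on the pre-tax amount: $10.52
tax = bill * 0.08  # sales tax: about $4.21
total = bill + tax + tip

print(round(tip, 2), round(total, 2))  # 10.52 67.33
```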
The Price of Bread:
The purpose of this task is for students to calculate the percent increase and relative cost in a real-world context. Inflation, one of the big ideas in economics, is the rise in price of goods and
services over time. This is considered in relation to the amount of money you have.
Type: Problem-Solving Task
Two-School Dance:
The purpose of this task is to see how well students understand and reason with ratios.
Type: Problem-Solving Task
Mr. Brigg's Class Likes Math:
In a poll of Mr. Briggs's math class, 67% of the students say that math is their favorite academic subject. The editor of the school paper is in the class, and he wants to write an article for the
paper saying that math is the most popular subject at the school. Explain why this is not a valid conclusion and suggest a way to gather better data to determine what subject is most popular.
Type: Problem-Solving Task
Offensive Linemen:
In this task, students are able to conjecture about the differences and similarities in the two groups from a strictly visual perspective and then support their comparisons with appropriate measures
of center and variability. This will reinforce that much can be gleaned simply from visual comparison of appropriate graphs, particularly those of similar scale.
Type: Problem-Solving Task
Tossing Cylinders:
The purpose of this task is to provide students with the opportunity to determine experimental probabilities by collecting data. The cylindrical objects used in this task typically have three
different resting positions but not all of these may be equally likely and some may be extremely unlikely or impossible when the object is tossed. Furthermore, obtaining the probabilities of the
outcomes is perhaps only possible through the use of long-run relative frequencies. This is because these cylinders do not have the same types of symmetries as objects that are often used as dice,
such as cubes or tetrahedrons, where each outcome is equally likely.
Type: Problem-Solving Task
How Many Buttons?:
This resource involves a simple data-gathering activity which furnishes data that students organize into a table. They are then asked to refer to the data and determine the probability of various outcomes.
Type: Problem-Solving Task
Election Poll, Variation 2:
This task introduces the fundamental statistical ideas of using data summaries (statistics) from random samples to draw inferences (reasoned conclusions) about population characteristics
(parameters). In the task built around an election poll scenario, the population is the entire seventh grade class, the unknown characteristic (parameter) of interest is the proportion of the class
members voting for a specific candidate, and the sample summary (statistic) is the observed proportion of voters favoring the candidate in a random sample of class members. Variation 2 leads students
through a physical simulation for generating sample proportions by sampling, and re-sampling, marbles from a box.
Type: Problem-Solving Task
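The physical marble simulation in Variation 2 can also be run digitally. The sketch below assumes a hypothetical class of 100 students, 60 of whom favor the candidate; repeated random samples show how sample proportions vary around the true 60%:

```python
import random

def sample_proportions(population, sample_size, trials, seed=0):
    """Draw `trials` random samples without replacement and return the
    observed proportion of 1s (votes for the candidate) in each sample."""
    rng = random.Random(seed)
    return [sum(rng.sample(population, sample_size)) / sample_size
            for _ in range(trials)]

# 1 = votes for the candidate, 0 = does not (hypothetical numbers).
seventh_grade = [1] * 60 + [0] * 40
props = sample_proportions(seventh_grade, 20, 500)
# The sample proportions cluster around the population value 0.6.
```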
Election Poll, Variation 1:
This task introduces the fundamental statistical ideas of using data summaries (statistics) from random samples to draw inferences (reasoned conclusions) about population characteristics
(parameters). There are two important goals in this task: seeing the need for random sampling and using randomization to investigate the behavior of a sample statistic. These introduce the basic
ideas of statistical inference and can be accomplished with minimal knowledge of probability.
Type: Problem-Solving Task
Rolling Dice:
This task is intended as a classroom activity. Students pool the results of many repetitions of the random phenomenon (rolling dice) and compare their results to the theoretical expectation they
develop by considering all possible outcomes of rolling two dice. This gives them a concrete example of what we mean by long term relative frequency.
Type: Problem-Solving Task
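The pooled-data comparison this task describes can be sketched in code: enumerate all 36 equally likely ordered outcomes for the theoretical distribution, then simulate many rolls and compare the long-run relative frequencies.

```python
import random
from collections import Counter

def theoretical_sums():
    """P(sum) for two fair dice, from the 36 equally likely ordered outcomes."""
    counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
    return {s: c / 36 for s, c in counts.items()}

def simulated_sums(rolls, seed=0):
    """Long-run relative frequency of each sum over `rolls` simulated rolls."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(rolls))
    return {s: c / rolls for s, c in counts.items()}

# P(sum = 7) is 6/36; with many rolls the relative frequency gets close to it.
theory = theoretical_sums()
freq = simulated_sums(100_000)
```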
Sitting Across From Each Other:
The purpose of this task is for students to compute the theoretical probability of a seating configuration. There are 24 possible configurations of the four friends at the table in this problem.
Students could draw all 24 configurations to solve the problem but this is time consuming and so they should be encouraged to look for a more systematic method.
Type: Problem-Solving Task
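A more systematic method than drawing all 24 pictures is to enumerate them. The sketch below assumes four labelled seats around a square table, with seats 0-2 and 1-3 facing each other (the task's exact setup may differ):

```python
from itertools import permutations

def seatings(people):
    """All orderings of `people` in four labelled seats: 4! = 24 of them."""
    return list(permutations(people))

def across_count(people, a, b):
    """Count seatings in which `a` and `b` sit across from each other,
    assuming seats 0-2 and 1-3 are the facing pairs."""
    return sum(1 for s in seatings(people)
               if abs(s.index(a) - s.index(b)) == 2)

configs = seatings(["A", "B", "C", "D"])                   # 24 configurations
favourable = across_count(["A", "B", "C", "D"], "A", "B")  # 8, so P = 8/24 = 1/3
```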
Estimating Square Roots:
By definition, the square root of a number n is the number you square to get n. The purpose of this task is to have students use the meaning of a square root to find a decimal approximation of a
square root of a non-square integer. Students may need guidance in thinking about how to approach the task.
Type: Problem-Solving Task
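One way to carry out the decimal approximation the task asks for is to refine one decimal place at a time, keeping the largest candidate whose square does not exceed n (a sketch of one possible approach, not the task's prescribed method):

```python
def approx_sqrt(n, places):
    """Approximate sqrt(n) to `places` decimal places by repeated squeezing:
    at each step, advance in units of the current decimal place while the
    square of the candidate stays at or below n."""
    x = 0.0
    step = 1.0
    for _ in range(places + 1):
        while (x + step) ** 2 <= n:
            x += step
        step /= 10
    return round(x, places)

approx_sqrt(2, 3)  # 1.414, since 1.414**2 = 1.999396 <= 2 < 1.415**2
```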
Converting Decimal Representations of Rational Numbers to Fraction Representations:
Requires students to "convert a decimal expansion which repeats eventually into a rational number." Despite this choice of wording, the numbers in this task are rational numbers regardless of the
choice of representation. For example, the repeating decimal 0.333... and 1/3 are two different ways of representing the same number.
Type: Problem-Solving Task
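The conversion this task targets follows from a standard manipulation: if x = 0.333..., then 10x - x = 3, so x = 3/9 = 1/3. A small sketch generalizing this (the helper name is ours, not from the task):

```python
from fractions import Fraction

def repeating_to_fraction(prelude, repetend):
    """Exact fraction for 0.<prelude><repetend repeating>.

    With k = len(prelude) and m = len(repetend), multiplying by 10**(k+m)
    and by 10**k and subtracting cancels the repeating tail, giving
    (digits(prelude + repetend) - digits(prelude)) / ((10**m - 1) * 10**k).
    """
    k, m = len(prelude), len(repetend)
    whole = int(prelude + repetend)
    pre = int(prelude) if prelude else 0
    return Fraction(whole - pre, (10**m - 1) * 10**k)

repeating_to_fraction("", "3")   # Fraction(1, 3), i.e. 0.333...
repeating_to_fraction("1", "6")  # Fraction(1, 6), i.e. 0.1666...
```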
Shipping Rolled Oats:
Students should think of different ways the cylindrical containers can be set up in a rectangular box. Through the process, students should realize that although some setups may seem different, they
result in a box with the same volume. In addition, students should come to the realization (through discussion and/or questioning) that the thickness of a cardboard box is very thin and will have a
negligible effect on the calculations.
Type: Problem-Solving Task
Chocolate Bar Sales:
In this task students use different representations to analyze the relationship between two quantities and to solve a real world problem. The situation presented provides a good opportunity to make
connections between the information provided by tables, graphs and equations. In the later part of the problem, the numbers are big enough so that using the formula is the most efficient way to solve
the problem; however, creative use of the table or graph will also work.
Type: Problem-Solving Task
Text Resources
Powers of Zero:
Students will learn that non-zero numbers to the zero power equal one. They will also learn that zero to any positive exponent equals zero.
Type: Tutorial
Finding Probability:
This video demonstrates several examples of finding probability of random events.
Type: Tutorial
Impact of a Radius Change on the Area of a Circle:
This video shows how the area and circumference relate to each other and how changing the radius of a circle affects the area and circumference.
Type: Tutorial
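The scaling behaviour the video describes — the area grows with the square of the radius while the circumference grows only linearly — can be verified numerically:

```python
import math

def circle_area(r):
    return math.pi * r ** 2

def circle_circumference(r):
    return 2 * math.pi * r

# Doubling the radius multiplies the area by 4 and the circumference by 2.
area_ratio = circle_area(2) / circle_area(1)                    # 4.0
circ_ratio = circle_circumference(2) / circle_circumference(1)  # 2.0
```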
Circles: Radius, Diameter, Circumference, and Pi:
In this video, students are shown the parts of a circle and how the radius, diameter, circumference and Pi relate to each other.
Type: Tutorial
Circumference of a Circle:
This video shows how to find the circumference, the distance around a circle, given the area.
Type: Tutorial
Area of a Circle:
In this video, watch as we find the area of a circle when given the diameter.
Type: Tutorial
Proportion Word Problem:
This introductory video demonstrates the basic skill of how to write and solve a basic equation for a proportional relationship.
Type: Tutorial
Solving a Proportion with an Unknown Variable :
Here's an introductory video explaining the basic reasoning behind solving proportions; it shows three different methods for solving proportions, which you will use later on to solve more difficult problems.
Type: Tutorial
Setting up Proportions to Solve Word Problems:
This introductory video shows some basic examples of writing two ratios and setting them equal to each other. This is just step 1 when solving word problems with proportions.
Type: Tutorial
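The cross-multiplication method these tutorials build toward can be sketched as follows (the pencil prices are made-up numbers for illustration):

```python
from fractions import Fraction

def solve_proportion(a, b, c):
    """Solve a/b = c/x for x. Cross-multiplying gives a*x = b*c, so x = b*c/a."""
    return Fraction(b) * Fraction(c) / Fraction(a)

# If 3 pencils cost $2, how much do 12 pencils cost?  3/2 = 12/x  ->  x = 8
solve_proportion(3, 2, 12)  # Fraction(8, 1)
```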
Area of a Trapezoid:
A trapezoid is a type of quadrilateral with one set of parallel sides. Here we explain how to find its area.
Type: Tutorial
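The area formula behind this explanation averages the two parallel sides and multiplies by the height:

```python
def trapezoid_area(b1, b2, h):
    """Area of a trapezoid with parallel sides b1 and b2 and height h:
    (b1 + b2) / 2 * h, the average base times the height."""
    return (b1 + b2) / 2 * h

trapezoid_area(3, 5, 4)  # 16.0
```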
Perimeter and Area:
Students will learn the basics of finding the perimeter and area of squares and rectangles.
Type: Tutorial
Percent Word Problem:
Learn how to find the full price when you know the discount price in this percent word problem.
Type: Tutorial
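The key step in this kind of problem is dividing by the remaining fraction of the price; a small sketch with made-up numbers:

```python
def full_price(discounted, discount_rate):
    """Original price of an item that sells for `discounted` after a
    `discount_rate` discount: original * (1 - discount_rate) = discounted."""
    return discounted / (1 - discount_rate)

full_price(30.0, 0.25)  # 40.0: a $40 item at 25% off sells for $30
```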
Converting Speed Units:
In this lesson, students will be viewing a Khan Academy video that will show how to convert ratios using speed units.
Type: Tutorial
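Converting speed units amounts to multiplying by conversion factors equal to one; for example, km/h to m/s:

```python
def kmh_to_ms(speed_kmh):
    """km/h -> m/s: multiply by 1000 m per km, divide by 3600 s per h."""
    return speed_kmh * 1000 / 3600

kmh_to_ms(36)  # 10.0 m/s
kmh_to_ms(90)  # 25.0 m/s
```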
Solving Motion Problems with Linear Equations:
Based upon the definition of speed, linear equations can be created which allow us to solve problems involving constant speeds, time, and distance.
Note: This video exceeds basic expectations for the mathematical concept(s) at this grade level. The video is intended for students who have demonstrated mastery within the scope of instruction who
may be ready for a more rigorous extension of the mathematical concept(s). As with all materials, be sure to gauge the readiness of students or adapt according to students' needs prior to implementation.
Type: Video/Audio/Animation
Parent Resources
Vetted resources caregivers can use to help students learn the concepts and skills in this course.
FixedFormat ⇒ number | string | { decimals?: number, signed?: boolean, width?: number }
A description of a fixed-point arithmetic field.
When specifying the fixed format, the values override the default of a fixed128x18, which implies a signed 128-bit value with 18 decimals of precision.
The aliases fixed and ufixed can be used for fixed128x18 and ufixed128x18, respectively.
When a fixed format string begins with a u, it indicates the field is unsigned, so any negative values will overflow. The first number indicates the bit-width and the second number indicates the
decimal precision.
When a number is used for a fixed format, it indicates the number of decimal places, and the default width and signed-ness will be used.
The bit-width must be byte aligned and the decimals can be at most 80.
A FixedNumber represents a value over its FixedFormat arithmetic field.
A FixedNumber can be used to perform math, losslessly, on values which have decimal places.
A FixedNumber has a fixed bit-width to store values in, and stores all values internally by multiplying the value by 10 raised to the power of decimals.
If operations are performed that cause a value to grow too high (close to positive infinity) or too low (close to negative infinity), the value is said to overflow.
For example, an 8-bit signed value with 0 decimals may only be within the range -128 to 127; so -128 - 1 will overflow and become 127. Likewise, 127 + 1 will overflow and become -128.
Many operations have a normal and an unsafe variant. The normal variant will throw a NumericFaultError on any overflow, while the unsafe variant will silently allow overflow, corrupting its value.
If operations are performed that cause a value to become too small (close to zero), the value loses precision and is said to underflow.
For example, an value with 1 decimal place may store a number as small as 0.1, but the value of 0.1 / 2 is 0.05, which cannot fit into 1 decimal place, so underflow occurs which means precision is
lost and the value becomes 0.
Some operations have a normal and a signalling variant. The normal variant will silently ignore underflow, while the signalling variant will throw a NumericFaultError on underflow.
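The overflow and underflow behaviour described above can be illustrated with a tiny model of the internal representation (a Python sketch of the semantics only, not the ethers implementation or API):

```python
def wrap(value, width):
    """Reduce an integer into the two's-complement range of a signed
    `width`-bit field, the way an unsafe overflow silently wraps."""
    mask = (1 << width) - 1
    v = value & mask
    return v - (1 << width) if v >= (1 << (width - 1)) else v

# 8-bit signed, 0 decimals: the range is -128..127.
wrap(-128 - 1, 8)  # 127   (wrapping past the minimum lands at the maximum)
wrap(127 + 1, 8)   # -128  (wrapping past the maximum lands at the minimum)

# 1 decimal place: a value is stored as value * 10. Dividing 0.1 by 2 gives
# internal units 1 // 2 = 0, so the result underflows to 0.0 (precision loss).
internal_units = 1 // 2  # 0
```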
fixedNumber.decimals ⇒ number (read-only)
fixedNumber.format ⇒ string (read-only)
fixedNumber.signed ⇒ boolean (read-only)
fixedNumber.value ⇒ bigint (read-only)
The value as an integer, based on the smallest unit the decimals allow.
fixedNumber.width ⇒ number (read-only)
Creates a new FixedNumber with the big-endian representation value with format.
This will throw a NumericFaultError if value cannot fit in format due to overflow.
Creates a new FixedNumber for value with format.
This will throw a NumericFaultError if value cannot fit in format, either due to overflow or underflow (precision loss).
Creates a new FixedNumber for value divided by decimal places with format.
This will throw a NumericFaultError if value (once adjusted for decimals) cannot fit in format, either due to overflow or underflow (precision loss).
Returns a new FixedNumber with the result of this added to other, ignoring overflow.
Returns a new FixedNumber which is the smallest integer that is greater than or equal to this.
Returns a new FixedNumber with the result of this divided by other, ignoring underflow (precision loss). A NumericFaultError is thrown if overflow occurs.
Returns a new FixedNumber which is the largest integer that is less than or equal to this.
fixedNumber.isNegative() ⇒ boolean
fixedNumber.isZero() ⇒ boolean
Returns a new FixedNumber with the result of this multiplied by other. A NumericFaultError is thrown if overflow occurs or if underflow (precision loss) occurs.
Returns a new FixedNumber with the result of this multiplied by other, ignoring overflow and underflow (precision loss).
Returns a new FixedNumber with the decimal component rounded up on ties at decimals places.
Returns a new FixedNumber with the result of other subtracted from this, ignoring overflow.
Return a new FixedNumber with the same value but has had its field set to format.
fixedNumber.toString() ⇒ string
fixedNumber.toUnsafeFloat() ⇒ number
The Octonions
GFM seminar
IIIUL, B3-01
2015-01-28, 14:30–15:30
by John Huerta (CAMGSD, Instituto Superior Técnico)
The octonions are the largest and most eccentric of the four normed division algebras. These are algebras with a norm that satisfies the following key identity, familiar from the study of real and
complex numbers:
|xy| = |x| |y|.
The octonions are noncommutative and, worse, nonassociative. Yet they play an essential role in many parts of mathematics and physics, from giving string theory 10 dimensions to reigning over the
theory of exceptional Lie groups. In this talk, we introduce the octonions, prove they are the largest normed division algebra, and sketch their deep relationship with the exceptional Lie groups,
from the humble G2 to the enigmatic E8.
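The identity |xy| = |x||y| can be checked numerically for the quaternions, the 4-dimensional member of the same family (a small illustrative script; the octonion case works the same way with the 8-dimensional multiplication table):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) and q."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    return math.sqrt(sum(x * x for x in q))

p = (1.0, 2.0, 3.0, 4.0)
q = (0.5, -1.0, 2.5, 0.0)
# The norm is multiplicative ...
multiplicative = abs(qnorm(qmul(p, q)) - qnorm(p) * qnorm(q)) < 1e-9  # True
# ... even though the multiplication is not commutative.
noncommutative = qmul(p, q) != qmul(q, p)  # True
```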
Seminar funded by National Funds through FCT Fundação para a Ciência e a Tecnologia under project PEst-OE/MAT/UI0208/2013.
Adaptive Yield Split | Idle
The Adaptive Yield Split is a unique feature of YTs that dynamically manages the return distribution, conditional on the liquidity deposited on each side (Senior/Junior) of the Tranche.
Mathematically, the formulas behind this mechanism mainly consider the Senior and Junior liquidity ratios to compute the Senior and Junior returns.
Please note that the labels used here differ slightly from the naming at the contract level:
The Senior TVL ratio is _AATrancheSplitRatio
The Senior Yield share is _trancheAPRSplitRatio
Liquidity ratios
First, we define the Senior and Junior TVL ratios as
$\text{TVL ratio}_{Sr} = \frac{\text{Liquidity}_{Senior}}{\text{Liquidity}_{Senior + Junior}}$
$\text{TVL ratio}_{Jr} = \frac{\text{Liquidity}_{Junior}}{\text{Liquidity}_{Senior + Junior}}$
Senior and Junior yields
The Senior return can be calculated as
$\text{APY}_{Sr} = \text{Base APY} \times \text{Yield share}_{Sr} \qquad \tag{1}$
where the Base APY is the underlying Tranche yield and the Yield share of the Senior side is a piecewise function conditional on the liquidity in the Senior tranche.
$\text{Yield share}_{Sr} = \begin{dcases} 99\% & \text{if } \text{TVL ratio}_{Sr} \geq 99\% \\ \\ \dfrac{\text{Liquidity}_{Senior}}{\text{Liquidity}_{Senior + Junior}} & \text{if } \text{TVL ratio}_{Sr} > 50\% \\ \\ 50\% & \text{if } \text{TVL ratio}_{Sr} \leq 50\% \\ \end{dcases}$
The Junior return can be calculated as
$\text{APY}_{Jr} = \frac{(\text{Base APY} - \text{APY}_{Sr}) \times \text{TVL ratio}_{Sr}}{\text{TVL ratio}_{Jr}} + \text{Base APY} \qquad \tag{2}$
Normal case
When Senior liquidity represents 50-99% of the funds in the Tranche, the Yield share of the Senior side equals the Senior TVL ratio (the middle branch of the piecewise function), and Equation (1) then gives the Senior APY.
Alternatively, we use fixed percentages. There are two edge cases:
The majority of total Tranche's liquidity lying on the Senior side (more than 99%)
Less than half of total Tranche's liquidity lying on the Senior side (less than 50%)
$\text{Yield share}_{Sr} = \begin{dcases} 99\% & \text{if } \dfrac{\text{Liquidity}_{Senior}}{\text{Liquidity}_{Senior + Junior}} \geq 99\% \\ \\ 50\% & \text{if } \dfrac{\text{Liquidity}_{Senior}}{\text{Liquidity}_{Senior + Junior}} \leq 50\% \\ \end{dcases}$
In the first case, we set the Yield share of Senior Tranches equal to 99%, while in the second case we set it equal to 50%. These two edge cases reflect the principle that:
Senior Tranche receives most of the underlying yield when liquidity is low on the Junior side (i.e. low coverage on Senior funds), or receives a guaranteed minimum portion of the underlying
yield when Junior liquidity is high (i.e. high coverage on Senior funds);
Junior Tranche receives outperforming APYs on the Junior Tranches, no matter what the amount of deposited liquidity on the Senior is.
The guaranteed minimum portion, i.e. the Yield share of the Senior Tranches, has been set to half the Base APY (see Edge case 2) when the Senior liquidity is smaller than the Junior one.
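The formulas above can be collected into a short Python sketch (illustrative only, not the on-chain contract code):

```python
def senior_yield_share(sr_tvl, jr_tvl):
    """Piecewise Senior yield share: 99% cap, TVL ratio in between, 50% floor."""
    ratio = sr_tvl / (sr_tvl + jr_tvl)
    if ratio >= 0.99:
        return 0.99
    if ratio > 0.50:
        return ratio
    return 0.50

def tranche_apys(sr_tvl, jr_tvl, base_apy):
    """Return (Senior APY, Junior APY) per equations (1) and (2)."""
    share = senior_yield_share(sr_tvl, jr_tvl)
    sr_ratio = sr_tvl / (sr_tvl + jr_tvl)
    jr_ratio = 1.0 - sr_ratio
    apy_sr = base_apy * share
    apy_jr = (base_apy - apy_sr) * sr_ratio / jr_ratio + base_apy
    return apy_sr, apy_jr

# $8m Senior / $2m Junior at 10% base APY gives roughly (8%, 18%),
# matching the standard example below.
tranche_apys(8e6, 2e6, 0.10)
```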
Senior coverage and Junior overperformance
The formulas of the Senior coverage provided by the Junior counterparty and the Junior boosted yield vs the underlying return are
$\text{Coverage}_{Sr} = \frac{\text{Liquidity}_{Junior}}{\text{Liquidity}_{Senior}}$
$\text{Overperformance}_{Jr} = \frac{\text{APY}_{Jr}}{\text{Base APY}}$
The Senior coverage should not be confused with the overall Tranche coverage that is computed in proportion to the whole tranche TVL
$\text{Tranche coverage} = \frac{\text{Liquidity}_{Junior}}{\text{Liquidity}_{Tranche}}$
We compute the returns of the Senior and the Junior sides using the formulas listed previously, assuming
An average underlying yield, Base APY, of 10%
The total liquidity of the Tranche, Tranche TVL, equal to $10,000,000
Standard case: between 50 and 99% of the total Tranche's liquidity lying on the Senior side
Side Liquidity Expected APY
Senior $8m 8%
Junior $2m 18%
The Senior Yield share is equal to 80%.
Senior funds coverage is 25% and the Junior overperformance vs base APY is 1.8x. The Tranche coverage is 20%.
Edge case 1: the majority of the total Tranche's liquidity lying on the Senior side ($\geq$ 99%)
Side Liquidity Expected APY
Senior $9.9m 10%
Junior $100k 20%
The Senior Yield share is set to 99% (Edge case 1).
Senior funds coverage is about 1% and the Junior overperformance vs base APY is 1.99x. The Tranche coverage is about 1% as well.
Edge case 2: less than half of the total Tranche's liquidity lying on the Senior side ($\leq$ 50%)
Side Liquidity Expected APY
Senior $4m 5%
Junior $6m 13%
The Senior Yield share is set to 50% (Edge case 2).
Senior funds coverage is 150% and the Junior overperformance vs base APY is 1.33x. The Tranche coverage is 60%.