content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Alkanols: Types, Classes, Fermentation of Alcohol, Properties and uses - 2023
Chemistry Notes
Alkanols: Types, Classes, Fermentation of Alcohol, Properties and uses
• Types and Classes
• Industrial Production by Fermentation
• Properties and Uses
Alkanols is a homologous series with general molecular formula of C[n]H[2n+1]OH or ROH.
The functional group in alkanols is the hydroxyl (-OH) group.
The names of alkanols are obtained by substituting “e” in alkanes with “ol”.
Methanol – CH[3]OH, Ethanol – CH[3]CH[2]OH
The alkanols are classified based on the number of alkyl groups directly linked to the carbon atom carrying the hydroxyl group.
The type of alkanols is determined by the number of the hydroxyl group –OH, present in the molecule.
1. Monohydric alkanols: This type has only one hydroxyl(–OH), present in its molecule.
Example: C[2]H[5]OH, C[3]H[7]OH.
1. Name the functional group in the alkanol.
2. Give an example each by writing the structure and names of the classes of alkanols.
1. Hydrolyzing ethyl esters with hot alkali
2. Reducing ethanol with nascent hydrogen
1. From ethene: Ethene is obtained by the cracking of petroleum. It is then absorbed in 95% H[2]SO[4] at 80^0C and 30 atm to form ethyl hydrogen tetraoxosulphate (VI)
C[2]H[4 ] + H[2]SO[4] → C[2]H[5]HSO[4]
The ethyl hydrogen tetraoxosulphate (VI) is hydrolyse by boiling in water to produce ethanol.
C[2]H[5]HSO[4]+ H[2]O → C[2]H[5]OH + H[2]SO[4].
The ethanol is distilled off leaving the acid behind which can be used again.
• Preparation by fermentation: Ethanol is prepared industrially from raw materials containing starch or sugar by the process of fermentation. Fermentation is an enzymatic process which involves the
decomposition of large organic molecules to simple molecule by micro-organism.The common micro-organism used is YEAST
Ethanol can be prepared from starchy food like rice, potatoes, maize etc.
The following steps are involved;
• Crush and pressure cook the starchy materials.
• Extract the starch granules by mixing with water.
• Allow the starch granules to settle and decant
• Treat the starch granules with malt (partially germinated barley which contains the enzyme, DIASTASE) at 50^0C for one hour.
• The starch is then converted to MALTOSE.
2(C[6]H[10]O[5])n + nH[2]O → nC[12]H[22]O[11]
• Then yeast is added at room temperature for some time (at least one day). Yeast contains two enzymes, namely MALTASE and ZYMASE. Maltase converts maltose to two glucose units, while Zymase
converts the glucose to ethanol and carbon (IV) oxide.
C[12]H[22]O[11] + H[2]O →^maltase2C[6]H[12]O[6]
C[6] H[12] O[6] →^Zymase 2C[2]H[3]OH + CO[2]
1. Describe fully, the production of ethanol from a named starchy material/food.
2. What type of chemical reaction is involved in fermentation of sugar?
1. Ethanol is a colourless volatile liquid.
2. It is soluble in water.
3. It has boiling point of 78^0C.
4. It has no action on litmus paper.
1. It is used as organic solvent.
2. It is the main constituent of methylated spirit used to clean wounds and to dissolve paint.
3. It is used as petrol addictive for use as fuel in vehicles.
4. It is used to manufacture other chemicals such as ethanol and ethanoic acid.
5. It is used as ingredient in making alcoholic drinks e.g. beers, wines and spirits.
6. It is used as anti-freeze in automobile radiator because of its low freezing point (-117^0C).
1. Describe how ethanol can be prepared from cane sugar.
2. Using balanced equations, state five chemical properties of ethanol.
3. Describe a test to identify an unknown solution to be ethanol.
4. What is the number of oxygen atoms in 32g of the gas? [N[A] = 6.02 x 10^23]
5. 5.6dm3 of oxygen gas was evolved at the anode during the electrolysis of dilute copper (II) tetraoxosulphate (VI) using platinum electrodes. What mass of copper is deposited at the cathode during
the process? [Cu = 64, Molar volume of a gas at s.t.p = 22.4dm^3, 1F = 96500C]
New School Chemistry for Senior Secondary School by O.Y.Ababio (6^th edition) pages 539-544.
SECTION A:Write the correct option ONLY.
1. The functional groups of the alkanolis(a).double bond (b). carboxyl group (c). hydroxyl group (d). triple bond
2. Primaryalkanols are oxidized to carboxylic acid; secondary alkanols are oxidized to alkanones while tertiary alkanols are (a). oxidized to alkanols(b). oxidized to alkanones(c). not oxidized
(d).oxidized to alkenes
3. The solubility of alkanols in water is due to (a). the covalent nature(b).hydrogenbonding
(c).their low melting point (d).their low melting point
4. When acidified KMnO[4] is used as oxidizing agent for alkanol, the colour change observed is(a). yellow to red (b). purple to colourless(c). orange to green (d).white to black
5. Which of the following enzymes converts glucose to ethanol?(a). maltose (b).zymase
(c).diastase (d).amylase
1 (a).Write the structural formula of two named primary alkanols.
(b). Explain the structural different between secondary and tertiary alkanols giving one example each.
2 (a).What is fermentation?
(b). Describe the preparation of ethanol from table sugar. | {"url":"https://stoplearn.com/alkanols_-types-classes-fermentation-of-alcohol-properties-and-uses/","timestamp":"2024-11-07T07:05:16Z","content_type":"text/html","content_length":"77081","record_id":"<urn:uuid:8b7e32b5-9237-4fad-ba1a-dc535ba69c0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00267.warc.gz"} |
Stochastic Modeling - Definition, Applications & Example
Stochastic Modeling
Last Updated :
21 Aug, 2024
Stochastic Modeling Definition
Stochastic modeling develops a mathematical or financial model to derive all possible outcomes of a given problem or scenarios using random input variables. It focuses on the probability
distribution of possible outcomes. Examples are Monte Carlo Simulation, Regression Models, and Markov-Chain Models.
The model represents a real case simulation to understand the system better, study the randomness, and evaluate uncertain situations that define every possible outcome and how the system will evolve.
Hence this modeling technique helps professionals and investors make better management decisions and formulate their business practices to maximize profitability.
• Stochastic modeling develops a mathematical or financial model to simulate an entire system and derives a set of possible outcomes with its probability distribution.
• The deterministic model predicting a single output exemplifies the opposite concept of the stochastic model as they do not involve any randomness or uncertainty.
• Its application is seen in various sectors like the financial market, agriculture, weather forecasting, and manufacturing.
• Examples of stochastic models are Monte Carlo Simulation, Regression Models, and Markov-Chain Models.
Stochastic Modeling Explained
The stochastic modeling definition states that the results vary with conditions or scenarios. The modeling consists of random variables and uncertainty parameters, playing a vital role. It brings the
probability factor in the calculation, which determines every possible outcome. For the probability determination of each result, the inputs are given variation from time to time. Thus, it
contributes to the computation of probability distributions which are mathematical functions that reflect the similarity of different outcomes.
The model stands on many criteria to ensure accuracy in probable outcomes. Therefore, the model must cover all points of uncertainty to showcase all possible results for drawing the correct
probability distribution. Furthermore, every probability is related to one another within the model itself and collectively contributes to computing the randomness of the inputs. These probabilities
are further used for predictions and forecasting relevant information.
The stochastic prototype provides several outcomes, and it is applied commonly in analyzing investment returns. First, it studies the market volatility based on the uncertain input and probability of
various returns. Thus, stochastic modeling in finance helps investors discern the unknown outcomes that usually do not consider in the analysis. The variable is generally time-series data depicting
the difference between historical data, and the final distribution result describes the inputs' randomness.
One of the famous stochastic modeling examples is Monte Carlo Simulation, invented by famous Mathematicians John Von Neumann and Stanislaw Ulam during World War II to enhance decision-making in an
environment filled with uncertainty. It is a computerized simulation technique used for decision-making by professionals in diverse disciplines like engineering and finance. The method creates an
artificial world similar to the real-world case using numerous random samples, observing the outcome and its probabilities to derive practical solutions.
Stochastic vs. Deterministic Modeling
The prime difference between stochastic and deterministic representation is noticeable in the name itself. The word "stochastic" indicates a random probability distribution, whereas "deterministic"
indicates the absence of randomness.
The following table demonstrates the significant differences between the stochastic and deterministic methods:
Features Stochastic Modeling Deterministic Modeling
Outcomes Produces a set of possible outcomes for a problem. Creates a single output for a problem.
Input/ All properties are random, or inputs are random variables leading to different outputs. All properties are certain, or no random value is taken into account, and the model only works
Properties on defined values providing a single result.
Complexity in Stochastic models are more complex given to their prediction and forecasting inclined to Deterministic modeling is less complex; it follows a set path and does not deflect due to
design use random variables and several conditions. randomness or uncertainty.
Example Example of stochastic model: The Monte Carlo simulation Example of deterministic model: Water Balance Model
Uses of Stochastic Modeling
The application of stochastic modeling has a broad scope and importance in different fields and areas of study. Some of the prominent uses of it are as follows:
• Investment decisions: Stochastic modeling in finance is predominantly associated with investment decision-making. It is used in financial analysis to decide on investment decisions, return on
investment, etc. The model provides the probable outcomes corresponding to various scenarios.
• Agriculture: The model is used in the agriculture field for effective decision-making during uncertain situations. For example, its application in farmland and irrigation management increases
farmers’ profit.
• Weather forecasting: The application of stochastic approaches in substantial weather and climate prediction prototype is proven, and stochastic predictands can correct many of the inaccuracies in
the other frameworks.
• Manufacturing: Stochastic replicas of a wide range of manufacturing systems are used for practical examination purposes.
• Biochemistry and Systems Biology: Stochastic kinetic methods are used to structure the dynamics of biochemical and biological networks.
Frequently Asked Questions (FAQs)
What is stochastic volatility modeling?
The stochastic volatility model considers the volatility of a return on an asset. The fundamental of stochastic volatility is that asset price volatility is not constant; it varies. They are used in
mathematical finance to evaluate derivative securities, such as options.
What are the differences between stochastic and deterministic models?
The word stochastic implies u0022randomu0022 or u0022uncertain,u0022 whereas the word deterministic indicates u0022certain.u0022 When it comes to stochastic and deterministic frameworks, stochastic
predicts a set of possible outcomes with their probability of occurrences. In contrast, the deterministic model produces only a single output from a given set of circumstances.
What is stochastic modeling?
It is an example of modeling techniques using stochastic projections. Furthermore, the model accepts possible random inputs, provides the probability distribution of possible outcomes, and is mainly
used for financial planning. Examples of such models are Monte Carlo Simulation and Regression Models.
Recommended Articles
This has been a guide to Stochastic Modeling & its Definition. Here we explain stochastic modeling, its application, example & finance & volatility models. You may also have a look at the following
articles to learn more – | {"url":"https://www.wallstreetmojo.com/stochastic-modeling/","timestamp":"2024-11-08T07:58:35Z","content_type":"text/html","content_length":"300512","record_id":"<urn:uuid:51130850-e810-4e73-a85d-4b33ab8457ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00115.warc.gz"} |
seminars - Completely almost periodic elements of Hopf von Neumann algebras
Almost periodicity was introduced by H. Bohr in the 1920s in the context of functions on the real line. Subsequently, the following generalization has become accepted: a bounded function on a group G
is called almost periodic if the set of its translates is relatively compact (in the sup-norm topology). The space of all a.p. functions on G is then an interesting commutative unital C*-algebra,
whose spectrum can be regarded as a "compactification" of G.
$L^\infty(G)$ is an example of a Hopf von Neumann algebra, and there are several plausible ways to extend the previous definitions to the world of Hopf von Neumann algebras. In this talk, I will give
a brief sketch of some of the classical results, and then discuss a version for Hopf von Neumann algebras that was proposed by Runde, using a modified notion of compactness that may be more
appropriate to the operator-space setting.
Extending his results, I shall show that Runde's construction always produces a C*-algebra, and if time permits, I will discuss an unexpected connection with a problem that arose in the study of
uniform Roe algebras. | {"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&page=65&sort_index=Time&order_type=asc&l=en&document_srl=775149","timestamp":"2024-11-08T04:18:41Z","content_type":"text/html","content_length":"48412","record_id":"<urn:uuid:e97ff8b8-12c5-4822-93f2-ade45de9831d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00498.warc.gz"} |
1.3 The Language of Physics: Physical Quantities and Units
Learning Objectives
Learning Objectives
By the end of this section, you will be able to do the following:
• Associate physical quantities with their International System of Units (SI) and perform conversions among SI units using scientific notation
• Relate measurement uncertainty to significant figures and apply the rules for using significant figures in calculations
• Correctly create, label, and identify relationships in graphs using mathematical relationships (e.g., slope, y-intercept, inverse, quadratic and logarithmic)
Section Key Terms
accuracy ampere constant
conversion factor dependent variable derived units
English units exponential relationship fundamental physical units
independent variable inverse relationship inversely proportional
kilogram linear relationship logarithmic (log) scale
log-log plot meter method of adding percents
order of magnitude precision quadratic relationship
scientific notation second semi-log plot
SI units significant figures slope
uncertainty variable y-intercept
The Role of Units
The Role of Units
Physicists, like other scientists, make observations and ask basic questions. For example, how big is an object? How much mass does it have? How far did it travel? To answer these questions, they
make measurements with various instruments (e.g., meter stick, balance, stopwatch, etc.).
The measurements of physical quantities are expressed in terms of units, which are standardized values. For example, the length of a race, which is a physical quantity, can be expressed in meters
(for sprinters) or kilometers (for long distance runners). Without standardized units, it would be extremely difficult for scientists to express and compare measured values in a meaningful way (
Figure 1.13).
All physical quantities in the International System of Units (SI) are expressed in terms of combinations of seven fundamental physical units, which are units for: length, mass, time, electric
current, temperature, amount of a substance, and luminous intensity.
SI Units: Fundamental and Derived Units
SI Units: Fundamental and Derived Units
There are two major systems of units used in the world: SI units (acronym for the French Le Système International d’Unités, also known as the metric system), and English units (also known as the
imperial system). English units were historically used in nations once ruled by the British Empire. Today, the United States is the only country that still uses English units extensively. Virtually
every other country in the world now uses the metric system, which is the standard system agreed upon by scientists and mathematicians.
Some physical quantities are more fundamental than others. In physics, there are seven fundamental physical quantities that are measured in base or physical fundamental units: length, mass, time,
electric current temperature, amount of substance, and luminous intensity. Units for other physical quantities (such as force, speed, and electric charge) described by mathematically combining these
seven base units. In this course, we will mainly use five of these: length, mass, time, electric current and temperature. The units in which they are measured are the meter, kilogram, second, ampere,
kelvin, mole, and candela (Table 1.1). All other units are made by mathematically combining the fundamental units. These are called derived units.
Table 1.1 SI Base Units
Quantity Name Symbol
Length Meter m
Mass Kilogram kg
Time Second s
Electric current Ampere a
Temperature Kelvin k
Amount of substance Mole mol
Luminous intensity Candela cd
The Meter
The SI unit for length is the meter (m). The definition of the meter has changed over time to become more accurate and precise. The meter was first defined in 1791 as 1/10,000,000 of the distance
from the equator to the North Pole. This measurement was improved in 1889 by redefining the meter to be the distance between two engraved lines on a platinum-iridium bar. (The bar is now housed at
the International Bureau of Weights and Measures, near Paris). By 1960, some distances could be measured more precisely by comparing them to wavelengths of light. The meter was redefined as
1,650,763.73 wavelengths of orange light emitted by krypton atoms. In 1983, the meter was given its present definition as the distance light travels in a vacuum in 1/ 299,792,458 of a second (Figure
The Kilogram
The SI unit for mass is the kilogram (kg). It is defined to be the mass of a platinum-iridium cylinder, housed at the International Bureau of Weights and Measures near Paris. Exact replicas of the
standard kilogram cylinder are kept in numerous locations throughout the world, such as the National Institute of Standards and Technology in Gaithersburg, Maryland. The determination of all other
masses can be done by comparing them with one of these standard kilograms.
The Second
The SI unit for time, the second (s) also has a long history. For many years it was defined as 1/86,400 of an average solar day. However, the average solar day is actually very gradually getting
longer due to gradual slowing of Earth’s rotation. Accuracy in the fundamental units is essential, since all other measurements are derived from them. Therefore, a new standard was adopted to define
the second in terms of a non-varying, or constant, physical phenomenon. One constant phenomenon is the very steady vibration of Cesium atoms, which can be observed and counted. This vibration forms
the basis of the cesium atomic clock. In 1967, the second was redefined as the time required for 9,192,631,770 Cesium atom vibrations (Figure 1.15).
The Ampere
Electric current is measured in the ampere (A), named after Andre Ampere. You have probably heard of amperes, or amps, when people discuss electrical currents or electrical devices. Understanding an
ampere requires a basic understanding of electricity and magnetism, something that will be explored in depth in later chapters of this book. Basically, two parallel wires with an electric current
running through them will produce an attractive force on each other. One ampere is defined as the amount of electric current that will produce an attractive force of 2.7 $××$ 10^–7 newton per meter
of separation between the two wires (the newton is the derived unit of force).
The SI unit of temperature is the kelvin (or kelvins, but not degrees kelvin). This scale is named after physicist William Thomson, Lord Kelvin, who was the first to call for an absolute temperature
scale. The Kelvin scale is based on absolute zero. This is the point at which all thermal energy has been removed from all atoms or molecules in a system. This temperature, 0 K, is equal to −273.15
°C and −459.67 °F. Conveniently, the Kelvin scale actually changes in the same way as the Celsius scale. For example, the freezing point (0 °C) and boiling points of water (100 °C) are 100 degrees
apart on the Celsius scale. These two temperatures are also 100 kelvins apart (freezing point = 273.15 K; boiling point = 373.15 K).
Metric Prefixes
Physical objects or phenomena may vary widely. For example, the size of objects varies from something very small (like an atom) to something very large (like a star). Yet the standard metric unit of
length is the meter. So, the metric system includes many prefixes that can be attached to a unit. Each prefix is based on factors of 10 (10, 100, 1,000, etc., as well as 0.1, 0.01, 0.001, etc.).
Table 1.2 gives the metric prefixes and symbols used to denote the different various factors of 10 in the metric system.
Table 1.2
Prefix Symbol Value Example Name Example Symbol Example Value Example Description
exa E 10^18 Exameter Em 10^18 m Distance light travels in a century
peta P 10^15 Petasecond Ps 10^15 s 30 million years
tera T 10^12 Terawatt TW 10^12 W Powerful laser output
giga G 10^9 Gigahertz GHz 10^9 Hz A microwave frequency
mega M 10^6 Megacurie MCi 10^6 Ci High radioactivity
kilo k 10^3 Kilometer km 10^3 m About 6/10 mile
hector h 10^2 Hectoliter hL 10^2 L 26 gallons
deka da 10^1 Dekagram dag 10^1 g Teaspoon of butter
____ ____ 10^0 (=1)
deci d 10^–1 Deciliter dL 10^–1 L Less than half a soda
centi c 10^–2 Centimeter Cm 10^–2 m Fingertip thickness
mili m 10^–3 Millimeter Mm 10^–3 m Flea at its shoulder
micro µ 10^–6 Micrometer µm 10^–6 m Detail in microscope
nano n 10^–9 Nanogram Ng 10^–9 g Small speck of dust
pico p 10^–12 Picofarad pF 10^–12 F Small capacitor in radio
femto f 10^–15 Femtometer Fm 10^–15 m Size of a proton
atto a 10^–18 Attosecond as 10^–18 s Time light takes to cross an atom
See Appendix A for a discussion of powers of 10.
The metric system is convenient because conversions between metric units can be done simply by moving the decimal place of a number. This is because the metric prefixes are sequential powers of 10.
There are 100 centimeters in a meter, 1000 meters in a kilometer, and so on. In nonmetric systems, such as U.S. customary units, the relationships are less simple—there are 12 inches in a foot, 5,280
feet in a mile, 4 quarts in a gallon, and so on. Another advantage of the metric system is that the same unit can be used over extremely large ranges of values simply by switching to the
most-appropriate metric prefix. For example, distances in meters are suitable for building construction, but kilometers are used to describe road construction. Therefore, with the metric system,
there is no need to invent new units when measuring very small or very large objects—you just have to move the decimal point (and use the appropriate prefix).
Known Ranges of Length, Mass, and Time
Table 1.3 lists known lengths, masses, and time measurements. You can see that scientists use a range of measurement units. This wide range demonstrates the vastness and complexity of the universe,
as well as the breadth of phenomena physicists study. As you examine this table, note how the metric system allows us to discuss and compare an enormous range of phenomena, using one system of
measurement (Figure 1.16 and Figure 1.17).
Table 1.3 Approximate Values of Length, Mass, and Time
Length (m) Phenomenon Measured Mass (kg) Phenomenon Measured Time (s) Phenomenon Measured
10^–18 Present experimental limit to smallest observable detail 10^–30 Mass of an electron (9.11 $××$ 10^–31 kg) 10^–23 Time for light to cross a proton
10^–15 Diameter of a proton 10^–27 Mass of a hydrogen atom (1.67 $××$ 10^–27 kg) 10^–22 Mean life of an extremely unstable nucleus
10^–14 Diameter of a uranium nucleus 10^–15 Mass of a bacterium 10^–15 Time for one oscillation of a visible light
10^–10 Diameter of a hydrogen atom 10^–5 Mass of a mosquito 10^–13 Time for one vibration of an atom in a solid
10^–8 Thickness of membranes in cell of living organism 10^–2 Mass of a hummingbird 10^–8 Time for one oscillation of an FM radio wave
10^–6 Wavelength of visible light 1 Mass of a liter of water (about a quart) 10^–3 Duration of a nerve impulse
10^–3 Size of a grain of sand 10^2 Mass of a person 1 Time for one heartbeat
1 Height of a 4-year-old child 10^3 Mass of a car 10^5 One day (8.64 $××$ 10^4 s)
10^2 Length of a football field 10^8 Mass of a large ship 10^7 One year (3.16 $××$ 10^7 s)
10^4 Greatest ocean depth 10^12 Mass of a large iceberg 10^9 About half the life expectancy of a human
10^7 Diameter of Earth 10^15 Mass of the nucleus of a comet 10^11 Recorded history
10^11 Distance from Earth to the sun 10^23 Mass of the moon (7.35 $××$ 10^22 kg) 10^17 Age of Earth
10^16 Distance traveled by light in 1 year (a light year) 10^25 Mass of Earth (5.97 $××$ 10^24 kg) 10^18 Age of the universe
10^21 Diameter of the Milky Way Galaxy 10^30 Mass of the sun (1.99 $××$ 10^24 kg)
10^22 Distance from Earth to the nearest large galaxy (Andromeda) 10^42 Mass of the Milky Way Galaxy (current upper limit)
10^26 Distance from the Earth to the edges of the known universe 10^53 Mass of the known universe (current upper limit)
More precise values are in parentheses.
Using Scientific Notation with Physical Measurements
Using Scientific Notation with Physical Measurements
Scientific notation is a way of writing numbers that are too large or small to be conveniently written as a decimal. For example, consider the number 840,000,000,000,000. It’s a rather large number
to write out. The scientific notation for this number is 8.40 $××$ 10^14. Scientific notation follows this general format
$x × 10 y . x × 10 y .$
In this format x is the value of the measurement with all placeholder zeros removed. In the example above, x is 8.4. The x is multiplied by a factor, 10^y, which indicates the number of placeholder
zeros in the measurement. Placeholder zeros are those at the end of a number that is 10 or greater, and at the beginning of a decimal number that is less than 1. In the example above, the factor is
10^14. This tells you that you should move the decimal point 14 positions to the right, filling in placeholder zeros as you go. In this case, moving the decimal point 14 places creates only 13
placeholder zeros, indicating that the actual measurement value is 840,000,000,000,000.
Numbers that are fractions can be indicated by scientific notation as well. Consider the number 0.0000045. Its scientific notation is 4.5 $××$ 10^–6. Its scientific notation has the same format
$x × 10 y . x × 10 y .$
Here, x is 4.5. However, the value of y in the 10^y factor is negative, which indicates that the measurement is a fraction of 1. Therefore, we move the decimal place to the left, for a negative y. In
our example of 4.5 $××$ 10^–6, the decimal point would be moved to the left six times to yield the original number, which would be 0.0000045.
The term order of magnitude refers to the power of 10 when numbers are expressed in scientific notation. Quantities that have the same power of 10 when expressed in scientific notation, or come close
to it, are said to be of the same order of magnitude. For example, the number 800 can be written as 8 $××$ 10^2, and the number 450 can be written as 4.5 $××$ 10^2. Both numbers have the same value
for y. Therefore, 800 and 450 are of the same order of magnitude. Similarly, 101 and 99 would be regarded as the same order of magnitude, 10^2. Order of magnitude can be thought of as a ballpark
estimate for the scale of a value. The diameter of an atom is on the order of 10^−9 m, while the diameter of the sun is on the order of 10^9 m. These two values are 18 orders of magnitude apart.
Scientists make frequent use of scientific notation because of the vast range of physical measurements possible in the universe, such as the distance from Earth to the moon (Figure 1.18), or to the
nearest star.
Unit Conversion and Dimensional Analysis
It is often necessary to convert from one type of unit to another. For example, if you are reading a European cookbook in the United States, some quantities may be expressed in liters and you need to
convert them to cups. A Canadian tourist driving through the United States might want to convert miles to kilometers, to have a sense of how far away his next destination is. A doctor in the United
States might convert a patient’s weight in pounds to kilograms.
Let’s consider a simple example of how to convert units within the metric system. How can we want to convert 1 hour to seconds?
Next, we need to determine a conversion factor relating meters to kilometers. A conversion factor is a ratio expressing how many of one unit are equal to another unit. A conversion factor is simply a
fraction which equals 1. You can multiply any number by 1 and get the same value. When you multiply a number by a conversion factor, you are simply multiplying it by one. For example, the following
are conversion factors: (1 foot)/(12 inches) = 1 to convert inches to feet, (1 meter)/(100 centimeters) = 1 to convert centimeters to meters, (1 minute)/(60 seconds) = 1 to convert seconds to
minutes. In this case, we know that there are 1,000 meters in 1 kilometer.
Now we can set up our unit conversion. We will write the units that we have and then multiply them by the conversion factor (1 km/1,000m) = 1, so we are simply multiplying 80m by 1:
1.1$1 h × 60 min 1 h × 60 s 1 min =3600 s = 3 .6 × 10 2 s 1 h × 60 min 1 h × 60 s 1 min =3600 s = 3 .6 × 10 2 s$
When there is a unit in the original number, and a unit in the denominator (bottom) of the conversion factor, the units cancel. In this case, hours and minutes cancel and the value in seconds
You can use this method to convert between any types of unit, including between the U.S. customary system and metric system. Notice also that, although you can multiply and divide units
algebraically, you cannot add or subtract different units. An expression like 10 km + 5 kg makes no sense. Even adding two lengths in different units, such as 10 km + 20 m does not make sense. You
express both lengths in the same unit. See Appendix C for a more complete list of conversion factors.
Worked Example
Unit Conversions: A Short Drive Home
Suppose that you drive the 10.0 km from your university to home in 20.0 min. Calculate your average speed (a) in kilometers per hour (km/h) and (b) in meters per second (m/s). (Note—Average speed is
distance traveled divided by time of travel.)
First we calculate the average speed using the given units. Then we can get the average speed into the desired units by picking the correct conversion factor and multiplying by it. The correct
conversion factor is the one that cancels the unwanted unit and leaves the desired unit in its place.
Solution for (a)
1. Calculate average speed. Average speed is distance traveled divided by time of travel. (Take this definition as a given for now—average speed and other motion concepts will be covered in a later
module.) In equation form,
$average speed = distance time . average speed = distance time .$
2. Substitute the given values for distance and time.
$average speed = 10.0 km 20.0 min =0.500 km min average speed = 10.0 km 20.0 min =0.500 km min$
3. Convert km/min to km/h: multiply by the conversion factor that will cancel minutes and leave hours. That conversion factor is $60 min/1 h 60 min/1 h$. Thus,
$average speed = 0.500 km min × 60 min 1 h =30.0 km h . average speed = 0.500 km min × 60 min 1 h =30.0 km h .$
Discussion for (a)
To check your answer, consider the following:
1. Be sure that you have properly cancelled the units in the unit conversion. If you have written the unit conversion factor upside down, the units will not cancel properly in the equation. If you
accidentally get the ratio upside down, then the units will not cancel; rather, they will give you the wrong units as follows
$km min × 1 hr 60 min = 1 60 km·h min 2 , km min × 1 hr 60 min = 1 60 km·h min 2 ,$
which are obviously not the desired units of km/h.
2. Check that the units of the final answer are the desired units. The problem asked us to solve for average speed in units of km/h and we have indeed obtained these units.
3. Check the significant figures. Because each of the values given in the problem has three significant figures, the answer should also have three significant figures. The answer 30.0 km/h does
indeed have three significant figures, so this is appropriate. Note that the significant figures in the conversion factor are not relevant because an hour is defined to be 60 min, so the
precision of the conversion factor is perfect.
4. Next, check whether the answer is reasonable. Let us consider some information from the problem—if you travel 10 km in a third of an hour (20 min), you would travel three times that far in an
hour. The answer does seem reasonable.
Solution (b)
There are several ways to convert the average speed into meters per second.
1. Start with the answer to (a) and convert km/h to m/s. Two conversion factors are needed—one to convert hours to seconds, and another to convert kilometers to meters.
2. Multiplying by these yields
$Average speed=30.0 km h × 1 h 3,600 s × 1,000 m 1 km Average speed=30.0 km h × 1 h 3,600 s × 1,000 m 1 km$
$Average speed=8.33 m s Average speed=8.33 m s$
Discussion for (b)
If we had started with 0.500 km/min, we would have needed different conversion factors, but the answer would have been the same: 8.33 m/s.
You may have noted that the answers in the worked example just covered were given to three digits. Why? When do you need to be concerned about the number of digits in something you calculate? Why not
write down all the digits your calculator produces?
Worked Example
Using Physics to Evaluate Promotional Materials
A commemorative coin that is 2″ in diameter is advertised to be plated with 15 mg of gold. If the density of gold is 19.3 g/cc, and the amount of gold around the edge of the coin can be ignored, what
is the thickness of the gold on the top and bottom faces of the coin?
To solve this problem, the volume of the gold needs to be determined using the gold’s mass and density. Half of that volume is distributed on each face of the coin, and, for each face, the gold can
be represented as a cylinder that is 2″ in diameter with a height equal to the thickness. Use the volume formula for a cylinder to determine the thickness.
The mass of the gold is given by the formula $m=ρV=15× 10 −3 g m=ρV=15× 10 −3$ where $ρ=19.3 g/cc$ and V is the volume. Solving for the volume gives $V= m ρ = 15× 10 −3 g 19.3 g/cc ≅7.8× 10 −4 cc.$
If t is the thickness, the volume corresponding to half the gold is $1 2 ( 7.8× 10 −4 )=π r 2 t=π ( 2.54 ) 2 t,$where the 1″ radius has been converted to cm. Solving for the thickness gives $t= (
3.9× 10 −4 ) π ( 2.54 ) 2 ≅1.9× 10 −5 cm = 0.00019 mm.$
The amount of gold used is stated to be 15 mg, which is equivalent to a thickness of about 0.00019 mm. The mass figure may make the amount of gold sound larger, both because the number is much bigger
(15 versus 0.00019), and because people may have a more intuitive feel for how much a millimeter is than for how much a milligram is. A simple analysis of this sort can clarify the significance of
claims made by advertisers.
Accuracy, Precision and Significant Figures
Accuracy, Precision and Significant Figures
Science is based on experimentation that requires good measurements. The validity of a measurement can be described in terms of its accuracy and its precision (see Figure 1.19 and Figure 1.20).
Accuracy is how close a measurement is to the correct value for that measurement. For example, let us say that you are measuring the length of standard piece of printer paper. The packaging in which
you purchased the paper states that it is 11 inches long, and suppose this stated value is correct. You measure the length of the paper three times and obtain the following measurements: 11.1 inches,
11.2 inches, and 10.9 inches. These measurements are quite accurate because they are very close to the correct value of 11.0 inches. In contrast, if you had obtained a measurement of 12 inches, your
measurement would not be very accurate. This is why measuring instruments are calibrated based on a known measurement. If the instrument consistently returns the correct value of the known
measurement, it is safe for use in finding unknown values.
Precision states how well repeated measurements of something generate the same or similar results. Therefore, the precision of measurements refers to how close together the measurements are when you
measure the same thing several times. One way to analyze the precision of measurements would be to determine the range, or difference between the lowest and the highest measured values. In the case
of the printer paper measurements, the lowest value was 10.9 inches and the highest value was 11.2 inches. Thus, the measured values deviated from each other by, at most, 0.3 inches. These
measurements were reasonably precise because they varied by only a fraction of an inch. However, if the measured values had been 10.9 inches, 11.1 inches, and 11.9 inches, then the measurements would
not be very precise because there is a lot of variation from one measurement to another.
The measurements in the paper example are both accurate and precise, but in some cases, measurements are accurate but not precise, or they are precise but not accurate. Let us consider a GPS system
that is attempting to locate the position of a restaurant in a city. Think of the restaurant location as existing at the center of a bull’s-eye target. Then think of each GPS attempt to locate the
restaurant as a black dot on the bull’s eye.
In Figure 1.21, you can see that the GPS measurements are spread far apart from each other, but they are all relatively close to the actual location of the restaurant at the center of the target.
This indicates a low precision, high accuracy measuring system. However, in Figure 1.22, the GPS measurements are concentrated quite closely to one another, but they are far away from the target
location. This indicates a high precision, low accuracy measuring system. Finally, in Figure 1.23, the GPS is both precise and accurate, allowing the restaurant to be located.
The accuracy and precision of a measuring system determine the uncertainty of its measurements. Uncertainty is a way to describe how much your measured value deviates from the actual value that the
object has. If your measurements are not very accurate or precise, then the uncertainty of your values will be very high. In more general terms, uncertainty can be thought of as a disclaimer for your
measured values. For example, if someone asked you to provide the mileage on your car, you might say that it is 45,000 miles, plus or minus 500 miles. The plus or minus amount is the uncertainty in
your value. That is, you are indicating that the actual mileage of your car might be as low as 44,500 miles or as high as 45,500 miles, or anywhere in between. All measurements contain some amount of
uncertainty. In our example of measuring the length of the paper, we might say that the length of the paper is 11 inches plus or minus 0.2 inches or 11.0 ± 0.2 inches. The uncertainty in a
measurement, A, is often denoted as δA ("delta A"),
The factors contributing to uncertainty in a measurement include the following:
1. Limitations of the measuring device
2. The skill of the person making the measurement
3. Irregularities in the object being measured
4. Any other factors that affect the outcome (highly dependent on the situation)
In the printer paper example uncertainty could be caused by: the fact that the smallest division on the ruler is 0.1 inches, the person using the ruler has bad eyesight, or uncertainty caused by the
paper cutting machine (e.g., one side of the paper is slightly longer than the other.) It is good practice to carefully consider all possible sources of uncertainty in a measurement and reduce or
eliminate them,
Percent Uncertainty
One method of expressing uncertainty is as a percent of the measured value. If a measurement, A, is expressed with uncertainty, δA, the percent uncertainty is
1.2 $% uncertainty = δA A × 100%. % uncertainty = δA A × 100%.$
Worked Example
Calculating Percent Uncertainty: A Bag of Apples
A grocery store sells 5-lb bags of apples. You purchase four bags over the course of a month and weigh the apples each time. You obtain the following measurements:
• Week 1 weight: $4.8 lb 4.8 lb$
• Week 2 weight: $5.3 lb 5.3 lb$
• Week 3 weight: $4.9 lb 4.9 lb$
• Week 4 weight: $5.4 lb 5.4 lb$
You determine that the weight of the 5 lb bag has an uncertainty of ±0.4 lb. What is the percent uncertainty of the bag’s weight?
First, observe that the expected value of the bag’s weight, $AA$ , is 5 lb. The uncertainty in this value, $δAδA$ , is 0.4 lb. We can use the following equation to determine the percent uncertainty
of the weight
$% uncertainty = δA A × 100%. % uncertainty = δA A × 100%.$
Plug the known values into the equation
$% uncertainty = 0.4 lb 5 lb ×100%=8%. % uncertainty = 0.4 lb 5 lb ×100%=8%.$
We can conclude that the weight of the apple bag is 5 lb ± 8 percent. Consider how this percent uncertainty would change if the bag of apples were half as heavy, but the uncertainty in the weight
remained the same. Hint for future calculations: when calculating percent uncertainty, always remember that you must multiply the fraction by 100 percent. If you do not do this, you will have a
decimal quantity, not a percent value.
Uncertainty in Calculations
There is an uncertainty in anything calculated from measured quantities. For example, the area of a floor calculated from measurements of its length and width has an uncertainty because the both the
length and width have uncertainties. How big is the uncertainty in something you calculate by multiplication or division? If the measurements in the calculation have small uncertainties (a few
percent or less), then the method of adding percents can be used. This method says that the percent uncertainty in a quantity calculated by multiplication or division is the sum of the percent
uncertainties in the items used to make the calculation. For example, if a floor has a length of 4.00 m and a width of 3.00 m, with uncertainties of 2 percent and 1 percent, respectively, then the
area of the floor is 12.0 m^2 and has an uncertainty of 3 percent (expressed as an area this is 0.36 m^2, which we round to 0.4 m^2 since the area of the floor is given to a tenth of a square meter).
For a quick demonstration of the accuracy, precision, and uncertainty of measurements based upon the units of measurement, try this simulation. You will have the opportunity to measure the length and
weight of a desk, using milli- versus centi- units. Which do you think will provide greater accuracy, precision and uncertainty when measuring the desk and the notepad in the simulation? Consider how
the nature of the hypothesis or research question might influence how precise of a measuring tool you need to collect data.
Precision of Measuring Tools and Significant Figures
An important factor in the accuracy and precision of measurements is the precision of the measuring tool. In general, a precise measuring tool is one that can measure values in very small increments.
For example, consider measuring the thickness of a coin. A standard ruler can measure thickness to the nearest millimeter, while a micrometer can measure the thickness to the nearest 0.005
millimeter. The micrometer is a more precise measuring tool because it can measure extremely small differences in thickness. The more precise the measuring tool, the more precise and accurate the
measurements can be.
When we express measured values, we can only list as many digits as we initially measured with our measuring tool (such as the rulers shown in Figure 1.24). For example, if you use a standard ruler
to measure the length of a stick, you may measure it with a decimeter ruler as 3.6 cm. You could not express this value as 3.65 cm because your measuring tool was not precise enough to measure a
hundredth of a centimeter. It should be noted that the last digit in a measured value has been estimated in some way by the person performing the measurement. For example, the person measuring the
length of a stick with a ruler notices that the stick length seems to be somewhere in between 36 mm and 37 mm. He or she must estimate the value of the last digit. The rule is that the last digit
written down in a measurement is the first digit with some uncertainty. For example, the last measured value 36.5 mm has three digits, or three significant figures. The number of significant figures
in a measurement indicates the precision of the measuring tool. The more precise a measuring tool is, the greater the number of significant figures it can report.
Special consideration is given to zeros when counting significant figures. For example, the zeros in 0.053 are not significant because they are only placeholders that locate the decimal point. There
are two significant figures in 0.053—the 5 and the 3. However, if the zero occurs between other significant figures, the zeros are significant. For example, both zeros in 10.053 are significant, as
these zeros were actually measured. Therefore, the 10.053 placeholder has five significant figures. The zeros in 1300 may or may not be significant, depending on the style of writing numbers. They
could mean the number is known to the last zero, or the zeros could be placeholders. So 1300 could have two, three, or four significant figures. To avoid this ambiguity, write 1300 in scientific
notation as 1.3 × 10^3. Only significant figures are given in the x factor for a number in scientific notation (in the form $x×10yx×10y$). Therefore, we know that 1 and 3 are the only significant
digits in this number. In summary, zeros are significant except when they serve only as placeholders. Table 1.4 provides examples of the number of significant figures in various numbers.
Table 1.4
Number Significant Figures Rationale
1.657 4 There are no zeros and all non-zero numbers are always significant.
0.4578 4 The first zero is only a placeholder for the decimal point.
0.000458 3 The first four zeros are placeholders needed to report the data to the ten-thousandths place.
2000.56 6 The three zeros are significant here because they occur between other significant figures.
45,600 3 With no underlines or scientific notation, we assume that the last two zeros are placeholders and are not significant.
15895000 7 The two underlined zeros are significant, while the last zero is not, as it is not underlined.
5.457 $××$ 10^13 4 In scientific notation, all numbers reported in front of the multiplication sign are significant
6.520 $××$ 10^–23 4 In scientific notation, all numbers reported in front of the multiplication sign are significant, including zeros.
Significant Figures in Calculations
When combining measurements with different degrees of accuracy and precision, the number of significant digits in the final answer can be no greater than the number of significant digits in the least
precise measured value. There are two different rules, one for multiplication and division and another rule for addition and subtraction, as discussed below.
1. For multiplication and division: The answer should have the same number of significant figures as the starting value with the fewest significant figures. For example, the area of a circle can be
calculated from its radius using $A=π r 2 A=π r 2$. Let us see how many significant figures the area will have if the radius has only two significant figures, for example, r = 2.0 m. Then, using
a calculator that keeps eight significant figures, you would get
$A= π r 2 = ( 3.1415927... ) × ( 2.0 m ) 2 = 4.5238934 m 2 . A= π r 2 = ( 3.1415927... ) × ( 2.0 m ) 2 = 4.5238934 m 2 .$
But because the radius has only two significant figures, the area calculated is meaningful only to two significant figures or
$A= 4.5 m 2 A= 4.5 m 2$
even though the value of $ππ$ is meaningful to at least eight digits.
2. For addition and subtraction: The answer should have the same number places (e.g. tens place, ones place, tenths place, etc.) as the least-precise starting value. Suppose that you buy 7.56 kg of
potatoes in a grocery store as measured with a scale having a precision of 0.01 kg. Then you drop off 6.052 kg of potatoes at your laboratory as measured by a scale with a precision of 0.001 kg.
Finally, you go home and add 13.7 kg of potatoes as measured by a bathroom scale with a precision of 0.1 kg. How many kilograms of potatoes do you now have, and how many significant figures are
appropriate in the answer? The mass is found by simple addition and subtraction:
$7.56 kg −6.052 kg +13.7 kg _ 15.208 kg 7.56 kg −6.052 kg +13.7 kg _ 15.208 kg$
The least precise measurement is 13.7 kg. This measurement is expressed to the 0.1 decimal place, so our final answer must also be expressed to the 0.1 decimal place. Thus, the answer should be
rounded to the tenths place, giving 15.2 kg. The same is true for non-decimal numbers. For example,
$6527.23+2=6528.23=6528. 6527.23+2=6528.23=6528.$
We cannot report the decimal places in the answer because 2 has no decimal places that would be significant. Therefore, we can only report to the ones place.
It is a good idea to keep extra significant figures while calculating, and to round off to the correct number of significant figures only in the final answers. The reason is that small errors
from rounding while calculating can sometimes produce significant errors in the final answer. As an example, try calculating $5,098−( 5.000 )×( 1,010 ) 5,098−( 5.000 )×( 1,010 )$ to obtain a
final answer to only two significant figures. Keeping all significant during the calculation gives 48. Rounding to two significant figures in the middle of the calculation changes it to $5,100 –
(5.000) × (1,000) = 100, 5,100 – (5.000) × (1,000) = 100,$ which is way off. You would similarly avoid rounding in the middle of the calculation in counting and in doing accounting, where many
small numbers need to be added and subtracted accurately to give possibly much larger final numbers.
Significant Figures in this Text
In this textbook, most numbers are assumed to have three significant figures. Furthermore, consistent numbers of significant figures are used in all worked examples. You will note that an answer
given to three digits is based on input good to at least three digits. If the input has fewer significant figures, the answer will also have fewer significant figures. Care is also taken that the
number of significant figures is reasonable for the situation posed. In some topics, such as optics, more than three significant figures will be used. Finally, if a number is exact, such as the 2 in
the formula, $c=2πr c=2πr$, it does not affect the number of significant figures in a calculation.
Worked Example
Approximating Vast Numbers: a Trillion Dollars
The U.S. federal deficit in the 2008 fiscal year was a little greater than $10 trillion. Most of us do not have any concept of how much even one trillion actually is. Suppose that you were given a
trillion dollars in $100 bills. If you made 100-bill stacks, like that shown in Figure 1.25, and used them to evenly cover a football field (between the end zones), make an approximation of how high
the money pile would become. (We will use feet/inches rather than meters here because football fields are measured in yards.) One of your friends says 3 in., while another says 10 ft. What do you
When you imagine the situation, you probably envision thousands of small stacks of 100 wrapped $100 bills, such as you might see in movies or at a bank. Since this is an easy-to-approximate quantity,
let us start there. We can find the volume of a stack of 100 bills, find out how many stacks make up one trillion dollars, and then set this volume equal to the area of the football field multiplied
by the unknown height.
1. Calculate the volume of a stack of 100 bills. The dimensions of a single bill are approximately 3 in. by 6 in. A stack of 100 of these is about 0.5 in. thick. So the total volume of a stack of
100 bills is
$volume of stack=length×width×height, volume of stack=6 in.×3 in.×0.5 in., volume of stack=9 in . 3 . volume of stack=length×width×height, volume of stack=6 in.×3 in.×0.5 in., volume of stack=9
in . 3 .$
2. Calculate the number of stacks. Note that a trillion dollars is equal to $1× 10 12 1× 10 12$, and a stack of one-hundred $100 100$ bills is equal to $10,000, 10,000,$ or $1× 10 4 1× 10 4$. The
number of stacks you will have is
1.3$1 × 10 12 (a trillion dollars) / 1 × 10 4 per stack = 1 × 10 8 stacks. 1 × 10 12 (a trillion dollars) / 1 × 10 4 per stack = 1 × 10 8 stacks.$
3. Calculate the area of a football field in square inches. The area of a football field is $100 yd× 50 yd 100 yd× 50 yd$, which gives $5,000 yd 2 5,000 yd 2$. Because we are working in inches, we
need to convert square yards to square inches
$Area = 5,000 yd 2 × 3 ft 1yd × 3 ft 1yd × 12 in. 1 foot × 12 in. 1 foot = 6,480,000 in . 2 , Area≈ 6 × 10 6 in . 2 . Area = 5,000 yd 2 × 3 ft 1yd × 3 ft 1yd × 12 in. 1 foot × 12 in. 1 foot =
6,480,000 in . 2 , Area≈ 6 × 10 6 in . 2 .$
This conversion gives us $6 × 10 6 in . 2 6 × 10 6 in . 2$ for the area of the field. (Note that we are using only one significant figure in these calculations.)
4. Calculate the total volume of the bills. The volume of all the $100-bill stacks is $9 in . 3 / stack × 10 8 stacks = 9 × 10 8 in . 3 9 in . 3 / stack × 10 8 stacks = 9 × 10 8 in . 3$
5. Calculate the height. To determine the height of the bills, use the following equation
$volume of bills = area of field × height of money Height of money = volume of bills area of field Height of money = 9 × 10 8 in . 3 6 × 10 6 in . 2 = 1 .33 × 10 2 in. Height of money = 1×10 2
in. = 100 in. volume of bills = area of field × height of money Height of money = volume of bills area of field Height of money = 9 × 10 8 in . 3 6 × 10 6 in . 2 = 1 .33 × 10 2 in. Height of
money = 1×10 2 in. = 100 in.$
The height of the money will be about 100 in. high. Converting this value to feet gives
$100 in.× 1ft 12 in. =8.33 ft≈8 ft. 100 in.× 1ft 12 in. =8.33 ft≈8 ft.$
The final approximate value is much higher than the early estimate of 3 in., but the other early estimate of 10 ft (120 in.) was roughly correct. How did the approximation measure up to your first
guess? What can this exercise tell you in terms of rough guesstimates versus carefully calculated approximations?
In the example above, the final approximate value is much higher than the first friend’s early estimate of 3 in. However, the other friend’s early estimate of 10 ft. (120 in.) was roughly correct.
How did the approximation measure up to your first guess? What can this exercise suggest about the value of rough guesstimates versus carefully calculated approximations?
Graphing in Physics
Graphing in Physics
Most results in science are presented in scientific journal articles using graphs. Graphs present data in a way that is easy to visualize for humans in general, especially someone unfamiliar with
what is being studied. They are also useful for presenting large amounts of data or data with complicated trends in an easily-readable way.
One commonly-used graph in physics and other sciences is the line graph, probably because it is the best graph for showing how one quantity changes in response to the other. Let’s build a line graph
based on the data in Table 1.5, which shows the measured distance that a train travels from its station versus time. Our two variables, or things that change along the graph, are time in minutes, and
distance from the station, in kilometers. Remember that measured data may not have perfect accuracy.
Table 1.5
Time (min) Distance from Station (km)
1. Draw the two axes. The horizontal axis, or x-axis, shows the independent variable, which is the variable that is controlled or manipulated. The vertical axis, or y-axis, shows the dependent
variable, the non-manipulated variable that changes with (or is dependent on) the value of the independent variable. In the data above, time is the independent variable and should be plotted on
the x-axis. Distance from the station is the dependent variable and should be plotted on the y-axis.
2. Label each axes on the graph with the name of each variable, followed by the symbol for its units in parentheses. Be sure to leave room so that you can number each axis. In this example, use Time
(min) as the label for the x-axis.
3. Next, you must determine the best scale to use for numbering each axis. Because the time values on the x-axis are taken every 10 minutes, we could easily number the x-axis from 0 to 70 minutes
with a tick mark every 10 minutes. Likewise, the y-axis scale should start low enough and continue high enough to include all of the distance from station values. A scale from 0 km to 160 km
should suffice, perhaps with a tick mark every 10 km.
In general, you want to pick a scale for both axes that 1) shows all of your data, and 2) makes it easy to identify trends in your data. If you make your scale too large, it will be harder to see
how your data change. Likewise, the smaller and more fine you make your scale, the more space you will need to make the graph. The number of significant figures in the axis values should be
coarser than the number of significant figures in the measurements.
4. Now that your axes are ready, you can begin plotting your data. For the first data point, count along the x-axis until you find the 10 min tick mark. Then, count up from that point to the 10 km
tick mark on the y-axis, and approximate where 22 km is along the y-axis. Place a dot at this location. Repeat for the other six data points (Figure 1.26).
5. Add a title to the top of the graph to state what the graph is describing, such as the y-axis parameter vs. the x-axis parameter. In the graph shown here, the title is train motion. It could also
be titled distance of the train from the station vs. time.
6. Finally, with data points now on the graph, you should draw a trend line (Figure 1.27). The trend line represents the dependence you think the graph represents, so that the person who looks at
your graph can see how close it is to the real data. In the present case, since the data points look like they ought to fall on a straight line, you would draw a straight line as the trend line.
Draw it to come closest to all the points. Real data may have some inaccuracies, and the plotted points may not all fall on the trend line. In some cases, none of the data points fall exactly on
the trend line.
Analyzing a Graph Using Its Equation
One way to get a quick snapshot of a dataset is to look at the equation of its trend line. If the graph produces a straight line, the equation of the trend line takes the form
$y=mx+b. y=mx+b.$
The b in the equation is the y-intercept while the m in the equation is the slope. The y-intercept tells you at what y value the line intersects the y-axis. In the case of the graph above, the y
-intercept occurs at 0, at the very beginning of the graph. The y-intercept, therefore, lets you know immediately where on the y-axis the plot line begins.
The m in the equation is the slope. This value describes how much the line on the graph moves up or down on the y-axis along the line’s length. The slope is found using the following equation
$m= Y 2 − Y 1 X 2 − X 1 . m= Y 2 − Y 1 X 2 − X 1 .$
In order to solve this equation, you need to pick two points on the line (preferably far apart on the line so the slope you calculate describes the line accurately). The quantities Y[2] and Y[1]
represent the y-values from the two points on the line (not data points) that you picked, while X[2] and X[1] represent the two x-values of the those points.
What can the slope value tell you about the graph? The slope of a perfectly horizontal line will equal zero, while the slope of a perfectly vertical line will be undefined because you cannot divide
by zero. A positive slope indicates that the line moves up the y-axis as the x-value increases while a negative slope means that the line moves down the y-axis. The more negative or positive the
slope is, the steeper the line moves up or down, respectively. The slope of our graph in Figure 1.26 is calculated below based on the two endpoints of the line
$m = Y 2 − Y 1 X 2 − X 1 m = (80 km) – (20 km) (40 min) – (10 min) m = 60 km 30 min m = 2.0 km/min. m = Y 2 − Y 1 X 2 − X 1 m = (80 km) – (20 km) (40 min) – (10 min) m = 60 km 30 min m = 2.0 km/min.$
Equation of line:$y=( 2.0 km/min )x+0 y=( 2.0 km/min )x+0$
Because the x axis is time in minutes, we would actually be more likely to use the time t as the independent (x-axis) variable and write the equation as
1.4$y=( 2.0 km/min )t+0. y=( 2.0 km/min )t+0.$
The formula $y=mx+b y=mx+b$ only applies to linear relationships, or ones that produce a straight line. Another common type of line in physics is the quadratic relationship, which occurs when one of
the variables is squared. One quadratic relationship in physics is the relation between the speed of an object its centripetal acceleration, which is used to determine the force needed to keep an
object moving in a circle. Another common relationship in physics is the inverse relationship, in which one variable decreases whenever the other variable increases. An example in physics is
Coulomb’s law. As the distance between two charged objects increases, the electrical force between the two charged objects decreases. Inverse proportionality, such the relation between x and y in the
$y=k/x, y=k/x,$
for some number k, is one particular kind of inverse relationship. A third commonly-seen relationship is the exponential relationship, in which a change in the independent variable produces a
proportional change in the dependent variable. As the value of the dependent variable gets larger, its rate of growth also increases. For example, bacteria often reproduce at an exponential rate when
grown under ideal conditions. As each generation passes, there are more and more bacteria to reproduce. As a result, the growth rate of the bacterial population increases every generation (Figure
Using Logarithmic Scales in Graphing
Sometimes a variable can have a very large range of values. This presents a problem when you’re trying to figure out the best scale to use for your graph’s axes. One option is to use a logarithmic
(log) scale. In a logarithmic scale, the value each mark labels is the previous mark’s value multiplied by some constant. For a log base 10 scale, each mark labels a value that is 10 times the value
of the mark before it. Therefore, a base 10 logarithmic scale would be numbered: 0, 10, 100, 1,000, etc. You can see how the logarithmic scale covers a much larger range of values than the
corresponding linear scale, in which the marks would label the values 0, 10, 20, 30, and so on.
If you use a logarithmic scale on one axis of the graph and a linear scale on the other axis, you are using a semi-log plot. The Richter scale, which measures the strength of earthquakes, uses a
semi-log plot. The degree of ground movement is plotted on a logarithmic scale against the assigned intensity level of the earthquake, which ranges linearly from 1-10 (see Figure 1.29 (a)).
If a graph has both axes in a logarithmic scale, then it is referred to as a log-log plot. The relationship between the wavelength and frequency of electromagnetic radiation such as light is usually
shown as a log-log plot (Figure 1.29 (b)). Log-log plots are also commonly used to describe exponential functions, such as radioactive decay.
Virtual Physics
Graphing Lines
In this simulation you will examine how changing the slope and y-intercept of an equation changes the appearance of a plotted line. Select slope-intercept form and drag the blue circles along the
line to change the line’s characteristics. Then, play the line game and see if you can determine the slope or y-intercept of a given line.
Grasp Check
How would the following changes affect a line that is neither horizontal nor vertical and has a positive slope?
1. increase the slope but keeping the y-intercept constant
2. increase the y-intercept but keeping the slope constant
a. Increasing the slope will cause the line to rotate clockwise around the y-intercept. Increasing the y-intercept will cause the line to move vertically up on the graph without changing the
line’s slope.
b. Increasing the slope will cause the line to rotate counter-clockwise around the y-intercept. Increasing the y-intercept will cause the line to move vertically up on the graph without changing
the line’s slope.
c. Increasing the slope will cause the line to rotate clockwise around the y-intercept. Increasing the y-intercept will cause the line to move horizontally right on the graph without changing
the line’s slope.
d. Increasing the slope will cause the line to rotate counter-clockwise around the y-intercept. Increasing the y-intercept will cause the line to move horizontally right on the graph without
changing the line’s slope.
Check Your Understanding
Check Your Understanding
Exercise 12
Identify some advantages of metric units.
a. Conversion between units is easier in metric units.
b. Comparison of physical quantities is easy in metric units.
c. Metric units are more modern than English units.
d. Metric units are based on powers of 2.
Exercise 13
The length of an American football field is $100 yd$, excluding the end zones. How long is the field in meters? Round to the nearest $0.1 m$.
a. $10.2 m$
b. $91.4 m$
c. $109.4 m$
d. $328.1 m$
Exercise 14
The speed limit on some interstate highways is roughly $100 km/h$. How many miles per hour is this if $1.0 mile$ is about $1.609 km$?
a. 0.1 mi/h
b. 27.8 mi/h
c. 62 mi/h
d. 160 mi/h
Exercise 15
Briefly describe the target patterns for accuracy and precision and explain the differences between the two.
a. Precision states how much repeated measurements generate the same or closely similar results, while accuracy states how close a measurement is to the true value of the measurement.
b. Precision states how close a measurement is to the true value of the measurement, while accuracy states how much repeated measurements generate the same or closely similar result.
c. Precision and accuracy are the same thing. They state how much repeated measurements generate the same or closely similar results.
d. Precision and accuracy are the same thing. They state how close a measurement is to the true value of the measurement. | {"url":"https://texasgateway.org/resource/13-language-physics-physical-quantities-and-units?book=79076&binder_id=78091","timestamp":"2024-11-14T11:24:02Z","content_type":"text/html","content_length":"199760","record_id":"<urn:uuid:90a0bce4-0305-406f-9772-a42fdbb60132>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00688.warc.gz"} |
Introduction to Quantum Mechanics
Quantum mechanics is a fundamental theory in physics that describes the behavior of particles at the smallest scales, such as atoms and subatomic particles. It provides a framework for understanding
phenomena that classical physics cannot explain, such as the behavior of particles in superposition and the concept of quantum entanglement.
How Does Quantum Mechanics Work?
Quantum mechanics works on the principles of wave-particle duality, quantization, and uncertainty. It describes particles as wavefunctions, which are mathematical functions that provide the
probability of finding a particle in a particular state or position. Key principles include:
• Wave-Particle Duality: Particles exhibit both wave-like and particle-like properties. For example, electrons can create interference patterns like waves but also collide like particles.
• Quantization: Certain properties, such as energy, can only take on discrete values. This is seen in the quantized energy levels of electrons in atoms.
• Uncertainty Principle: Formulated by Werner Heisenberg, it states that certain pairs of physical properties, like position and momentum, cannot both be precisely measured simultaneously.
What Are the Key Experiments in Quantum Mechanics?
Several experiments have been crucial in developing quantum mechanics:
• Double-Slit Experiment: Demonstrates wave-particle duality by showing that particles like electrons create interference patterns when not observed, but behave like particles when observed.
• Photoelectric Effect: Albert Einstein's work on this phenomenon showed that light can be quantized into photons, providing evidence for the particle nature of light.
• Quantum Entanglement: Experiments such as those by Alain Aspect demonstrate that particles can be entangled, meaning the state of one particle instantly affects the state of another, regardless
of distance.
What Are the Applications of Quantum Mechanics?
Quantum mechanics has led to several important applications:
• Semiconductors: The behavior of electrons in semiconductors is described by quantum mechanics, enabling the development of modern electronics like transistors and integrated circuits.
• Quantum Computing: Quantum mechanics provides the basis for quantum computers, which use qubits to perform complex computations much faster than classical computers.
• Medical Imaging: Techniques such as MRI rely on principles of quantum mechanics to provide detailed images of the body's internal structures.
• Lasers: The operation of lasers is based on quantum mechanical principles, where electrons transition between energy levels to emit coherent light.
What Are the Challenges in Quantum Mechanics?
Quantum mechanics, despite its successes, presents several challenges:
• Interpretation: Various interpretations of quantum mechanics, such as the Copenhagen and Many-Worlds interpretations, attempt to explain the nature of reality but remain a topic of debate.
• Complexity: The mathematics and concepts of quantum mechanics can be highly abstract and complex, making it challenging to understand and apply.
• Experimental Limitations: Creating and manipulating quantum systems often requires extremely precise conditions, such as very low temperatures or isolated environments.
Quantum mechanics is a revolutionary theory that has fundamentally altered our understanding of the physical world at the smallest scales. It has led to numerous technological advancements and
continues to be a rich field of research. While the theory poses conceptual and practical challenges, its contributions to science and technology underscore its significance and enduring impact. | {"url":"https://www.sharpcoderblog.com/blog/introduction-to-quantum-mechanics","timestamp":"2024-11-06T05:54:43Z","content_type":"text/html","content_length":"24246","record_id":"<urn:uuid:1d829c4a-9140-4c19-8de4-38a531408c02>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00141.warc.gz"} |
Inverse of a Matrix
Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum. The content of Matrix Algebra in many cases
is taught just in time where needed. This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra. In this series of blogs, we bring to you ten topics
that are of immediate and intermediate interest for Matrix Algebra. Here is the
seventh topic
where we talk about solving a set of simultaneous linear equations using the LU decomposition method. First, the LU decomposition method is discussed along with its motivation. The LU decomposition
method to find the inverse of a square matrix is discussed. Get all the resources in form of textbook content, lecture videos, multiple choice test, problem set, and PowerPoint presentation.
LU Decomposition Method
This post is brought to you by
• Holistic Numerical Methods Open Course Ware:
• the textbooks on
• the Massive Open Online Course (MOOCs) available at
How much computational time does it take to find the inverse of a square matrix using Gauss Jordan method? Part 1 of 2.
Problem Statement
How much computational time does it take to find the inverse of a square matrix using Gauss Jordan method? Part 1 of 2.
To understand the solution, you should be familiar with the Gauss Jordan method of finding the inverse of a square matrix. Peter Young of UCSC describes it briefly in this pdf file while if you like
watching an example via a video, you can see PatrickJMT doing so. You also need to read a previous blog where we calculated the computational time needed for the forward elimination steps on a
square matrix in the Naïve Gauss elimination method. We are now ready to estimate the computational time required for Gauss Jordan method of finding the inverse of a square matrix.
This post is brought to you by
• Holistic Numerical Methods Open Course Ware:
□ Numerical Methods for the STEM undergraduate at http://nm.MathForCollege.com;
□ Introduction to Matrix Algebra for the STEM undergraduate at http://ma.MathForCollege.com
• the textbooks on
• the Massive Open Online Course (MOOCs) available at
LU Decomposition takes more computational time than Gaussian Elimination! What gives?
If you are solving a set of simultaneous linear equations, LU Decomposition method (involving forward elimination, forward substitution and back substitution) would use more computational time than
Gaussian elimination (involving forward elimination and back substitution, but NO forward substitution).
So why use and waste time talking about LU Decomposition?
Because, LU Decomposition is computationally more efficient than Gaussian elimination when we are solving several sets of equations with the same coefficient matrix but different right hand sides.
Case in point is when you are finding the inverse of a matrix [A]. If one is trying to find the inverse of nxn matrix, then it implies that one needs to solve n sets of simultaneous linear equations
of [A][X]=[C] form with the n right hand sides [C] being the n columns of the nxn identity matrix, while the coefficient matrix [A] stays the same.
The computational time taken for solving a single set of n simultaneous linear equations is as follows:
• Forward elimination: Proportional to $\frac{n^3}{3}$
• Back substitution: Proportional to $\frac{n^2}{2}$
• Forward substitution: Proportional to $\frac{n^2}{2}$
So for LU decomposition method used to find the inverse of a matrix, the computational time is proportional to $\frac{n^3}{3}+n( \frac{n^2}{2}+\frac{n^2}{2})=\frac{4n^3}{3}$. Remember that the
forward elimination only needs to be done only once on [A] to generate the L and U matrices for the LU decomposition method. However the forward and back substitution need to be done n times.
Now for Gaussian Elimination used to find the inverse of a matrix, the computational time is proportional to $n \frac{n^3}{3} +n \frac{n^2}{2}=\frac{n^4}{3}+\frac{n^3}{2}$. Remember that both the
forward elimination and back substitution need to be done n times.
Hence for large n, for LU Decomposition, the computational time is proportional to $\frac{4n^3}{3}$, while for Gaussian Elimination, the computational time is proportional to $\frac{n^4}{3}$. So for
large n, the ratio of the computational time for Gaussian elimination to computational for LU Decomposition is ${\frac{n^4}{3}}/{\frac{4n^3}{3}}=\frac{n}{4}$
As an example, to find the inverse of a 2000×2000 coefficient matrix by Gaussian Elimination would take n/4=2000/4=500 times the time it would take to find the inverse by LU Decomposition.
So are you convinced now why we use LU Decomposition in certain cases? For textbook notes on this issue, examples of LU Decomposition to solve a set of equations, and finding inverse of a matrix
using LU Decomposition, click here.
Reference: Numerical Methods for the STEM Undergraduate, http://nm.mathforcollege.com/topics/lu_decomposition.html
This post brought to you by Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at http://nm.mathforcollege.com | {"url":"https://blog.autarkaw.com/tag/inverse-of-a-matrix/","timestamp":"2024-11-06T13:53:39Z","content_type":"text/html","content_length":"44477","record_id":"<urn:uuid:eed2f2ff-6228-4afa-ad49-86c60ab1ccd1>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00414.warc.gz"} |
The Differences Between Laminar and Turbulent Flow
Navigating the complex landscape of fluid dynamics unveils a dichotomy between laminar and turbulent flow phenomena. This scholarly exploration delves into their distinct characteristics, spanning
from the ordered trajectories of laminar flow to the chaotic eddies of turbulent flow. Investigating their implications across natural and engineered systems unveils the critical role of the Reynolds
number in delineating flow regimes. Furthermore, we scrutinize the computational methodologies offered by ANSYS Fluent to simulate these phenomena, elucidating the nuanced intricacies of each viscous
This academic discourse aims to deepen our understanding of fluid dynamics and its computational simulations, fostering advancements in engineering and scientific domains.
What is laminar flow?
In a laminar flow, fluid particles move along smooth, non-crossing paths in distinct layers. This type of flow is predictable and can be accurately modeled mathematically. Essentially, laminar flow
is characterized by well-defined, orderly paths, with fluid particles moving in a structured manner.
Have you ever seen engine oil spilled? In some cases, you will encounter a situation like Figure 1. Even though you know the fluid is moving, it appears solid. This phenomenon occurs because the
movement of the oil is a laminar flow, moving completely uniformly, so no change is seen in the flow, giving the illusion of solidity.
Figure 1. Laminar flow of engine oil.
In Figure 2, the fluid inside the tube exhibits laminar flow. The ink is injected into the flow by a syringe. Due to the laminar flow, the ink follows a specific path and does not mix with the fluid
immediately. The ink may slowly mix with the fluid through diffusion.
Figure 2. Injecting ink into a laminar flow.
What is Turbulent Flow?
In turbulent flow, the fluid moves irregularly. This type of flow is characterized by disordered motion with eddies, swirls, and vortices, leading to high levels of turbulence and mixing.
Figure 3. Water vapor in this image exhibits turbulent flow.
In Figure 4, the fluid inside the tube exhibits turbulent flow. The ink is injected into the flow by a syringe. Due to the turbulent flow, the ink mixes quickly with the fluid.
Figure 4. Injecting ink into a turbulent flow.
Turbulent flow is chaotic in nature, making many of its details unpredictable. For instance, the speed of the flow at a specific point varies unpredictably over time. However, experimental tests
reveal that the average speed over time remains constant.
Example of laminar flow
• Honey dripping: The slow, viscous honey flows smoothly in layers as it drizzles down, illustrating laminar flow.
• Flow in microfluidic devices: Laminar flow is commonly employed in microfluidic devices across fields like biotechnology and chemistry due to its predictability and limited fluid mixing.
• Blood Flow in Capillaries: Blood flow in the body’s small blood vessels, like capillaries, typically demonstrates laminar flow, driven by their low velocities and small diameters.
Example of turbulent flow
• Car Exhaust: When driving a car, the exhaust emitted from the rear is an example of turbulent flow. The exhaust gas contains particles in random motion.
• Waterfalls: The flow of water over a waterfall can transition from laminar to turbulent as it descends and interacts with the air and rocks below.
• River Flow: Rivers frequently demonstrate turbulent flow, attributed to the presence of rocks, bends, and fluctuations in depth that disturb the otherwise smooth movement of water.
Laminar flow Vs turbulent flow Reynolds number
Scientists and engineers use the dimensionless Reynolds number to determine whether a flow is in a laminar or turbulent regime.
Where ρ [Kg.m^-3] is the fluid density, V [m.s^-1] is the characteristic velocity of the flow, L [m] is a characteristic length scale of the flow (e.g., pipe diameter for flow in a pipe) and μ [Pa.s]
is the dynamic viscosity of the fluid.
The Reynolds number is interpreted as the ratio of inertial forces to viscous forces. Although this number is not the exact ratio of these forces, as it increases, the influence of inertial forces
relative to viscous forces also increases, leading to a rise in flow turbulence.
In the movement of any fluid, several factors contribute to irregular fluid flow, including surface roughness, the presence of obstacles, and variations in fluid properties at different pressures and
temperatures. The irregularities in fluid flow are mitigated by viscosity and friction. However, as the Reynolds number increases and the inertia relative to viscosity rises, friction becomes less
effective in suppressing these irregularities, leading to turbulent flow.
In a fluid phenomenon, such as fluid flow inside a pipe with a circular cross-section, the Reynolds number is increased from low to high values. Based on experimental results, researchers determine
the Reynolds number at which the flow becomes turbulent.
For flow in a circular pipe, the critical Reynolds number that separates laminar and turbulent flow is generally accepted to be around Re = 2300. It is not the case that the flow regime suddenly
changes at Re = 2300; rather, the flow regime changes gradually. For flow in a circular pipe, consider the flow as laminar when Re = 100 and as turbulent when Re = 4000. In the case where Re = 2100,
be cautious with your numerical calculations; considering the flow as either turbulent or laminar will result in errors. In such cases, it is better to rely on experimental methods.
The Reynolds number is used in every fluid and heat transfer phenomenon where the flow regime is important. These phenomena include flow separation, drag force, convection heat transfer, and more.
Difference between laminar, turbulent, and transitional flow
In various sources discussing fluid flow regimes, a transitional zone is introduced to improve accuracy between laminar and turbulent zones. In this region, the fluid gradually becomes more turbulent
as the Reynolds number increases until it reaches a state of complete turbulence.
For flow in a circular pipe, Re < 2300 is considered the laminar zone, 2300 < Re < 4000 is the transition zone, and Re > 4000 is the turbulent zone.
Laminar vs turbulent flow in lakes and rivers
In lakes and rivers, the distinction between laminar and turbulent flows shapes the dynamics of water movement and profoundly influences aquatic ecosystems. Laminar flow, characterized by smooth and
orderly movement of fluid particles, is relatively rare in natural water bodies due to the typically low velocities and high viscosities required.
It may occur in calm, shallow waters or engineered channels with low flow rates, impacting sediment transport and nutrient distribution. In contrast, turbulent flow, with its chaotic and irregular
motion, dominates in rivers, especially in fast-moving streams, rapids, and areas with obstacles. Turbulent flow facilitates sediment transport, enhances mixing for nutrient distribution, and
dissipates energy through friction. Understanding these flow regimes is essential for managing and conserving aquatic environments.
What is Rayleigh number?
The Rayleigh number (Ra) is a dimensionless number in fluid mechanics that helps predict natural convection in fluids. It plays a crucial role in determining whether natural convection in a fluid
will be laminar or turbulent. It is defined as:
Where g [m.s^-2] is the acceleration due to gravity, β [1.K^-1] is the thermal expansion coefficient of the fluid, x [m] is the characteristic length scale, α [m^2.s] is the thermal diffusivity of
the fluid, ν [m^2.s] is the kinematic viscosity of the fluid, T[S ][K] is the surface temperature and T[∞] [K] is fluid temperature.
The Rayleigh number is expressed as a measure of the ratio of the buoyancy force to the viscous force. It should be noted that it is not the exact ratio of these two forces, but it indicates that as
the Rayleigh number increases, the ratio of the buoyancy force to the viscous force also increases.
Figure 5. Free convection boundary layer transition on a vertical plate, from “Fundamentals of heat and mass transfer” by Frank P. Incropera et al.
In natural convection, the flow regime is determined by the Rayleigh number, not the Reynolds number. Figure 5 shows natural convection flow on a vertical plane. As can be seen, the flow regime
changes at Ra = 10^9, which is called the critical Rayleigh number. Natural convection currents are created due to the temperature difference in the fluid, which causes a density difference and the
resulting buoyancy force.
The flow regime in ANSYS Fluent
In Ansys Fluent, the user determines the flow regime by calculating the Reynolds number (or Rayleigh number in case of natural convection) outside the software. Based on this calculation, it is
determined whether the flow simulation is laminar or turbulent.
There are several methods in Fluent to simulate turbulent flow. Each of them has its advantages and disadvantages and is suitable for specific applications.
Under the title of “Viscous Models” in the software, all the simulation methods for the fluid flow regime available in the software are:
• Inviscid: In this model, it is assumed that fluid has no viscosity.
• Laminar: There is just one model for viscous laminar flows and it solves ordinary Navier-Stokes equation. The following models are for simulating turbulent flow
• Spalart-Allmaras: It is a Reynolds-Averaged Navier-Stokes (RANS) model, suitable for aerodynamics applications.
• k-epsilon: It is a Reynolds-Averaged Navier-Stokes (RANS) model, suitable for general-purpose turbulence modeling in various applications.
• k-omega: It is a Reynolds-Averaged Navier-Stokes (RANS) model, suitable for modeling near-wall and low-Reynolds number turbulence.
• Transition k-kl-omega: It is a Reynolds-Averaged Navier-Stokes (RANS) model, suitable for modeling transitional flows from laminar to turbulent.
• Transition SST: This model is a combination of the k-epsilon and k-omega turbulence models.
• Reynolds Stress it is a RANS model suitable for simulating highly anisotropic turbulent flows.
• Scale-Adaptive simulation: A hybrid RANS-LES model suitable for resolving different turbulent scales.
• Detached Eddy Simulation: A hybrid RANS-LES model suitable for resolving eddies near walls.
• Large Eddy Simulation(LES): A method resolving large turbulent structures, suitable for high-resolution simulations. It is not a RANS model.
The details of each viscous model can be adjusted in the software. For example, for the k-epsilon model, there are three modes: standard, RNG, and realizable.
Figure 6. Viscous models, ANSYS Fluent
In conclusion, our exploration of laminar and turbulent flow phenomena reveals their fundamental importance across various disciplines, from fluid dynamics to engineering and beyond.
By understanding the distinctions between these flow regimes and the critical role of the Reynolds number in their delineation, we gain valuable insights into complex fluid behavior. Moreover, the
computational simulations offered by ANSYS Fluent provide powerful tools for modeling and analyzing these phenomena with precision.
As we continue to delve into the intricacies of fluid dynamics, this knowledge serves as a cornerstone for advancements in research, engineering design, and technological innovation.
Click to access the Turbomachinery CFD
Leave a Comment | {"url":"https://cfdland.com/the-differences-between-laminar-and-turbulent-flow/","timestamp":"2024-11-04T12:05:00Z","content_type":"text/html","content_length":"418740","record_id":"<urn:uuid:ac0d86a3-1421-4eab-a7f7-29191edb18e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00354.warc.gz"} |
Photomath - Solve Math Problems With Image | Mathful
The Ultimate PhotoMath Tool
Detailed Answers
Get step-by-step answers that promote a greater understanding of how to solve countless systems of mathematical equations with ease.
Swift Solutions
Using powerful AI and image analysis technology, Photomath can instantly identify and interpret any math equation to facilitate seamless problem-solving in seconds.
Accurate Results
Photomath is trained on vast mathematical datasets to guarantee unparalleled precision and detail when solving even the most complex math equations.
Advanced AI Calculator
Photomath has a step-by-step AI calculator that lets you input various equations, expressions, and functions to get accurate answers in algebra, calculus, etc.
Access Photomath For Comprehensive AI Math Assistance
Whether you are in grade school, high school, college, or post-graduate studies, our photomath solver provides expert-level math assistance across a wide range of math disciplines. These include:
• Elementary math
• Arithmetic
• Word problems
• Geometry
• Pre-algebra and algebra
• Pre-calculus and calculus
• Trigonometry
• And much more!
How Does Photomath Work?
Photomath is designed with user simplicity in mind, so it only takes a few quick steps to get the math answers you need. You can follow them below.
• 01
Step 1: Input Your Math Problem
Take a photo of the math problem or upload an existing image.
• 02
Step 2: Let Photomath Analyze It
Our AI analyzes the image and recognizes the equation.
• 03
Step 3: Review The Math Solution
Get a step-by-step solution explained in detail.
Frequently Asked Questions
• What is Photomath?
Photomath is an AI-powered math solver powered by advanced image recognition technology. When a user takes a photo of a math problem—whether it's handwritten, printed, or displayed on a
screen—Photomath analyzes the image and identifies the mathematical operations involved. It then generates accurate solutions while breaking down the process into clear, understandable steps.
• Why choose Photomath?
Our photo math solver employs advanced image and formula recognition to instantly and accurately interpret mathematical expressions from any scans you upload. This facilitates seamless
problem-solving with detailed step-by-step math solutions across various branches of math.
• Does photo math offer native language support?
Yes. We designed Photomath to cater to all global users. For this reason, it can be effectively utilized in over 50+ languages including Spanish, French, Italian, German, Japanese, Mandarin,
Korean, and several others.
• Are Photomath's answers accurate and reliable?
Absolutely. Our AI photo math solver utilizes state-of-the-art AI algorithms specially trained on tons of mathematical data to tackle an extensive range of math problems, no matter their
complexity. As such, any solutions it delivers will be 100% accurate.
• Is our photo math solver free to use?
Yes. You can freely access our photo math solver at no charge via the free trial. You won't even have to submit any credit card information to start. But, once the free trial expires, you must
subscribe to a premium plan to keep using Photomath.
• What's the best photo math solver?
The best photo math solver available is HIX Tutor. This innovative tool excels in scanning and interpreting images of math problems, providing users with highly accurate solutions and detailed
explanations. HIX Tutor stands out due to its ability to break down complex problems into manageable steps, ensuring that users not only receive the correct answers but also understand the
methodology behind them.
Scan Images On Photomath For Instant Math Solutions!
Upload an image of any complex math problem to instantly get a quick, accurate, and detailed solution. Try our photo math solver for free today!
Elementary math
Word problems
Pre-algebra and algebra
Pre-calculus and calculus
And much more! | {"url":"https://mathful.com/photomath","timestamp":"2024-11-12T22:34:53Z","content_type":"text/html","content_length":"86674","record_id":"<urn:uuid:50c58632-a8c5-4da1-b951-aedc95ed3af7>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00371.warc.gz"} |
Outdated page
This page contains obsolete information about the VPM based VIATRA2 and preserved for archive purposes only.
The currently maintained wiki is available at http://wiki.eclipse.org/VIATRA
Overview: Graph patterns
Pattern Matching Semantics
Patterns may be composed in VTCL a complex way by using the find construct. Moreover, the injectivity of pattern matching can be further controlled by using the new shareable keyword as follows:
• Injective pattern matching (default): the default behavior of the pattern matcher is that two pattern variables cannot be bound to the same value (i.e. element in the model space). Explicit
pattern variable assignments (in the form of A=B) can enforce that the two variables do take the same value during pattern matching.
• Shareable (or non-injective) pattern matching: the injectivity condition is not checked for local pattern variables (thus two variables may be bound to the same value) unless a non-injectvity
constraint (in the form of A =/= B) is prescribed explicitly for a pair of variables.
The following examples highlight the semantic corner cases using of pattern composition and injective pattern matching. As a example, we use a simple state machine formalism (with states as entities
and transitions as relations), which potentially contain loop transitions (where the source and target state of a transition is the same).
// A and B should be different, i.e. loop transitions are not matched
pattern childPatternInj1(A, B) = {
state.transition(T, A, B);
// A and B may be equal, loop transitions are matched
shareable pattern childPatternSha1(A, B) = {
state.transition(T, A, B);
// Equivalent match set with childPatternInj1: A =/= B
pattern childPatternInj2(A, B) = {
find childPatternSha1(A, B);
// Equivalent match set with childPatternInj1: A =/= B
shareable pattern childPatternSha2(A, B) = {
find childPatternSha1(A, B);
A =/= B;
And now, let us present some more complex scenarios. As a general rule, the caller (parent) pattern may prescribe additional injectivity constraints for the local variables.
// Constraints: X =/= Y, Y =/Z, X =/= Z (thanks to the injectivity of the parent pattern)
pattern parentPattern1(X, Y, Z) = {
find childPatternInj1(X, Y);
find childPatternInj1(Y, Z);
// Constraints: X =/= Y, Y =/Z, X =/= Z (thanks to the injectivity of the parent pattern)
pattern parentPattern2(X, Y, Z) = {
find childPatternInj1(X, Y);
find childPatternSha1(Y, Z);
// Constraints: X =/= Y, Y =/Z, X =/= Z (thanks to the injectivity of the parent pattern)
pattern parentPattern3(X, Y, Z) = {
find childPatternSha1(X, Y);
find childPatternSha1(Y, Z);
// Constraints: X =/= Y, Y =/Z (thanks to the injectivity of the child pattern)
shareable pattern parentPattern4(X, Y, Z) = {
find childPatternInj1(X, Y);
find childPatternInj1(Y, Z);
// Constraints: X =/= Y (thanks to the injectivity of the child pattern)
shareable pattern parentPattern5(X, Y, Z) = {
find childPatternInj1(X, Y);
find childPatternSha1(Y, Z);
// Constraints: none
shareable pattern parentPattern6(X, Y, Z) = {
find childPatternSha1(X, Y);
find childPatternSha1(Y, Z); | {"url":"https://wiki.eclipse.org/VIATRA2/Examples/VTCL/GraphPattern","timestamp":"2024-11-14T13:58:40Z","content_type":"text/html","content_length":"32296","record_id":"<urn:uuid:cfbedaff-e3c7-4dc5-86f1-c26e96b544c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00038.warc.gz"} |
Symmetry is a property of some images, objects, and mathematical equations whereby reflections, rotations, or substitutions cause no change in properties or appearance. For example, the letter M is
symmetrical across a line drawn down its center, a ball is symmetrical under all possible rotations, and the equation y = x^2 (a parabola) is symmetrical under the substitution of -x for x. This
equation's mathematical symmetry is equivalent to its graph's physical symmetry. The ability of mathematical symmetries to reflect the physical symmetries of the real world is of great importance in
physics, especially particle physics.
Many real objects and forces at all size scales—subatomic particles, atoms, crystals, organisms, stars, and galaxies—exhibit symmetry, of which there are many kinds. Line or bilateral symmetry, the
simplest and most familiar, is the symmetry of by any figure or object that can be divided along a central line and then restored (geometrically) to wholeness by reflecting its remaining half in a
Symmetries are not only defined in terms of reflection across a line. A sphere, for example, can be rotated through any angle without changing its appearance, and in mathematics is said to possess O
(3) symmetry. The quantum field equations whose solutions describe the electron, which is, like a sphere, the same viewed from any direction, also have O(3) symmetry.
In particle physics, the mathematics of symmetry is an essential tool for producing an organized account of the confusing plethora of particles and forces observed in Nature and for making
predictions based on that account. An extension of the parabola example shows how it is possible for mathematical symmetry to lead to the prediction of new phenomena. Consider a system of two
equations, y = x^2 and y = 4. There are two values of x that allow both equations to be true at once, x = 2 and x = -2. The two (x, y) pairs (2, 4) and (-2, 4) are termed the solutions to this system
of two equations, because both both equations are simultaneously true if and only if x and y have these values. (The two solutions correspond to the points where a horizontal line, y = 4, would
intersect the two rising arms of the parabola.) If this system two equations constitued an extremely simple theory of matter, and if one of its two solutions corresponded to a known particle, say
with "spin" = x = 2 and "mass" = y = 4, then one might predict, based on the symmetry of the two solutions, that a particle with "spin" = -2 and "mass" = 4 should also exist. An analogous (though
more complex) process has actually led physicists to predict, seek, and find certain fundamental particles, including the Ω^– baryon and the Η^0 muon).
Symmetry, however, not only is a useful tool in mathematical physics, but has a profound connection to the laws of Nature. In 1915, German mathematician Emmy Noether (1882–1835) proved that every
conservation law corresponds to a mathematical symmetry. A conservation law is a statement that says that the total amount of some quantity remains unchanged (i.e., is conserved) in any physical
Momentum, for example, is conserved when objects exert force on each other; electric charge is also conserved. The laws (mathematical equations) that describe momentum and charge must, therefore,
display certain symmetries.
Noether's theorem works both ways: in the 1960s, a conserved quantum-mechanical quantity (unitary spin) was newly defined based on symmetries observed in the equations describing a class of
fundamental particles termed hadrons, and has since become an accepted aspect of particle physics. As physicists struggle today to determine whether the potentially all-embracing theory of "strings"
can truly account for all known physical phenomena, from quarks to gravity and the Big Bang, string theory's designers actively manipulate its symmetries in seeking to explore its implications.
Barnett, R. Michael, Henry Mühry, and Helen R. Quinn. The Charm of Strange Quarks. New York: Springer-Verlag, 2000.
Elliot, J.P., and P.G. Dawber. Symmetry in Physics. New York: Oxford University Press, 1979.
Silverman, Mark. Probing the Atom Princeton, NJ: Princeton University Press, 2000.
Additional topics | {"url":"https://science.jrank.org/pages/6673/Symmetry.html","timestamp":"2024-11-06T21:47:39Z","content_type":"text/html","content_length":"12851","record_id":"<urn:uuid:6769007e-ad2c-42d8-b691-aa58f6e31eb6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00686.warc.gz"} |
orksheets for 3rd Class
Recommended Topics for you
Algebra Solving equations
Mathematics algebra refresh
Explore Algebra Worksheets by Grades
Explore Algebra Worksheets for class 3 by Topic
Explore Other Subject Worksheets for class 3
Explore printable Algebra worksheets for 3rd Class
Algebra worksheets for Class 3 are an essential resource for teachers looking to help their students build a strong foundation in math. These worksheets provide a variety of engaging and challenging
problems that cover topics such as addition, subtraction, multiplication, and division, as well as basic algebraic concepts like variables, expressions, and equations. By incorporating these
worksheets into their lesson plans, teachers can ensure that their students are developing the necessary skills to tackle more advanced math concepts in the future. Furthermore, these worksheets can
be easily customized to cater to the individual needs and learning styles of each student, making them an invaluable tool for any Class 3 math teacher.
Quizizz is an excellent platform that offers a wide range of resources, including algebra worksheets for Class 3, to help teachers create interactive and engaging learning experiences for their
students. In addition to worksheets, Quizizz also features customizable quizzes, flashcards, and interactive games that can be used to reinforce key math concepts and assess student understanding.
Teachers can easily track student progress and identify areas where additional support may be needed, ensuring that every student has the opportunity to succeed in their math education. By
incorporating Quizizz into their teaching strategies, Class 3 teachers can provide their students with a fun and effective way to learn and practice essential algebra skills, setting them up for
success in their future math endeavors. | {"url":"https://quizizz.com/en-in/algebra-worksheets-class-3","timestamp":"2024-11-12T07:10:55Z","content_type":"text/html","content_length":"147820","record_id":"<urn:uuid:cf916e3c-cc15-400c-9226-6ad2cae09f4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00553.warc.gz"} |
Home Page
PROF. DR. ATTILA ASKAR
Dr. Attila Askar is currently a professor of mathematics Koç University. Prior to his current appointment, Dr. Askar served Koç University as professor of mathematics, Dean of the College of Arts and
Sciences, Provost and President. His previous academic appointments included positions at Bogaziçi University in Istanbul, Brown University, Princeton University, Paris University VI, Max-Planck
Institute in Göttingen and the Royal Institute of Technology in Stockholm. Attila Askar received recognitions which include the Junior Scientist; and Science awards of the National Research Council
(Tübitak), the Information Age Award of the Ministry of Culture and is a member of the National Academy of Sciences of Turkey. His recent research interests included scattering of classical and
quantum waves; wavelet analysis and molecular dynamics. He is the author of over eighty research journal articles and two books. Attila Askar received his Engineering Diploma from the Technical
University of Istanbul in 1966 and his PhD from Princeton University in USA in 1969.
Selected Journal Articles (since 1990)
1. "Nonlinear Surface Gravity waves: Continuous and Discontinuous Solutions", M. Can, A. Askar, Wave Motion 12, 485 (1990).
2. "Discrete-Continuum Hybrid model for Dynamics with Applications: Desorption of Adsorbates and relaxation of Lattice Inclusions", T. Thacher, H. A. Rabitz, A. Askar, J. Chem. Phys. 93, 4673 (1990).
3. "Convergence Properties of a Class of Boundary Element Approximations to Linear Diffusion Problems with Localized Nonlinear Reactions", A. Peirce, A. Askar, H.A. Rabitz, Numerical Methods for
Partial Differential Equations 6, 75 (1990).
4. "Discrete-Continuum Hybrid model for Gas-Surface Collisions I: Trajectories with Single, Multiple Collisions and Capture", A. Askar, H.A. Rabitz, Surface Science 245, 411 (1991).
5. "Discrete-Continuum Hybrid model for Gas-Surface Collisions II: Closed Form Perturbation Solutions for Single Collisions", A. Askar, H.A. Rabitz, Surface .Science 245, 425 (1991)
6. "Subsoil geology and soil amplification in Mexico Valley", P. Hadley, A. Askar, A.S. Cakmak, Int. J. Soil Dyn. Earthqu. Eng. 10, 101 (1991).
7. "Earthquake Wave Propagation in Layered Media dy Boundary Itegral Methods", P. Hadley, A. Askar, A.S. Cakmak, Int. J. Soil Dyn. Earthqu. Eng. 10, 130 (1991).
8. "Optimal Control of Acoustic Waves in Solids", Y.S. Kim, R.H. Rabitz, A. Askar, J. B. McManus, Phys. Rev. B 44, 4892 (1991).
9. "Application of the Born-Oppenheimer principle to classification of time scales in molecules interacting with time-dependent external fields", T. J. Gill, S. Shi, A. Askar, H. A. Rabitz, Phys.
Rev. A.45, 6479 (1992).
10. "Long time scale molecular dynamics subspace integration method applied to anharmonic crystals and glasses", B. Space, H. Rabitz, A. Askar, J. Chem. Phys. 99, 9070 (1993).
11. "Molecular dynamics with Langevin equation using local harmonics and . Chandrasekhar's convolution", A. Askar, R. G. Owens, H. A. Rabitz, J. Chem. Phys. 99 (7), 5316 (1993)
12. "The continuum solid and compliance functions in gas-surface low energy collisions A.Askar, Computer Physics Communications 80, 168 (1994)
13. "A Direct Method for the Inversion of Physical Systems", Inverse Problems, Caudill-LF Rabitz-H, A. Askar 10, Iss 5, pp 1099-1114 (1994)
14. "Optimal-Control of Laser-Generated Acoustic-Waves in Solids Full source", Physical Review B-Condensed Matter, Kim-YS Tadi-M Rabitz-H, A. Askar, Mcmanus-JB 50, Iss 21, pp 15744-15751 (1994)
15. "Generation of Controlled Acoustic-Waves by Optimal-Design of Surface Loads with Constrained Forms", International Journal Of Engineering Science, Kim-YS Rabitz-H Tadi-M A. Askar, Mcmanus-JB, 33,
Iss 6, pp 907-920 (1995)
16. "Subspace Method for Long-Time Scale Molecular-Dynamics" Journal Of Physical Chemistry 1995, A. Askar, Space-B Rabitz-H 99, Iss 19, pp 7330-7338
17. "Wavelet Transform for Analysis of Molecular-Dynamics Journal Of Physical Chemistry 1996, A. Askar, Cetin-AE, Rabitz-H 100, Iss 49, pp 19165-19173
18. "Interior Energy Focusing Within an Elastoplastic Material" International Journal Of Solids And Structures 1996, Tadi-M Rabitz-H Kim-YS, A. Askar, Prevost-JH Mcmanus-JB 33, Iss 13, pp 1891-1901
19. "Focused Bulk Ultrasonic-Waves Generated by Ring-Shaped Laser Illumination and Application to Flaw Detection", Journal Of Applied Physics 1996, Wang-X Littman-MG Mcmanus-JB Tadi-M Kim-YS, A.
Askar, Rabitz-H, 80, Iss 8, pp 4274-4281
20. "Laser-Beam Propagation in a Saturable Absorber", J Optical Society Of America B-Optical Physics 1997, Sennaroglu-A Atay-FM, A. Askar, 14, Iss 10, pp 2577-2583
21. "Quantitative Study of Laser-Beam Propagation in a Thermally Loaded Absorber" J Optical Society Of America B-Optical Physics 1997, Sennaroglu-A, A. Askar Atay-FM, 14, Iss 2, pp 356-363
22. "Alternating Direction Implicit Technique and Quantum Eution Within the Hydrodynamical Formulation of Schrodinger’s Equation" Chemical Physics Letters 1998, Dey-BK, A. Askar, Rabitz-H, 297, Iss
3-4, pp 247-256
23. "Multidimensional Wave-Packet Dynamics Within the Fluid Dynamical Formulation of the Schrodinger-Equation", Journal Of Chemical Physics 1998, Dey-BK, A. Askar, Rabitz-H, 109, Iss 20, pp 8770-8782
24. "Quantum fluid dynamics in the Lagrangian representation and applications to photo-dissociation problems", F. Sales Mayor, A. Askar and H. A. Rabitz, J. Chem. Phys. 111, 2423 (1999)
25. "Optimal control of molecular motion expressed through quantum fluid dynamics", B. Dey, H. A. Rabitz and A. Askar, Phys. Rev. A. 61, 043412-1, (2000)
26. "Solution of the quantum fluid dynamics equations with radial basis function interpolation", Xu-Guang Hu, Tak-San Ho, H. A. Rabitz and A. Askar, Phys. Rev. E. 61, 5967 (2000)
27. "Optimal control of molecular motion expressed through quantum fluid dynamics", B. Dey, H. A. Rabitz and A. Askar, "Selected Abstract from other Physical Review Journal", Phys. Rev. E, 61, 6032
28. "Multivariate radial basis interpolation for solving quantum fluid dynamical equations", Hu XG, Ho TS, Rabitz H, A. Askar, Comput Math Appl 43 (3-5): 525-537 Feb-Mar 2002
29. "Optimal Reduced Dimensional Representation Of Classical Molecular Dynamics, B. K. Dey, H. Rabitz, A. Askar, J Chem Phys 119 (11): 5379-5387 Sep 15 2003
1. "Methods in Applied Algebra and Analysis", A. Askar, Bogazici Üniversitesi (1981)
2. "Lattice Dynamical Foundations of Continuum Theories of Solids", A. Askar, Series in Theoretical and Applied Mechanics, World Scientific (1986).
Contact Information:
Office Tel: (90)(212) 338 1216
E-Mail: aaskar@ku.edu.tr | {"url":"http://home.ku.edu.tr/~aaskar/","timestamp":"2024-11-03T16:25:13Z","content_type":"text/html","content_length":"11089","record_id":"<urn:uuid:a18bf2f7-ed0d-4a8b-bae7-4a487e8cb222>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00435.warc.gz"} |
Educational Insights Fraction Formula game review and giveaway ends 7-22
3 of my kids doing alot of math in school and most of that math now involves Fractions. Now from when I was in school I know how tough fractions can get so I wanted learning fractions to be fun and
unstressful for them so when I got the chance to review Educational Insights Fraction Formula game I jumped at the chance because what funner way to learn then to play games at least for my kids
So when we got the game we sat down to play it. They were afraid it was gonna be hard but once we got playing it and figured out how it works they were having a blast. To me it was like playing 21
the card game. The whole object of the game is to pull cards and find the fraction piece to put in your tube to fill it up to the number 1 without going over. The person that gets the closest without
going over is the winner. We played this like I swear about 30 times in a hour lol but it was fun and they were learning their fractions and how to add them up. That was great! So if you have kids
that are learning fractions you definitely should check this game out. You and your children will be glad you did! Plus check out all the other fun things
Educational Insights
Fraction Formula
It’s a race to “1” with this 4-player fraction game!
Draw a card and find the corresponding fizzy fraction tile.
Drop the tile into your cylinder.
“Hold” it you think you’re as close t o 1 as you’ll get without going over or draw another card if you think you can get closer by adding another fraction tile.
The player who gets closest to 1 wins the round!
Grades 3+/Ages 8+
We are giving one lucky reader this awesome game yay!
To Enter:
1. Go to
Educational Insights
and tell me something else you love
Extra Entries:
2.Follow me
3.sign up for my rss feeds
4.Blog about this giveaway with link back to Mommies Angels this will count for 4 entries
5.add my button to your site 2 entries
6.Subscribe to my email feeds 3 entries
8.follow me on twitter (collyn23) and tweet this contest (daily)
9.Enter my other giveaways 1 entry for each entered
10.Follow me on network blogs on sidebar
11.add my blog to your blogroll counts as 3 entries
12.Fav me on technoratti
13.comment on my non giveaway blogs 1 entry for each comment
14. Post this giveaway on any giveaway site, Online Sweepstakes, Mr Linky, or other networking website-leave the link where I can find it (5 entry for each site)
15. Vote for me on Picket Fences (Daily)
39 comments:
Another item I like is the Tip Top Tally™ game.
footejennifer at hotmail.com
Google Friend Connect follower
footejennifer at hotmail.com
Email subscriber #3.
footejennifer at hotmail.com
I also love the Sprout & Grow™ Greenhouse with Wonder Soil™. sweepmorey@gmail.com
I follow your blog Jammie
I subscribe via email #2
I subscribe via email #3
I like the Estimation Station
I follow with GFC
Kabam looks like fun!
I follow on GFC
rss subscriber
email subscriber 1
email subscriber 2
email subscriber 3
I follow on twitter
I follow on networked blogs
EM 1
EM 3
Picket Fence 7-9
Fiber 1 entry
Build a Dream entry
Picket Fence vote 7-10
Picket Fence 7-11
Picket Fence 7-12
PF vote 7-14
PF 7-15 | {"url":"https://beautifulangelzz.blogspot.com/2011/06/educational-insights-fraction-formula.html?showComment=1310256915616","timestamp":"2024-11-14T12:00:47Z","content_type":"text/html","content_length":"113943","record_id":"<urn:uuid:37308590-e8c3-48d1-838e-8e430382cbb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00310.warc.gz"} |
Homework 5 The Slope Formula's personal website
Homework 5 The Slope Formula
Download File 🌐✨🔔👉 https://jinyurl.com/2vNHVm 👈🔔✨🌐
If you are struggling with homework 5 the slope formula, you are not alone. Many students find this topic challenging and confusing. But don't worry, we are here to help you. In this article, we will
explain what the slope formula is, how to use it, and how to solve some common problems involving it.
The slope formula is a mathematical equation that allows you to find the slope of a line given two points on the line. The slope of a line is a measure of how steep or slanted the line is. It can be
positive, negative, zero, or undefined.
The slope formula is written as:
where (x, y) and (x, y) are the coordinates of two points on the line.
To use the slope formula, you need to follow these steps:
To solve homework 5 the slope formula, you need to apply the steps above to each problem. Here are some examples of how to do that:
We have (x, y) = (3, -2) and (x, y) = (-1, 4).
We substitute these values into the slope formula:
The slope is -3/2.
We have (x, y) = (-5, 7) and (x, y) = (-5, -3).
We substitute these values into the slope formula:
The denominator is zero, so the slope is undefined.
We have (x, y) = (0, 0) and (x, y) = (4, 8).
We substitute ``` these values into the slope formula:
The slope is 2.
In this article, we have learned what the slope formula is, how to use it, and how to solve some common problems involving it. We hope this article has helped you with homework 5 the slope formula.
If you have any questions or feedback, please leave a comment below. Thank you for reading!
Read more
Contact Me | {"url":"https://www.polywork.com/john_gonzales_2","timestamp":"2024-11-03T06:43:19Z","content_type":"text/html","content_length":"29537","record_id":"<urn:uuid:90e86ecf-8d7b-4640-8ce3-ea6407a5c2b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00860.warc.gz"} |
Double-sided bounds for solution of one-dimensional heat equation
Title Double-sided bounds for solution of one-dimensional heat equation
Authors A. E. Rassadin^1
^1HSE University
In the article, to solve the Cauchy problem on a straight line for a linear diffusion-thermal conductivity equation with initial conditions of a special type, estimates of this solution
Annotation are obtained from below and from above. Using a numerical test, it is shown that after a certain period of time, any of these estimates can be taken as an approximate solution of this
Keywords Poisson integral, direct and reverse H"{o}lder inequalities, error function, compact support, Heaviside step function, relative error
Rassadin A. E. ''Double-sided bounds for solution of one-dimensional heat equation'' [Electronic resource]. Proceedings of the XVI International scientific conference "Differential
Citation equations and their applications in mathematical modeling". (Saransk, July 17-20, 2023). Saransk: SVMO Publ, 2023. - pp. 198-204. Available at: https://conf.svmo.ru/files/2023/papers/
paper32.pdf. - Date of access: 12.11.2024. | {"url":"https://conf.svmo.ru/en/archive/article?id=426","timestamp":"2024-11-12T03:48:15Z","content_type":"text/html","content_length":"11123","record_id":"<urn:uuid:064e28e4-51d6-4dbe-bdd1-c2ddfb5598d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00409.warc.gz"} |
Magic squares with all subsquares of possible orders based on extended Langford sequences
A magic square of order $n$ with all subsquares of possible orders (ASMS$(n)$) is a magic square which contains a general magic square of each order $k\in\{3, 4, \cdots, n-2\}$. Since the conjecture
on the existence of an ASMS was proposed in 1994, much attention has been paid but very little is known except for few sporadic examples. A $k$-extended Langford sequence of defect $d$ and length $m$
is equivalent to a partition of $\{1,2,\cdots,2m+1\}\backslash\{k\}$ into differences $\{d,\cdots,d+m-1\}$. In this paper, a construction of ASMS based on extended Langford sequence is established.
As a result, it is shown that there exists an ASMS$(n)$ for $n\equiv\pm3\pmod{18}$, which gives a partial answer to Abe's conjecture on ASMS.
arXiv e-prints
Pub Date:
December 2017
□ Mathematics - Combinatorics | {"url":"https://ui.adsabs.harvard.edu/abs/2017arXiv171205560L/abstract","timestamp":"2024-11-12T19:40:35Z","content_type":"text/html","content_length":"36449","record_id":"<urn:uuid:8405cba8-9d4e-4013-bc02-80af19bb1716>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00116.warc.gz"} |
Nearest Neighbor Upsampling
A nice idea to use SND-COMPOSE with QUANTIZE.
A slightly modified version in one line - I think the modern expression is “fugly”
(setq factor 4)
(multichan-expand #'snd-compose s (mult (/ *sound-srate*)(quantize (snd-pwl 0 *sound-srate* (list (truncate (* len factor)) len (truncate (* len factor)))) 1)))
Perhaps the most “elegant” solution I’ve found:
(setq factor 4)
(control-srate-abs *sound-srate*
(let ((sig2 (mult (/ *sound-srate*)
(quantize (pwl factor len factor) 1))))
(multichan-expand #'snd-compose s sig2)))
I’m not sure about “officializing” it
RepeatSamples.ny (621 Bytes) | {"url":"https://forum.audacityteam.org/t/nearest-neighbor-upsampling/26710/19","timestamp":"2024-11-10T12:44:52Z","content_type":"text/html","content_length":"17817","record_id":"<urn:uuid:c2b69cd3-902d-4f13-aa7d-fb11921840ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00004.warc.gz"} |
Transactions Online
Nobuo FUNABIKI, Junji KITAMICHI, "A Gradual Neural Network Algorithm for Broadcast Scheduling Problems in Packet Radio Networks" in IEICE TRANSACTIONS on Fundamentals, vol. E82-A, no. 5, pp. 815-824,
May 1999, doi: .
Abstract: A novel combinatorial optimization algorithm called "Gradual neural network (GNN)" is presented for NP-complete broadcast scheduling problems in packet radio (PR) networks. A PR network
provides data communications services to a set of geographically distributed nodes through a common radio channel. A time division multiple access (TDMA) protocol is adopted for conflict-free
communications, where packets are transmitted in repetition of fixed-length time-slots called a TDMA cycle. Given a PR network, the goal of GNN is to find a TDMA cycle with the minimum delay time for
each node to broadcast packets. GNN for the N-node-M-slot TDMA cycle problem consists of a neural network with N M binary neurons and a gradual expansion scheme. The neural network not only satisfies
the constraints but also maximizes transmissions by two energy functions, whereas the gradual expansion scheme minimizes the cycle length by gradually expanding the size of the neural network. The
performance is evaluated through extensive simulations in benchmark instances and in geometric graph instances with up to 1000 vertices, where GNN always finds better TDMA cycles than existing
algorithms. The result in this paper supports the credibility of our GNN algorithm for a class of combinatorial optimization problems.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e82-a_5_815/_p
author={Nobuo FUNABIKI, Junji KITAMICHI, },
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Gradual Neural Network Algorithm for Broadcast Scheduling Problems in Packet Radio Networks},
abstract={A novel combinatorial optimization algorithm called "Gradual neural network (GNN)" is presented for NP-complete broadcast scheduling problems in packet radio (PR) networks. A PR network
provides data communications services to a set of geographically distributed nodes through a common radio channel. A time division multiple access (TDMA) protocol is adopted for conflict-free
communications, where packets are transmitted in repetition of fixed-length time-slots called a TDMA cycle. Given a PR network, the goal of GNN is to find a TDMA cycle with the minimum delay time for
each node to broadcast packets. GNN for the N-node-M-slot TDMA cycle problem consists of a neural network with N M binary neurons and a gradual expansion scheme. The neural network not only satisfies
the constraints but also maximizes transmissions by two energy functions, whereas the gradual expansion scheme minimizes the cycle length by gradually expanding the size of the neural network. The
performance is evaluated through extensive simulations in benchmark instances and in geometric graph instances with up to 1000 vertices, where GNN always finds better TDMA cycles than existing
algorithms. The result in this paper supports the credibility of our GNN algorithm for a class of combinatorial optimization problems.},
TY - JOUR
TI - A Gradual Neural Network Algorithm for Broadcast Scheduling Problems in Packet Radio Networks
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 815
EP - 824
AU - Nobuo FUNABIKI
AU - Junji KITAMICHI
PY - 1999
DO -
JO - IEICE TRANSACTIONS on Fundamentals
SN -
VL - E82-A
IS - 5
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - May 1999
AB - A novel combinatorial optimization algorithm called "Gradual neural network (GNN)" is presented for NP-complete broadcast scheduling problems in packet radio (PR) networks. A PR network provides
data communications services to a set of geographically distributed nodes through a common radio channel. A time division multiple access (TDMA) protocol is adopted for conflict-free communications,
where packets are transmitted in repetition of fixed-length time-slots called a TDMA cycle. Given a PR network, the goal of GNN is to find a TDMA cycle with the minimum delay time for each node to
broadcast packets. GNN for the N-node-M-slot TDMA cycle problem consists of a neural network with N M binary neurons and a gradual expansion scheme. The neural network not only satisfies the
constraints but also maximizes transmissions by two energy functions, whereas the gradual expansion scheme minimizes the cycle length by gradually expanding the size of the neural network. The
performance is evaluated through extensive simulations in benchmark instances and in geometric graph instances with up to 1000 vertices, where GNN always finds better TDMA cycles than existing
algorithms. The result in this paper supports the credibility of our GNN algorithm for a class of combinatorial optimization problems.
ER - | {"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/e82-a_5_815/_p","timestamp":"2024-11-06T12:40:42Z","content_type":"text/html","content_length":"63295","record_id":"<urn:uuid:7bf03d32-4730-4c7b-be49-77252bc19f78>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00563.warc.gz"} |
Understanding Prime Numbers, HCF and LCM - Blue3 Academy
Preparing for your GCE O Level E-math exams requires a thorough understanding of essential numerical concepts that form the foundation of your mathematical knowledge. Among these crucial topics are
Prime Numbers, Highest Common Factor (HCF), Lowest Common Multiple (LCM), Perfect Squares, and Perfect Cubes. These topics are the cornerstones for more advanced algebraic concepts that you’ll face
in the future.
In this article, we will provide you with a summarised conceptual guide on Prime Numbers, HCF, and LCM, as well as some application questions that you can try your hand at.
Categorising Natural Numbers using Prime Factorisation
Natural numbers (also known as positive integers) such as 1, 2, 3, 4 can be categorised into three main groups: the number one (1) itself, composite numbers and prime numbers.
Composite Numbers
Composite numbers have more than two factors and can be divided by numerical values other than 1 and themselves. Examples include 4, 6, 8, 9, 10, 12, 14 and 15.
Prime Numbers
Prime numbers, on the other hand, have exactly two factors (1 and itself) and cannot be divided evenly. Examples include 2, 3, 5, 7, 11, 13 and 17.
Fun Fact: The smallest prime number is 2!
Checkpoint! Prime Numbers: Question 1
Can you identify the prime numbers between 1 to 30?
Check your answers! Did you identify all the prime numbers correctly?
Prime Numbers: Application Questions
Checkpoint! Prime Numbers: Question 2
Express 4840 in index notation.
Check your answers! Did you get it right?
Highest Common Factor (HCF): Finding the Greatest Common Factor
HCF is defined as the greatest common factor between two or more numbers; the largest positive integer that divides the numbers without leaving a remainder.
Take 8 and 12, for instance.
The HCF of 8 and 12 will be 4 as 4 is the highest value that can divide both 8 and 12.
Now, let’s understand how we can find the HCF of the following values in index notation using this example below.
Find the highest common factor of the following values in index notation.
924 = 2^2 x 3 x 7 x 11
2520 = 2^3 x 3^2 x 5 x 7
2548 = 2^2 x 7^2 x 13
Checkpoint! HCF: Question 1
Find the highest common factor (HCF) of 820, 2120, and 2240 using prime factorisation.
Check your answers! Did you get it right?
Lowest Common Multiple (LCM): Calculating the Lowest Common Multiple
The lowest common multiple of two numbers, for example: x and y, is denoted by LCM(x,y). The LCM is the smallest integer that is divisible by both x and y.
Let’s put this into numerical terms for better understanding. Let’s take numbers 4 and 6, for example.
The multiples of the number 4 are 4, 8, 12, 16, 20, 24, and so on.
The multiples of the number 6 are 6, 12, 18, 24, and so on.
Of this, the common multiples for 4 and 6 would be 12, 24, 36, and so on. The least common multiple amongst these numbers is 12.
Now that you understand what LCM is about, let’s try to find the LCM of 45 and 60 using the prime factorisation method.
Now, it’s your turn!
Checkpoint! LCM: Question 1
Find the lowest common multiple of the following values.
924 = 2^2x 3 x 7 x 11
2520 = 2^3 x 3^2 x 5 x 7
2548 = 2^2 x 7^2 x 13
Check your answers! Did you get it right?
Perfect Squares
A perfect square is defined as an integer that can be expressed as the square of another integer. In other words, a perfect square is the result of multiplying an integer by itself.
For example, take a look at the numbers 4, 9, 16 and 25.
This set of numbers can be classified as perfect squares as they can be written as:
4 = 2^2 = 2 x 2
9 = 3^2 = 3 x 3
16 = 4^2 = 4 x 4
25 = 5^2 = 5 x 5
Therefore, we can conclude that if n is an integer, then n^2 is a perfect square.
Now, let’s solve an example question together!
Find the smallest value of k such that 2520k is a perfect square.
Now, try a similar question below.
Checkpoint! Perfect Square: Question 1
Find the smallest value of k such that square root 2520k is a whole number.
Check your answers! Did you get it right?
Perfect Cubes in Numbers
A perfect cube is defined as a number that can be expressed as the cube of an integer. Specifically, if x is a perfect cube of y, then x=y^3.
Take the numbers: 8, 27, 64 and 125 for example. They are perfect cubes because they can be represented as:
8 = 2^3 = 2 x 2 x 2
27 = 3^2 = 3 x 3 x 3
64 = 4^3 = 4 x 4 x 4
125 = 5^3 = 5 x 5 x 5
Moreover, when we take the cube root of a perfect cube, we will obtain a natural number and not a fraction; cube root x = y.
Have you understood the concept of Perfect Cubes so far?
If so, let’s jump right into an example question.
Find the smallest value of k such that 2520k is a perfect cube.
Now, try a similar question below.
Checkpoint! Perfect Square: Question 1
Question 1: Find the smallest value of k such that cube root 2520k is a whole number.
Check your answers! Did you get it right?
Application Questions
Now that you’ve understood the concepts of prime numbers, HCF, LCM, perfect squares and perfect cubes, try out this application question.
Checkpoint! Application Question 1
Find the smallest positive integer k such that 392k is a multiple of 396.
Check your answers! Did you get it right?
Still want more?
Have you understood the concepts of Prime Numbers, HCF and LCM? Now, test your knowledge by attempting this word problem!
Still want more? Then, join our interactive live teaching sessions, where we explore various topics covered in the GCE O Level Elementary Mathematics exams, including interesting subjects like Prime
Numbers, HCF and LCM and more.
Follow us on TikTok @blue3academy for updates on our upcoming sessions! | {"url":"https://blue3academy.com/prime-numbers-hcf-and-lcm/","timestamp":"2024-11-10T16:10:56Z","content_type":"text/html","content_length":"315620","record_id":"<urn:uuid:757cf408-a122-4642-85bb-f47102057ffd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00879.warc.gz"} |
International Code of Signals (Overview)
The International Code of Signals was first drafted in 1855 by the British Board of Trade and subsequently published in 1857 as a means of maritime communications. The original publication showed
17,000 signals using 18 flags, part of which was specific to the United Kingdom and another part that contained universal signals to be used by all nations. Adopted by most sea-faring nations, the
system was revised in 1932 to include seven languages: English, French, German, Italian, Japanese, Spanish, and Norwegian.
The Fourth Assembly of the Intergovernmental Maritime Consultative Organization revised the code in 1965 which became effective on January 1, 1969. This revision added Russian and Greek to the
languages already included and adopted a new radiotelephone code. Each signal has a complete meaning.
Jorge Candeias, 31 August 1999
The site of the Mystic Seaport Library has put on line the following brochure: Report of the Committee Appointed by the Lords of the Committe of Privy Council for Trade to Inquire into and Report
upon the subject of a Code of Signals to be used at Sea published in London by HMSO, 1857.
Fourteen pages in all, this makes interesting reading and provides background information on the machinery leading to the adoption of signal flags.
Thirteen official and more less official codes were debated as were the various qualities a system should have. A 'Numeral' code's disadvantages are listed and some simple mathematics displayed e.g.
, the number of permutations. Maryatt's code flags are recommended albeit with some variations.
The report (accompanied by a separate signal book) did not only address British usage but foreign as well, in fact, it invited contributions from other nations along the lines specified. Lastly, it
was decided not to burden the system with a list of ship's names.
A few appendices further develop some of the above.
Jan Mertens, 9 August 2005
All signal code flags are square, although A and B are broadly swallowtailed. The colours are as follows:
• A = vertically divided, white on hoist, blue on fly.
• B = red
• C = horizontally divided, blue-white-red-white-blue, ratio 1:1:1:1:1
• D = horizontally divided, yellow-blue-yellow, ratio 1:2:1
• E = horizontally divided, blue over red
• F = white field, with red square rotated 45 ^o on to its corner, extending to the edges of the flag
• G = vertically divided (from hoist) yellow-blue-yellow-blue-yellow-blue
• H = vertically divided white on hoist and red on fly.
• I = yellow field, small black circle in centre
• J = horizontally equally divided blue-white-blue
• K = vertically divided, yellow on hoist, blue on fly
• L = quartered, first and fourth yellow, second and third black.
• M = blue field, white saltire-style cross
• N = chequered, four rows and four columns, alternating blue and white, beginning with blue in the upper hoist
• O = diagonally divided, yellow in the lower hoist, red in the upper fly.
• P = blue field, small white square centred on it
• Q = yellow; the quarantine flag
• R = red field, yellow cross
• S = white field, small blue square centred on it
• T = vertically equally divided, red-white-blue from hoist to fly
• U = quartered, first and fourth red, second and third white
• V = white field, red saltire-style cross
• W = blue field, white square centred on it, small red square centred in the white
• X = white field, blue cross
• Y = diagonally striped, stripes rising from lower hoist to upper fly, yellow-red-yellow-red-yellow-red-yellow-red-yellow-red
• Z = diagonally quartered, yellow in upper sector, black along hoist, red in lower sector, blue in fly sector.
I asked a Flagmakers firm ("Industrial Velera Marsal S.A.") and they make 3 sizes measuring: 1.98x2.41 m, 1.37x1.68 m and 0.76x0.91 m, which they assure are "official". I didn´t find any proportions
kept at the 3 sizes. They are close to 8-10, but a little more "squared" than that.
Maritime letter flags, as far as I know, go back to Sir Home Popham, who published "Telegraphic Signals or Marine Vocabulary" in 1800, with a larger version in 1803, and another expanded edition in
This was used by Nelson to signal his fleet before the beginning of the battle of Trafalgar, the 21 of October, 1805, the famous message: "ENGLAND EXPECTS THAT EVERY MAN WILL DO HIS DUTY". Certainly
those flags were 1-1 in proportions and can be seen in several books.
After many manuals and codes, the actual international signal flags developed from Captain Frederick Marryat´s "Code of Signals for the Merchant Service".
This actual international code is what we need to find out if it has construction sheets.
In Whitney Smith´s 1975 book, page 86, letter flags are drawn in prop. 8-10.
Jose C. Alegria, 25 August 1999
I am not a specialist of signal flags (Album des Pavillons presents all national flags, ensigns and markings at sea, but not signal flags) yet I had a look at the technical specifications of French
Navy and British Royal Navy : signal flags have the following measures (in cm).
French Navy 198 : 244
137 : 168
76 : 91
British Navy 183 : 229
114 : 152
102 : 122
61 : 76
46 : 53
30 : 38
Armand Noel du Payrat,
25 August 1999 Here are the correct proportions for the International Ship Code Flags:
• Letters A & B-1:1.5 (swallowtailed)
• Letters C - Z (& the US Navy's set of 10 numerical square flags)-1:1 (square)
• The 10 ICS numerical-& the code/answering-pennants-5:9 (tapered at fly)
• The 4 substitutes (repeaters), the books-on-board (white with a blue reader), & the PROMPT (yellow/green/yellow-new!!) pennants-7:11 (triangular).
Robert Lloyd Wheelock,
25 August 1999
In Flags at Sea, Timothy Wilson wrote; "The most common sizes for signal flags of the International Code nowadays (1986) are: 78 inches by 96 inches, 54 inches by 66 inches, and 30 inches by 36
The sizes offered in a current catalogue are (all in inches): 9x12, 12x18, 18x21, 24x30, 30x36, 43x54, 48x72.
Marryat suggested that his flags should be 6 feet by 8 feet, with pennants 4 feet by 18 feet.
David Prothero, 28 August 1999
The US Navy page on signal flags, http://www.chinfo.navy.mil/navpalib/communications/flags/flags.html also shows what look like 1:1 proportions. Although the depiction of the "Romeo" flag looks to me
like the cross is too narrow, so use your best judgment on reliability of the other representations. The page also shows United States Navy vs. international meanings of individual flags, for those
Joseph McMillan, 31 August 1999
I visited the Bornholm Museum in Rønne, on Bornholm, during my vacation, and they had this overview of the signaling flags in different version of the code. Signaling flags being an interest of mine,
I just had to jot them down:
If I'm deciphering my all-too-small notes correctly, the 1867 version had:
B Like the current flag
C White with a red dot (as the current "1", but shaped more or less like the current repeaters)
D Blue with a white dot (as the current "2", but the same shape as its "C")
F Red with a white dot (its "C" in reverse)
G Yellow before blue (as the current "K", but shaped like its "C", and with the
yellow only approx. 2/5 of its length)
H Like the current flag
J Like the current flag
K Like the current flag
L Quartered blue and yellow
M Like the current flag
N Like the current flag
P Like the current flag
Q Like the current flag
R Like the current flag
S Like the current flag
T Like the current flag
V Like the current flag
W Like the current flag
Answering pennant Like the current flag (However, I'm not sure about this one: Other sources, displaying the 1901 code picture a sharp-tipped flag, like the "C" in this code.
I knew there were no vowels in this code, but it turns out there were no X or Z either. Well, considering that the code started out as English [It did, didn't it?], and these letters are basically
non-native for English, this might make sense.
OK, returning to my notes, and the di Pietri system, these were the changes to the flags in 1901:
A Like the current flag
E Columns of red, white and blue (like the "T", but shaped like its "C")
F Red with a white cross (shaped like its "C")
I Like the current flag
L Like the current flag
O Like the current flag
U Like the current flag
X Like the current flag
Y Like the current flag
Z Like the current flag
Adding the missing letters. The "F" is changed to avoid confusion with the other dark-coloured pennant with white dot, I expect. Why did they change the "L"? Maybe a partly folding "L" would look too
much like a "K"?
In 1933 all changed to their current flags. The pennant shaped letters were replaced. The digit pennants, with their obtuse tips, were introduced, as well as the repeaters. [How did they repeat
before this, then? Or was the code build to avoid repetition?] From now on the answering pennant seems to have its shape like a slightly longer version of a digit pennant.
The chart didn't mention any other signals. If the black wreck flag was ever part of the code, it wasn't shown here, nor did it show when any of the other signal flags were introduced.
Peter Hans van den Muijzenberg, 10 August 2003
Comparing the 1913 International Signals Code depicted therein with the current ICS, I noted that the flags for the letters C, D, E, F and G are pennants. At some point the rectangular flags with the
quite different designs that they are in the current International Code of Signals. All the other alphabetical flags of the 1913 code are still the same in the current ICS.
Incidentally, these five pennants with their designs survived in the modern ICS as the numeral pennants 1,2,3,4 and 5.
Andre Burgers, 8 September 2004
The change came into force on 1st January 1934. In the previous 1901 Code the alphabet flags were used to represent numbers, from 2 to 27. 1934 was also when the three substitute flags were
David Prothero, 9 September 2004 | {"url":"https://www.crwflags.com/FOTW/flags/xf-ics.html","timestamp":"2024-11-08T02:02:53Z","content_type":"text/html","content_length":"16504","record_id":"<urn:uuid:525940a8-dd34-43c0-91bf-5bbdd78ff107>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00742.warc.gz"} |
Transgressing the Boundaries (idea)
In a surprising turn of events, late
, the
establishment seems to have been hit by a hoax similar to that perpetrated by
with his paper
Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity
, except the other way around.
French brothers, Igor and Grichka Bogdanov, both had their PhD theses accepted by the physics department at Bourgogne university, France, and have published papers in respectable physics journals,
such as Annals of Physics and Classical and Quantum Gravity.
Their theory apparently concerns the "topological state" of spacetime, at "scale zero". For example, the abstract from Igor's thesis starts off:
We propose in this research a new solution regarding the existence and the content of the initial spacetime singularity. In the context of topological field theory we consider that the initial
singularity of space-time corresponds to a zero size singular gravitational instanton characterized by a Riemannian metric configuration (++++) in dimension D = 4. Connected with some unexpected
topological data corresponding to the zero scale of space-time, the initial singularity is thus not considered in terms of divergences of physical fields but can be resolved in the frame of
topological field theory. We get this result from the physical observation that the pre-spacetime is in a thermal equilibrium at the Planck scale.
and concludes:
... we conjecture that the problem of inertial interaction might be explained in terms of topological amplitude connected with the singular zero size gravitational instanton corresponding to the
initial singularity of spacetime.
Commenting in the usenet newsgroup sci.physics.research, physics professor John Baez (one of the world's leading mathematical physicists) says he "can [...] assure you that the abstracts seem like
gibberish to me, even though I know what most of the buzzwords mean. The journal articles make for rather strange reading [...] Some parts almost seem to make sense, but the more carefully you read
them, the less sense they make."
Baez goes on to extract the start of Igor's paper "Topological theory of intertia" (Czechoslovak Journal of Physics, 51 (2001), 1153-1236.) to illustrate his point:
The phenomenon of inertia - or "pseudo-force" according to E. Mach [1] - has recently been presented by J. P. Vigier as one of the "unsolved mysteries of modern physics". Indeed our point of view
is that this important question, which is well formulated in the context of Mach's principle, cannot be resolved or even understood in the framework of conventional field theory.
Here we suggest a novel approach, a direct outcome of the topological field theory proposed by Edward Witten in 1988 [3]. According to this approach, beyond the interpretation propoosed by Mach,
we consider inertia as a topological field, linked to the topological charge Q = 1 of the "singular zero size gravitational instanton" [4] which, according to [5], can be identified with the
initial singularity of space-time in the standard model.
and the conclusion, that
whatever the orientation, the plane of oscillation of Foucault's pendulum is necessarily aligned with the initial singularity marking the origin of physical space S^3, that of Euclidean space E^4
(described by the family of instants I[β] of whatever radius β), and, finally, that of Lorentzian space-time M^4.
According to rumours reported by Baez, the Bogdanovs are "journalists and science fiction writers, both in their late 40's". After gaining fluency in some of the jargon of modern physics by
interviewing French string theorists, they "spread rumors that they were geniuses and their theses were a milestone in theoretical physics".
Having made almost a state event of their thesis defences (inviting the national media, wining and dining the president of France, renting a large venue for the occasion) they were both rewarded with
passes. Their theses can (at the moment) be found on the "Theses Online" site, at the urls given below.
Information, speculation and rumour, from usenet news article "Physics bitten by reverse Alan Sokal hoax?" <ap7tq6$eme$1@glue.ucr.edu>, sci.physics.research, 24 October, 2002, by John Baez.
Igor Bogdanov
(Topological state of spacetime at scale zero)
Grichka Bogdanov
(Quantum fluctuations of the signature of the metric at the Planck scale)
Grichka Bogdanov and Igor Bogdanov,
Topological field theory of the initial singularity of spacetime,
Classical and Quantum Gravity 18 (2001), 4341-4372.
Grichka Bogdanov and Igor Bogdanov,
Spacetime Metric and the KMS Condition at the Planck Scale,
Annals of Physics, 295 (2002), 90-97.
Grichka Bogdanov and Igor Bogdanov,
KMS space-time at the Planck scale,
Nuovo Cimento, 117B (2002) 417-424.
Igor Bogdanov,
Topological origin of inertia,
Czechoslovak Journal of Physics, 51 (2001), 1153-1236.
Igor Bogdanov,
KMS state of the spacetime at the Planck scale,
Chinese Journal of Physics. (2002). | {"url":"https://everything2.com/user/JerboaKolinowski/writeups/Transgressing+the+Boundaries","timestamp":"2024-11-04T18:56:24Z","content_type":"text/html","content_length":"37358","record_id":"<urn:uuid:ae01ec9d-beee-4830-ae5d-76b9f1cf61d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00816.warc.gz"} |
What is Cash Discounting? Everything You Need to Know
Cash discounts are the reductions in the amount to be paid by a credit customer to whom the seller has given credit terms in case that customer pays within a given time period. Determine the amount
of the final payment (\(N\)) after the first two partial payments are credited toward the invoice total. To credit the two partial payments you must find the list amount before any cash discounts (L)
such that the payment can be deducted from the invoice total. Suppose Company A sells certain goods at a price of $4,400 with terms of payment of 2/10, n/20.
However, the situation should first be analyzed, as it’s possible that the bank’s interest rates are so high that taking out a loan is not worth it. If the calculated interest rate is greater than
the bank’s lending rates, it’s advisable to take out a short-term loan from a bank in order to finance the utilization of the cash discount. In the end, the markup percentage calculation and the
cost-plus calculation are simply two strategies for determining which sales price would be best. It’s simply an offer that the company makes in order to motivate the customer to pay more quickly. In
that way, the cash discount is a method for improving sales promotions and liquidity.
The credit above reduces the account balance to the amount of the cash discount 5 . This is mainly an incentive to the purchasing party to settle the bill earlier than the prescribed date. The
purchase discount is based on the purchase price of the goods and is sometimes referred to as a cash discount on purchases, settlement discount, or discount received. This is similar to surcharges,
although generally offers greater freedom and flexibility overall as it isn’t limited to certain card payment types. As such, if you have been looking to save money on your business’s payment
processes, considering cash discounting could be a valuable option to consider overall. However, surcharges cannot be applied to prepaid cards and debit cards, limiting their versatility.
Advantages of the Cash Discount
This calculation naturally has to take the cash discount into account, so that the minimum price limit can be chosen in a way that guarantees the returns exceed the expenses. The most popular methods
for calculating the list sales price that makes the required profit possible revolve https://bookkeeping-reviews.com/ around markup pricing. Important calculations in relation to this are the markup
percentage calculation, and the cost-plus pricing calculation. The process of cash discounting isn’t always easy to understand at the outset, but it’s actually a relatively simple process overall.
• The cash discount formula is based on the terms included on the customer’s invoice.
• You may understand cash discounts as a motivator or incentive that sellers offer to purchasers as a trade-off for paying a bill before the booked due date.
• Early payment means better cash flow for your business, and the discount rewards your customers who pay early.
• Calculating the contribution margin correctly is crucial to the success of your company, by allowing you to draw conclusions about its profit or loss.
• Giving the buyer a small cash discount would benefit the seller as it would allow her to access the cash sooner.
The interpretation that is chosen affects how receivables and sales are measured and how the discount is reported. Ask a question about your financial situation https://kelleysbookkeeping.com/
providing as much detail as possible. At Finance Strategists, we partner with financial experts to ensure the accuracy of our financial content.
One of the main primary examples of this occurs when a customer is invoiced for an item or service. In this scenario, a cash discount may be applied to the transaction if they pay the invoice within
a given time frame. A cash discount is a reduction in the amount of an invoice that the seller allows the buyer. This discount is given in exchange for the buyer paying the invoice earlier than its
normal payment date.
What is the approximate value of your cash savings and other investments?
This is the rate for the use of the funds for 20 days, to convert this to an annual percentage rate (APR) we simply divide by 20 to convert it to a daily rate, and then multiply by 365. Mary
Girsch-Bock is the expert on accounting software and payroll software for The Ascent. Similarly, in the third instance, startups and young professionals can often use infusions of cash to help grow
their businesses faster. Cash discounts accounted for in this way result in large distortions of reported net income. We can also see that a firm would be willing to impose a penalty of this size in
order to encourage prompt payment.
What is Cash Discounting? Everything You Need to Know
Perhaps you think that $10,000 should be removed from the $21,000 invoice total, thereby leaving a balance owing of $11,000. Or maybe you think the payment should receive the discount of 2%, which
would be $200. In all of these scenarios, you would be committing a serious mistake and miscalculating your balance owing. In each of the following cases, determine the cash discount or penalty for
which the payment qualifies.
Great! The Financial Professional Will Get Back To You Soon.
The CCC measurement incorporates how much time is expected to sell inventory, gather AR or account receivables, and the length of the bill payment window of a company before the company starts facing
penalties. The figure below illustrates the timeline for the invoice and identification of payments. In each https://quick-bookkeeping.net/ of the following cases, determine which term of payment
results in the longest credit period extending from the invoice date. According to the net method, the company would initially record the sale at net price. If you’re using the wrong credit or debit
card, it could be costing you serious money.
The deductible prior tax is thus also reduced when the cash discount is subtracted. Due to the cash discount, the net revenue or profit margin will decrease because of the cash discount given by the
seller. In cases of business units where there are adequate cash reserves, it just prompts fewer profits in light of the fact that the prior recovery of cash is of no utilization.
The discount is recorded in a contra expense account which is offset against the appropriate purchases or expense account in the income statement. When a business purchases goods on credit from a
supplier the terms will stipulate the date on which the amount outstanding is to be paid. In addition the terms will often allow a purchase discount to be taken if the invoice is settled at an
earlier date. Offering customers a cash discount, even a minimal one, can be advantageous for your business and your customers. If you don’t offer credit terms to your customers, or your customers
tend to pay on time, offering a cash discount won’t help your business much.
Cash Discounts vs. Trade Discounts
It is considered a motivator that a company offers to its clients or customers in the cases they make the payment on or before the given date according to the company’s agreements. The first payment
resulted in a $41,666.67 deduction from the invoice. The overdue remaining balance of $11,788.88 has a penalty of $324.19 added to it, resulting in a final clearing payment of $12,113.07. If a
payment is late, the invoice balance has the appropriate late penalty applied. In each of the following situations, determine for the partial payment whether you would credit the invoice for an
amount that is larger than, equal to, or less than the partial payment amount. Unless you pay attention to invoicing concepts, it is easy to get confused. | {"url":"https://www.plantenagro.com/2023/03/17/what-is-cash-discounting-everything-you-need-to/","timestamp":"2024-11-04T12:11:06Z","content_type":"text/html","content_length":"62435","record_id":"<urn:uuid:7ee65caf-6a47-4eb5-b23f-ca81d9afdec9>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00730.warc.gz"} |
Resources Related to the mosaic
This vignette describes related resources and materials useful for teaching statistics with a focus on modeling and computation.
Package Vignettes
The mosaic package includes a number of vignettes. These are available from within R, from cran.r-project.org/package=mosaic, or from www.mosaic-web.org/mosaic/.
• Minimal R describes a minimal set of R commands for use in Introductory Statistics and discusses why it is important to keep the set of commands small;
• Resampling methods in R demonstrates how to use the mosaic package to compute p-values for randomization tests and bootstrap confidence intervals in a number of common situations. The examples
are based on the ``resampling bake off’’ at USCOTS 2011.
• ggformula/lattice conversion examples compares the lattice and ggformula formula interfaces for creating graphics.
• Less Volume, More Creativity, based on slides from an ICOTS 2014 workshop, introduces the mosaic package and related tools and describes some of the philosophy behind the design choices made in
the mosaic package.
• Graphics with the mosaic package is gallery of plots made using tools from the mosaic package.
Auxiliary packages
Some features of the mosaic package are provided through auxiliary packages. These include:
• mosaicModel – implements high-level systems for working with statistical models: effect-size calculation, bootstrapped confidence intervals, prediction error, graphics for models with multiple
inputs. The package contains an introductory vignette.
• mosaicCalc – provides the calculus components of mosaic, including integration, differentiation, and differential equation solving.
Install these packages using install.packages(c("mosaicCalc", "mosaicModel")).
Mosaic paper
Pruim R, Kaplan DT and Horton NJ (2017). The mosaic Package: Helping Students to ‘Think with Data’ Using R. The R Journal, 9(1), pp. 77-102. https://journal.r-project.org/archive/2017/RJ-2017-024/
Abstract: The mosaic package provides a simplified and systematic introduction to the core functionality related to descriptive statistics, visualization, modeling, and simulation-based inference
required in first and second courses in statistics. This introduction to the package describes some of the guiding principles behind the design of the package and provides illustrative examples of
several of the most important functions it implements. These can be combined to help students ‘think with data’ using R in their early course work, starting with simple, yet powerful, declarative
Project MOSAIC Little Books
The following longer documents are available at github.com/ProjectMOSAIC/LittleBooks.
• Start Teaching Statistics Using R includes some strategies for teaching beginners, and introduction to the mosaic package, and some additional things that instructors should know about using R.
(A spanish language translation can be found at https://github.com/fjaraavilaa/MOSAIC-LittleBooks-Spanish.)
• A Student’s Guide to R provides a brief introduction to the R commands needed for all the basic statistical procedures in an Intro Stats course.
(A spanish language translation can be found at https://github.com/fjaraavilaa/MOSAIC-LittleBooks-Spanish.) | {"url":"http://mirrors.dotsrc.org/cran/web/packages/mosaic/vignettes/mosaic-resources.html","timestamp":"2024-11-12T06:34:07Z","content_type":"text/html","content_length":"635208","record_id":"<urn:uuid:d130993b-0adf-4573-9f1d-7ab58debe717>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00127.warc.gz"} |
Who provides assistance with MATLAB simulation for mechatronics assignments? Matlab Assignment Help | Who provides assistance with MATLAB simulation for mechatronics assignments? Project and Homework Help
Who provides assistance with MATLAB simulation for mechatronics assignments? ========================== This paper gives MATLAB code that simulates the "simulation with MATLAB". MATLAB code source
code -------------------------- ### SOURCE: Mathlab tool box. Open MathLab () is a free MATLAB editor package that is written in Java and a Google Repo allows you submit Python 2.7 (without Matlab)
as an.nltx file as well as simple console games. To submit, you must first sign our license agreement. We will take that along with your review (this will become automatic as our licenses will be
reviewed). At some point in time, when you need to replicate results, we will allow you to use MATLAB code from Visual Studio 2014 or earlier. Unfortunately, you will need some help verifying this
license agreement. Here are a few key points we learned about: (1) All the scripts need some time -- it's never really hard to remember where our scripts are written and where they are packaged
(thanks to the [tool box](https://github.com/mathlab/JavaScript_Plugin._MMA.md#workspaces) already mentioned). (2) [Setup problems](https://github.com/mathlab/JavaScript_Plugin/issues/14) mentioned
above. Here is a diagrammatic explanation of the problem. ### INITIALLY-DESIGNATED CODE The Matlab workspace as we mentioned above has an [!INITIALLY-DESIGNATED-file](https://github.com/mathlab/
Take My Spanish Class Online
cs) as an empty screen where the inputs are drawn and the output is displayed (see fig 19). ### INITIALLY-RESULTED CODE Here is a list of functionality so far we just want to have it here. =======
*Who provides assistance with MATLAB simulation for mechatronics assignments? -Mechatronics:I am working on MATLAB simulation for mechatronics assignment exercises. I have not done MATLAB simulations
exercises yet related to functions work! please see here: MATLAB(http://www.mechatronics.com/matlab.html) and also the book I have not done MATLAB simulations exercises yet related to functions work!
Please see here: MATLAB(http://www.mechatronics.com/matlab.html) and also the book or tutorials If I are already assigned MATLAB functions? Yes, You are a beginner! There may be a better value that
you found in the MATLAB documentation. Also I would like to provide a brief example on how to do MATLAB functions for mechatronics. I was using MATLAB on my laptop and just read up and its confusing.
Type in the message - "Computer!" and type in code "math." And its no MATLAB functions any more. I have also got an error message whenever I input too high or double numbers: The MATLAB code for
mouse button has incorrect shape when it gets to M:M as the variable. Where is this error from using a Math function in MATLAB? I just thought about it too. I need it so I can calculate the
percentage of my object I looked at the Math code here: http://en.wikipedia.org/wiki/Math_(program) My next step is one more class assignment code i found in the MATLAB documentation and also the
How To Pass My Classes
com/matlab.html). I also want to figure out what type of functions are required and if Matlab did them. Also I am having a hard time finding any specific MATLAB type on the StackTop. I am open to
other type in Maths. I have a MATLAB version 1.8.14 installed and have the Math functions and other work on it. I think the Math function is for things like a vector and matplotLib files. A: It
doesn't have to be MATLAB with a MATLAB function It might be better to modify the MATLAB code if you get the idea of it (I would rather it be it yourself, I already tried it and it compiled with
minimal effort). Who provides assistance with MATLAB simulation for mechatronics assignments? Hey I am very new ot this forum but I did some google search and found pretty great solution. My problem
was that I'm a noob around MATLAB. I thought my web interface was tricky to achieve with this software but I guess in these days things will get a lot easier. 1. As you know I've been experimenting
for some time now with MATLAB. It was the best program for this. The problem was that I don't even know what program I did. Then someone else in this forum made the decision more complex and
suggested my first attempt. Here is my first attempt: The first program will use the following command: gmdisplay tbl1 (parameters:=parameters, color:=color) 2 Where color=color, ctype=column The
second program will use the following command: gmdisplay tbl2 (parameters:=parameters, color:=color) 3 Where color=color, ctype=colormode 4 Here is what the result is: [CC] < c-color=orange to-color=
blue> [CC] < c-color=red to-color=green> [CC] < c-color=green to-color=red> [CC] < c-color=red to-color=yellow> < v-border>:red < c-text=#F6EE3F6 to-line=#F6EE3F6> [[0-9A9]{3,}] < b-text-color=#
A6EE6f5 to-border-color=#BBBBB6> [[0-9A9]{3,}] < b-text-color=#D2655B> [[0-9A9]{3,}] < c-text-color=#FFFD59> [0-9A9] < g-text-color=#D2655B> Then the program will loop therefrom until the 3nd
character on the command line is c-color=orange, c-text=blue, c-text=green, c-text=red. Now the problem seems very simple and no other program does this.
Best Site To Pay Do My Homework
If the program has to do this all i want is to add a bit change or else the number will go out of range Here is my first attempt: gmdisplay tbl1 (parameters:=parameters, color:=color) How can this be
done automatically so I don't become a problem when I have to enter any arbitrary string, type and click on the color or I entered some arbitrary thing and it will get in all letters or whatever it
is when I click on you could try here mousedown word. Ok, well as | {"url":"https://www.matlabhelponline.com/who-provides-assistance-with-matlab-simulation-for-mechatronics-assignments-407556","timestamp":"2024-11-04T11:50:33Z","content_type":"text/html","content_length":"148143","record_id":"<urn:uuid:2e2cdbc5-6886-416a-b99d-449c57d29588>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00491.warc.gz"} |
Teoremi di Frobenius e Chow per campi vettoriali lipschitziani
Control theory has found several applications in Physics in the last decades, from Statistical and Classical Mechanics to Chaos Theory. This thesis is focused upon a specific topic of this
mathematical theory, known as Controllability. We give the two main results in this field, the Frobenius and Chow-Rashevsky theorems, first in a differentiable environment and then in a less smooth
one, with the last chapter focused on some examples in Classical Mechanics. Precisely speaking: we’ll start giving the definition of small time locally controllable system; then, basic Differential
geometry notions are given (tangent bundle, vector fields and their fluxes, Lie brackets) and we state the Frobenius and Chow Theorems in this differentiable context; we then examine the non-smooth
case, particularly focusing on re-defining Lipschitz vector fields and introducing objects like Set-valued maps and Generalized Differential Quotients with their properties, all of which are
necessary to give a new precise definition of Lie brackets in this non-differentiable context. This will allow an immediate re-statement of the Frobenius Theorem; an exact formula for the composition
of the fluxes of vector fields, then, will let a generalization of the Chow’s theorem in the non-smooth case. The last chapter shows some applications of the two theorems in Classical Mechanics, as
well as an interesting connection between Lie and Poisson brackets, which allows the use of this work in Hamiltonian mechanics.
Teoremi di Frobenius e Chow per campi vettoriali lipschitziani
Control theory has found several applications in Physics in the last decades, from Statistical and Classical Mechanics to Chaos Theory. This thesis is focused upon a specific topic of this
mathematical theory, known as Controllability. We give the two main results in this field, the Frobenius and Chow-Rashevsky theorems, first in a differentiable environment and then in a less smooth
one, with the last chapter focused on some examples in Classical Mechanics. Precisely speaking: we’ll start giving the definition of small time locally controllable system; then, basic Differential
geometry notions are given (tangent bundle, vector fields and their fluxes, Lie brackets) and we state the Frobenius and Chow Theorems in this differentiable context; we then examine the non-smooth
case, particularly focusing on re-defining Lipschitz vector fields and introducing objects like Set-valued maps and Generalized Differential Quotients with their properties, all of which are
necessary to give a new precise definition of Lie brackets in this non-differentiable context. This will allow an immediate re-statement of the Frobenius Theorem; an exact formula for the composition
of the fluxes of vector fields, then, will let a generalization of the Chow’s theorem in the non-smooth case. The last chapter shows some applications of the two theorems in Classical Mechanics, as
well as an interesting connection between Lie and Poisson brackets, which allows the use of this work in Hamiltonian mechanics.
File in questo prodotto:
File Dimensione Formato
accesso aperto
Dimensione 969.79 kB 969.79 kB Adobe PDF Visualizza/Apri
Formato Adobe PDF
Utilizza questo identificativo per citare o creare un link a questo documento: https://hdl.handle.net/20.500.12608/24374 | {"url":"https://thesis.unipd.it/handle/20.500.12608/24374","timestamp":"2024-11-08T20:21:54Z","content_type":"text/html","content_length":"40884","record_id":"<urn:uuid:57108bfa-1b65-4097-806e-3b3cdeb32450>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00722.warc.gz"} |
(PDF) Introduction To Probability And Statistics - William Mendenhall - 12th Edition
Introduction to Probability and Statistics – William Mendenhall – 12th Edition
Introduction to Probability and Statistics
William Mendenhall
• 0534418708
• 9780534418700
• 12th Edition
• Solution Manual
• English
Used by hundreds of thousands of students since its first edition, INTRODUCTION TO PROBABILITY AND STATISTICS continues to blend the best of its proven coverage with new innovations. While retaining
the straightforward presentation and traditional outline for descriptive and inferential statistics, the Twelfth Edition incorporates exciting new learning aids like MyPersonal Trainer, MyApplet, and
MyTip to ensure that students learn and understand the relevance of the material.
The book takes advantage of modern technology, including computational software and interactive visual tools, to facilitate statistical reasoning as well as the understanding and interpretation of
statistical results.
In addition to showing how to apply statistical procedures, the authors explain how to meaningfully describe real sets of data, what the statistical tests mean in terms of their practical
applications, how to evaluate the validity of the assumptions behind statistical tests, and what to do when statistical assumptions have been violated. This new edition retains the statistical
integrity, examples, exercises and exposition that have made it a market leader, and builds upon this tradition of excellence with new technology integration.
View more
Leave us a comment
No Comments
0 Comments
Inline Feedbacks
View all comments
| Reply
Warning: Undefined variable $isbn13 in /home/elsoluci/public_html/tbooks.solutions/wp-content/themes/el-solucionario/content.php on line 207
1. Describing data with graphs.
2. Describing data with numerical measures.
3. Describing bivariate data.
4. Probability and probability distributions.
5. Several useful discrete distributions.
6. The normal probability distribution.
7. Sampling distributions.
8. Large-sample estimation.
9. Large-sample tests of hypotheses.
10. Inference from small samples.
11. The analysis of variance.
12. Linear regression and correlation.
13. Multiple regression analysis.
14. Analysis of categorical data.
15. Nonparametric statistics.
• Citation
□ Introduction to Probability and Statistics
□ 0534418708
□ 9780534418700
□ 12th Edition
□ Solution Manual
□ English | {"url":"https://www.tbooks.solutions/introduction-probability-statistics-william-mendenhall-12th-edition/","timestamp":"2024-11-09T14:08:31Z","content_type":"text/html","content_length":"127626","record_id":"<urn:uuid:1b31277c-85e2-4629-b0de-6c4ab63f5d88>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00767.warc.gz"} |
Class Notes On Mathematics SSS3 First Term | ClassNotes
Subject Scheme & Timeline:
No reviews yet.
Bulk Purchase
For Schools & Teachers
Discounted Price
For Schools and Teacher who want to subscribe to all of the subjects, a class or term at a discounted price.
Order Now
Custom Bundle
Create Your Bundle
Discounted Price
Need many subjects? Waste no time. Select many subjects together in one subscription at a discounted price.
Create Bundle | {"url":"https://classnotes.com.ng/subject/mathematics-sss3-first-term","timestamp":"2024-11-14T15:37:45Z","content_type":"text/html","content_length":"90455","record_id":"<urn:uuid:4c74d7da-ff2a-4be2-be75-bf79acf6defd>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00244.warc.gz"} |
Author content
All content in this area was uploaded by Peter Gauer on Oct 31, 2023
Content may be subject to copyright.
Peter Gauer1,∗, Nellie Sofie Body1, Anniken Helene Aalerud1
1Norwegian Geotechnical Institute, Norway
ABSTRACT: Avalanche velocity is an important parameter to characterize the avalanche dynamic behavior.
Observations imply that the maximum velocity of major avalanches scales with the total drop height. Com-
bining this information with observations of runout and volume of major avalanches can provide hints on the
choice of the empirical parameters of numerical avalanche models. To this end, a simple track geometry is
used to test the performance of present-days avalanche models. The model tracks varied in drop height and
mean steepness. The model results are compared with expected values of the maximum velocity and runout
based on avalanche observations.
Keywords: avalanche observations, avalanche models, performance test
Runout observations provide limited only con-
straints for the validation of the empirical parameters
used in common present-day numerical avalanche
models. This is demonstrated in Fig.1, which shows
five simulations with a simple mass block model
governed by the equation of motion:
dt =gsin ϕ−aret , (1)
aret =a0gcos ϕ+a2U2or aret =a3. (2)
Here, Uis the velocity, dU/dt the acceleration, g
the gravitational acceleration, and ϕis the slope
angle of the track. The model parameters for the
retarding acceleration are the Coulomb-friction pa-
rameter a0and the turbulent frictional parameter a2.
Both parameters can be related to the parameters
commonly used in the Voellmy-fluid type friction law:
a0≡µand a2≡g/(ξhf), with the flow depth hf, or
a2≡D/Min the case of the PCM-model (Voellmy,
1955; Perla et al., 1980). The simple mass block
model is ideal for illustration purposes as it is easy
to follow, and yet, the model is an admissible first-
order approximation. For comparison also the case
with a constant retarding acceleration, a3, is consid-
In this example, all the simulations are forced
to reach the expected αm-point according to the
statistical α-β-model (Lied and Bakkehøi, 1980),
but depending on the choice of the empirical pa-
rameters they show very different velocity distri-
butions along the track. Assuming a flow depth
∗Corresponding author address:
Peter Gauer, Norwegian Geotechnical Institute,
P.O. Box 3930 Ullevl Stadion, NO–0806 Oslo, Norway
Tel: ++47 45 27 47 43; Fax: ++47 22 23 04 48; E-mail:
Figure 1: Velocity of a mass block moving with various parameter
combinations of {a0; a2HSC }along a cycloidal track (black line;
steepness in release area is ϕ0= 45◦) and reaching the αm-
point. Velocity and length are scaled with the total drop height
HSC . The gray dotted lines mark the probability of exceedance of
the scaled velocity based on observations (c.f. Fig. 2 a)
hfof 2 m and a drop height HSC = 1000 m,
the corresponding Voellmy parameters {µ,ξ}are
{0.43, ∞},n0.155, 3500 m s-2o,n0.13, 2450 m s-2 o,
and n0.09, 1000 m s-2o. This choice of the param-
eters is inspired by values like µ= 0.155 or ξ≈
1000 m s-2, or a2HSC = 2, which can be found in the
literature (e.g., Buser and Frutiger, 1980; Bakkehøi
et al., 1983; Perla et al., 1980). However, those au-
thors only focused on runout observations as con-
straint. For comparison, a simulation with a constant
retarding acceleration is included too.
In all cases with a pronounced velocity depen-
dency of the friction low, the parameter choice
should depend on the drop height.
The differences in the predicted velocities can be
crucial for the delimitation of endangered areas or
the design of mitigation measures. The latter case
is considered in the following example. Simple di-
mensioning criteria for avalanche catching dams re-
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
late the required height of the free board Hfb to the
avalanche velocity (see for example Chapter 8.4 in
Rudolf-Miklau et al., 2014).
Hfb =U2
where λis an empirical constant typically set to a
value between 1 and 3, depending on the avalanche
type (dry or wet). In the case of the example in
Fig. 1, the avalanches, stopping at the αm-point,
still have a scaled velocity U/pgHSC/2 of approxi-
mately 0.75, 0.70, 0.41, 0.35, or 0.22 at the β-point.
If one were to plan a catching dam at the β-point,
one could directly relate the required free board Hfb
to the drop height HSC
Hfb ≈fv
where the factor fvin our examples is 0.56, 0.49,
0.17, 0.12, or 0.05, respectively. That is, the design
dam height may differ by a factor up to 10 or more,
depending on the choice of the model parameters.
Furthermore, the predicted avalanche runtime,
which is an important design parameter for some
temporal mitigation measures such as automated
road closures, may differ considerably with the
choice of the friction parameters. In our example,
the non-dimensional runtime, tav /pHSC /g, varies
between approximately 6 and 13. This factor may
determine wether an automated road closure is fea-
sible or not. These are only two examples to show
that not only the prediction of the runout is important
but also the correct prediction of the velocity along
whole track.
2. OBSERVATION
Therefore, velocity observations that could constrain
model calibrations are desirable. Fig. 2 a shows the
exceedance probabilities (i.e. the probability to ob-
serve a value larger than a given one) for a series
of observed Umax /pgHSC /2 (McClung and Gauer,
2018) and expected αvalues according to the α-β-
model (Lied and Bakkehøi, 1980). The assumption
of the empirical α-β-model is that the data behind
it reflect rare avalanches, that is events with return
periods of the order of 100 years. With this in mind,
one might be tempted to multiply the exceedance
probability in Figure 2 b by a factor of the order of
10−2to obtain annual probabilities. However, this is
a crude approximation. The complementary cumu-
lative distribution function (CCDF) of Umax can be
approximated reasonably well by a Generalized Ex-
treme Value (GEV) distribution. Here and in the fol-
lowing, the term “major avalanche” is used in the
sense that these avalanches have return periods of
at least several years and can be considered large
relative to the path, but not necessary the most ex-
Figure 2: Complementary Cumulative Distribution Function
(CCDF, survivor function) of observed values of Umax /pgHSC /2
at Ryggfonn and for major avalanches at various locations. The
gray rectangle indicates a region that covers typical rare events
(cf. Fig. 3 b; and b) estimated exceedance probability of αver-
sus βaccording to the α-β-model (αm= 0.96β−1.4◦; gray
shaded area indicates αm±σ-range; for explanation see Lied
and Bakkehøi, 1980).
Fig. 3 a shows the calculated (dimensionless) ve-
locity of a mass block moving with a constant retard-
ing acceleration along a cycloidal track. The retard-
ing acceleration is chosen in such a way that the
mass block stops at the β-point (which is close to
the αm+σ-point), the αm-point, or the (αm−σ)-point
respectively. The light gray polygon indicates ex-
pected ranges for major avalanches according to the
observations in Fig. 2. Fig. 3 b shows correspond-
ing avalanche observations from major events with
drop heights between 100 m and 1200m. These ob-
servations have implication for the choice of model
parameters as further discussed in the next section.
Fig. 4 shows a collection of observed deposition
volumes. They show an increasing trend with drop
3. MODEL TESTS
3.1. Method
As mentioned, avalanche velocity is an important
parameter to characterize the dynamic behavior.
Observations imply that the velocity and especially
the maximum velocity of major avalanches scale
with the total drop height HSC , that is Umax ∼
pgHsc /2 (McClung and Gauer, 2018; Gauer, 2018,
2014). Combined with estimates on the expected
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
Figure 3: Scaling behavior of the front velocity of major
avalanches combined with runout estimates. a) based on analyt-
ical calculations (cycloidal track and constant retarding acceler-
ations; ϕ0= 48◦) and b) corresponding avalanche observations
from major events. The blue line shows the mean, the shaded
area the ±σ-range and the red dashed line the observed maxi-
mum derived from observations along the track. The black line
represents a “mean path” geometry and the dark gray shaded
area the envelope of all path geometries. The light grey polygon
provides a reference from Fig. 2.
Figure 4: Observed avalanche deposits of “major events” versus
total drop height HSC (for references to the data see Gauer et al.,
2010). ♦indicate the volumes used in the model simulations
in the next section. The lines show the estimated exceedance
probabilities derived from the observations.
runout of major avalanches, e.g., by using statisti-
cal models, these observations provide implications
for the choice of the empirical parameters used in
numerical avalanche models. Like for the Voellmy-
type friction models that find application in most of
the present-day avalanche models.
Using a simple parabolic track, model perfor-
mance can be tested. To this end, simulations were
performed on slightly channelized parabolic tracks
Figure 5: Model grid; the (•) mark the β-, αm-, (αm−σ)-, and
where the thalweg is given by
z1/HSC =a(x/HSC )2+b(x/HSC ) + c(5)
z(x,y) = z1(1 + f(y)) . (6)
Here, f(y) defines the degree of canalization. Using
a slight canalization should reduce lateral spread-
ing, which is caused by numerical diffusion and
therefore is an artifact. The parameter a,b,care
determined by the initial slope angle ϕ0, which is
also a proxy of the mean slope angle β≈0.72ϕ0−
1.4◦(for explanations see Gauer, 2018). Fig. 5
shows an example grid.
At the low point, the track is horizontally extended.
The initial volume is adjusted according to expected
deposition volumes (see Fig. 4). The corresponding
release areas are located above the actual track (i.e.
zSC >1) with an assumed fracture depths Dfrac ≈
[1 m, 1.5 m, 2 m] ·cos ϕ0and a constant slope angle
ϕ0given by the initial tangent of the track.
The simulations were done with the commer-
cial version of RAMMS:avalanche (version 1.7.20;
(Christen et al., 2010)) and the MoT-Voellmy model
(Version 2020-05-12, (Issler et al., 2023)).
The RAMMS friction parameters are chosen ac-
cording to standard values (Bartelt et al., 2017) cor-
responding to the respective volume class. How-
ever, only the highest elevation class is used, which
gives the lowest friction values—that is, they should
favor longer runouts and higher velocities. The pa-
rameter for the MoT-Voellmy model are chosen sim-
In addition, simulations were done with the
SAMOS-solver for dense flow (Sampl and Granig,
2009) including entrainment, however, with a modi-
fied friction law that favors Coulomb-type behavior:
τb=µ ρfg hfcos ϕ+CDρaU2
+τcmax exp −u2/25, exp −10h2
f , (7)
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
where hfis the flow depth, the avalanche density
ρf= [100, 200] kg m-3, the Coulomb friction factor
µ= 0.3, CDρa= 0.05 kg m-3 and τc= 100 Pa. The
second term on the right accounts for some air drag
and the last term on the right introduces some co-
hesion, which should mainly suppress spreading of
very shallow flows. This friction law is inspired by
the simple model tests in Gauer (2020). The en-
trainment is set to ≤300 kg m−2.
3.2. Results
Using the simple parabolic track, model perfor-
mance can be tested as shown in Fig. 6 and Fig. 8.
Here, we focus mainly on the RAMMS (Christen
et al., 2010) as it is probably one of the (if not the)
most used model by practitioners at present. Fig. 6
shows an example of the kind of simulations and
show the overall maximum values of the simulation
at a given grid point.
Figure 6: Simulation with RAMMS on a parabolic track: a) maxi-
mum velocity and b) maximum flow depth. The velocity is scaled
as USC =Umax /pgHsc /2 and the flow depth as h=hf/Dfrac,
where Dfrac is the initial fracture depth.
Fig. 7 shows a comparison of avalanche simula-
tions with RAMMS for different drop heights HSC =
[100, 300, 500, 750, 1000, 1500, 2000] m. The vol-
umes were adjusted according to expected depo-
sition volumes (see Fig. 4). The corresponding re-
lease areas are located above the actual track with
an assumed a fracture depth Dfrac = 2 cos ϕ0m and
a constant slope angle (in this case ϕ0= 35◦) given
by the initial tangent of the track; the mean slope
Figure 7: a) Simulated velocities with RAMMS along the thal-
weg for seven different drop heights. The maxima are marked
with (♦). The release volumes are adjusted to the drop height,
the fracture depth is set to 1.64 m and ϕ0= 35◦. As a ref-
erence, the β-, αm-, (α−1σ)- and (αm−2σ)-point are shown
(for explanation see Lied and Bakkehøi, 1980). b) Simulated
maximum velocity, Umax , versus square root of the drop height,
√Hsc , marked by (♦); the color illustrates the scaled velocity
USC =Umax /pgHsc /2 and the marker size corresponds to the
EAWS avalanche size classes. The lines show the estimated ex-
ceedance probabilities derived from observations shown in gray.
angle is β≈23.6◦.
As can be seen in Fig. 7 a the runout ends ap-
proximately at the mean expected αm-angle accord-
ing to the α-β-model, even though the large volumes
used could suggest that these simulations represent
more extreme events. For drop heights up-to around
750 m, the simulated maximum velocities are in the
range of rare events (cf. panel b). For drop heights
above, the maximum velocity reaches a terminal ve-
locity, which is not reflected in the observations. For
drop heights larger than 1000 m, the simulations un-
derestimate the velocities significantly compared to
the observations of major events. This is a typical
problem for models based on the Voellmy-fluid rhe-
ology (see also discussions in Gauer, 2014, 2013,
2018). However, also for drop heights smaller than
1000 m a subtle difference seems to exist as the
simulated maximum velocity tends to be obtained
earlier in the path than suggested by observations.
Fig. 8 shows the corresponding simulation as
shown in Fig. 6 for the MoT-Voellmy model. The
results are comparable to those from RAMMS, how-
ever, in this case the simulation suggests also some
spurious numerical artifacts.
As mentioned above, the combination of runout
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
Figure 8: Simulation with MoT-Voellmy on a parabolic tracks:
a) maximum velocity and b) maximum flow depth. The veloc-
ity is scaled by USC =Umax /pgHsc/2 and the flow depth by
h=hf/Dfrac, where Dfrac is the initial fracture depth (simulation
to t = 100 s; build version MoT-Voellmy.2020-05-12.exe).
observations and velocity scaling can provide a
more stringent constraint to the choice of the empir-
ical model parameters for the commonly used fric-
tion laws. Choosing a simple track geometry and
the use of expected runout and maximum velocities
along the track can therefore give a fast impression
of model performance versus observations depend-
ing on the model parameters.
The approach is exemplified in (Gauer, 2018,
2020). Fig. 9, Fig. 10, and Fig. 11, show results ob-
tained with a simple mass block model that accounts
for mass entrainment (Gauer, 2020). Although only
a first order approximation, the model captures the
observations for “major” dry-mixed avalanches rea-
sonably well. Therefore, the figures give a reference
for the 2D-simulations shown below in this section.
As the model is basically scale invariant regarding
the drop height HSC , Fig. 11 should be similar for
various drop heights. The colors and isolines in-
dicate the velocity distribution along the horizontal
distance for simulations along tracks with varying
steepness, ϕ0(or β≈0.72ϕ0−1.4◦). For the il-
lustration, the data are interpolated.
Fig. 13 gives now a summary of the simulated
runout angles for the 2D models. As indicated, the
models seem to capture the expected runout ac-
cording to the observations. However, one could
have expected longer runouts considering the sim-
ulation setup that should have favoured higher ve-
locities and longer runout for the Voellmy-type mod-
Figure 9: Simulated runout marked by the α-angle and maximum
velocity (color coded) on a parabolic track with the mean slope
angle, β, as parameter. (•) marks runs with entrainment height
He = 0.25 m and (♦) those with He = 0.5 m. Observations are
shown as gray dots. Estimated exceedance probabilities of α
versus β, according to the α-β-model αm= 0.96β−1.4◦(Lied and
Bakkehøi, 1980); gray shaded area αm±σ.
Figure 10: Simulated maximum velocity, Umax , versus square
root of the drop height, √HSC . The color illustrates the scaled
velocity, USC =Umax /pgHSC /2. The figure shows example
calculations for a cycloidal and parabolic track and two erosion
depths. The initial slope angle ϕ0of the tracks is 40◦. The gray
triangles depict measured maximum front-velocities from ”major
avalanche events” in various tracks (see Fig. 3). The lines depict
the probability of exceedance.
els. Also, it can be noticed that there is a slight in-
Figure 11: MBVM (Mass Block with Variable Mass): Normalized
velocity, U/pgHSC /2 ; iso-lines of scaled avalanche velocities
on some parabolic tracks with different steepness given by the
β-angle. Runout length given to hit the average runout angle αm
corresponding to the α-β-model (Lied and Bakkehøi, 1980). Also
marked are corresponding positions of β,αm, and where zbe-
comes zero. Gray shaded area shows αm±σrange.
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
crease in runout length with increasing slope steep-
ness. This is especially obvious in the Coulomb fric-
tion dominated SAMOSCCDTM2model. Here, the in-
crease is pronounced for lower drop heights, which
can be observed in Fig. 14.
Figure 12: Simulated runout marked by the α-angle and maxi-
mum velocity (color coded) on a parabolic track with the mean
slope angle, β(varying symbol) as parameter and vertical re-
lease depth HS = 1.5m. Expected runout angle αversus β, ac-
cording to the α-β-model αm= 0.96β−1.4◦(Lied and Bakkehøi,
1980); gray shaded area ±σ. Symbol size gives an impression of
the avalanche size according to EAWS volume classification. a)
RAMMS; b) MoT-Voellmy; c) SAMOSCCDTM2. The additional dot-
ted and dashed lines in c) show the α-β-model for Colorado and
the Sierra Nevada, respectively (c.f. McClung and Mears, 1991).
Fig. 14 shows plots corresponding to Fig. 11.
The values in all cases are taken along the cen-
ter line of the track and then interpolated. On a
first glance, the results suggest that the models are
capable to capture the runout distances reasonably
well, although one could have expected even longer
runouts with respect to the model setting, see also
Fig. 12.
Figure 13: Simulated maximum velocity, Umax , versus square
root of the effective drop height √HSe (i.e., including the height
of the release area) with the mean slope angle, βas parame-
ter (varying symbols; for comparison see Fig. 12) and vertical
release depth HS = 1.5 m. The color illustrates the scaled ve-
locity USC =Umax /pgHSC /2. a) RAMMS; b) MoT-Voellmy; c)
There are, however, noticeable difference be-
tween the predicted maximum velocities and those
observed, especially for avalanche drops heights
larger 1000 m. This is also very well reflected in
Fig. 13. But also for smaller drop heights it seems
there are differences in the velocity profiles. That
is, the simulations reach its maximum velocity early
on and underestimate the velocity the lower part of
the track as mentioned before. RAMMS and MoT-
Voellmy show quite similar behaviour. Both model
show a pronounced drop height dependency, which
is not reflected in observations of major avalanches.
The drop height dependency is less in the more
Coulomb-type model implemented in the SAMOS-
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
In this paper, a simple track geometry was used
to test the performance of present days avalanche
models. The model tracks varied in drop height and
mean steepness. The model results were compared
with expected values of the maximum velocity and
runout based on avalanche observations.
On a first glance, the results suggest that the
models are capable to capture the runout distances
reasonably well, although one could have expected
even longer runouts with respect to the model set-
ting. There are, however, noticeable difference be-
tween the predicted maximum velocities and those
observed, especially for avalanche drops heights
larger 1000 m. Also for smaller drop heights, it
seems there are differences in the velocity profiles
and that the simulation reaches its maximum veloc-
ity early on and underestimate the velocity the lower
part of the track.
Here, we focused mainly on RAMMS and on
MOT-Voellmy, but similar results are expected and
are known for other models using the Voellmy-
rheology too, like DAN3D (Aaron et al., 2016; Con-
lan et al., 2018) or SAMOS-AT (Sampl and Granig,
The more Coulomb-type variant of SAMOS pre-
sented here shows less velocity dependency of the
friction term and is better capable to reproduce the
higher observed velocities for larger drop heights.
There is, however, tendency to show longer runouts
than expected by the α-β-model for Norway for
steeper paths, at least for the parameters used here.
On the other hand, these runouts are still in ac-
cordance with observation from Colorado or Sierra
At this point, model developers could argue that
the Voellmy-rheology focuses mainly on the predic-
tion the dense- (or may be the slightly fluidized-) part
of the avalanche, and that the observations are un-
certain and include also events of highly fluidized-
or powder snow events. This refutation might be
justified, but the observations include avalanches
that are highly relevant for practitioners concerned
with hazard mapping and mitigation. Then practi-
tioners need to ask themselves, Are we using the
right tools? Are models that better account for vary-
ing flow-regimes and mass exchange needed?
Parts of this research was financially supported by
the Norwegian Ministry of Oil and Energy through
the project grant “R&D Snow avalanches 2017–
2019 & 2020–2023” to NGI, administrated by the
Norwegian Water Resources and Energy Direc-
torate (NVE).
Aaron, J., Conlan, M., Johnston, K., Gauthier, D., and McDougall,
D. (2016). Adapting and calibrating the dan3d dynamic model
for north american snow avalanche runout modelling. In Inter-
national Snow Science Workshop 2016 Proceedings, Breck-
enridge, CO, USA.
Bakkehøi, S., Domaas, U., and Lied, K. (1983). Calculation of
snow avalanche runout distance. Annals of Glaciology, 4:24–
Bartelt, P., B ¨
uhler, Y., Christen, M., Deubelbeiss, Y., Salz,
M., Schneider, M., and Schumacher, L. (2017). RAMMS
User Manual v.1.7.0 Avalanche. WSL Institute for Snow and
Avalanche Research SLF.
Buser, O. and Frutiger, H. (1980). Observed maximum run-out
distance of snow avalanches and the determination of the fric-
tion coefficients µand ξ.Journal of Glaciology, 26(94):121–
Christen, M., Kowalski, J., and Bartelt, P. (2010). RAMMS:
Numerical simulation of dense snow avalanches in three-
dimensional terrain. Cold Regions Science and Technology,
Conlan, M., Aaron, J., Johnston, K., Gauthier, D., and McDougall,
S. (2018). Dan3D model parameters for snow avalanche case
studies in Western Canada. In Dan3D Model Parameters for
Snow Avalanche Case Studies in Western Canada, pages
Gauer, P. (2013). Comparison of avalanche front velocity mea-
surements: supplementary energy considerations. Cold Re-
gions Science and Technology, 96:17–22.
Gauer, P. (2014). Comparison of avalanche front velocity mea-
surements and implications for avalanche models. Cold Re-
gions Science and Technology, 97:132–150.
Gauer, P. (2018). Considerations on scaling behavior in
avalanche flow along cycloidal and parabolic tracks. Cold Re-
gions Science and Technology, 151:34–46.
Gauer, P. (2020). Considerations on scaling behavior in
avalanche flow: Implementation in a simple mass block model.
Cold Regions Science and Technology, 180:103165.
Gauer, P., Kronholm, K., Lied, K., Kristensen, K., and Bakkehøi,
S. (2010). Can we learn more from the data underlying the
statistical α-βmodel with respect to the dynamical behavior of
avalanches? Cold Regions Science and Technology, 62:42–
Issler, D., Gledisch Giss, K., Gauer, P., Glimsdal, S., Domaas, U.,
and Sverdrup-Thygeson, K. (submitted 2023). NAKSIN – a
New Approach to Snow Avalanche Hazard Indication Mapping
in Norway. Cold Regions Science and Technology.
Lied, K. and Bakkehøi, S. (1980). Empirical calculations of snow-
avalanche run-out distance based on topographic parameters.
Journal of Glaciology, 26(94):165–177.
McClung, D. M. and Gauer, P. (2018). Maximum frontal
speeds, alpha angles and deposit volumes of flowing snow
avalanches. Cold Regions Science and Technology, 153:78–
McClung, D. M. and Mears, A. I. (1991). Extreme value predic-
tion of snow avalanche runout. Cold Regions Science and
Technology, 19(2):163–175.
Perla, R., Cheng, T. T., and McClung, D. M. (1980). A two-
parameter model of snow-avalanche motion. Journal of
Glaciology, 26(94):119–207.
Rudolf-Miklau, F., Sauermoser, S., and Mears, A. I., editors
(2014). The Technical Avalanche Protection Handbook. Ernst
& Sohn.
Sampl, P. and Granig, M. (2009). Avalanche simulation with
SAMOS-AT. In Proceedings of the International Snow Sci-
ence Workshop, Davos, pages 519–523.
Voellmy, A. (1955). ¨
Uber die Zerst¨
orungskraft von Law-
inen. Schweizerische Bauzeitung, Sonderdruck aus dem 73.
Jahrgang(12, 15, 17, 19 und 37):1–25.
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023
Figure 14: Maximum normalized velocity, U/pgHSC/2, along the center line for various drop heights,
HSC = [100, 500,750, 1000, 1500, 2000] m (top to bottom) for the models RAMMS, MoT-Voellmy, and SAMOSCCDTM2
Proceedings, International Snow Science Workshop, Bend, Oregon, 2023 | {"url":"https://www.researchgate.net/publication/375090202_WHAT_AVALANCHE_OBSERVATIONS_TELL_US_ABOUT_THE_PERFORMANCE_OF_NUMERICAL_MODELS","timestamp":"2024-11-11T11:25:54Z","content_type":"text/html","content_length":"653969","record_id":"<urn:uuid:35a644d8-bf7a-4d5d-8812-09f9a993bbd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00063.warc.gz"} |
Commutative idempotent involutive FL-algebras
A \emph{commutative idempotent involutive FL-algebra} or \emph{commutative idempotent involutive residuated lattice} is a structure $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \sim\rangle$ of type
$\langle 2, 2, 2, 0, 1\rangle$ such that
$\langle A, \vee, \wedge\rangle$ is a lattice
$\langle A, \cdot, 1\rangle$ is a semilattice with top
$\sim$ is an \emph{involution}: ${\sim}{\sim}x=x$ and
$xy\le z\iff x\le {\sim}(y({\sim}z))$
A \emph{commutative involutive FL-algebra} or \emph{commutative involutive residuated lattice} is a structure $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \sim\rangle$ of type $\langle 2, 2, 2, 0,
1\rangle$ such that
$\langle A, \vee\rangle$ is a semilattice
$\langle A, \cdot\rangle$ is a semilattice and
$x\le z\iff x\cdot{\sim}y\le{\sim}1$, where $x\le y\iff x\vee y=y$.
Let $\mathbf{A}$ and $\mathbf{B}$ be involutive residuated lattices. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)
$, $h(x \cdot y)=h(x) \cdot h(y)$, $h({\sim}x)={\sim}h(x)$ and $h(1)=1$.
Basic results
Finite members
f(1)= &1\\
f(2)= &1\\
f(3)= &1\\
f(4)= &2\\
f(5)= &2\\
f(6)= &4\\
f(7)= &4\\
f(8)= &9\\
f(9)= &10\\
f(10)= &21\\
f(11)= &22\\
f(12)= &49\\
f(13)= &52\\
f(14)= &114\\
f(15)= &121\\
f(16)= &270\\
^1) N. Galatos and P. Jipsen, \emph{Residuated frames with applications}, Transactions of the AMS, 365 (2013), 1219-1249 | {"url":"https://math.chapman.edu/~jipsen/structures/doku.php?id=commutative_idempotent_involutive_residuated_lattices","timestamp":"2024-11-14T20:01:17Z","content_type":"application/xhtml+xml","content_length":"22368","record_id":"<urn:uuid:71f57ccf-799f-4d5e-a79e-fb6b42848a2d>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00669.warc.gz"} |
Lectures on Differential Topology - Free Computer, Programming, Mathematics, Technical Books, Lecture Notes and Tutorials
Lectures on Differential Topology
• Title: Lectures on Differential Topology
• Author(s) Riccardo Benedetti
• Publisher: American Mathematical Society (October 27, 2021); eBook (Arxiv Edition, Creative Commons Licensed)
• License(s): Creative Commons License (CC)
• Hardcover/Paperback: 425 pages
• eBook: PDF (416 pages)
• Language: English
• ISBN-10: 1470462710
• ISBN-13: 978-1470462710
• Share This:
Book Description
This book gives a comprehensive introduction to the theory of smooth manifolds, maps, and fundamental associated structures with an emphasis on "bare hands" approaches, combining
differential-topological cut-and-paste procedures and applications of transversality.
About the Authors
• Riccardo Benedetti: University of Pisa, Pisa, Italy.
Reviews, Ratings, and Recommendations: Related Book Categories: Read and Download Links: Similar Books:
Book Categories
Other Categories
Resources and Links | {"url":"https://freecomputerbooks.com/Lectures-on-Differential-Topology.html","timestamp":"2024-11-13T15:44:50Z","content_type":"application/xhtml+xml","content_length":"34605","record_id":"<urn:uuid:e8c09e71-b47d-4c01-8d4e-f9cb075a3138>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00219.warc.gz"} |
Revision #2 to TR13-142 | 24th February 2014 18:02
Property Testing Bounds for Linear and Quadratic Functions via Parity Decision Trees
In this paper, we study linear and quadratic Boolean functions in the context of property testing. We do this by observing that the query complexity of testing properties of linear and quadratic
functions can be characterized in terms of complexity in another model of computation called {\em parity decision trees}.
The observation allows us to characterize testable properties of linear functions in terms of the approximate $l_1$ norm of the Fourier spectrum of an associated function. It also allows us to
reprove the $\Omega(k)$ lower bound for testing $k$-linearity due to Blais et al. More interestingly, it rekindles the hope of closing the gap of $\Omega(k)$ vs $O(k\log k)$ for testing $k$-linearity
by analyzing the randomized parity decision tree complexity of a fairly simple function called $E_k$ that evaluates to $1$ if and only if the number of $1$s in the input is exactly $k$. The approach
of Blais et al. using communication complexity is unlikely to give anything better than $\Omega(k)$ as a lower bound.
In the case of quadratic functions, we prove an adaptive two-sided $\Omega(n^2)$ lower bound for testing affine isomorphism to the inner product function. We remark that this bound is tight and
furnishes an example of a function for which the trivial algorithm for testing affine isomorphism is the best possible. As a corollary, we obtain an $\Omega(n^2)$ lower bound for testing the class of
{\em Bent} functions.
We believe that our techniques might be of independent interest and may be useful in proving other testing bounds.
Changes to previous version:
Revision meant as a full version to the conference version to appear in the proceedings of the 9th International Computer Science Symposium in Russia (CSR 2014).
Revision #1 to TR13-142 | 11th October 2013 17:23
Property Testing Bounds for Linear and Quadratic Functions via Parity Decision Trees
In this paper, we study linear and quadratic Boolean functions in the context of property testing. We do this by observing that the query complexity of testing properties of linear and quadratic
functions can be characterized in terms of the complexity in another model of computation called parity decision trees.
The observation allows us to characterize the testable properties of linear functions in terms of the approximate $l_1$ norm of the Fourier spectrum of an associated function. It also allows us to
reprove the $\Omega(k)$ lower bound for testing $k$-linearity due to Blais et al. More interestingly, it rekindles the hope of closing the gap of $\Omega(k)$ vs $O(k\log k)$ for testing $k$-linearity
by analyzing the randomized parity decision tree complexity of a fairly simple function called $E_k$ that evaluates to $1$ if and only if the number of $1$s in the input is exactly $k$. The approach
of Blais et al. using communication complexity fails to give anything better than $\Omega(k)$ as a lower bound.
In the case of quadratic functions, we prove an adaptive, two-sided $\Omega(n^2)$ lower bound for testing affine isomorphism to the inner product function. We remark that this bound is tight and
furnishes an example of a function for which the trivial algorithm for testing affine isomorphism is the best possible. As a corollary, we obtain an $\Omega(n^2)$ lower bound for testing the class of
Bent functions.
We believe that our techniques might be of independent interest and may be useful in proving other testing bounds.
Changes to previous version:
Modified proof in Appendix C.2
TR13-142 | 11th October 2013 12:52
Property Testing Bounds for Linear and Quadratic Functions via Parity Decision Trees
In this paper, we study linear and quadratic Boolean functions in the context of property testing. We do this by observing that the query complexity of testing properties of linear and quadratic
functions can be characterized in terms of the complexity in another model of computation called parity decision trees.
The observation allows us to characterize the testable properties of linear functions in terms of the approximate $l_1$ norm of the Fourier spectrum of an associated function. It also allows us to
reprove the $\Omega(k)$ lower bound for testing $k$-linearity due to Blais et al. More interestingly, it rekindles the hope of closing the gap of $\Omega(k)$ vs $O(k\log k)$ for testing $k$-linearity
by analyzing the randomized parity decision tree complexity of a fairly simple function called $E_k$ that evaluates to $1$ if and only if the number of $1$s in the input is exactly $k$. The approach
of Blais et al. using communication complexity fails to give anything better than $\Omega(k)$ as a lower bound.
In the case of quadratic functions, we prove an adaptive, two-sided $\Omega(n^2)$ lower bound for testing affine isomorphism to the inner product function. We remark that this bound is tight and
furnishes an example of a function for which the trivial algorithm for testing affine isomorphism is the best possible. As a corollary, we obtain an $\Omega(n^2)$ lower bound for testing the class of
Bent functions.
We believe that our techniques might be of independent interest and may be useful in proving other testing bounds. | {"url":"https://eccc.weizmann.ac.il/report/2013/142/","timestamp":"2024-11-12T12:17:41Z","content_type":"application/xhtml+xml","content_length":"27675","record_id":"<urn:uuid:d3ced92d-affd-495c-94f2-08a64b41161b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00687.warc.gz"} |
The Kinetic Theory of the Solar Wind and its Interaction with the Moon
Griffel, David Henry (1968) The Kinetic Theory of the Solar Wind and its Interaction with the Moon. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/DH61-ZQ91. https://
NOTE: Text or symbols not renderable in plain ASCII are indicated by [...]. Abstract is included in .pdf document.
Beyond about .1 A.U. from the sun, fluid mechanics is not a good approximation for the solar wind, because the collision frequency is low. Analysis of the particle dynamics shows that if there are no
collisions beyond .1 A.U., then at the earth T[...]/T[...] = 35; this is much greater than is observed. We study the effects of interactions by means of the Boltzmann equation. Solving it with
Krook's collision term, we find that the temperature anisotropy observed by the Vela satellite requires each particle to make an average of 2 or 3 collisions between .1 and 1 A.U. The temperature
averaged over direction roughly follows an adiabatic law, with γ = 3/2; γ tends to increase with distance. The theory predicts an excess of high-velocity particles, as is observed by Vela, even when
the collision frequency is independent of velocity; but to produce an effect as strong as that observed requires a fairly strong velocity-dependence of the collision frequency.
We proceed to study the interaction of the wind with the moon, treated as a solid body, with neither magnetic field nor atmosphere, absorbing and neutralizing all incident particles. We construct an
exact theory of the boundary layer between such a body and a plasma with a magnetic field parallel to the surface, valid when the plasma has no velocity towards the surface. The thickness of the
layer is about two gyroradii, and the magnetic field rises across it according to the equation of pressure balance.
We then consider two-dimensional models of the complete wind-planet interaction, and show that in any steady two-dimensional flow, the plasma velocity must be tangential to the body. Then, using the
model of the sheath constructed above, we show that there can be no steady flow at all around a finitely conducting cylinder.
Finally, we consider the magnetic fields induced by the interplanetary field inside the moon, taking account of its rotation. If the applied field is uniform, then in the steady state there is a
constant axial field inside the sphere; near the surface there is a complex toroidal field, dying away to zero in the interior if the sphere is spinning rapidly. If the external field is non-uniform,
there is a residual toroidal field throughout the sphere. If the diffusion time is longer than the time between reversals of the inter-planetary field, then the moon will contain concentric shells of
toroidal and axial fields, independently diffusing inwards.
Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: (Physics)
Degree Grantor: California Institute of Technology
Division: Physics, Mathematics and Astronomy
Major Option: Physics
Thesis Availability: Public (worldwide access)
Research Advisor(s): • Davis, Leverett
Thesis Committee: • Unknown, Unknown
Defense Date: 18 December 1967
Record Number: CaltechETD:etd-09202008-111601
Persistent URL: https://resolver.caltech.edu/CaltechETD:etd-09202008-111601
DOI: 10.7907/DH61-ZQ91
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 3673
Collection: CaltechTHESIS
Deposited By: Imported from ETD-db
Deposited On: 06 Nov 2008
Last Modified: 02 Apr 2024 17:54
Thesis Files
PDF (Griffel_dh_1968.pdf) - Final Version
See Usage Policy.
Repository Staff Only: item control page | {"url":"https://thesis.library.caltech.edu/3673/","timestamp":"2024-11-13T13:13:29Z","content_type":"application/xhtml+xml","content_length":"29406","record_id":"<urn:uuid:e664dcbc-9285-4e0d-8888-b41b2ea685bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00576.warc.gz"} |
The American Scientist Magazine Understands Nothing about the Traveling Salesman Problem
I like to think there are some solid foundations to my life. I will be able to do the Monday New York Times crossword and I will not be able to do the Thursday version. My dental hygienist will not
be satisfied with the amount of flossing I do. I will get my favorite spaghetti meal served on my birthday.
And I can trust American Scientist to write articles that teach me science in an accessible, yet challenging, way. American Scientist is the magazine of the honor society Sigma Xi and is my favorite
science magazine since the demise of The Sciences, my all-time favorite science magazine. Or it was.
That particular pillar of my existence crumbled into dust today as I was reading an otherwise interesting article on “Paradoxes, Contradictions, and the Limits of Science” by Noson Yanofsky. In this
article, there is a sidebar on the Traveling Salesman Problem that makes me weep.
It is fortunate that they have put this behind their firewall so as to minimize the damage they have done. But let me briefly describe: there is a map of the United States, along with a truly
atrocious Traveling Salesman tour through the US, apparently visiting each zip code (approximately 43,000 zip codes). The tour is created by “Hilbert Curves” and is “about 75% optimal”. Then, the
inevitable, “Finding the actual shortest route would take a computer essentially an infinite amount of time to compute”. Cue the sidebar: “But what about going to 100 different cities? A computer
would have to check 100x99x98x….x2x1 possible routes.” They then helpfully expand that into the 157 digit number of potential routes. “The computer would have see how long the route takes, and then
compare all of them to find the shortest route.”
The sidebar continues with the drivel of computer speed and the number of centuries it would take.
No, no. 100! times no. It is not necessary to check all the routes. 100 cities TSPs can be solved to proved optimality in a very short period of time. And even 43,000 zip codes can be solved to
optimality with a non-infinite (and quite practical) amount of computation. Using integer programming and specialized techniques, Bill Cook and his gang at Concorde have solved to optimality problems
with up to 85,900 cities. The “zip code” problem is just like the problems Cook has solved, just smaller.
The Traveling Salesman problem is NP-complete (or NP-hard depending what you mean by the TSP). But as a computational approach, the “figuring out how big 100! is and that is the time” completely
misunderstands the role optimization and algorithms play in truly solving hard problems (I mean it: the true optimal is found and proved to be optimal). My most recent blog post (sadly too long ago)
was on this very subject. Perhaps I should rename this blog “Michael Trick’s Rants about Complete Enumeration Arguments”.
Even if you want to call this instance impossible to solve, why would you ever use a Hilbert Curve heuristic for it? Don’t get me wrong: I am a student of John Bartholdi and his paper with Loren
Platzman on spacefilling curves and his paper applying this to Meals-on-Wheels scheduling changed my life. But these techniques don’t give great tours (they have lots of other great properties).
It appears American Scientist has taken this tour from this webpage. The published map is the bottom one on the page. The resulting tour is clearly not good. Practically any other modern heuristic
would do better. This is no complaint about Robert Kosara, the author of the original article: the page describes what he wanted to do, and he did it well. But for American Scientist to put this
forward as an example of the state-of-the-art heuristic approach completely misrepresents the state of the art for heuristic approaches. And I have no idea what 75% optimal means: the tour is longer
than optimal, of course.
I don’t know if this means I should not trust American Scientist in areas that are not my specialty. On something I know a bit about, the magazine has failed miserably.
14 thoughts on “The American Scientist Magazine Understands Nothing about the Traveling Salesman Problem”
1. Hi, good rant! The idea to use a Hilbert curve came to me when I was thinking about this years ago. So I did about five minutes of research and Wikipedia’s 75% number was good enough for me. And
off I went implementing it (was pretty easy, too).
I didn’t really care how optimal it was, I was mostly after a fun variation on my scribble map. I might spend some time redoing it, since it’s another election year… got any good pointers? I can
imagine lots of ways of making this more efficient with quad trees, Voronoi diagrams, etc.
2. Isn’t this a chance to be a scientist and publish a peer-reviewed paper refuting the results in American Scientist, showing the provably optimal tour of all US zip codes?
3. Well, really disappointing from the readers’ perspective. I would be glad to see operations research on a scientific magazine but it clearly needs to explain better the state-of-the-art
techniques otherwise it’s just sad.
4. You’re just miffed because the “75% optimal” route invades Canada multiple times.Hope they’re doing it in the summer — they have to swim back and forth across Lake Michigan, and it can get a bit
5. I hope you plan on writing a Letter to the Editor. It at least contact the author.
6. American Scientist does a good job of allowing readers to debate with authors who publish incorrect material in the magazine. Please do send a letter to the editor.
7. This is true, but this isn’t constructive. It still doesn’t explain the complexity of a constraint satisfaction problem to journalist, let alone his/her audience.
TSP is a red herring: it has only 1 constraint and human technology can solve 10k+ TSP’s optimally in reasonable time. Add a few arbitrary constraints on it (like any business case in the real
world will) and that optimally goes out the window.
Instead, let’s take CVRPTW with a load balancing constraint – how do we explain to the journalist that it takes exponential time to solve? How do we explain that no humans or technology on the
planet can solve optimally and scale out? The size of the brute force search space matters. But also the degree of restrictiveness of the constraints matter: not too little and not too much.
8. Please do write a letter to the editors; it will give them an opportunity to bring the error to the attention of all the magazine’s readers, most of whom will not see this blog post.
Let me further suggest that whenever one of your favorite science magazines disappoints you, think for a moment about how you might make it better. A letter the the editor is a start. But maybe
you’d like to write an article yourself? Or just suggest subjects and authors who might produce the kind of work you’d like to read about in those pages.
American Scientist is not a distant and faceless institution. The magazine is not created on another planet and beamed here for the edification (or otherwise) of earthlings. It’s a channel of
communication created by and for the entire community of science. The magazine is a nonprofit enterprise with a tiny staff of editors, illustrators, and others. They are very smart and work very
hard, but they don’t know everything about every branch of science, mathematics, and engineering. With your help, they’ve got a much better chance of avoiding missteps the next time an article
mentions the TSP.
Disclosure: Until recently I wrote a column for American Scientist, and once upon a time I was the magazine’s editor. Not an innocent bystander. Over the years I’ve made my own share of goofs in
the pages of the magazine, and if I’ve gotten any better over the years, it’s because of all the people who have taken the trouble to show me the errors of my ways.
9. so…. hear me out… why not just take UPS’ right-turn-only delivery algorithms and map that out to the rest of the county? they need to be able to touch every house (let alone every zipcode). I
know I’m not bright enough to sit in anyone’s lecture, but to say that finding “the most efficient way” is impossible, has I think (IMO) probably already been done – just not in the manner spoken
of here…. or I’m just not understanding the problem…
10. Thank you for pointing out a truly fascinating article. Yanofsky’s topic is really interesting and has many intriguing ideas. As for the minor point about TSP, it remains a NP-Complete problem
and there are cases where the problem still demands an exponential amount of time. I also think that talking against a great magazine like American Scientist for a sidebar of an article is a bit
silly. Surely you jest!
11. The problem is much harder than simply trying to visit 43,000 points on the surface of a sphere optimally. Each zip code is a polygon on the surface of the sphere. The constraints say “visit each
zip code”. Does that mean one has to be on a road? Or can a person walk, swim and rapel across terrain and water? If a road, please define the meaning of “a road”? Gravel? Off-road 2-track? Dirt
bike trail? Even when each of these questions is answered, the problem is much harder than just visiting 43,000 points.
12. Thanks for all the feedback here. Brian Hayes: your column was the reason I subscribed. It was brilliant! And you are right that there are more positive things I can do. But this _particular_ bit
of misinformation is extremely grating (as my previous blog post pointed out).
13. Aside from all the off-topic drivel on HN, I would also tend to wonder about the articles conclusions (having done this by hand myself). But then, there are very few things nowadays I agree with,
news of any kind being something I will usually disagree with. Certainly not worth a blog however.
14. A couple of small points:
Mike, I would guess that “within 75% of optimal” probably means “within 175% of optimal,” or “with an error within 75% of the optimum.” That’s apparently not the invention of the sidebar writers,
but appears in many articles Google turns up about the Hilbert curve heuristic. Nevertheless, Christofedes’s heuristic is already within 150% of optimal. And add my vote to sending a letter.
Don Wills, indeed a realistic routing problem based on this premise is harder to formulate (what points in each zip, what roads count, geodesic vs. Euclidian vs. odometer distance, etc.), much
less solve. But the sidebar is still just about the stylized TSP: if you had to visit 43000 points with some distances (presumably at least respecting the triangle inequality), what’s the most
efficient way to find the best tour? Certainly not complete enumeration. | {"url":"https://mat.tepper.cmu.edu/blog/index.php/2016/04/24/the-american-scientist-magazine-understands-nothing-about-the-traveling-salesman-problem/","timestamp":"2024-11-04T11:23:49Z","content_type":"text/html","content_length":"80764","record_id":"<urn:uuid:b71eaa77-7548-42f5-bf0c-144b35b02d12>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00863.warc.gz"} |
Two identical conducting rods are first connected independently
Two identical conducting rods are first connected independently: Heat Conduction
Problem: Two identical conducting rods are first connected independently to two vessels, one containing water at 100°C and the other containing ice at 0°C. In the second case, the rods are joined end
to end and connected to the same vessels. Let q[1] and q[2] grams per second be the rate of melting of ice in the two cases, respectively. The ratio q[1]/q[2] is? (IIT JEE 2004)
A. 1/2
B. 2
C. 4
D. 1/4
Answer: The answer is (C) i.e., the ratio q[1]/q[2] is 4.
Solution: The rate of heat conduction through a material having conductivity $\kappa$, cross-section area $A$, length $\Delta x$, and temperature difference between two ends $\Delta T$ is given by, \
frac{\Delta Q}{\Delta t}=\kappa A \frac{\Delta T}{\Delta x}.\nonumber
In case (1), two rods are connected in parallel.
The rate of heat transfer through each rod is, ${\Delta Q}/{\Delta t}=\kappa A ({100}/{l})$. Thus, the rate of heat transfer to the ice is, \label{vpa:eqn:1} \frac{\Delta Q_1}{\Delta t}=2 \frac{\
Delta Q}{\Delta t}=\kappa A \frac{200}{l}.
In case (2), two identical rods are connected in series making their effective length $2l$.
The rate of heat transfer to the ice is given by, \label{vpa:eqn:2} \frac{\Delta Q_2}{\Delta t}=\kappa A\frac{100}{2l}. The heat transferred to the ice is used to melt it. The rate of melting is, q=\
frac{\Delta m}{\Delta t}=\frac{1}{L}\frac{\Delta Q}{\Delta t}, \nonumber where $L$ is latent heat of fusion. Use above equations to get the ratio of rate of melting in two cases i.e., \frac{q_1}{q_2}
=\frac{\Delta Q_1}{\Delta t}\Big/\frac{\Delta Q_2}{\Delta t}=4.
Related Question
Two metal cubes A and B of the same size are arranged as shown in the figure. The extreme ends of the combination are maintained at the indicated temperatures. The arrangement is thermally insulated.
The coefficients of thermal conductivity of A and B are 300 W/(m-°C) and 200 W/(m-°C), respectively. After steady state is reached the temperature T of the interface will be? (IIT JEE 1996)
Answer: 60°C
More on Heat Conduction
See Our Book | {"url":"https://www.concepts-of-physics.com/thermodynamics/two-identical-conducting-rods-are.php","timestamp":"2024-11-14T16:42:59Z","content_type":"text/html","content_length":"15876","record_id":"<urn:uuid:276fe9ed-4d26-4764-91ce-af524c0bea46>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00727.warc.gz"} |
Evaluating & Optimizing Code Performance In R
Optimizing R code can significantly improve the performance of R scripts and programs, making them run more efficiently. This is especially important for large and complex data sets, as well as for
applications that need to be run in real-time or on a regular basis.
In this RStudio tutorial, we’ll evaluate and optimize an R code’s performance using different R packages, such as tidyverse and data.table. As an example, we’ll see how long it takes for RStudio to
read a large CSV file using the read.csv ( ) function, the tidyverse package, and the data.table package.
Optimizing Performance In R
Open RStudio. In the R script, assign the file extension to a variable.
You need to use the system.file ( ) function to determine how long it takes to perform a function or operation. Since we want to evaluate how long it takes to open a file, write read.csv (df) in the
When you run the code, the Console will show you the time it took to open the file. The elapsed column shows how long it took for the CPU to perform the R code. The results show that it took RStudio
31.93 seconds which is a significant amount of time. This loading time is impractical if you’re always working with large datasets.
One of the ways you can optimize the performance of your R code is by using the tidyverse package. Doing so reduces the time from 30 to 5 seconds.
Take note that in order to read the file, you need to use the read_csv ( ) function.
The tidyverse package improves loading time in R through the use of the readr package, which provides a set of fast and efficient functions for reading and writing data. The readr package provides
functions such as read_csv ( ) and read_table ( ) that can read large data sets quickly and efficiently.
Another optimization method in R is using the data.table package. This is free to download in the internet.
The data.table package in R is a powerful and efficient tool for working with large and complex datasets. It provides an enhanced version of the data.frame object, which is a core data structure in
R. The main advantage of data.table is its high performance and low memory usage when working with large datasets.
Note that when using this package, you need to write the fread ( ) function instead of read.csv ( ). When you run this together with your code, you can see that the loading time is reduced to 2.25
Comparing R Packages Using Microbenchmark
To compare the performance between each method, you can use the microbenchmark ( ) function.
The microbenchmark ( ) function in R is a tool for measuring the performance of R code. It provides a simple and easy-to-use interface for benchmarking the execution time of R expressions.
A great thing about this function is you’re able to set how many times the process is repeated. This gives more precise results. You’re also able to identify if the results are consistent.
If you’re having trouble reading a CSV file in Power BI, RStudio can do it for you. There are other options in R that you can use to optimize your code’s performance. But data.table is highly
recommended because of its simplicity.
***** Related Links *****
Edit Data In R Using The DataEditR Package
How To Install R Packages In Power BI
RStudio Help: Ways To Troubleshoot R Problems
Optimizing R code is an important step in ensuring that your R scripts run efficiently. There are several techniques and tools that can be used to optimize R code, such as using the tidyverse package
for data manipulation, using the data.table package for large data sets, and using the microbenchmark package for measuring the performance of R code.
It’s also important to keep in mind good coding practices such as using vectorized operations instead of loops, making use of built-in functions instead of writing your own, and being mindful of the
memory usage of your code.
All the best,
George Mount
This project aims to predict future sales for a retail chain using Random Forest algorithms implemented in R, aiding in informed supply chain and inventory management decisions.
Sales Forecasting for Retail Chain using Random Forest in R | {"url":"https://blog.enterprisedna.co/evaluating-optimizing-code-performance-in-r/page/2/?et_blog","timestamp":"2024-11-01T23:08:47Z","content_type":"text/html","content_length":"179417","record_id":"<urn:uuid:3a1729ad-0d25-4cd9-a273-8962705d38e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00649.warc.gz"} |
Help/idea with automation/scripts (Sonoff Basic with Tasmota)
Hello everyone!
I wonder if anyone could suggest the approach for this idea. I have 27 Sonoffs basic with GPIO14, TX and RX soldered with wires connected to the physical light switches. Everything works perfectly on
Tasmota but now I need to set functions to TX and RX on Home Assistant, total of 54 commands (TOGGLE), no need for POWER1 TOGGLE since it is already configured on each light entity created. Also I
want to use the HOLD message, so it will result on 135 commands (including POWER1 topic).
What do you guys think it would be the best approach? Create 135 automations looking the MQTT message and executing a script? Create 135 MQTT switches?
Something like this:
Total of 27 relays:
• 81 physical switches
• 54 TOGGLE commands
• 81 HOLD commands
It would be very nice to have it on a single automation so the action could identify the trigger and execute a script…
Here are all topic states:
I have a working singe automation for a Xiaomi Cube that goes like this:
- alias: BotĂŁo Xiaomi quarto
platform: mqtt
topic: 'zigbee2mqtt/casa_xiaomi_botao_quarto_01'
condition: template
value_template: "{{ trigger.payload_json.click in ('single','double','triple','quadruple','many','long') }}"
service_template: "script.botao_xiaomi_quarto_01_{{ trigger.payload_json.click }}"
I was thinking of something like this, so I could just create all scripts like this: script.interruptor_s_1_luz_tv_POWER2_TOGGLE.
Thanks a lot!
So it looks like you could do something like this, but you’d need to name your scripts based on the second phrase in your topics.
Also, i’m not versed well in MQTT, i just got the /+ thing from the documents, no clue if that’s how it works.
- alias: mqtt
- platform: mqtt
topic: stat/+
- condition: template
value_template: "{{ trigger.payload_json.click in ('single','double','triple','quadruple','many','long') }}"
- service_template: >
script.{{ trigger.topic.split('/')[1] }}
power: >
{{ trigger.topic.split('/')[-1] }}
I’m not sure if that would work because you aren’t providing enough information. Is this just for a toggle? What would these scripts look like?
1 Like
Hello! Thanks for your time, I really appreciate it.
Sorry, I was definitely not clear enough explaining my ideia.
That automation sample is the one I use for Xiaomi wireless buttons, over Zigbee2MQTT protocol, so that is what the click type is. I like the way the service template is created using the json
information from the message.
What I really need is to use my Sonoffs Basics with 3 physical buttons, one for the real relay and 2 for virtual relays that results on POWER2 and POWER3 messages over MQTT. Both topics can broadcast
TOGGLE and HOLD commands, so excluding the POWER1 TOOGLE command (already set on the light entity), I can use POWER1 HOLD, POWER2 TOOGLE, POWER2 HOLD, POWER3 TOGGLE and POWER3 HOLD to trigger scripts
using automation (MQTT trigger).
I am not sure the best way to simplify this process of calling the correct script by stripping the full topic sent by the device, for example:
If I press TOGGLE on POWER2 using the device s_1_luz_tv I get stat/s_1_luz_tv/RESULT = TOGGLE. So I need to figure out how to strip this message to know which device and what command is sent and use
it on the service_template to simplify the script entity_id, so this example would be script.interruptor_s_1_luz_tv_POWER2_TOGGLE.
So just one automation could call any of the 135 pre made scripts.
This process will be used with other POWER# topics and HOLD/TOGGLE messages to call different scripts.
All scripts would be pre-configured and named accordingly, so I can change it later if needed.
Sorry if I was not able to explain it better
Thanks again my friend!
Not sure if this would help:
Since this is the message I get when TOOGLE is sent using POWER2 - stat/s_1_luz_tv/RESULT = {"POWER2":"TOGGLE"}, maybe this would work?
- service_template: script.interruptor_{{ trigger.topic.split('/')[1] }}_{{ trigger.payload_json }}_{{ trigger.payload_json.POWER1 }}{{ trigger.payload_json.POWER2 }}{{ trigger.payload_json.POWER3 }}
Or when I get stat/s_1_luz_tv/POWER2 = TOGGLE i could use:
- service_template: script.interruptor_{{ trigger.topic.split('/')[1] }}_{{ trigger.topic.split('/')[2] }}_{{ trigger.payload_json }}
To use with script.interruptor_s_1_luz_tv_POWER2_TOGGLE
Can tx and Rx shorted to ground be used to send mqtt messages on the sonoff basic using tasmota?
Yes! That is what I did. You need to set the module to generic and then set two pins as relays and switches, so it thinks it is activating relays and sends POWER2 and POWER3 messages.
Here is a backlog for it:
Backlog Module 18; GPIO0 9; GPIO1 10; GPIO2 0; GPIO3 11; GPIO4 0; GPIO5 0; GPIO12 17; GPIO13 52; GPIO14 9; GPIO15 18; GPIO16 19; SetOption13 1; SerialLog 0, SetOption26 0; SwitchTopic 1; SwitchMode1
5; SwitchMode2 5; SwitchMode3 5; SetOption32 20
Thanks. I have a module where gpio14 doesn’t seem to work so I might try soldering something into tx or Rx.
I use toggle buttons rather than momentary. Do you know if Rx or Rx can be pulled to ground on boot without doing anything wiered
1 Like
I use RX and TX just like GPIO14, you can do everything with them. I have both momentary and toggle switches, you just need to change switchmode in console. Also, you can only use HOLD with toggle
switches, obviously. Good luck!
Do you have an example of this premade script?
That is the beauty of it: I can create anything I want by just using the correct name standard.
Something like this:
alias: "Luz painel TV"
- service: light.toggle
entity_id: light.ledsuitecama
As simple as that or as complex as I need. The main problem here is how to call the correct script given the topic and message information only by the automation.
I’m wondering if you even need to make a script for each one. Anyways, this should work assuming the stat/+ bullshit works and if this is how your topics come across:
- alias: mqtt
- platform: mqtt
topic: stat/+
- condition: template
value_template: "{{ trigger.payload_json.items() | list | length >= 1 and trigger.topic.split('/') | length == 3 }}"
- service_template: >
{% set args = trigger.payload_json.items() | list %}
{% set power, action = args[0] %}
{% set a, scriptname, c = trigger.topic.split('/') %}
script.interruptor_{{ scriptname }}_{{ power }}_{{ action }}
1 Like
Amazing job! I will try it ASAP and let you know the results! Thanks again mate!
You may want to check and see if the 3rd split item in your trigger topic is equal to “RESULT” as well to filter out similar topics.
Just to make sure: the scriptname variable would be the MQTT topic for the device, correct? The script name will adapt to relative topic and actions…
The way it’s written, this topic:
with this payload:
should run this script:
1 Like
Also: I am thinking about the trigger condition… Wouldn’t be better to have de devices topics (list above) as conditions? Since I use Zigbee2MQTT and many other MQTT devices, it could only
trigger by the topics I want.
Just a tough.
you could move the topics to start with something other than stat for this. Otherwise, you’ll be writing out all triggers by hand.
1 Like
Hello my friend! After a couple of days of testing this worked out for me. Since Power2 and Power3 sends TOGGLE commands, I’ve created individual automations (so I can disable them by their
function) to call scripts named after the trigger action/topic result:
- alias: BotĂŁo Sonoff - Hold
- platform: mqtt
topic: cmnd/#
payload: 'HOLD'
- service_template: script.interruptor_{{ trigger.topic.split('/')[1] | lower }}_{{ trigger.topic.split('/')[2] | lower }}_hold
- alias: BotĂŁo Sonoff - Power 2 toggle
- platform: mqtt
topic: cmnd/+/POWER2
payload: 'TOGGLE'
- service_template: script.interruptor_{{ trigger.topic.split('/')[1] | lower }}_{{ trigger.topic.split('/')[2] | lower }}_power2
- alias: BotĂŁo Sonoff - Power 3 toggle
- platform: mqtt
topic: cmnd/+/POWER3
payload: 'TOGGLE'
- service_template: script.interruptor_{{ trigger.topic.split('/')[1] | lower }}_{{ trigger.topic.split('/')[2] | lower }}_power3
Since POWER1 are usually configured as light or switch, the message sent is either ON or OFF, so no problems on using those automations/scripts.
I’ve found some great info HERE. like this:
Messages in MQTT are published on topics. There is no need to configure a topic, publishing on it is enough. Topics are treated as a hierarchy, using a slash (/) as a separator. This allows sensible arrangement of common themes to be created, much in the same way as a filesystem. For example, multiple computers may all publish their hard drive temperature information on the following topic, with their own computer and hard drive name being replaced as appropriate:
Clients can receive messages by creating subscriptions. A subscription may be to an explicit topic, in which case only messages to that topic will be received, or it may include wildcards. Two wildcards are available, + or #.
+ can be used as a wildcard for a single level of hierarchy. It could be used with the topic above to get information on all computers and hard drives as follows:
As another example, for a topic of "a/b/c/d", the following example subscriptions will match:
The following subscriptions will not match:
# can be used as a wildcard for all remaining levels of hierarchy. This means that it must be the final character in a subscription. With a topic of "a/b/c/d", the following example subscriptions will match:
Zero length topic levels are valid, which can lead to some slightly non-obvious behaviour. For example, a topic of "a//topic" would correctly match against a subscription of "a/+/topic". Likewise, zero length topic levels can exist at both the beginning and the end of a topic string, so "/a/topic" would match against a subscription of "+/a/topic", "#" or "/#", and a topic "a/topic/" would match against a subscription of "a/topic/+" or "a/topic/#".
Thanks for your help!
Hello @petro! How are you today my friend?
I was wondering if you could help me again, with a new take on this automation.
I’ve learned to work with rules inside Tasmota so now I can control all switches by using buttons, so now I just publish mqtt messages when a any of the three buttons are pressed (toggle and hold
The topic structure are like this:
stat/device_topic/BUTTON {"BUTTON1":"HOLD"}
Here is the main idea for the automation:
- alias: Sonoff buttons
initial_state: true
- platform: mqtt
topic: stat/+/BUTTON
- service_template: script.interruptor_{{ trigger.topic.split('/')[1] | lower }}_{{ trigger.payload_json.split('.')[0] | lower }}_{{ trigger.payload_json.split('.')[1] | lower }}
Trigger seems to be working fine but I have not been able to parse just a part of the trigger.json_payload since I call a script name by using just the BUTTON# part and final “action” part to
call the correct script.
So I can call this script, for example:
alias: "Luz principal"
- service: light.toggle
entity_id: light.cozinha
I am getting this error with this automation:
Error while executing automation automation.botoes_sonoff. Error rendering template for call_service at pos 1: UndefinedError: 'dict object' has no attribute 'split'
Can you please help me with this parsing code?
Thanks a lot!
Hmm, I’m on my phone, when I get home I’ll take a look
1 Like | {"url":"https://community.home-assistant.io/t/help-idea-with-automation-scripts-sonoff-basic-with-tasmota/75238","timestamp":"2024-11-09T14:06:27Z","content_type":"text/html","content_length":"69709","record_id":"<urn:uuid:45265b66-d120-49f7-9784-ae1d1c5e224f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00574.warc.gz"} |
Fan Mission: Quinn Co. Part 1 - La Banque Bienveillante (Remake) by Airship Ballet
I enjoyed this mission, but fell at the last hurdle: I can't figure out how to "Leave with the loot"!
I tried going back to the starting point, but nothing happened. All of the windows have bars, and I can't find any doors out of the bank.
I also had to consult the forums to find out how to empty the teller safe. That key hiding place is very non-obvious (at least from a Thief perspective — I guess it makes perfect sense in real life).
So long as all your objectives are done, the front door where you start is the way out! I'll be making the location volume bigger for next go round as a few people mentioned it's fiddly. Have to make
sure you're carrying the loot bag too, that's all I can think of!
On 7/31/2022 at 8:06 PM, Airship Ballet said:
Have to make sure you're carrying the loot bag too, that's all I can think of!
Yes, that was it. I never even realised there was something to pick up in the vault after the "fade to black" sequence.
Robbing a bank and forgetting to take the bag sounds like something I'd do
• 2
• 2 weeks later...
Finished this up yesterday and enjoyed the remake.
You nailed the rundown bank look perfectly! Story was quick and easy to follow. Had some trouble initially finding the teller safe key, but after a second look (thanks to your clue) I found it and
was on my way. Spooky secret btw.
Looking forward to the next installment, thanks for all your hard work.
30 minutes ago, Thiefette said:
Looking forward to the next installment
Thanks Thiefette, glad to see you're still around and enjoying the missions!
The next instalment might be coming sooner than you'd think
Really nice mission! I haven't played any of your old ones so I had no idea what to expect and it was a very pleasant surprise. Tight and great looking, the environment was really well made.
Appreciated some of the new (or less used) soundscapes as well, I think it's a big part of the overall atmosphere and while pretty much all of those used in TDM are good, the effect gets lessened
when you hear them all the time.
Looking forward to you next work!
Thanks Vozka, glad you had a good time!
I've actually gone back and made more new ambients for where I'd used stock ones here: you're right in that they're great, but we've all heard them by now. New ones will be in this mission when the
next one in the campaign hits shelves worldwide, and I'll probably post a compilation if others wanna use them for variety!
• 4 months later...
Just played this through, very nice. It feels more like a homage to the original than a replacement, I have to say.
I could almost ghost it ... but there's that one guard. I can turn the light off (a bust for Supreme), but can I get past him without an alert before he turns the light back on? I seriously doubt
it, didn't try very hard, and I wouldn't be able to get back again anyway, so I abandoned the ghost. Is this meant to be ghostable?
And that last secret. As you say, Hmm... WTF? Is there any lore for this strangeness?
• 8 months later...
how many remakes are there since i have a different layout then i see back on other places like youtube.
eg the vault is direct behind the entrence door.
and the buy page has many items ? And the objects are all equal now?
i realy can not find that obvious teller safe key.
Very nice short mission, the atmosphere was top notch. The rain pouring in the open window and down the stairs... but rainy atmosphere is hard to beat anyways.
Only thing to be aware of is that the mission won´t end if you have any open objectives, they are not optional here.
• 3 months later... | {"url":"https://forums.thedarkmod.com/index.php?/topic/21488-fan-mission-quinn-co-part-1-la-banque-bienveillante-remake-by-airship-ballet/page/3/","timestamp":"2024-11-02T15:54:19Z","content_type":"text/html","content_length":"273880","record_id":"<urn:uuid:78562e30-a73a-49c0-9022-cfd60bd4252e>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00755.warc.gz"} |
How To Use TRANSPOSE Function In Google Sheets[All Details]
Google Sheets is a smart tool that you are most likely going to cross paths with no matter what field you come from. However, this smart tool does include its fair share of hiccups.
When working with a spreadsheet, especially a visual spreadsheet, you may need to change the format of your data, which can get complicated the larger your sets are.
Thankfully, Google Sheets is filled with handy functions that can help achieve most of your interactions.
For this guide, we will be focusing on the transposing of data sets. Transposing data sets is simply the swapping of rows and columns and switching between the vertical and the horizontal view of
data in a spreadsheet.
We can do this by using the TRANSPOSE Function.
What is the TRANSPOSE Function?
The TRANSPOSE Function is one of Google Sheets’ numerous formatting formulas which is solely used for the purpose of switching between the vertical and the horizontal view of a data set in a
spreadsheet by swapping the rows and columns of a data set.
The TRANSPOSE Formula is typed as follows:
• =TRANSPOSE is the function trigger,
• (array_or_range) is where we highlight the targeted data set.
Let us say we have a data set that is the list of scores of two separate tests that a teacher is reporting, after finishing his list, the teacher decides that a horizontal view of these scores would
be a better fit for his report, the list is too long for a total redo.
The TRANSPOSE function would be perfect for this task. Have a look at our given list:
To switch this list from vertical to horizontal, we should first select a blank cell to write our formula in. For the above list, the TRANSPOSE Function will be used as follows:
Press Enter or the Return Key after writing the above formula.
The list should automatically be transposed to look like this:
You can do the same to a horizontal set to switch it into its vertical format.
You can not, however, delete nor edit a single entry in the transposed set. Changes can only be made by deleting or editing an entry in the original data set.
The only change you can do to the transposed data set is to delete it entirely.
Extra Help
The TRANSPOSE function is not strictly used in the above method alone, it actually has a few more tricks up its sleeve. Jump TRANSPOSE function OR Using the TRANSPOSE Function Every N Rows/Columns
For limiting the TRANSPOSE function to every certain number of rows or columns, we have to first use the INDEX function to sort out these cells.
The INDEX function is used for fetching specific data out of a larger set of data according to a specified shared element. Its formula is as follows:
=INDEX(reference, [row], [column])
Apply this function to your original data set to extract your desired entries. For example, to extract an entry every 2 rows from our original data set, we will write the following formula:
Where N is the number of jumps or skips the INDEX function makes.
Applying a skip of N=2, type this formula into a blank cell as follows:
Press Enter or the Return Key to reveal the fetched data and drag from the cell to copy the formula to the rest of the cells. It should look like this:
Then, apply the TRANSPOSE function to the results to shift the rows and columns into horizontal orientation using the following formula:
Your results should look as follows:
Now your INDEX fetched data is transposed into a horizontal orientation.
Sheet to Sheet transposition
If you’re working across different sheets, you can use the TRANSPOSE function to send the transposed data set result into another sheet using only one formula.
Using the data in our original spreadsheet, go to a new sheet, choose a blank cell and write the following formula:
Press Enter or the Return Key to reveal the following results:
What Does the TRANSPOSE Function Do?
The TRANSPOSE function is one of the numerous functions of Google Sheets. It swaps the rows and columns of a certain data set which changes its orientation from vertical to horizontal or vice versa.
The TRANSPOSE function is used across both cells and sheets to control and edit the way a set of entries is shown.
Individual entries in the transposed data set result can not be edited or deleted, this can only be done by applying changes to the original data set.
You can only delete the entire transposed data set without applying changes to the original.
The TRANSPOSE function can not also overwrite pre-existing data, meaning that if a certain cell initially had data in it, an overlaying TRANSPOSE function result would be displayed as #REF.
Fix this by either clearing out the preexisting data in that specific cell or by changing cell positions.
To sum up everything we have learned, the TRANSPOSE function is a Google Sheets feature that swaps the rows and columns in a highlighted data set to switch their orientation from vertical to
horizontal or vice versa.
We discussed the TRANSPOSE function formula and applied it to an example data set. We applied the basic formula to the basic data set.
We explored the TRANSPOSE function for every N row or column and its relation to the INDEX function. The INDEX function in Google Sheets works on fetching a specific section of entries from a larger
data set of entries based on a shared element.
Lastly, we learned how to use the TRANSPOSE function over various sheets and the formula associated with it.
The TRANSPOSE function is an extremely simple and a highly helpful feature of Google Sheets. We hope this tutorial was clear, useful, and easy to follow. | {"url":"https://abidakon.com/use-transpose-function-in-google-sheets/","timestamp":"2024-11-09T11:18:16Z","content_type":"text/html","content_length":"99921","record_id":"<urn:uuid:ec658c8f-124a-4e21-b133-4c983425aca7>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00007.warc.gz"} |
This study explores how secondary mathematics teachers envision potential argumentation situations in the classroom. The data were collected by means of individual semi-structured interviews
conducted with 31 secondary mathematics teachers. The participants were asked to express their views on argumentation for teaching mathematics, provide examples of argumentation as manifested in
their own teaching, and formulate a script for the hypothetical implementation of a mathematical task in the classroom with the goal of engaging students in argumentative activity. Analysis of the
teachers' responses yielded categories related to: (1) task characteristics, (2) teaching strategies, and (3) students’ characteristics. From a cross-analysis of the teachers' statements, certain
categories appeared more frequently than others. The findings are interpreted in light of theory and practice.
Original language English
Title of host publication Proceedings of the 44th Conference of the International Group for the Psychology of Mathematics Education, 2021
Editors Maitree Inprasitha, Narumon Changsri, Nisakorn Boonsena
Publisher Psychology of Mathematics Education (PME)
Pages 33-40
Number of pages 8
ISBN (Print) 9786169383017
State Published - 2021
Event 44th Conference of the International Group for the Psychology of Mathematics Education, PME 2021 - Virtual, Online
Duration: 19 Jul 2021 → 22 Jul 2021
Publication series
Name Proceedings of the International Group for the Psychology of Mathematics Education
Volume 2
ISSN (Print) 0771-100X
ISSN (Electronic) 2790-3648
Conference 44th Conference of the International Group for the Psychology of Mathematics Education, PME 2021
City Virtual, Online
Period 19/07/21 → 22/07/21
Bibliographical note
Publisher Copyright:
© 2021, Psychology of Mathematics Education (PME). All rights reserved.
ASJC Scopus subject areas
• Mathematics (miscellaneous)
• Developmental and Educational Psychology
• Experimental and Cognitive Psychology
• Education
Dive into the research topics of 'EXPLORING TEACHERS’ ENVISIONING OF CLASSROOM ARGUMENTATION'. Together they form a unique fingerprint. | {"url":"https://cris.haifa.ac.il/en/publications/exploring-teachers-envisioning-of-classroom-argumentation","timestamp":"2024-11-08T22:34:15Z","content_type":"text/html","content_length":"55484","record_id":"<urn:uuid:7950788c-9b19-4b3d-bc90-ca85737c27c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00461.warc.gz"} |
Smoothing Capacitor Calculator
This tool calculates the capacitor value for a full-wave bridge rectifier. The capacitor is used to smooth the output voltage to a specified ripple.
👉 Ripple Voltage Calculator
C = I[LOAD]/(2*f*V[Ripple])
• I[LOAD] is the load current
• f is the frequency
• V[Ripple] is the peak-to-peak voltage ripple
Example Calculation
For a load current of 1 Amp, 60 Hz frequency and Ripple Voltage of 1 Volt the required capacitor is 8.3 mF. To reduce the voltage ripple down to 0.5 V requires twice the capacitor or 16.7 mF.
What is a Smoothing Capacitor?
A smoothing capacitor, also known as a filter capacitor, is an electrical component used in power supply circuits to convert pulsating direct current (DC) output from a rectifier into a smoother,
more stable DC voltage.
When AC voltage is converted to DC voltage through a process called rectification, the result is a DC voltage that rises and falls in a waveform, typically resembling a series of peaks and valleys.
This is because the rectification process only changes the direction of the current, but does not produce a constant voltage. The purpose of a smoothing capacitor is to reduce these fluctuations to
create a steadier DC signal.
The smoothing capacitor works by charging up during the peaks of the waveform and then discharging when the voltage begins to fall. This charge-discharge cycle effectively fills in the valleys of the
waveform, leading to a smoother and more constant DC output voltage. The capacity of the capacitor to store charge, measured in Farads (F), determines its effectiveness at smoothing the output.
Larger capacitors can store more charge, and therefore, provide better smoothing by covering more of the dips in the rectified DC signal.
It’s important to note that while smoothing capacitors significantly reduce the ripple in the output DC voltage, they might not completely eliminate it. Additional voltage regulation may be necessary
for applications requiring very stable DC voltages with minimal ripple.
Related Posts | {"url":"https://3roam.com/capacitor-smoothing-calculator/","timestamp":"2024-11-05T02:36:31Z","content_type":"text/html","content_length":"195846","record_id":"<urn:uuid:563bd250-1336-40e5-a4b7-658b5836d5d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00389.warc.gz"} |
IU Indianapolis ScholarWorks :: Browsing by Subject "Particle Swarm Optimization"
Browsing by Subject "Particle Swarm Optimization"
Now showing 1 - 9 of 9
Results Per Page
Sort Options
• Applying Different Wide-Area Response-Based Controls to Different Contingencies in Power Systems
(2019-08) Iranmanesh, Shahrzad; Steven, Rovnyak; King, Brian; dos Santos, Euzeli Cipriano
The electrical disturbances in the power system have threatened the stability of the system. In the first step, it is necessary to detect these electrical disturbances or events. In the next
step, a proper control should apply to the system to decrease the consequences of the disturbances. One-shot control is one of the effective methods for stabilizing the events. In this method, a
proper amount of loads are increased or decreased to the electrical system. Determining the amounts of loads, and the location for shedding is crucial. Moreover, some control combinations are
more effective for some events and less effective for some others. Therefore, this project is completed in two different sections. First, finding the effective control combinations, second,
finding an algorithm for applying different control combinations to different contingencies in real-time. To find effective control combinations, sensitivity analysis is employed to locate the
most effective loads in the system. Then to find the control combination commands, gradient descent, and PSO algorithm are used in this project. In the next step, a pattern recognition method is
used to apply the appropriate control combination for every event. The decision tree is selected as the pattern recognition method. The three most effective control combinations found by
sensitivity analysis and the PSO method are used in the remainder of this study. A decision tree is trained for each of the three control combinations, and their outputs are combined into an
algorithm for selecting the best control in real-time. Finally, the algorithm is evaluated using a test set of contingencies. The final results reveal a 30\% improvement in comparison to the
previous studies.
• Artificial ants deposit pheromone to search for regulatory DNA elements
(BioMed Central, 2006-08-30) Liu, Yunlong; Yokota, Hiroki; Medicine, School of Medicine
Background Identification of transcription-factor binding motifs (DNA sequences) can be formulated as a combinatorial problem, where an efficient algorithm is indispensable to predict the role of
multiple binding motifs. An ant algorithm is a biology-inspired computational technique, through which a combinatorial problem is solved by mimicking the behavior of social insects such as ants.
We developed a unique version of ant algorithms to select a set of binding motifs by considering a potential contribution of each of all random DNA sequences of 4- to 7-bp in length. Results
Human chondrogenesis was used as a model system. The results revealed that the ant algorithm was able to identify biologically known binding motifs in chondrogenesis such as AP-1, NFκB, and sox9.
Some of the predicted motifs were identical to those previously derived with the genetic algorithm. Unlike the genetic algorithm, however, the ant algorithm was able to evaluate a contribution of
individual binding motifs as a spectrum of distributed information and predict core consensus motifs from a wider DNA pool. Conclusion The ant algorithm offers an efficient, reproducible
procedure to predict a role of individual transcription-factor binding motifs using a unique definition of artificial ants.
• Comparing Pso-Based Clustering Over Contextual Vector Embeddings to Modern Topic Modeling
(2022-05) Miles, Samuel; Ben Miled, Zina; Salama, Paul; El-Sharkawy, Mohamed
Efficient topic modeling is needed to support applications that aim at identifying main themes from a collection of documents. In this thesis, a reduced vector embedding representation and
particle swarm optimization (PSO) are combined to develop a topic modeling strategy that is able to identify representative themes from a large collection of documents. Documents are encoded
using a reduced, contextual vector embedding from a general-purpose pre-trained language model (sBERT). A modified PSO algorithm (pPSO) that tracks particle fitness on a dimension-by-dimension
basis is then applied to these embeddings to create clusters of related documents. The proposed methodology is demonstrated on three datasets across different domains. The first dataset consists
of posts from the online health forum r/Cancer. The second dataset is a collection of NY Times abstracts and is used to compare
• Dynamic electronic asset allocation comparing genetic algorithm with particle swarm optimization
(2018-12) Islam, Md Saiful; Christopher, Lauren A.; King, Brian S.; El-Sharkawy, Mohamed
The contribution of this research work can be divided into two main tasks: 1) implementing this Electronic Warfare Asset Allocation Problem (EWAAP) with the Genetic Algorithm (GA); 2) Comparing
performance of Genetic Algorithm to Particle Swarm Optimization (PSO) algorithm. This research problem implemented Genetic Algorithm in C++ and used QT Data Visualization for displaying
three-dimensional space, pheromone, and Terrain. The Genetic algorithm implementation maintained and preserved the coding style, data structure, and visualization from the PSO implementation.
Although the Genetic Algorithm has higher fitness values and better global solutions for 3 or more receivers, it increases the running time. The Genetic Algorithm is around (15-30\%) more
accurate for asset counts from 3 to 6 but requires (26-82\%) more computational time. When the allocation problem complexity increases by adding 3D space, pheromones and complex terrains, the
accuracy of GA is 3.71\% better but the speed of GA is 121\% slower than PSO. In summary, the Genetic Algorithm gives a better global solution in some cases but the computational time is higher
for the Genetic Algorithm with than Particle Swarm Optimization.
• Motion correction of PET/CT images
(2017) Chong Chie, Juan Antonio Kim Hoo; Salama, Paul; Territo, Paul
The advances in health care technology help physicians make more accurate diagnoses about the health conditions of their patients. Positron Emission Tomography/Computed Tomography (PET/CT) is one
of the many tools currently used to diagnose health and disease in patients. PET/CT explorations are typically used to detect: cancer, heart diseases, disorders in the central nervous system.
Since PET/CT studies can take up to 60 minutes or more, it is impossible for patients to remain motionless throughout the scanning process. This movements create motion-related artifacts which
alter the quantitative and qualitative results produced by the scanning process. The patient's motion results in image blurring, reduction in the image signal to noise ratio, and reduced image
contrast, which could lead to misdiagnoses. In the literature, software and hardware-based techniques have been studied to implement motion correction over medical files. Techniques based on the
use of an external motion tracking system are preferred by researchers because they present a better accuracy. This thesis proposes a motion correction system that uses 3D affine registrations
using particle swarm optimization and an off-the-shelf Microsoft Kinect camera to eliminate or reduce errors caused by the patient's motion during a medical imaging study.
• Multi-Objective Optimization of Plug-In HEV Powertrain Using Modified Particle Swarm Optimization
(2021-05) Parkar, Omkar; Anwar, Sohel; Tovar, Andres; Li, Lingxi
An increase in the awareness of environmental conservation is leading the automotive industry into the adaptation of alternatively fueled vehicles. Electric, Fuel-Cell as well as Hybrid-Electric
vehicles focus on this research area with the aim to efficiently utilize vehicle powertrain as the first step. Energy and Power Management System control strategies play a vital role in improving
the efficiency of any hybrid propulsion system. However, these control strategies are sensitive to the dynamics of the powertrain components used in the given system. A kinematic mathematical
model for Plug-in Hybrid Electric Vehicle (PHEV) has been developed in this study and is further optimized by determining optimal power management strategy for minimal fuel consumption as well as
NOx emissions while executing a set drive cycle. A multi-objective optimization using weighted sum formulation is needed in order to observe the trade-off between the optimized objectives.
Particle Swarm Optimization (PSO) algorithm has been used in this research, to determine the trade-off curve between fuel and NOx. In performing these optimizations, the control signal consisting
of engine speed and reference battery SOC trajectory for a 2-hour cycle is used as the controllable decision parameter input directly from the optimizer. Each element of the control signal was
split into 50 distinct points representing the full 2 hours, giving slightly less than 2.5 minutes per point, noting that the values used in the model are interpolated between the points for each
time step. With the control signal consisting of 2 distinct signals, speed, and SOC trajectory, as 50 element time-variant signals, a multidimensional problem was formulated for the optimizer.
Novel approaches to balance the optimizer exploration and convergence, as well as seeding techniques are suggested to solve the optimal control problem. The optimization of each involved
individual runs at 5 different weight levels with the resulting cost populations being compiled together to visually represent with the help of Pareto front development. The obtained results of
simulations and optimization are presented involving performances of individual components of the PHEV powertrain as well as the optimized PMS strategy to follow for a given drive cycle.
Observations of the trade-off are discussed in the case of Multi-Objective Optimizations.
(ProQuest, 2009) Banvait, Harpreetsingh; Anwar, Sohel; Chen, Yaobin; Eberhart, Russell C.
Plug-in Hybrid Electric Vehicles (PHEV) are new generation Hybrid Electric Vehicles (HEV) with larger battery capacity compared to Hybrid Electric Vehicles. They can store electrical energy from
a domestic power supply and can drive the vehicle alone in Electric Vehicle (EV) mode. According to the U.S. Department of Transportation 80 % of the American driving public on average drives
under 50 miles per day. A PHEV vehicle that can drive up to 50 miles by making maximum use of cheaper electrical energy from a domestic supply can significantly reduce the conventional fuel
consumption. This may also help in improving the environment as PHEVs emit less harmful gases. However, the Energy Management System (EMS) of PHEVs would have to be very different from existing
EMSs of HEVs. In this thesis, three different Energy Management Systems have been designed specifically for PHEVs using simulated study. For most of the EMS development mathematical vehicle
models for powersplit drivetrain configuration are built and later on the results are tested on advanced vehicle modeling tools like ADVISOR or PSAT. The main objective of the study is to design
EMSs to reduce fuel consumption by the vehicle. These EMSs are compared with existing EMSs which show overall improvement. x In this thesis the final EMS is designed in three intermediate steps.
First, a simple rule based EMS was designed to improve the fuel economy for parametric study. Second, an optimized EMS was designed with the main objective to improve fuel economy of the vehicle.
Here Particle Swarm Optimization (PSO) technique is used to obtain the optimum parameter values. This EMS has provided optimum parameters which result in optimum blended mode operation of the
vehicle. Finally, to obtain optimum charge depletion and charge sustaining mode operation of the vehicle an advanced PSO EMS is designed which provides optimal results for the vehicle to operate
in charge depletion and charge sustaining modes. Furthermore, to implement the developed advanced PSO EMS in real-time a possible real time implementation technique is designed using neural
networks. This neural network implementation provides sub-optimal results as compared to advanced PSO EMS results but it can be implemented in real time in a vehicle. These EMSs can be used to
obtain optimal results for the vehicle driving conditions such that fuel economy is improved. Moreover, the optimal designed EMS can also be implemented in real-time using the neural network
procedure described.
• Particle Swarm Optimization in the dynamic electronic warfare battlefield
(2017-04-27) Witcher, Paul Ryan; Christopher, Lauren
This research improves the realism of an electronic warfare (EW) environment involving dynamic motion of assets and transmitters. Particle Swarm Optimization (PSO) continues to be used to place
assets in such a manner where they can communicate with the largest number of highest priority transmitters. This new research accomplishes improvement in three areas. First, the previously
stationary assets and transmitters are given a velocity component, allowing them to change positions over time. Because the assets now have a starting position and velocity, they require time to
reach the PSO solution. In order to optimally assign each asset to move in the direction of a PSO solution location, a graph-based method is implemented. This encompasses the second area of
research. The graph algorithm runs in O(n^3) time and consumes less than 0.2% of the total measured computation time to find a solution. Transmitter location updates prompt a recalculation of the
PSO, causing the assets to change their assignments and trajectories every second. The computation required to ensure accuracy with this behavior is less than 0.5% of the total computation time.
The final area of research is the completion of algorithmic performance analysis. A scenario with 3 assets and 30 transmitters only requires an average of 147ms to update all relevant information
in a single time interval of one second. Analysis conducted on the data collected in this process indicates that more than 95% of the time providing automatic updates is spent with PSO
calculations. Recommendations on minimizing the impact of the PSO are also provided in this research.
• Real-time estimation of state-of-charge using particle swarm optimization on the electro-chemical model of a single cell
(2017-05) Chandra Shekar, Arun; Anwar, Sohel
Accurate estimation of State of Charge (SOC) is crucial. With the ever-increasing usage of batteries, especially in safety critical applications, the requirement of accurate estimation of SOC is
paramount. Most current methods of SOC estimation rely on data collected and calibrated offline, which could lead to inaccuracies in SOC estimation as the battery ages or under different
operating conditions. This work aims at exploring the real-time estimation and optimization of SOC by applying Particle Swarm Optimization (PSO) to a detailed electrochemical model of a single
cell. The goal is to develop a single cell model and PSO algorithm which can run on an embedded device with reasonable utilization of CPU and memory resources and still be able to estimate SOC
with acceptable accuracy. The scope is to demonstrate the accurate estimation of SOC for 1C charge and discharge for both healthy and aged cell. | {"url":"https://scholarworks.indianapolis.iu.edu/browse/subject?value=Particle%20Swarm%20Optimization","timestamp":"2024-11-05T09:56:30Z","content_type":"text/html","content_length":"486670","record_id":"<urn:uuid:e921f2bd-8720-479f-8f11-0882635f6f0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00337.warc.gz"} |
Introductory Chemical Engineering Thermodynamics, 2nd ed.
Please rate the screencasts using the stars on the right of each post. Ratings are anonymous. Ratings help users find good screencasts, and help the authors know which screencasts to improve.
Unifac.xls Calculation of Bubble Temperature. (3 min) (LearnChemE.com)
Comprehension Questions: Download Unifac.xls from the software link and use it to answer the following.
1. Estimate the activity coefficient of IPA in water at 80C and xw = 0.1.
2. Estimate the fugacity for IPA in water at 80C and xw =0.1.
3. Estimate the total pressure at 80C when xw =0.1.
4. Estimate the bubble temperature of IPA in water at 760mmHg and xw =0.1.
UNIFAC concepts (8:17) (msu.edu)
UNIFAC is an extension of the UNIQUAC method where the residual contribution is predicted based on group contributions using energy parameters regressed from a large data set of mixtures. This
screecast introduces the concepts used in model development. You may want to review group contribution methods before watching this presentation.
Comprehension Questions:
1. What is the difference between the upper case Θ of UNIFAC and the lower cast θ of UNIQUAC?
2. Suppose you had a mixture that was exactly the same proportions as the lower right "bubble" in slide 2. Compute Θ[OH] for that mixture.
3. Compare your value computed in 2 to the value given by unifac.xls. | {"url":"https://chethermo.net/comment/27","timestamp":"2024-11-07T10:45:01Z","content_type":"text/html","content_length":"19723","record_id":"<urn:uuid:f945a875-6ddb-4a52-baab-4d9f444d1175>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00798.warc.gz"} |
What is the difference between LR, SLR, and LALR parsers?
What is the actual difference between LR, SLR, and LALR parsers? I know that SLR and LALR are types of LR parsers, but what is the actual difference as far as their parsing tables are concerned?
And how to show whether a grammar is LR, SLR, or LALR? For an LL grammar we just have to show that any cell of the parsing table should not contain multiple production rules. Any similar rules for
LALR, SLR, and LR?
For example, how can we show that the grammar
S --> Aa | bAc | dc | bda
A --> d
is LALR(1) but not SLR(1)?
EDIT (ybungalobill): I didn't get a satisfactory answer for what's the difference between LALR and LR. So LALR's tables are smaller in size but it can recognize only a subset of LR grammars. Can
someone elaborate more on the difference between LALR and LR please? LALR(1) and LR(1) will be sufficient for an answer. Both of them use 1 token look-ahead and both are table driven! How they are
SLR, LALR and LR parsers can all be implemented using exactly the same table-driven machinery.
Fundamentally, the parsing algorithm collects the next input token T, and consults the current state S (and associated lookahead, GOTO, and reduction tables) to decide what to do:
• SHIFT: If the current table says to SHIFT on the token T, the pair (S,T) is pushed onto the parse stack, the state is changed according to what the GOTO table says for the current token (e.g,
GOTO(T)), another input token T' is fetched, and the process repeats
• REDUCE: Every state has 0, 1, or many possible reductions that might occur in the state. If the parser is LR or LALR, the token is checked against lookahead sets for all valid reductions for the
state. If the token matches a lookahead set for a reduction for grammar rule G = R1 R2 .. Rn, a stack reduction and shift occurs: the semantic action for G is called, the stack is popped n (from
Rn) times, the pair (S,G) is pushed onto the stack, the new state S' is set to GOTO(G), and the cycle repeats with the same token T. If the parser is an SLR parser, there is at most one reduction
rule for the state and so the reduction action can be done blindly without searching to see which reduction applies. It is useful for an SLR parser to know if there is a reduction or not; this is
easy to tell if each state explicitly records the number of reductions associated with it, and that count is needed for the L(AL)R versions in practice anyway.
• ERROR: If neither SHIFT nor REDUCE is possible, a syntax error is declared.
So, if they all the use the same machinery, what's the point?
The purported value in SLR is its simplicity in implementation; you don't have to scan through the possible reductions checking lookahead sets because there is at most one, and this is the only
viable action if there are no SHIFT exits from the state. Which reduction applies can be attached specifically to the state, so the SLR parsing machinery doesn't have to hunt for it. In practice L
(AL)R parsers handle a usefully larger set of langauges, and is so little extra work to implement that nobody implements SLR except as an academic exercise.
The difference between LALR and LR has to do with the table generator. LR parser generators keep track of all possible reductions from specific states and their precise lookahead set; you end up with
states in which every reduction is associated with its exact lookahead set from its left context. This tends to build rather large sets of states. LALR parser generators are willing to combine states
if the GOTO tables and lookhead sets for reductions are compatible and don't conflict; this produces considerably smaller numbers of states, at the price of not be able to distinguish certain symbol
sequences that LR can distinguish. So, LR parsers can parse a larger set of languages than LALR parsers, but have very much bigger parser tables. In practice, one can find LALR grammars which are
close enough to the target langauges that the size of the state machine is worth optimizing; the places where the LR parser would be better is handled by ad hoc checking outside the parser.
So: All three use the same machinery. SLR is "easy" in the sense that you can ignore a tiny bit of the machinery but it is just not worth the trouble. LR parses a broader set of langauges but the
state tables tend to be pretty big. That leaves LALR as the practical choice.
Having said all this, it is worth knowing that GLR parsers can parse any context free language, using more complicated machinery but exactly the same tables (including the smaller version used by
LALR). This means that GLR is strictly more powerful than LR, LALR and SLR; pretty much if you can write a standard BNF grammar, GLR will parse according to it. The difference in the machinery is
that GLR is willing to try multiple parses when there are conflicts between the GOTO table and or lookahead sets. (How GLR does this efficiently is sheer genius [not mine] but won't fit in this SO
That for me is an enormously useful fact. I build program analyzers and code transformers and parsers are necessary but "uninteresting"; the interesting work is what you do with the parsed result and
so the focus is on doing the post-parsing work. Using GLR means I can relatively easily build working grammars, compared to hacking a grammar to get into LALR usable form. This matters a lot when
trying to deal to non-academic langauges such as C++ or Fortran, where you literally needs thousands of rules to handle the entire language well, and you don't want to spend your life trying to hack
the grammar rules to meet the limitations of LALR (or even LR).
As a sort of famous example, C++ is considered to be extremely hard to parse... by guys doing LALR parsing. C++ is straightforward to parse using GLR machinery using pretty much the rules provided in
the back of the C++ reference manual. (I have precisely such a parser, and it handles not only vanilla C++, but also a variety of vendor dialects as well. This is only possible in practice because we
are using a GLR parser, IMHO).
[EDIT November 2011: We've extended our parser to handle all of C++11. GLR made that a lot easier to do. EDIT Aug 2014: Now handling all of C++17. Nothing broke or got worse, GLR is still the cat's | {"url":"https://coderapp.vercel.app/answer/3993931","timestamp":"2024-11-06T17:32:02Z","content_type":"text/html","content_length":"106201","record_id":"<urn:uuid:98528406-f508-4238-94a6-5b2f4cabe3e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00185.warc.gz"} |
Artificial Intelligence
Artificial Intelligence: 2024-2025
Lecturer Jiarui Gan
Schedule A2(CS&P) — Computer Science and Philosophy
Schedule B1 (CS&P) — Computer Science and Philosophy
Schedule A2 — Computer Science
Schedule B1 — Computer Science
Schedule A2(M&CS) — Mathematics and Computer Science
Schedule B1(M&CS) — Mathematics and Computer Science
Term Michaelmas Term 2024 (16 lectures)
This is an introductory course into the field of artificial intelligence (AI), with particular focus on search as the fundamental technique for solving AI problems. The problem of navigating a road
map with a known layout is a typical example of a problem studied in this course. Problems such as this one can be solved by enumerating all possible sequences of moves until a solution is found.
Such a naive idea, however, is rarely applicable in practice because the size of the search space is typically vast. This course will introduce basic AI search techniques, such as depth‐first,
breadth‐first, and iterative deepening search, and it will discuss heuristic techniques such as A* search that improve efficiency by pruning the search space. This course also deals with optimization
problems. For example, the optimization version of the n‐queens problem is to arrange n queens on an n x n chessboard while minimizing the number of pairs of queens that are under attack. Such
problems can be effectively solved by search techniques introduced in the course such as hill climbing, simulated annealing, and genetic algorithms. Planning is a special kind of optimization
problem. A typical planning problem is finding a sequence of actions for delivering ten packages to ten different destinations. This course will introduce a standardized language called STRIPS for
modelling planning problems, and it will discuss how to solve planning problems using search techniques such as forward chaining, backward chaining, and partial order planning. This course will also
show how to apply these techniques to the problem of planning a robot’s path through an environment while taking into account the geometry of the environment and the robot. Constraint satisfaction
problems (CSPs) constitute another important class of AI problems, a typical example of which is the map colouring problem: colour each country on a map with red, green, or blue, but in a way so that
no two adjacent countries have the same colour. This course will introduce search techniques such as backtracking and constraint propagation that can efficiently solve many CSP problems in practice.
Developing programs for playing board games such as chess has played a central role in AI since the very inception of the field. Game playing can be conceived as yet another kind of search problem,
in which the objective is to find a winning strategy. This course will introduce the basic game‐playing techniques such as minimax search and alpha‐beta pruning. Dealing with unknown or incompletely
specified environments is a form of intelligent behaviour that is critical in many intelligent systems. For example, consider a robot in maze that has no prior knowledge about the maze layout. To
solve the problem, the robot needs to represent and update its knowledge about the maze as it moves through the maze. This course will show how to solve such problems via search in the belief space –
the space of all possible beliefs the robot may have about the maze layout.
Learning outcomes
By attending the course the students should acquire a firm grasp of various search techniques and should be able to select an appropriate search technique and apply it in practice. Since search is
such a fundamental technique in computer science, the material taught in the course is relevant in contexts other than artificial intelligence.
Familiarity with propositional logic would be beneficial but is not a requirement.
Students are expected to use Java in practicals. This course does not cover any aspects of Java programming: the students are expected to have learned Java elsewhere. No knowledge of Java or any
other programming language is required in order to pass the exam: all programming (in Java or any other language) is limited to practicals.
│1 │Introduction to Artificial Intelligence: definition of AI; Turing test; brief history of AI. │
│2 - │Problem solving and search: problem formulation; search space; states vs. nodes; tree search: breadth-first, uniform cost, depth-first, depth-limited, iterative deepening; graph search. │
│3 │ │
│4 │Informed search: greedy search; A* search; heuristic function; admissibility and consistency; deriving heuristics via problem relaxation. │
│5 │Local search: hill-climbing; simulated annealing; genetic algorithms; local search in continuous spaces. │
│6 - │Planning: the STRIPS language; forward planning; backward planning; planning heuristics; partial-order planning; planning using propositional logic; planning vs. scheduling. │
│7 │ │
│8 - │Dealing with geometry of physical agents: basic issues in robotics; degrees of freedom; Dijkstra’s shortest path algorithm; configuration spaces; Voronoi diagrams; skeletonization; potential │
│9 │field planning. │
│10 │Constraint satisfaction problems (CSPs): basic definitions; finite vs. infinite vs. continuous domains; constraint graphs; relationship with propositional satisfiability, conjunctive queries, │
│ │linear integer programming, and Diophantine equations; NP-completeness of CSP; extension to quantified constraint satisfaction (QCSP). │
│11 -│Solving CSPs: constraint satisfaction as a search problem; backtracking search; variable and value ordering heuristic; degree heuristic; least-constraining value heuristic; forward checking; │
│12 │constraint propagation; dependency-directed backtracking; independent subproblems; tree-like CSPs; acyclic CSPs; CSPs of bounded treewidth. │
│13 -│Playing games: game tree; utility function; optimal strategies; minimax algorithm; alpha-beta pruning; games with an element of chance. │
│14 │ │
│15 -│Beyond classical search: searching with nondeterministic actions; searching with partial observations; online search agents; dealing with unknown environments. │
│16 │ │
Problem solving via search. Uninformed, informed, and local search. Planning. Dealing with geometry of physical agents. Constraint satisfaction. Adversarial search. Searching with nondeterministic
actions and partial observations.
Reading list
S.J. Russell and P. Norvig.
Artificial Intelligence: A Modern Approach (3rd edition)
, Prentice-Hall, 2010.
Related research
Themes Artificial Intelligence and Machine Learning
Taking our courses
This form is not to be used by students studying for a degree in the Department of Computer Science, or for Visiting Students who are registered for Computer Science courses
Other matriculated University of Oxford students who are interested in taking this, or other, courses in the Department of Computer Science, must complete this online form by 17.00 on Friday of 0th
week of term in which the course is taught. Late requests, and requests sent by email, will not be considered. All requests must be approved by the relevant Computer Science departmental committee
and can only be submitted using this form. | {"url":"http://www.cs.ox.ac.uk/teaching/courses/2024-2025/ai/","timestamp":"2024-11-13T10:02:43Z","content_type":"text/html","content_length":"37105","record_id":"<urn:uuid:f44230cf-af6d-46c6-9728-25650f1fe52b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00265.warc.gz"} |
Eureka Math Grade 6 Module 2 End of Module Assessment Answer Key
Engage NY Eureka Math 6th Grade Module 2 End of Module Assessment Answer Key
Eureka Math Grade 6 Module 2 End of Module Assessment Answer Key
Question 1.
LB. Johnson Middle School held a track and field event during the school year. The chess club sold various drink and snack items for the participants and the audience. Altogether, they sold 486 items
that totaled $2,673.
a. If the chess club sold each item for the same price, calculate the price of each item.
Each item’s price is $5.50
b. Explain the value of each digit in your answer to 1(a) using place value terms.
Question 2.
The long-jump pit was recently rebuilt to make it level with the runway. Volunteers provided pieces of
wood to frame the pit. Each piece of wood provided measures 6 feet, which is approximately 1.8287
a. Determine the amount of wood, in meters, needed to rebuild the frame.
b. How many boards did the volunteers supply? Round your calculations to the nearest hundredth, and then provide the whole number of boards supplied.
Question 3.
Andy runs 436.8 meters in 62.08 seconds.
a. If Andy runs at a constant speed, how far does he run in one second? Give your answer to the nearest tenth of a second.
b. Use place value, multiplication with powers of 10, or equivalent fractions to explain what is happening mathematically to the decimal points in the divisor and dividend before dividing.
When you write the problem as a fraction, multiply the numerator and denominator by 100. Multiplying each by 100 resulted in both numbers being whole numbers.
436.8 ÷ 62.08 is the same as 43,680 ÷ 6,208.
c. In the following expression, place a decimal point in the divisor and the dividend to create a new problem with the same answer as in 3(a). Then, explain how you know the answer will be the same.
43.68 ÷ 6.208
Multiplying or dividing the dividend and divisor by the same power of ten yields the same quotient.
Question 4.
The PTA created a cross-country trail for the meet.
a. The PTA placed a trail marker in the ground every four hundred yards. Every nine hundred yards, the PTA set up a water station. What is the shortest distance a runner will have to run to see both
a water station and trail marker at the same location?
LCM 2 . 2 . 3 . 3 = 36 hundred
36 hundred yards
b. There are 1,760 yards in one mile. About how many miles will a runner have to run before seeing both a water station and trail marker at the same location? Calculate the answer to the nearest
hundredth of a mile.
c. The PTA wants to cover the wet areas of the trail with wood chips. They find that one bag of wood
chips covers a 3\(\frac{1}{2}\)-yard section of the trail. If there is a wet section of the trail that is approximately 50\(\frac{1}{4}\) yards long, how many bags of wood chips are needed to cover
the wet section of the trail?
Question 5.
The Art Club wants to paint a rectangle-shaped mural to celebrate the winners of the track and field meet. They design a checkerboard background for the mural where they will write the winners’
names. The rectangle measures 432 inches in length and 360 inches in width. Apply Euclid’s algorithm to determine the side length of the largest square they can use to fill the checkerboard pattern
completely without overlap or gaps.
Leave a Comment | {"url":"https://bigideasmathanswers.com/eureka-math-grade-6-module-2-end-of-module-assessment-answer-key/","timestamp":"2024-11-01T20:45:00Z","content_type":"text/html","content_length":"142548","record_id":"<urn:uuid:648f5ff8-da11-4abe-b6e0-39a42a1ffca4>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00412.warc.gz"} |
Quantile Regression with Telematics Information to Assess the Risk of Driving above the Posted Speed Limit
Department Econometria, Riskcenter-IREA, Universitat de Barcelona, Av. Diagonal, 690, 08034 Barcelona, Spain
Department Matemàtica Econòmica, Financera i Actuarial, Universitat de Barcelona, Av. Diagonal, 690, 08034 Barcelona, Spain
Author to whom correspondence should be addressed.
Submission received: 7 June 2019 / Revised: 2 July 2019 / Accepted: 12 July 2019 / Published: 15 July 2019
We analyzed real telematics information for a sample of drivers with usage-based insurance policies. We examined the statistical distribution of distance driven above the posted speed limit—which
presents a strong positive asymmetry—using quantile regression models. We found that, at different percentile levels, the distance driven at speeds above the posted limit depends on total distance
driven and, more generally, on factors such as the percentage of urban and nighttime driving and on the driver’s gender. However, the impact of these covariates differs according to the percentile
level. We stress the importance of understanding telematics information, which should not be limited to simply characterizing average drivers, but can be useful for signaling dangerous driving by
predicting quantiles associated with specific driver characteristics. We conclude that the risk of driving for long distances above the speed limit is heterogeneous and, moreover, we show that
prevention campaigns should target primarily male non-urban drivers, especially if they present a high percentage of nighttime driving.
1. Objective
Every kilometer driven above the posted speed limit increases the risk of accident. This is the hazard to which the driver, the passengers in the vehicle, and those in vehicles on the same stretch of
road expose themselves. The main objective of this paper is to analyze, in a real case telematics data set, the distribution of the distance traveled at speeds above posted limits and to show that it
is dependent on the total distance driven and other factors, which include the percentages of urban and nighttime driving and the driver’s gender. If we only model the mathematical expectation, i.e.,
the average distance driven at speeds above the posted limits, significant relationships are likely to be found with a number of telematics covariates. However, here, we consider quantile regression
to determine whether the impact of certain factors might differ depending on the percentile being analyzed.
When quantile regression slopes differ depending on the level, the risk of driving above the posted speed limit is not homogeneous across all drivers, begging the question as to how this risk might
be predicted or measured. Thus, in this paper, we also seek to show how specific driver characteristics can help predict a driver’s expected ranking; that is, not in relation to the whole population,
but to similar drivers.
The rest of this paper is organized as follows. In
Section 2
, we present the background to this study. In
Section 3
, the theory of quantile regression modelling and the data set used in this study are presented. In
Section 4
, the results are discussed and, finally, in
Section 5
, we outline the conclusions that can be drawn.
2. Background
There is much evidence pointing to the relationship between elevated vehicle speeds and the risk of collision (see
Ossiander and Cummings 2002
Vernon et al. 2004
, among others) in the literature. Likewise, the effectiveness of speed cameras in the reduction of road traffic collisions and related casualties has been extensively demonstrated (see
Pilkington and Kinra 2005
Wilson et al. 2006
, among others), which would seem to confirm that high speeds increase the risk of collision. Speeding, moreover, has been shown to be directly related to the severity of accidents (see, among
Dissanayake and Lu 2002
Jun et al. 2007
), while
Yu and Abdel-Aty
) report that marked variations in speed prior to a crash increase the likelihood of severe accidents.
Not all drivers present the same tendency to exceed the posted speed limit. More specifically, evidence of gender differences in driving patterns has been reported in many articles (see
Ayuso et al. 2014
). It has been shown that, compared to women, men present riskier driving behavior, driving more kilometers per day, during the night, and at speeds above the limit. All these factors have been shown
to be related to a greater number of accidents (
Gao et al. 2019a
Gao and Wüthrich 2019
Guillen et al. 2019
). For example,
Paefgen et al.
) found that the risk of accident is higher at nightfall, during the weekends on urban roads, and at low-range (0–30 km/h) or high-range speeds (90–120 km/h).
Speed control has recently come under investigation in connection with advanced driver assistance systems (ADAS) and semi-autonomous vehicles.
Pérez-Marín and Guillen
), for example, analyzed the contribution of telematics information and usage-based insurance (UBI) research in identifying the effect of driving patterns—above all, speeding—on the risk of accident.
The authors used a predictive model of the number of claims in a portfolio of insureds as their starting point for addressing risk quantification in relation to vehicles exceeding the speed limit.
They concluded that if excess speeds could be eliminated, the expected number of accident claims could be reduced by half, in the average conditions prevailing in their real UBI dataset.
Pérez-Marín et al.
) show that young drivers tend to reduce posted speed limit violations after an accident.
It has also been demonstrated that both the mean speed and the coefficient of variation of speed are relevant risk factors (
Taylor et al. 2002
). Moreover, interest has been expressed in the percentile assessment of the speed distribution, as opposed to just the mean. In this regard,
) claims that controlling the 85th percentile speed is common when designing road safety interventions. The same author also examined the role of quantile regression for modelling this percentile and
specifically demonstrated its potential benefits when evaluating whether or not an intervention is able to significantly modify the 85th percentile speed.
) based his analysis on a data set of observations on approximately 100 vehicle speeds at each of 14 pairs of sites recorded before, right after, and some time after the intervention (the
installation of warning signs, in this instance). However, here, we apply quantile regression to an analysis of the effects of telematics information on a range of percentiles of the distance
travelled at speeds above the limit, rather than to the speed measured at one specific moment in time.
We should stress that the objective of our paper is not the same as
’s (
), inasmuch as we do not seek to evaluate a particular safety intervention. Our aim is to understand conditional quantiles of distance traveled, possibly at different moments, rather than an instant
speed measurement. To do so, our analysis was based on real telematics information from a sample of drivers covered by a UBI policy. This means that, in addition to speed, we analyzed other
telematics variables, such as the location and time of driving and the total distance travelled by each driver in the sample.
3. Methods
3.1. Quantile Regression
Our quantile regression model follows the same notation as that used in
Koenker and Hallock
). Thus, in the classical multiple linear regression model, the response
is modeled as follows:
$x i = ( 1 , x i 1 , … , x i p ) ,$
in which
is the number of explanatory variables,
is the vector of coefficients such that
$β = ( β 0 , β 1 , … , β p )$
, and
is the random term with distribution
). When we model the conditional mean response, the Gaussian likelihood function is given by the following:
$L ( β ) ∝ e x p { − 1 2 σ 2 ∑ i = 1 n ( y i − x i T β ) 2 } .$
The least squares estimation of β is obtained by maximizing L(β) over β. Fitting a regression model to an asymmetric dependent variable, that is also conditionally asymmetric on the explanatory
variables, is fine if one is interested in the mean, but the point here is to analyze asymmetry.
As we aim to estimate a conditional quantile function 100
%, rather than a conditional mean, we need to use a quantile regression model (see
Koenker and Hallock 2001
Yu et al. 2003
Hewson 2008
, among others). The objective function to be minimized in this case equals the following:
$L α ( β ) = ∑ i = 1 n ρ α ( y i − x i T β ) { } ,$
where the expression contains an asymmetric loss function,
$ρ α$
. To explain just what this asymmetric loss function is, we need to introduce some notation. We consider that the 100α% quantile of the residual ϵ is the 100
% largest value (that is, it has 100
% of values smaller than it and 100(1 −
)% of values larger than it). Quantile regression, therefore, involves finding estimates
$β ^$
, where 100α% of the residuals are below zero and 100(1 −
)% are above zero. We use an indicator function,
$I A$
, on the set A, as follows:
$I A ( δ ) = { 1 δ ∈ A 0 δ ∉ A .$
The loss function
$ρ α$
can then be defined as follows:
$ρ α ( δ ) = α δ I δ ≥ 0 ( δ ) − ( 1 − α ) δ I δ < 0 ( δ ) ,$
for any value of α between 0 and 1. Finding the values of
$β ^$
that maximize the likelihood of the quantile regression model is the same as finding the values of
$β ^$
that minimize this loss function. The objective Function (1) can be minimized by using linear programming techniques. As noted by
Koenker and Hallock
), for minimizing a sum of asymmetrically weighted absolute residuals, simply giving differing weights to positive and negative residuals would yield the quantiles. The function qr of the quantreg R
package (
Koenker et al. 2018
) can be used to fit a quantile regression model.
Koenker and Machado
) proposed a goodness-of-fit criterion for quantile regression analogous to the R
statistic in linear regression, which we have also implemented here. The criterion is calculated as 1 −
$L α ( β ) ^ / L α ( β ) ˜ ,$
$L α ( β ) ^$
is the value of objective Function (1) where all covariates are included in the model specification (unrestricted model), whereas
$L α ( β ) ˜$
is the value of the objective function when only an intercept is considered (restricted model).
3.2. The Data
The data set comprises a sample of 9614 drivers with UBI coverage, which targets drivers between the ages of 18 and 35, for the whole of 2010. The variables are presented in
Table 1
. Age is the age of the driver at the beginning of 2010. We also have information on gender (Gender), total number of kilometers (km) driven during 2010 (km), and its natural logarithm (Lnkm). We
also have information on the number of kilometers driven at speeds above the posted limit (Tolerkm, which is the dependent variable), the percentage of kilometers driven on urban roads (Pkdr_vurba)
and, finally, percentage of kilometers driven at night (Pkdr_nocturn). All the drivers had UBI coverage throughout the whole of 2010 and all the telematics variables refer to this year. Note that we
considered the natural logarithm of km, Lnkm, as it has been shown that distance travelled has a nonlinear effect on the risk of an accident (see
Boucher et al. 2013
). In the
Appendix A
, we also present the results of the model and the examples where we have removed Lnkm and, instead, we introduce km and km squared.
The gender distribution of the sample is 49% women and 51% men.
Table 2
shows that the average age of drivers in the sample is 24.78 years. The average number of kilometers travelled during the year was 13,063.71 (standard deviation of 7715.80). We also observed that, on
average, drivers travel 26.29% of kilometers on urban roads and 7.02% of kilometers at night. The mean kilometers travelled at speeds above the limit (Tolerkm, dependent variable) is 1,398.20, while
its median is 689.20. Tolerkm has positive asymmetry (skewness coefficient equals 3.64); the distribution has a long tail, as can be observed in
Figure 1
. The rest of the variables also present some degree of skewness, but not as high as Tolerkm.
4. Results
We fitted a multiple linear regression model to the variable Tolerkm, although we consider it unsuitable insofar as the dependent variable is highly asymmetric. The variable km was included in the
model as its natural logarithm (variable Lnkm), as it produced a better fit. Parameter estimates are shown in
Table 3
. The R-squared goodness-of-fit statistic equals 0.26.
All the explanatory variables have a significant effect except for Age, which is attributable to the fact that UBI policies were sold primarily to young drivers and, so, the age range in the sample
is not wide. Note that most of drivers (see
Table 1
) are under 25 years of age, so we may either have not too many older drivers or, really, this factor may have no effect. Lnkm and Pkdr_nocturn present positive parameter estimates, indicating that
increases in the total number of kilometers driven and in the percentage of km driven at night contribute to increasing the expected number of kilometers driven at speeds above the posted limits.
Pkdr_vurba, in contrast, has the opposite effect, the higher the percentage of kilometers driven on urban roads, the lower the expected number of kilometers driven at speeds above the posted limit.
Finally, gender (indicating males) has a positive parameter estimate, meaning that, on average, men drive more kilometers at speeds above the posted limit than women.
To fulfil the objectives identified in the first section and, at the same time, to address the strong positive asymmetry, a grid of quantile regressions with different percentiles were fitted to the
data. The results of the quantile regression models are presented in
Table 4
. Each column shows the parameter estimates of the quantile regression at the following percentiles: 50th, 75th, 90th, 95th, 97.5th, and 99th. In general, significant parameter estimates are the same
as those found in the multiple linear regression model shown in
Table 3
. However, the results in
Table 4
show that the covariates have different marginal effects on conditional quantiles, depending on the estimated percentile. These changes in the parameters, depending on the quantile level at which the
model is specified, are clearly illustrated in
Figure 2
and are discussed in detail below.
Table 4
shows that the percentage of kilometers driven at night presents a highly significant effect when we estimate the 50th percentile and that it remains significant—at the 5% level—but with a larger
-value, when we estimate the 75th, 90th, and 95th percentiles. Likewise, the effect of gender is positive and significant at the 5% significance level for all quantiles, except for the 99th
percentile. In the case of the 99th percentile, only Lnkm and Pkdr_vurba present a significant effect, while the rest of the parameters are no longer significant at the 5% level, including the model
intercept. The lack of significance may be explained by the wider confidence intervals at a 5% level of significance, observed in
Figure 2
for the 99th percentile.
Table 4
also shows the values of the goodness-of-fit criterion and we observe that the contribution to explain the quantiles of the model with covariates with respect to the model without covariates is
higher for extreme percentiles.
Table 4
Figure 2
also show that the magnitude of the marginal effects of variables with significant parameters in the models differs depending on the level of the estimated quantile. Specifically, the marginal effect
of Lnkm increases as the level of the estimated quantile increases (being equal to 597.6 and 1180.2 for the 50th and 99th percentiles, respectively). The same pattern, albeit less pronounced, is
observed for the marginal effect of Pkdr_nocturn, which increases as the level of the estimated quantile increases (being equal to 5.41 and 37.49 for the 50th and 95th percentiles, respectively). In
the case of Pkdr_vurba, the marginal effect is always negative, but in absolute terms it increases with the level of the estimated quantile (being equal to −9.19 and −87.12 for the 50th and 99th
percentiles, respectively). Finally, the marginal effect of gender is always positive and increases with the level of the estimated quantile (being equal to 206.76 and 1070.06 for the 50th and
97.5th, respectively).
It is interesting to compare the results of the quantile regression for the 75th and 95th percentiles. Thus, the model intercept is quite similar in both models. A comparison of the marginal effect
of Lnkm shows that a one-unit increase in Lnkm (equivalent to multiplying km by 2.718), increases the 75th percentile of the number of kilometers driven at speeds above the posted limit by 892.80 km,
while the 95th percentile increases by 1094.57 km, ceteris paribus. In the case of Pkdr_vurba, increasing the percentage of kilometers driven in urban areas by one percentage unit reduces the 75th
percentile of the number of kilometers driven at speeds above the posted limit by 22.26 km and by 53.44 km at the 95th percentile, ceteris paribus. On the other hand, being a man increases the 75th
percentile of the number of kilometers driven at speeds above the posted limit by 377.94 km and by 755.87 km at the 95th percentile, ceteris paribus. Finally, increasing the percentage of kilometers
driven at night by one percentage unit increases the 75th percentile of the number of kilometers driven at speeds above the limit by 6.71 km and by 37.49 km at the 95th percentile, ceteris paribus.
Table 5
illustrates how the model can be implemented for predictive purposes. Let us consider three drivers with different characteristics, each of whom has driven exactly 600 km above the posted speed
limit. Compared to the general population, and without conditioning on specific characteristics, these three drivers present a distance driven at excess speeds below the median (689.20 km) and, as
such, can be considered relatively safe drivers. However, the key is to calculate the percentile risk level of the response variable given the specific characteristics of each driver. Indeed, it
seems obvious that a distance of 600 km driven above the posted speed limit does not denote the same level of risk for an urban driver (who probably does a lot of driving in congested areas), as it
does for a driver who drives largely outside the city limits. Most notably, the risk depends on the total distance driven. If we use the grid of different percentiles (
Table 4
) to make our predictions, it can be seen that, for a distance of 600 km driven above the speed limit, driver 1 lies at the 50th percentile, indicative of median risk. In contrast, driver 2 lies at
the 75th percentile and, so, has a higher risk score when taking his driving characteristics into account. Finally, driver 3 lies at the 90th percentile, indicative of a very high risk.
5. Conclusions
We have shown that the distribution of the distance driven above the posted speed limit is not homogeneous with respect to certain driver characteristics. As such, quantile regression is an
interesting tool for analyzing risk when telematics information is available. On the assumption that quantiles of distance driven above the speed limit represent a valuable risk measure, our model
allows us to identify the factors associated with higher quantile values and, therefore, with risky drivers. This information is valuable in terms of providing preventive early warnings.
We also find that the impact of each additional kilometer driven is much greater in higher quantiles than in lower quantiles. Note that we specify a log-linear relationship between total distance
driven and distance driven above the posted speed limits, which means there is a decreasing marginal effect on the latter as total distance increases.
One limitation of our analysis is that the degree to which drivers exceeded the posted limit was not recorded by the telematics equipment. Thus, we are unable to examine the magnitude of the speed
We believe that UBI will soon develop into a scheme that can improve aspects of both service and protection in the sector. As insurance services are reinvented, risk scores and the identification of
potential niches of drivers with risky patterns provide new ways of keeping drivers better informed and for promoting safe driving. Models such as those presented in this paper should enable insurers
to design predictive models of driver risk and fix personalized indicators. In the application presented here, it could be argued that excess speed is the only feature a driver can modify, given that
all other factors, including age, gender, total distance driven, and percentages of nighttime and urban driving, are dictated by external circumstances, such as distance from home to work place and
by personal or professional obligations. This means the quantile regression model would predict the total distance driven above the posted speed limit percentile, given that particular set of
external circumstances and, thus, it would allow the percentile risk score of the driver to be calculated by controlling for those circumstances and not for the whole population of drivers.
Estimating a driver’s rank with regard to distance driven above the posted speed limit is personalized information that should constitute interesting feedback for policy holders. Indeed, safety
measures and even telematics-based insurance should segment the population of drivers accordingly.
Guillen et al.
) confirm that speed limit violations and driving in urban areas increase the expected number of accident claims.
Gao et al.
) analyzed the driving characteristics at different speeds and their predictive power for claims frequency modeling. Given that speed is the primary cause of severe accidents, these results should
translate into lower insurance premiums for those who present a lower risk. In other words, if quantile-based behavior is considered rather than mathematical expectations of accident severity, the
calculation of the premium to be paid should be improved. However, we leave questions as to how this rank might be converted into an insurance price and how information of a driver’s behavior might
impact careful driving for further research.
Author Contributions
Conceptualization, M.A. and L.B.; methodology, M.G. and A.M.P.-M.; software, A.M.P.-M. and M.A.; validation, M.A.; formal analysis, A.M.P.-M.; investigation, M.G.; resources, M.G.; data curation,
L.B.; writing—original draft preparation, A.M.P.-M. and L.B.; writing—review and editing, L.B.; visualization, A.M.P.-M.; supervision, M.G.; project administration, M.G.; funding acquisition, M.G.
Support from the Spanish Ministry and ERDF grant ECO2016-76203-C2-2-P is gratefully acknowledged. MG gratefully acknowledges financial support from ICREA under the ICREA Academia programme. The
authors thank Fundación BBVA grants to Scientific Research Teams in Big Data.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Table A1. Parameter estimates of the linear regression model. In the model, Tolerkm/1000 is the dependent variable and km_1000 (km/1000) and km_1000^2 are introduced in the model instead of lnkm as
independent variables.
Parameter Estimate
Intercept 0.6397
Km_1000 0.0292
Km_1000^2 0.0035
Pkdr_vurba −0.0137
Pkdr_nocturn 0.0018
Age −0.0079
Gender 0.2295
R^2 43.20%
Table A2. Parameter estimates of the quantile regression model for different percentiles. In that case, Tolerkm/1000 is the dependent variable and km_1000 (km/1000) and km_1000^2 are introduced in
the model, instead of lnkm, as independent variables.
50th Percentile 75th Percentile 90th Percentile 95th Percentile 97.5th Percentile 99th Percentile
(p-Value) (p-Value) (p-Value) (p-Value) (p-Value) (p-Value)
Intercept 0.1812 0.3845 0.3681 0.4940 0.1147 0.8439
(0.0003) (<0.0001) (0.0805) (0.0643) (0.7889) (0.1271)
Km_1000 0.0113 0.0257 0.0595 0.0839 0.0887 0.0632
(0.0399) (0.0010) (0.0001) (<0.0001) (0.0084) (0.0243)
Km_1000^2 0.0035 0.0056 0.0079 0.0087 0.0107 0.0138
(<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001)
Pkdr_vurba −0.0031 −0.0082 −0.0136 −0.0177 −0.0200 −0.0248
(<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001)
Pkdr_nocturn 0.0023 0.0010 −0.0022 0.0028 0.0055 0.0095
(0.0164) (0.4777) (0.6037) (0.6140) (0.5539) (0.4052)
Age −0.0027 −0.0001 0.0143 0.0216 0.0480 0.0416
(0.1316) (0.9749) (0.0623) (0.0266) (0.0019) (0.0725)
Gender 0.1132 0.1734 0.1975 0.1510 0.1428 0.2360
(<0.0001) (<0.0001) (<0.0001) (0.0082) (0.1198) (0.1646)
Goodness-of-fit criterion 23.62% 33.45% 43.70% 49.62% 54.10% 59.67%
Table A3. Estimates of the conditional percentiles for drivers with different characteristics, each of whom has driven 600 km above the posted speed limit. The models used in the calculations
consider Tolerkm/1000 as the dependent variable and km_1000 (km/1000) and km_1000^2 are introduced in the model, instead of lnkm, as independent variables.
Driver 1 Driver 2 Driver 3
Km 12,000 8000 5500
Pkdr_vurba 80 75 80
Pkdr_noctur 14 11 10.5
Age 25 25 25
Gender 1 1 1
Estimated conditional percentile ^1 45th 78th 96th
^1 The estimated conditional percentile is found by locating the quantile level that produces a response equal to 600 km, given the exogenous characteristics (total kilometers driven, percent urban
driving, percent nighttime driving, age and gender) in the three example columns.
Figure 2. Parameter estimates at different levels of the quantile. Confidence intervals at a 5% level of significance. The horizontal red line represents the corresponding parameter estimate in a
classical linear regression model. (a) Intercept; (b) lnkm; (c) pkdr_vurba; (d) pkdr_nocturn; (e) age; and (f) gender.
Variable Description
Tolerkm Number of kilometers driven at speeds above the posted limit during 2010.
Km Total number of kilometers driven during 2010.
Lnkm Logarithm of the total number of kilometers driven during 2010.
Pkdr_vurba % of kilometers driven on urban roads during 2010.
Pkdr_nocturn % of kilometers driven at night (between midnight and 6 am.) during 2010.
Age Age of the driver at the beginning of 2010.
Gender 1 = Male, 0 = Female
Variable Min 1st Qu Median Mean 3rd Qu Max St. Dev. Skewness
Tolerkm 0.00 282.40 689.20 1398.20 1701.60 23,500.20 1995.37 3.64
Km 0.69 7530.56 11,697.82 13,063.71 17,337.00 57,756.98 7715.80 1.08
Lnkm −0.37 8.93 9.37 9.27 9.76 10.96 0.75 −1.87
Pkdr_vurba 0.00 15.60 23.39 26.29 34.32 100.00 14.18 1.03
Pkdr_nocturn 0.00 2.48 5.31 7.02 9.84 78.56 6.13 1.67
Age 18.11 22.66 24.63 24.78 26.88 35.00 2.82 0.11
Parameter Estimate (p-Value)
Intercept −8082.506
Lnkm 1064.506
Pkdr_vurba −21.868
Pkdr_nocturn 7.536
Age −1.131
Gender 328.009
R^2 25.96%
50th Percentile 75th Percentile 90th Percentile 95th Percentile 97.5th Percentile 99th Percentile
(p-Value) (p-Value) (p-Value) (p-Value) (p-Value) (p-Value)
Intercept −4496.53 −6250.34 −6418.11 −6009.63 −5137.24 −2451.17
(<0.0001) (<0.0001) (<0.0001) (<0.001) (<0.0001) 0.5780
Lnkm 597.60 892.80 1074.66 1094.57 1119.94 1180.21
(<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.001)
Pkdr_vurba −9.19 −22.26 −39.59 −53.44 −68.58 −87.12
(<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001)
Pkdr_nocturn 5.41 6.71 21.76 37.49 20.01 43.86
(<0.0001) (0.0363) (0.0226) (0.0086) (0.4266) (0.4014)
Age −2.56 1.84 5.16 40.29 71.28 36.87
(0.1632) (0.7298) (0.7419) (0.2086) (0.1094) (0.7009)
Gender 206.76 377.94 574.08 755.87 1070.06 1091.38
(<0.0001) (<0.0001) (<0.0001) (<0.0001) (<0.0001) (0.0624)
Goodness-of-fit criterion 14.19% 18.26% 20.23% 20.27% 20.56% 20.06%
Table 5. Estimates of the conditional percentiles for drivers with different characteristics, each of whom has driven 600 km above the posted speed limit.
Driver 1 Driver 2 Driver 3
Km 12,000 8000 5500
Pkdr_vurba 80 75 80
Pkdr_noctur 14 11 10.5
Age 25 25 25
Gender 1 1 1
Estimated conditional percentile ^1 50th 75th 90th
^1 The estimated conditional percentile is found by locating the quantile level that produces a response equal to 600 km, given the exogenous characteristics (total kilometers driven, percent urban
driving, percent nighttime driving, age, and gender) in the three example columns.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Pérez-Marín, A.M.; Guillen, M.; Alcañiz, M.; Bermúdez, L. Quantile Regression with Telematics Information to Assess the Risk of Driving above the Posted Speed Limit. Risks 2019, 7, 80. https://
AMA Style
Pérez-Marín AM, Guillen M, Alcañiz M, Bermúdez L. Quantile Regression with Telematics Information to Assess the Risk of Driving above the Posted Speed Limit. Risks. 2019; 7(3):80. https://doi.org/
Chicago/Turabian Style
Pérez-Marín, Ana M., Montserrat Guillen, Manuela Alcañiz, and Lluís Bermúdez. 2019. "Quantile Regression with Telematics Information to Assess the Risk of Driving above the Posted Speed Limit" Risks
7, no. 3: 80. https://doi.org/10.3390/risks7030080
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2227-9091/7/3/80","timestamp":"2024-11-06T07:26:00Z","content_type":"text/html","content_length":"425904","record_id":"<urn:uuid:680dc00d-1630-4e04-a058-afb39ee57dc9>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00821.warc.gz"} |
ncl_c_ftkurvd: calculate interpolated values and derivatives for parametric curves - Linux Manuals (3)
ncl_c_ftkurvd (3) - Linux Manuals
ncl_c_ftkurvd: calculate interpolated values and derivatives for parametric curves
c_ftkurvd - calculate interpolated values and derivatives for parametric curves
int c_ftkurvd (int, float [], float [], int, float [],
float [], float [], float [], float [],
float [], float []);
int c_ftkurvd (n, xi, yi, m, t, xo, yo, xd, yd, xdd, ydd);
n The number of input data points. (n > 1)
xi An array containing the abscissae for the input function.
yi An array containing the functional values (y[k] is the functional value at x[k] for k=0,n).
m The number of desired interpolated points.
t Contains an array of values for the parameter mapping onto the interpolated curve.
xo An array containing the X values for the interpolated points. t[k] maps to (xo[k],yo[k]) for k=0,n-1.
yo An array containing the Y values for the interpolated points.
xd Contains the first derivatives of the X component with respect to t.
yd Contains the first derivatives of the Y component with respect to t.
xdd Contains the second derivatives of the X component with respect to t.
ydd Contains the second derivatives of the Y component with respect to t.
c_ftkurvd returns an error value as per:
= 0 -- no error.
= 1 -- if n is less than 2.
= 2 -- if adjacent coordinate pairs coincide.
This procedure behaves like ftkurv except that in addition it returns the first and second derivatives of the component functions in the parameterization.
Given a sequence of input points ( (x[0],y[0]), ... , (x[n-1],y[n-1]), the interpolated curve is parameterized by mapping points in the interval [0.,1.] onto the interpolated curve. The resulting
curve has a parametric representation both of whose components are splines under tension and functions of the polygonal arc length. The value 0. is mapped onto (x[0],y[0]) and the value 1. is mapped
onto (x[n-1],y[n-1]).
c_ftkurvd is called after all of the desired values for control parameters have been set using the procedures c_ftseti, c_ftsetr, c_ftsetc. Control parameters that apply to c_ftkurvd are: sig, sl1,
sln, sf1.
The value for the parameter sig specifies the tension factor. Values near zero result in a cubic spline; large values (e.g. 50) result in nearly a polygonal line. A typical value is 1. (the default).
The value for parameter sl1 is in radians and contains the slope at (x[0],y[0]). The angle is measured counter-clockwise from the X axis and the positive sense of the curve is assumed to be that
moving from point 0 to point n-1. A value for sl1 may be omitted as indicated by the switch sf1.
The value for parameter sln is in radians and contains the slope at (x[n-1],y[n-1]). The angle is measured counter-clockwise from the X axis and the positive sense of the curve is assumed to be that
moving from point 0 to point n-1. A value for sln may be omitted as indicated by the switch sf1.
The value of sf1 controls whether to use the values for sl1 and sln, or compute those values internally. Specifically, sf1
= 0 if sl1 and sln are user-specified.
= 1 if sl1 is user-specified, but sln is
internally calculated.
= 2 if sln is user-specified, but sl1 is
internally calculated.
= 3 if sl1 and sln are internally calculated.
To use c_ftkurvd, load the NCAR Graphics library ngmath.
Copyright (C) 2000
University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement. | {"url":"https://www.systutorials.com/docs/linux/man/3-ncl_c_ftkurvd/","timestamp":"2024-11-10T08:16:07Z","content_type":"text/html","content_length":"11983","record_id":"<urn:uuid:f32902a6-df9f-4bc1-bd58-cde3c8381dbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00400.warc.gz"} |
LaTeX Template for the Journal of the American Mathematical Society (JAMS)
Other (as stated in the work)
This is the template for the preparation of manuscripts for submission to the Journal of the AMS; it is pre-loaded with the necessary files, and can be opened for editing in Overleaf simply by
clicking the button above.
Once your manuscript is ready, the 'Submit to the Journal of the AMS' button in the top bar of the Overleaf editor provides a quick route to the official Journal of the AMS submission portal with the
files you need for submission. | {"url":"https://tr.overleaf.com/latex/templates/latex-template-for-the-journal-of-the-american-mathematical-society-jams/ttwgbdfwdqwx","timestamp":"2024-11-11T21:15:14Z","content_type":"text/html","content_length":"42208","record_id":"<urn:uuid:2b81163a-b84f-45d6-ad37-a35f9317bdee>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00873.warc.gz"} |
[molpro-user] ScO bond length and basis sets
Kucukbenli Emine emine.kucukbenli at epfl.ch
Thu May 15 15:36:41 BST 2014
Dear All,
I am not sure if this is the appropriate place for this question but I would be very happy if you could find the time to answer:
I am trying to determine the bond length of Scandium oxide dimer at its ground state. The ground state is known to be a sigma state with 1 unpaired electron. The experimental bond length is
1.668 Angstrom.
There is a very nice paper on the whole 3d transition metal oxide dimers by Bauschlicher and Maitre (http://link.springer.com/article/10.1007%2FBF01113847) that reports the bond length with UCCSD(T) as
1.680 Angstrom.
I perform UCCSD(T) calculations as well, and as they did, I also correlate 3s and 3p electrons using the cc-pwcvXz kind of basis sets and using the "core" directive.
That paper also suggest that at the MCPF level of theory, relativistic effects are very small. To account for them anyways, I use basis sets that are like cc-pwcvXz-DK (but I do not add any other directive to the input file - is it necessary?)
Considering it is an ionic environment for oxygen, for it I use the same basis set but augmented with diffuse functions, such as (aug-cc-pwcvXz-DK). In Bauschlicher work the diffuse g function is deleted but I don't know why that is done or how, so I simply used the aug- basis set as is.
My resulting input files look like the following:
r=1.66 ang
{rhf; wf,29,1,1}
{uccsd(t); occ, 9,3,3,0; closed, 8,3,3,0; core, 4,1,1,0;}
{optg, GRADIENT=1d-5, ENERGY=1d-6}
The geometry optimization yields the following results:
basis set vs bond length in angstrom
TZ : 1.657 A #cc-pwcvTz-DK/aug-cc-pwcvTz-DK
QZ : 1.659 A #cc-pwcvQz-DK/aug-cc-pwcvQz-DK
Before going and doing the time consuming 5Zeta basis set calculation I wanted to understand whether I am doing something wrong here, as my results are far from the previous calculations of Bauschlicher and Maitre.
So I tried to following to see the convergence behaviour of the basis set:
{uccsd(t); occ, 9,3,3,0; closed, 8,3,3,0; core, 6,2,2,0;}
i.e. using the same basis set but not correlating the 3s and 3p; for which the calculations are cheaper.
This is what I get in this case:
basis set vs Bond length in Ang.
Tz: 1.680 #cc-pwcvTz-DK/aug-cc-pwcvTz-DK
Qz: 1.689 #cc-pwcvQz-DK/aug-cc-pwcvQz-DK
5z: 1.678 #cc-pwcv5z-DK/aug-cc-pwcv5z-DK
Can this even be concluded to be a convergent behaviour?
Being quite inexperienced in both the theory and the code I cannot disentangle whether it is a problem on the theoretical level or my usage of the code.
If you could point to what I am doing wrong here I would be very happy!
Many thanks in advance,
Emine Kucukbenli,
Institute of Materials, EPFL, Switzerland
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://www.molpro.net/pipermail/molpro-user/attachments/20140515/68601b60/attachment.html>
More information about the Molpro-user mailing list | {"url":"https://www.molpro.net/pipermail/molpro-user/2014-May/005999.html","timestamp":"2024-11-11T11:18:39Z","content_type":"text/html","content_length":"5850","record_id":"<urn:uuid:288e4c88-881c-4dee-8924-6330a992c0c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00731.warc.gz"} |
Autonomous Integral Functionals with Discontinuous Nonconvex Integrands: Lipschitz Regularity of Minimizers, DuBois-Reymond Necessary Conditions, and Hamilton-Jacobi Equations
Published Paper
Inserted: 26 jul 2002
Last Updated: 16 jan 2004
Journal: Applied Math. Optim.
Volume: 48
Pages: 39-66
Year: 2003
This paper is devoted to the autonomous Lagrange problem of the calculus of variations with a discontinuous Lagrangian. We prove that every minimizer is Lipschitz continuous if the Lagrangian is
coercive and locally bounded. The main difference with respect to the previous works in the literature is that we do not assume that the Lagrangian is convex in the velocity. We also show that, under
some additional assumptions, the DuBois-Reymond necessary condition still holds in the discontinuous case. Finally, we apply these results to deduce that the value function of the Bolza problem is
locally Lipschitz and satisfies (in a generalized sense) a Hamilton-Jacobi equation. | {"url":"https://cvgmt.sns.it/paper/515/","timestamp":"2024-11-09T20:56:39Z","content_type":"text/html","content_length":"8827","record_id":"<urn:uuid:3abacb31-b092-44b8-8d8f-94fad2f6b024>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00256.warc.gz"} |
2024 WASSCE Elective Mathematics Topics, Structure and More
The 2024 WASSCE Elective Mathematics paper will be written on Tuesday, 3rd September, 2024. Let’s take a look at the topics for the exams.
These topics were selected based on previous years’ questions. Also, some of these topics were selected from the WAEC and GES syllabus. It should be noted that WAEC will set questions around the
topics listed below. All candidates are advised to go through the topics very well and study other topics that may not be listed below.
1. Correlation & Regression
2. Binary operations
3. Functions & Partial fraction
4. Binomial expansion
5. Logarithms/Indices
6. Polynomial
7. Basic calculus
8. Circle theorem
9. Matrix
10. Sequence and Series
11. Statistics (Probability, combination permutation)
12. Vectors & Mechanism
13. Kinematics
14. Calculus (differentiation & integrate)
NB: Candidates are advised to learn other topics that may not be listed in this article.
2024 WASSCE Elective Mathematics Structure
Candidates will spend 2 hours and 30 minutes on the essay type questions and spend 1 hour, 30 minutes on the objective test paper. The paper will begin at exactly 8:30 am.
This examination subject has been grouped into two papers namely; Paper 1 and Paper 2.
PAPER 1
This paper will include forty multiple-choice objective questions covering the whole Elective Mathematics curriculum. Candidates will have one hour to answer all questions for a total of 40 marks.
The questions will be drawn from the following sections of the syllabus:
• 30 questions on pure mathematics
• 4 questions about statistics and probability
• 6 questions about vectors and mechanics
PAPER 2
Will consist of two sections, A and B, to be completed in two hours for a total of 100 marks.
Section A will consist of eight compulsory elementary-level questions worth 48 marks. The following questions will be distributed:
• 4 questions on pure mathematics
• 2 questions about statistics and probability
• 2 questions on vectors and mechanics
Section B will consist of seven challenging problems divided into three sections: Parts I, II, and III are listed below:
• Part I: Pure Mathematics consists – 3 questions.
• Part II: Statistics and Probability – 2 questions
• Part III: Vectors and Mechanics – 2 questions
READ ALSO: 2024 GES Recruitment Deadline For Colleges Of Education Graduates
Try Your Hands On These Questions
1. a. A point P divides the straight line joining X(1, -2) and Y(5, 3) internally in a ratio 2:3. Find the
(a) coordinates of P.
(b) equation of the straight line that passes through N(3, -5) and P.
b. Simplify: 1−25−3−1+25+3
2. Without using mathematical tables or calculator, find, in surd form (radicals), the value of tan22.5∘.
3. The ages, (in years), of a group of 18 adults have the following statistics, ∑x²=745 and ∑x²=33951.
(a) Calculate the:
(i) mean age,
(ii) standard deviation of the ages of the adults, correct to two decimal places.
(b) One person leaves teh group and the mean age of the remaining 17 is 41 years.
Find the
(i) age of the person who left;
(ii) standard deviation of the remaining 17 adults, correct to two decimal places
4. a. Solve 3sin^2teetha + 2costeetha = 2.
b. Show that 3sinx +sin2x/ 1+3cosx + cos2x = tanx
5. a. Find from the first principle, the derivative of f(x)= (x+3)^2
b. Given that x^3 y – 4x +3y = 12. find dy/dx at (-3,0)
c. If 1/2(x^2+ y^2) = bxy. find where b is a constant, find dy/dx
6. a. Find the equation of the tangent of the curve y=4x(x^2 – 12) at its maximum point.
b. The radius of a circle is 12cm. Find, leaving to the answer in terms of (pi), the rate at which the area is increasing when the radius is increasing at the rate of 0.2cms^-1.
7. a. The position vectors of A and B are a=3i-j and b=2i+3j respectively. Find, correct to 2 significant figures, |4a-2b|
b. Given that m=3i+5j, n=2i-j and r=5i+17j, find |4m-3n-r|.
Related Stories | {"url":"https://ghanaeducationnews.org/2024-wassce-elective-mathematics-topics-structure-and-more/","timestamp":"2024-11-03T00:06:03Z","content_type":"text/html","content_length":"72727","record_id":"<urn:uuid:f7ef4fe7-fd60-4a75-a7db-db445a6b83f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00227.warc.gz"} |
Number of scalar objective functions
• Alias: num_scalar_objectives
• Arguments: INTEGER
This keyword describes the number of scalar objective functions. It is meant to be used in conjunction with field_objectives, which describes the number of field objectives functions. The total
number of objective functions, both scalar and field, is given by objective_functions. If only scalar objective functions are specified, it is not necessary to specify the number of scalar terms
explicitly: one can simply say objective_functions = 5 and get 5 scalar objectives. However, if there are three scalar objectives and 2 field objectives, then objective_functions = 5 but
scalar_objectives = 3 and field_objectives = 2.
Objective functions are responses that are used with optimization methods in Dakota. Currently, each term in a field objective is added to the total objective function presented to the optimizer. For
example, if you have one field objective with 100 terms (e.g. a time-temperature trace with 100 time points and 100 corresponding temperature points), the 100 temperature values will be added to
create the overall objective. | {"url":"https://snl-dakota.github.io/docs/6.20.0/users/usingdakota/reference/responses-objective_functions-scalar_objectives.html","timestamp":"2024-11-02T09:21:23Z","content_type":"text/html","content_length":"15986","record_id":"<urn:uuid:04a43f43-18fb-4ddd-81fb-a3bd5cd72c80>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00828.warc.gz"} |
nextafter(3p) [posix man page]
nextafter(3p) [posix man page]
NEXTAFTER(3P) POSIX Programmer's Manual NEXTAFTER(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the correspond-
ing Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
nextafter, nextafterf, nextafterl, nexttoward, nexttowardf, nexttowardl -- next representable floating-point number
#include <math.h>
double nextafter(double x, double y);
float nextafterf(float x, float y);
long double nextafterl(long double x, long double y);
double nexttoward(double x, long double y);
float nexttowardf(float x, long double y);
long double nexttowardl(long double x, long double y);
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here
and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.
The nextafter(), nextafterf(), and nextafterl() functions shall compute the next representable floating-point value following x in the
direction of y. Thus, if y is less than x, nextafter() shall return the largest representable floating-point number less than x. The
nextafter(), nextafterf(), and nextafterl() functions shall return y if x equals y.
The nexttoward(), nexttowardf(), and nexttowardl() functions shall be equivalent to the corresponding nextafter() functions, except that
the second parameter shall have type long double and the functions shall return y converted to the type of the function if x equals y.
An application wishing to check for error situations should set errno to zero and call feclearexcept(FE_ALL_EXCEPT) before calling these
functions. On return, if errno is non-zero or fetestexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW) is non-zero, an error
has occurred.
Upon successful completion, these functions shall return the next representable floating-point value following x in the direction of y.
If x==y, y (of the type x) shall be returned.
If x is finite and the correct function value would overflow, a range error shall occur and +-HUGE_VAL, +-HUGE_VALF, and +-HUGE_VALL (with
the same sign as x) shall be returned as appropriate for the return type of the function.
If x or y is NaN, a NaN shall be returned.
If x!=y and the correct function value is subnormal, zero, or underflows, a range error shall occur, and
the correct function value (if representable) or
0.0 shall be returned.
These functions shall fail if:
Range Error The correct value overflows.
If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [ERANGE]. If the integer
expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the overflow floating-point exception shall be raised.
Range Error The correct value is subnormal or underflows.
If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [ERANGE]. If the integer
expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the underflow floating-point exception shall be raised.
The following sections are informative.
On error, the expressions (math_errhandling & MATH_ERRNO) and (math_errhandling & MATH_ERREXCEPT) are independent of each other, but at
least one of them must be non-zero.
When <tgmath.h> is included, note that the return type of nextafter() depends on the generic typing deduced from both arguments, while the
return type of nexttoward() depends only on the generic typing of the first argument.
feclearexcept(), fetestexcept()
The Base Definitions volume of POSIX.1-2008, Section 4.19, Treatment of Error Conditions for Mathematical Functions, <math.h>, <tgmath.h>
Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1, 2013 Edition, Standard for Information Technol-
ogy -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 7, Copyright (C) 2013 by the Institute of
Electrical and Electronics Engineers, Inc and The Open Group. (This is POSIX.1-2008 with the 2013 Technical Corrigendum 1 applied.) In the
event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Stan-
dard is the referee document. The original Standard can be obtained online at http://www.unix.org/online.html .
Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source
files to man page format. To report such errors, see https://www.kernel.org/doc/man-pages/reporting_bugs.html .
/The Open Group 2013 NEXTAFTER(3P) | {"url":"https://www.unix.com/man-page/posix/3p/nextafter","timestamp":"2024-11-09T10:08:58Z","content_type":"text/html","content_length":"35260","record_id":"<urn:uuid:1b4d832a-6be1-450f-bdde-8b9904dd18ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00223.warc.gz"} |
Part B Population Biomass And Energy Numbers Worksheet Answers
Part B Population Biomass And Energy Numbers Worksheet Answers serve as foundational tools in the realm of mathematics, supplying a structured yet functional platform for students to check out and
understand mathematical concepts. These worksheets provide an organized approach to understanding numbers, nurturing a strong structure whereupon mathematical proficiency flourishes. From the most
basic counting exercises to the details of advanced estimations, Part B Population Biomass And Energy Numbers Worksheet Answers satisfy students of varied ages and ability levels.
Introducing the Essence of Part B Population Biomass And Energy Numbers Worksheet Answers
Part B Population Biomass And Energy Numbers Worksheet Answers
Part B Population Biomass And Energy Numbers Worksheet Answers -
People are biomass as are all other animals and plants A pyramid of biomass shows the mass in grams or kilograms of the population of the trophic levels in a food chain
This worksheet contains basic conceptual questions about Renewable and Nonrenewable Energy In this worksheet students will answer questions about the following terms Renewable energyNonrenewable
energySolar energyWind energyHydropower energyBiomassNuclear energyGeothermal energyFossil fuelsNatural gasCoalWhat s
At their core, Part B Population Biomass And Energy Numbers Worksheet Answers are cars for conceptual understanding. They encapsulate a myriad of mathematical concepts, leading learners with the maze
of numbers with a series of appealing and purposeful exercises. These worksheets go beyond the limits of typical rote learning, urging energetic engagement and promoting an intuitive grasp of
numerical partnerships.
Supporting Number Sense and Reasoning
Worksheet For Pyramids Of Number And Biomass Teaching Resources
Worksheet For Pyramids Of Number And Biomass Teaching Resources
Ecological Pyramid This pyramid shows how energy and biomass decrease from lower to higher trophic levels Ecological pyramids can demonstrate the decrease in energy biomass or numbers within an
ecosystem Trophic Levels and Biomass With less energy at higher trophic levels there are usually fewer organisms as well Organisms
On average only about 10 of the energy stored as biomass in one trophic level e g primary producers gets stored as biomass in the next trophic level e g primary consumers Put another way net
productivity usually drops by a factor of ten from one trophic level to the next
The heart of Part B Population Biomass And Energy Numbers Worksheet Answers depends on growing number sense-- a deep comprehension of numbers' definitions and interconnections. They motivate
expedition, welcoming students to explore arithmetic operations, analyze patterns, and unlock the mysteries of series. Through thought-provoking challenges and rational challenges, these worksheets
become portals to honing reasoning skills, supporting the logical minds of budding mathematicians.
From Theory to Real-World Application
Biomass Energy Research Worksheet teacher Made
Biomass Energy Research Worksheet teacher Made
Browse biomass energy worksheet resources on Teachers Pay Teachers a marketplace trusted by millions of teachers for original educational resources
Worksheet requires students to draw number biomass and energy pyramids based on organisms in a food chain The pyramid of biomass is purposely inverted giving the students a chance to later explain
why this is possible
Part B Population Biomass And Energy Numbers Worksheet Answers act as channels linking theoretical abstractions with the apparent truths of everyday life. By instilling functional scenarios into
mathematical exercises, learners witness the significance of numbers in their surroundings. From budgeting and dimension conversions to understanding statistical data, these worksheets empower pupils
to wield their mathematical expertise beyond the boundaries of the classroom.
Diverse Tools and Techniques
Versatility is inherent in Part B Population Biomass And Energy Numbers Worksheet Answers, using a collection of pedagogical tools to accommodate different knowing designs. Visual aids such as number
lines, manipulatives, and digital resources act as buddies in visualizing abstract concepts. This diverse technique makes certain inclusivity, suiting learners with various preferences, staminas, and
cognitive designs.
Inclusivity and Cultural Relevance
In an increasingly varied world, Part B Population Biomass And Energy Numbers Worksheet Answers accept inclusivity. They transcend social boundaries, incorporating instances and issues that resonate
with students from varied backgrounds. By integrating culturally pertinent contexts, these worksheets promote an environment where every learner feels represented and valued, improving their link
with mathematical concepts.
Crafting a Path to Mathematical Mastery
Part B Population Biomass And Energy Numbers Worksheet Answers chart a program in the direction of mathematical fluency. They instill determination, important reasoning, and analytical abilities,
necessary features not only in maths yet in various aspects of life. These worksheets encourage learners to navigate the elaborate surface of numbers, nurturing a profound admiration for the beauty
and logic inherent in mathematics.
Embracing the Future of Education
In an era marked by technological advancement, Part B Population Biomass And Energy Numbers Worksheet Answers perfectly adjust to digital systems. Interactive interfaces and electronic sources
augment traditional learning, supplying immersive experiences that go beyond spatial and temporal boundaries. This combinations of standard methods with technological advancements declares a
promising period in education and learning, promoting a much more vibrant and interesting knowing atmosphere.
Verdict: Embracing the Magic of Numbers
Part B Population Biomass And Energy Numbers Worksheet Answers exemplify the magic inherent in maths-- a captivating trip of exploration, discovery, and proficiency. They go beyond conventional
pedagogy, working as catalysts for stiring up the flames of curiosity and questions. With Part B Population Biomass And Energy Numbers Worksheet Answers, learners start an odyssey, opening the
enigmatic globe of numbers-- one problem, one service, at a time.
Quiz Worksheet Biomass Energy Study
Pyramids Of Biomass And Pyramids Of Numbers Storyboard
Check more of Part B Population Biomass And Energy Numbers Worksheet Answers below
141 Food Pyramids Of Numbers Biomass And Energy Biology Notes For IGCSE 2014
9 Biomass Energy Pyramid Labeled Worksheet Worksheeto
Pyramids Of Biomass Worksheet Ivuyteq
11 Biomass Energy Pyramid Worksheet Worksheeto
Pyramids Of Biomass Worksheet Ivuyteq
Biomass Energy Worksheet Worksheet
Biomass Energy Worksheets Teaching Resources Teachers Pay Teachers
This worksheet contains basic conceptual questions about Renewable and Nonrenewable Energy In this worksheet students will answer questions about the following terms Renewable energyNonrenewable
energySolar energyWind energyHydropower energyBiomassNuclear energyGeothermal energyFossil fuelsNatural gasCoalWhat s
Geothermal Energy A Student s Guide To Global Climate Change
Not all geothermal energy comes from power plantagen Geographic heat pumps canister do all classification of things from heating or cooling house go warming swimming pools These systems transfer get
by pumping drink or a air a special type of fluid through pipes just below the Earth s flat where the fever is a uniform 50 to 60 F
This worksheet contains basic conceptual questions about Renewable and Nonrenewable Energy In this worksheet students will answer questions about the following terms Renewable energyNonrenewable
energySolar energyWind energyHydropower energyBiomassNuclear energyGeothermal energyFossil fuelsNatural gasCoalWhat s
Not all geothermal energy comes from power plantagen Geographic heat pumps canister do all classification of things from heating or cooling house go warming swimming pools These systems transfer get
by pumping drink or a air a special type of fluid through pipes just below the Earth s flat where the fever is a uniform 50 to 60 F
11 Biomass Energy Pyramid Worksheet Worksheeto
9 Biomass Energy Pyramid Labeled Worksheet Worksheeto
Pyramids Of Biomass Worksheet Ivuyteq
Biomass Energy Worksheet Worksheet
Biomass Worksheet Biomass Renewable Energy
Pyramids Of Number Biomass And Energy W3schools
Pyramids Of Number Biomass And Energy W3schools
Pyramids Of Biomass Worksheet Worksheet Design Ideas | {"url":"https://szukarka.net/part-b-population-biomass-and-energy-numbers-worksheet-answers","timestamp":"2024-11-03T15:21:56Z","content_type":"text/html","content_length":"26289","record_id":"<urn:uuid:bbad678b-f375-40ab-9b48-6e4321816a51>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00132.warc.gz"} |
Audiogon Discussion Forum
Thoughts on cartridge loading with a SUT.
Hi, My name is dave slagle and I am an audio geek.
This is my first post here, Larry and Steve (Vetterone & Celloist) encouraged me to start participating in the forums here during the Audiogon meet and greet during RMAF 2K8. What follows is payback
for the 2 for 1 margaritas they forced down my throat.
There seems to be a lot of confusion on how to select a transformer for a particular MC cartridge. Many people consider this a black art and I have to admit I have been shocked by the sonic
differences from devices with similar electrical characteristics. I'll try to be brief and cover some of the relationships at hand in a few short paragraphs.
It’s about the turns ratio.
The first thing you need to know about the SUT in question is the turns ratio. This is an easy number to find out and is the most important. The idea here is the turns ratio tells you the gain but it
also plays a major part in the load the cartridge sees. The rule is impedance is the square of the turns ratio. If you have a 1:10 turns ratio, you get a 1:100 impedance ratio. If you assume that the
1:10 will feed a phono stage with a 47K input resistor the 47K will reflect back 47K divided by the impedance ratio (100) or 470 ohms. It is really that simple. Many transformers (particularly the
vintage ones) are speced with impedance numbers like 150:50K which simply translates to a 1:18 step up ratio. (sqrt(50000/150)). The impedance numbers are somewhat important since they suggest the
ballpark of where the unit was designed to operate, but it is easy enough to toss that info out the door and measure things for yourself. One other important thing to realize about the turns ratio is
that "More" isn't always better. It is surprisingly easy to have too much of a step up resulting in a situation where you overdrive (clip) your phono stage.
Relating the turns ratio to load.
Sticking with the above example of a 1:10 driving the 47K input, our cartridge in a perfect world sees a load of 470 ohms. Now what if we have a 103R with a 15 ohm internal impedance and we want to
play with larger loads. (A larger load is one that is smaller in value) The traditional way to do this is to add additional resistance in parallel with the 47K to get the desired value. Again this is
a simple application of the turns ratio. If we desire a 150 ohm load we would need to parallel a 22K resistor with the existing 47K, which nets us 15K across the secondary. (47000||22000=15000)
Dividing the 15K by our impedance ratio of 100 nets us our desired 150 ohm load.
Houston, We have a problem
Everything here is nice and clean upon firs glace, but we have actually hit our first speed bump. When we terminate the transformer with a different value, we not only change the load seen by the
cartridge we change the behavior of the transformer itself! This means we are changing two parameters which creates a very unpredictable situation which goes a long way to explain why results of
playing with secondary loading on SUT's has lead to such varied results since you cannot be sure what you are fixing.
Why the need for a load anyways.
The simple answer is all cartridges have a peak (resonance) at some high frequency and by increasing the load the peak is damped smoothing out the measured response. The general effect is when the
load it too low (say unloaded) the cartridge sounds bright and when you take it to the other extreme and increase the load the highs start to sound rolled of. At some point in the middle the "sweet
spot" is found and the only way I know how to this is in system by ear. The 800 pound gorilla sitting in the corner is the fact that the SUT will show the exact same behavior as the cartridge and
loading will have a similar effect. Typically the resonance in the SUT will be an octave to a decade higher than that of the cartridge. Unfortunately there is no way to know if your choice of
secondary load has damped the resonance of the cartridge taming the highs or simply rolled of the SUT masking the brightness of the cartridge resonance.
Gorilla Wrangling
How do we tame that gorilla in the corner?
Menno van der Veen
presented the best approach I have seen in his white paper on his SUT’s. Essentially his approach is to determine the load needed to make the transformer behave as desired and then add any additional
load required by the cartridge to the primary of the transformer. He even goes as far as to measure his SUT’s under a number of conditions and provides the needed loading info for various impedance
cartridges. This is the little brother of the 800 pound gorilla, lets call him the 400 pound gorilla on the couch. Just as increasing the load on the secondary damps transformer resonances (ringing)
decreasing the source (cartridge) impedance tends to increase ringing. This makes the once simple loading of a transformer become a far more complex relationship. A few other gorillas strolling about
the room involve the belief by some that the loading of transformers can cause more sonic harm than the problem it fixes and the idea that loading a transformer secondary causes phase shift at high
Jane Goodall to the rescue.
Rather than ignore all of these primates strolling around our listening rooms the best approach is to attempt to understand them and learn how to live with them. The best way I know how to do this is
to converse about experiences and understand that ultimately it comes down to first understanding ones musical preferences so their experiences can be related to the info available to us.
A final anecdote.
For some time now I have been a fan of primary loading of the SUT. Over the past few years I have encouraged a number of people to play with primary vs. secondary loading and the results have been
mixed. The one interesting thing was that consistently the people who preferred secondary loading said primary loading was “harsh” sounding and the people who preferred primary loading stated it
brought more of the music out of their system. In looking at the measured response of the transformers in question with the known source and load impedances it quickly became apparent that the people
needing the secondary loads were not only damping the ringing in their cartridges, but also taming a peak in response in the transformer. By loading the primary they actually made the transformer
ringing worse, which is a very plausible explanation of what they heard. When looking at the measured response of the transformers used by those who preferred primary loading, the resonant peak was
reasonably controlled and substantially beyond the audio band.
Excellent information! Thanks Dave! I can't believe more folks have not responded.
Welcome Dave!
Thank you for the great review of SUTs and loading. I've found that it pays off to try different types of loads. I like loading the primary of the SUT to load the cartridge. I have a rule of thumb: I
want the load "seen" by the cartridge to be about 5 times greater than the drc of the cartridge. For example, the Denon 103R has a drc of about 14 ohms I believe. To load the Denon 103 I would start
with 70 ohms across the primary of the SUT. I have used a pot across the primary so I can quickly adjust the load by ear. After I find the load I like, I'll replace the pot with some nice resistors.
John P
Dear Dave, I just have posted an answer about what Graham Tricker of Tron says about loading with SUT and I would be interested about what you think of that.
Thank you for your post.
Very interesting & important info from someone who actually makes trannies -- and many other things.
Thank you!
It is on the Ortofon per Winfield load thread but not displayed yet.
Welcome to the fray and thanks for a fascinating first post. Anyone who uses SUT's should pay very close attention, since their system performance will certainly depend on optimizing loading.
Your explanation of the complexities of getting it right for any particular LOMC won't make anyone think it's easy, but as you know better than most - it isn't.
When we used SUT's (about 3 years ago) we never tried primary side loading. Our BentAudio Mu's were built to make secondary side loading quick and easy, so it never occured to us to try. I now
realize we may never have achieved the most optimum results possible.
You won't be surprised to hear that our results were ***extremely*** sensitive to tiny changes in resistive values on the secondary. The only way I could get satisfactory response was to double up
resistors to achieve intermediate values not attainable with any single resistor. From your explanation it seems likely I was attepting to balance two interactive variables, transformer resonance and
cartridge resonance. No surprise it took extraordinary measures to do it. Loading both sides might have been simpler, or at least more effective.
A friend and long time audio nut has owned literally dozens of SUT's. He always wondered what mysterious characteristic made one SUT better than another for a particular cartridge, even after turns
ratios and secondary side loading were optimized. Your post may explain that so I'll point him to it. Good stuff.
In the end, as you said, truly optimizing any particular cartridge/SUT combination can only be done by trial, due to the interactivity of two resonances. But at least having an awareness of that
interplay offers the possibility of starting off in the right ball park. That's a big step forward, for any gorilla.
Here is more information regarding Dave's spot-on info on the Jensen transformer website.
Welcome, Dave. And thanks for the very informative post, even for a non-user of transformers. Your level of contribution is what makes this a great place to share information and experiences. | {"url":"https://d2dve11u4nyc18.cloudfront.net/discussions/thoughts-on-cartridge-loading-with-a-sut","timestamp":"2024-11-04T05:43:49Z","content_type":"text/html","content_length":"95912","record_id":"<urn:uuid:db1e34d8-16f4-408f-b4e8-d2aa26ed6237>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00758.warc.gz"} |
Dr. Patrick Krämer
• by appointment
• Kollegiengebäude Mathematik (20.30)
• 3.025
• Karlsruher Institut für Technologie (KIT)
Institut für Angewandte und Numerische Mathematik 3
Englerstraße 2
D-76131 Karlsruhe
PhD and Diploma thesis and publications
If you are student at KIT you can get a free license key for MATLAB at the KIT software shop. Then you can download the software from mathworks.com after registration.
SS 2015:
Exercises to the lecture Aspects of Numerical Time Integration
The exercises will take place in room 3.069 in the Kollegiengebäude Mathematik 20.30 tuesday 15:45. First exercise on the 14.4.2015.
Important Note: Please install MATLAB on your Laptop and bring it with you to the Exercise if possible
In the exercises to the lecture Aspects of Numerical Time Integration our aim is to learn how to implement efficient integrators for certain partial differential equations such as the (nonlinear)
Schrödinger equation.
• At first we want to recap how to implement simple time integrators in MATLAB, such as the explicit and implicit Euler method, for ordinary differential equations (ODE) of the form
• Afterwards we practice the application of splitting methods for ODES
• Furthermore we learn how to implement pseudo-spectral methods, which make use of the Fast Fourier Transform to discretize spatial differential operators such as the laplacian
• Our aim is then to construct efficient integrators for the nonlinear Schrödinger equation which are based on pseudo-spectral methods for the space approximation and splitting methods for the
Exercise Sheets
Exercise Sheet 01 explicit Euler method for an ODE, Order Plots
Exercise Sheet 02 Lie and Strang splitting for an ODE
Exercise Sheet 03 Finite Differences / Störmer-Verlet for a linear wave equation
Exercise Sheet 04 (corrected version) Spectral Methods in Matlab (see also L.Trefethen - Spectral Methods in Matlab (2000) or in my diploma thesis, chapter 4.1 )
Exercise Sheet 05 Spatial order and efficiency of spectral methods and finite differences for a time-dependent problem
Exercise Sheet 06 Temporal and spatial order of a Strang splitting method with the space discretization by spectral methods applied to the NLS
Exercise Sheet 07 Conservation of norm and energy / NLS in 2D
WS 2014/15:
Exercises to the lecture Splitting Methods
The exercises will take place in room 1C-03 in building Allianzgebäude 5.20 wednesday 15:45. First exercise on the 22.10.2014.
Important Note: Please install MATLAB on your Laptop and bring it with you to the problem class if possible
In the exercises to the lecture Splitting Methods we want to learn how to use splitting methods as efficient numerical time integrators. The aim of splitting methods is to brake down a complicated,
costly problem into simpler subproblems which very often can be solved very efficiently.
For example solving the nonlinear Schrödinger equation
with a Runge-Kutta method is very costly.
On the other hand, breaking down the NLS into the two subproblems
(S1) S2)
allows to construct an efficient time integrator:
We can solve the subproblems (S1) and (S2) even exactly in time and combine their solutions in order to obtain an approximation to
In the problem class
• we will firstly have a short introduction to the numerical software MATLAB, which we will use for the practical handling of numerical time integration methods for ODEs and later on also for PDEs.
• we will also deepen the theoretical understanding of splitting methods and their applications
• of course if there are questions concerning the lecture we try to resolve these questions together.
Exercise Sheets
Exercise Sheet 1 explicit/implicit Euler method, exact solution of an ODE
Exercise Sheet 2 Lie splitting example and order of Strang splitting
Exercise Sheet 3 adjoint of a method
Exercise Sheet 4 global error of Strang splitting, auxiliary results for the BCH formula
Exercise Sheet 5 BCH formula and Lie derivative
Exercise Sheet 6 Lemma by Gröbner
Exercise Sheet 7 Third order splitting method
Exercise Sheet 8 Symplectic Euler method for a Hamiltonian system (harmonic oscillator)
Exercise Sheet 9 2-Body Kepler problem
Exercise Sheet 10 Some theory on symplectic mappings
Exercise Sheet 11 The implicit midpoint rule
SS 2014:
Proseminar: Approximation von Funktionen
Please go to the german site.
Exercises to the lecture Aspects of Numerical Time Integration
The exercises will take place in room K 2 (Kronenstraße 32) tuesday 15:45. First exercise on the 15.4.2014.
Important Note: Please install MATLAB on your Laptop and bring it with you to the Exercise if possible
In the exercises to the lecture Aspects of Numerical Time Integration our aim is to learn how to implement efficient integrators for certain partial differential equations such as the (nonlinear)
Schrödinger equation.
• At first we want to recap how to implement simple time integrators in MATLAB, such as the explicit and implicit Euler method, for ordinary differential equations (ODE) of the form
• Afterwards we practice the application of splitting methods for ODES
• Furthermore we learn how to implement pseudo-spectral methods, which make use of the Fast Fourier Transform to discretize spatial differential operators such as the laplacian
• Our aim is then to construct efficient integrators for the nonlinear Schrödinger equation which are based on pseudo-spectral methods for the space approximation and splitting methods for the
Exercise sheets
Exercise Sheet 1 Explicit/Implicit Euler Method
Exercise Sheet 2 Order Plots and Splitting Methods for ODEs
Exercise Sheet 3 Space discretization with finite differences and spectral methods
Exercise Sheet 4 Animated plots and transport equation
Exercise Sheet 5 Linear Schrödinger equation with potential
Exercise Sheet 6 Nonlinear Schrödinger equation
Exercise Sheet 7 Norm and Energy conservation of Lie and Strang solutions of the NLS
Exercise Sheet 8 Numerical order of Lie and Strang solutions of the NLS
Exercise Sheet 9 Regularity of numerical solutions of the NLS & NLS in 2D | {"url":"https://www.math.kit.edu/ianm3/~pkraemer/","timestamp":"2024-11-11T09:52:59Z","content_type":"text/html","content_length":"181451","record_id":"<urn:uuid:217b0239-695e-4a12-8124-17708455ea2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00097.warc.gz"} |
A Firefly Optimized Multi Adaptive Parallel Neuro Fuzzy Inference System
Volume 09, Issue 02 (February 2020)
A Firefly Optimized Multi Adaptive Parallel Neuro Fuzzy Inference System
DOI : 10.17577/IJERTV9IS020251
Download Full-Text PDF Cite this Publication
Dr. D. Revathi, 2020, A Firefly Optimized Multi Adaptive Parallel Neuro Fuzzy Inference System, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 09, Issue 02 (February 2020),
Text Only Version
A Firefly Optimized Multi Adaptive Parallel Neuro Fuzzy Inference System
Dr. D. Revathi Assistant Professor,
Department of Information Technology, Bharathidasan College of Arts and Science, Erode 638116.
Abstract:- In this research the firefly optimized multi adaptive parallel neuro-fuzzy inference (FOMAPNFI) system introduce for improving the performance of the system. This method is inspired from
fireflies which produce short flashes as a protective system and to attract mates or prey. The rate and rhythm of the flashes, as well as the time interval between them attract two sexes toward each
other. The intensity of light is decreased following an increase in distance from light source. The transmitted light is used as the formulated objective function. Three important properties of FA
algorithm are: 1) Firefly is brighter and more attractive when it moves accidentally and all fireflies are unisexual; 2) Attractiveness of firefly is proportional to the brightness and distance from
it, and decrease in light intensity is calculated by light absorption coefficient. Brightness of firefly is determined by the objective function value. 3) The distance between each firefly is
obtained through objective function.
Then consider the Multi adaptive parallel neuro- fuzzy inference system along with additional optimized parameters by using firefly algorithm. This research is used for removing the impulse noise
efficiently rather than other methods. Each member of a group of fireflies moved toward a point where their best experience had occurred. By using this approach the best and multiple membership
parameters are selected. Hence it is used for which improving the performance of denoising superior.
Keywords: Firefly algorithm, Fuzzy rules, Neural Network, Membership function
1. INTRODUCTION
Over the past decades, different image denoising approaches have been developed for removing the impulse noise. The process of removing the noise from the images is mostly required for enhancing
the restored images in image processing. Due to the noise removal operator, such approaches are having high computation complexity and difficult to preserve the useful information during noise
removal process. In previous researches, MAPNFIS is proposed for removing the processing speed of denoising process with the multiple parameters. In this approach, Multi-AFIS is hybridized with
the NN for improving the performance speed of the denoising process and the selection of fuzzy membership function is done in adaptive manner. The computation of weighting factors of the rules is
used for selecting the membership function adaptively. However, the performance of denoising is still less due to high noise rate and selection of optimal membership function.
Hence in this research phase, a Firefly Optimized Multi Adaptive Parallel Neuro-Fuzzy Inference System (FOMAPNFIS) is proposed. In this method, firefly algorithm is introduced for selecting the
optimized membership function of fuzzy rules which is utilized for denoising process. By using this approach, the best and multiple membership parameters are selected. Hence it is used for which
improving the performance of denoising superior. Finally, the experimental results show that the proposed FOMAPNFI system achieves better performance than the other image denoising approaches.
2. RELATED WORK
A new method of noise removal5 was proposed for the images corrupted by impulse noise. The main aim of this method was combining median and adaptive median filter for achieving better outcomes in
terms of computation time and visual quality. In this method, the corrupted pixels were replaced by using a median filter. Otherwise, they were estimated by their neighbours values. However, PSNR
value was less.
Medical image denoising method4 was proposed by using convolutional denoising auto-encoders. In this approach, denoising auto-encoder was constructed by using convolutional layers for removing
the noise from the medical images. However, the performance of denoising was less when training sample size was increased.
Fuzzy based salt-and-pepper noise removal9 was proposed by using adaptive switching median filter. In this method, two phases were considered such as detection and filtering. In the detection
phase, neighbourhood mapping based algorithm was used for detecting the corrupted pixels. In the filtering phase, the corrupted pixels were filtered by using fuzzy membership function so the
uncorrupted pixels were retained. This algorithm was used for variable impulse noise and there was no requirement of threshold for detection process. However in this method, the local information
in the image was not estimated.
An efficient method of image denoising3 was proposed by using hybrid filter approach. The major objective of this approach was estimating the uncorrupted image from the distorted or noisy image.
This approach was performed based on wiener filter and Pseudo inverse filter. However, the time complexity and complexity of the algorithm were high.
Collecting medical images
Collecting medical images
Determine training and testing medical images
Determine training and testing medical images
Define fuzzy rules and solution
Define fuzzy rules and solution
Encode the fuzzy parameters of MAPNFIS into a firefly
Encode the fuzzy parameters of MAPNFIS into a firefly
In this section, the proposed FOMAPNFIS for removing the impulse noise in the images is explained in brief. This approach is used for optimizing the membership functions utilized in MAPNFIS for
obtaining the best outcome by including the firefly optimization algorithm. Initially, MAPNFIS is briefly explained in section 5.2. Here, how firefly optimization algorithm is used for training the
MAPNFIS for obtaining the best fuzzy membership functions with minimal error function is explained in below.
1. Firefly Optimization Algorithm
Initialize the population of fireflies
Initialize the population of fireflies
Decode each firefly into MAPNFIS
Decode each firefly into MAPNFIS
Basically, MAPNFIS has 5 layers and each layer has its own membership function.1 In each layer, the membership function is optimized by firefly algorithm.10 Generally, firefly algorithm
follows three rules which are given in below.
Calculate the fitness functions
Calculate the fitness functions
☆ All fireflies are unisex. Each firefly can attract the other fireflies regardless of their sex.
Update the light intensity of each firefly and Compare with the other fireflies
Update the light intensity of each firefly and Compare with the other fireflies
☆ Attractiveness is proportional to the brightness and inversely to the distance. It refers a firefly that has less brightness will move towards the brighter one and if there is no firefly
in that region, it will move randomly.
Move the position of firefly that has less intensity to brighter one
Move the position of firefly that has less intensity to brighter one
☆ The brightness or light intensity of a firefly is determined by the objective function which is to be optimized.
Initially, the fuzzy rules are generated according to the three inputs such as 1, 2, 3 and the generated fuzzy rules are given as input to the first layer of MAPNFIS. Then, the firefly
algorithm is applied for training the fuzzy rules for optimizing the fuzzy rules to get a specfic range of solution based on the set of constraints and the fitness function. In this
research, PSNR, MSE and FCR are used as the fitness function for evaluating the quality of MAPNFIS. The overall process of this approach is shown in Figure
Update attractiveness
Update attractiveness
Rank the fireflies and Find the current best
Rank the fireflies and Find the current best
Obtain the best firefly
Obtain the best firefly
Check the pixel is whether noisy or noise-free pixel
Check the pixel is whether noisy or noise-free pixel
Fig.1.1 Overall Flow Diagram of FOMAPNFIS
Consider the set of fireflies which is represented as a population. Each input of the second layer is represented by a firefly. Each firefly has two parts such as set of antecedent
parameters and set of consequent parameters. The parameters which are required for adjusting MAPNFIS are coded into the individual real number code chain i.e., the network parameters like
, , 1, 2, 3 4 are integrated together. Based on the population of fireflies, a group of fireflies will be generated randomly and each firefly is mapped into MAPNFISs parameter set. After
that, the fitness value of each firefly is calculated. The light
intensity of each firefly is computed by using the fitness function.
22. Classify the pixel as whether noisy or noise-free pixel.
( ) = = 1
([, ] [, ])2
1. EXPERIMENTAL RESULTS
In this section, the performance effectiveness of
( ) = =
the proposed Firefly Optimized Multi Adaptive Parallel
× 100 (3.2)
Neuro-Fuzzy Inference System (FOMAPNFIS) is evaluated and compared with the Multiple Adaptive Parallel Neuro Fuzzy Inference System (MAPNFIS) based
( ) = = 20 ×
( 255 )
Impulse detector in terms of mean squared error, false
classification ratio and processing time, peak signal-to-
Once the light intensity of each firefly is computed, the attractiveness between fireflies is compared based on the computed light intensity and move fireflies that have less brightness to the
highest brighter firefly. The attractiveness is varied according to the distance between the fireflies. Moreover, the computation of fitness function is repeated until the system fitness meets either
the fitness threshold or the iterative number is larger than the maximum allowable iterative number. Then, the process is terminated and the fireflies are sorted based on their attractiveness.
Finally, the output is the position and the fitness value of the best firefly. Once the training process is completed, the testing images are analyzed for classifying
noise ratio. MATLAB simulation environment is used to prove the improvement and the successful execution of the environment in case of presence of more noisy images. In the experiments, the noisy
analysis medical pictures are non inheritable by contaminating a given testing picture with an impulse noise of given noise density. Numerous noise density values are used such are 25%, 50% and 75%
which indicates the low, medium and high noise densities, correspondingly.
1. Mean Squared Error
Mean Squared Error (MSE) is defined as follows:
= 1 ([, ] [, ])2
the image pixels whether it is noisy or noise-free pixel
effectively in adaptive manner and using median filter.
Algorithm: FOMAPNFIS based image denoising approach
Input: Training images
Output: Classification of noisy and noise-free pixels
1. The nearest pixels of the center pixel are given to the sub detectors and noise filter.
2. Convert the scalar input values into fuzzy numbers and generate fuzzy rules.
3. The fuzzy rules are scattered to the nodes for parallel processing and compute the type-1 interval fuzzy set.
4. Initialize the number of fireflies for each fuzzy set.
5. Define objective functions
1(), 2( ) 3( ).
6. Define operating parameters
, , 1, 2, 3 4 for each firefly.
7. Compute the light intensity at based on the objective functions.
8. Define the absorption coefficient of each firefly. 9. ( < )
10. = 1: (all fireflies) 11. = 1: ( fireflies) 12. ( > )
13. Move firefly towards .
14. End
1. Update attractiveness.
2. Evaluate the new solutions and update light intensity.
17. End
18. End
19. Rank fireflies and find the current best. 20. End
21. Obtain the best solution.
In above equation, [, ] and [, ] denotes the
luminance value of the pixel at line l and column c of one of the three color bands of the original and the restored versions of a corrupted test image correspondingly. The MSE is valid
for the gray-scale images. But the testing images are color images. So, the MSE computation is performed for three times, one for each color bands and the three resulting MSE values are
averaged to acquire the particular MSE value for the image.
The MSE comparison for the existing and proposed system is shown in Table 1.1. If the noise density is 75%, the MSE is 46 in the existing MAPNFIS method and 41 in the proposed method.
Table.1.1 Comparison of MSE
Noise Density MAPNFIS FOMAPNFIS
25% 32 27
50% 41 39
75% 46 41
Fig.1.2 Comparison of MSE
Figure 1.2 shows that the comparison of MSE for existing and proposed system. In x-axis, the noise densities are taken in %. In y-axis, MSE values are considered. If the noise density is
25%, then the MSE of proposed FOMAPNFIS and MAPNFIS are 32 and 27 respectively. When the noise density is 50%, the corresponding MSE value of proposed FOMAPNFIS and MAPNFIS are 41 and 39
respectively. Also, if the noise density is 75%, then the MSE value of proposed FOMAPNFIS and MAPNFIS are
46 and 41 correspondingly. From the analysis, it is observed that the proposed FOMAPNFIS has less mean square error value compared with the existing method.
2. False Classification Ratio
False classification ratio is defined as follows:
= × 100
Where denotes the falsely classified pixels of the input image and represents the total number of pixels.
3. Computation Time
Computation time is defined as the time taken for processing the images and filters the noise.
The comparison of computation time for the existing and proposed system is shown in Table 1.3. If the noise density is 75%, the computation time is 1.91ms in the existing MAPNFIS method and
1.4ms in the proposed method.
Table.1.3 Comparison of Computation Time (ms)
Noise Density MAPNFIS FOMAPNFIS
25% 1.51 0.86
50% 2.5 1.23
75% 1.91 1.4
The FCR comparison for the existing and proposed system is shown in Table 4.2. If the noise density is 75%, the FCR is 3.6% in the existing FOPFIS method and 2.7% in the proposed method.
Noise Density MAPNFIS FOMAPNFIS
25% 1.5 1
50% 2.8 2.1
75% 3.6 2.7
Noise Density MAPNFIS FOMAPNFIS
25% 1.5 1
50% 2.8 2.1
75% 3.6 2.7
Table.1.2 Comparison of FCR (%)
Fig.1.3 Comparison of FCR
Figure 1.3 shows that the comparison of FCR for existing and proposed system. In x-axis, the noise densities are taken in %. In y-axis, FCR values are considered in %. If the noise density is
25%, then the FCR of proposed FOMAPNFIS and MAPNFIS are 1.5% and 1%
Fig.1.4 Comparison of Computation Time
Figure 1.4 shows that the comparison of computation time for existing and proposed system. In x-axis, the noise densities are taken in %. In y-axis, computation time values are considered in
milliseconds. If the noise density is 25%, then the computation time of proposed FOMAPNFIS and MAPNFIS are 1.51ms and 0.86ms respectively. For the noise density is 50%, the corresponding
computation time of proposed FOMAPNFIS and MAPNFIS are 2.5ms and 1.23ms respectively. Also, if the noise density is 75%, then the computation time of proposed FOMAPNFIS and MAPNFIS are 1.91ms
and 1.4ms correspondingly. From the analysis, it is observed that the proposed FOMAPNFIS has less computation time compared with the existing method.
4. Peak Signal-to-Noise Ratio
Peak signal-to-noise ratio (PSNR) is used to represent the index of media quality analysis and substitutes the average MSE of frames in the PSNR computing equation to obtain the PSNR value of
the media segment and it is expressed as follows:
respectively. When the noise density is 50%, the
corresponding FCR value of proposed FOMAPNFIS and
PSNR = 20 × log10
( 255 )
MAPNFIS are 2.8% and 2.1% respectively. Also, if the noise density is 75%, then the FCR value of proposed FOMAPNFIS and MAPNFIS are 3.6% and 2.7%
correspondingly. From the analysis, it is observed that the proposed FOMAPNFIS has less false classification ratio compared with the existing method.
The PSNR comparison for the existing and proposed system is shown in Table 1.4. If the noise density is 75%, the PSNR is 90% in the existing MAPNFIS method and 94% in the proposed method.
Table.1.4 Comparison of PSNR (%)
Noise Density MAPNFIS FOMAPNFIS
25% 40 45
50% 56 60
75% 90 94
Fig.1.5 Comparison of PSNR
Figure 1.5 shows that the comparison of PSNR for existing and proposed system. In x-axis, the noise densities are taken in %. In y-axis, PSNR values are considered in terms of %. If the noise
density is 25%, then the PSNR of proposed FOMAPNFIS and MAPNFIS are 40% and 45% respectively. When the noise density is 50%, the corresponding PSNR value of proposed FOMAPNFIS and MAPNFIS are
56% and 60% respectively. Also, if the noise density is 75%, then the PSNR value of proposed FOMAPNFIS and MAPNFIS are 90% and 94%
correspondingly. From the analysis, it is observed that the proposed FOMAPNFIS has less peak signal-to-noise ratio compared with the existing method.
In this paper, Firefly Optimized Multi Adaptive Parallel Neuro-Fuzzy Inference System (FOMAPNFIS) is introduced for enhancing the noise removing process in image processing. In this method,
firefly optimization algorithm is applied for further improving the multi adaptive Neuro-fuzzy system with consideration of multiple parameters. The major objective of this method is
optimizing the fuzzy parameters for improving the quality of the image denoising performance efficiently. It optimizes the selection of fuzzy membership functions for deciding the noisy and
noise-free pixels in the images effectively. Finally, the experimental results prove that the proposed FOMAPNFIS has better performance compared to the other image denoising approaches in
terms of MSE, PSNR, MAE and computation time.
1. Benmiloud, T. (2010, June). Multi-output adaptive Neuro-fuzzy inference system. In WSEAS international conference on neural networks (Vol. 11, pp. 94-98).
2. Choubey, A., Sinha, G. R., & Choubey, S. (2011, April). A hybrid filtering technique in medical image denoising: Blending of neural network and fuzzy inference. In Electronics Computer
Technology (ICECT), 2011 3rd International Conference on (Vol. 1, pp. 170-177). IEEE.
3. Detani, D., Upadhyay, A., & Saxena, M. An Efficient Method of Image Denoising Using Hybrid Filter Approach. IOSR Journal of Electronics and Communication Engineering, 9(4), 79-84.
4. Gondara, L. (2016, December). Medical image denoising using convolutional denoising autoencoders. In Data Mining Workshops (ICDMW), 2016 IEEE 16th International Conference on (pp. 241-
246). IEEE.
5. Jourabloo, A., Feghahati, A. H., & Jamzad, M. (2012). New algorithms for recovering highly corrupted images with impulse noise. Scientia Iranica, 19(6), 1738-1745.
6. Khalifa, A. B., & Frigui, H. (2015, August). MI-ANFIS: A multiple instance adaptive neuro-fuzzy inference system. In Fuzzy Systems (FUZZ-IEEE), 2015 IEEE International Conference on (pp.
1-8). IEEE.
7. Muosa, A. H., & Hamad, A. M. (2015). Mixed noise denoising using Neuro fuzzy and memetic algorithm. International Journal of Research in Computer Applications and Robotics, 3(12), 16-23.
8. Shaabani, M. E., Banirostam, T., & Hedayati, A. (2016). Implementation of Neuro fuzzy system for diagnosis of multiple sclerosis. International Journal of Computer Science and Network, 5
(1), 157-164.
9. Thirilogasundari, V., & Janet, S. A. (2012). Fuzzy based salt and pepper noise removal using adaptive switching median filter. Procedia Engineering, 38, 2858-2865.
10. Nhu, H. N., Nitsuwat, S., & Sodanil, M. (2013, September). Prediction of stock price using an adaptive Neuro-Fuzzy Inference System trained by Firefly Algorithm. In Computer Science and
Engineering Conference (ICSEC), 2013 International (pp. 302-307). IEEE.
11. Yang, X. S., & He, X. (2013). Firefly algorithm: recent advances and applications. International Journal of Swarm Intelligence, 1(1), 36-50.
12. Yuksel, M. E., & Basturk, A. (2012). Application of type-2 fuzzy logic filtering to reduce noise in color images. IEEE Computational intelligence magazine, 7(3), 25-35.
You must be logged in to post a comment. | {"url":"https://www.ijert.org/a-firefly-optimized-multi-adaptive-parallel-neuro-fuzzy-inference-system","timestamp":"2024-11-12T16:22:22Z","content_type":"text/html","content_length":"85041","record_id":"<urn:uuid:cde72b60-d384-4e7a-ab78-fb26d508fea3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00295.warc.gz"} |
Area and Circumference of Circles Lesson Plan | Congruent Math
Have you ever wondered how to teach area and circumference of a circle in 7th grade? Or how do you introduce circles to students?
Use this artistic, real-life lesson plan to teach your students about finding the area and circumference of circles. Students will learn material with artistic guided notes (interactive sketch
notes), check for understanding, and practice with a doodle and color by number worksheet and a maze activity.
The lesson culminates with exploring how engineers design amusement park rides like roller coasters and Ferris wheels by applying the area and circumference of circles, ensuring they are both
thrilling and safe for riders.
Real-Life Application Project
Ask students to brainstorm different types of amusement park rides that involve circles. Examples might include Ferris wheels, roller coasters, and spinning rides.
Have students work in small groups to design their own amusement park ride that involves circles. They should consider the area and circumference of circles in their design. After they have completed
their designs, have each group present their ride to the class, explaining how they used the concepts of area and circumference to create their ride.
Additional Print Practice (Doodle Math)
A fun, no-prep way to practice area and circumference of circles is Doodle Math — it's a fresh take on color by number or color by code. It includes 3 levels of practice, and there’s one that’s
perfect for St. Patrick’s Day or any time of year.
Challenge with Composite Figures (Pixel Art Google Sheets)
If you’re looking for a digital extension covering composite figures, try my Pixel Art activities in Google Sheets. They’re self checking, and perfectly themed for Valentine’s Day or Pi Day. | {"url":"https://congruentmath.com/lesson-plan/area-and-circumference-of-circles/","timestamp":"2024-11-12T23:41:46Z","content_type":"text/html","content_length":"130861","record_id":"<urn:uuid:0c1ed058-3a1c-46e0-9e91-2cc8f48536ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00376.warc.gz"} |
ulti-dimensional Real-
6.6 Other multi-dimensional Real-Data MPI Transforms
FFTW’s MPI interface also supports multi-dimensional ‘r2r’ transforms of all kinds supported by the serial interface (e.g. discrete cosine and sine transforms, discrete Hartley transforms, etc.).
Only multi-dimensional ‘r2r’ transforms, not one-dimensional transforms, are currently parallelized.
These are used much like the multidimensional complex DFTs discussed above, except that the data is real rather than complex, and one needs to pass an r2r transform kind (fftw_r2r_kind) for each
dimension as in the serial FFTW (see More DFTs of Real Data).
For example, one might perform a two-dimensional L × M that is an REDFT10 (DCT-II) in the first dimension and an RODFT10 (DST-II) in the second dimension with code like:
const ptrdiff_t L = ..., M = ...;
fftw_plan plan;
double *data;
ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
/* get local data size and allocate */
alloc_local = fftw_mpi_local_size_2d(L, M, MPI_COMM_WORLD,
&local_n0, &local_0_start);
data = fftw_alloc_real(alloc_local);
/* create plan for in-place REDFT10 x RODFT10 */
plan = fftw_mpi_plan_r2r_2d(L, M, data, data, MPI_COMM_WORLD,
FFTW_REDFT10, FFTW_RODFT10, FFTW_MEASURE);
/* initialize data to some function my_function(x,y) */
for (i = 0; i < local_n0; ++i) for (j = 0; j < M; ++j)
data[i*M + j] = my_function(local_0_start + i, j);
/* compute transforms, in-place, as many times as desired */
Notice that we use the same ‘local_size’ functions as we did for complex data, only now we interpret the sizes in terms of real rather than complex values, and correspondingly use fftw_alloc_real. | {"url":"http://ftp.fftw.org/fftw3_doc/Other-Multi_002ddimensional-Real_002ddata-MPI-Transforms.html","timestamp":"2024-11-03T10:45:56Z","content_type":"text/html","content_length":"5975","record_id":"<urn:uuid:40ea9c50-dfa2-44b6-8452-384e4db4d604>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00099.warc.gz"} |
Lessons for quantum gravity from quantum information theory
Lessons for quantum gravity from quantum information theory
Harlow, D. (2020). Lessons for quantum gravity from quantum information theory. Perimeter Institute. https://pirsa.org/20070003
Harlow, Daniel. Lessons for quantum gravity from quantum information theory. Perimeter Institute, Jul. 13, 2020, https://pirsa.org/20070003
@misc{ pirsa_PIRSA:20070003,
doi = {10.48660/20070003},
url = {https://pirsa.org/20070003},
author = {Harlow, Daniel},
keywords = {Cosmology, Particle Physics, Quantum Foundations, Quantum Gravity, Quantum Information},
language = {en},
title = {Lessons for quantum gravity from quantum information theory},
publisher = {Perimeter Institute},
year = {2020},
month = {jul},
note = {PIRSA:20070003 see, \url{https://pirsa.org}}
Massachusetts Institute of Technology (MIT)
Talk number
Gravity is unique among the other forces in that within general relativity we are able to do calculations which, when properly interpreted, give us information about non-perturbative quantum gravity.
A classic example is Bekenstein and Hawking's calculation of the entropy of a black hole, and a more recent example is the calculation of the ``Page curve'' for certain evaporating black holes. A
common feature of both of these calculations is that they compute entropies without using von Neumann's formula S=-Tr(\rho \log \rho). In this strange situation where we are able to compute entropies
without understanding the details of the states for which they are the entropy, quantum information theory is a powerful tool that lets us extract information about those states. In this talk I'll
review aspects of these developments, emphasizing in particular the role of quantum extremal surfaces and quantum error correction. | {"url":"https://pirsa.org/20070003","timestamp":"2024-11-08T22:07:54Z","content_type":"text/html","content_length":"49731","record_id":"<urn:uuid:260ac26b-d68f-41a4-8401-390ceda2dcba>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00894.warc.gz"} |
Persistent collections can implement equiv() more efficiently
I found that structural equality between persistent collections makes very few assumptions which lead to inefficient implementations, especially for vectors and maps.
The thrust of the implementation is dispatching via methods which directly iterate over the underlying arrays.
These implementations aren't the prettiest or most idiomatic but they're efficient. If this gets implemented it would look different in Java anyway.
I tried these alternative implementations and found dramatic speed ups:
(let [die (clojure.lang.Reduced. false)]
(defn vec-eq
[^PersistentVector v ^Iterable y]
(let [iy (.iterator y)]
(.reduce v (fn [_ x] (if (= x (.next iy)) true die)) true))))
This works well when comparing vectors and for vector x list
Current implementation goes through a loop from 0 to count and calls nth for every element. nth calls arrayFor() every time, while both reduce and an iterator get the backing array once per array.
(let [o (Object.)
die (clojure.lang.Reduced. false)
eq (fn [m2] (fn [b k v]
(let [v' (.valAt ^IPersistentMap m2 k o)]
(if (.equals o v')
(if (= v v') true die)))))]
(defn map-eq
[m1 m2]
(.kvreduce ^IKVReduce m1 (eq m2) true)))
Here, too, the implementation iterates directly over the underlying array structure.
Current implementation casts the array to seq then iterates over it while getting entries from the other map via the Map interface.
This implementation avoids casting the map to a sequence and does not allocate entries.
When the receiver is a list the object compared against it and the receiver will be cast to a seq.
It could be more efficient to compare it with other collections via an iterator
(defn iter-eq
[^Iterable x ^Iterable y]
(let [ix (.iterator x)
iy (.iterator y)]
(loop []
(if (.hasNext ix)
(if (= (.next ix) (.next iy))
With criterium, vec-eq wins both cases. There are diminishing returns with size increase but still at n=64 vec-eq is twice as fast as =.
map-eq is also 2-3x faster for bigger maps and up to 10x faster for smaller maps
(doseq [n [1 2 4 8 16 32 64]
:let [v1 (vec (range n))
v2 (vec (range n))]]
(println 'iter-eq n (iter-eq v1 v2))
(cc/quick-bench (iter-eq v1 v2))
(println 'vec-eq n (vec-eq v1 v2))
(cc/quick-bench (vec-eq v1 v2))
(println '= n (= v1 v2))
(cc/quick-bench (= v1 v2)))
(doseq [n [1 2 4 8 16 32 64]
:let [v1 (vec (range n))
v2 (list* (range n))]]
(println 'iter-eq n (iter-eq v1 v2))
(cc/quick-bench (iter-eq v1 v2))
(println 'vec-eq n (vec-eq v1 v2))
(cc/quick-bench (vec-eq v1 v2))
(println '= n (= v1 v2))
(cc/quick-bench (= v1 v2)))
(doseq [n [1 2 4 8 16 32 64]
:let [m1 (zipmap (range n) (range n))
m2 (zipmap (range n) (range n))]]
(cc/quick-bench (map-eq m1 m2))
(cc/quick-bench (= m1 m2)))
Also checked the following cases:
(doseq [n [10000 100000]
:let [v1 (vec (range n))
v2 (assoc v1 (dec (count v1)) 7)]]
(cc/quick-bench (vec-eq v1 v2))
(cc/quick-bench (iter-eq v1 v2))
(cc/quick-bench (= v1 v2)))
(doseq [n [100000]
:let [m1 (zipmap (range n) (range n))
m2 (assoc m1 (key (last m1)) 7)]]
(cc/quick-bench (map-eq m1 m2))
(cc/quick-bench (= m1 m2)))
Optimized implementations still win by huge margins
small update - wrote a bit of Java, used test.check to generate maps, here are some results:
| size | seed | time before (us) | time after (us) | improvement |
| 10 | 0 | 0.7821998686829845 | 0.36678822554200413 | 2.1325654 |
| 44 | 1 | 4.330622612178792 | 2.103437417654809 | 2.0588312 |
| 31 | 2 | 3.0628944543188688 | 1.3886572837898974 | 2.2056518 |
| 21 | 3 | 2.028679128233322 | 0.9572009284455004 | 2.1193869 |
| 39 | 4 | 3.9265516612189715 | 1.8362321591272501 | 2.1383743 |
| 18 | 5 | 1.6854334183962798 | 0.8202897942521229 | 2.0546805 |
| 55 | 6 | 4.908545983501916 | 2.279236807427374 | 2.1535919 |
| 45 | 7 | 4.464427896621236 | 2.1081167721518987 | 2.1177327 |
| 6 | 8 | 0.3864066521455632 | 0.1928088585042629 | 2.0040918 |
| 26 | 9 | 2.7114264338699283 | 1.3179156998000194 | 2.0573595 |
| 86 | 10 | 8.879776767221973 | 4.380430951657479 | 2.0271468 |
| 16 | 11 | 1.448846888824073 | 0.6990313285286198 | 2.0726494 |
| 86 | 12 | 8.340080118652248 | 3.922289043010332 | 2.1263298 |
| 82 | 13 | 8.249968350056667 | 4.000736723253899 | 2.0621123 |
| 90 | 14 | 9.004991020408164 | 4.293898687932677 | 2.0971596 |
| 18 | 15 | 1.8062551014332244 | 0.8815394179030271 | 2.0489783 |
| 65 | 16 | 6.491169509571479 | 3.130686928716269 | 2.0734010 |
| 1 | 17 | 0.1196704726877019 | 0.07041214138259107 | 1.6995716 |
| 12 | 18 | 1.1530046459080272 | 0.6082699042686944 | 1.8955477 |
| 79 | 19 | 7.466010735312539 | 3.3860477035184937 | 2.2049337 |
Implementations are a specialization of equiv:
private boolean associativeEquiv(Associative m) {
for(int i=0;i < array.length;i+=2)
Object k = array[i];
IMapEntry e = m.entryAt(k);
if (e == null)
return false;
if(!Util.equiv(array[i+1], e.val()))
return false;
return true;
private static Object SENTINEL = new Object();
private boolean mapEquiv(Map m) {
for(int i=0;i < array.length;i+=2)
Object k = array[i];
Object v = m.getOrDefault(k, SENTINEL);
if (SENTINEL == v)
return false;
if(!Util.equiv(array[i+1], v))
return false;
return true;
public boolean equiv(Object obj){
if(!(obj instanceof Map))
return false;
if(obj instanceof IPersistentMap && !(obj instanceof MapEquivalence))
return false;
Map m = (Map) obj;
if(m.size() != size())
return false;
if (m instanceof Associative)
return associativeEquiv((Associative) m);
return mapEquiv(m);
I didn't verify your results but your benchmarks are rather limited in scope and only tests basically tiny collection sizes. How do things look if you get to 1.000, 10.000, 100.000, etc items?
I'd suspect that things will look drastically different if you compare things that actually take advantage of "structural sharing". For example create one vector and update the last element and then
compare? That should be worst case for your implementation but quite fine for the current. Same for maps.
That being said the optimized "reduce" implementations are rather new so they might be more efficient than older stuff in some places. Just make sure to verify more scenarios before coming to a
Sadly, if you look at the implementation of equiv you'll see it makes no use of structural sharing to short circuit, which is another possible optimization.
Like I mentioned in the original post, vector equality, for example, is implemented by calling `nth()` at most N times.
Checked now for 1e4 elements, vec-eq is about 20% faster, while iter-eq is up to 60% faster.
Similar results for 1e5 elements
For 1e5 map elements map-eq takes only 40% of the time `=` takes, where `m2 <- (assoc m1 (key (last m1)) 7)`
I take it as read that those considering this issue are familiar with how equiv() is implemented. I elided count checks just as I have elided instanceof checks. These are merely proofs of concepts
that there is lots of performance lying on the floor and it should be considered.
Application need? Using collections as keys or set members has BAD performance.
I'm saying the following with the tone of someone who is welcoming to optimizations:
In the past, it's been more productive to motivate optimizations starting from the application demand or problem statement side, not from the impl side. Perhaps we find a 2x improvement in equality,
but it affects 0.1% of the total runtime of a real world application. In that case even a 10x improvement isn't worth investment. (Reviewing is a huge commitment; Fogus, Alex, and Rich spend a ton of
time bringing rigor to tickets)
Is it true that using collections as keys or set members automatically makes an application slow? In that reality, optimization would be compelling, but I suspect it's not the case. Around 1.6, the
hash impl changed because of performance problems suffered in a real world application. My general advice as someone who has committed performance improvements to Clojure over the years is: stay
connected to the problem/application, and to start from that framing.
Don't "elide" correctness -- benchmarks would be invalid. Use generative tests to guide compatibility/correctness checks.
That being said, I'm not saying that there are not some potential improvements. Moving to reduce-based or iterator-based paths has been historically very helpful. But be open-minded to the
possibility that it may not matter to an application.
This makes sense.
As far as applications which could benefit from this improvement, I'd say rule engines and core.logic are at the top of the list.
odoyle https://github.com/oakes/odoyle-rules/blob/master/src/odoyle/rules.cljc#L377
- activation-group-fn: https://github.com/cerner/clara-rules/blob/main/src/main/clojure/clara/rules/engine.cljc#L2113
Used to produce activation group as a key here: https://github.com/cerner/clara-rules/blob/main/src/main/clojure/clara/rules/memory.cljc
core.logic: indexing relations where lvars can be collections https://github.com/clojure/core.logic/blob/master/src/main/clojure/clojure/core/logic/pldb.clj
Out of these I only profiled odoyle. While it still has room for optimization, it spends about ~10% in pcequiv()
The reason I elided a "full" solution is mainly time. I think this is indicative enough of big possible improvement. Then I present it to the core team expecting one of three possible responses:
1. Nice findings, but don't bother for now.
2. Go ahead, send a full patch with comprehensive benchmarks
3. We'll do it ourselves
I don't mind any of these responses, but like you said, reviewing is a huge commitment and working on it won't be a trivial task for me either. I don't want to put in significant effort on a patch
that might just sit in the backlog for a long time because the core team has its hands full and the issue has low priority.
I'll gladly work on a full patch with performance test matrix if there's interest
I think there would be interest in equality optimizations. I also think it's challenging to retain the current genericity (a synonym for "makes very few assumptions") and also taking into account
that we're implementing this over types we control, and closed types we don't (Java stuff), and open types we don't (external Clojure colls). It's common for genericity to fight with concrete
optimizations like this and "challenging" doesn't mean don't do it of course. :) I don't think you can really engage with the problem though without getting into the real impls where you can see
where the selection of strategy affects the perf too, particularly on small colls.
As Ghadi said, it's helpful to know how impactful such a change could be for real stuff to judge its priority. When I research stuff like this, I usually hack up Clojure to collect distributions on
call sites and then run stuff to see how frequently = is called and on what types/sizes in distribution. Seems like you've done a little of that, more would be useful.
You've suggested some impl rewrites and these seem like intuitively good options if you know things, but I suspect there are a variety of choices depending on what assumptions you can make (specific
concrete type, reducible, iterable, seqable, etc). We would usually try to start enumerating some of those. | {"url":"https://ask.clojure.org/index.php/11124/persistent-collections-implement-equiv-more-efficiently?show=13071","timestamp":"2024-11-12T06:32:56Z","content_type":"text/html","content_length":"66297","record_id":"<urn:uuid:94556237-827b-4958-aab7-a9506be1fcb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00159.warc.gz"} |
Email: support@assignmentsweb.com
UK:+1-585-535-1023 , US :+44- 208-133 -5697
Aus : +61- 280-07- 5697
Mathematics Assignment Help
(Are you facing trouble solving your mathematics assignments? Not to worry, students just need to mail us your mathematics assignment at support@assignmentsweb.com and we will provide you with the
accurate mathematics solutions within the mutually discussed time and that too, at the most affordable rates.
Mathematics is the abstract study of subjects encompassing quantity, structure, space, change and more, it has no generally accepted definition- Wikipedia
We understand that Mathematics is one of the subjects where students face the major problem. There are many students who score good grades in other subjects but not in this subject, which makes the
total grade to fall and then, that student is labeled as a weak student.) (Mostly students get so panic whenever they come across the Mathematics Assignments. Whenever there is Mathematics exam or
any Mathematics assignment to be given in a day or two, mostly the students gets so anxious. They feel so pressurized and frustrated. Sometimes the peers build that hoo-ha environment about this
subject. It is very important for the students to understand that this is one of the subjects where the student can score 100% marks. We understand that deciphering a particular math problem is very
painstaking process and it requires lot of calculations. Sometimes, the math question is written in such a manner such that it is difficult for the student to even comprehend the Mathematics
(Our online education service center provides mathematics assignment help to the students of all the grades, graduate students, and post-graduate students and above that also. We have the dexterous
mathematics experts with us who will solve your mathematics assignment in step-by-step manner so that the students can inculcate the required mathematics skills. Now, you can complete the most
difficult mathematics assignments also very easily and understand the concepts in a better and simpler way with us. The students can practice more questions now from their examination point of view.
Mostly there are lots of questions at the back of every chapter which we leave as they are difficult and quite confusing and we do not want to waste our time in solving it.) (Also, we come across
many questions in different books and guides which are not solved or half-solved. Now, you can trust us and our online math experts will solve all your math’s questions. This way you will learn
mathematics concepts and will be able to practice more questions in the more formidable way and you will be able to use your time in the most proper way. This will help you in scoring good grade so
that you can come out of the examination hall with flying colors. Our objective is to inculcate a habit of learning and understanding the math problems in a better way so that the students can solve
the math questions and do not find Mathematics as Herculean task.)
(We have best online mathematics experts with us for all level of mathematics: basic mathematics, medium level of mathematics and the most advanced mathematics. All our online mathematics experts
hold degree of masters and they have good work experience in this field and they can solve any kind of mathematics problem.)
Mathematics Assignment Help for School and College Students is now available at assignmentsweb.com. Mathematics is a subject which deals with numbers and values. There are several formulas and
theorems in the subject which are necessary to understand before solving any problem. College Math Assignment Help is searched by various students who are not satisfied with their teachers and their
teaching ways. We are an educational website which solves your queries without much wastage of time. We are not among those which charge heavy amounts and provide services which are of low level. We
are a reputed company with professional services; our team of expert members provides help to those students who seek for such kind of services through professionals.
Mathematics Homework Help Online is available these days through our website. We provide online service because we prefer that our students should not waste any time in search of tutors and then go
there to learn some topics by wasting their lots of time. We provide them online service at the place where they are sitting and any time whenever they required the same. The biggest benefit of these
services is, irrespective of time, these services are available all the time. Homework help is required by those who are weak in mathematics and are not able to solve the queries by their own. Our
experts help them by providing Math Homework Help Online which is the easiest and safest way to get help. Read More
Experienced Mathematics Assignment Help Tutors:
Mathematics Assignment Help Tutors are experienced and trained in their subject; they are able to provide the desired result to student. For all the topics of mathematics like: algebra, coordinate
geometry, trigonometry, linear equation etc. are fully solved by our tutors. They are experienced that is why they provide quality help to users. The way they write an assignment is amazing; Math
Homework Tutor Online is must if you really want to score high.
Mathematics Writing Assignment provides following benefits:
• Mathematics Writing Assignment provides good scores to students.
• College Mathematic Assignment help is available for 24 x7 so that students can get the help any time they need it.
• Error free University Mathematics Assignment help is provided by expert professionals at reasonable cost.
• You can Solve Mathematics Assignment for all the topics and sub topics.
This is one of the best ways to secure good marks; each and every student of modern days is focusing on such services in order to improve their results.
Topics we cover:
Arithmetic, Algebra, Geometry, Calculus, Real Numbers, Complex Numbers, Natural Numbers, Integers, Number Theory, Abstract Algebra, Order Theory, Geometry, Trigonometry, Calculus, Set Theory,
Probability, Game Theory, Complex Analysis, Integral Calculus, Differential Calculus, Limits, Continuity, Derivatives, Calculus, Discreet Math, Applied Math, etc.
(We are in this business of providing assignment help from past 7 years and so far, we have helped many students in their mathematics assignments successfully. You will get the top quality accurate
answers within the mutually discussed timeline. We are available for our students 24/7. Our charges are very reasonable and we assure you that you will get the best value of your money. Our online
assignment help services can be availed from any part of the world. Mostly, the assignment queries come from US, UK, Europe, Australia, etc.)
(We, at our online assignment help service center use the ASAP formula which means:
1. Affordability
2. Plagiarism-free Solutions
3. Availability 24/7
4. Professionalism
So, to get started, the students can send us their mathematics assignment at support@assignmentsweb.com or fill in the form provided at this page or call us at 585-535-1023. Also, send us the time
deadline with the mathematics assignment, mentioning the date and time (Kindly specify the Time Zone: GMT, PST, CST, EST, etc). If there is any specific format requirement (Word, PDF, excel, etc),
then let us know accordingly. We at our online education center provide free consultation to all the students before starting with the work. Read More
If you have any queries and you want to enquire about the same before submitting your mathematics assignment help to us, then kindly speak with our live tutors by just clicking here | {"url":"https://www.assignmentsweb.com/Mathematics_Assignment_Help.html","timestamp":"2024-11-08T15:17:17Z","content_type":"text/html","content_length":"24559","record_id":"<urn:uuid:e4577ae7-fa2b-4aa4-a3cc-2ba528434c8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00610.warc.gz"} |
In Pursuit of the Unknown
Math enthusiasts and history lovers
This book tells us fun stories about the math equations we hear from time to time. Do you know what math equations enable the invention of skyscrapers, high-speed trains, and flash drives? Read on!
We will learn the practical achievements of 17 math equations including the Pythagorean theorem, logarithms, calculus, Newton's Law of gravity, complex numbers, Euler's formula for polyhedra, normal
distribution, wave equation, Fourier transformation, Navier-Stokes equation, Maxwell's equations, Second Law of Thermodynamics, Relativity, Schrodinger's equation, information theory, chaos theory,
and Black-Scholes equation.
When I was in high school, I benefited greatly from math teachers who were passionate about their craft (shout out to you, Mr. Lieblang and Mr. Garrett!). Their enthusiasm prompted me to continue to
dabble in mathematics in college, where I advanced up to multivariable calculus. I remember learning to perform matrix multiplication in a large lecture hall, but also remember not seeing much
utility in studying increasingly abstract concepts. I should have read Ian Stewart’s In Pursuit of the Unknown: 17 Equations that Changed the World.
"..the struggles of each chapter's mathematician takes on an urgency befitting an Oscar-worthy Hollywood movie."
The book reads like a fun history lesson, and Mr. Stewart brings to life the personalities behind such storied names like Albert Einstein and Erwin Schrodinger. He also introduced fascinating
historical figures like Girolamo Cardano, the gambling scholar, and George Berkeley, the Bishop of Cloyne who intellectually sparred with Isaac Newton. With Mr. Stewart, there is historical context
to math, and the struggles of each chapter’s mathematician takes on an urgency befitting an Oscar-worthy Hollywood movie. And in every page Mr. Stewart emphasizes that these equations, far from being
a historical curiosity, have meaningful impact on our everyday modern lives, whether we love math or not.
"The opportunities of our modern world are fundamentally indebted to these equations."
The book itself requires no formal math skills to enjoy. The early chapters acted as a refresher for concepts I hadn’t used in over a decade, whereas the mathematical concepts of the later chapters
easily went over my head, even as I understood thematically what the equations have potential to achieve. Herein lies Mr. Stewart’s brilliance. Like my high school teachers, he takes an abstract
topic and is able to breathe into it a vitality that reveals just how inspiring math can be. The opportunities of our modern world are fundamentally indebted to these equations. My appreciation for
these opportunities are thanks to men like Mr. Lieblang, Mr. Garrett, and Mr. Stewart. | {"url":"https://www.thebusyreader.com/single-post/in-pursuit-of-the-unknown","timestamp":"2024-11-14T04:31:38Z","content_type":"text/html","content_length":"882060","record_id":"<urn:uuid:fac8916a-a6f3-434a-81b6-f9923780ebeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00891.warc.gz"} |
Milk Production Prediction For Next Year Using LSTM - With Source Code - Easiest Explanation - 2024 - Machine Learning Projects
Milk Production prediction for next year using LSTM – with source code – easiest explanation – 2024
So guys in today’s blog we will implement the Milk Production prediction for the next year with the previous 13-year milk production data. We will use LSTM for this project because of the fact that
the data is Sequential. So without any further due, Let’s do it…
Step 1 – Importing required libraries.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Activation,Dense,Dropout
%matplotlib inline
Step 2 – Read the input data.
df = pd.read_csv('monthly-milk-production.csv',index_col='Month')
df.index = pd.to_datetime(df.index)
Step 3 – Plotting data.
Step 4 – Scaling Data.
scaler = MinMaxScaler()
array = []
train_data = []
train_labels = []
for i in range(len(df)):
array.append(df.iloc[i]['Milk Production'])
array = np.array(array).reshape(-1,1)
array = scaler.fit_transform(array)
• Using MinMaxScaler here to bring our data in the 0-1 range.
• Then just reshaping it to make just one column and n no. of rows where n represents the no. of elements in the array.
Step 5 – Creating training data.
k = 0
for i in range(len(array)):
train_data = np.squeeze(train_data)
train_labels = np.array(train_labels)
• Here we are just creating training data.
• train data will have the first array as the first 12 points of the array and its corresponding train label will be the 13th entry in the array.
• In this way, we will have 14 arrays of 12 points each.
• But the training label will have only 13 points that’s why we will also take only 13 arrays in train data.
Step 6 – Getting train data in shape.
train_data = train_data[:len(train_labels)]
train_data = np.expand_dims(train_data,1)
• Taking the first 13 arrays in train data.
• Then expanding dimensions in 1st dimension. We are doing this step because Keras wants the data in this format.
Step 7 – Checking train data and train labels.
• This is the first array in train_data. It shows the milk production of 12 months.
• And its corresponding train label will be the milk production of the 13th month, which means the first month of next year.
• We will train our data on 12 months’ data and ask our model to predict the production of the 13th month (1st month of next year).
Step 8 – Creating a model.
model = Sequential()
Step 9 – Training model.
E = 1000
H = model.fit(train_data,train_labels,epochs=E)
Step 10 – Plotting loss curve for Milk Production prediction model.
epochs = range(0,E)
loss = H.history['loss']
Step 11 – Checking if our Milk Production prediction model is overfitting or not.
preds = scaler.inverse_transform(model.predict(train_data))
plt.plot(range(0,13),preds,label='our predictions')
plt.plot(range(0,13),scaler.inverse_transform(train_labels),label='real values')
Step 12 – Create a seed for next year’s Milk Production prediction.
seed = array[-12:]
• Here we are creating a seed which means the 12 last data points in the array. Suppose this is the milk production of 12 months of 1975.
• Now we will ask our model to predict the production for Jan 1976.
• When it will predict then we will update our seed.
• And now our seed will be Feb 1975 – Jan 1976 and we will ask our model to predict for Feb 2022.
• And in this way, we will predict for full 1976.
• This is all done below.
Step 13 – Next year’s Milk Production prediction.
for _ in range(12):
curr_12_months = seed[-12:]
curr_12_months = np.squeeze(curr_12_months)
curr_12_months = np.expand_dims(curr_12_months,0)
curr_12_months = np.expand_dims(curr_12_months,0)
pred = model.predict(curr_12_months)
seed = np.append(seed,pred)
• This step is explained above.
Step 14 – Plotting next year’s Milk Production prediction.
next_year_prediction = scaler.inverse_transform(seed[-12:].reshape(-1,1))
• This is the production of 1976.
Do let me know if there’s any query regarding the Milk Production prediction by contacting me on email or LinkedIn. You can also comment down below for any queries.
So this is all for this blog folks, thanks for reading it and I hope you are taking something with you after reading this and till the next time ?…
Read my previous post: AI LEARNS TO PLAY FLAPPY BIRD GAME
Check out my other machine learning projects, deep learning projects, computer vision projects, NLP projects, Flask projects at machinelearningprojects.net. | {"url":"https://machinelearningprojects.net/milk-production-prediction/","timestamp":"2024-11-14T23:44:39Z","content_type":"text/html","content_length":"212533","record_id":"<urn:uuid:29c6f46f-a77b-42e9-a8b8-2382d0c55781>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00841.warc.gz"} |
Nesta van der Schaaf
PhD student in the Laboratory for Foundations of Computer Science at the University of Edinburgh with Chris Heunen. I'm broadly interested in mathematical physics, and currently studying causality
and spacetimes using point-free topology. Also interested in category theory, foundations of (quantum) physics, and diffeology.
Contact me: <n.schaaf@ed.ac.uk>
• Ordered Locales
(with Chris Heunen)
Online in JPAA, 7 March 2024.
• Axioms for the category of Hilbert spaces and linear contractions
(with Chris Heunen and Andre Kornell)
Online in BLMS, 24 February 2024.
• Diffeological Morita Equivalence
Cahiers de Topologie et Géométrie Différentielle Catégoriques LXII.2 (2021), pp. 177-238.
To study causality in point-free topology, we use an abstraction of the Egli-Milner relation on open regions (see figure). In Ordered Locales we prove that this generalises Stone Duality to include
causal orderings, hence providing a framework for point-free causal spaces. Within this framework we are developing a point-free causal boundary construction.
• Towards Point-Free Spacetimes
PhD thesis supervised by Chris Heunen, 10 May 2024.
• Diffeologogy, Groupoids & Morita Equivalence
MSc thesis supervised by Klaas Landsman, June 2020. Main result in Cahiers de Topologie et Géométrie Différentielle Catégoriques LXII.2 (2021), pp. 177-238.
• Classical and Quantum Particles in Galilean and Poincaré Spacetime
BSc thesis supervised by Klaas Landsman, August 2017.
• Tutor Adjoint School 2023
Joint research project on concurrency in monoidal categories for the Adjoint School, part of ACT 2023.
• Teaching assistant UoE, 2021-2022
Proofs and Problem Solving, Introduction to Linear Algebra, Differentiable Manifolds (MSc).
• Teaching assistant Radboud University, 2018-2020
Introduction to Mathematics, Sep. 2018 - Nov. 2018
Topology, Jan. 2019 - Jun. 2019
Introduction to Mathematics, Sep. 2019 - Nov. 2019
Continuous Matrix Groups, Jan. 2020 - Jul. 2020.
• PhD Informatics, University of Edinburgh, 2020-2024
Supervised by Chris Heunen.
• MSc Mathematics, Radboud University, 2017-2020
• BSc Physics and Astronomy, Radboud University, 2013-2017
• HBO Propedeuse Engineering Physics, Fontys Eindhoven, 2012-2013
• HAVO High School, De Werkplaats Kindergemeenschap, 2007-2012
• Primary School, OMS Den Dolder, 1999-2007
• Dutch Native
• English Fluent, Cambridge ESOL First Certificate (C1), June 2011
• Urdu (اُردُو) Elementary
• Morita Equivalence and C*-correspondences
Literature study. Proves the characterisation of Morita equivalence of C*-algebras in terms of invertible bimodules, 2018-2020.
• Twisted Group C*-algebras and Projective Unitary Representations
Final project for a course on C*-algebras. Gives an overview of the characterisation of projective unitary representations of locally compact Hausdorff groups in terms of twisted group
C*-algebras, January 2018. | {"url":"https://sites.google.com/view/schaaf/","timestamp":"2024-11-02T02:31:51Z","content_type":"text/html","content_length":"166550","record_id":"<urn:uuid:cd87d1d8-94ee-4161-8e2e-d78bd36a696d>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00400.warc.gz"} |
VLOOKUP with numbers and text in Excel November 12, 2024 - Excel Office
VLOOKUP with numbers and text in Excel
This tutorial shows how to calculate VLOOKUP with numbers and text in Excel using the example below;
A common problem with VLOOKUP is a mismatch between numbers and text. Either the first column in the table contains lookup values that are numbers stored as text, or the table contains numbers, but
the lookup value itself is a number stored as text.
In either case, VLOOKUP will return an #N/A error, even when there appears to be a match. In the example below, each planet has an id based on position from the sun. In cell H3 we have a simple
VLOOKUP formula looking for the number 3 from cell H2. The result is the #N/A error, even though 3 is clearly in the table.
One solution is to convert both the first column in the table and the lookup value to the same type: either numbers or text. However, if you don’t have control over both the table and the lookup
value, or if it’s simply not practical to convert values, you can modify the VLOOKUP formula to coerce the lookup value to match the type in the table. In this case, we can revise the VLOOKUP formula
to concatenate an empty string to the lookup value, which converts the lookup value to text:
=VLOOKUP(id,planets,2,0) // original
=VLOOKUP(id&"",planets,2,0) // revised
In the worksheet, the revised formula takes care of the error:
How this formula works
When you concatenate an empty string (“”) to a number, it converts the number to text. You could also do the same thing with a longer formula that utilizes the TEXT function to convert to text :
If you have both numbers and text
If you can’t be certain when you’ll have numbers and when you’ll have text, you can cater to both options by wrapping VLOOKUP in IFERROR and writing a formula that handles both cases:
Here, we first try a straight VLOOKUP formula that assumes both lookup value and the first column in the tables are numbers. If that throws and error, we try again with the revised formula. If that
fails, VLOOKUP will return the #N/A error. | {"url":"https://www.xlsoffice.com/others/vlookup-with-numbers-and-text-in-excel/","timestamp":"2024-11-12T06:01:16Z","content_type":"text/html","content_length":"65652","record_id":"<urn:uuid:28b04c87-65cb-4c0e-b5c1-83a18775b194>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00419.warc.gz"} |
Gravitational Field Theory
Gravitational Field Theory
Einstein re-introduced the field concepts of Faraday, James Clark Maxwell, and Sir Joseph John Thomson to the equations of his general theory of relativity. From his efforts, the term “field” became
popularized and began to be a catchword. It was variously described as classical scalar field, gravitational fields, electric field, magnetic field, electromagnetic field, or energy field, to name a
In deference to the Newtonian view, Einstein argued that everything in the Cosmos are intimately connected and that time and space stand on an equal footing. According to him, time represents another
dimension, now known as the special relativity theory.
In addition, Einstein viewed the force of gravity as simply the resulting force of the presence of the object’s weights, sizes, and mass (Brian Greene, 2011). effects of the while ty (Lewis H. Ryder,
1996). Thus, space, time, and gravity became embedded in the fabric of the Cosmos.
This new conception became known as the Gravitational Field theory. | {"url":"https://www.pauldejillas.com/our-cosmos/the-cosmic-energy-field/gravitational-field-theory/","timestamp":"2024-11-10T22:17:57Z","content_type":"text/html","content_length":"19681","record_id":"<urn:uuid:343f36f3-9030-4f30-8219-23f6e2f544b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00825.warc.gz"} |
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to
order $p$ (for $p\geq 1$) and to assume Lipschitz continuity of the $p$-th derivative, then an $\epsilon$-approximate first-order critical point can be computed in at most $O(\epsilon^{-(p+1)/p})$
evaluations of the problem's objective function and its derivatives. This generalizes and subsumes results known for $p=1$ and $p=2$.
Report naXys-05-2015, University of Namur, Namur, Belgium
View Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models | {"url":"https://optimization-online.org/2015/06/4970/","timestamp":"2024-11-10T08:07:53Z","content_type":"text/html","content_length":"84258","record_id":"<urn:uuid:2e24bfa2-5db0-4f55-82ec-9ec592ec9d47>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00247.warc.gz"} |
Tamara Broderick: Toward a taxonomy of trust for probabilistic data analysis
In Spring 2024, I gave a tutorial "Toward a taxonomy of trust for probabilistic data analysis." The full materials and errata are on this page.
Abstract: Probabilistic data analysis increasingly informs critical decisions in medicine, economics, education, and beyond. A major concern is generalization: if we conclude that an economic or
health intervention helps people based on a data analysis, we hope that it will indeed help people when deployed in the future. We might be concerned about generalization, though, if two analysts
could come to different conclusions when trying to answer the same question with data. In this talk, we discuss how such a discrepancy could happen to two well-meaning data analysts, who aren't being
targeted by adversaries and who are using standard data analysis tools. In particular, we examine potential challenges and mitigations at multiple steps of a data analysis: (i) in the collection of
data, (ii) in the translation of abstract goals on the data to a concrete mathematical problem, (iii) in the use of an algorithm to solve the stated mathematical problem, and (iv) in the use of a
particular code implementation of the chosen algorithm.
• The video of this talk can be found here: [youtube]
• The slides from the talk can be found here: [slides]
• Note that the final slides include a full reference list for the talk, including references for the images near the start.
• After recording, I found some typos in citations (e.g. a missing "et al" and off-by-one year). Those have been corrected in the slides you can find on this page.
Accessibility Plain Academic | {"url":"https://tamarabroderick.com/tutorial_2024_taxonomy_of_trust.html","timestamp":"2024-11-02T10:53:52Z","content_type":"text/html","content_length":"6232","record_id":"<urn:uuid:f26c3698-1c20-4ad4-ad27-dcf3ebc7c56e>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00050.warc.gz"} |
Logarithmic Differentiation - Solved Example Problems| Mathematics
Logarithmic Differentiation
By using the rules for differentiation and the table of derivatives of the basic elementary functions, we can now find automatically the derivatives of any elementary function, except for one type,
the simplest representative of which is the function y = xx. Such functions are described as power-exponential and include, in general, any function written as a power whose base and index both
depend on the independent variable.
In order to find by the general rules the derivative of the power-exponential function y = xx, we take logarithms on both sides to get
log y = x log x, x > 0
Since this is an identity, the derivative of the left-hand side must be equal to the derivative of the right, we obtain by differentiating with respect to x (keeping in mind the fact that the left
hand side is a function of function) :
The operation consists of first taking the logarithm of the function f(x) (to base e) then differentiating is called logarithmic differentiation and its result
is called the logarithmic derivative of f(x).
The advantage in this method is that the calculation of derivatives of complicated functions involving products, quotients or powers can often be simplified by taking logarithms.
Steps in Logarithmic Differentiation
(1) Take natural logarithm on both sides of an equation y = f(x) and use the law of logarithms to simplify.
(2) Differentiate implicitly with respect to x.
(3) Solve the resulting equation for y′ .
In general there are four cases for exponents and bases. | {"url":"https://www.brainkart.com/article/Logarithmic-Differentiation_36111/","timestamp":"2024-11-11T08:09:47Z","content_type":"text/html","content_length":"45493","record_id":"<urn:uuid:1774131b-34f9-4a11-afeb-f31c5409a61f>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00459.warc.gz"} |
Data structure comparison: Set VS Map
Last update: 06-20-2024
Understanding Set and Map Data Structures Through Hash Tables
In modern programming languages, two fundamental data structures are widely implemented: the Set and the Map (also known as a Dictionary). Both play critical roles in data management and organization
but serve different purposes and are implemented in subtly different ways.
What are Set and Map Used For?
• Set: This data structure is used to store a collection of unique elements. It does not allow duplicates, making it ideal for situations where you need to ensure no repeated values, such as in
user ID collections or to remove duplicates from data efficiently.
• Map (or Dictionary): A Map stores pairs of keys and values where each key must be unique. This structure is perfect for cases where items need to be retrieved quickly by a unique identifier, like
a product code or a person’s ID number.
How are Set and Map Implemented?
Both Set and Map can be effectively implemented using hash tables, a basic yet powerful concept in computer science.
Hash Table:
A hash table stores elements in an array format. To add an element X to the hash table, a hash function is applied to X to determine the index in the array where X should be placed. The hash function
is designed to evenly distribute entries across the array to minimize collisions (where two elements are hashed to the same index).
Implementation Details of Set and Map Using Hash Tables
Set Implementation:
• Inserting an Element: When adding a new element, the hash function is used to find the appropriate index. If the index is already occupied (a collision), strategies like linear probing or
chaining are used to find an available space.
• Existence Check: To determine if an element is present in the Set, compute its hash and check the corresponding index in the array.
Map Implementation:
• Inserting Key-Value Pairs: To add a new pair, apply the hash function to the key to find the array index. If a pair with the same key exists at that index, the value is updated. If there’s a
collision with a different key, the same collision resolution strategies as used in Set are applied.
• Searching and Accessing Values: To find a value, hash the key to locate the correct index and perform a search at that position.
How Does the Implementation Affect Runtime?
Operations such as insertion, searching, and deletion in a hash table typically have an average time complexity of O(1), thanks to the direct index access provided by hashing. However, poor hash
function design or a high number of collisions can degrade performance, potentially to O(n) in the worst-case scenario.
Similarities and Differences
While both Set and Map use hash tables and share similar implementation techniques, they serve different purposes. The Set is solely focused on unique elements, making it simpler in terms of data
handling. The Map, on the other hand, manages pairs and involves more complexity due to the necessity of handling both keys and values. | {"url":"https://absprog.com/post/set-vs-map","timestamp":"2024-11-13T01:34:39Z","content_type":"text/html","content_length":"7084","record_id":"<urn:uuid:7c156138-d2e8-435c-a91e-4419723b22a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00386.warc.gz"} |
Arranging Numbers Ascending Order Activity 5 | Smallest to Largest
Arranging Numbers Ascending Order Activity 5
Arranging Numbers Ascending Order Activity 5: Arrange the given numbers in ascending order or from the smallest to the largest.
Try Year 2 Ordering Numbers Worksheets! Try all Ordering Numbers Worksheets!
Arranging Numbers Ascending Order Activity 5
Do you know what ‘Ascending Order’ means? We say the numbers are in ascending order, when the numbers are arranged from the smallest to the largest. In this way we place the numbers in an increasing
order. For example, look at these numbers; 49, 23, 107, 95 and 77. How to arrange these numbers in ascending order? First pick up the smallest out of all the given numbers. Then the second smallest,
and so on until the largest. Likewise, in the given series of numbers 23, 49, 77, 95 and 107 are arranged in ascending order.
Do you know what ‘Descending Order’ means? We say the numbers are in descending order, when the numbers are arranged from the largest to the smallest. In this way we place the numbers in a decreasing
order. For example, look at these numbers; 49, 23, 107, 95 and 77. How to arrange these numbers in descending order? First pick up the largest out of all the given numbers. Then the second largest,
and so on until the smallest. Likewise, in the given series of numbers 107, 95, 77, 49 and 23 are arranged in descending order.
With this online worksheet you can improve your ability in ordering numbers in ascending order by arranging numbers from the smallest to the largest. Learn ascending order by arranging numbers from
least to greatest. Enjoy ordering numbers in ascending order!
{ "questionDesription":"Arranging Numbers Ascending Order Activity 5 - Write the numbers from smallest to biggest", "questions": [{"question": "658 659 660 661 662"},{"question": "679 680 681 682
683"},{"question": "147 148 149 150 151"},{"question": "240 241 242 243 244"},{"question": "920 921 922 923 924"}]}
Related Links | {"url":"https://k8schoollessons.com/arranging-numbers-ascending-order-activity-5/","timestamp":"2024-11-11T22:38:33Z","content_type":"text/html","content_length":"45665","record_id":"<urn:uuid:57cd518b-8b0d-444c-8b5b-226abd95200e>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00239.warc.gz"} |
Student Creations
Last year, my kids blew me away with
This year, I was a little more prepared for what we
be able to do with projectile motion. We spent quite a bit of time on vertical motion as part of our standard curriculum, but once we finished with our required standards, we turned our focus towards
trig ratios and applying them to motion problems. I built a few applets using GeoGebra to help my students visualize the motion and it sparked an end of year project that these kids are really proud
Abel, Matt and Robert
These kids were the first to figure out how to model the projectile. They used the rest of their time trying to dial in the effects. We couldn't figure out how to make the backgrounds of pictures
transparent, so they spent a bunch of time defining polygons to cover the white areas. The definitions were really tricky because they had to be defined in terms of the point that was being projected
otherwise the image would move but the polygon would remain static.
David, Jett and Sartaj
This group really spent some time dialing in their applet. In my opinion, it's probably the most aesthetically pleasing.
Sierra, Brandon H. and Brandon J.
The tricky part of this applet was in defining the condition to display the "Bullseye!" text. Since the center of the board is an ellipse (to establish a perspective) these students had to define
four points to represent the vertical and horizontal extremes of the ellipse. They then had to determine a set of inequalities which would describe when the point of the dart actually fell within the
range of those four points.
Marco, Brandon M. and Lazaro
The thing I really like about this applet is how careful they were with their facts. The fence height can change from 3' (Dodger Stadium left/right field) to 37' (Fenway Park's Green Monster). They
had to define many points in terms of other points in order to get the fence to be dynamic.
Fareen, Alec and Breanna
This group took this project by the horns, big time. They tackled two different motion problems in one. They have a projectile and the bird flies in a linear path defined by an angular velocity. They
ran into a snag because their scale was so large that the applet ran incredibly slow. So they spent some time tweaking the axes in order to end up with a really cool applet.
Hit the duck and you'll see their sense of humor--trust me.
Jodie, Abraham and Destin
This group had a HUGE vision for this project. They wanted the pitch to come in as a projectile and then leave the batter with a greater angle and greater velocity. The timing on this was difficult
at best. They managed to get two projectiles occuring at different times, but had to adjust the time slider to do so. There were times that this one stumped me. I really appreciated the challenge
they took on.
Creston, Mackay, Jared and Alex
Let's blow up a castle. What else can you say? This group really paid attention to detail. Heck, they even made the clouds move. Hit the castle and get a mushroom cloud. What's not to like about
Frankie, Alex and Dil
If you knew these guys, you'd see how appropriate a flying monkey is to their applet. Again, with the details. Determining the condition to show the final image took some time. How close does the
monkey have to get to the target in order for the launch to be a success? They mulled it over and drew some strong conclusions.
My Role
I asked a lot of questions. Direct instruction was necessary on things specific to GeoGebra like the coordinates of point B can be understood as (x(B),y(B)) but nearly all of the manipulation of the
equations was done by them. If a group got stuck on how to make the animation end, the standard line of questioning would go something like:
"What do you want the applet to look like when the animation ends?"
"In order to get that result are we more interested in the height of your projectile or the distance?"
"How can we describe the height of a projectile?" or "How can we describe the distance it's travelled?"
Once they were able determine which model they needed to use (vertical motion or linear motion), we'd set up the equation. A lot of them looked something like:
h = -16t^2 + v sin(α) t + s or d = v cos(α) t
and they'd play with it until they solved for t. Sometimes we'd have to think of the velocity in terms of something times t and go back to the original equations h = -16t^2 + vt + s or D = rt in
order for them to realize that v sin(α) and v cos(α) are just rates.
I don't think I've had more fun over a two week period in the classroom. Ever.
The best part was when the groups would finally export their applet to .html and then we'd go back to my desk where I showed them how to replace the current code with the animation code. The looks on
their faces when they saw something they had created actually do what it was supposed to do was priceless.
Yeah, we'll prolly do something like this again next year.
Tell 'em what you think in the comments.
(Note: I had planned on having students write their reflections and link to their projects on our class blog. However, due to a time crunch at the end, I've posted them all here. They'll be checking
this post for your feedback.)
6 comments:
I'm bookmarking this, so I can try to do this with my college students. I have to get better at Geogebra first.
Amazing projects. I love how personalized that they are and creative. Curious as to what grade these students are?
These kids are in 8th grade. They really put a lot if themselves into these projects.
Congratulations to everyone. These are fantastic! I'm also most impressed with the creativity. All of the applets are SO different (ok, there are 2 baseball ones, but it is the greatest game in
the world). With this applet work as a foundation, before we know it you guys will all be creating games like this and this.
Oh don't lie, Mr.Cox. You loved Alec's, Breanna's, and my applet ;) An yes, our humor does show in our applet. Our applet is not only math, but it truly reflects our sick humored, cynical selves!
Hi David. These are amazing applets. You may want to submit them in the next Mathematics and Multimedia blog carnival. | {"url":"https://coxmath.blogspot.com/2010/06/student-creations.html","timestamp":"2024-11-06T07:55:15Z","content_type":"text/html","content_length":"101527","record_id":"<urn:uuid:85f8c77a-30c2-49a5-8d5c-1953bd1037e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00200.warc.gz"} |
Algebraic Graph Theory
top of page
A graph is a mathematical abstraction of a network.
Graphs are used to describe complex system in almost every area of business, industry and technology. Similarly, a group is the mathematical abstraction of the symmetry of an object, and groups are
used in many branches of mathematics, as well as in physics, chemistry and networks.
Take for example these graphs:
Algebraic Graph Theory: About
Algebraic Graph Theory: TeamMember
There are various ways you could rotate these graphs and keep them looking the same. This is what we mean when we say that a graph is symmetric. This graph comes equipped with a group; the algebraic
object which measures the symmetry of the graph.
Studying the group action of a graph is an important method for studying both the group and the graph. That is, we investigate the group through its action on the graph, and we study the graph by the
properties of its group. This forms an important mathematical method and has produced various important mathematics results.
For example, it led to the proof of Weiss' Theorem that there exists no 8-arc-transitive graph, and the proof of Cameron-Praeger-Seitz-Saxl's theorem which tells us that there are only finitely many
distance-transitive graphs of a given valency.
On the other hand, Goldschmidt's study of groups acting on cubic graphs led to the creation of the amalgam method (an important method for characterising certain finite simple groups), and Fong and
Seitz's study of Moufang polygons (a special type of graph) led to the characterisation of rank-2 Lie type groups.
Algebraic Graph Theory: About
Our Centre is a world leading centre in algebraic graph theory, with world experts. The research work at the Centre covers important topics in algebraic graph theory, such as:
• Cayley graphs
• distance-transitive graphs
• s-arc-transitive graphs
• symmetric graphs
• half-arc-transitive graphs
• symmetrical factorisations of graphs
• symmetrical embeddings of graphs on Riemann surfaces.
Members of the Centre have made significant contributions to the area. They have solved various important long-standing open problems, established significant theories, developed powerful methods and
launched new research directions, including:
• Praeger's theory of 2-arc-transitive graphs, and later developed Giudic-Li-Praeger theory of locally-s-arc-transitive graphs, provide major methods for analysing edge-transitive graphs.
• The Li-Praeger theory of homogeneous factorisations of graphs launched a new research direction in combinatorics.
Algebraic Graph Theory: About
• The monograph of Cheryl Praeger (jointly with Martin E Liebeck and Jan Saxl) The maximal factorisations of the finite simple groups and their automorphism groups is an important reference in the
study of group theory and algebraic graph theory.
• Professor Royle's book (jointly with Godsil) Algebraic graph theory is now a standard reference for postgraduate students and researchers in the area.
Algebraic Graph Theory: TeamMember
Algebraic Graph Theory: About
bottom of page | {"url":"https://www.cmsc.io/graph-theory","timestamp":"2024-11-15T00:48:25Z","content_type":"text/html","content_length":"594788","record_id":"<urn:uuid:8c1857cc-e363-495f-a3fa-7120bf1144f9>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00332.warc.gz"} |
23.2 Faraday’s Law of Induction: Lenz’s Law
Chapter 23 Electromagnetic Induction, AC Circuits, and Electrical Technologies
23.2 Faraday’s Law of Induction: Lenz’s Law
• Calculate emf, current, and magnetic fields using Faraday’s Law.
• Explain the physical results of Lenz’s Law
Faraday’s and Lenz’s Law
Faraday’s experiments showed that the emf induced by a change in magnetic flux depends on only a few factors. First, emf is directly proportional to the change in flux [latex]{\Delta \phi}[/latex].
Second, emf is greatest when the change in time [latex]{\Delta t}[/latex] is smallest—that is, emf is inversely proportional to [latex]{\Delta t}[/latex]. Finally, if a coil has [latex]{N}[/latex]
turns, an emf will be produced that is [latex]{N}[/latex] times greater than for a single coil, so that emf is directly proportional to [latex]{N}[/latex]. The equation for the emf induced by a
change in magnetic flux is
[latex]{\text{emf} = -N}[/latex] [latex]{\frac{\Delta \phi}{\Delta t}}[/latex]
This relationship is known as Faraday’s law of induction. The units for emf are volts, as is usual.
The minus sign in Faraday’s law of induction is very important. The minus means that the emf creates a current I and magnetic field B that oppose the change in flux [latex]{\Delta \phi}[/latex] —this
is known as Lenz’s law. The direction (given by the minus sign) of the emf is so important that it is called Lenz’s law after the Russian Heinrich Lenz (1804–1865), who, like Faraday and Henry,
independently investigated aspects of induction. Faraday was aware of the direction, but Lenz stated it so clearly that he is credited for its discovery. (See Figure 1.)
Figure 1. (a) When this bar magnet is thrust into the coil, the strength of the magnetic field increases in the coil. The current induced in the coil creates another field, in the opposite direction
of the bar magnet’s to oppose the increase. This is one aspect of Lenz’s law—induction opposes any change in flux. (b) and (c) are two other situations. Verify for yourself that the direction of the
induced B[coil] shown indeed opposes the change in flux and that the current direction shown is consistent with RHR-2.
Problem-Solving Strategy for Lenz’s Law
To use Lenz’s law to determine the directions of the induced magnetic fields, currents, and emfs:
1. Make a sketch of the situation for use in visualizing and recording directions.
2. Determine the direction of the magnetic field B.
3. Determine whether the flux is increasing or decreasing.
4. Now determine the direction of the induced magnetic field B. It opposes the change in flux by adding or subtracting from the original field.
5. Use RHR-2 to determine the direction of the induced current I that is responsible for the induced magnetic field B.
6. The direction (or polarity) of the induced emf will now drive a current in this direction and can be represented as current emerging from the positive terminal of the emf and returning to its
negative terminal.
For practice, apply these steps to the situations shown in Figure 1 and to others that are part of the following text material.
Applications of Electromagnetic Induction
There are many applications of Faraday’s Law of induction, as we will explore in this chapter and others. At this juncture, let us mention several that have to do with data storage and magnetic
fields. A very important application has to do with audio and video recording tapes. A plastic tape, coated with iron oxide, moves past a recording head. This recording head is basically a round iron
ring about which is wrapped a coil of wire—an electromagnet (Figure 2). A signal in the form of a varying input current from a microphone or camera goes to the recording head. These signals (which
are a function of the signal amplitude and frequency) produce varying magnetic fields at the recording head. As the tape moves past the recording head, the magnetic field orientations of the iron
oxide molecules on the tape are changed thus recording the signal. In the playback mode, the magnetized tape is run past another head, similar in structure to the recording head. The different
magnetic field orientations of the iron oxide molecules on the tape induces an emf in the coil of wire in the playback head. This signal then is sent to a loudspeaker or video player.
Figure 2. Recording and playback heads used with audio and video magnetic tapes. (credit: Steve Jurvetson)
Similar principles apply to computer hard drives, except at a much faster rate. Here recordings are on a coated, spinning disk. Read heads historically were made to work on the principle of
induction. However, the input information is carried in digital rather than analog form – a series of 0’s or 1’s are written upon the spinning hard drive. Today, most hard drive readout devices do
not work on the principle of induction, but use a technique known as giant magnetoresistance. (The discovery that weak changes in a magnetic field in a thin film of iron and chromium could bring
about much larger changes in electrical resistance was one of the first large successes of nanotechnology.) Another application of induction is found on the magnetic stripe on the back of your
personal credit card as used at the grocery store or the ATM machine. This works on the same principle as the audio or video tape mentioned in the last paragraph in which a head reads personal
information from your card.
Another application of electromagnetic induction is when electrical signals need to be transmitted across a barrier. Consider the cochlear implant shown below. Sound is picked up by a microphone on
the outside of the skull and is used to set up a varying magnetic field. A current is induced in a receiver secured in the bone beneath the skin and transmitted to electrodes in the inner ear.
Electromagnetic induction can be used in other instances where electric signals need to be conveyed across various media.
Figure 3. Electromagnetic induction used in transmitting electric currents across mediums. The device on the baby’s head induces an electrical current in a receiver secured in the bone beneath the
skin. (credit: Bjorn Knetsch)
Another contemporary area of research in which electromagnetic induction is being successfully implemented (and with substantial potential) is transcranial magnetic simulation. A host of disorders,
including depression and hallucinations can be traced to irregular localized electrical activity in the brain. In transcranial magnetic stimulation, a rapidly varying and very localized magnetic
field is placed close to certain sites identified in the brain. Weak electric currents are induced in the identified sites and can result in recovery of electrical functioning in the brain tissue.
Sleep apnea (“the cessation of breath”) affects both adults and infants (especially premature babies and it may be a cause of sudden infant deaths [SID]). In such individuals, breath can stop
repeatedly during their sleep. A cessation of more than 20 seconds can be very dangerous. Stroke, heart failure, and tiredness are just some of the possible consequences for a person having sleep
apnea. The concern in infants is the stopping of breath for these longer times. One type of monitor to alert parents when a child is not breathing uses electromagnetic induction. A wire wrapped
around the infant’s chest has an alternating current running through it. The expansion and contraction of the infant’s chest as the infant breathes changes the area through the coil. A pickup coil
located nearby has an alternating current induced in it due to the changing magnetic field of the initial wire. If the child stops breathing, there will be a change in the induced current, and so a
parent can be alerted.
Making Connections: Conservation of Energy
Lenz’s law is a manifestation of the conservation of energy. The induced emf produces a current that opposes the change in flux, because a change in flux means a change in energy. Energy can enter or
leave, but not instantaneously. Lenz’s law is a consequence. As the change begins, the law says induction opposes and, thus, slows the change. In fact, if the induced emf were in the same direction
as the change in flux, there would be a positive feedback that would give us free energy from no apparent source—conservation of energy would be violated.
Example 1: Calculating Emf: How Great Is the Induced Emf?
Calculate the magnitude of the induced emf when the magnet in Figure 1(a) is thrust into the coil, given the following information: the single loop coil has a radius of 6.00 cm and the average value
of [latex]{B \;\text{cos} \;\theta}[/latex] (this is given, since the bar magnet’s field is complex) increases from 0.0500 T to 0.250 T in 0.100 s.
To find the magnitude of emf, we use Faraday’s law of induction as stated by [latex]{\text{emf} = -N \frac{\Delta \phi}{\Delta t}}[/latex], but without the minus sign that indicates direction:
[latex]{\text{emf} = N}[/latex] [latex]{\frac{\Delta \phi}{\Delta t}}[/latex]
We are given that [latex]{N = 1}[/latex] and [latex]{\Delta t=0.100 \;\text{s}}[/latex], but we must determine the change in flux [latex]{\Delta \phi}[/latex] before we can find emf. Since the area
of the loop is fixed, we see that
[latex]{\Delta \phi = \Delta (BA \;\text{cos} \theta) = A \Delta(B \;\text{cos} \;\theta)}[/latex]
Now [latex]{\Delta (B \;\text{cos} \;\theta) = 0.200 \;\text{T}}[/latex], since it was given that [latex]{B \;\text{cos} \;\theta}[/latex] changes from 0.0500 to 0.250 T. The area of the loop is
[latex]{A = \pi r^2 = (3.14 \dots)(0.060 \;\text{m})^2 = 1.13 \times 10^{-2} \;\text{m}^2}[/latex]. Thus,
[latex]{\Delta \phi = (1.13 \times 10^{-2} \;\text{m}^2)(0.200 \;\text{T})}.[/latex]
Entering the determined values into the expression for emf gives
[latex]{\text{Emf} =N}[/latex] [latex]{\frac{\Delta \phi}{\Delta t}}[/latex] [latex]{=}[/latex] [latex]{\frac{(1.13 \times 10^{-2} \;\text{m}^2)(0.200 \;\text{T})}{0.100 \;\text{s}}}[/latex] [latex]
{= 22.6 \; \text{mV}}[/latex]
While this is an easily measured voltage, it is certainly not large enough for most practical applications. More loops in the coil, a stronger magnet, and faster movement make induction the practical
source of voltages that it is.
PhET Explorations: Faraday’s Electromagnetic Lab
Play with a bar magnet and coils to learn about Faraday’s law. Move a bar magnet near one or two coils to make a light bulb glow. View the magnetic field lines. A meter shows the direction and
magnitude of the current. View the magnetic field lines or use a meter to show the direction and magnitude of the current. You can also play with electromagnets, generators and transformers!
Figure 4. Faraday’s Electromagnetic Lab
Section Summary
• Faraday’s law of induction states that the emfinduced by a change in magnetic flux is
[latex]{\text{emf = -N}}[/latex] [latex]{\frac{\Delta \phi}{\Delta t}}[/latex]
• when flux changes by [latex]{\Delta \phi}[/latex] in a time [latex]{\Delta t}[/latex].
• If emf is induced in a coil, [latex]{N}[/latex] is its number of turns.
• The minus sign means that the emf creates a current [latex]{I}[/latex] and magnetic field [latex]{B}[/latex] that oppose the change in flux [latex]{\Delta \phi}[/latex] —this opposition is known
as Lenz’s law.
Conceptual Questions
1: A person who works with large magnets sometimes places her head inside a strong field. She reports feeling dizzy as she quickly turns her head. How might this be associated with induction?
2: A particle accelerator sends high-velocity charged particles down an evacuated pipe. Explain how a coil of wire wrapped around the pipe could detect the passage of individual particles. Sketch a
graph of the voltage output of the coil as a single particle passes through it.
Problems & Exercises
1: Referring to Figure 5(a), what is the direction of the current induced in coil 2: (a) If the current in coil 1 increases? (b) If the current in coil 1 decreases? (c) If the current in coil 1 is
constant? Explicitly show how you follow the steps in the Problem-Solving Strategy for Lenz’s Law.
Figure 5. (a) The coils lie in the same plane. (b) The wire is in the plane of the coil
2: Referring to Figure 5(b), what is the direction of the current induced in the coil: (a) If the current in the wire increases? (b) If the current in the wire decreases? (c) If the current in the
wire suddenly changes direction? Explicitly show how you follow the steps in the Problem-Solving Strategy for Lenz’s Law.
3: Referring to Figure 6, what are the directions of the currents in coils 1, 2, and 3 (assume that the coils are lying in the plane of the circuit): (a) When the switch is first closed? (b) When the
switch has been closed for a long time? (c) Just after the switch is opened?
Figure 6.
4: Repeat the previous problem with the battery reversed.
5: Verify that the units of [latex]{\Delta \phi / \Delta t}[/latex] are volts. That is, show that [latex]{1 \;\text{T} \cdot \;\text{m}^2 \text{/s} = 1 \;\text{V}}[/latex].
6: Suppose a 50-turn coil lies in the plane of the page in a uniform magnetic field that is directed into the page. The coil originally has an area of
[latex]{0.250 \;\text{m}^2}[/latex]. It is stretched to have no area in 0.100 s. What is the direction and magnitude of the induced emf if the uniform magnetic field has a strength of 1.50 T?
7: (a) An MRI technician moves his hand from a region of very low magnetic field strength into an MRI scanner’s 2.00 T field with his fingers pointing in the direction of the field. Find the average
emf induced in his wedding ring, given its diameter is 2.20 cm and assuming it takes 0.250 s to move it into the field. (b) Discuss whether this current would significantly change the temperature of
the ring.
8: Integrated Concepts
Referring to the situation in the previous problem: (a) What current is induced in the ring if its resistance is [latex]{0.0100 \;\Omega}[/latex]? (b) What average power is dissipated? (c) What
magnetic field is induced at the center of the ring? (d) What is the direction of the induced magnetic field relative to the MRI’s field?
9: An emf is induced by rotating a 1000-turn, 20.0 cm diameter coil in the Earth’s [latex]{5.00 \times 10^{-5} \;\text{T}}[/latex] magnetic field. What average emf is induced, given the plane of the
coil is originally perpendicular to the Earth’s field and is rotated to be parallel to the field in 10.0 ms?
10: A 0.250 m radius, 500-turn coil is rotated one-fourth of a revolution in 4.17 ms, originally having its plane perpendicular to a uniform magnetic field. (This is 60 rev/s.) Find the magnetic
field strength needed to induce an average emf of 10,000 V.
11: Integrated Concepts
Approximately how does the emf induced in the loop in Figure 5(b) depend on the distance of the center of the loop from the wire?
12: Integrated Concepts
(a) A lightning bolt produces a rapidly varying magnetic field. If the bolt strikes the earth vertically and acts like a current in a long straight wire, it will induce a voltage in a loop aligned
like that in Figure 5(b). What voltage is induced in a 1.00 m diameter loop 50.0 m from a [latex]{2.00 \times 10^6 \;\text{A}}[/latex] lightning strike, if the current falls to zero in [latex]{25.0
\;\mu \text{s}}[/latex]? (b) Discuss circumstances under which such a voltage would produce noticeable consequences.
Faraday’s law of induction
the means of calculating the emf in a coil due to changing magnetic flux, given by [latex]{\text{emf} =-N \frac{\Delta \phi}{\Delta t}}[/latex]
Lenz’s law
the minus sign in Faraday’s law, signifying that the emf induced in a coil opposes the change in magnetic flux
Problems & Exercises
1: (a) CCW
(b) CW
(c) No current induced
3: (a) 1 CCW, 2 CCW, 3 CW
(b) 1, 2, and 3 no current induced
(c) 1 CW, 2 CW, 3 CCW
7: (a) 3.04 mV
(b) As a lower limit on the ring, estimate R = 1.00 mΩ. The heat transferred will be 2.31 mJ. This is not a significant amount of heat.
9: 0.157 V
11: proportional to [latex]{\frac{1}{r}}[/latex] | {"url":"https://pressbooks.online.ucf.edu/algphysics/chapter/faradays-law-of-induction-lenzs-law/","timestamp":"2024-11-03T10:03:23Z","content_type":"text/html","content_length":"282961","record_id":"<urn:uuid:61af9d4e-351b-4218-8a99-2fc3c3464c36>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00682.warc.gz"} |
ELEMENTS OF ALGEBRA:
FOR THE USE OF STUDENTS
IN THE
BY JAMES WOOD, D. D.
DEAN OF ELY, AND
MASTER OF ST. JOHN'S COLLEGE, CAMBRIDGE.
EIGHTH EDITION.
Printed by J. Smith, Printer to the University;
AND SOLD BY DEIGHTON & SONS, AND T. STEVENSON, CAMBRIDGE;
AND J. MAWMAN, LUDGATE-STREET, LONDON.
ON Vulgar Fractions 1
On Decimal Fractions 16
Signs used in Algebra 24
The Addition of Algebraical Quantities 29
Subtraction 32
Multiplication 33
Division 38
On Algebraical Fractions 42
Involution and Evolution 51
Simple Equations 64
Quadratic Equations 79
On Ratios 91
On Proportion 95
On Variation 103
On Arithmetical Progression 109
On Geometrical Progression 111
On Permutations and Combinations 115
The Binomial Theorem 117
On Surds 123
The Nature of Equations 135
The Transformation of Equations 142
The Limits of the Roots of Equations 151
The Depression of Equations 164
The Solution of Recurring Equations 168
The Solution of a Cubic Equation 170
Des Cartes's Solution of a Biquadratic 174
Dr. Waring's Solution 176
The Method of Divisors 178
The Method of Approximation 183
The Reversion of Series 187
The Sums of the Powers of the Roots of an Equation 190
On the Impossible roots of an Equation 194
On Unlimited Problems 203
On Continued Fractions 209
The Value of a Fraction whose Numerator and Denominator are evanescent 213
The least common Multiple 214
The cube root of a + √- b^2 217
On Logarithms ibid.
On Interest and Annuities 220
On the Summation of Series 227
On Chances 259
On Life Annuities 272
On the Nature of Curves 276
On the Construction of Equations 292
General Properties of Curve Lines 298
ON VULGAR FRACTIONS.
ART. 1. A fraction is a quantity which represents a part of parts of an integer or whole.
(2.) A simple fraction consists of two members, the numerator and the denominator; the denominator shews into how many equal parts the whole, or unity, is divided; and the numerator, the number of
those parts taken. The numerator is usually placed over the denominator with a line between them. Thus, 2/3, two thirds, signifies that unity is divided into three equal parts, and that two of those
parts are taken.
It must be observed, that we suppose every integer to be divisible into any number of equal parts at pleasure.
(3.) A proper fraction is one whose numerator is less than it's denominator, as 7/8.
(4.) An improper fraction is one whose numerator is equal to, or greater than its denominator, as 6/6;7/5.
(5.) A compound fraction is a fraction of a fraction, as 3/4 of 5/6, where 5/6 is the whole quantity of which 3/4 is to be taken; also, 2/3 of 4/5 of 9/11, is a compound fraction; &c.
(6.) A quantity consisting of a whole number and a fraction is called a mixed number, as 7 3/10, which signifies 7 integers together with 3/10 of an integer.
(7.) COR. 1. Every integer may be considered as a fraction whose denominator is 1; thus 5, or five units, is 5/1.
(8.) COR. 2. To multiply a fraction by any number, multiply the numerator by that number and retain the same denominator. Thus, 2/15 multiplied by 7 is 14/15. For, the unit, in each of the fractions
and 2/15 and 14/15, is divided into 15 equal parts, and 7 times as many of those parts are taken in the latter case as in the former.
(9.) COR. 3. To divide a fraction by any number, multiply the denominator by that number and retain the same numerator. Thus, 3/5 divided bv 4 is 3/20. For, the unit being divided into four times as
many equal parts in 3/20 as it is in 3/5, each of the parts in the latter case is four times as great as in the former, and
the same number of parts is taken in both cases; therefore, the former fraction is one fourth of the latter.
(10.) A simple fraction may be considered as representing the quotient arising from the division of the numerator by the denominator; thus the fraction 3/4 represents the quotient of 3 divided by 4;
for 3 is 3/1 (Art. 7.), and this divided by 4 is the fraction 3/4 (Art. 9.) If the integer be supposed a pound, or twenty shillings, 3/4 of £.1, which is 15 shillings, is equal to 1/4 of £.3, which
is also 15 shillings.
(11.) If the numerator and denominator of a fraction be both multiplied by the same number, it's value is not altered. For, if the numerator be multiplied by any number, the fraction is multiplied by
that number (Art. 8); and if the denominator be multiplied by the same number, the fraction is divided by it (Art. 9); and if a quantity be both multiplied and divided by the same number, it's value
is not altered. Thus, 5/14 = 15/42 = 150/420, &c. Hence, if the numerator and denominator be both divided by the same number, it's value is not altered; since 150/420 = 15/42 = 5/14*.
* To avoid repetition, the Reader is referred to the first section of the Algebra, for the explanation of the signs +, -, ×, and =
A 2
ON REDUCTION.
The operation by which a quantity is changed from one denomination to another, without altering it's value, is called Reduction.
(12.) To reduce a whole number to a fraction with a given denominator.
Multiply the proposed number by the given denominator, and the product will be the numerator of the fraction required.
Ex. Reduce 5 to a fraction whose denominator is 6.
This is or
(13.) To reduce a mixed number to an improper fraction.
Multiply the integral by the denominator of the fractional part, to this product add the numerator of the fractional part, and make it's denominator, the denominator of the sum.
Ex. 1. Reduce 7 4/5 to an improper fraction.
The quantity 7 4/5 is equal to for
Ex. 2. Also,
(14.) To reduce an improper fraction to a mixed number.
Divide the numerator by the denominator for the integral part, and make the remainder the numerator of the fractional part, and the divisor it's denominator.
Ex. Reduce 39/5 to an improper fraction. The fraction 39/5 = 7 4/5; because the unit being divided into 5 parts, 39 such, parts are to be taken, that is, 7 units and 4 such parts.
(15.) To reduce a compound fraction to a simple one.
Multiply all the numerators together for a new numerator, and all the denominators for a new. denominator.
Ex. 1. 2/3 of 4/5 = 8/15; for, one third of 4/5 is 4/15, (Art. 9); therefore two thirds, which must be twice as, great, is 8/15 (Art. 8).
Ex. 2. 3/4 of 5 = 3/4 of 5 = 3/4 of 5/1 = 15/4.
Mixed numbers must be reduced to improper fractions, before the rule can be applied.
Ex. 3. 5/8 of 2/9 of 3 1/12 = 10/72 of 3 1/12 = 10/72 of 37/122 = 370/864.
(16.) To reduce a fraction to lower terms.
Whenever the numerator and denominator of a fraction have a common measure (or number which divides each of them without remainder) greater than
unity, the fraction may be reduced to lower terms, by dividing both the numerator and denominator by this common measure.
Ex. 105/120 is reduced to 21/24, by dividing both the numerator and denominator by 5; and 21/24 is again reduced to 7/8, by dividing it's numerator and denominator by 3. That the value of the
fraction is not altered, appears from Art. 11.
In the same manner, 168/210 = 84/105 = 28/35 = 4/5.
(17.) The greatest common measure of two numbers is found by dividing the greater by the less, and the preceding divisor by the remainder, continually, till nothing is left: the last divisor is the
greatest common measure required.
To find the greatest common measure of 189 and 224.
By proceeding according to the rule, it appears that 7
is the last divisor, or the greatest common measure sought. The proof of this rule will be given hereafter*.
(18.) A fraction is reduced to it's lowest terms, by dividing it's numerator and denominator by their greatest common measure.
Ex. To reduce 385/396 to it's lowest terms.
By the last Art. the greatest common measure of the numerator and denominator is found to be 11, and therefore 35/36 is the fraction in it's lowest terms.
COR. If unity be the greatest common measure of the numerator and denominator, the fraction is in it's lowest terms.
(19.) To reduce fractions to a common denominator.
Having reduced, if necessary, compound fractions to simple ones, and mixed numbers to improper fractions, multiply each numerator by all the denominators except it's own, for the new numerator, and
all the denominators together for a common denominator.
Ex. 1. Reduce 1/2, 2/3 and 3/4 to a common denominator.
* Art. 90.
Ex. 2. Reduce 2/5 of 3/4 and 4 1/3 to a common denominator.
These are 6/20 and 13/3, or 3/10 and 13/3; therefore 9/30 and 130/30 are the fractions required.
(20.) If the denominator of one of two fractions contain the denominator of the other a certain number of times exactly, multiply the numerator and denominator of the latter by that number, and it
will be reduced to the same denominator with the former.
Ex. Reduce 5/12 and 2/3 to a common denominator.
Since 12 contains 3 four times exactly, multiply both the numerator and denominator of 2/3 by 4, and it becomes 8/12, a fraction having the same denominator with 5/12.
(21.) COR. By reducing two fractions to a common denominator their values may be compared.
Thus, 4/7 and 7/12; when reduced to a common denominator are 48/84 and 49/84; that is, the fractions have the same relative values that 48 and 49 have.
(22.) To find the value of a fraction of a proposed denomination, in terms of a lower denomination.
Multiply the fraction by the number of integers of the lower denomination contained in one integer of the higher, and the product is the value required. The value of any fractional part of the lower
denomination may be obtained in the same manner, till we come to the lowest.
Ex. 1. What is the value of 5/7 of a pound?
First, 5/7 of £.1 is 5/7 of 20 shillings, or 5/7 of 20/1 shillings = 100/7 = 14 2/7 shillings;
Next, 2/7 of a shilling = 2/7 of 12/1 pence = 24/7 pence = pence = 3 3/7 pence;
Lastly, 3/7 of a penny = 3/7 of farthings = 3/7 of 4/1, or 12/7 farthings = 1 5/7 farthings: hence, 5/7 of a pound is
The operation is usually performed in the following manner:
Ex. 2. What is the value of 5/9 of a crown?
(23.) To reduce a quantity to a fraction of any denomination.
Make the given quantity the numerator, and the number of integers of it's denomination in one of the proposed denomination, the denominator, and the fraction required is determined.
Ex. 1. What fraction of a pound is
(24.) In this example, we are obliged to reduce the whole to farthings; and in general, if the higher denomination do not contain the lower an exact number of times, reduce them to a common
denomination, and proceed as before.
Ex. 2. What fraction of a guinea is half a crown?
Here sixpence is the greatest common denomination, of which a guinea contains 42, and half a crown 5, therefore 5/42 is the fraction required.
Any common denomination would answer thp purpose, but if the greatest be taken, the resulting fraction is in the lowest terms.
(25.) To reduce a fraction to any denomination.
Find what fraction of the proposed denomination an integer of the denomination of the given fraction is, and the fraction required will be found by Art. 15.
Ex. 1. What fraction of a pound is 2/3 of a shilling? 1 shilling is 1/20 of a pound, therefore 2/3 of 1 shilling is 2/3 of 1/20 of a pound, or 2/60 = 1/30 of a pound.
Ex. 2. What fraction of a yard is 5/7 of an inch?
1 inch is 1/36 of a yard, therefore 5/7 of an inch is 5/7 of 1/36 of a yard, or 5/252 of a yard.
Ex. 3. What fraction of a guinea is 4/9 of a pound?
1 pound is 20/21 of a guinea (Art. 24); hence 4/9 of a pound is 4/9 of 20/21 of a guinea, or 80/189 of a guinea.
ADDITION OF FRACTIONS.
(26.) If fractions have a common denominator, their sum is found, by taking the sum of the numerators, and subjoining the common denominator.
Ex. 1/5 + 2/5 = 3/5. For, if an integer be divided into five equal parts, one of those parts, together with two parts of the same kind, must make three such parts.
(27.) If the fractions have not a common denominator, reduce them to a common denominator, and proceed as before.
Ex. Required the sum of 2/3, 3/4 and 4/5.
These reduced to a common denominator are 40/60, 45/60 and 48/60, whose sum is 133/60, or 2 13/60.
When mixed numbers are to be added, to the sum of the fractions, taken as before, add the sum of the integers.
Ex. Add together 5 3/4, 6 1/3 and 2/5 of 1/7.
3/4 + 1/3 + 2/35 = 315/420 + 140/420 + 24/420 = 479/420 = 1 59/420; therefore the whole sum is 12 59/420.
(28.) The difference of two fractions which have a common denominator is found by taking the difference of their numerators and subjoining the common denominator.
Ex. 4/5 - 3/5 = 1/5. For, if the unit be supposed to be divided into five equal parts, and three of those parts be taken from four, the remainder must be one, or 1/5.
(29.) If the fractions have not a common denominator, let them be reduced to a common denominator, and take the difference as before.
Ex. 1. From 9/11 take 4/5.
9/11 - 4/5 = 45/55 - 44/55 = 1/55.
Ex. 2. From 11/12 of 3/5 take 1/3 of 7/8.
33/60 - 7/24 = 792/1440 - 420/1440 = 372/1440 = 31/120.
(30.) DEF. To multiply one fraction by another, is to take such part or parts of the former as the latter expresses. This is done by multiplying the numerators of the two fractions together for a new
numerator, and the denominators for a new denominator.
Ex. 3/4×5/7 = 15/28; for 3/4 multiplied by 5/7 is, according to the definition of multiplication, 5/7 of 3/4 15/28, (Art. 15.)
Compound fractions must be reduced to simple ones, and mixed numbers to improper fractions, and they may then be multiplied as before.
Ex. 1. Multiply 2/5 of 9/13 by 7 1/8.
2/5 of 9/13 = 18/65; and 7 1/8 = 57/8; therefore their product is 18/65×57/8 = 1026/520 = 1 506/520 = 1 253/260.
Ex. 2. Multiply 2/217 by 7,
Hence it appears, that a fraction may be multiplied by a whole number, by dividing the denominator by that number, when this division can take place.
(31.) To divide one fraction by another, or to determine how often one is contained in the other, invert the numerator and denominator of the divisor, and proceed as in multiplication.
Ex. 3/4 divided by 5/7 is 3/4×7/5 = 21/20 = 1 1/20.
For, from the nature of division, the divisor multiplied by the quotient must produce the dividend; therefore 5/7×quotient = 3/4; let these equal quantities be multiplied by the same quantity 7/5,
and the products must be equal; that is, 7/5×5/7×quotients = 3/4×7/5 or 35/35×quotient = 21/20; but 35/35 = 1 (Art. 14); therefore the quotient = 21/20 according to the rule. And the same method of
proof is applicable to all cases.
Compound fractions must be reduced to simple ones, and mixed numbers to improper fractions, before the rule can be applied.
Ex. Divide 5/9 of 4/7 by 3 1/3.
5/9 of 4/7 = 20/63, and 3 1/3 = 10/3; therefore the quotient is 20/63×3/10 = 60/630 = 2/21.
ON DECIMAL FRACTIONS.
(32.) In order to lessen the trouble which in many cases attends the use of vulgar fractions, decimal fractions have been introduced, which differ from the former in this respect, that their
denominators are always 10 or some power of 10, as 100, 1000, 10000, &c. and instead of writing the denominator under the numerator, it is expressed by pointing off, from the right of the numerator,
as many figures as there are cyphers in the denominator; thus,.2,.23,.127,.0013, 43.7, signify respectively 2/10, 23/100, 127/1000, 13/10000, 43 7/10 or 437/10.
(33.) COR. 1. The value of each figure in a decimal, decreases from the left to the right in a tenfold proportion; that is, each figure is ten times as great as if it were removed one place to the
right, as in whole numbers; thus.2,.02,.002, are 2/10, 2/100, 2/1000, &c. and the decimal.127 is one tenth, two hundredths and seven thousandths of an unit.
(34.) COR. 2. Adding cyphers to the right of a decimal does not alter it's value; thus, .2, .20, .200, or 2/100, 20/100, 200/1000 are equal to each other, the numerator and denominator having been
multiplied by the same number. (See Art. 11.)
(35.) COR. 3. Decimals may be reduced to a common denominator by adding cyphers to the right, where it is necessary, till the number of decimal places is the same in all.
Ex. 5,.01 and.311 reduced to a common denominator, are.500,.010 and.311; that is, 500/1000, 10/1000 and 311/1000.
As decimals are only fractions of a particular description, their operations must depend upon the principles already laid down.
ADDITION OF DECIMALS.
(36.) To find the sum of any number of decimals, place the figures in such a manner that those of the same denomination may stand under each other; add them together as in whole numbers, and place
the decimal point in the sum under the other points.
Ex. Add together 7.9, 51.43 and.0118.
These, when reduced to a common denominator,
are 7.9000, 51.4300 and.0118; and proceeding according to the rule,
59.3418 is the sum required. (Art. 26.)
In the operation, the cyphers may be omitted; thus,
(37.) To find the difference of two decimals, place the figures of the same denomination under each other; then subtract as in whole numbers, and place the decimal point under the other points.
From 61.3 take 42.012.
These, reduced to a common denominator, are 61.300 and 42.012; therefore their difference is 19.288 (Art. 28). In the operation, the cyphers may be omitted; thus,
(38.) To multiply one decimal by another, multiply the figures as in whole numbers, and point off as many decimal places in the product as there are in the multiplier and multiplicand together.
Ex. 51.3×4.6 = 235.98. For, 513/10×46/10 = 23598/100 = (according to the decimal notation) 235.98. And a similar proof may be given in all other cases.
(39.) When there are fewer figures in the product than there are decimals in the multiplier and multiplicand together, cyphers must be annexed to the left of the product, that the decimal places may
be properly represented.
Ex. 25 ×.3 = .075; for 25/100×3/10 = 75/1000 = (according to the decimal notation) .075.
(40.) Division in decimals is performed as in whole numbers, observing to point off as many decimals in the quotient as the number of decimal places in the dividend exceeds the number in the divisor.
Ex. Divide 77.922 by 3.7.
77.922/3.7 = 21.06: here there are three decimals in the dividend, and one in the divisor; therefore, there are two in the quotient.
The truth of this rule is apparent from the nature of multiplication; for, the product of the divisor and
B 2
quotient is the dividend; there are, therefore, as many places of decimals in the dividend, as there are in the divisor and quotient together (Art. 38); consequently, there are as many in the
quotient as the number in the dividend exceeds the number in the divisor.
(41.) If figures be wanting to make up the proper number of decimal places, cyphers must be added to the left.
Ex. Divide.336 by 42.
336/42 = 8; and as the quotient of.336 divided by 42 must contain three decimal places, that quotient is.008. For, 336/1000 divided by 42 is 336/42000, or 8/1000 (Art. 9); that is (according to the
decimal notation).008 (Art. 32).
(42.) When the dividend does not contain as many decimals as the divisor, cyphers must be added to the right of the decimals in the dividend, till that is the case.
Ex. Divide 36 by. 012.
36 = 36.000; and 36.000 divided by.012 is 3000, according to the rule.
(43.) To reduce a vulgar fraction to a decimal.
Add cyphers at pleasure, as decimals, in the numerator, and divide by the denominator according to the rule for the division of decimals. The truth of this rule is evident from Art. 10.
Ex. 1. 3/4 = 3.00/4 = .75.
Ex. 2. 7/8 = 7.000/8 = .875.
Ex. 3. 4/625 = 4.0000/625 = .0064.
Ex. 4. 1/3 = 1.000 &c./3 = .333 &c.
Ex. 5. 4/33 = 4.0000 &c./33 = .1212 &c.
(44.) In some cases, as in the two last examples, the vulgar fraction cannot exactly be made up of tenths, hundredths, &c. but the decimal will go on without ever coming to an end, the same figure or
figures recurring in the same order; but though we cannot represent the exact value of the vulgar fraction, yet, by increasing the number of decimal places, we may approach to it as near as we
please. Thus, 1/9 = .1111 &c. now.1, or 1/10, is less than the true value by 1/90;.11, or 11/100, is too little by 1/900; &c.
Decimals of this kind are called recurring, or circulating decimals.
(45.) To find the value of a decimal of one denomination in terms of a lower denomination.
This may be done by the rule laid down in Art. 22.
Ex. Required the value of.615625£.
The value required is
First,.615625£. = 12.3125 shillings.
Next,.3125s. = 3.75. pence.
Lastly,.75d. = 3. farthings.
(46.) To reduce a quantity to a decimal of a Superior denomination.
Divide the quantity by the number of integers of it's denomination contained in one of the superior denomination, and the quotient is the decimal required.
Ex. 1. What decimal of a shilling is three-pence?
For, in the denomination shillings, it's numerical value must be 1/12 of it's value in the denomination pence.
Ex. 2. What decimal of a pound is
First, we find what decimal of a penny d. is; this is found in the same manner to be.3958333 &c. lastly, we find, by the same rule, what decimal of a pound 13.3958333 &c. sh. is; which appears to be
.66979166 &c.
The conclusion will be the same if we reduce the quantity to a vulgar fraction (Art. 23), and this fraction to a decimal (Art. 43).
The proofs of the rules for the management of vulgar and decimal fractions, here given, are necessarily confined to particular instances, but the same reasoning may be applied in every case; and by
using general signs, the proofs may be made general.
ELEMENTS OF ALGEBRA.
PART I.
DEFINITIONS AND EXPLANATION OF SIGNS.
(47.) THE method of representing the relation of abstract quantities by letters and characters, which are made the signs of such quantities and their relations, is called Algebra.
Known or determined quantities are usually represented by the first letters of the alphabet, a, b, c, d, &c. and unknown or undetermined quantities, by the last, y, x, w, &c.
The following signs are made use of to express the relations which the quantities bear to each other.
(48.) + Plus, signifies that the quantity to which it is prefixed must be added. Thus, a + b signifies that the quantity represented by b is to be added to the quantity represented by a; if a
represent 5, and b, 7, then a + b represents 12.
If no sign be placed before a quantity, the sign + is understood. Thus, a signifies + a. Such quantities are called positive quantities.
(49.) - Minus, signifies that the quantity to which
it is prefixed must be subtracted. Thus, a - b signifies that b must be taken from a; if a be 7, and b, 5, a - b expresses 7 diminished by 5, or 2.
Quantities to which the sign - is prefixed are called negative quantities.
(50.)×Into, signifies that the quantities between which it stands are to be multiplied together. Thus, a×b signifies that the quantity represented by a is to be multiplied by the quantity represented
by b*.
This sign is frequently omitted; thus abc signifies a×b×c. Or a full point is used instead of it; thus 1×2×3, and 1.2.3, signify the same thing.
(51.) If in multiplication the same quantity be repeated any number of times, the product is usually expressed by placing, above the quantity, the number which represents how often it is repeated;
thus a, a×a, a×a×a, a×a×a×a, and a^1, a^2, a^3, a^4, have respectively the same signification. These quantities are called powers; thus a^1, is called the first power of a; a^2, the second power, or
square of a; a^3, the third power, or cube of a, &c.
The numbers 1, 2, 3, &c. are called the indices of a; or exponents of the powers of a.
(52.) ÷ Divided by, signifies that the former of the quantities between which it is placed is to be divided by the latter. Thus, a ÷ b signifies that the quantity a is to be divided by b.
The division of one quantity by another is frequently represented by placing the dividend over the
* By quantities, we understand such magnitudes as can be represented by numbers; we may therefore without impropriety speak of the multiplication, division, &c. of quantities by each other.
divisor with a line between them, in which case the expression is called a fraction. Thus, a/b signifies a divided by b; and a is the numerator, and b the denominator of the fraction; also, a, b, and
c added together, are to be divided by e, f, and g added together; see Art. 10.
(53.) A quantity in the denominator of a fraction is also expressed by placing it in the numerator, and prefixing the negative sign to it's index; thus, a^- 1, a^- 2, a^- 3, a^- n, signify 1/a^1, 1/a
^2, 1/a^3, 1/a^n, respectively; these are called the negative powers of a.
(54.) The sign ~ between two quantities signifies their difference. Thus, a ~ x, is a - x or x - a, according as a or x is the greater; and aa and x.
(55.) A line drawn over several quantities signifies that they are to be taken collectively, and it is called a vinculum. Thus, a stand for 6; b, 5; c, 4; d, 3; and e, 1; then a - b + c is 6 - 5 + 4,
or 5; and d - e is 3 - 1, or 2; therefore
(56.) = Equal to, signifies that the quantities between which it is placed are equal to each other; thus,
ax - by = cd + ad, signifies that the quantity ax - by is equal to the quantity cd + ad.
(57.) The square root of any proposed quantity is that quantity whose square, or second power, gives the proposed quantity. The cube root, is that quantity whose cube gives the proposed quantity, &c.
The signs √,
These roots are also represented by the fractions 1/2, 1/3, 1/4, &c. placed a little above the quantities, to the right. Thus, a^1/2, a^1/3, a^1/4 a^1/n, represent the square, cube, fourth and n^th
root of a, respectively; a^5/2, a^7/3, a^3/5, represent the square root of the fifth power, the cube root of the seventh power, the fifth root of the cube of a.
(58.) If these roots cannot be exactly determined, the quantities are called irrational, or surds.
(59.) Points are made use of to denote proportion thus, a : b :: c : d, signifies that a bears the same proportion to b that c bears to d.
(60.) The number prefixed to any quantity, and which shews how often it is to be taken, is called it's coefficient. Thus, in the quantities 7ax, 6by, 3dz, 7, 6 and 3 are called the coefficients of ax
, by, and dz, respectively.
When no number is prefixed, the quantity is to be taken once, or the coefficient 1 is understood.
These numbers are sometimes represented by letters, which are called coefficients.
(61.) Similar, or like algebraical quantities, are such as differ only in their coefficients; 4a, 6ab, 9a^2, 3a^2bc, are respectively similar to 15a, 3ab, 12a^2, 15a^2bc, &c.
Unlike quantities are different combinations of letters; thus, ab, a^2b, ab^2, abc, &c. are unlike.
(62.) A quantity is said to be a multiple of another, when it contains it a certain number of times exactly; thus, l6a is a multiple of 4a, as it contains it exactly four times.
(63.) A quantity is called a measure of another, when the former is contained in the latter a certain number of times exactly; thus, 4a is a measure of l6a.
(64.) When two numbers have no common measure but unity, they are said to be prime to each other.
(65.) A simple algebraical quantity is one which consists of a single term, as a^2bc.
(66.) A binomial is a quantity consisting of two terms, as a + b, or 2a - 3bx. A trinomial is a quantity consisting of three terms, as 2a + bd + 3c.
The following examples will serve to illustrate the method of representing quantities algebraically.
Let a = 8, b = 7, c = 6, d = 5and e = 1; then,
3a - 2b + 4c - e = 24 - 14 + 24 - 1 = 33.
ab + ce - bd = 56 + 6 - 35 = 27.
d^2×^2 + d^3 = 25×2 - 18 + 125 = 50 - 18 + 125 = 157.
(67.) If equal quantities be added to equal quantities, the sums will be equal.
(68.) If equal quantities be taken from equal quantities, the remainders will be equal.
(69.) If equal quantities be multiplied by the same, or equal quantities, the products will be equal.
(70.) If equal quantities be divided by the same, or equal quantities, the quotients will be equal.
(71.) If the same quantity be added to and subtracted from another, the value of the latter will not be altered.
(72.) If a quantity be both multiplied and divided by another, it's value will not be altered.
ADDITION OF ALGEBRAICAL QUANTITIES.
(73.) The addition of algebraical quantities is performed by connecting those that are unlike with their proper signs, and collecting those that are similar into one sum.
Ex. 1. Add together the following unlike quantities;
It is immaterial in what order the quantities are set down, if we take care to prefix to each it's proper sign.
When any terms are similar, they may be incorporated, and the general expression for the sum shortened.
1^st. When similar quantities have the same sign, their sum is found by taking the sum of the coefficients with that sign, and annexing the common letters.
The reason is evident; 5a to be added, together with 4a to be added, makes 9a to be added; and 3b to be subtracted, together with 7b to be subtracted, is 10b to be subtracted.
2^d. If similar quantities have different signs, their sum is found by taking the difference of the coefficients with the sign of the greater, and annexing the common letters as before.
In the first part of the operation we have 7 times a to add, and 5 times a to take away; therefore upon the whole we have 2a to add. In the latter part, we have 3 times b to add, and 9 times b to
take away; i. e. we have upon the whole 6 times b to take away; and thus the sum of all the quantities is 2a - 6b.
If several similar quantities are to be added together, some with positive and some with negative signs, take the difference between the sum of the positive, and the sum of the negative coefficients,
prefix the sign of the greater sum, and annex the common letters.
The method of reasoning in this case is the same as in the last example.
In this example, the coefficients of x and its powers are united;
(74.) Subtraction, or the taking away of one quantity from another, is performed by changing the sign of the quantity to be subtracted, and then adding it to the other by the rules laid down in Art.
Ex. 1.
From 2bx take cy, and the difference is properly represented by 2bx - cy; because the - prefixed to cy, shews that it is to be subtracted from the other; and 2bx - cy is the sum of 2bx and - cy, Art.
Ex. 2.
Again, from 2bx take - cy, and the difference is 2bx + cy; because 2bx = 2bx + cy - cy Art. 71, take away - cy from these equal quantities, and the differences will be equal; i. e. the difference
between 2bx and - cy is 2bx + cy, the quantity which arises from adding + cy to 2bx.
In this example the coefficients are united; ^3 - px^3; ^2 + qx^2; and
(75.) The multiplication of simple algebraical quantities must be represented according to the notation pointed out Art. 50.
Thus, a×b, or ab, represents the product of a Multiplied by b; abc, the product of the three quantities a, b and c.
It is also indifferent in what order they are placed, a×b and b×a being equal.
For, 1×a = a×1, or 1 taken a times is the same with a taken once; also, b taken a times, or b×a, is b times as great as 1 taken a times; and a taken b times, or a×b, is b times as great as a taken
once; therefore (Art. 69.) b×a = a×b. Also, abc = cab = bca = acb, &c. for, as in the former case, 1×a×b = a×b×1; and c×a×b is c times as great as 1×a×b; also a×b×c is c times as great as a×b×1;
therefore a×b×c = c×a×b (Art. 69.); and a similar proof may be applied to the other cases.
(76.) To determine the sign of the product, observe the following rule:
If the multiplier and multiplicand have the same sign, the product is positive; if they have different signs, it is negative.
1^st. + ax + b = + ab; because in this case a is to be taken positively b times; therefore the product ab must be positive.
2^d. - ax + b = - ab; because - a is to be taken b times; that is, we must take - ab.
3^d. + ax - b = - ab; for a quantity is said to be multiplied by a negative number - b, if it be subtracted b times; and a subtracted b times is - ab. This also appears from Art. 79. Ex. 2.
4^th. - ax - b = + ab. Here - a is to be subtracted b times; that is, - ab is to be subtracted; but subtracting ab is the same as adding + ab (Art. 74.); therefore we have to add + ab.
The 2^d and 4^th cases may be thus proved; a - a = 0, multiply both sides by b, and ab together with - a×- b must be equal to b×0, or nothing; therefore - a multiplied by b must give - ab, a quantity
which when added to ab makes the sum nothing.
Again, a - a = 0; multiply both sides by - b, then - ab together with - a×- b must be = 0; therefore - a×- b = + ab.
(77.) If the quantities to be multiplied have coefficients, these must be multiplied together as in common arithmetic; the sign and the literal product being determined by the preceding rules.
Thus, 3a×5b = l5ab; because 3×a×5×6 = 3×5×a×b = 15ab (Art. 75); 4x×- 11y = - 44xy; - 9b×- 5c = + 45bc; - 6d×4m = - 24md.
(78.) The powers of the same quantity are multiplied together by adding the indices; thus, a^2×a^3 = a^5; for aa×aaa = aaaaa. In the same manner, a^m×a^n = a^m + n; and - 3a^2x^3×5axy^2 = - 15a^3x^4y
(79.) If the multiplier or multiplicand consist of several terms, each term of the latter must be multiplied by every term of the former, and the sum of all the products taken, for the whole product
of the two quantities.
Here a + b is to be added to itself c + d times, i. e. c times and d times.
C 2
Here a + b is to be taken c - d times; that is, c times wanting d times; or c times positively and d times negatively.
Here the coefficients of x^2 and x are collected;
(80.) The method of determining the sign of a product from the consideration of abstract quantities, has been found fault with by some algebraical writers, who contend that - a, without reference to
other quantities, is imaginary, and consequently not the object of reason or demonstration. In answer to this objection we may observe, that whenever we make use of the notation - a, and say it
signifies a quantity to be subtracted, we make a tacit reference to other quantities.
Thus, in numbers, - a represents a number to be subtracted from those with which it is connected; and when we suppose - a to be taken b times, we must understand that a is to be taken a times from
some other numbers. In estimating lines, or distances, - a represents a line, or distance, in a particular direction. The negative sign does not render quantities imaginary, or impossible, but points
out the relation of real quantities to others with which they are concerned.
(81.) To divide one quantity by another, is to determine how often the latter is contained in the former, or what quantity multiplied by the latter will produce the former.
Thus, to divide ab by a is to determine how often a must be taken to make up ab; that is, what quantity multiplied by a will give ab; which we know is b. From this consideration are derived all the
rules for the division of algebraical quantities.
(82.) If the divisor and dividend be affected with like signs, the sign of the quotient is +: but if their signs be unlike, the sign of the quotient is -.
If - ab be divided by - a, the quotient is + b; because - ax + b gives - ab; and a similar proof may be given in the other cases.
(83.) In the division of simple quantities, if the coefficient and literal product of the divisor be found in the dividend, the other part of the dividend, with the sign determined by the last rule,
is the quotient.
Thus, abc/ab = c; because ab multiplied by c gives abc.
If we first divide by a, and then by b, the result will be the same: for abc/a = bc, and bc/b = c, as before.
(84.) COR. Hence, any power of a quantity is divided by any other power of the same quantity, by taking the index of the divisor from the index of the dividend.
Thus, a^5/a^3 = a^2; a^3/a^6 = 1/a^3 = a^- 3 (Art. 53); a^m/a^n = a^m - n.
(85.) If only a part of the product which forms the divisor, be contained in the dividend, the division must be represented according to the direction in Art. 52, and the quantities contained both in
the divisor and dividend expunged.
Thus, 15a^3b^2c divided by - 3a^2bx,
First, divide by - 3a^2b, and the quotient is - 5abc; this quantity is still to be divided by x (Art. 83.), and as x is not contained in it, the division can only be represented in the usual way;
that is, - 5abc/x is the quotient.
(86.) If the dividend consist of several terms, and the divisor be a simple quantity, every term of the dividend must be divided by it.
(87.) When the divisor also consists of several terms, arrange both the divisor and dividend according to the powers of some one letter contained in them; then, find how often the first term of the
divisor is contained in the first term of the dividend, and write down this quantity for the first term in the quotient; multiply the whole divisor by it, subtract the product from the dividend, and
bring down to the remainder as many other terms of the dividend as the case may require, and repeat the operation till all the terms are brought down.
Ex. 1.
If a^2 - 2ab + b^2 be divided by a - b, the operation will be as follows:
The reason of this, and the foregoing rule, is, that as the whole dividend is made up of all it's parts, the divisor is contained in the whole, as often as it is contained in all the parts. In the
preceding operation we inquire first, how often a is contained in a^2, which gives a for the first term of the quotient, then multiplying the whole divisor by it, we have a^2 - ab to be subtracted
from the dividend, and the remainder is - ab + b^2, with which we are to proceed as before.
The whole quantity a^2 - 2ab + b^2, is in reality divided into two parts by the process, each of which is divided by a - b, therefore the true quotient is obtained.
ON THE TRANSFORMATION OF FRACTIONS TO OTHERS OF EQUAL VALUE.
(88.) If the signs of all the terms both in the numerator and denominator of a fraction be changed, it's value will not be altered. For - ab/- a = + b = + ab/+ a; and ab/- a = - b = - ab/a.
(89.) If the numerator and denominator of a fraction be both multiplied, or both divided by the same quantity, its value is not altered.
For ac/bc = a/b (Art. 85).
Hence, a fraction is reduced to it's lowest terms, by dividing both the numerator and denominator by the greatest quantity that measures them both.
(90.) The greatest common measure of two quantities is found by arranging them according to the powers of some letter, and then dividing the greater by the less, and the preceding divisor always by
the last remainder, till the remainder is nothing; the last divisor is the greatest common measure required.
Let a and b be the two quantities, and let b be contained in a, p times, with a remainder c; again, let c be contained in b, q times with a remainder d, and so on, till nothing remains; let d be the
last divisor, and it will be the greatest common measure of a and b.
(91.) The truth of this rule depends upon these two principles;
1^st. If one quantity measure another, it will also measure any multiple of that quantity. Let x measure y by the units in n, then it will measure cy by the units in nc.
2^d. If a quantity measure two others, it will measure their sum or difference. Let a be contained in x, m times, and in y, n times; then ma = x and na = y; therefore a is contained in x ± y, m ± n
times, or it measures x ± y by the units in m ± n.
(92.) Now it appears from the operation (Art. 90.), that a - pb = c, and b - qc = d; every quantity, therefore, which measures a and b, measures pb, and a - pb, or c; hence also it measures qc, and b
- qc, or d; that is, every common measure of a and b measures d.
It appears also from the division, that a = pb + c, b = qc + d, c = rd; therefore d measures c, and qc, and qc + d or b; hence it measures pb, and pb + c, or a. Every common measure then of a and b
measures d, and d measures a and b; therefore d is their greatest common measure.
To find the greatest common measure of a^4 - x^4 and a^3 - a^2x - ax^2 + x^3, and to reduce
leaving out 2x^2, which is found in each term of the remainder, the next divisor is a^2 - x^2.
a^2 - x^2 is, therefore, the greatest common measure of the two quantities, and if they be respectively divided by it, the fraction is reduced to
The quantity 2x^2, found in every term of one of the divisors, 2a^2 - 2x^4, but not in every term of the dividend, a^3 - a^2x - ax^3 - x^3, must be left out; otherwise the quotient will be
fractional, which is contrary to the supposition made in the proof of the rule; and by omitting this part, 2x^2, no common measure of the divisor and dividend is left out; because, by the
supposition, no part of 2x^2 is found in all the terms of the dividend.
(93.) To find the greatest common measure of three quantities, a, b, c; take d the greatest common measure of a and b; and the greatest measure of d and c, is the greatest common measure required.
Because every common measure of a, b and c, measures d and c; and every measure of d and c measures a, b and c (Art. 92); therefore, the greatest common measure of d and c must be the greatest common
measure of a, b and c.
(94.) In the same manner, the greatest common measure of four or more quantities may be found.
The greatest common measure of four quantities, a, b, c, d, may also be found by taking x the greatest common measure of a and b, and y the greatest common measure of c and d; then the greatest
common measure of x and y will be the common measure required.
(95.) If one number be divided by another, and the preceding divisor by the remainder, according to Art. 90, the remainder will at length be less than any quantity that can be assigned.
For a = pb + c; and b, and consequently pb, is greater than c; therefore pb + c, or a, is greater than 2c, and a/2 is greater than c; therefore from a, a quantity greater than it's half has been
taken; in the same manner, when c is the dividend, more than it's half is taken away, and so on: but if from any quantity there be taken more than it's half, and from the remainder more than it's
half, and so on, there will, at length, remain a quantity less than any that can be assigned (Euc. 1. x).
(96.) Fractions are changed to others of equal value with a common denominator, by multiplying each numerator by every denominator except it's own, for the new numerator; and all the denominators
together for the common denominator.
Let a/b, c/d, e/f be the proposed fractions; then adf/bdf, cbf/bdf, edb/bdf, are fractions of the same value with the former, having the common denominator bdf. For adf/bdf = a/b; cbf/bdf = c/d; and
edb/bdf = e/f (Art. 89); the numerator and denominator of each fraction having been multiplied by the same quantity, vis. the product of the denominators of all the other fractions.
(97.) When the denominators of the proposed fractions are not prime to each other, find their greatest common measure; multiply both the numerator and denominator of each fraction; by the
denominators of all the rest, divided respectively by their greatest common measure; and the fractions will be reduced to a common denominator in lower terms* than they would have been by proceeding
according to the former rule.
Thus, a/mx, b/my, c/mz reduced to a common denominator, are ayz/mxyz; bxz/mxyz; cxy/mxyz.
ON THE ADDITION AND SUBTRACTION OF FRACTIONS.
(98.) If the fractions to be added have a common denominator, their sum is found by adding the numerators together and retaining the common denominator.
* To obtain them in die lowest terms, each must be reduced to another of equal value, with the denominator which is the least common multiple of all the denominators. See Art. 374.
(99.) If the fractions have not a common denominator they must be transformed to others of the same value, which have a common denominator (Art. 96), and then the addition may take place as before.
a is considered as a fraction whose denominator is unity.
Ex. 5.
(100.) If two fractions have a common denominator, their difference is found by taking the difference of the numerators and retaining the common denominator.
(101.) If they have not a common denominator, they must be transformed to others of the same value, which have a common denominator, and then the subtraction may take place as before.
The sign of bd is negative, because every part of the latter fraction is to be taken from the former.
ON THE MULTIPLICATION AND DIVISION OF FRACTIONS.
(102.) To multiply a fraction by any quantity, multiply the numerator by that quantity and retain the denominator.
Thus, a/b×c = ac/b. For if the quantity to be divided be c times as great as before, and the divisor the same, the quotient must be c times as great.
(103.) COR 1. a/b×b = ab/b = a. That is, if a fraction be multiplied by it's denominator, the product is the numerator.
(104.) COR. 2. The result is the same, whether the numerator be multiplied by a given quantity, or the denominator divided by it. Let the fraction be ad/bc, and let it's numerator be multiplied by c,
the result is adc/bc, or ad/b (Art. 89.), the quantity which arises from the division of it's denominator by c.
(105.) The product of two fractions is found by multiplying the numerators together for a new numerator, and the denominators for a new denominator.
Let a/b and c/d be the two fractions; then a/b×c/d = ac/bd. For if a/b = x, and c/d = y, by multiplying the equal quantities a/b and x, by b, a = bx (Art. 69); in the same manner, c = dy; therefore,
by the same axiom, ac = bdxy; dividing these equal quantities, ac/bd = xy = a/b×c/d.
(106.) To divide a fraction by any quantity, multiply the denominator by that quantity, and retain the numerator.
The fraction a/b divided by c, is a/bc. Because a/b = ac/bc,
and a c^th of this a/bc; the quantity to be divided being a c^th part of what it was before, and the divisor the same.
(107.) COR. The result is the same, whether the denominator is multiplied by the quantity, or the numerator divided by it.
Let the fraction be ac/bd, if the denominator be multiplied by c, it becomes ac/bdc or a/bd; the quantity which arises from the division of the numerator by c.
(108.) To divide one fraction by another, invert the numerator and denominator of the divisor, and proceed as in multiplication.
Let a/b and c/d be the two fractions, then a/b ÷ c/d = a/b×d/c = ad/bc.
For if a/b = x, and c/d = y then as in Art. 105, a = bx, and c = dy; also, ad = bdx, and bc = bdy; therefore by Art. 70, ad/bc = bdx/bdy = x/y = a/b ÷ c/d.
(109.) The rule for multiplying the powers of the same quantity (Art. 78), will hold when one or both of the indices are negative.
Thus, a^m×a^- n = a^m - n; for a^m×a^- n = a^m×1/a^n (Art. 53.) = a^m/a^n = a^m - n; in the same manner, x^3×x^- 5 = x^3/x^5 = 1/x^2 = x^- 2.
Again, a^- m×a^- n = ^- m×a^- n = 1/a^m×1/a^n (Art. 53.), = 1/ a^m + n =
(110.) COR. If m = n, a^m×a^- m = a^m - m = a^0; also, a^m×a^- m = a^m/a^m = 1; therefore a^0 = 1; according to the notation adopted (Arts. 51. 53).
(111.) The rule for dividing any power of a quantity by any other power of the same quantity (Art. 84.) holds, whether those powers are positive or negative.
Thus, a^m ÷ a^- n = a^m ÷ 1/a^n (Art. 53), = a^m×a^n = a^m + n. Again, a^- m ÷ a^- n = 1/a^m ÷ 1/a^n = a^n/a^m (Art. 108.) = a^n - m (Art. 84).
(112.) COR. Hence it appears, that a quantity may be transferred from the numerator of a fraction to the denominator, and the contrary, by changing the sign of it's index. Thus,
ON INVOLUTION AND EVOLUTION.
(113.) If a quantity be continually multiplied by itself, it is said to be involved, or raised; and the power to which it is raised, is expressed by the number of times the quantity has been employed
in the multiplication.
D 2.
Thus, a×a, or a^2, is called the second power of a; a×a×a, or a^3, the third power; a×a…. (n), or a^n, the n^th power.
(114.) If the quantity to be involved be negative, the signs of the even powers will be positive, and the signs of the odd powers negative.
For - ax - a = a^2; - ax - ax - a = a^3, &c.
(115.) A simple quantity is raised to any power, by multiplying the index of every factor in the quantity by the exponent of the power, and prefixing the proper sign determined by the last article.
Thus, a^m raised to the n^th power is a^mn. Because a^m×a^m×a^m… to n factors, by the rule of multiplication, is a^mn; also, n factors, or a×a×a…. to n factors×b×b×b…. to n factors (Art. 75), = a^n×b
^n; and a^2b^3c raised to the fifth power is a^10b^15c^5. Also, - a^m raised to the n^th power is ± a^mn; where the positive or negative sign is to be prefixed, according as n is an even or odd
(116.) If the quantity to be involved be a fraction, both the numerator and denominator must be raised to the proposed power (Art. 105).
(117.) If the quantity proposed be a compound one, the involution may either be represented by the proper index, pr it may actually take place.
Let a + b be the quantity to be raised to any power.
If b be negative, or the quantity to be involved be a - b, wherever an odd power of b enters, the sign of the term must be negative (Art. 114.)
(118.) Evolution, or the extraction of roots, is the method of determining a quantity which raised to a proposed power will produce a given quantity.
(119.) Since the n^th power of a^m is a^mn, the n^th root of a^mn must be a^m; i. e. to extract any root of a single quantity, we must divide the index of that quantity by the index of the root
(120.) When the index of the quantity is not exactly divisible by the number which expresses the root to be extracted, that root must be represented
according to the notation pointed out in Art. 57. Thus, the square, cube, fourth, n^th root of a^2 + x^2, are respectively represented by
(121.) If the root to be extracted be expressed by an odd number, the sign of the root will be the same with the sign of the proposed quantity, as appears by Art. 114.
(122.) If the root to be extracted be expressed by an even number, and the quantity proposed be positive, the root may be either positive or negative. Because either a positive or negative quantity,
raised to such a power, is positive (Art. 114).
(123.) If the root proposed to be extracted be expressed by an even number, and the sign of the proposed quantity be negative, the root cannot be extracted; because no quantity, raised to an even
power, can produce a negative result. Such roots are called impossible.
(124.) Any root of a product may be found by taking that root of each factor, and multiplying the roots, so taken, together.
Thus, n^th power, is ab (Art. 115).
COR. If a = b, then a^1/n×a^1/n = a^2/n; and in the same manner,
(125.) Any root of a fraction may be found by taking that root of both the numerator and denominator (Art. 116).
Thus, the cube root of a^2/b^2 is a^2/3/b^2/3, or a^2/3×b^- 2/3; and
(126.) To extract the square root of a compound quantity.
Since the square root of a^2 + 2ab + b^2 is a + b (Art. 117), whatever be the values of a and b, we may obtain a general rule for the extraction of the square root, by observing in what manner a and
b may be derived from a^2 + 2ab + b^2.
Having arranged the terms according to the dimensions of one letter, a, the square root of the first a^2, is a, the first factor in the root; subtract its square from the whole quantity; and bring
down the remainder 2ab + b^2; divide 2 ab by 2 a, and the result is b, the other factor in the root; then multiply the sum of twice the first factor and the second (2a + b), by the second (b), and
subtract this product (2ab + b^2) from the remainder. If there be more terms, consider a + b as a new value of a; and it's square, that is a^2 + 2ab + b^2, having, by the first part of the process,
been subtracted from the proposed quantity, divide the remainder by the double of this new value of a, for a new factor in the root; and for a new subtrahend, multiply this factor by
twice the sum of the former factors increased by this factor. The process must be repeated till the root, or the necessary approximation to the root, is obtained.
Ex. 1.
To extract the square root of a^2 + 2ab + b^2 + 2ac + 2bc + c^2; or of its equal
Ex. 2.
To extract the square root of a^2 - ax + x^2/4.
Ex. 3.
To extract the square root of 1 + x.
(127.) It appears from the second example, that a trinomial a^2 - ax + x^2/4, in which four times the product of the first and last terms, is equal to the square of the middle term, is a complete
(128.) The method of extracting the cube root is discovered in the same manner.
The cube root of a^3 + 3a^2b + 3ab^2 + b^3 is a + b (Arts. 117, 118); and to obtain a + b from this compound quantity, arrange the terms as before, and the cube root of the first factor in the root
of the first term, a^3, is a the first factor in the root;
subtract it's cube from the whole quantity, and divide the first term of the remainder by 3a^2, the result is b, the second factor in the root; then subtract 3a^2b + 3ab^2 + b^3 from the remainder,
and the whole cube of a + b has been subtracted. If any quantity be left, proceed with a + b as a new a, and divide the last remainder by
(129.) The rules above laid down, for the extraction of the roots of compound quantities, are but little used in algebraical or fluxional operations; but it was necessary to give them at full length,
for the purpose of investigating rules for the extraction of the square and cube roots in numbers.
The square root of 100 is 10, of 10000 is 100, of 1000000 is 1000, &c. from which consideration it follows, that the square root of a numberless than 100 must consist of only one figure, of a number
between 100 and 10000 of two places of figures, of any number from 10000 to 1000000, of three places of figures, &c. If then a point be made over every second figure in any number, beginning with the
units, the number of points will shew the number of figures, or places, in the square root. Thus the square root of
Let the square root of 4357 be required.
Having pointed it according to the direction, it appears that the root consists of two places of figures; let a + b be the root, where a is the value of the figure in the ten's place, and b, of that
in the unit's; then is a the nearest square root of 4300 which does not exceed the true root, this appears to be 60; subtract the square of 60 (a^2) from the given number, and the remainder is 757;
divide this remainder by 120 (2a), and the quotient is 6 (the value of b), and the subtrahend, or quantity, to be taken from the last remainder 757, is 126×6,
It is said that a must be the greatest number whose square does not exceed 4300: it evidently cannot be a greater number than this; and if possible let it be some quantity x, less than this; then
since x is in the ten's place and b in the unit's, x + b is less than a; therefore the square of x + b, whatever be the value of b, must be less than a^2, and consequently x + b less than the true
If the root consist of three places of figures, let a represent the hundreds, and b the tens; then having obtained a and b as before, let the new value of a be the hundreds and tens together, and
find a new value of b for the units: and thus the process may be continued when there are more places of figures in the root.
(130.) The cyphers being omitted for the sake of expedition, the following rule is obtained from the foregoing process.
Point every second figure beginning with the unit's place, dividing by this process the whole number into several periods; find the greatest number whose square is contained in the first period, this
is the first figure in the root; subtract it's square from the first period, and to the remainder bring down the next period; divide this quantity, omitting the last figure, by twice the part of the
root already obtained, and annex the result to the root and also to the divisor; then multiply the divisor, as it now stands, by the part of the root last obtained, for the subtrahend. If there be
more periods to be brought down, the operation must be repeated.
Ex. 2.
Let the square root of 611524 be required.
(131.) In extracting the square root of a decimal, the pointing must be made the contrary way, beginning with the place of hundredths, or care must be taken to have an even number of decimal places;
because, if the root have 1, 2, 3, 4, &c. decimal places, the square must have 2, 4, 6, 8, &c. places (Art. 38).
Ex. 3.
To extract the square root of 64.853.
For every pair of cyphers which we suppose annexed to the decimal, another figure is obtained in the root.
(132.) The cube root of 1000 is 10, of 1000000 is 100, &c. therefore the cube root of a number less than 1000 consists of one figure, of any number between 1000 and 1000000, of two places of figures,
&c. If then a point be made over every third figure contained in any number, beginning with the units, the number of points will shew the number of places in its cube root.
Let the cube root of 405224 be required.
By pointing the number according to the direction, it appears that the root consists of two places; let a be the value of the figure in the ten's place, and b, of that in the unit's. Then a is the
greatest number whose cube is contained in 405000*, or 70; subtract it's cube from the whole quantity, and the remainder is 62224; divide this remainder by 3a^2, or 14700, and the quotient 4, or b,
is the second term in the root: then subtract the cube of 74 from the original number, and as the remainder is nothing, 74 is the cube root required. Observe, that the cyphers may be omitted in the
operation; and that as a^3 was at first subtracted, if from the first remainder, 3a^3b + 3ab^2 + b^3 be taken, the whole cube of a + b will be taken from the original quantity.
(133.) In extracting the cube root of a decimal, care must be taken that the decimal places be three, or some multiple of three, before the operation is begun; because there are three times as many
decimal places in the cube as there are in the root (Art. 38).
Ex. 2.
Required the cube root of 311897.91.
* See Art. 129.
The new value of a is 670, or, omitting the cypher, 67, and 3a^2, the new divisor, is 13467.. hence 8 is the next figure in the root; and
It appears from the pointing, that there is one decimal place in the root; therefore 67.8 is the root required, nearly. If three more cyphers be annexed to the decimal, another decimal place is
obtained in the root; and thus approximation may be made to the true root of the proposed number, to any degree of accuracy.
Since the first remainder is 3a^2b + 3 ab^2 + b^3, the exact value of b is not obtained by dividing by 3a^2, and if upon trial the subtrahend be found to be greater than the first remainder, the
value assumed for b is too great, and a less number must be tried. The
greater a is with respect to b, the more nearly is the true value obtained by division; and when a few places in the root are found, the number of figures may nearly be doubled, by division only.
ON SIMPLE EQUATIONS.
(134.) If one quantity be equal to another, or to nothing, and this equality be expressed algebraically, it constitutes an Equation.
Thus, x - a = b - x is an equation, of which x - a forms one side, and b - x the other.
(135.) When an equation is cleared of fractions and surds, if it contain the first power only of an unknown quantity, it is called a simple equation, or an equation of one dimension: if the square of
the unknown quantity be in any term, it is called a quadratic, or an equation of two dimensions; and in general, if the index of the highest power of the unknown quantity be n, it is called an
equation of n dimensions.
(136.) In any equation, quantities may be transposed from one side to the other, if their signs be changed, and the two sides will still be equal.
Let x + 10 = 15, then by subtracting 10 from each side, x + 10 - 10 = 15 - 10 (Art. 68), or x = 15 - 10.
Let x - 4 = 6, by adding 4 to each side, x - 4 + 4 = 6 + 4, or x = 6 + 4 (Art. 67).
If x - a + b = y; adding a - b to each side, x - a + b + a - b = y + a - b; or x = y + a - b.
(137.) COR. Hence, if the signs of all the terms on each side be changed, the two sides will still be equal.
Let x - a = b - 2x; by transposition, - b + 2x = x + a; or a - x = 2x - b.
(138.) If every term, on each side, be multiplied by the same quantity, the results will be equal (Art. 69).
(139.) COR. An equation may be cleared of fractions, by multiplying every term, successively, by the denominators of those fractions.
Let 3x + 5x/4 = 34; multiplying by 4, 12x + 5x = 136. (See Art. 103).
An equation may be cleared of fractions at once, by multiplying both sides by the product of all the denominators, or by any quantity which is a multiple of them all*.
Let x/2 + x/3 + x/4 = 13; multiplying by 2×3×4, 3×4×x + 2×4×x + 2×3×x = 2×3×4×13, or 12x + 8x + 6x = 312; that is, 26x = 312.
If each side be multiplied by 12, which is a multiple of 2, 3, and 4, the equation will become 12x/2 + 12x/3 + 12x/4 = 156; 6x + 4x + 3x = 156; that is, 13x = 156.
(140.) If each side of an equation be divided by the same quantity, the results will be equal.
Let 17x = 136; then x = 136/17 = 8 (Art. 70).
* If the least common multiple be made use of, the equation will be in the lowest terms.
(141.) If each side of an equation be raised to the same power, the results will be equal.
Let x^1/2 = 9; then x = 9×9 = 81 (Art. 69).
Also, if the same root be extracted on both sides, the results will be equal.
Let x = 81; then x^1/2 = 9 (Art. 118).
(142.) To find the value of an unknown quantity in a simple equation.
Let the equation first be cleared of fractions, then transpose all the terms which involve the unknown quantity to one side of the equation, and the known quantities to the other; divide both sides
by the coefficient, or sum of the coefficients, of the unknown quantity, and the value required is obtained.
Ex. 1.
To find the value of x in the equation 3x - 5 = 23 - x.
by transp. 3x + x = 23 + 5 (Art. 136.)
or 4x = 28
by division x = 28/4 = 7 (Art. 140.)
Ex. 2.
Let x + x/2 - x/3 = 4x - 17.
Mult. by 2, and 2x + x - 2x/3 = 8x - 34.
Mult, by 3, and 6x + 3x - 2x = 24x - 102 (Art. 139).
by transp. 6x + 3x - 2x - 24x = - 102,
or - 17x = - 102
17x = 102 (Art. 137)
x = 102/17 = 6.
Ex. 3.
1/a + b/x = c.
1 + ba/x = ca
x + ba = cax
x - cax = - ba
or cax - x = ba (Art. 137)
x = ba/ca - 1.
Ex. 4.
55 - x - 4 = 11x - 33
55 - 4 + 33 = 11x + x
84 = 12x
x = 84/12 = 7.
Ex. 5.
6x + 9x - 15 = 72 - 4x + 8
6x + 9x + 4x = 72 + 8 + 15
19x = 95
x = 95/19 = 5
E 2
(143.) If there be two independent simple equations involving two unknown quantities, they may be reduced to one, which involves only one of the unknown quantities, by any of the following methods:
1^st Method. In either equation, find the value of one of the unknown quantities in terms of the other and known quantities, and for it substitute this value in the other equation, which will then
only contain one unknown quantity, whose value may be found by the rules before laid down.
From the first equat. x = 10 - y; hence, 2x = 20 - 2y,
by subst. 20 - 2y - 3y = 5
20 - 5 = 2y + 3y
15 = 5y
y = 15/5 = 3
hence also, x = 10 - y = 10 - 3 = 7.
2^d Method. Find an expression for one of the unknown quantities, in each equation; put these expressions equal to each other, and from the resulting equation the other unknown quantity may be found.
From the first equat. x = a - y
from the second, bx = de - cy, and
ba - by = de - cy
cy - by = de - ba
Also, x = a - y; that is,
3^d Method. If either of the unknown quantities have the same coefficient in both equations, it may be exterminated by subtracting, or adding, the equations, according as the sign of the unknown
quantity, in the two cases, is the same or different.
By subtraction, 2y = 8, and y = 4.
By addition, 2x = 22, and x = 11 (Art. 67).
If the coefficients of the unknown quantity to be exterminated be different, multiply the terms of the first equation by the coefficient of the unknown quantity in the second, and the terms of the
second equation by the coefficient of the same unknown quantity, in the first; then add, or subtract, the resulting equations, as in the former case.
Ex. 1. Let
Multiply the terms of the first equation by 2, and the terms of the other by 3,
then 6x - 10y = 26
6x + 21y = 243
By subtraction, - 31y = - 217
and y = 217/31 = 7;
also, 3x - 5y = 13, or 3x - 35 = 13
therefore, 3x = 13 + 35 = 48
and x = 48/3 = 16.
Ex. 2.
From the first, max + mby = mc
from the other, max - nay = ad
by subtraction, mby + nay = mc - ad
Again, nax + nby = nc
mbx - nby = bd
Ex. 3.
From the first equat.
15x - 25y + 30 = 4x + 2y
15x - 4x - 25y - 2y = - 30
11x - 27y = - 30
from the second equat. 32 - x + 2y = 4x/2 + 4y/3 = 2x + 4y/3
96 - 3x + 6y = 6x + 4y
96 = 6x + 3x + 4y - 6y
or 9x - 2y = 96
and 11x - 27y = - 30
hence 99x - 22y = 1056
and 99x - 243y = - 270
221y = 1056 + 270 = 1326
y = 1326/221 = 6
also, 9x - 2y = 96
or 9x - 12 = 96
9x = 96 + 12 = 108
x = 108/9 = 12.
(144.) If there be three independent simple equations, and three unknown quantities, reduce two of the equations to one, containing only two of the unknown quantities, by the preceding rules; then
reduce the third equation and either of the former to one, containing the same two unknown quantities; and from the two equations thus obtained, the unknown quantities which they involve may be
found. The third quantity may be found by substituting their values in any of the proposed equations.
From the two first equat. 6x + 9y + 12z = 48
6x + 4y - 10z = 16
by subtr. 5y + 22z = 32
from the first and third, 10x + 15y + 20z = 80
10x - 12y + 6z = 12
by subtr. 27y + 14z = 68
and 5y + 22z = 32
hence 135y + 70z = 340
and 135y + 594z = 864
by subtr. 524z = 524
z = 1
5y + 22z = 32
that is, 5y + 22 = 32
5y = 32 - 22 = 10
y = 10/5 = 2
2x + 3y + 4z = 16
that is, 2x + 6 + 4 = 16
2x = 16 - 6 - 4 = 6
x = 3.
The same method may be applied to any number of simple equations.
(145.) That the unknown, quantities may have definite values, there must be as many independent equations as unknown quantities. When there are more equations than unknown quantities, the value of
any one of these quantities may be determined from different equations; and should the values, thus found, differ, the equations are incongruous; should they be the same, one or more of the equations
are unnecessary. When there are fewer equations than unknown quantities, one of these quantities cannot be
found, but in terms which involve some of the rest, whose values may be assumed at pleasure; and in such cases the number of answers is indefinite.
Thus, if x + y = a, then x = a - y; and assuming y at pleasure, we obtain a value of x, such, that x + y = a.
These equations must also be independent, that is, not deducible one from another.
Let x + y = a, and 2x + 2y = 2a; this latter equation being deducible from the former, it involves no different supposition, nor requires any thing more for it's truth, than that x + y = a should be
a just equation.
PROBLEMS WHICH PRODUCE SIMPLE EQUATIONS.
(146.) From certain quantities which are known, to investigate others which have a given relation to them, is the business of Algebra.
When a question is proposed to be resolved, we must first consider fully it's meaning and conditions. Then substituting for such unknown quantities as appear most convenient, we must proceed as if
they were already determined, and we wished to try whether they answer all the proposed conditions or not, till as many independent equations arise as we have assumed unknown quantities, which will
always be the case if the question be properly limited (Art. 145); and by the solution of these equations, the quantities sought will be determined.
PROB. 1.
A bankrupt owes A twice as much as he owes B, and C as much as he owes A and B together; out of £.300, which is to be divided amongst them, what must each receive?
Let x represent what B must receive;
then 2x = what A must receive,
and x + 2x, or 3x, = what C must receive;
amongst them they receive £.300; therefore
x + 2x + 3x = 300
6x = 300
x = 300/6 = 50, what B must receive
2x = 100, what A must receive
3x = 150, what C must receive.
PROB. 2.
To divide a line of 15 inches into two such parts, that one may be three fourths of the other.
Let 4x = one part,
then 3x = the other.
7x = 15, by the question,
x = 15/7
4x = 60/7 = 8 4/7, one part,
3x = 45/7 = 6 3/7, the other.
PROB. 3.
If A can perform a piece of work in 8 days, and B in 10 days, in what time will they finish it together?
Let x be the time required.
In one day, A performs 1/8 part of the work; therefore in x days, he performs x/8 parts of it; and in the same time, B performs x/10 parts of it; and calling the work 1,
x/8 + x/10 = 1
10x + 8x = 80
18x = 80
x = 80/18 = 4 8/18 = 4 4/9 days.
PROB. 4.
A workman was employed for 60 days, on condition that for every day he worked he should receive 15 pence; and for every day he played he should forfeit 5 pence; at the end of the time he had 20
shillings to receive; required the number of days he worked.
Let x be the number of days he worked, then 60 - x is the number he played,
15x his pay, in pence,
300 - 5x, sum forfeited,
15x - 300 + 5x = 240, by the question,
20x = 240 + 300 = 540
x = 27, the days he worked,
60 - x = 33, the days he played.
PROB. 5.
How much rye, at four shillings and sixpence a bushel, must be mixed with 50 bushels of wheat, at
six shillings a bushel, that the mixture may be worth five shillings a bushel?
Let x be the number of bushels required;
then 9x is the price of the rye in sixpences 600 the price of the wheat
therefore, 9x + 600 = 500 + 10x
and 100 = x, the number of bushels required.
PROB. 6.
A and B engage together in play; in the first game, A wins as much as he had and four shillings more, and finds he has twice as much as B; in the second game, B wins half as much as he had at first
and one shilling more, and then it appears that he has three times as much as A; what sum had each at first?
Let x be what A had, in shillings,
y what B had
2x + 4, what A has after the 1^st game
y - x - 4, what B has
by the question, 2x + 4 = 2y - 2x - 8
or 2y - 4x = 12
y - 2x = 6
also, y - x - 4 + y/2 + 1, what B has after the second game,
2x + 4 - y/2 - 1, what A has;
by the question, y - x - 4 + y/2 + 1 = 6x + 12 - 3y/2 - 3
or 2y - 2x - 8 + y + 2 = 12x + 24 - 3y - 6
hence 6y - 14x = 24
or 3y - 7x = 12
also, y - 2x = 6
therefore, 3y - 6x = 18
also, 3y - 7x = 12
by subtraction, x = 6
y - 2x = 6, or y - 12 = 6
y = 18.
PROB. 7.
A smuggler had a quantity of brandy which he expected would raise £9 : 18s.; after he had sold 10 gallons, a revenue officer seized one third of the remainder, in consequence of which he makes only
£8 : 2s.; required the number of gallons he had, and the price per gallon.
Let x be the number of gallons;
then 198/x is the price per gallon, in shillings,
66x - 660 = 36x
30x = 660
x = 22 the number of gallons,
198/x = 198/22 = 9 shillings, the price per gallon.
PROB. 8.
A and B play at bowls, and A bets B three shillings
to two upon every game; after a certain number of games it appears, that A has won three shillings; but had he ventured to bet five shillings to two, and lost one game more out of the same number, he
would have lost thirty shillings: how many games did they play?
Let x be the number of games A won,
y the number B won,
then 2x is what A won of B,
and 3y what B won of A.
2x - 3y = 3, by the question;
A would win on the 2^d supposition,
B would win,
5y + 5 - 2x + 2 = 30, by the question,
or 5y - 2x = 30 - 5 - 2 = 23
therefore, 5y - 2x = 23
and 2x - 3y = 3
by addition, 5y - 3y = 26
2y = 26
y = 13
2x = 3 + 3y = 3 + 39 = 42
x = 21
x + y = 34, the number of games.
PROB. 9.
A sum of money was divided equally amongst a certain number of persons; had there been three more, each would have received one shilling less, and had there been two fewer, each would have received
one shilling more than he did: required the number of persons, and what each received.
Let x be the number of persons,
y the sum each received, in shillings;
then xy is the sum divided,
therefore, xy - x + 3y - 3 = xy
or - x + 3y = 3
and xy + x - 2y - 2 = xy
or x - 2y = 2
also, - x + 3y = 3
therefore, y = 5
hence x - 2y = x - 10 = 2.
or x = 12.
ON QUADRATIC EQUATIONS.
(147.) When the terms of an equation involve the square of the unknown quantity, but the first power does not appear, the value of the square is obtained by the preceding rules; and by extracting the
square root on both sides, the quantity itself is found.
Ex. l.
Let 5x^2 - 45 = 0; to find x.
By trans. 5x^2 = 45
x^2 = 9
therefore (Art. 141), x = √9 = ± 3.
The signs + and - are both prefixed to the root, because the square root of a quantity may be either positive or negative (Art. 122). The sign of x may also be negative; but still x will be either
equal to + 3 or - 3.
Ex. 2.
Let ax^2 = bcd; to find x.
x^2 = bcd/a
x = ± √bcd/a.
(148.) If both the first and second powers of the unknown quantity be found in an equation, arrange the terms according to the dimensions of the unknown quantity, beginning with the highest, and
transpose the known quantities to the other side; then, if the square of the unknown quantity be affected with a coefficient, divide all the terms by this coefficient, and if it's sign be negative,
change the signs of all the terms (Art. 137), that the equation may be reduced to this form, x^2 ± px = ± q. Then add to both sides the square of half the coefficient of the first power of the
unknown quantity, by which means, the first side of the equation is made a complete square, and the other consists of known quantities; and by extracting the square root on both sides, a simple
equation is obtained, from which the value of the unknown quantity may be found.
Ex. 1.
Let x^2 + px = q; now, we know that x^2 + px + p^2/4 is the square of x + p/4 (Art. 127); add therefore, p^2/4 to both sides, and we have
x^2 + px + p^2/4 = q + p^2/4 then by extracting the square root on both sides,
In the same manner, if x^2 - px = q, x is found to be
Ex. 2.
Let x^2 - 12x + 35 = 0; to find x.
By transposition, x^2 - 12x = - 35, and adding the square of 6 to both sides of the equation,
x^2 - 12x + 36 = 36 - 35 = l;
then extracting the square root on both sides,
x - 6 = ± l
x = 6 ± 1 = 7 or 5; either of which, substituted for x in the original equation, answers the condition, that is, makes the whole equal to nothing.
Ex. 3.
6x + 2x + 2 = 3x^2 + 3x
3x^2 - 5x = 2
x^2 - 5x/3 = 2/3
x^2 - 5x/3 + 25/36 = 25/36 + 2/3
In this example, 25/36 and 2/3 are to be reduced to a common denominator, and since 36 is a complete square, the most convenient, method for the solution, is to multiply both the numerator and
denominator of 2/3 by 12, that the common denominator may be a square number (Art. 20).
Ex. 4.
(149.) Let x.
By transposition,
squaring both sides, 5x + 10 = 64 - 16x + x^2
x^2 - 21x = 10 - 64 = - 54
completing the square, x^2 - 21x + 441/4 = 441/4 - 54
^2 - 21x + 441/4 = 225/4
extracting the square root, x - 21/2 = ± 15/2
By this process two values of x are found; but on trial it appears, that 18 does not answer the condition of the equation, if we suppose that
5x + 10. The reason is, that 5x + 10 is the square of
It should be particularly observed, that since + x x + y is equal to - x x - y, in the multiplication and involution of quantities, new values are always introduced, which, if not again excluded by
the nature of the question, will appear in the final equation.
(150.) Every equation, where the unknown quantity is found in two terms, and it's index in one is twice as great as in the other, may be resolved in the same manner.
Ex. 5.
Let z + 4z^1/2 = 2l.
z + 4z^1/2 + 4 = 21 + 4 = 25
z^1/2 + 2 = ± 5
z^1/2 = ± 5 - 2 = 3, or -7.
therefore z = 9, or 49. (See Art. 149).
Ex. 6.
Let x^- 1 + x^- 1/2 = 6
x^- 1+ x^-1/2 + 1/4 = 6 + 1/4 = 25/4
x^- 1/2 + 1/2 = ± 5/2
and x^- 1/2 = 1/2, or - 1/3.
F 2
Ex. 7.
Let y^4 - 6y^2 - 27 = 0.
y^4 - 6y^2 = 27
y^4 - 6y^2 + 9 = 27 + 9 = 36
y^2 - 3 = ± 6
y^2 = 3 ± 6 = 9, or -3
y = ± 3, or ± √-3.
Ex. 8.
Let y^6 + ry^3 + q^3/27 = 0.
y^6 + ry^3 = - q^3/27
y^6 + ry^3 + r^2/4 = r^2/4 - q^3/27
y^3 + r/2 = ± √r^2/4 - q^3/27.
y^3 = - r/2 ± √r^2/4 - q^3/27.
(151.) When there are more equations and unknown quantities than one, a single equation, involving only one of the unknown quantities, may sometimes be obtained by the rules laid down for the
solution of simple equations; and one of the unknown quantities being discovered, the others may be obtained by substituting it's value in the preceding equations.
Ex. 9.
Letx and y.
From the first equation, 2x - x + y = 8
or x + y = 8
and x = 8 - y
from the 2^d equation, xy + 2y - x - 3y = x + 2
or xy - 2x - y = 2
by substitution,
8y - y^2 - l6 + 2y - y = 2
9y - y^2 = 16 + 2 = 18
y^2 - 9y = - 18
y^2 - 9y + 81/4 = 81/4 - 18 = 9/4
y - 9/2 = ± 3/2
x = 8 - y = 2, or 5.
The solution will often be rendered more simple by particular artifices, the proper application of which is best learned by experience.
Ex. 10.
^x and ^y.
From the second equation, 2xy = 56
and adding this to the first, x^2 + 2xy + y^2 = 121
subtracting it from the same, x^2 - 2xy + y^2 = 9
by extracting the sq. roots, x + y = ± 11
and x - y = ± 3
therefore, 2x = ± 14
x = 7, or - 7
and y = 4, or - 4.
(152.) It may sometimes be of use to substitute for
one of the unknown quantities, the product of the other and a third unknown quantity*.
Ex. 11.
* This substitution may be successfully applied whenever the sum of the dimensions of the unknown quantities, in every term of each equation, is the same.
(153.) The operation may sometimes be shortened by substituting for the unknown quantities, the sum and difference of two others*.
Ex. 12.
Let x and y.
Assume x = z + v
and y = z - v
then x + y = 2z = 12
or z = 6
hence, x = 6 + v
and y = 6 - v
also, since x^2/y + y^2/x = 18
x^3 + y^3 = 18xy
therefore, x^3 + y^3 = 432 + 36v^2
18xy = 648 - 18v^2
but x^3 + y^3 = 18xy
therefore, 432 + 36v^2 = 648 - 18v^2
54v^2 = 216
v^2 = 216/54 = 4
v = ± 2
x = 6 ± 2 = 8 or 4
y = 6
* This artifice may be used, when the unknown quantities in each equation are similarly involved.
† Other methods are given by Dr. Waring, Med. Alg. Cap. 4.
PROBLEMS PRODUCING QUADRATIC EQUATIONS.
PROB. 1.
(154.) A person bought a certain number of oxen for 80 guineas, and if he had bought 4 more for the same sum, they would have cost a guinea a piece less; required the number of oxen and price of
Let x be the number,
then 80/x is the price of each,
and ^d supposition,
80x = 80x + 320 - x^2 - 4x
x^2 + 4x = 320
x^2 + 4x + 4 = 324
x + 2 = ± 18
x = ± 18 - 2 = 16 or - 20.
80/x = 80/16 = 5 guineas, the price of each.
In this, and in many other cases, especially in the solution of philosophical questions, we deduce, from the algebraical process, answers which do not correspond with the conditions. The reason seems
to be, that the algebraical expression is more general than the common language, and the equation which is a proper
representation of the conditions, will also express other conditions, and answer other suppositions. In the foregoing instance, x may either represent a positive or a negative quantity, and cannot in
the operation represent a positive quantity alone (Art. 149); and the equation x is negative, or represents the diminution of stock, will be a proper expression for the solution of the following
problem: A person sells a certain number of oxen for 80 guineas; and, had he sold 4 fewer for the same sum, he would have received a guinea a piece more for them; required the number sold.
PROB. 2.
(155.) To divide a line of 20 inches into two such parts, that the rectangle under the whole and one part, may be equal to the square of the other.
Let x be the greater part, then will 20 - x be the less, and
x^2 + 20x = 400
x^2 + 20x + 100 = 400 + 100 = 500
x + 10 = ± √500
x = + √500 - 10, or - √500 - 10.
The observation contained in the preceding article may be applied here; and it is to be remarked, that the negative values thus deduced are not insignificant, or useless. Here the negative value
shews, that if the line be produced √500+10 inches, the square of the part produced is equal to the rectangle under the line given, and the line made up of the whole and part produced.
PROB. 3.
(156.) To find two numbers, whose sum, product, and the sum of whose squares, are equal to each other.
Let x + y and x - y be the numbers,
their sum is 2x
their product x^2 - y^2
the sum of their squares 2x^2 + 2y^2
and by the question 2x = 2x^2 + 2y^2
or x = x^2 + y^2
also, 2x = x^2 - y^2
therefore, 3x = 2x^2
x = 3/2
2x = x^2 - y^2
or 3 = 9/4 - y^2
y = ± √- 3/2
Since the square of every quantity is positive, a negative quantity has no square root; the conclusion therefore shews that there are no such numbers as the question supposes*.
* Collections of problems, producing simple and quadratic equations, may be seen in Bland's Algebra; Simpson's Exercises; and Dodson's Mathematical Repository, Vol. 1.
ON RATIOS.
(157.) Ratio is the relation which one quantity bears to another in respect of magnitude, the comparison being made by considering what multiple, part, or parts, one is of the other.
Thus, in comparing 6 with 3, we observe that it has a certain magnitude with respect to 3, which it contains twice; again, in comparing it with 2, we see that it has a different relative magnitude,
for it contains 2 three times, or it is greater when compared with 2 than it is when compared with 3. The ratio of a to b is usually expressed by two points placed between them, thus, a : b; and the
former is called the antecedent of the ratio, the latter the consequent.
(158.) COR. 1. When one antecedent is the same multiple, part, or parts, of it's consequent, that another antecedent is of it's consequent, the ratios are equal. Thus, the ratio of 4 : 6 is equal to
the ratio of 2 : 3, i. e. 4 has the same magnitude when compared with 6, that 2 has when compared with 3, since 4/6 = 2/3; the ratio of a : b is equal to the ratio of c : d, if a/b = c/d, because a/b
and c/d, represent the multiple, part, or parts, that a is of b, and c of d.
(159.) COR. 2. If the terms of a ratio be multiplied or divided by the same quantity, the ratio is not altered.
For a/b = ma/mb (Art. 89)
(160.) COR. 3. That ratio is greater than another, whose antecedent is the greater multiple, part, or parts, of it's consequent. Thus, the ratio of 7 : 4 is greater than the ratio of 8 : 5; because 7
/4, or 35/20 is greater than 8/5, or 32/20. These conclusions follow immediately from our idea of ratio.
(161.) A ratio is called a ratio of greater inequality, of less inequality, or of equality, according as the antecedent is greater, less than, or equal to, the consequent.
(162.) A ratio of greater inequality is diminished, and of less inequality increased, by adding any quantity to both it's terms.
If to the terms of the ratio 7 : 4, 1 be added, it becomes the ratio of 8 : 5, which is less than the former, (Art. l60). And in general, let x be added to the terms of the ratio a : b, and it
becomes a + x : b + x, which is greater or less than the former, according as b is greater or less than a.
(163.) Hence, a ratio of greater inequality is increased, and of less inequality diminished, by taking from the terms a quantity less than either of them.
(164.) If the antecedents of any ratios be multiplied together, and also the consequents, a new ratio results, which is said to be compounded of the former.
Thus, ac : bd is said to be compounded of the two a : b and c : d. It is also sometimes called the sum of the ratios; and when the ratio a : b is compounded with itself, the resulting ratio, a^2 : b^
2, is called the double of the ratio of a : b, and if three of these ratios be compounded together, the result a^3 : b^3, is called the triple of the first, &c. Also, the ratio of a : b is said to be
one third of the ratio of a^3 : b^3; and a^1/m : b^1/m is said to be an m^th part of the ratio of a : b.
(165.) Let the first ratio be a : 1; then a^2 : 1, a^3 : 1, …. a^n : 1, are twice, three times, … n times the first ratio; where n, the index of a, shews what multiple, or part, of the ratio a^n : 1,
the first ratio a : 1, is. On this account, the indices 1, 2, 3, … n, are called measures of the ratios a^1 : 1, a^2 : 1, a^3 : 1, …. a^n : 1.
(166.) If the consequent of the preceding ratio be the antecedent of the succeeding one, and any number of such ratios be taken, the ratio which arises from their composition, is that of the first
antecedent to the last consequent.
Let a : b, b : c, c : d, &c. be the ratios, the compound ratio is a x b x c : b x c x d, (Art. 164.); or, dividing by b x c (Art. 159), a : d.
(167.) A ratio of greater inequality, compounded with another, increases it; and a ratio of less inequality diminishes it.
Let the ratio of x : y be compounded with the ratio of a : b, and the resulting ratio ax : by is greater or less than the ratio a : b, according as ax/by is greater or
less than a/b (Art. 160); i. e. according as x is greater or less than y.
(168.) If the difference between the antecedent and consequent of a ratio be small when compared with either of them, the double of the ratio, or the ratio of their squares, is nearly obtained by
doubling this difference.
Let a + x : a be the proposed ratio, where x is small when compared with a; then a^2 + 2ax + x^2: a^2 is the ratio of the squares of the antecedent and consequent; but since x is small when compared
with a, x^2 or x x x is small when compared with 2a x x, and much smaller than a x a; therefore, a^2 + 2ax : a^2, or a + 2x : a (Art. 159), nearly express the ratio of a^2 + 2ax + x^2 : a^2.
Thus, the ratio of the square of 1001 to the square of 1000 is nearly 1002 : 1000; the real ratio is 1002.001 : 1000, in which the antecedent differs from its approximate value, only by one
thousandth part of an unit.
(169.) COR. Hence, the ratio of the square root of a + 2x to the square root of a is the ratio a + x : a, nearly; that is, if the difference of two quantities be small with respect to either of them,
the ratio of their square roots is nearly obtained by halving their difference.
(170.) In the same manner, a + 3x : a; a + 4x : a; a + mx : a; are nearly equal to the ratios mx be small when compared with a.
ON PROPORTION.
(171.) Four quantities are said to be proportionals, when the first is the same multiple, part, or parts, of the second, that the third is of the fourth. That is, when a/b = c/d, the four quantities
a, b, c, d, are called proportionals. This is usually expressed by saying, a is to b as c to d; and thus represented, a : b :: c : d.
The terms a and d are called the extremes, and b and c the means.
(172.) When four quantities are proportionals, the product of the extremes is equal to the product of the means.
Let a, b, c, d, be the four quantities; then, since they are proportionals, a/b = c/d (Art. 171); and by multiplying both sides by bd, ad = bc.
(173.) COR. 1. If the first be to the second as the second to the third, the product of the extremes is equal to the square of the mean.
(174.) COR. 2. Any three terms in a proportion being given, the fourth may be determined from the equation ad = bc.
(175.) If the product of two quantities be equal to the product of two others, the four are proportionals, making the terms of one product the means, and the terms of the other, the extremes.
Let xy = ab, then dividing by ay, x/a = b/y, or x : a :: b : y. (Art. 171).
(176.) If a : b :: c : d, and c : d :: e : f, then will a : b :: e : f.
Because a/b = c/d, and c/d = e/f; therefore, a/b = e/f; or a : b :: e : f.
(177.) If four quantities be proportionals, they are also proportionals when taken inversely.
If a : b :: c : d, then b : a :: d : c. For a/b = c/d, and dividing unity by each of these equal quantities, or taking their reciprocals, b/a = d/c; that is, b : a :: d : c.
(178.) If four quantities be proportionals, they are proportionals when taken alternately.
If a : b :: c : d, then a : c :: b : d.
Because the quantities are proportionals, a/b = c/d and multiplying by b/c, a/c = b/d, or a : c :: b : d.
Unless the four quantities are of the same kind, the alternation cannot take place, because this operation supposes the first to be some multiple, part, or parts, of the third.
One line may have to another line the same ratio that one weight has to another weight, but a line has no relation, in respect of magnitude to a weight. In cases of this kind, if the four quantities
be represented by numbers, or other quantities which are similar, the alternation may take place, and the conclusions drawn from it will be just.
(179.) When four quantities are proportionals, the first together with the second, is to the second, as the third together with the fourth, is to the fourth.
Let a : b :: c : d, then
componendo, a + b : b :: c + d : d.
Because a/b = c/d, by adding unity to each side, a/b + 1 = c/d + 1; that is,
(180.) Also, dividendo, the excess of the first above the second, is to the second, as the excess of the third above the fourth, is to the fourth.
Because a/b = c/d, by subtracting unity from each side, a/b - 1 = c/d - 1; that is,
(181.) Again, convertendo, thefirst is to it's excess above the second, as the third to it's excess above the fourth.
By the last article,
(182.) When four quantities are proportionals, the sum of the first and second is to their difference, as the sum of the third and fourth, to their difference.
Let a : b :: c : d; then, a + b : a - b :: c + d: c - d.
By Art. 179,
(183.) When any number of quantities are proportionals, as one antecedent is to it's consequent, so is the sum of all the antecedents, to the sum of all the consequents.
Let a : b :: c : d :: e : f, &c.
then a : b :: a + c + e : b + d + f.
Because a/b = c/d, ad = bc; in the same manner, af = be;
also, ab = ba; hence, ab + ad + af = ba + bc + be, or,
(184.) When four quantities are proportionals, if the first and second be multiplied, or divided, by any quantity, as also the third and fourth, the resulting quantities will be proportionals.
Let a : b :: c : d, then will ma : mb :: c/n : d/n.
For a/b = c/d; therefore,
or, ma : mb :: c/n : d/n.
(185.) If the first and third be multiplied, or divided, by any quantity, and also the second and fourth, the resulting quantities will be proportionals.
For a/b = c/d; therefore ma/b = mc/d; and
(Art. 69) or, ma : b/n :: mc : d/n.
(186.) COR. Hence, in any proportion, if instead of the second and fourth terms, quantities proportional to them be substituted, we have still a proportion. For b/n and d/n are in the same proportion
with b and d (Art. 184.)
(187.) In two ranks of proportionals, if the corresponding terms be multiplied together, the products will be proportionals.
Let a : b :: c : d
and e : f :: g : h
then will ae : bf :: eg : dh.
Because a/b = c/d and e/f = g/h; therefore, ae/bf = eg/dh; or, ae : bf :: eg : dh.
This is called compounding the proportions.
The proposition is true if applied to any number of proportions.
(188.) If four quantities be proportionals, the like powers, or roots of these quantities, will be proportionals.
Let a : b :: c : d, then a/b = c/d, and a^n/b^n = c^n/d^n; or, a^n : b^n :: c^n : d^n; where n is whole or fractional.
(189.) If two numbers, a and b, be prime to each other, they are the least in that proportion.
G 2
If possible, let a/b = c/d, where a and b are prime to each other, and respectively greater than c and d. If the latter numbers be not prime to each other, divide them by their greatest common
measure. Then divide a by b, and c by d, as in Art. go; thus,
and because a/b = c/d the first quotients m, m, are equal; again, since a/b = m + x/b, and c/d = m + r/d, we have x/b = r/d, or b/x = d/r; also, because b is greater than d, x is greater than r. In
the same manner, x/y = r/s, and y is greater than s, &c. thus the remainder in the latter division will become unity, sooner than the remainder in the former. Let s = 1; then x/y = r, and y, which is
greater than unity, will measure a and b (Art. 92), which is contrary to the supposition.
COR. Hence, if /ab = c/d, and a and b be prime to each other, c and d are equimultiples of a and b.
(190.) If a and b be each of them prime to c, ab is prime to c.
If not, let ab = mr, and c = ms; then since a and b are prime to c, they are respectively prime to ms, and therefore to m; and because ab = mr, we have a/m = r/b;
therefore b is a multiple of m (Art. I89, Cor.), which is absurd.
COR. 1. If b be equal to a, then a^2, and c have no common measure: or a^2/c is a fraction in it's lowest terms.
COR. 2. In the same manner, a^3/c, a^4/c, &c. are fractions in their lowest terms.
COR. 3. If a, b, and c, be each of them prime to d, e, and f, abc is prime to def.
For, if a be prime to d and e, it is prime to de, and if it be prime to de and f, it is prime to def. In the same manner, b and c are prime to def, consequently, abc is prime to def.
COR. 4. If a be prime to b, a^2 is prime to b^2, and a^3 to b^3, &c.
(191.) In the definition of Proportion, it is supposed that one quantity is some determinate multiple, part, or parts of another; or that the fraction arising from the division of one by the other,
(which expresses the multiple, part, or parts, that the former is of the latter), is a determinate fraction. This will be the case, whenever the two quantities have any common measure whatever.
Let x be a common measure of a and b, and let a = mx, b = nx; then a/b = mx/nx = m/n, where m and n are whole numbers.
But it sometimes happens that the quantities are
incommensurable, or adroit of no common measure whatever, as when one represents the circumference of a circle and the other it's diameter; in such cases, the value of a/b cannot be exactly expressed
by any fraction, m/n, whose numerator and denominator are whole numbers; yet a fraction of this kind may be found, which will express it's value to any required degree of accuracy.
Suppose x to be a measure of b, and let b=nx; also let a be greater than mx but less than x is diminished, since nx=b, n is increased, and 1/n diminished; therefore by diminishing x, the difference
between m/n and a/b may be made less than any that can be assigned.
If a and b as well as c and d be incommensurable, and if when a/b lies between m/n and m and n are increased, a/b is equal to c/d. If they are not equal, they musthave some assignable difference, and
because each of them lies between m/n and
is less than 1/n; but since n may, by the supposition, be increased without limit, 1/n may be diminished without limit, that is, it may become less than any assignable magnitude; therefore, a/b and c
/d have no assignable difference; that is, a/b is equal to c/d; and all the preceding propositions, respecting proportionals, are true of the four magnitudes, a, b, c, d.
ON VARIABLE QUANTITIES.
(192.) In the investigation of the relation which varying and dependent quantities bear to each other, the conclusions are more readily obtained, by expressing only two terms in each proportion, than
by retaining the four.
But though, in considering the variation of such quantities, two terms only are expressed, it will be necessary for the Learner to keep constantly in mind that four are supposed; and that the
operations, by which our conclusions are in this case obtained, are in reality the operations of proportionals.
(193) DEF. 1. One quantity is said to vary directly as another, when the two quantities depend wholly upon each other, and in such a manner, that if one be changed, the other is changed in the same
Let A and B be mutually dependent upon each other, in such a way, that if A be changed to any other value a, B must be changed to auother value b,
such, that A : a :: B : b, then A is said to vary directly as B.
Ex. If the altitude of a triangle be invariable, the area varies as the base. For if the base be increased, or diminished, the area is increased or diminished in the same proportion*.
(194.) DEF. 2. One quantity is said to vary inversely as another, when the former cannot be changed in any manner, but the reciprocal of the latter is changed in the same proportion.
A varies inversely as B, (A∝1/B if, when A is changed to a, B be changed to b, in such a manner that A : a :: 1/B : 1/b; or A : a :: b : B.
Ex. If the area of a triangle be given, the base varies inversely as the perpendicular altitude.
Let A and a represent the altitudes, B and b, the bases, of two equal triangles; then A x B = a x b; therefore (Art. 175). A : a :: b : B :: 1/B : 1/b.
(195.) DEF. 3. One quantity is said to vary as two others jointly, if, when the former is changed in any manner, the product of the other two be changed in the same proportion.
Thus, A varies as B and C jointly, (A∝BC), when A cannot be changed to a, but the product BC must be changed to bc, such, that A : a :: BC : bc.
* The sign ∝ placed between two quantities signifies that they vary as eaxh other.
Ex. The area of a triangle varies as it's base and perpendicular altitude jointly. Let A, B, P, represent the area, base, and perpendicular altitude of'one triangle; a, b, p, those of another; then
BP = 2A, and bp = 2a; therefore A/a = BP/bp, or A : a :: BP : bp.
(196.) DEF. 4. One quantity is said to vary directly as a second and inversely as a third, when the first cannot be changed in any manner, but the second multiplied by the reciprocal of the third, is
changed in the same proportion.
A varies directly as B, and inversely as C, (A∝B/C), when A : a : B/C : b/c; A, B, C, and a, b, c, being corresponding values of the three quantities.
Ex. The base of a triangle varies as the area directly and the perpendicular altitude inversely. The notation in the last Article being retained, BP/bp = A/a; and multiplying both sides by p/P we
have B/b = Ap/aP = A/P ÷ a/p; therefore, B : b :: A/P : a/p.
In the following articles, A, B, C, &c. represent corresponding values of any quantities, and a, b, c, &c. any other corresponding values of the same quantities.
(197.) If one quantity vary as a second, and that second as a third, the first varies as the third.
Let A : a :: B : b, and B : b :: C : c, then, (Art. 176), A : a :: C : c. That is, A∝C. In the same manner, if A∝B and B∝1/C, then Aα1/C.
(198.) If two quantities vary respectively as a third, their sum, or difference, or the square root of their product, will vary as the third.
Let A∝C and B∝C, then
Again, A : a :: C: c
and B : b :: C : c.
therefore, AB : ab C^2 : c^2 (Art. 187)
and √AB : √ab :: C : c (Art. 188); that is, Cα√AB.
(199.) If one quantity vary as another, it will also vary as any multiple, or part, of the other.
Let A∝B, and m be any constant quantity, then because A: a:: B : b, A : a :: mB: mb, or A : a :: B/m: b/m (Art. 184); that is, A∝mB or ∝ B/m.
(200.) COR. 1. If A vary as B, A is equal to B multiplied by some invariable quantity. For A : a :: mB : mb, altern. A : mB :: a : mb; if therefore m be so assumed that A = mB, then in all cases, a =
(201.) COR. 2. If we know any corresponding values of A and B, the constant quantity m may be found.
Let a and b be the two values known, then m = a/b; and in general, A = a/b x B.
Ex. Let S ∝ T^2, and when T = 1 suppose S = 16, then S = 16 T^2.
(202.) If one quantity vary as another, any power or root of the former will vary as the same power or root of the latter.
Let A vary as B, then A : a :: B : b, and by Art. 188, A^n : a^n :: B^n : b^n; that is, A^n∝B^n, where n is whole or fractional.
(203.) If one quantity vary as another, and each of them be multiplied or divided by any quantity, variable or invariable, the products or quotients will vary as each other.
Let A vary as B, and let T be any other quantity. Then, by the supposition, A : a :: B : b; therefore AT : at :: BT : bt, and A/T :: a/t :: B/T : b/t (Art. 185).
(204.) COR. If A∝B, dividing both by B, A/B ∝ B/B ∝ that is, A/B is constant.
(205.) If one quantity vary as two others jointly, either of the latter varies as the first directly and the other inversely.
Let V∝FT, then by Art. 203,
F∝V/T or T∝V/F.
(206.) COR. If the product of two quantities be invariable, those quantities vary inversely as each other.
Let B x P be constant, or B x P∝1; by division, B∝1/P.
(207.) If four quantities be always proportionals, and one or two of them be invariable, we may find how the others vary.
Ex. Let p, q, r, s, be always proportionals, and let p be invariable, then s∝qr. Because ps = qr (Art. 172), psαqr; and since p is constant, s∝qr, (Art. 199).
(208.) If one quantity vary as a second, and a third as a fourth, the product of the first and third will vary as the product of the second and fourth.
Let A∝B and C∝D, then AC∝BD.
Because A : a :: B : b
and C : c :: D : d
AC : ac :: BD : bd (Art. 187)
that is, AC∝BD.
(209.) When the increase or decrease of one quantity depends upon the increase or decrease of two others, and it appears that if either of these latter be invariable, the first varies as the other;
when they both vary, the first is as their product.
Let S∝V when T is given,
and S∝T when V is given;
when neither T nor V is given, S∝TV. The variation of S depends upon the variations of the two quantities T and V; let the changes take place separately, and whilst T is changed to t, let S be
changed to S^1; then by the supposition, S : S^1 :: T : t; but this value S^1 will again be changed to s, by the variation of V, and in the same proportion that V is changed; that is, S^1 : s :: V:
v, and by compounding this with the last proportion, S^1S : S^1s :: TV : tv; or, S : s :: TV: tv (Art. 184).
(210.) In the same manner, if there be any number of magnitudes P, Q, R, S, each of which vanes as another, V, when the rest are constant; when they are all changed, V varies as their product.
ON ARITHMETICAL PROGRESSION.
(211.) Quantities are said to be in arithmetical progression, when they increase or decrease by a common difference.
Thus, 1, 3, 5, 7, 9, &c. a, a +1, a + 2b, a + 3b, &c. a, a - b, a - 2b, a - 3b, &c. are in arithmetical progression.
Hence it is manifest, that if a be the first term and a + b the second, a + 2b is the third, a + 3b the fourth, &c. and b the n^th term.
(212.) The sum of a series of quantities in arithmetical progression is found by multiplying the sum of the Jirst and last terms by half the number of terms.
Let a be the first term, b the common difference, n the number of terms, and s the sum of the series: Then,
(213.) COR. Any three of the quantities s, a, n, b, being given, the fourth may be found from the equation
Ex. 1.
To find the sum of 14 terms of the series 1, 3, 5, 7, &c.
Here a = 1, 6 = 2, n = 14; therefore,
Ex. 2.
Required the sum of 9 terms of the series 11, 9, 7, 5, &c.
In this case a = 11, b = -2, n = 9; therefore, s =
Ex. 3.
If the first term of an arithmetical progression be 14, and the sum of 8 terms be 28, what is the common difference?
therefore, s = 28, a = 14, n = 8 therefore,
ON GEOMETRICAL PROGRESSION.
(214.) Quantities are said to be in geometrical progression, or continual proportion, when the first is to the second, as the second to the third, and as the third to the fourth, &c. that is, when
every succeeding term is a certain multiple, or part of the preceding term.
If a be the first term, and ar the second, the series will be a, ar, ar^2, ar^3, ar^3, &c.
For a : ar :: ar : ar^2 :: ar^2 : ar^3, &c.
(215.) The constant multiplier is called the common ratio, and it may be found by dividing the second term by the first, or any other term by that which precedes it.
(216.) If quantities be in geometrical progression, their differences are in geometrical progression.
Let a, ar,ar^2, ar^3, ar^4, &c. be the quantities; their differences, ar - a, ar^2 - ar, ar^4 - ar^3, &c. form a geometrical progression, whose first term is ar - a, and common ratio r.
(217.) Quantities in geometrical progression are proportional to their differences.
For a : ar :: ar - a : ar^2 - ar :: ar^2 - ar : ar^3 - ar^2, &c.
(218.) In any geometrical progression, the first term is to the third, as the square of the first to the square of the second.
Let a, ar, ar^2 &c. be the progression; then a : ar^2 :: a^2 : a^2r^2.
Hence it appears, that the duplicate ratio of two quantities (Euc. Def. 10. 5), is the ratio of their squares.
(219.) In the same manner it may be shewn, that the first term is to the n^th power, to the second raised to the same power.
(220.) If any terms be taken at equal intervals in a geometrical progression, they will be in geometrical progression.
Let a, ar … ar^n ….. ar^2n ….. ar^3n ….. &c. be the progression, then a, ar, ar^n, ar^2n, ar^3n &c. are at the interval of n terms, and form a geometrical progression, whose common ratio is r^n.
(221.) If the two extremes, and the number of terms in a geometrical progression be given, the means may be found.
Let a and b be the extremes, n the number of terms, and r the common ratio; then the progression is a, ar, ar^2, ar^3 ….. ar^n-1;; and since b is the last term, ar^n-l = b, and r being thus known,
the terms of the progression ar, ar^3, ar^3, &c. are known.
(222.) To find the sum of a series of quantities in geometrical progression, subtract thefirst term from the product of the last term and common ratio, and divide the remainder by the difference
between the common ratio and unity.
Let a be the first term, r the common ratio, n the
number of terms, y the last term, and s the sum of the series:
Then a + ar + ar^2 …. + ar^n-2 + ar^n-1 = s; and multiplying both sides by r,
(223) COR. 1. Prom the equation s, r, y, a, being given, the fourth may be found.
(224.) COR. 2. When r is a proper fraction, as n increases, the value of r^n, or of ar^n, decreases, and when n is increased without limit, ar^n becomes less, with respect to a, than any magnitude
that can be assigned; and therefore
This quantity sum of the series, is the limit to which the sum of the terms approaches, but never actually attains; it is however the true representative of the series continued sinefine; for this
series arises from the division of a by 1 - r; and therefore
Ex. 1.
To find the sum of 20 terms of the series, 1, 2, 4, 8, &c.
Here a = 1, r = 2, n = 20; therefore,
Ex. 2.
Required the sum of 12 terms of the series 64, l6, 4, &c.
Here a = 64, r 1/4, n = 12; therefore,
Ex. 3.
Required the sum of 12 terms of the series 1, -3, 9, -27, &c.
In this case, a = 1, r = -3, n = 12;
Ex. 4.
To find the sum of the series 1 - 1/2 + 1/4 - 1/8 &c. in infinitum.
Here a = 1, r = 1/2; therefore (Art. 224),
(225.) Recurring decimals are quantities in geometrical progression, where 1/10, 1/100, 1/1000, &c. is the common ratio, according as one, two, three, &c.
figures recur; and the vulgar fraction, corresponding to such a decimal, is found by summing the series.
Ex. 5.
Required the vulgar fraction corresponding to the decimal.123123123 &c.
Let.123123123 &c. = s; then, as in Art. 222, multiply both sides by 1000; and 123.123123123 &c. = 1000s, and by subtracting the former equation from the latter, 123 = 999s; therefore s = 123/999 = 41
ON PERMUTATIONS AND COMBINATIONS.
(226.) The different orders in which any quantities can be arranged, are called their permutations.
Thus, the permutations of a, b, c, taken two and two together, are ab, ba, ac, ca, bc, cb.
(227.) The combinations of quantities are the different collections that can be formed out of them, without regarding the order in which the quantities are placed.
Thus, ab, ac, bc, are the combinations of the quantities a, b, c, taken two and two; ab and ba, though different permutations, forming the same combination.
(228.) The number of permutations that can be formed out of n quantities, taken two and two together, is taken three and three together, is
In n things, a, b, c, d, &c. a may be placed before each of the rest, and thus form
H 2
the same manner, there are b stands first, and so of the rest; therefore there are, upon the whole, ab, ba, ac, ca, &c.
Again, of n - 1 things b, c, d, &c. taken two and two together, there are a to each of these, there are a stands first; the same may be said of b, c, d, &c. therefore there are, upon the whole,
(229.) COR. By following the same method, it appears, that in n things, if r of them be always taken together, there are
(230.) The number of combinations that can be formed out of n things, taken two and two together, is
The number of permutations in the first case is ab, admits of two permutations, ab, ba; therefore there are twice as many permutations as combinations, or the number of combinations is
Again, there are n things, taken three and three together; and each combination of three things admits of 3.2.1 permutations (Art. 228); therefore there are 3.2.1 times as
many permutations as combinations, and consequently the number of combinations is
(231.) COR. In the same manner it appears, that the number of combinations, in n things, each of which contains r of them, is
ON THE BINOMIAL THEOREM.
(232.) The method of raising a binomial to any power, by repeated multiplication, has before been laid down (Art. 117). The same thing may be done much more expeditiously by the following general
rule, which is called the Binomial Theorem.
Let x + a be the given binomial; and it's n^th power is x, beginning from n is diminished by unity, and the index of a, beginning from 0, is increased by unity, in every succeeding term. Also, the
coefficient of each term is found by multiplying the coefficient of the preceding term by the index of x in that term, and dividing by the index of a increased by unity.
To investigate this theorem, suppose n quantities, x + a, x + b, x + c, &c. multiplied together; it is manifest that the first term of the product will be x^n, and that, x^n-1, x^n-2 &c. the other
powers of x, will all be found in the remaining terms, with different combinations of a, b, c, d, &c.
therefore A = P + a, b = Q + aP, &c. that is, by introducing one factor, x = a, into the product, the coefficient of the second term is increased by a, and by introducing x = b into the product, that
coefficient is increased by b, &c. therefore the whole value of A is a + b + c + d + &c. Again, by the introduction of one factor, x + a, the coefficient of the third term, Q, is increased by aP, i.
e. by a multiplied by the preceding value of A, or by
In the same manner,
and so on; that is, A is the sum of the quantities a,
b, c, &c. B is the sum of the products of every two; C is the sum of the products of every three, &c. &c.
Let a = b = c = d= &c. then A, or a + b + c + d + &c. = nz; B = ab + ac + bc + &c. = a^2 x the number of combinations fo a, b, c, d, &c. taken two and two,
This proof applies only to those cases in which n is a whole positive number; but the rule holds when the index is fractional, or negative.
And multiplying by w^n = 1 + x,
And since these must form the same series, m/n = a; the same manner, the proposition may be proved when
* See Encyclop±dia Britan. vol. I. page 651.
It may not be improper to state, briefly, the nature of the proof alluded to in former Editions of this work, as the principle is of extensive application.
It is usually taken for granted, and may without much difficulty be proved, that whether r is positive or negative, whole or fractional, ^2 + cx^3 + &c. where a, b, c, &c. are definite magnitudes,
not dependent on the value of x. Let r is any whole positive number a = r, and z=0 when r is 1, 2, 3, &c. or z contains the factors r - 1, r - 2, r - 3, &c. in inf. (Art. 269); hence, r + z cannot be
expressed in finite terms unless Q, that is, unless z = 0; and since we know that a, or r + z may be expressed in infinite terms, it fallows that z = 0, and that a = r. In the same manner it appears
Ex. 1.
Ex. 2.
Ex. 3.
Ex. 4.
(233.) COR. 1. If either term of the binomial be negative, it's odd powers will be negative (Art. 114);
and consequently the signs of the terms, in which those odd powers are found, will be changed.
Ex. 5.
(234.) COR. 2. If the index of the power, to which a binomial is to be raised, be a whole positive number, the series will terminate, because the coefficient
(235.) COR. 3. When the index is a whole positive number, the coefficients of the terms taken backward, from the end of the series, are respectively equal to the coefficients of the corresponding
terms taken forward, from the beginning.
Thus, if a + x be raised to the 8^th power, the coefficients are 1, 8, 28, 56, 70, 56, 28, 8, 1.
In general, the coefficient of the
(236.) COR. 4. The sum of the coefficients 1 + n + ^n.
For if x=a=1, then ^n = 1 + n +
(237.) COR. 5. Since
By subtractingone series from the other,
(238.) The trinomial a + b + c may be raised to any power by considering two terms as one factor, and proceeding as before.
a^n - 2 + &c. and the powers of b + c may be determined by the binomial theorem.
The theorem by which a multinomial may be raised to any power, is given by Mr. Demoivre, Analyt. page 87.
ON SURDS.
(239.) A quantity may be reduced to the form of a given surd, by raising it to the power whose root the sUrd expresses, and affixing the radical sign.
(240.) If two surds have the same index, their product is found by taking the product of the quantities under the signs, and retaining the common index.
(241.) If the surds have coefficients, the product of these coefficients must be prefixed.
Thus, a√x×b √y = ab √xy
(242.) We must observe, that √-a^2, x √-a^2, or the square of √-a^2, is a^2 because it is that quantity whose square root is √-a^2.
COR. Hence, ^2 + b^2.
(243.) If the indices of two surds have a common denominator, let the quantities be raised to the powers expressed by their respective numerators, and their product may be found as before.
(244.) If the indices have not a common denominator, they may be transformed to others of the same
value, with a common denominator, and their product found as in Art. 243.
(245.) If two surds have the same rational quantity under the radical signs, their product isfouiid by making the sum of the indices the index of that quantity.
Thus, a^1/n x a^1/m = a^m/mn x a^n/mn (Art. 239), = a^m+n/mn; (See Art. 124.)
Ex. 2.
(246.) If the indices of two quantities have a common denominator, the quotient of one divided by the other is obtained by raising them, respectively, to the powers expressed by the numerators of
their indices, and extracting that root of the quotient which is expressed by the common denominator.
(247.) If the indices have not a common denominator, reduce them to others of the same value, with a common denominator, and proceed as before.
(248.) If two surds have the same rational quantity under the radical signs, their quotient is obtained by making the difference of the indices, the index of that quantity.
Thus, a^1/n divided by a^1/m, or a^m/mn divided by a^n/mn (Art. 239), that is mn, produce equal results a^m/a^n and a^m-n.
Ex. 2. 2^1/2 ÷ 2^1/3 = 2 ^3/6 ÷ 2^2/6 = 2^1/6.
(249.) The coefficient of a surd may be introduced under the radical sign, by first reducing it to the form of the surd, and then multiplying as in Art. 240.
(250.) Conversely, any quantity may be made the coefficient of a surd, if every part under the sign be divided by this quantity raised to the power whose root the sign expresses.
(251.)When surds have the same irrational part, their sum or difference is found by affixing the sum or difference of their coefficients to that irrational part.
(252.) The square root of a quantity cannot be partly rational and partly a quadratic surd.
If possible, let = √n = a + √m; then by squaring both sides, n = a^2 + 2a√m + m, and 2a√m = n - a^2 - m; therefore,
(253.) In any equation x + √y = a + √b, consisting of rational quantities and quadratic surds, the rational parts on each side are equal, and also the irrational parts.
If x be not equal to a, let x = a + m; then a + m + √y = a +√b, or m + √y = √b; that is, √b is partly rational and partly a quadratic surd, which is impossible (Art. 252); therefore x = a, and
consequently √y = √b.
(254.) If two quadratic surds √x and √y cannot be reduced to others which have the same irrational part, their product is irrational.
If possible, let √xy = rx, where r is a whole number or a fraction. Then xy = r^2x^2, and y = r^2x, therefore √y = r √x, that is, √y and √x may be so reduced as to have the same irrational part,
which is contrary to the supposition.
(255.) One quadratic surd, √x, cannot be made up of two others, √m and √n, which have not the same irrational part.
If possible, let √x = √m + √n; then by squaring both sides, x = m + n + 2√mn, and x - m - n = 2√mn, a rational quantity equal to an irrational one (Art. 254); which is absurd.
(256.) Let a a rational quantity, b a quadratic surd, x and y one or both of them, quadratic surds, then
By involution, c is even, the odd terms of the series are rational, and the even terms irrational; therefore ^c - 3y^3 + &c. and consequently
(257.) If c be an odd number,a and b, one or both quadratic surds, and x and y involve the same surds that a and b do, respectively, and also
By involution, x does, because c is an odd number, and the even terms, the same surd that y does; and since no part of a can consist of y, or it's parts (Art. 255), ^c -
(258.)The square root of a binomial, one of whose factors is a quadratic surd, and the other rational, may sometimes be expressed by a binomial, one or both of whose factors are quadratic surds.
Let a + √b the given binomial, and suppose x and y are one or both quadratic surds;
by multiplication
a + √b = x^2 + 2xy +y^2
and a = x^2 + y^2 (Art. 253),
by addition,
by subtraction, a - √^2 - b = 2y^2, and the root x + y =
From this conclusion it appears, that the square root of a + √b can only be expressed by a binomial of the form x + y, one or both of which are quadratic surds, when a^2 - b is a perfect square.
The values of x and y are ±
Required the square root of 3 + 2 √2.
In this case, a = 3, √b = 2 √2, and a^2 - b = 9 - 8 = 1; hence,
= 1; therefore, x + y = √2 + 1.
Ex. 2.
Required the square root of 7 - 2 √10.
Here, a = 7, √b = 2 √10, a^2 - b = 9; hence,
Ex. 3.
Required the square root of 4√- 5 - 1.
Here, a = - 1, √ b = 4 √ - 5, a^2 - b = 81; hence, x =
(259.) The c^th root of a binomial, one or both of whose factors are possible quadratic surds, may sometimes be expressed by a binomial of that description.
Let A + B be the given binomial surd, in which both terms are possible; the quantities under the radical signs whole numbers; and A is greater than B.
by mult.^th power = n^c, then x^2 - y^2 = n.
Again, by squaring both sides of the two first equations, we have
s and t the nearest integer values of ^2 + 2y^2 = s + t, exactly,
I 2
when the root of the proposed quantity can be obtained. We have therefore these two equations
x^2 - y^2 = n
x^2 + y ^2 =
therefore, if the root of the binomial ^th root of A + B is
Ex. 1.
Required the cube root of 10 + √108.
In this case, A = √108, B = 10; A^2 - B^2 = 8, and 8Q = n^3; if therefore Q = l, n = 2. Also,
f is some fraction less than unity; therefore s = 7, t = 1; and
If therefore the cube root of 10 + √108 can be expressed in the proposed form, it is √3 + 1; which, on trial, is found to succeed.
Let the cube root of 11 + 5 √7 be required.
Here, A= 5 √7 B=11, A^2 - B^2 = 54; therefore 54Q = n^3, and if Q = 4, n^3 = 2l6, and n = 6.
Ex. 3.
To find the cube root of 2√7 + 3√3.
Here, A = 2√7, B = 3√3, A^2 - B^2 = 1; hence Q = 1, and n = 1.
(260.) In the same manner, the c^th root of A - B is A is less than B, n is negative.
(261.) In the operation, it is required to find a number Q, such, that ^2 - B^2 be divisible by a, a, …..(m); b, b, ….. (n); d, d, ….. (r); &c. in succession, that is, let A^2 - B^2 = a^m b^nd^r &c.
also let
Q = a^xb^yd^z &c. then ^th power, if x, y, z, &c. be so assumed that m + x, n + y, r + z, are respectively equal to c, or some multiple of c. Thus, to find a number which multiplied by 180 will
produce a perfect cube, divide 180 as often as possible by 2, 3, 5, &c. and it appears that 2.2.3.3.5 = 180; if, therefore, it be multiplied by 2.3.5.5, it becomes 2^3.3^3.5^3, or
If the index of the root to be extracted be an even number, the square root may be found by Art. 258, when it can be expressed by a binomial of the same description; and if half the index be an even
number, the square root may again be taken, and so on, until the root remaining to be extracted is expressed by an odd number.
If A and B be divided by their greatest common measure, either integer or quadratic surd, in all cases where the c^th root can be obtained by this method, Q will either be unity, or some power of 2,
less than 2^c. See Dr. Waring's Med. Alg. p. 287.
THE END OF PART I.
ELEMENTS OF ALGEBRA.
PART II.
ON THE NATURE OF EQUATIONS.
(262.) Any equation, involving the powers of one unknown quantity, may be reduced to the form x^n - px^n - 1 + qx^n - 2- &c. = 0; where the whole expression is made equal to nothing, the terms are
arranged according to the dimensions of the unknown quantity, the coefficient of the highest dimension is unity, and the coefficients, p, q, r, &c. are affected with their proper signs.
An equation, where the index of the highest power of the unknown quantity is n, is said to be of n dimensions; and in speaking simply of an equation of n dimensions, we understand one reduced to the
above form, unless the contrary be expressed.
(263.) Any quantity, x^n - px^n - 1 + qx^n - 2 … + Px - Q, may be supposed to arise from the multiplication of n factors.
For, by actually multiplying the factors together, we obtain a quantity of n dimensions, similar to the proposed quantity, x^n - px^n - 1 + qx^n - 2 - &c. and if a, b, c, &c. can be so assumed that
the coefficients of the corresponding terms in the two quantities become equal, the whole expressions coincide. And these coefficients may be made equal, because we shall have n equations, to
determine the n quantities a, b, c, d, &c. (See Art. 145). If then the quantities, a, b, c, d, &c. be properly assumed, the equation, x^n - px^n - 1 + qx^n - 2 - &c. = 0, is the same with
We cannot suppose x^n - px^n - 1 + qx^n - 2 - &c. to be made up of more, or fewer, then n simple factors; because, on either supposition, the result would not be of the same number of dimensions with
the proposed quantity.
(264.) The quantities a, b, c, d, &c. are called roots of the equation, or values of x; because, if any one of them be substituted for x, the whole expression becomes nothing, which is the only
condition proposed by the equation.
* This proof, which is usually given, is imperfect; for if the n equations be reduced to one, containing only one of the quantities, a, this equation is a^n - pa^n - 2-1 + qa^n - 2 - &c. = 0, which
exactly coincides with the proposed equation; in supposing therefore that a can be found, we take for granted the proposition to be proved. The subject has exercised the skill of the most eminent
algebraical writers, but their reasonings upon it are of too abstruse a nature to be introduced in this place: The Learner must, at present, take for granted, that an equation may be made up of as
many simple factors as it has dimensions; and when he is farther advanced in the subject, he may consult Dr. Waring's Algebra, page 272; Phil. Trans. 1798, page 369; and' Demonstratio Nova; C. F.
Gauss, Helmstadt, 1799.
(265.) If the signs of the terms of an equation be all positive, it cannot have a positive root; and if the signs be alternately positive and negative, it cannot have a negative root.
If x^n + px^n - 1 + qx^n - 2 + &c. = 0, and any positive quantity, a, be substituted for x, the result is positive; consequently a is not a root of the equation.
If x^n - px^n - 1 + qx^n - 2 - &c. = 0, and a negative quantity, - a, be substituted for x, when n is an odd number the result is negative, and when n is an even number the result is positive;
therefore - a cannot in either case, be a root of the equation.
(266.) Every equation has as many roots as it has dimensions.
If x^n - px^n - 1 + qx^n - 2 + &c. = 0, or n quantities, a, b, c, &c. each of which, when substituted for x, makes the whole = 0, because in each case one of the factors becomes = 0; but any quantity
different from these, as e, when substituted for x, gives the product e will not answer the condition which the equation requires.
(267.) When one of the roots, a, is obtained, the equation ^n - px^n - 1 + qx^n - 2 - &c. = 0, is divisible by x - a, without a remainder, and is thus reducible to b, c, &c.
Ex. One root of the equation y^3 + 1 = 0, is - 1, or y + 1 = 0, and the equation may be depressed to a quadratic, in the following manner:
Hence, the other two roots are the roots of the quadratic y^2 - y + 1 = 0.
If two roots, a and b, be obtained, the equation is divisible by
Ex. Two roots of the equation x^6 - 1 = 0, are + 1 and - 1, or x - 1 = 0, and x + 1 = 0; therefore it may be depressed to a biquadratic by dividing by
Hence, the equation x^4 + x^2 + 1 = 0, contains the other four roots of the proposed equation.
(268.) Conversely, if the equation be divisible by x - a, without remainder, a is a root; if by
a and b are both roots; &c. Let Q be the quotient arising from the division, then the equation is a or b be substituted for x, the whole vanishes.
(269.) COR. 1. If a, b, c, &c. be the roots of an equation, that equation is ^3 - 7x + 6 = 0.
(270.) COR. 2. If the last term of an equation vanish, it is of the form x^n - px^n - 1 + qx^n - 2 …… Px = 0, which is divisible by x, or x - 0, without remainder; therefore, 0 is one of it's roots;
if the two last terms vanish, it is divisible by x^2, without remainder, or by
(271.) The coefficient of the second term of an equation, with it's proper sign, is the sum of the roots with their signs changed; the coefficient of the third term, is the sum of the products of
every two roots with their signs changed; the coefficient of the fourth term, is the sum of the products of every three roots, with their signs changed, &c. and the last term is the product of all
the roots, with their signs changed.
Let a, b, c, &c. be the roots of the equation; then
does not contain x, is the product of all those quantities.
(272.) COR. 1. If the roots be all positive, the signs of the terms will be alternately + and -. For the product of an odd number of negative quantities is negative, and of an even number, positive.
But if the roots be all negative, the signs of all the terms will be positive, because the equation arises from the multiplication of the positive quantities
(273.) COR. 2. Let the roots of the equation x^n - px^n - 1 … + Px^2 - Qx + R = 0, be a, b, c, d, &c. then R = abcd (n); Q = bcd (n -1) + acd (n - 1) + abd (n - 1) + &c. and Q/R = 1/a + 1/b + 1/c + &
c. that is, the coefficient of the last term but one, divided by the last term, is the sum of the reciprocals of the roots. In the same manner, P/R = 1/ab + 1/ac + 1/ad + 1/bc + 1/bd + 1/cd + &c.
(274.) Any equation, it has been observed, may be conceived to arise from the multiplication of the simple factors
Thus, a cubic equation may be supposed to be the product of three simple factors,
(275.) If the coefficients, in any equation, be whole numbers, the equation cannot have a fractional root.
If possible, let a/b, a fraction in it's lowest terms, be a root of the equation x^n - px^n - 1 + qx^n - 2 - &c. = 0; then ^n/b = pa^n - 1 - qa^n - 2b + &c. that is, a^n/b, a fraction in it's lowest
terms (Art 190. Cor. 2), is equal to a whole number, which is absurd; therefore a/b is not a root of the equation.
(276.) The roots a, b, c, &c. of an equation are impossible, when, as is frequently the case, they involve the square root of a negative quantity.
(277.) Impossible roots enter equations by pairs.
If a + √- b^2 be a root of the equation x^n - px^n - 1 + &c. = 0, then a - √- b^2 is also a root.
In the equation, for x substitute a + √- b^2, and the result will consist of two parts, possible quantities, which involve the powers of a and the even powers of √- b^2, and impossible quantities
which involve the odd powers of √- b^2; call the sum of the possible quantities A, and of the impossible B, then A + B is the whole result. Let now, a - √- b^2, be substituted for x, and the possible
part of the result will be the same as before, and the impossible part which arises
from the odd powers of - √- b^2, will only differ from the former impossible part in it's sign; therefore the result is A - B; and since by the supposition a + √- b^2 is a root of the equation, A + B
= 0; in which, as no part of A can be destroyed by B, A = 0 and B = 0; therefore A - B = 0, that is, the result, arising from the substitution of a - √- b^2 for x, is nothing; or a - √- b^2 is a root
of the equation.
The truth of the proposition is also manifest from this consideration, that if x^2 - mx + n = 0 be a quadratic factor of the equation (Art. 274), two values of x are
(278.) COR. 1. Hence it follows, that an equation of an odd number of dimensions, must have, at least one possible root; unless some of the coefficients are impossible, in which case the equation may
have an odd number of impossible roots.
(279.) COR. 2. By the same mode of reasoning it appears that, when the coefficients are rational, surd roots of the form ± √b, or a ± √b enter equations by pairs.
ON THE TRANSFORMATION OF EQUATIONS.
(280.) If the signs of all the terms in an equation be changed, it's roots are not altered.
Let a, b, c, &c. are substituted for x.
(281.) If the signs of the alternate terms, beginning with the second, be changed, the signs of all the roots are changed.
Let x^n - px^n - 1 - qx^n - 2 + &c. = 0, be an equation whose roots are a, b, - c, &c. for x substitute - y, and, when n is an even number, y^n + py^n - 1 - qy^n - 2 - &c. = 0; but when n is an odd
number, - y^n - py^n - 1 + qy^n - 2 + &c. = 0, or changing all the signs (Art. 280), y^n + py^n - 1 - qy^n - 2 - &c.= 0, as before; and since x = - y, or y = - x, the values of y are - a, - b, + c, &
Ex. Let it be required to change the signs of the roots of the equation x^3 - qx + r = 0.
This equation with all it's terms is x^3 + 0 - qx + r = 0; and changing the signs of the alternate terms, we have x^3 - 0 - qx - r = 0, or x^3 - qx - r = 0, an equation whose roots differ from the
roots of the former, only in their signs.
(282.) To transform an equation into one whose roots are greater, or less than the corresponding roots of the original equation, by any given quantity.
Let the roots of the equation x^3 - px^2 + qx - r = 0, be a, b, c; to transform it into one whose roots are a + e, b + e, c + e.
Assume x + e = y, or x = y - e; then,
In this last equation, since y = x + e, the values of y are a + e, b + e, c + e.
If y + e be substituted for x, the values of y in the resulting equation will be a - e, b - e, c - e.
(283.) In general, let the roots of the equation x^n - px^n - 1 + qx^n - 2 - &c. = 0, be a, b, c, &c. Assume y = x - e, or x = y + e, and by substitution,
and since y = x - e, the values of y, in this equation, are a - e, b - e, c - e, &c.
We may observe, that the last term of the transformed equation, e^n - pe^n - 1 + qe^n - 2 - &c. is the original quantity, with e in the place of x; the coefficient of the last term but one, with it's
proper sign, is obtained by multiplying every term of e^n - pe^n - 1 + qe^n - 2 - &c. by the index of e in that term, and diminishing the index by unity; the coefficient of the last term but two,
(284.) One use of this transformation is, to take away any term out of an equation. Thus, to transform an equation into one which shall want the second term, e must be so assumed that ne - p = 0, or
e = p/n (where p is the coefficient of the second term
with it's sign changed, and n the index of the highest power of the unknown quantity); and if the roots of the transformed equation can be found, the roots of the original equation may also be found,
because x = y + p/n.
Ex. 1. Let x^2 - px + q = 0 be the proposed equation. Assume x = y + p/2, then
or y^2 - p^2/4 + q = 0; hence, y^2 = p^2/4 - q, and
Ex. 2. To transform the equation x^3 - 9x^2 + 7x + 12 = 0 into one which shall want the second term.
Assume x = y + 3, then
that is, y^3 - 20y - 21 = 0; and if the values of y be a, b, c, the values of x are a + 3, b + 3, and c + 3.
(285.) To take away the third term of the equation, e must be so assumed, that
In this case we shall have a quadratic to solve; and in general, to take out the m^th term, by this method, it will be necessary to solve an equation of m - 1 dimensions.
Ex. To transform the equation x^3 - 6x^2 + 9x - 1 = 0 into one which shall want the third term.
Here n = 3, p = 6 and q = 9; therefore ^2 - 12e + 9 = 0, or e^2 - 4e + 3 = 0, in which the values of e are 1 and 3. Let x = y + 3, then
that is, y^3 + 3y^2 - 1 = 0. In the same manner, if x = y + 1, the transformed equation will want the third term.
(286.) To transform an equation into one whose roots are the roots of the original equation multiplied by any given quantity.
Let the roots of the equation x^n - px^n - 1 + qx^n - 2 - &c. = 0, be a, b, c, &c. to transform it into one whose roots are ma, mb, mc, &c.
Assume y = mx, or x = y/m; then substitute this value for x, and the equation becomes ^n, y^n - mpy^n - 1 + m^2qy^2 - &c. = 0, an equation whose roots are ma, mb, mc, &c.
This equation differs from the former, only in having the successive terms multiplied by 1, m, m^2, m^3, &c.
(287.) COR. 1. By this transformation, an equation may be cleared of fractions; or if the first term be affected with a coefficient, that coefficient may be taken away.
Ex. 1.^n - npx^n-1 + mqx^n-2 - mnrx^n-3 + &c. =0; transform this equation into one whose roots are mn times as great, and mny^n- mn^2py^n-1 + m^3n^2qy^n-2 - m^4n^4ry^n-3 + &c. = 0, or y^n - npy^n - 1
+ m^2nqy^n - 2 - m^3n^3ry^n - 3 + &c. = 0, an equation of the usual form.
Ex. 2. Let it be required to transform the equation 3y^3 - qy + r = 0 into one in which the coefficient of the highest term is unity.
The equation with all it's terms is 3y^3 + 0 - qy + r = 0; transform it into one whose roots are three times as great, by substituting z/3 fory; then 3z^3 + 3 × 0 - 9qz + 27r = 0, or z^3 - 3qz + 9r =
0, an equation of the required form.
(288.) COR. 2. In any equation, if the coefficients of the second, third, fourth terms, &c. be divisible, respectively, by m, m^2, m^3, &c. the roots have a common measure, m.
(289.) COR. 3. An equation may be transformed into one whose roots are 1/m parts of the roots of the
K 2
former, by dividing the second, third, fourth, &c. terms by m, m^2, m^3, &c. respectively.
(290.) To transform an equation into one whose roots are the reciprocals of the roots of the given equation.
Let the roots of the equation x^n - px^n-1 + qx^n-2 …. - Px + Q=0, be a, b, c, &c. to transform it into one whose roots are 1/a, 1/b, 1/c, &c.
Assume y = 1/x or x = 1/y; then, by substitution, 1/y^n - p/y^n - 1 + q/y^n - 2 …. - P/y + Q = 0, and multiplying by y^n, 1 - py + qy^2 … Py^n - 1 + qy^n = 0; that is Qy^n - Py^n-1 …. + qy^2 - py + 1
= 0; and since y = 1/x, the values of y are1/a, 1/b, 1/c, &c.
(291.) COR. l. If any term in the given equation be wanting, the corresponding term will be wanting in the transformed equation; thus, if the original equation want the second term, the transformed
equation will want the last term but one, &c. because the coefficients in the transformed equation are the coefficients in the original equation, in an inverted order.
(292.) COR. 2. If the coefficients of the terms, taken from the beginning of an equation, be the same with the coefficients of the corresponding terms, taken from the end, with the same signs, the
transformed will coincide with the original equation, and their roots will therefore be the same.
Let a, b, c, roots of the equation x^n - px^n-1 + qx^n-2 …. + qx^2 - px + 1= 0; the transformed equation will be y^n - py^n-1 + qy^n-2 …. + qy^2 - py + 1 = 0, and a, b, c, must also be roots of this
equation; but the roots of this equation are the reciprocals of the roots of the original equation, therefore 1/a, 1/b, 1/c, are also roots of the original equation.
Ex. The roots of the equations, x^4 - px^3 + qx^2 - px + 1=0; x^4 + qx^2 +1=0; and x^4 + l=0, are of the form a, b, 1/a, 1/b.
(293.) COR. 3. If the equation be of an odd number of dimensions, or if the middle term of an equation of an even number of dimensions be wanting, the same thing will hold when the signs of the
corresponding terms, taken from the beginning and end, are different.
Ex. The roots of the equation x^3 - px^2 + px - 1=0, are of the form 1, a, 1/a. For in this case, if the signs of all the terms of the transformed equation be changed, it will coincide with the
original equation; and by changing the signs of all the terms, we do not alter the roots. (Art. 280).
(294.) The equations described in the two last corollaries are called recurring equations.
(295.) COR. 4. One root of a recurring equation of an odd number of dimensions, will be +1, or - 1,
according as the sign of the last term is - or +; and the rest will be of the form a, 1/a, b, 1/b; &c.
For if + 1, in the former case, and -1, in the latter, be substituted for the unknown quantity, the whole vanishes; thus, if x^5 - px^4 + qx^3 - qx^3 + px - 1 = 0, and for x we substitute +1, it
becomes 1- p + q - q+p - 1= 0; and it appears from Art. 290, that if a, b, c, &c. be roots of the equation, 1/a, 1/b, 1/c, &c. are also roots.
(296.) To transform an equation into one whose roots are the squares of the roots of the given equation.
Let x^n - px^n-1 + qx^n-2 - rx^n-3 + sx ^n-4 - &c. = 0; by transposition, x^n+qx^n - 2 + sx^n - 4 + &c. = px^n - 1 + rx^n - 3 +&c.; and by squaring both sides, x^2n + 2qx^2n-2 + ^2x^2n-2 + 2prx^2n-4
+ &c. and again by transp. ^2, then y are the squares of the values of x.
(297.) COR. If the roots of the original equation be a, b, c, &c. then p^2 - 2q = a^2 + b^2 + c^2 + &c. q^2 - 2pr + 2s = a^2b^2 + a^2c^2 + &c. (Art. 271).
Other transformations may be seen in Dr. Waring's Meditationes Algebraicœ, Prob. 5; and indeed, whoever would fully understand the nature of equations, must have recourse to that Work.
ON THE LIMITS OF THE ROOTS OF EQUATIONS.
(298.) If a, b, c, - d, &c. be the roots of an equation, taken in order, that is, a greater than b, b greater than c, &c.* the equation is a, but greater than b, be substituted, the result will be
negative, because the first factor will be negative and the rest positive. If a quantity between b and c be substituted, the result will again be positive, because the two first factors are negative
and the rest positive, and so on. Thus, quantities which are limits to the roots of an equation, or between which the roots lie, if substituted, successively, for the unknown quantity, give results
alternately positive and negative.
(299.) Conversely, if two magnitudes, when substituted for the unknown quantity, give results affected with different signs, an odd number of roots must lie between them; and if a series of
magnitudes, taken in order, can be found, which give as many results, alternately positive and negative, as the equation has dimensions, these must be limits to the roots of the equation; because an
odd number of roots lies between
* In this series, the greater d is, the less is -d. And whenever a, b, c, - d, &c. are said to be the roots of an equation taken in order, a is supposed to be the greatest. Also, in speaking of the
limits of the roots of an equation, we understand the limits of the possible roots.
each two succeeding terms of the series, and there are as many terms as the equation has dimensions; therefore this odd number cannot exceed unity.
(300.) If the results arising from the substitution of two magnitudes, for the unknown quantity, be both positive or both negative, either no root of the equation, or an even number of roots, lies
between them.
(301.) COR. If m, and every quantity greater than m, when substituted for the unknown quantity, give positive results, m is greater than the greatest root of the equation.
(302.) To Jind a limit greater than the greatest root of an equation.
Let the roots of the equation be a, b, c, &c. transform it into one whose roots are a - e, b - e, a - e, &c. and if, by trial, such a value of e be found, that every term of the transformed equation
is positive, all it's roots are negative (Art. 265), and consequently e is greater than the greatest root of the proposed equation.
Ex. 1. To find a number greater than the greatest root of the equation x^3 - 5x^2 + 7x - 1 = 0.
Assume x = y + e, and we have
in which equation, if 3 be substituted for e, each of the quantities, e^3 - 5 e^2 + 7^e -1, 3e^2 - 10e + 7, 3e - 5, is positive, or all the values of y are negative; therefore 3 is greater than the
greatest value of x.
Ex. 2. In any cubic equation of this form, x^3 - qx + r = 0, Aq is greater than the greatest root.
By transforming the equation, as before,
and substituting √q for e, y^3 + 3√qy^2 + 2qy + r = 0, every term of which is positive; therefore √q is greater than the greatest value of x.
(303.) COR. If the signs of the roots be changed, a limit greater than the greatest root of the resulting equation, with it's sign changed, is less than the least root of the proposed equation.
Ex. Required a limit less than the least root of the equation y^3 - 3y + 72 = 0.
When the signs of the roots are changed, this equation becomes y^3 - 3y - 72 = 0.
Assume y = x + e; then
and if 5 be substituted for e, every term becomes positive, consequently 5 is greater than the greatest root of the equation y^3 -3y - 72 = 0; and - 5 less than the least root of the equation y^3 -
3y + 72 = 0.
(304.) The greatest negative coefficient increased by unity, is greater than the greatest root of an equation.
Let x^n - px^n - 1- qx^n - 2 - &c. = 0, and if the coefficients be equal, x^n - px^n - 1 - px^n - 2 - &c. = 0, or x
x, and the result is p, the sum of the series, to be taken from x^n, will be diminished, and the result greater than before. Also, if for x, any quantity, still greater, be substituted, as p + m + 1,
the result is
(305.) In any equation, x^n - px^n - 1 + qx^n - 2 - rx^n - 3 + sx^n - 4 - &c. = 0, whose roots are possible and positive,
Let a, b, c &c. be the roots, taken in order; then ab + ac + bc + &c. = q, and a^2 is greater than each of these products, therefore q, or a is greater than
Also, a^2 + b^2 + c^2 + &c. = p^2 - 2q (Art. 297); and since a is the greatest root, na^2 is greater than a^2 + b^2+ c^2 + &c. or p^2 - 2q; that is, a is greater than
Again, q^2 - 2pr + 28 = a^2b^2 + a^2c^2 + b^2c^2 + &c. (Art.297), and a^4 is greater than each of these products, of which there are ^4 is greater than a^2b^2+a^2c^2+ b^2c^2+ &c. or q^2 - 2pr + 2s;
that is, a is greater than
(306.) COR. If the equation have both positive and negative roots, and - d be the least root; then, when d is greater than the greatest root, it is greater than
(307.) The roots of the equation x^n - px^n -1 +qx^n - 2 - &c. = 0, when the roots of the latter equation are possible.
Let the roots of this equation, taken in order, be a, b, c, - d, &c. and in it, for x, substitute y + e, then by Art. 283,
the roots of which equation are a - e, b - e, c - e, - d - e, &c. and the coefficient of the last term but one of any equation of n dimensions, is the sum of the rectangles under n - 1 roots, with
their signs changed (Art. 271); therefore,
in which, if a, b, c, - d, &c. be successively substituted for e, the results are,
therefore a, b, c, - d are limits to the roots of the equation x for e, to the roots of the equation ^n - px^n - 1 + qx^n - 2 - &c. = 0.
(308.) COR. 1. It appears from the preceding demonstration, that a, b, c, - d, &c. are the roots of the equation x^n - px^n - 1 + qx^n - 2 - &c. = 0; also x^n - px^n - 1 + qx^n - 2 - &c. =
(309.) COR. 2. If the limiting equation to nx^n -1 -
(310.) COR. 3. The original equation has as many positive roots, and as many negative, as the limiting equation, and one more, which will be positive or negative according to the nature of the
(311.) Every equation whose roots are possible, has as many changes of signs from + to -, and from - to +, as it has positive roots; and as many continuations of the same sign, from + to +, and from
- to -, as it has negative roots.
Let x^n - px ^n - 1. … ± Sx^2 ± Px ± Q = 0, the equation of limits is P and Q are different, or the same.
Suppose α, β, γ, &c. to be the roots of the limiting equation; then the roots of the original equation are, by Art. 310, of this form, a, b, c, ± d, &c. therefore, P, with it's proper sign, = n×- α×-
β×- γ &c. and Q = - ax - bx - cx
ducts will have the same sign when the multiplier d is positive, or the root (- d) negative, and different signs when that root is positive. It appears then, that if the original equation have one
more change of signs than the limiting equation, it has one more positive root; and, if it have one more continuation of the same sign, it has one more negative root; therefore if it can be shewn
that every equation of n- 1 dimensions, and consequently the equation nx^n - 1 ^n - 2 + ^n - px^n - 1 + qx^n - 2 - &c. = 0, has as many changes of signs as it has positive roots, and as many
continuations of the same sign as it has negative roots, the same rule will be true in the equation x^n - px^n - 1 + qx^n - 2 - &c. = 0; or, in other words, if the rule be true of every equation of
one order, it is true of every equation of the next superior order. Now in every simple equation x-a = 0, or x + a = 0, the rule is true, therefore it is true in every quadratic x^2± px ± q = 0; and
if it be true in every quadratic, it is true in every cubic; and so on; that is, the rule is true in all cases.
In the demonstration, each root, ± d, is supposed to be distinct from the rest, and a possible quantity.
Hence, when all the roots are possible, the number of positive roots is exactly known.
Ex. The equation x^3 + x^2 - 14x +8 = 0, has two positive and one negative root; because the signs are +, +, -, +, in which there are two changes, one from + to -, and the other from - to +, and one
continuation of the sign +.
(312.) When any coefficient vanishes, it may be considered either as positive or negative, because the value of the whole expression is the same on either supposition.
Ex. If the roots of the equation x^3 - qx + r = 0 be possible, two of them are positive and the third is negative; for there are two changes of signs in the equation x^3± 0 - qx + r = 0, and one
continuation of the same sign.
(313.) If all the roots of an equation be positive, or all negative, and it's terms be multiplied by the terms of any arithmetical progression, the resulting equation will be a limiting equation to
the former.
Let the roots of the equation x^n - px ^n - 1 + qx^n - 2 - &c = 0, taken in order, be a, b, c, &c. and when they are substituted for x in the quantity nx^n - 1 - n - 1. px^n - 2+ &c. let the results
be +L, - M, + N, - &c. (Art. 307.); then, when they are substituted in x in x^n - px^n - 1 -qx^n - 2 - &c. or in Ax^n - Apx^n - 1 + Aqx^n - 2 - &c. the resultsare nothing; therefore when they are
substituted in the sum of Ax^n - Apx^n - 1 + Aqx^n-3 - &c. and Bx×a, b, c, &c. are limits of the roots of the equation
Conversely, the roots of this latter equation are limits to the roots of the former.
If B be negative, or the series an increasing one, the results will be, - BaL, + BbM, - BcN, + &c. therefore a, b, c, &c. are limits to the roots of the equation,
(314.) If an equation have both, positive and negative roots, and it's terms be multiplied by the terms of an arithmetical progression, an equation arises, whose roots are limits to the roots of the
former, with this exception, that either two of it's roots, or none, lie between the positive and negative roots of the original equation, according as a decreasing or increasing progression is used.
Let the roots of the equation x^n - px^n - 1 - qx^n - 2 - &c. = 0, taken in order, be a, - b, - c, &c. When these values are substituted for x in P x^n-1 - &c. = 0 are of the same form with the roots
of the original equation*, because there is the same number of changes of signs in both; let these roots, taken in order, be α, - β, - γ, &c. and since both
* Here we suppose that the first term is not taken out, and that the signs of the terms of the progression are not changed.
a, and - b, when substituted in the limit, give positive results, either two roots α, and -β, or none, lie between them (Art. 300); and - b, a negative quantity, cannot be greater than α, a positive
one; therefore the order of the magnitudes is a, α, - β, -b, - γ - c, &c. that is, when the terms of the equation are multiplied by the terms of the decreasing arithmetical progression, a and - b of
the original equation.
When B is negative, and nB less than A, or the series an ascending one, it may be proved, as before, that when a, - b, - c, &c. are substituted for x, in a is less than α, and either two roots -β,
and -γ or none, lie between aand - b; there cannot be two, because then - b, and - c, are both less than -γ, and when substituted for x, must give results affected with the same sign; but the results
are - BbM, and + BcN; therefore the order of the roots is α, a, - b, -β, - c, γ, - &c. that is, no root of the limiting equation lies between a, and - b. If A be less than nB, the first term of the
limiting equation is negative, and the signs of all the terms being changed, to reduce the equation to a proper form (Art. 262), the series becomes a decreasing one.
(315.) In the preceding demonstration, the limiting equation is supposed to contain as many roots as the original equation, and of the same form. If the
arithmetical progression begin from nothing, it may be proved in the same manner, that no root of the limit, thus deduced, lies between the positive and negative roots of the proposed equation.
(316.) Ex. Let the proposed equation be x^3. The roots of the equation 3x^2 - q = 0, are limits which lie between it's roots (Art. 307.)
Let the terms of the equation be multiplied by the series, 3, 2, 1, 0, and the limiting equation is 3x^3 - qx = 0, whose roots are √q/3, 0, - √q/3, two of which, 0 and - √q/3 lie between the positive
and negative roots of the proposed equation (Art. 314).
Let the terms be multiplied by the series 0, -1, -2, -3, the limiting equation thus obtained is 2qx - 3r = 0, whose root 3r/2q lies between the positive roots of the equation X^3 - qx + r = 0. (Art.
(317.) To find between which of the roots, of a proposed equation, any given number lies.
Let the roots of the proposed equation be diminished by the given number, and the number of negative roots, in the transformed equation, will shew it's place among the roots of the original equation.
Ex. To find between which of the roots of the equation x^3 - 9x^2 + 23x - 15 = 0, the number 2 lies.
Assume x = y + 2; then,
or y^3 - 3y^2 - y + 3 = 0, which has one negative root; and the roots of the proposed equation are all positive; therefore two of them are greater, and one is less than 2.
(318.) In general, the last term., and the coefficients of the other terms of the transformed equation are found by substituting the number, by which the roots are to be diminished, for x, in the
See Art. 283. And by substituting, successively, different numbers for x, the limits of the roots of the proposed equation may be found.
Ex. Let the proposed equation be x^4 - 2x^3 - 5x^2 + 10x - 3 = 0.
L 2
From the changes of signs in the proposed equation, it appears, that it has one negative and three positive roots; when these roots are diminished by 1, they become two positive and two negative;
when diminished by 2, they become one positive and three negative; and when diminished by 3, they all become negative; therefore one root of the proposed equation lies between 0 and 1, one between 1
and 2, and the third between 2 and 3.
By changing the signs of the roots, and proceeding in the same way, we may find, that the negative root lies between - 2, and - 3.
ON THE DEPRESSION AND SOLUTION OF EQUATIONS.
(319.)If an equation contain equal roots, these may he found, and the equation reduced as many dimensions lower as there are equal roots.
Let the roots of the equation x^n - px^n - 1 + qx^n - 2 - &c. = 0, be a, b, c, &c. then (Art. 307),
Suppose a = b; then,
the whole of which is divisible by x - a without remainder, that is, a is a root of the equation nx^n - 1 -
If three roots a, b, c, be equal, a, a (Art. 268).
In the same manner, if the original equation have m equal roots, the equation ^x - 3 - &c. = 0, has m - 1 of those roots.
(320.) Hence it appears, that when there are m equal roots, the two equations have a common measure of the form m roots of the original equation may thus be known. Divide this equation by n - m
dimensions, contains the other roots.
Ex. Let the equation x^3 - px^2 + qx - r = 0, have two equal roots; then 3x^2 - 2px + q = 0 has one of them; and the two equations have a common measure which is a simple equation; consequently, the
quantities 9x^3 - 9px^2 + 9qx - 9r and 3x^2 - 2px + q, have the same common measure, which is thus found,
* The equation
hence, ^3 - px^2 + qx - r = 0; that is,
Thus two roots of the equation are discovered; and since p is the sum of all the roots, the third root is the difference between p and the sum of the two equal roots.
Let the proposed equation be x^3 - 4x^2 + 5x - 2 = 0.
Here, p = 4, q = 5, r = 2; and one of the equal roots is p - 2 = 4 - 2 = 2.
But it must be observed, that though ^2 - 2px + q = 0.
(321.) If the roots of the equation x^3 - px^2 + qx - r = 0, be in arithmetical progression,
Let the roots be a - b, a, a + b; then p = 3a, q = 3a^2 -b^2, and r = a^3 - ab^2; hence, 9r - pq = - 6 ab^2, and 6q - 2p^2 = - 6b^2; therefore,
(322.) If two roots of an equation be of the form +a, - a, differing only in their signs, they may be found, and the equation depressed.
Change the signs of the roots, and the resulting equation has two roots +a, - a; thus we have two equations with a common measure, x^2 - a^2, which may be found, and the equation depressed, as in the
preceding case.
Ex. Required the roots of the equation x^4 + 3x^3 7x^2 + 27x - 18 = 0, two of which are of the form + a, - a. By changing the signs of the alternate terms we obtain the equation x^4 - 3x^3 - 7x^2 +
27x - 18 = 0, which has two roots of the form +a, - a; and the common quadratic divisor, of the two equations, is x^2 - 9 = 0, hence, x = ± 3. To obtain the other roots, divide x^4 + 3x^3 - 7x^2 -
27x -18 = 0, by x^2 - 9 = 0, and the roots of the resulting equation x^2+3x + 2 = 0, are the roots sought.
(323.) By this method, when the coefficients are rational, surd roots of the form ± √a may be discovered (See Art. 279)
(324.) When there are two other roots of the same form, the equations will have a common divisor x^4 - Qx^2 + R = 0, which contains the four roots, a, - a, b, - b.
If the roots of an equation have any other given relation, they may be found in a similar manner (See Waring's Alg. Cap. 3); but as particular relations of the roots to each other are rarely known,
it seems unnecessary to prosecute the subject farther, in this place,
SOLUTIONS OF RECURRING EQUATIONS.
(325.) The roots of a recurring equation of an even number of dimensions, exceeding a quadratic, may be found by the solution of an equation of half the number of dimensions.
Let x^n - px ^n -1 …… - px + 1 = 0; it's roots are of the form a, 1/a, b, 1/b, &c. (Art. 292); or it may be conceived to be made up of quadratic factors, i.e. if m = a + 1/a, n = b + 1/b, &c. of the
quadratic factors, x^2 - mx + 1; x^2 - nx +1; &c. Then by multiplying these together, and equating the coefficients with those of the proposed equation, the values of m, n, &c. may be found.
Moreover, for every value of m there are two values of x; therefore the equation for determining the value of m, will rise only to half as many dimensions as x rises in the original equation.
(326.) If the recurring equation be of an odd number of dimensions, +1, or - 1 is a root (Art. 295); and the equation may therefore be reduced to one of the same kind, of an even number of
dimensions, by division.
Ex. 1. Let x^3 -1 = 0. Unity is one root of this equation, and by dividing x^3 - 1 by x - 1, the equation x^2 + x + 1 = 0 is obtained, which contains the other two roots,
x^2 + x + 1 = 0
x^2 + x + 1/4 = 1/4 - 1 = - 3/4
x + 1/2 = ± √-3/2,
or ^3 - 1 = 0, or the three cube roots of 1, are 1,
In the same manner, the roots of the equation x^3 + 1 = 0 are found to be - 1,
Ex. 2. Let x^4 - 1 = 0. Two roots of this equation are +1, - 1; and by division, ^3 + 1 = 0, an equation which contains the other two roots, + √-1, and -√-1.
Ex.3. Let x^4 + 1 = 0. Assume ^4 + 1; that is, ^2 + 2 = 0, or m^2 = 2, and m = ± √2. Therefore the two quadratics which contain the roots of the biquadratic, are x^2 + √2.x + 1 = 0, and x^2 - √2.x +
1 = 0, from the solution of which it appears that the roots are
In the same manner may the roots of the equations x^5 + 1 = 0, and x^6 + 1 = 0, be found.
THE SOLUTION OF A CUBIC EQUATION BY CARDAN's RULE.
(327.) Let the equation be reduced to the form x^3 - qx + r = 0, where q and r may be positive or negative.
Assume x = a + b, then the equation becomes a and b, and have made only one supposition respecting them, viz. that a + b = x, we are at liberty to make another; let 3ab - q = 0, then the equation
becomes a^3 + b^3 + r = 0; also, since 3ab - q = 0, b = q / 3a, and by
substitution, a^3 + q3/27a^3 + r = 0, or a^6 + ra^3 + q3/27 = 0, an equation of a quadratic form; and by completing the square, a^6 + ra^3 + r^2 /4 - q^3/27, and a^3 + r/2 = ± a = ^3 + b^3 + r = 0, b
^3 b =
We may observe, that when the sign of √r2/4 - q3/27, in one part of the expression, is positive, it is negative in the other, that is
(328.) Since b = q / 3a, the value of x is also
Let x^3 + 6x - 20 = 0; here q = - 6, r = - 20,
(329.) COR. 1. Having obtained one value of x, the equation may be depressed to a quadratic, and the other roots found (Art. 267).
(330.) COR. 2. The possible values of a and b being discovered, the other roots are known without the solution of a quadratic.
The values of the cube roots of a^3 are a,^3, are b, a + b, three only of which can answer the conditions ozf the equation, the others having been introduced by involution. These nine values are,
* See Art. 259.
In the operation, we assume 3ab = q, that is, the product of the corresponding values of a and b is supposed to be possible. This consideration excludes the 2^d. 3^d. 4^th. 5^th. 7^th. and 9^th.
values of a + b, or x; therefore the three roots of the equation are a + b,
The value of xis also a + q/3a; therefore, if a, α a, β a, be the three roots of a^3, the roots of the cubic are a + q/3a; αa + q/3α; βa + q/3β α.
(331.) COR. 3. This solution only extends to those cases in which the cubic has two impossible roots.
For if the roots be m + √3n, m - √3n, and -2 m, then - q (the sum of the products of every two with their signs changed) = - 3m^2 -3n, and q/3 = m^2 + n;
also, r (the product of all the roots with their signs changed) = 2m^3 - 6mn, and r/2 = m^3 - 3mn; and by involution,
r^2/4 = m^6 - 6 m^4n + 9 m^2n^2
q^3/27 = m^6 + 3 m^4n + 3 m^2n^2 + n^3
Hence, r^2/4 = q^3/27 = -9m^4n + 6m^2n^2-n^3 = n be negative, that is, unless two roots of the proposed cubic be impossible.
SOLUTION OF A BIQUADRATIC BY DES CARTES's METHOD.
(332.) Any biquadratic may be reduced to the form x^4 + qx^2 + rx + s = 0, by taking away the second term (Art. 284). Suppose this to be made up of the two quadratics, x^2 + ex + f = 0, and x^2 - ex
+ g = 0, where + e and - e are made the coefficients of the second terms, because the second term of the biquadratic is wanting, that is, the sum of it's roots is 0. By multiplying these quadratics
together, we have ^2 = q, eg - ef = r, and fg = s; hence, g + f = y + e^2, also g - f = r/e, and by taking the sum and difference of these equations, 2g = q + e^2 +
r/e, and 2f = q + e^2 - r/e; therefore 4fg = q^2 + 2qe^2 + e^4 - r^2/e^2 = 4s, and multiplying by e^2, and arranging the terms according to the dimensions of e, e^6 + 2qe^4 + ^2, y^3 + 2qy^2 +
By the solution of this cubic, a value of y, and therefore of √y, or e, is obtained; also f and g, which are respectively equal to
It may be observed, that which ever value of y is used, the same values of x are obtained.
(333.) This solution can only be applied to those cases, in which two roots of the biquadratic are possible and two impossible.
Let the roots be e, the coefficient of the second term of one of the reducing quadratics, is the sum of two roots, it's different values are ^2, or ^y, are e are y are,
which are all possible, as in the preceding case. But if the roots of the biquadratic be a + b √-1, a - b√-1, - a + c, - a - c, the values of y are
DR. WARING's SOLUTION.
(334.) Let the proposed biquadratic be x^4 + 2px^3 = qx^2 + rx + s; now ^2 + 2pnx + n^2, if therefore n, if 8 n^3 + 4qn^2 + n be obtained and substituted in the equation ^2 + s; then extracting the
square root on both sides,
Ex. Let x^4 - 6x^3 + 5x^2 + 2x - 10 = 0 be the proposed equation.
By comparing this with the equation x^4 + 2px^3 - qx^2 - rx - 8 =, we have 2p = - 6, or p = - 3, q = -5, r = -2, s = 10; and ^2s - r^2 = 0, is 8n^3 - 20n^2 + 56 n +156 = 0, or 2n^3 -5n^2 +14n+ 39 =
0, one of whose roots is -3/2; hence, ^2 - 2x + 2 = 0; roots of these quadratics, - 1, 5, 1 + √-1, 1 - √ -1, are the roots of the proposed biquadratic.
(335.) This solution can only be applied to those cases in which two roots of the biquadratic are possible, and two impossible.
Let the roots be a, b, c, d, then n - √n^2 + s, the last term of one of the quadratics, to which the equation is reduced, is the product of two of them, as ab; therefore n - ab = √n^2 + S, and
squaring both sides, n^2 - 2nab + a^2b^2 = n^2 + s, or - 2nab + a^2b^2 = s = - abcd (Art. 271), and dividing both sides by - ab, 2 n - ab - cd, or 2n = ab + cd, and n are a, b, c, d, are possible,
the values of n are possible. Also, when these quantities are all impossible, the values of n are all possible; in neither case therefore
can the value of n be obtained by Cardan's rule; but if two roots of the biquadratic be possible and two roots impossible, two values of n will be impossible, and the cubic may be solved, and
consequently the roots of the proposed equation may be found.
THE METHOD OF DIVISORS.
(336.) Since the last term of an equation is the product of all the roots with their signs changed, if any root be a whole number, it may be found amongst the divisors of the last term.
Ex. Suppose x^3 - 4x^2 - 6x + 12 = 0; the divisors of the last term are 1, - 1, 2, - 2, 3, -3, 4, - 4, 6, - 6, 12, -12, and by substituting these successively for x, we find that - 2 is a root of the
(337.) When the last term admits of many divisors, the number of trials may be lessened by finding the limits between which the roots of the equation lie; or by increasing, or diminishing, the roots
of the equa-tion, and thus lessening the number of divisors of the last term.
(338.) The number of trials may also be lessened by substituting three or more terms of the arithmetical progression1, 0, -1, &c. for the unknown quantity, and farming the divisors of the results,
taken in order, into arithmetical progressions, in which the common difference is unity; as it will only be necessary to try those divisors of the last term of the equation which are found in these
Let a, with their signs changed, be substituted in the equation for x, the integral values of x will be discovered.
Ex. Let the proposed equation be x^3 - 4x^2 - 6x + 2 = 0.
SUPP. Results Divisors Progr.
X = 1 3 1, 3 3
X = 0 12 1, 2, 3, 4, 6, 12 2
X = -1 13 1, 13 1
The only decreasing progression that can be formed out of the divisors is 3, 2, 1, therefore if one root of the equation be a whole number, - 2 is that root; and on trial it is found to succeed.
(339.) If the highest power of the unknown quantity be affected with a coefficient, let x, then a + m, a, and a - m are divisors of the results, if the equation have a factor of the form mx + a, or a
root - a/m. Also m, the common difference, in the arithmetical progression a + m, a, a - m, is a divisor of the coefficient of the first term of the
M 2
equation. In this case therefore, all the decreasing progressions must be taken out of the divisors of the resulting quantities, in which the common difference is unity, or some divisor of the
coefficient of the highest term of the equation, and amongst them is the progression a + m, a, a - m; therefore by making trial, successively, of the terms corresponding to a in the progressions thus
obtained, the factor mx + a, which divides the equation without remainder, will be found.
Ex. To find a divisor of the equation 8x^3 - 26x^2 + 11x + 10 = 0, if it admit one of the form mx + a.
SUPP. Res Divisors Progress.
X = 1 3 1, 3, -1, - 3 3, 3, - 3
X = 0 10 1, 2, 5, 10, - 1, - 2, - 5, - 10 1, 2, - 5
X = - 1 - 35 1, 5, 7, 35, -1, -5, -7, -35 -1, 1, -7
The decreasing progressions, in which the common difference is a divisor of 8, formed out of the divisors, are 3, 1, -1; 3, 2, 1; and - 3, - 5, -7; therefore the factors to be tried are 2x + 1, x +
2, and 2x - 5, the last of which succeeds, and consequently 2x - 5, = 0 (Art. 268), or x = 5/2.
(340.) If an equation be of four, or more dimensions, though it has no divisor of the form mx + a, it may have one of the form ± mx^2 ± nx ± r.
To find when this is the case, let x substitute successively, p + e, p, p - e, &c. then
sors of the resulting quantities; and if they be respectively subtracted from, or added to, ne. When p = 0, and e = 1, this progression becomes n + r, r, - n + r &c. and in all cases m is a divisor
of the first term of the equation. Let therefore 1, 0, - 1, - 2, &c. be substituted for x in the proposed equation, and let the differences and sums of the divisors of the results, and m, 0, m, 4m, &
c. be taken; then if all the arithmetical progressions possible be formed out of these quantities, in order, amongst them will be found the progression n + r, r, - n + r, &c. therefore, by trial, the
divisor mx^2 + nx + r will be discovered, if the equation admit of a quadratic divisor whose coefficients are whole numbers.
Let the proposed equation be 3x^4 + 4x^3 +3x^2 - 2x + 2 = 0:
SUPP. Res Divisors Sq. Sum and Differences. Progress.
X = 1 10 1, 2, 5, 10 3 - 7, - 2, 1, 2, 4, 5, 8, 13 - 2, 1
X = 0 2 1, 2 0 - 2,- 1, 1, 2 2, - 1
X = - 1 6 1, 2, 3, 6 3 - 3, 0, 1, 2, 4, 5, 6, 9 6, - 3
X = - 2 34 1, 2, 17, 34 12 - 22,- 5, 10, 11, 13, 14, 29, 46 10, - 5
From the first progression, n = - 4, r = 2; from the other, n = 2, and r = -1; therefore, since m may either be positive or negative, the divisors to be tried
are ± 3x^2 - 4x + 2, and ± 3x^2 + 2x - 1; of which, -3x^2 + 2x - 1, or 3x^2 - 2x + 1 succeeds; consequently, the roots of the equation 3x^2 - 2x + 1 = 0, are two roots of the proposed biquadratic.
THE METHOD OF APPROXIMATION.
(341.) The most useful and general method of discovering the possible roots of numeral equations, is approximation. Find by trial (Art. 318), two numbers, which substituted for the unknown quantity
give, one a positive, and the other a negative result; and an odd number of roots lies between these two quantities, that is, one possible root at least, lies between them; then by increasing one of
the limits, and diminishing the other, an approximation may be made to the root; substitute this approximate value, increased or diminished by v, for the unknown quantity in the equation, neglect all
the powers of v above the first, as being small when compared with the other terms, and a simple equation is obtained for determining v, nearly; thus a nearer approximation is made to the root, and
by repeating the operation, the approximation may be made to any required degree of exactness.
Ex. Let the roots of the equation y^3 - 3y +1 = 0. be required.
When 1 is substituted for x the result is -1, and when 2 is substituted, the result is + 3, therefore one possible root lies between 1 and 2; try 1.5; and the result is -.125, or the root lies
between 1.5 and 2.
Let 1.5 + v = y; then,
that is, -.125 + 3.75v + 4.5v^2 + v^3 = 0, and neglecting the two last terms, -.125 + 3.75v = 0, or v = .125/3.75 = .033 nearly, and y = 1.5 + v = 1.533. Again, suppose 1.533 + v - y; by proceedingas
before, we find .003686437 + 4.050267v = 0, and v = -.003686437/4.050267 = -.0009101 &c. hence, y = 1.532089 nearly. The other roots may be found by the solution of a quadratic (Art. 267).
(342.) The accuracy of the approximation does not depend upon the ratio which the quantity assumed bears to the root, but upon it's being nearer to one root than to any other.
Let the roots of the equation x^n - px^n -1 + qx^n - 2 - &c = 0, be a + m, a + n, a + r, &c. of which a + m is the least; assume a + v - x, and let P - Qv + Rv^2 - Sv^3 + &c. = 0, be the transformed
equation, whose roots are m, n, r, &c. then q/p = 1/m + 1/n + 1/r + &c. (Art. 273), and P
sition that m is much less than n or r, &c. 1/m is much greater than 1/n + 1/r + &c. and P/Q = m nearly; but if m bear a finite ratio to n or r, the approximation will be less accurate, and the less
these magnitudes n, r &c. are, the greater error is made in supposing 1/n + 1/r + &c. to vanish, when compared with 1/m.
(343.) When m and n are nearly equal to each other, and much less than r, s, &c. and also both positive or both negative, then m the less of the two; but if one of these quantities be positive and
the other negative, 1/m + 1/n may be either positive or negative, and greater, equal to, or less than 1/r + 1/s + &c. and consequently P/Q is not necessarily an approximation to any of the quantities
m, n, r, r, &c.
Let P - Qv + Rv^2 = 0; the roots of this equation will be m and n, nearly. For if m, n, r, s, be the roots of the equation P- Qv+Rv^2 - Sv^3 + &c. = 0, P = mnrs, Q = mns + mnr + mrs + nrs, R = mn +
mr + ms + nr + ns + rs, and since m and n are small when compared with r and s, Q = mrs + nrs nearly, and R = rs nearly; therefore the equation P - Qv + Rv^2 = 0 becomes
hence, m and n. By the solution then of this quadratic, a much nearer approximation is made to the root a + m than by the former method, and at the same time, an approximation is also made to the
root a + n.
(344.) In the same manner, if t roots be nearly equal, in order to approximate to them, it will be necessary to solve an equation of t dimensions. See Dr. Waring's Med. Algeb. p. 186.
(345). If we have two equations, containing two unknown quantities, we may discover the values of these quantities nearly in the same manner.
Let x and y.
Find, by trial, approximate values of x and y; such are 20 and 1; and let x = 20 + v, y = 1 + z;
then x^2y = 400 + 40v + 400 z + v^2 + 40vz + v^2z = 405,
and xy - y^2 = 19 + v + 18z + vz - z^2 = 20,
and neglecting those terms in which z or v is of more than one dimension, or in which their product is found, as being small when compared with the rest,
400 + 400v + 400 z = 405
19 + v +18z = 20
40 v + 400z = 5
v + 10z = .125
and v + 18z + 19 = 20
or v + 18z = 1
hence, 8 z = .875
z = .109375
v = .125 - 10z = - .96875
therefore x = 19.03
and y = 1.109.
By making use of the values thus obtained, nearer approximations may be made to x and y.
ON THE REVERSION OF SERIES.
(346.) If two quantities Ax + Bx^2 + Cx^3 + &c. and ax + bx^2 + cx^3 + &c. be always equal, the invariable coefficients of the corresponding terms are equal.
For if these equal quantities be divided by x, we have A + Bx + Cx^2 + &c. = a + bx + cx^2 + &c. or when ^x vanishes, A = a, and A and a are invariable, therefore in all cases, A = a; hence also, bx
+ Cx^2 + &c. = bx + cx^2 + &c. or dividing by x, B + Cx + &c. = b + cx + &c. and when x vanishes, B = b, therefore in all cases, B = b. In the same manner, C = c, &c.
(347.) COR. If SA + Bx + Cx^2 + &c. = 0 in all cases, then A = 0, B = 0, C - 0, &c.
(348.) Approximation may be made to a root of an equation, by assuming for it a series, involving the powers of that quantity in terms of which it is sought, with indeterminate coefficients; this
series being substituted for the unknown quantity in the proposed equation, the coefficients may be found by making each term equal to 0, and thus the series, which expresses the value of the unknown
quantity, may be determined.
Ex. Let y^3 - 3y + x = 0; required the value of y in terms of x.
Lety = ax + bx^3 + cx^5 + dx^7 + &c. then
and supposing each term to vanish (Art. 347), - 3a + 1 = 0, or a = 1/3; a^3 - 3b = 0, b = a^3/3 = 1/3^4; 3a^2b - 3c = 0, and c = a^2 b = 1/3^6; &c. therefore
y = x/3 + x^3/3^4+x^5/3^6+c. and when x = 1, y = 1/3 + 1/3^4 + 1/3^6 + &c. = .347 &c. which is one root of the equation y^3 - 3y + 1 = 0.
If for y, the series ax + bx^2 + cx^3 + dx^4 + &c. had been assumed, the quantities b, d, &c. would have been found = 0; therefore the even terms are unnecessary.
(349.) Cor. The less x is assumed, the faster will this series converge, and the more accurately will y be obtained.
(350.) This method of approximation is similar to the former, in this respect, that the series will have a slow degree of convergency, unless one value of y be much less than any other. If this be
not the case, find m, an approximate value of y, by trial, and assume m ± v = y; then, when one value of v is much less than any other, it may be found by this reversion,
and consequently that value of y which is nearest to m, will be known.
In the example Art. 341, y being found nearly 1.5, assume v + 1.5 = y, and the equation is transformed into v^3 + 4.5v^2 + 3.75v - .125 = 0: call this v^3 + pv^2 + qv - x = 0, and take v = ax + bx^2
+ cx^3 + dx^4 + &c. then
hence, qa -1 = 0, and a = 1/q = 1/3.75 = .26666 &c. qb+ pa^2 = 0, and b = -pa^2/q = -.08533 &c. therefore v =
(351.) The same method may be used to find y in terms of x, when x and y, and their powers, are combined in any manner in the equation.
Ex. 1. Let x = ay + by^2 + cy^3 + &c. required the value of ^y in terms of ^x.
Assume y = Ax + Bx^2 + Cx^3 + &c.
therefore a A -1 = 0, or A = 1/a; a B + b A^2 = 0, or
Ex. 2. Let x = y - ay^3 + by^5 - &c. required the value of y in terms of x.
Assume y = Ax + Bx^3 + Cx^5 + &c.
hence, A - 1 = 0, or A = 1; B - a A^3 = 0, or B = a; C - 3a A^2B + bA^5 = 0, or
The method of determining the proper series to be assumed in each case, without previous trial, is given by Maclaurin; Alg. Pt. 2. Ch. 10.
ON THE SUMS OF THE POWERS OF THE ROOTS OF AN EQUATION.
(352). Let a, b, c, &c. be the roots of the equation x^n - px^n - 1 + qx^n - 2 ….. + wx^n - m - &c. = 0, and A B, C, ……… P, Q; R, S, the sum of the roots, sum of their squares, cubes, ……… m - 3, m
-2, m -1, m, powers respectively; then will A = p, B = p A - 2q, C = p B - qA + 3r, &c. and S = pR- qQ + rP ….
-mw; where + w is the coefficient of the
It appears by Art. 308 that
1/x-a = 1/x + a/x^2 + a^2/x^3 …. + a^m/x^m+1 + &c.
1/x-b = 1/x + b/x^2 + b^2/x^3 …. + b^m/x^m+1 + &c.
1/x-c = 1/x + c/x^2 + c^2/x^3 …. + c^m/x^m+1 + &c. &c.
and if x be supposed greater than any of the magnitudes a, b, c, &c. no quantity is lost in the division; therefore by addition,
and by equating the coefficients,
Ex. Let x^3 + 5x^2 - 6x - 8 = 0; then by comparing the terms of this, with the terms of the equation x^n - px^n - 1 + qx^n - 2&c. = 0, we have p = -5, q = - 6, r = 8.
Hence, the sum of the roots = p = -5 = A.
Sum of the squares pA- 2q = 25 + 12 = 37 = B.
Sum of the cubes = pB - qA + 3r = - 185 - 30 + 24 = - 191 = C &c.
(353.) The proposition also admits of the following proof.
The same notation being retained; let m and n be equal, and since a, b, c, &c. are roots of the equation,
If m be greater than n; multiply the proposed equation by x^m-n, then x^m - px^m - 1 + qx^m - 2 + wx^m - n
= 0; which equation has the roots a, b, c, &c. and m - n roots each equal to 0; therefore the sum of the m^th powers of the roots of this equation.is equal to the sum of the m^th powers of the roots
of the former; that is, S = pR - qQ + &c. to n terms.
When m is less than n: The sum of the m^th powers of the roots may be expressed in terms of p, q, r, …… w, where w is the coefficient of the m + 1^th term of the equation. For p^2 contains a^2 + b^2
+ c^2 + &c. with other combinations of the roots, as ab, ac, bc, &c. which combinations are contained in a multiple of q; also, p^3 contains a^3 + b^3 + c^3 + &c., with other combinations, such as a^
2b, a^2c, b^2a, &c. abc, acd, bcd, &c. and these combinations may be made up of p, q and r; for p x q contains the quantities a^2b, a^2c, b^2a, &c. and r is the sum of the quantities abc, acd, bcd, &
c. In the same manner it appears, that a^4 + b^4 + c^4 + &c. may be found in terms of p, q, r and s; and in general, a^m + b^m + c^m + &c. may be expressed in terms of p, q, r, …..w. Also, the number
of combinations of any particular form, as a^2b, cannot be altered by the introduction of the root c; consequently the numeral coefficient of the product pq, by which the combinations of that form
are taken away, is the same, whatever be the number of roots: Hence the expression for am + bm + cm &c. in the equation x^n - px^n - 1 + qx^n - 2 ….. + wx^n - m - &c. = 0, is the same with the
expression for. the sum of the m^th powers of the roots of the equation x^m - px^m -1 + qx^m - 2.....+ w = 0; that is, S = pR - qQ.... -mw.
This rule was given by Sir I. Newton for the purpose of approximating to the greatest root of an equation. Suppose the roots all possible, and one greater than the rest, the powers of this root
increase in a higher ratio than those of any other, and the 2m^th power of this root will approach nearer and nearer to a ratio of equality with the sum of the 2m^th powers of the roots, as m
increases; therefore by extracting the 2m^th root of this sum, an approximation is made to the greatest positive or least negative root (See Art. 306).
ON THE IMPOSSIBLE ROOTS OF AN EQUATION.
(354.) It has before been shewn (Art. 311), that there are as many positive roots in an equation as it has changes of signs, and as many negative roots as continuations of the same sign, when the
roots are all possible.
But this rule cannot be applied to impossible roots; which appears by the demonstration there given, as well as from the consideration, that an. impossible expression cannot be said to be either
positive or negative.
If then it appear from the terms of an equation that some roots may, according to the above rule, either be called positive or negative, they must be impossible. Thus, two roots of the equation x^3 +
qx + r = 0, or x^3 ± o + qx + r = 0, are impossible; be
cause it has two changes of signs or none, according as the Second term is supposed to be - o, or + o. In the same manner, if any term of an equation be wanting, and the sign of the adjacent terms be
both positive or both negative, the equation has, at least, two impossible roots: and if two succeeding terms be wanting, it must always have, at least, two impossible roots.
(355.) Impossible roots enter equations by pairs (Art. 277); they also Ke under the form of two positive or two negative roots.
Let ± a + √-b^2 and ± a - √-b^2 be the roots; then ^2 = 0, the signs of which equation shew that it has either two positive, or two negative roots.
(356.) Con. Hence, if the last term of an equation of an even number of dimensions be negative, it will have at least two possible roots, one positive and the other negative (Art. 271).
(357.) Let an equation be transformed into one whose roots are the squares of the roots of the former, (Art. 296), then as many negative roots as the transformed equation contains, so many impossible
roots, at least, are in the original equation, because the square of a possible quantity is always positive.
(358.) If any series of magnitudes be substituted in order, for the unknown quantity in an equation, there can only be as many changes of signs in the results, as the equation contains possible
N 2
Let ^2, a √-b^2 c, d, &c. whatever magnitude is substituted for x, the quantity x^2 - 2ax + a^2 + b^2, which is the sum of two squares, x in the product c, d, &c. (See Art. 298.)
(359.) The limiting equation* has, at least, as many possible roots as the original equation, wanting one.
or by adding the two last terms together, it is
(See Art. 307), in which, if c, d, &c. be successively substituted for x, the results are +, -, &c. therefore there are possible roots in this latter equation which lie between c, d, &c. or this
equation contains at least, as many possible roots, wanting one, as the original equation. It may contain more.
* This is the limiting equation mentioned Art. 307; the proposition is not necessarily true of the other limiting equations, Art. 314.)
(360.) COR. 1. Hence it follows, that there are; at least, as many impossible roots in the original equation as in the equation of limits. There may be more; therefore from the number of impossible
roots in the limiting equation, we cannot determine, exactly, the number in the original equation.
(361.) COR. 2. Hence also it appears, that if the possible roots of the limiting equation be substituted successively in the original equation, we know from the signs of the results, what possible
roots the latter contains. For, roots of the limiting equation lie between the possible roots of the proposed equation (Art. 359.)
Ex. Let x^n + 3 - a^n + 1x^2 + p^2d^n + 1 = 0.
Its limiting equation is n is an odd number, are n is even, the possible roots of the limiting equation are 0 and x, gives a positive or a negative result.
(362.) The roots of a quadratic equation are impossible, if the square of the middle term be less than four times the product of the extremes.
Let ax^2+ bx + c = 0; then
Which expression becomes impossible wheft b^2 is less than 4ac.
(363.) It appears from Art. 360, that there are impossible roots in an equation, whenever there are impossible roots in its limiting equation. In the same manner, if the next limit be taken, there
are impossible roots in the original equation, whenever there are impossible roots in this limit; and if the limit be thus brought down to a quadratic, when the roots of the quadratic are impossible,
there are impossible roots in the original equation corresponding to them. On this principle is founded Sir I. Newton's rule for discovering impossible roots in any equation.
Let the proposed equation be x^n - px^n - 1. … + Dx^n - r + 1 - Ex^n - r + Fx^n - r - 1 - &c. = 0. To obtain a limiting equation, which shall be a quadratic, corresponding to the terms Dx^n - r + 1 -
Ex^n - r + Fx^n - r - 1 let the succeeding terms be taken away, by multiplying by the terms of the arithmetical progressions n, n -1, n - 2, …., 1, 0; n - 1, n - 2, …. 2, 1, 0; n - 2, n - 3, …. 2, 1,
0; &c. and let the preceding term be taken away, by multiplying by the terms of the progressions, 0, 1, 2, …. r + 1; 0, 1, 2, …. r; &c. as follows;
The respective products being taken, and those quantities left out which are found in every product, we obtain a limiting quadratic ^n - r +1 - Ex^n - r + Fx^n - r - 1 and if two roots of this
quadratic be impossible, that is,
less than ^2 less than DF, there are impossible roots in the proposed equation, corresponding to them*.
Write down therefore a series of fractions
Ex. Let the proposed equation be x^3 - 4x^2 + 4x - 6 = 0.
In the series of fractions 3/1, 2/2, 1/3, if each term be divided by that which precedes it, we obtain 1/3, 1/3 to be placed over the terms of the equation;
* See Art. 315.
and since ^2, or 16/3, is less than - 4×-6, or 24, therefore the sign - must be placed under the third term; and + being placed under the first and last terms, there are two changes of signs;
therefore the equation contains two impossible roots.
(364.) The discovery of the number of impossible roots in an equation has given great trouble to Algebraists, and their researches, hitherto, have not been attended with any great success. In a cubic
equation x^3 - qx + r = 0 two roots are impossible or not, according as r^2/4 - q^3/27 is positive or negative (Art. 331). A biquadratic, x^4 + qx^2 + rx + s = 0, has two impossible roots, when two
roots of the equation ^2 = 0, are impossible; and all it's roots are impossible, when the roots of this cubic are all possible and two of them negative (Art. 333).
(365.) Dr. Waring has given a rule for determining the number of impossible roots in an equation of five dimensions, but the investigation cannot properly be introduced into an elementary treatise.
See Med. Algebraicœ, p. 82.
(366.) Sir I. Newton's rule (Art. 363) is general and easily applied, but as it is deduced from the nature of the inferior limits, it will not always detect impossible roots (Art. 360). The proof
also is defective, as it does not extend to that part of the rale which respects the number of impossible roots. Thus far however it may be depended upon, that it never shews impossible roots, but
when there are some such in the proposed equation.
Many other rules, which will frequently discover the impossible roots in any equation, may be seen in the Med. Alg. C. 2.
THE END OF PART II.
ELEMENTS OF ALGEBRA.
PART III.
ON UNLIMITED PROBLEMS.
(367.) WHEN there are more unknown quantities than independent equations, the number of corresponding values which those quantities admit, is indefinite (Art. 145). This number may be lessened, by
rejecting all the values which are not integers; it may be farther lessened, by rejecting all the negative values; and still farther, by rejecting all values which are not square, or cube numbers; &
c. By restrictions of this kind, the number of answers may be confined within definite limits.
(368.) If a simple equation express the relation of two unknown quantities, and their corresponding integral values be required; divide the whole equation by the coefficient which is the less of the
two, and suppose that part of the quotient, which is in a fractional form, equal to some whole number; thus a new
simple equation is obtained, with which we may proceed as before; let the operation be repeated, till the coefficient of one of the unknown quantities is unity, and the coefficient of the other a
whole number; then an integral value of the former may be obtained by substituting 0, or any whole number for the other; and from the preceding equations, integral values of the quantities proposed
may be found.
Ex. 1.
Let 5x + 7y = 29; to find the corresponding integral values of x and y.
Dividing the whole equation by 5, the less coefficient,
x + y + 2y/5 = 5 + 4/5
then 2 - y = 2p + p/2.
y = 2 - 2p - p/2
let p = 2s, then y = 2 - 5s, and x = 5 - y + p = 3 +5s + 2s = 3 + 7s.
If s = 0, then x = 3 and y = 2, the only positive whole numbers which answer the condition of the equation; for if s = 1, then x = 10, and y = - 3; and if s = - 1, then x = - 4, and y = 7.
To find a number which being divided by 3, 4, 5, the remainders are 2, 3, 4, respectively.
Let x be the number,
or x = 3p + 2; also, from the second condition,
that is, 3p - 1 = 4q
then p = 4r - 1, and x = 3p + 2 = 12r - 1;
again, from the third condition,
(369.) If the simple equation contain more unknown quantities, their corresponding integral values may be found in the same manner.
Ex. 3.
Let 4x + 3y + 10 = 5v; to find corresponding integral values of x, y and v.
Dividing the whole equation by 3, the least coefficient,
then x = 2v - 3p - 1
and y = v - 2v + 3p + 1 - 3 + p = 4p - v - 2,
and substituting for p and v, nothing, or any whole numbers, integral values of x and y are obtained: If v = 3, and p = 1, then x = 2 and y = 1; if v = 4, and p = 0, then x = 7 and y = - 6; &c.
(370.) In the solution of different kinds of unlimited problems, different expedients must be made use of, which expedients, and their application, are chiefly to be learned by practice.
Ex. 1.
To find what numbers are divisible by 3 without remainders.
Let a, b, c, d, &c. be the digits, or figures in the unit's, ten's, hundred's, thousand's, &c. place of any number, then the number is a + 10b + 100c + 1000d + &c. this divided by 3 is a/3 + 3b + b/3
+ 33c + c/3 + 333d + d/3 + &c. or
number is a multiple of 3 if the sum of it's digits be a multiple of 3. Thus 111, 252, 785l, &c are multiples of 3.
In the same manner, any number is a multiple of 9 if the sum of it's digits be a multiple of 9.
d + &c. which is a whole number when
COR. 1. Hence, if any number, and the sum of it's digits, be respectively divided by 9, the remainders are equal.
COR. 2. From this property of 9 may be deduced a rule which will sometimes detect an error in the multiplication of two numbers.
and if the sum of the digits in the multiplicand be divided by 9, the remainder is x; if the sum of the digits in the multiplier be divided by 9, the remainder is y; and if the sum of the digits in
the product be divided by 9, the remainder is the same as when the sum of the digits in xy is divided by 9, if there be no mistake in the operation.
Ex. 2.
To find a perfect number, that is, one which is equal to the sum of all the numbers which divide it without remainder.
Suppose y^nx to be a perfect number; it's divisors are 1, y, y^2 …. y^n, x, xy, xy^2 …. xy^n - 1; therefore y^nx = 1 + y + y^2. … + y^n + x+ xy + xy^2 …. + xy^n - 1.
hence, x may be a whole number, let y^n + 1 - 2y^n = 0, or y - 2 = 0, that is, y = 2; then x = 2^n + 1- 1. Also, let n be so assumed that 2^n + 1 -1 may have no divisor but unity, which was supposed
in taking the divisors of n = 1, the number is 2×3 or 6, which is equal to 1 + 2 + 3 the sum of it's divisors: If n = 2, the number is
Ex. 3.
To find two square numbers, whose sum is a square
Let x^2 and y^2 be the two squares;
then x^2 = n^2x^2 - 2nxy
x = n^2x - 2ny
And if n and y be assumed at pleasure, such a value of x is obtained, that x^2 + y^2 is a square number.
But if it be required to find integers of this description, let y = n^2 - 1, then x = 2n, and n being taken at pleasure, integral values of x and y, and consequently of x^2 and y^2, will be found.
Thus, if n = 2, then y = 3 and x = 4, and the two squares are 9 and 16, whose sum is 25.
Ex. 4.
To find two square numbers, whose difference is a square.
Let x^2 and y^2 be the two squares;
then -y^2 = - 2nxy + n^2y^2
And if y = 2n, then x = n^2 + 1. Thus, if n = 2, then y = 4, and x = 5; hence, x^2 - y^2 = 25 - 16 = 9*.
ON CONTINUED FRACTIONS.
(371.) To represent a/b in a continued fraction.
* On this subject, see the Edinburgh Transactions, Vol. II. p. 198.
Let b be contained in a, p times, with a remainder c; again, let c be contained in b, q times, with a remainder d, and so on; then we have
a = pb + c
b = qc + d
c = rd + e
(372.) COR. 1. An approximation may thus be made to the value of a fraction whose numerator and denominator are in too high terms, and the farther the division is continued, the nearer will the
approximation be to the true value.
(373.) COR. 2. This approximation is alternately less and greater than the true value. Thus p is less than a/b; and p + 1/q is greater, because a part of the denominator of the fraction is omitted: q
+ 1/r is too great for the denominator, therefore
Ex. To find a fraction which shall be nearly equal to 314159/100000, and in lower terms.
Here, p = 3, q = 7, r = 15, s = 1, &c. therefore 314159/100000
O 2
The first approximation is 3, which is too little; the next is 3 + 3/7 = 22/7, too great; the next is
(374.) To find the value of a continued fraction, when the denominators q, r, s, &c. recur in any certain order.
Ex. 1.
^2 + x, and x^2 + rx - r/q = 0, from the solution of which quadratic the value of x may be obtained.
Ex. 2.
ing both sides, ^3 - ax - b = 0; whence the value of x may be found.
(375.) In the same manner may the values of other quantities, which run on in infinitum, be found, if the factors recur.
Ex. 1. To find the value of
Let ^2, or x = a.
Ex. 2. Required the value of
Let ^4 - 2ax^2 + a^2, or b + x = x^2 - 2ax^2 + a^2; hence, x^2 - 2ax^2 - x + a^2 - b = 0; from which equation x may be found.
(376.) To find the value of a fraction when the numerator and denominator are evanescent.
Since the value of a fraction depends, not upon the absolute, but the relative magnitude of the numerator and denominator, if in their evanescent state they have a finite ratio, the value of the
fraction will be finite. To determine this value, substitute for
the variable quantity, it's magnitude, when the numerator and denominator vanish, increased by another variable quantity; then suppose this latter to decrease without limit, and the value of the
proposed fraction will be known.
Ex. 1. Required the value of
Let x = a + v, and the fraction becomes
Ex. 2. Required the value of
Let x = 1 + v, and the fraction becomes
(377.) To find the least common multiple of two quantities; or the least quantity which is divisible by each of them without remainder.
The product of the two quantities divided by their greatest common measure, is their least common multiple.
Let a and b be the two quantities, x their greatest common measure, m their least common multiple, and let m contain a, p times, and b, q times; that is, let m = pa = qb; then a/b = q/p; and since m
is the least possible, p and q are the least possible; therefore q/p is the fraction in it's lowest terms, and consequently q = a/x; hence, m = qb = ab/x.
Ex. What is the least common multiple of 18 and 12?
Their greatest common measure is 6; therefore their least common multiple is
(378.) Every other common multiple of a and b is a multiple of m.
Let n be any other common multiple of the two quantities; and, if possible, let m be contained in n, r times, with a remainder 8, which is less than m; then n - rm = s; and since a and b measure n
and rm, they measure n - rm, or s (Art. 91); that is, they have a common multiple less than m, which is contrary to the supposition.
(379.) To find, the least common multiple of three quantities a, b and c, take m the least common mul-
tiple of a and b, and n the least common multiple of m and c; then n is the least common multiple sought.
For every common multiple of a and b is a multiple of m (Art. 378); therefore every common multiple of a, b and c is a multiple of m and c; also, every multiple of m and c is a multiple of a, b and c
; consequently the least common multiple of m and c is the least common multiple of a, b and c.
(380.) Three quantities are said to be in harmonical proportion, when the first is to the third, as the difference of the first and second is to the difference of the second and third.
Any magnitudes A, B, C, D, E, &c. are said to be in harmonical progression, if A : C :: A - B : B - C; B : D :: B - C : C - D; C : E :: C - D : D - E; &c.
(381.) The reciprocals of quantities in harmonical progression are in arithmetical progression.
Let A, B, C, &c. be in harmonical progression; then A : C:: A - B : B - C; therefore AB - AC = AC - BC, and dividing both sides by ABC, 1/C - 1/B = 1/B - 1/A.
Again, B : D :: B - C: C - D; therefore BC - BD = DB - DC, and dividing by BCD, 1/D - 1/C = 1/C - 1/B; and 1/C - 1/B has been proved equal to 1/B - 1/A; therefore the quantities 1/A, 1/B, 1/C, 1/D, &
c. have a common difference, that is, they are in arithmetical progression.
(382.) Required the cube root of the binomial a + √- b^2.
by mult. ^2 = m - x^2; also from the equation, ^3 + 3x^2 √ - y^2 = a; or substituting for y^2 it's value m - x^2, x^3 = a, that is, 4x^3 - 3mx - a = 0, a cubic equation, whose roots, which are all
possible, may be found by approximation, or by a method which will be given in a following part of the work (Art. 515); hence, y, and consequently x + √- y^2, the root required, may be determined.
In the same manner it appears, that the c^th root may be extracted by the solution of an equation of c dimensions.
ON LOGARITHMS.
(383.) If there be a series of magnitudes a^0, a^1, a^2, a^3, … a^x; a^-1, a^-2, a^-3, …. a^-y, the indices, 0, 1, 2, 3, … x; -1, -2, -3, …. -y, are called the measures of the ratios of those
magnitudes to 1, or the logarithms of the magnitudes, for the reason assigned Art. 165. Thus, x, the logarithm of any number c, is such a quantity, that a^x = c.
Here a may be assumed at pleasure; and for every different value so assumed, a different system of loga-
rithms will be formed. In the common tabular logarithms, a is 10, and consequently 0, 1, 2, 3, …. x, are the logarithms of 1, 10, 100, 1000, ….
(384.) COR. 1. Since the tabular logarithm of 10 is 1, the logarithm of a number between 1 and 10 is less than 1; and in the same manner, the logarithm of a number between 10 and 100, is between 1
and 2; of a number between 100 and 1000, is between 2 and 3; &c.
These logarithms are also real quantities, to which approximation, sufficiently accurate for all practical purposes, may be made.
Thus, if x be the logarithm of 5, then x, and 10^2/3 is found to be less than 5, therefore 2/3 is less than the logarithm of 5: but 10^3/4 is greater than 5, or 3/4 is greater than the logarithm of
5; thus it appears that there is a value of x between 2/3 and 3/4, such that
(385.) COR. 2. If quantities be in geometrical progression, a^x, a^2x, a^3x, &c. their logarithms, x, 2x, 3x, &c. are in arithmetical progression.
The method of finding the logarithms of the natural numbers, or forming a table, is explained in the Doctrine of Fluxions.
(386.) The sum of the logarithms of two numbers is the logarithm of their product; and the difference of the logarithms is the logarithm of their quotient.
Let x = log. of c, and y = log. of d; then a^x = c and a^y = d; hence, a^x + y = dc, and a ^x - y = c/d; or x + y is the log. of dc, and x - y the log. of c/d.
Ex. 1. Log. of 3×7 = log. of 3 + log. of 7.
Ex. 2. Log. of pqr = log. of pq + log. of r = log. of p + log. of q + log. of r.
Ex. 3. Log. of 5/7 = log. of 5 - log. of 7.
(387.) If the log. of a number be multiplied by n, the product is the log. of that number raised to the n^th power.
Let d be the number whose log. is x, or a^x = d; then a^nx = d^n; that is, nx is the log. of d^n.
Exs. Log. of ^z = z×log. b.
(388.) If the log. of a number be divided by n, the quotient is the log. of the n^th root of that number.
Let a^x = d, then a^x/n = d^1/n, or x/n is the log. of d^1/n.
Ex. Log. of 5^1/4 = 1/4×log. of 5.
(389.) The utility of a table of logarithms in arithmetical calculations will from hence be manifest; the multiplication and division of numbers being performed by the addition and subtraction of
these artificial representatives; and the involution or evolution of numbers, by multiplying or dividing their logarithms by the indices of the powers or roots required.
Ex. Let the value of
Log. of 7 = .845098
1/2 log. of 2 = .150515
1/3 log. of 3 = .1590404
5) 1.1546534 sum
.2309306 = log. of 1.70188 &c. the value required.
ON INTEREST AND ANNUITIES.
(390.) Interest is the consideration paid for the use or forbearance of the payment of money. The rate of interest is the consideration paid for the use of a certain sum for a certain time, as of £1.
for one year.
When the interest of the principal alone, or sum lent, is taken, it is called simple interest; but if the interest, as soon as it becomes due, be considered as principal, and interest be charged upon
the whole, it is called compound interest.
The amount is the whole sum due at the end of any time, interest and principal together.
Discount is the abatement made for the payment of money before it becomes due.
(391.) To find the amount of a given sum, in any time, at simple interest.
Let P be the principal,
r, the interest of one pound for one year,
n, the time for which the interest is to be calculated,
M, the amount.
Then since the interest of a given sum, at a given rate, must be proportional to the time, 1 (year) : n (years) :: r : nr the interest of £1. for n years; and the interest of P£, must be P times as
great, or nrP; therefore the amount M = P + nrP.
(392). In this simple equation, any three of the quantities P, n, r, M being given, the fourth may be found; thus,
Ex. What sum must be paid down to receive £600, at the end of nine months, allowing 5 per cent, discount? Or, which is the same thing, what principal P will in nine months be equivalent, or amount to
£600., allowing 5 per cent, interest?
In this case, M = 600, n = 3/4 = .75, r = .05; hence,
(393.) To find the amount of an annuity, or pension left unpaid any number of years, allowing simple interest upon each sum or pension from the time it becomes due.
Let A be the annuity; then at the end of the first year, A becomes due, and at the end of the second year, the interest of the first annuity is rA (Art. 391); at the end of this year, the principal
becomes 2A, therefore the interest due at the end of the third year is
2rA; in the same manner, the interest due at the end of the fourth year is 3rA; &c. hence, the whole interest is nA; therefore the whole amount
(394.) Required the present value of an annuity to continue a certain number of years, allowing simple interest for the money.
Let P be the present value; then if P, and the annuity, at the same rate of interest, amount to the same sum, they are upon the whole of equal value. The amount of P, in n years, is P + nrP (Art.
391); and the amount of the annuity in the same time is n A
(395.) In this equation any three of the four quantities P, A, n, r being given, the other may be found.
(396.) COR. Let n be infinite, then P = nA/2 an infinite quantity, therefore for a finite annuity to continue for ever, an infinite sum ought, according to this calculation, to be paid; a conclusion
which shews the necessity of estimating the value of an annuity upon different principles.
(397.) To find the amount of a given sum at compound interest.
Let R = £1. together with it's interest for a year; then at the end of the first year, R becomes the principal, or sum due; therefore
1 : R :: R : R^2, the amount in two years;
1 : R :: R^2 : R^3, the amount in three years; &c. in the same manner, R^n is the amount in n years; and if P be the principal, the amount must be P times as great, or PR^n = M.
(398.) COR. 1. From this equation we have P = M/R^n.
Ex. What sum must be paid down to receive £600. at the end of three years, allowing 5 per cent, per ann. compound interest;
In this case, R = 1.05, n = 3, M = 600; and consequently P = M/R^n =
(399.) COR. 2. If P, R and M be given to find n, we have log. P + n×log. R = log. M; and n =
(400.) To find the amount of an annuity in any number of years, at compound interest.
Let A be the annuity, or sum due at the end of the first year; then 1 : R :: A : RA, it's amount at the end of the second year; therefore A + RA is the sum due at the end of the second year; in the
same manner,
the two payments at the end of the third year; and n years, that is,
(401.) COR. In this equation, any three of the quantities being given, the fourth may be found.
(402.) To find the present value of an annuity to be paid for n years, allowing compound interest.
Let P be the present value, A the annuity; then since PR^n is the amount of P in n years, and A in the same time; by the question,
(403.) COR. 1. Any three of the quantities P, A, R, n being given, the fourth may be found.
(404.) COR. 2. If the number of years be infinite, R^n is infinite, and 1/R^n vanishes; therefore
Ex. If the annual rent of a freehold estate be £1., what is it's value, allowing 5 per cent, compound interest?
In this case, A = 1, R - 1 = .05; therefore the present value P = 1/.05 = £20., or 20 years purchase.
(405.) COR. 3. The present value of an annuity, to commence at the expiration of p years, and to continue q years, is the difference between it's present value for p + q years, and it's present value
for p years.
Ex. What is the present value of an annuity of £l., for 14 years, to commence at the expiration of 7 years, allowing 5 per cent, compound interest?
The present value for 21 years =
(406.) The method of determiningthe present value of an annuity at simple interest, given in Art. 394, has been decried by several eminent Arithmeticians, and in it's stead, a solution of the
question has been proposed upon the following principle; "If the present value of each payment be determined separately, the sum of these values must be the value of the whole annuity."
Let x be the value or price paid down for the annuity, a the yearly payment, n the number of years for which it is to be paid, r the interest of £1. for one year. The present value of the first
payment is
(Art. 392), the present value of the second payment, or of a£. to be paid at the end of two years, is
(407.) These different conclusions arise from a circumstance which the Opponents seem not to have attended to. According to the former solution, no part of the interest of the price paid down is
employed in paying the annuity, till the principal is exhausted.
Let the annuity be always paid out of the principal x as long as it lasts, and afterwards out of the interest which has accrued; then x, x - a, x - 2a, x - 3 a, &c. are the sums in hand, during the
first, second, third, fourth, &c. years, the interest arising from which, rx, rx - ra, rx - 2ra, rx - 3ra, &c. that is, the whole interest, is x, is equal to the sum of all the annuities; therefore
According to the other calculation, part of the interest, as it arises, id employed in paying the annuity, but hot the whole. Thus, the first payment is made by a part of the principal, and the
interest of that part, which together amount to the annuity; and the other payments are made in the same manner; this is, in
effect, allowing interest upon that part of the whole interest which is incorporated with the principal. According to either calculation the seller has the advantage, since the whole, or a part of
the interest will remain at his disposal till the last annuity is paid off.
If the whole interest, as it arises, be incorporated with the principal, and employed in paying the annuity, compound interest is, in effect, allowed upon the whole. Let x be the price paid for the
annuity, n the number of years for which itis granted, and R = 1£. together with it's interest for one year. Then x in one year amounts to Rx, out of which the annuity being paid, Rx - a is the sum
in hand at the end of the first year; R^2x - Ra is the amount of this sum at the end of the second year, therefore R^2x - Ra - a is the sum in hand at the end of the second year; in the same manner,
R^xx - R^n - 1a - R^n - 2a … a is the sum left, after paying the last annuity, which ought to be nothing; therefore R^nx = R^n - 1a + R^n - 2a …… + a =
ON THE SUMMATION OF SERIES.
(408.) We have before seen the method of determining the sums of quantities in arithmetical and geometrical progression, but when the terms increase or decrease according to other laws, different
artifices must be used to obtain-general expressions for their sums.
P 2
The methods chiefly adopted, and which may be considered as belonging to Algebra, are, 1. The method of subtraction. 2. The summation of recurring series, by the scale of, relation. 3. The
Differential method. 4. The method of Increments.
(409.) The investigation of series whose sums are known by subtraction.
Ex. 1.
Ex. 2.
Ex. 3.
In the same manner, if 1 - 1/2 + 1/3 - 1/4 + &c. in inf. = S, we obtain 1/1.3 - 1/2.4 + 1/3.5 - &c. in inf. = 1/4.
Ex. 4.
Let 1/1.2 + 1/2.3 + 1/3.4 + &c. in inf. = S
then 1/2.3 + 1/3.4 + 1/4.5 + &c. in inf. = S - 1/2
by subt. 2/1.2.3 + 2/2.3.4 + 2/3.4.5 + &c. in inf. = 1/2
and 1/1.2.3 + 1/2.3.4 + 1/3.4.5 + &c. in inf. = 1/4.
Ex. 5.
If n be increased without limit, 1/mr + nr^2 vanishes, and the sum of the series is 1/mr.
If m = r = 1, we have 1/1.2 + 1/2.3 + 1/3.4 + &c. (to n terms) = 1 - 1/ 1 + n = n / 1 + n.
In the same manner,
Ex. 6.
The sum of n terms of the series, determined as in the last example, is
Let m = r = 1; then 1/1.2.3 + 1/2.3.4 + 1/3.4.5 +&c. in infinitum = 1/4; and 1/1.2.3 + 1/2.3.4 + &c. to n terms, =
Ex. 7.
(410.) To find the sum of the series 1/2.4.6 + 1/4.6.8 + 1/6.8.10 + &c. in infinitum.
When the sum of a series of this kind is required, take away the last factor out of each denominator, and assume the resulting series equal to S; and then proceed as in the former examples.
Let 1/2.4 + 1/4.6 + 1/6.8 + &c. in inf. = S
then 1/4.6 + 1/6.8 + 1/8.10 + &c. in inf. = S - 1/8
by subt. 4/2.4.6 + 4/4.6.8 + 4/6.8.10 + &c. in inf. = 1/8
and 1/2.4.6 + 1/4.6.8 + 1/6.8.10 + &c. in inf. = 1/32.
Ex. 8.
(411.) To find the sum of the series
n is less than m.
Ex. 9.
To find the sum of the series
* Here we take for granted that the terms of the assumed series converge to 0.
Let a = 3, b = m = r = 1, then 4/1.2.3 + 5/2.3.4 + 6/3.4.5 + &c. in inf. =
Let a = 0, b = 1, m = 1, r = 2; then 1/1.3.5 + 2/3.5.7 + 3/5.7.9 + &c. in inf. = 1/8.
See Philosophical Transactions, Vol.lxxii. page 389.
(412.) Similar to the method of subtraction is the following, given. by De Moivre, Miscel. Anal. p. 130.
Assume a series, whose terms converge to 0, involving the powers of an indeterminate quantityx; call the sum of the series S, and multiply both sides of the equation by a binomial, trinomial, &c.
which involves the powers of x and invariable coefficients; then if x be so assumed that the binomial, trinomial, &c. may vanish, and some of the first terms be transposed, the sum of the remaining
series is equal to the terms so transposed.
Ex. 1.
Let 1 + x/2 + x^2/3 + x^3/4 +&c. in inf. = S.
Multiplying both sides by x-1, we have
Ex. 2.
Assume 1 + x/2 + x^2/3 + x^3/4 + &c. in inf. = S, and multiplying both sides by x^2 - 1; then
Let x^2 - 1 = 0, then x = 1, or - 1; if x = 1, then - 1 - 1/2 + 2/1.3 + 2/2.4 + 2/3.5 + &c. in inf. = 0,
that is, 2/1.3 + 2/2.4 + 2/3.5 + &c. in inf. = 3/2,
and 1/1.3 + 1/2.4 + 1/3.5 + &c. in inf. = 3/4.
If x = - 1, then
- 1 + 1/2 + 2/1.3 - 2/2.4 + 2/3.5 - &c. in inf. = 0,
and 1/1.3 - 1/2.4 + 1/3.5 - &c. in inf. = 1/4.
Ex. 3.
Let 1 + x/2 +x^2/3 + x^3/4 + &c. in inf. = S, and multiply both sides by 2x - 1; then
x - 1 = 0, or x = 1/2, and - 1 + 3/1.2.2 + 4/2.3.2^2 + 5/3.4.2^3 + &c. = 0,
or 3/1.2.2 + 4/2.3.2^2 + 5/3.4.2^3 + &c. in inf. = 1.
(413.) If both sides of the equation be multiped by a binomial, each term of the series obtained will have two factors in it's denominator; if by a trinomial, each term will have three factors in
it's denominator; &c.
Ex. 4.
Let 1 + x/2 + x^2/3 + x^3/4 + &c. in inf. = S.
multiply both sides by
or 1 - 5x/2 + 5x^2/1.2.3 + 6^3/2.3.4 + 7x^4/3.4.5 + &c. =
If x = 1, then 1 - 5/2 + 5/1.2.3 + 6/2.3.4 + &c. = 0, and 5/1.2.3 + 6/2.3.4 + 7/3.4.5 + &c. in inf. = 3/2. If x = 1/2 then 1 - 5/4 + 5/1.2.3.2^2 + 6/2.3.4.2^3 + 7/3.4.5.2^4 + &c. = 0, or 5/1.2.3.2^2
+ 6/2.3.4.2^3 + 7/3.4.5.2^4 + &c. in inf. = 1/4.
Let 1/m + x/ m + r + x^2/m + 2r + &c. in inf. = S;
multiply both sides by the binomial ax - b; then
Let ax - b = 0, and transpose - b/m; then
latter may be found. Thus, let the sum of the infinite series 2/1.3×1/3 + 3/3.5×1/3^2 + 4/5.7×1/3^3 + &c. be required.
In this case, x = 1/3, therefore a/3 - b = 0, or a = 3b; also, m = 1, r = 2, and
To find the sum of n terms of the series,
and proceeding as before,
* This assumption renders the restriction in De Moivre's rule, respecting the convergence of the series, unnecessary.
^2 + 4/5.7×1/3^3 + &c. the sum of which, to n terms, is 1/4 -
Ex. 6.
To find the sum of 19/1.2.3×1/4 + 28/2.3.4×1/8 + 39/3.4.5×1/16 + 52/4.5.6×1/32 + &c. in inf.
Because the factors in the denominators increase by 1, and begin from 1, assume 1 + x/2 + x^2/3 + x^3/4 + x^4/5 + &c. = S, and multiply both sides by ax^2 - bx + c; then,
and since ax^2 - bx + c = 0, and one value of x is 1/2, because the powers of 1/2 are involved in the terms of the proposed series, we have a/4 - b/2 + c = 0; also, 6a - 3b + 2c = 19, and 12a - 8b +
6c = 28; from which three equations it appears that a = 6, b = 7, c = 2; and if these values be substituted in the general series, we have, 2 - 3 + 19/1.2.3×1/4 + 28/2.3.4×1/8 + 39/3.4.5×1/16 + 52/
4.5.6×1/32 + &c. = 0, or, 19/1.2.3×1/4 + 28/2.3.4×1/8 + 39/3.4.5×1/16 + &c. = 1.
The sum of n terms of this series, determined as in the last example, is
ON RECURRING SERIES.
(414.) If each succeeding term of a decreasing series bear an invariable relation to a certain number of the preceding terms, the sum of the series may be found.
Let a + bx + cx^2 + &c. be the proposed series; call it's terms A, B, C, D, &c. and let C = fxB + gx^2A, D = fxC + gx^2B, &c. where f + g is denominated the scale of relation; then, by the
A = A
B = B
C = fxB + gx^2A
D = fxC + gx^2B
E = fxD - gx^2C
&c. &c.
and if the whole sum A + B + C+ D + &c. in inf. = S, we have,
S - fxS - gx^2S = A + B - fxA; therefore, S =
In the same manner, if the scale of relation be f + g + &c. to n factors, the sum of the series is
Ex. 1.
To find the sum of the infinite series 1 + 3x + 9x^2 + &c. when x is less than 1/3.
Here f = 3, and the sum =
Ex. 2.
To find the sum of the infinite series 1 + 2x + 3x^2 + 4x^2 + &c. when x is less than 1.
Here, f = 2, g = - 1, and the sum =
If x be equal to, or greater than 1, the series is infinite; yet we know that it arises from the division of 1 by n terms may be determined.
The series after the n first terms becomes ^2 + &c. to n terms
If the sign of x be changed, 1 - 2x + 3x^2 - &c. to n terms n is an even or odd number.
Ex. 3.
To find the sum of n terms of the series 1 + 3x + 5x^2 + 7x^3 + &c.
Suppose f + g to be the scale of relation; then, 3f + g = 5, and 5f + 3g = 7; hence, f = 2, and g = - 1; and, by trial, it appears that the scale of relation is properly determined; hence,
After n terms, the series becomes
Ex. 4.
To find the sum of 1 + 2x + 3x^2 + 5x^3 + 8x^4 + &c. in inf. when the series converges.
In this case the scale of relation is 1 + 1, and consequently, the sum is
If x becomes negative, 1 - 2x + 3x^2 - 5x^3 + &c. in inf.
Ex. 5.
To find the sum of n terms of the series
In the series
n terms, the series becomes - x^n + 1 - 2x^n + 2 - &c. the sum of which is found in the same manner, to be nterms,
Hence, n terms, =
Ex. 6.
To find the sum of n terms of the series 1^2 + 2^2x + 3^2x^2 + 4^2x^3 = &c.
Let the scale of relation be f + g + h; then
9f + 4g + h = 16
16f + 9g + 4h = 25
25f + 16g + 9h = 36.
From these equations we obtain f = 3, g = - 3, h = 1, which values, when substituted, produce the successive terms of the proposed series; therefore S = x is less than 1.
After the first n terms, the series becomes n terms of the series is
On this subject the Reader may consult De Moivre's Misc. Analyt. p. 72. And Euler's Analys. Infinit. C. XIII.
ON THE DIFFERENTIAL METHOD.
(415.) In any series of quantities a, b, c, d, e, f, &c. if each term be taken from that which follows it, and the differences of these differences be taken, and so on, the following ranks of
differences will be obtained;
Q 2
1^st Diff. b - a, c - b, d - c, e - d, f - e, &c.
2^d Diff. c - 2b + a, d - 2c + b, e - 2d + c, f - 2e + d, &c.
3^d Diff. d - 3c + 3b - a, e - 3d + 3c - b, f - 3e + 3d - d, - c, &c.
4^th Diff e - 4d + 6c - 4b + a, f - 4e + 6d - 4c + b, &c.
5^th Diff. f - 5e + 10d - 10c + 5b - a, &c.
&c. &c.
Hence it appears, that the coefficients of the quantities a, b, c, d, &c. in the first term of the n^th differences, are the coefficients of the terms of a binomial raised to the n^th power, and that
their signs are alternately positive and negative; that is, the first term of the n^th differences is n^th is an even or an odd number.
COR. If the n^th differences vanish, the n terms; and the n terms.
(416.) Let d^I, d^II, d^III, d^IV, &c. represent the first terms in the first, second, third, fourth, &c. orders of differences; then
d^I = b - a
d^II = c - 2b + a
d^III = d - 3c + 3b - a
d^IV = e - 4d + 6c - 4b + a
and by transposition,
b = a + d^I
c = 2b - a + d^II = a + 2d^I + d^II
d = 3c - 3b + a + d^III = a + 3d^I+ 3d^II + d^III
e = 4d - 6c + 4b - a + d^IV = a + 4d^I + 6d^II + 4d^III + d^IV
&c. &c.
from which it is manifest, that the coefficients of a, d^I, d^II, d^III, &c. in the expression for the a, b, c, d, &c. are the coefficients of the terms of a binomial raised to the n^th power; that
is, the
(417.) COR. 1. The n^th term of the series is
Required the n^th term of the series 1, 3, 5, &c.
1, 3, 5, 7
2, 2, 2
0, 0
Here, a = 1, d^I = 2, d^II = 0; therefore the n^th term is
(418.) COR. 2. If the differences at length vanish, the n^th term of the series will be exactly determined; but if the differences do not vanish, we can only approximate to it; and the less the
differences become, when compared with the former differences, and with n, the nearer will the approximation be to the true value of that term.
(419.) Let the proposed series be 0, a, a + b, a + b + c, a + b + c +d, &c. then,
1^st Diff. a, b, c, d, &c.
2^d Diff. b - a, c - b, d - c, &c.
3^d Diff. c - 2b + a, d - 2c + b, &c.
4^th Diff. d - 3c + 3b - a, &c.
Let b - a = d^I, c - 2b + a = d^II, d - 3c + 3b - a = d^III, &c. then the n terms, is
If therefore a, b, c, d, &c. be the terms of any series, whose first, second, third, &c. differences are represented by d^I, d^II, d^III, &c. the sum, ofn terms of this series is
Ex. 1.
Required the sum of the series 1 + 3 + 5 + 7 + &c. continued to n terms.
1, 3, 5, 7
2, 2, 2
0, 0
In this case, a = 1, d^I = 2, d^II = 0; hence, the sum is
Ex. 2.
Required the sum of the series 1^2 + 2^2 + 3^2 + 4^2 + &c. continued to n terms.
1, 4, 9, 16
3, 5, 7
2, 2
Here, a = 1, d^I = 3, d^II = 2, d^III = 0; therefore the sum =
Ex. 3.
To find the sum of the series 1^3 + 2^3 + 3^3 + &c. to n terms.
1, 8, 27, 64, 125
7, 19, 37, 61
12, 18, 24
6, 6
Here, a = 1, d^I = 7, d^II = 12, d^III = 6, d^IV = 0; therefore the sum =
Sometimes the sum may be more readily obtained by beginning the series with one or more cyphers; thus, to find the sum of n terms of the series 1^3 + 2^3 + 3^3 + &c. take n + 1 terms of the series 0
+ 1^3 + 2^3 + 3^3 + &c.
0, 1, 8, 27, 64
1, 7, 19, 37
6, 12, 18
6, 6
Here, a = 0, d^I = 1, d^II = 6, d^III = 6; and the sum of
of n +1 terms is
The differential method may also be applied to the interpolation of series, and the quadrature of curves. See Emerson's Tract on this subject.
ON THE METHOD OF INCREMENTS.
(420.) Any variable quantity is called an integral. The magnitude by which it is increased at one step, is called the increment. Thus, 1 + 2 + 3. … + m =
When the quantity decreases, the increment becomes negative.
(421.) If two quantities begin to increase together, and their corresponding increments be always in the same ratio, their integrals, or the whole quantities generated, will be in that ratio.
Let the corresponding increments be A, B, C, &c. and a, b, c, &c. and let A : a :: B : b :: C : c &c. :: m : n; then A + B + C + &c. : a + b + c + &c. :: m : n, (Art. 183).
COR. When m = n, or the increments are equal, the integrals are also equal.
(422.) If an integral be represented by the product of quantities in arithmetical progression, as where r, is constants, and m is increased at every step by r, the increment of this integral, is
The first value of the integral is
(423.) COR. 1. Since an invariable quantity C has no increment, the increment of
(424) COR. 2 Hence, if the increment of an integral be represented by C must be determined by the nature of the question.
(425.) COR. 3. If the increment be A is invarible, the integral is
(426.) If A be the constant increment, and m the number of times it has been taken, the integral is mA ± C.
(427.) To find, therefore, the integral of any increment, let the increment be reduced to the products of arithmetical progressionals whose common difference is the quantity by which the variable
magnitude is increased at every step, and the integral of each increment will be found by multiplying it by the preceding term in the progression, and dividing it by the number of terms, thus
increased, and by the common difference.
(428.) The constant quantity which is to be added to, or subtracted from this result, in order to obtain the correct integral, must be determined by the nature of the question; thus, when x, the
integral obtained by the rule, is a, suppose the true integral is known to be b; then since x + C is in all cases the integral, a + C = b, or C = b - a; therefore the correct integral is x + b - a.
Ex. 1.
To find the sum of the series 1+ 2 + 3 + &c. continued to n terms.
The n^th term is n, and the increment of the sum is n + 1, whose integral, according to the rule, is
And this is the correct integral, because when n = 1, the sum is 1.2/2 = 1, as it ought to be.
Ex. 2.
To find the n^th term of the series 5, 9, 16, 26, 39, &c.
Take the differences, as in Art. 415,
5, 9, 16, 26, 39,
4, 7, 10, 13
3, 3, 3
It is manifest that the n^th term of any order of differences is the increment of the n^th term of the preceding order; therefore 3 is the increment of the n^th term of the first differences, and 3n
+ C is the n^th term; when n = 1, 3 + C = 4, or C = 1; hence the n^th term of the first differences, or the increment of the n^th term of the original series, is 3n + 1; consequently the n^th term
required is
let n = 1, and 1 + C = 5, or C = 4; therefore the n^th term is
Ex. 3.
To find the sum of the series 1^2 + 2^2 + 3^2 + &c. continued to n terms.
The increment of this sum is
Ex. 4.
To find the sum of the series 1^2 + 3^2 + 5^2 + &c. continued to n terms.
The increment of the sum is
Ex. 5.
To find the sum of n terms of the series. 1^3 + 2^3 + 3^3 + &c.
The increment of the sum is
The following table of figurate numbers is formed by making the n^th term of each succeeding rank equal to the sum of n terms of the preceding.
1^st Order 1, 1, 1, 1, 1, 1
2^d 1, 2, 3, 4, 5, 6
3^d 1, 3, 6, 10, 15, 21
4^th 1, 4, 10, 20, 35, 56
5^th 1, 5, 15, 35, 70, 126
&c. &c.
Ex. 6.
To find the sum of n terms of the m^th order of figurate numbers.
The sum of n terms of the 1^st order is n; therefore n + 1 is the increment of the sum of n terms of the 2^d order, and it's integral, n^th term of the 3^d order; consequently n terms, of the 3^d
and it's integral, n terms of the 3^d order, or the n^th term of the 4^th order; &c.
Thus it appears that
Ex. 7.
To find the sum of n terms of the series 1.2 + 2.3 + 3.4 + &c.
Ex. 8.
To find the sum of the n first terms of a series whose n^th term is an^3 + bn^2 + cn + d; a, b, c, d, being given quantities.
and by equating the coefficients, A = a; 3A + B = 3a + b, or 3a + B = 3a + b, that is, B = b; 2A + B + C = 3a + 2b + c, or 2a + b + C = 3a + 2b + c, hence, C = a + b + c; also, D = a + b + c + d;
therefore the increment of the sum is
which requires no correction.
(429.) Though in general it is convenient to reduce an increment to the products of arithmetical progressionals, in order to obtain it's integral; yet if a quantity of any other form can be found,
whose increment coincides with the increment proposed, this quantity, when properly corrected, is the integral (Art. 421).
Ex. 1.
To find the sum of n terms of the series 5 + 6 + 7 + &c.
Let An + Bn^2 + Cn^3 + &c. be the sum required; it's increment is ^2 - Cn^3 - &c. and the increment of the sum is also n + 5; therefore A + 2Bn + B + 3Cn^2 + 3Cn + C + &c. = n + 5; and by equating
the coefficients, C = 0; 2B = 1, or B = 1/2; A + B = 5, or A = 9/2; hence the sum required is
Ex. 2.
To find the number of shot in a pyramidal pile upon a square base whose side is known.
Let n be the number in one side of the base; then n^2 is the number contained in the first square; also, since one shot in the next square, will lie between every two in the former, n - 1 is the
number contained in the
side of the second square, and n^2+
Suppose An + Bn^2 + Cn^3 to be the sum of the series; it's increment is Bn^2 - CnM^3; and the incrrment of the series is also
and by equating the coefficients, 3C = 1, or C = 1/3, 3C + 2B = 2, or B = 1/2; C + B + A = 1, or 1/3 + 1/2 + A = 1, hence A = 1/6; and the sum of the series is n/6 + n^2/2 + n^3/3; which requires no
correction. If An + Bn^2 + Cn^3 + Dn^4 + &c. be assumed for the sum of the series, it is evident, from the process, that D, and the coefficient of every succeeding term, vanishes.
(430.) represent an integral, where r is constant, and m increases at every step by r it's increments is
For, the next value of the integral is
value be taken, the difference or increment, is
(431.) COR. 1. Hence, if
(432.) COR. 2. If the increment be
That is, if an increment can be reduced to the form
This integral may be corrected as in Art. 428.
Ex. 1.
To find the sum of m terms of the series 1/1.2.3 + 1/2.3.4 + 1/3.4.5 + &c.
+ C = 1/1.2.3, and C = 1/1.2.3 + 1/2.2.3 = 3/12 = 1/4; hence, the correct integral, or sum required, is 1/4 -
Ex. 2.
To find the sum of n terms of the series 5/1.2.3.4 + 7/2.3.4.5 + 9/3.4.5.6 + &c.
Ex. 3.
To find the sum of n terms of the series 1/1.3 + 1/2.4 + 1/3.5 + 1/4.6 + &c.
The increment of the sum is n = 1, then - 1/3 - 1/2.2.3 + C = 1/1.3; hence, C = 2/3 + 1/12 = 9/12 = 3/4, and the sum required is 3/4 -
Ex. 4.
To find the sum of n terms of the series 1/1.3 + 1/3.5 + 1/5.7 + &c.
The increment of the sum is n, i.e. it increases by 1 at every step; and
They who wish to prosecute this subject farther, may consult Dr. Waring's Fluxions, Sterling's Summation of Series, and Emerson's Method of Increments.
ON CHANCES.
(433.) If an event may take place in n different ways, and each of these be equally likely to happen, the probability that it will take place in a specified way is properly represented by 1/n,
certainty being represented by unity: Or, which is the same thing, if the value of certainty be unity, the value of the expectation that the event will happen in a specified way is 1/n.
For, the sum of all the probabilities is certainty, or unity, because the event must take place in some one of the ways, and the probabilities are equal; therefore each of them is 1/n.
(434) COR. If the value of certainty be a, the value of the expectation is a/n. But in the following articles we suppose the value of certainty to be unity.
R 2
(435.) If an event may happen in a ways, and fail in b ways, any of these being equally probable, the chance of it's happening is .
The chance of it's happening must, from the nature of the supposition, be to the chance of it's failing, as a to b; therefore the chance of it's happening: chance of it's happening, together with the
chance of it's failing :: a : a + b; and the event must either happen or fail, consequently the chance of it's happening together with the chance of it's failing is certainty; hence, the chance of
it's happening: certainty :: a : a + b; and the chance of it's happening
Also, since the chance of it's happening together with the chance of it's failing is certainty, which is represented by unity,
Ex. 1.
(436.) The probability of throwing an ace with a single die, in one trial, is 1/6; the probability of not throwing an ace is 5/6; the probability of throwing either an ace or a deuce, is 2/6; &c.
Ex. 2.
(437.) If n balls, a, b, c, d, &c. be thrown promiscuously into a bag, and a person draw out one of them, the probability that it will be a is 1/n; the probability that it will be either a or b is 2/
Ex. 3.
(438.) The same supposition being made, if two balls be drawn out, the probability that these will be a and b is
For there are n things taken two and two together (Art. 230); and each of these is equally likely to be taken; therefore the probability that a and b will be taken is
Ex. 4.
(439.) If 6 white and 5 black balls be thrown promiscuously into a bag, and a person draw out one of them, the probability that this will be a white ball is 6/11; and the probability that it will be
a black ball is 5/11.
(440.) From the Bills of Mortality in different places, tables have been constructed which shew
how many persons, upon an average, out of a certain number born, are left at the end of each year, to the extremity of life. From such tables, the probability of the continuance of a life, of any
proposed age, is known.
Ex. 1.
(441.) To find the probability that an individual of a given age will live one year.
Let A be the number, in the tables, of the given age, B the number left at the end of the year; then B/A is the probability that the individual will live one year; and
Ex. 2.
(442.) To find the probability that an individual of a given age will live any number of years.
Let A be the number, in the tables, of the given age, B, C, D,. … X, the number left at the end of 1, 2, 3,. … t, years; then B/A is the probability that
the individual will live one year; C/A the probability that he will live 2 years; and X/A the probability that he will live t years. Also, t, years.
These conclusions follow immediately from Art. 435.
(443.) If two events be independent of each other, and the probability that one will happen be 1/m, and the probability that the other will happen be 1/n, the probability that they will both happen
is 1/mn.
For, each of the m ways in which the first can happen or fail, may be combined with each of the n ways in which the other can happen or fail, and thus form mn combinations, and there is only one in
which both can happen; therefore the probability that this will be the case is 1/mn (Art. 433.)
(444.) COR. 1. The probability that both do not happen is 1 - 1/mn, or
be subtracted, the remainder is the probability that they do not both happen.
(445.) COR. 2. The probability that they will both fail is
(446.) COR. 3. The probability that one will happen and the other fail is
(447.) COR. 4. If there be any number of independent events, and the probabilities of their happening be 1/m, 1/n, 1/r, &c. respectively, the probability that they will all happen is
the two first will happen is 1/mn, and the probability that the two first and third will happen is 1/mnr; and the same proof may be extended to any number of events. When m = n = r &c. the
probability is 1/m^v, v being the number of events.
Ex. 1.
(448.) Required the probability of throwing an ace and then a deuce with one die.
The chance of throwing an ace is 1/6, and the chance of throwing a deuce in the second trial is 1/6; therefore the chance of both happening is 1/36.
Ex. 2.
(449.) If 6 white and 5 black balls be thrown promiscuously into a bag, what is the probability that a person will draw out first a white, and then a black ball?
The probability of drawing a white ball first is 6/11 (Art. 439), and this being done, the probability of drawing a black ball, is 5/10, or 1/2, because there are 5 white and 5 black balls left;
therefore the probability required is 6/11×1/2 = 3/11. Or we may reason thus;
unless the person draw a white ball first, the whole is at an end; therefore the probability that he will have a chance of drawing a black ball is 6/11, and when he has this chance, the probability
of it's succeeding is 5/10, or 1/2; therefore, the probability that both these events will take place is 6/11×1/2, or 3/11.
Ex. 3.
(450.) The same supposition being made, what is the chance of drawing a white ball and then two black balls?
The probability of drawing a white ball and then a black one is 3/11 (Art. 449); when these two are removed, there are 5 white and 4 black balls left; and the probability of drawing a black ball, out
of these, is 4/9; therefore the probability required is 3/11×4/9 = 4/33.
Ex. 4.
(451.) Required the probability of throwing an ace' with a single die, in two trials.
The chance of failing the first time is 5/6, and the chance of failing the next is 5/6; therefore the chance
of failing twice together is 25/36, and the chance of not failing, both times, is 1 - 25/36, or 11/36.
Ex. 5.
(452.) In how many trials may a person undertake, for an even wager, to throw an ace with a single die?
Let x be the number of trials; then, as in the last Art. the chance of failing x times together is
Ex. 6.
(453.) To find the probability that two individuals, P and Q, whose ages are known, will live a year.
Let the probability that P will live a year, determined by Art. 441, be 1/m; and the probability that Q will live a year, 1/n; then the probability that they will both be alive at the end of that
time is 1/m×1/n or 1/mn.
Ex. 7.
(454.) To find the probability that one of them, at least, will be alive at the end of any number of years.
The probability that P will die in a year is Q will die is
In the same manner, if 1/p be the probability that P will live t years, and 1/q the probability that Q will live the same time (Art. 442); the probability that one of them, at least, will be alive at
the end of the time is 1-
(455.) If the probability of an event's happening in one trial be represented by n trials.
The probability of it's happening in any one particular trial being n - 1 trials is
447); therefore the probability of it's happening in one particular trial, and failing in the rest, is n trials, the probability that it will happen in some one of these, and fail in the rest, is n
times as great, or n trials and fail in all the rest (Art. 230); therefore the probability that it will happen twice in n trials is t times is
(456.) COR. 1. The probability of the event's failing exactly t times in n trials may be shewn, in the same way, to be
(457.) COR. 2. The probability of the event's
happening at least t times in n trials is
For, if it happen every time, or fail only once, twice, …. n - t times, it happens t times; therefore the whole probability of it's happening, at least t times, is the sum of the probabilities of
it's happening every time, of failing only once, twice, …. n - t times; and the sum of these probabilities is
Ex. 1.
(458.) What is the probability of throwing an ace, twice, at least, in three trials, with a single die?
In this case, n = 3, t = 2, a = 1, b = 5; and the probability required is
Ex. 2.
(459.) What is the probability that out of five individuals, of a given age, three, at least, will die in a given time?
Let 1/m be the probability that any one of them will die in the given time (Art. 442); then we have given the probability of an event's happening in one instance, to find the probability of it's
happening three times, at least, in five instances.
In this case, a = 1, b = m - 1, n = 5, t = 3; therefore the probability required is
(460.) Much more might be said on a subject so extensive as the doctrine of chances; the Learner will however find the principal grounds of calculation in Articles 433, 435, 443, 455, and 457, and if
he wish for farther information, he may consult De Moivre's work on this subject. It may not be improper to caution him against applying principles which, on the first view, may appear self-evident,
as there is no subject in which he will be so likely to mistake as in the calculation of probabilities. A single instance will shew the danger of forming a hasty judgement, even in the most simple
case. The probability of throwing an ace with one die is 1/6, and since there is an equal probability of throwing an ace in the second trial, it might be supposed that the probability of throwing an
ace in two trials is 2/6.
This is not a just conclusion (Art. 451); for, it would follow, by the same mode of reasoning, that in six trials a person could not fail to throw an ace. The error, which is not easily seen, arises
from a tacit supposition that there must necessarily be a second trial, which is not the case if an ace be thrown in the first.
ON LIFE ANNUITIES.
(461.) To find the present value of an annuity of £1. to be continued during the life of an individual of a given age, allowing compound interest for the money.
Let r be the amount of £1., in one year; A the number of persons, in the tables, of the given age; B, C, D, &c. the number left at the end of 1, 2, 3, &c. years (Art. 440); then B/A is the value of
the life for one year, C/A, D/A, &c. it's value for 2, 3, &c. years; and the series must be continued to the end of the tables. Now the present value of £1., to be paid at the end of one year, is 1/r
(Art. 398); but it is only to be paid on condition that the annuitant is alive at the end of the year, of which event the probability is B/A; therefore the present value of the conditional annuity is
B/Ar (Art. 434); in the same manner, the present value of the second year's annuity is C/Ar^2; the present value of the third year's annuity is D/Ar^3; &c. therefore the whole value required is
(462.) De Moivre supposes, that out of eighty-six persons born, one dies every year, till they are all extinct.
This supposition is sufficiently exact, if our calculations be made for any age above ten, as will appear from an inspection of the tables; and on this supposition, the sum of the series
Let n be the number of years which any individual wants of 86; then will n be the number of persons living, of that age, out of which one dies every year; and n terms. The sun of the series n terms,
was found Art. 414. Ex. 3. to be n terms, is
(463.) COR. 1. This expression for the sum is the same with P be the present value of an annuity of £.1., to continue certain for n years, then P =
(464.) COR. 2. The present value of the annuity to continue for ever, from the death of the proposed individual, is
For, the whole present value of the annuity to continue for ever, is
(465.) To find the present value of an annuity of £.1., to be paid as long as two specified individuals are both living.
Find, by Art. 453, the probability that they will both be alive at the expiration of 1, 2, 3, &c. years, to the end of the tables; call these probabilities a, b, c, &c.
and r the amount of £1., in one year; then a/r + b/r^2 + c/r^3 + &c. is the present value of the annuity required. (See Art. 461).
(466.) To find the present value of an annuity of £1., to be paid as long as either of two specified individuals is living.
Find, by Art. 454, the probability that they will not both be extinct in 1, 2, 3, &c. years, to the end of the tables, and call these probabilities A, B, C, &c. then the present value of the annuity
is A/r + B/r^2 + C/r^3 + &c. (See Art. 461).
(467.) COR. If the annuity be M £., the present value is M times as great as in the former case, or M×
(468.) These are the mathematical principles on which the values of annuities for lives are calculated, and the reasoning may easily be applied to every proposed case. But in practice, these
calculations, as they require the combination of every year of each life with the corresponding years of every other life concerned in the question, will be found extremely laborious, and other
methods must be adopted when expedition is required. Writers on this subject, are De Moivre, Maseres, Simpson, Price, Morgan, and Waring.
THE END OF PART III.
S 2
ELEMENTS OF ALGEBRA.
PART IV.
THE APPLICATION OF ALGEBRA TO GEOMETRY.
(469.) THE signs made use of in algebraical calculations being general, the conclusions obtained by their assistance may, with great ease and convenience, be transferred from abstract magnitudes to
every class of particular quantities; thus, the relation of lines, surfaces, or solids, may generally be deduced from the principles of Algebra, and many properties of these quantities discovered,
which could not have been derived from principles purely geometrical.
(470.) Simple algebraical quantities may be represented by lines.
Any line AB, may be taken at pleasure to represent one quantity a, but if we have a second quantity, b, to represent, we must take a line which has to the former line, the same ratio that b has to a.
Instead of saying AB represents a, we may say AB = a, supposing AB to contain as many linear units as a contains numeral ones.
(471.) When a series of algebraical quantities is to be represented on one line, and each of them measured from the same point, the positive quantities being represented by lines taken in one
direction, the negative quantities must be represented by lines taken in the opposite direction.
Let a be the greatest of these quantities, then a - x may, by the variation of x, become equal to each of them in succession. Let AB be the given line, and A the point from which the quantities are
to be measured; take AB = a; and since a - x must be
measured from A, BD must be taken in the contrary direction = x, then AD = a - x; and that a - x may successively coincide with each quantity in the series, beginning with the greatest positive
quantity, x must increase; therefore BD, which is equal to x, must increase; and when x is greater than a, BD is greater than AB, and A.
(472.) COR. l. If the algebraical value of a line be found to be negative, the line must be measured in a direction opposite to that which, in the investigation, we supposed to be positive.
(473.) COR. 2. If quantities be measured upon, a line from it's intersection with another, the positive quantities being taken in one direction, the negative quantities must be taken in the other.
(474.) If a fourth proportional, to lines representing p, q, r be taken, it will represent q^r/p; and if p = 1, it will represent q^r; if also, q and r be equal, it will represent q^2.
(475.) If a mean proportional between lines representing a and b be taken, it will represent √ab, which, when a = 1, becomes √b. Hence it appears that any possible algebraical quantities may be
represented by lines; and conversely, lines may be expressed algebraically; and if the relations of the algebraical quantities be known, the relations of the lines are known.
(476.) The relations of surfaces to each other may be expressed algebraically.
Let the sides AB, AC of the rectangle AD contain the linear units a, b, respectively; then ab will be the number of superficial units contained in the
area. For, every unit in AB, or a, has b units in the area, corresponding to it; consequently there are, upon the whole, ab units in the area. Thus ab is a proper representation of the rectangle AD;
and by reducing other surfaces, to rectangles, their algebraical values may be found.
COR. Hence, the product of the two quantities a and b, is often called their rectangle; and when b is equal to a, this product is called the square of a.
(477.) In the same manner, if a, b, c represent the linear units in the three sides of a rectangular parallelepiped, abc will be the number of solid units contained in the figure; and consequently
solids may be compared, by comparing their algebraical values.
(478.) If the line PM move parallel to itself upon the indefinite line AP, and at the same time increase or decrease, the point M will trace out a straight line, or a curve. AP is called the abscissa
, and PM the ordinate; and the straight line, or curve, is said to be the locus of the point M.
The nature of the curve depends upon the relation of AP to PM; and this relation, when expressed algebraically, is called the equation to the curve.
(479.) Having given the nature, or construction of the curve, its equation may be found.
Let BM be a straight line cutting AP in a given
angle at B, the relation of AP to PM is expressed by a simple equation.
Suppose AP = x, PM = y, AB = a; then since the angles at B, P and M are invariable, BP bears an invariable ratio to PM, let this be the ratio of b : c.
Then since BP = AP - AB = x - a, we have x - a : y :: b : c, and by = cx - ca, or by - cx + ca = 0.
(480.) COR. A simple equation belongs to a straight line; because, by altering the values of b, c, cand a, and taking x and a, positive or negative, as the case requires, the equation by - cx + ca =
0 may be made to coincide with any proposed simple equation.
(481.) To find the equation to the Parabola.
Let a point S be taken without the right line CB, and let the indefinite line SM revolve about the point
S in the plane SBC; also, let CM, which is perpendicular to CB, cut SM in M; then, if SM be always equal to CM, the locus of the point M is a parabola.
Through S draw BSP at right angles to CB, and if SB be bisected in A, the curve will pass through A, as appears by the construction; draw MP perpendicular to BP, and let AP = x. PM = y, AS = a; then
SP^2 + PM^2 = (SM^2 = CM^2 =) BP^2, or ^2 - 2ax + a^2 + y^2 = x^2 + 2ax + a^2, or y^2 = 4ax.
(482.) To find the equation to the Ellipse.
Let two indefinite lines SM, HM, revolve, in a given plane, about the points S, H, and cut each
other in M, in such a manner, that SM + MH may be an invariable quantity; then the locus of the point M is an ellipse.
Bisect SH in C, and from M draw MP perpendicular to SH, or SH produced; let CP = x, PM = y, CS = c, SM + MH = 2a. Then ^2 - 2cx + x^2 + y^2 = 4a^2 - 4a×^2 + 2cx + x^2 + y^2; that is, by
transposition, 4a^2 + 4cx = 4a ^4 + 2a^2cx + c^2x^2 = a^2c^2 + 2a^2cx + a^2x^2 + a^2 y^2, or ^2 - c^2 = b^2, then a^2y^2 = a^2b^2- b^2x^2, and
(483.) COR. l. If S and H coincide, c = 0; hence, a = b, and y^2 = a^2 - x^2, the equation to a circle.
(484.) COR. 2. When x = + a, or - a, then y = 0; therefore taking CA = CD = a, the curve passes through A and D.
(485.) COR. 3. AP = z, then x = a - z; therefore AP and PM.
(486.) COR. 4. If AS be finite, and SM + MH be indefinitely increased, the limit to which the curve approaches is, at all finite distances from S, a parabola.
In this case z^2 vanishes when compared with 2az; therefore the limit to which the equation approaches is y^2 = b^2/a^2×2az; also, b^2 = a^2 - c^2 = a and c is finite and a is infinite, a + c is
ultimately equal to 2a; hence, b^2 = 2a×AS; therefore
(487.) To find the equation to the Hyperbola.
Let two indefinite lines SM, HM revolve, in a
given plane, about the points S, H, and cut each other in M, in such a manner that HM - SM may be a given quantity; then the locus of the point M is an Hyperbola.
Bisect SH in C, and draw MP perpendicular to HS, or HS produced; let CP = x, PM = y, SC = c, HM - SM = 2a. Then by proceeding as in Art. 482, c is greater than 2a (Euc. 20.1), let therefore b^2 = c^2
- a^2, then a^2b^2 = b^2x^2 - a^2b^2, or
(488.) COR. 1. The equation to the ellipse, y^2 = b^2/a^2×^2 be supposed to be negative.
(489.) COR. 2. The equation ^2 is positive; to a Parabola, when b^2 is infinite (Art. 486); and to an Hyperbola, when b^2 is negative.
(490.) COR. 3. If SM - HM = 2a, a figure, similar and equal to the former, will be traced out, which is called the opposite Hyperbola.
(491.) COR. 4. If x = ± a, then y = 0; therefore taking CA - CD = a, the curve passes through A and D.
(492.) COR. 5. If AP = z, then CP, or x = z + a,
and x^2 - a^2 = z^2 + 2az; hence, AP and PM.
(493.) COR. 6. In the opposite Hyperbola, x = z - a; therefore x^2 - a^2 = z^2 - 2az, and
(494.) To find the equation to the Cissoid of Diodes.
Let AB be the diameter of a semicircle ANB; from the points R and P, taken always at equal
distances from A and B, draw RN, PM, at right angles to AB, and join AN meeting PM in M; the point M will trace out a curve called the Cissoid of Diodes.
From the nature of the circle, AR×RB = RN^2 and by the construction, AR×RB = PB×AP; also, from the similar triangles APM, ARN, AP : PM :: AR : RN, or AP : PM :: PB : PB×AP; therefore PB ×
PM^2 = AP^3. Let AB = b, AP = x, PM = y; then
(495.) To find the equation to the Conchoid of Nicomedes.
Let AB be a line given in position, and about any point C, taken without it, let the indefinite line CM
revolve, and cut AB in E; then if EM be taken always of the same length, the point M will trace out a curve which is called the Conchoid of Nicomedes. Draw CAD and MP at right angles to AB, and MF
parallel to it; let CA = a, AD = EM = b, AP = x, PM = y. Then from the similar triangles CFM, MPE, CF(a + y) : FM(x) :: MP (y) : PE = xy/a + y; and EM^2 = EP^2 + PM^2, that is,
If EM be measured in the opposite direction from E, the equation to the curve is
(496.) To find the equation to the Logarithmic Curve.
If in the indefinite line AE, we take AB, BC, CD, &c. always equal to each other, and ordinates
AF, BG, CH, DI, &c. be drawn at right angles to AE, and in geometrical progression, the curve FGHI &c. which passes through their extremities, is called the Logarithmic Curve. From the nature of
logarithms (Art. 385), any abscissa AC is the logarithm of the corresponding ordinate CH, in a system which depends upon the magnitudes of AF and BG, supposing AB given; in the same system, let 1 be
the logarithm of a; also, let AC = x, CH = y; then x = log.y, and 1 = log. a, or x = x x log.a; therefore log. y = x x log. a = log.a^x; hence, y = a^x, the equation to the curve.
(497.) Having given the relation between one abscissa CP, and ordinate PM, in a curve, to find the
relation between the abscissaSQ, which is measured from a given point S in a given direction, and the ordinate QM, which is inclined to PM at a given angle.
Suppose PM perpendicular to CP; produce MQ and DPC till they meet in G, draw SB, SD, SF respectively parallel to MG, MP, DC; and let SB = FG = d, DC = f, CP = x, PM = y, SQ = z, QM = v.
Then in the triangle SQF, s : p :: z : pz/s = QF; hence, GM = GF + FQ + QM = d+ pz/s + v; and in the triangle MGP, 1 : s :: d + pz/s + v : sd + pz + sv = PM = y.
Also, in the triangle MKQ, m : q :: v: qv/m = KQ, and SK = SQ - KQ = z - qv/m; again, in the triangle SKE, 1 : m :: z - qv/m; mz - qv = SE = DP; hence, x and y be substituted in the equation which
represents the relation of CP to PM, an equation is obtained which represents the relation of SQ to QM.
(498.) COR. 1. Since the values of x and y are represented in simple terms of z and v, the equation to the curve will rise. to the same number of dimensions, whatever abscissa and ordinate are taken.
(499.) COR. 2. From the principles of Trigonometry it appears, that m, n, and s may be found in terms of p and q; therefore in the values of x and y, before obtained, there are only four independent
invariable quantities d, f, p, and q.
(500.) COR. 3. If the curve be a conic section whose centre is C and axis CP, then x and y their values, we have
(501.) COR. 4. The equation obtained in the last article may be made to coincide with any equation of two dimensions Av^2 + Bzv + Cv + Dz^2 + Ez + F = 0, by equating the coefficients of the
corresponding terms; because we shall have six equations to determine the six independent quantities, a^2, b^2, d, f, p, q. Hence it follows, that every equation of two dimensions belongs to some
conic section.
(502.) Having given the equation which expresses the relation between the abscissa and ordinate, the curve may be described.
For, any abscissa being assumed, the corresponding values of the ordinate are known from the equation; and thus, by assuming different values of the abscissa, the curve may be traced out.
(503.) Ex. 1. If ay = bx + cd be the proposed equa-
tion, it belongs to a right line (Art. 480). Let the abscissa be measured from the point A, along the line
AB; then, when x = 0, we have y = cd/d; from A, therefore, draw AC making a finite angle with AB, and equal to cd/a, and the line which belongs to the proposed equation must pass through C Also, if y
= 0, then x = - cd/b; take therefore, upon the line ADP, AD = cd/b, and the line to which the equation belongs must pass through D; therefore DCM is that line.
(504.) COR. If AP be taken to represent any value of x, and the ordinate PM be drawn parallel to AC, PM will represent the corresponding value of y.
(505.) Ex. 2. Let the equation to the curve be ax = y^2; then, when x = 0, we have y = 0, or the curve passes through A; when x is positive, y = ± √ax, and when x is infinite, these values are still
possible; therefore the curve has two infinite arcs lying the same way from A; but when x is negative, y
becomes impossible; therefore no part of the curve lies the other way.
(506.) Ex.3. Let the equation to the curve be xy = ab; then, when x is indefinitely small, y is indefinitely great, and when x is positive and indefinitely great, y is positive and indefinitely
small; therefore the curve will have
two infinite arcs between the lines AE and AB; also, when x is negative, y is negative, and when infinite, y is infinitely small, and when x is infinitely small, y is infinitely great; therefore the
curve will have two infinite ares between Ab and AF.
These lines EF, Bb, which continually approach to the curve, and whose distances from it become, at length, less than any that can be assigned, but which produced ever so far do not meet it, are
called Asymptotes.
(507.) Ex.4. Let x^4 - a^2x^2 + a^2y^2 = 0; then, y = ± x is nothing, y is nothing, or the curve passes through A, the point from which x
is measured. When x = ± a, then y = 0; therefore
the curve passes through B, and b, supposing AB = Ab = ± a; but if x be greater than a, y becomes impossible; therefore no part of the curve lies beyond B or b.
(508.) Ex. 5. To find the conic section to which any proposed quadratic equation belongs.
Hence, 1. If c^2 - 4af be positive, then, when ± x is infinite, y has four possible values; therefore the curve has four infinite arcs, or it is the hyperbola.
2. If c^2 - 4af = 0, the curve has only two infinite arcs; because, when parabola. But if 2bc - 4ae be also = 0, then y =
T 2
^2 is greater than 4ad, the curve becomes a right line.
3. If c^2 - 4af be negative, the curve has no infinite arc; for, when ± x is infinite, the values of y are impossible; hence the curve is an ellipse.
4. If c^2 - 4af be negative, 2bc - 4ae = 0, and b^2 - 4ad be also 0, or negative, all the values of y are impossible; in this case the ellipse wholly vanishes.
ON THE CONSTRUCTION OF EQUATIONS.
(509.) The relation between the abscissa and ordinate of a conic section is expressed by a quadratic equation, in which, for every different value of the abscissa, there are two corresponding values
of the ordinate, and if the abscissa be so drawn, and the conic section so constructed, that it's equation coincides with a proposed quadratic, the two ordinates will be the roots of that quadratic,
which may be determined to a tolerable degree of accuracy by actual measurement.
Let A and radius AM; take AP an abscissa, PM an ordinate meeting the circle in M and AM, and draw MB at right angles to AP; let AP = x, PM = y, AM = r, and the cosine of the angle APM (to the radius
l) = c; then 1 : c :: PM : PB = c x PM = cy,
and AM^2 - AP^2 + PM^2 AP^2 - 2AP x BP, or r^2 = x^2 + y^2-
2cxy; that is, y^2 - 2cxy + x^2 - r^2 = 0; which equation may be made to coincide with any proposed quadratic.
Ex. Let the roots of the equation y^2 - py + q = 0 be required.
Here 2cx = p, and x^2 - r^2 = q; and since there are three undetermined quantities c, x, and r, and only two conditions to be answered, one of these quantities may be assumed, of any finite
magnitude, at pleasure: suppose c = 1, then x = p/2, r^2 = x^2 - q = p^2/4 - q, and PM coincides with PAD; let therefore a circle be described with the radius DAP in D and C; take AP = p/2, and the
roots of the equation are PC and PD.
(510.) The intersections of two conic sections, may be determined by a biquadratic equation, and if the figures be so drawn that this biquadratic coincides with a proposed biquadratic, the roots of
the latter
equation may be found by measuring the ordinates which determine the points of intersection.
Let AP, C and radius CM, cutting the parabola in the points
from these points draw the ordinates to the axis, MP, C draw CD perpendicular to the axis, and CN parallel to it, meeting PM in N. Let AD = a, DC = b, CM = n, the parameter of the parabola = p, AP =
x, PM = y; then px = y^2; also, CM^2 = CN^2 + NM^2, or ^2 - 2ax + a^2 + y^2 - 2by + b^2 = n^2; and substituting for x it's value y^2/p, and arranging the terms according to the dimensions of y, we
obtain PM,
Ex. Let the roots of the equatiou y^2 - qy^2 + ry - s = 0 be required.
Assume p = l; then 2a - 1 = q, or ^2 + b^2 - n^2 = - s, or n^2 = a^2 - b^2 + s, and consequently, DC at right angles to it, and = - r^2/2; from the centre C, with the radius
(511.) When DC represents a negative quantity, the ordinates on the same side of the axis with C represent the negative roots of the equation; and the contrary.
(512.) COR. 1. If the circle touch the parabola, two root of the equation are equal; if it cut it only in two points, or touch it in one, two roots are. impossible; and if the circle fell wholly
within or without the parabola, or, if a^2 + b^2 + s be negative, all the roots are impossible.
(513.) COR. 2. If a^2 + b^2 = n^2, or the circle pass, through the point A, the last term of the equation, PM,
(514.) COR. 3. If a^2 + b^2 - n^2 = 0, and also b = 0, the equation becomes
These solutions may be obtained, and nearly in the same manner, by means of any two of the conic sections.
(515.) If the roots of a cubic equation, x^3 - qx + r = 0, be possible, they may be found by means of a table of cosines.
Let DAC be an angle whose cosine, to the radius m, is x; in AD, take AB = m; from B as a centre, with the radius BA, describe a circle cutting AM in C, and
from C, with the same radius, describe a circle cutting AD in D; join BC, CD, and draw BK, DM at right angles to AM, and CL at right angles to AD. Then, the triangles BAC and BCD being isosceles, the
angles BAC and BCD are equal, as also CBD and CDB; and the perpendiculars BK, CL bisect the bases AC, BD. Also, CM, the cosine of the m, be called c; then, from the similar triangles ABK, ACL, AB :
AK :: AC : AL, or m : x :: 2x : 2x^2/m = AL, and AL - AB
= 2x^2 - m = BL; hence, AD, or AL + BL, = 4x^2/m - m; again, AB : AK :: AD : AM, or m : x :: 4x^2/m - m : 4x^3/m^2 - x = AM, and AM - AC = CM = 4x^3/m^2 - x = AM, and AM - AC = CM = 4x^3/m^2 - 3x =
c, therefore 4x^3 - 3m^2x = m^2c, and 4x^3 - 3m^2x - m^2c = 0.
Let the equation 4x^3 - 3m^2x - m^2c = 0, or x^3 - 3m^2/4x - m^2/4 = 0, be made to coincide with the equation x^3 - qx + r = 0; that is, let 3m^2/4 = q, and m^2c/4 = - r; or m = √4q/3, and c = - 3r/
q; then from a table of cosines, find the angle whose cosine is -3r/q, to the radius √4q/3 and the cosine of one third of this angle, to the same radius, is one value of x.
(516.) COR. 1. If A be the arc whose cosine is c, and P the whole circumference, c is also the cosine of A + P, or A + 2P; therefore the cosines of x.
(517.) COR. 2. Since the radius is greater than the cosine, √4q/3 is greater than -3r/q, or 4q/3 is greater than 9r^2/q^2; that is, is q^3/27 greater than r^2/4; therefore this solution can only be
applied when the roots of the cubic equation are possible. (See Art. 331.)
GENERAL PROPERTIES OF CURVE LINES.
(518.) A Curve is said to be of n dimensions, when the equation belonging to it rises to n dimensions.
Let n dimensions, express the relation between the abscissa and ordinate of a curve, then for every different value of x, there are n values of y; therefore the ordinate will cut the curve in n, or
in n - 2, n - 4, &c. points, according as the equation has n, or n - 2, n - 4, &c. possible roots.
(519.) COR. 1. Hence, if the equation be of an odd number of dimensions, the curve will have, at least, one infinite arc on each side of the point from which the absciss± are measured; for, whatever
be the value of x, there is, at least, one possible value of y corresponding to it (Art. 278).
(520.) COR. 2. ax + b is the sum of the ordinates, cx^2 + dx + e the sum of the products of any two, &c. and gx^n + hx^n - 1 + lx^n - 2 + &c. is the product of all the ordinates (Art. 271).
(521.) If two parallel lines, be drawn
in a curve, and be cut by the right line AQ, in such a manner, that in each case, the sum of the ordinates, on one side of AQ is equal to the sum of the ordinates on the other, all lines drawn in the
curve, parallel to these, will be cut by AQ in the same manner.
Let A; also, let AP = q, AQ = r; then the equation, in the two cases, becomes AQ is equal to the sum of the ordinates on the other.
(522.) The line AQ is called a diameter of the curve.
(523.) If the abscissa APE, and ordinate NPQ cut a curve in as many points as it has dimensions,
the rectangle under the segments of the abscissa, PB x PC x PD x PE, will be to the rectangle under the ordinates, PM x PN x PO x PQ, in an invariable ratio.
Let ^n + hx^n - 1 + lx^n - 2 + &c. = PM x PN x PO x PQ (Art. 520); also, the values of x, when y = 0, are AB, AC, AD, AE, that is, the roots of the equation gx^n + hx^n - 1 + lx^n - 2 + &c. = 0, or x
^n + hx^n - 1/g + lx^n - 2/g + &c. = 0, are AB, AC, AD, AE; and consequently the quantity ^n + hx^n - 1 + lx^n - 2 + &c. = g x PB x PC x PD x PE = PM x PN x PO x PQ; therefore PB x PC x PD x PE : PM
x PN x PO x PQ :: 1 : g.
(524.) COR. If n = 2, the curve is a conic section (Art. 501); and if the abscissa be a diameter, or the ordinates on each side of it, PM, PO, equal to each other, the rectangle under the segments of
the abscissa is to the square of the ordinate in an invariable ratio.
(525.) If there be n right lines, be ordinates to the abscissa
AP, the relation between the abscissa and ordinates will be expressed by an equation of the form y^n -
For if AP = x, then PM = ax + b, a, b, c, d, e, f are invariable (Art. 479); that is, the values of y are ax + b, cx + d, ex + f, &c. therefore
(526.) If a curve have as many asymptotes as it has dimensions, and a right line be drawn which cuts them all, the parts of the line measured from the asymptotes to the curve, will, together, be
equal to the parts measured, in the same direction, from the curve to the asymptotes.
x is infinite, the former equation becomes y^n - axy^n - 1 + 1 &c. = 0, and the latter y^n - Axy^n -1 + &c. = 0, and these equations coincide (Art. 506), therefore A = a; also, ax + b is the sum of
the ordinates to the curve, and Ax + B, or ax + B, is the sum of the ordinates to the asymptotes, in all cases; hence, the difference of these, b - B, is an invariable quantity, whatever be the value
of x; and at an infinite distance this difference is nothing (Art 506); therefore it is always nothing, or b = B; consequently ax + b = Ax + B; that is, the sum of the ordinates to the curve is equal
to the sum of the ordinates to the asymptotes.
Let QONM be the curve, AP the abscissa, PQ an ordinate, meeting the curve in the points M, N, O, Q, and the asymptotes in a, b, c, d; then PM + PN + PO + PQ = Pa + Pb + Pc + Pd, and by trans-
position, PM - Pa + PO - Pc = Pb - PN + Pd -
PQ, or aM + cO = Nb + Qd.
(527.) COR. In the common hyperbola MCN, whose centre is O, and asymptotes Oa, Ob, if any line
aMNb be drawn cutting the curve in M, N, and the asymptotes in a, b, then aM is equal to Nb.
(528.) If a straight line be made to revolve about C, and cut the curve in as many points as it has dimensions; and if1/CD be always taken equal
to the locus of the point D will be a straight line.
Let ABP be an abscissa, and from M and D draw
MP and DB at right angles to ABP, and let AP to PM. Also, let AC = z, CM = v, CB = w, BD = u; x and y be substituted in the equation to the curve, the relation of AC to CM will be known; and the
coefficient of the last term but one of the transformed equation, divided by the. last term, will be the sum of the reciprocals of it's roots, or
and substituting these values for x and it's powers, and sv for y, in the terms of the original equation, we have the two last terms of the transformed equation, x cv and Pz^n + Qz^n - 1 divided,
respectively, by s^n, all the other terms involving the square or some higher power of v; hence,
Also, from the similar triangles MCP, BCD,
since the point C is fixed, AC, or z, is invariable; therefore the relation between w and u, or CB and BD, is expressed by a simple equation, that is, the locus of the point D is a straight line
(Art. 480).
(529.) In the general equation x be so assumed that two roots are impossible, two values of the ordinate belonging to this abscissa are impossible, that is, there are no lines which represent them.
Hence it is evident, that in deducing the properties of the ordinates from the equation to the curve, we must suppose all the roots of this equation possible; because, though the sums, powers,
products, &c. of such impossible quantities may become possible, and their relations, discovered by an algebraical process, may be expressed by possible quantities, yet the reasoning does not extend
to curves, in which the original quantities cannot be represented.
On the subject of Algebraical Curves, the Reader may consult Dr. Waring's Proprietates Algebraicarum Curvarum, and Euler's Anal. Infinitorum.
THE END. | {"url":"https://darwin-online.org.uk/content/contentblock?basepage=1&itemID=A896&hitpage=1&viewtype=text","timestamp":"2024-11-09T01:35:06Z","content_type":"application/xhtml+xml","content_length":"513789","record_id":"<urn:uuid:198de9f0-93c8-40bb-b731-e48bf841d83d>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00081.warc.gz"} |
Considerations of the Determination of the Position of a Point in Space. The Methods of Projections
2. The surfaces of all natural bodies can be considered as made up of points, and the first step we are going to make in this treatise must be to indicate how one can express the position of a point
in space.
Space is without limit: all parts of space are alike: there is nothing characteristic about any particular part so that it can serve as a reference for indicating the position of a particular point.
Thus, to define the position of a point in space it is necessary to refer this position to those of other objects which are of known position in some distinctive part of space, the number of objects
being as many as are required to define the point; and for the process to be amenable for easy and everyday use, it is necessary that these objects should be the simplest possible so that their
positions can be most easily imagined.
3. Amongst all the simple objects, we will investigate which present the most facility in determining the position of a point; and firstly, because geometry offers nothing more simple than a point,
we will examine what kind of considerations are involved if, to define the position of a point, one refers it to a certain number of other points whose positions are known; for the sake of clarity in
this exposition, we will designate these known points by the letters A, B, C, etc.
Suppose that in defining the position of a point we say we begin with that it is one metre from the point A.
Everyone knows that it is a property of the sphere that every point on its surface is of equal distance from its centre. Thus the definition given above satisfies this property; that is, the point to
be found could be any of those lying in the surface of a sphere with centre at A and of radius one metre. The points on the surface of this sphere are the only ones in all space which have this
required property, for all the points in space which are outside this sphere are further than one methre from A and all those which are between the surface and the centre are, contrariwise, nearer
than one metre. Therefore, the points on the surface of the sphere are the only ones which possess the property stated in the proposition. Finally, therefore, this proposition expresses that the
point required is one of those on the surface of a sphere with centre A and radius one metre. This makes the point distincitive from those in an infinity of other places in space, surface, and other
conditions are necessary if the required point is to be recognised amongst them.
Suppose that, in defining the position of this point, we say that it must also be two metres from a second known point, B: it is clear that the reasoning for this second condition is as for the
first. The point must be one of those on the surface of a sphere with centre at B and of radius two metres. This point, finding itself simultaneously on the surfaces of two spheres, can only be
confused with those others which are common to the two spheres’ surfaces and which lie in the spheres’ common intersection. Those who are familiar with geometrical concepts will know that the
intersection of two spherical surfaces is the circumference of a ircle, whose centre lies on the straight line joining the centres of the two spheres and whose plane is perpendicular to this line. So
by virtue of the two conditions stated together, the point searched for is distinguished from those generally on the surfaces fo the two spheres and is one lying on the circumference of the circle
which only satisfies both conditions. It is necessary therefore, to stipulate a third condition to absolutely determine the required point.
Suppose, finally, that this point must also be three metres from a third point C. This third condition places the point amongst all those on the surface of a third sphere with centre at C and of
three metres radius: and because we have seen that it must lie on the circumference of a circle of known position, to satisfy also the third condition it must be one of the points common to the
surface of the third sphere and the circumference of the circle. But it is known that the circumference of a circle and the surface of a sphere can only meet in two points: therefore, by virtue of
the three conditions, the point is distinguished from all those in space and can only be one of the two points found. If one further indicates on which side it lies of the plane passing through the
three centres of the spheres, i.e. points, A, B, and C, the point is absolutely determined and cannot be confused with any other.
One sees that determining the position of a point in space by referring it to known points, of which the number is necessarily three, involves one in considerations not simple enough for everyday
4. Let us see what will actually be the result if, instead of referring the position of a point to three other known points, it is referred to three lines of given position.
A line need not be considered to be of finite length but can always be indefinitely produced in one direction or the other. To simplify, we will label the lines we will be obliged to use
successively, A, B, C, etc.
If, in defining the position of a point, we say that it must be found, for example, at a distance of one metre from the first known line, A, we are saying that this point is one of those in the
surface of a cylinder of circular base with the line A as axis and of radius one metre, and which is indefinitely produced in both directions: for all the points on this surface possess the property
stated in the definition and are the only ones which possess it. In this way, the point is distinguished from others in space which are outside or inside of the cylinder, and it can only be confused
with those in the surface of the cylinder, amongst which one cannot distinguish it by means of the new condition.
Suppose, therefore, that the point sought is also to be placed at two metres from a second line B, one sees likewise that one places this point on the surface of a second cylinder, whose axis is in
the line B and shoe radius is two metres. But it is confused with all the other points on this cylinder surface if only this second condition is considered. Through uniting these two conditions the
point must be simultaneously on the first cylindrical surface and on the second: therefore, it can only be one of the common points of these two surfaces, i.e. one at their common intersection. This
line, on which the point must lie, has the curvature of both the surfaces of the first and second cylinders and is, in general, known as a curve of double curvature.
To distinguish the point from all those on this line it is necessary to resort to a third condition.
Suppose, finally, that the definition states that the point must also be at three metres from a third line C.
This new condition states that it is one of those points on a third cylinder of which the third line will be the axis and which will have a radius of three metres. Therefore, in taking the three
conditions together, the sought point can only be one of those which are common to the third cylinder’s surface and to the curve of double curvature – the intersection of the first two cylinders. But
this curve can be cut, in general, by the third cylindrical surface in eight points, and amongst these the point can be distinguished by circumstances, similar to those detailed in the previous case.
One sees that the considerations for determining the position of a point in space by recognition of its distances from three known straight lines are less simple than those in which the distances are
given from three points, and they are thus less able to serve as a basic method for everyday use.
5. Among the simple objects which geometry considers, it is necessary to notice principally, first, the point which has no dimensions, secondly the line which has one, and thirdly the plane which has
two. Let us investigate whether it is not more simple to determine the position of a point by recognising its distances from known planes, instead of using its distances from points or straight
Suppose we have non-parallel planes of known position in space, which we will designate successively, A, B, C, D, etc.
If, in defining the position of a point, we say it must be, for example, one metre from the first plane A, without stating on which side of the plane, we are saying that it must be one of those
points on two planes parallel to A, placed one either side of plane A, and both one metre from it, for all the points on both these parallel planes satisfy the expressed condition and are, in all
space, the only ones which satisfy it.
To distinguish amongst all the points of these two planes that which is in the required position, it is necessary again to have recourse to other conditions.
Suppose, secondly, that the point sought must be two metres from a second plane, B, then one places it on two planes parallel to plane B, both at two metres distance, and one on either side. To
satisfy at the same time the two conditions it is necessary that the point should be on one of the two planes parallel to plane A and on one of the two planes parallel to plane B; consequently it is
one of the points in the common intersection of these four planes. But the intersection of four planes of known position is a group of four straight lines equally of known position. Therefore, in
considering simultaneously both conditions, the point is no longer confused with all those in space, neither likewise with all those in the four planes, but only with those on four straight lines.
Finally, if the point must also be three metres from a third plane C, one expresses that it must be on one of the two other planes parallel to C, placed one on either side at three metres distance.
So, by virtue of three conditions, it must be simultaneously on one of the two last planes and on one of the four straight lines. But as each of the two planes has a common point with each of the
four straight lines, there are eight points in space which satisfy the three conditions; therefore, by these three conditions jointly the point required can only be one of the eight determined
points, and amongst these one can distinguish which by means of particular circumstances.
For example, if one indicates the distance of the point from the first plane A, one expresses also in what sense, with respect to this plane, the distance is to be taken; instead of two planes
parallel to plane A, there is only one which needs to be considered; it is that one which is situated on the side towards which the distance is normally measured. Likewise, if one indicates the
general sense in which distances from the second plane are to be measured, the point is no longer on the four lines of intersection of four parallel planes, but only on the intersection of two
planes, that is to say, on a straight line of known position. Finally, if one indicates also the sense in which the point is placed in relation to the third plane its position will in consequence be
entirely determined.
One can see, therefore, that although the plane is an object less simple than the line which has only one dimension and the point which has none, referring to planes provides an easier system for the
determination of points in space than to points or lines. It is this procedure which we will ordinarily employ in the application of algebra to geometry, or for finding the position of a point – the
principle of relating its distances to three planes of known position.
However, in descriptive geometry, which has been pratised for a long time by a large number of people and by many to whom time was precious, the process can again be simplified and, instead of
considering three planes, we find that, by means of projections, we only have need for two of these.
6. The projection of a point on a plane may be defined as the foot of the perpendicular lowered form the point to the plane.
It follows that if on two planes of known position in space one is given on each of these planes the projection of the point whose position one wishes to define, this point will be perfectly
In effect, if from the projection on the first plane one constructs a perpendicular to the plane, it is evident that it will pass through the point defined. Likewise, if from its projection on the
second plane one constructs a perpendicular to the plane, it also passes through the point defined. Therefore the point will be simultaneously on two lines of known position in space; therefore it
will be uniquely at their intersection and is, accordingly, perfectly determined.
Figure 1
If, from all the points on a straight line of indefinite length, AB, oriented in any direction in space, one can imagine perpendiculars dropped to a plane, LMNO, in some given position, all the
points at the meeting of these perpendicular with the plane will lie on another straight line of indefinite length, ab; for they will all lie in the plane passing through AB lying perpendicular to
the plane LMNO, and they will only be able to meet the latter at the common intersection of two planes, which, as one knows, is a straight line.
The line ab on the plan LMNO, which is formed by the projection of all the points from another line AB, is called the projection of the line AB onto the plane.
Since two points are sufficient to fix the position of a straight line, to construct the projection of a straight line it is only necessary to project these two points, the projection of the line
passing through the two points where the projectors meet the plane.
Figure 2
Being given on two non-parallel planes LMNO and LMPQ, the projections ab and a’b’ of the line AB, the projection of the line AB is fully determined; for if through one of the projections ab one
imagines a plane perpendicular to LMNO, this plane of known position must necessarily pass through line AB; likewise, if through the other projection a’b’ one imagines a plane perpendicular to LMPQ,
this plane of known position also passes through the line AB. The position of this line, which is simultaneously on two known planes, is consequently at their common intersection and its position is,
therefore, absolutely determined.
8. What has been said above is independent of the position of the planes of projection and equally of the angle between the planes; but if the angle formed by the two planes of projection is very
obtuse, the angle formed between the perpendiculars to these planes will be very acute, and any small drawing errors will cause considerable error in determining the position of the line AB. In order
to avoid this cause of inaccuracy, unless it is otherwise necessary for ease of presentment, the planes of projection are always made to be perpendicular to one another. As the majority of
draughtsmen who will practise this method are already familiar with the position of a horizontal plane and the direction of a plumbline, they will be quite used to supposing that of the two planes of
projection, one is horizontal and the other vertical.
The need for making the drawings of the two projections on a single sheet and for carrying out the operation in the same area, again calls for the draughtsmen to imagine that the vertical plane is
turned about its intersection with the horizontal plane, like a hinge, to lie flat in the horizontal plane and form with it one continuous plane; and it is in this state that he will construct his
Thus the vertical projection is always drawn in the horizontal plane and it is necessary to imagine that it is raised up and put back into place by means of a quarter revolution about the
intersection of the horizontal and vertical planes. It is necessary, accordingly, that this intersection line is made so that it can be clearly seen on the drawing.
Thus, in Fig. 2, the projection a’b’ of the line AB is not executed on a plane which is really vertical; one imagines that the plane is turned about the axis LM to the position LMP’Q’, and it is in
this position of the plane that one carries out the vertical projection a’’b’’.
Apart from the ease of execution which this arrangement allows, it has also the advantage of minimising the work of making projections. For instance, let us suppose that the points a, a’ are the
horizontal and vertical projections of point A; the plane carried through the lines Aa, Aa’ will be at the same time perpendicular to the two planes of projection, since it passes through lines which
are perpendicular to them; it will be then, also perpendicular to their common intersection LM, and the lines aC, a’C, at which it cuts the two planes, will be themselves perpendicular to LM.
But, when the vertical plane is turned about LM as a hinge, the line a’C does not cease, through this movement, to be perpendicular to LM, and it is still perpendicular to it when the vertical plane
is laid down to give the position Ca’’. Therefore, the two lines aC, Ca’’, both passing through the point C and both being perpendicular to LM, are in one straight line; it is the same with the lines
bD, Db’’ by resemblance to any other point such as B. From which it follows that, if one has the horizontal projection of a point, the projection of the same point on the vertical plane supposed laid
down, will be in the line taken through the horizontal projection perpendicular to the intersection, LM, of the two planes of projection, and vice versa.
This result is of very great use in practice.
9. Up to now we have considered the line AB (Fig. 1) to be of indefinite length, and we have occupied ourselves only with its direction; but it is possible for this line to be considered terminated
by the two points, A and B, and one may need to know its length. We are going to see how one can deduce this from its two projections.
When a straight line is parallel to one of the two planes upon which it is projected, its length is equal to that of its projection on this plane; for the line and its projection, being both
terminated by two perpendiculars to the plane of projection are parallel to each other and fall between parallel lines. Thus, in this case the projection being given, the length of the line which is
equal to it is also given.
One knows that a line is parallel to one of the two planes of projection when its projection onto the other plane is parallel to the intersection of the two planes.
If the line is oblique to both of the two planes, its length is greater than that of either of its projections, but may be deduced through a very simple construction.
Fig. 2. Let AB be the straight line, whose two projections ab and a’b’ are given, and whose length is to be found. If through one of its extremities A, and in the vertical plane which passes through
the line, one constructs a horizontal AE, produced as far as to meet at E the vertical dropped from the other extremity, one will form a right-angled triangle AEB, which is to be constructed to find
the length of AB, the hypotenuse. But, in this triangle, as well as the right angle one knows the side AE, which is equal to the projection ab. Furthermore, if in the vertical plane one takes through
the point a’ a horizontal a’e, which will be the projection of AE, it will cut the b’D in a point e, which will be the projection of point E. thus b’e will be the vertical projection of BE and will
be, in consequence, of the same length. Therefore, knowing the two sides of the right-angled triangle, it may easily be constructed, and its hypotenuse will give the length of AB.
Fig. 2, being in perspective, has no resemblance to the construction used in the method of projections; we are here going to give the construction of this first question in all its simplicity.
Figure 3
Fig. 3. The line LM, being supposed to be the intersection of the two planes of projection, and the lines ab and a’’b’’ being the given projections of a straight line, to find the length of this line
one takes through the point a’’ the horizontal He, which will cut the line bb’’ in a point e, and upon this horizontal one will transfer ab from e to H. One will then take the hypotenuse Hb’’ and the
length of this hypotenuse will be that of the line required.
As the two planes are at right angles, the operation which has been made on one of the planes could just as well be made on the other and would give the same result.
After the above, one sees that if one has the two projections of a body terminated by plane faces, by rectilineal edges and by solid angles, the projections of which become a system of lines, it will
be easy to find the length of any dimension one may wish; for such a dimension will be parallel to one of the two planes of projections or it will be oblique to both. In the first case the length
required will be equal to its projection; in the second, one will deduce it from these two projections through the procedure described above. | {"url":"https://themongeproject.online/considerations-of-the-determination-of-the-position-of-a-point-in-space-the-methods-of-projections","timestamp":"2024-11-09T16:00:58Z","content_type":"text/html","content_length":"62922","record_id":"<urn:uuid:40dd6feb-8254-4e4d-b574-59c9e7476a23>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00887.warc.gz"} |
The poor man's math blog
12 Jan
Some photos and statistics
11 May
Filling a void
17 Feb
The time it takes to change the time
26 Jan
A new year
17 Oct
Le parole che caratterizzano un’epoca
25 Aug
If it looks like a word, it is a word one third of the time
07 Jul
If it looks like a words, then it is a word (?)
02 Jun
Global time, emotional time, and the annoyance of DST
25 May
Suboptimal LaTeX #5: miscellanea
10 Mar
Suboptimal LaTeX #4: mathematics
03 Mar
Suboptimal LaTeX #3: mathematical environments
24 Feb
Suboptimal LaTeX #2: spacing
17 Feb
Suboptimal LaTeX #1: intro
06 Jan
How much is time wrong around the world?
05 Dec
Cycling in the rain
08 Nov
Facebook, Gramellini predice quando morirai
25 Oct
Give Python a bit of type safety with Pydoc Checker
24 Apr
Cosa vuol dire “rappresentativo”?
28 Feb
Quanti sono i laureati, tra eletti ed elettori
24 Feb
Is it worth to try to change a bad habit?
24 Oct
IOI translation systems: an actual solution
15 Oct
IOI translation systems: an ideal workflow
09 Oct
IOI translation systems: a survey of possible approaches
01 Oct
IOI 2012 — my experience
24 May
All possible actions of a cyclic group on an algebraic curve
14 May
On the distribution of Italian towns by their endings
29 Mar
Why a mathematician can be an excellent software engineer
29 Feb
pydepgraph – A dependencies analyzer for Python
28 Nov
La solidarietà cristiana in politica
17 Nov
Gödel, Escher, Bach: an Eternal Golden Braid
24 Jul
On structured theorems
12 Apr
Social networks local visualization
13 Jan
WordPress upgrade
29 Dec
Wikipedia people per born year with occupation
05 Dec
Using eso-pic to draw SISSA's letterhead
24 Nov
Threefolds and deformations of surface singularities
30 Sep
18 Aug
Today's top five #1
17 Aug
New tim.it captchas...
10 Jul
Commu 2009.07.10
02 Jul
Suggestion: Inside the living body
11 Apr
Easily recover your trigonometric identities
08 Apr
Time travel's technologies.
05 Apr
The will to leave
22 Mar
My two cents on the daylight saving time
21 Mar
How to import your mail to GMail
21 Feb
To you, my reader…
20 Feb
30 Jan
Pets tortured on the web for fun, again.
25 Jan
Two more attempts
07 Jan
Typography holidays
29 Dec
LaTeX class for lecture notes
27 Dec
How to take lecture notes with LaTeX
20 Dec
Commu's new release
19 Dec
Wet folding, first attempt
17 Dec
Le pene degli Abruzzi
16 Dec
Why I joined Facebook?
15 Dec
14 Dec
13 Dec
Gameknot applet
12 Dec | {"url":"http://blog.poormansmath.net/","timestamp":"2024-11-12T08:47:41Z","content_type":"text/html","content_length":"16117","record_id":"<urn:uuid:3d1365b2-39a1-4c35-90e4-4ed397385814>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00162.warc.gz"} |
Foundations of Physics
Strings Are Dead
Posted in anti-gravity, cosmology, defense, general relativity, gravity, innovation, particle physics, philosophy, physics, policy, quantum physics, science, space travel | 2 Comments on Strings Are
In 2014, I submitted my paper “A Universal Approach to Forces” to the journal Foundations of Physics. The 1999 Noble Laureate, Prof. Gerardus ‘t Hooft, editor of this journal, had suggested that I
submit this paper to the journal Physics Essays.
My previous 2009 submission “Gravitational acceleration without mass and noninertia fields” to Physics Essays, had taken 1.5 years to review and be accepted. Therefore, I decided against Prof.
Gerardus ‘t Hooft’s recommendation as I estimated that the entire 6 papers (now published as Super Physics for Super Technologies) would take up to 10 years and/or $20,000 to publish in peer reviewed
Prof. Gerardus ‘t Hooft had brought up something interesting in his 2008 paper “A locally finite model for gravity” that “… absence of matter now no longer guarantees local flatness…” meaning that
accelerations can be present in spacetime without the presence of mass. Wow! Isn’t this a precursor to propulsion physics, or the ability to modify spacetime without the use of mass?
As far as I could determine, he didn’t pursue this from the perspective of propulsion physics. A year earlier in 2007, I had just discovered the massless formula for gravitational acceleration g=τc^
2, published in the Physics Essays paper referred above. In effect, g=τc^2 was the mathematical solution to Prof. Gerardus ‘t Hooft’s “… absence of matter now no longer guarantees local flatness…”
Prof. Gerardus ‘t Hooft used string theory to arrive at his inference. Could he empirically prove it? No, not with strings. It took a different approach, numerical modeling within the context of
Einstein’s Special Theory of Relativity (STR) to derive a mathematic solution to Prof. Gerardus ‘t Hooft’s inference.
In 2013, I attended Dr. Brian Greens’s Gamow Memorial Lecture, held at the University of Colorado Boulder. If I had heard him correctly, the number of strings or string states being discovered has
been increasing, and were now in the 10^500 range.
I find these two encounters telling. While not rigorously proved, I infer that (i) string theories are unable to take us down a path the can be empirically proven, and (ii) they are opened ended i.e.
they can be used to propose any specific set of outcomes based on any specific set of inputs. The problem with this is that you now have to find a theory for why a specific set of inputs. I would
have thought that this would be heartbreaking for theoretical physicists.
In 2013, I presented the paper “Empirical Evidence Suggest A Need For A Different Gravitational Theory,” at the American Physical Society’s April conference held in Denver, CO. There I met some young
physicists and asked them about working on gravity modification. One of them summarized it very well, “Do you want me to commit career suicide?” This explains why many of our young physicists
continue to seek employment in the field of string theories where unfortunately, the hope of empirically testable findings, i.e. winning the Noble Prize, are next to nothing.
I think string theories are wrong.
Two transformations or contractions are present with motion, Lorentz-FitzGerald Transformation (LFT) in linear motion and Newtonian Gravitational Transformations (NGT) in gravitational fields.
The fundamental assumption or axiom of strings is that they expand when their energy (velocity) increases. This axiom (let’s name it the Tidal Axiom) appears to have its origins in tidal gravity
attributed to Prof. Roger Penrose. That is, macro bodies elongate as the body falls into a gravitational field. To be consistent with NGT the atoms and elementary particles would contract in the
direction of this fall. However, to be consistent with tidal gravity’s elongation, the distances between atoms in this macro body would increase at a rate consistent with the acceleration and
velocities experienced by the various parts of this macro body. That is, as the atoms get flatter, the distances apart get longer. Therefore, for a string to be consistent with LFT and NGT it would
have to contract, not expand. One suspects that this Tidal Axiom’s inconsistency with LFT and NGT has led to an explosion of string theories, each trying to explain Nature with no joy. See my
peer-reviewed 2013 paper New Evidence, Conditions, Instruments & Experiments for Gravitational Theories published in the Journal of Modern Physics, for more.
The vindication of this contraction is the discovery of the massless formula for gravitational acceleration g=τc^2 using Newtonian Gravitational Transformations (NGT) to contract an elementary
particle in a gravitational field. Neither quantum nor string theories have been able to achieve this, as quantum theories require point-like inelastic particles, while strings expand.
What worries me is that it takes about 70 to 100 years for a theory to evolve into commercially viable consumer products. Laser are good examples. So, if we are tying up our brightest scientific
minds with theories that cannot lead to empirical validations, can we be the primary technological superpower a 100 years from now?
The massless formula for gravitational acceleration g=τc^2, shows us that new theories on gravity and force fields will be similar to General Relativity, which is only a gravity theory. The mass
source in these new theories will be replaced by field and particle motions, not mass or momentum exchange. See my Journal of Modern Physics paper referred above on how to approach this and Super
Physics for Super Technologies on how to accomplish this.
Therefore, given that the primary axiom, the Tidal Axiom, of string theories is incorrect it is vital that we recognize that any mathematical work derived from string theories is invalidated. And
given that string theories are particle based theories, this mathematical work is not transferable to the new relativity type force field theories.
I forecast that both string and quantum gravity theories will be dead by 2017.
When I was seeking funding for my work, I looked at the Broad Agency Announcements (BAAs) for a category that includes gravity modification or interstellar propulsion. To my surprise, I could not
find this category in any of our research organizations, including DARPA, NASA, National Science Foundation (NSF), Air Force Research Lab, Naval Research Lab, Sandia National Lab or the Missile
Defense Agency.
So what are we going to do when our young graduates do not want to or cannot be employed in string theory disciplines?
(Originally published in the Huffington Post)
Need For New Experiments To Test Quantum Mechanics & Relativity
We now have a new physics, without adding additional dimensions, that challenge the foundations of contemporary theories. Note very carefully, this is not about the ability of quantum mechanics or
relativity to provide exact answers. That they do extremely well. With Ni fields, can we test for which is better or best?
A better nomenclature is a ‘single-structure test’, a test to validate the structure proposed by a hypothesis or theory. For example, Mercury’s precession is an excellent single-structure test for
relativity, but it does not say how this compares to say, quantum gravity. On the other hand, a ‘dual-structure’ test would compare any two different competing theories. The recent three photon
observation would be an example of a dual-structure test. Relativity requires that spacetime is smooth and continuous but quantum gravity requires spacetime to be “comprised of discrete, invisibly
small building blocks”. This three photon observation showed that spacetime was smooth and continuous down to distances smaller than predicted by quantum gravity. Therefore, suggesting that both
quantum foam and quantum gravity maybe in part or whole invalidated, while upholding relativity.
Therefore, the new tests would authenticate or invalidate Ni fields as opposed to quantum mechanics or relativity. That is, it is about testing for structure or principles not for exactness. Of
course both competing theories must first pass the single-structure test for exactness, before they can be considered for a dual-structure test.
Is it possible to design a single-structure test that will either prove or disprove that virtual particles are the carrier of force? Up to today that I know of, this test has not been done. Maybe
this is not possible. Things are different now. We have an alternate hypothesis, Ni fields, that force is expressed by the spatial gradient of time dilation. These are two very different principles.
A dual-structure test could be developed that considers these differences.
Except for the three photon observation, it does not make sense to conduct a dual-structure test on relativity versus quantum mechanics as alternate hypotheses, because they operate in different
domains, galactic versus Planck distances. Inserting a third alternative, Ni fields, could provide a means of developing more dual-structure tests for relativity and quantum mechanics with the Ni
field as an alternate hypothesis.
Could we conduct a single-structure test on Ni fields? On a problem where all other physicist-engineers (i.e. quantum mechanics, relativity or classical) have failed to solve? Prof. Eric Laithwaite’s
Big Wheel experiment would be such a problem. Until now no one has solved it. Not with classical mechanics, quantum mechanics, relativity or string theories. The Big Wheel experiment is basically
this. Pivot a wheel to the end of a 3-ft (1 m) rod. Spin this wheel to 3,000 rpm or more. Then rotate this rod with the spinning wheel at the other end. The technical description is, rotate the spin
It turns out that the solution to the Big Wheel experiment is that acceleration a=ωrωs√h is governed by the rotation ωr, spin ωs, and the physical structure √h, and produces weight loss and gain.
This is the second big win for Ni fields. The first is the unification of gravitational, electromagnetic and mechanical forces.
How interesting. We have a mechanical construction that does not change its mass, but is able to produce force. If the spin and rotation are of like sense to the observer, the force is toward the
observer. If unlike then the force is away from the observer. Going back to the Ω function, we note that in the Ω function, mass has been replaced by spin and rotation, and more importantly the
change in the rotation and spin appears to be equivalent to a change in mass. Further work is required to develop an Ω function into a theoretical model.
The next step in challenging the foundations of physics is to replace the mass based Ω function with an electromagnetic function. The contemporary work to unify electromagnetism with gravity is
focused on the tensor side. This essay, however, suggests that this may not be the case. If we can do this – which we should be able to do, as Ni fields explain electron motion in a magnetic field —
the new physics will enable us to use electrical circuits to create force, and will one day replace all combustion engines.
Imagine getting to Mars in 2 hours.
The How Of Interstellar Travel
But gravity modification is not the means for interstellar travel because mass cannot be accelerated past the velocity of light. To develop interstellar propulsion technology requires thinking
outside the box. One possibility is, how do we ‘arrive’ without ‘travelling’. Surprisingly, Nature shows us that this is possible. Both photons and particles with mass (electrons, protons & neutrons)
have probabilistic natures. If these particles pass through a slit they ‘arrive’ at either sides of the slit, not just straight ahead! This ‘arrival’ is governed by probabilities. Therefore,
interstellar travel technology requires an understanding of how probability is implemented in Nature, and we need to figure out how to control the ‘arrival’ event, somewhat like the Hitch Hiker’s
Guide to the Galaxy’s ‘infinite improbability drive’.
Neither relativity nor quantum mechanics can or has attempted to explain probabilities. So what is probability? And, in the single slit experiment why does it decrease as one moves orthogonally away
from the slit? I proposed that probabilities are a property of subspace and the way to interstellar travel. Subspace co-exists with spacetime but does not have the time dimension. So how do we test
for subspace? If it is associated with probability, then can we determine tests that can confirm subspace? I have suggested one in my book. More interestingly, for starters, can we alter the
probability of arrivals in the single slit experiments?
To challenge the foundations of pshyics, there are other questions we can ask. Why is the Doppler Effect not a special case of Gravitational Red/Blue shift? Why is the Hubble parameter not a
constant? Can we find the answers? Will seeking these answers keep us awake at night at the possibility of new unthinkable inventions that will take man where no man has gone before?
R.L. Amoroso, G. Hunter, M. Kafatos, and Vigier, Gravitation and Cosmology: From the Hubble Radius to the Plank Scale, Proceedings of a Symposium in Honour of the 80th Birthday of Jean-Pierre Vigier,
Edited by Amoroso, R.L., Hunter, G., Kafatos, M., and Vigier, J-P., (Kluwer Academic Publishers, Boston, USA, 2002).
H. Bondi, Reviews of Modern Physics, 29–3, 423 (1957). G. Hooft, Found Phys 38, 733 (2008).
B.T. Solomon, “An Approach to Gravity Modification as a Propulsion Technology”, Space, Propulsion and Energy Sciences International Forum (SPESIF 2009), edited by Glen Robertson, AIP Conference
Proceedings, 1103, 317 (2009).
B.T. Solomon, Phys. Essays 24, 327 (2011)
R. V. Wagoner, 26th SLAC Summer Institute on Particle Physics, SSI 98, 1 (1998).
Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity
Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.
Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification. | {"url":"https://demo.lifeboat.com/blog/tag/foundations-of-physics","timestamp":"2024-11-05T03:52:41Z","content_type":"text/html","content_length":"141482","record_id":"<urn:uuid:855643ed-3db2-40da-93af-556800c6d3fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00819.warc.gz"} |
Hexagon Quilt Calculator - Sum SQ
Understanding the Hexagon Quilt Calculator
The Hexagon Quilt Calculator is a user-friendly tool designed to assist quilters in determining the number of hexagons required for their project and the actual dimensions of the finished quilt top.
By inputting a few key measurements, quilters can obtain valuable information to guide their quilting process.
Key Features of the Calculator
1. Input fields for quilt width, length, and hexagon size
2. Calculation of hexagon columns and rows
3. Determination of the total number of full and half hexagons needed
4. Computation of the actual quilt top dimensions
How to Use the Hexagon Quilt Calculator
Using the Hexagon Quilt Calculator is straightforward. Follow these steps to get accurate results for your quilting project:
1. Enter the desired width of your quilt in inches in the “Width of quilt (in)” field.
2. Input the desired length of your quilt in inches in the “Length of quilt (in)” field.
3. Specify the size of each hexagon in inches in the “Size of hexagon (in)” field.
4. Click the “Calculate” button to generate the results.
The calculator will then display the following information:
• Number of hexagonal columns
• Number of hexagonal rows
• Total number of hexagons required
• Number of half hexagons needed
• Actual size of the quilt top (width and length)
Understanding the Calculations
To fully appreciate the value of the Hexagon Quilt Calculator, it’s important to understand the mathematics behind the calculations. Let’s break down the process:
Calculating Columns and Rows
The number of columns is determined by dividing the desired quilt width by the width of a hexagon. The width of a hexagon is calculated as the hexagon size multiplied by √3 (approximately 1.732).
columns = ceil(width / (hexagonSize * 1.732))
The number of rows is calculated by dividing the desired quilt length by the vertical spacing between hexagon centers. This spacing is 3/4 of the hexagon size.
rows = ceil((2 * length - hexagonSize) / (3 * hexagonSize))
The calculator ensures an odd number of rows for a balanced design by subtracting 1 if the result is even.
Determining the Number of Hexagons
The total number of hexagons is calculated based on the number of columns and rows. The formula accounts for the alternating pattern of hexagons in each row.
hexagons = (((rows / 2) + 0.5) * columns) + (((rows / 2) - 0.5) * (columns - 1))
Calculating Half Hexagons
Half hexagons are used to create straight edges on the sides of the quilt. The number of half hexagons is determined by the number of rows.
halfHexagons = ((rows / 2) - 0.5) * 2
Computing Actual Quilt Dimensions
The actual width and length of the quilt top may differ slightly from the initial input due to the hexagonal shape. The calculator provides these precise measurements:
actualWidth = columns * hexagonSize * 1.732
actualLength = (((rows - 1) / 2) * 3 * hexagonSize) + 2 * hexagonSize
Practical Examples
Let’s explore two examples to illustrate how the Hexagon Quilt Calculator can be used in real-world quilting projects.
Example 1: Baby Quilt
Suppose you want to create a baby quilt with the following specifications:
• Desired width: 36 inches
• Desired length: 48 inches
• Hexagon size: 2 inches
Using the calculator, you would get the following results:
• Number of hexagonal columns: 11
• Number of hexagonal rows: 15
• Number of hexagons: 150
• Number of half hexagons: 14
• Actual size of quilt top: 38.10 inches wide by 48.00 inches long
This information allows you to prepare the correct number of hexagons and half hexagons, and adjust your expectations for the final quilt size.
Example 2: Queen-Size Bed Quilt
For a larger project, let’s consider a queen-size bed quilt:
• Desired width: 90 inches
• Desired length: 108 inches
• Hexagon size: 3 inches
The calculator would provide these results:
• Number of hexagonal columns: 18
• Number of hexagonal rows: 23
• Number of hexagons: 378
• Number of half hexagons: 22
• Actual size of quilt top: 93.53 inches wide by 108.00 inches long
This example demonstrates how the calculator can handle larger projects, helping quilters plan for substantial quilts with precision.
Benefits of Using the Hexagon Quilt Calculator
1. Time-Saving: The calculator eliminates the need for manual calculations, reducing the time spent on project planning.
2. Accuracy: By providing precise measurements and hexagon counts, the calculator helps minimize errors in material preparation.
3. Material Efficiency: Knowing the exact number of hexagons needed helps prevent waste of fabric and other quilting materials.
4. Design Flexibility: Quilters can easily experiment with different hexagon sizes and quilt dimensions to achieve their desired design.
5. Confidence in Planning: With accurate calculations, quilters can approach their projects with greater confidence and clarity.
Tips for Successful Hexagon Quilting
While the Hexagon Quilt Calculator provides valuable information, consider these additional tips for successful hexagon quilting:
1. Fabric Selection: Choose fabrics that complement each other and enhance the geometric pattern of hexagons.
2. Cutting Precision: Use accurate cutting tools and techniques to ensure uniformity in your hexagon pieces.
3. Organizing Pieces: Sort and label your cut hexagons to maintain order during the piecing process.
4. Joining Technique: Practice joining hexagons neatly to create smooth seams and crisp points.
5. Pressing: Press seams consistently to achieve a flat and professional-looking quilt top.
6. Edge Finishing: Decide whether to use half hexagons or binding to finish the quilt edges. | {"url":"https://sumsq.com/hexagon-quilt-calculator/","timestamp":"2024-11-11T07:45:56Z","content_type":"text/html","content_length":"101965","record_id":"<urn:uuid:48d27e0f-b9c7-4248-9984-d418287065d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00833.warc.gz"} |
The Degree Measure as Utility Function over Positions in Networks
Rene (J.R.) van den Brink () and Agnieszka Rusinowska
Additional contact information
Rene (J.R.) van den Brink: Vrije Universiteit Amsterdam; Tinbergen Institute, The Netherlands
No 17-065/II, Tinbergen Institute Discussion Papers from Tinbergen Institute
Abstract: In this paper, we connect the social network theory on centrality measures to the economic theory of preferences and utility. Using the fact that networks form a special class of
cooperative TU-games, we provide a foundation for the degree measure as a von Neumann-Morgenstern expected utility function reflecting preferences over being in different positions in different
networks. The famous degree measure assigns to every position in a weighted network the sum of the weights of all links with its neighbours. A crucial property of a preference relation over network
positions is neutrality to ordinary risk. If a preference relation over network positions satisfies this property and some regularity properties, then it must be represented by a utility function
that is a multiple of the degree centrality measure. We show this in three steps. First, we characterize the degree measure as a centrality measure for weighted networks using four natural axioms.
Second, we relate these network centrality axioms to properties of preference relations over positions in networks. Third, we show that the expected utility function is equal to a multiple of the
degree measure if and only if it represents a regular preference relation that is neutral to ordinary risk. Similarly, we characterize a class of affine combinations of the outdegree and indegree
measure in weighted directed networks and deliver its interpretation as a von Neumann-Morgenstern expected utility function.
Keywords: Weighted network; network centrality; utility function; degree centrality; von Neumann-Morgenstern expected utility function; cooperative TU-game; weighted directed network. (search for
similar items in EconPapers)
JEL-codes: C02 D81 D85 (search for similar items in EconPapers)
Date: 2017-07-24
New Economics Papers: this item is included in nep-gth, nep-mic, nep-soc and nep-upt
References: View references in EconPapers View complete reference list from CitEc
Downloads: (external link)
https://papers.tinbergen.nl/17065.pdf (application/pdf)
Related works:
Working Paper: The degree measure as utility function over positions in networks (2017)
Working Paper: The degree measure as utility function over positions in networks (2017)
Working Paper: The degree measure as utility function over positions in networks (2017)
This item may be available elsewhere in EconPapers: Search for items with the same title.
Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text
Persistent link: https://EconPapers.repec.org/RePEc:tin:wpaper:20170065
Access Statistics for this paper
More papers in Tinbergen Institute Discussion Papers from Tinbergen Institute Contact information at EDIRC.
Bibliographic data for series maintained by Tinbergen Office +31 (0)10-4088900 (). | {"url":"https://econpapers.repec.org/paper/tinwpaper/20170065.htm","timestamp":"2024-11-05T00:55:12Z","content_type":"text/html","content_length":"16804","record_id":"<urn:uuid:c86a2f1f-3a26-474f-92b9-149ef5dc0949>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00706.warc.gz"} |
Histogram charts
A histogram is similar in appearance to a bar chart, but instead of comparing categories or looking for trends over time, each bar represents how data is distributed in a single category. Each bar
represents a continuous range of data or the number of frequencies for a specific data point.
Histograms are useful for showing the distribution of a single scale variable. Data are binned and summarized by using a count or percentage statistic. A variation of a histogram is a frequency
polygon, which is like a typical histogram except that the area graphic element is used instead of the bar graphic element.
Another variation of the histogram is the population pyramid. Its name is derived from its most common use: summarizing population data. When used with population data, it is split by gender to
provide two back-to-back, horizontal histograms of age data. In countries with a young population, the shape of the resulting graph resembles a pyramid.
Creating a histogram chart
1. In the Chart Type section, click the Histogram icon.
The canvas updates to display a histogram chart template.
2. Select a scale variable as the X-axis variable.
Note: The statistic for a histogram is Histogram or Histogram Percent. These statistics bin the data and calculate a count for each bin.
3. Click the Save visualization in the project control. Select Create a new asset or Append to existing asset. Provide a Visualization asset name, an optional description, and a chart name.
4. Click Apply to save the visualization to the project. The new visualization asset is now available on the Assets tab.
Lists variables that are available for the chart's X-axis.
Split by
Select a categorical variable that creates a table of charts, with a cell for each category in the Split by variable. Like grouping, split by variables essentially add more dimensions to your
chart by displaying information for each variable category.
Bin method
Specify a bin method that is used to create the chart bars. Available option include Auto bin, By bin width, and By bin num.
When enabled, the kernel density estimate curve is shown on the chart.
Show distribution curve
When enabled, the distribution fitting curve is shown on the chart.
The drop-down list provides to the following distribution options.
Automatically fits the distribution (the default setting).
Returns the value from a Beta distribution with specified shape parameters.
Returns the value from an exponential distribution.
Returns the value from the Gamma distribution, with the specified shape and scale parameters.
Returns the value from a log-normal distribution with specified parameters.
Returns the value from a normal distribution with specified mean and standard deviation.
Returns the value from a triangular distribution with specified parameters.
Returns the value from the uniform distribution between the minimum and maximum.
Returns the value from a Weibull distribution with specified parameters.
Bin width
The slider controls the size of the interval that is used to split the data into groups.
Primary title
The chart title.
The chart subtitle.
The chart footnote.
XAxis label
The x-axis label.
YAxis label
The y-axis label. | {"url":"https://www.ibm.com/docs/en/watsonx/w-and-w/1.1.x?topic=types-histogram-charts","timestamp":"2024-11-06T15:25:58Z","content_type":"text/html","content_length":"11416","record_id":"<urn:uuid:5bcb3b78-46be-4d6b-8df1-27f6c69f9fd8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00153.warc.gz"} |
Solar Sails I
Solar sails are in the news again, and this time not just for blowing up. The Japanese space agency is launching what they hope to be the first successful solar sail tomorrow. In honor of that, we
will be discussing the physics of solar sails. First of all, what the heck are solar sails? Solar sails are a means of propulsion based on the simple observation that "Hey, sails work on boats.
Therefore, they should work on interplanetary spacecraft (in space)." Boat sails work when air molecules hit into the sail and bounce back. By conservation of momentum, this gives the boat sail an
itty bitty boost in momentum. Summing over the large number of air molecules moving as wind, the boat gets pushed along in the water. A similar process works with solar sails, but instead of air
molecules doing the hitting, it's photons. Since each photon of a given wavelength has some momentum, by reflecting that photon the solar sail can gain a tiny bit of momentum. Summing over the large
number of photons coming from the sun over a long time frame we can get a considerable boost. So let's see how good solar sails are. First we need to find the net force on our sail. We will certainly
have to deal with gravitational forces (which will slow us down) : $$ F_{g} = \frac{-GM_{\odot}m}{r^2} $$ where big M is the mass of the sun and little m is the mass of the sail. Now we need to find
the radiation force on the sail. Since force is just rate of change of momentum, we can find the change of momentum of one photon per unit time, then find how many photons are hitting our sail. So
for one elastic collision of a photon with the sail, the change in momentum will be $$ \Delta p = 2 \frac{h\nu}{c} $$ and by conservation of momentum, this will also be the momentum gained by the
sail. Now we want to find the number of photons incident on a given area in a given time. This will just be the energy flux output by the sun ( energy/ m^2 s ) divided by the energy per photon. In
other words: $$ f_n = \frac{L_{\odot}}{4\pi r^2}\frac{1}{h\nu} .$$ So now we can get a force by $$ \text{Force} = \left(\frac{\Delta p}{\text{1 photon}} \right) \times \left(\frac{\text{number of
photons}}{area \times time}\right) \times \left( Area\right) $$ which is just $$ F_{rad} = 2 \frac{h\nu}{c} \times \frac{L_\odot}{4\pi r^2 h\nu} \times \pi R^2 = \frac{L_{\odot} R^2}{2cr^2} .$$ So
combining the radiation force with the gravitation force, we have a net force on the sail of $$ F = \left( \frac{L_{\odot} R^2}{2c} - GM_{\odot}m \right) \frac{1}{r^2} .$$ This can then be integrated
over r to find an effective potential, giving: $$ U = \left( \frac{L_{\odot} R^2}{2c} - GM_{\odot}m\right)\frac{1}{r} .$$ For simplicity, let's just write that $$ \alpha = \frac{L_{\odot} R^2}{2c} -
GM_{\odot}m $$ so $$ U = \frac{\alpha}{r} .$$ Now we can start saying some things about this sail. The most straightforward quantity to find would be the maximum velocity. By conservation of energy
(and starting from some r_0 at rest), we have that $$ v_f = \left[\frac{2\alpha}{m} \left(\frac{1}{r_0} - \frac{1}{r_f} \right) \right]^{1/2} $$ So as r_f goes to very large values, the subtracted
piece gets smaller and smaller. In the limit that r_f goes to infinity we have that $$ v_{max} = \left(\frac{2\alpha}{mr_0}\right)^{1/2} .$$ Plugging back in our long term for alpha and plugging in
some numbers we get: $$ v_{max} = 42,000 m/s \left( \frac{1.5 \times 10^{-4}}{\sigma} - 1\right)^{1/2} $$ where sigma is just the surface mass density [g/cm^2] of the sail. Below is a plot of maximum
velocity ( m/s) plotted against surface mass density (g/cm^2). For a sigma of 10^-4 g/cm^2, we get a max velocity of about 30,000 m/s. Not bad. From this graph we see that there must be some maximum
surface density, above which we don't get any (forward) motion at all. This makes sense since we want our radiation forces (which scale with area) to overcome our gravitational forces (which scale
with mass). And below this maximal surface density we see a power law behavior. Cool. We can also find the distance traveled as a function of time. Taking the final velocity equation above and
writing v as dr/dt, we see that $$ \frac{dr}{dt} = \left[ \frac{2\alpha}{m} \left( \frac{1}{r_0} - \frac{1}{r_f} \right)\right]^{1/2} $$ Rearranging and integrating, we can get time (in years) as a
function of distance r (in AU): $$ t = \frac{0.11 \left(\sqrt{(-1+r) r}+\text{Log}\left[1+\sqrt{\frac{-1+r}{r}}\right]+\frac{\text{Log}[r]}{2}\right)}{\sqrt{-1+\frac{1.5 \times 10^{-4}}{\sigma}}}$$ A
plot of t vs. r is shown below for typical solar system distances and a sigma of 10^-4 g/cm^2. We assume that we are launching from earth (1 AU). Since Pluto is at a distance of about 40 AU, we see
that our sail could get there in less than 7 years. For comparison, the New Horizons probe will use conventional propulsion to get to Pluto in 9.5 years (and it is the fastest spacecraft ever made).
Zooming in to our starting point around 1 AU, we see that there is a period of acceleration and then the maximum velocity is reached after a few months. Just eyeballing it, it looks like it takes at
least a month to reach appreciable speed. That it takes so long is a result of the very small forces involved due to radiation pressure. But even a small acceleration amounts to a considerable speed
if applied for long enough! Now Pluto is fine I guess (it's the second largest dwarf-planet!), but how about some interstellar flight? Well, the nearest star is Proxima Centauri which is about a
parsec away. A parsec is 310^16 m, or about 200,000 AU. From, the plot below (or plugging in to the equation above), we see that such a trip would take of order 10,000 years. That's a long time, but
its not too shabby considering this craft uses no fuel of its own. So solar sails can do some fairly impressive things simply by harnessing the free energy of the sun. Though this only provides a
very small acceleration, it can be taken over a long enough time to be useful. However, since the radiation pressure of the sun falls off as 1/r^2, we start to observe diminishing returns and the
sail reaches a max velocity. But overall the numbers seem fairly impressive. All that remains now is whether they are feasible to construct. Right now my only data point for feasibility was that it
was in Star Wars, but as I recall that was a long* time ago. | {"url":"https://thephysicsvirtuosi.com/posts/old/solar-sails-i/","timestamp":"2024-11-04T17:01:52Z","content_type":"text/html","content_length":"17019","record_id":"<urn:uuid:0d9a070b-aa60-406d-8322-0cd8705f145e>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00834.warc.gz"} |
aspecd.model module
You're reading an old version of this documentation. For up-to-date information, please have a look at v0.11.
aspecd.model module
Numerical models
Models are defined by (constant) parameters and variables the model is evaluated for. The variables can be thought of as the axes values of the resulting (calculated) dataset.
As a simple example, consider a polynomial defined by its (constant) coefficients. The model will evaluate the polynomial for the values, and the result will be a aspecd.dataset.CalculatedDataset
object containing the values of the evaluated model in its data, and the variables as its axes values.
Models can be seen as abstraction to simulations in some regard. In this respect, they will play a central role in conjunction with fitting models to data by adjusting their respective parameters, a
quite general approach in science and particularly in spectroscopy.
A bit of terminology
parameters :
constant parameters (sometimes termed coefficients) characterising the model
Example: In case of a polynomial, the coefficients would be the parameters of the model.
variables :
values to evaluate the model for
Example: In case of a polynomial, the x values the model is evaluated for would be the variables, with the y values being the corresponding depending values dictated by the model and its
Models provided within this module
Besides providing the basis for models for the ASpecD framework, this module comes with a (growing) number of general-purpose models useful for basically all kinds of spectroscopic data.
Here is a list as a first overview. For details, see the detailed documentation of each of the classes, readily accessible by the link.
Primitive models
Primitive models are mainly used to create test datasets that can be operated on afterwards. The particular strength and beauty of wrapping essential one-liners of code with a full-fledged model
class is twofold: These classes return ASpecD datasets, and you can work completely in context of recipe-driven data analysis, requiring no actual programming skills.
If nothing else, these primitive models can serve as a way to create datasets with fixed data dimensions. Those datasets may be used as templates for more advanced models, by using the
aspecd.model.Model.from_dataset() method.
Having that said, here you go with a list of primitive models:
• Dataset consisting entirely of zeros (in N dimensions)
• Dataset consisting entirely of ones (in N dimensions)
Mathematical models
Besides the primitive models listed above, there is a growing number of mathematical models implementing comparably simple mathematical equations that are often used. Packages derived from the ASpecD
framework may well define more specific models as well.
Composite models consisting of a sum of individual models
Often you encounter situations where a model consists of a (weighted) sum of individual models. A simple example would be a damped oscillation. Or think of a spectral line consisting of several
overlapping individual lines (Lorentzian or Gaussian).
All this can be easily set up using the aspecd.model.CompositeModel class that lets you conveniently specify a list of models, their individual parameters, and optional weights.
Family of curves
Systematically varying one parameter at a time for a given model is key to understanding the impact this parameter has. Therefore, automatically creating a family of curves with one parameter varied
is quite convenient.
To achieve this, use the class aspecd.model.FamilyOfCurves that will take the name of a model (needs to be the name of an existing model class) and create a family of curves for this model, adding
the name of the parameter as quantity to the additional axis.
Writing your own models
All models should inherit from the aspecd.model.Model class. Furthermore, they should conform to a series of requirements:
• Parameters are stored in the aspecd.model.Model.parameters dict.
Note that this is a dict. In the simplest case, you may name the corresponding key “coefficients”, as in case of a polynomial. In other cases, there are common names for parameters, such as “mu”
and “sigma” for a Gaussian. Whether the keys should be named this way or describe the actual meaning of the parameter is partly a matter of personal taste. Use whatever is more common in the
given context, but tend to be descriptive. Usually, implementing mathematical equations by simply naming every variable according to the mathematical notation is a bad idea, as the programmer
will not know what these variables represent.
• Models create calculated datasets of class aspecd.dataset.CalculatedDataset.
The data of these datasets need to have dimensions corresponding to the variables set for the model. Think of the variables as being the axes values of the resulting dataset.
The _origdata property of the dataset is automatically set accordingly (see below for details). This is crucially important to have the resulting dataset work as expected, including undo and redo
functionality within the ASpecD framework. Remember: A calculated dataset is a regular dataset, and you can perform all the tasks with you would do with other datasets, including processing,
analysis and alike.
• Model creation takes place entirely in the non-public _perform_task method of the model.
This method gets called from aspecd.model.Model.create(), but not before some background checks have been performed, including preparing the metadata of the aspecd.dataset.CalculatedDataset
object returned by aspecd.model.Model.create().
After calling out to _perform_task, the axes of the aspecd.dataset.CalculatedDataset object returned by aspecd.model.Model.create() are set accordingly, i.e. fitting to the shape of the data.
On the other hand, a series of things will be automatically taken care of for you:
• Metadata of the resulting aspecd.dataset.CalculatedDataset object are automatically set, including type (set to the full class name of the model) and parameters (copied over from the parameters
attribute of the model).
• Axes of the resulting aspecd.dataset.CalculatedDataset object are automatically adjusted according to the size and content of the aspecd.model.Model.variables attribute.
In case you used aspecd.model.Model.from_dataset(), the axes from the dataset will be copied over from there.
• The _origdata property of the dataset is automatically set accordingly. This is crucially important to have the resulting dataset work as expected, including undo and redo functionality within
the ASpecD framework.
Make sure your models do not raise errors such as ZeroDivisionError depending on the parameters set. Use the aspecd.utils.not_zero() function where appropriate. This is particularly important in
light of using models in the context of automated fitting.
class aspecd.model.Model
Bases: ToDictMixin
Base class for numerical models.
Models are defined by (constant) parameters and variables the model is evaluated for. The variables can be thought of as the axes values of the resulting (calculated) dataset.
As a simple example, consider a polynomial defined by its (constant) coefficients. The model will evaluate the polynomial for the values, and the result will be a aspecd.dataset.CalculatedDataset
object containing the values of the evaluated model in its data, and the variables as its axes values.
Models can be seen as abstraction to simulations in some regard. In this respect, they will play a central role in conjunction with fitting models to data by adjusting their respective
parameters, a quite general approach in science and particularly in spectroscopy.
Name of the model.
Defaults to the lower-case class name, don’t change!
constant parameters characterising the model
values to evaluate the model for
Usually numpy.ndarray arrays, one for each variable
The variables will become the values of the respective axes.
Short description, to be set in class definition
List of references with relevance for the implementation of the model.
Use appropriate record types from the bibrecord package.
Label that will be applied to the calculated dataset
Usually, labels provide a short and concise description of a dataset, at least in a given context.
List of dicts containing quantity and unit for each axis.
Needs to have the same length as the axes of the created dataset.
If you would like to skip one axis, set it to an empty dict, None, or False (i.e., anything that evaluates to False in Python). This is particularly helpful with models such as FamilyOfCurves
that auto-generate one axis.
Changed in version 0.3: New non-public method _sanitise_parameters()
Changed in version 0.6: New attributes label and axes
Create dataset containing the evaluated model as data
The actual model creation should be implemented within the non-public method _perform_task(). Furthermore, you should make sure your model will be evaluated for the values given in
aspecd.model.Model.values and the resulting dataset having set the axes appropriately.
Furthermore, don’t forget to set the _origdata property of the dataset, usually simply by copying the data property over there after it has been filled with content. This is crucially
important to have the resulting dataset work as expected, including undo and redo functionality within the ASpecD framework. Remember: A calculated dataset is a regular dataset, and you can
perform all the tasks with you would do with other datasets, including processing, analysis and alike.
dataset – Calculated dataset containing the evaluated model as data
Return type:
aspecd.exceptions.MissingParameterError – Raised if either parameters or values are not set
Evaluate model and return numerical data without any checks.
Usually, you should always use create() and obtain a dataset based on the model. However, create() performs a lot of additional checks. Therefore, if you are sure to have set all properties
as necessary and are interested in a probably much faster evaluation of the model for a given set of parameters, e.g. in context of fitting, this is the method of choice.
data – Numerical data of the model
Return type:
Obtain crucial information from an existing dataset.
Often, models should be calculated for the same values as an existing dataset. Therefore, you can set the aspecd.model.Model.values property from a given dataset.
If you get the variables from an existing dataset, the calculated dataset containing the evaluated model will have the same axes settings. Thus, it is pretty convenient to get a model with
identical axes, including quantity etcetera. This helps a lot with plotting both, an (experimental) dataset and the model, in one plot.
dataset (aspecd.dataset.Dataset) – Dataset to obtain crucial information for building the model from
aspecd.exceptions.MissingDatasetError – Raised if no dataset is provided
Set attributes from dictionary.
dict (dict) – Dictionary containing information of a task.
aspecd.plotting.MissingDictError – Raised if no dict is provided.
class aspecd.model.CompositeModel
Bases: Model
Composite model consisting of a weighted contributions of individual models.
Individual models can either be added up (default) or multiplied, depending on which operators are provided. Both situations occur frequently. If you would like to describe a spectrum as sum of
Gaussian or Lorentzian lines, you need to add the individual contributions. If you would like to model a damped oscillation, you would need to multiply the exponential decay onto the oscillation.
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to describe your data with a model consisting of two Lorentzian line shapes. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of
given length and axes range. Based on that you can create your model:
- kind: model
type: Zeros
shape: 1001
range: [0, 20]
result: dummy
- kind: model
type: CompositeModel
from_dataset: dummy
- Lorentzian
- Lorentzian
- position: 5
- position: 8
result: multiple_lorentzians
Note that you need to provide parameters for each of the individual models, even if the class for a model would work without explicitly providing parameters.
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
While adding up the contributions of the individual components works well for describing spectra, sometimes you need to multiply contributions. Suppose you would want to create a damped
oscillation consisting of a sine and an exponential. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you
can create your model:
- kind: model
type: Zeros
shape: 1001
range: [0, 20]
result: dummy
- kind: model
type: CompositeModel
from_dataset: dummy
- Sine
- Exponential
- frequency: 1
phase: 1.57
- rate: -0.2
- multiply
result: damped_oscillation
Again, you need to provide parameters for each of the individual models, even if the class for a model would work without explicitly providing parameters.
class aspecd.model.FamilyOfCurves
Bases: Model
Create a family of curves for a model, varying a single parameter.
Systematically varying one parameter at a time for a given model is key to understanding the impact this parameter has. Therefore, automatically creating a family of curves with one parameter
varied is quite convenient.
This class will take the name of a model (needs to be the name of an existing model class) and create a family of curves for this model, adding the name of the parameter as quantity to the
additional axis.
Name of the model the family of curves should be calculated for
Needs to be the name of an existing model class.
Name and values of the parameter to be varied
Name of the parameter that should be varied
Values of the parameter to be varied
ValueError – Raised if no model is provided
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a family of curves of a Gaussian with varying the width. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given
length and axes range. Based on that you can create your family of curves:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: FamilyOfCurves
from_dataset: dummy
model: Gaussian
parameter: width
values: [1., 1.5, 2., 2.5, 3]
result: gaussian_with_varied_width
This would create a 2D dataset with a Gaussian with standard values for amplitude and position and the value for the width varied as given.
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
If you would like to control additional parameters of the Gaussian, you can do that as well:
- kind: model
type: FamilyOfCurves
from_dataset: dummy
model: Gaussian
amplitude: 3.
position: -1
parameter: width
values: [1., 1.5, 2., 2.5, 3]
result: gaussian_with_varied_width
Note that if you provide a value for the parameter to be varied in the list of parameters, it will be silently overwritten by the values provided with vary.
class aspecd.model.Zeros
Bases: Model
Zeros of given shape.
One of the most primitive models: zeros in N dimensions.
This model is quite helpful for creating test datasets, e.g. with added noise (of different colour). Basically, it can be thought of as a wrapper for numpy.zeros(). Its particular strength is
that using this model, creating test datasets becomes straight-forward in context of recipe-driven data analysis.
All parameters necessary for this step.
shape of the data
Have in mind that ND datasets get huge very fast. Therefore, it is not the best idea to create an 3D dataset with zeros with 2**12 elements along each dimension.
range of each of the axes
Useful if you want to specify the axes values as well.
If the data are multidimensional, one range for each axis needs to be provided.
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Creating a dataset consisting of 2**10 zeros is quite simple:
- kind: model
type: Zeros
shape: 1024
result: 1d_zeros
Of course, you are not limited to 1D datasets, and you can easily create ND datasets as well:
- kind: model
type: Zeros
shape: [1024, 256, 256]
result: 3d_zeros
Please have in mind that the memory of your computer is usually limited and that ND datasets become huge very fast. Hence, creating a 3D array with 2**10 elements along each dimension is most
probably not the best idea.
Suppose you not only want to create a dataset with a given shape, but set the axes values (i.e., their range) as well:
- kind: model
type: Zeros
shape: 1024
range: [35, 42]
result: 1d_zeros
This would create a 1D dataset with 1024 values, with the axes values spanning a range from 35 to 42. Of course, the same can be done with ND datasets.
Now, let’s assume that you would want to play around with the different types of (coloured) noise. Therefore, you would want to first create a dataset and afterwards add noise to it:
- kind: model
type: Zeros
shape: 8192
result: 1d_zeros
- kind: processing
type: Noise
normalise: True
This would create a dataset consisting of 2**14 zeros and add pink (1/f) noise to it that is normalised (has an amplitude of 1). To check that the noise is really 1/f noise, you may look at its
power density. See aspecd.analysis.PowerDensitySpectrum for details, including how to even plot both, the power density spectrum and a linear fit together in one figure.
class aspecd.model.Ones
Bases: Model
Ones of given shape.
One of the most primitive models: ones in N dimensions.
This model is quite helpful for creating test datasets, e.g. with added noise (of different colour). Basically, it can be thought of as a wrapper for numpy.ones(). Its particular strength is that
using this model, creating test datasets becomes straight-forward in context of recipe-driven data analysis.
All parameters necessary for this step.
shape of the data
Have in mind that ND datasets get huge very fast. Therefore, it is not the best idea to create an 3D dataset with ones with 2**12 elements along each dimension.
range of each of the axes
Useful if you want to specify the axes values as well.
If the data are multidimensional, one range for each axis needs to be provided.
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Creating a dataset consisting of 2**10 ones is quite simple:
- kind: model
type: Ones
shape: 1024
result: 1d_ones
Of course, you are not limited to 1D datasets, and you can easily create ND datasets as well:
- kind: model
type: Ones
shape: [1024, 256, 256]
result: 3d_ones
Please have in mind that the memory of your computer is usually limited and that ND datasets become huge very fast. Hence, creating a 3D array with 2**10 elements along each dimension is most
probably not the best idea.
Suppose you not only want to create a dataset with a given shape, but set the axes values (i.e., their range) as well:
- kind: model
type: Ones
shape: 1024
range: [35, 42]
result: 1d_zeros
This would create a 1D dataset with 1024 values, with the axes values spanning a range from 35 to 42. Of course, the same can be done with ND datasets.
Now, let’s assume that you would want to play around with the different types of (coloured) noise. Therefore, you would want to first create a dataset and afterwards add noise to it:
- kind: model
type: Ones
shape: 8192
result: 1d_ones
- kind: processing
type: Noise
normalise: True
This would create a dataset consisting of 2**14 ones and add pink (1/f) noise to it that is normalised (has an amplitude of 1). To check that the noise is really 1/f noise, you may look at its
power density. See aspecd.analysis.PowerDensitySpectrum for details, including how to even plot both, the power density spectrum and a linear fit together in one figure.
class aspecd.model.Polynomial
Bases: Model
Evaluate a polynomial with given coefficients for the data provided in aspecd.model.Model.variables.
As the new numpy.polynomial package is used, particularly the numpy.polynomial.polynomial.Polynomial class, the coefficients are given in increasing order, with the first element corresponding to
Furthermore, the coefficients are assumed to be provided in the unscaled data domain (by using the numpy.polynomial.polynomial.Polynomial.convert() method).
All parameters necessary for this step.
coefficients of the polynomial to be evaluated
The number of coefficients determines the order (degree) of the polynomial. The coefficients have to be given in increasing order (see note above). Furthermore, you need to provide the
coefficients in the unscaled data domain (using the numpy.polynomial.polynomial.Polynomial.convert() method).
aspecd.exceptions.MissingParameterError – Raised if no coefficients are given
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a Polynomial of first order with a slope of 42 and an intercept of -3. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros)
of given length and axes range. Based on that you can create your Polynomial:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: Polynomial
from_dataset: dummy
coefficients: [-3, 42]
result: polynomial
Note that the coefficients are given in increasing order of the exponent, here intercept first, followed by the slope.
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
class aspecd.model.Gaussian
Bases: Model
Generalised Gaussian.
Creates a Gaussian function or Gaussian, with its characteristic symmetric “bell curve” shape.
The underlying mathematical equation may be written as follows:
\[f(x) = a \exp\left(-\frac{(x-b)^2}{2c^2}\right)\]
with \(a\) being the amplitude, \(b\) the position, and \(c\) the width of the Gaussian.
Note that this is a generalised Gaussian where you can set amplitude, position, and width independently. Hence, it is not normalised to an integral of one, and therefore not to be confused with
the the probability density function (PDF) of a normally distributed random variable. If you are interested in this, see the aspecd.model.NormalisedGaussian class.
All parameters necessary for this step.
Amplitude or height of the Gaussian
Default: 1
Position (of the maximum) of the Gaussian
Default: 0
Width of the Gaussian
The full width at half maximum (FWHM) is related to the width b by: \(2 \sqrt{2 \log(2)} b\).
Default: 1
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a Gaussian with standard values (amplitude=1, position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of
given length and axes range. Based on that you can create your Gaussian:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: Gaussian
from_dataset: dummy
result: gaussian
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control all three parameters (amplitude, position, width) explicitly:
- kind: model
type: Gaussian
amplitude: 5
position: 1.5
width: 0.5
from_dataset: dummy
result: gaussian
This would create a Gaussian with an amplitude (height) of 5, situated at a value of 1.5 at the x axis, and with a width of 0.5.
class aspecd.model.NormalisedGaussian
Bases: Model
Normalised Gaussian.
Creates a Gaussian function or Gaussian, with its characteristic symmetric “bell curve” shape, normalised to an integral of one. Thus, it is the probability density function (PDF) of a normally
distributed random variable.
The underlying mathematical equation may be written as follows:
\[f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{ 2\sigma^2}\right)\]
with \(\mu\) being the position and \(\sigma\) the width of the Gaussian, and \(\sigma^2\) the variance.
This class creates a normalised Gaussian, equivalent to the PDF of a normally distributed random variable. If you are interested in a Gaussian where you can set all three parameters (amplitude,
position, width) independently, see the aspecd.model.Gaussian class.
All parameters necessary for this step.
Position (of the maximum) of the Gaussian
For a normally distributed random variable \(x\), the position is identical to its expected value \(E(x)\) or mean \(\mu\). Other names include first moment and average.
Default: 0
Width of the Gaussian
The full width at half maximum (FWHM) is related to the width \(\sigma\) by: \(2 \sqrt{2 \log(2)} \sigma\). The squared value of the width is better known the variance \(\sigma^2\).
Default: 1
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a normalised Gaussian with standard values (position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of
given length and axes range. Based on that you can create your normalised Gaussian:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: NormalisedGaussian
from_dataset: dummy
result: normalised_gaussian
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control position and width explicitly:
- kind: model
type: NormalisedGaussian
position: 1.5
width: 0.5
from_dataset: dummy
result: normalised_gaussian
This would create a normalised Gaussian with its maximum situated at a value of 1.5 at the x axis, and with a width of 0.5.
class aspecd.model.Lorentzian
Bases: Model
Generalised Lorentzian.
Creates a Lorentzian function or Lorentzian often used in spectroscopy, as the line shape of a purely lifetime-broadened spectral line is identical to such a Lorentzian.
The underlying mathematical equation may be written as follows:
\[f(x) = a \left[\frac{c^2}{(x-b)^2 + c^2}\right]\]
with \(a\) being the amplitude, \(b\) the position, and \(c\) the width of the Lorentzian.
Note that this is a generalised Lorentzian where you can set amplitude, position, and width independently. Hence, it is not normalised to an integral of one, and therefore not to be confused with
the the probability density function (PDF) of the Cauchy distribution. If you are interested in this, see the aspecd.model.NormalisedLorentzian class.
All parameters necessary for this step.
Amplitude or height of the Lorentzian
Default: 1
Position (of the maximum) of the Lorentzian
Default: 0
Width of the Lorentzian
The full width at half maximum (FWHM) is related to the width \(b\) by: \(2b\).
Default: 1
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a Lorentzian with standard values (amplitude=1, position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros)
of given length and axes range. Based on that you can create your Lorentzian:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: Lorentzian
from_dataset: dummy
result: lorentzian
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control all three parameters (amplitude, position, width) explicitly:
- kind: model
type: Lorentzian
amplitude: 5
position: 1.5
width: 0.5
from_dataset: dummy
result: lorentzian
This would create a Lorentzian with an amplitude (height) of 5, situated at a value of 1.5 at the x axis, and with a width of 0.5.
class aspecd.model.NormalisedLorentzian
Bases: Model
Normalised Lorentzian.
Creates a normalised Lorentzian function or Lorentzian with an integral of one, i.e. the probability density function (PDF) of the Cauchy distribution.
The underlying mathematical equation may be written as follows:
\[f(x) = \frac{1}{\pi c} \left[\frac{c^2}{(x-b)^2 + c^2}\right] = \frac{c}{\pi[(x-b)^2 + c^2]}\]
with \(b\) being the position and \(c\) the width of the Lorentzian.
This class creates a normalised Lorentzian, equivalent to the PDF of the Cauchy distribution. If you are interested in a Lorentzian where you can set all three parameters (amplitude, position,
width) independently, see the aspecd.model.Lorentzian class.
All parameters necessary for this step.
Position (of the maximum) of the Lorentzian
Default: 0
Width of the Lorentzian
The full width at half maximum (FWHM) is related to the width \(b\) by: \(2b\).
Default: 1
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a normalised Lorentzian with standard values (position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of
given length and axes range. Based on that you can create your Lorentzian:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: NormalisedLorentzian
from_dataset: dummy
result: normalised_lorentzian
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control position and width explicitly:
- kind: model
type: NormalisedLorentzian
position: 1.5
width: 0.5
from_dataset: dummy
result: normalised_lorentzian
This would create a normalised Lorentzian with its maximum situated at a value of 1.5 at the x axis, and with a width of 0.5.
class aspecd.model.Voigtian
Bases: Model
Voigt profile.
The Voigt profile (after Woldemar Voigt) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution (with half-width at half-maximum gamma) and a Gaussian distribution
(with standard deviation sigma). It is often used for analyzing spectroscopic data.
In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms: life-time broadening (Lorentzian part) and inhomogeneous broadening (Gaussian part).
If sigma = 0, the PDF of the Cauchy distribution is returned. Conversely, if gamma = 0, the PDF of the Normal distribution is returned. If sigma = gamma = 0`, the return value is ``Inf for x = 0,
and 0 for all other x.
Note: Internally, the function scipy.special.voigt_profile() is used to calculate the data.
All parameters necessary for this step.
Position (of the maximum) of the Voigt profile
Default: 0
Standard deviation of the Gaussian part
Default: 1
Width of the Lorentzian part
The full width at half maximum (FWHM) is related to the width \(b\) by: \(2b\).
Default: 1
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a Voigt profile with standard values (position=0, gamma=1, sigma=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros)
of given length and axes range. Based on that you can create your Voigtian:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: Voigtian
from_dataset: dummy
result: voigtian
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control position and widths of Gaussian and Lorentzian contributions explicitly:
- kind: model
type: Voigtian
position: 1.5
sigma: 0.5
gamma: 2
from_dataset: dummy
result: voigtian
This would create a Voigt profile with its maximum situated at a value of 1.5 at the x axis, and with a standard deviation of the Gaussian component of 0.5 and a line width of the Lorentzian part
of 2.
class aspecd.model.Sine
Bases: Model
Sine wave.
Creates a sine function with given amplitude, frequency, and phase.
The underlying mathematical equation may be written as follows:
\[f(x) = a \sin(fx + \phi)\]
with \(a\) being the amplitude, \(f\) the frequency, and \(\phi\) the phase of the sine.
All parameters necessary for this step.
Amplitude of the sine.
Note that the real amplitude (max-min) is twice the value given here. Nevertheless, calling this factor “amplitude” seems to be common.
Default: 1
Frequency of the sine (in radians).
Default: 1
Phase (i.e., shift) of the sine (in radians).
Setting the phase to \(\pi/2\) would result in a cosine.
Default: 0
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create a sine with standard values (amplitude=1, frequency=1, shift=0). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of
given length and axes range. Based on that you can create your sine:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: Sine
from_dataset: dummy
result: sine
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control all three parameters (amplitude, frequency, and shift) explicitly:
- kind: model
type: Sine
amplitude: 42
frequency: 4.2
shift: 1.57
from_dataset: dummy
result: sine
This would create a sine with an amplitude of 42 (the actual amplitude, defined as max-min, would be twice this value), a frequency of 4.2 and a shift of about pi/2.
class aspecd.model.Exponential
Bases: Model
Exponential function.
Creates an exponential with given prefactor and rate.
The underlying mathematical equation may be written as follows:
\[f(x) = a \exp(bx)\]
with \(a\) being the prefactor and \(b\) the rate of the exponential.
All parameters necessary for this step.
Intercept of the exponential.
Default: 1
Rate of the exponential.
Default: 1
In case of modelling exponential decays, the rate constant will become negative. This rate constant (decay rate) is the inverse of the lifetime. Lifetime and half-life are related by a factor of
For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each
on a single aspect.
Suppose you would want to create an exponential with standard values (prefactor=1, rate=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given
length and axes range. Based on that you can create your exponential:
- kind: model
type: Zeros
shape: 1001
range: [-5, 5]
result: dummy
- kind: model
type: Exponential
from_dataset: dummy
result: exponential
Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label to this dataset directly in from_dataset, without needing to create a dummy dataset
Of course, you can control all parameters (prefactor, rate) explicitly:
- kind: model
type: Exponential
prefactor: 42
rate: 4.2
from_dataset: dummy
result: exponential
This would create an exponential with a prefactor of 42 (i.e. the intercept) and a rate of 4.2. | {"url":"https://docs.aspecd.de/v0.3/api/aspecd.model.html","timestamp":"2024-11-10T14:01:01Z","content_type":"text/html","content_length":"163479","record_id":"<urn:uuid:ccfb199c-4eef-488f-b436-c3bb06cbd616>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00200.warc.gz"} |
The Universe Is Not Locally Real. Or Is It?
What is the Concept of Locally Real?
In the view of local realism, entities have specific characteristics that are true to them regardless of whether someone is observing, embodying a world where things inherently hold their essence
untouched by observation. It also means that there’s a cap on how fast news can zip around; so things far apart can’t instantly sway each other.
In 2022, the Nobel Prize in Physics was awarded to pioneers in the field of intricate systems, a study that deeply impacts our grasp of local realism by showing how connected behavior can emerge from
systems we thought were separate. Exploring complex systems has shown us that even things we thought were separate can be linked, behaving together without chatting directly, shaking up our
old-school thinking about how things should connect in the universe.
In the realm of quantum mechanics, the concept of local realism is further challenged. Quantum entanglement weaves particles together so that one’s characteristics can instantaneously alter its
partner’s, no matter how vast the space between them. This strange twist of quantum mechanics, where particles are mystically linked and influence each other no matter how far apart they are, shakes
the core of our belief in a world where things can only affect their immediate surroundings. As we peel back the layers of quantum mechanics and tangled systems, our grasp on what was once seen as
solid – local realism in old-school physics – is getting a fresh look and some serious questioning.
Overview of Current Quantum Research and Theory
Quantum mechanics unravels the complex dance of tiny bits like atoms and even tinier particles, explaining how they move and interact in ways that are far from the big, predictable stuff we see every
day. Quantum mechanics pivots on notions of unpredictability and chance, diverging from the predictable paths charted by classical physics. This framework offers insights into the puzzling nature of
light and matter, demonstrating both wave-like and particle-like characteristics, along with the perplexing idea that particles can exist in multiple states at once until observed.
Local realism is the classical interpretation that objects have definite properties regardless of whether they are observed, and that physical processes cannot have instantaneous effects over long
distances. Quantum entanglement weaves a strange web, tying together the essence of two particles in such a way that what happens to one instantly echoes in the other, no matter how vast the space
between them.
John Bell crafted a mathematical theorem that, once put to the test in real-world experiments, showed unmistakably that the spooky predictions of quantum mechanics defy any explanation rooted in the
notion that objects have set traits and influences are local.
The tests upholding the breach of Bell’s principles profoundly shook our grasp on reality, revealing layers beyond what we once understood. They’ve shown us that quantum entanglement isn’t just
theory, and this revelation is pushing forward new explorations into how we might exchange information or send messages using the strange rules of the quantum realm.
Einstein’s General relativity, on the other hand, describes the force of gravity as a curvature of space-time, and it conflicts with the principles of quantum mechanics at very small scales, where
space-time becomes highly curved. This clash between the fabric of space-time bending under gravity and quantum mechanics at incredibly tiny scales is a hotbed for scientific inquiry, as researchers
strive to untangle these complex principles.
Quantum Entanglement
In the bizarre world of quantum mechanics, two or more particles can become so deeply linked that whatever happens to one instantly impacts its partner, no matter how far apart they are. In this
strange quantum world, how one particle behaves is mysteriously tied to another’s actions, so that if you tinker with one, the other seems to ‘know’ and responds immediately, no matter how far apart
they are in space.
Why don’t you explain Quantum Entanglement to me like I’m five?
Okay let’s imagine you have two magic toy cars. These cars are really special because no matter how far apart they are, if you turn one car to the left, the other car turns to the right at the same
time, and if you turn one car to the right, the other car turns to the left, instantly! It’s like they have an invisible string between them that lets them talk to each other, but this string is so
special that it works no matter how far away the cars are from each other. This is kind of like quantum entanglement, where two tiny particles, smaller than anything you can see, are connected in a
magical way so that what happens to one happens to the other, even if they are really, really far apart!
This phenomenon is a consequence of quantum superposition, where particles can exist in multiple states simultaneously until they are observed, at which point they “collapse” into a single state. In
the case of entangled particles, their states are linked in a way that observing the state of one particle instantly determines the state of the other particle, regardless of the distance between
One of the most remarkable aspects of quantum entanglement is its instantaneous nature, challenging the traditional idea of the speed of light as the universal speed limit. This poses a significant
challenge to our understanding of the fundamental principles of physics, such as causality and locality.
The implications of quantum entanglement for physics and our understanding of reality are profound. It has opened up new possibilities for quantum computing, cryptography, and communication, as well
as prompting physicists to revisit our understanding of space, time, and the nature of reality itself. Quantum entanglement has challenged our conventional understanding of the universe and continues
to push the boundaries of what we thought was possible in the quantum realm.
Neil deGrasse Tyson Explores Quantum Entanglement
The Universe is Not Locally Real
In a universe that is not locally real, the concept of locality, which states that events are only influenced by their immediate surroundings, does not hold true. This would imply that events that
seem to occur independently at different locations are actually connected in some way that transcends the limitations of space. This challenges our understanding of information transmission, as it
suggests that information could be instantaneously transmitted between distant points, defying the constraints of the speed of light.
Furthermore, the notion of causality is also disrupted in a universe that is not locally real. Causality dictates that an event is always preceded by its cause, and this sequence is crucial for our
understanding of the passage of time. However, in a universe where locality is not a defining factor, the concept of cause and effect becomes more ambiguous, as events occurring at different
locations could seemingly influence each other without any apparent temporal relationship.
Alain Aspect’s Experiments on Entangled Particles
Photo credit: Wikimedia Commons
Alain Aspect’s pioneering work has deeply influenced our grasp of quantum mechanics, particularly in the enigmatic behavior of entwined particles that defy conventional physics. Aspect’s pioneering
investigations have illuminated the enigmatic bonds between entangled particles, fundamentally questioning our established notions of reality. Alain Aspect’s pioneering experiments have not only
enriched our grasp of quantum mechanics but also laid the groundwork for advances in the realms of quantum computing and secure communication. We’re going to dive into the groundbreaking nature of
Aspect’s work, see how it shakes up our understanding of quantum mechanics, and think about how this could revolutionize technology in the years ahead.
Overview of A. Aspect’s Experiments
In the 1980s, Alain Aspect directed scientific trials that negated a hypothesis about unseen variables and went against a mathematical inequality, significantly influencing physicists. In his
studies, Aspect probed the orientation of photon duos springing from a mutual genesis.
By weaving in chance to the way he measured, Aspect showed that the characteristics of intertwined particles aren’t set in stone as hidden variable theories would suggest. Aspect’s work disrupted the
previously accepted CHSH inequality, which was thought to cap the connections seen between entangled particles.
Results of A. Aspect’s Experiments
Alain Aspect delved into the core of quantum theory, probing how reality behaves at its most fundamental level. Incorporating chance into his analysis, Aspect’s experiments demonstrated that the
outcomes weren’t prearranged by the apparatus used to observe them. This shook the belief that quantum mechanics was just a placeholder for something more certain, rather than being an exact mirror
of how things truly behave.
The key aspect of Aspect’s tests was seeing if Bell’s theorem, which says that local hidden variables can’t fully explain quantum mechanics, held up by checking if the CHSH inequality – a
mathematical formula for testing that idea – failed when particles were measured. Aspect’s work consistently showed a breach of this rule in numerous tests, conducted globally. This outcome was a
real triumph for those who argue that quantum mechanics doesn’t rely on hidden, predetermined factors, but rather embraces the unpredictability of nature.
John Clauser’s Variable Theory and its Implications on Local Reality
Photo credit: Nobel Prize (https://www.nobelprize.org/)
John Clauser’s innovative theory has transformed our grasp of what we perceive as local reality and its broader consequences. John Clauser’s concept suggests that the traits of tiny particles aren’t
set in stone, but rather come to be when we check on them. This bold idea shakes up our understanding of the quantum realm, sparking lively debates among scientists about the true essence of reality
at its most fundamental level. Clauser’s ideas have unlocked new insights into quantum mechanics’ entangled nature, paving the way for cutting-edge advancements in fields like quantum computing.
Grasping how Clauser’s ideas about variables affect our grasp of the local universe is key to pushing forward our comprehension of quantum phenomena and exploring its uses down the line.
Overview of J. Clauser’s Variable Theory
John Clauser’s variable theory played a crucial role in the development of the Bell test and the CHSH inequality. Initially, Clauser probed the enigmatic variables hidden beneath the surface of
quantum theory and their potential influence on its framework. Seeking to test quantum mechanics and the potential role of unseen variables, Clauser and others devised experiments, like the Bell test
using photon polarization, to determine if local realism held true or if quantum weirdness would prevail.
In collaboration with Stuart Freedman, Clauser performed the Bell test using a specific experimental setup with polarized photons. They worked to plug the gaps that might undermine their findings,
aiming for a stronger proof of quantum principles.
Clauser’s groundbreaking ideas, together with the Bell test and the intricate CHSH inequality, have been pivotal in demystifying quantum physics and unraveling the mysteries of our physical world.
These tests have shed light on the interconnectedness and distance-defying relationships between quantum entities, offering a deeper view into the universe’s core mechanisms.
Implications of J. Clauser’s Variable Theory on Local Reality
J.S. Bell’s Theorem, J.S. Clauser’s work significantly advanced our grasp of quantum mechanics and the experiments testing Bell’s ideas, through his innovative proposal regarding hidden factors that
influence the properties of tiny particles. Clauser theorized hidden variables underlying quantum particles’ behavior, culminating in his eminent Clauser-Horne-Shimony-Holt inequality.
Clauser’s work turned the tables on our grasp of quantum physics by showing how breaking the CHSH inequality calls into question age-old beliefs about local reality. His test with light to check the
Bell inequality crucially gave proof supporting the violation of the CHSH inequality, further questioning the classical ideas of locality and realism in the quantum realm.
Clauser’s collaboration with Stuart Freedman on conducting the influential Bell test using polarized photons eventually led to them being awarded the Nobel Prize in Physics in 2022. Their pioneering
efforts have carved a path for new discoveries in the realm of quantum mechanics, significantly swaying the ongoing discussions about what’s really happening at the microscopic scale. Clauser’s
unique perspective on what’s real and local has been pivotal in shaking up our standard views of the quantum realm.
John Stewart Bell’s Non-Local Reality Theory and Its Implications for Quantum Physics
Photo credit: Wikimedia Commons
The groundbreaking concept of non-local reality, introduced by John Stewart Bell, has significantly altered our understanding of quantum physics. John Bell’s theory delves into the mysterious ways
that particles can instantaneously connect with one another, no matter how far apart they are scattered. Bell’s theorem has ignited lively discussions and holds deep consequences for our grasp of
reality and the core tenets of quantum theory. We’re diving into Bell’s groundbreaking concept of a Non-Local Universe, unraveling its core ideas and examining how they reshape our grasp on the
enigmatic realm of quantum mechanics. Additionally, we’ll delve into how Bell’s theory has steered quantum physics experiments and stirred lively debates among scientists.
Overview of J.S. Bell’s Non-Local Reality Theory
J.S. Bell’s theory shakes up our usual thinking by suggesting that all things in the universe, no matter how far apart they are, might be woven together in a web of connections. This concept unfolds
a universe where every entity, spaced by lightyears or just millimeters, is intricately linked in an invisible web that transcends physical separation.
In Bell’s view, we’re entwined with a cosmic tapestry where every part is in concert, suggesting that our role as observers extends beyond isolated points and moments to being active participants
within this grand scheme. In this intricate web of existence, the observer is woven into the universal tapestry, able to touch and be touched by reality’s vast network.
The conclusions of Bell’s theory radically dispute the customary perception of the observer as the focal point of reality, instead intimating that the perceiver is a constituent of an expansive,
cohesive totality where the lines between examiner and examined are obscured. In this grand tapestry of existence, we’re not mere spectators but threads woven into the fabric where divisions between
seer and seen blur. This realization shifts our grasp of the cosmos and reshapes how we see ourselves within its vast expanse.
Implications for Quantum Physics
The way we comprehend the basic rules of physics is being turned on its head due to quantum connections that hint at unseen ties between particles. Particles seem to be linked in a way that defies
space, challenging the very basics of what we thought we knew about how things interact. This suggests that hidden rules might be at work, shaping the weird ways particles talk to each other and
pushing us toward a richer grasp of quantum mysteries.
However, there exist constraints on theoretical devices that could violate local realism and are often utilized to evaluate quantum correlations. Researchers have come up with a mix of simple and
complex strategies, like No Advantage for Nonlocal Calculation, to probe the deep connections in quantum physics and figure out how they tick. These initiatives are digging into the essence of
quantum links, how we gauge them, and the deep-set rules that command their behavior. Delving into these various ideas, we might unlock a deeper grasp of the core concepts that quantum mechanics
hinges on.
Quantum mechanics reveals that at its core, our cosmos doesn’t stick to the classic rules of space and time, showing us a world where things can be mysteriously linked in ways we wouldn’t expect.
This turns the idea of a universe with clear-cut, independent elements on its head, hinting at an underlying complexity where everything might be mysteriously linked.
As we uncover the deep ties that bind particles across vast distances, it’s clear that our quest in physics will pivot to grasping this profound interconnectedness hidden within the quantum fabric.
As we dive deeper into the quantum web that stitches everything together, expect a wave of fresh ideas and blueprints that paint a more intricate picture of how every piece of the cosmos is linked.
It’s crucial to grasp that this concept doesn’t suggest our existence or the cosmos itself is a mere illusion. It simply means that our traditional ideas of locality and separateness do not fully
explain the nature of the universe at a fundamental level. This fresh insight could flip our view of reality on its head and pave the way for significant leaps in how we grasp the cosmos.
Enjoyed reading? You may also like: | {"url":"https://discoveringcosmos.com/the-universe-is-not-locally-real/","timestamp":"2024-11-03T00:53:42Z","content_type":"text/html","content_length":"201167","record_id":"<urn:uuid:4c1a7f32-d87e-4262-9c32-c3ff47fd5b51>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00506.warc.gz"} |
X68000 development, a chronicle of quarter arsed failure
So, the quarter assed master of laziness has decided to dabble a bit in X68000 game coding while his arse is on that sepulchral couch anyway.
Knowing myself I'll run out of that magical steam of enthusiasm in a few short weeks and abandon yet another pile of quarter completed failure of potential in the long chain that I call life.
But until then I'll prolly reverse engineer some arcane knowledge out of this handsome tall monolith called the X68000 and perhaps someone else will one day find the discoveries useful, so they will
be documented here.
For the very earliest discoveries look at my posts in this
For the record most of my knowledge stems from the following sites:
The rest is by using the debug features of the XM6 Pro emulator in addition to good old fashioned intuition. | {"url":"https://nfggames.com/forum2/index.php?topic=5115.0","timestamp":"2024-11-12T02:11:54Z","content_type":"text/html","content_length":"187879","record_id":"<urn:uuid:0417612a-8674-4385-ae10-5d7a49e07f52>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00071.warc.gz"} |
Optimization in Graphics
No Wikipedia
3 courses cover this concept
Carnegie Mellon University
This is an intensive course on computer graphics, covering a variety of topics such as rendering, animation, and imaging. It requires previous knowledge in vector calculus, linear algebra, and C/C++
programming. Concepts include ray tracing, radiometry, and geometric optics, among others.
Carnegie Mellon University
Similar to Course ID 29, this course provides a comprehensive introduction to computer graphics. It also demands a strong mathematical and programming background. The topics covered include
rasterization, geometric transformations, and Monte Carlo ray tracing.
CSCI 2240 is a comprehensive exploration of 3D graphics, diving into rendering, geometry processing, simulation, and optimization. Expect a mathematically intensive approach to topics such as light
transport physics, 3D triangle mesh algorithms, and 3D shape optimization. Culminating in an open-ended project, students will be equipped to undertake graphics research and delve into recent
research papers. | {"url":"https://cogak.com/concept/1467","timestamp":"2024-11-09T22:10:26Z","content_type":"text/html","content_length":"110303","record_id":"<urn:uuid:ec7e9936-b6d1-4254-bf5b-ed625b578825>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00618.warc.gz"} |
KenKen and Recursive Backtracking
For this assignment, you are to design and implement a Java program for solving KenKen puzzles. If you are not familiar with KenKen, it is played on an NxN puzzle grid in which the numbers 1 through
N are placed.
To complete a puzzle, the player must fill in the grid such that the numbers 1 through N appear in every row and column. Furthermore, sets of outlined squares (called cages) have mathematical
constraints. For example, the top left square in the above puzzle is in a cage with the constraint "16x," which means that the three numbers in the cage have a product of 16 (i.e., 4*2*2 or 4*4*1).
For a person, solving a KenKen puzzle requires complex and careful logical reasoning. However, it turns out that even the most challenging KenKen puzzles can be easily solved by a computer using
recursive backtracking. In pseudocode:
For each spot in the grid
place a number in it
if a conflict occurs (i.e., a duplicate in any row/column or a violated constraint),
then remove the number and backtrack to try the next number
if there is no conflict, attempt to (recursively) fill in the rest of the grid
if you are unable to fill the rest of the grid (i.e., a future dead-end
is reached), then remove the number and backtrack to try the next number
Your program should prompt the user for a file that contains the specifications for a puzzle. The first line of the puzzle file should specify the size of the puzzle (you may assume a maximum size of
9x9). Each subsequent line should identify a cage, with the constraint first (e.g., "16*") followed by the coordinates in the cage. For example, the above puzzle would be represented as:
16* (0,0) (0,1) (1,1)
7+ (0,2) (0,3) (1,2)
2- (1,0) (2,0)
4 (1,3)
12* (2,1) (3,0) (3,1)
2/ (2,2) (2,3)
2/ (3,2) (3,3)
Note that we will use the '*' and '/' characters to represent multiplication and division. You may assume that the formula and the coordinates on a line are separated by whitespace, but that no
spaces appear within a formula or coordinate. Your program should display the solved puzzle if a solution exists, or display a message if no solution is possible. The solution to the above puzzle
would be: | {"url":"http://nifty.stanford.edu/2018/reed-nifty-remixes/Boggle/KenKen.html","timestamp":"2024-11-14T20:32:32Z","content_type":"text/html","content_length":"3074","record_id":"<urn:uuid:a176732f-37f3-439d-9255-6fce95b1e158>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00741.warc.gz"} |
CPLEX Error: Q not positive semidefinite
Hello GAMS World community,
Today I came across a peculiar thing (at least for me) and I hope that
you can clarify it for me. I searched the archive already for an
answer but couldn’t find anything fitting.
I formulated an easy QCP in GAMS and tried to solve it via CPLEX
obtaining the error
*** CPLEX Error 5002: Q in %s is not positive semi-definite
But as far as I see that simply isn’t true. Here is the whole code:
$TITLE example
i /i1*i10/
a(i) / i1 26
i2 26
i3 26.2
i4 26.2
i5 26.6
i6 27
i7 27.5
i8 30
i9 31
i10 31.4 /
c(i) / i1*i10 50/
d = sum(i,a(i))/10;
v1.lo(i) = 0.01;
v1.l(i) = 1;
eq_con1(i) …
v1(i) =L= c(i);
eq_con2 …
sum(i,v1(i)) =G= d;
eq_ob1 …
ob1 =E= sum(i, a(i)*v1(i)*v1(i));
MODEL model1 / eq_con1, eq_con2, eq_ob1 /;
OPTION QCP = CPLEX;
SOLVE model1 MAXIMIZING ob1 USING QCP;
The objective ob1 is given as a positive definite quadratic form, the
constraints are linear. So how does a non-definiteness arise?
I would be very thankful for any answers.
The standard model is to MINIMIZE a positive (semi)definite quadratic form.
Since you are MAXIMIZING the interface changes the sign of the objective and
the quadratic for is no longer positive definite.
Arne Stolbjerg Drud
ARKI Consulting & Development A/S
Bagsvaerdvej 246A, DK-2880 Bagsvaerd, Denmark
Phone: (+45) 44 49 03 23, Fax: (+45) 44 49 03 33, email: adrud@arki.dk
-----Original Message-----
From: gamsworld@googlegroups.com [mailto:gamsworld@googlegroups.com] On
Behalf Of nomaris
Sent: Monday, April 16, 2012 11:09 AM
To: gamsworld
Subject: CPLEX Error: Q not positive semidefinite
Hello GAMS World community,
Today I came across a peculiar thing (at least for me) and I hope that you
can clarify it for me. I searched the archive already for an answer but
couldn’t find anything fitting.
I formulated an easy QCP in GAMS and tried to solve it via CPLEX obtaining
the error
*** CPLEX Error 5002: Q in %s is not positive semi-definite
But as far as I see that simply isn’t true. Here is the whole code:
$TITLE example
i /i1*i10/
a(i) / i1 26
i2 26
i3 26.2
i4 26.2
i5 26.6
i6 27
i7 27.5
i8 30
i9 31
i10 31.4 /
c(i) / i1*i10 50/
d = sum(i,a(i))/10;
v1.lo(i) = 0.01;
v1.l(i) = 1;
eq_con1(i) …
v1(i) =L= c(i);
eq_con2 …
sum(i,v1(i)) =G= d;
eq_ob1 …
ob1 =E= sum(i, a(i)*v1(i)*v1(i));
MODEL model1 / eq_con1, eq_con2, eq_ob1 /;
OPTION QCP = CPLEX;
SOLVE model1 MAXIMIZING ob1 USING QCP;
The objective ob1 is given as a positive definite quadratic form, the
constraints are linear. So how does a non-definiteness arise?
I would be very thankful for any answers.
• Nomaris
“gamsworld” group.
To post to this group, send email to gamsworld@googlegroups.com.
To unsubscribe from this group, send email to
For more options, visit this group at | {"url":"https://forum.gams.com/t/cplex-error-q-not-positive-semidefinite/549","timestamp":"2024-11-11T09:48:37Z","content_type":"text/html","content_length":"19366","record_id":"<urn:uuid:612bb876-8598-43b4-be83-8caf1bf464c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00237.warc.gz"} |
Detailed Course Information
MTH 085 Applied Geometry includes the following: linear, square, and cubic units, dimensional analysis in metric and US customary measures, problem solving, angle measure, properties of pairs of
angles formed by system of parallel, perpendicular, and transversal lines; perimeter and area of polygons and circles; surface area and volume of solid figures such as prisms and pyramids;
similarity, ratio, and proportion, right triangle trigonometry. Oblique triangle trigonometry is an optional topic. Some algebra topics from MTH 075 will be applied. The course will emphasize clear
communication of mathematical results. Application problems are realistic with some data to be collected, analyzed, and discussed in group setting with results submitted in written form.
4.000 Credit hours
40.000 TO 48.000 Lecture hours
Syllabus Available
Levels: Credit
Schedule Types: Lecture
Mathematics Division
Mathematics Department
Course Attributes:
Tuition, Pre-college Level
Must be enrolled in one of the following Levels:
Skills Development
May not be enrolled in one of the following Colleges:
College Now | {"url":"https://crater.lanecc.edu/banp/zwckctlg.p_disp_course_detail?cat_term_in=202510&subj_code_in=MTH&crse_numb_in=085","timestamp":"2024-11-04T02:51:12Z","content_type":"text/html","content_length":"8709","record_id":"<urn:uuid:b680e4bb-6782-4432-b589-9904205c37b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00355.warc.gz"} |
Explicit formulas and determinantal representations for $\eta$-skew-Hermitian solution to a system of quaternion matrix equations
Explicit formulas and determinantal representations for $\eta$-skew-Hermitian solution to a system of quaternion matrix equations
Some necessary and sufficient conditions for the existence of the $\eta$-skew-Hermitian solution quaternion matrix equations the system of matrix equations with $\eta$-skew-Hermicity,
are established in this paper by using rank equalities of the coefficient matrices.
The general solutions to the system and its special cases are provided when they are consistent. Within the framework of the theory of noncommutative row-column determinants, we also give
determinantal representation formulas of finding their exact solutions that are analogs of Cramer's rule.
A numerical example
is also given to demonstrate the main results.
• There are currently no refbacks. | {"url":"https://journal.pmf.ni.ac.rs/filomat/index.php/filomat/article/view/11726/0","timestamp":"2024-11-10T15:54:21Z","content_type":"application/xhtml+xml","content_length":"17378","record_id":"<urn:uuid:1e684a90-538a-40b7-b83e-7bf04d19dbf9>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00745.warc.gz"} |
Circuit Theory/State Variables - Wikibooks, open books for an open world
New techniques are needed to solve 2nd order and higher circuits:
• Symbolic solutions are so complicated, merely comparing answers is an exercise
• Analytical solution techniques are more fragmented
• The relationship between constants, initial conditions and circuit layout are becoming complicated
A change in strategy is needed if circuit analysis is going to:
• Move beyond the ideal
• Consider more complicated circuits
• Understand limitations/approximations of circuit modeling software
The solution is "State Variables." After a state variable analysis, the exercise of creating symbolic solution can be simplified by eliminating terms that don't have a significant impact on the
The State Space approach to circuit theory abandons the symbolic/analytical approach to circuit analysis. The state variable model involves describing a circuit in matrix form and then solving it
numerically using tools like series expansions, Simpson's rule, and Cramer's Rule. This was the original starting point of matlab.
"State" means "condition" or "status" of the energy storage elements of a circuit. Since resistors don't change (ideally) and don't store energy, they don't change the circuit's state. A state is a
snap shot in time of the currents and voltages. The goal of "State Space" analysis is to create a notation that describes all possible states.
The notation used to describe all states should be as simple as possible. Instead of trying to find a complex, high order differential equation, go back to something like Kirchhoff analysis and just
write terminal equations.
State variables are voltages across capacitors and currents through inductors. This means that purely resistive circuit cut sets are collapsed into single resistors that end up in series with an
inductor or parallel to capacitor. Rather than using the symbols v and i to represent these unknowns, they are both called x. Kirchhoff's equations are used instead of node or loop equations.
Terminal equations are substituted into the Kirchhoff's equations so that remaining resistor's currents and voltages are shared with inductors and capacitors.
This State Space Model describes the inputs (step function μ(t), initial conditions X(0)), the output Y(t) and A,B,C and D. A-B-C-D are transfer functions that combine as follows:
${\displaystyle {\frac {\mathbb {Y} }{\mathbb {\mu } }}=B(s)\left({\frac {1}{s-A(s)}}\right)C(s)+D(s)}$
A control systems class teaches how to build these block diagrams from a desired transfer function. Integrals "remember" or accumulate a history of past states. Derivatives predict future state and
both, in addition to the current state can be separately scaled. "A" represents feedback. "D" represents feed forward. There is lots to learn.
Don't try to figure out how a negative sign appeared in the denominator and where the addition term came from. How does the above help us predict voltages and currents in a circuit? Let's start by
defining terms and do some examples:
• A is a square matrix representing the circuit components (from Kirchhoff's equations.
• B is a column matrix or vector representing how the source impacts the circuit (from Kirchhoff's equations).
• C is a row matrix or vector representing how the output is computed (could be voltage or current)
• D is a single number that indicates a multiplier of the source .. is usually zero unless the source is directly connected to the output through a resistor.
A and B describe the circuit in general. If X is a column matrix (vector) representing all unknown voltages and currents, then:
${\displaystyle {\boldsymbol {\dot {X}}}={\boldsymbol {A}}{\boldsymbol {X}}+{\boldsymbol {B}}{\boldsymbol {\mu }}}$
At this point, X is known and represents a column of functions of time. The output can be derived from the known X's and the original step function μ using C and D:
${\displaystyle y={\boldsymbol {C}}{\boldsymbol {X}}+D*{\boldsymbol {\mu }}}$
screen shot of matlab with simulink toolbox showing how to get to the state-space block for wikibook circuit analysis
This would not be a step forward without tools such as MatLab. These are the relevant MatLab control system toolbox commands:
• step(A,B,C,D) assumes the initial conditions are zero
• initial(A,B,C,D,X(0)) just like step but takes into account the initial conditions X(0)
In addition, there is a simulink block called "State Space" that can be used the same way. | {"url":"https://en.m.wikibooks.org/wiki/Circuit_Theory/State_Variables","timestamp":"2024-11-08T05:30:56Z","content_type":"text/html","content_length":"38007","record_id":"<urn:uuid:44060c2b-912e-4d3c-b309-497d1212da61>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00131.warc.gz"} |
How to normalize vectors to unit norm in Python - kawahara.ca
How to normalize vectors to unit norm in Python
There are so many ways to normalize vectors… A common preprocessing step in machine learning is to normalize a vector before passing the vector into some machine learning algorithm e.g., before
training a support vector machine (SVM).
One way to normalize the vector is to apply some normalization to scale the vector to have a length of 1 i.e., a unit norm. There are different ways to define “length” such as as l1 or
l2-normalization. If you use l2-normalization, “unit norm” essentially means that if we squared each element in the vector, and summed them, it would equal 1.
(note this normalization is also often referred to as, unit norm or a vector of length 1 or a unit vector).
So given a matrix X, where the rows represent samples and the columns represent features of the sample, you can apply l2-normalization to normalize each row to a unit norm. This can be done easily in
Python using sklearn.
Here’s how to l2-normalize vectors to a unit vector in Python
import numpy as np
from sklearn import preprocessing
# 2 samples, with 3 dimensions.
# The 2 rows indicate 2 samples.
# The 3 columns indicate 3 features for each sample.
X = np.asarray([[-1,0,1],
[0,1,2]], dtype=np.float) # Float is needed.
# Before-normalization.
# Output,
# [[-1. 0. 1.]
# [ 0. 1. 2.]]
# l2-normalize the samples (rows).
X_normalized = preprocessing.normalize(X, norm='l2')
# After normalization.
# Output,
# [[-0.70710678 0. 0.70710678]
# [ 0. 0.4472136 0.89442719]]
Now what did this do?
It normalized each sample (row) in the X matrix so that the squared elements sum to 1.
We can check that this is the case:
# Square all the elements/features.
X_squared = X_normalized ** 2
# Output,
# [[ 0.5 0. 0.5]
# [ 0. 0.2 0.8]]
# Sum over the rows.
X_sum_squared = np.sum(X_squared, axis=1)
# Output,
# [ 1. 1.]
# Yay! Each row sums to 1 after being normalized.
As we see, if we square each element, and then sum along the rows, we get the expected value of “1” for each row.
How to l1-normalize vectors to a unit vector in Python
Now you might ask yourself, well that worked for L2 normalization. But what about L1 normalization?
In L2 normalization we normalize each sample (row) so the squared elements sum to 1. While in L1 normalization we normalize each sample (row) so the absolute value of each element sums to 1.
Let’s do another example for L1 normalization (where X is the same as above)!
X_normalized_l1 = preprocessing.normalize(X, norm='l1')
# [[-0.5 0. 0.5]
# [ 0. 0.3 0.67]]
Okay looks promising! Let’s do a quick sanity check.
# Absolute value of all elements/features.
X_abs = np.abs(X_normalized_l1)
# [[0.5 0. 0.5]
# [0 0.3 0.67]]
# Sum over the rows.
X_sum_abs = np.sum(X_abs, axis=1)
# Output,
# [ 1. 1.]
# Yay! Each row sums to 1 after being normalized.
We can now see that taking the absolute value of each element, and then summing across each row, gives the expected value of “1” for each row.
The full code for this example is here.
More reading and references:
Official Python documentation
Official Python example
4 thoughts on “How to normalize vectors to unit norm in Python”
1. Can you please also explain the L1 calculation. I am a 75 year old guy learning AI just for fun and to be able to explain it to my grand daughters. When I see the math formula of L2 I could not
make any sense of it but your example is crystal clear -and I thought is that all- why the heck they always come up with these complex formala’s instead of a simple example. Thank you for that.
1. Dear Hans van der Waal, I’m glad to hear that you found this helpful! I also have a hard time linking math equations to the often simple concepts. So these simple examples help clarify the
ideas for me too.
I just added a section with an example for L1 normalization. Hope it helps!
2. Just wondering! why do we need to convert vectors to unit norm in ML? what is the reason behind this? Also, I was looking at an example of preprocessing in stock movement data-set and the author
used normalizer(norm=’l2′). Any particular reason behind this? Does it have anything to do with the sparsity of the data? Sorry for too many questions.
1. Thanks for your questions Saurabh!
> why do we need to convert vectors to unit norm in ML?
We don’t have to. For some machine learning approaches (e.g., random forests), this may not be needed. The intuition for normalizing the vectors is that elements within the vector that have
large magnitudes may not be more important, so normalizing them puts all elements roughly in the same scale.
> the author used normalizer(norm=’l2′). Any particular reason behind this? Does it have anything to do with the sparsity of the data?
Was this normalization put on the trainable weights during the training phase? L2 normalization penalizes weights that have a large magnitude. Whereas L1 encourages weights to be sparse
(i.e., sets weights to be 0).
You can also preprocess the data using L2, which also penalizes large elements within the vector.
Hope that helps!
Questions/comments? If you just want to say thanks, consider sharing this article or following me on Twitter! | {"url":"http://kawahara.ca/how-to-normalize-vectors-to-unit-norm-in-python/","timestamp":"2024-11-05T00:14:53Z","content_type":"text/html","content_length":"320761","record_id":"<urn:uuid:068b397d-cc46-4afd-85eb-6af6300d5a06>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00475.warc.gz"} |
Busy beaver
Jump to navigation Jump to search
This article has multiple issues.
Please help
improve it
or discuss these issues on the
talk page
(Learn how and when to remove these template messages)
This article
may be too technical for most readers to understand
. Please
help improve it
make it understandable to non-experts
, without removing the technical details.
(October 2016) (Learn how and when to remove this template message)
This article includes a
list of references
, but
its sources remain unclear
because it has
insufficient inline citations
(February 2013) (Learn how and when to remove this template message)
(Learn how and when to remove this template message)
The busy beaver game consists of designing a halting, binary-alphabet Turing machine which writes the most 1s on the tape, using only a limited set of states. The rules for the 2-state game are as
1. the machine must have two states in addition to the halting state, and
2. the tape starts with 0s only.
As the player, you should conceive each state aiming for the maximum output of 1s on the tape while making sure the machine will halt eventually.
The nth busy beaver, BB-n or simply "busy beaver" is the Turing machine that wins the n-state Busy Beaver Game. That is, it attains the maximum number of 1s among all other possible n-state competing
Turing Machines. The BB-2 Turing machine, for instance, achieves four 1s in six steps.
The Busy Beaver Game has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable
The game[edit]
The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications:
• The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1,
2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.)
• The machine uses a single two-way infinite (or unbounded) tape.
• The tape alphabet is {0, 1}, with 0 serving as the blank symbol.
• The machine's transition function takes two inputs:
1. the current non-Halt state,
2. the symbol in the current tape cell,
and produces three outputs:
1. a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten),
2. a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and
3. a state to transition into (which may be the Halt state).
There are thus (4n + 4)^2n n-state Turing machines meeting this definition.
The transition function may be seen as a finite table of 5-tuples, each of the form
(current state, current symbol, symbol to write, direction of shift, next state).
"Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is
entered (if ever). If, and only if, the machine eventually halts, then the number of 1s finally remaining on the tape is called the machine's score.
The n-state busy beaver (BB-n) game is a contest to find such an n-state Turing machine having the largest possible score — the largest number of 1s on its tape after halting. A machine that attains
the largest possible score among all n-state Turing machines is called an n-state busy beaver, and a machine whose score is merely the highest so far attained (perhaps not the largest possible) is
called a champion n-state machine.
Radó required that each machine entered in the contest be accompanied by a statement of the exact number of steps it takes to reach the Halt state, thus allowing the score of each entry to be
verified (in principle) by running the machine for the stated number of steps. (If entries were to consist only of machine descriptions, then the problem of verifying every potential entry is
undecidable, because it is equivalent to the well-known halting problem — there would be no effective way to decide whether an arbitrary machine eventually halts.)
Related functions[edit]
The busy beaver function Σ[edit]
The busy beaver function quantifies the maximum score attainable by a Busy Beaver on a given measure. This is a noncomputable function. Also, a busy beaver function can be shown to grow faster
asymptotically than does any computable function.
The busy beaver function, Σ: N → N, is defined such that Σ(n) is the maximum attainable score (the maximum number of 1s finally on the tape) among all halting 2-symbol n-state Turing machines of the
above-described type, when started on a blank tape.
It is clear that Σ is a well-defined function: for every n, there are at most finitely many n-state Turing machines as above, up to isomorphism, hence at most finitely many possible running times.
This infinite sequence Σ is the busy beaver function, and any n-state 2-symbol Turing machine M for which σ(M) = Σ(n) (i.e., which attains the maximum score) is called a busy beaver. Note that for
each n, there exist at least four n-state busy beavers (because, given any n-state busy beaver, another is obtained by merely changing the shift direction in a halting transition, another by shifting
all direction changes to their opposite (with neutrals kept neutral), and the final by shifting the halt direction of the all-swapped busy beaver).
Radó's 1962 paper proved that if f: ℕ → ℕ is any computable function, then Σ(n) > f(n) for all sufficiently large n, and hence that Σ is not a computable function.
Moreover, this implies that it is undecidable by a general algorithm whether an arbitrary Turing machine is a busy beaver. (Such an algorithm cannot exist, because its existence would allow Σ to be
computed, which is a proven impossibility. In particular, such an algorithm could be used to construct another algorithm that would compute Σ as follows: for any given n, each of the finitely many n
-state 2-symbol Turing machines would be tested until an n-state busy beaver is found; this busy beaver machine would then be simulated to determine its score, which is by definition Σ(n).)
Even though Σ(n) is an uncomputable function, there are some small n for which it is possible to obtain its values and prove that they are correct. It is not hard to show that Σ(0) = 0, Σ(1) = 1, Σ
(2) = 4, and with progressively more difficulty it can be shown that Σ(3) = 6 and Σ(4) = 13 (sequence A028444 in the OEIS). Σ(n) has not yet been determined for any instance of n > 4, although lower
bounds have been established (see the Known values section below).
In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which Σ(n) is unknowable. To do so they constructed a 7910-state^[1] Turing machine whose
behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (Stationary Ramsey Property).^[2]^[3]
This was later reduced to 1919 states.^[4]^[5]
Complexity and unprovability of Σ[edit]
A variant of Kolmogorov complexity is defined as follows [cf. Boolos, Burgess & Jeffrey, 2007]: The complexity of a number n is the smallest number of states needed for a BB-class Turing machine that
halts with a single block of n consecutive 1s on an initially blank tape. The corresponding variant of Chaitin's incompleteness theorem states that, in the context of a given axiomatic system for the
natural numbers, there exists a number k such that no specific number can be proved to have complexity greater than k, and hence that no specific upper bound can be proven for Σ(k) (the latter is
because "the complexity of n is greater than k" would be proved if "n > Σ(k)" were proved). As mentioned in the cited reference, for any axiomatic system of "ordinary mathematics" the least value k
for which this is true is far less than 10↑↑10; consequently, in the context of ordinary mathematics, neither the value nor any upper-bound of Σ(10 ↑↑ 10) can be proven. (Gödel's first incompleteness
theorem is illustrated by this result: in an axiomatic system of ordinary mathematics, there is a true-but-unprovable sentence of the form "Σ(10 ↑↑ 10) = n", and there are infinitely many
true-but-unprovable sentences of the form "Σ(10 ↑↑ 10) < n".)
Maximum shifts function S[edit]
In addition to the function Σ, Radó [1962] introduced another extreme function for the BB-class of Turing machines, the maximum shifts function, S, defined as follows:
• s(M) = the number of shifts M makes before halting, for any M in E[n],
• S(n) = max{ s(M) | M ∈ E[n] } = the largest number of shifts made by any halting n-state 2-symbol Turing machine.
Because these Turing machines are required to have a shift in each and every transition or "step" (including any transition to a Halt state), the max-shifts function is at the same time a max-steps
Radó showed that S is noncomputable for the same reason that Σ is noncomputable — it grows faster than any computable function. He proved this simply by noting that for each n, S(n) ≥ Σ(n). Each
shift may write a 0 or a 1 on the tape, while Σ counts a subset of the shifts that wrote a 1, namely the ones that hadn't been overwritten by the time the Turing machine halted; consequently, S grows
at least as fast as Σ, which had already been proved to grow faster than any computable function.
The following connection between Σ and S was used by Lin & Radó [Computer Studies of Turing Machine Problems, 1965] to prove that Σ(3) = 6: For a given n, if S(n) is known then all n-state Turing
machines can (in principle) be run for up to S(n) steps, at which point any machine that hasn't yet halted will never halt. At that point, by observing which machines have halted with the most 1s on
the tape (i.e., the busy beavers), one obtains from their tapes the value of Σ(n). The approach used by Lin & Radó for the case of n = 3 was to conjecture that S(3) = 21, then to simulate all the
essentially different 3-state machines for up to 21 steps. By analyzing the behavior of the machines that had not halted within 21 steps, they succeeded in showing that none of those machines would
ever halt, thus proving the conjecture that S(3) = 21, and determining that Σ(3) = 6 by the procedure just described.
Inequalities relating Σ and S include the following (from [Ben-Amram, et al., 1996]), which are valid for all n ≥ 1:
{\displaystyle {\begin{aligned}S(n)&\geq \Sigma (n)\\S(n)&\leq (2n-1)\Sigma (3n+3)\\S(n)&<\Sigma (3n+6);\end{aligned}}}
and an asymptotically improved bound (from [Ben-Amram, Petersen, 2002]): there exists a constant c, such that for all n ≥ 2,
${\displaystyle S(n)\leq \Sigma \left(n+\left\lceil {\frac {8n}{\log _{2}n}}\right\rceil +c\right).}$
There is tendency S (n) to be near square of Σ (n), and in fact many machines give S (n) less than Σ ^2 (n).
Known values for Σ and S[edit]
The function values for Σ(n) and S(n) are only known exactly for n < 5.^[3]
The current 5-state busy beaver champion produces 4098 1s, using 47176870 steps (discovered by Heiner Marxen and Jürgen Buntrock in 1989), but there remain 18/19 (possibly under 10, see below)
machines with non-regular behavior which are believed to never halt, but which have not yet been proven to run infinitely. Skelet lists 42/43 unproven machines, but 24 are already proven. The
remaining machines have been simulated to 81.8 billion steps, but none halted. Daniel Briggs also proved some machines. ^[6] Other source says it left 98 machines, but there is an analysis of
holdouts ^[7]. So, it is very likely (and almost proven) that Σ(5) = 4098 and S(5) = 47176870, but this remains unproven, and it is unknown if there are any holdouts left (at 2018). At the moment the
record 6-state champion produces over 3.515×10^18267 1s (exactly (25*4^30341+23)/9), using over 7.412×10^36534 steps (found by Pavel Kropitz in 2010). As noted above, these are 2-symbol Turing
A simple extension of the 6-state machine leads to a 7-state machine which will write more than 10^10^10^10^18705353 1s to the tape, but there are undoubtedly much busier 7-state machines. However,
other busy beaver hunters have different sets of machines.
Milton Green, in his 1964 paper "A Lower Bound on Rado's Sigma Function for Binary Turing Machines", constructed a set of Turing machines demonstrating that
${\displaystyle \Sigma (2k)>3\uparrow ^{k-2}3>A(k-2,k-2)\qquad {\mbox{for }}k\geq 2,}$
where ↑ is Knuth up-arrow notation and A is Ackermann's function.
${\displaystyle \Sigma (10)>3\uparrow \uparrow \uparrow 3=3\uparrow \uparrow 3^{3^{3}}=3^{3^{3^{.^{.^{.^{3}}}}}}}$
(with 3^3^3 = 7625597484987 terms in the exponential tower), and
${\displaystyle \Sigma (12)>3\uparrow \uparrow \uparrow \uparrow 3=g_{1},}$
where the number g[1] is the enormous starting value in the sequence that defines Graham's number.
In 1964 Milton Green developed a lower bound for the Busy Beaver function that was published in the proceedings of the 1964 IEEE symposium on switching circuit theory and logical design. Heiner
Marxen and Juergen Buntrock described it as "a non-trivial (not primitive recursive) lower bound". This lower bound can be calculated but is too complex to state as a single expression in terms of n.
When n=8 the method gives Σ(8) ≥ 3 × (7 × 3^92 - 1) / 2 ≈ 8.248×10^44.
It can be derived from current lower bounds that:
${\displaystyle \Sigma (2k+1)>3\uparrow ^{k-2}31>A(k-2,k-2)\qquad {\mbox{for }}k\geq 2,}$
In contrast, the best current (as of 2018) lower bound on Σ(6) is 10^18267, which is greater than the lower bound given by Green's formula, 3^3 = 27 (which is tiny in comparison). In fact, it is much
greater than the lower bound: 3 ↑↑ 3 = 3^3^3 = 7625597484987, which is Green's first lower bound for Σ(8), and also much greater than the second lower bound: 3*(7*3^92-1)/2.
Σ(7) is by the same way much, much greater than current common lower bound 3^31 (nearly 618 trillions), so the second lower bound is also very, very weak.
Proof for uncomputability of S(n) and Σ(n)[edit]
Suppose that S(n) is a computable function and let EvalS denote a TM, evaluating S(n). Given a tape with n 1s it will produce S(n) 1s on the tape and then halt. Let Clean denote a Turing machine
cleaning the sequence of 1s initially written on the tape. Let Double denote a Turing machine evaluating function n + n. Given a tape with n 1s it will produce 2n 1s on the tape and then halt. Let us
create the composition Double | EvalS | Clean and let n[0] be the number of states of this machine. Let Create_n[0] denote a Turing machine creating n[0] 1s on an initially blank tape. This machine
may be constructed in a trivial manner to have n[0] states (the state i writes 1, moves the head right and switches to state i + 1, except the state n[0], which halts). Let N denote the sum n[0] + n
Let BadS denote the composition Create_n[0] | Double | EvalS | Clean. Notice that this machine has N states. Starting with an initially blank tape it first creates a sequence of n[0] 1s and then
doubles it, producing a sequence of N 1s. Then BadS will produce S(N) 1s on tape, and at last it will clear all 1s and then halt. But the phase of cleaning will continue at least S(N) steps, so the
time of working of BadS is strictly greater than S(N), which contradicts to the definition of the function S(n).
The uncomputability of Σ(n) may be proved in a similar way. In the above proof, one must exchange the machine EvalS with EvalΣ and Clean with Increment — a simple TM, searching for a first 0 on the
tape and replacing it with 1.
The uncomputability of S(n) can also be established by reference to the blank tape halting problem. The blank tape halting problem is the problem of deciding for any Turing machine whether or not it
will halt when started on an empty tape. The blank tape halting problem is equivalent to the standard halting problem and so it is also uncomputable. If S(n) was computable, then we could solve the
blank tape halting problem simply by running any given Turing machine with n states for S(n) steps; if it has still not halted, it never will. So, since the blank tape halting problem is not
computable, it follows that S(n) must likewise be uncomputable.
For any model of computation there exist simple analogs of the busy beaver. For example, the generalization to Turing machines with n states and m symbols defines the following generalized busy
beaver functions:
1. Σ(n, m): the largest number of non-zeros printable by an n-state, m-symbol machine started on an initially blank tape before halting, and
2. S(n, m): the largest number of steps taken by an n-state, m-symbol machine started on an initially blank tape before halting.
For example, the longest-running 3-state 3-symbol machine found so far runs 119112334170342540 steps before halting. The longest running 6-state, 2-symbol machine which has the additional property of
reversing the tape value at each step produces 6147 1s after 47339970 steps. So S[RTM](6) ≥ 47339970 and Σ[RTM](6) ≥ 6147.
It is possible to further generalize the busy beaver function by extending to more than one dimension.
Likewise we could define an analog to the Σ function for register machines as the largest number which can be present in any register on halting, for a given number of instructions.
Exact values and lower bounds[edit]
The following table lists the exact values and some known lower bounds for S(n, m) and Σ(n, m) for the generalized busy beaver problems. Note: entries listed as "???" are bounded from below by the
maximum of all entries to left and above. These machines either haven't been investigated or were subsequently surpassed by a smaller machine.
The Turing machines that achieve these values are available on both Heiner Marxen's and Pascal Michel's webpages. Each of these websites also contains some analysis of the Turing machines and
references to the proofs of the exact values.
Values of S(n,m)
2-state 3-state 4-state 5-state 6-state 7-state
2-symbol 6 21 107 47176870? > 7.4×10^36534 > 10^2*10^10^10^18705353
3-symbol 38 ≥ 119112334170342540 > 1.0×10^14072 ??? ??? ???
4-symbol ≥ 3932964 > 5.2×10^13036 ??? ??? ??? ???
5-symbol > 1.9×10^704 ??? ??? ??? ??? ???
6-symbol > 2.4×10^9866 ??? ??? ??? ??? ???
Values of Σ(n,m)
2-state 3-state 4-state 5-state 6-state 7-state
2-symbol 4 6 13 4098? > 3.5×10^18267 > 10^10^10^10^18705353
3-symbol 9 ≥ 374676383 > 1.3×10^7036 ??? ??? ???
4-symbol ≥ 2050 > 3.7×10^6518 ??? ??? ??? ???
5-symbol > 1.7×10^352 ??? ??? ??? ??? ???
6-symbol > 1.9×10^4933 ??? ??? ??? ??? ???
In addition to posing a rather challenging mathematical game, the busy beaver functions offer an entirely new approach to solving pure mathematics problems. Many open problems in mathematics could in
theory, but not in practice, be solved in a systematic way given the value of S(n) for a sufficiently large n.^[8]
Consider any conjecture that could be disproven via a counterexample among a countable number of cases (e.g. Goldbach's conjecture). Write a computer program that sequentially tests this conjecture
for increasing values. In the case of Goldbach's conjecture, we would consider every even number ≥ 4 sequentially and test whether or not it is the sum of two prime numbers. Suppose this program is
simulated on an n-state Turing machine. If it finds a counterexample (an even number ≥ 4 that is not the sum of 2 primes in our example), it halts and notifies us. However, if the conjecture is true,
then our program will never halt. (This program halts only if it finds a counterexample.)
Now, this program is simulated by an n-state Turing machine, so if we know S(n) we can decide (in a finite amount of time) whether or not it will ever halt by simply running the machine that many
steps. And if, after S(n) steps, the machine does not halt, we know that it never will and thus that there are no counterexamples to the given conjecture (i.e., no even numbers that are not the sum
of two primes). This would prove the conjecture to be true.
Thus specific values (or upper bounds) for S(n) could be used to systematically solve many open problems in mathematics (in theory). However, current results on the busy beaver problem suggest that
this will not be practical for two reasons:
• It is extremely hard to prove values for the busy beaver function (and the max shift function). It has only been proven for extremely small machines with fewer than 5 states, while one would
presumably need at least 20-50 states to make a useful machine. Furthermore, every known exact value of S(n) was proven by enumerating every n-state Turing machine and proving whether or not each
halts. One would have to calculate S(n) by some less direct method for it to actually be useful.
• But even if one did find a better way to calculate S(n), the values of the busy beaver function (and max shift function) get very large, very fast. S(6) > 10^36534 already requires special
pattern-based acceleration to be able to simulate to completion. Likewise, we know that S(10) > Σ(10) > 3 ↑↑↑ 3 is a gigantic number and S(17) > Σ(17) > G, where G is Graham's number - an
enormous number. Thus, even if we knew, say, S(30), it is completely unreasonable to run any machine that number of steps. There is not enough computational capacity in the known part of the
universe to have performed even S(6) operations directly.^[9]
These are tables of rules for the Turing machines that generate Σ(1) and S(1), Σ(2) and S(2), Σ(3) (but not S(3)), Σ(4) and S(4), and the best known lower bound for Σ(5) and S(5), and Σ(6) and S(6).
In the tables, columns represent the current state and rows represent the current symbol read from the tape. Each table entry is a string of three characters, indicating the symbol to write onto the
tape, the direction to move, and the new state (in that order). The halt state is shown as H.
Each machine begins in state A with an infinite tape that contains all 0s. Thus, the initial symbol read from the tape is a 0.
Result key: (starts at the position underlined, halts at the position in bold)
0 1RH
1 (not used)
Result: 0 0 1 0 0 (1 step, one "1" total)
A B
0 1RB 1LA
1 1LB 1RH
Result: 0 0 1 1 1 1 0 0 (6 steps, four "1"s total)
busy beaver
A B C
0 1RB 0RC 1LC
1 1RH 1RB 1LA
Result: 0 0 1 1 1 1 1 1 0 0 (14 steps, six "1"s total).
Unlike the previous machines, this one is a busy beaver only for Σ, but not for S. (S(3) = 21.)
2-symbol busy
A B C D
0 1RB 1LA 1RH 1RD
1 1LB 0LC 1LD 0RA
Result: 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 (107 steps, thirteen "1"s total)
current 5-state,
2-symbol best
(possible busy
A B C D E
0 1RB 1RC 1RD 1LA 1RH
1 1LC 1RB 0LE 1LD 0LA
Result: 4098 "1"s with 8191 "0"s interspersed in 47,176,870 steps.
current 6-state,
2-symbol best
A B C D E F
0 1RB 1RC 1LD 1RE 1LA 1LH
1 1LE 1RF 0RB 0LC 0RD 1RC
Result: ≈3.515 × 10^18267 "1"s in ≈7.412 × 10^36534 steps.
See also[edit]
• Radó, Tibor (May 1962). "On non-computable functions" (PDF). Bell System Technical Journal. 41 (3): 877–884. doi:10.1002/j.1538-7305.1962.tb00480.x.
This is where Radó first defined the busy beaver problem and proved that it was uncomputable and grew faster than any computable function.
• Lin, Shen; Radó, Tibor (April 1965). "Computer Studies of Turing Machine Problems". Journal of the ACM. 12 (2): 196–212. doi:10.1145/321264.321270.
The results of this paper had already appeared in part in Lin's 1963 doctoral dissertation, under Radó's guidance. Lin & Radó prove that Σ(3) = 6 and S(3) = 21 by proving that all 3-state
2-symbol Turing Machines which don't halt within 21 steps will never halt. (Most are proven automatically by a computer program, however 40 are proven by human inspection.)
• Brady, Allen H. (April 1983). "The determination of the value of Rado's noncomputable function Σ(k) for four-state Turing machines". Mathematics of Computation. 40 (162): 647–665. doi:10.1090/
S0025-5718-1983-0689479-6. JSTOR 2007539.
Brady proves that Σ(4) = 13 and S(4) = 107. Brady defines two new categories for non-halting 3-state 2-symbol Turing Machines: Christmas Trees and Counters. He uses a computer program to
prove that all but 27 machines which run over 107 steps are variants of Christmas Trees and Counters which can be proven to run infinitely. The last 27 machines (referred to as holdouts) are
proven by personal inspection by Brady himself not to halt.
• Machlin, Rona; Stout, Quentin F. (June 1990). "The complex behavior of simple machines". Physica D: Nonlinear Phenomena. 42 (1–3): 85–98. Bibcode:1990PhyD...42...85M. doi:10.1016/0167-2789(90)
90068-Z. hdl:2027.42/28528.
Machlin and Stout describe the busy beaver problem and many techniques used for finding busy beavers (which they apply to Turing Machines with 4-states and 2-symbols, thus verifying Brady's
proof). They suggest how to estimate a variant of Chaitin's halting probability (Ω).
• Marxen, Heiner; Buntrock, Jürgen (February 1990). "Attacking the Busy Beaver 5". Bulletin of the EATCS. 40: 247–251. Archived from the original on 2006-10-09. Retrieved 2006-12-03.
Marxen and Buntrock demonstrate that Σ(5) ≥ 4098 and S(5) ≥ 47176870 and describe in detail the method they used to find these machines and prove many others will never halt.
• Green, Milton W. (1964). A Lower Bound on Rado's Sigma Function for Binary Turing Machines. 1964 Proceedings of the Fifth Annual Symposium on Switching Circuit Theory and Logical Design. pp.
91–94. doi:10.1109/SWCT.1964.3.
Green recursively constructs machines for any number of states and provides the recursive function that computes their score (computes σ), thus providing a lower bound for Σ. This function's
growth is comparable to that of Ackermann's function.
• Dewdney, Alexander K. (1984). "A computer trap for the busy beaver, the hardest working Turing machine". Scientific American. 251 (2): 10–17.
Busy beaver programs are described by Alexander Dewdney in Scientific American, August 1984, pages 19–23, also March 1985 p. 23 and April 1985 p. 30.
• Chaitin, Gregory J. (1987). "Computing the Busy Beaver Function" (PDF). In Cover, T. M.; Gopinath, B. Open Problems in Communication and Computation. Springer. pp. 108–112. ISBN 978-0-387-96621-2
• Brady, Allen H. (1995). "The Busy Beaver Game and the Meaning of Life". In Herken, Rolf. The Universal Turing Machine: A Half-Century Survey (2nd ed.). Wien, New York: Springer-Verlag. pp.
237–254. ISBN 3-211-82637-8.
Wherein Brady (of 4-state fame) describes some history of the beast and calls its pursuit "The Busy Beaver Game". He describes other games (e.g. cellular automata and Conway's Game of Life).
Of particular interest is "The Busy Beaver Game in Two Dimensions" (p. 247). With 19 references.
• Booth, Taylor L. (1967). Sequential Machines and Automata Theory. New York: Wiley. ISBN 0-471-08848-X.
Cf Chapter 9, Turing Machines. A difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-recursion with reference to Turing Machines, halting
problem. A reference in Booth attributes busy beaver to Rado. Booth also defines Rado's busy beaver problem in "home problems" 3, 4, 5, 6 of Chapter 9, p. 396. Problem 3 is to "show that the
busy beaver problem is unsolvable... for all values of n."
• Ben-Amram, A. M.; Julstrom, B. A.; Zwick, U. (1996). "A note on Busy Beavers and other creatures". Mathematical Systems Theory. 29: 375–386. doi:10.1007/BF01192693.
Bounds between functions Σ and S.
• Ben-Amram, A. M.; Petersen, H. (2002). "Improved Bounds for Functions Related to Busy Beavers". Theory of Computing Systems. 35: 1–11. doi:10.1007/s00224-001-1052-0.
Improved bounds.
• Lafitte, G.; Papazian, C. (June 2007). "The fabric of small Turing machines". Computation and Logic in the Real World, Proceedings of the Third Conference on Computability in Europe. pp. 219–227.
CiteSeerX 10.1.1.104.3021.
This article contains a complete classification of the 2-state, 3-symbol Turing machines, and thus a proof for the (2, 3) busy beaver: Σ(2, 3) = 9 and S(2, 3) = 38.
• Boolos, George S.; Burgess, John P.; Jeffrey, Richard C. (2007). Computability and Logic (Fifth ed.). Cambridge University Press. ISBN 978-0-521-87752-7.
• Kropitz, Pavel (2010). Problém Busy Beaver (Bachelor thesis) (in Slovak). Charles University in Prague.
This is the description of ideas, of the algorithms and their implementation, with the description of the experiments examining 5-state and 6-state Turing machines by parallel run on 31
4-core computer and finally the best results for 6-state TM.
External links[edit]
Wikiversity hosts a quiz on the busy beaver | {"url":"https://static.hlt.bme.hu/semantics/external/pages/kisz%C3%A1m%C3%ADthat%C3%B3_f%C3%BCggv%C3%A9ny/en.wikipedia.org/wiki/Busy_beaver.html","timestamp":"2024-11-13T15:12:48Z","content_type":"text/html","content_length":"132946","record_id":"<urn:uuid:6138b0dc-fd8d-493f-9b26-10fab986a69d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00390.warc.gz"} |
Comments on Computational Complexity: How much credit should the conjecturer get? Is Conjecturer a word?Here's a more slippery slope. What credit to g...Regarding Serge Lang coming off as a nutcase, the ...Main Conjecture of Iwasawa TheoryThe Poincare Conjecture might not be too recent to...It seems like naming the conjecturer is even worse...I wouldn't say that the conjectures were solve...Wow, Serge Lang comes off as a complete nut-job.Too late, there already have been fierce battles o...
tag:blogger.com,1999:blog-3722233.post1037852391005039424..comments2024-11-13T15:38:29.005-06:00Lance Fortnowhttp://www.blogger.com/profile/
06752030912874378610noreply@blogger.comBlogger8125tag:blogger.com,1999:blog-3722233.post-12355723712558458572009-08-20T10:19:44.988-05:002009-08-20T10:19:44.988-05:00Here's a more slippery slope.
What credit to give to a theorem's Announcer? What I mean is this: someone announces in public "X is true". He/she may even have given a talk outlining his/her proof. It may get
referenced in a paper or two. But the proof never appears (e.g. the announcement was made 10 years ago, nothing happened since).<br /><br />Now you find your own proof. It is a significant piece of
work. Where does the credit go?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-28564924612347781142009-08-16T01:06:39.323-05:002009-08-16T01:06:39.323-05:00Regarding Serge Lang
coming off as a nutcase, the linked article is actually a mild example. He wrote extensively on his disbelief that HIV causes AIDS.Lucahttps://www.blogger.com/profile/
17835412240486594185noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-49101115638069849472009-08-15T14:23:57.559-05:002009-08-15T14:23:57.559-05:00Main Conjecture of Iwasawa
TheoryAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-57013377695355667462009-08-15T13:37:35.499-05:002009-08-15T13:37:35.499-05:00The Poincare Conjecture might not be too recent
to provide evidence. The Poincare Conjecture for higher dimensions was proven decades ago (dimension 4 in the 80's, 5 and higher in the 60's), and those results are still called the Poincare
Conjecture.<br /><br />For example, people just as often say "the Poincare Conjecture for dimensions 5 and higher," etc. rather than Smale's theorem, Freedman's theorem, etc.<br />
<br />On a related note, sometimes conjecturers get <i>too</i> much credit by getting conjectures named after them that they didn't even conjecture. These can be extensions or generalizations of
the original. The Poincare conjecture in higher dimensions is one example, as is the smooth Poincare conjecture,
etc.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-88052628529485728512009-08-14T19:35:29.956-05:002009-08-14T19:35:29.956-05:00It seems like naming the conjecturer is even worse,
because it's quite likely that some obscure paper conjectures a given result. It seems the name should be awarded not only for stating the conjecture, but also realizing its importance (e.g.
Riemann).<br /><br />Also, in some situations, the conjecturer's name does stay attached to the theorem. Mordell's conjecture is still referred to by name, as are the Bieberbach conjecture
and the Weil conjectures. Mertens conjecture is false and is still usually referred to by that name. Other things, like the Kepler and Poincare conjectures, may be too recent to have name changes
yet.<br /><br />I conjecture that being famous for other results, as well as having your conjecture open for a long time, will help your name stay. As such, I think the Riemann Hypothesis will keep
its name.Unknownhttps://www.blogger.com/profile/
06229416831400925212noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-36671350454050816942009-08-14T17:16:13.254-05:002009-08-14T17:16:13.254-05:00I wouldn't say that the conjectures were
solved, I'd say they were proven.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-54876983159037310752009-08-14T15:52:56.203-05:002009-08-14T15:52:56.203-05:00Wow, Serge Lang
comes off as a complete nut-job.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-84928554215782079452009-08-14T15:10:45.133-05:002009-08-14T15:10:45.133-05:00Too late, there already
have been fierce battles over who conjectured what when. For example how a Weil conjecture became the Taniyama-Weil conjecture and then either the Taniyama-Shimura or the Taniyama-Shimura-Weil
conjecture depending on who is writing.<br /><br />Serge Lang tells the story (from an anti-Weil position) <a href="http://www.ams.org/notices/199511/forum.pdf" rel="nofollow">here</a>Lucahttps:// | {"url":"https://blog.computationalcomplexity.org/feeds/1037852391005039424/comments/default","timestamp":"2024-11-14T15:24:53Z","content_type":"application/atom+xml","content_length":"17188","record_id":"<urn:uuid:113af7c7-2570-43e5-8709-6d2c2bdb0354>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00884.warc.gz"} |
How to Teach Fractions KS2: Maths Bootcamp [3]
Want to know how to teach fractions to KS2 Maths pupils? You’ve found the right place!
This post is part of our interventions bootcamp series: designed to support Year 5 and 6 teachers and SATs booster group leaders achieve age related expectations with pupils who need that extra
It’s particularly aimed at those running interventions in school but is also relevant much more widely to your whole class teaching, supporting a mastery approach, to help you ensure each child is
getting appropriate support and challenge.
Each post follows a similar structure:
1. First you’ll diagnose where children are struggling with ‘the nuts and bolts’ of that area of Mathematics from the National Curriculum.
2. Next you’ll track back to the different stages of understanding and examine what the misconception might be in detail.
3. Finally we’ll give you strategies that can be used with whole classes, booster groups or alongside 1 to 1 interventions.
In this post on fractions, we help you solve problems such as:
• They keep multiplying diagonally when adding fractions.
• They just don’t seem to understand what the numerator and denominator are.
• They keep writing their answer as a fraction when they shouldn’t be.
Fraction Lessons Resource Pack
Plug gaps and help conquer common upper KS2 misconceptions in fractions
Download Free Now!
The How to Teach KS2 Maths Interventions Bootcamp series
How to teach fractions KS2
How to teach place value year 5 and year 6
How to teach multiplication KS2
Diagnosis: What are pupils’ key misconceptions with Fractions?
From the below children, find the best fit for those you are looking at. Then use this to guide where in the post to read for support:
Archie has, over time, learnt how to find a fraction of a number through repetition (such as 1/6 of 30). However, he struggles when finding 2/6 of 30. He has had lots of practice of fraction methods
but hasn’t had much exposure to concrete or pictorial resources during his fractions learning. When presented with adding fractions he multiplies the numbers or adds diagonally. He has some concept
of improper fractions but has no idea how to apply this in a problem context Stage: EMERGING.
Sehr can tell you about equal parts, what the denominator and numerator mean and can use this knowledge in the context of finding fractions of a number or shape. She begins to struggle when applying
her understanding to adding and subtracting fractions or converting between mixed and improper fractions. Stage: DEVELOPING.
Holly struggles a little with multiplying and dividing fractions as she gets their methods mixed up with adding and subtracting fractions. She can find fractions of numbers and is beginning to apply
her understanding in problem contexts but then makes errors and picks the wrong method to work out. Sometimes it seems as though she makes a ‘best guess’ at an answer rather than applying a method
she is confident with. Stage: SECURING.
Reece finds it difficult to work out what method or process he should undertake when presented with calculations such as ‘find 1/5 of 20’. He can colour in fractions of a shape such as 3/7 of a shape
with 7 equal parts but struggles when asked to colour 1/7 of a shape with 14 equal parts. Stage: PRE-BOOSTER: Reece’s needs are beyond the scope of a booster group or this post. He needs specifically
targeted 1-to-1 support from a trained professional to catch him up to a point where a booster intervention could be considered. (If you are looking for 1-to-1 support, then read this review I wrote
of Third Space Learning’s 1-to-1 Maths programme.)
For a more personalised diagnosis of pupils’ misconceptions in fractions download the free Fractions, Decimals and Percentages Diagnostic Quiz for years 5 and 6.
Misconceptions and strategies for ‘EMERGING’ in Fractions
For a child like Archie, he needs a concrete foundation built on the use of physical manipulatives and with fractions in particular, pictorial representations. Using a more visual approach, such as
fraction games, will give him a greater understanding of what fractions are, which will in turn give him the knowledge to attempt nearer age-appropriate fraction questions.
Underpinning skills and concepts
What are fractions?
1. Give children a piece of squared paper. Ask the how we can make halves out of the paper. Discuss simple different ways (folded in half vertically, horizontally, diagonally).
2. Discuss how we know this is half (they fit onto each other etc). Push towards making the deduction that the 2 parts are all equal in size.
3. Push children to thinking if there are other ways you could make half with the paper.
4. Repeat but folding into 4 fourths or quarters.
5. Children should realise that the foundation of everything else in fractions is that you have a whole and it is split into equal parts.
6. Shade 1 part of the 4. Ask what fraction is shaded in. Write ¼ and discuss the terms numerator and denominator. As suggested explanation is that the denominator is the ‘name’ of the fraction. IE
how many equal parts there are altogether. The numerator at this stage is whatever is shaded in. A better explanation is the numerator is whatever we are talking about/finding out. This could be
what is shaded, what isn’t shaded or written as well as drawn.
7. To secure understanding, ensure the children say ‘out of’ when reading the vinculum (horizontal line splitting the numerator from denominator), when looking at a written fraction. For example, 5/
7 would be said by the children as “five out of seven”.
8. Give children an example such as 1/6 where 1 equal part is shaded, discussing the denominator and numerator and what they show. Ask them what fraction is unshaded. The following answers show a
lack of understanding that means children need more practical time, experience and examples such as given above:
9. 1/5 (not understanding what the numerator and denominator mean or they are not applying this knowledge when looking at a fraction).
10. 5 or 5/5 (can see that there are 5 equal parts unshaded but are not able to put this in the context of a written fraction)
11. Once the children are able to see that there are 5/6 unshaded, put the 2 fractions alongside each other in both written and pictorial form (1/6 + 5/6). Ask them what fraction is shaded and
unshaded. Children should be readily ably to see that there is 1 part shaded + 5 parts unshaded = 6 parts. They may struggle with the denominator, either saying the total is 6/1 or 6/12. Focus
the children on saying the fraction as directed above: “one out of six are shaded and five out of six are unshaded) and looking at the pictorial fraction. Discuss how many parts we had to start
with (6) and how many we have now (6). Write 6/6. Say the fraction. Ask children what has happened to the numerator (added) and denominator (stayed the same). Ask what this fraction is the same
as (1 or 1 whole one).
12. Try the children them with different fractions, withdrawing the pictorial support until the children can explain and see the fraction shown and the other fraction that makes a whole without
13. Finally, to secure understanding that there must be equal parts, show the children a selection of fractions and shapes. Some of the shapes should be cut into equal parts, others not. Ask the
children to find and shade a shape with (for example) 3/5.
Fractions of numbers
1. Give the children the calculation ¼ of 12. Ask them what this means. Nudge them to saying there are 12 whole ones. Give the children 12 concrete manipulatives each (such as multilink). Conclude
that if we are finding ¼ we will need to have 4 groups with the same amount in each because the denominator means the total amount of equal parts. Discuss that when we are finding a fraction of a
number we will need to share and sharing is one way to divide. Give the children time to share out their 12 objects into 4 groups of equal size. Ask how many cakes are in each group (3). Write
down ¼ of 12 = 3 then ask “Can anyone see anything in that calculation that looks familiar?”. The children should be able to see the multiplication family 3, 4, 12. Discuss this even if they
don’t see it. IE we have found that 3 fits into 12, 4 times. Ask the children if there is a way we can re-write this calculation as a division statement (12 ÷ 4 = 3).
2. Give 1 or 2 more examples but move on quickly to finding more than 1 part (e.g. ¾ of 12).
3. Ask the children to split 12 into 4 equal sized groups again. This time ask what the calculation requires (3 out of 4). Push the children to seeing that they have 4 groups, so if we now need to
find 3 of them we can count the amount in 3 of our 4 groups because we know they are the same size. Children making connections here should be able to multiply 3 (one group) by 3 (3 total groups)
= 9. Use these children to explain how they can use multiplication in this step. Repeat for other fractions such as 3/5 and 4/7. Return to concrete manipulatives where children are not
understanding the concept of find 1 part and multiplying it by the numerator.
When children like Archie can confidently explain what fractions are, what the numerator and denominator show and how to find fractions of shapes and numbers, they are ready to move on to the next
stage of the booster – ‘DEVELOPING’.
Misconceptions and strategies for ‘DEVELOPING’ in Subtraction
Children like Sehr, who make errors when adding and subtracting fractions or converting between mixed and improper fractions, need to visualise the fractions more to eliminate those errors and
develop a deeper understanding they can then apply in problem contexts.
Adding subtracting fractions
It is recommended to relate anything the children do not understand to slices of the same size cake.
1. Give the children addition calculations with the same denominator such as 3/10 + 5/10. Note any children who are adding the denominator, multiplying or going diagonally. If any of these occurs,
return to the content on adding fractions in ‘Emerging’, focusing on use of pictorial representations to ground their understanding. It is important children ‘see’ rather than are just taught to
‘add the numerators, keep the denominator the same’ as otherwise this will continue to lead to errors, particularly when introducing multiplying and dividing fractions. IE discuss how there are 3
parts of the cake eaten already and another 5 parts of the cake are also going to be eaten. So, in that same cake, how many parts have been eaten altogether and how many parts of the cake were
there altogether.
2. Repeat with subtractions such as 5/7 – 2/7. Ask how many equal parts the cake has (7) and if there were 5 parts left and you are giving out another 2, how many equal parts will remain? And how
many did we have at the start again? 3 out of 7, 3/7.
3. Keep referring children during this process for addition and subtraction that you are adding or subtracting with the numerator whereas the denominator remains constant because you are still
looking at that same cake all the way through the calculation.
Ordering Fractions
1. Children should already be able to quickly order fractions with the same denominator such as 5/10, 3/10, 10/10. Initially, when looking at fractions with differing numerators children should draw
bar model pictorial representations one under the other to compare visually:
1. When children are able to see that the smaller the denominator, the less equal parts, so the larger each of those parts are, challenge the children to compare and order fractions with a large
degree of difference. For example, ordering ½, 3/20, 1/10, 7/8. Children can draw if necessary but should be able to see that 7/8 is the largest and ½ is the next largest. They should also be
able to notice that 2 fractions have the same numerator and they can use this to help order the fractions (a pictorial version of ½ and 1/10 would help explain this to the children).
2. When children are able to look at clearly different fractions and order them by just visualising them, they are ready to discuss finding the lowest common denominator. To keep things simple for
the children, write and draw 1/3 and 1/6. We know that 1/3 is a greater amount than 1/6 but how much? Children should be able to see that 1/6 looks half the size of a 1/3. Draw lines on 1/3 so it
is cut into 6. Ask what the 1/3 looks the same as. Write that as a fraction and ask what has happened to get from 1/3 to 2/6 (we have doubled both numbers). Explain that we have made the
denominator the same (common) and that it is the lowest one we could do this with. Experiment with the same fractions with higher denominators to demonstrate (such as 18 so the children can see
that in the worst case scenario, they can multiply the 2 numbers together).
1. Explain that when we turned 1/3 into 2/6 we did so by starting with our denominator. We looked at the relationship between 3 and 6 and saw that 3 fitted into 6 twice. So we doubled our
denominator. Explain that if we do this we also need to double our numerator. Refer to pictorial versions for this and other examples, including what to do when one denominator will not ‘fit’
into the other as 3 to 6 does (e.g. ¼ and 2/5)
2. Give children experience of applying there understanding in adding and subtracting contexts, e.g. ¼ + 2/5 = 5/20 + 8/20=13/20.
Mixed and improper fractions
• Show children 8/8 drawn and written. Discuss what this means (one whole one). Now draw 2 more 8/8. Again, discuss that this means we have 3 whole ones. Discuss what we have if someone eats a
slice of cake. We have 8/8 and 8/8 and 7/8. Discuss how this could be written (2 and 7/8). Spend time with examples linking the same sized numerator and denominator with whole numbers.
• Using the same example as above (3 whole ones), discuss how many pieces each cake has (8) and how many pieces we have altogether (24). Explain that we can also write this as 24/8. It is the same
as 3 whole ones because each cake has 8 pieces. Again, remove one slice of cake. Ask the children what size this now leaves (23/8). Ask them if we can change this to a mixed fraction with whole
numbers separate. Ask them how we could do this and nudge them to seeing that if a whole cake has 8 parts, we can take away 8 from 23 to put one cake separately at the side. We can then keep
doing this until we have less than a whole cake left (less than 8/8). In this example we will have 2 whole ones and 7/8 left. Compare with the mixed fraction we had to begin this section with.
• Nudge the children to seeing that rather than taking ‘8’ away each time, which is repeated subtraction which is in turn related to division, we can simply see how many times 8 fits into 23
(twice) and that is how many whole ones we have. The remaining amount is the fraction we have left.
• Again, give the children experience of applying this understanding in addition and subtraction contexts such as 2 ½ + ¾.
Misconceptions and Strategies for ‘SECURING’ in Subtraction
Children like Holly may well need to recap some of the content of ‘Developing’ to eliminate errors when ‘Securing’ their understanding. However, it may well be sufficient for these children to recap
in whole class contexts and focus their booster on the specifics that they really struggle with such as multiplying and dividing fractions. If after covering some of the content in this section, some
children are struggling and making errors, return to the ‘Developing’ content.
Multiplying Fractions
1. Discuss the fraction ¼ x 1/5. Ask what we know about multiplication (that it is repeated addition). Using an array, draw a whole one and split into 4 equal parts. Underneath, draw 4 more whole
ones split into 4 equal parts. So the total rows are 5 to represent the fifths. Discuss that each side totals 1 whole one and that the area totals one whole as well. We can choose how to split up
each side to help us.
1. Discuss how many parts there are (20) so ¼ x 1/5 = 1/20. Discuss how this can be calculated by multiplying the denominators together. Ask how many parts we should shade in (1 because our
numerators are both 1). Give other examples where you have to shade more than one in such as ¾ x 2/5. Where the amount shaded (numerator) is 6.
2. Move children towards not needing arrays anymore, once they have made the link to just needing to multiply the denominators together and then multiply their numerators together before potentially
simplifying the resulting fraction.
Read more: How to Simplify Fractions: A Primary School Guide
Dividing Fractions
1. When going through the process of how to divide fractions by a whole number, use a bar model. For example, in ¼ ÷ 6, draw a bar that represents one whole one. Split it into 4 to represent each
quarter. Underneath the first ¼, draw a comparison bar that splits into 6 (as we need to divide ¼ by 6). Continue the bar until all ¼ are split into 6. The total bars will show the equal parts
(denominator) and one of the sections will show how many we are looking at (numerator). So ¼ ÷ 6 = 1/24.
2. Repeat for calculations such as 6/7 ÷ 3 where the only difference is our numerator is not 1, it is 6. Draw a bar to represent 1. Split it into 7 equal parts. Under each part draw a comparative
bar that is split into 3. Colouring 1 in will find 1/7 so we need to colour 6 in to find 6/7. This then gives us 6 out of 21 coloured in. This can then be simplified to 2/7.
3. Once the children see that the link between the bars and multiplying the denominator by the whole number, the bars can be removed.
Problem Contexts
Problem contexts can be given during the booster with additional pictorial support where necessary but primarily this should be possible through whole class teaching and learning, providing the
children are secure on the areas of fractions that are required to solve the problem.
If your pupils have gaps that still need plugging in the run up to SATs, don’t panic! Read our blog on how to get all your Year 6 to achieve 100 in Maths (be sure to follow this order of topics to
revise) or get in touch with our schools team who can arrange a free demo of our one-to-one maths interventions.
Don’t forget to download your free subtraction resources
FREE Fractions Intervention Lessons for Years 5 and 6
FREE All Kinds of Fractions Word Problems
FREE Recognising, Simplifying, Comparing, Ordering and Equivalent Fractions Gap Plugging Pack (Years 2 to 6)
FREE Adding and Subtracting Fractions Gap Plugging Pack (Years 1 to 6)
FREE Multiplying and Dividing Fractions Gap Plugging Pack (Years 5 and 6)
FREE Fractions Decimals and Percentages Year 6 SATs Questions Pack
FREE All Kinds of Decimals and Percentages Word Problems
Further Reading
Every week Third Space Learning’s specialist primary maths tutors support thousands of students across hundreds of schools with weekly online 1 to 1 maths lessons designed to plug gaps and boost
Since 2013 these personalised one to one lessons have helped over 169,000 primary and secondary students become more confident, able mathematicians.
Learn about the scaffolded lesson content or request a personalised quote for your school to speak to us about your school’s needs and how we can help. | {"url":"https://thirdspacelearning.com/blog/how-to-teach-fractions-ks2-interventions-maths-year-5-year-6-bootcamp-3/","timestamp":"2024-11-07T02:46:20Z","content_type":"text/html","content_length":"150734","record_id":"<urn:uuid:d2f62ff3-2f24-40f2-bb4d-e5074b20b634>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00214.warc.gz"} |
Thermodynamics and Statistical Mechanics
Publication Details
Author: Richard Fitzpatrick
Publisher: World Scientific
Publication Date: 2020
ISBN: 978-981-122-335-8
This book provides a comprehensive exposition of the theory of equilibrium thermodynamics and statistical dynamics at a level suitable for well-prepared undergraduate students. The fundamental
message of the book is that all results in equilibrium thermodynamics and statistical mechanics follow from a single unprovable axiom---namely, the principle of equal a priori
probabilities---combined with elementary probability theory, elementary classical mechanics, and elementary quantum mechanics.
Table of Contents
1. Introduction. Atomic Theory of Matter; What is Thermodynamics? Need for a Statistical Approach; Microscopic and Macroscopic Systems; Classical and Statistical Therodynamics; Classical and Quantum
2. Probability Theory. Introduction; What is Probability?; Combining Probabilities; Two-State System; Combinatorial Analysis; Binomial Probability Distrubution; Mean, Variance, and Standard
Deviation; Application to Binomial Probability Distribution; Gaussian Probability Distribution; Central Limit Theorem; Exercises.
3. Statistical Mechanics. Introduction; Specification of State of Many-Particle System; Principle of Equal A Priori Probabilities; H-Theorem; Relaxation Time; Reversibility and Irreversibility;
Probability Calculation; Behavior of Density of States; Exercises.
4. Heat and Work. Brief History of Heat and Work; Macrostates and Microstates; Microscopic Interpretation of Heat and Work; Quasi-Static Processes; Exact and Inexact Differentials; Heat and Work;
5. Statistical Thermodynamics. Introduction; Thermal Interaction Between Macrosystems; Temperature; Mechanical Interaction Between Macrosystems; General Interaction Between Macrosystems; Entropy;
Properties of Entropy; Uses of Entropy; Entropy and Quantum Mechanics; Laws of Thermodynamics; Exercises.
6. Classical Thermodynamics. Introduction; Ideal Gas Equation of State; Specific Heat; Calculation of Specific Heats; Isothermal and Adiabatic Expansion; Hydeostatic Equilibrium of Atmosphere;
Isothermal Atmosphere; Adiabatic Atmosphere; Internal Energy; Enthalpy; Helmholtz Free Energy; Gibbs Free Energy; General Relation Between Specific Heats; Free Expansion of Gas; Van der Waals
Gas; Joule-Thompson Throttling; Heat Engines; Refrigerators; Exercises.
7. Multi-Phase Systems. Introduction; Equilibrium of Isolated System; Equilibrium of Constant-Temperature System; Equilibrium of Constant-Temperature Constant-Pressure System; Stability of
Single-Phase System; Equilibrium Between Phases; Clausius-Clapeyron Equation; Phase Diagrams; Vapor Pressure; Phase Transformation in van der Waals Fluid; Exercises.
8. Applications of Statistical Thermodynamics. Introduction; Canonical Probability Distribution; Spin-1/2 Paramagnetism; System with Specified Mean Eenrgy; Calculation of Mean Values; Partiton
Function; Ideal Monatomic Gas; Gibbs Paradox; General Paramagnetism; Equipartition Theorem; Harmonic Oscillators; Specific Heats; Specific Heats of Gases; Specific Heats of Solids; Maxwell
Velocity Distribution; Effusion; Ferromagnetism; Exercises.
9. Chemical Equilibria. Introduction; Grand Canonical Probability Distribution; Systems of Several Components; Equilibrium Between Phases; Gibbs Phase Rule; Dilute Solutions; Molality; Osmosis;
Boiling and Freezing Points; General Conditions for Chemical Equilibrium; Dissociation of Water; Ideal Gas Mixture; Chemical Potentials of Ideal Gas Mixture; Law of Mass Action; Temperature
Dependence of Equilibrium Constant; Saha Equation; Exercises.
10. Quantum Statistics. Introduction; Symmetry Requirements in Quantum Mechanics; Illustrative Example; Formulation of Statistical Problem; Fermi-Dirac Statistics; Photon Statistics; Bose-Einstein
Statistics; Maxwell-Boltzman Statistics; Quantum Statistics in Classical Limit; Quantum-Mechanical Treatment of Ideal Gas; Derivation of van der Waals Equation; Plank Radiation Law; Black-Body
Radiation; Stefam-Boltzmann Law; Conduction Electrons in Metal; Sommerfeld Expansion; White-Dwarf Stars; Chandresekhar Limit; Neutron Stars; Bose-Einstein Condensation; Exercises.
A. Physical Constants.
B. Classical Mechanics. Generalized Coordinates; Generalized Forces; Lagrange's Equation; Generalized Momenta; Calculus of Variations; Conditional Variation; Multi-Function Variation; Hamilton's
Principle; Hamilton's Equations.
C. Wave Mechanics. Introduction; Photoelectric Effect; Electron Diffraction; Representation of Waves via Complex Numbers; Schrodinger's Equation; Probability Interpretation of Wavefunction; Wave
Packets; Heisenberg's Uncertainty Principle; Stationary States; Three-Dimensional Wave Mechanics; Simple Harmonic Oscillator; Angular Momentum.
Purchase Details
This book can be purchased directly from World Scientific. Richard Fitzpatrick Last modified: Tue Mar 26 14:14:19 CDT 2013 | {"url":"https://farside.ph.utexas.edu/Books/Thermo/Thermo.html","timestamp":"2024-11-07T16:21:13Z","content_type":"text/html","content_length":"6749","record_id":"<urn:uuid:96cc10f8-444d-462a-af82-ec49fdee10f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00172.warc.gz"} |
Asymptotic p-value for a correlation coefficient
Asymptotic p-value for a correlation coefficient {corrfuns} R Documentation
Asymptotic p-value for a correlation coefficient
Asymptotic p-value a correlation coefficient.
correl(y, x, type = "pearson", rho = 0, alpha = 0.05)
y A numerical vector.
x A numerical vector.
type The type of correlation coefficient to compute, "pearson" or "spearman".
rho The hypothesized value of the true partial correlation.
alpha The significance level.
Fisher's transformation for the correlation coefficient is defined as \hat{z}=\frac{1}{2}\log\frac{1+r}{1-r} and its inverse is equal to \frac{\exp\left(2\hat{z}\right)-1}{\exp\left(2\hat{z}\right)
+1}. The estimated standard error of Fisher's transform is \frac{1}{\sqrt{n-3}} (Efron and Tibshirani, 1993, pg. 54). If on the other hand, you choose to calculate Spearman's correlation
coefficients, the estimated standard error is slightly different \simeq \frac{ 1.029563}{\sqrt{n-3}} (Fieller, Hartley and Pearson, 1957, Fieller and Pearson, 1961). R calculates confidence intervals
based in a different way and does hypothesis testing for zero values only. The function calculates asymptotic confidence intervals based upon Fisher's transform, assuming asymptotic normality of the
transform and performs hypothesis testing for the true (any, non only zero) value of the correlation. The sample distribution though is a t_{n-3}.
A list including:
result The correlation coefficient and the p-value for the test of zero correlation.
ci The asymptotic (1-\alpha)\% confidence interval for the true correlation coefficient.
Michail Tsagris
R implementation and documentation: Michail Tsagris mtsagris@uoc.gr.
Efron B. and Tibshirani R.J. (1993). An introduction to the bootstrap. Chapman & Hall/CRC.
Fieller E.C., Hartley H.O. and Pearson E.S. (1957). Tests for rank correlation coefficients. I. Biometrika, 44(3/4): 470–481.
Fieller E.C. and Pearson E.S. (1961). Tests for rank correlation coefficients: II. Biometrika, 48(1/2): 29–40.
See Also
correls, permcorrels
a <- correl( iris[, 1], iris[, 2] )
version 1.0 | {"url":"https://search.r-project.org/CRAN/refmans/corrfuns/html/correl.html","timestamp":"2024-11-07T23:38:30Z","content_type":"text/html","content_length":"4634","record_id":"<urn:uuid:4e5dea92-d76a-4ea1-8077-d525de924f9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00544.warc.gz"} |
Mathematical Statistics An... | Rent | 9781118771044
Presents a unified approach to parametric estimation, confidence intervals, hypothesis testing, and statistical modeling, which are uniquely based on the likelihood function
This book addresses mathematical statistics for upper-undergraduates and first year graduate students, tying chapters on estimation, confidence intervals, hypothesis testing, and statistical models
together to present a unifying focus on the likelihood function. It also emphasizes the important ideas in statistical modeling, such as sufficiency, exponential family distributions, and large
sample properties. Mathematical Statistics: An Introduction to Likelihood Based Inference makes advanced topics accessible and understandable and covers many topics in more depth than typical
mathematical statistics textbooks. It includes numerous examples, case studies, a large number of exercises ranging from drill and skill to extremely difficult problems, and many of the important
theorems of mathematical statistics along with their proofs.
In addition to the connected chapters mentioned above, Mathematical Statistics covers likelihood-based estimation, with emphasis on multidimensional parameter spaces and range dependent support. It
also includes a chapter on confidence intervals, which contains examples of exact confidence intervals along with the standard large sample confidence intervals based on the MLE's and bootstrap
confidence intervals. There’s also a chapter on parametric statistical models featuring sections on non-iid observations, linear regression, logistic regression, Poisson regression, and linear
• Prepares students with the tools needed to be successful in their future work in statistics data science
• Includes practical case studies including real-life data collected from Yellowstone National Park, the Donner party, and the Titanic voyage
• Emphasizes the important ideas to statistical modeling, such as sufficiency, exponential family distributions, and large sample properties
• Includes sections on Bayesian estimation and credible intervals
• Features examples, problems, and solutions
Mathematical Statistics: An Introduction to Likelihood Based Inference is an ideal textbook for upper-undergraduate and graduate courses in probability, mathematical statistics, and/or statistical | {"url":"https://www.ecampus.com/mathematical-statistics-introduction/bk/9781118771044&pos=6","timestamp":"2024-11-07T20:47:08Z","content_type":"application/xhtml+xml","content_length":"128074","record_id":"<urn:uuid:6973c88c-2248-4478-8b04-3872ee8d8bd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00614.warc.gz"} |
Shortest Paths | CS61B Guide
We've seen that Breadth-First Search can help us find the shortest path in an unweighted graph, where the shortest path was just defined to be the fewest number of edges traveled along a path. In the
following shortest-paths algorithms, we will discover how we can generalize the breadth-first traversal to find the path with the lowest total cost, where the cost is determined by different weights
on the edges. | {"url":"https://cs61b.bencuan.me/algorithms/shortest-paths","timestamp":"2024-11-13T10:54:37Z","content_type":"text/html","content_length":"117134","record_id":"<urn:uuid:e17b13f6-417c-4d25-b8a4-1a7cad6674b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00794.warc.gz"} |